Cambridge: New research led by the University of Cambridge's Centre for the Study of Existential Risk (CSER) has issued a call to action for the AI industry to earn the trust of governments and the public.
The study has been published in the journal Science.
The researchers said that companies building intelligent technologies should harness techniques such as "red team" hacking, audit trails and "bias bounties" – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use by the wider public.
Otherwise, the industry faced a "crisis of trust" in the systems that increasingly underpin our society, as public concern continued to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.
The novelty and “black box” nature of AI systems, and ferocious competition in the race to the marketplace, had hindered the development and adoption of auditing or third-party analysis, according to lead author Dr Shahar Avin of CSER.
The experts argued that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future – and trust is fraying.