
Study finds community of ethical hackers required to prevent AI’s looming trust crisis


Cambridge : New research led by the University of Cambridge's Centre for the Study of Existential Risk (CSER) has issued a call to action for the AI industry to earn the trust of governments and the public.
The study was published in the journal Science.
The researchers said that companies building intelligent technologies should harness techniques such as "red team" hacking, audit trails and "bias bounties" – paying out rewards for revealing ethical flaws – to prove their integrity before releasing AI for use on the wider public.
Otherwise, the industry faced a "crisis of trust" in the systems that increasingly underpin our society, as public concern continued to mount over everything from driverless cars and autonomous drones to secret social media algorithms that spread misinformation and provoke political turmoil.
The novelty and "black box" nature of AI systems, together with ferocious competition in the race to market, have hindered the development and adoption of auditing and third-party analysis, according to lead author Dr Shahar Avin of CSER.
The experts argued that incentives to increase trustworthiness should not be limited to regulation, but must also come from within an industry yet to fully comprehend that public trust is vital for its own future – and trust is fraying.
