
OpenAI, Google DeepMind Employees Warn of AI Risks, Demand Better Whistleblower Protection Policies


OpenAI and Google DeepMind are among the top tech companies at the forefront of building artificial intelligence (AI) systems and capabilities. However, several current and former employees of these organisations have now signed an open letter claiming that there is little to no oversight of how these systems are built and that not enough attention is being paid to the major risks posed by the technology. The open letter is endorsed by two of the three ‘godfathers’ of AI, Geoffrey Hinton and Yoshua Bengio, and seeks better whistleblower protection policies from the signatories’ employers.

OpenAI, Google DeepMind Employees Demand Right to Warn about AI

The open letter states that it was written by current and former employees at major AI companies who believe in the potential of AI to deliver unprecedented benefits to humanity. It also points towards the risks posed by the technology which include strengthening societal inequalities, spreading misinformation and manipulation, and even losing control over AI systems that could lead to human extinction.

The open letter highlights that the self-governance structures implemented by these tech giants are ineffective at ensuring scrutiny of these risks. It also claims that “strong financial incentives” push companies to overlook the potential harm AI systems can cause.

Claiming that AI companies are already aware of AI’s capabilities, limitations, and risk levels of different kinds of harm, the open letter questions their intention to take corrective measures. “They currently have only weak obligations to share some of this information with governments, and none with civil society. We do not think they can all be relied upon to share it voluntarily,” it states.


The open letter makes four demands of the signatories’ employers. First, the employees want companies not to enter into or enforce any agreement that prohibits criticism of the company over risk-related concerns. Second, they ask for a verifiably anonymous process through which current and former employees can raise risk-related concerns to the company’s board, to regulators, and to an appropriate independent organisation.

Third, the employees urge the organisations to develop a culture of open criticism. Finally, the open letter states that employers should not retaliate against current and former employees who publicly share risk-related confidential information after other processes have failed.

A total of 13 former and current employees of OpenAI and Google DeepMind have signed the letter. Aside from the two ‘godfathers’ of AI, British computer scientist Stuart Russell has also endorsed this move.

Former OpenAI Employee Speaks on AI Risks

One of the former OpenAI employees who signed the open letter, Daniel Kokotajlo, also made a series of posts on X (formerly known as Twitter) highlighting his experience at the company and the risks of AI. He claimed that when he resigned, he was asked to sign a nondisparagement clause to prevent him from saying anything critical of the company. He also claimed that the company threatened to take away his vested equity if he refused to sign the agreement.

Kokotajlo claimed that the neural networks underlying AI systems are growing rapidly as they are trained on ever-larger datasets. Further, he added that there were no adequate measures in place to monitor the risks.


“There is a lot we don’t understand about how these systems work and whether they will remain aligned to human interests as they get smarter and possibly surpass human-level intelligence in all arenas,” he added.

Notably, OpenAI is building Model Spec, a document through which it aims to better guide the company in building ethical AI technology. It also recently created a Safety and Security Committee. Kokotajlo applauded these promises in one of his posts.


