OpenAI and Google DeepMind Employees Publish Open Letter on AI Risks

On June 5, TapTechNews reported that a group of current and former employees of OpenAI and Google DeepMind jointly published an open letter expressing concern about the potential risks of advanced artificial intelligence and the current lack of oversight of AI companies.


The letter warns that the development of artificial intelligence could bring a range of risks, from entrenching existing social inequalities and enabling manipulation and misinformation to the loss of control of autonomous AI systems, which could potentially result in human extinction.

The letter argues that AI companies have strong financial incentives to keep advancing AI research and development while withholding information about their safety measures and risk levels. Because these companies cannot be expected to share such information voluntarily, the letter calls on insiders to speak out.

In the absence of effective government regulation, current and former employees are among the few groups who can hold these companies accountable to the public. Yet strict confidentiality agreements limit what they can say, leaving them able to report problems only to the very companies that may be failing to address them. Traditional whistleblower protections do not apply, because they focus on illegal conduct, whereas many of the risks at issue are not yet regulated.

The signatories call on AI companies to commit to reliable protections for employees who raise concerns about AI risks, specifically:

Not to create or enforce agreements that prevent employees from raising risk-related criticism;

To provide a verifiably anonymous process through which employees can raise risk-related concerns with the board of directors, regulators, and independent organizations with relevant expertise;

To support a culture of open criticism that allows employees to raise risk-related concerns about the companies' technologies with the public, the board of directors, regulators, and others, provided trade secrets are protected;

Not to retaliate against employees who publicly share risk-related confidential information after other channels have failed.

A total of 13 people signed the open letter: 7 former and 4 current OpenAI employees, along with 1 former and 1 current Google DeepMind employee. OpenAI has reportedly threatened to cancel the vested equity of employees who spoke out and has required them to sign strict confidentiality agreements restricting criticism of the company.
