OpenAI Faces Opposition from Former Researchers over California AI Safety Act

TapTechNews August 24th news: according to a Business Insider report this morning Beijing time, after OpenAI publicly opposed California's SB 1047 bill (AI Safety Act), two former OpenAI researchers came out publicly against their former employer and issued a warning.

The California AI Safety Act would require AI companies to take measures to prevent their models from causing 'serious damage', such as developing biological weapons that could cause mass casualties, or inflicting more than 500 million US dollars (TapTechNews note: currently about 3.566 billion yuan) in economic losses.

These former employees wrote in a letter to California Governor Gavin Newsom and other legislators that OpenAI's opposition to the bill is disappointing, but not surprising.

The two researchers, William Saunders and Daniel Kokotajlo, wrote in the letter: 'We (previously) chose to join OpenAI because we wanted to ensure the safety of the very powerful AI systems the company is developing. But we chose to leave because it lost our trust that it would develop its AI systems safely, honestly, and responsibly.'

The letter also noted that OpenAI CEO Sam Altman has repeatedly and publicly supported AI regulation, yet when actual regulatory measures were about to be introduced, he expressed opposition. 'Developing cutting-edge AI models without adequate safety precautions poses a foreseeable risk of catastrophic harm to the public.'

Related reading:

'OpenAI publicly opposes the California AI Safety Act in the US'