OpenAI Establishes Safety and Security Committee

TapTechNews reported on May 29th that OpenAI has announced the formation of a Safety and Security Committee of its board of directors, which will make recommendations on key safety and security decisions for OpenAI projects and operations.

The committee's first priority is to evaluate and further develop OpenAI's development processes and safeguards over the next 90 days. At the end of that period, the Safety and Security Committee will share its recommendations with the full board of directors.


According to the announcement, the committee will be led by board directors Bret Taylor (chair), Adam D'Angelo, and Nicole Seligman, along with OpenAI chief executive officer Sam Altman.

In addition, OpenAI technical and policy experts Aleksander Madry (Head of Preparedness), Lilian Weng (Head of Safety Systems), John Schulman (Head of Alignment Science), Matt Knight (Head of Security), and Jakub Pachocki (Chief Scientist) will also serve on the committee.

TapTechNews noted that OpenAI also said it will retain and consult other safety, security, and technical experts to support this work, including former cybersecurity officials Rob Joyce and John Carlin, who provide security advice to OpenAI.

OpenAI also disclosed that it has recently begun training its next-generation frontier model, which it expects will bring its capabilities to the next level on the path to AGI.
