OpenAI Executive's Departure and Company's Response

TapTechNews May 21st news: following the departure of OpenAI co-founder Ilya Sutskever, another OpenAI executive, Jan Leike, announced in a post on the X platform that he had left the company last week.


Jan Leike was the co-head of OpenAI's Superalignment team. He said that in recent years OpenAI has neglected its internal safety culture and processes, insisting instead on rapidly launching eye-catching products.

TapTechNews notes that OpenAI established the Superalignment team in July 2023, tasked with ensuring that 'superintelligent' AI systems 'smarter than humans' follow human intentions. At the time, OpenAI pledged to dedicate 20% of its computing power over the next four years to keeping AI models safe. However, according to Bloomberg, OpenAI has now reportedly dissolved the Superalignment team.


Leike said he joined OpenAI because he believed it was the best place in the world to do AI safety research. Now, however, he says OpenAI's leadership has largely sidelined model safety and shifted its core priorities toward profitability and acquiring computing resources.

As of this writing, OpenAI's Greg Brockman and Sam Altman have jointly responded to Leike's comments, saying they have raised awareness of AI risks and will continue to strengthen safety work to match the stakes of each new model. TapTechNews' translation follows:

We are very grateful to Jan for everything he has done for OpenAI, and we know he will continue to contribute to our mission externally. Given some of the issues raised by his departure, we would like to explain our thinking on our overall strategy.

First, we have raised awareness of the risks and opportunities of AGI so that the world can better prepare for it. We have repeatedly demonstrated the enormous possibilities opened up by scaling deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks.

Second, we are laying the foundation for the safe deployment of increasingly capable systems. Making a new technology safe for the first time is not easy. For example, our teams did a great deal of work to bring GPT-4 to the world safely, and have since continuously improved model behavior and abuse monitoring in response to lessons learned from deployment.

Third, the future will be harder than the past. We need to keep improving our safety work to match the stakes of each new model. Last year, we introduced our Preparedness Framework to help systematize how we do this.

Now is a good time to talk about how we see the future.

As models continue to grow in capability, we expect them to become more deeply integrated with the world. Increasingly, users will interact with systems composed of many multimodal models and tools that can take action on their behalf, rather than talking to a single model through text input and output alone.

We think these systems will be tremendously beneficial and helpful to people, and that they can be delivered safely, but this requires a great deal of foundational work. That includes thinking carefully about what they are connected to as they train, solving hard problems such as scalable oversight, and other new kinds of safety work. As we build in this direction, we are not yet sure when we will reach our safety bar for release, and we think it is acceptable if that pushes back release timelines.

We know we cannot foresee every possible future scenario. Therefore, we need very tight feedback loops, rigorous testing, careful consideration at every step, world-class security, and harmony between safety and capabilities. We will continue to conduct safety research across different time horizons, and to cooperate with governments and many stakeholders on safety issues.

There is no ready-made playbook for the path to AGI. We think empirical understanding can help guide the way forward. We believe both in realizing the tremendous potential benefits and in working to mitigate the serious risks; we take our role here very seriously and carefully weigh feedback on our actions.

—Sam and Greg
