OpenAI's DevDay Update: From Conference Transformation to Future Prospects

On August 6th, it was reported that last year the artificial intelligence startup OpenAI held its first developer conference in San Francisco with great fanfare, launching several new products and tools, including the ultimately unsuccessful GPT Store (similar to the Apple App Store). This year's event, however, will be relatively low-key. On Monday, OpenAI announced that it is transforming its DevDay developer conference into a series of on-the-road, developer-focused engagement sessions. The company also confirmed that it will not release its next-generation flagship model during DevDay, focusing instead on updates to its APIs and developer services.

An OpenAI spokesperson said: "We're not planning to announce our next model at the developer conference. We'll be focused more on educating developers about what's available and showcasing stories from the developer community."

This year's OpenAI DevDay events will be held on October 1st in San Francisco, October 30th in London, and November 1st in Singapore. Each will feature workshops, panel discussions, live demonstrations from OpenAI's product and engineering teams, and developer meetups. Registration costs 450 US dollars, and the deadline to register is August 15th.

In recent months, OpenAI has adopted a more incremental strategy in generative artificial intelligence rather than pursuing breakthrough leaps, choosing to fine-tune and refine its tools while it trains successors to its current leading models, GPT-4o and GPT-4o mini. The company has improved methods to boost overall model performance and to reduce how often its models go off track, but according to some benchmark tests, OpenAI appears to have lost its technological lead in the generative AI race.

One reason may be that high-quality training data is increasingly difficult to find.

Like most generative AI systems, OpenAI's models are trained on vast amounts of web data, and many creators now choose to block access to their data out of concern that it will be plagiarized or used without due credit or compensation. According to data from Originality.AI, an AI content detection and plagiarism detection tool, more than 35% of the world's top 1,000 websites now block OpenAI's web crawler. Research from MIT's Data Provenance Initiative likewise found that around 25% of data from "high-quality" sources has been excluded from the major datasets used to train AI models.

The research institute Epoch AI predicts that if the current trend of blocking data access continues, developers will run out of data for training generative AI models sometime between 2026 and 2032. This, combined with the fear of copyright litigation, has forced OpenAI to sign costly licensing agreements with publishers and various data brokers.

OpenAI has reportedly developed a reasoning technique that improves its models' responses to certain questions, particularly math problems. The company's chief technology officer, Mira Murati, has promised that future OpenAI models will have "PhD-level" intelligence. That prospect is promising, but the pressure is enormous: OpenAI is said to have spent billions of dollars training its models and hiring highly paid researchers.

Time will tell whether OpenAI can achieve its ambitious goals while managing its many controversies. In any case, a slower product cycle may help counter claims that OpenAI has neglected AI safety work in its pursuit of ever more powerful generative AI technology.