OpenAI to Open GPT-4o's Voice Mode to ChatGPT Plus Users Next Week

TapTechNews July 26th news: OpenAI CEO Sam Altman, replying to a user's question today, said that the alpha version of GPT-4o's Voice Mode will be opened to ChatGPT Plus users next week, enabling near-seamless voice chat.

TapTechNews reported in May this year that OpenAI's Chief Technology Officer Mira Murati said in a speech:

In GPT-4o, we have trained an all-new end-to-end unified model across text, vision, and audio, which means that all inputs and outputs are processed by the same neural network.

Since GPT-4o is our first model that combines all these modalities, we are still in the early stages of exploring its capabilities and limitations.

OpenAI originally planned to invite a small group of ChatGPT Plus users to test GPT-4o's voice mode at the end of June this year, but announced in June that the rollout would be postponed, saying it needed more time to polish the model and to improve its ability to detect and refuse certain content.

According to previously disclosed information, the average voice-response latency is 2.8 seconds with the GPT-3.5 model and 5.4 seconds with the GPT-4 model, which makes voice conversations feel far from natural. The upcoming GPT-4o is expected to cut this latency dramatically and deliver nearly seamless conversation.
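To make the "end-to-end unified model" point above concrete, here is a minimal, purely illustrative Python sketch contrasting the older pipelined voice mode, where speech recognition, text generation, and speech synthesis run as separate stages whose delays add up, with a single model that consumes and produces audio directly. All function names and the per-stage timings are invented for illustration; only the 5.4-second overall average quoted above comes from the article, and this is not OpenAI's actual implementation.

```python
# Illustrative only: stage names and per-stage timings are hypothetical,
# not OpenAI's real figures (only the ~5.4 s overall average is reported).
from dataclasses import dataclass


@dataclass
class Stage:
    name: str
    latency_s: float  # hypothetical per-stage delay


def pipelined_voice_reply(stages: list[Stage]) -> float:
    """Old-style voice mode: speech-to-text -> text LLM -> text-to-speech.

    The user hears nothing until every stage has finished,
    so the delays simply add up.
    """
    return sum(stage.latency_s for stage in stages)


def end_to_end_voice_reply(model_latency_s: float) -> float:
    """GPT-4o-style voice mode: one model maps audio in to audio out,
    so only a single model call sits on the critical path."""
    return model_latency_s


if __name__ == "__main__":
    # Hypothetical breakdown of the ~5.4 s average GPT-4 voice-mode delay.
    gpt4_pipeline = [
        Stage("speech-to-text", 1.2),
        Stage("GPT-4 text reply", 3.0),
        Stage("text-to-speech", 1.2),
    ]
    print(f"pipelined reply delay:  {pipelined_voice_reply(gpt4_pipeline):.1f} s")

    # A unified audio model removes the transcription hand-offs entirely
    # (0.3 s is an assumed value, used only to show the contrast).
    print(f"end-to-end reply delay: {end_to_end_voice_reply(0.3):.1f} s")
```

The point of the sketch is simply that a unified audio-in/audio-out model removes the intermediate transcription hand-offs from the critical path, which is why GPT-4o can respond so much faster than the pipelined voice mode built around GPT-3.5 or GPT-4.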
