OpenAI's Struggle to Meet EU Data Rules with ChatGPT

TapTechNews, May 25 — According to Reuters, a special working group of the European Data Protection Board recently said that although OpenAI has made some efforts to reduce the error rate of ChatGPT's output, these are still not enough to ensure full compliance with the EU's data rules.

In a report released on Friday local time, the working group noted that OpenAI has taken measures to comply with the transparency principle, and that these measures also help reduce the risk of ChatGPT outputting incorrect information, but they are still insufficient to satisfy the principle of data accuracy.


TapTechNews note: National regulators, led by Italy's data protection authority, previously raised concerns about the widely used AI service, after which the European Data Protection Board established the ChatGPT special working group.

Some national privacy regulators in member states are still conducting their own investigations into ChatGPT. Data accuracy is one of the guiding principles of the EU's data protection rules. The report also pointed out that, due to the probabilistic nature of the system, the current training approach can lead the model to generate biased or fabricated output. In addition, users are likely to treat ChatGPT's output as factually accurate, regardless of its actual accuracy.

In April last year, OpenAI was required to inform Italian users about the methods and logic behind the processing of the data needed to operate ChatGPT. Italy also required OpenAI to provide tools so that data subjects (including non-users) can request the correction of inaccurate personal data generated by the service, or its deletion if correction is not possible. ChatGPT was briefly banned in the country that month and reinstated on April 29.
