OpenAI has until April 30 to comply with EU laws, a task experts say is ‘almost impossible’

OpenAI could soon face its biggest regulatory challenge yet, as Italian authorities insist the company has until April 30 to comply with local and EU data protection and privacy laws, a task artificial intelligence (AI) experts say could be nearly impossible.

Italian authorities issued a blanket ban on OpenAI’s GPT products in late March, making Italy the first Western country to ban the products outright. The action followed a data breach in which ChatGPT and GPT API customers could see data generated by other users.

According to a Bing-powered translation of the Italian order directing OpenAI to cease its ChatGPT operations in the country until it can demonstrate compliance:

“In its order, the Italian SA emphasizes that no information is provided to users and data subjects whose data is collected by Open AI; more importantly, there appears to be no legal basis underlying the massive collection and processing of personal data in order to ‘train’ the algorithms the platform relies on.”

The Italian complaint goes on to state that OpenAI must also implement age verification measures to ensure that its software and services comply with the company’s own terms of service, which require users to be over 13 years old.

Related: EU lawmakers call for ‘safe’ AI as Google CEO warns of rapid development

To comply with privacy rules in Italy and the rest of the European Union, OpenAI will need to provide a legal basis for its extensive data collection processes.

Under the EU’s General Data Protection Regulation (GDPR), tech companies must seek user consent to train on personal data. In addition, companies operating in Europe must give Europeans the option to opt out of data collection and sharing.

According to experts, this will prove a tough challenge for OpenAI, as its models are trained on massive troves of data mined from the internet and bundled into training sets. This form of black box training aims to create a paradigm called “emergence”, in which useful traits manifest in unpredictable ways.

Unfortunately, that means developers rarely have a way of knowing exactly what’s in the dataset. And, because the machine tends to merge multiple data points when it generates outputs, extracting or modifying individual data items may be beyond the reach of modern techniques.

Margaret Mitchell, an AI ethics expert, told MIT Technology Review that “OpenAI will find it nearly impossible to identify individuals’ data and remove it from its models.”

To achieve compliance, OpenAI will need to demonstrate either that it obtained the data used to train its models with user consent (something the company’s research papers indicate is not the case) or that it had a “legitimate interest” in collecting the data in the first place.

Lilian Edwards, a professor of internet law at Newcastle University, told MIT Technology Review that the dispute has implications beyond the Italian action, saying: “OpenAI’s breaches are so egregious that it’s likely that this case will end up in the Court of Justice of the European Union, the EU’s highest court.”

This puts OpenAI in a potentially precarious position. If it cannot identify and delete individual data at users’ request, nor correct data that misrepresents individuals, it may find itself unable to operate its ChatGPT products in Italy after the April 30 deadline.

The company’s problems may not end there, as French, German, Irish and other European regulators are also considering steps to regulate ChatGPT.