France’s privacy watchdog eyes protection against data-scraping in AI action plan

“The CNIL wants to establish clear rules protecting the personal data of European citizens in order to contribute to the development of privacy-friendly AI systems,” it writes.

Barely a week goes by without another bunch of high-profile calls from technologists asking regulators to get to grips with AI. And just yesterday, during testimony in the US Senate, OpenAI’s CEO Sam Altman called for lawmakers to regulate the technology, suggesting a licensing and testing regime.

However, data protection regulators in Europe are already far down the road: the likes of Clearview AI have been widely sanctioned across the bloc for misuse of people’s data, for example, while the AI chatbot Replika has faced recent enforcement in Italy.

OpenAI’s ChatGPT also attracted a very public intervention by the Italian DPA at the end of March, which led to the company rushing out new disclosures and controls for users, letting them apply some limits on how it can use their information.

At the same time, EU lawmakers are in the process of hammering out agreement on a risk-based framework for regulating applications of AI which the bloc proposed back in April 2021.

This framework, the EU AI Act, could be adopted by the end of the year and the planned regulation is another reason the CNIL highlights for preparing its AI action plan, saying the work will “also make it possible to prepare for the entry into application of the draft European AI Regulation, which is currently under discussion”.

Existing data protection authorities (DPAs) are likely to play a role in enforcing the AI Act, so regulators building up AI understanding and expertise will be crucial for the regime to function effectively. Meanwhile, the topics and details EU DPAs choose to focus their attention on are set to shape the operational parameters of AI in the future — certainly in Europe and, potentially, further afield given how far ahead the bloc is when it comes to digital rule-making.

Data scraping in the frame

On generative AI, the French privacy regulator is paying special attention to the practice by certain AI model makers of scraping data off the Internet to build data-sets for training AI systems like large language models (LLMs) which can, for example, parse natural language and respond in a human-like way to communications.

It says a priority area for its AI service will be “the protection of publicly available data on the web against the use of scraping of data for the design of tools”.

This is an uncomfortable area for makers of LLMs like ChatGPT that have relied upon quietly scraping vast amounts of web data to repurpose as training fodder. Those that have hoovered up web information which contains personal data face a specific legal challenge in Europe — where the General Data Protection Regulation (GDPR), in application since May 2018, requires them to have a legal basis for such processing.

There are a number of legal bases set out in the GDPR; however, the possible options for a technology like ChatGPT are limited.

In the Italian DPA’s view, there are just two possibilities: consent or legitimate interests. And since OpenAI did not ask individual web users for their permission before ingesting their data, the company is now relying on a claim of legitimate interests in Italy for the processing — a claim that remains under investigation by the local regulator, the Garante. (Reminder: GDPR penalties can scale up to 4% of global annual turnover, in addition to any corrective orders.)

The pan-EU regulation places further requirements on entities processing personal data — such as that the processing must be fair and transparent — so there are additional legal challenges for tools like ChatGPT if they are to avoid falling foul of the law.

And — notably — in its action plan, France’s CNIL highlights the “fairness and transparency of the data processing underlying the operation of [AI tools]” as a particular question of interest that it says its Artificial Intelligence Service and another internal unit, the CNIL Digital Innovation Laboratory, will prioritize for scrutiny in the coming months.

Other stated priority areas the CNIL flags for its AI scoping are:

  • the protection of data transmitted by users when they use these tools, ranging from their collection (via an interface) to their possible re-use and processing through machine learning algorithms;
  • the consequences for the rights of individuals to their data, both in relation to those collected for the learning of models and those which may be provided by those systems, such as content created in the case of generative AI;
  • the protection against bias and discrimination that may occur;
  • the unprecedented security challenges of these tools.

Giving testimony to a US Senate committee yesterday, Altman was questioned by US lawmakers about the company’s approach to protecting privacy, and the OpenAI CEO sought to narrowly frame the topic as referring only to information actively provided by users of the AI chatbot — noting, for example, that ChatGPT lets users specify that they don’t want their conversational history used as training data. (A feature it did not offer initially, however.)

Asked what specific steps it’s taken to protect privacy, Altman told the Senate committee: “We don’t train on any data submitted to our API. So if you’re a business customer of ours and submit data, we don’t train on it at all… If you use ChatGPT you can opt out of us training on your data. You can also delete your conversation history or your whole account.”

But he had nothing to say about the data used to train the model in the first place.

Altman’s narrow framing of what privacy means sidestepped the foundational question of the legality of training data. Call it the ‘original privacy sin’ of generative AI, if you will. But it’s clear that eliding this topic is going to get increasingly difficult for OpenAI and its data-scraping ilk as regulators in Europe get on with enforcing the region’s existing privacy laws on powerful AI systems.

In OpenAI’s case, it will continue to be subject to a patchwork of enforcement approaches across Europe, as it does not have an established base in the region — meaning the GDPR’s one-stop-shop mechanism does not apply (as it typically does for Big Tech), so any DPA is competent to regulate if it believes local users’ data is being processed and their rights are at risk. So while Italy went in hard earlier this year, with an intervention on ChatGPT that imposed a stop-processing order in parallel with opening an investigation of the tool, France’s watchdog only announced an investigation back in April, in response to complaints. (Spain has also said it’s probing the tech, again without any additional actions as yet.)

In another difference between EU DPAs, the CNIL appears to be concerned with interrogating a wider array of issues than Italy’s preliminary list — including considering how the GDPR’s purpose limitation principle should apply to large language models like ChatGPT — which suggests it could end up ordering a more expansive array of operational changes if it concludes the GDPR is being breached.
