In today's data-driven world, the rapid advancement of artificial intelligence (AI) is revolutionizing industries and empowering businesses to unlock unprecedented opportunities. But this changing landscape has given rise to privacy and security issues that demand a hard look at the way AI handles personal data and how it affects data protection regulation in the EU. Can GDPR safeguard us against AI’s data-hungry programming?

On 31 March 2023, the Italian data protection regulator, the Garante per la Protezione dei Dati Personali, issued a temporary ban on OpenAI's ChatGPT over the use of the personal information of millions of Italian citizens as training data. This decision opened the floodgates to regulatory scrutiny from other European supervisors and placed the misuse of personal data by machine-learning algorithms in the spotlight.
Since 2018, the EU's General Data Protection Regulation (GDPR) has been the cornerstone of data protection, setting a new standard for global data regulations and reshaping the way organizations handle personal data in the digital age.
However, recent breakthroughs in AI language models such as GPT-3 and GPT-4, which power ChatGPT, have created a unique challenge for GDPR compliance, because the regulation contains no provisions written specifically for AI technologies.

Although GDPR covers crucial aspects of data privacy and protection, it does not specifically address the complexities and potential threats related to AI systems. Let’s explore in what ways GDPR tackles this regulatory gap and what the future holds for data privacy.


What dilemmas does AI pose to privacy?

AI language models such as GPT-3 need data to be trained to write text, discover correlations, make predictions, improve their performance, and more. These models acquire their knowledge through a combination of methods: scraping the internet for information, ingesting data obtained through third-party licenses, and learning from material entered by users in chats. The data collected this way also includes personal data of EU citizens that is publicly available.
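To make this concrete, the toy sketch below shows how obvious identifiers, such as e-mail addresses and phone numbers, surface in scraped text and could be redacted before that text enters a training corpus. It is illustrative only: the patterns, placeholder tokens, and example sentence are invented for the example, simple patterns miss most personal data (names, for instance, slip through), and nothing here describes how any real model's data pipeline works.

```python
import re

# Illustrative-only patterns for two easily recognizable identifier types.
# Real pipelines would need far broader PII detection (names, addresses, IDs, ...).
EMAIL_RE = re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_obvious_pii(text: str) -> str:
    """Replace e-mail addresses and phone-like numbers with placeholder tokens."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

scraped = "Contact Maria Rossi at maria.rossi@example.it or +39 02 1234 5678."
print(redact_obvious_pii(scraped))
# -> Contact Maria Rossi at [EMAIL] or [PHONE].  (the name itself still slips through)
```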

However, under GDPR, the fact that personal data is publicly available does not mean it can be freely used to train AI models. Article 6 of the regulation requires a valid legal basis before personal data that directly or indirectly identifies an individual may be collected, stored, or processed. Yet individuals were never asked for consent to the use of their personal information during the training phase of these AI models, and in almost all cases there was no way to know what the AI systems were scraping online.

“A large amount of data on the internet relates to people, so our training information does incidentally include personal information.” ~ OpenAI

Another issue is the right to be forgotten. Article 17 of GDPR, often known as the "Right to erasure" or "Right to be forgotten," gives people the right to ask that their personal data be deleted in certain situations. Can AI language models forget an individual's personal data?

In an article in Forbes, AI expert and social entrepreneur Miguel Luengo-Oroz noted that AI neural networks don't forget the way humans do: they keep adjusting their weights to reflect fresh data more accurately, so information learned from earlier data remains embedded in the model even as it absorbs new data. It is currently impossible to reverse the modifications made to an AI system by a single data point at the request of the data owner.
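A minimal sketch can make the problem tangible. The toy linear model below, fitted with NumPy on invented data, is not how large language models are trained, but it illustrates the same property: the finished weights blend the influence of every record, so honoring an erasure request effectively means refitting without that record rather than "editing" the record out of the trained weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "personal data": 100 records with 3 features and a target value.
X = rng.normal(size=(100, 3))
y = X @ np.array([1.5, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

def fit(features, targets):
    """Ordinary least-squares fit; the weights aggregate every record's influence."""
    weights, *_ = np.linalg.lstsq(features, targets, rcond=None)
    return weights

w_all = fit(X, y)

# An erasure request arrives for record 42. Nothing in w_all identifies which
# part of it "belongs" to that record, so the only faithful remedy is to
# retrain on the remaining data.
keep = np.ones(len(X), dtype=bool)
keep[42] = False
w_without = fit(X[keep], y[keep])

print("weights trained with record 42:    ", w_all)
print("weights retrained without record 42:", w_without)
```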

How AI can become GDPR compliant




A look at the case of ChatGPT and GDPR breaches

The recent action taken against ChatGPT by Italy’s data protection regulator marks the first instance of a regulator addressing privacy concerns surrounding the development of large generative AI models.

Italy's Garante has identified four specific GDPR issues concerning ChatGPT. These include the absence of age controls to prevent usage by individuals under the age of 13, potential provision of inaccurate information about individuals, lack of disclosure regarding data collection, and the absence of a legal basis for collecting personal information during ChatGPT's training.

The Irish Data Protection Commission has stated its intention to collaborate with other EU data protection authorities regarding this matter. Other European data protection regulators, including Belgium and France, are looking into the matter as well.

OpenAI's lack of transparency regarding the dataset used to train ChatGPT has also raised concerns. Researchers at Microsoft – OpenAI's main investor – have admitted to not having access to the full details of ChatGPT's extensive training data. Moreover, a data breach in March 2023 resulted in the exposure of users' conversations and payment information, further complicating OpenAI's position.

Regulators, such as the commissioner for the German state of Schleswig-Holstein, are calling for data protection impact assessments and information regarding compliance with GDPR.


What other AI systems were in breach of data protection rules?

ChatGPT is not the sole AI system that has been found to be in breach of data protection rules. Below are a few more examples:
  • In October 2022, Clearview AI received a €20 million fine from the French data protection regulator, CNIL, for its facial recognition service, which collected photographs of French individuals without a legal basis.

  • In February 2023, Italy's Garante acted against Replika, an AI chatbot, by requesting it to cease processing the data of Italian users. The concern stemmed from the lack of a proper legal basis for processing children's data in compliance with GDPR.

  • In May 2022, the Hungarian data protection authority fined Budapest Bank HUF 250 million (about €665,000) for using an AI solution to analyze voice recordings of calls between its customers and its call center. The bank provided vague information on how the AI processed customer data, and both its data protection impact assessment and its documentation for the balancing test violated GDPR.




How can AI models become GDPR compliant?

The European Parliament published a study in June 2020 titled “The impact of the General Data Protection Regulation (GDPR) on artificial intelligence,” which helps to shed some light on how an AI model can be GDPR compliant.

It states that an AI system that uses personal data must be created, trained, and deployed with a specific, lawful purpose in mind. This purpose needs to be explicit, clear, and in line with the organization's mission, and it should be defined in advance, during the project's planning process.

The use of AI systems also requires a lawful foundation. Under Article 6 of GDPR, these legal justifications include:

  • the consent of the data subject;
  • the performance of a contract;
  • compliance with a legal obligation;
  • the protection of the vital interests of the data subject or of another person;
  • the performance of a task carried out in the public interest or in the exercise of official authority;
  • the legitimate interests pursued by the data controller or a third party.

Whether during the AI model's training or its operational phase, it is important to use data that has been gathered lawfully. GDPR protection still applies to data used for learning purposes where the learning phase is clearly distinguished from the operational implementation and has the sole objective of improving the performance of the AI system.

A crucial rule for AI systems using personal data is data minimization: only information that is relevant, necessary, and adequate for achieving the stated purpose should be collected and used.
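As a purely illustrative sketch of this principle, the snippet below drops every attribute that is not needed for the documented purpose before the data reaches the model; the field names and the churn-prediction purpose are invented for the example.

```python
# Hypothetical customer record; only some fields are needed to train a
# churn-prediction model (the stated, documented purpose).
FIELDS_NEEDED_FOR_PURPOSE = {"tenure_months", "monthly_spend", "support_tickets"}

def minimize(record: dict) -> dict:
    """Keep only the attributes required for the documented purpose."""
    return {k: v for k, v in record.items() if k in FIELDS_NEEDED_FOR_PURPOSE}

raw = {
    "full_name": "Jan Peeters",
    "email": "jan.peeters@example.be",
    "tenure_months": 27,
    "monthly_spend": 43.20,
    "support_tickets": 2,
}
print(minimize(raw))
# -> {'tenure_months': 27, 'monthly_spend': 43.2, 'support_tickets': 2}
```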

The quantity of data needed to train the system must also be carefully considered during the learning process. Companies should:

  • carefully weigh the type and volume of data used;
  • test system performance with fresh data;
  • clearly differentiate between learning and production data;
  • use data pseudonymization or filtering mechanisms (a sketch follows below);
  • maintain documentation on how datasets are compiled and on their properties;
  • routinely reassess the risks for data subjects;
  • ensure data security and establish access authorization frameworks.
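The pseudonymization point can also be sketched. The example below replaces a direct identifier with a keyed hash; the key, field names, and values are invented for the illustration, and under GDPR such keyed hashing is pseudonymization rather than anonymization, because whoever holds the key can still re-identify the records.

```python
import hashlib
import hmac

# The key must be stored separately from the training data and access-controlled;
# anyone holding it can re-identify records, which is why the output counts as
# pseudonymized (not anonymized) personal data under GDPR.
PSEUDONYMIZATION_KEY = b"replace-with-a-secret-key-from-a-vault"

def pseudonymize_id(customer_id: str) -> str:
    """Replace a direct identifier with a stable keyed hash (HMAC-SHA256)."""
    return hmac.new(PSEUDONYMIZATION_KEY, customer_id.encode(), hashlib.sha256).hexdigest()

record = {"customer_id": "BE-00172", "monthly_spend": 43.20}
record["customer_id"] = pseudonymize_id(record["customer_id"])
print(record)
# The same input always maps to the same token, so training data can still be
# joined across tables without exposing the original identifier.
```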


What is the future of GDPR and AI?

The future of GDPR and AI presents a unique set of challenges and considerations. While GDPR was primarily focused on addressing emerging challenges related to the internet, it did not include concepts related to AI. Although the European regulation does not require significant changes to accommodate AI, there are uncertainties and gaps in addressing AI-related data protection issues.
Proposed in 2021, the EU AI Regulation is a significant development in this field. It distinguishes between high-risk AI systems, such as those intended to be used as safety components of products, and other AI systems, acknowledging the need for tailored approaches.
Additionally, GDPR plays a role in addressing bias monitoring and the processing of biometric data within AI systems. But while the AI Regulation aims to complement GDPR, it provides limited clarity on the processing of personal data by AI systems other than high-risk ones.

For AI models to become compliant with GDPR, ongoing discussions, collaboration between authorities and companies, and further guidance from the EU are necessary in the near term.