
EDPB Opinion on AI

The European Data Protection Board has issued a new opinion on certain data protection aspects of the processing of personal data in the context of AI models (the “Opinion”). The Opinion addresses elements of the training, updating, development and operation of AI models where personal data form part of the relevant dataset, since the development and deployment of AI models raise questions from a data protection perspective.
Anonymity of an AI Model (Application of Personal Data Protection):
If an AI model is anonymous, the GDPR does not apply to it. Data protection rules do not apply to anonymous information, i.e. information which (i) has never related to an identified or identifiable individual, or (ii) has been rendered anonymous, meaning that an individual can no longer be identified on the basis of such data. However, AI models trained with personal data cannot in all cases be considered anonymous (e.g. if information from the training dataset remains in the model and can be extracted from it). The anonymity of an AI model must therefore be assessed on a case-by-case basis in order to determine whether the GDPR applies and must be observed.
Appropriateness of Legitimate Interest as a Legal Basis for Processing of Personal Data (in the Context of Development and Deployment of AI Models):
If personal data is used during the development or deployment of AI models, the GDPR applies and a legal basis for the processing must be identified. In this regard, the Opinion notes that the GDPR does not establish any hierarchy between the different legal bases; it is for controllers to identify the appropriate legal basis for their processing activities. Legitimate interest may therefore serve as a legal basis for the processing of personal data.
Consequences of Unlawful Processing of Personal Data in the Development Phase of an AI Model on the Subsequent Processing or Operation of the Model:
The Opinion distinguishes three scenarios:
- Personal data is retained in the AI model (i.e. the model is not anonymous) and is subsequently processed by the same controller (e.g. during deployment of the AI model). The lawfulness of the subsequent processing should be assessed on a case-by-case basis, depending on the context.
- Personal data is retained in the AI model and is processed by another controller in the context of the deployment of the AI model. The controller deploying the AI model should conduct an assessment to ascertain that the AI model has not been developed by unlawfully processing personal data.
- A controller unlawfully processes personal data to develop the AI model, but ensures that the model is anonymised before the same or another controller initiates further processing of personal data in the context of its deployment. In that case, the unlawfulness of the initial processing should not impact the lawfulness of the subsequent operation of the AI model.
The impact of AI on personal data protection, and its interplay with the GDPR, is of crucial importance.
Please do not hesitate to contact us if you would like to discuss the use of AI tools in your business.
