The CJEU’s decision in EDPS v. SRB provides crucial guidance on the pseudonymization of personal data, confirming that identifiability depends on the perspective of the controller or recipient.
While the judgment did not directly address marketing or technology, its reasoning has far-reaching implications for direct marketing and AI training, where pseudonymized datasets are frequently used.
The Court’s Reasoning in EDPS v. SRB
On 4 September 2025, the Court of Justice of the European Union (CJEU) issued a pivotal judgment in Case C-413/23 P, EDPS v. SRB, offering key clarifications on the concept of personal data, particularly in the context of pseudonymized information transferred to third parties.
This decision stems from a dispute involving the Single Resolution Board (SRB) and the European Data Protection Supervisor (EDPS) concerning the processing of personal data following the resolution of Banco Popular Español. After the bank’s resolution in June 2017, the SRB transferred certain pseudonymized comments to Deloitte, tasked with conducting a valuation of the resolution’s effects.
Several complainants (affected shareholders and creditors) lodged complaints with the EDPS, arguing that the SRB’s privacy statement failed to inform them that their personal data might be disclosed to Deloitte. The EDPS determined that the data shared were pseudonymized personal data: the comments expressed personal opinions and included an alphanumeric code linking responses from the registration and consultation phases, even though Deloitte did not receive names or other direct identifiers.
The SRB challenged this before the General Court, claiming that the dataset was not personal data. The General Court annulled the EDPS’s decision, but the EDPS appealed to the CJEU.
The CJEU addressed two central issues:
- Data that “relates” to a person. The Court ruled that the General Court erred in requiring an analysis of the comments’ content, purpose, or effect: personal opinions or views are inherently linked to their authors. Citing the Nowak judgment (C-434/16), the Court reiterated that examiner comments in an exam script relate to the examiner as author, even if they also relate to the candidate.
- Identifiability and pseudonymization. The Court held that pseudonymized data cannot automatically be treated as anonymous. Identifiability depends on who holds the data and whether that party has access to means of re-identification that are “reasonably likely” to be used.
  - For the SRB, which retained the re-identification key, the comments remained personal data, and the SRB had failed in its duty to inform data subjects about the transfer.
  - For Deloitte, which received only pseudonymized comments, with no direct identifiers and no access to the key, the data were not personal data, provided that safeguards prevented re-identification (including cross-checks against other sources).
The ruling therefore establishes a relative test: the same pseudonymized dataset may be personal data in the hands of one party but not in those of another.
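To make the relative test concrete, the sketch below (in Python) shows one common pseudonymization pattern: direct identifiers are replaced with alphanumeric codes, and the means of re-identification stay with the controller. The names, comments, and the use of keyed hashing are purely illustrative assumptions for this example, not the SRB’s actual method or a compliance recipe.

```python
import hmac
import hashlib
import secrets

# Illustrative controller-side step: the secret key (or a code-to-name lookup
# table) is retained by the controller and never shared with the recipient.
SECRET_KEY = secrets.token_bytes(32)

def pseudonymize(record: dict) -> dict:
    """Return a copy of the record with the direct identifier replaced by a code."""
    code = hmac.new(SECRET_KEY, record["name"].encode(), hashlib.sha256).hexdigest()[:12]
    return {"code": code, "comment": record["comment"]}

# Hypothetical input data held by the controller (personal data in its hands).
comments = [
    {"name": "Jane Doe", "comment": "The valuation underestimated our shares."},
    {"name": "John Roe", "comment": "We were not consulted on the transfer."},
]

# What a recipient in Deloitte's position would see: codes and free-text
# comments, but no names and no access to SECRET_KEY.
shared_dataset = [pseudonymize(r) for r in comments]
print(shared_dataset)

# The relative test in practice: the controller, holding SECRET_KEY, can
# recompute the codes for known individuals and so re-identify authors, which
# keeps the dataset personal data for it; a recipient without the key and
# without other reasonably likely means of re-identification may fall outside
# that definition, per the CJEU's reasoning.
```

Even without names, free-text comments could in principle allow re-identification through cross-checks with other sources, which is why the Court made the recipient-side assessment depend on the safeguards actually in place.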
Broader Implications
Direct Marketing
The CJEU did not expressly examine marketing, but the reasoning has direct effects on how pseudonymized datasets are shared for advertising and customer engagement:
- Controllers (e.g., online platforms, retailers, or service providers) that retain re-identification capabilities must treat such datasets as personal data, with full GDPR obligations, including transparency in privacy notices and a lawful basis for processing.
- Third parties (such as marketing agencies) receiving pseudonymized datasets may, under strict technical and organizational safeguards, process them without the data being classified as personal data, but only if re-identification is realistically impossible in their hands.
This dual standard means businesses cannot rely on pseudonymization as a compliance shortcut. Customer profiling, targeted advertising, and data sharing must all be accompanied by clear information duties and contractual safeguards.
AI Training
The same logic applies to the increasingly common use of pseudonymized datasets in AI training:
- Data providers such as hospitals, banks, or platforms remain fully responsible under the GDPR because they hold the means of re-identification. Even if the data are pseudonymized before being transferred, they must inform data subjects and ensure a lawful basis for processing.
- AI developers or research institutions may, however, process the same data outside the GDPR if they cannot, by means reasonably likely to be used, re-identify individuals. This requires strong pseudonymization techniques and clear contractual and technical limits on data use.
This creates a split responsibility: controllers remain accountable, while AI developers may sometimes benefit from reduced compliance obligations. Nonetheless, any weakness in safeguards could bring the dataset back within the scope of the GDPR as personal data.
Conclusion
The EDPS v. SRB decision provides detailed guidance on the pseudonymization of personal data, confirming that pseudonymization reduces risks but does not eliminate accountability. For sectors like direct marketing and AI training, the impact is significant:
- Controllers must assume GDPR obligations whenever they hold the re-identification key.
- Recipients may sometimes operate outside the GDPR, but only with robust and provable safeguards.
Where pseudonymized data do not qualify as personal data in the recipient’s hands, they are not subject to GDPR compliance obligations, including the need to rely on a valid legal basis, which is usually the main sticking point.
While pseudonymization remains a valuable privacy tool, this judgment makes clear that it is not a silver bullet. Companies must now reassess privacy notices, contracts, and governance models to ensure compliance, transparency, and user trust in data-driven strategies.
On a similar issue, you can read the article “EDPB opinion on AI model Training: How to Address GDPR Compliance?”.