Data Protection

Protecting data in use when consent is not enough

By Gary LaFever, Co-Founder, CEO and General Counsel at Anonos

In today’s data-driven world, new technical safeguards are required to balance data innovation with privacy rights when consent no longer works. While consent may look like a panacea for privacy, it falls through the gaps of the overall data ecosystem in numerous ways, including the following:

  • Unpredictability of Operations – Consent exposes data controllers to unpredictable interruptions in processing – e.g. when consent is revoked by a data subject under the EU General Data Protection Regulation (GDPR) or data subjects exercise their Right to be Forgotten/Right to Erasure.
  • Legal Processing Complications – The requirements for consent as a basis for lawful processing under the GDPR may be impossible to achieve – e.g. when sophisticated iterative processing is incapable of being described in advance with detailed specificity as now required under the GDPR (generalised consent is no longer legally effective).
  • Self-Selection Issues – Reliance on consent can result in inaccurate and incomplete data – e.g. relying on consent to include personal data in studies has been shown to result in biased data that is not representative of the larger population.
  • Liability Across Data Ecosystem – All stakeholders must ‘live’ with the results of inadequate consent secured by other parties in the ecosystem – e.g. controllers and processors involved in co-processing activities are jointly and severally liable under the GDPR for the failure of other parties to secure lawful consent along the data pipeline.

Regulator guidance and enforcement actions by EU Data Protection Authorities recognise the inadequacy of consent as used prior to the GDPR coming into effect.

The use of consent as a basis for lawful processing of personal data is now severely restricted, particularly when it comes to secondary uses of personal data such as analytics, machine learning and artificial intelligence. New technical safeguards are now necessary to replace consent as the bulwark for ensuring individual privacy while still enabling innovation using data. 

The GDPR highlights pseudonymisation as a technological solution. Pseudonymisation – legally defined at the EU level for the first time in the GDPR with a heightened standard relative to past practice – is repeatedly mentioned as a recommended safeguard. In more than a dozen places, the GDPR links pseudonymisation to express statutory benefits.

GDPR Article 25(1) for example identifies pseudonymisation as an “appropriate technical and organisational measure” while Article 25(2) requires controllers to “implement appropriate technical and organisational measures for ensuring that, by default, only personal data which are necessary for each specific purpose of the processing are processed.

That obligation applies to the amount of personal data collected, the extent of their processing, the period of their storage and their accessibility. In particular, such measures shall ensure that by default, personal data are not made accessible without the individual’s intervention to an indefinite number of natural persons.”

While pseudonymised data – as newly defined under GDPR Article 4(5) – remains within the scope of the GDPR as personal data, it provides express benefits in the form of:

  • Greater predictability of operations than alternatively trying to ‘anonymise’ data (it is nearly impossible to ensure the non-linkability to identifying data that the GDPR now requires for data to qualify as anonymous)
  • Express statutory rights under the GDPR for expanded data use

According to the definitional requirements of GDPR Article 4(5), data is not pseudonymised if it can be attributed to a specific data subject without the use of separately kept “additional information” that prevents unauthorised re-identification.

Pseudonymised data embodies the state-of-the-art in Data Protection by Design and by Default engineering to enforce dynamic (versus static) protection of both direct and indirect identifiers. The shortcomings of prior static approaches to data protection (e.g. failed attempts at anonymisation) are highlighted in two well-known historical examples of unauthorised re-identification of individuals using AOL and Netflix data.

These re-identifications were possible because the data was statically protected, without the separately kept “additional information” now required under the GDPR. Because access to the “additional information” needed for re-identification remains possible only under the control of the processor, GDPR-compliant pseudonymisation limits exposure to liability, thereby enabling greater compliant and innovative use of data.
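The distinction between static and dynamic protection can be illustrated with a short sketch. In this hypothetical example (the key name and function are illustrative, not part of any Anonos product), a keyed hash derives a different pseudonym for each processing context, and the secret key plays the role of the separately kept “additional information”: without it, tokens from different datasets cannot be linked back to the same individual.

```python
import hmac
import hashlib

# The separately kept "additional information" (GDPR Art. 4(5)):
# a secret key stored apart from the pseudonymised dataset,
# under the controlling party's technical and organisational measures.
SECRET_KEY = b"stored-separately-under-strict-access-controls"

def pseudonymise(identifier: str, context: str) -> str:
    """Derive a context-specific pseudonym for a direct identifier.

    Including the processing context makes the pseudonym *dynamic*:
    the same person receives a different token in each dataset, so
    records cannot be linked across releases without the key.
    """
    message = f"{context}:{identifier}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()[:16]

# A static approach would reuse one token everywhere, leaving records
# linkable across datasets (as in the AOL and Netflix incidents).
# Dynamic tokens for the same individual differ by context:
study_a = pseudonymise("alice@example.com", "study-A")
study_b = pseudonymise("alice@example.com", "study-B")
assert study_a != study_b  # unlinkable without the secret key
```

The design choice here is that re-identification and cross-dataset linkage both require the key, so controlling access to that one artefact controls exposure across every release of the data.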

State-of-the-art pseudonymisation not only enables greater privacy-respectful use of data in today’s ‘big data’ world of data sharing and combining, but it also enables data controllers and processors to reap explicit benefits under the GDPR. The benefits of properly pseudonymised data are highlighted in multiple GDPR Articles, including:

  • Article 6(4) as a safeguard to help ensure the compatibility of new data processing
  • Article 25 as a technical and organisational measure to help enforce data minimisation principles and compliance with data protection by design and by default obligations
  • Articles 32, 33 and 34 as a security measure helping to make data breaches “unlikely to result in a risk to the rights and freedoms of natural persons” thereby reducing liability and notification obligations for data breaches
  • Article 89(1) as a safeguard in connection with processing for archiving purposes in the public interest; scientific or historical research purposes; or statistical purposes

Moreover, the benefits of pseudonymisation under Article 89(1) also provide greater flexibility under:

  • Article 5(1)(b) with regard to purpose limitation
  • Article 5(1)(e) with regard to storage limitation
  • Article 9(2)(j) with regard to overcoming the general prohibition on processing Article 9(1) special categories of personal data

In conclusion, GDPR-compliant pseudonymisation enables the achievement of ‘Aristotle’s Golden Mean’: on a spectrum, there can be an excess of behaviour at one end and a deficiency of behaviour at the other, but somewhere in the middle lies perfectly balanced behaviour.

About Anonos

Anonos enables lawful analytics, AI and ML that preserves 100% of data accuracy while expanding opportunities to ethically share and combine data.

Anonos Pseudonymisation and Data Protection by Design & by Default technology reconciles conflicts between protecting the rights of individuals and achieving business and societal objectives to use, share, combine and relink data in a lawful manner.

Anonos patented Variant Twins® enable sharing, collaboration, and analytics of personal data by technologically enforcing dynamic, fine-grained privacy, security and data protection policies in compliance with the GDPR, CCPA and other evolving data privacy regulations.