Data Protection update - March 2024

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from March 2024.

This month, the European Commission started investigations under the Digital Services Act, the House of Representatives passed a bill to prevent Americans' personal data from being sent to states designated as "foreign adversaries", and the Information Commissioner's Office issued new fining guidance.

In AI news, the European Parliament formally approved the AI Act.

Elsewhere this month, the European Parliament adopted the Cyber Resilience Act at first reading and the Italian data protection regulator issued fines totalling €4,950,000 for GDPR breaches.

In this month's issue:

  • Data protection
  • Enforcement and civil litigation

Data protection

EDPS finds that European Commission's use of Microsoft 365 infringes data protection laws

On 11 March 2024, the European Data Protection Supervisor ("EDPS") announced that use by the European Commission ("Commission") of Microsoft 365 had breached data protection laws.

Following an extensive investigation, the EDPS found that the Commission's use of Microsoft's cloud-based services did not comply with several provisions of Regulation (EU) 2018/1725, a framework which governs the protection of personal data processed by Union institutions, bodies, offices and agencies and is equivalent to the EU GDPR.

These breaches mainly related to the licensing agreement between the Commission and Microsoft, in which the Commission failed to specify:

  • what types of personal data could be transferred;
  • to which recipients;
  • in which third country; and
  • for what purpose the transfer was taking place.

As the Commission failed to clearly indicate the above in its agreement with Microsoft, the EDPS found that the Commission had failed to provide "appropriate safeguards" when transferring personal data to countries not covered by an adequacy decision.

These findings related to the Commission's use of Microsoft 365 prior to the implementation of the EU-US Data Privacy Framework. Following the implementation of the framework, EU companies who transfer personal data to U.S. companies participating in the framework no longer have to implement "appropriate safeguards" to protect users' personal data.

The EDPS ordered that from 9 December 2024, the Commission must:

  • suspend all data flows related to its use of Microsoft 365 to Microsoft, its affiliates and sub-processors located in countries outside EU or EEA that are not covered by an adequacy decision; and
  • bring processing operations resulting from its use of Microsoft 365 into compliance with Regulation (EU) 2018/1725.

CAC publishes new regulation on cross-border data flows

On 22 March 2024, the Cyberspace Administration of China ("CAC") published the Provisions on Promoting and Regulating Cross-border Data Flows (the "Provisions").

The Provisions build on the draft version published for consultation (please see our insight from October, where we covered this). They amend existing Chinese data privacy laws by providing exemptions from the current cross-border data transfer requirements for data exports from China, namely: (i) the obligation to apply for a data export security assessment; (ii) the obligation to sign a standard contract for personal information export; and (iii) the obligation to obtain personal information protection certification.

The data processor will be exempt from the above where:

  • data is collected or produced in activities such as international trade, cross-border transportation, academic cooperation, and cross-border manufacturing or marketing, and does not contain important data;
  • data was collected or produced outside of Mainland China and is processed and exported by data exporters, with no domestic personal information or important data involved;
  • the transfer is necessary for the purpose of entering into and performing a contract to which an individual is party (including cross-border shopping, delivery and payment, as well as visa handling, air ticket and hotel reservations, etc.);
  • the transfer is necessary for the purposes of cross-border human resource management;
  • personal data must be transferred abroad in order to protect the safety of natural persons' lives, health, and property in an emergency; or
  • data processors (other than critical information infrastructure operators ("CIIOs")) have transferred fewer than 100,000 individuals' personal information overseas since 1 January of that year.

Any transfers by CIIOs, or of "important data", will still require a security assessment by CAC.

These Provisions took effect immediately on 22 March 2024.

US House of Representatives passes bill to block data brokers from selling personal data to "foreign adversaries"

The U.S. House of Representatives (the "House") unanimously passed a bill that bans organisations which sell personal data (known as data brokers) from making Americans' "sensitive data" available to foreign adversaries or entities controlled by foreign adversaries (the "Bill"). The House highlighted that the sharing of Americans' personal data abroad was a risk to national security.

Key terms:

The countries designated as "foreign adversaries" under the Bill are China, Russia, Cuba, Iran, North Korea and Venezuela.

"Data broker" is also widely defined, with many third parties that process and share consumer data coming under this definition. This Bill may therefore impact a wide variety of companies that collect and sell personal data (rather than solely those companies registered as data brokers).

"Sensitive data" has been widely defined to include government-issued ID numbers, biometric information, precise geolocation information, individuals' private communications, information identifying an individual's online activities over time and across websites or "any other data" that a data broker makes available to a foreign adversary for the purpose of identifying Americans' "sensitive data".

This Bill forms part of the wave of recent data-related regulation in the U.S., with a bill prohibiting the distribution, maintenance, or provision of internet hosting services for an application controlled by a foreign adversary having been passed earlier in March.

ICO launches a call for views on "Consent or Pay" business models

In early March, the UK Information Commissioner's Office (the "ICO") launched a call for views on "consent or pay" models. Under this business model (also known as "pay or okay"), a business offers users the option either to consent to their personal information being processed for certain purposes (typically personalised advertising), or to pay a fee to access the services without their data being used for advertising purposes.

The ICO points out that these models are not specifically prohibited by UK data protection law, but warns businesses that any consent obtained when users provide personal data for personalised advertisements must comply with the UK GDPR: it must be freely given, unambiguous, specific, fully informed and capable of being withdrawn.

The ICO also listed four factors that it expects businesses operating a "consent or pay" model to take into account:

  • whether there is a power imbalance between the company and its users – are users providing consent because they have to use the service?
  • equivalence of the ad-funded and paid-for services – where the paid service comes with additional benefits (e.g. premium content), this may induce users to opt for the paid version of the service;
  • the fee charged – where the fee is set too high, users may have no realistic choice but to consent to the processing of their personal data, calling into question whether consent is freely given; and
  • privacy by design – users should be able to clearly understand their options (as well as the consequences of selecting a particular choice).

The ICO's consultation is open until 17 April 2024.

"Consent or pay" models have also recently come under scrutiny in the EU, with the Commission launching an investigation under the Digital Markets Act into Meta's "pay or okay" model. Under the Digital Markets Act, gatekeepers must obtain consent from users if they intend to use their personal data across different core platform services. The Commission has expressed concerns that Meta's use of the "consent or pay" model "may not provide a real alternative in case users do not consent".

Currently it appears that the ICO is taking a more moderate approach to "consent or pay" models than EU regulators; however, the results of the ICO's consultation will provide greater clarity on how these models are to be regulated in the UK.

Regulations amending UK "immigration exemption" come into force

In December 2023, the Court of Appeal ruled that a revised amendment to the Data Protection Act 2018 ("DPA 2018"), which disapplied certain UK GDPR data subject rights for activities relating to immigration control, was unlawful (the "Immigration Exemption").

The Immigration Exemption stated that individuals' data protection rights could be excluded if these rights prejudiced the maintenance of effective immigration control. The Court of Appeal found that the Immigration Exemption in the DPA 2018 did not comply with Article 23(2) of the UK GDPR and ruled that this exemption was unlawful. Please see our insight from December 2023 for further details.

The DPA 2018 was subsequently amended to reflect this, and the updated legislation came into force on 8 March 2024.

The recent amendments provide safeguards to this exemption, for example, providing that the exemption must be applied on a case-by-case basis, and that the Secretary of State must take into account all the circumstances of the case, including the vulnerability of the person and the impact that infringing their data protection rights would have on their rights and freedoms.


EU Parliament approves AI Act

AI companies operating within the EU will soon be subject to new obligations under the EU's new wide-ranging AI law, following the approval of the AI Act (the "Act") by the European Parliament ("EP") on 13 March 2024. This marks a significant milestone in shaping the future of AI regulation, with 523 parliamentarians voting in favour, 46 against, and 49 abstentions. This follows the provisional agreement that was reached in December 2023 by EU legislators following lengthy negotiations. Please see our insight from December on this.

The AI Act classifies products according to risk and adjusts scrutiny accordingly. The higher the risk, the stricter the rules. For example, AI systems that pose a "clear risk to fundamental rights", such as those that provide social scoring of individuals that lead to discriminatory outcomes, are subject to being banned.

The Act also introduces measures to address risks associated with generative AI tools and chatbots like OpenAI's ChatGPT. It mandates transparency from producers of general-purpose AI systems, requiring disclosure of training data and compliance with EU copyright law. Please see our summary from December for more key takeaways.

Legislators have also suggested that the EP should work on additional AI legislation after the EU elections in June; for example, legislation on AI and employment that goes beyond what is already addressed within the Act. The AI Act awaits final approval by EU ministers, which is scheduled for late April or early May.

Please keep an eye out for the range of materials on the AI Act, which we will be producing to support organisations in their compliance.


European Parliament adopts Cyber Resilience Act at first reading

On 12 March 2024, the EP formally adopted its first reading position on the proposal for the Cyber Resilience Act (the "Act"). The primary objective of the Act is to establish standards that ensure that products with digital features are secure, resilient against cyber threats and are sufficiently transparent about their security properties.

Where connected products are placed on the EU market, they will have to adhere to the comprehensive cybersecurity requirements established by the Act. Products such as connected doorbells, baby monitors, Wi-Fi routers, biometric readers, smart home assistants and private security cameras are examples of products that would be covered by the legislation.

A key provision of the Act entails the categorisation of products based on their criticality and the level of cybersecurity risk they pose. The Commission will propose and regularly update two lists specifying products deemed vital or critical, subjecting them to more stringent scrutiny by designated notified bodies. On the other hand, products with lower cybersecurity risk profiles may undergo a less arduous conformity assessment process, typically administered internally by manufacturers.

For the Act to become law, the Council of the EU must also adopt the same text. Most of the Act's provisions will apply three years after its entry into force, but some will apply earlier. The Council is expected to formally adopt the Act at first reading at an upcoming meeting, in light of its undertaking by letter of 20 December 2023 to approve the EP's first reading position without further amendments.

Enforcement and civil litigation

Italy's DPA launches investigation into OpenAI tool over data protection concerns

On 8 March 2024, Italy's Data Protection Authority (the "Garante") announced that it would be opening an investigation into OpenAI's Sora, a new AI tool that generates videos based on text prompts. The tool is still in a test phase and is not yet available to the public. OpenAI was given 20 days to respond to the Garante's investigation, in order to allow the Garante to assess the potential implications that the tool might have on the processing of personal data within Italy.

One of the issues the investigation concerns is the nature of the data collected and used to train Sora. Of particular interest to the Garante is whether this includes personal data, especially sensitive categories of data such as health, political opinions and religious beliefs, and how Sora will comply with European data protection rules if it is released in the EU. This includes how OpenAI will obtain user consent and communicate its data processing activities transparently.

Another key issue the investigation seeks to clarify is how OpenAI ensures the accuracy and integrity of data used in training Sora. The Garante is concerned about potential biases or inaccuracies in Sora's algorithms and wants to ensure ethical standards are upheld, avoiding discriminatory outcomes in Sora's outputs.

ICO publishes new fining guidance

The ICO has published new guidance on how it decides whether to issue penalties, and how it calculates fines, for infringements of data protection laws.

Circumstances in which ICO will issue a fine

The ICO has stated that the decision to impose a fine will be fact-specific and will be based on the circumstances of each individual case. The Commissioner will take the following into account:

  • the seriousness of the infringement;
  • any relevant aggravating or mitigating factors; and
  • whether imposing a fine would be effective, proportionate and dissuasive.

Calculation of the fine

The ICO laid out a five-step approach to determining the penalty amount:

  1. Assessment of the seriousness of the infringement;
  2. Accounting for turnover (where the controller or processor is part of an undertaking);
  3. Calculation of the starting point having regard to the seriousness of the infringement and, where relevant, the turnover of the undertaking;
  4. Adjustment to take into account any aggravating or mitigating factors; and
  5. Assessment of whether the fine is effective, proportionate and dissuasive.

This updated guidance should provide greater transparency to organisations as to why the ICO issues fines, and how the fine is calculated. This is particularly important because, where controllers are part of an undertaking (for example, if they are a subsidiary of a parent company), any fine the ICO issues will be calculated on the turnover of the undertaking as a whole.

EC announces investigations under the Digital Services Act

This month the Commission announced three investigation procedures under the Digital Services Act ("DSA").

1. AliExpress

The Commission has opened formal proceedings to investigate whether AliExpress failed to manage and mitigate the risks arising from its services.

The Commission's proceedings will focus on two key areas: whether

  • AliExpress is adequately enforcing its terms of service, particularly with regard to prohibiting products that are damaging to consumers' health (such as fake medicines and dietary supplements) and that could be harmful to minors (for instance, access to pornographic materials); and
  • the platform is taking effective measures to prevent the dissemination of illegal content, intentional manipulation on the online platform (through "hidden links") and risks deriving from features, such as influencers promoting illegal or harmful products through AliExpress' "Affiliate Programme".

2. LinkedIn

The Commission has sent LinkedIn a formal request for information in relation to how the platform complies with the DSA's prohibition on personalised advertisements based on special categories of personal data (including sexuality, race or political opinions).

LinkedIn must provide the requested information by 5 April 2024, at which point the Commission will review the results and decide on next steps.

Under the DSA, the Commission has the power to fine LinkedIn up to 1% of its total annual income or worldwide turnover for the preceding year.

3. Generative AI

The Commission has also sent a formal request for information under the DSA to two Very Large Online Search Engines (Bing and Google Search) and six Very Large Online Platforms (including Facebook, Instagram, TikTok and YouTube). A Very Large Online Platform is an online platform which provides its services to at least 45 million average active monthly recipients in the EU.

The Commission has asked these organisations to provide more information on how they mitigate risks linked to generative AI, such as "hallucinations", the viral dissemination of deepfakes and the automated manipulation of services that can mislead voters.

The Commission is particularly concerned with the organisations' risk assessments and mitigation measures linked to the impact of generative AI content on electoral processes, the dissemination of illegal content, the protection of personal data and consumer protection.

CJEU – March data protection judgments

In March the Court of Justice of the European Union ("CJEU") issued several judgments on data protection issues.

1. IAB Europe

In a preliminary ruling on a case between IAB Europe and the Belgian DPA, the court confirmed that user preferences for targeted advertising represented in a string known as the "Transparency and Consent String" (the "TC String") constitute personal data, and that IAB Europe is a joint controller in such arrangements.

The CJEU clarified that because a TC String contains information concerning an identifiable user, it would be considered personal data under the GDPR. Additionally, IAB was considered a joint controller (with its members) of the TC String, as IAB's Transparency and Consent Framework established clear guidelines as to how IAB's members must seek consent to process users' personal data. Due to this framework, the CJEU ruled that IAB exerted influence over the personal data processing and would therefore be considered a joint controller with regard to the TC String.

2. Oral disclosure of personal data can be subject to GDPR

In this case, Endemol, a TV production company, made an oral request to a Finnish district court for information on an individual's possible criminal proceedings. The request was refused on the basis that the reason Endemol supplied was not a valid basis for processing that personal data. Endemol appealed, arguing that an oral disclosure did not constitute processing under the GDPR.

However, the CJEU ruled that the concept of "processing" under the GDPR was very broad, which meant that oral disclosure could fall under this definition.

That said, any oral disclosure would also have to fall within the material scope of the GDPR under Article 2: in this case, as the processing was non-automated, the data would have to form part of a filing system to be within scope. As the data Endemol requested was stored in the criminal court register, the CJEU considered that the information Endemol requested could fall within the scope of the GDPR.

3. OC v Commission

This case concerned OLAF (the European Anti-Fraud Office), which had released a press release concerning a Greek academic. The applicant (the academic) argued that she could easily be identified via the information in the press release (even though she was not named), and thus that OLAF had infringed the provisions of Regulation 2018/1725 relating to the protection of personal data.

The General Court ruled that there was no personal data in the press release, as the applicant was not identified in it, and she was also unlikely to be identified by any means reasonably likely to be used by a reader. However, on appeal, the CJEU upheld the applicant's appeal, finding that the General Court had misinterpreted the concept of "personal data" in its judgment.

The CJEU also specified that just because additional information was necessary to identify the data subject, this did not mean that the data in question could not be classified as personal data.

Round-up of enforcement actions

  • UniCredit S.p.A. – Italian DPA – €2.8 million: UniCredit suffered a cyberattack which led to some customers' personal data being leaked. The DPA found that UniCredit had failed to enforce adequate technical, organisational and security measures to protect customers' personal data.

  • NTT Data Italia ("NTT") – Italian DPA: NTT was engaged to perform cybersecurity assessments for UniCredit S.p.A., but contracted a third party to carry out the tests on its behalf without seeking UniCredit's authorisation to do so.

  • Klarna – Swedish DPA – SEK 7.5 million: Klarna failed to provide users with sufficient information about how it would store their personal data.

  • X (formerly known as Twitter) – Italian DPA – €1.35 million: X unlawfully allowed gambling advertising on its platform.

  • Online shopping platform (unnamed) – Finnish DPA: The DPA found that the platform violated the GDPR by requiring customers to create an account before making a purchase, as an online purchase does not require the storage of personal data or the creation of an account.