Data Protection update - June 2023

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from June 2023.

June saw the US and UK strengthen their relationship by reaching an agreement in principle to create a “data bridge” to allow the free flow of data between the two countries.

The pivotal deal, which followed two years of discussions, is expected to make it easier for around 55,000 UK businesses to transfer data freely to certified US organisations without red tape and deliver annual savings to businesses of around £92.4 million. Chloe Smith, the UK’s Secretary of State for Science, Innovation, and Technology, said the international collaboration is “key” to the UK’s science and technology superpower ambitions and will “open new opportunities” to grow the UK’s innovation economy.

June was also the month that saw the European Parliament adopt its negotiating position on the AI Act.

The Act takes a risk-based approach and establishes obligations for providers and deployers of AI systems. It includes a full ban on the use of AI for biometric surveillance, emotion recognition and predictive policing. It also requires generative AI systems such as ChatGPT to disclose that content was AI-generated. Talks with EU countries on the final form of the law are now underway, and a final agreement is expected by the end of the year.

Elsewhere, Spotify found itself in hot water after being fined €5 million by the Swedish Authority for Privacy Protection for breaching GDPR rules by not providing information about personal data it processes following individual requests. The music streaming giant has vowed to appeal.

In this month’s issue:

Data Protection 

Artificial Intelligence

Cyber Security

Enforcement and Civil Litigation 

Data Protection

US and UK governments reach agreement in principle on US-UK data bridge

On 8 June – a year after the UK government set out its plans to create 'data bridges' with a number of priority jurisdictions – US President Joe Biden and UK Prime Minister Rishi Sunak announced the Atlantic Declaration for a Twenty-First Century US-UK Economic Partnership (the "Declaration"), providing for closer cooperation on emerging technologies and responsible digital transformation. The Declaration includes a commitment in principle between the US and UK governments to establish a US-UK Data Bridge (the "Bridge").

The Bridge is intended to facilitate trusted, efficient and secure data flows between the US and UK. The UK government estimates that the Bridge will allow approximately 55,000 UK businesses to transfer data freely to certified US organisations, which is estimated to translate into £92.4 million in direct savings per year.

The agreement in principle builds on the US-EU Data Privacy Framework ("DPF"), which was introduced by Executive Order 14086 on Enhancing Safeguards for United States Signals Intelligence Activities (the "EO"). With the DPF now in the final stages of implementation, the announcement of the Bridge indicates that both the US and UK governments intend to establish a "UK Extension" to facilitate transfers between UK businesses and companies in the US that opt in to the DPF. The Bridge will facilitate transfers to US organisations alongside the various data transfer mechanisms available under UK law.

Whilst the announcement of the Bridge is a good first step for the future of UK-US transfers, work must still be done before the Bridge is fully operational. The UK government will now need to assess the protection offered under US law, which will be done in consultation with the Information Commissioner's Office ("ICO"). Simultaneously, the US will look to designate the UK as a qualifying state under the EO such that the protections offered by it can apply to UK data subjects. For more on the EO and DPF, please see our article here.

You can read the Declaration here.

UK cookie enforcement looms as cookie fatigue continues to be felt across the UK and EU

Stephen Bonner, the ICO's deputy commissioner, has warned that "there is no excuse" for companies not to have a 'reject all' button on their website cookie banners. Bonner emphasised that companies that do not include 'reject all' on their top-level cookie banners are breaking the law, that the ICO is "paying attention in this area" and that its position is "pretty straightforward and robust". Although Bonner made clear that the ICO will not immediately issue fines, but will instead "move through a set of regulatory interventions getting harder and harder", he confirmed that the ICO will issue fines if organisations fail to remediate non-compliant cookie banners.

In the EU, lawmakers and commentators are also looking at reforming cookie regulation. In April, EU Justice Commissioner Didier Reynders acknowledged so-called 'cookie fatigue', noting that Europeans are frustrated by the proliferation of pop-ups and notices asking for consent to cookies.

Simultaneously, a European Commission official confirmed that a new browser-based adtech consent solution is being discussed by members of the European Parliament and the Council of the EU. This comes alongside the EU's long-awaited e-Privacy Regulation proposal. The proposed solution would allow users to indicate their preferences for online advertising in a single interface, enabling them to opt out of targeted advertising altogether or opt in to receive only specific advertisements.

New guidelines for fines issued under the GDPR published by EDPB

The European Data Protection Board ("EDPB") issued its final guidelines (the "Guidelines") on how to calculate fines under the GDPR on 8 June. The Guidelines are intended for use by supervisory authorities and are designed to ensure a consistent application and enforcement of the GDPR.

The Guidelines set out a five-step methodology for assessing fines on a case-by-case basis, which involves:

  • identifying the processing operations involved;
  • finding the starting point for the fine based on the seriousness of the infringement and the turnover of the business involved;
  • evaluating aggravating and mitigating circumstances related to past or present behaviour of the controller/processor and increasing or decreasing the fine accordingly;
  • identifying legal maximums for different processing operations; and
  • analysing whether the final amount of the calculated fine meets the requirements of effectiveness, dissuasiveness and proportionality.

Commentators have criticised the EDPB's use of turnover as the basis for the starting point, arguing that under the EU GDPR turnover is relevant principally to the cap on the fine rather than to its calculation.
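By way of a purely illustrative sketch, the five-step methodology can be modelled as follows. All figures here (the 2% starting rate, the seriousness scale, the adjustment factor and the 4% cap parameter) are invented assumptions for illustration only: the Guidelines express seriousness and aggravating or mitigating circumstances qualitatively, not as fixed numbers.

```python
def sketch_fine(turnover: float,
                seriousness: float,
                adjustment: float,
                legal_max_pct: float = 0.04) -> float:
    """Illustrative-only sketch of the EDPB's five-step approach.

    turnover      -- annual worldwide turnover of the business (step 1 assumes
                     the relevant processing operations are already identified)
    seriousness   -- assumed 0.0-1.0 scale for the gravity of the infringement
    adjustment    -- e.g. +0.2 for aggravating, -0.2 for mitigating factors
    legal_max_pct -- cap, e.g. 4% of turnover for the most serious
                     infringements under Article 83(5) EU GDPR
    """
    # Step 2: starting point derived from seriousness and turnover
    # (the 2% base rate is an invented assumption)
    starting_point = turnover * 0.02 * seriousness
    # Step 3: increase or decrease for aggravating/mitigating circumstances
    adjusted = starting_point * (1 + adjustment)
    # Step 4: never exceed the applicable legal maximum
    capped = min(adjusted, turnover * legal_max_pct)
    # Step 5: a supervisory authority would still assess whether the figure
    # is effective, dissuasive and proportionate (not modelled here)
    return round(capped, 2)

# e.g. a business with €500m turnover, mid-range seriousness, mitigating factors
print(sketch_fine(500_000_000, 0.5, -0.2))  # prints 4000000.0
```

Note how step 4 operates as a hard ceiling: however the aggravating factors compound, the sketch never returns more than the legal maximum derived from turnover.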

The Guidelines can be accessed via the EDPB's website here.

Irish watchdog's decision stayed as Meta seeks judicial review

In May, we reported that the Irish Data Protection Commission (the "DPC") had fined Meta €1.2 billion in a significant decision (the "DPC's Decision"). The DPC's Decision requires Meta to suspend future transfers of personal data to the US and to bring its processing operations into compliance with the EU GDPR by ceasing processing in the US of personal data of EU/EEA users within six months.

In response, Meta filed two cases at the Irish High Court on 8 June, seeking judicial review of the DPC's Decision, and was granted a stay of the deadlines in the orders imposed on it. On 26 June, Mr Justice Denis McDonald, the head of the Irish Commercial Court, agreed that the stay imposed earlier that month should continue until the end of July. Catherine Donnelly SC, for the DPC, said she understood that the European Commission is shortly to issue an adequacy decision on the level of protection the US offers to personal data, which "may render all this unnecessary". EU officials have also said that a data transfer agreement with the US should be in place by the summer.

Change on the horizon for the GDPR: DPDI Bill and EU GDPR procedural reform

In May, we reported that the Data Protection and Digital Information (No. 2) Bill (the "DPDI Bill") had its second reading in the House of Commons and moved to Committee stage. On 13 June, the Public Bill Committee (the "Committee") concluded its scrutiny of the DPDI Bill. The Committee made a handful of substantive amendments, with the overwhelming majority of the changes being technical amendments tabled by the Government to clarify clauses and bring consistency to the legislation.

The DPDI Bill will now move on to the Report Stage, which represents an opportunity for all members of the Commons to consider amendments to the DPDI Bill and suggest further changes or new clauses for consideration.

Artificial Intelligence

European Parliament adopts its position on AI Act

On 14 June, the European Parliament became one of the first legislatures to adopt its position on a law focusing on AI. Its position on the Artificial Intelligence Act ("AI Act") was adopted with 499 votes in favour, 28 against and 93 abstentions.

Despite the approval from the European Parliament, it may be a while before the AI Act comes into force. The next stage is for the European Parliament and the Council of the EU to agree on the AI Act's final form.

The version of the AI Act approved by the European Parliament adopts a risk-based structure, with the aim of providing a proportionate and future-proof approach to AI regulation. AI systems posing an unacceptable level of risk to people's safety would be prohibited, including systems that deploy manipulative techniques or exploit people's vulnerabilities. The AI Act also describes what 'high-risk' AI may look like, such as systems that cause harm to people's health, safety or fundamental rights, or to the environment, including systems that aim to influence voters in political campaigns. Lucilla Sioli, the European Commission's director of AI policy, commented that one of the main challenges in the final talks on the AI Act will be whether big generative AI models, such as ChatGPT, fall within the definition of 'high-risk'.

For more information on AI, see our AI insights.

Privacy concerns delay EU launch of Google's AI chatbot

In tune with international fears around generative AI technologies, the launch of Google's Bard in the EU has been delayed due to privacy concerns. Google's AI chatbot is a competitor to other leading AI technologies, including OpenAI's ChatGPT and DeepMind's Sparrow. Having already launched the chatbot in the US and UK, Google intended to launch Bard across the EU in mid-June.

However, these plans were delayed following notification to the DPC that Bard was to be launched in the EU. In response, the DPC raised concerns, stating that it had yet to see a detailed briefing, a data protection impact assessment or any supporting documentation relating to Bard.

This development comes as many national regulators express concerns about the development of generative AI. Regulation has yet to catch up with AI's rapid development, leaving regulators and organisations in an uncertain landscape. While this uncertainty remains, organisations should stay up to date with legislative developments and conduct the recommended risk assessments when using or deploying AI technologies.

Sunak's AI policy goals

London Tech Week took place between 10 and 14 June. To mark the occasion, Prime Minister Rishi Sunak gave a speech outlining the pillars of the UK government's policy on AI. Sunak's comments underscore the government's ambition to cement the UK's position as a global leader in AI innovation, whilst retaining a focus on global AI safety.

For a deeper dive into Sunak's speech and the government's approach to AI regulation, read our blog post.

UK to host first global summit on AI safety

Rishi Sunak announced on 7 June that the UK will host the first major global summit on AI safety in the autumn of this year.

In its press release, the UK government stated that this summit is required as the world is grappling with the challenges and opportunities presented by the rapid advancement of AI. It argued that this fast-paced development requires agile leadership, and that the UK has a 'global duty' to ensure AI technology is developed and adopted safely and responsibly.

Cyber Security

Data breaches hit the news

Over May and June, we have seen some significant data breaches hit the news.

At least 90 organisations were affected by a cyber-attack against Capita, although fewer than 0.1% of its servers were compromised. Capita is one of the UK's largest outsourcers and works with both public and private sector clients. The incident highlights the significant impact of attacks on supply chains and managed service providers. For more information, access our blog post focusing on the Capita leak.

British Airways, Boots, the BBC and Ofcom also announced that they were affected by a data breach. The attack is linked to Russian-speaking criminal hackers who exploited a vulnerability in MOVEit, a file transfer software. Since then, a number of further organisations, including law firms, have been reported as affected by the breach. For more information, stay tuned for our upcoming blog post providing a deeper dive into the MOVEit data breach.

ENISA sets the tone on good cyber security practices for AI

On 7 June, the European Union Agency for Cybersecurity ("ENISA") published four reports on the cyber security challenges posed by AI.

The new reports focus on the following key issues:

  • Setting good cyber security practices for AI.
  • Cyber security and privacy in AI: forecasting demand on electricity grids.
  • Cyber security and privacy in AI: medical imaging diagnosis.
  • AI and cyber security research.

The report focusing on setting good cyber security practices is of particular interest. The report is designed to be applicable across a wide range of sectors, build on international best practice, consider AI's place within broader IT systems, and be adaptable for future AI developments. It provides guidance on how organisations can secure their AI systems, operations and processes in order to build trustworthiness in their AI activities. In particular, the report provides details on:

  • AI threat assessments: how organisations should implement additional risk assessment efforts in order to reduce the risk of cyber security threats.
  • AI security management: how security controls, such as pre-processing or adversarial training, can be used to minimise the impact of factors compromising the trustworthiness of AI.

You can download the new ENISA reports here.

Enforcement and Civil Litigation

Class action filed against major AI operators citing privacy concerns

OpenAI, the organisation behind the ChatGPT and DALL-E generative AI systems, was hit by a new class action on 28 June 2023, filed in the US District Court for the Northern District of California and citing data privacy breaches. The lawsuit claims that OpenAI and Microsoft (the largest investor in OpenAI) have committed multiple privacy and consumer law violations. These include the harvesting of huge volumes of personal data, including medical data and data about children, without permission as part of the training process for chatbots (i.e. ChatGPT) and translation applications. Additional claims include the lack of effective procedures to allow individuals to request the deletion of their data and challenges over the lack of effective safeguards for minors. The claimants seek damages and a "freeze" on OpenAI's development activities while an independent AI council is set up to approve AI products and allow users to make opt-out and deletion requests.

The privacy law challenge to the widespread "scraping" of data published online to train AI applications dovetails with other legal claims filed in respect of alleged copyright violations concerning such material, much of which was used without the permission of the authors and copyright owners.

Latest EU Google transfers decisions

There has been yet another recent decision that a website's use of Google Analytics led to the unlawful transfer of personal data to the US in violation of the EU GDPR, this time by the Austrian Federal Administrative Court ("BVwG"). On 12 May, the BVwG declared the use of Google Analytics by an Austrian website unlawful. Despite reliance on SCCs and Google's implementation of technical and organisational measures, the safeguards were insufficient to prevent access to the data by US intelligence agencies. Google's own report indicated a significant number of access requests by US intelligence agencies.

Elsewhere in the EU, the Finnish Data Protection Authority ("FDPA") ruled on 23 May that the use of Google Analytics and Google's reCAPTCHA service by the Finnish Meteorological Institute (the "Institute") was in breach of the EU GDPR's provisions on international transfers, given the transfers of personal data to the US. The FDPA ordered the Institute to stop using the Google tools and also to delete the transferred personal data.

Spotify to appeal Swedish data access GDPR fine

Earlier this month, the Swedish Authority for Privacy Protection ("IMY") imposed a 58 million Swedish krona (approximately £4.2 million) fine on Spotify for failing to provide enough information in response to data access requests. In a statement released on 13 June, the IMY confirmed that, although it believes Spotify provides data subjects with the personal data it holds when they request it, it does not inform them clearly enough about how this data is used by the company. In particular, the IMY said the information Spotify provides about how and for what purpose personal data is handled should be specific, and that it should be easy for data subjects to access and understand. Furthermore, personal data that is difficult to understand, such as data of a technical nature, may need to be explained not only in English but also in data subjects' own languages. The IMY said that its review had identified shortcomings in each of these areas.

The IMY recognised that Spotify had taken several measures to meet data subjects' right of access and that the GDPR deficiencies discovered were of low severity. In light of this, together with the number of Spotify's registered users and its turnover, the IMY issued an administrative fine of 58 million Swedish krona, amounting to approximately one per cent of the potential maximum. Spotify has confirmed that it plans to appeal the fine.

You can read the decision of the IMY in Swedish here.

Online advertising company Criteo hit with €40 million fine for deploying cookies without consent

On 22 June, the French Data Protection Authority ("CNIL") announced that it had fined Criteo €40 million after it found that its tracker cookie was deposited by several of its partners on users' devices without their consent.

A specialist in "behavioural retargeting", Criteo displays personalised advertisements by collecting the browsing data of users through the use of the Criteo tracker cookie. This is placed on user terminals when users visit various Criteo partner websites and allows Criteo to analyse browsing habits and subsequently determine which advertiser and which product it would be most relevant to display to a particular user.

Pursuant to Article 26 of the EU GDPR, Criteo had entered into a joint controller agreement ("Controller Agreement") with partners, under which its partners were required to obtain consent from their users to deploy the tracker cookie. However, the CNIL found that Criteo had not implemented any measures to ensure the personal data it processes is only that for which a data subject's valid consent has been obtained. This was because, among the websites investigated by the CNIL, more than half of the sites published by Criteo's partners did not collect valid consent and Criteo had not implemented an audit mechanism in respect of its partners' consent collection. On this basis, the CNIL inferred that Criteo was, at the time of the checks, "processing a large volume of browsing data for which internet users had not given valid consent."

The Controller Agreement included a provision requiring Criteo's partners to provide consent mechanisms compliant with applicable laws. However, given that, in practice, Criteo never terminated a contract for breach of the consent mechanism provision nor ensured compliance with it, the provision was insufficient. It may be inferred from this that online advertising companies are expected not only to monitor compliance but also to terminate contracts with partners who fail to comply with their data protection obligations.

The CNIL also found that Criteo had committed other breaches of the EU GDPR by failing to comply with information and transparency obligations, failing to respect users' right to access their data, and failing to comply with users' rights to withdraw consent and to request erasure of their data. The CNIL reduced the fine by €20 million following a hearing in March, but the final amount, constituting approximately 2% of Criteo's worldwide turnover, still represents a significant penalty. Criteo has since announced an intention to appeal, stating that the fine "remains vastly disproportionate in light of the alleged breaches".

You can read the decision of the CNIL in French here.

CJEU: no automatic right to know which individuals accessed personal data

In 2014, a Finnish citizen – both a customer and an employee of the bank Pankki S – learnt that his personal information had been consulted by members of the bank's staff on several occasions in late 2013. Given his doubts as to the lawfulness of the consultations, in 2018 the citizen asked Pankki S to inform him of the identity of the persons who had consulted his customer data, the exact dates of the consultations and the purposes for which the data had been processed.

Pankki S explained that it had consulted his data in order to check a possible conflict of interests, but refused to disclose the identity of the employees who had carried out the consultations on the basis that this information constituted the personal data of those employees. The citizen sought an order from the Office of the Data Protection Supervisor, Finland, requiring the bank to provide him with the information he requested, but the Finnish watchdog rejected the request. The citizen appealed to the Finnish courts, which referred the case to the Court of Justice of the European Union ("CJEU").

The CJEU held that Article 15(1) EU GDPR does not grant data subjects a right to access the identity of the employees of the controller who carried out processing operations under its authority and in accordance with its instructions, unless that information is essential to enable the data subject effectively to exercise their Article 15 rights, and provided that the rights and freedoms of those employees are taken into account. In the event of a conflict between, on the one hand, the exercise of a right of access and, on the other, the rights and freedoms of others, the CJEU held that "a balance will have to be struck between the rights and freedoms in question".

The CJEU also ruled, on a related point, that data subject access requests do cover data processed before the GDPR came into force.

You can read the judgment of the CJEU in English here.

Round up of enforcement actions


  • German bank (German DPA): a bank failed to provide a data subject with a meaningful explanation of the logic involved in an automated decision affecting them.
  • Spotify (Swedish DPA, 58 million Swedish krona): see our summary above.
  • Sports betting agency (Croatian DPA): a sports betting agency that collected and stored copies of both sides of data subjects' credit cards was fined for various GDPR violations.
  • Digi Spain Telecom (Spanish DPA): a third party managed to obtain a duplicate SIM card from the telecom company without the data subject's authorisation and gained access to the data subject's bank account.
  • TIM (Italian DPA, €7.6 million): TIM failed to adequately oversee "abusive" call centres that are not related to the company's official network.
  • Criteo (French DPA, €40 million): see our summary above.