Data Protection update - October 2023

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from October 2023.

The ICO hit the headlines this month, publishing new guidance on employee monitoring to ensure that employers comply with their obligations under data protection legislation, and draft data protection fining guidance, which details the ICO's ability to impose fines, how fines are calculated and the situations in which penalty notices may be issued. The ICO has also been granted permission to intervene in a High Court case involving alleged breaches of UK data protection laws by Meta.

In other news, Clearview AI has succeeded in appealing against the ICO's £7.5 million fine for its facial recognition software. Although the Tribunal overturned the fine, it did so on a very narrow ground relating to the non-applicability of the GDPR to foreign law enforcement activities. Had this not applied, the Tribunal concluded that Clearview would have been responsible for the monitoring of UK individuals carried out by Clearview's customers, and that Clearview would have been caught by the GDPR and UK GDPR, even though Clearview was established overseas.

Elsewhere this month, 23andMe, a prominent genomics and biotechnology company, announced that it had experienced a massive data breach that exposed the sensitive genetic data of millions of customers.

In this month's issue:

Data protection

Cyber security

Enforcement and civil litigation

Data protection

ICO publishes new guidance on employee monitoring

On 3 October 2023, the ICO published new guidance on employee monitoring. This aims to provide practical advice to ensure that employers comply with their obligations under the UK GDPR and Data Protection Act 2018. Please see our recent blog post for a full summary here.

ICO to intervene in Meta UK targeted advertising High Court claim

In November 2022, UK-based digital rights campaigner Tanya O'Carroll sued Meta in the High Court of England and Wales, asserting that Meta had breached UK data protection laws. She alleged that Meta failed to stop collecting and processing her data, despite her demands to the contrary.

O'Carroll claimed that, by refusing to action her objection, Meta had breached Article 21(2) of the UK GDPR, which gives individuals the right to object to the processing of their personal data for direct marketing purposes.

O'Carroll stated: "we shouldn't have to give up every detail of our personal lives just to connect with friends and family online. The law gives us the right to take back control over our personal data and stop Facebook surveilling and tracking us."

Typically, data controllers can continue to process personal data despite an objection if their legitimate interests override those of the data subject. Where data is processed for direct marketing purposes, however, the right to object is absolute: under Article 21(3) UK GDPR, the controller must stop processing the personal data for those purposes.

Meta has denied O'Carroll's claims. Meta's defence, filed in June, stated that the alleged processing did not constitute processing for direct marketing purposes but rather for personalised advertising purposes. It continued that the EU and UK GDPR contain no wording to suggest that online advertising constitutes direct marketing, and that the definition therefore does not stretch that far; personalised adverts are simply part of the Facebook service that users have chosen to use.

After requesting to intervene in August, the ICO has recently been granted permission to intervene in the High Court case, both in writing and through oral submissions. A spokesperson for the ICO stated that "the commissioner has chosen to intervene in these proceedings to assist the court with the interpretation of part of the UK GDPR that was not previously considered."

Eased cross-border data transfer provisions for China: what do they mean for UK and EU businesses?

On 28 September 2023, the Cyberspace Administration of China published the draft Provisions on Regulating and Facilitating Cross-Border Data Flow (the "New Draft Provisions"). A full analysis of the New Draft Provisions can be found here.

The impact on UK and EU businesses should not be overlooked. Relaxing the cross-border transfer requirements will enable regulated, freer flows of data from China. It is hoped that this clarification of the rules will encourage multinationals to maintain and develop their relationships with China by giving them more autonomy in exporting data.

The approach taken in the New Draft Provisions appears to be business friendly. For example, data handlers exporting personal data for the purpose of performing a contract to which the data subject is a party (such as cross-border e-commerce, cross-border payments, plane ticket and hotel bookings, and visa applications – Article 4(1)) are exempt.

The exemptions proposed are likely to reduce compliance costs and facilitate data exports. It is hoped that clarity over data export restrictions will attract foreign investment in China, not only benefiting China, but also opening doors for UK businesses in sectors such as hospitality and the airline industry, as well as financial service institutions that provide services to Chinese customers. We note, however, that while this may ease data exports from China, the rules relating to international transfers of personal data to China from the UK and EU would still apply.

Google's privacy policy should settle any generative AI claims

Google has asked a US federal court to dismiss a class action claiming that Google's generative AI software infringed privacy rights.

The claim against Google alleged that Google users' personal data (including Gmail, Google Docs and Google Search) was being used to build Google's generative AI products, including its chatbot Bard, without consent.

Google, however, has rejected the allegation. The motion it filed earlier this month argued that Google users freely agreed to share their data, and that the claims rest on the false idea that "training Generative AI models on information publicly shared on the internet is 'stealing.'" Google asserts that it is entitled to use such data to train its software because the data was publicly available, and on that basis contests the privacy claim.

Google clarified in its motion to dismiss that “Google’s clear disclosures in its privacy policy that it ‘may collect information that’s publicly available online … to help train Google’s language models’ … undermine plaintiffs’ claim of a reasonable expectation of privacy in information that they posted publicly."

This is the latest of many legal challenges to the use of generative AI. Google suggested that lawsuits such as this one stifle not only Google's services and growing technology, but the entire evolving generative AI industry. The case is yet to be decided, but as generative AI tools rely on vast quantities of data for training, the outcome could have major implications for the AI technology space and its future development.

UK holds AI Safety Summit

The UK Government held the first global AI Safety Summit (the "Summit") on 1 and 2 November 2023. The Summit considered the key risks of AI systems, goals for mitigation of these risks and improvements in AI safety, bringing together global leaders, executives at leading AI companies, academics, and civil society figures.

A series of significant announcements and commitments were made at the Summit, the most significant being the Bletchley Declaration, the world's first international agreement on the safety of AI. It was signed by 28 countries, including those at the forefront of developing AI technologies such as the UK, US, China and Japan, as well as the EU. The Bletchley Declaration identifies the risks anticipated from AI systems, including misuse, cybersecurity, disinformation, biotechnology, privacy concerns and the potential for bias. It highlights the potentially 'catastrophic' risk of AI and how the classification and categorisation of risk is fundamental to mitigation. It also addresses actors developing 'unusually powerful and potentially dangerous' AI systems, stating that they have a responsibility to ensure the safety of such developments, including through secure testing measures.

Other outcomes included the United Nations confirming its support for the establishment of an expert AI panel, reminiscent of the Intergovernmental Panel on Climate Change, and major technology companies agreeing to collaborate with governments in testing their advanced AI models both before and after they are released to the public.

The next Summit is set to be held in the Republic of Korea in May 2024.

Cyber security

Cyber threat to UK's Critical National Infrastructure prompts committee inquiry

The Science, Innovation and Technology Committee ("SITC"), a committee appointed by the House of Commons to examine the expenditure, administration and policy of the Department for Science, Innovation and Technology, has launched an inquiry into the cyber resilience of the UK's critical national infrastructure ("CNI"). This marks a significant step in addressing vulnerabilities in sectors vital for public services, national security, and the proper functioning of the state. These sectors include food, energy, transport and health, which form the backbone of the nation's stability and safety. Disruption within them could have far-reaching consequences, impacting not just the daily lives of citizens but also the economy and national security.

A startling revelation from the inquiry is that the UK is currently the third most targeted country globally for cyber-attacks, after the United States and Ukraine. Given the UK's heavy reliance on digital infrastructure, the cyber threat to its CNI is a matter of significant concern that needs to be addressed.

The SITC has issued a call for evidence, closing on 10 November 2023, inviting expert input and recommendations. Submissions on topics such as the types and sources of cyber threats to UK CNI and the strengths and weaknesses of the UK cyber strategies in relation to UK CNI are welcomed. This open approach demonstrates a commitment to a comprehensive perspective on this issue.

23andMe data breach exposes sensitive genetic data

On 6 October 2023, 23andMe, a prominent genomics and biotechnology company, announced that it had experienced a massive data breach that exposed the sensitive genetic data of millions of customers. Hackers did not infiltrate the company's servers, but instead targeted individual user accounts. The hack was allegedly carried out by using the credential stuffing technique, where hackers try combinations of usernames or emails and corresponding passwords that are already in the public domain from other data breaches. The full extent of the personal data accessed and stolen is still unclear.
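The mechanics of credential stuffing can be illustrated with a short, purely hypothetical Python sketch (all accounts and passwords below are invented for illustration): the attacker simply replays username and password pairs leaked from an earlier, unrelated breach against the target service's login endpoint, succeeding wherever a user reused the same password.

```python
# Hypothetical illustration of credential stuffing. No real service,
# accounts or passwords are involved.

# Credential pairs exposed in an earlier, unrelated breach.
LEAKED_CREDENTIALS = [
    ("alice@example.com", "hunter2"),
    ("bob@example.com", "correct-horse"),
    ("carol@example.com", "p@ssw0rd"),
]

# Mock account store for the *target* service. Alice and Carol reused
# their leaked passwords; Bob chose a unique one, so the replay fails for him.
TARGET_ACCOUNTS = {
    "alice@example.com": "hunter2",
    "bob@example.com": "totally-different",
    "carol@example.com": "p@ssw0rd",
}

def attempt_login(email: str, password: str) -> bool:
    """Stand-in for the target service's login endpoint."""
    return TARGET_ACCOUNTS.get(email) == password

def credential_stuffing(leaked: list[tuple[str, str]]) -> list[str]:
    """Replay every leaked pair and return the accounts that were compromised."""
    return [email for email, pw in leaked if attempt_login(email, pw)]

# Only the accounts whose owners reused a leaked password are compromised.
print(credential_stuffing(LEAKED_CREDENTIALS))
```

The sketch also shows why multi-factor authentication is an effective mitigation: a replayed password alone would no longer be sufficient to log in.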

The breach subsequently took a distressing turn as hackers offered the stolen personal data for sale and reportedly created lists of individuals based on different heritages, race, and ancestry. It is feared these individuals could face an elevated risk of discrimination and harassment. This incident raises critical concerns about the increased risk of data breaches and unauthorised access to sensitive personal data as more medical and genetic information is digitalised and stored electronically.

The 23andMe breach has also raised interesting questions about the risks of providing consent on behalf of others. Because of the way 23andMe works, when a genetic sample is received and analysed, the company indirectly holds genetic information about the customer's relatives, even if those relatives never sent a sample or consented to any data collection. In addition, 23andMe has attributed the scale of the breach to an opt-in feature called DNA Relatives, which allows users to see the data of other opted-in users whose genetic data matches theirs. By breaking into a single user's account, the hackers were therefore able to collect data on many users. The breach highlights serious issues for companies handling vast quantities of personal data and raises questions about how to maintain digital privacy and security for individuals.

In response, 23andMe has launched an investigation into the data leak and increased its security measures, prompting users to change their passwords and encouraging them to switch to multi-factor authentication.

Breach response: How do we reconcile international incident and breach reporting requirements?

A significant disparity between incident and breach reporting requirements across different countries is making it difficult for multinational organisations to agree a global approach that adheres to these differing rules. With cybersecurity threats becoming alarmingly more frequent, privacy and security executives have called for incident and breach reporting requirements to be reconciled. See our blog post for greater insight into this issue.

Enforcement and civil litigation

Clearview wins appeal against ICO

Clearview AI has succeeded in appealing against the ICO's £7.5 million fine for its facial recognition software. Although the Tribunal overturned the fine, it did so on a very narrow ground relating to the non-applicability of the GDPR to foreign law enforcement activities. Had this not applied, the Tribunal concluded that Clearview would have been responsible for the monitoring of UK individuals carried out by Clearview's customers, and that Clearview would have been caught by the GDPR and UK GDPR, even though Clearview was established overseas. The decision has significant implications for service providers facilitating monitoring activities by their customers. See our blog post for further insights here.

ICO's Draft Data Protection Fining Guidance

The UK's data protection authority has released its Draft Data Protection Fining Guidance ("Draft Guidance") which provides details on the ICO's ability to impose fines, how fines are calculated and the situations in which penalty notices may be issued. This Draft Guidance is currently open for consultation. See our blog post for a summary of the key highlights of the Draft Guidance.

ICO issues preliminary enforcement notice over Snapchat's AI chatbot

The ICO is investigating the introduction of 'My AI' to all Snapchat users in April this year, having noted that Snapchat potentially failed to assess the privacy risks associated with the 'My AI' bot. The ICO has issued a preliminary enforcement notice and is awaiting a response from Snapchat. See our blog post for a summary of the key takeaways.

ICO raises threat level on cookie enforcement

The ICO has stressed its intention to intensify scrutiny of companies with non-compliant cookie banners. Currently, non-compliance could result in a fine of up to £500,000 under the UK rules implementing the EU ePrivacy Directive, but the Data Protection and Digital Information Bill would allow for UK GDPR-level fines. See our blog post for information on how cookie complaints are affecting organisations, a summary of recent fines issued by various data protection authorities and the effect of the Cookie Banner Taskforce.

Equifax settles UK data breach litigation but suffers £11 million FCA fine

In 2017, Equifax Ltd ("Equifax") suffered one of the largest data breaches ever seen - 13.8 million UK customers alone were affected. In 2018, the ICO fined Equifax £500,000 (the maximum fine pre-UK GDPR) in relation to the incident for failing to take "appropriate technical and organisational measures against unauthorised and unlawful processing of that data".

In a recent development, six years after the initial 2017 cyberattack, the Financial Conduct Authority ("FCA") has fined Equifax £11,164,400 for related failings, namely failing to adequately protect its UK customers' data, which had been outsourced to its US parent company, Equifax Inc. This highlights the breadth of potential enforcement action surrounding personal data and demonstrates that oversight of data handling is not limited to data protection regulators but may also be exercised by regulators in other sectors, such as financial services.

A wide variety of personal data was exposed, including names, home addresses, phone numbers and partially exposed credit card details. The FCA confirmed that the cyberattack was "entirely preventable": by not treating the relationship with its US parent company as outsourcing, Equifax failed to manage and protect the data appropriately. It is claimed that Equifax knew its data security systems were vulnerable to attack, yet it still failed to take the precautions necessary to safeguard UK customer data.

Equifax only became aware of the breach six weeks after the initial hack, and just five minutes before its US parent company announced the incident. As a result, Equifax was ill-prepared to deal with the complaints it received, and UK customers experienced delays in being contacted by the company. Although Equifax issued multiple public statements following the incident regarding how its UK consumers would be affected, these statements misstated the number of people affected. Quality assurance checks for complaints were also not maintained, which resulted in complaints being mishandled.

The multi-claimant action has now been settled.

Each month, we bring you a round-up of notable data protection enforcement action.

Dutch DPA

Uber did not adhere to a court ruling that required it to reveal details about its automated decision-making process, after it failed to inform three drivers about how their data was processed.

Italian DPA

The Swiss gas and electricity supplier was fined after it unlawfully processed the outdated data of more than 5,000 individuals.

French DPA

After receiving 31 complaints about the way in which potential new customers were targeted, Canal+ was fined for preventing individuals from exercising their rights.