Data Protection update - August 2023
Welcome to the Stephenson Harwood Data Protection bulletin, covering the key updates in data protection law from August 2023.
Data breaches hit the news this month, with the UK Electoral Commission and the Police Service of Northern Ireland suffering significant leaks of personal data. The ICO also reprimanded two organisations for data breaches resulting from group emails, reminding businesses of their responsibility to train staff and put appropriate measures in place to avoid these common breaches.
The EU-US Data Privacy Framework ("DPF") continued to make news in August, with the first US organisations self-certifying to its UK extension. The US Federal Trade Commission's privacy chief, Samuel Levine, confirmed the FTC's stance on enforcing the DPF, promising an approach as aggressive as that taken under the Privacy Shield.
In this month's issue:
- First US organisations self-certify to 'UK Extension' to the EU-US Data Privacy Framework
- US Federal Trade Commission privacy chief vows "aggressive" enforcement of the DPF
- New Swiss data protection law to take effect from September 2023
- ICO consults on draft biometric data guidance
- India pushes ahead with new Digital Personal Data Protection Bill
- Zoom subject to speculation surrounding use of customer data for AI training
- Meta moves towards commercial AI products
- UK Electoral Commission data breach
- Police Service of Northern Ireland suffers data breaches
- ICO discusses email data breaches following two reprimands
- Advocacy groups' complaint lands French train operator's booking policy before CJEU
- First organisation fined under UK GDPR fails in appeal bid
- Meta takes Norwegian data regulator, Datatilsynet, to court to avoid ban on behavioural advertising
- Round-up of enforcement action
First US organisations self-certify to 'UK Extension' to the EU-US Data Privacy Framework
In July, the European Commission adopted its adequacy decision on the DPF, a new mechanism for sending personal data from the EU to US data importers who self-certify compliance with its principles. While the UK has not yet adopted its own adequacy regulations, we reported in June that the UK and US governments had reached agreement in principle on a 'UK Extension' ("UKE") to the DPF.
In order to participate in the UKE, US companies must self-certify adherence to the DPF. Garmin International, GoDaddy and Avature were among the first companies to self-certify their compliance with the UKE. However, it is important to note that while US companies have been able to certify their compliance with the UKE since mid-July, organisations cannot rely on the UKE as the appropriate safeguard for international transfers of personal data to the US until the UK government issues its own adequacy regulations in respect of both the DPF and UKE.
While it is expected that the UK's adequacy regulations in respect of the DPF and UKE will be made soon, it is unclear exactly when we can expect to see data transfers take place under the UKE. Before any such regulations can be passed, ministers are obliged to formally consult the Information Commissioner's Office ("ICO"), although the ICO's findings will not bind the government.
US Federal Trade Commission privacy chief vows "aggressive" enforcement of the DPF
The director of the FTC's Bureau of Consumer Protection, Samuel Levine, has vowed aggressive enforcement against FTC-regulated companies that fail to comply with their commitments under the DPF. Confirming that the FTC would be "aggressive" in its enforcement of the DPF, Levine emphasised the importance of DPF enforcement for collaboration with EU regulators. He explained that this is because the FTC is facing similar issues to EU regulators "like dark patterns, like privacy abuses, like deceptive green claims" and that there are "lessons … that they are learning from us and lessons we're learning from them".
As a consequence, it will be crucial for self-certifying US importers to stand by the DPF commitments they make.
New Swiss data protection law to take effect from September 2023
From 1 September 2023, Swiss companies will be required to comply with the new Federal Act on Data Protection ("FADP"). Passed in 2020, the FADP grants new rights to Swiss citizens and substantially revises Switzerland's first Federal Data Protection Act, which dates back to 1992.
The FADP makes a number of key changes to Swiss data protection law. For instance, the FADP concerns only the data of natural persons, not legal persons, and the definition of sensitive data has been extended to cover genetic and biometric data, the processing of which now requires specific consent. Although there are some exceptions for SMEs whose data processing is deemed to pose a limited risk of harm to data subjects, keeping a register of processing activities is set to become mandatory for Swiss companies. Furthermore, the FADP introduces the principles of 'Privacy by Design' – which requires developers to integrate the protection and respect of users' privacy into the structure of products or services that collect personal data – and 'Privacy by Default', which requires that the most privacy-protective settings apply by default as soon as a product or service is released. Data breaches must also now be reported to the Federal Data Protection and Information Commissioner.
Similarly to the EU GDPR, the FADP provides that data may only be transferred internationally where an adequate level of data protection is guaranteed. Crucially, the FADP should make it possible to maintain the free flow of data with the EU and therefore avoid a loss of competitiveness for Swiss companies.
ICO consults on draft biometric data guidance
The ICO is producing guidance on biometric data and biometric technologies (the "Guidance") and has recently published the first phase (accessible here) for public consultation. The first phase of the Guidance (draft biometric data guidance) covers basic concepts, including what constitutes biometric data and what biometric recognition is, and provides examples of how biometric recognition can be used. It also explains the data protection requirements that apply when using biometric data, such as when it is necessary to conduct a data protection impact assessment, whether explicit consent is required to process biometric data, and whether biometric recognition systems can be used to make automated decisions about a data subject. Consultation on the first phase of the Guidance will end on 20 October 2023.
The Guidance is aimed at organisations that use or are considering using biometric recognition systems and should assist such organisations with ensuring compliance with data protection laws when doing so. The second phase of the Guidance, which will consider biometric classification and data protection, will include a call for evidence early next year.
India pushes ahead with new Digital Personal Data Protection Bill
India's new draft data protection legislation – the Digital Personal Data Protection Bill, 2023 – was passed by the country's lower house of parliament on 8 August. See our blog post for a summary of the key provisions.
Zoom subject to speculation surrounding use of customer data for AI training
In August, Zoom was the target of public backlash following speculation that it was using customer data to train AI models. This criticism stemmed from an update to Zoom's terms of service that appeared to enable Zoom to access more personal data than required for the purpose of improving its new AI features. The terms were interpreted by some as giving Zoom the right to use customer video calls and presentations for AI training purposes.
In response, Zoom published a blog post in which it denied using audio, video or chat content to train its AI models without consent. The post also shows that Zoom took notice of the criticism: it emphasised Zoom's aim to be more transparent and announced that the terms of service would be revised again to state explicitly that such data would not be used for AI training. This reversal shows that tech companies embarking on AI development must consider not only potential regulatory consequences, but also the consequences of public criticism.
Meta moves towards commercial AI products
Meta ran a team of scientists that used AI to predict protein structures and compile them into a database. The project, ESMFold, aimed to assist the development of new drugs and treatments. In August, reports emerged that Meta had disbanded this team, reflecting a gradual move away from scientific projects and towards AI products that will generate revenue.
The Financial Times reported in August that Meta is aiming to release multiple chatbots in September. According to sources with knowledge of the plans, the chatbots will exhibit different personalities in an attempt to boost engagement with Meta's social media platforms. The chatbots could enable Meta to collect significantly more personal data on users' interests, facilitating its targeted advertising. This is the latest development in the race between tech companies to develop generative AI products. Through these chatbots, Meta will catch up with its key competitors – OpenAI, Microsoft and Google – all of which already offer consumer-facing generative AI chatbots.
UK Electoral Commission data breach
The Electoral Commission announced in August that hostile actors had gained access to the data of 40 million voters. Its systems were first accessed in August 2021, yet the breach went undetected until October 2022. The identity of the hostile actors is not yet known.
The targets of the attack were electoral registers and the Commission's email system. The registers revealed: (i) the name and address of anyone in Great Britain who was registered to vote between 2014 and 2022; (ii) the names of those registered as overseas voters during the same period; and (iii) the names and addresses of anyone registered in Northern Ireland in 2018.
The news of this data breach stirred up concerns as to the integrity of UK elections. However, the Electoral Commission stated that the breach did not have an impact on any elections or on any voter's registration status, and confirmed that it has taken significant steps with specialist support to improve the security and resilience of its IT systems.
Police Service of Northern Ireland suffers data breaches
The names of 10,000 officers and civilian staff members were included in a document mistakenly shared online by the Police Service of Northern Ireland ("PSNI") in response to a freedom of information request in August this year. The document included the surname and first initial of each employee, as well as their rank, grade, location and unit.
Although the document appeared online only briefly, the PSNI's chief constable stated that he believed republican paramilitaries had obtained the information and may use it to generate fear and uncertainty, and possibly to target officers and staff. In a context where many keep their employment with the PSNI secret, this breach represents a significant threat to officers and will require individuals to be vigilant about their personal security both on and off duty.
The chair of the Police Federation confirmed the PSNI are working around the clock to assess the risk and take measures to mitigate it. Despite this, the PSNI will be preparing for potential data protection fines, as well as possible claims for compensation from affected employees.
This news follows an earlier data breach on 6 July, where the personal data of 200 PSNI employees was leaked following the theft of a police-issue laptop.
ICO discusses email data breaches following two reprimands
The ICO reprimanded two Northern Irish organisations for inappropriate disclosures of personal data over email. Both breaches resulted from a similar mistake: using inappropriate group email options.
The Patient and Client Council sent a group email to 15 recipients in relation to their experiences of gender dysphoria. The body of the email did not contain personal data, but recipients could infer from the other recipients' inclusion that they too had experience of gender dysphoria. Likewise, a newsletter sent to 251 recipients by the Executive Office's Interim Advocate Office enabled recipients to infer from the email's content that the other recipients were also likely to be victims and survivors of abuse.
John Edwards, UK Information Commissioner, commented that this type of data breach is 'all too common' despite being easily avoidable.
Advocacy groups' complaint lands French train operator's booking policy before CJEU
Following a complaint by LGBTQI+ advocacy groups (the "Groups"), French train operator SNCF's policy of requiring passengers to choose between 'sir' and 'madam' when booking a train ticket is under review at the Court of Justice of the European Union ("CJEU"). The basis of the complaint is that SNCF's policy violates the EU GDPR's data minimisation principle, because a passenger's gender is not data required to book train travel. The Groups further complained that SNCF's collection of passenger gender data is unlawful because it is not based on any of the six lawful grounds for processing set out in the GDPR.
At the time of making the complaint to France's data protection authority ("CNIL") in January 2021, the Groups also argued that the 'binary alternative' locks travellers into gender stereotypes and provides no option for non-binary people or those who do not wish to restrict their identity. The CNIL rejected the complaint in March 2021, concluding that there was no breach of the GDPR. The Groups appealed to the State Council, seeking an annulment of the CNIL's decision. On 21 June, the State Council stayed its decision and referred questions to the CJEU on the interpretation of the GDPR, including whether the collection of 'civil status information' can be considered necessary for SNCF's 'legitimate interests' in processing data to book travel.
First organisation fined under UK GDPR fails in appeal bid
Doorstep Dispensaree ("Doorstep"), the first ever recipient of a fine under the UK GDPR, has failed in its appeal against the First-tier Tribunal's ("FTT") decision, which reduced the fine but did not overturn it.
Back in December 2019, the ICO had fined the London-based pharmacy £275,000 for failing to ensure the security of special category data, after it left a large number of documents showing names, addresses and medical information in unlocked containers at the back of its premises. In August 2021, the FTT reduced the fine to £92,000 because the original penalty had factored in a much larger number of documents and affected individuals than were actually involved in the breach.
Upper Tribunal judge Edward Mitchell refused Doorstep's appeal on the basis that the decision of the FTT did not involve an error on a point of law. Doorstep had appealed the FTT's judgment on seven grounds, three of which focussed on the approach of the FTT. In particular, Doorstep argued that the burden of proof 'starts and remains' with the ICO, and also that the ICO should have proved the alleged infringement to the criminal standard of beyond reasonable doubt.
However, the judge ruled against this line of argument. He noted that the main issue of potentially wider interest on the appeal was the standard of proof in proceedings before the FTT on an appeal against a monetary penalty notice, and held that 'disputed matters of fact are to be resolved according to the civil standard of proof, rather than the criminal standard'.
You can read the decision of the Upper Tribunal here.
Meta takes Norwegian data regulator to court to avoid ban on behavioural advertising
In July, the Norwegian data protection authority, Datatilsynet ("DPA"), imposed a temporary ban on Meta conducting behavioural advertising in Norway on the basis of Meta's 'surveillance and profiling of users in Norway'. The ban follows a ruling of the CJEU that Meta cannot rely on 'legitimate interests', rather than consent, as a lawful ground for behavioural advertising under the EU GDPR.
The ban took effect from 4 August and will last for three months or until Meta can demonstrate compliance. The Norwegian DPA also said in July that it would fine Meta up to one million Norwegian kroner (approx. £75,000) per day from 14 August onwards if it failed to comply.
The ban was imposed on Meta through the urgency procedure set out in Article 66 EU GDPR, which allows supervisory authorities to take immediate action and bypass a company's lead authority in the EU. On 4 August, Meta sought a stay in the Oslo District Court, arguing that it should only deal with the Irish Data Protection Commissioner as its lead regulator in the EU. However, although Meta is seeking a temporary injunction against the Norwegian DPA's decision while the matter is resolved in court, the daily fine will continue to apply unless Meta obtains the injunction or complies with the DPA's order.
The DPA has confirmed that both Facebook and Instagram can continue to operate in Norway, although Meta must base personalised advertising only on information that users have provided themselves, such as their age, gender or stated interests.
Round-up of enforcement action
- Autostrade per l'Italia: for incorrectly identifying an app developer as the controller, rather than itself, and for failing to put in place a contract to regulate its relationship with the processor.
- For failing to ensure secure processing, not carrying out a DPIA before profiling, and not specifying the data retention period.
- For storing data for an excessive period, using it unlawfully for marketing and profiling, and failing to implement sufficient security measures.
- Meta Platforms Ireland: fined 6.5 billion won (approx. £3.9 million) by the Korean Personal Information Protection Commission for unlawfully using third-party data for targeted advertising without consent.
- Fined 886 million won (approx. £530,000) by the Korean Personal Information Protection Commission for unlawfully using third-party data for targeted advertising without consent.