Data Protection update - March 2023

Welcome to the Stephenson Harwood Data Protection bulletin, covering the key developments in data protection law from March 2023.

If one theme defined the month of March, it was artificial intelligence and, more crucially, how to properly regulate this increasingly dynamic sector. The UK Government offered its own solution in the form of an AI white paper, which sets out its recommendations for the industry and emphasises that proportionality will underpin its method. The UK's approach is purposefully hands-off and the government has not proposed legislation because it believes industry investment will go to the jurisdiction with the most "pro-innovation" regime. Whether the government is right remains to be seen. What is clear, though, is that the UK's approach significantly departs from that of the EU, which is pushing through a more detailed legal framework on AI. May the best authority win.

Also on the AI front, the Information Commissioner's Office ("ICO") updated its AI and data protection guidance, following requests from the UK industry to clarify the standards for fairness in AI.

Separately, the UK Government introduced its revised Data Protection Bill. While the bill was not radically different from the bill first proposed last year, the new version did make some changes to the definition of personal data and reduced compliance burdens by limiting certain record-keeping requirements. Let us hope this one sticks.

In this month’s issue:

Data protection

Enforcement

Cyber security

Civil litigation

Data protection

UK Government introduces a revised Data Protection Bill

On 8 March 2023, the UK Department for Science, Innovation and Technology ("DSIT") published the Data Protection and Digital Information (No.2) Bill ("New Bill"), which replaces the Data Protection and Digital Information Bill published in July 2022 ("Previous Bill"). The New Bill updates the Previous Bill and reimagines the government's plans for reforming the current UK data protection framework, which comprises the UK General Data Protection Regulation ("UK GDPR"), the Data Protection Act 2018 and the Privacy and Electronic Communications (EC Directive) Regulations 2003 ("PECR").

You can read a full overview of the New Bill and its contents in our insight.

The European Parliament adopts the Data Act

The European Parliament adopted the draft legislation for the Data Act ("DA") on 14 March 2023 by an overwhelming 500 votes to 23, with 110 abstentions.

The DA focuses on data access, data portability and data sharing. It will regulate the circumstances where users, third parties and public sector bodies can access personal and non-personal data. The DA will enhance a user's right to share their data with third parties and facilitate the wider use of non-personal data for commercial purposes.

The key provisions and outcomes of the DA include:

  • Data sharing and access – The DA will introduce common rules governing the sharing of data generated by connected products or related services to ensure fairness in data sharing contracts. It will also require manufacturers and service providers of connected products to make the data generated by such products accessible to end users on request.
  • Contribution to technology – The DA will support the development of new services and technologies that rely on algorithms requiring large volumes of data for training.
  • Fair pricing – The DA introduces provisions designed to secure better prices for after-sales services and repairs of connected devices, enabled by better-trained algorithms.
  • Trade secrets – The DA introduces provisions aimed at protecting trade secrets so as to avoid a situation where increased access to data is used by competitors to retro-engineer services or devices.
  • Cloud service providers – The DA facilitates switching between providers of cloud services and other data processing services.
  • Public body data sharing – Under the DA, data holders will have to make data available to public bodies in the EU without undue delay where there is an exceptional need for the public body to use the requested data.
  • Restrictions on data access and sharing – Non-personal data held in the EU by cloud service providers must not be accessed by non-EU governmental entities or transferred internationally.

The DA follows the passing of the EU Data Governance Act ("DGA") in June 2022 which shared the aim of making more data available for use and facilitating data sharing across sectors and EU countries. Both the DA and the DGA seek to promote data accessibility and reuse within the EU. The DGA sets out a framework for data to move freely within the EU, whilst the DA sets out who can use certain types of data and under what circumstances.

See our previous articles on the DGA here: Data Protection update - December 2021/January 2022 and Data Protection update - May 2022.

The next step is for MEPs to negotiate with the European Council on the final form of the law. The Council can choose whether to accept the European Parliament's position (and thus adopt the legislation) or to return the proposal to Parliament for a second reading.

For more information, you can find the European Parliament's press release here.

UK Government releases a new AI White Paper

The UK Government has published a new white paper outlining its plans to regulate artificial intelligence (the "AI White Paper"). The AI White Paper builds on the National AI Strategy released in September 2021.

According to DSIT, the AI White Paper demonstrates "a pro-innovation approach to AI regulation" aimed at enabling responsible and trustworthy innovation. It seeks to focus on the benefits and potential of AI, avoiding unnecessary burdens to business, and allowing for technological growth.

The AI White Paper states that the government does not intend to establish a single AI regulator; instead, existing, industry-specific regulators will be required to coordinate with the government to produce a context-specific approach. The government is set to provide central functions to support these regulators, such as monitoring, assessment and feedback, and education and awareness.

The AI White Paper also establishes five cross-sectoral principles: safety, security and robustness; appropriate transparency and explainability; fairness; accountability and governance; and contestability and redress. These principles are to be introduced on a non-statutory basis to begin with, and upheld by the regulators acting in coordination with the government. Statutory regulations will only be implemented after careful consideration and if the government considers them necessary.

This sector-specific approach to AI regulation differs from the approach taken by the EU under the Artificial Intelligence Act ("AI Act"), which seeks to implement a single, legislative framework for all businesses using AI. In contrast, the AI White Paper's regulator-led approach is designed to provide flexibility in developing and enforcing AI rules.

While the AI White Paper clarifies the government's intended approach to regulating AI, it gives no further indication of what the regulatory codes of practice will look like. It even goes so far as to say that "it would be premature to take specific regulatory action" as it might "risk stifling innovation, preventing AI adoption and distorting the UK's thriving AI ecosystem." This has led to some criticism of the UK's ability to regulate AI at the pace required to keep up with the development of the technology itself.

In terms of next steps, the AI White Paper is open for consultation until 21 June 2023, following which the government will publish its response to the consultation alongside an AI regulation roadmap. It is envisaged at this point that regulators will be encouraged to publish guidance on the use of AI within 12 months.

The ICO updates AI and data protection guidance

Following requests from the UK industry for clarification around Artificial Intelligence ("AI") systems, the ICO has updated its guidance on AI and data protection (the "AI Guidance"). The AI Guidance provides an overview of how data protection law applies to AI systems that process personal data. It addresses the governance implications of AI and how AI interacts with the principles of lawfulness, accuracy, transparency, and fairness.

The updates include revisions to existing content as well as the introduction of new chapters. The key updates are summarised below:

  • The guidance on accountability and governance implications of AI has been revised to provide further, practical advice on what to include in a Data Protection Impact Assessment. Organisations are advised to provide evidence that they have considered "less risky" options and explain why those alternatives were not chosen.
  • A new chapter titled "How do we ensure transparency in AI?" supplements the existing practical guidance by explaining how the UK GDPR transparency principle applies in relation to AI. The AI Guidance notes that companies should make clear the purpose for processing using AI, the retention period for data processed using AI and the output of the AI, and who the data or output will be shared with.
  • New content has been added in relation to lawfulness in AI. This includes new guidance on when an inference drawn from AI processing is personal data. According to the AI Guidance, where inferred data fits within the definition of special category data, you must treat it as such if you can (or intend to) infer relevant information about an individual or intend to treat someone differently based on the inference.
  • A new chapter and annex have been inserted to provide further guidance on fairness and AI. These sections outline different types of bias, how building AI impacts fairness, and provide practical advice on how to mitigate bias.

The ICO commented that it "supports the government's mission to ensure that the UK's regulatory regime keeps pace with and responds to new challenges and opportunities presented by AI".

Whilst these most recent changes will be a welcome clarification for business, the fast-paced development of AI will require further, frequent updates to this guidance.

The EDPB launches a coordinated enforcement action focussing on Data Protection Officers

The European Data Protection Board ("EDPB") is conducting a new coordinated enforcement action ("CEA"), focusing on Data Protection Officers ("DPOs"), with 26 Data Protection Authorities ("DPAs") spanning the EEA taking part. In its press release, the EDPB explained that the CEA aims "to gauge whether DPOs have the position in their organisations required by Art. 37-39 GDPR and the resources needed to carry out their tasks".

The DPAs will implement this CEA at a national level by commencing or following-up on formal investigations. In the first instance, DPOs will be sent questionnaires designed to fact-find and to identify where a formal investigation may be warranted. The questionnaire has not yet been released.

Once each DPA has circulated the questionnaire and obtained the relevant responses, the results from each DPA will be collated and analysed to facilitate targeted follow-up at EU level. It is understood the EDPB will publish a report following this analysis.

We note that commentary from the DPAs so far suggests that the method of carrying out the fact-finding might differ, although it appears the questionnaire will not differ in content.

The President of the European Federation of DPOs commented "we welcome that the supervisory authorities in Europe dedicate their attention to the designation and position of data protection officers".

This marks the EDPB's second annual coordinated enforcement action. As reported in February 2022, the EDPB's first coordinated action focused on reviewing the use of cloud-based services by the public sector.

You can read the EDPB's press release here.

ICO has been asked to compel YouTube to delete algorithms

Duncan McCann, a campaigner employed by the child advocacy group 5Rights, has filed a complaint with the ICO under the Age Appropriate Design Code (the "Code"). The Code sets out the ICO's child protection standards for online services.

McCann argues that YouTube gathers data about children under the age of 13, despite its terms of service stating that users should be "at least 13 years old". He contends that children aged 3 to 13 regularly use the platform without any form of parental consent and that YouTube is therefore processing information about the videos children watch, where they watch them and the devices they use.

McCann claims that YouTube is in breach of the UK GDPR, the Data Protection Act 2018 ("DPA") and the Code, as the lack of specific parental consent makes the processing unlawful. The complaint also calls for the ICO to consider issuing an order for YouTube to delete algorithms trained on children's data. McCann commented: "imagine YouTube as an adult stranger following your child 'online' with a virtual clipboard recording everything they do. That is what is happening every day".

The ICO commented that, in theory, its broad powers to compel organisations to comply with data protection law could extend to algorithmic deletion but that this will depend on the facts of the particular case. 

No orders to delete an algorithm have been issued by the ICO to date. An alternative response could be to ban the processing of personal data through the algorithm. However, this may effectively put the algorithm out of use anyway, as removing all elements of personal data from the algorithm would be highly complex.

We await a formal response by the ICO to the complaint.

Enforcement

ICO fines TikTok £12.7 million

The ICO has issued a £12,700,000 fine to TikTok Information Technologies UK Limited ("TikTok"). The fine relates to a number of breaches of data protection law, including failing to use children's personal data lawfully.

For more information on the fine and for an overview of TikTok's latest data protection issues, please keep your eyes peeled for our upcoming blog post on the data protection hub.

ICO lowers Easylife's privacy fine

The ICO and Easylife Limited ("Easylife") have agreed a reduction to Easylife's fine, issued in relation to its breach of the UK GDPR last October.

As we reported, the ICO issued two fines to Easylife. The first, in the amount of £130,000, was issued in response to evidence that Easylife had been making predatory nuisance calls in breach of Regulation 21 of PECR. Easylife did not appeal this decision and the fine was paid in full.

The second fine, however, related to a breach of Article 5(1) of the UK GDPR. The ICO found that some customer purchases from Easylife's catalogue between August 2019 and August 2020 had triggered third-party marketing calls to the customer. Critically, these marketing calls related to health conditions linked to the purchased items. The ICO issued a fine of £1,350,000 for this breach.

Easylife appealed this fine to the First-Tier Tribunal ("Tribunal"), arguing that it had not breached the UK GDPR and that its activity fell within the scope of accepted business practice. Easylife separately argued that, in any event, the fine issued was excessive. Easylife has now accepted the ICO's finding that there was a breach of the UK GDPR, and the ICO has agreed to reduce the UK GDPR fine to £250,000.

John Edwards, the UK Information Commissioner, stated that "Easylife has confirmed that it has stopped the unlawful processing which formed the basis of the ICO’s concerns. Having considered the amount of the penalty again during the course of the litigation, in light of the issues raised by Easylife, I considered that a reduction was appropriate". The reduced fine has been approved by the Tribunal.

You can read the ICO's press release here.

ICO will appeal Experian judgment

As reported last month, the Tribunal upheld Experian's appeal of the ICO’s Enforcement Notice issued in October 2020 and issued a Substituted Decision Notice.

The ICO originally held in October 2020 that Experian breached the UK GDPR by processing personal data for direct marketing purposes without consent. However, the Tribunal struck out the ICO's Enforcement Notice on the grounds that the ICO was incorrect in arguing that Experian's privacy notice was not transparent. The Tribunal also disagreed with the ICO's findings that using credit reference data for direct marketing purposes was unfair and that Experian had not properly assessed its lawful basis.

The ICO has now confirmed that it will appeal this decision. The UK Information Commissioner, John Edwards, stated that "having carefully considered that judgment, I believe that the tribunal has got the law wrong". As a result, the ICO-Experian saga is set to continue and we will report on any updates moving forward.

Leading restaurant reservation platform fined by Singapore regulator

Singapore's Personal Data Protection Commission ("Singapore Commission") fined Eatigo International Pte. Ltd. ("Eatigo") SG$62,400, equivalent to approximately €43,300. Eatigo is an online restaurant reservation platform which offers discounts to its users and operates across Asia.

A selection of personal data held by Eatigo on an old database was leaked and offered for sale on an online forum. Eatigo subsequently implemented remedial actions, including backing up and deleting the database and notifying affected individuals.

In failing to protect its old database, which held data relating to approximately 2.76 million users, Eatigo breached its obligation under section 24 of the Personal Data Protection Act 2012. This obligation requires organisations to protect personal data in their possession or under their control by making reasonable security arrangements to prevent unauthorised access, collection, use, disclosure, copying, modification, disposal or similar risks.

This case is a reminder that organisations should consider their data retention policies and methods of disposing of data to ensure data security and compliance.

The Singapore Commission stated that "for an organisation to effectively safeguard the personal data in its possession or control, it must first know what its personal data assets are".

The Singapore Commission's full decision can be accessed here.

Cyber security

Data breach incidents result in €750,000 GDPR fine for Bank of Ireland

The Irish Data Protection Commission ("IDPC") has fined the Bank of Ireland ("BOI") €750,000 for breaching Article 5(1)(f) of the GDPR, which requires that personal data be processed in a manner that ensures its security.

The inquiry follows a notification by the BOI to the IDPC regarding a series of 10 data breaches between 30 January 2020 and 6 May 2020 in connection with the BOI365 banking app. The data breaches concerned individuals gaining unauthorised access to the accounts of other individuals through the use of the app.

The key issue for the IDPC to determine was whether the BOI had appropriate technical and organisational measures in place, having regard to the level of risk posed to the individuals on whose behalf BOI processes data. The BOI described its technical and organisational measures as comprising data protection governance through various policies and procedures, training and awareness, records management and a range of oversight and quality assurance measures.

The IDPC disagreed with BOI's assessment of the risk posed as low; it found the risk was high in terms of severity, based on the risk of identity theft, fraud and financial loss, and moderate in terms of likelihood of occurrence. The IDPC found that the data protection governance policies and procedures the BOI had in place did not include additional controls to minimise risk and that the checks and enforcement measures were inadequate. For example, while data protection policies were in place, there was no evidence of regular assessment of the effectiveness of those policies, nor was there ongoing and verifiable oversight of staff compliance with them. In 6 of the 10 breaches there was evidence that policies had not been followed. Further, training was not found to deal with specific risks relevant to the business (in particular the risk of the merging of client data).

The IDPC also found that technical measures were lacking, evidenced in part by the 21-month gap between identifying the underlying issue relating to many of the breaches and implementing a technical fix. Organisational measures were found to be similarly lacking on the basis that new testing and measures were not put in place to test for and mitigate the risks after the earlier breaches took place, infringing the requirement under Article 32(1)(d) that new measures must be put in place where existing measures are shown to be ineffective.

As a result, the IDPC issued BOI with a reprimand, ordered BOI to bring its processing into compliance with its obligations under the GDPR, and imposed a fine of €750,000. The decision speaks to the importance of properly assessing risk and implementing technical and organisational measures proportionate to that risk, and crucially to the need to properly implement, assess, test and where necessary revise those measures on an ongoing basis.

You can read the IDPC's decision here.

Norwegian Privacy Appeals Board upholds fine following ransomware attack

The Norwegian Privacy Appeals Board ("NPAB") has upheld the Norwegian Data Protection Authority's (the "NDPA") decision to fine Østre Toten commune, a Norwegian municipality, (the "Municipality") NOK 4 million, equivalent to approximately £310,000, for breaches of Article 5(1)(f), Article 24 and Article 32 of the GDPR.

The Municipality experienced a severe ransomware attack in January 2021, as a result of which data was encrypted, backups were deleted and employees were left without access to key IT systems. In total, 30,000 documents comprising 160 GB of data were affected by the attack, including information concerning ethnic origin, political opinions, religious beliefs, trade union membership, sex life and sexual orientation, and health, as well as electronic ID and bank account information.

An investigation into the attack revealed that the Municipality had severe shortcomings in its IT systems and processes, including unsecured back-ups, a lack of two-factor authentication and proper log management.

In its appeal to the NPAB, the Municipality argued that there were no grounds for such an administrative fine and that it had implemented sufficient technical and organisational measures in accordance with its internal resources and in line with Articles 24 and 32 GDPR.

Whilst the NPAB recognised the severity of the ransomware attack suffered by the Municipality, it held that the Municipality's failure to protect backup copies against deletion and manipulation was a significant deficiency in the security management system. The NPAB further held that the configuration of firewalls and the network topography represented fundamental weaknesses in the Municipality's information security systems. The NPAB therefore upheld the fine and rejected the appeal.

The NPAB's decision serves as a reminder to both controllers and processors of the importance of implementing sufficient measures to ensure the security of personal data and the consequences that may follow in their absence.

You can read the NDPA's decision here, and the NPAB's decision here.

Data protection by default: ICO publishes privacy guidance to help product designers

In an attempt to help organisations comply with the principle of data protection by design and default, the ICO has produced new guidance ("Design Guidance"). The Design Guidance includes examples of good practice as well as the practical steps that organisations may take in order to ensure compliance with data protection laws when designing technology products.

The Design Guidance is broken down into key privacy considerations for each stage of product design. For instance, at the "kick-off" stage, the Design Guidance advises designers to map out the personal information needed by the product, identify changes and risks, agree responsibilities with other stakeholders and weave privacy into the product's business case. The guidance then deals with the research, design, development and launch stages before moving to post-launch, at which point designers should review how people are using the product by monitoring and fixing unexpected privacy issues, reappraising expectations and reflecting on how privacy can be better embedded in any future products.

You can read the ICO's guidance here.

AI camera surveillance approved by French lawmakers

Following approval by its National Assembly on 23 March, France has become the first country in the EU to approve the use of smart cameras. The real-time smart cameras use AI to identify, analyse and classify bodies, physical attributes, gestures, silhouettes and approaches, in an attempt to detect suspicious behaviour and assist law-enforcement bodies.

The cameras are intended for use by law enforcement bodies, initially during the Paris Olympics in 2024. However, they may also be deployed for events including the Tour de France and music festivals. The use of the cameras must stop at the end of 2024.

The decision has attracted criticism from various privacy groups. In a joint statement published on 6 March, Access Now, Privacy International and La Quadrature du Net commented that the law would "set a worrying precedent for unwarranted and disproportionate surveillance in the public space, to the detriment of fundamental rights and freedoms". Supporters of AI backed surveillance, however, claim that the law will enhance safety at high-profile public events and will prevent terrorist attacks.

Civil litigation

AG decides credit scoring is an automated decision triggering opt out rights

In an Opinion prepared for the European Court of Justice ("CJEU"), Advocate General Priit Pikamäe (the "AG") has stated that a credit score calculated solely on the basis of automated processing constitutes "profiling" under the GDPR. The AG's opinion is given in respect of three cases currently before the CJEU relating to Schufa, a German credit reference agency. The cases had been referred to the CJEU by the administrative court of Wiesbaden in the German state of Hesse.

The first of the three cases concerns an individual who was refused a loan by a credit institution (the "Institution") on the basis of a credit score calculated by Schufa, which was determined in an automated manner. The Wiesbaden court sought guidance from the CJEU following the individual's request for judicial review and asked whether the calculation of a credit score would constitute a breach of Article 22 GDPR, which provides that data subjects have the right not to be subject to a decision based solely on automated processing. This right only applies if the automated decision produces legal effects concerning the data subject or similarly significantly affects them.

In his Opinion, the AG found that the decision in question was in fact an automated decision and the conditions for the Article 22 right to apply were satisfied. The AG opined that even where a value is calculated by one entity (in this case Schufa) and is communicated to another entity (the Institution) it is still an automated decision if that receiving entity draws strongly on that value for its decision on establishing, implementing or terminating a contractual relationship with that person.

Given that this is the AG's opinion on a German case, it is not directly relevant under the UK GDPR and in any event is not binding on the CJEU. However, it speaks to the importance of considering any and all automated decisions and what obligations your business might have in relation to those decisions.

You can read the opinion of the AG here.

Classification of Big Tech platforms to result in Digital Markets Act litigation

Speaking at the Annual Conference of the European Commission Legal Service in Brussels on 17 March, Marc Van der Woude, the president of the EU's General Court, warned that a wave of Digital Markets Act ("DMA") litigation looms. Although Van der Woude emphasised that he was speaking in a personal capacity and making only cautious predictions, he explained that legal questions are likely to arise in connection with the three main stages of the new DMA's implementation.

The DMA places certain obligations and prohibitions on Big Tech platforms such as Meta Platforms ("Meta"), Apple, Google and Amazon (referred to as "gatekeepers" under the DMA). The DMA seeks to create a fairer environment for business users that rely on gatekeepers, and to ensure consumers have access to better services and can easily switch providers.

The first step for implementation of the DMA, due to kick off on 2 May 2023, concerns the classification or designation of platforms as being so large that they attract "gatekeeper" status. Van der Woude explained that although gatekeeper designation is to be based on clear metrics such as revenue and the number of active users, there is a risk that platforms will challenge any such designation, which in turn is likely to lead to "battle of the experts" style litigation.

Van der Woude was clear that a balance must therefore be struck between the industry continuing to operate as normal and the need for judges to review any cases. According to Van der Woude, a company bringing a claim should think carefully about timeline extensions and confidential treatment requests if it wants the case to move quickly.

European Data Protection Board to rule on Meta US transfers probe

The EDPB is to meet on 13 April 2023 to review the IDPC's draft decision against Meta, which would require it to stop transfers of data to the US.

The case stems from the privacy complaint filed by Austrian campaigner Max Schrems in 2013 and the subsequent series of decisions culminating in the CJEU decision in July 2020 ("Schrems II"). Following Schrems II, the IDPC proposed in July 2022 that Meta should be prevented from using standard contractual clauses to protect users' data when it is sent to the US. There are also proposals that it should delete the data that has been transferred to the US.

Meta has argued that it would be wrong to impose an order on it to delete the large volumes of data it has transferred from the EU to the US.

Meta switches to legitimate interest for ads targeting activities

Following the EDPB's decisions that Meta could not rely on contract as a legal basis for targeting Facebook and Instagram users with personalised advertisements, Meta announced on 30 March that it will instead rely on legitimate interests for these processing activities. The Wall Street Journal reported that this would involve Meta permitting users to choose to have ads targeted at them based on broad categories, such as age range and general location, rather than on their specific activities.

Campaign group NOYB immediately announced that it will take "imminent action" to contest the change.

Dutch court rules Meta breached multiple privacy rules

On 15 March, the Amsterdam District Court ruled that between 2010 and 2020, Meta's Irish subsidiary failed to properly inform Dutch users about how it used their data, did not obtain permission to use their data for advertising purposes and misled users by withholding essential information. The court held that the average consumer was unable to make a well-informed decision about participating in the Facebook service.

The case concerned a mass privacy action brought by Data Privacy Stichting ("DPS") and Consumentenbond (the "Dutch Consumers' Association"). The DPS and Dutch Consumers' Association have announced that they plan to commence a new action against Facebook. According to them, the CJEU has twice ruled that Facebook acted in violation of the GDPR by forwarding data and that, despite this, Facebook has continued to transfer personal data of its European users to the US, where such data is accessible by US intelligence services.

The Dutch court's ruling can be read in Dutch here.

Brazilian court orders Meta to pay €731 million in damages

On 23 March 2023, the Court of Justice of Maranhão in Brazil ordered Meta to pay €731 million in damages.

In 2021, a data leak at Facebook led to the personal data of 533 million users being published, allegedly as a result of automated tools being used to scrape data from public Facebook pages. The users were spread across 106 countries. In response, consumer rights organisation IBEDEC brought a class action on behalf of the affected Brazilian users.

IBEDEC argued that Facebook had unlawfully leaked users' personal data and subsequently failed to notify affected users. Meta argued that there was no leak as the data was already publicly available, but said that it would invest resources to combat unauthorised data scraping.

The court rejected Meta's arguments and stated that the unauthorised scraping of data constituted a clear security breach. As a result, the court ordered Meta to pay €90 to each affected user within the class action in individual non-material damages. Meta must also pay a further €12.9 million in collective non-material damages which will go to Brazil's State Fund for Consumer Protection and Defence.

Meta stated that it is currently assessing its options in response to the court's decision.

Austria finds Meta's tracking tools to be in breach of transfer rules

Austria's data protection authority has found that Meta's tracking technologies violate EU law, as personal data is being transferred to the US where it is at risk from government surveillance.

The decision relates to an Austrian news website's use of Meta's tracking tools (its name is redacted from the decision). Whilst the decision concerns just one site, it could have much broader implications for the use of Meta's tools. It is therefore an important decision for any EU site using Meta's tracking tools and underlines the ongoing legal uncertainty around EU-US data transfers.