AI update – ICO's strategic approach to regulating AI

In late April 2024, the UK Information Commissioner's Office ("ICO") published its strategic approach to Artificial Intelligence ("AI") regulation, in response to the UK government's White Paper (the "White Paper") and its call for regulators to set out how they will act on AI.

Several other UK regulators also published strategic action plans setting out how they will address the development of AI technologies within their remit, including the other members of the Digital Regulation Cooperation Forum (the FCA, Ofcom and the CMA). Please see our insight for further details on the FCA AI Update.

The ICO has already called out AI and its application in biometric technologies, along with children's privacy and online tracking, as its focus areas for 2024/25.

AI White Paper

The White Paper outlined a principle-led, sector-based approach to AI regulation, with each regulator having the freedom to tailor a regulatory framework to the needs of their sector. It also highlighted five key principles that regulators should use to guide their approach to AI regulation (the "Principles").

Please see our 2023 insight for further details on the regulatory framework proposed by the White Paper.

Data protection law and the Principles

In its response to the White Paper, the ICO emphasised that many of the Principles were also integral principles in data protection law. This meant that, for the most part, they were already enshrined within the data protection framework within the ICO's remit.

The sections below summarise the ICO's views on how the current data protection framework aligns with each Principle.

Safety, security, and robustness

Security is a key data protection principle: organisations must ensure appropriate levels of security to protect personal data against unauthorised or unlawful access or processing, and against accidental loss, destruction or damage. It is also a guiding tenet of other frameworks the ICO oversees, such as the Network and Information Systems Regulations.

The ICO highlighted that while AI introduces new security risks, such as membership inference attacks (inferring whether an individual's data was used to train a model) and model inversion (reconstructing training data from a model's outputs), data protection law can be used to mitigate these.

Appropriate transparency and explainability

Transparency is also a data protection principle. This is about being clear, open and honest with people from the start about who organisations are, and how and why they use their personal data.

The ICO stated that transparency requirements for organisations go beyond simply providing information regarding the processing of personal data in the training of AI systems.

For example, if an AI system is solely responsible for making decisions with "legal or similarly significant effects", organisations will be required to explain the "logic" of their AI systems.

The ICO points to the guidance it produced in conjunction with the Alan Turing Institute on "Explaining Decisions Made with AI", for organisations that need further advice on this point.

Fairness

Fairness is another key data protection principle. It means that organisations should only handle personal data in ways people would reasonably expect, and not in ways that have unjustified adverse effects on them.

Under data protection law, "fairness" is contextual – the ICO gives the example of how organisations need to consider factors such as the environment in which a system is deployed, or the power dynamic between people and the organisations controlling or processing their data when determining whether an AI system and its use of personal data is "fair".

Accountability and governance

Accountability as a data protection principle requires organisations to take responsibility not only for what they do with people's personal data but also for how they comply with all the other data protection principles.

Under data protection law, accountability is allocated based on roles defined in legislation, such as "controller", "processor" or "joint controller".

The ICO provides the example of data protection impact assessments as a way in which organisations can demonstrate accountability.

Contestability and redress

While contestability is not a principle in data protection law, it is reflected in individuals' data rights (for instance, an individual's right to access their personal data and their rights in relation to solely automated decision-making).

Use cases highlighted by the ICO

In its response, the ICO emphasised that while AI presents many opportunities across a variety of sectors, the risks associated with it, such as issues surrounding transparency, security or fairness, must be mitigated.

The ICO provided several examples of areas and technology in which data protection laws could be utilised to mitigate some of the risks posed by AI.

Foundation models

The ICO highlighted foundation models (machine learning models that can be adapted to a wide range of tasks) as a particular risk area. These models are generally trained on vast amounts of data, often including personal data, so that they can be deployed for a wide range of purposes.

Given the scale of personal data involved, there is a clear risk that individuals' personal data is not sufficiently protected. The ICO emphasised that data protection laws therefore apply to every stage of the model life cycle and to "every actor within the supply chain where personal data is being processed", to ensure that individuals' data privacy rights are respected.

High-risk AI applications

These include AI applications where the risk of harm, such as discriminatory outcomes, is elevated. The ICO gives the example of "high risk" AI systems listed in the EU AI Act as systems which would fall under this category (such as AI used in recruitment or to evaluate a person's creditworthiness; note that "social scoring" systems are prohibited outright under the EU AI Act rather than classed as high risk). Please see our article from December 2023 for further examples of what type of AI systems would be considered "high risk" under the EU AI Act.

The ICO underscored that data protection laws could be used to mitigate some of the risks these applications present. For instance, the fairness principle of data protection law prohibits organisations from undertaking data processing which has "unjustified adverse effects" on individuals.

Facial recognition technology and biometrics

The ICO stressed that any use of facial recognition technology must be "proportionate and strike the correct balance between privacy intrusion and the purpose they are seeking to achieve."

With regards to biometric recognition technologies, the ICO has produced guidance on practical data protection issues organisations may encounter when working with these technologies, and how to counter these.

Children and AI

The ICO is particularly concerned with potential harm to children as a result of AI products and services. For instance, it will closely scrutinise the risk assessment process for any AI products or services that process children's personal data.

Regulatory actions

The ICO has already taken steps towards AI-related enforcement. For instance, Clearview AI Inc, a company which provides facial recognition software, was fined £7.5 million by the ICO for processing UK residents' personal data without a lawful basis, and thereby breaching the UK GDPR. Please see our article for more detail on this fine, which is subject to appeal.

Another example is the ICO's enforcement action in relation to Serco Leisure, where the ICO ordered the company to stop using facial recognition technology to monitor its employees' attendance at work. Please see our insight for an overview of this order.

Next steps

The ICO is continuing to monitor developments in the AI sector and is running several consultations to ensure that its guidance on AI remains up to date.

A key element of this is the ICO's consultation series on generative AI. Please see our article for further insight on the consultation series.

The ICO also plans to launch a consultation in spring 2024 on biometric classification technologies and how these should be developed or deployed.

In spring 2025, the ICO plans to consult on updates to its guidance on AI and data protection following the passage of the Data Protection and Digital Information Bill (the "DPDI"), to reflect the bill's impact on data protection laws.

The ICO is committed to continuing to work with organisations, other regulators, the UK Government, standards bodies and international partners on AI regulation and policy.

Our takeaway thoughts

The ICO's response is unsurprising: it confirms that the ICO will continue to regulate AI in line with existing data protection laws, and that AI will remain one of its top priorities.

Organisations should ensure that their use of AI is consistent with data protection legislation and the ICO's guidance. They should also keep up to date with changes in legislation, such as the introduction of the DPDI, and with speculation about a change in direction of the UK's current AI strategy.