DCMS AI Policy Statement

On 18 July DCMS issued a Policy Statement and call for views and evidence on the governance and regulation of AI. The foreword proposes a "light-touch" and "proportionate" regulatory framework to position the UK as the best place in the world to found and grow an AI business, in a bid to unlock the huge benefits across the economy and society. The framework will be designed to increase consumer trust and business confidence, and to create an internationally competitive regulatory approach. Following responses to the consultation, the government will issue a White Paper later in the year.

The Policy Statement proposes a flexible approach to regulation, with a focus on cross-sectoral principles applicable to AI and the range of new and accelerated risks that the technology creates. The "light touch" framework would be supported by various AI standards and assurance tools that are being developed in the UK, such as those being piloted through the DCMS AI Standards Hub and the Centre for Data Ethics and Innovation.

Key observations on the proposed framework

The Policy Statement concentrates on the following key themes in its approach:

  • Cross-sectoral approach: Regulatory control will be decentralised, with existing regulatory bodies (such as the ICO, FCA, CMA, MHRA and Ofcom) applying the framework as appropriate to their areas of remit and strategic focus, likely backed up by government issued guidance to the regulators.
  • Use case focused: The definition of AI, and thus what is caught by the framework, will be set against core characteristics and capabilities of how it operates (such as "autonomy" and "adaptiveness") to allow regulators to set out and evolve more detailed definitions relevant to their specific domains or sectors – focusing on the regulation of the use of AI, rather than the technology itself.
  • Core principles: It will be based on a set of core principles to regulate the use of AI against the impact on individuals, groups and businesses – they will be applied in the context of the specific AI application.
  • Risk-based: The principles will focus on areas where there is clear evidence of genuine, high risk, rather than on hypothetical or low-risk use cases – the aim being to promote innovation through a risk-based approach.
  • Adaptable: The principles will initially apply on a non-statutory basis, with regulators asked to apply them as guidance or voluntary measures – the rationale being that this will be more adaptable to change.
  • Proportionate: Regulators will have to interpret and apply the principles "proportionately" as relevant to their domains or sectors – to allow regulators to apply a targeted and nuanced response to risk and developments, with an emphasis on eliminating burdensome or excessive administrative compliance.  

Cross-sectoral principles

The proposed cross-sectoral principles will build on the OECD Principles on Artificial Intelligence, with an emphasis on being "values"-focused and aligned to the government's wider digital strategy. They will apply to any actor in the AI lifecycle whose activities may create risk that should be managed. A summary of the early proposals for the principles is set out below:

  • Ensure that AI is used safely: Taking a context-based approach to assessing the likelihood that AI could pose a risk to safety, with requirements commensurate with actual risk – comparable with non-AI use cases.
  • Ensure that AI is technically secure and functions as designed: To ensure consumer confidence, AI systems must be secure and reliable under normal conditions of use. Resilience and security should be tested and proven, and any data used must be relevant, high quality, representative and contextualised.
  • Make sure that AI is appropriately transparent and explainable: Transparency requirements will be used to improve the understanding of AI decision-making. Requirements could include obligations to proactively or retrospectively provide information relating to the nature and purpose of the AI, the data being used, the logic and process used, and information to support explainability of decision making. In some high risk circumstances, regulators may deem that decisions which cannot be explained should be prohibited entirely.
  • Embed consideration of fairness into AI: High-impact outcomes from AI - and the data points used to reach them - should be justifiable and not arbitrary. Regulators will need to design, implement and enforce appropriate governance requirements for ‘fairness’ as applicable to the entities that they regulate.
  • Define legal persons responsible for AI governance: Accountability for the outcomes produced by AI and legal liability must rest with an identified or identifiable legal person - whether corporate or natural.
  • Clarify routes to redress or contestability: The use of AI should not remove the ability to contest an outcome. Regulators will need to implement proportionate measures to ensure the contestability of outcomes in relevant regulated situations.

The overall suggested approach will give each regulator a degree of discretion as to how the principles are implemented.

Our concluding thoughts

This proposed approach sets the UK apart from the more detailed, and likely more onerous, regime currently progressing through the EU legislative process (the AI Act) – a strategy aligned with the government's vision for Digital Regulation post-Brexit. It is what the Policy Statement refers to as a "nimble regulatory framework", designed to support the UK economy in tapping into the multi-billion pound AI market (predicted to account for c.£200bn of UK spending by 2040) as a pro-innovation hub for international business.

However, as this proposal is still at an early stage, with a White Paper to follow the consultation, there is scope for things to change. In particular, with a new government due in the Autumn there could be a change of direction or a delay to the timetable. More substantively, we expect businesses to challenge the coherence of the decentralised approach, on the basis that they may have to comply with a labyrinth of regulatory guidelines where their business, or their AI use cases, span the regimes of multiple regulators. Some may also see the cross-sectoral approach, with no single regulator, as a cost-cutting measure, and point to the ICO's success in handling data protection issues that cut across multiple domains and sectors as evidence that a single-regulator model can work. There are clear advantages to having a more clearly defined set of rules, with one consistent regulator charged with governing compliance.

Given that the development and use of AI is somewhat "unregulated" at the moment, this is a step in the right direction – not least because it gets the ball rolling on the feedback and lobbying that will shape the approach in the next parliament.

The government is requesting feedback on its proposed approach by 26 September: evidence@officeforai.gov.uk. Please do get in touch if you would like to collaborate on a response.