Neural Network - August 2025

In this edition of the Neural Network, we look at key AI developments from July and August.

In regulatory and government updates, the GPAI Code of Practice is published; a mandatory template is released for GenAI model providers; the Joint Committee on Human Rights launches an inquiry into human rights and the regulation of AI; the government publishes an AI Action Plan for Justice; Trump's AI Action Plan is released; and the Law Commission releases an AI discussion paper.

In AI enforcement and litigation news, voice actors sue an AI startup for voice cloning.

In technology developments and market news, the UK government and OpenAI launch a new partnership; and we consider China’s proposal for a global body to govern AI.

More details on each of these developments are set out below.

Regulatory and Government updates

GPAI Code of Practice published

On 10 July, the European Commission (the "Commission") published the final General-Purpose AI Code of Practice (the "Code"), which is intended to help providers of general-purpose AI ("GPAI") models comply with certain provisions of the EU AI Act (the "AI Act"). Although the Code is voluntary and non-binding, the Commission has stated that it will provide a practical way for providers to demonstrate compliance and reduce legal uncertainty.

The Code supports compliance with Articles 53 and 55 of the AI Act, which set out obligations for providers of GPAI models, including those classified under the AI Act as posing systemic risk (i.e. where the cumulative amount of computation used for the model's training exceeds 10^25 floating-point operations (FLOPs), or where the model has an equivalent impact). The AI Act's GPAI requirements take effect from August 2025, becoming fully enforceable for new GPAI models by August 2026 and for existing GPAI models by August 2027.
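To give a rough sense of the scale that threshold implies, training compute is often estimated with the 6ND approximation (a common industry heuristic, not a formula prescribed by the AI Act), where N is the number of model parameters and D the number of training tokens. On purely hypothetical figures, a model with 10^11 parameters trained on 10^13 tokens would fall just below the threshold:

\[ C_{\text{train}} \approx 6ND = 6 \times 10^{11} \times 10^{13} = 6 \times 10^{24}\ \text{FLOPs} < 10^{25}\ \text{FLOPs} \]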

The Code is organised into three chapters: Transparency, Copyright, and Safety & Security. Some key takeaways from the Code are as follows:

  • Providers must maintain comprehensive, up-to-date documentation for each GPAI model, using a template supplied by the Commission. They are required to make contact information publicly available for information requests and ensure the accuracy, integrity, and security of all documented information.
  • To comply with copyright obligations, providers need to implement a copyright policy in line with EU law, ensure only lawfully accessible content is used for training the models, and respect machine-readable rights reservations. Technical safeguards must be in place to prevent copyright-infringing outputs, and a clear complaints mechanism for rightsholders is required.
  • For GPAI models with systemic risk, providers must adopt a robust safety and security framework, identify and mitigate systemic risks, and implement cybersecurity protections. Providers are also required to prepare a Safety and Security Model Report, define clear internal responsibilities, and report serious incidents to authorities.

The publication of the Code marks a significant step in operationalising AI regulation in the EU. Organisations developing or deploying GPAI models should review their practices against the Code’s requirements to ensure good governance and regulatory readiness as the AI landscape continues to evolve.

Mandatory template released for GenAI model providers

In July, the Commission published a template to help GPAI providers summarise the content used to train their AI models. The Commission intends the template to be an effective way for providers to increase transparency in line with their obligations under the AI Act. The template supplements the guidelines on the scope of the rules for GPAI models and the Code.

Designed to be user-friendly, the template is intended to help providers of GPAI models fully comply with the AI Act. By streamlining compliance, the initiative supports broader efforts to build public trust in AI and unlock its full potential for economic and societal benefit.

Inquiry launched into human rights and the regulation of AI

On 25 July 2025, the Joint Committee on Human Rights (the "Committee") launched an inquiry (the "Inquiry") into the impact of AI on human rights in the UK.

The Inquiry aims to assess both the risks and opportunities that AI presents for human rights, to determine whether current legal and regulatory frameworks are adequate to address these challenges as AI technology evolves and, if not, what changes should be made to those frameworks. AI technologies offer clear benefits to society but also raise serious concerns, including the risk of reinforcing bias and discrimination through the use of flawed data. Their use in surveillance may threaten privacy and freedom of expression, while the data used to train AI models can lead to opaque decision-making and bias.

The Inquiry is particularly interested in the way that AI can influence individual human rights both positively and negatively in areas such as: (i) privacy and data usage; (ii) discrimination and bias; and (iii) access to effective remedies for human rights violations.

The Committee is seeking written submissions on the above key issues from any member of the public. It will also examine whether the UK's legal framework and government policies sufficiently protect human rights in the context of AI, and what future legislation might be needed.

Additional topics include the challenges of regulating AI given its international nature, accountability for human rights breaches, the appropriate point in the AI lifecycle for liability, and whether different types of AI require distinct regulatory approaches. The Inquiry will also consider lessons from other jurisdictions, such as the European Union.

Feedback can be submitted here until 5 September 2025.

Government publishes AI Action Plan for Justice

The Ministry of Justice's AI Action Plan for Justice (the "Plan for Justice"), launched in July 2025, sets out a comprehensive strategy to integrate AI across the justice system in England and Wales. The Plan for Justice aims to make justice services faster, fairer, and more accessible through, among other things, automating routine tasks and personalising services for citizens and staff. Developed with input from both the judiciary and legal regulators, the Plan for Justice supports AI adoption across courts, prisons, probation, and supporting services.

It focuses on three main priorities: (i) strengthening the foundations for AI adoption by establishing a dedicated Justice AI Unit, improving governance, creating an AI & Data Ethics Framework, and building robust data infrastructure; (ii) embedding AI in courts, tribunals, prisons, and probation to enhance productivity and streamline work, such as by ensuring all staff have a secure AI assistant, using speech and translation AI, and using AI coding assistants to modernise legacy systems; and (iii) investing in people and partnerships by developing AI talent, fostering collaboration with legal service providers, and supporting responsible innovation. Importantly, the Plan for Justice emphasises the importance of human oversight when using AI, noting that "[the Ministry of Justice] will monitor outputs, apply de-biasing techniques, and use representative datasets to ensure AI augments, not replaces, human judgment".

The Plan for Justice also emphasises transparency as a means of maintaining ethical standards and public trust, helping to ensure that AI supports rather than replaces human judgment. Early pilots have shown promising results, with AI tools already reducing administrative burdens and improving service quality. The roadmap takes a phased approach, with continuous evaluation and adaptation over a three-year period beginning in April 2025. The Ministry of Justice is committed to working with partners, regulators, and the wider justice sector to ensure AI delivers tangible benefits and upholds an equitable justice system.

The Plan for Justice can be read in full here.

Trump's AI Action Plan

On 23 July, the White House revealed its vision for future AI policy in the form of a 28-page AI action plan (the "Plan"). According to the White House's press release, the Plan is part of a "shift toward policy aimed at fostering U.S. AI dominance in the face of fierce competition from China". The statement underlines the competition between world powers and the interconnected nature of geopolitics and technology in the global AI race. Emphasising this, the first line of the Plan's introduction states that "America is in a race to achieve global dominance in artificial intelligence."

Key aspects of the Plan include: (i) using federal agencies to develop new standards and improve those that already exist; (ii) removing regulatory barriers and red tape to foster AI innovation; and (iii) improving risk prevention. It is expected that President Trump will use executive orders to put some of the Plan's points into action.

According to the White House Office of Science and Technology Policy Director, Michael Kratsios, the Plan "galvanizes Federal efforts to turbocharge our innovative capacity, build cutting-edge infrastructure, and lead globally, ensuring that American workers and families thrive in the AI era." Organisations will welcome the Plan's explicit efforts to remove red tape.

The Plan has already been met with mixed reactions, with several privacy and AI safety groups coming together to sign the People's AI Action Plan, a statement urging the US government to focus on the environmental and social needs of the country rather than the wants of the technology industry.

Law Commission releases AI discussion paper

Last month the Law Commission published a discussion paper (the "Paper") on AI and the law. The Paper aims to raise awareness of legal issues concerning AI, to encourage discussion of the topic, and to help identify the areas in most pressing need of reform.

The Paper explores:

  • AI autonomy and adaptiveness;
  • Interaction with and reliance on AI; and
  • AI training and data.

While the Paper considers how legal issues may arise, it does not include any specific proposals for law reform. It does, however, consider the "perhaps radical, option of granting AI systems some form of legal personality", which it suggests might be a solution to the immediate problem of liability, an issue likely to become more pressing as AI grows more sophisticated.

Sir Peter Fraser, Chair of the Law Commission, emphasised the rapid development and expanding use of AI across various fields, including automated driving and medical diagnosis. He noted that AI is likely to continue influencing many aspects of daily life in significant ways – posing both benefits and risks. Fraser stressed the importance of ensuring that the laws of England and Wales evolve to keep pace with the changes brought about by AI.

Enforcement and Civil Litigation

Voice actors sue AI startup for voice cloning

A federal judge in New York has allowed two voice actors who allege their voices were used without permission to proceed with a lawsuit against AI voiceover company Lovo Inc. ("Lovo"). The actors claim they were hired through the freelancing app Fiverr for limited projects but later discovered that Lovo had sold AI-generated versions of their voices as digital personas, which the actors deemed to be unauthorised "clones" of their voices. While the judge dismissed most of the federal copyright and trademark claims, he permitted the actors to pursue claims under New York's right of publicity laws and to amend their complaint regarding copyright infringement related to AI training.

This case is part of a broader trend of legal challenges against tech companies accused of using creative works to train AI systems without consent. Similarly, in the UK, there was a prolonged "ping-pong" of debate between the House of Lords and the House of Commons around the inclusion of protective measures for rights holders in the Data (Use and Access) Act 2025 (the "DUAA"), which we covered in a previous edition of Neural Network and more generally in our DUAA series (a deep-dive into various provisions of the DUAA, the most recent of which can be found here). The House of Commons consistently blocked the inclusion of any such clauses, noting that, among other things, the DUAA was not the right instrument for this change. Creatives, including Elton John and Dua Lipa, were vocal about the issue, urging the Prime Minister to appropriately consider legislative protections for artists' copyright. Protective provisions were ultimately not included, but the government agreed to publish a report on copyright and AI proposals within nine months of the DUAA being passed.

Technology developments and market news

A new partnership: UK and OpenAI

Last month, the UK government signed a "strategic partnership" with artificial intelligence organisation OpenAI.

This new partnership forms part of the government's strategy to attract increased investment into the UK's artificial intelligence sector: in January of this year, the digital secretary, Peter Kyle, announced an "AI opportunities action plan" intended to boost the UK's AI sector. The UK is home to pioneering AI laboratories such as Google DeepMind, but the levels of funding available within the nation are dwarfed by those of AI powerhouses such as the USA and China. This is best exemplified by the amounts of private AI investment each of these countries received last year: the USA boasted $109.1 billion and China a healthy $9.3 billion, while the UK lagged behind with only $4.5 billion.

The "strategic partnership" between the UK government and OpenAI is set out in a voluntary memorandum of understanding that includes a series of pledges from both parties. The government commits to discovering ways to adopt OpenAI's technologies in public services in a variety of sectors such as justice and education technology. On the flip side, OpenAI pledges to "explore" investing in AI infrastructure within the UK and to hire more staff in the UK.

This partnership seems promising and may help alleviate some of the concerns voiced by UK entrepreneurs regarding technology regulation, most notably those highlighted by the recent enactment of the DUAA on 19 June 2025.

China proposes a global regulatory body for AI

China has announced an ambitious plan to expand its influence in global AI governance, aiming to establish a new international cooperation organisation and promote open-source technology sharing. Speaking at the World Artificial Intelligence Conference in Shanghai, Premier Li Qiang highlighted the need for a unified global AI governance framework, criticising technological monopolies and export restrictions in what was arguably an implicit reference to US policies.

China's foreign ministry released a 13-point proposal advocating for new UN-led dialogue mechanisms and a safety governance framework. The plan emphasises collaboration on open-source platforms and sharing key technologies, including semiconductors, to benefit global development, particularly in the global South. This reflects China's broader strategy of growing its global influence, also exemplified by its gradually increasing involvement in UN agencies, particularly those focused on development, technology, and technical standards. One aim may be to fill the political vacuum left since the Trump administration began pulling back from the UN. Despite contributing over 15% of the UN's regular budget (second only to the US), China remains under-represented among the organisation's staff.

This initiative comes amid escalating US-China competition in the technology sphere, with Washington imposing export controls on advanced semiconductor chips in an effort to maintain its technological edge. We previously reported that China is making the switch from Nvidia due to these restrictions, while the US is growing the number of data centres on its shores. Given that a key aim of the US's AI Action Plan is to "Counter Chinese Influence in International Governance Bodies", the extent of China's role in any prospective international AI governing body remains to be seen.

Despite these restrictions, Chinese AI groups like DeepSeek and Alibaba have continued to advance rapidly, raising concerns in the US about losing global dominance.