Neural Network - July 2025

In this edition of the Neural Network, we look at key AI developments from June and July.

In regulatory and government updates, the European Commission (the "Commission") has released its long-awaited General-Purpose AI Code of Practice and published a Generative AI ("GenAI") Outlook Report; Ofcom has released its approach to AI governance in 2025/26; a proposed 10-year AI moratorium has been scrapped from Trump's "big beautiful bill"; and the Commission has launched a call for proposals on the use of GenAI in cyber security.

In AI enforcement and litigation news, the Getty Images v Stability AI hearing has just concluded in the High Court; the UK courts have issued a strong warning about AI use in court proceedings; Anthropic is now allowed to use books to train its AI model; and a Korean AI chatbot developer has been ordered to compensate data breach victims.

In technology developments and market news, European CEOs are lobbying against the landmark AI Act; Microsoft invests in AI amidst significant layoffs; and OpenAI and Oracle are to collaborate on data centres.

More details on each of these developments are set out below.

Regulatory and government updates

Commission releases AI Code of Practice

On 10 July 2025, the Commission's European AI Office ("AI Office") published its General-Purpose AI Code of Practice (the "Code") to help businesses comply with the rules on general-purpose AI models under the EU Artificial Intelligence Act (the "AI Act"), which will come into effect on 2 August 2025.

This voluntary Code provides a compliance framework focusing on transparency, copyright, and measures to mitigate risks. It applies to general-purpose AI models, including well-known large language models such as Google's Gemini and OpenAI's GPT-4. The Code will need to be endorsed by European Union ("EU") member states and the Commission, which will now assess its adequacy before it can be implemented. The Code still awaits accompanying guidelines from the Commission on key concepts relating to general-purpose AI models.

The Code was developed with input from over 1,000 stakeholders with the aim of helping providers of general-purpose AI models to meet their existing compliance obligations under the AI Act. General-purpose AI model providers will voluntarily sign up to demonstrate that they comply with the AI Act by adhering to the Code. All providers of general-purpose AI are urged to follow the Code as the 2 August compliance deadline approaches, with enforcement action for new models under the AI Act to begin from 2 August 2026, and for existing models from 2 August 2027. We understand that initial non-compliance will be met with collaboration rather than fines, as providers work toward full compliance.

The Code is made up of three chapters. The chapters on transparency and copyright offer all providers of general-purpose AI models a way to demonstrate compliance with their obligations under Article 53 of the AI Act, while the third chapter, on safety and security, is relevant only to providers of the most advanced models considered to present systemic risk under Article 55 of the AI Act. An overview of each chapter is as follows:

  1. Transparency: this chapter sets out the need to (i) draw up and keep up-to-date model documentation (offering a model form to document sufficiently transparent information for AI Act compliance); (ii) disclose any relevant information to the public or the AI Office, including information requested by the AI Office about the model in question; and (iii) ensure the quality, integrity and security of information, which involves following the recommended technical protocols.
  2. Copyright: this chapter addresses the AI Act requirement to implement and maintain an up-to-date copyright policy that ensures compliance with EU law. Signatories commit to putting in place technical measures that prevent their models from reproducing copyrighted works in their outputs, and to using and extracting only lawfully accessible content when web scraping.
  3. Safety and Security: this chapter covers the Code's requirement for signatories to put in place safety and security frameworks as part of a risk analysis of their general-purpose AI models, to ensure that the systemic risks those models present are acceptable.

The Code can be read in full here.

GenAI Outlook Report published by Commission

The Commission's Joint Research Centre Outlook Report (the "Report") explores the transformative impact of GenAI in the EU, highlighting its potential to drive innovation and productivity across sectors like healthcare, education, and the creative industries.

Some key takeaways from the Report are as follows:

  • The Report outlines that while GenAI offers significant opportunities due to its ability to produce human-like content quickly and at scale, this rapid pace of development also raises challenges.
  • It underscores the need for strong alignment with EU frameworks including the AI Act and data protection laws, to ensure GenAI is consistent with core principles of democracy and EU legislation.
  • The Report also makes clear that a strategic approach to GenAI policy must be taken as GenAI is emerging as its own economic sector, bringing unique challenges and opportunities.
  • To stay competitive, the Report emphasises that the EU must address key issues like investment, talent, and innovation, and foster an environment that supports AI development and deployment. Focussing on these efforts will align with the goals of the recently adopted AI Continent Action Plan, which Neural Network previously covered here.

The Report discusses the relationship between GenAI and the AI Act, the General Data Protection Regulation (the "GDPR") and the Digital Services Act from a legal perspective. Notably, the Report:

  • recommends enhancing the transparency of GenAI systems in accordance with the AI Act by implementing measures such as embedding metadata within content or applying watermarking techniques. These techniques involve inserting subtle, imperceptible markers into audio, image, or text outputs to help identify AI-generated material (an illustrative sketch of the metadata approach follows this list); and
  • notes that although the AI Act provides a specific legal framework for the examination of AI, the GDPR continues to apply to any processing of personal data. As such, the way that the GDPR applies to GenAI models presents issues in several areas including: data subject rights; accountability; and the lawfulness of processing and legitimate interest as a legal basis. This means that more research and practical assessments of existing laws are needed to ensure their suitability.
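
For illustration only, the short sketch below shows a very simple form of the metadata-embedding approach the Report describes: writing a provenance note into a PNG image's text metadata using the Python Pillow library. The field names and values are illustrative assumptions rather than anything prescribed by the Report or the AI Act; real provenance and watermarking schemes are considerably more robust and tamper-resistant.

  from PIL import Image, PngImagePlugin

  def tag_as_ai_generated(src_path: str, dst_path: str) -> None:
      """Copy a PNG image, embedding an illustrative provenance note in its metadata."""
      image = Image.open(src_path)
      info = PngImagePlugin.PngInfo()
      # Hypothetical field names; real labelling schemes define their own vocabularies.
      info.add_text("ai_generated", "true")
      info.add_text("generator", "example-genai-model")
      image.save(dst_path, pnginfo=info)

  def read_provenance(path: str) -> dict:
      """Return any text metadata stored in a PNG image."""
      return dict(Image.open(path).text)

  # Example usage (assumes an "output.png" exists):
  # tag_as_ai_generated("output.png", "output_tagged.png")
  # print(read_provenance("output_tagged.png"))

Metadata of this kind is easily stripped when content is re-encoded or shared, which is one reason the Report also points to watermarking, where the identifying signal is embedded imperceptibly in the content itself rather than stored alongside it.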

The Report highlights the Commission’s perspective on both the promise and the challenges of GenAI use, indicating that a pragmatic approach will help ensure GenAI is able to support the EU's goals in a sustainable way.

You can read the full Report here.

Ofcom releases approach to AI governance in 2025/26

On 6 June, Ofcom published its strategic approach to AI for 2025 to 2026. Ofcom aims to balance innovation with robust consumer protections across several areas such as broadcasting, telecoms, postal services, and online platforms by taking a "safety-first" approach to AI regulation. The strategy recognises AI's potential to improve services and efficiency but also highlights risks such as misinformation and privacy concerns.

Ofcom's planned AI initiatives are particularly focused on transparency and accountability, and include, but are not limited to, the following:

  • Cross-sectoral: develop media literacy resources, publish all non-confidential research data that Ofcom collects, and continue to engage with the Government on its plans for AI opportunities and its planned future consultation on AI legislation.
  • Online safety: monitor the development of new AI and make sure that services are aware of their obligations under the Online Safety Act; and explore the need for measures to address AI-based harms, including interventions that could help respond to harmful deepfakes, such as adopting watermarks, content provenance, and AI labels.
  • Telecoms: continue to monitor and engage on AI standards development, and continue to engage with telecoms operators to understand how AI technologies are applied in their networks and what their future development plans are, so that Ofcom can support this work.
  • Broadcasting and Media: issue further guidance to broadcasters as necessary to clarify responsibilities and accountabilities regarding AI, and continue to investigate the implications of social media's influence on media and the discoverability of news content.

The strategic approach can be read here.

10-year AI moratorium scrapped from Trump's "big beautiful bill"

On 1 July 2025, the US Senate voted to remove a 10-year moratorium (the "Moratorium") on state-level AI legislation from the "One Big Beautiful Bill Act", before the bill was signed into law on 4 July 2025. The Moratorium would have prevented US States from enacting their own laws and regulations addressing AI models and systems.

The Moratorium was criticised for arguably preventing US States from being able to properly respond to AI-driven threats such as deepfakes and to regulate technologies such as autonomous vehicles and AI-based communication tools.

States have reacted positively to the Moratorium's removal, as they retain autonomy over any potential policing of AI. The California Attorney General welcomed the decision, underscoring the need for thoughtful oversight of emerging technologies.

Commission calls for proposals on the use of GenAI in cyber security

On 12 June, the Commission launched a call for proposals on leveraging GenAI to enhance cyber security, intended to support research into the new opportunities GenAI brings to cyber security applications. The initiative seeks projects that aim to develop, train, and test GenAI models capable of monitoring, detecting and responding to cyber-attacks - including adversarial AI threats - as well as enabling systems to self-heal.

Proposals can also explore the development of GenAI for continuous monitoring, regulatory compliance, and automated threat remediation, while ensuring alignment with EU and national legal, ethical, and privacy rules.

Public submissions are open until 12 November.

Enforcement and civil litigation

Getty Images v Stability AI

June saw the trial of Getty Images' ("Getty") much-publicised, landmark copyright claim against Stability AI ("Stability"), which began on 9 June 2025 in the High Court.

The three-week trial addressed claims by Getty that Stability, creator of the AI model Stable Diffusion, had unlawfully scraped and used copyrighted images from Getty's database to train its AI model. Getty alleged that Stable Diffusion was trained on 12.3 million of Getty's copyrighted images without permission, with a risk that harmful content could be produced using these images, damaging Getty's brand.

This case looked set to be the most significant test of whether training an AI model on copyrighted material could amount to infringement, addressing the growing difficulty of policing AI models with outdated copyright legislation, which has become increasingly impractical in the face of the scale and ubiquity of AI and rapid evolutions in computer processing.

However, towards the end of the trial, Getty dropped its primary copyright claim against Stability; likely a result of issues with proving that (i) a high proportion of the copyrighted image generation took place in the UK; and (ii) the model had been trained in the UK. Getty has made it known that it is still considering litigation against Stability outside the UK over aspects of direct copyright infringement. Dropping the primary copyright infringement claim is another setback for creative rightsholders, who recently experienced disappointment at copyright protections ultimately not being included in the Data (Use and Access) Act 2025. Rightsholders are likely to increase pressure on the UK government to set out how UK copyright law will apply to the development and use of AI models.

The remaining claims being pursued against Stability relate to trademark infringement (namely the reproduction of the Getty watermark in AI-generated images), passing off, and secondary copyright infringement. This final point is the most critical, and centres around the question of whether an AI model trained on infringing data could itself constitute infringement when deployed in the UK. Under UK law, even if the original copying took place outside of the UK, the importation of an infringing work into the UK can amount to infringement. Getty appears to be claiming that Stability's AI model is, for the purposes of the Copyright, Designs and Patents Act 1988, an "infringing article", developed from Getty's images. Therefore, even if training of the AI model took place outside of the UK, offering the AI model for use in the UK would amount to bringing an unlawful copy into the country.

The outcome of this case could influence UK copyright law and AI regulation, have major implications for how AI companies train their models, and affect the rights of creators and rights holders like Getty. The High Court's decision will most likely be handed down in the autumn.

UK courts issue a strong warning about AI use in court proceedings

A Divisional Court (the "Court") judgment of 6 June addressed the misuse of GenAI by legal professionals in court proceedings, with the judgment focusing on two cases: Ayinde v Haringey ("Ayinde") and Al-Haroun v Qatar National Bank ("Al-Haroun"). Both cases involved lawyers submitting either written arguments or witness statements containing fictitious legal citations and authorities, purportedly generated by AI tools such as OpenAI's ChatGPT, without proper verification.

In Ayinde, the barrister acting for the claimant cited five non-existent cases and misstated statutory provisions in a judicial review claim against a public authority's decision not to provide the claimant with interim housing. Despite being alerted by the defendant's solicitors in inter-party correspondence, the legal team failed to provide explanations or corrections, leading to wasted costs orders against the lawyers and referrals to the Bar Standards Board and Solicitors Regulation Authority. The Court highlighted the duty of legal professionals to verify AI-generated content against authoritative sources.

In Al-Haroun, the claimant and his solicitor submitted witness statements containing eighteen references to fictitious cases and quotations that could not be found in the cited authorities, again apparently sourced from AI tools. While the Court acknowledged the solicitor's lack of intent to mislead, it emphasised that lawyers remain responsible for ensuring the accuracy of material presented to the courts.

The judgments underscore the risks of uncritical use of AI in legal research and the paramount importance of upholding professional and ethical standards. The Court called for urgent action by legal regulators and leaders to ensure all practitioners understand their obligations when using AI, warning that misuse could result in severe sanctions, including regulatory referrals and potentially contempt of court proceedings.

Anthropic allowed to use books to train its AI model

A recent ruling by a San Francisco federal judge clarified how US copyright law's fair use doctrine applies to AI training, specifically in the case of Anthropic's use of digitised books to train its Claude AI model. The judge found that Anthropic's use of books authored by the plaintiffs was "exceedingly transformative", qualifying as fair use.

He reasoned that, like a reader learning to write, Anthropic's large language models do not replicate or replace the original works but instead create something new, thus being "transformative". It was held that the amount of copying was reasonably necessary for this transformative purpose, and that such use would not harm the market for the original books.

However, the court drew a clear line in the sand regarding Anthropic's use of pirated works, emphasising that there is no copyright exception for AI companies. The judge ruled that creating a "central library" of millions of books downloaded from pirated online datasets does not fall under fair use. It was noted that retaining pirated copies, even if not used for training, is "inherently, irredeemably infringing". As a result, a hearing will be held to determine the amount of damages due to the plaintiffs stemming from the use of pirated books in Anthropic's library.

The ruling also noted that digitising print books that were legally purchased by Anthropic for internal use was fair use, as it merely replaced physical copies with searchable digital versions and there was no evidence of an intent to redistribute. This sets an important precedent for how courts may balance transformative AI use against copyright protections.

Korean AI chatbot developer to compensate data breach victims

On 5 June, the Seoul Eastern District Court ruled in favour of users of a Korean AI chatbot who were subject to a data breach. The class action was filed in 2021, with the plaintiffs claiming that their personal data had been breached through the unauthorised use of their conversation data to train the AI chatbot. The chatbot also made unethical comments to users, including hate speech directed at sexual minorities and people with disabilities.

The developer of the AI chatbot, Scatter Lab, was ordered to pay all plaintiffs compensation for the breach of their personal data, their sensitive data, or both. Scatter Lab had previously been fined $93,000 by the Personal Information Protection Commission for illegally collecting users' personal information and using the data beyond its specified purposes.

Technology developments and market news

European CEOs lobby against landmark AI Act

A group of 44 chief executives from major European companies - including Airbus, BNP Paribas, Carrefour, and Philips - has called on the Commission to pause the implementation of the AI Act for two years. In an open letter to the Commission President, they argue that the current complex and arguably overlapping regulations threaten Europe's competitiveness in the global AI race. The letter highlights that implementation of the Act could potentially hinder both innovation and the ability of European industries to scale AI adoption.

The AI Act has faced criticism from both industry leaders and entrepreneurs for its strictness. Concerns centre on the risk of fragmented rules across member states which could lead to legal uncertainty and the possibility that compliance burdens will ultimately favour large US tech firms over European start-ups and SMEs. The Code, as reported on above, aims to guide some of the impacted stakeholders ahead of the Act's phased rollout.

Is Microsoft betting on AI amidst mass layoffs?

In an effort to stay within budget and protect its June quarter margins, Microsoft (the "Company") has announced that it will cut its workforce by 4%. The July wave of layoffs will see approximately 9,000 of Microsoft's employees let go, building on the May and June waves in which around 6,000 employees were laid off.

These layoffs coincide with the Company's restructuring and focus on investment in AI technologies, with Microsoft planning around $80 billion of capital expenditure to meet growing AI technology demands and advances. Facebook and Amazon are also streamlining their workforces. Whilst increased costs and an uncertain economic outlook have certainly not bolstered Big Tech's appetite to hire, it seems likely that AI investment has played a part in these decisions to reduce headcount.

OpenAI and Oracle to collaborate on data centres

A recent agreement between OpenAI and Oracle marks one of the largest data centre collaborations in the AI industry, with OpenAI leasing 4.5 gigawatts of computing power from Oracle. This arrangement is part of OpenAI's broader effort to dramatically increase its computing resources for developing advanced AI systems and supporting popular applications like ChatGPT.

To meet its obligations under this agreement, Oracle will construct several new data centres across the United States, with potential locations including Texas, Michigan, and Ohio. The scale of the project is hard to overstate, representing a substantial portion of the US's current data centre capacity. Oracle will also enhance its existing facility in Abilene, Texas, and plans to invest heavily in high-performance hardware, including a large purchase of Nvidia's latest AI chips. More broadly, the US is rapidly expanding its data centre infrastructure to support national security aims and the strategic objectives of major US tech companies, in a clear move to increase reliance on domestic goods and services.

The project could see investments reach up to $500 billion as it expands globally. The new deal is expected to significantly boost Oracle's revenue and market position. This partnership also reflects a shift in OpenAI’s approach to cloud services. After previously relying exclusively on Microsoft, OpenAI has begun working with other providers, including Google and CoreWeave, to ensure it has the necessary infrastructure to support its rapid growth and innovation in AI.