Neural Network – October 2025

In this edition of the Neural Network, we look at key AI developments from September and October.
In regulatory and government updates, the European Commission has announced new AI strategies; consultations have been launched in the EU on digital simplification and on serious incident guidance under the EU Artificial Intelligence Act (Regulation (EU) 2024/1689) (“EU AI Act”); a new Californian AI safety law has been announced; Ireland has become a pioneer in the EU AI Act rollout; and Italy has announced a new AI framework.
In AI enforcement and litigation news, the ICO wins its counter-appeal against Clearview AI and GEMA goes head-to-head with OpenAI in a groundbreaking generative AI music licensing case.
In technology developments and market news, Nscale AI data centres raise $1.1 billion and China has been developing AI for use in transportation.
More details on each of these developments are set out below.
Regulatory and government updates
- European Commission’s new AI strategies
- Consultation on digital simplification announced by EU
- EU AI Act Serious Incident Guidance Consultation launched
- New Californian AI safety law announced
- Ireland becomes pioneer in EU AI Act rollout
- Italy announces new AI framework
Enforcement and civil litigation
- ICO wins counter-appeal against Clearview AI
- Hearing begins for gen-AI music licensing case
Technology developments and market news
- Nscale AI data centres raise $1.1 billion
- China developing AI for use in transportation
Regulatory and government updates
European Commission’s new AI strategies
The European Commission (“Commission”) has announced two complementary strategies this month to accelerate AI across EU industry and science. The “Apply AI Strategy” is the EU’s overarching sectoral AI strategy, focusing on deploying AI in key sectors such as healthcare, energy, mobility, manufacturing, and public services.
In parallel, the Commission’s “AI in Science Strategy” aims to position Europe at the forefront of AI-driven research and scientific innovation by supporting the development and use of AI by the European scientific community. At its core is the Resource for AI Science in Europe (RAISE), a virtual European institute that aims to pool expertise and coordinate AI resources, which will be launched in November 2025.
Collectively, these measures are designed to enhance competitiveness and promote trustworthy AI, supporting the EU’s "AI Continent Action Plan", which we covered in April’s edition. The proposed Data Union Strategy, a future initiative aimed at ensuring the availability of high-quality, large-scale datasets for training AI models, is expected to join these measures later this month.
Consultation on digital simplification announced by EU
On 16 September 2025, the Commission launched a short public consultation on its omnibus proposal for digital simplification, which aims to simplify digital regulations for businesses, with a particular focus on the recently enacted EU AI Act. The consultation comes amid increasing pressure from business and lobby groups for a pause to the implementation of the EU AI Act, which the Commission is considering. The digital simplification initiative seeks to clarify and optimise the application of the EU AI Act, which entered into force on 1 August 2024 and will be fully applicable from 2 August 2026, subject to certain exceptions, including the rules for high-risk AI systems, which have an extended transition period until August 2027.
The consultation, which closed on 14 October 2025, is part of the Commission’s broader Omnibus IV Simplification package.
EU AI Act Serious Incident Guidance Consultation launched
The Commission also launched a public consultation on 26 September 2025 on draft guidance and a reporting template for serious AI incidents under the EU AI Act. This initiative is designed to help providers of “high-risk AI systems” (which may include general-purpose AI models as we covered in August's edition) comply with upcoming mandatory reporting requirements under Article 73 of the EU AI Act, which will take effect from 2 August 2026. The Commission states that Article 73 is intended to support early risk detection, improve accountability, enable prompt intervention, and build public trust in AI technologies.
Key aspects of the draft guidance include:
- Definitions in the EU AI Act: The guidance explains key terms related to serious AI incidents and outlines the associated reporting responsibilities.
- Illustrative scenarios: Practical examples are included to demonstrate when and how incidents should be reported, such as cases involving misclassifications, notable drops in accuracy, interruption of AI systems, or unexpected AI behaviour.
- Reporting requirements and timelines: The guidance details the specific obligations and deadlines for various stakeholders, including providers and deployers of high-risk AI systems, providers of general-purpose AI models with systemic risk, market surveillance authorities, national competent authorities, the Commission, and the AI Board.
- Interplay with existing laws: The guidance clarifies how these AI-specific requirements align with other legislative frameworks and reporting requirements, such as the Critical Entities Resilience Directive, the NIS2 Directive, and the Digital Operational Resilience Act.
- International alignment: The guidance aims to harmonise reporting practices with international reporting regimes, including the AI Incidents Monitor and Common Reporting Framework of the Organisation for Economic Co-operation and Development.
The consultation is open until 7 November 2025 for stakeholders to review the draft guidance and reporting template and provide feedback.
New Californian AI safety law announced
On 29 September 2025, the Governor of California, Gavin Newsom, signed the Transparency in Frontier Artificial Intelligence Act (“TFAIA”). This law will come into effect on 1 January 2026.
In the absence of comprehensive federal legislation that specifically governs AI, a growing number of individual states, including California and Illinois, have started implementing their own AI rules and regulations focused on issues such as algorithmic transparency, biometric data, and consumer protection.
Newsom has suggested that the transparency-led TFAIA will act as a blueprint for other US states when it comes to developing AI legislation.
The TFAIA will apply to AI developers that create and train frontier models (e.g. LLM developers), with an additional set of rules for “large frontier developers”, being those with annual gross revenue of more than $500 million. The TFAIA does not explicitly restrict its scope to California-based developers. It mandates disclosure and documentation requirements for large frontier developers, requiring them to record their safety measures and report safety incidents to the California Office of Emergency Services.
Ireland becomes pioneer in EU AI Act rollout
On 16 September 2025, Ireland’s Department of Enterprise, Tourism and Employment reached a significant milestone in implementing the EU AI Act by establishing a single central coordinating authority and designating a further seven national competent authorities to enforce the EU AI Act.
Article 70 of the EU AI Act mandates that every EU Member State must appoint at least one notifying authority and one market surveillance authority to oversee AI regulation. These authorities are required to operate independently, impartially, and without bias and must be equipped with sufficient technical, financial, and human resources, along with the necessary infrastructure, to effectively carry out their duties under the EU AI Act.
Ireland has implemented a distributed regulatory framework, with 15 regulatory bodies as competent authorities making up the National AI Implementation Committee, supported by a central authority to coordinate certain centralised functions. The 15 competent authorities, which met for the first time on 16 September 2025, will each oversee the application of the EU AI Act within its specific area of responsibility.
Ireland will also establish a new body, the National AI Office, by 2 August 2026 to ensure consistent and effective implementation of the EU AI Act. This body will have four critical functions:
- coordinate activities of the competent authorities to ensure consistent implementation of the EU AI Act;
- serve as Ireland’s single point of contact under the EU AI Act;
- facilitate centralised access to technical expertise by the other competent authorities; and
- drive AI innovation and adoption through the hosting of a regulatory sandbox.
Meanwhile, an interim single point of contact has been established within the Department of Enterprise, Tourism and Employment to coordinate activities among Irish regulators and act as a liaison with the public, the Commission, and other key stakeholders. Out of the 27 Member States, Ireland is one of only seven that have established a single point of contact to date.
Italy announces new AI framework
On 10 October 2025, Italy’s national AI framework (Bill S. 1146-B) (the “Italian AI Law”) entered into force, after being signed into law last month. Intended to complement the EU AI Act, the Italian AI Law makes Italy the first EU country to implement its own national AI laws. The Italian AI Law, which aims to ensure “human‑centric, transparent, and safe AI use”, introduces sector‑specific rules for areas deemed high risk, establishes safeguards for minors, and sets out governance and enforcement mechanisms. These include a new provision for mandatory AI age verification, ensuring that children under 14 can only access AI with parental consent.
Copyright protections are also clarified in the new law, in particular its assertion that works created ‘with genuine human intellectual effort using AI assistance’ are eligible for protection.
The Italian AI Law is aligned with the EU AI Act’s position on text and data mining (“TDM”), but introduces targeted amendments to confirm that the TDM exceptions under EU law cover the ‘development and training’ of generative AI models: subject to lawful access in accordance with copyright law and the owner’s opt‑out rights, the reproduction and extraction of lawfully accessible works for TDM purposes, via the use of AI models, is permitted. It also amends Italian copyright law by attaching criminal liability to unlawful TDM, elevating what was previously solely a civil liability.
Finally, tougher penalties are also imposed under the Italian AI Law, including prison sentences of up to five years, on those who unlawfully distribute harmful AI-generated content (including deepfakes), with increased penalties where crimes such as fraud and money laundering are committed using AI.
Enforcement and civil litigation
ICO wins counter-appeal against Clearview AI
The UK Upper Tribunal (“Upper Tribunal”) has confirmed that the ICO can enforce the UK GDPR against Clearview AI, a US facial recognition company that scraped images of UK residents from social media and other websites. The scraping for model training purposes was deemed to constitute monitoring for which Clearview AI was responsible, even though the monitoring itself was undertaken by Clearview AI’s customers. The ruling overturns the First-tier Tribunal’s decision that the ICO lacked territorial jurisdiction to issue a £7.5 million fine to Clearview in May 2022.
The case has been remitted to the First-tier Tribunal to reconsider the appeal now that it is confirmed the ICO had jurisdiction to issue the fine.
Hearing begins for gen-AI music licensing case
On 29 September 2025, the Regional Court of Munich heard a landmark case between GEMA, the German music rights organisation, and OpenAI. GEMA claims that OpenAI used copyrighted music from its members to train its generative AI models without proper licences. This is the first case in Germany, and one of the first in Europe, to address whether AI companies need to obtain licences to use copyrighted works in AI training.
GEMA initially filed a lawsuit against OpenAI on 13 November 2024, alleging that OpenAI used works from its repertoire, comprising millions of musical pieces, without obtaining the necessary licences. The organisation argues that this unauthorised use infringes the rights of its members, who are composers, lyricists, and music publishers. GEMA is seeking both financial compensation and transparency, demanding that OpenAI disclose which specific works were used to train its AI models.
A common point raised in public debate is that, unlike a CD-ROM, where works are stored for direct playback, the training data, including the copyrighted songs, is not physically embedded in the AI model as discrete, retrievable files. OpenAI argues that it meets its legal obligations through various measures designed to prevent copyright infringement and is not required to pay licence fees to GEMA.
The case is significant because it could set a precedent for how AI companies across Europe handle copyrighted material. If the court sides with GEMA, AI developers may be required to negotiate licences with rights holders, potentially increasing the cost and complexity of developing generative AI models. The outcome could also influence future legislation and the broader debate over copyright and AI.
The Munich court’s decision on 11 November is expected to have far-reaching implications, not just for the music industry, but for all sectors where AI training relies on copyrighted content.
Technology developments and market news
Nscale AI data centres raise $1.1 billion
Cloud computing start-up Nscale, a UK-based data centre group, has raised $1.1 billion to accelerate the expansion of its AI data centres across Europe, the US, and the Middle East, in the latest example of large-scale financing of AI infrastructure and a sign of surging demand for AI services. The funding round was led by Norway’s Aker, which is investing $285 million for a 9.3% stake, valuing Nscale’s equity at around $3 billion and making this one of the largest ever early-stage financings for a European tech company at this stage in its development.
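As a quick sanity check (our own back-of-the-envelope arithmetic, not figures from the announcement), the reported valuation follows directly from the size of Aker’s stake: if $285 million buys 9.3% of the company, the implied post-money equity value V satisfies 0.093 × V = $285m, so V = $285m ÷ 0.093 ≈ $3.06bn, i.e. roughly the $3 billion reported.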
Nscale is already building a major AI data centre in Narvik with Aker, for customers including OpenAI and Microsoft. It recently announced a $6.2 billion, five-year contract giving Microsoft access to the Narvik facility, and separate deals to serve OpenAI. The company positions itself as a “sovereign” AI infrastructure provider, offering local, compliant data processing for corporates and governments.
China developing AI for use in transportation
China has announced a comprehensive plan to accelerate the adoption of AI in its transportation sector, setting clear targets for 2027 and 2030. Issued by seven key government agencies, including the Ministry of Transport and the Ministry of Industry and Information Technology, the strategy forms part of the broader “AI Plus” initiative, which aims to drive sector-specific AI innovation. The plan is in line with Chinese regulators’ aim of focusing on and promoting domestic AI innovation and development, highlighted by the ban on Nvidia AI chips over the summer, which our September publication covered in more detail.
A central feature of the plan is the development of “transportation large models”: advanced AI platforms designed to integrate multiple modes of transport, infrastructure, and services, extending to ‘smart railways’ and ‘intelligent driving’. The government intends to boost computing capacity at strategic locations such as highways, ports, and transport hubs, and to establish a national transport information platform to facilitate data sharing.
The plan places strong emphasis on governance, calling for new legal frameworks, stricter compliance with data and network security standards, and robust ethical reviews. Draft safety guidelines and technical standards are being prepared, with a public consultation expected in the near future.