Neural Network - September 2025

In this edition of the Neural Network, we look at key AI developments from August and September.

In regulatory and government updates, the German and French cybersecurity authorities have jointly issued “Design Principles” in an effort to strengthen AI security, and the FCA has launched an AI Innovation Lab to advance responsible AI within the financial services sector.

In AI enforcement and litigation news, Elon Musk’s xAI has sued OpenAI and Apple over OpenAI’s alleged departure from its not-for-profit mission, and Australian regulators have issued a formal warning to a UK-based tech company for operating exploitative AI “nudify” services.

In technology developments and market news, China is imposing bans and limitations on tech firms’ use of Nvidia chips, and AI ‘chatbots’ are transforming the eCommerce space.

More details on each of these developments are set out below. 

Regulatory and government updates

Strengthening AI security: design principles issued by German and French cybersecurity authorities

On 11 August 2025, Germany’s Federal Office for Information Security (“BSI”) and France’s Cybersecurity Agency (“ANSSI”) jointly published the paper “Design Principles for LLM-based Systems with Zero Trust.” This document offers essential guidance for organisations deploying Large Language Model (“LLM”) systems, focusing on AI-specific security challenges and risk management.

The paper identifies typical risks, such as data leaks, prompt injection attacks and unauthorised access, and recommends practical countermeasures. In light of these identified risks, the paper introduces the Zero Trust approach, which is built on three central pillars:

  1. Authentication and authorisation: Each individual user, device and component must be authenticated and authorised before any interaction can occur.
  2. Principle of least privilege: Permissions are to be assigned at a highly granular level, with resources segmented into smaller units.
  3. Threat intelligence and awareness: Human oversight remains essential, and all networks should be treated as untrusted. Ongoing risk assessments and threat modelling are required.

The guidance emphasises that Zero Trust measures must be extended for LLM systems to address AI-specific risks. This includes securing model weights and training data, auditing inputs and outputs for anomalies, and deploying robust defences against evasion, poisoning, and privacy attacks.
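
By way of illustration only, the short Python sketch below shows one way these three pillars might be applied to a single LLM tool call. It is a minimal outline under our own assumptions, not an implementation drawn from the BSI/ANSSI paper: the names (User, PERMISSIONS, INJECTION_MARKERS, guarded_llm_call) are hypothetical, and the simple phrase-matching screen stands in for the classifier-based input and output auditing a production system would require.

# Minimal, hypothetical sketch of a Zero Trust gate around an LLM call.
import hashlib
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-audit")

@dataclass
class User:
    user_id: str
    token: str  # a real deployment would use mTLS or OAuth, not shared secrets

# Pillar 1: only callers presenting a known credential are authenticated.
VALID_TOKENS = {"alice": hashlib.sha256(b"alice-secret").hexdigest()}

# Pillar 2: least privilege -- permissions granted per user, per action.
PERMISSIONS = {"alice": {"summarise"}}

# Pillar 3: treat all input as untrusted; this crude phrase screen is a
# placeholder for proper anomaly detection on inputs and outputs.
INJECTION_MARKERS = ("ignore previous instructions", "reveal your system prompt")

def guarded_llm_call(user: User, action: str, prompt: str) -> str:
    if VALID_TOKENS.get(user.user_id) != user.token:
        raise PermissionError("authentication failed")             # pillar 1
    if action not in PERMISSIONS.get(user.user_id, set()):
        raise PermissionError(f"{user.user_id} may not {action}")  # pillar 2
    if any(m in prompt.lower() for m in INJECTION_MARKERS):
        log.warning("possible prompt injection from %s", user.user_id)
        raise ValueError("input rejected by injection screen")     # pillar 3
    log.info("audit user=%s action=%s prompt=%r", user.user_id, action, prompt)
    return f"[model response to {action!r}]"  # stand-in for a real model call

alice = User("alice", hashlib.sha256(b"alice-secret").hexdigest())
print(guarded_llm_call(alice, "summarise", "Summarise this contract."))

In this sketch, an authenticated user holding the relevant permission receives a response, while any unknown credential, unauthorised action or flagged prompt is refused and logged for later threat analysis.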

FCA launches AI innovation lab to advance responsible AI in financial services

The FCA has launched the "AI Lab" to enable the responsible use of AI by UK financial services firms whilst driving growth and innovation in the sector. The AI Lab will also assist the FCA in understanding the risks and opportunities AI presents to UK consumers and markets, which will inform how it approaches its regulatory duties.

The FCA has now launched the first components of the AI Lab, which are designed to support the safe and responsible adoption of AI in financial services. Through these, the FCA offers firms guidance, collaborative opportunities and supervised live testing to ensure innovations meet regulatory standards and protect consumers.

The components of the AI Lab include:

Supercharged Sandbox: The FCA has collaborated with Nvidia to provide firms with access to computing capabilities, enhanced datasets and more advanced tooling. Applications for this programme are open to any financial services firm looking to experiment with AI.

AI Live Testing: The FCA provides a space for firms to test AI systems in real-world conditions with appropriate regulatory oversight. This is aimed at firms that are further along in the AI development process.

AI Spotlight: Projects accepted onto this programme will provide real-world insight into how firms are experimenting with AI in financial services. Applications to join this programme remain open.

AI Sprint: The FCA brings together industry, academics, regulators, technologists and consumer representatives to provide feedback on its regulatory approach to AI.

AI Input Zone: The FCA encourages stakeholders to provide feedback on the future of AI in the UK financial services market. Stakeholders are asked to consider which AI tools could prove most transformative in financial services, the barriers to adopting them, and the role of regulators. The FCA is currently considering responses and will provide an update in due course.

Enforcement and litigation

Elon Musk’s xAI sues OpenAI and Apple over alleged shift toward profit-driven AI

Elon Musk’s artificial intelligence company, xAI, has commenced legal action against OpenAI and its strategic partner, Apple, alleging a significant deviation from OpenAI’s original non-profit mission to develop AI for the benefit of humanity. The claim, filed in August 2025, contends that OpenAI’s recent commercial partnerships, notably the integration of its generative AI models into Apple’s consumer devices, represent a shift towards profit-driven objectives at the expense of its original commitments to transparency and the public interest.

From a legal standpoint, xAI asserts that OpenAI’s conduct breaches the founding principles and commitments made to the public and its stakeholders, particularly concerning the open dissemination of advanced AI technologies. The lawsuit further alleges that Apple’s involvement exacerbates the situation by embedding proprietary AI solutions within a closed ecosystem, thereby restricting broader access and potentially stifling competition and innovation.

xAI seeks judicial intervention to compel OpenAI and Apple to realign their activities with the original ethos of openness and collective benefit, as well as to ensure greater accountability in the stewardship of transformative AI technologies. This litigation underscores the increasingly complex interplay between commercial imperatives and ethical responsibilities in the rapidly evolving AI sector, and signals heightened scrutiny of how foundational AI technologies are governed and commercialised.

Australian regulator issues formal warning to UK-based tech company amid heightened online safety enforcement

The Australian eSafety Commissioner has issued a formal warning to a UK-based technology company operating prominent “nudify” services, which use AI to generate nude images, including images of minors, from uploaded photographs. This regulatory action, grounded in Australia’s Online Safety Act 2021, underscores Australia’s increasingly assertive stance against the proliferation of harmful digital technologies, particularly those facilitating image-based abuse and the creation of child sexual exploitation material.

The Commissioner’s statement highlights that these services, which attract significant Australian traffic, have been implicated in the generation of explicit deepfake images of school-aged children. The regulator, mindful of not amplifying the company’s profile, has refrained from public identification but has made clear its willingness to pursue substantial civil penalties of up to A$49.5 million should non-compliance persist.

This enforcement activity coincides with broader governmental initiatives to curtail access to abusive technologies and follows similar regulatory scrutiny in the UK. It also occurs against a backdrop of ongoing legal challenges to the eSafety Commissioner’s powers, including high-profile disputes with social media platforms over compliance obligations.

The Australian government’s position is unequivocal: technology companies must proactively prevent the misuse of their platforms for exploitative purposes, and regulators will not hesitate to deploy the full spectrum of enforcement tools to protect vulnerable users, particularly minors, in the evolving digital landscape.

Technology and market news

China imposes bans and limits on tech firms’ use of Nvidia chips in push for semiconductor independence

Nvidia has instructed its suppliers, including Samsung Electronics and Amkor Technology, to halt production of its H20 AI chip. This reportedly follows pressure from Beijing on Chinese tech giants, including Alibaba and ByteDance, to suspend their purchase and use of the H20 chip due to security concerns.

Recently, U.S. authorities allowed Nvidia and AMD to resume AI chip sales to China, provided they pay 15% of their China-generated chip revenue to the U.S. government. However, China’s Ministry of Industry and Information Technology has asked major tech firms to justify their need for Nvidia’s H20 chips over domestic alternatives, prompting some companies to reduce or suspend orders. The Cyberspace Administration of China (“CAC”) has also directed a halt to new purchases pending a security investigation.

Chinese state media have criticised Nvidia’s chips as “neither advanced, nor safe,” and called for convincing proof of their security. This regulatory pushback aligns with China’s broader strategy to promote homegrown technology, boosting companies like Huawei and SMIC, whose shares have risen on expectations of increased demand for local chips.

The CAC has further ordered tech companies to stop testing, and to halt all orders of, another AI chip, the RTX 6000D, which Nvidia has been developing specifically for the Chinese market. Despite indications of large-scale orders, production and sales of the chip have been ordered to stop. The ban reflects Chinese regulators’ push to prioritise domestically developed AI chips as China competes with the US in AI development.

This situation underscores the growing geopolitical and cybersecurity risks in the global AI chip market, as well as the increasing emphasis on supply chain resilience and regulatory compliance.

AI 'chatbots' are transforming the eCommerce space

The rise of AI "agents" is set to transform the eCommerce sector. Leading tech companies like OpenAI, Google, Microsoft, and Perplexity have in recent months developed AI features that can search for products, add them to the consumer's shopping basket and hand control back to the consumer to complete the purchase.

This has led brands to rethink how they market their products. Brands are now adopting new methods to optimise their online presence for AI discovery: applying Search Engine Optimisation ("SEO") techniques, writing more precise product descriptions and ensuring their websites load within three seconds (as chatbots prioritise sites that load quickly). Retailers are also noticing that consumers using AI chatbots search in broader terms rather than naming a specific fashion item, so brands must adapt their product descriptions to match this style of searching.

The impact of AI on consumer shopping habits stems from the increasing use of AI as a search tool. According to search engine marketing firm Semrush, almost 60% of European Google searches no longer result in a "click"; instead, users rely on an AI-generated "overview" to answer their query.

Brands need to be prepared for a world where consumers complete their purchases through AI chatbots rather than on brands' own websites. The use of AI chatbots also raises concerns around data privacy and freedom of choice, although consumers may well be willing to trade these for improved efficiency.

Our recent publications

The previous edition of the Neural Network can be found here.