Neural Network - June 2025

In this edition of the Neural Network, we look at key AI developments from May and June.

In regulatory and government updates, the UK's Data (Use and Access) Bill is set to receive Royal Assent; the European Commission has published an AI Literacy Q&A; guidelines for an AI impact assessment standard have been published; Meta is filing an AI chatbot risk assessment pursuant to its obligations under the Digital Services Act; and Trump's "big beautiful bill" stirs debate in relation to AI.

In AI enforcement and litigation news, OpenAI has continued to pursue its counterclaim against Elon Musk in relation to OpenAI restructuring into a for-profit organisation.

In technology developments and market news, Meta has made a deal to buy nuclear output for 20 years; UK Government departments have reported that civil servants have saved two weeks of time per year by using AI; and China is making the switch from Nvidia chips.

More details on each of these developments are set out below.

Regulatory and Government updates

Enforcement and Civil Litigation

Technology development and market news

Regulatory and Government updates

Data (Use and Access) Bill to receive Royal Assent

The Data (Use and Access) Bill (the "DUA Bill") has finally passed through Parliament and is awaiting Royal Assent.

In the most recent edition of Neural Network, we reported on the "ping-pong" of amendments about the use of copyright materials to train artificial intelligence ("AI"). Since the publication of that article, there have been several more iterations of this amendment proposed by the House of Lords – each one blocked yet again by the House of Commons. The Lords continued to insist on the amendment, with their most recent reason merely being "because the Lords wish the Commons to consider the matter again".

While the transparency amendments on AI and copyright ultimately failed, a compromise was reached under which the Government agreed to publish a report on its copyright and AI proposals (the "Report") within nine months of the DUA Bill being passed. In addition, the Secretary of State must make a "progress statement" to Parliament on the publication of an economic impact assessment and the Report within six months of the DUA Bill being passed.

The dispute over this amendment once again highlights the tension between fostering innovation in AI and safeguarding the rights of content creators. In seeing the DUA Bill through Parliament without the amendment, the Government appears to have sided with the tech sector's call for open data over the creative industry's demand for protections. The creative industry will be hoping that its concerns around copyright protection are addressed through incoming AI legislation.

The most recent article in our DUA Bill series can be found here. Further details on the soon-to-be enacted DUA Bill will be included in our next Data Protection bulletin.

European Commission publishes AI Literacy Q&A

On 7 May 2025, the European Commission (the "Commission") released a Q&A document clarifying the AI literacy requirements under Article 4 of the AI Act. Article 4 requires providers and deployers of AI systems to ensure a sufficient level of AI literacy amongst their staff (including contractors, employees, and service providers) and others dealing with AI systems on their behalf, taking into account their technical knowledge, experience, education, and training, as well as the context in which the AI systems are used.

While the AI literacy obligation has applied since 2 February 2025, when the first provisions of the AI Act became applicable, full supervision and enforcement by national market surveillance authorities is only scheduled to begin on 3 August 2026.

Importantly, the Q&A notes that there is no one-size-fits-all approach to AI literacy, and the Commission does not intend to impose mandatory training on companies. Instead, what is required of each company depends on its context. In addition, no specific governance structure (for example, a designated AI officer) is mandated in order to comply with Article 4 of the AI Act.

The Commission will publish guidelines on complying with Article 4 in due course.

Guidelines for AI impact assessment standard published

The International Organisation for Standardisation ("ISO") has published guidance (the "Guidance") introducing a set of guidelines for organisations to use when conducting assessments of the impact of AI systems. The Guidance is relevant to any organisation developing, providing or using AI systems that wants to assess and manage the potential impacts of those systems on people and society, and can be used by organisations of any size or industry.

The Guidance recommends assessing impacts throughout the entire AI lifecycle — from design and development to deployment and post-market monitoring — and updating assessments as necessary. The ISO has created this standard to set a precedent in relation to AI development with the aim of supporting governance and risk management practices. The Guidance was created to sit alongside and complement other AI standards published by the ISO, focusing specifically on the human impacts of AI.

By promoting structured impact assessments, the standard supports transparency and accountability by organisations. It also encourages alignment with values such as safety and human-centred design, reinforcing the need for ethical AI development.

You can read the Guidance and press release here.

Meta to file AI chatbot risk assessment

Meta Platforms ("Meta") is expected to submit a risk assessment for its AI feature under the European Union's Digital Services Act ("DSA") soon, according to the Commission. Despite releasing its AI chatbot, Meta AI, in March, Meta has yet to provide the required assessment. The delay follows regulatory concerns over data protection in relation to the AI chatbot, and it is still unclear whether the pending risk assessment will lead to any regulatory action under the DSA.

The DSA mandates that very large online platforms, like Meta's Facebook and Instagram, conduct risk assessments (i) annually; and (ii) whenever new features are introduced that could alter their risk profile. While the DSA does not directly regulate standalone AI systems, it covers them when integrated into platforms, as with Meta AI, and the Commission may take regulatory action based on the assessment. Meta AI is integrated into Facebook, Instagram, and WhatsApp and uses large language models requiring extensive data processing.

Recently, Meta began training its AI using public EU user data, citing legitimate interest under the General Data Protection Regulation ("GDPR"). Despite concerns raised by consumer protection groups, both a German High Court and the Irish Data Protection Commission deemed Meta's actions to be lawful.

Trump's "big beautiful bill" stirs debate in relation to AI

A Republican proposal to ban State governments from regulating AI for ten years has ignited debate among lawmakers, tech leaders, and State officials. The measure would prevent States from enacting AI laws, favouring a uniform federal approach.

Tech industry leaders like OpenAI's Sam Altman and Microsoft's Brad Smith support a single national framework, arguing that inconsistent State regulations would hinder innovation. However, certain critics, including multiple State attorneys general and lawmakers, warn that such a ban would strip States of their ability to protect citizens from AI risks, likely creating a regulatory gap for the duration of the ten-year moratorium.

Anthropic CEO Dario Amodei called the moratorium "too blunt", advocating instead for the White House and Congress to work together on a transparency standard requiring AI companies to disclose emerging risks. The controversy highlights the tension between the desire for uniformity and the need for local oversight, with critics arguing that Congress should establish robust safeguards at the Federal level, gradually if necessary, rather than simply imposing a blanket ban on State action.

Enforcement and Civil Litigation

OpenAI strives to maintain counterclaim against Elon Musk

We previously reported that Elon Musk is suing OpenAI, accusing the company of betraying its original aim, with Musk asserting that OpenAI's founding mission "to develop AI for the good of humanity" has now been disregarded. However, OpenAI is mounting a strong defence, standing firm on the allegations raised in its counterclaim against Musk, which accuses him of engaging in fraudulent business practices under Californian law.

OpenAI argues that the $97.4 billion takeover bid for the company earlier this year from a Musk-led consortium was a "sham bid", alleging that the bid was leaked to the media before proper consideration by OpenAI's board in order to drum up a media frenzy. OpenAI wants this counterclaim to progress alongside the main legal proceedings, rather than being put on hold until other issues are decided.

In addition, OpenAI cited a pattern of harassment by Musk in its counterclaim. Musk's legal team are pressing ahead with his original claim and have asked for OpenAI's counterclaims to be either dismissed or put on hold until a later stage of the case.

Technology development and market news

Meta makes trailblazing deal to buy nuclear output for 20 years

Meta has entered a significant two-decade agreement to purchase energy from the Clinton Clean Energy Centre in Illinois, a nuclear plant, starting in June 2027. The deal is between Meta and the largest operator of conventional nuclear reactors in the US and will extend the plant's operation beyond the expiration of Illinois state subsidies. This marks Meta's first venture into nuclear energy procurement, aimed at supporting its AI initiatives and data centres in the United States.

The agreement is part of a broader trend among companies including Amazon, Google, and Microsoft to secure substantial electricity supplies to meet the growing demands of AI technology. Meta's previous plans for a similar deal were delayed due to environmental and regulatory issues. Although financial specifics of the Meta deal remain undisclosed, it is reported the partnership will generate $13.5 million in annual tax revenue.

President Donald Trump's recent executive order to expedite reactor construction and upgrade existing facilities further supports the US nuclear sector's growth. Similarly, across the Atlantic, the UK government has recently committed £14.2 billion to the development of the Sizewell C nuclear power station with the aim of meeting the UK's electricity demand.

AI saves two weeks per year for civil servants

A UK government trial found that civil servants using Microsoft’s Copilot AI assistant for administrative tasks saved about two weeks of working time per year, with staff reporting an average daily saving of close to half an hour. Over 20,000 officials participated in the trial, using the tool to draft documents, summarise meetings, and prepare reports. Most respondents were satisfied, with 82% wanting to continue using AI tools.

The government aims to modernise the public sector and achieve £45 billion in cost savings through digital services and AI, though some experts caution that AI tools are still works in progress, emphasising the need for human oversight. A separate report from the Alan Turing Institute found that up to 41% of public sector tasks could be supported by AI, especially in education.

However, the risks and controversies of AI must not be understated: not all users found the AI tools helpful, and past uses of AI in government, such as predictive policing and fraud detection, have led to criticism and unintended negative consequences, including bias and wrongful penalties.

China is making the switch from Nvidia

US restrictions have accelerated contingency planning and prompted tech companies in China to explore domestically made alternatives. China's leading tech companies, including Alibaba, Tencent, and Baidu, are transitioning their AI development to domestic chips due to dwindling Nvidia processor supplies and tightening US export controls. With existing Nvidia stock expected to last only until early 2026, these companies are in the process of developing alternative semiconductors to meet increasing AI demands.

Nvidia's upcoming chip for China, expected in July, will lack high-bandwidth memory, which is crucial for processing large data volumes. The transition to domestic alternatives involves significant costs and disruptions, with companies adopting a hybrid approach: using existing Nvidia chips for AI training and local processors for inference. Huawei is increasing production capacity to try to meet demand, while other Chinese chipmakers such as Cambricon and Hygon are also being tested and will compete to become the country's national chip.

We previously reported that the Taiwan Semiconductor Manufacturing Company is set to build five additional chip facilities in the US in the coming years, helping to secure a reliable domestically manufactured chip supply for the US market.