UK’s AI Regulation Under Scrutiny Amid ‘Superpower’ Ambitions

Britain is trying to position itself as the world’s third-largest AI market, behind the United States and China.
British Prime Minister Sir Keir Starmer (R) speaks with researchers and professors during a visit to the Manufacturing Futures Lab at UCL in London on Jan. 13, 2025. Henry Nicholls - WPA Pool/Getty Images
Evgenia Filimianova

The rise of artificial intelligence and the government’s goal of positioning the UK as an AI “superpower” have sparked debate on the approach to regulating the rapidly developing sector.

To date, the UK doesn’t have specific legislation dedicated solely to AI regulation. Instead, the technology is governed under existing laws and sector-specific regulation.

The Data Protection Act 2018 and the GDPR govern the use of personal data by AI systems, while consumer protection laws address potential harms caused by faulty or misleading AI. Applications of AI in finance are regulated under the Financial Services and Markets Act 2000, and the Equality Act 2010 prohibits discrimination in AI-driven decision-making.

The UK’s AI White Paper, published under the previous Conservative government, was one of the first steps towards a dedicated AI regulatory framework. Its principles of AI safety and accountability are intended to guide existing regulators within their respective sectors.

For instance, the Financial Conduct Authority regulates AI use in financial services, while the Medicines and Healthcare products Regulatory Agency oversees AI applications in health care.

The sectoral approach aligns with recommendations from Google, which advocates against a “one size fits all” strategy for AI regulation, emphasising that AI’s multi-purpose nature requires tailored oversight.

Further regulatory guidance comes from the AI Safety Institute (AISI), launched at the AI Safety Summit in November 2023. The AISI conducts research on AI risks and collaborates with major developers, including DeepMind and OpenAI, to address safety and ethical concerns.

In 2023, House of Lords peer Chris Holmes introduced the Artificial Intelligence (Regulation) Bill, which proposed establishing an AI Authority to oversee regulation. The bill, however, failed to gain traction.

Holmes continues to advocate for broader AI legislation, including appointing AI-responsible officers and creating “right-sized” regulations to balance innovation with public safety.

‘Proportionate’ Regulation

The Labour government has pledged to “harness” the power of AI and “strengthen safety frameworks.” Speaking in January, Prime Minister Sir Keir Starmer said that the UK’s approach to AI regulation will “be pro-growth and pro-innovation.”

“Our ambition is not just to be an AI superpower but also make sure that this benefits working people.

“We will test and understand AI before we regulate it to make sure that when we do it, it is proportionate and grounded in the science. But at the same time, we’ll offer the political stability that business needs,” Starmer said.

In its manifesto, the party also vowed to implement binding regulations “on the handful of companies developing the most powerful AI models” and “ban the creation of sexually explicit deepfakes.”
Prime Minister Sir Keir Starmer speaking at University College London (UCL) East in east London on Jan. 13, 2025. Henry Nicholls/PA

The AI Opportunities Action Plan, commissioned by the government and developed by tech entrepreneur Matt Clifford, aims to drive economic growth, improve health care and education, and enhance national security. It produced 50 recommendations, including on AI regulation, all of which the government has adopted.

Key measures include boosting the use of AI in the public sector to enhance efficiency. Regulators will also receive funding to expand their AI capabilities and must publish annual reports detailing how they have supported AI-driven innovation and growth within their sectors.

Readiness of Regulators

The action plan comes amid warnings by leading AI research organisations that many regulators are still in the early stages of adapting to AI.

According to The Alan Turing Institute, key challenges include a lack of knowledge and poor coordination among regulators. The think tank highlighted the need for “new sources of expertise” to help fill regulatory gaps and speed up progress towards AI readiness.

The Ada Lovelace Institute has backed Labour’s growth plans but warned that making regulators focus on growth could undermine their main job of protecting the public and damage their credibility.

“The piloting of AI throughout the public sector will have real-world impacts on people. We look forward to hearing more about how departments will be incentivised to implement these systems safely as they move at pace, and what provisions will enable the timely sharing of what has worked and – crucially – what hasn’t,” said Gaia Marcus, director of the institute.

Public Sector

Science Secretary Peter Kyle has acknowledged that “the vast majority of AI should be regulated” by expert watchdogs and pledged government support to help them evaluate AI capabilities.

Under the adopted Scan-Pilot-Scale approach, the public sector will collaborate with AI vendors and start-ups to anticipate future AI developments. Successful pilot projects, such as those reducing waiting lists or streamlining paperwork to save time and cut costs, will be expanded across organisations.

The Ada Lovelace Institute has also cautioned against rolling out private sector tech into the public sector without “oversight.” It highlighted the Post Office Horizon scandal as a cautionary example.

The institute has called for a national taskforce on AI procurement in local government: a temporary body that would bring together experts from the public and private sectors to create and test practical solutions.

“We shouldn’t wait for a Post Office-style scandal to act. Ministers must build on recent progress and introduce binding laws to address AI risks and ensure effective oversight,” said Michael Birtwistle, associate director at the think tank.

Several challenges complicate the UK’s efforts to regulate AI effectively. These include bias and discrimination in AI systems, as well as concerns over job displacement and data privacy. However, proponents of AI advancement argue that regulations should protect the public without hindering progress.

According to Clifford, the AI action plan “offers opportunities we can’t let slip through our fingers,” stressing the potential of AI to boost productivity.

Estimating AI’s potential to increase bureaucratic productivity, The Alan Turing Institute found that AI could automate up to 84 percent of the 143 million government transactions involved in delivering approximately 400 services annually.
U.S. President Donald Trump speaks during a news conference in the Roosevelt Room of the White House in Washington on Jan. 21, 2025. Trump announced an investment in artificial intelligence infrastructure and took questions on a range of topics, including his presidential pardons of Jan. 6 defendants, the war in Ukraine, and cryptocurrencies. Andrew Harnik/Getty Images

Global Approaches

The UK’s decentralised approach to AI regulation contrasts with that taken by the European Union. The EU’s AI Act, which entered into force on Aug. 1, 2024, addresses risks to health, safety, and fundamental rights while setting clear rules for AI developers and users.

It creates one system for all EU countries, dividing AI into four risk levels: unacceptable, high, limited, and minimal. High-risk systems, like those in critical infrastructure or hiring, face strict requirements, including safety checks and documentation.

The United States lacks a comprehensive AI act and instead employs fragmented policies to enhance innovation and manage risks.

A new executive order, signed in January under the Trump administration, revoked Biden-era directives perceived as restrictive to AI innovation, including measures associated with the AI Bill of Rights.

This policy mandates the creation of a comprehensive AI advancement action plan within 180 days. Federal agencies will review existing regulations related to AI to identify those inconsistent with the new policy goals.
Evgenia Filimianova
Author
Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.