Advocates Urge Government to Prioritise AI Safety in Growth Agenda

The government’s new AI action plan includes proposals for using AI to reduce administrative burdens for public sector workers and improve service delivery.
Prime Minister Sir Keir Starmer speaking at University College London East in east London on Jan. 13, 2025. Henry Nicholls/PA
Evgenia Filimianova

AI safety advocates have urged the government to include a roadmap for tackling broader AI harms in its plans to “harness” the technology for economic growth and better public services.

A UK think tank focussed on the ethical and responsible use of AI said that AI pilots across the public sector should be implemented safely and with input from the public.

In its response to the government’s AI Opportunities Action Plan, published on Monday, the Ada Lovelace Institute said the plan “will require careful implementation to succeed.”

The action plan includes proposals for greater use of AI to enable public sector workers to spend less time performing administrative tasks and more time delivering services.

A series of AI “growth zones” will be set up around the UK to speed up AI development by simplifying planning rules and providing support for building data centres and AI infrastructure.

The government also plans to build an AI supercomputer and increase computing capacity twentyfold by 2030.

Ministers have adopted all 50 recommendations from a plan developed by tech entrepreneur Matt Clifford, who was commissioned by Science Secretary Peter Kyle in July to identify AI opportunities.

Introducing the plan in his speech in east London on Monday, Prime Minister Sir Keir Starmer said that AI “will transform the lives of working people for the better.”
He acknowledged there will be “teething problems” but stressed that AI can work for “everyone in our country,” from teachers and healthcare professionals to public sector workers, by providing support and creating wealth.

Public Confidence

The plan commits funding to regulators “to scale up their AI capabilities,” with the budgets to be monitored through the Spending Review.

All regulators will be required to publish annual reports on how they have “enabled innovation and growth driven by AI in their sector.”

Gaia Marcus, director of the Ada Lovelace Institute, supported the government’s growth plans but stressed the need for public trust.

She warned that making regulators focus on growth could undermine their main job of protecting the public and damage their credibility.

Marcus also noted that the public holds strong and nuanced opinions about how their data is used, particularly in areas like health.

“In light of past backlash against medical data sharing, the Government must continue to think carefully about the circumstances under which this kind of sharing will be acceptable to the public. Greater public engagement and deliberation will help in understanding their views.

“The piloting of AI throughout the public sector will have real-world impacts on people. We look forward to hearing more about how departments will be incentivised to implement these systems safely as they move at pace, and what provisions will enable the timely sharing of what has worked and—crucially—what hasn’t,” Marcus said.

She called for a credible plan to protect the public by tackling broader AI harms, beyond a narrow focus on extreme risks.

Funding and Regulation

The AI Opportunities Action Plan is backed by leading tech firms, three of which have committed £14 billion to various projects, creating 13,250 jobs across the UK, the government said.

Additionally, £25 billion in investment, announced at the International Investment Summit in October, will fund the development of data centres to advance AI.

The action plan, placed at the heart of the government’s Industrial Strategy, has been hailed by senior Labour ministers.

Kyle said it will propel Britain in the global race for AI, while Chancellor Rachel Reeves said it would put more money in the pockets of working people.

Britain’s approach to AI regulation does not involve new legislation specific to AI.

In contrast to the EU, which has taken a more protective approach to tech regulation, the UK and the United States favour sector-based oversight and self-regulation for AI.

The action plan noted that ineffective regulation could “hold back” AI adoption in crucial sectors such as medicine, and called for safety and assurance throughout the process.

Last year, the London-based think tank the Centre for Long-Term Resilience (CLTR) identified a “critical gap” in the UK’s regulation of AI, which it said could cause “widespread harm” to the British people if not adequately addressed.

The CLTR urged the government to design a system of “incident reporting” whose findings could be monitored to inform how AI is regulated and deployed.

Evgenia Filimianova
Author
Evgenia Filimianova is a UK-based journalist covering a wide range of national stories, with a particular interest in UK politics, parliamentary proceedings and socioeconomic issues.