AI safety advocates have urged the government to include a roadmap for tackling broader AI harms in its plans to “harness” the technology for economic growth and better public services.
A UK think tank focused on the ethical and responsible use of AI said that AI pilots across the public sector should be implemented safely and with input from the public.
The action plan includes proposals for greater use of AI to enable public sector workers to spend less time performing administrative tasks and more time delivering services.
A series of AI “growth zones” will be set up around the UK to speed up AI development, by simplifying planning and providing support for building data centres and AI infrastructure.
The government also plans to build an AI supercomputer and increase public compute capacity twenty-fold by 2030.
Ministers have adopted all 50 recommendations from a plan developed by tech entrepreneur Matt Clifford, who was commissioned by Science Secretary Peter Kyle in July to identify AI opportunities.
Public Confidence
The plan includes funding commitments for regulators “to scale up their AI capabilities,” with budgets monitored via the Spending Review. All regulators will be required to publish annual reports on how they have “enabled innovation and growth driven by AI in their sector.”
Gaia Marcus, director of the Ada Lovelace Institute, supported the government’s growth plans but stressed the need for public trust.
She warned that making regulators focus on growth could undermine their main job of protecting the public and damage their credibility.
Marcus also noted that the public holds strong and nuanced opinions about how their data is used, particularly in areas like health.
“In light of past backlash against medical data sharing, the Government must continue to think carefully about the circumstances under which this kind of sharing will be acceptable to the public. Greater public engagement and deliberation will help in understanding their views.
“The piloting of AI throughout the public sector will have real-world impacts on people. We look forward to hearing more about how departments will be incentivised to implement these systems safely as they move at pace, and what provisions will enable the timely sharing of what has worked and—crucially—what hasn’t,” Marcus said.
Funding and Regulation
The AI Opportunities Action Plan is backed by leading tech firms, three of which have committed £14 billion to various projects, creating 13,250 jobs across the UK, the government said. Additionally, £25 billion in investment, announced at the International Investment Summit in October, will fund the development of data centres to advance AI.
The action plan, placed at the heart of the government’s Industrial Strategy, has been hailed by senior Labour ministers.
Kyle said it will propel Britain in the global race for AI, while Chancellor Rachel Reeves said it will put more money in the pockets of working people.
In contrast to the EU, which takes a more precautionary approach to tech regulation, the UK and the United States favour a more sector-based, self-regulatory approach to AI.
The action plan noted that ineffective regulation could “hold back” AI adoption in crucial sectors such as medicine, and called for safety and assurance throughout the process.
The Centre for Long-Term Resilience (CLTR) urged the government to design a system of “incident reporting” that can be monitored to better inform how AI is regulated and deployed.