The rise of artificial intelligence and the government’s goal of positioning the UK as an AI “superpower” have sparked debate on the approach to regulating the rapidly developing sector.
To date, the UK has no legislation dedicated solely to regulating AI. Instead, the technology is governed under existing laws and sector-specific regulation.
The Data Protection Act 2018 and the UK GDPR govern the use of personal data by AI systems, while consumer protection laws address potential harms caused by faulty or misleading AI. Applications of AI in finance are regulated under the Financial Services and Markets Act 2000, and the Equality Act 2010 prevents discrimination in AI-driven decision-making.
For instance, the Financial Conduct Authority regulates AI use in financial services, while the Medicines and Healthcare products Regulatory Agency oversees AI applications in health care.
‘Proportionate’ Regulation
The Labour government has pledged to “harness” the power of AI and “strengthen safety frameworks.” Speaking in January, Prime Minister Sir Keir Starmer said that the UK’s approach to AI regulation will be “pro-growth and pro-innovation.”

“Our ambition is not just to be an AI superpower but also make sure that this benefits working people.
“We will test and understand AI before we regulate it to make sure that when we do it, it is proportionate and grounded in the science. But at the same time, we’ll offer the political stability that business needs,” Starmer said.
Readiness of Regulators
The action plan comes amid warnings by leading AI research organisations that many regulators are still in the early stages of adapting to AI.

According to The Alan Turing Institute, key challenges include a lack of knowledge and poor coordination between regulators. The think tank highlighted the need for “new sources of expertise” to help fill the regulatory gaps and speed up progress towards AI readiness.
Public Sector
Science Secretary Peter Kyle has acknowledged that “the vast majority of AI should be regulated” by expert watchdogs and pledged government support to help them evaluate AI capabilities.

Under the adopted Scan-Pilot-Scale approach, the public sector will collaborate with AI vendors and start-ups to anticipate future AI developments. Successful pilot projects, such as reducing waiting lists or streamlining paperwork to save time and costs, will be expanded across organisations.
The institute has called for a national taskforce on AI procurement in local government, a temporary body that would bring together experts from the public and private sectors to create and test practical solutions.
“We shouldn’t wait for a Post Office-style scandal to act. Ministers must build on recent progress and introduce binding laws to address AI risks and ensure effective oversight,” said Michael Birtwistle, associate director at the think tank.
Several challenges complicate the UK’s efforts to regulate AI effectively. These include bias and discrimination in AI systems, as well as concerns over job displacement and data privacy. However, proponents of AI advancement argue that regulations should protect the public without hindering progress.
According to Clifford, the AI action plan “offers opportunities we can’t let slip through our fingers,” stressing the potential of AI to boost productivity.
Global Approaches
The UK’s decentralised approach to AI regulation contrasts with that taken by the European Union. The EU’s AI Act, effective from Aug. 1, 2024, addresses risks to health, safety, and fundamental rights while setting clear rules for AI developers and users.

It creates a single system for all EU countries, dividing AI systems into four risk levels: unacceptable, high, limited, and minimal. High-risk systems, such as those used in critical infrastructure or hiring, face strict requirements, including safety checks and documentation.
The United States lacks a comprehensive AI act and instead employs fragmented policies to enhance innovation and manage risks.