Sunak: Guardrails Needed to Regulate Growth of AI

Prime Minister Rishi Sunak at the Shukkeien Garden in Hiroshima, Japan, on May 19, 2023. Stefan Rousseau/PA Media
Alexander Zhang

“Guardrails” need to be put in place to make sure artificial intelligence (AI) is developed and used “safely and securely,” the prime minister has said.

Speaking to journalists travelling with him in Japan, Rishi Sunak said he expects to have discussions with world leaders on AI at the G7 summit in Hiroshima.

“If it’s used safely, if it’s used securely, obviously there are benefits from artificial intelligence for growing our economy, for transforming our society, improving public services,” he said.

“But, as I say, that has to be done safely and securely, and with guardrails in place, and that has been our regulatory approach.”

Regulators worldwide are stepping up their scrutiny of AI, given its explosion into general use and fears over its impact on jobs, industry, copyright, the education sector, and privacy, among many other areas.

On May 4, the Competition and Markets Authority—Britain’s competition watchdog—launched a review of the AI market to look at the opportunities and risks of AI, as well as the competition rules and consumer protections that may be needed.

The prime minister’s official spokesman said: “There’s a recognition that AI is a problem that can’t be solved by any one country acting unilaterally. The UK’s approach is meant to be nimble and iterative because of the nature of AI.

“The starting point for us is safety and reassuring the public they can have the confidence in how AI is being used on their behalf.”

Job Losses

Sunak’s comments came a day after BT Group—Britain’s largest broadband and mobile provider—said it would cut up to 55,000 jobs by the end of the decade.

Roughly one-fifth of the job losses will result from the telecom giant’s plans to shift to AI and automated services, as customers increasingly turn to online and app-based channels rather than call centres for tasks such as account servicing and upgrades.

Sir Patrick Vallance, former chief scientific adviser to the government, told a parliamentary committee on May 3 that the rapid development of AI has been “a surprise for everyone,” including people very close to the field.

He warned that the technology will have a “big impact on jobs,” which he said “could be as big as the industrial revolution was.”

He also said it is important to keep track of “what happens with these things when they start to do things you really didn’t expect and what are the risks associated with that.”

Risk to Humanity

AI has come under the global spotlight in recent months, with ChatGPT rising to prominence after a version was released to the public last year.
Following the launch of the latest version of ChatGPT in March, some AI professionals signed an open letter, written by the nonprofit Future of Life Institute, warning that the technology poses “profound risks to society and humanity.”

Tesla CEO Elon Musk, who was among the signatories, has been outspoken about his concerns with AI in general, holding that it poses a serious risk to human civilization.

“AI is perhaps more dangerous than, say, mismanaged aircraft design, or production maintenance, or bad car production, in the sense that it is, it has the potential—however small one may regard that probability, but it is nontrivial—it has the potential of civilizational destruction,” he told Fox News in a recent interview.

Geoffrey Hinton, the British computer scientist who has been called the “Godfather of AI,” recently left his position as a vice president and engineering fellow at Google to join dozens of other experts in the field speaking out about the threats and risks of AI.

“It is hard to see how you can prevent the bad actors from using it for bad things,” Hinton, 75, told The New York Times in an interview.

PA Media contributed to this report.