A senior army officer has told a parliamentary committee that the armed forces must have confidence in weapons which rely on artificial intelligence (AI) if they are to “sleep at night.”
Lord Hamilton—who was a defence minister under Sir John Major in the 1990s—said: “There was a report some months ago in the papers, the Americans were trialling an AI system and it basically went completely AWOL, blew up the operator, killed the operator, and then blew itself up afterwards. The Americans later denied it ever happened ... Let’s hypothesise that this has happened in the M.O.D. What would you then do if that happened?”
Lieut. Gen. Copinger-Symes said: “I don’t know about the incident you’re referring to, but we have very tried and tested procedures for dealing with incidents like that and working out what lessons we learn, and how we take them forward to make sure they’re safe and responsible.”
‘Public Anxiety About Artificial Intelligence’
Later, Defence Procurement Minister James Cartlidge told the committee: “We recognise as a government there is public anxiety about artificial intelligence. That is precisely why the prime minister will be holding an international summit in the autumn, about AI safety.”

Mr. Cartlidge said AI defence systems would not be discussed at that summit, but added: “Nevertheless it’s a very important statement of the government’s overall commitment to ensuring there is public confidence in the way we explore AI.”
He said that while the aim of using AI in weapons was partly to give Britain an edge over its competitors, AI could also carry out “mundane tasks,” freeing up service personnel for other roles, and keep humans “out of harm’s way,” such as by defusing ordnance.
Mr. Cartlidge said: “The Royal Navy has a gun called Phalanx which contains in its potential use a capability which can arguably be described, for part of its use, as partly autonomous/automated. But the crucial thing is that it can only operate if there is appropriate human involvement ... it has to be switched on.”
Cartlidge Says UK Must ‘Stay Ahead of Our Adversaries’
Mr. Cartlidge replied: “To be absolutely clear to you, as far as I am concerned, we must not in any way act naively or put restraints on our country in terms of its ability to exploit AI within the bounds and parameters of international law, but in a way that ensures absolutely we stay ahead of our adversaries.”

He said: “We only have to look at what’s happening in Ukraine. There is some intelligence potentially about AI used by Russia ... but irrespective of that in a situation like this, where you know they are operating in a fundamentally nefarious way, they’ve invaded a sovereign country, there has to be a strong presumption that they will be pursuing investment in R&D, technology.”
“We must not restrict our ability to respond. But equally, yes, we must operate within international law. It is a balance to be struck,” added Mr. Cartlidge.
Earlier, Lord Mitchell, a Labour peer, quizzed the witnesses about the use of “synthetic data” in AI weapons development.
Lieut. Gen. Copinger-Symes explained why synthetic data was important.
He said: “If I were training a system to recognise what a cat looks like, I’d have the whole of the internet to trawl for data ... If we’re training a system to recognise a threat tank across the whole world, our existing data set to train on that might be slanted towards where we’ve operated previously, or where our intelligence-gathering has focused on.”

Lieut. Gen. Copinger-Symes went on: “So, for instance, you might be looking for a tank but all of your images of a tank are against a sort of European dark green background, rather than a desert background or a jungle background or an Arctic background. And to prevent that bias ... we might have to create synthetic data ... and that means the whole system is going to be far more effective at finding the enemy tank wherever it is in the world.”
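The background-bias problem the general describes can be illustrated with a toy sketch. The code below is purely hypothetical and not based on any MoD system: it simulates an imagery data set skewed toward one terrain, then generates synthetic samples to balance the under-represented backgrounds, which is the essence of the approach he outlines.

```python
import random
from collections import Counter

# Hypothetical terrains; names are illustrative assumptions, not real labels.
BACKGROUNDS = ["european_green", "desert", "jungle", "arctic"]

def collected_dataset(n=1000, seed=0):
    """Simulate 'real' imagery skewed toward previous areas of operation:
    85% of samples show the target against a European green background."""
    rng = random.Random(seed)
    weights = [0.85, 0.05, 0.05, 0.05]
    return [{"label": "tank",
             "background": rng.choices(BACKGROUNDS, weights=weights)[0]}
            for _ in range(n)]

def balance_with_synthetic(samples):
    """Top up each under-represented background with synthetic samples
    until every terrain is as common as the most common one."""
    counts = Counter(s["background"] for s in samples)
    target = max(counts.values())
    synthetic = []
    for bg in BACKGROUNDS:
        shortfall = target - counts.get(bg, 0)
        synthetic += [{"label": "tank", "background": bg,
                       "synthetic": True}] * shortfall
    return samples + synthetic

data = collected_dataset()
balanced = balance_with_synthetic(data)
```

After balancing, a classifier trained on `balanced` would see the target equally often against every terrain, rather than learning to associate “tank” with a dark green background.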