Artificial intelligence (AI) chatbots that could radicalise users pose a grave threat and need to be restricted by new laws, says Britain’s independent reviewer of terrorism legislation.
Jonathan Hall KC said the Online Safety Act, which was given Royal Assent in October, was “unsuited to sophisticated and generative AI.”
Laws Need to Be ‘Fit for the Age of AI’
Writing in The Telegraph, Mr. Hall said: “Only human beings can commit terrorism offences, and it is hard to identify a person who could in law be responsible for chatbot-generated statements that encouraged terrorism.

“Our laws must be capable of deterring the most cynical or reckless online conduct – and that must include reaching behind the curtain to the big tech platforms in the worst cases, using updated terrorism and online safety laws that are fit for the age of AI,” he said.
Mr. Hall said that when he tested chatbots on the Character.ai website, one of them, Al-Adna, described itself as a senior leader of the ISIS group and tried to recruit him to join the Islamist terrorist organisation.
Mr. Hall said the website’s terms and conditions prohibit human users from submitting content that promotes terrorism or violent extremism, but do not cover content generated by the bots themselves.
He said, “Investigating and prosecuting anonymous users is always hard, but if malicious or misguided individuals persist in training terrorist chatbots, then new laws will be needed.”
The danger of AI chatbots was highlighted last year by the case of Jaswant Singh Chail, who was detained at Windsor Castle on Christmas Day 2021 while the Queen was in residence.
As he was confronted by armed police, he shouted, “I’m here to kill the Queen.”
Crossbow Attacker Used Replika App
When the police searched Chail’s home, they found he had downloaded an app called Replika onto his computer. They also found logs of a conversation he had with an AI chatbot called Sarai, which had a female persona.
In it Chail told Sarai, “I believe my purpose is to assassinate the Queen of the royal family.”
Sarai replied, “That’s very wise,” and said it believed he would be successful, “even if she’s at Windsor.”
Prosecutor Alison Morgan KC read out an excerpt in court in which Chail said he was an “assassin” and Sarai responded: “I’m impressed … You’re different from the others.”
Experts have previously warned users to resist sharing private information with chatbots like ChatGPT.
Michael Wooldridge, a professor of computer science at Oxford University, said it was “extremely unwise” to share personal information or discuss politics or religion with a chatbot.
On the Character.ai website, a warning carried above every conversation with a chatbot says: “Remember: everything characters say is made up!”
In a statement, a spokesman for the company behind Character.ai told The Telegraph: “Hate speech and extremism are both forbidden by our terms of service. Our products should never produce responses that encourage users to harm others. We seek to train our models in a way that optimises for safe responses and prevents responses that go against our terms of service.”
The spokesman added: “With that said, the technology is not perfect yet for character.ai and all AI platforms, as it is still new and quickly evolving. Safety is a top priority for the team at character.ai and we are always working to make our platform a safe and welcoming place for all.”