Human Impersonation AI Must Be Outlawed

Richard W. Stevens

Commentary

In 15 minutes of techno-evolutionary time, artificial intelligence-powered systems will threaten our civilization. Yesterday, I didn’t think so; I do now.

Here’s how it will happen.

The AI Human Impersonation Danger

Recall how in 2023 criminals used an artificial intelligence (AI) system to phone an Arizona mother and claim they were holding her daughter for ransom. The AI perfectly mimicked the daughter's voice, down to the word choices and sobs. The terrified mom found her daughter safe at home and only then realized the call was a scam. That crime showed the power of AI audio alone to deceive and defraud people.

Today, Mom gets a text message demanding a ransom, threatening that the caller will torture her child to death before her eyes. “Click on the link,” the text says. She clicks through, and there in full color is her child, tied to a chair, wearing familiar clothes. The child calls out for Mom to help, and the torture begins. The screams, the cries of the child, all very real—Mom’s worst nightmare is happening.

Click on the link to electronically send money, or the torture continues. Mom has no time to research if this horror is real. The child is at a camp 300 miles away. Mom pays. The torture stops. The child tearfully thanks Mom and is let go. All on full-color, clear video.

This AI technology already exists. So what crime was committed here? Because the video was an AI-generated animation, Mom was deceived but no child was actually harmed; perhaps the criminals committed extortion and related fraud crimes, even though Mom experienced it as kidnapping and torture. Maybe the police can solve this crime.

At this moment, anyone can obtain an AI-powered system that starts with a photograph of a person and creates a believable animation. AI can supply the person's voice and speech patterns. Even the mouth and face move exactly as if that person were speaking.

Millions of Terror-Extortions Worldwide

The AI-powered kidnapping, torture, and extortion video can be made today. One slimeball doing it—that’s just one crime.

Multiply the same scenario by 1 million. Every day, a million of these texts with videos are sent to a million parents. Kidnapping, torture, ransom demands. Every day. Seem far-fetched? Hardly. For AI systems, doing the same thing once or a million times is nothing special. Remember, that’s the power of AI: Do something sophisticated fast and in huge quantities.

As the criminal mind speeds ahead of society’s responses, any number of criminal deceptions and frauds can be executed nearly flawlessly using totally believable AI-generated animations of people, animals, or machines. Something as mundane as getting a former supervisor’s glowing video reference when you apply for a job—AI delivers the fully animated boss. The boss, in his or her own voice, will even answer questions on the Zoom call. You get the job because of the boss’s praise.

Anything you can imagine in audio and video, AI can deliver. And people will believe what they “see,” right? Multiply the deceptions by a million. Every day.

Individuals Are Practically Defenseless

How can we stop or prevent these massive worldwide fraud and deception machines? Consider individual self-defense first. Can you reliably tell a fake video from a real one when the details of appearance, voice, mannerisms, and word choices all match your family member or friend? Maybe today you could, but soon you won't.

With human-versus-human violent crime, the potential victim can be armed with defensive tools that can deter or harm, even possibly kill, the attacker. No defensive tool can deter an AI system, however, because it never worries and feels no pain. Remember "The Terminator"?

Can smart programmers build AI systems to detect fake AI-generated text, pictures, and videos? Yes, but only up to a point. The fakers can tweak their AI systems to evade detection by other AI. The evolution of computer viruses proves that fighting such ever-changing, sophisticated threats is a nonstop challenge.

But when there are millions of AI deceptions, the defenses won’t succeed. If one-tenth of 1 percent of a million deceptions succeeds, that means 1,000 successful crimes. Every week, maybe every day, worldwide.

AI System Makers Create the Tools

We can blame the criminals for using AI systems for the World Wide Ripoff, but that ignores a glaring fact: The makers of such AI systems empower the criminals by making the crimes possible.

Yes, flawless animation of humans could have noncriminal, even beneficial, uses. The entertainment industry could shift from human actors and stunt performers to animated AI ones, for example, opening endless creative possibilities.

Civil Court Remedies?

What can society do about the harmful uses? The situation is:

(A) powerful equipment that can do good or evil; and

(B) the same equipment can severely harm people emotionally, physically, and financially, by the millions, every day.

Consider the basics. Our system of laws and courts is supposed to help keep the peace by deterring intentional and negligent actions that harm people. It also aims to provide the unlawfully harmed victims with some kind of redress, such as forcing a correction of a situation or awarding monetary damages.

To use the civil court system means individual harmed people have to file lawsuits and slog through the courts. Groups of people with near identical claims can file “class-action” lawsuits, but these must likewise proceed through the courts, typically for years.

Can you sue the AI system manufacturer? Possibly, using a form of the "abnormally dangerous activity" basis for liability. Under that rule, you can be held strictly liable for harms caused when you create an unreasonable risk of harm to others and those others are harmed by the situation you created. If you set up a toxic waste dump in your backyard, for example, you could face liability for the resulting harms regardless of whether you were "at fault."

The threat of AI-powered human impersonation crimes, however, doesn't affect only one location or certain identifiable victims. It's a mega-threat: AI systems can commit millions of attempted terror-extortions every day. Lawsuits by individual people against defendants in other countries can scarcely deter the crimes. In reality, finding out who committed the harms is the first and likely insurmountable hurdle.

Criminal Laws to Address Terror-Extortion?

Unlike the civil system, the criminal law system exists to protect society as a whole, not just individuals. Using AI systems to commit terror-extortion against multiple victims certainly rises to the level of crimes against society. If laws are enacted to criminalize AI terror-extortion, then police forces can work on detecting violations, finding perpetrators, and supporting criminal prosecutions to impose fines and imprisonment.

Using the police sounds good, at first. But AI terror-extortion is a "cybercrime," which means the police will use computer tools, even AI tools, to try to locate the people and computers committing the crimes. Frankly, the police-detective model makes things worse.

Totalitarian Government Solutions?

Aggressive legislators and police detectives will be frustrated with trying to solve crimes after the fact. They will declare they need computer monitoring of every person, so that the “bad guys” can be discovered at the stage of preparing to commit the crimes.

Constant monitoring of all computer and telephonic systems, looking for suspicious activity and communications, will “make sense.” So will monitoring people’s locations, meetings, movements, employment, friends, and financial transactions. After all, we’re trying to prevent millions of terrible AI crimes.

Handing a central government these powers, and all of the information needed to rule the world, is the worst possible solution, and it would be made possible by AI systems designed to combat AI crimes. Unlimited evil results when governments gain power the people cannot check, balance, or ultimately resist by force. Read professor R.J. Rummel's "Death by Government" for gut-wrenching confirmation.

Non-Totalitarian Practical Solution

How can we address AI-system terror-extortion and any other human impersonation crime? Outlaw the production, sale, and use of human impersonation systems for any purpose.

The freedom lover in me bristles at this proposal. But the crimes are real, the harms are horrific, and the likely sheer numbers of attempted and successful crimes against innocent and defenseless people stagger the imagination.

Humanity has never before faced serious crimes of fear and violence personally targeting millions of people worldwide, all at once and continuously, 24/7. No amount of secular preaching about computer programmer ethics can prevent the mind-blowing damage these AI systems can inflict. Targeted individuals can't even defend themselves against it. And the big-government-controls-everyone solutions are worse than the problem.

We can righteously draw the line against AI-system proliferation. To protect innocent victims and to prevent totalitarian government solutions, it must be declared a serious felony, akin to attempted mass murder, to produce, sell, possess, or use any AI-powered human impersonation system.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.
Richard W. Stevens is a lawyer, author, and a Fellow of Discovery Institute's Walter Bradley Center on Natural and Artificial Intelligence. He earned his law degree from the University of San Diego Law School and taught legal research and writing at the George Washington University and George Mason University law schools. In recent years, he has written about human and artificial intelligence, especially examining issues of government operations and of patent, copyright, criminal, and civil liability law.