After California legislators introduced two new laws that place stricter limitations on “deepfake” videos, some have questioned how easy they will be to enforce.
AB-730 and AB-602 were both approved by the governor on Oct. 3. The first makes it illegal to post or distribute any video that maliciously manipulates the face or speech of a political candidate within 60 days of an election, while the second allows California residents to sue any individual or group that integrates someone else’s likeness into sexually explicit material.
Alexander Reben, an MIT-trained roboticist who experiments with art and technology, suggests that the anonymity of deepfake creators could pose a problem that California’s law won’t necessarily be able to solve.
“I think the most damaging deepfakes will come from anonymous creators, which are then picked up by legitimate outlets as fact or become viral,” Reben told The Epoch Times. “It seems the legislation assumes the source of the video can be ascertained, while it is quite clear that bad actors on the internet often cannot be identified.”
In his view, the law should ideally be supplemented with a technical solution to ensure detection or verification.
“Detecting alterations of video might be a cat-and-mouse game as technology improves,” Reben said. “Verification would involve determining if the media is authentic by tagging it somehow at the source, possibly with technologies such as encryption or public-ledgers like the blockchain.”
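The source-tagging idea Reben describes can be illustrated with a minimal sketch. This is not any deployed system: the key, file contents, and function names below are hypothetical, and a keyed HMAC over a SHA-256 digest stands in for the public-key signatures or ledger entries a real publisher would use.

```python
import hashlib
import hmac

def tag_media(data: bytes, key: bytes) -> str:
    """Issue an authentication tag for media bytes at the source.

    Hypothetical scheme: HMAC-SHA256 over the file's SHA-256 digest,
    standing in for a real publisher's digital signature.
    """
    digest = hashlib.sha256(data).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_media(data: bytes, key: bytes, tag: str) -> bool:
    """Check that media matches the tag issued when it was published."""
    return hmac.compare_digest(tag_media(data, key), tag)

# Illustrative values only -- a real key would be managed securely,
# and the bytes would be an actual video file.
key = b"publisher-secret-key"
original = b"original video bytes"
tag = tag_media(original, key)

print(verify_media(original, key, tag))                  # unaltered: True
print(verify_media(b"manipulated video bytes", key, tag))  # altered: False
```

Any change to the underlying bytes invalidates the tag, which is the property verification schemes rely on; the unsolved part, as Reben notes, is getting creators and platforms to tag media at the source in the first place.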
Recent examples of viral deepfake videos have repeatedly raised concerns over the implications of this new technology.
Earlier this month, a video of President Trump saying “AIDS is over” began circulating online. The deepfake video was outed as a publicity stunt when Solidarité Sida, a French charity, published a statement identifying the video as fake.
Various other videos of political figures—including one that digitally altered House Speaker Nancy Pelosi’s words to make it sound as if she were drunk and slurring—have highlighted the potential dangers of deepfake technology.
“What if somebody creates a video of President Trump saying, ‘I’ve launched nuclear weapons against Iran, or North Korea, or Russia?’” Hany Farid, a computer science professor at the University of California, told CBS News. “We don’t have hours or days to figure out if it’s real or not.
“The implications of getting that wrong are phenomenally high. What you have to understand about this technology, is that it’s not in the hands of few, it’s in the hands of many.”
Assemblymember Marc Berman (D-Palo Alto), who introduced AB-730, said in a statement, “Deepfakes are a powerful and dangerous new technology that can be weaponized to sow misinformation and discord among an already hyper-partisan electorate. Deepfakes distort the truth, making it extremely challenging to distinguish real events and actions from fiction and fantasy.”
The second law, AB-602, specifically addresses pornographic content, which accounts for the vast majority of deepfakes.
According to reports, new apps will make this even easier by allowing tens of thousands of users to create fake pornographic videos with celebrities as well as everyday people.
The creator of FakeApp told a Vice reporter of his ambitions to simplify the process so that users can one day “select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button.”
Euronews reported that Deeptrace, a cybersecurity company, recently released a report showing that the number of deepfake videos online has nearly doubled since December 2018. Furthermore, 96 percent were sexually explicit and involved the faces of celebrities being superimposed over the bodies of adult actors.
“It’s critically important that we crack down on both politically manipulative and nonconsensual pornographic #deepfakes,” Berman tweeted after thanking Gov. Newsom for his support.
While some see the legislation as an effective safeguard against misinformation and invasion of personal privacy, others are more critical of the implications for individual rights.
In a letter to Newsom, the American Civil Liberties Union wrote: “Despite the author’s good intentions, this bill will not solve the problem of deceptive political videos. It will only result in voter confusion, malicious litigation and repression of free speech.”
“Political speech enjoys the highest level of protection under US law,” Jane Kirtley, a professor of media ethics and law at the Hubbard School of Journalism and Mass Communication, told The Guardian. “The desire to protect people from deceptive content in the run-up to an election is very strong and very understandable, but I am skeptical about whether they are going to be able to enforce this law.”
AB-730 goes into effect next year and sunsets in 2023.