Virginia Attorney General Joins Coalition to Fight AI-Generated Child Abuse Images

Masooma Haq

Virginia Attorney General Jason Miyares joined a coalition of attorneys general from 54 U.S. states and territories urging Congress to study how artificial intelligence (AI) is being used to create child sexual abuse material (CSAM) and to introduce legislation making it easier to prosecute such crimes.

“AI use in the production of child sexual abuse materials is becoming increasingly prevalent. We are in a race against the clock to establish strong legal boundaries and protections that encompass artificial intelligence technologies and, more importantly, protect the safety and innocence of our children,” Mr. Miyares said in a statement.

The National Association of Attorneys General sent a letter (pdf) to congressional leaders about the issue.

Mr. Miyares and the other attorneys general said they are deeply concerned for the safety of children in their respective states and territories.

“We also have a responsibility to hold accountable those who seek to harm children in our States and territories. And while internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult,” the letter states.

Because AI technology is advancing rapidly and laws have not kept pace, Mr. Miyares in June led 23 state attorneys general in urging the National Telecommunications and Information Administration to advance artificial intelligence governance policies.

While some say AI-generated CSAM does not victimize real children, others argue that it does cause harm because viewing child pornography can lead the viewer to abuse a child in real life.

A 2005 study published in the journal Prosecutor found a link between watching child pornography and sexually abusing children in real life. In a study published in the journal Sexual Abuse, Canadian forensic psychologist Michael C. Seto and his colleagues found that 50 to 60 percent of viewers of child pornography admit to abusing children themselves.

Types of AI-Generated CSAM

In their letter, the attorneys general cite three types of AI-generated CSAM.

One type uses the likeness of a real child who has not been sexually abused but whose image is digitally altered to depict abuse. A second type uses images of a child who was sexually abused, digitally recreated to show other forms of abuse. A third type consists of images generated entirely by AI, without using the image of any real child.

The attorneys general are asking that all forms of AI-generated CSAM be made illegal and that such crimes be easier to prosecute.

They are also asking Congress to create laws that “deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM.”

The United States is the top consumer of child pornography and a leading source of CSAM production and distribution, in part because too many people do not realize or believe it is happening, according to Operation Underground Railroad, a private foundation that works to rescue children from sex trafficking.

The nonprofit National Center for Missing and Exploited Children operates a centralized reporting system called CyberTipline, where people can submit incidents of online exploitation.

According to the center's 2022 report (pdf), the CyberTipline received over 49 million reports of CSAM, up from 33 million in 2020.
Masooma Haq
Author
Masooma Haq began reporting for The Epoch Times from Pakistan in 2008. She currently covers a variety of topics, including the U.S. government, culture, and entertainment.