In 2022, voice actor Cooper Mortlock was hired to work on an animated series.
At first, everything went well: the job was stable, and the employer was friendly and paid on time.
However, the Sydney-based Mortlock was told the series would be cancelled once the project reached episode 30 of its contracted 52-episode season.
Yet one year later, Mortlock found another episode had been released on YouTube and was surprised to hear himself doing the voiceover—even though he had taken no part in it.
Mortlock believes he is the latest creative talent to fall victim to advances in artificial intelligence (AI), or more specifically, the growing problem of “voice theft”—where AI is used to imitate or “clone” the voice of artists without consent.
A Murky Area of the Law
After consulting with the media union, Mortlock’s team sent a cease-and-desist letter to the employer, asking them to remove the episode and destroy any AI training models.
In response, the firm said AI was not used, and that existing vocal technology had been employed to “mimic” the sound of the cartoon characters.
“That’s a lie you can tell just by listening to it,” Mortlock alleged. “That is my voice running through AI. It sounds like a poor imitation of my voice.”
But it’s also a hazy area of the law.
“The only way currently in our legal system where we could prove that … is if we had a copy of the recording or the vocal tones that they use, like a recording of the source code that they had,” he said.
Of course, a request for the employer to provide this was denied.
Mortlock did manage to have AI technicians test the recordings, and the results came back positive. Unfortunately, that detection technology was too new to be used as evidence in court.
The recording firm also argued the original contract allowed for the use of AI.
Mortlock’s response is that the contract was drafted before the spread of AI voice cloning and generation.
“We can’t do anything at the moment because the legislation around the legal system and the contracts are too far behind how this technology operates,” he conceded.
Former Chief Scientist Says There’s Still Hope
Ian Oppermann, former chief data scientist for the New South Wales government, said concerns about job losses were well-founded due to AI’s ability to mimic, synthesise, or evolve existing work.
“The question will be what and how the rights of artists are explicitly protected when original content can be evolved,” he told The Epoch Times.
While the former chief data scientist believed AI could take some jobs, he said there could be space for artists.
“AI can be thought of as a very clever mimic. I would like to think there is always going to be something special about a ‘real’ human actor’s reaction as opposed to a synthetic response,” he said.
“A little bit like à la carte dining as opposed to formulaic fast-food eating.
“Fast food is cheap and very popular, but arguably, à la carte is far more enjoyable if you can afford it.”
At the same time, Oppermann said the government should continue exploring safeguards against the technology.
“Most legislation was developed with the framework of protecting people from other people in one way or another,” he said.
Mortlock Says Wider Industry Impacted
Meanwhile, Mortlock said he had personally witnessed a drop in voice-over work.
A friend, also a full-time actor, told him he had lost over 50 percent of his income to AI.
“He used to do things like corporate or explainer videos or internal training videos, which are not for commercial use necessarily, but for internal use. That has all been scooped up by AI,” Mortlock said.
With how fast AI technology has developed, the voice actor was concerned about the future of his industry.
But There’s Reason to Be Optimistic
However, Mortlock said he had become more optimistic about the industry with recent pushback in Australia and worldwide.
Citing the SAG-AFTRA strike over AI’s impact on video games in the United States, and a recent Australian Senate inquiry, Mortlock said more people had become aware of the issue.
The voice actor also said people were slowly realising the ethical considerations around AI, and that the technology was taking away people’s ability to work.
Media Union Pushes for ‘AI Tax’, More Safeguards
During a Senate inquiry hearing on AI in July, Matt Byrne, a representative of the Media, Entertainment and Arts Alliance (MEAA), said the union was pushing for changes to workplace laws.
“It’s a choice of these businesses to adopt tools in a systematic way to do the kinds of things that they want to do,” he said.
“If they do want to do it, then it’s our view that it should be mandated that they consult and work with the employees on how these tools will work to ensure that the tool can be used ethically.”
Byrne also suggested introducing new rules requiring AI-generated content be watermarked or labelled so people could know its origin.
“It’s our view that the social responsibility for content produced by AI should lie with both the company that oversees its production as well as AI developers,” he said.
“So we want to make sure that there is accountability at both the developer end, so OpenAI, Amazon, Google etc., but also with the companies who use these tools in their workplaces.”
In addition, the MEAA proposed implementing an “AI tax” on businesses that replaced their employees with digital tools.