Can AI Produce Art? Sotheby's Says Yes

A visitor takes a picture with his mobile phone of an image designed with artificial intelligence by Berlin-based digital creator Julian van Dieken (C) inspired by Johannes Vermeer's painting "Girl With a Pearl Earring" at the Mauritshuis Museum in The Hague on March 9, 2023. Simon Wohlfahrt/AFP via Getty Images

A painting created by an AI robot named Ai-Da sold for $1 million at a recent Sotheby's auction. The painting is a blotchy, somewhat distorted portrait of early AI researcher Alan Turing against an empty black background, daringly titled “A.I. God. Portrait of Alan Turing.”

The robot’s creator, Aidan Meller, said he believes that the sale will spark conversations about the role of technology in society and art. He insists that Ai-Da is a genuine creative artist. Meller made this assertion before a committee of the British House of Lords in 2022, with the robot present and even “speaking” on its own behalf at times. In the unsettling video of Meller’s testimony, Ai-Da’s silent, lifelike head swiveled from side to side as it stood beside Meller, who spoke animatedly about artificial intelligence and the possibility of “hacking” the “systems” of human nature.

Meller equated humans and robots, arguing that humans are simply a conglomeration of complex processes and algorithms not unlike those present in Ai-Da. And if humans and robots are fundamentally the same, he suggested, the two can be blended. Ai-Da is “foreshadowing actually, in many respects, a physical embodiment of where biotech might go,” Meller said.

The Ai-Da robot is photographed with a self-portrait it created, in 2021. Leemurz/CC BY-SA 4.0

Real Robot Art

But is all this true? Are humans and robots the same? Are they so similar that we can legitimately call Ai-Da a “creative artist”? Can a robot or an AI program create genuine art—whether that be paintings, poems, or music?

The notion of AI art provides a test case for the claim that human beings and computers operate in basically the same way. Artistic creation is a distinctly human activity. So if robots can make art, does that prove their personhood?

Not really. In these conversations, the term “art” isn’t used in a precise way. It’s true that AI can produce end-products that resemble paintings, poems, and songs. Sometimes, these outputs are indistinguishable from what a human could produce. But these aren’t genuine works of art because there is no intention, meaning, understanding, or consciousness behind them. Instead, they’re the result of a blind program running a set of instructions based on a certain input in order to kick out a variable output. In short, robots lack souls, and genuine art only flows from a soul, from a being with experience, awareness, and intentionality.

"Josiah Robertson, Artist at Work" by Anonymous. Photograph. Art requires a human touch to imagine, create, and execute it. (Public Domain)
"Josiah Robertson, Artist at Work" by Anonymous. Photograph. Art requires a human touch to imagine, create, and execute it. Public Domain

Art is an imitation of something in the world observed and contemplated by the artist. The artist recreates it with the intention of deepening understanding and moving the heart. Art arises from conscious experience of the world, the emotional responses that experience provokes, and later reflection upon it. The absence of consciousness, understanding, and emotion would render the artistic process, in its full sense, impossible.

We do encounter objects that resemble art because they appear to be meaningful imitations of something. This happens when a blob of ketchup forms the shape of a horse on your plate, or when natural erosion makes a rock look like a sculpture of a human face, as with the “Face on Mars.” But we know that these objects aren’t technically art, since there was no artistic intention behind them. They result from blind processes.

So do robots have consciousness, understanding, and intention, or are they just blind processes? In its “testimony” before Parliament, Ai-Da responded to a question about how its artistic process differs from that of a human being: “How this differs to humans is consciousness. I do not have subjective experiences, despite being able to talk about them. I am and depend on computer programs and algorithms.”

This is factually correct. Robots and AI have neither consciousness nor the wonderful and mysterious capabilities associated with it. They have human-generated algorithms.

Inputs and Outputs

Computers and computer programs work by receiving inputs in order to generate outputs. At the most basic level, this happens by switching electrical currents on and off. But on both ends—input and output—the “information” transmitted by the computer relies on human intelligence in order to have any significance. Images on a computer screen, like letters on a page, have meaning only because we assign meaning to them. Human intelligence can interpret them and acquire understanding through them. But those letters only appeared there because someone, at some point, put some kind of meaningful input into the computer. The computer doesn’t “know” what these words mean any more than an old-fashioned typewriter does. It’s simply a complex tool by which these words make their way to readers.
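To make that concrete, here is a minimal sketch in Python (my own illustration, not drawn from any particular system) showing that a stored bit pattern is only a number to the machine; it becomes the letter “A” solely because a human convention, the ASCII/Unicode encoding, says so.

```python
# A minimal illustration: the machine stores only on/off states (bits).
# Those bits become a "letter" only because humans agreed on an encoding.

raw = 0b01000001       # eight on/off electrical states, written out as bits
print(raw)             # 65 -- all the hardware "has" is this number
print(chr(raw))        # 'A' -- meaningful only under the human-defined encoding
print(ord("A"))        # 65 -- the same convention, read in reverse
```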

Contemporary philosopher Edward Feser made this very point in an article on artificial intelligence. Feser wrote, “The thing to emphasize is that the computer is not ‘in and of itself’ carrying out logical operations, processing information, or doing anything else that might be thought a mark of genuine intelligence—any more than a piece of scratch paper on which you’ve written some logical symbols is carrying out logical operations, processing information, or the like.”

This holds true even if the “logical symbols” are electrical currents inside a computer.

In 1940, a woman uses a punching machine to punch corresponding holes in a 1940 US Census card. The use of punch cards was one of the earliest stages of computerization; for their size, the cards could store more information than longhand records. Public Domain

Feser continued: “And in ‘exactly’ the same way, considered ‘by themselves’ and apart from the intentions of the designers, the electrical currents in an electronic computer are just as devoid of intelligence or meaning as the current flowing through the wires of your toaster or hair dryer. ... The intelligence is all in ‘the designers and users’ of the computer, just as it is all in the person who wrote the logical symbols on the piece of paper rather than in the paper itself.”

Where AI becomes confusing is in its ability to seemingly generate meaningful outputs completely of its own accord—thus implying intentionality or intelligence on its own part. But that’s not really what’s happening. The non-intelligence of electrical currents that Feser mentions applies to AI models, just as it does to other computer applications. The AI program doesn’t “know” the information it’s handling any more than my word processor knows what I’m typing.

A Basis in Humanity

AI models must be “trained” on vast amounts of human-sourced data in order to generate seemingly “creative” outputs. A language-based AI scans enormous amounts of human-generated text to establish certain patterns. It then regurgitates those patterns in imitation of human speech when prompted.
One definition reads: “A language model is a type of machine learning model trained to conduct a probability distribution over words. Put it simply, a model tries to predict the next most appropriate word to fill in a blank space in a sentence or phrase, based on the context of the given text.”
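As a rough illustration of that definition (a toy sketch of my own, not how any production model is actually built), the following Python snippet counts which word follows which in a scrap of human-written text and turns those counts into a probability distribution over the next word:

```python
from collections import Counter, defaultdict

# Toy "language model": tally which word follows which in some human-written text.
# The sample text is a fragment of Shakespeare, used purely for illustration.
corpus = "shall i compare thee to a summer day thou art more lovely and more temperate".split()

next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def next_word_distribution(word):
    """Probability of each candidate next word, given the current word."""
    counts = next_word_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_distribution("more"))   # {'lovely': 0.5, 'temperate': 0.5}
```

A real large language model uses vastly more data and learned statistical representations rather than raw counts, but the underlying task of estimating which word likely comes next is the same.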

The generation of a new “artistic” text by an AI goes something like this: When a user asks the AI to write a sonnet, the AI draws on the statistical patterns it extracted from the many human-written sonnets it scanned during training. Using word-association probabilities derived from those human writings, it cobbles together the kinds of word combinations, sentence structures, and phrases that normally appear in a human-written sonnet. The AI is less like a person than like a database that recombines fragments of its datasets with some degree of randomness. All the AI is doing is blindly running statistical analysis on the words fed into it by humans. There is no intelligence here at all.
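Continuing that toy sketch (again, a simplified illustration of my own rather than the mechanics of any actual system), “writing” then amounts to repeatedly sampling the next word from those human-derived statistics:

```python
import random
from collections import Counter, defaultdict

# Rebuild the toy bigram counts from the sketch above, then "write" by repeatedly
# sampling the next word from those human-derived statistics. The program never
# understands what a sonnet is; it only echoes word-association frequencies.
corpus = "shall i compare thee to a summer day thou art more lovely and more temperate".split()
next_word_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    next_word_counts[current][nxt] += 1

def generate(start, length=8, seed=0):
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        counts = next_word_counts[word]
        if not counts:          # no observed follower: stop
            break
        words, weights = zip(*counts.items())
        word = random.choices(words, weights=weights)[0]
        output.append(word)
    return " ".join(output)

print(generate("shall"))  # "shall i compare thee to a summer day thou"
```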

Dante and His Poem by Domenico di Michelino. Public domain

Thus, with AI art, the result is not art itself but a convincing simulation of art, completely derivative of genuine human art. As Feser said, a computer is “a way of using utterly unintelligent physical objects and processes to ‘mimic’ various intelligent activities—just as various utterly non-magical objects and techniques provide an entertainer with a way to ‘mimic’ magic.”

Now, this argument against AI’s ability to create true art doesn’t nullify the impressive capabilities of artificial intelligence. These machines generate remarkable outputs—and these outputs become more impressive all the time. AI-generated content can often pass for human work because the people who design these systems are very good at what they do. But however perfect the simulation of artistic intelligence may be, it remains just that: a simulation. It’s no more real than a computer simulation of the weather or of war. Without the unquantifiable force of consciousness, there is no real art.

Apocalyptic warnings about AI abound in our media, fiction, cinema, and the darkened corners of Reddit forums. I won’t weigh in on fears that some supercomputer will launch squadrons of killer robots to annihilate humanity. But I will point to a subtler danger: As AI gets better at simulating human intelligence and creativity, we may increasingly forget what makes humanity special and fall for the lie that we’re no more than biological computers.
