Google’s ‘Dreaming’ Artificial Intelligence Shows What Real Machine ‘Learning’ Looks Like

Neural net dreams (Google Research Blog).
Jonathan Zhou

Last week, on June 17, Google’s Research blog released an incredible series of computer-generated images, dubbed the “dreams” of the machine, that can best be described as a version of Van Gogh’s painting “The Starry Night” on crack.

The surreal landscapes were created by neural network computers, machines designed to emulate what scientists believe to be the biological structure of the human brain, as they were being trained to recognize visual patterns.

For instance, to teach the computer to recognize a banana, the researchers feed it millions of training examples, allowing it to formulate its own criteria for what constitutes a banana, and then gradually adjust the algorithm to improve its recognition abilities.

Then, to probe the minimal visual evidence the computer needs to “see” a banana, and so better understand how it “sees” things at all, the researchers take an image of random noise and slowly adjust it until the machine registers a banana.

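For readers who want a concrete picture of that reverse process, here is a minimal sketch of the idea in Python with PyTorch. Google’s researchers worked with their own Inception network in Caffe, so the model, class index, step count, and learning rate below are illustrative assumptions rather than their actual settings.

```python
# Minimal sketch: adjust a noise image until a classifier "sees" a banana.
# Assumptions: a stock torchvision GoogLeNet stands in for Google's model,
# and ImageNet class 954 ("banana") is the target.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()    # any ImageNet classifier will do
BANANA = 954                                          # ImageNet index for "banana"

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # start from random noise
optimizer = torch.optim.Adam([img], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    score = model(img)[0, BANANA]   # how strongly the network "sees" a banana
    (-score).backward()             # nudge the pixels to raise that score
    optimizer.step()
    img.data.clamp_(0, 1)           # keep pixel values in a displayable range
# After enough steps, img holds the faint banana-like features the network
# relies on (input normalization is omitted here for brevity).
```

Google’s post notes that, in practice, they also impose a prior nudging the image toward the statistics of natural photographs; that constraint is omitted from the sketch above.
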
(Google Research Blog)

The human-like ability to perceive objects despite imperfect visual evidence is what separates neural network artificial intelligence (AI) from computers that run on Boolean logic.

Neural network machines, which “think” using a network of artificial neurons that form links with one another as they process new “experiences,” much as real neurons do, are more robust than traditional machines, where a single error can force a complex calculation to halt.

In humans, the flip side of a robust pattern-recognition capability is that we often see and hear things that aren’t there, projecting objects onto coincidences in the sense data. One enduring example of human perception in overdrive was the Satanic-rock scare of the late 1970s and early 1980s, when many found hidden messages praising Satan in rock songs (most notably Led Zeppelin’s “Stairway To Heaven”) that people claimed could be heard when the songs were played backward.

https://www.youtube.com/watch?v=nwb4qwfm5IY

Just as humans were more apt to hear the Satanic messages in rock songs when they were “primed” by reading the lyrics of what they were supposed to hear, the neural computer can be “primed” to detect patterns that aren’t there when instructed to do so by researchers.

Google’s AI was fed arbitrary images and then told to visually amplify whatever patterns it detected in them. At lower, simpler levels of abstraction, the computer imposed simple, stroke-like patterns onto the image.

(Google Research Blog)

When the pattern recognition was raised to higher levels of abstraction, the imposed imagery became more complex, and the AI started seeing animals in the clouds, just as children do.

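As a rough sketch of that “amplify whatever you see” step, the snippet below runs gradient ascent on the activations of one layer of a stock torchvision GoogLeNet. This is an assumption-laden stand-in for Google’s Caffe-based code: the layer name, step size, and iteration count are illustrative, with lower layers tending to produce the stroke-like textures above and deeper layers the more complex, animal-like imagery.

```python
# Sketch: amplify whatever patterns a chosen layer detects in an image.
# Assumptions: torchvision GoogLeNet, the inception4c layer as the target,
# and random noise standing in for an arbitrary input photo.
import torch
from torchvision import models

model = models.googlenet(weights="DEFAULT").eval()

captured = {}
model.inception4c.register_forward_hook(
    lambda module, inp, out: captured.update(target=out))

img = torch.rand(1, 3, 224, 224)        # stand-in for an arbitrary input image
img.requires_grad_(True)

for step in range(20):
    model(img)
    loss = captured["target"].norm()    # "whatever you detect, make more of it"
    loss.backward()
    with torch.no_grad():
        img += 0.02 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0, 1)                # keep pixels in a displayable range
```
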
(Google Research Blog)

(Google Research Blog)

When the AI was fed images and the pattern-recognition process was put on repeat, with each output becoming the input for further visual amplification until the only thing visible was the machine’s own superimposed patterns, the researchers were left with haunting, nightmarish landscapes that wouldn’t be out of place in a surrealist art gallery.

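The feedback loop itself is simple to express. The sketch below assumes a hypothetical dream_step function standing in for one round of the amplification sketched above; each output frame is zoomed slightly and fed back in, as Google’s post describes, until the network’s own patterns crowd out the original picture.

```python
# Sketch of the feedback loop: amplify, zoom in a little, repeat.
# `dream_step` is a hypothetical function performing one amplification pass.
import torch
import torch.nn.functional as F

def dream_loop(img: torch.Tensor, dream_step, iterations: int = 100,
               zoom: float = 1.02) -> list:
    frames = []
    for _ in range(iterations):
        img = dream_step(img)                       # amplify detected patterns
        h, w = img.shape[-2:]
        # zoom in slightly, then crop back to the original size
        img = F.interpolate(img, scale_factor=zoom, mode="bilinear",
                            align_corners=False)
        top = (img.shape[-2] - h) // 2
        left = (img.shape[-1] - w) // 2
        img = img[..., top:top + h, left:left + w]
        frames.append(img.detach())                 # each frame feeds the next pass
    return frames
```
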
(Google Research Blog)

Many researchers hope that the further development of neural network AI will unlock the secret to general artificial intelligence—machines that can learn and think like humans, and not just execute a narrow set of programs, no matter how complex they might be. Google, sitting on top of billions in profit from search, has been able to generously fund ventures in this field.

In 2014, Google purchased DeepMind, an artificial intelligence startup, for $400 million. DeepMind is partnering with neuroscientists with the goal of reverse-engineering the human mind and constructing a machine simulation of it, one that transcends the limitations of the AI the public is familiar with.

“We’re trying to build things with generality in mind. ... You need to process vision; you need long-term memory; you need working memory so you can switch between tasks,” DeepMind co-founder Demis Hassabis told Wired.

“Today you can create pretty good bespoke programs to solve specific tasks—playing chess or driving a car. Our system could learn how to play chess, but it’s not going to be better than Deep Blue.”

For Hassabis, machines like Deep Blue aren’t, strictly speaking, artificial intelligence at all, but merely a complex set of programs.

“You give it all the knowledge it needs—the moves, the openings, the endgames. But where does the intelligence reside in something like Deep Blue? It’s not in the program, it’s in the minds of the programming team. The program is pretty dumb; it doesn’t learn anything,” he said.

For a long time, skeptics of AI doomsday scenarios could rest easy in the knowledge that the most cutting-edge examples, like IBM’s Watson, were in many respects downright stupid and couldn’t possibly function without a human custodian. If the development of neural network AI continues apace, however, even die-hard tech optimists should start worrying.

Jonathan Zhou
Author
Jonathan Zhou is a tech reporter who has written about drones, artificial intelligence, and space exploration.