A healthcare company is trying its hand at the human-versus-artificial-intelligence (AI) contest, and it wants us to play along.
McGuire calls his composition “Lyrical Lullaby.” It was created in conjunction with Bede Williams, head of Instrumental Studies at the University of St. Andrews.
“Lots of people report of a falling sensation as they fall asleep, and many lullabies mimic this by containing melodies made up of descending patterns in the notes. Lyrical Lullaby has this essential feature and many other musical devices which can induce in us a state of restfulness,” Williams told the Mirror.
To come up with the lullaby, an AI system was taught to compose using sheet music in a computer-readable format.
“An artificial neural network is essentially a representation of the neurons and synapses in the human brain—and, like the brain, if you show one of these networks lots of complex data, it does a great job of finding hidden patterns in that data,” said Ed Newton-Rex, creator of the machine that produced the composition, according to the Mirror.
“We showed our networks a large body of sheet music, and, through training, it reached the point where it could take a short sequence of notes as input and predict which notes were likely to follow.
“Once a network has this ability, it essentially has the ability to compose a new piece, as it can choose notes to follow others it’s already composed.”
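Newton-Rex is describing next-note prediction: learn from a corpus which notes tend to follow which, then compose by repeatedly choosing a plausible continuation. The sketch below is only a toy illustration of that idea; it substitutes a simple first-order Markov chain for the actual neural network, and its tiny “corpus” and note names are invented.

```python
# Toy illustration of "predict the next note, then chain predictions."
# The real system is a neural network trained on a large body of sheet music;
# this stand-in uses a first-order Markov chain so it stays self-contained.
import random
from collections import defaultdict

# Hypothetical training data: melodies as lists of note names.
training_melodies = [
    ["C4", "E4", "G4", "E4", "D4", "C4"],
    ["E4", "D4", "C4", "D4", "E4", "E4", "E4"],
    ["G4", "E4", "D4", "C4", "D4", "E4", "C4"],
]

# "Training": record which notes follow each note in the corpus.
transitions = defaultdict(list)
for melody in training_melodies:
    for current_note, next_note in zip(melody, melody[1:]):
        transitions[current_note].append(next_note)

def compose(seed_note, length=8):
    """Compose a new melody by repeatedly predicting a likely next note."""
    melody = [seed_note]
    for _ in range(length - 1):
        candidates = transitions.get(melody[-1])
        if not candidates:      # no data for this note: stop early
            break
        melody.append(random.choice(candidates))
    return melody

if __name__ == "__main__":
    print(compose("C4"))  # e.g. ['C4', 'E4', 'D4', 'C4', ...]
```

A trained network replaces the raw transition counts with learned patterns over much longer contexts, but the compose-by-chaining-predictions loop has the same shape.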
The company behind the stunt is AXA PPP healthcare, part of the AXA Group.
Not everyone takes such a playful view of machine intelligence. “I have exposure to the very cutting edge AI, and I think people should be really concerned about it,” Elon Musk told attendees at the National Governors Association summer meeting on July 15, 2017. “I keep sounding the alarm bell, but until people see robots going down the street killing people, they don’t know how to react, because it seems so ethereal.”
Others see a nearer-term problem. “The real safety question, if you want to call it that, is that if we give these systems biased data, they will be biased,” Google’s AI chief John Giannandrea told MIT Technology Review.
Giannandrea worries that as cloud-based AI becomes more accessible, it will become easier for bias to creep in. If the people in the driver’s seat lack the technical knowledge to assess the underlying data and algorithms for quality and bias, machine intelligence could actually increase the incidence of bad decisions.
“If someone is trying to sell you a black box system for medical decision support, and you don’t know how it works or what data was used to train it, then I wouldn’t trust it,” Giannandrea said.
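Giannandrea’s point can be made concrete with a toy example. Everything below is hypothetical: a “decision support” model trained on skewed historical records simply learns the skew and repeats it in its recommendations.

```python
# Made-up, minimal illustration of "biased data in, biased decisions out."
# The groups, records, and decision rule are all hypothetical; the only point
# is that a model which faithfully learns from skewed historical records will
# reproduce that skew when it makes recommendations.

# Hypothetical historical records: (group, past human decision) pairs.
historical_records = (
    [("group_a", "approved")] * 80 + [("group_a", "denied")] * 20 +
    [("group_b", "approved")] * 40 + [("group_b", "denied")] * 60
)

# "Training": learn the historical approval rate for each group.
approval_rate = {}
for group in ["group_a", "group_b"]:
    outcomes = [decision for g, decision in historical_records if g == group]
    approval_rate[group] = outcomes.count("approved") / len(outcomes)

def recommend(group):
    """Recommend approval when the learned historical rate exceeds 50%."""
    return "approve" if approval_rate[group] > 0.5 else "deny"

print(approval_rate)          # {'group_a': 0.8, 'group_b': 0.4}
print(recommend("group_a"))   # approve
print(recommend("group_b"))   # deny -- the historical skew carries through
```

Nothing in the code is broken in a narrow sense, which is exactly the problem Giannandrea describes: the bias lives in the training data, not in an obvious bug.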
“We haven’t achieved the whole dream, which is where AI has a conversation with you, and it is able to explain,” says Carlos Guestrin, a professor at the University of Washington. “We’re a long way from having truly interpretable AI.”
What “interpretable” means in the context of AI is simply that an explanation of how it works can be provided in a way that is rational and understandable to humans. However, cutting-edge machine learning, or deep learning, is heading in the opposite direction—towards greater complexity.
To some, what makes the promise of machine intelligence so enticing is that it can do what humans cannot. But that is also where the danger lies: it may be able to do so only by becoming too complex for humans to grasp.
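As a rough, hypothetical contrast: an interpretable model’s decision can be decomposed into contributions a person can read off directly, while a deep network’s output passes through many layers of learned weights with no comparably short explanation. Every feature, weight, and patient value below is invented for illustration.

```python
# Hypothetical interpretable model: a risk score that is a weighted sum of
# named features, so its "reasoning" can be read directly off the weights.
weights = {"age": 0.02, "blood_pressure": 0.01, "smoker": 0.5}

def risk_score(patient):
    return sum(weights[f] * patient[f] for f in weights)

def explain(patient):
    """Show how much each feature contributed to the final score."""
    return {f: round(weights[f] * patient[f], 2) for f in weights}

patient = {"age": 60, "blood_pressure": 140, "smoker": 1}
print(round(risk_score(patient), 2))  # 3.1
print(explain(patient))  # {'age': 1.2, 'blood_pressure': 1.4, 'smoker': 0.5}

# A deep network arrives at its output through millions of learned parameters
# and nonlinear layers; there is no equally short, human-readable
# decomposition, which is the growing complexity described above.
```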
Daniel Dennett, a philosopher and cognitive scientist at Tufts University, told MIT Technology Review: “I think by all means if we’re going to use these things and rely on them, then let’s get as firm a grip on how and why they’re giving us the answers as possible.”
Otherwise, the machines may become too smart for our own good.
“If it can’t do better than us at explaining what it’s doing,” he says, “then don’t trust it.”