Google has already spent months extensively testing a modified, self-driving Lexus RX450h SUV, but now it’s finally prepared to roll out its own purpose-built autonomous vehicle on public roads.
In a blog post on Friday morning, Google said it plans to test its purpose-built self-driving cars on the streets of Mountain View, California, this summer. The cars have already been tested on private roads.
Google promoted its autonomous cars as a step forward for safety, touting their potential to cut down on car accidents, 94 percent of which are caused by human error.
https://www.youtube.com/watch?v=uCezICQNgJU
However, autonomous cars are vulnerable to something that human-driven cars, for the most part, aren’t—hacking.
Machine Error
Google’s autonomous car uses a LIDAR system, short for “light detection and ranging,” to build a 3-D model of the world for navigation. The system works deterministically: given the same sensory inputs, it produces the same outputs.
In theory, this means that a self-driving car could be tricked into “thinking” that an object is closer or farther away than it actually is, with potentially disastrous consequences.
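To see why determinism cuts both ways, consider the time-of-flight arithmetic at the heart of any LIDAR unit: distance is the speed of light times the echo delay, divided by two. The sketch below is illustrative, not Google’s perception code; the names and numbers are invented, but the formula is standard physics.

```python
# Why LIDAR time-of-flight measurements are spoofable: the pipeline
# trusts whatever echo it receives. Names and numbers are illustrative.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(echo_delay_s: float) -> float:
    """Distance implied by a laser pulse's round-trip time."""
    return SPEED_OF_LIGHT * echo_delay_s / 2.0

# A real obstacle 30 meters away returns an echo after ~200 nanoseconds.
true_delay = 2 * 30.0 / SPEED_OF_LIGHT
print(f"genuine echo: {lidar_distance(true_delay):.1f} m")

# An attacker who fires a counterfeit pulse 100 ns earlier makes the same
# deterministic pipeline report an object only ~15 meters away.
spoofed_delay = true_delay - 100e-9
print(f"spoofed echo: {lidar_distance(spoofed_delay):.1f} m")
```

Because the output follows mechanically from the input, an attacker who controls the input controls the output, by exactly the amount they choose.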
A hack-proof autonomous car, like any supposedly invulnerable digital system, is a pipe dream. The primary advantages of self-driving cars (fewer crashes, less congestion) derive from their ability to talk to one another over a larger network.
“In order to have dramatic increase in density, they need to be cooperating,” said Ryan Gerdes, a researcher at Utah State University who specializes in autonomous vehicles. “You can’t have a system of full autonomous vehicles that don’t share information among each other.”
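In practice, that sharing means each car periodically broadcasting its state to its neighbors. The sketch below is a hedged illustration of what such a message might contain; the field names are invented, though real vehicle-to-vehicle systems do standardize similar messages (SAE J2735’s Basic Safety Message, for example). Note what’s absent: nothing in the message itself proves who sent it.

```python
# A minimal, hypothetical V2V status broadcast. Field names are invented;
# real systems use standardized formats such as SAE J2735's Basic Safety
# Message.
from dataclasses import dataclass
import json

@dataclass
class StatusBroadcast:
    vehicle_id: str
    position_m: float   # position along the roadway, in meters
    speed_mps: float    # current speed, meters per second
    intent: str         # e.g. "cruise", "brake", "merge"

    def to_wire(self) -> bytes:
        # Unauthenticated by design here: this is exactly the opening an
        # attacker on the network would exploit.
        return json.dumps(self.__dict__).encode()

msg = StatusBroadcast("AV-042", 1523.4, 24.8, "brake")
print(msg.to_wire())
```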
A troublemaker wouldn’t need to send a car veering into a wall or off a bridge to cause mayhem. Slightly speeding up or slowing down a single vehicle on a highway could trigger massive traffic jams, provided enough other self-driving cars were on the road.
“We modeled user behavior through a simulator, and it shows that an automated transportation system is brittle,” Gerdes said. In the model, other autonomous cars would increase the spacing between themselves and the renegade vehicle, which would have a cascading effect on the cars behind them.
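The dynamic Gerdes describes is easy to reproduce in miniature. The simulation below is not his model, just a toy sketch under simple assumptions (one lane, identical cars, a naive spacing controller): the lead car slows by 10 percent for two seconds, and the dip ripples back through every follower trying to restore its headway.

```python
# Toy single-lane platoon: car 0 leads; each follower adjusts its speed to
# restore a fixed headway to the car ahead. All parameters are invented.

N_CARS, STEPS, DT = 10, 600, 0.1   # ten cars, 60 simulated seconds
TARGET_GAP, GAIN = 20.0, 0.5       # desired headway (m), control gain
CRUISE = 25.0                      # about 90 km/h

pos = [-TARGET_GAP * i for i in range(N_CARS)]
vel = [CRUISE] * N_CARS
min_speed = [CRUISE] * N_CARS

for step in range(STEPS):
    # The "renegade" lead car slows by 10 percent for two seconds.
    vel[0] = 0.9 * CRUISE if 50 <= step < 70 else CRUISE
    for i in range(1, N_CARS):
        gap = pos[i - 1] - pos[i]
        # Naive spacing control: nudge speed to close or open the gap.
        vel[i] = max(0.0, vel[i] + GAIN * (gap - TARGET_GAP) * DT)
        min_speed[i] = min(min_speed[i], vel[i])
    for i in range(N_CARS):
        pos[i] += vel[i] * DT

# The disturbance reaches cars that never interacted with the renegade.
for i in (1, N_CARS // 2, N_CARS - 1):
    print(f"car {i}: slowest speed reached = {min_speed[i]:.2f} m/s")
```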
Even with human-driven cars, hacking becomes an ever more distinct prospect as more of the control system is automated in the name of safety. Engineers have already demonstrated how General Motors’ OnStar system leaves its vehicles susceptible to brake-jamming.
https://www.youtube.com/watch?v=oqe6S6m73Zw
Machine Complexity
For every pessimist there’s an equal and opposite optimist, and some in the automation industry see self-driving cars as an exception to the trend of ever more frequent security breaches.
An anonymous engineering consultant with field experience on self-driving cars has downplayed Gerdes’s concerns, arguing that actual autonomous cars are far more resistant to hacking than such research suggests.
“For autonomous vehicles, ‘reprogramming’ in a more sophisticated way like described there, is extremely difficult,” he wrote to Atlantic reporter Alexis C. Madrigal. “Having some intelligent speeding up and slowing down is difficult, because inside the modules are many fail-safe diagnostics called ‘rationality monitors’ that can detect a messed-up signal being fed to a sensor.”
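The consultant doesn’t describe how a rationality monitor works, but the underlying idea is a plausibility check on raw signals: readings that violate basic physics get rejected before they reach the controller. Here is a hedged sketch; the function, thresholds, and interface are all invented for illustration.

```python
# A hypothetical rationality monitor: reject sensor readings whose
# behavior is physically implausible. Thresholds are invented.

MAX_SPEED_MPS = 70.0    # no road vehicle plausibly exceeds this
MAX_ACCEL_MPS2 = 12.0   # beyond hard emergency braking or launch

def rational(prev_speed: float, new_speed: float, dt: float) -> bool:
    """Return False for readings that violate basic physics."""
    if not 0.0 <= new_speed <= MAX_SPEED_MPS:
        return False
    if abs(new_speed - prev_speed) / dt > MAX_ACCEL_MPS2:
        return False
    return True

# A spoofed wheel-speed value that jumps 30 m/s in a tenth of a second
# fails the check, even though it is in range on its own.
print(rational(prev_speed=25.0, new_speed=55.0, dt=0.1))  # False
print(rational(prev_speed=25.0, new_speed=25.5, dt=0.1))  # True
```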
The strongest line of defense against car hacking, the consultant argues, is the immense complexity of the self-driving software: no single person fully understands how all of it works.
“Everyone inside the company has their little area that they are experts in, but the knowledge is so spread out that it is really hard to pull this kind of thing off,” he wrote. “However, if it were to happen, it would require some coordinated use of the network vulnerabilities—which is a crude way of doing things, and is more likely to disable than to effectively control a car.”