Users of early Microsoft Windows personal computers feared the somber glow of the “Blue Screen of Death,” the infamous stop screen that signaled a fatal system crash. But what if your computer is driving you down the highway? Any BSoD moment in an autonomous vehicle might mean facing a far harsher crash altogether.
Because automated cars and trucks of one flavor or another, presumably piloted by reasonably road-tested and street-wise control systems, will hit the road in five years or so, it’s time for car makers to think about how to protect them from digital attack by hackers, according to the authors of a recent analysis published in IEEE Transactions on Intelligent Transportation Systems. The paper may be the first “investigation of the potential cyberattacks specific to automated vehicles, with their special needs and vulnerabilities,” wrote Jonathan Petit, Research Fellow at University College Cork’s Mobile and Internet Systems Laboratory in Ireland, and Steven E. Shladover, Research Engineer at the University of California, Berkeley, and Program Manager for California PATH (Partners for Advanced Transportation Technology).
The paper’s authors warn that the auto industry is unprepared for the impending threats against network-connected robot cars, which will exchange data via vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) dedicated short-range communications (DSRC). Though the external cooperative information will improve performance and safety, such access means network vulnerabilities for potential malefactors, they stated.
For their investigation, the collaborators focused on the three highest levels of automation within the SAE J3016 definitions of driving automation: conditional automation, high automation, and full automation, said Shladover, who served on the SAE International committee that had formulated the standards.
For conditional automation systems, the driver is expected to be able to resume vehicle control within a few seconds of an adverse event, but much can happen in a few seconds, which can mean up to 100 m (328 ft) of travel, the paper stated. With high- and full-automation systems, the vehicle must be brought to a safe (“minimal risk”) state even if the driver takes no action, placing a much higher burden on the system designer to manage any consequences of a cyberattack without compromising safety.
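To put that takeover distance in perspective, a quick back-of-the-envelope calculation (the highway speed of 120 km/h and the three-second takeover time are illustrative assumptions, not figures from the paper) reproduces the 100 m estimate:

```python
speed_mps = 120 * 1000 / 3600    # 120 km/h converted to meters per second (~33.3)
takeover_time_s = 3.0            # a "few seconds" for the driver to retake control
distance_m = speed_mps * takeover_time_s
print(round(distance_m))         # 100 meters traveled before the driver is back in the loop
```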
Connected vs. autonomous
The two researchers asked three questions: How can autonomous automated vehicles, those that are both independent and self-contained, not communicating with others around them, be attacked? How can cooperative automated vehicles be attacked? Finally, they considered the differences between the two.
“We did a threat analysis for automated vehicles, identifying the problems that a networked, connected vehicle might encounter plus those that a self-contained autonomous car might face,” Petit said, formulating a list of the riskiest scenarios and then seeking defense strategies. This means guarding gateways to ITS (intelligent transportation system) networks against penetration and any takeover of road signs/sensors and community maps, as well as fending off attacks aimed at GPS and navigation devices, odometer and acoustic sensors, and common obstacle-detection and -tracking sensors including radar, lidar, cameras, and machine-vision systems.
The authors noted that threats to autonomous vehicles are potentially more damaging because a driver who is not paying attention to the road may not be available to provide independent, uncorrupted information or to override a malfunctioning system within the critical few seconds. They cited recent, as yet unpublished, research by General Motors that “has shown that drivers largely disengage from the driving task and monitoring of the driving environment after continuous intervals of fully automated driving ranging from 5 to 30 min, becoming almost totally dependent on the automation system.”
Self-driving systems face a tough signal-detection problem, Shladover said. False positives would annoy drivers and false negatives would endanger them, so both must be avoided, but the system’s “success rates for both of them have to be way out on the tails, which is hard to do.” Some hazardous conditions are very hard to detect, he noted, such as damaging potholes amid heavy traffic.
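The tradeoff Shladover describes can be illustrated with a toy detection model. In this sketch (all distributions, thresholds, and numbers are hypothetical, not from the paper), benign and hazardous sensor readings produce overlapping score distributions; raising the decision threshold suppresses false alarms but lets more real hazards slip through:

```python
import math

def gaussian_tail(x, mu, sigma):
    """P(X > x) for a normal distribution: the upper-tail probability."""
    return 0.5 * math.erfc((x - mu) / (sigma * math.sqrt(2)))

# Hypothetical score distributions: benign readings vs. real hazards.
BENIGN_MU, HAZARD_MU, SIGMA = 0.0, 3.0, 1.0

def error_rates(threshold):
    # Benign reading scored above threshold -> false alarm (spurious braking).
    false_positive = gaussian_tail(threshold, BENIGN_MU, SIGMA)
    # Real hazard scored below threshold -> missed detection.
    false_negative = 1.0 - gaussian_tail(threshold, HAZARD_MU, SIGMA)
    return false_positive, false_negative

for t in (1.0, 1.5, 2.0, 2.5):
    fp, fn = error_rates(t)
    print(f"threshold={t:.1f}  false_pos={fp:.4f}  false_neg={fn:.4f}")
```

Moving the threshold only trades one error for the other; getting both rates “way out on the tails” requires better sensors or fused evidence, not a cleverer threshold.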
The researchers said that future automated vehicles will probably involve more and different sensors; in any case they expect that data-fusion software will ultimately play an important role in ensuring safety by determining the true state of the vehicle and its surroundings. Smart control algorithms will combine the data received from all sources and fact-check the collected information.
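One common way such fusion software can combine and fact-check sensor data is inverse-variance weighting with a consistency check. The sketch below is an illustrative simplification, not the authors' design; the sensor readings and thresholds are hypothetical:

```python
def fuse(estimates):
    """Inverse-variance weighted fusion of independent position estimates.

    estimates: list of (value, variance) pairs, one per sensor.
    Returns the fused estimate and its (smaller) combined variance.
    """
    weights = [1.0 / var for _, var in estimates]
    fused = sum(v * w for (v, _), w in zip(estimates, weights)) / sum(weights)
    return fused, 1.0 / sum(weights)

def inconsistent(estimates, n_sigmas=3.0):
    """Crude fact-check: flag any sensor whose reading deviates from the
    fused value by more than n_sigmas of its own stated uncertainty."""
    fused, _ = fuse(estimates)
    return [i for i, (v, var) in enumerate(estimates)
            if abs(v - fused) > n_sigmas * var ** 0.5]

# A wildly off GPS fix (index 0) disagrees with lidar and odometry:
readings = [(115.0, 4.0), (100.2, 1.0), (99.8, 0.25)]  # (position_m, variance)
print(inconsistent(readings))  # -> [0], the GPS estimate is flagged
```

The design choice here is that each sensor is judged against the consensus of all of them, so a single spoofed source stands out as long as the others remain honest.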
The paper categorizes cyber-threats in terms of three alternative tactical approaches: passive snooping versus active manipulation, signal jamming versus sending false messages (or spoofing), and attacks targeting single vehicles versus those exploiting a network of connected vehicles.
“An independent, autonomous car may not know that it’s been attacked,” Shladover said. “Stealthy attack is much more difficult” to detect, he noted, and particularly problematic: if the vehicle control doesn’t know it has bad data, a road crash could be unavoidable.
Two principal threats to solo robot cars stood out, the paper said: blinding cameras or inserting fake video into vision systems, and jamming or spoofing GPS signals.
“Cameras are mobile eyes,” Petit said. “They’re hacked easily. You could feed a system recorded images or mess up the cameras by playing with the brightness using something as simple as a laser pointer. And it doesn’t take a great amount of resources to do GPS spoofing,” he said. GPS jamming equipment is available for around $20, while more expensive GPS spoofers replicate satellite signals and pass false locations, essentially by fouling the receiver’s drift correction so the target’s position fix wanders off course.
Medium-level risks to single autonomous cars are posed by electromagnetic pulses (EMPs) that could shut down the electronics altogether, or by environmental confusion inflicted on radar and lidar scanners.
Automated cars will probably be connected in mesh networks to enable more efficient traffic management, so bad data could end up being passed among vehicles and through the network. Probably the biggest threat is the injection and propagation of incorrect navigation signals or safety messages that generate wrong reactions (such as spurious braking) that can be life-threatening for all in the vicinity.
The other high-level threat to connected automated vehicles is the shared map database: the locally stored dynamic maps are susceptible to map poisoning. This attack differs from map poisoning against a solo autonomous vehicle in that it does not target an online server that collects floating car data.
“Shared networks provide all kinds of attack routes and combinations of attacks,” Petit said. Against these threats the authors propose establishing authentication systems, which might be based on encryption, say, a public key infrastructure, and “misbehavior detection systems,” which use algorithms to look for inconsistencies, judge when something in a car’s activity just is not right, and flag it. The network thus develops a constantly updated list of untrustworthy, revoked data sources.
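A misbehavior detector of the kind the authors describe can be as simple as a physical-plausibility check on incoming V2V reports. This is a toy sketch (the speed bound, message format, and revocation policy are all assumptions for illustration, not details from the paper):

```python
MAX_SPEED_MPS = 70.0  # generous upper bound for highway travel (assumed)

class MisbehaviorDetector:
    """Toy plausibility check: flag a sender whose successive position
    reports imply a physically impossible speed, then revoke its trust."""

    def __init__(self):
        self.last_report = {}   # sender_id -> (time_s, position_m)
        self.revoked = set()    # the running list of untrusted sources

    def accept(self, sender_id, time_s, position_m):
        """Return True if the report is accepted, False if rejected."""
        if sender_id in self.revoked:
            return False
        if sender_id in self.last_report:
            t0, p0 = self.last_report[sender_id]
            dt = time_s - t0
            if dt > 0 and abs(position_m - p0) / dt > MAX_SPEED_MPS:
                # Implausible jump: distrust this sender from now on.
                self.revoked.add(sender_id)
                return False
        self.last_report[sender_id] = (time_s, position_m)
        return True

d = MisbehaviorDetector()
d.accept("carA", 0.0, 0.0)       # accepted
d.accept("carA", 1.0, 30.0)      # accepted: 30 m/s is plausible
print(d.accept("carA", 2.0, 500.0))  # False: 470 m/s jump, sender revoked
```

A real system would check many more consistency conditions and distribute revocations across the network, but the core idea is the same: compare what a source claims against what is physically and mutually possible.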
Only by cladding robot cars in overlapping security measures, a digital defense in depth, will OEMs have a chance of foiling the hacking and external tampering to come.