The soaring role of software has already fostered many changes for automakers, but those transitions may pale in comparison to the challenges expected when artificial intelligence is employed in the race to autonomous driving. Machine learning cedes even more control to software, raising myriad design and testing issues—while also provoking legal and ethical questions.
Automakers and Tier 1s alike are embracing AI's potential, saying it’s needed to analyze the many variables that self-driving cars must understand. Ford invested $1 billion in startup Argo AI. Toyota Research Institute will devote $1 billion to AI development over five years.
When the Bosch Center for Artificial Intelligence was created, executives said “ten years from now, scarcely any Bosch product will be conceivable without artificial intelligence.” These investments are needed because programmers can’t hand-code the software needed for vehicles that navigate without human control.
“Most current advanced driver-assistance systems based on radar and cameras are not capable of accurately detecting and classifying objects – such as cars, pedestrians or bicycles – at a level required for autonomous driving,” said Visteon President and CEO Sachin Lawande. “We need to achieve virtually 100% accuracy for autonomous driving, which will require innovative solutions based on deep machine-learning technology.”
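Lawande's accuracy target implies a concrete metric. A minimal sketch, using entirely made-up detections, of how a perception stack's classification accuracy might be scored against ground-truth labels:

```python
def classification_accuracy(predictions, labels):
    """Fraction of objects (cars, pedestrians, bicycles, ...) that the
    perception stack classified correctly against ground truth."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical frame: labeled ground truth vs. detector output.
truth = ["car", "pedestrian", "bicycle", "car", "pedestrian"]
preds = ["car", "pedestrian", "car",     "car", "pedestrian"]
accuracy = classification_accuracy(preds, truth)  # 4 of 5 correct -> 0.8
```

In practice the bar Lawande describes is far stricter, with per-class accuracy measured over millions of frames and adverse conditions, but the scoring principle is the same.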
Although AI’s been heavily touted, deploying it won’t be easy. The technical issues are many, and its role in shaping autonomous-driving principles also means social and regulatory issues will be key factors in its acceptance.
Critics question whether anyone will be able to find all the potential bugs in AI-reliant software to make it live up to the hype of accident-free roadways. Developers counter that AI can reduce accidents and related injuries, though those gains will be hard to quantify.
“We can’t promise that self-driving cars won’t cause accidents,” said Martin Richter, Vice President, Vehicle Systems at IAV Automotive Engineering. “But we can make sure that these vehicles will kill fewer people than human drivers. Companies will need to keep statistics, looking at the number of accidents to determine if they’re developing good systems. Companies will have to prove that in so many miles, vehicles had this number of accidents. Companies and regulators will have to define acceptable levels for accidents.”
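Richter's "so many miles, this number of accidents" criterion can be made concrete statistically. A minimal illustration, with entirely hypothetical figures, that normalizes accidents per million miles and compares a rough upper confidence bound against a human-driver baseline:

```python
import math

def accidents_per_million_miles(accidents: int, miles: float) -> float:
    """Observed accident rate, normalized per million miles driven."""
    return accidents / (miles / 1_000_000)

def upper_bound_rate(accidents: int, miles: float, z: float = 1.645) -> float:
    """Approximate one-sided upper confidence bound on the rate,
    treating the accident count as Poisson (normal approximation)."""
    upper_count = accidents + z * math.sqrt(accidents)
    return upper_count / (miles / 1_000_000)

# Hypothetical fleet data: 3 accidents over 10 million autonomous miles.
fleet_rate = accidents_per_million_miles(3, 10_000_000)
fleet_upper = upper_bound_rate(3, 10_000_000)

# Hypothetical human-driver baseline: 2 accidents per million miles.
human_baseline = 2.0

# The fleet clears the bar only if even its upper bound beats the baseline.
acceptable = fleet_upper < human_baseline
```

The numbers and the simple Poisson bound are illustrative only; the real regulatory question Richter raises, what counts as an acceptable level, is a policy choice, not a formula.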
The difficulty of defining performance levels for software that changes its responses over time is compounded by the need for cloud computing and over-the-air (OTA) updates. As vehicles learn, strategists also have to figure out how to share that learning throughout fleets. Many observers feel that individual vehicles shouldn’t be allowed to alter their behavior without some form of authorization.
“When it comes to safety-relevant features, vehicles should not be allowed to learn by themselves,” said Demetrio Aiello, Head of Artificial Intelligence and Robotics at Continental. “Rather, each vehicle should forward its experiences to a back-end system for collection. These data can then be used to generate—and validate—new and more performant algorithms that can be distributed to all the vehicles via OTA updates. Therefore, during the vehicle lifetime safety can only be increased and not compromised.”
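Aiello's collect-validate-distribute loop might be structured along these lines. The class name, scoring placeholder, and threshold below are illustrative assumptions, not Continental's design:

```python
from dataclasses import dataclass, field

@dataclass
class BackEnd:
    """Sketch of the pattern Aiello describes: vehicles upload
    experiences, the back end retrains and validates, and only a
    model that passes validation is released over the air."""
    experiences: list = field(default_factory=list)
    released_version: int = 1

    def collect(self, experience: dict) -> None:
        # Vehicles never learn locally; they only forward data.
        self.experiences.append(experience)

    def retrain_and_validate(self, min_score: float = 0.99) -> bool:
        # Stand-in for training a candidate model and scoring it
        # on a frozen safety-validation suite.
        score = self._validation_score()
        if score >= min_score:
            self.released_version += 1  # eligible for OTA rollout
            return True
        return False                    # keep the current model

    def _validation_score(self) -> float:
        # Placeholder: more fleet experience yields a better score.
        return min(1.0, 0.95 + 0.01 * len(self.experiences))
```

The gate before `released_version` is incremented is the point of the design: a candidate that fails validation never reaches the fleet, so, as Aiello puts it, safety can only be increased.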
Remote computing will be a critical aspect of any AI-based system, following the trend in commercial environments to process AI using cloud computing. A growing number of automakers are setting the stage by using the cloud for complex tasks like voice recognition.
The combination of autonomy and cloud computing makes security a primary design concern. AI may go beyond its role in driving decisions and help in the battle to prevent hackers from tapping into cloud connections to control autonomous cars or steal information.
“Connectivity will enable developers to continuously upgrade software and also to monitor the performance of automotive systems,” said Upton Bowden, director, advanced technology planning at Visteon Corp. “Clearly, the connection also brings about the requirement for internet security protocols to make these connected vehicles ‘hack proof.’ Artificial intelligence will also play a role in detecting malicious hacks and in training vehicles on how to block threats.”
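One common form of the hack detection Bowden describes is rate-based anomaly detection on vehicle or cloud traffic. A minimal sketch, assuming a simple learned-baseline approach; the detector and thresholds are illustrative, not Visteon's implementation:

```python
import statistics

class RateAnomalyDetector:
    """Learn the normal rate of a message type during known-good
    operation, then flag traffic whose rate deviates sharply, as
    it would in a message-flooding attack."""

    def __init__(self, threshold_sigmas: float = 3.0):
        self.threshold = threshold_sigmas
        self.mean = 0.0
        self.stdev = 0.0

    def fit(self, normal_rates: list) -> None:
        # Train on message rates observed during normal driving.
        self.mean = statistics.mean(normal_rates)
        self.stdev = statistics.stdev(normal_rates)

    def is_anomalous(self, observed_rate: float) -> bool:
        # A rate many standard deviations from normal is suspicious.
        return abs(observed_rate - self.mean) > self.threshold * self.stdev
```

A production intrusion-detection system would also inspect message content and sequencing, but the train-on-normal, flag-the-deviant structure carries over.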
AI will bring many benefits, but they won’t come without challenges. Testing and validating software is already a huge chore for developers. Understanding how AI affects reliability over time will make that task even more difficult; in fact, AI may become integral to testing the software created by other AI systems.
“It’s a huge challenge to test something that changes its behavior,” said Stephan Tarnutzer, Vice President of Electronics at FEV North America. “When you bring in AI, you have to bring in ways to ensure that in two years, the system still has the same outcome. Highly piloted cars can’t be tested with traditional techniques; testing also needs to go to AI very soon.”
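Tarnutzer's requirement, that an updated system still produce the same outcomes years later, is essentially regression testing against a frozen scenario suite. A minimal sketch with a hypothetical braking policy:

```python
def regression_check(model, frozen_scenarios, expected_outcomes):
    """Replay a frozen scenario suite and report every scenario
    where the updated model's decision differs from the recorded
    one. 'model' is any callable mapping scenario -> decision."""
    mismatches = []
    for scenario, expected in zip(frozen_scenarios, expected_outcomes):
        actual = model(scenario)
        if actual != expected:
            mismatches.append((scenario, expected, actual))
    return mismatches

# Hypothetical policy: decide from distance to an obstacle (meters).
baseline = lambda d: "brake" if d < 30 else "cruise"
scenarios = [10, 25, 40, 80]
golden = [baseline(d) for d in scenarios]   # recorded outcomes

# An "updated" model whose behavior drifted: brakes only below 20 m.
updated = lambda d: "brake" if d < 20 else "cruise"
drift = regression_check(updated, scenarios, golden)
# drift lists the scenario where behavior changed (d = 25).
```

An empty `drift` list is what "the same outcome" means operationally; any entry is a behavior change that must be justified and re-validated before release.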
Determining the risks associated with these technologies will be equally challenging. AI programs often involve many systems. The complex, multi-disciplinary aspects of autonomous driving pose major challenges to those who must ensure that the benefits are gained without any undesired side-effects.
“A key role of the developer will be to develop the skill to apply the industry’s risk assessment tools in a complex, multi-dimensional environment where subsystems are interacting with other subsystems, and where vehicles are communicating between vehicles, to external infrastructure, and to the cloud,” said James Schwyn, Chief Technical Officer at Valeo North America. “Developers also need to stay abreast of the latest developments in hardware security.”
Though AI is a new technology in the fledgling autonomous-driving field, some companies have already used it in speech recognition and advanced driver-assistance systems. Developers of piloted and autonomous vehicles are likely to begin employing it in fairly controlled applications, then expand into more areas. As lidar and additional sensors are deployed, AI’s ability to accurately recognize objects will improve.
“A key technological challenge is improving robustness,” said John Leonard, Autonomy Director for the Toyota Research Institute. “Current AI systems can achieve high performance in relatively narrow domains and in favorable conditions, but can encounter difficulties when operated in challenging environments.”
Automakers could be among the leaders in deploying AI in free-standing, high-reliability environments. Development tools aren’t yet available off the shelf, so design teams typically have to pull elements from a range of sources.
“Some comes from universities, some from public domain, there are also freeware and open code-sharing tools,” FEV's Tarnutzer said. “You probably need a combination of the three. In nearly all cases, there’s a lot of software integration, calibration and a whole lot of testing.”