When Gill Pratt, the CEO of Toyota Research Institute, the carmaker’s artificial intelligence (AI) lab in Menlo Park, CA, mounted the CES 2017 stage earlier this month, he delivered a reality check about automated driving.
"We’re not even close to Level 5 autonomy, which the SAE defined as full robotic control everywhere, at any time, in any conditions," Pratt told the audience. "We have many years of machine learning research to do to achieve Level 5.”
Later, in an interview with Automotive Engineering, Pratt credited recent steady progress to the fact that most driving is relatively easy: "we do most of it without half thinking," he said. True self-driving vehicles, however, will need "trillion-mile reliability" and the elusive ability to handle "corner cases" in their automated search for the best solutions. These are the difficult, rare situations that arise outside normal operating parameters.
He likened the robo-driving skills required in the future to those of trained professional airline pilots; today's driving capabilities are closer to those of general-aviation pilots.
On the other hand, SAE Level 4 autonomy, in which the car operates fully automatically only at limited speeds and in certain operational areas, weather conditions or times of day, "is coming much faster," the former MIT professor and DARPA director told the CES audience. "In fact, it's very likely we'll get to Level 4 within the decade."
Warning is 'hard to guarantee'
Pratt then highlighted a key challenge to Level 3 and Level 2 operations that his all-star team of AI scientists is now studying. "If the autonomous car needs to hand off control to a human driver in Level 3 driving, it must ensure that it gives sufficient warning, a 'request to intervene,' to the driver, who may not be paying attention at the time," he explained.
Perhaps even more challenging, he believes, is the requirement that the Level 2 human driver always supervise the operation of the autonomy, "taking over control when the autonomy fails to see danger ahead." To give a disengaged driver 15 s of warning at 65 mph (105 km/h), the system must spot trouble ahead at a distance equivalent to five football fields. Such a feat requires predicting the hazard before it has even appeared.
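A quick back-of-the-envelope check confirms the five-football-fields figure. The sketch below uses only the article's numbers plus an assumed 100-yard (91.44 m) field length; the constants and variable names are illustrative, not anything from TRI:

```python
# Sanity check of the warning-distance arithmetic quoted above.
# Field length is an assumption (100-yard playing field, end zones excluded).
MPH_TO_MS = 0.44704            # metres per second per mile per hour
FOOTBALL_FIELD_M = 91.44       # 100 yards in metres

speed_ms = 65 * MPH_TO_MS              # ~29.1 m/s at 65 mph
warning_distance_m = speed_ms * 15     # ground covered during a 15 s warning

print(f"{warning_distance_m:.0f} m, about "
      f"{warning_distance_m / FOOTBALL_FIELD_M:.1f} football fields")
```

At 65 mph a vehicle covers roughly 436 m in 15 s, close to five 100-yard fields, which is why the system would have to flag trouble long before a human supervisor could see it.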
“That’s extremely hard to guarantee,” Pratt asserted, “and unlikely to be achieved soon. In fact, it is possible that Level 3 may be as difficult to accomplish as Level 4.”
Further, tests show that “the longer drivers disregard control of the vehicle, the longer it takes to regain control,” he noted. Psychologists who studied this “breakdown of vigilance” in WWII radar operators call the issue the “vigilance decrement.”
Pratt asked the CES audience if, over a two-hour road trip, they would be likely to remain vigilant for a possible handoff of the Level 2 car’s autonomy. “Does this mean that Level 2 is a bad idea? Some companies have already decided to skip Levels 2 and 3,” he noted.
The TRI team has uncovered earlier psychological studies hinting that certain activities (texting while driving not among them) seem to reduce the vigilance decrement. The research found, he said, that some "mild" secondary tasks may actually help maintain situational awareness. How, for example, do long-haul truck drivers maintain comparatively good safety records? Perhaps, Pratt surmised, it is because they talk on two-way radios and scan the road for speed traps.
Expert opinion is divided, Pratt declared. He reckons that an enhanced version of Toyota's Yui driving assistant, which recently debuted in the Concept-i vehicle (http://articles.sae.org/15202/), might provide the kind of interaction that could help maintain a driver's supervisory attention over time.
To help investigate these and other technical challenges, the TRI leader said that Toyota Motor Corp. is doubling the size of the 100-member team he assembled a year ago to conduct the $1-billion collaborative program, which includes researchers from Stanford University, MIT and the University of Michigan.