It’s often said that communication is the key to trust. To build trust, we humans need to communicate our intentions honestly. But in any relationship, communication is complicated, and two parties often draw different conclusions from the same message.
Now shift the relationship from person-to-person to person-to-automated vehicle with “intentions” that may be far more opaque. That’s the challenge facing Dr. Melissa Cefkin and her colleagues at Nissan—and every other company involved in autonomous-vehicle development—as we move into the next era of mobility.
Although we traditionally think of vehicle development as the province of engineers and designers, the promise of automated driving has drawn a plethora of new skill sets into the process. Along with lawyers, ethicists and data scientists, there are anthropologists like Dr. Cefkin, a principal scientist and design anthropologist at Nissan’s Silicon Valley research center. She and her team are studying how people perceive vehicles that don’t have human drivers, and how those vehicles will coexist with people in future cities.
In a busy subway station, throngs of people seem to move seamlessly, but the flow depends on constant small signals: quick glances, microexpressions on faces, the turn of a shoulder as someone slips through the crowd. When you cross a busy street, a brief exchange of glances with a driver may be all it takes to judge whether it’s safe to go, or whether it’s better to wait and let the car pass.
Humans are remarkably adaptable and sensitive to the nuances that make society work. It’s something we learn from infancy onward, though the process is far from flawless. Automated vehicles will have to join this same social exchange. Regardless of whether anyone is riding inside, these machines will have to signal their intentions to other vehicles and pedestrians, and in turn read the signals of every other entity in the driving environment.
As we have hopefully learned, adding technology to any ecosystem typically introduces a range of new challenges. A stroll down a Manhattan street bombards people with the sights and sounds of traffic and personal interactions. In a world of autonomous electric vehicles, both the soundscape and the visual landscape change dramatically, and people could easily be overwhelmed by the new stimuli.
“People will have to adapt and change to the new visual cues they must interpret,” said Cefkin. “They need to understand quickly if the car has seen me, in order to build the necessary trust.”
Nissan and Toyota, to name two, have shown concept automated vehicles that use a variety of external-display techniques designed to deliver these messages. Nissan’s IDS Concept, for example, has a digital signboard in the windshield that displays messages to other road users. But if every manufacturer goes its own way with these feedback systems, pedestrians will find it far more difficult to interpret an automated vehicle’s likely behavior.
“I’m personally committed to developing harmonization,” Cefkin added, referring to these signals. While preliminary discussions on standards have begun, it’s still too early to lock much of anything down.
Another approach Cefkin highlights is motion cues. For all the perception limitations humans have, it appears “the most expressive thing about vehicles is their motion.” People can detect changes in acceleration, for example, that give clues to intent.
In the two years since Cefkin joined Nissan, much of the public effort around automation has focused on the core technologies of perception, mapping and control, but her work on these feedback signals is just as important to deployment.
Done wrong, communication between people and automated vehicles “could be most profound with mistrust and discomfort” that kills adoption before it can really take hold, Cefkin warns.
With the untold billions of dollars invested—and still to be invested—in autonomous-vehicle development, I doubt anybody wants that.