Kathy Winters has been re-introducing herself in auto industry circles lately. Last summer, the well-known Delphi engineering executive joined Intel as General Manager of the Automated Driving Solutions Division (ASD)—the chip maker's fast-growing group dedicated to advanced driver assistance systems (ADAS) and automated-vehicle technology. Soon after, Automotive Engineering and Winters re-connected for a discussion about automated driving's need for intense data-processing capability.
With the increasing use of processors in the vehicle and the need for more processing power, is Intel now officially a Tier 1 in some cases?
We’re still a Tier 2 but kind of a ‘hybrid’—we’re actually a Tier 2 development partner because we’re helping to architect the system. If you look at our engagement with BMW for example, we’re very ‘hands-on,’ working more directly with them and (machine-vision specialist) Mobileye. There will always be a Tier 1 in that mix, I would say, doing the integration. So we’re bridging a gap.
The biggest opportunity for us is to do both the in-vehicle piece—which requires heavy data processing, filtering and transmission for the ADAS sensor fusion—and to understand the 5G connection in the vehicle and then really put in that ‘data center’ with the learnings and apps that have to be there.
The data center will be understanding where the cars around you are, sending out traffic information and helping to optimize road use at the best times.
The ‘data center’ will be in the cloud?
Correct. There is an enormous amount of in-vehicle data that’s coming off the radar, vision systems, LiDAR and from the V2V connection—from other vehicles. The vehicle when it’s in automated mode needs to react and path-plan, executing split-second decision-making. That has to come from the cloud and it involves communicating with other vehicles, all with minimal latency. We need a super-fast computing cloud, so we’ll need 5G: we need the ‘pipeline’ and the speed.
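To put "minimal latency" in concrete terms, a quick back-of-the-envelope calculation (illustrative arithmetic, not figures from the interview) shows how far a vehicle travels while waiting on a network round trip:

```python
# Illustrative only: metres a vehicle covers during a given network latency.
# This is simple kinematics, not data from Intel or the interview.

def distance_travelled(speed_kmh: float, latency_ms: float) -> float:
    """Metres covered at speed_kmh during latency_ms of waiting."""
    return speed_kmh / 3.6 * latency_ms / 1000.0

# At 120 km/h (~33 m/s), each 10 ms of round-trip delay costs about a third
# of a metre of travel before the vehicle can act on the response.
print(round(distance_travelled(120, 10), 2))  # -> 0.33
```

Numbers like these are why the discussion keeps returning to 5G and edge-computing "pipelines" rather than conventional cloud round trips.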
The data center isn’t an off-the-shelf technology solution yet, but do you expect it to be?
Nothing’s off-the-shelf in this space yet. We want to get to a really well-defined platform, potentially something that could be (widely) used and not require everybody in the industry to have to reinvent their own.
The subject of latency and speed to and from the cloud is getting a lot of discussion among ADAS engineers.
Decision processing may have to be less than a millisecond. The vehicle will also need the ability to upgrade and upload new software. Over-the-air (OTA) is driving a lot of memory and storage. Do you keep the old load and the new load in parallel, or wait to completely flash-out a new engine controller? Now that OTA is a reality, how do you partition it and how much storage and memory do you need onboard to make the appropriate upgrades?
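The "old load and new load in parallel" option Winters describes is commonly called an A/B (dual-bank) update scheme. The sketch below illustrates the idea under stated assumptions; the names (`Slot`, `apply_update`, `self_test`) are hypothetical, not any real OTA framework's API:

```python
# Minimal sketch of an A/B ("dual-bank") OTA scheme: the ECU keeps the old
# and new software loads in parallel slots, flashes the standby slot, and
# switches over only if the new load validates. All names are illustrative.

from dataclasses import dataclass

@dataclass
class Slot:
    image_version: str
    valid: bool = False  # set True only after a successful self-test

def self_test(slot: Slot) -> bool:
    # Placeholder for real checks: checksum, boot counter, watchdog, etc.
    return slot.image_version != ""

def apply_update(active: Slot, standby: Slot, new_version: str) -> Slot:
    """Flash the standby bank, then switch only if it validates.

    The old load in `active` is never touched, so a failed update
    falls back to it instead of bricking the controller.
    """
    standby.image_version = new_version
    standby.valid = self_test(standby)
    return standby if standby.valid else active

bank_a = Slot("v1.0", valid=True)   # currently running load
bank_b = Slot("")                   # standby bank
active = apply_update(bank_a, bank_b, "v2.0")
print(active.image_version)  # -> v2.0 (would remain v1.0 on a failed self-test)
```

The design choice Winters raises is visible in the sketch: safety comes from keeping both loads resident, which roughly doubles the flash storage the ECU must carry onboard.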
What’s your view of the SAE Level 3 and driver re-engagement debate—when and how the automated car hands back control to the human?
Today Intel and others are focused on Levels 2, 3, 4 because active safety is and will continue to be really big business. But some folks are being more disruptive and going straight to the end game of full autonomy. The mobility-on-demand guys want to get the human driver out—that’s what their business case is focused on. They’re the ones driving the pace to Level 4 faster than the folks who are incrementally ratcheting up safety on the way to Level 4. I see a place for both.
This revolution won’t happen all at once.
There will be a very interesting dynamic on our roads during the next 10 years, with traditional car owners driving fully manually in traffic with Level 2 and 3 vehicles as well as robo-fleet vehicles. It will take years to push those older Level 0 vehicles out of the car parc. But all the ‘Smart City’ initiatives are really going to drive more and better data and use cases for our development going forward.