The race to gain a foothold in the emerging autonomous vehicle market continues to attract more players. Mentor Graphics Embedded Systems Division is joining the game, introducing a system that captures raw sensor data and makes driving decisions.
Mentor’s new platform captures and fuses raw data from radar, lidar, vision and other sensors, then decides whether to turn, brake or take other actions. The DRS360, unveiled at the SAE WCX17 in Detroit, is aimed primarily at SAE Level 5 autonomous vehicles, though it can also be used for advanced driver assistance systems (ADAS).
The hardware includes Xilinx FPGAs and either x86- or ARM-based microcontrollers. It runs Linux, building on the company’s claims of leadership in automotive Linux.
Unlike many safety systems, Mentor’s platform works directly on raw sensor data. Many companies add microcontrollers to sensors to do elementary processing before data goes onto the network, which reduces bandwidth requirements and lightens the workload for the central processing modules.
Mentor engineers contend that it’s more efficient to stream raw data, since networks like Ethernet can meet even the demands of many high-resolution sensors. Eliminating intelligence within sensors can save both time and money, especially in sensor-laden autonomous vehicles.
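A back-of-envelope calculation illustrates the bandwidth argument. The sensor parameters below are assumptions for illustration, not figures from Mentor: even an uncompressed 1080p, 30 fps, 12-bit camera stream stays under 1 Gbit/s, within reach of multi-gigabit automotive Ethernet links.

```python
# Back-of-envelope check with assumed sensor parameters (not from the
# article): raw data rate of a 1920x1080, 30 fps, 12-bit camera.

def raw_bandwidth_gbps(width, height, fps, bits_per_pixel):
    """Uncompressed sensor data rate in gigabits per second."""
    return width * height * fps * bits_per_pixel / 1e9

rate = raw_bandwidth_gbps(1920, 1080, 30, 12)
print(f"{rate:.2f} Gbit/s")  # prints "0.75 Gbit/s"
```

Higher-resolution sensors scale this linearly, which is why the raw-streaming approach leans on fast in-vehicle networks rather than per-sensor processors.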
“In adaptive cruise control, the radar has a processor that filters out data; the important data is then sent to a system that decides whether it needs to brake,” explained Glenn Perry, General Manager of the Mentor Graphics Embedded Systems Division. “Adding a processor in the sensor induces latency and adds to the bill of materials. When you have all the lidar, radar, cameras needed for Level 5, I’m not sure this works. It will be expensive and consume an extraordinary amount of compute power.”
The Mentor module, which has a power budget of under 100 W, actually improved performance in a recent test in which more sensors were added. Technicians started with a single sensor when running an object-classification algorithm against a pedestrian, a bicycle and a vehicle, then added more sensor inputs. When input from complementary sensors was combined, it took less compute power to analyze the data.
“With one sensor, we were at an 85% CPU load with a classification time of about 600 milliseconds and a confidence rating of 65%,” Perry said. “When we added radar and lidar, the confidence level rose, the CPU load went down to 55% and the classification time was one millisecond.
“We were surprised how much the CPU load dropped; it was counterintuitive to stream in more gigabits of data and see a decline,” he noted.
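One way to see why added sensors can raise confidence is to treat each sensor as an independent source of evidence: the fused probability of missing an object is the product of the per-sensor miss probabilities. The sketch below is purely illustrative, not Mentor’s fusion algorithm; the 65% single-sensor confidence comes from the article, while the radar and lidar figures are hypothetical.

```python
# Illustrative independent-evidence fusion (not Mentor's algorithm).
# The 0.65 camera confidence is from the article; the radar and lidar
# confidences below are hypothetical.

def fuse_confidences(confidences):
    """Fuse independent detection confidences: the combined miss
    probability is the product of the individual miss probabilities."""
    miss = 1.0
    for p in confidences:
        miss *= (1.0 - p)
    return 1.0 - miss

camera_only = fuse_confidences([0.65])
fused = fuse_confidences([0.65, 0.55, 0.60])  # hypothetical radar, lidar

print(f"camera only:        {camera_only:.2f}")  # prints "0.65"
print(f"camera+radar+lidar: {fused:.2f}")        # prints "0.94"
```

Under this independence assumption, three mediocre sensors outperform any one of them, which is consistent with the direction of the result Perry describes, though it says nothing about why the CPU load fell.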
Performance won’t be the only factor that determines whether companies buy into Mentor’s concept. Business issues driven by Tier 1s as well as OEM groups will play a key role.
Mentor’s architecture relies on a powerful centralized computer, in contrast to today’s distributed-intelligence architectures. Many autonomous architectures also employ a centralized controller, but one that relies heavily on pre-processed inputs. Amin Kashi, Mentor’s ADAS Director, contends that this is driven by business rationale, not technical efficiency.
“There’s been a resistance to consolidation, more due to organizational structures and supply chain issues,” Kashi said. “That said, there’s been some consolidation in infotainment and in-vehicle infotainment systems, especially by the Chinese, who don’t care much about structures.”
Openness is another plus on the business side. Some suppliers offer only a black box, so it’s difficult to alter hardware or software. Mentor will let OEMs tweak algorithms and hardware designs.
“OEMs feel ADAS is an area of differentiation, but if they can only get a black box, it’s difficult to differentiate. With an open platform, they can make alterations to differentiate their offerings,” Perry said.