Renesas & StradVision Release Jointly Developed Solution for Next-Gen ADAS Applications

Posted Oct 03, 2019 by Renesas

Renesas Electronics and StradVision have jointly developed a deep learning-based object recognition solution for smart cameras used in advanced driver assistance system (ADAS) applications, including cameras for ADAS Level 2 and above.

To avoid hazards in urban areas, next-generation ADAS implementations require high-precision object recognition capable of detecting so-called vulnerable road users (VRUs) such as pedestrians and cyclists. At the same time, for mass-market mid-tier to entry-level vehicles, these systems must consume very low power. The new solution from Renesas and StradVision achieves both and is designed to accelerate the widespread adoption of ADAS.

A leader in vision processing technology, StradVision has extensive experience developing ADAS implementations on Renesas' R-Car SoCs, and through this collaboration it is delivering production-ready solutions for safe and accurate mobility. According to Naoki Yoshida, Vice President of Renesas' Automotive Technical Customer Engagement Business Division, the new joint deep learning-based solution, optimized for R-Car SoCs, will contribute to the widespread adoption of next-generation ADAS implementations and support the escalating vision sensor requirements expected in the coming years.

StradVision's deep learning–based object recognition software delivers high performance in recognizing vehicles, pedestrians, and lane markings. This high-precision recognition software has been optimized for the Renesas R-Car V3H and R-Car V3M automotive system-on-chip (SoC) products, which have an established track record in mass-produced vehicles. These R-Car devices incorporate a dedicated engine for deep learning processing called CNN-IP (Convolution Neural Network Intellectual Property), enabling them to run StradVision's SVNet automotive deep learning network at high speed with minimal power consumption.

The object recognition solution resulting from this collaboration delivers deep learning–based object recognition while maintaining low power consumption, making it suitable for mass-produced vehicles and encouraging broader ADAS adoption. Key features of the deep learning-based object recognition solution:

Solution Supports Early Evaluation to Mass Production:

StradVision's SVNet deep learning software is a powerful AI perception solution for the mass production of ADAS systems. It is highly regarded for its recognition precision in low-light environments and its ability to handle occlusion, where objects are partially hidden by other objects. The basic software package for the R-Car V3H performs simultaneous vehicle, person, and lane recognition, processing image data at 25 frames per second and enabling swift evaluation and proof-of-concept (POC) development.

Building on these capabilities, developers who wish to customize the software by adding signs, road markings, and other objects as recognition targets can draw on StradVision's support for deep learning-based object recognition, which covers every step from training through embedding the software in mass-produced vehicles.

R-Car V3H and R-Car V3M SoCs Increase Reliability for Smart Camera Systems While Reducing Cost:

In addition to the CNN-IP dedicated deep learning module, the Renesas R-Car V3H and R-Car V3M feature the IMP-X5 image recognition engine. Combining complex deep learning-based object recognition with rule-based image recognition processing, which is easier to verify, allows designers to build a robust system. The on-chip image signal processor (ISP) is designed to convert sensor signals for both image rendering and recognition processing, making it possible to configure a system using inexpensive cameras without built-in ISPs and thereby reducing the overall bill-of-materials (BOM) cost.

Availability: The new joint deep learning solution for Renesas R-Car SoCs, including software and development support from StradVision, is scheduled to be available to developers by early 2020.