Advanced image sensors take automotive vision beyond 20/20
November 03, 2017
Semiconductor vendors are looking for the next big growth market. That increasingly appears to be the automotive sector, where autonomous driving presents an enormous opportunity to sell silicon in high volumes.
ON Semiconductor’s acquisition of Fairchild Semiconductor gave the company a broad portfolio of discrete power solutions for the automotive market. But it was the acquisition of Aptina Imaging Corporation in 2014 that helped drive the company’s leadership in automotive vision systems: ON Semiconductor currently commands nearly 70 percent of the front-camera advanced driver assistance system (ADAS) market and more than 50 percent of the total automotive image sensor market.[1]
Aptina CMOS image sensor technology is at the heart of ON Semiconductor’s recently released Hayabusa Image Sensor platform, which features 1 MP to 5 MP variants with simultaneous 120 dB ultra-high dynamic range (UHDR) and LED flicker mitigation (LFM). Simultaneous UHDR and LFM are enabled by a 3.0 micron super-exposure backside-illuminated (BSI) pixel technology with a full-well capacity of 100,000 electrons. The larger charge capacity allows more light to be captured before a pixel saturates, eliminating the usual low-light tradeoff.
Simultaneous UHDR and LFM
In imaging, dynamic range is the ratio between the brightest and darkest parts of a scene that a camera can faithfully reproduce. It is expressed in decibels (dB).
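As a toy illustration (my own sketch, not from the article), dynamic range can be computed from the ratio of a pixel's full-well capacity to its noise floor; the electron counts below are hypothetical:

```python
import math

def dynamic_range_db(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Dynamic range in dB: 20 * log10(brightest representable signal /
    noise floor). Values used here are illustrative, not ON Semi specs."""
    return 20 * math.log10(full_well_electrons / noise_floor_electrons)

# A pixel holding 100,000 electrons over a 1-electron noise floor spans 100 dB;
# 120 dB implies a 10^6:1 ratio, hence multi-exposure HDR techniques.
print(dynamic_range_db(100_000, 1))  # → 100.0
```

Note that a single exposure rarely covers the full 120 dB on its own, which is why HDR sensors combine information from multiple exposure levels.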
Dynamic range in real-world scenery can be significant – at times in excess of 140 dB. As you can imagine, this presents challenges in object detection and recognition for safety-critical automotive vision systems. Figure 1 shows the difference between the images generated by an automotive backup camera with and without HDR technology.
Figure 1. Shown here is the difference in image quality for an automotive backup camera system with and without HDR capability. Source: Aptina Imaging Corporation, now part of ON Semiconductor.
The previous example, now more than five years old, uses the (then) Aptina Imaging 1.2 MP AR0132AT CMOS image sensor to deliver HDR. However, that device was not equipped with LFM.
Although the effect is not visible to the human eye, LEDs such as those used in taillights and traffic signals pulse, or “flicker.” In low-light situations this flickering can cause blurriness that confuses image signal processing algorithms; the effect is amplified because image sensors require longer exposure times in dark environments to capture enough photons for a quality image. As a result, vision systems often struggle with tasks like reading traffic signs or distinguishing vehicle types (for example, a motorcycle from a car).
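A toy model (my own sketch, not from the article) shows why a camera can miss an LED pulse that the eye integrates away; the PWM period, duty cycle, and timings below are hypothetical:

```python
def led_captured(exposure_ms: float, period_ms: float, duty: float, start_ms: float) -> bool:
    """True if a PWM-driven LED's ON phase overlaps the exposure window.
    The LED is modeled as ON for the first `duty` fraction of each period."""
    on_ms = duty * period_ms
    phase = start_ms % period_ms
    # Captured if the exposure starts inside an ON phase, or lasts long
    # enough to reach the next period's ON phase.
    return phase < on_ms or phase + exposure_ms > period_ms

# A 100 Hz LED at 10% duty cycle (hypothetical values):
print(led_captured(exposure_ms=10.0, period_ms=10.0, duty=0.1, start_ms=5.0))  # True
print(led_captured(exposure_ms=0.5,  period_ms=10.0, duty=0.1, start_ms=5.0))  # False: LED appears off
```

Any exposure at least one full PWM period long is guaranteed to capture the pulse, which is why a pixel that can expose longer without saturating helps mitigate flicker.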
In contrast with legacy products, LFM capabilities on Hayabusa image sensors such as the 2.6 MP AR0233 suppress this phenomenon without sacrificing low-light performance (see the BSI pixel technology above). According to Bahman Hadji, a former Aptina Imaging employee and Senior Product Manager for Automotive Imaging Solutions at ON Semiconductor, the AR0233 delivers the highest resolution in terms of calibration and yield in the 2 MP CMOS image sensor segment today, thanks to LFM and the 120 dB UHDR that mirrors real-world environments (Figures 2a, 2b, and 2c).
A major factor in the image quality of the AR0233 and other Hayabusa CMOS image sensors is on-chip companding, a piecewise-linear compression scheme that packs 24-bit RAW HDR data into 12-bit outputs with minimal loss of precision. These outputs are sent to an image signal processor through an LVDS serializer/deserializer (Figure 3). The smaller bitstream requires less bandwidth, and therefore less power, which reduces heat in the camera module that can degrade image quality. The lower bandwidth also means cheaper silicon and cabling can be used.
Figure 3. Hayabusa image sensors compress 24-bit RAW HDR data into 12-bit outputs, which reduce bandwidth, power consumption, and system cost.
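As a rough sketch of how companding works (the knee points below are invented for illustration and are not ON Semiconductor's actual transfer curve), a piecewise-linear map keeps fine steps in the shadows and progressively coarser steps in the highlights:

```python
def compand_24_to_12(x: int) -> int:
    """Map a 24-bit RAW value (0..16_777_215) into 12 bits (0..4095)
    using a piecewise-linear curve: 1:1 at low signal, coarser above.
    Knee points are hypothetical, chosen only so the output fits 12 bits."""
    assert 0 <= x < 2**24
    if x < 2048:                          # shadows: full precision
        return x
    if x < 65536:                         # midtones: 64 input codes per output step
        return 2048 + (x - 2048) // 64
    return 3040 + (x - 65536) // 16384    # highlights: 16,384 input codes per step
```

The image signal processor applies the inverse (decompanding) curve to recover a linear estimate of the original 24-bit value before further processing.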
All variants within the Hayabusa product line share a common pixel design and architecture so design teams can easily scale their efforts across multiple systems or vehicle designs.
The devices are also delivered as safety elements out of context (SEooC) with an ASIL B rating, as they are able to evaluate each frame for faults in real time. Detected faults are sent in the metadata of each frame, giving vision systems more time to react to potential safety issues (Figure 4). This also facilitates the creation of fault image libraries that can be used to verify algorithms and analyze overall system behavior (Figure 5).
Figure 4. Hayabusa image sensors provide real-time fault detection that helps speed reaction time in potentially hazardous situations.
Figure 5. Faults detected in images from Hayabusa sensors can be compiled in a fault image library to be used for algorithm verification and system behavior analysis.
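Per-frame fault reporting could be consumed downstream roughly like this (a hypothetical sketch: the fault names, metadata layout, and triage logic are my own, not ON Semiconductor's):

```python
from dataclasses import dataclass
from enum import Flag, auto

class Fault(Flag):
    """Hypothetical fault categories a sensor's built-in safety logic might flag."""
    NONE = 0
    ROW_STUCK = auto()
    ADC_OUT_OF_RANGE = auto()
    CRC_MISMATCH = auto()

@dataclass
class Frame:
    pixels: bytes
    faults: Fault = Fault.NONE   # delivered in the frame's metadata

def triage(frame: Frame) -> str:
    """A vision pipeline checks fault metadata before trusting the image."""
    if frame.faults != Fault.NONE:
        return "quarantine"      # e.g. route into a fault image library
    return "process"

print(triage(Frame(pixels=b"", faults=Fault.CRC_MISMATCH)))  # quarantine
```

Carrying the fault flags alongside the pixels means the downstream system learns about a problem in the same frame it affects, rather than one frame later.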
ON Semiconductor acquired mmWave technology from IBM’s Haifa research team earlier this year to help round out its automotive sensor portfolio, Hadji said. This will give the company a strong position in future ADAS and autonomous vehicle designs, especially considering its power supply solutions pair nicely with high-performance automotive sensor fusion processors from the likes of NVIDIA, Intel, Renesas, and others.
1. Techno Systems Research (TSR), “Automotive Camera Market Analysis 2016,” February 2017. http://www.t-s-r.co.jp