The Most Effective Methods to Achieve High Dynamic Range Imaging

By Maharajan Veerabahu


e-con Systems

February 28, 2024



HDR imaging significantly enhances the performance of embedded vision systems, notably in environments with challenging lighting conditions. This advanced imaging technology equips these systems with the capability to capture visuals that exhibit remarkable clarity and detail. The core of HDR lies in its ability to process a wide spectrum of light, from the brightest highlights to the darkest shadows, thus enabling cameras to accurately capture the full range of light in a scene.

Hence, understanding the methods to achieve HDR can improve the capabilities of vision-powered systems. So, let's better understand the core components of HDR performance, the methods to achieve it, and a comprehensive assessment of what works and what doesn't.

The Basics of Dynamic Range

Dynamic range refers to the spectrum of light captured from a scene, quantifying the disparity between an image's darkest and brightest points. HDR scenes typically contain both bright regions (such as sunlit areas) and dark areas (like shadows), posing a challenge to traditional imaging systems, which often struggle to capture both extremes in a single shot, leading to overexposed highlights or underexposed shadows.
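Dynamic range is commonly quantified as the ratio between the brightest signal a sensor can record (its full-well capacity) and its noise floor, expressed in decibels or photographic stops. A minimal sketch of that arithmetic, with illustrative numbers rather than figures from any specific sensor:

```python
import math

def dynamic_range(full_well_e, read_noise_e):
    """Return sensor dynamic range in decibels and photographic stops.

    full_well_e  -- full-well capacity in electrons (brightest usable signal)
    read_noise_e -- read noise in electrons (darkest distinguishable signal)
    """
    ratio = full_well_e / read_noise_e
    db = 20 * math.log10(ratio)   # engineering convention: 20 * log10 of the ratio
    stops = math.log2(ratio)      # each stop represents a doubling of light
    return db, stops

# Illustrative numbers: 10,000 e- full well, 2 e- read noise
db, stops = dynamic_range(10_000, 2)
print(f"{db:.1f} dB, {stops:.1f} stops")
```

By this measure, a sensor with a 5,000:1 signal-to-noise-floor ratio spans roughly 74 dB, or a little over 12 stops.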

How HDR Works 

The technical process of HDR begins at the sensor level. Modern sensors used in HDR imaging are designed with an enhanced ability to capture a wider range of light intensities. This is achieved through advanced sensor designs that include larger pixels or back-side illuminated (BSI) sensor technology.

Larger pixels have a greater capacity to gather light, which results in a broader dynamic range. BSI sensors, on the other hand, have their light-receiving elements closer to the surface, which improves their ability to capture light, especially in low-light conditions.

Core Components of HDR 

Multi-shutter speed capture

With multi-shutter-speed capture, a series of images of the same scene is taken, each at a different shutter speed. This variation is crucial because it allows the camera to adapt to the wide range of lighting conditions within the same scene.

A faster shutter speed is used to accurately capture the brightly lit areas, ensuring that these sections are not overexposed. Conversely, darker areas are captured at a slower shutter speed, which gathers more light and recovers detail in these underlit sections. Medium-lit areas are captured at a medium shutter speed, striking a balance between the two extremes.
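The effect of bracketing can be simulated numerically: each exposure of the same scene scales the signal by its shutter time and then clips at the sensor's full scale, so each shutter speed preserves a different part of the scene. A toy sketch with made-up radiance values:

```python
import numpy as np

# Hypothetical scene radiances (arbitrary linear units):
# deep shadow, mid-tone, bright highlight
scene = np.array([0.02, 0.5, 40.0])

def capture(radiance, shutter_s, full_scale=1.0):
    """Simulate one exposure: signal scales with shutter time, then clips."""
    return np.clip(radiance * shutter_s, 0.0, full_scale)

short = capture(scene, 1 / 100)   # fast shutter: the highlight stays unclipped
medium = capture(scene, 1 / 10)   # mid-tones well exposed
long = capture(scene, 1.0)        # slow shutter: shadow lifted, highlight clips

print(short)   # highlight (40 * 0.01 = 0.4) stays below full scale
print(long)    # highlight clips at 1.0, but the shadow is now measurable
```

No single shutter speed records all three values faithfully, which is exactly the gap the merging step closes.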

Image combination

After the images are captured, HDR technology employs advanced algorithms to stitch these exposures together. This process is carefully orchestrated, as it requires accuracy to ensure effective blending. The algorithms are designed to identify and merge the best parts of each exposure. By doing so, they create a composite image that encompasses the brightest and darkest areas of the scene. This results in a well-balanced image, with a dynamic range that closely resembles natural human vision.
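One common blending strategy, used by exposure-fusion algorithms in the spirit of Mertens et al., weights each pixel by how well exposed it is, so mid-tone pixels dominate and clipped or noisy ones fade out. A simplified single-channel sketch (the Gaussian weight and toy images are illustrative, not any vendor's pipeline):

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend exposures, weighting pixels near mid-gray (0.5) most heavily.

    images -- list of same-shaped arrays with values in [0, 1]
    """
    stack = np.stack(images)                       # shape (n, H, W)
    # Gaussian "well-exposedness" weight: peaks at 0.5, falls off at extremes
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)  # normalize per pixel
    return (weights * stack).sum(axis=0)

# Three toy 2x2 "exposures" of the same scene
under = np.array([[0.05, 0.10], [0.02, 0.40]])
normal = np.array([[0.30, 0.55], [0.15, 0.95]])
over = np.array([[0.70, 0.98], [0.60, 1.00]])

fused = fuse_exposures([under, normal, over])
print(fused)  # each output pixel favors the best-exposed inputs
```

Production pipelines add alignment, multi-scale blending, and per-channel handling on top of this core idea.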

Human eye mimicking

As you can imagine, HDR technology takes inspiration from the human eye, particularly its ability to rapidly adjust and perceive a wide array of light intensities. The human eye can seamlessly transition between different lighting conditions, capturing details in both bright and dark areas. 

Similarly, HDR technology merges multiple exposures, resulting in images that offer a comprehensive and detailed view of the scene, much like how the human eye perceives its environment. This aspect of HDR makes it especially valuable in scenarios with high contrast between light and dark areas.

Popular Methods to Achieve Effective HDR Imaging 

Multi-exposure or conventional HDR

This approach utilizes both short and long exposure shots to adeptly record a broad spectrum of brightness levels within a single scene. It's particularly effective for capturing intricate detail in high-contrast scenes and low-light environments, enhancing the overall dynamic range and visual quality of the images.

In this mode, a short-exposure shot is taken to accurately capture the details in the brighter areas of the scene, whereas a long-exposure shot focuses on the details in the darker regions. 

These various exposures are then intricately blended through advanced algorithms, resulting in a composite HDR image that showcases the optimally exposed elements from each individual shot.

  • Pros: Achieves a greater dynamic range, enhanced image quality, and more post-processing flexibility.
  • Cons: Can have misalignment issues, a reduced frame rate, time-consuming processing, and difficulties with moving objects.
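A complementary way to merge the exposures, in the spirit of classic radiance-map recovery, is to divide each exposure by its shutter time to get a linear radiance estimate, then average only the pixels that are neither clipped nor buried in noise. A hedged sketch; the validity thresholds are illustrative:

```python
import numpy as np

def recover_radiance(images, exposure_times, lo=0.05, hi=0.95):
    """Estimate linear scene radiance from bracketed exposures.

    images         -- list of arrays with pixel values in [0, 1]
    exposure_times -- shutter time (seconds) for each image
    lo, hi         -- illustrative thresholds rejecting noisy/clipped pixels
    """
    num = np.zeros_like(images[0], dtype=float)
    den = np.zeros_like(images[0], dtype=float)
    for img, t in zip(images, exposure_times):
        valid = (img > lo) & (img < hi)        # usable, well-exposed pixels
        num += np.where(valid, img / t, 0.0)   # per-exposure radiance estimate
        den += valid
    return num / np.maximum(den, 1)            # average over valid exposures

imgs = [np.array([0.01, 0.40]),   # 1/100 s: the shadow pixel is too dark here
        np.array([0.10, 1.00])]   # 1/10 s: the highlight pixel clips here
radiance = recover_radiance(imgs, [1 / 100, 1 / 10])
print(radiance)  # shadow taken from the long exposure, highlight from the short
```

The result is a linear radiance map, which is typically tone-mapped afterward for display.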

Split-pixel HDR

The split-pixel HDR technique divides each sensor pixel into two distinct sub-pixels. These sub-pixels differ in their light sensitivity; one is less sensitive and resists saturation, making it better suited for capturing highlights (often termed the "highlight" pixel), while the other is more sensitive and ideal for capturing shadows (commonly referred to as the "shadow" pixel).

In practical applications, the highlight pixel is responsible for recording the finer details present in the brighter segments of the scene. Conversely, the shadow pixel focuses on capturing the details in the darker areas. This dual-pixel approach allows for a more thorough representation of the scene's full range of light and dark areas.

But how does this actually work?

Basically, in a split-pixel sensor, each pixel contains two on-chip microlenses (OCLs) of different sizes: the smaller sub-pixel's (SP2's) OCL is situated in the gap of the larger sub-pixel's (SP1's) OCL. This arrangement gives each pixel one part with high sensitivity, for effectively capturing dark subjects, and another part less prone to saturation under bright conditions.

The dual-pixel system is engineered to yield a signal that encompasses a wide dynamic range. This functionality is crucial in handling high-contrast situations like backlit settings or variable lighting conditions, such as entering or exiting tunnels in daylight. The design aims to minimize loss of detail in dark areas (blackout) and prevent overexposure in bright areas (blowout).

The larger sub-pixel is utilized primarily for darker parts of an image. Its size allows for more light collection, which is beneficial in low-light scenarios but leads to quicker saturation in brighter conditions. In contrast, the smaller sub-pixel, targeting mid-to-high light portions of an image, gathers less light. This characteristic allows for longer exposure times without easy saturation. The combination of these two sub-pixels, each responding differently to various light conditions, contributes to extending the overall dynamic range. 
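The two readouts can be combined by trusting the sensitive sub-pixel until it nears saturation, then switching to the low-sensitivity sub-pixel's signal rescaled by the sensitivity ratio so the response stays linear. A simplified numeric sketch (the 16x ratio and the scene values are illustrative):

```python
import numpy as np

# Illustrative sensitivity ratio: the large sub-pixel collects 16x more light
RATIO = 16.0
FULL_SCALE = 1.0

def combine_subpixels(large, small, sat=0.98):
    """Merge large (sensitive) and small (low-sensitivity) sub-pixel reads.

    Where the large sub-pixel saturates, substitute the small sub-pixel's
    signal rescaled by the sensitivity ratio to keep the response linear.
    """
    return np.where(large < sat, large, small * RATIO)

scene = np.array([0.01, 0.5, 8.0])               # linear radiance
large = np.clip(scene, 0, FULL_SCALE)            # sensitive read: clips at 1.0
small = np.clip(scene / RATIO, 0, FULL_SCALE)    # 16x less sensitive read

hdr = combine_subpixels(large, small)
print(hdr)  # → [0.01, 0.5, 8.0]: the bright value survives via the small sub-pixel
```

Real implementations blend around the switchover point rather than hard-switching, to avoid visible seams.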

  • Pros: Provides good image quality across lighting conditions and reduces computational complexity.
  • Cons: Can cause color artifacts and may introduce processing delays.

Other methods to achieve HDR imaging include:

Digital Overlap (DOL)

  • How it works: Simultaneously captures multiple exposures by overlapping pixel readout times for a broader dynamic range.
  • Pros: Reduces motion artifacts, allows single-shot data capture, and provides detailed representation across various light levels.
  • Cons: Has hardware limitations, involves complex readout processes, and may suffer from noise issues in low-light conditions.

Digital Logarithmic Overlap (DLO)

  • How it works: Utilizes a logarithmic curve for sensors to capture high and low light data in a single frame.
  • Pros: Offers detailed capture with logarithmic responses, adaptability to lighting conditions, and reduced motion artifact risks.
  • Cons: Requires unique sensors and algorithms, specific post-processing, and may have potential tonal banding.
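The heart of the logarithmic approach is that a log curve compresses several decades of linear radiance into a limited output range, keeping shadow detail while taming highlights. A simplified sketch of such a response curve (the parameters are illustrative):

```python
import numpy as np

def log_response(radiance, max_radiance=1000.0):
    """Map linear radiance onto [0, 1] with a logarithmic curve.

    Bright values are compressed heavily while shadow detail is preserved,
    letting one frame span several decades of light.
    """
    return np.log1p(radiance) / np.log1p(max_radiance)

# Four decades of radiance fit into one output frame
radiance = np.array([0.1, 10.0, 1000.0])
codes = log_response(radiance)
print(codes)  # monotonically increasing; the top value maps to 1.0
```

The quantization of such a curve is also where the tonal-banding risk noted above comes from: widely spaced input levels can land on the same output code.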


onsemi's super-exposure pixel sensors

  • How it works: A 2.1 µm super-exposure pixel image sensor family from onsemi, designed specifically for HDR imaging. It offers exceptional low-light performance, integrated LED flicker mitigation, and low power consumption.


Sony STARVIS 2

  • How it works: Sony STARVIS 2's technique captures two images for bright and dark areas and synthesizes them into one HDR image. It captures moving objects effectively, adapts to brightness changes, and features increased saturation capacity.


Real-World Applications of HDR Imaging

The application of HDR in embedded vision extends to various industries. For instance, in automotive applications, HDR plays a crucial role in improving the functionality of driver assistance systems. Cameras equipped with HDR can better detect road signs and obstacles in varying light conditions, enhancing safety. Other applications that have successfully leveraged this technology include smart checkout devices, outdoor AMRs (Autonomous Mobile Robots), automated sports broadcasting setups, smart traffic monitoring systems, and more.

Maharajan Veerabahu is co-founder and vice president of product design services of e-con Systems. He and his co-founders, Harishankkar Subramanyam and Ashok Babu Kunjukkannan, got together early in their professional journey and decided to leave their well-paying jobs to take a plunge into entrepreneurship. A first-time entrepreneur, Veerabahu has a deep passion for building products.
