All about pixel binning, the technology that helps your phone take better photos
Back when digital camera performance was expressed in megapixels, it was simpler: the higher the number, the more detail your camera sensor could capture if the scene was well-lit. But a technology called pixel binning, now universal in high-end smartphones, is changing that for the better.
In short, pixel binning lets the camera capture plenty of detail in daylight without becoming unusable in low light.
Pixel binning arrived in 2018, became popular in 2020 with models like the Samsung Galaxy S20 Ultra and Xiaomi Mi 10 Pro, and came to Apple and Google smartphones in 2022 with the iPhone 14 Pro and Pixel 7. Samsung’s top model, the Galaxy S22 Ultra, has a 108-megapixel primary sensor, and pixel binning could take another technological leap with the 200-megapixel sensor expected on the S23 Ultra.
What is pixel binning?
Pixel binning is a technology designed to make the image sensor more adaptable to different lighting conditions by grouping pixels in different ways. When it’s bright, you can take a photo at the sensor’s full resolution, at least on some phones. When it’s dark, sets of pixels (2×2, 3×3 or 4×4, depending on the sensor) can be grouped into larger virtual pixels that collect more light but capture images at a lower resolution.
For example, Samsung’s Isocell HP2 sensor can take photos at 200 megapixels, 50 megapixels with 2×2 pixel groups, and 12.5 megapixels with 4×4 pixel groups.
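To make the arithmetic concrete, here is a minimal sketch in Python (the function is ours; the figures are the HP2’s quoted above): an n×n bin divides the pixel count by n².

```python
def binned_megapixels(native_mp: float, factor: int) -> float:
    """An n x n bin merges n * n sensor pixels into one virtual pixel."""
    return native_mp / (factor * factor)

native = 200  # Isocell HP2 native resolution, in megapixels
for factor in (1, 2, 4):
    print(f"{factor}x{factor} -> {binned_megapixels(native, factor):g} MP")
# 1x1 -> 200 MP
# 2x2 -> 50 MP
# 4x4 -> 12.5 MP
```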
Pixel grouping offers another advantage, introduced in 2020: virtual zoom. Phones can crop the shot to collect light from just the central pixels of the iPhone 14 Pro’s 48-megapixel primary sensor or the Google Pixel 7’s 50-megapixel sensor. This turns the 1x main camera into a 2x zoom that takes 12-megapixel photos. You need good lighting to take advantage of it, but it’s a great option, and 12 megapixels has been the standard resolution for years.
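For the curious, the “zoom” part is nothing more than a center crop, as this hypothetical sketch shows: keeping half the width and half the height keeps a quarter of the pixels, so a 48-megapixel frame yields a 12-megapixel 2x shot.

```python
import numpy as np

def center_crop_2x(frame: np.ndarray) -> np.ndarray:
    # Keep the central half in each dimension: 1/4 of the pixels remain.
    h, w = frame.shape[:2]
    return frame[h // 4 : h - h // 4, w // 4 : w - w // 4]

full = np.zeros((8000, 6000), dtype=np.uint16)  # mock 48 MP sensor frame
print(center_crop_2x(full).shape)               # (4000, 3000), i.e. 12 MP
```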
With such a high native resolution, pixel-binning sensors are also better equipped for demanding video, especially 8K.
How does pixel binning work?
To fully understand pixel binning, you need to know what a digital camera’s image sensor looks like. It’s a silicon chip with a grid of millions of pixels (technically called photosites) that capture the light passing through the camera lens. Each pixel records only one color: red, green or blue.
Colors are staggered in a special checkerboard arrangement called a Bayer pattern, which lets the digital camera reconstruct all three color values for each pixel (a step known as demosaicing), key to creating a photo ready to be shared on social networks.
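As an illustration only (not any manufacturer’s pipeline), this sketch builds an RGGB Bayer mosaic: each photosite keeps a single channel of the scene, and demosaicing later reconstructs the missing two.

```python
import numpy as np

def bayer_mosaic(rgb: np.ndarray) -> np.ndarray:
    """Sample one channel per photosite in an RGGB layout.
    `rgb` is an H x W x 3 image with even H and W."""
    mosaic = np.zeros(rgb.shape[:2], dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red photosites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green photosites
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green photosites
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue photosites
    return mosaic

rgb = np.random.randint(0, 256, (4, 4, 3), dtype=np.uint8)
print(bayer_mosaic(rgb))  # one value per pixel instead of three
```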
This diagram shows how the Samsung Galaxy S20 Ultra’s 108-megapixel camera image sensor uses 3×3 pixel groups to enable pixel binning. © Samsung
Combining data from several small sensor pixels into one larger virtual pixel is really useful in low light, where larger pixels limit image noise and capture color more faithfully. When it’s brighter, there’s enough light for the individual pixels to work on their own, yielding a higher-resolution or zoomed-in image.
Pixel binning typically combines four real pixels into one virtual pixel. Samsung’s Galaxy S Ultra range has used virtual pixels made of 3×3 groups of real pixels, and the brand is likely to adopt a 4×4 arrangement with the Galaxy S23 Ultra.
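In code, the core of the operation is just block averaging. A minimal sketch of a 2×2 bin, assuming a plain single-channel frame (real pipelines bin same-color pixels of the mosaic, but the arithmetic is the same idea):

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    # Fold each 2x2 block into one virtual pixel by averaging its values.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

raw = np.arange(16, dtype=np.float32).reshape(4, 4)
binned = bin_2x2(raw)
print(binned.shape)  # (2, 2): a quarter of the pixels, each fed by 4x the light
```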
When should high resolution or pixel binning be used?
Most users are happy with the lower-resolution binned photos, and that’s what’s recommended for everyday use. iPhones don’t even take 48-megapixel photos unless you specifically enable the ProRaw option when shooting, and Google’s Pixel 7 Pro doesn’t offer full-resolution 50-megapixel photos at all.
12-megapixel shots offer better low-light performance, and they also avoid the monstrous file sizes of full-resolution photos that can eat up internal storage and overwhelm online services like Google Photos and iCloud.
Photo enthusiasts will want to use full resolution whenever possible. It can help identify distant birds or make for more dramatic shots of faraway subjects, and a high megapixel count also matters for printing large photos.
Why is pixel binning popular?
Because miniaturization made smaller pixels possible. “The reason for binning was the new trend of sub-micron pixels, that is, the ones that are less than a micron wide,” explains Devang Patel, director of marketing at Omnivision, one of the major manufacturers of image sensors. Packing in that many tiny pixels is what lets phone manufacturers offer high megapixel counts and 8K video.
Can we capture raw photos with pixel binning?
It depends on the phone. Photo enthusiasts appreciate the flexibility and image quality of raw photos, i.e. the raw data from the image sensor, usually saved as a DNG file. However, not all phones offer raw capture at full resolution: the iPhone 14 Pro does, but the Pixel 7 Pro doesn’t.
The situation is complicated by the fact that processing programs such as Adobe Lightroom expect raw images in a traditional Bayer pattern, not in 2×2 or 3×3 blocks of same-color pixels.
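To see why, compare the two layouts side by side (an illustration we constructed, not vendor data): a binning sensor’s full-resolution raw comes out in a “quad Bayer” arrangement, which a converter must first remosaic into the classic pattern.

```python
import numpy as np

# Color layout of a classic Bayer tile...
standard_bayer = np.array([["R", "G", "R", "G"],
                           ["G", "B", "G", "B"],
                           ["R", "G", "R", "G"],
                           ["G", "B", "G", "B"]])

# ...versus a quad-Bayer tile: 2x2 cells of the same color, which is
# what the full-resolution raw from a 2x2-binning sensor contains.
quad_bayer = np.array([["R", "R", "G", "G"],
                       ["R", "R", "G", "G"],
                       ["G", "G", "B", "B"],
                       ["G", "G", "B", "B"]])

print(standard_bayer, quad_bayer, sep="\n\n")
```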
What are the disadvantages of pixel binning?
For a sensor of the same size, 12 real megapixels would perform slightly better than 12 binned megapixels, says Qualcomm director Judd Heape, and such a sensor would also be cheaper. On top of that, taking full-resolution photos means processing far more data, which eats into battery life.
The sensor cost, power consumption and processing requirements of pixel binning are the reason this option is only offered on high-end phones.
Can ordinary cameras also use pixel binning?
Yes, and some full-frame sensor models from Sony, today’s largest image sensor manufacturer, suggest that one day they will.
What is the future of pixel binning?
Several developments are possible. Ultra-high-resolution sensors with 4×4 pixel groups could spread across high-end phones, while 2×2 binning trickles down to entry-level models.
Another avenue is improved HDR (high dynamic range) photography, which captures better data in both the bright and dark parts of an image. The small sensors in phones struggle to cover a wide dynamic range, so brands like Google and Apple combine multiple frames to compute HDR photos.
But pixel binning offers interesting flexibility here. In a 2×2 group, two pixels can be dedicated to a normal exposure, one to a darker exposure for very bright areas, and one to a brighter exposure that captures detail in the shadows. Samsung’s Isocell HP2 sensor does just that for HDR imaging.
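A toy sketch of that split-exposure idea (the exposure ratios and merge rule are our invention, not Samsung’s algorithm): each sub-pixel’s reading is scaled back to a common exposure, and clipped or noise-floor readings are discarded.

```python
import numpy as np

def merge_group(readings, gains):
    """readings: raw sub-pixel values in [0, 1]; gains: factors that map
    each reading back to a common exposure."""
    usable = [r * g for r, g in zip(readings, gains)
              if 0.02 < r < 0.98]  # drop clipped or noise-floor readings
    if not usable:                 # fall back to everything if all clipped
        usable = [r * g for r, g in zip(readings, gains)]
    return float(np.mean(usable))

# One 2x2 group over a bright sky: the two normal and the long exposure
# clip at 1.0; only the short exposure still holds highlight detail.
readings = [1.0, 1.0, 0.24, 1.0]
gains    = [1.0, 1.0, 4.0, 0.25]   # short exposure scaled up, long scaled down
print(merge_group(readings, gains))  # 0.96, recovered from the short read
```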
Sensor manufacturer Omnivision shows how 2×2 pixel binning (bottom left) can be used to create larger virtual pixels (second row, top) or recreate a traditional Bayer checkerboard pattern (second row, bottom). It can also be used to create HDR images (third row) or improve autofocus with larger microlenses (fourth row). © Omnivision
Omnivision also expects binning to improve autofocus. In earlier designs, each pixel was covered by its own microlens, designed to collect more light. Today, a single microlens sometimes covers an entire 2×2, 3×3 or 4×4 group. Each pixel under the same microlens gets a slightly different view of the scene depending on its position, and that difference helps the camera calculate the focus distance, keeping subjects sharp.
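A toy illustration of that phase-detect principle (signals and numbers invented): slide one sub-pixel view against the other and keep the shift where they match best; its sign and size tell the camera which way, and how far, to refocus.

```python
import numpy as np

def best_shift(left: np.ndarray, right: np.ndarray, max_shift: int = 5) -> int:
    window = slice(max_shift, -max_shift)
    shifts = range(-max_shift, max_shift + 1)
    # Sum of squared differences: smallest where the two views line up.
    scores = [np.sum((left[window] - np.roll(right, s)[window]) ** 2)
              for s in shifts]
    return list(shifts)[int(np.argmin(scores))]

edge = np.array([0, 0, 0, 1, 4, 9, 9, 9, 9, 4, 1, 0, 0, 0], dtype=float)
left_view = np.roll(edge, -2)    # out of focus: the two views slide apart
right_view = np.roll(edge, 2)
print(best_shift(left_view, right_view))  # -4: disparity drives the focus motor
```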
CNET.com article adapted from CNETFrance
Photo: Andrew Lanxon/CNET