I'm trying to understand what the Quad Bayer tech in the OM-1 is actually doing. Reviews of the camera seem to be vague and even contradictory on this point. This is what I've found so far:
1) What Sony say about the IMX472 (likely to be the sensor used in the OM-1)
The key points here are (I think):
- Confirmation that it's really Quad Bayer
- 4 readout methods - Normal, Phase difference, Single pixel, HDR
1) Quad Bayer
If I understand correctly, Quad Bayer places four photodiodes under each colour patch of the CFA. So however big the CFA and the headline Mp count are, there are actually FOUR times as many physical photodiodes: the sensor is really 80Mp at the photodiode level, though it's probably locked to 20Mp output. But why do this? It seems there are three main reasons (two of which are variations on a theme):
a) To allow dual-resolution shots with different noise levels. This is how it's used in smartphones, and it's well described in this article about Quad Bayer Coding (QBC) from Sony (https://www.sony-semicon.co.jp/e/pro...er_coding.html). It allows the sensor to be driven in two modes - a low-light mode at a quarter of the resolution (the four photodiodes combined), or a regular mode at the full, four-times-higher resolution.
But this isn't what the OM-1 is doing. There aren't two separate resolution modes (aside from the Hi Res modes which are the old-style pixel-shift and HHHR stack tricks).
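Even though the OM-1 apparently doesn't expose these two modes, here's a rough sketch of what I understand them to be doing, just to make the layout concrete. Everything in it (the tile size, the function names, the numbers) is my own toy illustration, not anything from Sony's documentation:

```python
import numpy as np

# Toy Quad Bayer layout: each 2x2 group of photodiodes sits under ONE colour
# filter, and the groups themselves form an ordinary Bayer pattern.
# (Names and sizes are my own illustration, not from the IMX472 docs.)
def quad_bayer_cfa(h, w):
    cfa = np.empty((h, w), dtype='<U1')
    for y in range(h):
        for x in range(w):
            gy, gx = (y // 2) % 2, (x // 2) % 2     # which colour group we're in
            cfa[y, x] = [['R', 'G'], ['G', 'B']][gy][gx]
    return cfa

# "Low-light" QBC mode: combine the four photodiodes under each filter,
# giving a quarter-resolution image with a normal Bayer layout.
def binned_readout(raw):
    return (raw[0::2, 0::2] + raw[1::2, 0::2] +
            raw[0::2, 1::2] + raw[1::2, 1::2])

# 8x8 photodiodes in, 4x4 Bayer samples out ("80Mp in, 20Mp out" in miniature)
raw = np.random.poisson(lam=50, size=(8, 8)).astype(float)
print(quad_bayer_cfa(8, 8))
print(binned_readout(raw).shape)    # -> (4, 4)

# The "regular" full-resolution QBC mode would instead remosaic the 8x8 array
# into a conventional Bayer pattern - much more involved, so not shown here.
```

If that's right, the choice between the two modes is just made at readout/processing time - the silicon underneath is the same 80Mp worth of photodiodes either way.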
b) To reduce noise in low light
This is effectively using only the binned (left-hand) half of the diagram in that QBC article - i.e. always reading out with the four photodiodes combined. Some call it "pixel binning", and it seems it may even have some advantages in low light over a single pixel of four times the area. That feels a bit counter-intuitive, but it does seem to be how it is. I assume the improvement at higher ISO (and in shadow noise at low ISO) is due to this, as well as to the back-side illuminated design. On the other hand, this post over on DPR proposes that it's actually down to rationalisation of Sony's fab processes => https://www.dpreview.com/forums/thread/4629800 .
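To convince myself about the binning-vs-big-pixel question, I put together a little toy simulation. The model is entirely my own assumption (pure photon shot noise plus a fixed dose of read noise per readout, nothing sensor-specific), so treat the numbers as illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000            # number of simulated exposures
signal = 25.0          # mean photons caught by ONE small photodiode (made up)
read_noise = 2.0       # read noise in electrons per readout (made up)

def snr(samples, true_mean):
    return true_mean / np.std(samples)

# a) one small photodiode: shot noise + one readout's worth of read noise
small = rng.poisson(signal, n) + rng.normal(0, read_noise, n)

# b) four small photodiodes binned digitally: 4x the photons, but 4 readouts
binned = (rng.poisson(signal, (4, n)).sum(axis=0)
          + rng.normal(0, read_noise, (4, n)).sum(axis=0))

# c) one hypothetical photodiode of 4x the area: 4x the photons, one readout
big = rng.poisson(4 * signal, n) + rng.normal(0, read_noise, n)

print(f"one small photodiode : SNR ~ {snr(small, signal):.1f}")
print(f"four binned          : SNR ~ {snr(binned, 4 * signal):.1f}")
print(f"one big photodiode   : SNR ~ {snr(big, 4 * signal):.1f}")
```

On that toy model, digital binning recovers most - but not quite all - of the big-pixel advantage, because the big pixel only pays the read-noise penalty once. So if the quad layout really does beat a plain 20Mp design in low light, my guess is the combining happens in the charge/analogue domain before readout, which my toy model doesn't capture. That's purely speculation on my part, though.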
c) To provide "single shot" HDR
This is explained quite well in the IMX294 flyer (https://www.sony-semicon.co.jp/produ...4CJK_Flyer.pdf), and I'm assuming the IMX472 can (potentially) do the same: within each group of four photodiodes, some get a long integration time and some a short one, and the two are combined on-sensor into a single HDR frame.
But I'm not sure whether this mode is used by the OM-1. It will be interesting to look at the HDR mode on the camera and see whether it's the old-style composite of multiple separate exposures, or this "two images at once" capability of the sensor. Clearly, the on-sensor version would be limited to just two exposures and from what I've read, it seems that 1-1.5 stops of exposure separation is about all you get. From what I can glean, it seems that this HDR capability is really directed at improving movie HDR and has little benefit for stills. I'm confused!
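Just to pin down what "two exposures at once" would actually mean, here's a toy sketch of how I imagine the long/short blend working within each quad group. The 50/50 split, the one-stop gap and the blending rule are all my own guesses, not the IMX472's real behaviour:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scene radiance falling on a 16x16 patch of photodiodes (8x8 quad groups)
scene = rng.uniform(0.0, 8.0, size=(16, 16))

FULL_WELL = 4.0        # saturation level - made-up units
SHORT_RATIO = 0.5      # short integration = 1 stop less than long (my guess)

# Say the top row of each 2x2 group integrates long, the bottom row short
long_exp  = np.clip(scene[0::2, :], 0, FULL_WELL)                 # highlights may clip
short_exp = np.clip(scene[1::2, :] * SHORT_RATIO, 0, FULL_WELL)   # highlights protected

# Blend: keep the long exposure where it hasn't clipped, otherwise fall back
# to the short exposure scaled back up
hdr = np.where(long_exp < FULL_WELL, long_exp, short_exp / SHORT_RATIO)

print("clipped with long only:", int((long_exp >= FULL_WELL).sum()))
print("clipped after blending:", int((hdr >= FULL_WELL / SHORT_RATIO).sum()))
```

If that's roughly right, the extra highlight range you gain is simply the ratio between the two integration times, which fits with the 1-1.5 stop figure mentioned above.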
2) Readout methods
My assumptions are:
- Normal - regular 20Mp image
- Single pixel - 80Mp image. Probably not used by the camera
- HDR - the long/short integration point explained above
- Phase difference
This last one is probably unrelated to the CFA/Quad Bayer question. It seems Sony have done some tricks with the microlenses (apparently one on-chip lens spans each 2x2 group of photodiodes) which allow every pixel to be used as a phase-detect site. This article explains it well => https://www.gsmarena.com/sony_unveil...news-40523.php.

This technique contrasts with the existing on-chip PDAF, which works by plucking pixels out of the image area and using them exclusively for AF. That limits the number of AF points and can in some cases lead to image artefacts. The key diagram is the one in that article.
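For my own benefit, here's a toy 1-D sketch of the basic phase-detect idea: the "left-looking" and "right-looking" halves of the pixels see slightly shifted copies of the scene when the lens is defocused, and the camera measures that shift. The signal, the noise level and the search routine are all my own inventions, purely to illustrate the principle:

```python
import numpy as np

rng = np.random.default_rng(2)

# A 1-D slice of scene detail across a row of pixels
x = np.arange(256)
scene = np.sin(x / 7.0) + 0.5 * np.sin(x / 3.0)

# With the lens defocused, light from the left and right halves of the
# aperture lands in slightly different places: model the two sub-pixel
# views as shifted, slightly noisy copies of each other.
true_shift = 5
left  = scene + 0.05 * rng.normal(size=x.size)
right = np.roll(scene, true_shift) + 0.05 * rng.normal(size=x.size)

# Phase detection: slide one view against the other and pick the offset
# where they line up best - that offset says how far, and in which
# direction, focus needs to move.
def estimate_shift(a, b, max_shift=16):
    errors = [np.mean((a - np.roll(b, -s)) ** 2)
              for s in range(-max_shift, max_shift + 1)]
    return int(np.argmin(errors)) - max_shift

print("estimated defocus shift:", estimate_shift(left, right))   # ~ 5
```

The point of the "all pixels" approach is then just that this comparison can be made anywhere in the frame, instead of only at the scattered dedicated PDAF pixels that the older design sacrifices from the image.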
So - bottom line, this is what I've gleaned from a couple of hours of research - but I'm still not feeling uber confident that I've got it all right.
Anyone out there who can correct/elaborate/confirm all this?