  • Quad Bayer - what's it all about?

    I'm trying to understand what the Quad Bayer tech is doing. Reviews of the OM-1 seem to be vague and contradictory. This is what I've found so far:

    1) What Sony say about the IMX472 (likely to be the sensor used in the OM-1)

    [Attached image: Sony IMX472 spec excerpt]

    The key points here are (I think):

    - Confirmation that it's really Quad Bayer
    - 4 readout methods - Normal, Phase difference, Single pixel, HDR

    1) Quad Bayer

    If I understand correctly, Quad Bayer places four pixels under each of the RGB squares on the CFA. In other words, however big the CFA and the headline Mp count, there are actually FOUR times as many actual pixels/photo-diodes - so the sensor is really 80Mp, but it's probably locked at 20Mp output. But why do this? It seems there are three main reasons (two of which are variations on a theme):
    a) To allow dual resolution shots with different noise levels. This is used in smartphones and is well-described in this article about Quad Bayer Coding (QBC) from Sony (https://www.sony-semicon.co.jp/e/pro...er_coding.html). It allows driving the sensor in two modes - a low light mode with a quarter of the resolution, or a regular mode with the higher resolution.

    [Attached image: Sony Quad Bayer Coding diagram]
    But this isn't what the OM-1 is doing. There aren't two separate resolution modes (aside from the Hi Res modes which are the old-style pixel-shift and HHHR stack tricks).
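
    To make the quartet arrangement concrete, here's a minimal sketch (Python/numpy, my own illustration rather than anything from Sony's documentation) of a conventional Bayer CFA next to its Quad Bayer counterpart:

    Code:
    import numpy as np

    def bayer_cfa(h, w):
        # Conventional Bayer: RGGB repeating every 2x2 photosites.
        cfa = np.empty((h, w), dtype='<U1')
        cfa[0::2, 0::2] = 'R'
        cfa[0::2, 1::2] = 'G'
        cfa[1::2, 0::2] = 'G'
        cfa[1::2, 1::2] = 'B'
        return cfa

    def quad_bayer_cfa(h, w):
        # Quad Bayer: each colour square of the RGGB pattern covers a
        # 2x2 quartet of photosites, so the pattern repeats every 4x4.
        base = bayer_cfa(h // 2, w // 2)
        return np.repeat(np.repeat(base, 2, axis=0), 2, axis=1)

    print(bayer_cfa(4, 4))       # 16 photosites, 16 distinct CFA cells
    print(quad_bayer_cfa(4, 4))  # 16 photosites, 4 same-colour quartets

    On this picture, an "80Mp" Quad Bayer chip carries the same colour mosaic as a 20Mp Bayer chip, just with each filter square split into four photosites.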

    b) To reduce noise in low light

    This is effectively using only the left-hand side of the diagram above. Some call it "pixel binning" and it apparently has some advantages in low light over a single pixel of four times the area. That's a bit counter-intuitive, but it does seem to be how it is. I assume that the improvement at higher ISO (and in shadow noise at low ISO) is due to this (as well as to the back-side illuminated design). On the other hand, this post over on DPR proposes that it's actually due to rationalisation of Sony fab processes => https://www.dpreview.com/forums/thread/4629800 .
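
    As a sketch of what the binning step might look like (assuming a simple 2x2 combine per same-colour quartet - the real read-out happens on-chip and the details aren't public):

    Code:
    import numpy as np

    # Stand-in for the raw sensel data; the real chip has ~80M values.
    rng = np.random.default_rng(0)
    raw = rng.integers(0, 4096, size=(8, 8)).astype(np.uint32)

    # Combine each 2x2 quartet of same-colour photosites into one
    # output value. Summing mimics charge binning; divide by 4 for
    # an average instead.
    h, w = raw.shape
    binned = raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    print(raw.shape, '->', binned.shape)   # (8, 8) -> (4, 4)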

    c) To provide "single shot" HDR

    This is explained quite well in the IMX294 datasheet (https://www.sony-semicon.co.jp/produ...4CJK_Flyer.pdf). I'm assuming the IMX472 does the same (potentially):

    [Attached image: IMX294 HDR read-out diagram]
    But I'm not sure whether this mode is used by the OM-1. It will be interesting to look at the HDR mode on the camera and see whether it's the old-style composite of multiple separate exposures, or this "two images at once" capability of the sensor. Clearly, the on-sensor version would be limited to just two exposures, and from what I've read it seems that 1-1.5 stops of exposure separation is about all you get. From what I can glean, this HDR capability is really directed at improving movie HDR and has little benefit for stills. I'm confused!
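
    As a toy illustration of that "two images at once" idea (my own sketch with made-up numbers, not the IMX472's actual behaviour): two sub-pixels per quartet integrate long, two integrate short, and the merge falls back to the short exposure where the long one clips:

    Code:
    import numpy as np

    scene = np.array([0.02, 0.5, 3.0, 3.8])   # made-up linear radiances

    full_well = 4.0   # assumed clipping level
    ratio = 2.0       # ~1 stop between long and short integrations

    long_exp = np.clip(scene * ratio, 0, full_well)   # clips highlights
    short_exp = np.clip(scene, 0, full_well)          # keeps highlights

    # Trust the long exposure until it clips, then switch to the
    # short exposure for the highlights.
    merged = np.where(long_exp < full_well, long_exp / ratio, short_exp)
    print(merged)   # the 3.0 and 3.8 the long exposure clipped are back

    With only a 2x ratio between the two integrations you gain about one stop of highlight headroom, which would fit the 1-1.5 stops mentioned above.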

    2) Readout methods

    My assumption is this:

    - Normal - regular 20Mp image
    - Single pixel - 80Mp image. Probably not used by the camera
    - HDR - the long/short integration point explained above
    - Phase difference

    This last point is probably unrelated to the CFA and Quad Bayer point. It seems that Sony have done some tricks with microlenses which allow all pixels to be used as phase-detect sites. This article explains it well => https://www.gsmarena.com/sony_unveil...news-40523.php.

    This technique contrasts with the existing on-chip PDAF which is done by plucking pixels out of the image area and using them exclusively for AF. This limits the number of AF points and can in some cases lead to image artefacts. The key diagram is this one:

    [Attached image: Sony all-pixel PDAF microlens diagram]
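
    As a toy sketch of the phase-difference principle (nothing OM-1-specific, just the general idea that defocus shifts the "left" and "right" views built from the sub-pixels, and the shift tells the lens which way and how far to move):

    Code:
    import numpy as np

    # A soft edge as seen through the left and right halves of the
    # microlenses; out of focus, the two views are shifted copies.
    x = np.arange(64)
    edge = 1.0 / (1.0 + np.exp(-(x - 30)))
    left = edge
    right = np.roll(edge, 3)   # a 3-sensel defocus shift

    # Phase detection: find the shift that best re-aligns the views.
    shifts = list(range(-8, 9))
    errs = [np.sum((left - np.roll(right, -s)) ** 2) for s in shifts]
    best = shifts[int(np.argmin(errs))]
    print(best)   # 3: sign gives the direction, magnitude the defocus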

    So - bottom line, this is what I've gleaned from a couple of hours of research - but I'm still not feeling super confident that I've got it all right.

    Anyone out there who can correct/elaborate/confirm all this?
    Paul
    Panasonic S1Rii and S5 with a few lenses
    flickr
    Portfolio Site

  • #2
    That is very interesting, Paul. I wonder if we will get to the bottom of it, or if it will be like AF algorithms, where the manufacturers keep them under wraps despite knowing that it would be helpful to users to have some idea what's going on.

    This is exactly why I am a bit worried that it might take Adobe a while to get really good output from raw files. Unless the binning (or equivalent) is done by the camera before writing the file. In which case should we call it "rare" rather than raw?

    John

    Comment


    • drmarkf
      drmarkf commented
      Editing a comment
      Yes, my thoughts exactly. They still haven't properly sorted out X-trans, although it's a lot better than it used to be so my Fuji/Adobe using friends tell me. As you say, all the necessary processing might be done in the Digic X or whatever it's called.
      Similarly it'll be interesting to see how long it takes Phase One (ie Capture One) & the other raw converter makers.

  • #3
    And you told me I was wrong when I asked if it was really an 80Mp sensor
    Paul

    Retired and loving it.

    Comment


    • #4
      Come to think of it, will we get a clue once we know how big the raw files are? If the "binning" (or equivalent) is done in camera, they should be roughly the same as the other 20 Mpix cameras. If it isn't, they should be mahoooooosive!
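
      A quick back-of-envelope check (my own assumed numbers for bit depth and compression ratio, not OMDS specs):

      Code:
      # Rough raw-size estimate: pixel count x bit depth, packed,
      # with an assumed ~0.6 lossless compression ratio.
      def raw_mb(n_pixels, bits=12, compression=0.6):
          return n_pixels * bits / 8 * compression / 1e6

      print(round(raw_mb(20e6)))   # ~18 MB if the file holds 20M values
      print(round(raw_mb(80e6)))   # ~72 MB if it holds all 80M sensels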

      John

      Comment


      • drmarkf
        drmarkf commented
        Editing a comment
        The test raws on dpreview are around 17 to 19 MB, as I recall. Looks like it's done in camera, then.

      • Bikie John
        Bikie John commented
        Editing a comment
        Thanks Mark. I've just had a look in the downloaded manual and it says raw files are approx 22.4 MB. So as you say, it looks as if it is done in the camera.

      • pdk42
        pdk42 commented
        Editing a comment
          I'm 99% sure that the binning is done way before it reaches the raw file.

    • #5
      Originally posted by Bikie John View Post
      This is exactly why I am a bit worried that it might take Adobe a while to get really good output from raw files. Unless the binning (or equivalent) is done by the camera before writing the file. In which case should we call it "rare" rather than raw?

      John
      In the AP review they said Adobe were already able to accept the RAW files!?
      Paul

      Retired and loving it.

      Comment


      • Walti
        Walti commented
        Editing a comment
        I also think this technology will be widespread very soon - Sony have probably already worked with them?

    • #6
      Originally posted by pdk42 View Post
      Anyone out there who can correct/elaborate/confirm all this?
      What was that whooshing noise? That was the sound of Paul’s post going straight over my head!
      Regards,
      Stephen

      AKA Snibbo

      E-M1X | E-M1 mk1 | MZ 12-40mm | MZ 40-150mm f4 | MZ 60mm Macro | MZ 7-14mm | ZD 50-200mm | FL50-R Flash

      Comment


      • #7
        AP review says... it is a Quad Pixel sensor and not a Quad Bayer sensor.

        The OM System 'Olympus' OM-1 is both the last camera of the Olympus range and the first from OM-System. Joshua Waller gives it a full review.

        Comment


        • pdk42
          pdk42 commented
          Editing a comment
          That comment in AP really doesn’t make sense.

        • Ian
          Ian commented
          Editing a comment
          I think it's a mistake and I've asked Andy Westlake for some feedback.

        • Ian
          Ian commented
          Editing a comment
          Andy told me that there is some general confusion about the term 'Quad Bayer' and that some say it really means each sub-pixel has its own microlens though the four sub-pixels remain covered by one colour filter. That would be different to what the Sony sensor has (one microlens covering one colour filter and four sub-pixels).

      • #8
        "4 readout methods - Normal, Phase difference, Single pixel, HDR"

        I think there is a mistake in translation on the Sony leaflet which introduces a red herring. The phrase "single pixel" should instead read, "single photosite".

        "Quad Bayer places four pixels under each of the RGB squares"

        The silicon chip doesn't have any pixels as such, it only has photosites or diodes, 80M of them. The CFA covers the photosites with R, G, and B filters, and if it is a Bayer CFA, there are 2G, 1R, and 1B filters placed over 4 photosites in a square pattern. A "conventional Bayer" CFA pattern repeats regularly, while in a "Quad Bayer" CFA the pattern is mirrored for each repeat.

        Comment


        • pdk42
          pdk42 commented
          Editing a comment
          So are you saying that the difference is just the arrangement of the CFA?

        • Lester
          Lester commented
          Editing a comment
          In a Bayer CFA, whether "conventional" or "quad", my understanding is that every pixel consists of a square of four photosites covered by four coloured microlens filters: 2G, 1R, 1B. The trick with the quad arrangement is that the square of 2G 1R 1B microlenses is mirrored before being laid over the 4 photosites of the neighbouring pixel. The result is that you can now find a square of 4 photosites all R, a neighbouring square of all Gs, and so on. However, each square of 4 "all X" photosites belongs to what are 4 different pixels. Common illustrative diagrams of the quad Bayer CFA appear to show a single colour X microlens being as large as, and covering, 4 photosites, but I think these diagrams are quite misleading; there are still undoubtedly 4 microlenses, one per photosite, and crucially each photosite "belongs" to a different pixel.

      • #9
        OK - can someone help me here...

        On a 20Mp, non-Quad-Bayer sensor:
        - There are 20 million (or thereabouts) light sensitive sites (let's call them sensels)
        - Of these, approx 10 million are behind a green filter, 5m behind a blue filter, and 5m behind a red filter (there are also CMY variants I believe, but let's ignore that)
        - Therefore, we get 10m luminosity readings for green, 5m blue readings, and 5m red readings
        - Via interpolation and demosaicing of this data, an image file is constructed with 20m RGB pixels.
        - So, in summary, there are 20m sensels and 20m pixels

        On a 20Mp Quad Bayer sensor:
        - There are 80 million (or thereabouts) sensels
        - Of these, approx 40 million are behind a green filter, 20m behind a blue filter, and 20m behind a red filter
        - Therefore, we get 40m luminosity readings for green, 20m blue readings, and 20m red readings
        - Via interpolation and demosaicing of this data, an image file is constructed with 20m RGB pixels.
        - So, in summary, there are 80m sensels and 20m pixels
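
        Putting those numbers into a quick arithmetic check (a sketch, just restating the counts above):

        Code:
        mp_out = 20_000_000        # headline output pixels
        sensels = 4 * mp_out       # Quad Bayer: 4 sensels per output pixel
        green = sensels // 2       # Bayer ratio 2G : 1R : 1B
        red = blue = sensels // 4

        assert sensels == 80_000_000
        assert (green, red, blue) == (40_000_000, 20_000_000, 20_000_000)
        print(sensels, '->', mp_out, 'demosaiced RGB pixels')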

        Is this correct?
        Paul
        Panasonic S1Rii and S5 with a few lenses
        flickr
        Portfolio Site

        Comment


        • Mark_R2
          Mark_R2 commented
          Editing a comment
          My understanding of how a conventional Bayer pattern sensor works is exactly as Paul describes. Reading through his description of what a Quad Bayer is (thanks for the research BTW) makes his conclusion about how the Quad sensor works entirely logical.

          I would agree with Lester's point on the microlenses though. It makes no sense to me to have one large lens covering 4 photosites. The peak in the focused intensity would be where there is a dead region between the 4 photosites which would be 'wasting' photons. One lens per photosite makes a lot more sense.

        • pdk42
          pdk42 commented
          Editing a comment
          Originally posted by Mark_R2
          It makes no sense to me to have one large lens covering 4 photosites. The peak in the focused intensity would be where there is a dead region between the 4 photosites which would be 'wasting' photons. One lens per photosite makes a lot more sense.
          Yet that is exactly what Sony seems to have done => https://www.gsmarena.com/sony_unveil...news-40523.php

        • Mark_R2
          Mark_R2 commented
          Editing a comment
          Of course, it might be that the photosites are now smaller than the diffraction-limited point spread function of the microlenses, so individual microlenses would cause spillage of light into adjacent photosites. A single lens covering 4 photosites could well be more efficient than individual lenses, and that's what matters in the end.

      • #10
        Originally posted by pdk42 View Post
        This is effectively using only the left-hand side of the diagram above. Some call it "pixel binning" and it apparently has some advantages in low light over a single pixel of four times the area. That's a bit counter-intuitive, but it does seem to be how it is.
        My understanding is that this works by reducing the (random) readout noise of the electronics. Each photosite's amplifier has a certain background amount of noise which is random and not signal-dependent. As the actual signal falls in low light, this noise becomes more obvious. Random noise can be reduced by averaging: if you take N independent measurements and average them, the random noise is reduced by a factor of √N. In this case, by splitting one large photosite into 4 you can effectively take 4 different measurements of the same signal, and averaging the 4 signals reduces the noise by a theoretical factor of 2. You might not get that in reality because the 4 photosites are not truly measuring the same signal, but close enough, I would have thought.
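
        A quick Monte Carlo of that argument (a sketch which ignores shot noise and assumes all four sub-sites see the same signal):

        Code:
        import numpy as np

        rng = np.random.default_rng(42)
        signal, read_noise, n_trials = 100.0, 5.0, 100_000

        # One big photosite: a single read, one dose of read noise.
        single = signal + rng.normal(0, read_noise, n_trials)

        # Four small photosites: four independent reads, averaged.
        quad = signal + rng.normal(0, read_noise, (n_trials, 4)).mean(axis=1)

        print(single.std())   # ~5.0
        print(quad.std())     # ~2.5, i.e. 5 / sqrt(4)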

        Comment


        • pdk42
          pdk42 commented
          Editing a comment
          Yes, that makes sense. Thank you.

      • #11
        This is very interesting. The IMX472 spec sheet on the Sony site is not very detailed and still marked provisional.

        I did a targeted Google search for the sensor ID against “site:*.jp” to see if anything useful turned up on Japanese sites. It didn't turn up much more info!

        This article was of interest, though it too remarks on the lack of detailed sensor info:


        I don’t read Japanese so I let the browser do auto-translate.

        The writer comments: “Since it can be confirmed from the spec sheet that it has a ‘Quad Bayer structure’, it seems that it will be compatible with all-pixel AF (dual pixel AF / quad pixel AF in Canon terms). I was told via the bulletin board the other day that Olympus has a patent for 2PD (dual pixel AF) / 4PD (quad pixel AF), so the ‘WOW camera’ being developed by OM Digital has all-pixel AF. I have a feeling that it will be a hot topic if it is adopted.” (* J-PlatPat: Japanese Patent Laid-Open No. 2021-004989)

        So that got me interested in how the “all-pixel” focusing works on top of all this - how do those 1053 cross-type AF points map onto this, and why 1053?

        That led me to read a 2021 Olympus patent on PDAF focus mechanisms here:


        I recommend clicking 'English' at the top right.

        So I went down the rabbit hole of reading that patent and looking at some of the diagrams. Patents do not make easy reading - they are technical-legal documents, and so verbose! I will have to give it a rest and come back to it.

        The diagrams in the patent reminded me of these Quad Bayer diagrams you have, so I thought this might be of interest here.

        [Attached image: diagram from the Olympus patent]

        I don’t suppose it is possible to get detailed implementation data sheets for these sensors unless you work at a place like OMDS or Panasonic - that’s what we need to understand it!

        Bill
        https://www.flickr.com/photos/macg33zr/

        Comment


        • pdk42
          pdk42 commented
          Editing a comment
          Thanks Bill - that’s really useful info.

          I’m coming to the conclusion that the Quad Bayer element of the sensor’s design is not being used for HDR or enhanced resolution purposes. However, it’s clear it plays a part in the AF system and perhaps contributes to lower noise, esp at higher ISO (by averaging, as pointed out by Mark_R2 above). I wish manufacturers would provide more information on fundamental things like this for the benefit of the more technically-minded customers.

        • Ian
          Ian commented
          Editing a comment
          Just to clarify: the way on-sensor phase-detect AF works with some cameras, pixels are lost to the imaging function so that they can produce a phase signal; the pixels are half-masked. I think there is no need to mask the pixels in a Quad Bayer sensor, because there are four sub-pixels per microlens. You only need to look at one half of the quartet of sub-pixels to measure the phase. This can be done on the left two or the right two, or the top two or the bottom two, making a collection of adjacent quartets able to measure both vertical and horizontal phase. And at the same time, none of the sub-pixels are sacrificed - they all still contribute to forming image pixels. Of course, any quartet of sub-pixels can be read for focusing, anywhere on the sensor. It may be that there is a hard-wired system for optimising the read-out of the focusing points. Too many points would add an extra burden to the focus processing, so 1053 'points' was probably deemed to be the sweet spot for accuracy, coverage and speed. What does fascinate me is that Canon has patented something that sounds very similar: Quad Pixel AF. After all, Canon invented Dual Pixel AF, where there are two photodiodes under one microlens - but that had issues because they don't work as cross-type points.
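
          A small sketch of that quartet read-out idea (my own illustration, assuming plain 2x2 quartets; the real wiring isn't public):

          Code:
          import numpy as np

          frame = np.random.rand(8, 8)    # stand-in for the sensel grid
          q = frame.reshape(4, 2, 4, 2)   # [qy, sub-row, qx, sub-col]

          # Half-quartet sums: columns give a horizontal phase pair,
          # rows a vertical one, so every quartet can act as a
          # cross-type AF site.
          left, right = q[:, :, :, 0].sum(1), q[:, :, :, 1].sum(1)
          top, bottom = q[:, 0, :, :].sum(-1), q[:, 1, :, :].sum(-1)

          # Summing all four sub-pixels still gives the normal image
          # value, so nothing is sacrificed to AF.
          image = q.sum(axis=(1, 3))
          print(left.shape, image.shape)  # (4, 4) (4, 4)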

      • #12
        I suspect that the sensor uses Sony's new double-layer stacking technology to maximise the photosite photon capacity, which is where the increased DR comes from. It also increases the view of the photosite through the microlens and this will help with noise, among other things.

        Ian
        Founder and editor of:
        Olympus UK E-System User Group (https://www.e-group.uk.net)

        Comment


        • #13
          Any idea on what all this implies for diffraction softening effects at higher F stops?
          Chris

          Comment


          • Ian
            Ian commented
            Editing a comment
            That's a good question.

          • Ian
            Ian commented
            Editing a comment
            Thinking about it, there probably isn't any difference, because the four sub-pixels really behave like one big pixel and the microlens covers all four, so it's the same size as previous 20MP MFT sensors.

        • #14
          Woooosh - it's all over my head
          Regards
          Michael

          OM-D E-M5 mk2, m12-40mm f2.8, m25mm f1.8, m45mm f1.8, m60mm f2.8 Macro, M14‑150mm 1:4-5.6 II, M75-300mm MK2, Samyang 7.5mm f/3.5 fisheye

          Comment


          • #15
            Michael, you’re not the only one!
            Regards,
            Stephen

            AKA Snibbo

            E-M1X | E-M1 mk1 | MZ 12-40mm | MZ 40-150mm f4 | MZ 60mm Macro | MZ 7-14mm | ZD 50-200mm | FL50-R Flash

            Comment
