July 4, 2013

Comment: Why the new "Dual Pixel AF" will transform the photo industry

Image ©Canon Europe. The Canon Dual Pixel CMOS AF sensor structure.

I do not normally comment on developments I have no inside information about. It would just add to the noise on the internet. Here, I decided to make an exception :)

So, Canon launched the Canon 70D digital SLR camera on July 2, 2013, two days ago. It includes an autofocus technology named "Dual Pixel CMOS AF" which promises to bring to live view shooting and movie recording the kind of high-speed autofocus performance that previously required a phase detect AF module, at least in lower light situations.

I believe that this innovation looks much more innocent than it actually is. Here is why.

A new kind of AF?

In the following, I will refer to this type of AF as "Multi Pixel AF", or MPAF for short. The traditional two types are contrast detect AF (CDAF) and phase detect AF (PDAF). I will assume the reader is familiar with how CDAF and PDAF work.

On July 21, 2010, Fujifilm introduced the F300EXR with a then new kind of AF, called "Hybrid AF" by Fuji. It exploits the fact that the microlenses which modern cameras have in front of every pixel on the sensor create a directional dependency of the brightness distribution on the light-sensitive pixel surface.

So, Fuji masked one half of some pixels and the other half of others (but modified only a few pixels in total). It could therefore create two signals (low resolution images) from the modified pixels, which are used to determine a phase shift between the two. The modified pixels aren't used to take the image itself.

Image ©Fujifilm.
Canon took this idea, sat down for a few hours and thought: "Well, what if we didn't mask one half of a pixel but split it into two and read only one at a time?" Here is what it looks like:

Image ©Canon. Canon replaced the mask by a second pixel, which they call a photo diode. When not doing the AF measurement, both pixels are combined to take the ordinary image. This is why every pixel on the sensor can become an AF pixel, not just a few.

Obviously, Canon pays better graphics designers than Fuji. Other than that, the two illustrations look pretty similar. So maybe it is no surprise that it took them exactly 12 working days to write this idea down and file another patent on August 9, 2010, a Monday ... two busy weeks. The idea is obvious. But only Canon reacted quickly enough to turn it into a patent. And only Canon was brave enough to actually double the pixel count in a real product. It took them three years to do so. I applaud Canon for what I think is a business masterpiece.
(Disclaimer: the above sequence of events is deduced by me and assumed likely, it is not a fact!)

I found the relevant information on the Japanese internet:
Let me add a few last technical remarks. The dual pixel AF is reported by an insider to split the pixel into top and bottom halves, yet all of Canon's marketing material shows a left/right split. Assuming top/bottom is correct, we end up with two different images, one from either half, whenever the scene is out of focus. Both halves are blurred and shifted against each other by a few pixels. Both the blur (to be minimized, read: contrast to be maximized) and the shift (which is the AF phase shift) can be used to compute a better, if not optimal, focus position. The following image illustrates it:

Image ©Canon. The middle shows the dual out-of-focus images (look at the curves below and their shape).
The right shows the in-focus images (they coincide, and the curves are both sharper and unshifted).
For perfect focus, both contrast maximization and shift minimization can be combined.
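To make that combination concrete, here is a minimal 1D sketch (my own illustration; the function names and the signal model are hypothetical, not Canon's algorithm): the two half-pixel signals are modeled as shifted, blurred copies of the same image detail, the phase shift is read off the cross-correlation peak, and a gradient-based contrast metric tells sharp from blurred.

```python
import numpy as np

def half_pixel_signals(shift, blur, n=256):
    """Model the two signals seen by the pixel halves: the same image
    detail (a Gaussian bump), blurred and shifted apart by `shift`."""
    x = np.arange(n, dtype=float)
    def bump(center):
        return np.exp(-((x - center) ** 2) / (2.0 * blur ** 2))
    return bump(n / 2 + shift / 2), bump(n / 2 - shift / 2)

def phase_shift(a, b):
    """Estimate the relative shift as the cross-correlation peak (PDAF part)."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

def contrast(a):
    """Sum of squared gradients: higher means sharper (CDAF part)."""
    return float(np.sum(np.diff(a) ** 2))

out_a, out_b = half_pixel_signals(shift=8, blur=6)   # defocused
in_a, in_b = half_pixel_signals(shift=0, blur=1)     # in focus

print(phase_shift(out_a, out_b))         # 8     -> large shift when defocused
print(phase_shift(in_a, in_b))           # 0     -> shift vanishes in focus
print(contrast(in_a) > contrast(out_a))  # True  -> contrast peaks in focus
```

Both readings come for free from the same two half-images: the shift tells the AF which way and roughly how far to move, and the contrast confirms arrival at the peak.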

I call it MPAF (multi pixel AF) because an improved version would use more than two pixels (think four, or quad pixel AF) or a varying orientation to get rid of the orientation preference, rather than top/bottom only. It is like moving from line-type to cross-type AF sensors in classical PDAF.


Yes, it really is a new type of AF. The idea is quite simple and is the same as with the Hybrid AF.
But executed with brute force.
And what you get in return, wow!

Unlike PDAF and Hybrid AF, CDAF can make use of all available light falling into the lens. This is a massive advantage. Sadly, it is offset by the large number of measurements it must take in order to maximize contrast, especially as many vendors don't apply sophisticated algorithms to minimize the number of measurement steps. PDAF, and even more so Hybrid AF, use only a fraction of the available light for their measurement.
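As a toy illustration of the measurement-count problem (my own sketch; the function and the contrast curve are hypothetical, not any vendor's algorithm): a naive hill climb steps the focus motor one position at a time and only knows it has passed the peak after contrast drops, so the number of contrast measurements grows with the distance to the in-focus position.

```python
def cdaf_hill_climb(contrast_at, start, step=1, lo=0, hi=100):
    """Naive contrast-detect AF: step the focus position until contrast
    drops, then stop. Returns (best position, number of measurements)."""
    measurements = 0

    def measure(p):
        nonlocal measurements
        measurements += 1
        return contrast_at(p)

    pos, best = start, measure(start)
    direction, turned = step, False
    while lo <= pos + direction <= hi:
        c = measure(pos + direction)
        if c > best:
            pos, best = pos + direction, c
        elif not turned and pos == start:
            direction, turned = -direction, True  # wrong way: reverse once
        else:
            break  # contrast dropped: we just passed the peak
    return pos, measurements

# hypothetical contrast curve peaking at focus position 70
curve = lambda p: -(p - 70) ** 2
print(cdaf_hill_climb(curve, start=20))  # (70, 52): one exposure per step
```

Real implementations use adaptive step sizes and interpolation, but the basic cost scales the same way: every candidate focus position costs one exposure.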

The new MPAF is the first kind of AF which does both: use all the light and minimize the number of measurements. Canon's first implementation, with its focus on continuous movie focusing, may not tell the full story yet, but I am convinced that eventually, MPAF will blow all other kinds of AF out of the water in terms of speed, tracking ability, accuracy, and low light capability.

And why it will transform the industry

What has always been the advantage of mirror-based cameras (their better autofocus abilities) all of a sudden becomes obsolete.

Up to now, the mirrorless segment has played in the entry-level prosumer market. Because of a lack of competitive high-end performance, there was no other choice. As a consequence, the choice of lenses and their image quality for mirrorless cameras left much to be desired. And because of the low price point, there was a natural limit to what electronic view finders (EVFs) could offer too (the other reason to keep the mirror). So, mirrorless vendors placed a lot of emphasis on the form factor. Except that light sensitivity needs a certain lens diameter, whatever the body's or sensor's form factor.

And so in 2012, the compact camera market collapsed to half its size, the mirrorless market shrunk considerably, even SLR sales declined, and only the full frame camera market saw solid growth. In a nutshell, the camera market is dividing into two ends: the commodity end, driven by better and better smartphone cameras (e.g., iPhone 5 or Pureview), and the high end. The middle segment erodes, and this has started to hit the mirrorless idea. Which is a shame.

The MPAF is a breakthrough here: it eventually allows high-end cameras to drive innovation rather than play the conservative watcher. This redefines the rules and will transform the industry.

Only now is it a credible scenario that a Nikon D4 or Canon 1D X class camera becomes a mirrorless full frame with high performance MPAF, a retina resolution 12 bit lag-free EVF (a $1000 unit by itself), 24 fps burst for stills, and 4K and slow-motion video modes. With high-resolution $3000 wide angle lens options to rival medium format etc. Where new technologies can trickle down again.

Hope you enjoyed the read,

Update: August 2, 2013

A new video emerged, showcasing in a real-life situation what the Dual Pixel AF can do (and cannot do):
And Canon published an interview with the developers:


  1. Sorry, but your comment that the new PD method makes the mirror obsolete is far-fetched and premature. For one EVF are and never will be on par with an optical viewfinder and secondly - you fall into the trap laid by the simple sample Canon chose to explain the system. To determine the focusing distance, the image processor would need to separate the signal coming from the subject in question, but that signal is - due to the new system working at open aperture - quickly lost in the noise of the surrounding environment, be it other subjects, background, or blocking elements in the foreground.
    The traditional PD sensor doesn't have this problem, as it works on spatially separated image data which in turn is stopped down to f/29 (on Canon cameras). So the traditional PD sensor is far more capable of delivering a distance in one and a distance delta in two measurements.
    So for video this new focusing method is indeed a game changer; for photography and its requirements, though, it's nothing too special.

    1. @Anonymous (you forgot to leave your nick...):
      > "you fall into the trap laid by the simple sample Canon chose to explain the system"
      Very certainly not. I thought it through with my background and only then came to my conclusion. I even spotted a few weak points in Canon's patent (on the algorithmic side of things), which is why I don't expect the 70D to perform that well.

      To shorten your argument: a traditional PDAF sees two shifted images which are relatively sharp but dark. A dual pixel AF sees two shifted images which are relatively blurred but bright. Using a proper 2D image correlation technique, the latter should be able to compute the shift reliably as well. It will perform as well as the former in bright light and better in low light.
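      A quick numerical sanity check of this point (my own 1D sketch; the texture, blur level and noise level are made up): even after heavy defocus blur and a little noise on both half-images, the shift between them survives in the cross-correlation peak.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blur(sig, sigma):
    """Simple 1D Gaussian blur by direct convolution."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return np.convolve(sig, k / k.sum(), mode="same")

def estimate_shift(a, b):
    """Shift estimate from the cross-correlation peak."""
    corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

scene = rng.random(512)      # random scene texture
a = np.roll(scene, 6)        # one pixel half: shifted by 6 samples
b = scene.copy()             # the other half

# strong defocus blur plus a little sensor noise on both halves
a_obs = gaussian_blur(a, sigma=5) + rng.normal(0, 0.001, 512)
b_obs = gaussian_blur(b, sigma=5) + rng.normal(0, 0.001, 512)
print(estimate_shift(a_obs, b_obs))  # 6: the shift is still recoverable
```

      The blur widens the correlation peak (making sub-pixel localization harder) but does not move it, which is why brightness can be traded against sharpness here.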

    2. > For one EVF are and never will be on par with an optical viewfinder

      True, they are not. But since the specs of the human eye are known and beatable, it will of course be possible to build matching EVFs (I could go into detail here). What makes you think otherwise?

    3. First of all, I can't leave a nick; I don't get a field to enter one.
      As for your first comment, you fail to spot the problem. It's a highly complex computational problem to isolate the unsharply rendered image detail from the environment.
      Canon themselves admit that this focusing method is not capable of proper predictive tracking as required for sports or wildlife photography. So you should probably tone down your hopelessly overenthusiastic evaluation...

    4. There is a name/URL option when replying, or just sign your post with a nick ;)
      It is not very polite to criticize but not even leave a name ...

      As for your remark about computational complexity: I worked in computational science and I agree it takes some expertise to solve this problem. Maybe more than Canon currently has on board (judging by their patent submission). But the problem is solvable. I may even publish a case study about it, as it is an interesting exercise.

  2. ovf is rubbish

    Far more pixels are used, which is one of the reasons why this new PDAF is able to beat the hell out of traditional mirror-split ones.

  3. As said in the reply form: if posting anonymously, please leave a nickname! Otherwise, your post may not pass moderation. Thanks for your cooperation.

  4. @Anonymous:

    "OVF is rubbish" - not an argument.

    "PDAF is able to beat the hell out of traditional mirror split ones" - that doesn't make sense; you're comparing a SENSOR technique with a MIRROR (vs. none).

    Next time consider using a little more time before posting - and you might just notice the option to add your nick :)

    Now back on topic of...


    You are spot on regarding the possibilities in MPAF, I think. I've been waiting some years for the AF and EVF technologies to mature (to be good enough for the PROs), and it looks like we're almost there now, but the current EVFs are not yet good enough.

    I think we're just 2-3 years away from Canon or Nikon showing us the first PRO body using an EVF (and hopefully also using an improved FF sensor).

    Here are some of the things that should be possible and which I'd hope to see in my next camera (Canon, are you listening?!).

    - live sensor view using enhanced image for "night vision"-shooting (would be *really* nice when doing astro photography)

    - live sensor view infrared shooting (for sensors without IR filter)

    - all sorts of grid overlays (optionally downloaded from computer)

    - customized placement of current camera settings in the viewfinder (aperture, shutter, ISO, WB, AF...)

    - instant (in viewfinder) review of taken image with optional comparison with previous images

    - in viewfinder instant 5x/10x zoom (for precision manual AF)

    - live highlight/blackout warning

    - much faster shooting (no mirror!)

    - real time HDR

    Some of these are possible today (used by Sony) and some not (they need faster chips/electronics, but this is also just a matter of time).

  5. This comment has been removed by a blog administrator.

  6. I don't believe we will see an EVF with the refresh rate, sensitivity, resolution, and dynamic range of the human eye for quite some time. I have a Fuji X100S; I find the EVF useless.

    1. Possible. But such an EVF would be technically feasible today, in an SLR-sized body at least. So, the question is rather one of market mechanics.

      What I find a bit disappointing today is that this kind of "high fidelity viewfinder" isn't even an official goal for many vendors. As if most people would prefer the computer screen-like experience...

    2. You would know better than me, but I would think that if such an EVF were technically feasible today we would see one in a high end Sony. But perhaps it would simply be too expensive.

    3. There are very high resolution screens available already today. I don't see the need to match the dynamic range of the eye for an EVF to be good. Refresh rate is of course important, but I don't believe that is a problem even right now (meaning, the technology to do it exists).

      Today's screens are around 640x480, but 1080p would go a very long way toward a detailed enough EVF.

  7. I'll take your word for its technical ability since you are far more expert than I. However, why is it the high-end cameras that drive the innovation? Once the sensors are in mass production and the DIGIC ASIC is developed, the technology can be put in cameras ranging from entry level to the top end. Arguably the greatest benefit is at entry level, since the mechanical complexity of conventional DSLRs is minimized, and that is a greater percentage of the cost at that level.

    It also seems that the benefit has great potential for compact mirrorless, and indeed rumor has it the EOS-M2 will have the 70D sensor, and that is hardly high end. Further, I believe that the cost of PDAF lenses is lower (less emphasis on minimizing the moving mass; unit focus is fast enough), giving Canon a cost advantage when developing further EF-M lenses.

    So I agree that it will have a big commercial impact; I just do not see it driven from the high end.

    1. I think the question of whether the high end drives the consumer end or vice versa is of minor importance, as far as this article is concerned. However, it normally is indeed the high end which drives innovation, simply because it allows for the margins necessary to subsidize the cost of R&D. The consumer end typically lives from a low margin on top of the cost of production only, i.e., it must use technology developed elsewhere in the corporation. You see it very clearly in the trickle-down of technology in the German car industry. If you buy one of their more prestigious models, you actually pay for R&D. But then again, I think this topic doesn't really matter in the context of the 70D.

  8. I'd like to update the above blog article ...

    Meanwhile (November 7, 2013), Canon has filed a patent for a more advanced version of the dual pixel AF method. Their idea is to have two subpixels which aren't of equal size, allowing for a more sensitive detection of the phase shift and promising to fully match the performance of a traditional phase detect AF system.

    More details are found here:

  9. I'm also of the opinion that DPAF might revolutionize AF. Like you say, the images being compared to determine the phase difference should be brighter (albeit more blurry than what a traditional PDAF sensor would 'see', complicating the cross-correlation analysis). That said, we don't know what resolution of data the system is working with, because at the pixel level, the signals would be quite low. Binning can of course help, and I wonder if they're using it at all (either software or hardware, though hardware is preferable). In general, I think that on-sensor AF would benefit from a proper implementation of binning, and perhaps even adjustable binning (at the cost of resolution of course, but this should be OK for AF). That'd help tremendously in low-light applications, where I see hybrid systems hunt significantly. In fact, this perhaps indicates that on-sensor PDAF simply turns off when light levels drop below a certain point... which wouldn't be surprising given that such (masked) pixels see so little light. We already know that the sampling intervals/integration times increase under low light, which causes AF to slow down. Binning could help increase the SNR to somewhat mitigate this need to slow down the AF system.
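    For what it's worth, the SNR argument behind binning is easy to sketch (my own illustration, nothing Canon-specific): averaging each 2x2 block of pixels with independent noise improves the SNR by roughly sqrt(4) = 2, i.e., about one stop, at the cost of resolution.

```python
import numpy as np

rng = np.random.default_rng(42)

def snr(signal, noisy):
    """Ratio of signal RMS to residual noise RMS."""
    noise = noisy - signal
    return float(np.sqrt(np.mean(signal**2) / np.mean(noise**2)))

# a smooth 2D "scene" with additive sensor noise
y, x = np.mgrid[0:64, 0:64]
scene = np.sin(x / 6.0) * np.cos(y / 6.0)
noisy = scene + rng.normal(0, 0.5, scene.shape)

# 2x2 software binning: average each 2x2 block of pixels
binned_scene = scene.reshape(32, 2, 32, 2).mean(axis=(1, 3))
binned_noisy = noisy.reshape(32, 2, 32, 2).mean(axis=(1, 3))

print(snr(scene, noisy))                # per-pixel SNR
print(snr(binned_scene, binned_noisy))  # roughly 2x higher after binning
```

    The same arithmetic is why adjustable binning is attractive: 4x4 binning would buy another factor of two where the AF can tolerate the resolution loss.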

    What I think is particularly interesting about DPAF making use of all the light entering the lens (rather than pinholes sampling the outer edges of the lens, as with traditional PDAF) is that it minimizes the effects that spherical aberration otherwise has on traditional PDAF systems. In fact, there must be correction factors built in to reconcile the PDAF measurement with the image-forming light, since the latter uses the entire cone of light entering the lens, not just its peripheries. I'm actually willing to bet that these correction factors are one of the things AF microadjustment 'operates' on. These effects of spherical aberration, and the resulting correction factor, are I believe a large source of inaccuracy with traditional PDAF. DPAF, on the other hand, should be largely immune to this, perhaps obviating the need for this adjustment factor entirely.

    I suppose we'll have to wait and see!

  10. You were right. For video mode and for macro stills photography (in Live View mode), DPAF is marvelous. The AF point spread in Live View mode is something exceptional; no OVF camera has 80% frame coverage (or more!) with AF points. Touch AF (choosing a particular AF point by touching the rear screen of the camera) is also a great thing when coupled with DPAF.


Please, if posting anonymously, choose a nickname for your post. Thanks.