MTF, Lens & Sensor Resolution

I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used to show those line pairs looked rather like this:

5 line pairs per millimeter with a contrast ratio of 100% or 1.0

Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide. These lines are easily differentiated because of the extreme contrast ratio between the two lines in each pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………

5 line pairs at bottom threshold of human vision – a 9% contrast ratio.

The Rayleigh Criterion basically stipulates that, for average human vision, the lines in a pair remain discernible only while the line pair contrast ratio stays at 9% or above – that is, when each line pair is 0.2mm wide and viewed from 25cm.  Obviously they are reproduced much larger here, hence you can see ’em!

Low contrast limit for Human vision (left) & camera sensor (right).

However, it is said in some circles that DSLR sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. But we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.

So what is this crucial factor that we are missing? Well, it’s dead simple – the line pairs per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’, that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense doesn’t it.

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x the pixel row count per millimeter – referred to as the Nyquist Rate, simply because we have to define (sample) 2 lines in order to see/resolve 1 line.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5202 x 3533 pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
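
If you fancy checking these figures yourself, here’s a quick Python sketch of the diagonal-based calculation described above. The camera names and pixel/sensor dimensions are simply the ones quoted in the examples; the output lands within a fraction of a line pair of the figures listed, depending on rounding and exactly which pixel counts you plug in:

```python
import math

def sensor_lp_per_mm(px_w, px_h, mm_w, mm_h):
    """Nyquist-limited sensor resolution in line pairs per millimetre,
    worked out along the imaging-area diagonal as described above."""
    pixel_diagonal = math.hypot(px_w, px_h)        # diagonal in pixels
    physical_diagonal = math.hypot(mm_w, mm_h)     # diagonal in millimetres
    return 0.5 * pixel_diagonal / physical_diagonal  # 2 photosites = 1 line pair

# Dimensions as quoted in the examples above
cameras = {
    "Canon 1DX":   (5202, 3533, 36.0, 24.0),
    "Nikon D7000": (4928, 3264, 23.6, 15.6),
    "Canon 1D IV": (4896, 3264, 27.9, 18.6),
}

for name, spec in cameras.items():
    print(f"{name}: {sensor_lp_per_mm(*spec):.2f} lp/mm")
```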

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The Optical Low Pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power comes at the expense of an earlier onset of diffraction
  4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the 10 and 30 lp/mm MTF values quoted by the likes of Nikon and Canon et al bear little or no connection to the resolving power of their sensors – as I said in my previous post HERE, they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast? It’s contrast that allows us to “see” the lines of detail.
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark, where we can find far more meaningful data about the performance of a potential lens purchase.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.

MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.

As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).
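
To make the ‘contrast ratio’ idea a little more concrete, here’s a minimal sketch of the standard way modulation (contrast) and MTF are worked out from the light and dark tones of a line pair. The 9% and 12–15% thresholds are simply the figures quoted above, not anything measured here:

```python
def michelson_contrast(bright, dark):
    """Modulation (contrast ratio) of a line pair from its light and dark tonal values."""
    return (bright - dark) / (bright + dark)

def mtf(target_contrast, image_contrast):
    """Modulation Transfer = contrast recorded at the image plane
    divided by the contrast of the original target."""
    return image_contrast / target_contrast

target = michelson_contrast(1.00, 0.00)   # pure black/white target: 1.0, i.e. 100%
image  = michelson_contrast(0.55, 0.45)   # washed-out recording:    0.10, i.e. 10%

print(f"MTF = {mtf(target, image):.2f}")   # 0.10

RAYLEIGH_LIMIT = 0.09          # ~9%  - lower limit for average human vision
SENSOR_LIMIT   = (0.12, 0.15)  # 12-15% - the figure some quote for DSLR sensors
print(image >= RAYLEIGH_LIMIT)             # True - the eye could still separate the lines
```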

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would underexpose the sun, so, if you like, MTF would still be sub 100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).

A lens can have a high contrast transfer function but be unsharp.

A lens can have a low contrast transfer function but still be sharp.

In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate that word SHARPNESS; it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.

How Acutance affects perceived “sharpness” – independently of contrast.

So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).

Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!)

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows have come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do exactly? It increases ACUTANCE is what it does – exactly!

Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition but then we switch to a 1.5x crop body we then have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – detail the lens previously needed to resolve at 50 lp/mm now effectively needs resolving at 75 lp/mm, so it’s harder for the lens to resolve, even if the sensor’s resolution is capable of doing so.

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!
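
The arithmetic for converting between these units is trivial; here’s a short sketch, assuming a full-frame image height of 24mm. Remember a line pair is two line widths, which is why LW/IH figures are double the lp/IH ones:

```python
def lp_per_image_height(lp_per_mm, image_height_mm=24.0):
    """Convert lp/mm to line pairs per image height (lp/IH)."""
    return lp_per_mm * image_height_mm

def lw_per_image_height(lp_per_mm, image_height_mm=24.0):
    """Convert lp/mm to line WIDTHS per image height (LW/IH), the unit Imatest quotes.
    One line pair = two line widths."""
    return 2 * lp_per_mm * image_height_mm

print(lp_per_image_height(50))   # 1200.0 lp/IH on a 24mm-high frame
print(lw_per_image_height(50))   # 2400.0 LW/IH
```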

Alas, there is no single source of the empirical data we need in order to evaluate pure lens performance anymore.  And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you, as lens buyers, are left with a confusing myriad of test results, all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, angle of subject to the lens axis to name but three.  Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable – not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canons old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If on the other hand, our sensors have the capability to resolve more lines per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we would be in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!


Sensor Resolution

In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format DSLRs – there’s nothing wrong with megapixels; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.

 

Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20 year old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?

5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.

Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark becomes reduced and therefore that 0.1mm detail becomes less distinct, until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cm, and so on.

Now if I try and focus on something at 25cm my eyeballs start to ache, so we are talking extreme close focus for the eye here.

An interesting side note is that 0.1mm is 100µm (microns) and microns are what we measure the size of sensor photosites in – which brings me nicely to SENSOR resolution.
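
As a quick sanity check, running that 0.1mm-at-25cm figure through basic trigonometry gives an angle of roughly 1.4 arcminutes, which is in the same ballpark as the classic ‘one arcminute’ figure usually quoted for 20/20 vision – a rough sketch, using only the numbers above:

```python
import math

detail_mm = 0.1             # smallest detail resolved by the eye (from above)...
viewing_distance_mm = 250   # ...at a 25cm viewing distance

angle_arcmin = math.degrees(math.atan2(detail_mm, viewing_distance_mm)) * 60
print(f"{angle_arcmin:.2f} arcminutes per 0.1mm line")   # roughly 1.4 arcmin
```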

 

Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, the smaller each photosite has to be in order to fit them all in; and the smaller they are, the lower their individual signal to noise (S/N) ratio.

But there is another problem that comes with increased sensor resolution:

An increased susceptibility to diffraction – it sets in at wider apertures.


Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.

 

What is Diffraction?

Diffraction is basically the bending or spreading of waves around objects placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’ – which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, and so you can’t see the effect unless you hit the stop down button (if you have one) and even then the image in the viewfinder is so small and dark you can’t see it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:


Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.


Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor – further than 21.25mm – therefore they are no longer in focus, but the central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture =  less fuzzy; narrower aperture = more fuzzy, and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.

Now here is the problem:


Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:


Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – these photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite records somewhat imprecise tonal values – this is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used an aperture wide enough on the higher resolution sensor – an aperture that generated an Airy Disc that was the same size or smaller than the diameter of the photosites – then only 1 single photosite would be affected and diffraction would not be visible.
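
If you want to put rough numbers on this, the usual approximation for the Airy Disc’s diameter (out to its first dark ring) is 2.44 × λ × N, with λ around 0.55µm for green light. The sketch below uses the 8.4µm and 4.84µm photosite pitches from the schematics above; note that the strict ‘disc no bigger than one photosite’ rule gives very conservative f-numbers, while allowing the disc to span roughly two to two-and-a-half photosites – an assumption on my part, not a figure from the text – lines up loosely with the f16 and f11 limits I mention further down for the D3 and D3X:

```python
WAVELENGTH_UM = 0.55  # green light, roughly the middle of the visible spectrum

def airy_disc_diameter_um(f_number, wavelength_um=WAVELENGTH_UM):
    """Approximate diameter of the Airy Disc (to its first dark ring), in microns."""
    return 2.44 * wavelength_um * f_number

def max_aperture_before_diffraction(pixel_pitch_um, discs_per_pixel=1.0):
    """f-number at which the Airy Disc grows to 'discs_per_pixel' photosites wide.
    discs_per_pixel=1.0 is the strict one-photosite rule described above;
    a value nearer 2.5 (my assumption) is closer to where diffraction tends
    to become visible in practice."""
    return discs_per_pixel * pixel_pitch_um / (2.44 * WAVELENGTH_UM)

print(f"Airy Disc at f/14: {airy_disc_diameter_um(14):.1f} microns across")

for name, pitch_um in (("8.4 micron pitch (12Mp D3-like)", 8.4),
                       ("4.84 micron pitch (36Mp D800-like)", 4.84)):
    strict = max_aperture_before_diffraction(pitch_um, 1.0)
    loose = max_aperture_before_diffraction(pitch_um, 2.5)
    print(f"{name}: strict f/{strict:.1f}, practical ~f/{loose:.1f}")
```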

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.


Both images shot with a 24mm TS lens at f3.5. Left image lens is set to zero and behaves as normal 24mm lens. Right image has 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction means you potentially lose the newly acquired level of, say, foreground detail in a landscape, due to lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution and image quality to the nth degree you want, then by far the best way to go about it is to jump to medium format – this way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.
