Photoshop View Magnification

View Magnification in Photoshop (Patreon Only).

A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE .

But there is far more to pixels per inch (PPI) resolution values than the general coverage I gave them in that video.

And this post is about a major impact of PPI resolution that seems to have evaded the comprehension of perhaps 95% of Photoshop users – and Lightroom users too, for that matter.

I am talking about image view magnification, and the connection this has to your monitor.

Let’s make a new document in Photoshop:

View Magnification

We’ll make the new document 5 inches by 4 inches, 300ppi:

View Magnification

I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…..

Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.

Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:

View Magnification

Yes, my 5″ long image is in reality 13.5 inches long on the display!

The minute you do this, you may well get very confused!

Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.

Doing this on a 13″ MacBook Pro Retina, the 5″ edge is actually 6.875″, giving us a magnification factor of 1.375:1.

On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long, giving us a magnification factor of 3.2:1.

And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1.

The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.

The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP.

And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
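If you want to sanity-check your own monitor without the plastic ruler, a few lines of Python will do the arithmetic – this is just a rough sketch using the figures quoted above, not anything official; swap in your own pixel count and measured screen width:

```python
# Rough sketch: monitor pixel pitch and how big a 300ppi image 'appears' at 100% view.
# The monitor figures below are the ones quoted above - substitute your own.

def pixel_pitch_ppi(pixels_long_edge, inches_long_edge):
    """Display pixels per inch along the long edge."""
    return pixels_long_edge / inches_long_edge

def magnification_factor(image_ppi, display_ppi):
    """Apparent on-screen size vs nominal print size at 100% view."""
    return image_ppi / display_ppi

monitors = [("24in HP (1920px / 20.5in)", 1920, 20.5),
            ("27in Eizo (2560px / 23.49in)", 2560, 23.49)]

for name, px, inches in monitors:
    ppi = pixel_pitch_ppi(px, inches)
    edge = 5 * magnification_factor(300, ppi)   # the 5in edge of the 5x4, 300ppi document
    print(f"{name}: ~{ppi:.0f}ppi, 5in edge displays at ~{edge:.1f} inches")
```

Run it and the numbers come out within a whisker of the ruler measurements above.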

So WTF Gives with 1:1 or 100% View Magnification Andy?

Well, it’s simple.

The vast majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size; others think it’s a full ppi resolution view and that they are looking at the image at 300ppi.

WRONG – on BOTH counts !!

A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL.  In other words, the image to display pixel ratio is now 1:1.

So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.

Why do I say that?  Because the majority of desktop users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!

When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it at a lowly resolution of 109ppi.  With regard to properties such as sharpness and inter-tonal detail it looks, in essence, only a third as good as it really is.

Hands up those who think this is a BAD THING.

Did you put your hand up?  If you did, then see me after school….

It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.

This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.

Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.

And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.

This brings me to a point where I’d like to draw your attention to my latest YouTube video:

Did you like that – cheeky little trick isn’t it!

Anyway, back to the topic at hand.

If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:

  1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
  2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.

Seriously, if you are a stills photographer with a hankering for the latest 4 or 5k monitor, then grow up and learn to understand things for goodness sake!

Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras.  But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.

Working on a monitor with a ppi resolution between the mid 90s and low 100s, at 100% view magnification, will always give you the most flexible and easy processing workflow.

Just remember, Photoshop linear physical dimensions always ‘appear’ to be larger than ‘real inches’ !

And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel.  At 50% view magnification 1 SCREEN pixel is actually displaying the dithered average of a 2×2 block of 4 IMAGE pixels.  At 25% magnification each monitor pixel is displaying the average of a 4×4 block of 16 image pixels.
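Put as simple arithmetic – and this is just a sketch of the principle, nothing Adobe-specific – the number of image pixels each screen pixel has to summarise grows with the square of the zoom-out:

```python
# Sketch: how many image pixels end up averaged into one screen pixel at a given view %.

def image_pixels_per_screen_pixel(view_percent):
    linear = 100 / view_percent        # image pixels per screen pixel along one edge
    return linear * linear             # total image pixels behind each screen pixel

for view in (100, 50, 25):
    print(f"{view}% view: {image_pixels_per_screen_pixel(view):g} image pixel(s) per screen pixel")
# 100% -> 1, 50% -> 4 (a 2x2 block), 25% -> 16 (a 4x4 block)
```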

Anyway, that’s about it from me until the New Year folks, though I am the world’s biggest Grinch, so I might well do another video or two on YouTube over the ‘festive period’ so don’t forget to subscribe over there.

Thanks for reading, thanks for watching my videos, and Have a Good One!

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

YouTube Channel Latest Video Training

My YouTube Channel Latest Photography Video Training.

I’ve been busy this week adding more content to the old YouTube channel.

Adding content is really time-consuming, with recording times taking around twice the length of the final video.

Then there’s the editing, which usually takes around the same time, or a bit longer.  Then encoding and compression and uploading takes around the same again.

So yes, a 25 minute video takes A LOT more than 25 minutes to make and put live for the world to view.

This week’s video training uploads are:

This video deals with the badly overlooked topic of raw file demosaicing.

Next up is:

This video is a refreshed version of getting contrast under control in Lightroom – particularly Lightroom Classic CC.

Then we have:

This video is something of a follow-up to the previous one, where I explain the essential differences between contrast and clarity.

And finally, one from yesterday – which is me, restraining myself from embarking on a full blown ‘rant’, all about the differences between DPI (dots per inch) and PPI (pixels per inch):

Important Note

Viewing these videos is essential for the betterment of your understanding – yes it is!  And all I ask for in terms of repayment from yourselves is that you:

  1. Click the main channel subscribe button HERE https://www.youtube.com/c/AndyAstbury
  2. Give the video a ‘like’ by clicking the thumbs up!

YouTube is a funny old thing, but a substantial subscriber base and plenty of video likes will bring me closer to laying my hands on the latest gear to review for you!

If all my blog subscribers would subscribe to my YouTube channel then my subs would more than treble – so go on, what are you waiting for?

I do like creating YouTube free content, but I do have to put food on the table, so I have to do ‘money making stuff’ as well, so I can’t afford to become a full-time YouTuber yet!  But wow, would I like to be in that position.

So that’s that – appeal over.

Watch the videos, and if you have any particular topic you would like me to do a video on, then please just let me know.  Either email me, or you can post in the comments below – no comment goes live here unless I approve it, so if you have a request but don’t want anyone else to see it, then just say.


The ND Filter

Long Exposure & ND Filters


A view of the stunning rock formations at Porth Y Post on the Welsh island of Anglesey. The image is a long exposure of very rough sea, giving the impression of smoke and fog.  30 seconds @f13 ISO 100. B&W 10stop ND – unfiltered exposure would have been 1/30th.

The reason for this particular post began last week when I was “cruising” a forum on a PoD site I’m a member of, and I came across a thread started by someone about heavy ND filters and very long exposures.

Then, a couple of days later a Facebook conversation cropped up where someone I know rather well seemed to be losing the plot totally by purchasing a 16 stop ND.

The poor bugger got a right mauling from “yours truly” for the simple reason that he doesn’t understand the SCIENCE behind the art of photography.  This is what pisses me off about digital photography – it readily provides “instant gratification” to folk who know bugger all about what they are doing with their equipment.  They then spend money on “pushing the envelope” only to find their ivory tower comes tumbling down around them because they THOUGHT they knew what they were doing………..stop ranting Andy before you have a coronary!

OK, I’ll stop “ranting”, but seriously folks, it doesn’t matter if you are on a 5DMkIII or a D800E, a D4 or a 1Dx – you have to realise that your camera works within a certain set of fixed parameters; and if you wander outside these boundaries for reasons of either stupidity or ignorance, then you’ll soon be up to your ass in Alligators!

Avid readers of this blog of mine (seemingly there are a few) will know that I’ve gone to great lengths in the past to explain how sensors are limited in different ways by things such as diffraction, and that certain lens/sensor combinations are said to be “diffraction limited”; well here’s something new to run up your flag pole – sensors can be thought of as being “photon limited” too!

I’ll explain what I mean in a minute…..

SENSOR TYPE

Most folk who own a camera of modern design by Nikon or Canon FAIL at the first hurdle by not understanding their sensor type.

Sensors generally fall into two basic types – CCD and CMOS.

Most of us use cameras fitted with CMOS sensors, because we demand accurate fast phase detection AF AND we demand high levels of ADC/BUFFER speed.  In VERY simplistic terms, CCD sensors cannot operate at the levels of speed and efficiency demanded by the general camera-buying public.

So, it’s CMOS to the rescue.  But CMOS sensors are generally noisier than CCDs.

When I say “noise” I’m NOT referring to the normal under exposure luminance noise that some of you might be thinking of. I’m talking about the “background noise” of the sensor itself – see post HERE.

Now I’m going to over simplify things for you here – I need to because there are a lot of variables to take into account.

  • A sensor is an ARRAY of PHOTOSITES or PHOTODIODES.
  • A photodiode exists to do one thing – react to being struck by PHOTONS of light by producing electrons.
  • It should produce electrons PROPORTIONAL to the number of photons that strike it.

Now in theory, a photodiode that sees ZERO photons during the exposure should release NO ELECTRONS.

At the end of the exposure the ADC comes along and counts the electrons for each photodiode – an ANALOGUE VALUE – and converts it to a DIGITAL VALUE and stores that digital value as a point of information in the RAW file.

A RAW converter such as Lightroom then reads all these individual points of information and using its own in-built algorithms it normalises and demosaics them into an RGB image that we can see on our monitor.

Sounds simple doesn’t it, and theoretically it is.  But in practice there’s a lot of places in the process where things can go sideways rapidly……..!

We make a lot of assumptions about our pride and joy – our newly purchased DSLR – and most of these assumptions are just plain wrong.  One that most folk get wrong is presuming ALL the photodiodes on their shiny new sensor BEHAVE IN THE SAME WAY and are 100% identical in response.  WRONG – even though, in theory, it should be true.

Some sensors are built to a budget, some to a standard of quality and bugger the budget.

Think of the above statement as a scale running left to right with crap sensors like a 7D or D5000 on the left, and the staggering Phase IQ260 on the right.  There isn’t, despite what sales bumph says, any 35mm format sensor that can come even close to residing on the right hand end of the scale, but perhaps a D800E might sit somewhere between 65 and 70%.

The thing I’m trying to get at here is that “quality control” and “budget” are opposites in the manufacturing process, and that linearity and uniformity of photodiode performance costs MONEY – and lots of it.

All our 35mm format sensors suffer from a lack of that expensive quality control in some form or other, but what manufacturers try to do is place the resulting poor performance “outside the envelope of normal expected operation” as a Nikon technician once told me.

In other words, during normal exposures and camera usage (is there such a thing?) the errors don’t show themselves – so you are oblivious to them. But move outside of that “envelope of normal expected operation” and as I said before, the Alligators are soon chomping on your butt cheeks.

REALITY

Long exposures in low light levels – those longer than 30 to 90 seconds – present us with one of those “outside the envelope” situations that can highlight some major discrepancies in individual photodiode performance and sensor uniformity.

Earlier, I said that a photodiode, in a perfect world, would always react proportionally to the number of photons striking it, and that if it had no photon strikes during the exposure then it would have ZERO output in terms of electrons produced.

Think of the “perfect” photodiode/photosite as being a child brought up by nuns, well mannered and perfectly behaved.

Then think of a child brought up in the Gallagher household a la “Shameless” – zero patience, no sense of right or wrong, rebellious and downright misbehaved.  We can compare this kid with some of the photodiodes on our sensor.

These odd photodiodes usually show a random distribution across the sensor surface, but you only ever see evidence of their existence when you shoot in the dark, or when executing very long exposures from behind a heavy ND filter.

These “naughty” photodiodes behave badly in numerous ways:

  • They can release a larger number of electrons than is proportional to their photon count.
  • They can go to the extreme of releasing electrons when they have a ZERO photon count.
  • They can mimic the output of their nearest neighbours.
  • They can be clustered together and produce random spurious specks of colour.

And the list goes on!

It’s a Question of Time

These errant little buggers basically misbehave because the combination of low photon count and overly long exposure time allow them to, if you like, run out of patience and start misbehaving.

It is quite common for a single photodiode or cluster of them to behave in a perfect manner for any shutter speed up to between 30 seconds and 2 minutes. But if we expose that same photodiode or cluster for 3 minutes it can show abnormal behavior in its electron output.  Expose it for 5 minutes and its output could be the same, or amplified, or even totally different.

IMPORTANT – do not confuse these with so-called “hot pixels” which show up in all exposures irrespective of shutter duration.

Putting an ND filter in front of your lens is the same as shooting under less light.  Its effect is even-handed across all exposure values in the scene’s brightness range, and therein lies the problem.  Cutting 10 stops worth of photons from the highlights in the scene will still leave plenty to make the sensor work effectively in those areas of the image.

But cutting 10 stops worth of photons from the shadow areas – where there was perhaps 12 stops less to begin with – might well leave an insufficient number of photons in the very darkest areas to make those particular photodiodes function correctly.

Exposure is basically a function of Intensity and Time; back in my college days we used to say that Ex = I x T!

Our ND filter CUTS intensity across the board, so Time has to increase to avoid under exposure in general.  But because we are working with far fewer photons as a whole, we have to curb the length of the Time component BECAUSE OF the level of intensity reduction – we become caught in a “Catch 22” situation, trying to avoid the “time triggered” malfunction of those errant diodes.

Below is a 4 minute exposure from behind a Lee Big Stopper on a 1Dx – click on both images to open at full resolution in a new window.


Canon 1Dx
4 minutes @ f13
ISO 200 Lee 10stop


The beastly Nikon D800E fares a lot better under similar exposure parameters, but there are still a lot of repairs to be done:


A 4 minute exposure on a D800, f11 at 200ISO

Most people use heavy ND filters for the same reason I do – smoothing out water.


The texture of the water in the top shot clutters the image and adds nothing – so get rid of it! D4,ISO 50, 30secs f11 Lee Big Stopper

Then we change the camera orientation and get a commercial shot:


Cemlyn Bay on the northwest coast of Anglesey, North Wales, Approximately 2.5 km to the east is Wylfa nuclear power station. Same exposure as above.

In this next shot all I’m interested in is the jetty; neither the water surface texture nor the land on the horizon adds anything – the land is easy to dump in PShop but the water would be impossible:


I see the bottom image in my head when I look at the scene top left. Again, the 10 stop ND fixes the water, which adds precisely nothing to the image. D4 ISO 50, 60 secs, f14 B&W 10 stop

The mistake folk make is this: 30 seconds is usually enough time to get the effect on the water you want, and 90 to 120 seconds is truly the maximum you should ever really need.  Any longer and you’ll get at best no more effect, and at worst the effect will not look as visually appealing – that’s my opinion anyway.

This time requirement dovetails nicely with the “operating inside the design envelope” physics of the average 35mm format sensor.

So, as I said before, we could go out on a bit of a limb and say that our sensors are all “photon limited”; all diodes on the sensor must be struck by x number of photons.

And we can regard them as being exposure length limited; all diodes on the sensor must be struck by x photons in y seconds in order to avoid the pitfalls mentioned.

So next time you have the idea of obtaining something really daft, such as the 16 stop ND filter my friend ordered, try engaging your brain.  An unfiltered exposure that meters out at 1/30th sec will be 30 seconds behind a 10 stop ND filter, and a whopping 32 minutes behind a 16 stop ND filter.  Now at that sort of exposure time the sensor noise in the image will be astonishing in both presence and variety!
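For anyone who wants to check the arithmetic, here’s a quick sketch – each stop of ND doubles the exposure time, and the 32 minute figure above comes from rounding the 10 stop result down to 30 seconds first; the raw numbers are nearer 34 seconds and 36 minutes:

```python
# Sketch: ND filter exposure arithmetic - each stop doubles the required time.

def filtered_exposure_seconds(unfiltered_seconds, nd_stops):
    return unfiltered_seconds * (2 ** nd_stops)

metered = 1 / 30   # the unfiltered 1/30th sec exposure quoted above

print(f"10 stop ND: {filtered_exposure_seconds(metered, 10):.0f} seconds")
print(f"16 stop ND: {filtered_exposure_seconds(metered, 16) / 60:.0f} minutes")
```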

As I posted on my Book of Face page the other day, just for kicks I shot this last Wednesday night:


Penmon Lighthouse in North Wales at twilight.
Sky is 90 secs, foreground is 4 minutes, D4, f16, ISO 50 B&W 10 stop ND filter

The image truly gives the wrong impression of reality – the wind was cold and gusting to 30mph, and the sea looked very lumpy and just plain ugly.

I spent at least 45 minutes just taking the bloody speckled colour read noise out of the 4 minute foreground exposure – I have to wonder if the image was truly worth the effort in processing.

When you take into account everything I’ve mentioned so far plus the following:

  • Long exposures are prone to ground vibration and the effects of wind on the tripod etc
  • Hanging around in places like the last shot above is plain dangerous, especially when it’s dark.

you must now see that keeping the exposures as short as possible is the sensible course of action, and that for doing this sort of work a 6 stop ND filter is a more sensible addition to your armoury than a 16 stop ND filter!

Just keep away from exposures above 2 minutes.

And before anyone asks, NO – you don’t shoot star trails in one frame over 4 hours unless you’re a complete numpty!  And for anyone who thinks you can cancel noise by shooting a black frame think on this – the black frame has to be shot immediately after the image, and has to be the same exposure duration as the main image.  That means a 4 hour single frame star trail plus black frame to go with it will take at least 8 hours – will your camera battery last that long?  If it dies before the black frame is finished then you lose BOTH frames……………


MTF, Lens & Sensor Resolution


I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words used like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff then things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used looked rather like this:


5 line pairs per millimeter with a contrast ratio of 100% or 1.0

Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide.  These lines are easily differentiated because of the excessive contrast ratio between each line in a pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………


5 line pairs at bottom threshold of human vision – a 9% contrast ratio.

The Rayleigh Criterion basically stipulates that the ‘discernability’ of each line in a pair is low end limited to a line pair contrast ratio of 9% or above, for average human vision – that is, when each line pair is 0.2mm wide and viewed from 25cms.  Obviously they are reproduced much larger here, hence you can see ’em!


Low contrast limit for Human vision (left) & camera sensor (right).

However, it is said in some circles that dslr sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. But we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.

So, what is this crucial factor that we are missing?  Well, it’s dead simple – the line pair per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’, that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense doesn’t it.

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x the pixel row count per millimeter – referred to as the Nyquist Rate, simply because we have to define (sample) 2 lines in order to see/resolve 1 line.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5202 x 3533 pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
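If you want to run the same diagonal-based calculation for your own camera, here’s a quick sketch of it in Python – plug in the pixel counts and sensor dimensions from your spec sheet and you’ll land within a percent or so of the figures above (small differences come down to exactly which pixel counts and dimensions you use):

```python
import math

# Sketch of the diagonal-based Nyquist calculation described above:
# lp/mm = 0.5 x (pixels along the diagonal / diagonal length in mm)

def sensor_lp_per_mm(px_wide, px_high, mm_wide, mm_high):
    pixel_diagonal = math.hypot(px_wide, px_high)       # pixel count along the diagonal
    physical_diagonal = math.hypot(mm_wide, mm_high)    # diagonal length in millimetres
    return 0.5 * pixel_diagonal / physical_diagonal     # 2 pixels needed per line pair

print(f"Canon 1DX  : {sensor_lp_per_mm(5202, 3533, 36.0, 24.0):.1f} lp/mm")
print(f"Nikon D7000: {sensor_lp_per_mm(4928, 3264, 23.6, 15.6):.1f} lp/mm")
```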

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The Optical Low Pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power comes at the expense of an earlier onset of diffraction
  4. The D800E is a far better proposition because it negates 1. above, but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the MTF lp/mm values quoted by the likes of Nikon and Canon et al – 10 and 30 lp/mm – bear little or no connection to the resolving power of their sensors; as I said in my previous post HERE, they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast, because it’s contrast that allows us to “see” the lines of detail?
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark where we can find out all sorts of more meaningful data about our potential lens purchase performance.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.


MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.

As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would under expose the sun, so, if you like, MTF would still be sub 100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).


A lens can have a high contrast transfer function but be unsharp.


A lens can have low contrast transfer function but still be sharp.

In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate that word SHARPNESS; and it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.


How acutance affects perceived “sharpness” – and it is independent of contrast.

So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).

Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!)

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows have come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do exactly?  It increases ACUTANCE, is what it does – exactly!

Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition but then we switch to a 1.5x crop body we then have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – that 50 lp/mm detail has just effectively become 75 lp/mm, so it’s harder for the lens to resolve it, even if the sensor’s resolution is capable of doing so.
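As a quick sketch of that last point (illustrative numbers only):

```python
# Sketch: step back to keep the same framing on a 1.5x crop body and the same scene
# detail arrives at the sensor at a proportionally higher spatial frequency.

def required_lens_lp_per_mm(full_frame_lp_per_mm, crop_factor):
    return full_frame_lp_per_mm * crop_factor

print(required_lens_lp_per_mm(50, 1.5))   # 75.0 lp/mm - the lens has to work harder
```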

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!

Alas, there is no single source of the empirical data we need in order to evaluate pure lens performance anymore.  And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, then you as lens buyers, are left with a confusing myriad of various test results all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, and the angle of the subject to the lens axis, to name but three.  Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable – not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canons old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If, on the other hand, our sensors have the capability to resolve more lines per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we would be in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!


Lens Performance

I have a friend – yes, a strange concept I know, but I do have some – we’ll call him Steve.

Steve is a very talented photographer – when he’ll give himself half a chance; but impatience can sometimes get the better of him.

He’ll have a great scene in front of him but then he’ll forget things such as any focus or exposure considerations the scene demands, and the resulting image will be crap!

Quite often, a few of Steve’s character flaws begin to emerge at this juncture.

Firstly, Steve only remembers his successes; this leads to the unassailable ‘fact’ that he couldn’t possibly have ‘screwed up’.

So now we can all guess the conclusive outcome of that scenario can’t we……..that’s right; his camera gear has fallen short in the performance department.

Clairvoyance department would actually be more accurate!

So this ‘error in his camera system’ needs to be stamped on – hard and fast!

This leads to Steve embarking on a massive information-gathering exercise from various learned sources on ‘that there inter web’ – where another of Steve’s flaws shows up; that of disjointed speed reading…..

The terrifying outcome of these situations usually concludes with Steve’s confident affirmation that some piece of his equipment has let him down; not just by becoming faulty but sometimes, more worryingly by initial design.

These conclusions are always arrived at in the same manner – the various little snippets of truth and random dis-associated facts that Steve gathers, all get forcibly hammered into some hellish, bastardized ‘factual’ jigsaw in his head.

There was a time when Steve used to ask me first, but he gave up on that because my usual answer contravened the outcome of his first mentioned character flaw!

Lately one of Steve’s biggest peeves has been the performance of one or two of his various lenses.

Ostensibly you’ll perhaps think there’s nothing wrong in that – after all, the image generated by the camera is only as good as the lens used to gather the light in the scene – isn’t it?

 

But there’s a potential problem, and it  lies in what evidence you base your conclusions on……………

 

For Steve, at present, it’s manufacturers’ MTF charts, and comparisons thereof, coupled with his own images as they appear in Lightroom or Photoshop ACR.

Again, this might sound like a logical methodology – but it isn’t.

It’s flawed on so many levels.

 

The Image Path from Lens to Sensor

We could think of the path that light travels along in order to get to our camera sensor as a sort of Grand National horse race – a steeplechase for photons!

“They’re under starters orders ladies and gentlemen………………and they’re off!”

As light enters the lens it comes across its first set of hurdles – the various lens elements and element groups that it has to pass through.

Then they arrive at Becher’s Brook – the aperture, where there are many fallers.

Carefully staying clear of the inside rail and being watchful of any loose photons that have unseated their riders at Becher’s, we move on over Foinavon – the rear lens elements – and then arrive at the infamous Canal Turn – the Optical Low Pass filter, also known as the Anti-alias filter.

Crashing on past the low pass filter and on over Valentine’s, only the bravest photons are left to tackle the last big fence on their journey – The Chair – our camera sensor itself.

 

Okay, I’ll behave myself now, but you get the general idea – any obstacle that lies in the path of light between the front surface of our lens and the photo-voltaic surface of our sensor is a BAD thing.


The various obstacles to light as it passes through a camera (ASIC = Application Specific Integrated Circuit)

The problems are many, but let’s list a few:

  1. Every element reduces the level of transmitted light.
  2. Because the lens elements have curved surfaces, light is refracted or bent; the trick is to make all wavelengths of light refract to the same degree – failure results in either lateral or longitudinal chromatic aberration – or worse still, both.
  3. The aperture causes diffraction – already discussed HERE

We have already seen in that same previous post on Sensor Resolution that the number of megapixels can affect overall image quality in terms of overall perceived sharpness due to pixel pitch, so all things considered, using photographs of any 3 dimensional scene is not always a wise method of judging lens performance.

And here is another reason why it’s not a good idea – the effect on image quality/perceived lens resolution of the anti-alias, moiré or optical low pass filter, and any other pre-filtering.

I’m not going to delve into the functional whys and wherefores of an AA filter, save to say that it’s deemed a necessary evil on most sensors, and that it can make your images take on a certain softness because it basically adds blur to every edge in the image projected by the lens onto your sensor.

The reasoning behind it is that it stops ‘moire patterning’ in areas of high frequency repeated detail.  This it does, but what about the areas in the image where its effect is not required – TOUGH!

 

Many photographers have paid service suppliers for AA filter removal just to squeeze the last bit of sharpness out of their sensors, and Nikon of course offer the ‘sort of AA filter-less’ D800E.

Side bar note:  I’ve always found that with Nikon cameras at least, the pro-body range seems to suffer a lot less from undesirable AA filtration softening than their “amateur” and “semi pro” bodies – most notably the D2X compared to a D200, and the D3 compared to the D700 & D300.  Perhaps this is due to a ‘thinner’ filter, or a higher quality filter – I don’t know, and to be honest I’ve never had the desire to ‘poke Nikon with a sharp stick’ in order to find out.

 

Back in the days of film things were really simple – image resolution was governed by just two things; lens resolution and film resolution:

1/image resolution = 1/lens resolution + 1/film resolution

Film resolution was a variable depending on the Ag Halide distribution and structure,  dye coupler efficacy within the film emulsion, and the thickness of the emulsion or tri-pack itself.

But today things are far more complicated.

With digital photography we have all those extra hurdles to jump over that I mentioned earlier, so we end up with a situation whereby:

1/Image Resolution = 1/lens resolution + 1/AA filter resolution + 1/sensor resolution + 1/image processor/imaging ASIC resolution
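To see why every extra hurdle hurts, here’s a tiny sketch of that reciprocal chain with purely illustrative numbers – a hypothetical 100 lp/mm lens on the roughly 72 lp/mm 1DX sensor worked out in the previous post, ignoring the AA filter and ASIC terms for brevity:

```python
# Sketch: reciprocal addition of system resolutions - the result is always lower
# than the weakest individual component. Numbers are illustrative only.

def combined_resolution(*component_lp_per_mm):
    return 1 / sum(1 / r for r in component_lp_per_mm)

print(f"{combined_resolution(100, 72):.1f} lp/mm")   # ~41.9 lp/mm for the whole chain
```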

Steve is chasing after lens resolution under the slightly misguided idea that resolution equates to sharpness, which is not strictly true; and he is basing his conception of lens sharpness on the detail content and perceived detail ‘sharpness’ of his images – which are ‘polluted’, if you like, by the effects of the AA filter, sensor and imaging ASIC.

What it boils down to, in very simplified terms, is this:

You can have one particular lens that, in combination with one camera sensor produces a superb image, but in combination with another sensor produces a not-quite-so-superb image!

On top of the “fixed system” hurdles I’ve outlined above, we must not forget the potential for errors introduced by lens-to-body mount flange inaccuracies, and of course, the big elephant-in-the-room – operator error – ehh Steve.

So attempting to quantify the pure ‘optical performance’ of a lens using your ‘taken images’ is something of a pointless exercise; you cannot see the pure lens sharpness or resolution unless you put the lens on a fully equipped optical test bench – and how many of us have got access to one of those?

The truth of the matter is that the average photographer has to trust the manufacturers to supply accurately put together equipment, and he or she has to assume that all is well inside the box they’ve just purchased from their photographic supplier.

But how can we judge a lens against an assumed standard of perfection before we part with our cash?

A lot of folk, including Steve – look at MTF charts.

 

The MTF Chart

Firstly, MTF stands for Modulation Transfer Function – modu-what, I hear you ask!

OK – let’s deal with the modulation bit.  Forget colour for a minute and consider yourself living in a black & white world.  Dark objects in a scene reflect few photons of light – ’tis why they appear dark!  Conversely, bright objects reflect loads of the little buggers, hence these objects appear bright.

Imagine now that we are in a sealed room totally impervious to the ingress of any light from outside, and that the room is painted matte white from floor to ceiling – what is the perceived colour of the room? Black is the answer you are looking for!

Now turn on that 2 million candle-power 6500k searchlight in the corner.  The split second before your retinas melted, what was the perceived colour of the room?

Note the use of the word ‘perceived’ – the actual colour never changed!

The luminosity value of every surface in the room changed from black to white/dark to bright – the luminosity values MODULATED.

Now back in reality we can say that a set of alternating black and white lines of equal width and crisp clean edges represent a high degree of contrast, and therefore tonal modulation; and the finer the lines the higher is the modulation frequency – which we measure in lines per millimeter (lpmm).

A lens takes in a scene of these alternating black and white lines and, just like it does with any other scene, projects it into an image circle; in other words it takes what it sees in front of it and ‘transfers’ the scene to the image circle behind it.

With a bit of luck and a fair wind this image circle is being projected sharply into the focal plane of the lens, and hopefully the focal plane matches up perfectly with the plane of the sensor – what used to be referred to as the film plane.

The efficacy with which the lens carries out this ‘transfer’ in terms of maintaining both the contrast ratio of the modulated tones and the spatial separation of the lines is its transfer function.

So now you know what MTF stands for and what it means – good this isn’t it!

 

Let’s look at an MTF chart:


Nikon 500mm f4 MTF chart

Now what does all this mean?

 

Firstly, the vertical axis – this can be regarded as that ‘efficacy’ I mentioned above – the accuracy of tonal contrast and separation reproduction in the projected image; 1.0 would be perfect, and 0 would be crappier than the crappiest version of a crap thing!

The horizontal axis – this requires a bit of brain power! It is scaled in increments of 5 millimeters from the lens axis AT THE FOCAL PLANE.

The terminus value at the right hand end of the axis is unmarked, but equates to 21.63mm – half the opposing corner-to-corner dimension of a 35mm frame.
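That 21.63mm isn’t an arbitrary number – it’s simply half the corner-to-corner diagonal of the 36mm x 24mm frame:

```python
import math

# Half the corner-to-corner diagonal of a 36mm x 24mm frame.
print(f"{math.hypot(36, 24) / 2:.2f} mm")   # 21.63 mm
```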

Now consider the diagram below:


The radial dimensions of the 35mm format.

These are the radial dimensions, in millimeters, of a 35mm format frame (solid black rectangle).

The lens axis passes through the center axis of the sensor, so the radii of the green, yellow and dashed circles correspond to values along the horizontal axis of an MTF chart.

Let’s simplify what we’ve learned about MTF axes:


MTF axes hopefully made simpler!

Now we come to the information data plots; firstly the meaning of Sagittal & Meridional.   From our perspective in this instance I find it easier for folk to think of them as ‘parallel to’ and ‘at right angles to’ the axis of measurement, though strictly speaking Meridional is circular and Sagittal is radial.

This axis of measurement is from the lens/film plane/sensor center to the corner of a 35mm frame – in other words, along that 21.63mm radius.


The axis of MTF measurement and the relative axial orientation of Sagittal & Meridional lines. NOTE: the target lines are ONLY for illustration.

Separate measurements are taken for each modulation frequency along the entire measurement axis:


Thin Meridional MTF measurement. (They should be concentric circles but I can’t draw concentric circles!).

Let’s look at that MTF curve for the 500mm f4 Nikon together with a legend of ‘sharpness’ – the 300mm f2.8:


Nikon MTF comparison between the 500mm f4 & 300mm f2.8

Nikon say on their website that they measure MTF at maximum aperture, that is, wide open; so the 300mm chart is for an aperture of f2.8 (though they don’t say so) and the 500mm is for an f4 aperture – which they do specify on the chart – don’t ask me why ‘cos I’ve no idea.

As we can see, the best transfer values for the two lenses (and all other lenses) are at 10 lines per millimeter, and generally speaking sagittal orientation usually performs slightly better than meridional, but not always.

10 lpmm is always going to give a good transfer value because it’s very coarse and represents a lower frequency of detail than 30 lpmm.

Funny thing, 10 lines per millimeter is 5 line pairs per millimeter – and where have we heard that before? HERE – it’s the resolution of the human eye at 25 centimeters.

 

Another interesting thing to bear in mind is that, as the charts clearly show, better transfer values occur closer to the lens axis/sensor center, and that performance falls as you get closer to the frame corners.

This is simply down to the fact that your are getting closer to the inner edge of the image circle (the dotted line in the diagrams above).  If manufacturers made lenses that threw a larger image circle then corner MTF performance would increase – it can be done – that’s the basis upon which PCE/TS lenses work.

One way to take advantage of center MTF performance is to use a cropped sensor – I still use my trusty D2Xs for a lot of macro work; not only do I get the benefit of center MTF performance across the majority of the frame but I also have the ability to increase the lens to subject distance and get the composition I want, so my depth of field increases slightly for any given aperture.

Back to the matter at hand, here’s my first problem with the likes of Nikon, Canon etc: they don’t specify the lens-to-target distance. A lens that gives a transfer value of 90% plus on a target of 10 lpmm sagittal at 2 meters distance is one thing; one that did the same but at 25 meters would be something else again.

You might look at the MTF chart above and think that the 300mm f2.8 lens is poor on a target resolution of  30 lines per millimeter compared to the 500mm, but we need to temper that conclusion with a few facts:

  1. A 300mm lens is a lot wider in Field of View (FoV) than a 500mm so there is a lot more ‘scene width’ being pushed through the lens – detail is ‘less magnified’.
  2. How much ‘less magnified’ – 40% less than at 500mm, and yet the 30 lpmm transfer value is within 6% to 7% of that of the 500mm – overall a seemingly much better lens in MTF terms.
  3. The lens is f2.8 – great for letting light in but rubbish for everything else!

Most conventional lenses have one thing in common – their best working aperture for overall image quality is around f8.

But we have to counter balance the above with the lack of aforementioned target distance information.  The minimum focus distances for the two comparison lenses are 2.3 meters and 4.0 meters respectively so obviously we know that the targets are imaged and measured at vastly different distances – but without factual knowledge of the testing distances we cannot really say that one lens is better than the other.

 

My next problem with most manufacturers’ MTF charts is that the values are supplied ‘a la white light’.

I mentioned earlier – much earlier! – that lens elements refracted light, and the importance of all wavelengths being refracted to the same degree, otherwise we end up with either lateral or longitudinal chromatic aberration – or worse still – both!

Longitudinal CA will give us different focal planes for different colours contained within white light – NOT GOOD!

Lateral CA gives us the same plane of focus but this time we get lateral shifts in the red, green and blue components of the image, as if the 3 colour channels have come out of register – again NOT GOOD!

Both CA types are most commonly seen along defined edges of colour and/or tone, and as such they both affect transferred edge definition and detail.

So why do manufacturers NOT publish this information?  There is, to my knowledge, only one that does – Schneider (read ‘proper lens’).

They produce some very meaningful MTF data for their lenses with modulation frequencies in excess of 90 to 150 lpmm; separate R,G & B curves; spectral weighting variations for different colour temperatures of light and all sorts of other ‘geeky goodies’ – I just love it all!

 

SHAME ON YOU NIKON – and that goes for Canon and Sigma just as much.

 

So you might now be asking WHY they don’t publish the data – they must have it – are they treating us like fools that wouldn’t be able to understand it; OR – are they trying to hide something?

You guys think what you will – I’m not accusing anyone of anything here.

But if they are trying to hide something then that ‘something’ might not be what you guys are thinking.

What would you think if I told you that if you were a lens designer you could produce an MTF plot with a calculator – ‘cos you can, and they do!

So, in a nutshell, most manufacturers’ MTF charts as published for us to see are worse than useless.  We can’t effectively use them to compare one lens against another because of missing data; we can’t get an idea of CA performance because of the missing red, green and blue MTF curves; and finally we can’t even trust that the little data they do impart is even bloody genuine.

Please don’t get taken in by them next time you fancy spending money on glass – take your time and ask around; better still, try one, and try it on more than one camera body!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Sensor Resolution


In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format dSLRs – there’s nothing wrong with megapixels; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.

 

Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20-year-old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?

5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.


Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark is reduced and that 0.1mm detail becomes less distinct, until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cm, and so on.

Now, if I try and focus on something at 25cm my eyeballs start to ache, so we are talking extreme close focus for the eye here.
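Just to put that 0.1mm figure into some context, here’s a quick sketch scaling it out to more comfortable viewing distances.  It assumes resolvable detail grows linearly with distance – a small-angle assumption of mine, not a figure from any optician:

```python
# 20/20/20 acuity: ~5 line pairs per mm, i.e. 0.1mm detail, at 25cm.
ACUITY_MM_AT_25CM = 0.1

for distance_cm in (25, 50, 100, 300):
    detail_mm = ACUITY_MM_AT_25CM * (distance_cm / 25)
    print(f"At {distance_cm:>3} cm the eye resolves roughly {detail_mm:.1f} mm detail")
```

In other words, by the time a print is hanging on a wall and viewed from a meter or more away, the detail the eye can actually pick out is several times coarser than that best-case 0.1mm.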

An interesting side note is that 0.1mm is 100µm (microns), and microns are the units we use to measure sensor photosite size – which brings me nicely to SENSOR resolution.

 

Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE, we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, the smaller each photosite has to be in order to fit them all in there; and the smaller they are, the lower their individual signal-to-noise (S/N) ratio.

But there is another problem that comes with increased sensor resolution:

Increased susceptibility to diffraction – the sensor becomes diffraction limited at a wider aperture.


Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.
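If you want to sanity-check those photosite counts, the pitch falls straight out of the sensor width and the horizontal pixel count.  The figures below are the approximate published specs for the D3 and D800 as I recall them, so treat them as ballpark values:

```python
# Approximate pixel pitch, and how many photosites span a 25 micron patch.
sensors = {
    "Nikon D3 (12Mp)":   {"width_mm": 36.0, "pixels_wide": 4256},
    "Nikon D800 (36Mp)": {"width_mm": 35.9, "pixels_wide": 7360},
}

for name, s in sensors.items():
    pitch_um = s["width_mm"] * 1000 / s["pixels_wide"]  # microns per photosite
    across_25um = 25 / pitch_um                         # photosites across 25 µm
    print(f"{name}: pitch of about {pitch_um:.2f} µm, "
          f"roughly {across_25um:.1f} photosites across a 25 µm square")

# Light-gathering area per photosite scales with the square of the pitch:
d3_pitch = 36.0 * 1000 / 4256
d800_pitch = 35.9 * 1000 / 7360
print(f"Each D3 photosite has about {(d3_pitch / d800_pitch) ** 2:.1f}x "
      "the area of a D800 photosite")
```

That area ratio of roughly 3x is also why the bigger photosite gets the better signal-to-noise ratio mentioned above.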

 

What is Diffraction?

Diffraction is basically the bending or spreading of waves by objects placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’, which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, so you can’t see the effect unless you hit the stop-down (depth of field preview) button – if you have one – and even then the image in the viewfinder is so small and dark that you can’t really judge it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:


Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.


Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor – further than 21.25mm – so they are no longer in focus, but the central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture = less fuzzy; narrower aperture = more fuzzy – and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.
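For anyone who wants to put numbers on that ‘very particular size’, the standard textbook approximation for the Airy Disc diameter (out to its first dark ring) is 2.44 × wavelength × f-number – not a figure from any lens spec sheet, just the classic optics estimate.  Here’s how the disc grows as we stop down, assuming mid-green light at 550nm:

```python
# Approximate Airy Disc diameter: d ≈ 2.44 * wavelength * f_number
WAVELENGTH_UM = 0.55  # mid-green light, roughly the eye's peak sensitivity

for f_number in (2.8, 5.6, 8, 11, 14, 16, 22):
    disc_um = 2.44 * WAVELENGTH_UM * f_number
    print(f"f/{f_number:<4}: Airy Disc about {disc_um:4.1f} µm across")
```

Don’t read those numbers too literally against single photosite sizes – visible softening in a finished print usually turns up a stop or two later than a strict disc-versus-photosite comparison suggests, because demosaicing, sharpening and output size all play a part.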

Now here is the problem:


Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:


Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – those photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite on the sensor records somewhat imprecise tonal values.  This is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used a wide enough aperture on the higher resolution sensor – one that generated an Airy Disc the same size as, or smaller than, the diameter of a photosite – then only a single photosite would be affected and diffraction would not be a problem.
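Turning that last point into numbers: rearranging the same approximation gives the f-number at which the disc first matches one photosite width.  It’s a deliberately strict criterion – in real shooting the visibly objectionable limit arrives a good deal later, as my D3 and D3X experience below bears out – and the pitches are the same ballpark figures as before:

```python
# Rearranging d = 2.44 * wavelength * N gives N = pitch / (2.44 * wavelength):
# the aperture at which the Airy Disc has grown to one photosite width.
WAVELENGTH_UM = 0.55

for name, pitch_um in (("~8.4 µm pitch (D3-class)", 8.4),
                       ("~4.9 µm pitch (D800-class)", 4.9)):
    n = pitch_um / (2.44 * WAVELENGTH_UM)
    print(f"{name}: disc equals one photosite at roughly f/{n:.1f}")
```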

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.


Both images shot with a 24mm TS lens at f3.5. Left: lens controls set to zero, behaving as a normal 24mm lens. Right: 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction by opening up potentially costs you that newly acquired level of detail – say, foreground detail in a landscape – through lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution and image quality to the nth degree you want, then by far the best way to go about it is to jump to medium format – that way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
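That arithmetic is trivial, but here it is as a throwaway helper anyway – handy if you ever need to check a batch of files:

```python
def pixel_resolution(pixels: int, inches: float) -> float:
    """Pixels per inch along one edge."""
    return pixels / inches

print(pixel_resolution(3000, 10))  # 300.0 PPI along the long edge
print(pixel_resolution(2400, 8))   # 300.0 PPI along the short edge
```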

 

How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:


The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 square images are 0.5 x 0.5 inches.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 = 256 pixels per inch, or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).
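If you want to trace the seven squares for yourself, the progression is just repeated halving:

```python
# Seven 0.5 x 0.5 inch squares: the pixel count along each half-inch edge
# halves at every step, so the pixel resolution halves too.
pixels_per_half_inch = 128
for square in range(1, 8):
    ppi = pixels_per_half_inch * 2  # pixels per full inch
    print(f"Square {square}: {pixels_per_half_inch:>3} px per 0.5 inch -> {ppi} PPI")
    pixels_per_half_inch //= 2
```

By the final square there are only 2 pixels along each half-inch edge – the 4PPI image on the far right.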

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital workflow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

All RAW file handlers like Lightroom, ACR etc. default to a native pixel resolution of 300PPI.* (this 300ppi myth annoys the hell out of me and I’ll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.

 

Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:


European Adder – 900 x 599 pixels with a pixel resolution of 300PPI


European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:


The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop screens etc. are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12.5″ x 8.3″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.
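The linear dimensions fall straight out of the pixel dimensions and the pixel resolution – a quick sketch using the two Adder jpegs:

```python
# Linear size in inches = pixel dimension / pixel resolution (PPI).
def linear_size(px_long: int, px_short: int, ppi: int) -> tuple[float, float]:
    return px_long / ppi, px_short / ppi

print(linear_size(900, 599, 300))  # about (3.0, 2.0) inches - the 'tiny' version
print(linear_size(900, 599, 72))   # about (12.5, 8.3) inches - the 'whopping' one
```

Change the PPI value and only the print size changes – the on-screen size, being governed purely by pixel dimensions, stays exactly the same.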

Just FYI – when I decide I’m going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever, I NEVER ‘down sample’ to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:

 

PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:


The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (a 20 x 20 dot square) to print 1 square pixel of an image at 300 PPI native.
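To see why that ratio gets so big, the sum is simply (printer DPI ÷ image PPI) squared.  The 6000 DPI figure below is a hypothetical round number chosen to reproduce the 20-dots-square example above – plug in whatever output resolution your own driver actually reports:

```python
# Printer dots laid down per image pixel.
def dots_per_pixel(printer_dpi: float, image_ppi: float) -> tuple[float, float]:
    dots_per_edge = printer_dpi / image_ppi
    return dots_per_edge, dots_per_edge ** 2

print(dots_per_pixel(6000, 300))  # (20.0, 400.0) - hypothetical, matches the example
print(dots_per_pixel(1440, 300))  # (4.8, ~23)    - a more modest driver setting
```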

See you in my next post!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.