YouTube Channel Latest Video Training

My YouTube Channel Latest Photography Video Training.

I’ve been busy this week adding more content to the old YouTube channel.

Adding content is really time-consuming, with recording times taking around twice the length of the final video.

Then there’s the editing, which usually takes around the same time, or a bit longer.  Then encoding and compression and uploading takes around the same again.

So yes, a 25 minute video takes A LOT more than 25 minutes to make and publish for the world to view.

This week’s video training uploads are:

This video deals with the badly overlooked topic of raw file demosaicing.

Next up is:

This video is a refreshed version of getting contrast under control in Lightroom – particularly Lightroom Classic CC.

Then we have:

This video is something of a follow-up to the previous one, where I explain the essential differences between contrast and clarity.

And finally, one from yesterday – which is me, restraining myself from embarking on a full blown ‘rant’, all about the differences between DPI (dots per inch) and PPI (pixels per inch):

Important Note

Viewing these videos is essential for the betterment of your understanding – yes it is!  And all I ask for in terms of repayment from yourselves is that you:

  1. Click the main channel subscribe button HERE https://www.youtube.com/c/AndyAstbury
  2. Give the video a ‘like’ by clicking the thumbs up!

YouTube is a funny old thing, but a substantial subscriber base and plenty of video ‘likes’ will bring me closer to laying my hands on the latest gear to review for you!

If all my blog subscribers would subscribe to my YouTube channel then my subs would more than treble – so go on, what are you waiting for?

I do like creating free YouTube content, but I do have to put food on the table, so I have to do ‘money making stuff’ as well – I can’t afford to become a full-time YouTuber yet!  But wow, would I like to be in that position.

So that’s that – appeal over.

Watch the videos, and if you have any particular topic you would like me to do a video on, then please just let me know.  Either email me, or you can post in the comments below – no comment goes live here unless I approve it, so if you have a request but don’t want anyone else to see it, then just say.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

MTF, Lens & Sensor Resolution

I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words like ‘resolution’ tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used for those line pairs looked rather like this:

5 line pairs per millimeter with a contrast ratio of 100% or 1.0

Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide.  These lines are easily differentiated because of the excessive contrast ratio between each line in a pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………

5 line pairs at bottom threshold of human vision – a 9% contrast ratio.

The Rayleigh Criterion basically stipulates that the ‘discernibility’ of each line in a pair has a lower limit of a 9% line pair contrast ratio for average human vision – that is, when each line pair is 0.2mm wide and viewed from 25cms.  Obviously they are reproduced much larger here, hence you can see ’em!

Low contrast limit for Human vision (left) & camera sensor (right).

However, it is said in some circles that DSLR sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. but we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.

So, what is this crucial factor that we are missing?  Well, it’s dead simple – the line pair per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’, that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense, doesn’t it?

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x its pixel count per millimeter – referred to as the Nyquist Rate, simply because we have to define (sample) 2 pixels in order to see/resolve 1 line pair.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5202 x 3533 pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2Mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1Mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
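
If you fancy checking these figures yourself, here’s a minimal Python sketch of the diagonal method described above – my own arithmetic, not anything from a manufacturer; the D800’s 7360 x 4912 photosite count is taken from published specs rather than from this post, and small differences in the assumed sensor dimensions account for the odd fraction of a lp/mm disagreement with the values quoted.

```python
import math

def sensor_lp_mm(px_w, px_h, mm_w, mm_h):
    """lp/mm = (pixel diagonal / physical diagonal) x 0.5 - two photosites per line pair."""
    pixel_diag = math.hypot(px_w, px_h)     # imaging-area diagonal in pixels
    physical_diag = math.hypot(mm_w, mm_h)  # imaging-area diagonal in millimeters
    return 0.5 * pixel_diag / physical_diag

print(round(sensor_lp_mm(5202, 3533, 36.0, 24.0), 2))  # Canon 1DX  -> ~72 lp/mm
print(round(sensor_lp_mm(7360, 4912, 35.9, 24.0), 2))  # Nikon D800 -> ~102 lp/mm
```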

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The Optical Low Pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power comes at the cost of increased susceptibility to diffraction
  4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the MTF lp/mm values of 10 and 30 lp/mm quoted by the likes of Nikon and Canon et al bear little or no connection to the resolving power of their sensors – as I said in my previous post HERE – they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast, because it’s contrast that allows us to “see” the lines of detail?
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark where we can find all sorts of more meaningful data about the performance of a potential lens purchase.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.

MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.

As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would under expose the sun, so, if you like, MTF would still be sub 100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).

A lens can have a high contrast transfer function but be unsharp.

A lens can have low contrast transfer function but still be sharp.

In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate that word SHARPNESS; it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.

How acutance affects perceived “sharpness” – independent of contrast.

So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).

Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!)

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows have come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do exactly?  It increases ACUTANCE is what it does – exactly!

Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition but then we switch to a 1.5x crop body we then have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – that 50 lp/mm detail has just effectively dropped to 75 lp/mm, so it’s harder for the lens to resolve it, even if the sensor’s resolution is capable of doing so.
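
A back-of-envelope sketch of that crop-factor effect (my own illustration, not a formula from any manufacturer):

```python
def required_lp_mm(full_frame_lp_mm, crop_factor):
    """Same composition shot from further back on a crop body: the detail is
    projected smaller on the sensor, so the spatial frequency the lens must
    resolve rises by the crop factor."""
    return full_frame_lp_mm * crop_factor

print(required_lp_mm(50, 1.5))  # 75.0 lp/mm on a 1.5x (DX) body
```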

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!
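
The conversions between these units are mercifully simple – here’s a minimal sketch, where the 24mm FX frame height and the 50 lp/mm figure are just example numbers of my own choosing:

```python
def lp_mm_to_lp_ih(lp_mm, frame_height_mm):
    """Line pairs per image height = lp/mm scaled by the frame height."""
    return lp_mm * frame_height_mm

def lp_ih_to_lw_ih(lp_ih):
    """One line PAIR is two line WIDTHS - Imatest's LW/IH is simply double."""
    return 2 * lp_ih

lp_ih = lp_mm_to_lp_ih(50, 24.0)     # 50 lp/mm on a 24mm-high FX frame
print(lp_ih, lp_ih_to_lw_ih(lp_ih))  # 1200.0 lp/IH, 2400.0 LW/IH
```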

Alas, there is no single source of the empirical data we need in order to evaluate pure lens performance anymore.  And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you, as lens buyers, are left with a confusing myriad of test results all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ’cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, angle of subject to the lens axis, to name but three.  Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable, not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canons old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If on the other hand our sensors have the capability to resolve more line pairs per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we are in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!

Become a Patron!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Sensor Resolution

In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format dSLRs – there’s nothing wrong with megapixels; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.


Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20 year old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?

5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.

Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark becomes reduced and therefore that 0.1mm detail becomes less distinct, until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cms, and so on.

Now if I try and focus on something at 25 cms my eyeballs start to ache,  so we are talking extreme close focus for the eye here.

An interesting side note is that 0.1mm is 100µm (microns) and microns are what we measure the size of sensor photosites in – which brings me nicely to SENSOR resolution.


Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, then the smaller each photosite has to be in order to fit them all in there;  and the smaller they are then the lower is their individual signal to noise or S/N ratio.

But there is another problem that comes with increased sensor resolution:

A lowered diffraction threshold – diffraction sets in at wider apertures.

Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.
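
Pixel pitch itself is easy to estimate from sensor width and photosite count – a rough sketch, using published pixel counts (4256 across for the D3, 7360 for the D800) that aren’t quoted in the text above:

```python
def pixel_pitch_um(sensor_width_mm, pixels_across):
    """Approximate photosite size: sensor edge divided by photosite count."""
    return sensor_width_mm / pixels_across * 1000  # millimeters -> microns

print(round(pixel_pitch_um(36.0, 4256), 2))  # Nikon D3   -> ~8.46 um
print(round(pixel_pitch_um(35.9, 7360), 2))  # Nikon D800 -> ~4.88 um
```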


What is Diffraction?

Diffraction is basically the bending of waves around objects or edges placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’; which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, and so you can’t see the effect unless you hit the stop down button (if you have one) and even then the image in the viewfinder is so small and dark you can’t see it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:

Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.

Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor; further than 21.25mm – therefore they are no longer in focus, but those central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture =  less fuzzy; narrower aperture = more fuzzy, and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.
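
For the curious, the standard textbook approximation for the Airy Disc’s first-minimum diameter is 2.44 x wavelength x f-number – that formula comes from optics texts rather than anything above, so treat this as a ballpark sketch for green light:

```python
def airy_disc_um(f_number, wavelength_nm=550):
    """First-minimum Airy Disc diameter: ~2.44 * lambda * N, in microns."""
    return 2.44 * (wavelength_nm / 1000) * f_number

for f in (3.5, 8, 14, 22):
    print(f"f/{f}: {airy_disc_um(f):.1f} um")
# f/14 -> ~18.8 um: wider than a D3 photosite (8.4 um), never mind a D800's (4.84 um)
```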

Now here is the problem:

Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:

Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – these photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite on the sensor records somewhat imprecise tonal values – this is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used a wide enough aperture on the higher resolution sensor – an aperture that generated an Airy Disc the same size or smaller than the diameter of the photosites – then only 1 single photosite would be affected and diffraction would not occur.

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.

Both images shot with a 24mm TS lens at f3.5. Left image lens is set to zero and behaves as normal 24mm lens. Right image has 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction causes you to potentially lose the newly acquired level of, say, foreground detail in a landscape, due to lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!
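
One common rule of thumb says diffraction starts to bite once the Airy Disc spans roughly two photosites.  It is only a rule of thumb – criteria vary, and my field experience above brackets rather than matches it – but as a sketch:

```python
def diffraction_limited_fstop(pixel_pitch_um, wavelength_nm=550, disc_to_pitch=2.0):
    """f-number at which the Airy Disc (~2.44 * lambda * N) spans ~2 photosites."""
    return disc_to_pitch * pixel_pitch_um / (2.44 * wavelength_nm / 1000)

print(round(diffraction_limited_fstop(8.45), 1))  # Nikon D3  (~8.45 um pitch) -> ~f12.6
print(round(diffraction_limited_fstop(5.95), 1))  # Nikon D3X (~5.95 um pitch) -> ~f8.9
```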

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution you want, and image quality to the nth degree, then by far the best way to go about it is to jump to medium format – this way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution  I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here’s those two snake jpegs again:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

Digital display size is driven by pixel dimension – NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops and I know it’s hard for some folk to get their heads around print size and printer output, but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:

Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste but it’s a seller so boo-hoo – what do I know ! !

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (this is the effective imaging area – total photosite area is 4992 x 3292 according to DXO Labs).

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300  x  3280/300 inches  or  16.43″ x 10.93″

or

  • 417.24 x 277.71 mm for those of you with a metric inclination!
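
That arithmetic as a quick Python sketch:

```python
def print_size_inches(px_long, px_short, ppi=300):
    """Linear dimension = pixel dimension / pixel resolution (PPI)."""
    return px_long / ppi, px_short / ppi

w, h = print_size_inches(4928, 3280)          # Nikon D4 file at 300 PPI
print(f"{w:.2f} x {h:.2f} inches")            # 16.43 x 10.93 inches
print(f"{w * 25.4:.2f} x {h * 25.4:.2f} mm")  # 417.24 x 277.71 mm
```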

So how big CAN we print this image?


Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”
  • Well, the strict answer would be No, and No – not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor can easily be derived from the long edge ‘pixel’ count of the FX frame divided by its linear length, which is just shy of 36mm or 1.4″ – 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?


Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:

Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:

Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor,  Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
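
Here’s the same magnification sum as a sketch – the 43.2mm sensor diagonal is from the table above, while the D800 pixel counts are published specs rather than figures from this post:

```python
import math

def magnification(px_w, px_h, ppi, sensor_diag_mm):
    """Diagonal of the native-PPI print over the sensor diagonal."""
    print_diag_mm = math.hypot(px_w / ppi, px_h / ppi) * 25.4
    return print_diag_mm / sensor_diag_mm

print(round(magnification(4928, 3280, 300, 43.2), 1))  # Nikon D4   -> ~11.6x
print(round(magnification(7360, 4912, 300, 43.2), 1))  # Nikon D800 -> ~17.3x
```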

So you can now begin to see why pixel resolution is so important when we print.


How To Blow Up A Squirrel !

Let’s get back to ‘his cuteness’ and open him up in Photoshop:

Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically A3.

What happens if we hit print using A3 paper?

Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:

Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:

Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:

Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed any physical dimension and we haven’t resampled the image.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
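
A little sketch to confirm what that PPI change does to the linear dimensions (and nothing else):

```python
def linear_size_mm(px_w, px_h, ppi):
    """Same pixels, different PPI tag - only the print's linear size changes."""
    return round(px_w / ppi * 25.4, 1), round(px_h / ppi * 25.4, 1)

print(linear_size_mm(4928, 3280, 300))  # (417.2, 277.7) - fills A3 (420 x 297), no margin
print(linear_size_mm(4928, 3280, 240))  # (521.5, 347.1) - sits on A2 (594 x 420) with margins
```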

Let’s have a look at the final outcome:

Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality.  Indeed, 240 PPI was the default pixel resolution in Adobe Lightroom and ACR until fairly recently.

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of  the “murky stuff” in photography suddenly becomes very simple!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
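
That definition as a one-line sketch:

```python
def pixel_resolution(pixel_dimension, linear_dimension_inches):
    """PPI = pixel dimension / linear dimension."""
    return pixel_dimension / linear_dimension_inches

print(pixel_resolution(3000, 10), pixel_resolution(2400, 8))  # 300.0 300.0
```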


How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:

The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 PPI (pixels per inch), or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital workflow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

All RAW file handlers like Lightroom, ACR etc. default to a native pixel resolution of 300PPI* (this 300PPI myth annoys the hell out of me and I’ll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.


Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:

The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop screens etc. are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.
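
A quick sketch of where those two print sizes come from:

```python
for ppi in (300, 72):
    w, h = 900 / ppi, 599 / ppi
    print(f"{ppi} PPI -> {w:.1f} x {h:.1f} inches")
# 300 PPI -> 3.0 x 2.0 inches; 72 PPI -> 12.5 x 8.3 inches - same 900 x 599 pixels
```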

Just FYI – when I decide I’m going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER ‘down sample’ to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:


PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:

The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
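
Here’s that dots-per-pixel sum as a sketch – I’ve assumed a 5760 DPI maximum output (typical of recent Epsons) since the exact figure isn’t given above:

```python
def dots_per_pixel_edge(printer_dpi, image_ppi):
    """Printer dots laid along one edge of a single image pixel."""
    return printer_dpi / image_ppi

d = dots_per_pixel_edge(5760, 300)
print(d, round(d * d))  # 19.2 dots per edge -> ~369 dots to print one square pixel
```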

See you in my next post!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.