Lens Performance

I have a friend – yes, a strange concept I know, but I do have some – we’ll call him Steve.

Steve is a very talented photographer – when he gives himself half a chance; but impatience can sometimes get the better of him.

He’ll have a great scene in front of him but then he’ll forget things such as any focus or exposure considerations the scene demands, and the resulting image will be crap!

Quite often, a few of Steve’s character flaws begin to emerge at this juncture.

Firstly, Steve only remembers his successes; this leads to the unassailable ‘fact’ that he couldn’t possibly have ‘screwed up’.

So now we can all guess the conclusive outcome of that scenario, can’t we? That’s right: his camera gear has fallen short in the performance department.

Clairvoyance department would actually be more accurate!

So this ‘error in his camera system’ needs to be stamped on – hard and fast!

This leads to Steve embarking on a massive information-gathering exercise from various learned sources on ‘that there inter web’ – where another of Steve’s flaws shows up: that of disjointed speed reading…

The terrifying outcome of these situations usually concludes with Steve’s confident affirmation that some piece of his equipment has let him down; not just by becoming faulty but sometimes, more worryingly by initial design.

These conclusions are always arrived at in the same manner – the various little snippets of truth and random dis-associated facts that Steve gathers, all get forcibly hammered into some hellish, bastardized ‘factual’ jigsaw in his head.

There was a time when Steve used to ask me first, but he gave up on that because my usual answer contravened the outcome of his first mentioned character flaw!

Lately one of Steve’s biggest peeves has been the performance of one or two of his various lenses.

Ostensibly you’ll perhaps think there’s nothing wrong in that – after all, the image generated by the camera is only as good as the lens used to gather the light in the scene – isn’t it?


But there’s a potential problem, and it lies in what evidence you base your conclusions on…


For Steve, at present, it’s manufacturers’ MTF charts, and comparisons thereof, coupled with his own images as they appear in Lightroom or Photoshop ACR.

Again, this might sound like a logical methodology – but it isn’t.

It’s flawed on so many levels.


The Image Path from Lens to Sensor

We could think of the path that light travels along in order to get to our camera sensor as a sort of Grand National horse race – a steeplechase for photons!

“They’re under starters orders ladies and gentlemen………………and they’re off!”

As light enters the lens it comes across its first set of hurdles – the various lens elements and element groups that it has to pass through.

Then they arrive at Becher’s Brook – the aperture, where there are many fallers.

Carefully staying clear of the inside rail and being watchful of any loose photons that have unseated their riders at Becher’s, we move on over Foinavon – the rear lens elements – and we then arrive at the infamous Canal Turn – the Optical Low Pass filter, also known as the Anti-alias filter.

Crashing on past the low pass filter and on over Valentines, only the bravest photons are left to tackle the last big fence on their journey – The Chair – our camera sensor itself.


Okay, I’ll behave myself now, but you get the general idea – any obstacle that lies in the path of light between the front surface of our lens and the photo-voltaic surface of our sensor is a BAD thing.


The various obstacles to light as it passes through a camera (ASIC = Application Specific Integrated Circuit)

The problems are many, but let’s list a few:

  1. Every element reduces the level of transmitted light.
  2. Because the lens elements have curved surfaces, light is refracted or bent; the trick is to make all wavelengths of light refract to the same degree – failure results in either lateral or longitudinal chromatic aberration – or worse still, both.
  3. The aperture causes diffraction – already discussed HERE

We have already seen in that same previous post on Sensor Resolution that the number of megapixels can affect overall image quality in terms of perceived sharpness, due to pixel-pitch; so, all things considered, using photographs of any 3-dimensional scene is not always a wise method of judging lens performance.

And here is another reason why it’s not a good idea – the effect on image quality and perceived lens resolution of the anti-alias (moiré/optical low-pass) filter, and any other pre-filtering.

I’m not going to delve into the functional whys and wherefores of an AA filter, save to say that it’s deemed a necessary evil on most sensors, and that it can make your images take on a certain softness because it basically adds blur to every edge in the image projected by the lens onto your sensor.

The reasoning behind it is that it stops ‘moire patterning’ in areas of high frequency repeated detail.  This it does, but what about the areas in the image where its effect is not required – TOUGH!


Many photographers have paid service suppliers for AA filter removal just to squeeze the last bit of sharpness out of their sensors, and Nikon of course offer the ‘sort of AA filter-less’ D800E.

Side bar note:  I’ve always found that with Nikon cameras at least, the pro-body range seem to suffer a lot less from undesirable AA filtration softening than their “amateur” and “semi pro” bodies – most notably the D2X compared to a D200, and the D3 compared to the D700 & D300.  Perhaps this is due to a ‘thinner’ filter, or a higher quality filter – I don’t know, and to be honest I’ve never had the desire to ‘poke Nikon with a sharp stick’ in order to find out.


Back in the days of film things were really simple – image resolution was governed by just two things; lens resolution and film resolution:

1/image resolution = 1/lens resolution + 1/film resolution

Film resolution was a variable depending on the Ag Halide distribution and structure,  dye coupler efficacy within the film emulsion, and the thickness of the emulsion or tri-pack itself.

But today things are far more complicated.

With digital photography we have all those extra hurdles to jump over that I mentioned earlier, so we end up with a situation whereby:

1/Image Resolution = 1/lens resolution + 1/AA filter resolution + 1/sensor resolution + 1/image processor/imaging ASIC resolution
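These reciprocal formulas are easy to experiment with in code. A minimal sketch – the lp/mm figures below are made up purely for illustration, not measured values for any real lens, filter or sensor:

```python
def system_resolution(*component_resolutions):
    """Combine component resolutions (lp/mm) via the reciprocal sum:
    1/R_system = 1/R_1 + 1/R_2 + ... + 1/R_n"""
    return 1.0 / sum(1.0 / r for r in component_resolutions)

# Film era: lens + film only.
film_system = system_resolution(100, 100)

# Digital era: lens + AA filter + sensor + imaging ASIC.
digital_system = system_resolution(100, 150, 120, 200)

print(round(film_system, 1))     # 50.0
print(round(digital_system, 1))  # 33.3
```

Note the key property: the system resolution always ends up lower than that of the weakest component in the chain.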

Steve is chasing after lens resolution under the slightly misguided idea that resolution equates to sharpness, which is not strictly true; and he is basing his conception of lens sharpness on the detail content and perceived ‘sharpness’ of his images – which are ‘polluted’, if you like, by the effects of the AA filter, sensor and imaging ASIC.

What it boils down to, in very simplified terms, is this:

You can have one particular lens that, in combination with one camera sensor produces a superb image, but in combination with another sensor produces a not-quite-so-superb image!

On top of the “fixed system” hurdles I’ve outlined above, we must not forget the potential for errors introduced by lens-to-body mount flange inaccuracies, and of course, the big elephant-in-the-room – operator error – ehh Steve.

So attempting to quantify the pure ‘optical performance’ of a lens using your ‘taken images’ is something of a pointless exercise; you cannot see the pure lens sharpness or resolution unless you put the lens on a fully equipped optical test bench – and how many of us have got access to one of those?

The truth of the matter is that the average photographer has to trust the manufacturers to supply accurately put together equipment, and he or she has to assume that all is well inside the box they’ve just purchased from their photographic supplier.

But how can we judge a lens against an assumed standard of perfection before we part with our cash?

A lot of folk, including Steve – look at MTF charts.


The MTF Chart

Firstly, MTF stands for Modulation Transfer Function – modu-what I hear you ask!

OK – let’s deal with the modulation bit.  Forget colour for a minute and consider yourself living in a black & white world.  Dark objects in a scene reflect few photons of light – ’tis why they appear dark!  Conversely, bright objects reflect loads of the little buggers, hence these objects appear bright.

Imagine now that we are in a sealed room totally impervious to the ingress of any light from outside, and that the room is painted matte white from floor to ceiling – what is the perceived colour of the room? Black is the answer you are looking for!

Now turn on that 2 million candle-power 6500K searchlight in the corner.  The split second before your retinas melted, what was the perceived colour of the room?

Note the use of the word ‘perceived’ – the actual colour never changed!

The luminosity value of every surface in the room changed from black to white/dark to bright – the luminosity values MODULATED.

Now back in reality we can say that a set of alternating black and white lines of equal width and crisp clean edges represent a high degree of contrast, and therefore tonal modulation; and the finer the lines the higher is the modulation frequency – which we measure in lines per millimeter (lpmm).

A lens takes in a scene of these alternating black and white lines and, just like it does with any other scene, projects it into an image circle; in other words it takes what it sees in front of it and ‘transfers’ the scene to the image circle behind it.

With a bit of luck and a fair wind this image circle is being projected sharply into the focal plane of the lens, and hopefully the focal plane matches up perfectly with the plane of the sensor – what used to be referred to as the film plane.

The efficacy with which the lens carries out this ‘transfer’ in terms of maintaining both the contrast ratio of the modulated tones and the spatial separation of the lines is its transfer function.
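If you want to put a number on that ‘efficacy’, the transfer function can be sketched as the ratio of image-side modulation to scene-side modulation, using Michelson contrast. The luminance values below are invented purely for illustration – no real lens test data here:

```python
def modulation(l_max, l_min):
    """Michelson contrast of a pattern: (max - min) / (max + min)."""
    return (l_max - l_min) / (l_max + l_min)

def mtf(scene_max, scene_min, image_max, image_min):
    """Transfer function: modulation in the projected image
    divided by modulation in the original scene."""
    return modulation(image_max, image_min) / modulation(scene_max, scene_min)

# Perfect black/white target: scene modulation = 1.0.
# Suppose the lens renders white as 0.9 and black as 0.1:
print(round(mtf(1.0, 0.0, 0.9, 0.1), 2))  # 0.8
```

A perfect lens would transfer the full contrast (MTF = 1.0); any real lens loses some, and loses more as the line frequency rises.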

So now you know what MTF stands for and what it means – good this isn’t it!


Let’s look at an MTF chart:


Nikon 500mm f4 MTF chart

Now what does all this mean?


Firstly, the vertical axis – this can be regarded as that ‘efficacy’ I mentioned above – the accuracy of tonal contrast and separation reproduction in the projected image; 1.0 would be perfect, and 0 would be crappier than the crappiest version of a crap thing!

The horizontal axis – this requires a bit of brain power! It is scaled in increments of 5 millimeters from the lens axis AT THE FOCAL PLANE.

The terminus value at the right hand end of the axis is unmarked, but equates to 21.63mm – half the opposing corner-to-corner dimension of a 35mm frame.
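That 21.63mm figure is simply half the diagonal of a 36 x 24mm frame, which is easy to check:

```python
import math

# 35mm 'full frame' dimensions in millimetres.
width, height = 36.0, 24.0

half_diagonal = math.hypot(width, height) / 2
print(round(half_diagonal, 2))  # 21.63
```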

Now consider the diagram below:


The radial dimensions of the 35mm format.

These are the radial dimensions, in millimeters, of a 35mm format frame (solid black rectangle).

The lens axis passes through the center axis of the sensor, so the radii of the green, yellow and dashed circles correspond to values along the horizontal axis of an MTF chart.

Let’s simplify what we’ve learned about MTF axes:


MTF axes hopefully made simpler!

Now we come to the data plots; firstly the meaning of Sagittal & Meridional.   From our perspective in this instance I find it easier for folk to think of them as ‘parallel to’ and ‘at right angles to’ the axis of measurement, though strictly speaking Meridional is circular and Sagittal is radial.

This axis of measurement is from the lens/film plane/sensor center to the corner of a 35mm frame – in other words, along that 21.63mm radius.


The axis of MTF measurement and the relative axial orientation of Sagittal & Meridional lines. NOTE: the target lines are ONLY for illustration.

Separate measurements are taken for each modulation frequency along the entire measurement axis:


Thin Meridional MTF measurement. (They should be concentric circles but I can’t draw concentric circles!).

Let’s look at that MTF curve for the 500mm f4 Nikon together with a legend of ‘sharpness’ – the 300 f2.8:


Nikon MTF comparison between the 500mm f4 & 300mm f2.8

Nikon say on their website that they measure MTF at maximum aperture, that is, wide open; so the 300mm chart is for an aperture of f2.8 (though they don’t say so) and the 500mm is for an f4 aperture – which they do specify on the chart – don’t ask me why ‘cos I’ve no idea.

As we can see, the best transfer values for the two lenses (and all other lenses) are at 10 lines per millimeter, and generally speaking sagittal orientation usually performs slightly better than meridional, but not always.

10 lpmm is always going to give a good transfer value because it’s very coarse and represents a lower frequency of detail than 30 lpmm.

Funny thing, 10 lines per millimeter is 5 line pairs per millimeter – and where have we heard that before? HERE – it’s the resolution of the human eye at 25 centimeters.


Another interesting thing to bear in mind is that, as the charts clearly show, better transfer values occur closer to the lens axis/sensor center, and that performance falls as you get closer to the frame corners.

This is simply down to the fact that you are getting closer to the inner edge of the image circle (the dotted line in the diagrams above).  If manufacturers made lenses that threw a larger image circle then corner MTF performance would increase – it can be done – that’s the basis upon which PCE/TS lenses work.

One way to take advantage of center MTF performance is to use a cropped sensor – I still use my trusty D2Xs for a lot of macro work; not only do I get the benefit of center MTF performance across the majority of the frame but I also have the ability to increase the lens to subject distance and get the composition I want, so my depth of field increases slightly for any given aperture.

Back to the matter at hand, here’s my first problem with the likes of Nikon, Canon etc:  they don’t specify the lens-to-target distance. A lens that gives a transfer value of 90% plus on a target of 10 lpmm sagittal at 2 meters distance is one thing; one that did the same but at 25 meters would be something else again.

You might look at the MTF chart above and think that the 300mm f2.8 lens is poor on a target resolution of  30 lines per millimeter compared to the 500mm, but we need to temper that conclusion with a few facts:

  1. A 300mm lens is a lot wider in Field of View (FoV) than a 500mm so there is a lot more ‘scene width’ being pushed through the lens – detail is ‘less magnified’.
  2. How much ‘less magnified’? 40% less than at 500mm, and yet the 30 lpmm transfer value is within 6% to 7% of that of the 500mm – overall a seemingly much better lens in MTF terms.
  3. The lens is f2.8 – great for letting light in but rubbish for everything else!
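Point 2’s ‘40% less’ figure falls straight out of the ratio of the two focal lengths, assuming the same subject distance:

```python
def relative_magnification(focal_a, focal_b):
    """Magnification of lens A relative to lens B at the same
    subject distance is simply the ratio of their focal lengths."""
    return focal_a / focal_b

ratio = relative_magnification(300, 500)
print(ratio)                                # 0.6
print(f"{(1 - ratio):.0%} less magnified")  # 40% less magnified
```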

Most conventional lenses have one thing in common – their best working aperture for overall image quality is around f8.

But we have to counterbalance the above with the lack of aforementioned target distance information.  The minimum focus distances for the two comparison lenses are 2.3 meters and 4.0 meters respectively, so obviously we know that the targets are imaged and measured at vastly different distances – but without factual knowledge of the testing distances we cannot really say that one lens is better than the other.


My next problem with most manufacturers’ MTF charts is that the values are supplied ‘a la white light’.

I mentioned earlier – much earlier! – that lens elements refracted light, and the importance of all wavelengths being refracted to the same degree, otherwise we end up with either lateral or longitudinal chromatic aberration – or worse still – both!

Longitudinal CA will give us different focal planes for different colours contained within white light – NOT GOOD!

Lateral CA gives us the same plane of focus but this time we get lateral shifts in the red, green and blue components of the image, as if the 3 colour channels have come out of register – again NOT GOOD!

Both CA types are most commonly seen along defined edges of colour and/or tone, and as such they both affect transferred edge definition and detail.

So why do manufacturers NOT publish this information – there is to my knowledge only one that does – Schneider (read ‘proper lens’).

They produce some very meaningful MTF data for their lenses with modulation frequencies in excess of 90 to 150 lpmm; separate R,G & B curves; spectral weighting variations for different colour temperatures of light and all sorts of other ‘geeky goodies’ – I just love it all!


SHAME ON YOU NIKON – and that goes for Canon and Sigma just as much.


So you might now be asking WHY they don’t publish the data – they must have it – are they treating us like fools that wouldn’t be able to understand it; OR – are they trying to hide something?

You guys think what you will – I’m not accusing anyone of anything here.

But if they are trying to hide something then that ‘something’ might not be what you guys are thinking.

What would you think if I told you that if you were a lens designer you could produce an MTF plot with a calculator – ‘cos you can, and they do!

So, in a nutshell, most manufacturers’ MTF charts as published for us to see are worse than useless.  We can’t effectively use them to compare one lens against another because of missing data; we can’t get an idea of CA performance because of missing red, green and blue MTF curves; and finally we can’t even trust that the bit of data they do impart is even bloody genuine.


Please don’t get taken in by them next time you fancy spending money on glass – take your time and ask around – better still try one; and try it on more than 1 camera body!


Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.


Please note, I’ve not written this blog article for any other reason than to make its readers aware of the facts – no one pays me in cash or kind for any products mentioned in this article; it is written purely for information purposes; you do with this information “what you will”!

If you have any questions please feel free to drop me a line via email at tuition@wildlifeinpixels.net



Sensor Resolution


In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format dSLRs – not that there’s anything wrong with megapixels; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.


Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20 year old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?


5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.

Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark is reduced, and that 0.1mm detail becomes less distinct – until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cms, and so on.
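The arithmetic linking line pairs per millimeter to the smallest resolvable line is trivial, but worth making explicit:

```python
def smallest_line_mm(line_pairs_per_mm):
    """One line pair = one dark + one light line,
    so each individual line is half the pair width."""
    pair_width = 1.0 / line_pairs_per_mm
    return pair_width / 2

print(smallest_line_mm(5))    # 0.1 -- the 20/20/20 eye at 25cm
print(smallest_line_mm(2.5))  # 0.2 -- the degraded-contrast case
```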

Now if I try and focus on something at 25 cms my eyeballs start to ache,  so we are talking extreme close focus for the eye here.

An interesting side note is that 0.1mm is 100µm (microns) and microns are what we measure the size of sensor photosites in – which brings me nicely to SENSOR resolution.


Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, then the smaller each photosite has to be in order to fit them all in there;  and the smaller they are then the lower is their individual signal to noise or S/N ratio.

But there is another problem that comes with increased sensor resolution:

An increased susceptibility to diffraction – the sensor becomes ‘diffraction limited’ at a wider aperture.


Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.
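Rough photosite pitches like the 8.4µm and 4.84µm quoted above can be estimated from the published image dimensions divided into the sensor width. The horizontal photosite counts below (4256 for the D3, 7360 for the D800) are the published image widths; the results differ slightly from the quoted figures because the active imaging width is a touch under a full 36mm:

```python
def photosite_pitch_um(sensor_width_mm, horizontal_photosites):
    """Approximate photosite pitch in microns."""
    return sensor_width_mm * 1000 / horizontal_photosites

d3_pitch = photosite_pitch_um(36.0, 4256)    # 12Mp Nikon D3
d800_pitch = photosite_pitch_um(36.0, 7360)  # 36Mp Nikon D800

print(round(d3_pitch, 2))    # 8.46
print(round(d800_pitch, 2))  # 4.89
```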


What is Diffraction?

Diffraction is basically the bending or spreading of waves by objects placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’; which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, and so you can’t see the effect unless you hit the stop down button (if you have one) and even then the image in the viewfinder is so small and dark you can’t see it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:


Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.


Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor; further than 21.25mm – therefore they are no longer in focus, but those central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture =  less fuzzy; narrower aperture = more fuzzy, and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.
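The size of that Airy Disc can be ballpark-estimated with the standard first-minimum approximation, diameter ≈ 2.44 × wavelength × f-number. The 0.55µm wavelength is an assumed mid-green reference value, nothing specific to any lens mentioned here; comparing the results with photosite pitches of around 8.4µm and 4.84µm shows why the higher resolution sensor feels the effect first:

```python
def airy_disc_diameter_um(f_number, wavelength_um=0.55):
    """First-minimum Airy disc diameter in microns:
    d = 2.44 * wavelength * f-number (ballpark figures only)."""
    return 2.44 * wavelength_um * f_number

for f in (4, 8, 14, 22):
    print(f"f/{f}: {airy_disc_diameter_um(f):.1f} um")
# e.g. f/8 -> 10.7 um, f/14 -> 18.8 um
```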

Now here is the problem:


Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:


Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – those photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite on the sensor records somewhat imprecise tonal values – this is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used a wide enough aperture on the higher resolution sensor – one that generated an Airy Disc the same size or smaller than the diameter of the photosites – then only 1 single photosite would be affected and the effect of diffraction would not be visible.

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.


Both images shot with a 24mm TS lens at f3.5. Left image lens is set to zero and behaves as normal 24mm lens. Right image has 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction causes you to potentially lose the newly acquired level of, say foreground detail in a landscape, due to lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution you want, and image quality of the n’th degree, then by far the best way to go about it is to jump to medium format – that way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.




Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution  I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here’s those two snake jpegs again:


European Adder – 900 x 599 pixels with a pixel resolution of 300PPI


European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

Digital display size is driven by pixel dimension, NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops and I know it’s hard for some folk to get their heads around print size and printer output, but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:


Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste but it’s a seller so boo-hoo – what do I know ! !

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (this is the effective imaging area – total photosite area is 4992 x 3292 according to DXO Labs).

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300 x 3280/300 inches, or 16.43″ x 10.93″
  • 417.24 x 277.71 mm for those of you with a metric inclination!
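Those figures are easy to reproduce – a quick sketch of the ‘pixel dimension divided by pixel resolution’ rule quoted above:

```python
def print_size(pixel_w, pixel_h, ppi=300):
    """Linear print dimensions at a given PPI, in inches and millimetres."""
    w_in, h_in = pixel_w / ppi, pixel_h / ppi
    return (w_in, h_in), (w_in * 25.4, h_in * 25.4)

inches, mm = print_size(4928, 3280)
print(f"{inches[0]:.2f} x {inches[1]:.2f} in")  # 16.43 x 10.93 in
print(f"{mm[0]:.2f} x {mm[1]:.2f} mm")          # 417.24 x 277.71 mm
```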

So how big CAN we print this image?


Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”
  • Well, the strict answer would be No, and no – not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor is easily derived: the long edge ‘pixel’ count of the FX frame divided by its linear length – just shy of 36mm, or about 1.4″ – gives 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?


Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:


Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:


Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor,  Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
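Both magnification figures are easy to verify with a few lines of Python. I’m assuming the D800’s published pixel dimensions of 7360 x 4912 and the 43.2mm full-frame diagonal from the sensor table above:

```python
import math

def diagonal(long_edge, short_edge):
    """Diagonal length via Pythagoras."""
    return math.hypot(long_edge, short_edge)

SENSOR_DIAG_MM = 43.2  # FX / full-frame sensor diagonal

# D4: 4928 x 3280 pixels rendered at 300 PPI -> diagonal in mm
d4_diag_mm = diagonal(4928 / 300, 3280 / 300) * 25.4
print(round(d4_diag_mm / SENSOR_DIAG_MM, 1))    # 11.6

# D800/D800E: 7360 x 4912 pixels at 300 PPI
d800_diag_mm = diagonal(7360 / 300, 4912 / 300) * 25.4
print(round(d800_diag_mm / SENSOR_DIAG_MM, 1))  # 17.3
```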

So you can now begin to see why pixel resolution is so important when we print.


How To Blow Up A Squirrel !

Let’s get back to ‘his cuteness’ and open him up in Photoshop:


Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically A3.

What happens if we hit print using A3 paper?


Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:


Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:


Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:


Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed any physical dimension and we haven’t resampled the image.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
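In code terms the change is trivial – the pixel count is untouched; only the linear dimensions change (paper sizes taken from the metric table earlier):

```python
# Changing pixel resolution WITHOUT resampling: pixel dimensions are
# fixed; only the linear (print) size changes.
pixels_long, pixels_short = 4928, 3280

for ppi in (300, 240):
    w_mm = pixels_long / ppi * 25.4
    h_mm = pixels_short / ppi * 25.4
    print(f"{ppi} PPI -> {w_mm:.0f} x {h_mm:.0f} mm")

# 300 PPI -> 417 x 278 mm : virtually fills an A3 sheet (420 x 297 mm)
# 240 PPI -> 522 x 347 mm : sits on A2 (594 x 420 mm) with workable margins
```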

Let’s have a look at the final outcome:


Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality.  Indeed, 240 PPI was the Adobe Lightroom, ACR default pixel resolution until fairly recently.

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of  the “murky stuff” in photography suddenly becomes very simple!


Help Me to Help You!

If you’ve found this or any other article on this blog useful or informative then please do me a favour and leave a comment – and don’t forget to click the “Follow” button – it’s free and you’ll get notified of my next blog post.

Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.


Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.


3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
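As a minimal sketch of that sum (the function name is my own, purely for illustration):

```python
def pixel_resolution(pixel_dim, linear_inches):
    """Pixel resolution in PPI = pixel dimension / linear dimension."""
    return pixel_dim / linear_inches

print(pixel_resolution(3000, 10))  # 300.0 PPI along the long edge
print(pixel_resolution(2400, 8))   # 300.0 PPI along the short edge
```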


How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:


The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 = 256 PPI (pixels per inch).

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.


So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital workflow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

RAW file handlers like Lightroom, ACR etc all default to a native pixel resolution of 300PPI. * (this 300ppi myth annoys the hell out of me and I’ll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.


Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:


European Adder – 900 x 599 pixels with a pixel resolution of 300PPI


European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:


The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop displays etc are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.
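Those two linear sizes fall straight out of the pixel dimensions – a quick sanity check:

```python
# Same 900 x 599 pixel image tagged with two different pixel resolutions
pixels = (900, 599)

for ppi in (300, 72):
    width_in = pixels[0] / ppi
    height_in = pixels[1] / ppi
    print(f"{ppi} PPI: {width_in:.1f} x {height_in:.1f} inches")

# 300 PPI: 3.0 x 2.0 inches
# 72 PPI: 12.5 x 8.3 inches
```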

Just FYI – when I decide I’m going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER ‘down sample’ to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:


PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:


The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
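That ‘nearly 400 dots’ figure is easy to approximate. Here I’m assuming the printer lays down roughly 5760 dots per inch along each pixel edge – the headline figure on many Epson drivers; real droplet placement is far more complicated than this toy sum:

```python
ppi = 300    # image pixel resolution
dpi = 5760   # assumed printer output resolution along one axis

dots_per_pixel_edge = dpi / ppi          # dots along each edge of one pixel
dots_per_square_pixel = dots_per_pixel_edge ** 2
print(dots_per_pixel_edge)               # 19.2 - the "20 dots square"
print(dots_per_square_pixel)             # just shy of 400
```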

See you in my next post!



Noise and the Camera Sensor

Camera sensors all suffer from two major afflictions – diffraction and noise – and between them these two cause more consternation amongst photographers than anything else.

In this post I’m going to concentrate on NOISE, that most feared of sensor afflictions, and its biggest influencer – LIGHT, and its properties.

What Is Light?

As humans we perceive light as being a constant continuous stream or flow of electromagnetic energy, but it isn’t!   Instead of flowing like water it behaves more like rain, or indeed, bullets from a machine gun!   Here’s a very basic physics lesson:

Below is a diagram showing the Bohr atomic model.

We have a single positively charged proton (black) forming the nucleus, and a single negatively charged electron (green) orbiting the nucleus.

The orbit distance n1 is defined by the electrostatic balance of the two opposing charges.


The Bohr Atomic Model

If we apply energy to the system then a ‘tipping point’ is reached and the electron is forced to move away from the nucleus – n2.

Apply even more energy and the system tips again and the electron is forced to move to an even higher energy level – n3.

Now here’s the fun bit – stop applying energy to the system.

As the system is no longer needing to cope with the excess energy it returns to its natural ‘ground’ state and the electron falls back to n1.

In the process the electron sheds the energy it has absorbed – the red squiggly bit – as a quantum, or packet, of electromagnetic energy.

This is basically how a flash gun works.

This ‘packet’ has a start and an end; the start happens as the electron begins its fall back to its ground state; and the end occurs once the electron arrives at n1 – therefore it can perhaps be tentatively thought of as being particulate in nature.

So now you know what Prof. Brian Cox knows – CERN here we come!

Right, so what’s this got to do with photography and camera sensor noise?

Camera Sensor Noise

All camera sensors are affected by noise, and this noise comes in various guises:

Firstly, the ‘noise control’ sections of most processing software we use tend to break it down into two components; luminosity, or luminance noise; and colour noise.  Below is a rather crappy image that I’m using to illustrate what we might assume is the reality of noise:


This shot shows both Colour & Luminance noise.
The inset shows the full shot, and the small white rectangle is the area we’re concentrating on.

Now let’s look at the two basic components: Firstly the LUMINANCE component


Here we see the LUMINANCE noise component – colour & colour noise components have been removed for clarity.

Next, the COLOUR NOISE bit:


The COLOUR NOISE component of the area we’re looking at. All luminance noise has been removed.

I must stress that the majority of colour noise you see in your files inside LR, ACR, CapOne, PS etc is ‘demosaicing colour noise’, which occurs during the demosaic process.

But the truth is, it’s not that simple.

Localised random colour errors are generated ‘on sensor’ due to individual sensor characteristics, as we’ll see in a moment, because noise, in truth, comes in various guises that collectively affect luminosity and colour:


Shot Noise

This first type of noise is Shot Noise – so called because it’s basically an intrinsic part of the exposure, and is caused by photon flux in the light reflected by the subject/scene.

Remember – we see light in a different way to our camera.  What we don’t notice is that photon streams rise and fall in intensity – they ‘flux’.  These variations happen far too fast for our eyes to register, but they do affect the sensor output.

On top of this ‘fluxing’ problem we have something more obvious to consider.

Lighter subjects reflect more light (more photons), darker subjects reflect less light (fewer photons).

Your exposure is always going to be some sort of ‘average’, and so is only going to be ‘accurate’ for certain areas of the scene.

Lighter areas will be leaning towards over exposure; darker areas towards under exposure – your exposure can’t be perfect for all tones contained in the scene.

Tonal areas outside of the ‘average exposure perfection’ – especially the darker ones – may well contain more shot noise.

Shot noise is therefore quite regular in its distribution, but in certain areas it becomes irregular – so it’s often described as ‘pseudo random’.


Read Noise

Read Noise – now we come to a different category of noise completely.

The image is somewhat exaggerated so that you can see it, but basically this is a ‘zero light’ exposure; take a shot with the lens cap on and this is what happens!

What you can see here is the background sensor noise when you take any shot.

Certain photosites on the sensor are actually generating electrons even in the complete absence of light – seeing as they’re photo-voltaic they shouldn’t be doing this – but they do.

Added to this are A/D converter errors and general ‘system noise’ generated by the camera – so we can regard Read Noise as being like the background hiss, hum and rumble we can hear on a record deck when we turn the Dolby off.


Thermal & Pattern Noise

In the same category as Read Noise are two other types of noise – thermal and pattern.

Both again have nothing to do with light falling on the sensor, as this too was shot under a duvet with the lens cap on – a 30 minute exposure at ISO 100; not as daft as it sounds when you think of astrophotography, and star trail shots in particular.

You can see in the example that there are lighter and darker areas especially over towards the right side and top right corner – this is Thermal Noise.

During long exposures the sensor actually heats up, which in turn increases the response of photosites in those areas and causes them to release more electrons.

You can also see distinct vertical and some horizontal banding in the example image – this is pattern noise, yet another sensor noise signature.


Under Exposure Noise – pretty much what most photographers think of when they hear the word “noise”.

Read Noise, Pattern Noise, Thermal Noise and, to a degree, Shot Noise all go together to form a ‘base line noise signature’ for your particular sensor.  Put them all together, take a shot where we need to tweak the exposure in the shadow areas a little, and we get an overall Under Exposure Noise characteristic for our camera – which, let’s not forget, contains further luminance and colour noise components derived from the ISO setting we use.

All sensors have a base ISO – this can be thought of as the speed rating which yields the highest Dynamic Range (Dynamic Range falls with increasing ISO values, which is basically under exposure).

At this base ISO the levels of background noise generated by the sensor just being active (Pattern, Read & Thermal) will be at their lowest, and can be thought of as the ‘base noise’ of the sensor.

How visually apparent this base noise level is depends on what is called the Signal to Noise Ratio – the higher the S/N ratio the less you see the noise.

And what is it that gives us a high signal?

MORE Photons – that’s what..!

The more photons each photosite on the sensor can gather during the exposure then the more ‘masked’ will be any internal noise.

And how do we catch more photons?

By using a sensor with BIGGER photosites – a larger pixel pitch – that’s how.  And bigger photosites mean FEWER MEGAPIXELS – allow me to explain.


Here we see a representation of various sized photosites from different sensors.

On the right is the photosite of a Nikon D3s – a massive ‘bucket’ for catching photons in – and 12Mp resolution.

Moving left we have another FX sensor photosite – the D3X at 24Mp – and then the crackpot D800 and its mental 36Mp tiny photosite – can you tell I dislike the D800 yet?

On the extreme left is the photosite from the 1.5x APS-C D7100, just for comparison.

Now cast your mind back to the start of this post where I said we could tentatively regard photons as particles – well, let’s imagine them as rain drops, and the photosites in the diagram above as different sized buckets.

Let’s put the buckets out in the back yard and let’s make the weather turn to rain:


Various sizes of photosites catching photon rain.

Here it comes…


It’s raining

OK – we’ve had 2 inches of rain in 10 seconds! Make it stop!


All buckets have 2 inches of water in them, but which has caught the biggest volume of rain?

Thank God for that..

If we now get back to reality, we can liken the duration of the downpour to shutter speed, the rain drops themselves to photons falling on the sensor, and the consistent depth of water in each ‘bucket’ to a correct level of exposure.

Which bucket has the largest volume of water, or which photosite has captured the most photons – in other words which sensor has the highest S/N Ratio?   That’s right – the 12Mp D3s.
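The bucket comparison can be put into rough numbers.  The pixel pitches below are my own approximate figures (sensor width divided by pixel count – they’re not from this post), but they show how quickly photosite area, and with it photon-gathering ability, falls away as the megapixels pile up:

```python
# Approximate pixel pitches in microns (assumed figures, for illustration)
pitches_um = {"D3s": 8.45, "D3X": 5.95, "D800": 4.88, "D7100": 3.9}

# Photon capture scales roughly with photosite AREA - the bucket's mouth
d800_area = pitches_um["D800"] ** 2
for body, pitch in pitches_um.items():
    ratio = pitch ** 2 / d800_area
    print(f"{body}: {ratio:.1f}x the D800 photosite area")
```

On these figures the D3s photosite has roughly three times the area of the D800’s – three times the bucket for the same downpour.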

To put this into practical terms let’s consider the next diagram:


Increased pixel pitch = Increased Signal to Noise Ratio

The importance of S/N ratio and its relevance to camera sensor noise can be seen clearly in the diagram above – but we are talking about base noise at native or base ISO.

If we now look at increasing the ISO speed we have a potential problem.

As I mentioned before, increasing ISO is basically UNDER EXPOSURE followed by in-camera “push processing” – now I’m showing my age..


The effect of increased ISO – in-camera “push processing” automatically lifts the exposure value to where the camera thinks it is supposed to be.

By under exposing the image we reduce the overall Signal to Noise Ratio, then the camera internals lift all the levels by a process of amplification – and this includes amplifying  the original level of base noise.
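A toy model makes the point – the numbers below are entirely arbitrary, but they show amplification restoring the signal level while dragging the base noise up with it:

```python
# Toy model of ISO as "under expose, then amplify" (arbitrary units)
base_noise = 2.0        # sensor background noise, fixed per sensor
full_signal = 1000.0    # signal captured at base ISO

for stops_under in (0, 2, 4):                 # e.g. base ISO, +2 and +4 stops
    signal = full_signal / 2 ** stops_under   # under exposure shrinks the signal
    gain = 2 ** stops_under                   # in-camera amplification
    print(f"{stops_under} stops under: S/N {signal / base_noise:.0f}:1, "
          f"noise after gain {base_noise * gain:.0f}")
```

The S/N ratio falls with every stop of under exposure, and the gain that lifts the levels back up amplifies the base noise by the same factor.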

So now you know WHY and HOW your images look noisy at higher ISOs – or so you’d think.  Again, it’s not that simple; take the next two image crops for instance:


Kingfisher – ISO 3200 Nikon D4 – POOR LIGHT – Click for bigger view


Kingfisher – ISO 3200 Nikon D4 – GOOD LIGHT – CLICK for bigger view

If you click on the images (they’ll open up in new browser tabs) you’ll see that the noise from 3200 ISO on the D4 is a lot more apparent on the image taken in poor light than it is on the image taken in full sun.

You’ll also notice that in both cases the noise is less apparent in the high frequency detail (sharp high detail areas) and more apparent in areas of low frequency detail (blurred background).

So here’s “The Andy Approach” to noise and high ISO.

1. It’s not a good idea to use higher ISO settings just to combat poor light – in poor light everything looks like crap, and if it looks crap then the image will look even crappier.  When I’m in a poor light situation and I’m not faced with a “shot in a million” then I don’t take the shot.

2. There’s a big difference between poor light and low light that looks good – if that’s the case shoot as close to base ISO as you can get away with in terms of shutter speed.

3. If you shoot landscapes then shoot at base ISO at all times and use a tripod and remote release – make full use of your sensor’s dynamic range.

4. The Important One – don’t get hooked on megapixels and so-called sensor resolution – I’ve made thousands of landscape sales shot on a 12Mp D3 at 100 ISO. If you are compelled to have more megapixels buy a medium format camera which will generate a higher S/N Ratio because the photosites are larger.

5. If you shoot wildlife you’ll find that the necessity for full dynamic range decreases with angle of view/increasing focal length – using a 500mm lens you are looking at a very small section of what your eye can see, and tones contained within that small window will rarely occupy anywhere near the full camera dynamic range.

Under good light this will allow you to use a higher ISO in order to gain that crucial bit of extra shutter speed – remember, wildlife images tend to be at least 30 to 35% high frequency detail, and noise will not be as apparent in these areas as in the background; hence the ubiquitous saying of wildlife photographers: “Watch your background at all times”.

Well, I think that’s enough to be going on with – but there’s oh so much more!



Please note, I’ve not written this blog article for any other reason than to make its readers aware of the facts – no one pays me in cash or kind for any products mentioned in this article; it is written purely for information purposes; you do with this information “what you will”! If you have any questions please feel free to drop me a line via email at tuition@wildlifeinpixels.net



Bit Depth

Bit Depth – What is a Bit?


Good question – from a layman’s point of view it’s the smallest USEFUL unit of computer/digital information; useful in the fact that it can have two values – 0 or 1.


Think of it as a light switch; it has two positions – ON and OFF, 1 or 0.



A bit is like a light switch.


We have 1 switch (bit) with 2 potential positions (bit value 0 or 1), so we have a bit depth of 1.  We can arrive at the number of potential values by simple maths – the number of switch positions to the power of the number of switches; in other words 2 to the 1st power, which is 2.


How Does Bit Depth Impact Our Images:


So what would this bit depth of 1 mean in image terms:



An Image with a Bit Depth of 1 bit.



Well, it’s not going to win Wildlife Photographer of the Year is it!


Because each pixel in the image can only be black or white, on or off, 0 or 1 then we only have two tones we can use to describe the entire image.


Now if we were to add another bit to the overall bit depth of the image we would have 2 switches (bits), each with 2 potential values – 2 to the 2nd power, or 4 potential output values/tones.




An image with a bit depth of 2 bits.



Not brilliant – but it’s getting there!


If we now double the bit depth again, this time to 4 bit, then we have 2 to the 4th, or 16 potential tones or output values per image pixel:




A bit depth of 4 bits gives us 16 tonal values.



And if we double the bit depth again, up to 8 bit we will end up with 2 to the 8th power, or 256 tonal values for each image pixel:




A bit depth of 8 bits yields what the eye perceives to be continuous unbroken tone.



This range of 256 tones (0 to 255) is the smallest number of tonal values that the human eye can perceive as being continuous in nature; therefore we see an unbroken range of greys from black to white.


More Bits is GOOD


Why do we need to use bit depths HIGHER than 8 bit?


Our modern digital cameras capture and store RAW images to a bit depth of 12 bit, and now in most cases 14 bit – 4096 & 16,384 tonal values respectively.
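Every tone count quoted in this post comes from the same simple sum – 2 raised to the power of the bit depth:

```python
# Tonal values per channel double with every extra bit of depth
for bits in (1, 2, 4, 8, 12, 14, 16):
    print(f"{bits:>2} bit -> {2 ** bits:>6,} tones")
```

Running it reproduces the figures above: 256 tones at 8 bit, 4,096 at 12 bit, 16,384 at 14 bit and 65,536 at 16 bit.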


Just as we use the ProPhotoRGB colour space to preserve as many CAPTURED COLOURS as we can, we need to apply a bit depth to our pixel-based images that is higher than the capture depth in order to preserve the CAPTURED TONAL RANGE.


It’s the “bigger bucket” or “more stairs on the staircase” scenario all over again – more information about a pixel’s brightness and colour is GOOD.



How Tonal Graduation Increases with Bit Depth.


Black is black, and white is white, but increased bit depth gives us a higher number of steps/tones; tonal graduations, to get from black to white and vice versa.


So, if our camera captures at 14 bit we need a 15 bit or 16 bit “bucket” to keep it in.  And for those who want to know why a 14 bit bucket ISN’T a good idea then try carrying 2 gallons of water in a 2 gallon bucket without spillage!



The 8 bit Image Killer


Below we have two identical grey scale images open in Photoshop – simple graduations from black to white; one is a 16 bit image, the other 8 bit:



16 bit greyscale at the top. 8 bit greyscale below – CLICK Image to view full size.



Now everything looks OK at this “fit to screen” magnification; and it doesn’t look so bad at 1:1 either, but let’s increase the magnification to 1600% so we can see every pixel:




CLICK Image to view full size. At 1600% magnification we can see that the 8 bit file is degraded.


At this degree of magnification we can see a huge amount of image degradation in the lower, 8 bit image whereas the upper, 16 bit image looks tonally smooth in its graduation.


The degradation in the 8 bit image is simply due to the fact that the total number of tones is “capped” at 256, and 256 steps to get from the black to the white values of the image are not sufficient – this leaves gaps in the image that Photoshop has to fill with “invented” tonal information based on its own internal “logic”….mmmmmm….


There was a time when I thought “girlies” were the most illogical things on the planet; but since Photoshop, now I’m not so sure…!


The image is a GREYSCALE – RGB ratios are supposedly equal in every pixel, but as you can see, Photoshop begins to skew the ratios where it has to do its “inventing” so we not only have luminosity artifacts, but we have colour artifacts being generated too.


You might look upon this as “pixel peeping” and “geekey”, but when it comes to image quality, being a pixel-peeping Geek is never a bad thing.


Of course, we all know 8bit as being “jpeg”, and these artifacts won’t show up on a web-based jpeg for your website; but if you are in the business of large scale gallery prints, then printing from an 8 bit image file is never going to be a good idea as these artifacts WILL show on the final print.


Help Me to Help You!

If you’ve found this or any other article on this blog useful or informative then please do me a favour and leave a comment – and don’t forget to click the “Follow” button – it’s free and you’ll get notified of my next blog post.


Please note, I’ve not written this blog article for any other reason than to make its readers aware of the facts – no one pays me in cash or kind for any products mentioned in this article; it is written purely for information purposes; you do with this information “what you will”!

If you have any questions please feel free to drop me a line via email at tuition@wildlifeinpixels.net






Lightroom Tutorials #2



Image Processing in Lightroom & Photoshop


In this Lightroom tutorial preview I take a close look at the newly evolved Clone/Heal tool and dust spot removal in Lightroom 5.

This newly improved tool is simple to use and highly effective – a vast improvement over the great tool that it was already in Lightroom 4.


Lightroom Tutorials sample video link below – video will open in a new window.




This 4 disc Lightroom Tutorials DVD set is available from my website at http://wildlifeinpixels.net/dvd.html

How White is Paper White?

What is Paper White?


We should all know by now that, in RGB terms, BLACK is 0,0,0 and that WHITE is 255,255,255 when expressed in 8 bit colour values.


White can also be 32,768, 32,768, 32,768 when viewed in Photoshop as part of a 16 bit image (though those values are actually 15 bit – yet another story!).
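If you've ever wondered where that odd 32,768 figure comes from: Photoshop's "16 bit" mode actually works on a 0–32768 scale (15 bits plus one extra level). A minimal sketch of the mapping, assuming a simple linear scaling between the two ranges:

```python
# Photoshop's "16 bit" mode uses a 0–32768 scale (15 bit + 1 level),
# which is why 8-bit white (255) shows up as 32768 rather than 65535.
def to_photoshop_16bit(v8: int) -> int:
    """Map an 8-bit channel value (0-255) onto Photoshop's 0-32768 scale."""
    return round(v8 * 32768 / 255)

print(to_photoshop_16bit(0))    # 0     (black)
print(to_photoshop_16bit(255))  # 32768 (white)
```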


Either way, WHITE is WHITE; or is it?




Arctic Fox in Deep Snow ©Andy Astbury/Wildlife in Pixels


Take this Arctic Fox image – is anything actually white?  No, far from it! The brightest area of snow is around 238,238,238 – neutral, but not white; a very light grey.  And we won't even discuss the "whiteness" of the fox itself.



Hen Pheasant in Snow ©Andy Astbury/Wildlife in Pixels


The Hen Pheasant above was shot very late on a winter's afternoon when the sun was at a very low angle directly behind me – the colour temperature has gone through the roof, and everything has taken on a very warm glow which adds to the atmosphere of the image.



Extremes of colour temperature – Snow Drift at Sunset ©Andy Astbury/Wildlife in Pixels


We can take the 'snow at sunset' idea even further: where the sun's rays strike the snow it lights up pink, but the shadows go a deep, rich aquamarine blue – what we might call a 'crossed curves' scenario, where shadows and lower mid tones are at a low Kelvin temperature, and upper mid tones and highlights are at a much higher Kelvin.


All three of these images might look a little bit ‘too much’ – but try clicking one and viewing it on a darker background without the distractions of the rest of the page – GO ON, TRY IT.


Showing you these three images has a couple of purposes:

Firstly, to show you that “TRUE WHITE” is something you will rarely, if ever, photograph.

Secondly, viewing the same image in a different environment changes the eye's perception of the image.


The secondary purpose is the most important – and it’s all to do with perception; and to put it bluntly, the pack of lies that your eyes and brain lead you to believe is the truth.


Only Mother Nature, wildlife and cameras tell the truth!



So Where’s All This Going Andy, and What’s it got to do with Paper White?


Fair question, but bear with me!


If we go to the camera shop and peruse a selection of printer papers or unprinted paper samplers, our eyes tell us that we are looking at blank sheets of white paper;  but ARE WE ?


Each individual sheet of paper appears to be white, but we see very subtle differences which we put down to paper finish.


But if we put a selection of, say, Permajet papers together and compare them with 'true RGB white' we see the truth of the matter:



Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8bit.


Holy Mary Mother of God!!!!!!!!!!!!!!!!


I’ll bet that’s come as a bit of a shocker………


No paper is WHITE; some papers are “warm”; and some are “cool”.


So, if we have a "warmish" toned image it's going to be a lot easier to "soft proof" that image to a "warm" paper than a cool one – resulting in greater colour reproduction accuracy.


If we were to try and print a “cool” image on to “warm paper” then we’ve got to shift the whole colour balance of the image, in other words warm it up in order for the final print to be perceived as neutral – don’t forget, that sheet of paper looked neutral to you when you stuck it in the printer!


Well, that’s simple enough you might think, but you’d be very, very wrong…


We see colour on a print because the inks allow us to see the paper white through them, but only up to a point.  As colours and tones become darker on our print we see less "paper white" and more reflected colour from the ink surface.


If we shift the colour balance of the entire image – in this case warm it up – we shift the highlight areas so they match the paper white; but we also shift the shadows and darker tones.  These darker areas hide paper white so the colour shift in those areas is most definitely NOT desirable because we want them to be as perceptually neutral as the highlights.


What we need to do in truth is to somehow warm up the higher tonal values while at the same time keep the lowest tonal values the same, and then somehow match all the tones in between the shadows and highlights to the paper.
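One very crude way to picture that tone-dependent shift: scale each channel so that image white lands exactly on the paper white, which automatically leaves black untouched and shifts the mid tones proportionally. This is a toy sketch only – nothing like the real colorimetric maths of soft proofing – and the paper white values used here are made up for illustration:

```python
# Toy model of a tone-weighted white-point shift: image white is pulled
# onto the (hypothetical) paper white, black stays at black, and the
# amount of shift grows with tonal value.
def fit_to_paper(rgb, paper=(250, 246, 238)):
    """Scale each channel so 255 maps to the paper white value."""
    return tuple(round(c * p / 255) for c, p in zip(rgb, paper))

print(fit_to_paper((255, 255, 255)))  # (250, 246, 238) - full paper tint
print(fit_to_paper((20, 20, 20)))     # (20, 19, 19)    - shadows barely move
```

The real process is far more sophisticated, but the principle is the same: highlights take on the paper's tint, shadows stay put, and everything in between is graded smoothly.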

This is part of the process called SOFT PROOFING – but the job would be a lot easier if we chose to print on a paper whose “paper white” matched the overall image a little more closely.


The Other Kick in the Teeth


Not only are we battling the hue of paper white, or tint if you like, but we also have to take into account the luminance values of the paper – in other words just how “bright” it is.


Those RGB values of paper whites across a spread of Permajet papers – here they are again to save you scrolling back:



Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8 bit.


not only tell us that there is a tint to the paper due to the three colour channel values being unequal, but they also tell us the brightest value we can “print” – in other words not lay any ink down!


Take Oyster for example; a cracking all-round general printer paper that has a very large colour gamut and is excellent value for money – Permajet deserve a medal for this paper in my opinion because it’s economical and epic!


Its paper white is on average 240 Red, 245 Green, 244 Blue.  If we have any detail in areas of our image that are above 240, 240, 240 then part of that detail will be lost in the print, because the red channel minimum density (d-min) tops out at 240; anything that is 241 Red or higher simply will not be printed, and will show as 240 Red – the paper white.
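That clipping is just a per-channel ceiling. A minimal Python sketch, using the Oyster paper white values quoted above:

```python
# Any channel value brighter than the paper's d-min can't be printed -
# the printer lays down no ink there and you simply see paper white.
# Values for Permajet Oyster paper white, as quoted in the text.
PAPER_WHITE = (240, 245, 244)

def printable(rgb):
    """Clamp each channel to the brightest value the paper can show."""
    return tuple(min(c, p) for c, p in zip(rgb, PAPER_WHITE))

print(printable((250, 250, 250)))  # (240, 245, 244) - highlight detail lost
print(printable((200, 180, 160)))  # (200, 180, 160) - unchanged
```

Everything between 241 and 255 in the red channel collapses to a single value, which is exactly how subtle highlight detail vanishes from a print.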


Again, this is a problem mitigated in the soft proofing process.


But it’s also one of the reasons why the majority of photographers are disappointed with their prints – they look good on screen because they are being displayed with a tonal range of 0 to 255, but printed they just look dull, flat and generally awful.


Just another reason for adopting a Colour Managed Work Flow!


