Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
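If you want that arithmetic as code, here's the whole "calculation" in a few lines of Python, using the numbers from the example above:

```python
# Pixel resolution (PPI) = pixel dimension / physical dimension, per edge.
def pixel_resolution(pixels, inches):
    return pixels / inches

print(pixel_resolution(3000, 10))  # 300.0 PPI on the long edge
print(pixel_resolution(2400, 8))   # 300.0 PPI on the short edge
```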

 

How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:


The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 square images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 PPI (pixels per inch), or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).
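You can reproduce the effect in that illustration yourself – here's a rough sketch using the Pillow library; the file name is just a placeholder for any small square crop of your own:

```python
from PIL import Image

src = Image.open("square.png").convert("L")  # placeholder: any 128 x 128 crop

for edge in (128, 64, 32, 16, 8, 4, 2):
    small = src.resize((edge, edge), Image.LANCZOS)  # throw pixels away
    big = small.resize((128, 128), Image.NEAREST)    # blow the blocks back up
    big.save(f"square_{edge}px.png")  # fewer pixels = bigger blocks = less detail
```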

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables: image linear dimension, image pixel dimension and pixel resolution.

In our typical digital workflow the pixel dimension is derived from the photosite dimensions of our camera sensor – so this value is fixed.

RAW file handlers like Lightroom, ACR etc. all default to a native pixel resolution of 300PPI.* (This 300PPI myth annoys the hell out of me and I'll explain all in another post.)

So basically the pixel dimension and default resolution SET the image linear dimension.
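To see what that means in practice, here's a small sketch – I'm using the pixel dimensions of a 36Mp Nikon D800 frame purely as an example:

```python
# Linear (print) dimension = pixel dimension / pixel resolution.
def print_size(px_long, px_short, ppi=300):
    return px_long / ppi, px_short / ppi

# A 7360 x 4912 pixel frame at the default 300 PPI:
long_in, short_in = print_size(7360, 4912)
print(f"{long_in:.1f} x {short_in:.1f} inches")  # 24.5 x 16.4 inches
```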

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.

 

Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:


European Adder – 900 x 599 pixels with a pixel resolution of 300PPI


European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:


The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display, pixel resolution is meaningless when it comes to 'how big the image is' on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop screens etc. are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI
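Those figures fall straight out of a display's pixel dimensions and its diagonal size – here's the sum, with typical resolutions I've assumed for each screen:

```python
import math

# Display PPI = diagonal pixel count / diagonal size in inches.
def display_ppi(px_w, px_h, diagonal_inches):
    return math.hypot(px_w, px_h) / diagonal_inches

print(display_ppi(1920, 1200, 24))   # ~94 PPI  - a typical 24" monitor
print(display_ppi(2560, 1440, 27))   # ~109 PPI - 27" iMac
print(display_ppi(2048, 1536, 9.7))  # ~264 PPI - iPad 3/4
```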

Just so that you are sure to understand the implication of what I've just said – you CANNOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you'll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.
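Here's that calculation run both ways (the 12″ x 8″ above is me rounding – strictly it's 12.5″ x 8.3″):

```python
# Same pixel dimensions, two different pixel resolutions:
for ppi in (300, 72):
    print(f"{ppi} PPI: {900 / ppi:.1f} x {599 / ppi:.1f} inches")

# 300 PPI: 3.0 x 2.0 inches
# 72 PPI: 12.5 x 8.3 inches
```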

Just FYI – when I decide I'm going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER 'down sample' to the usual 72 PPI that gets touted around by idiots and know-nothing fools as "the essential thing to do".

What a waste of time and effort!

Exporting a small jpeg at 'full pixel resolution' skips the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:

 

PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:


The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
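You can check that claim with a couple of lines – I'm assuming a 5760 DPI driver setting here, typical of recent Epson desktop printers:

```python
# How many printer dots does one image pixel occupy?
printer_dpi = 5760   # assumed driver output resolution
image_ppi = 300

dots_per_edge = printer_dpi / image_ppi
print(dots_per_edge, dots_per_edge ** 2)  # 19.2 dots across, ~369 dots per pixel
```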

See you in my next post!


Noise and the Camera Sensor

Camera sensors all suffer from two major afflictions: diffraction and noise; and between them these two afflictions cause more consternation amongst photographers than anything else.

In this post I’m going to concentrate on NOISE, that most feared of sensor afflictions, and its biggest influencer – LIGHT, and its properties.

What Is Light?

As humans we perceive light as being a constant continuous stream or flow of electromagnetic energy, but it isn’t!   Instead of flowing like water it behaves more like rain, or indeed, bullets from a machine gun!   Here’s a very basic physics lesson:

Below is a diagram showing the Bohr atomic model.

We have a single positively charged proton (black) forming the nucleus, and a single negatively charged electron (green) orbiting the nucleus.

The orbit distance n1 is defined by the electrostatic balance of the two opposing charges.


The Bohr Atomic Model

If we apply energy to the system then a ‘tipping point’ is reached and the electron is forced to move away from the nucleus – n2.

Apply even more energy and the system tips again and the electron is forced to move to an even higher energy level – n3.

Now here’s the fun bit – stop applying energy to the system.

As the system is no longer needing to cope with the excess energy it returns to its natural ‘ground’ state and the electron falls back to n1.

In the process the electron sheds the energy it has absorbed – the red squiggly bit – as a quantum, or packet, of electromagnetic energy.

This is basically how a flash gun works.

This ‘packet’ has a start and an end; the start happens as the electron begins its fall back to its ground state; and the end occurs once the electron arrives at n1 – therefore it can perhaps be tentatively thought of as being particulate in nature.

So now you know what Prof. Brian Cox knows – CERN here we come!

Right, so what's this got to do with photography and camera sensor noise?

Camera Sensor Noise

All camera sensors are affected by noise, and this noise comes in various guises:

Firstly, the 'noise control' sections of most processing software we use tend to break it down into two components: luminosity, or luminance, noise; and colour noise.  Below is a rather crappy image that I'm using to illustrate what we might assume is the reality of noise:


This shot shows both Colour & Luminance noise.
The inset shows the full shot, and the small white rectangle is the area we're concentrating on.

Now let’s look at the two basic components: Firstly the LUMINANCE component


Here we see the LUMINANCE noise component – colour & colour noise components have been removed for clarity.

Next, the COLOUR NOISE bit:


The COLOUR NOISE component of the area we’re looking at. All luminance noise has been removed.

I must stress that the majority of colour noise you see in your files inside LR, ACR, CapOne, PS etc. is 'demosaicing colour noise', which occurs during the demosaic process.

But the truth is, it’s not that simple.

Localised random colour errors are generated 'on sensor' due to the individual sensor characteristics, as we'll see in a moment, because noise, in truth, comes in various guises that collectively affect luminosity and colour:


Shot Noise

This first type of noise is Shot Noise – so called because it's basically an intrinsic part of the exposure, and is caused by photon flux in the light reflected by the subject/scene.

Remember – we see light in a different way to that of our camera.  What we don't notice is the fact that photon streams rise and fall in intensity – they 'flux' – these variations happen far too fast for our eyes to notice, but they do affect the sensor output.

On top of this ‘fluxing’ problem we have something more obvious to consider.

Lighter subjects reflect more light (more photons), darker subjects reflect less light (fewer photons).

Your exposure is always going to be some sort of 'average', and so is only going to be 'accurate' for certain areas of the scene.

Lighter areas will be leaning towards over exposure; darker areas towards under exposure – your exposure can’t be perfect for all tones contained in the scene.

Tonal areas outside of the ‘average exposure perfection’ – especially the darker ones – may well contain more shot noise.

Shot noise is therefore quite regular in its distribution, but in certain areas it becomes irregular – so it's often described as 'pseudo random'.


Read Noise

Read Noise – now we come to a different category of noise completely.

The image is somewhat exaggerated so that you can see it, but basically this is a ‘zero light’ exposure; take a shot with the lens cap on and this is what happens!

What you can see here is the background sensor noise when you take any shot.

Certain photosites on the sensor are actually generating electrons even in the complete absence of light – seeing as they’re photo-voltaic they shouldn’t be doing this – but they do.

Added to this are A/D converter errors and general 'system noise' generated by the camera – so we can regard Read Noise as being like the background hiss, hum and rumble we can hear on a record deck when we turn the Dolby off.


Thermal & Pattern Noise

In the same category as Read Noise are two other types of noise – thermal and pattern.

Both again have nothing to do with light falling on the sensor, as this too was shot under a duvet with the lens cap on – a 30 minute exposure at ISO 100 – not as daft as it sounds when you think of astrophotography, and star trail shots in particular.

You can see in the example that there are lighter and darker areas especially over towards the right side and top right corner – this is Thermal Noise.

During long exposures the sensor actually heats up, which in turn increases the response of photosites in those areas and causes them to release more electrons.

You can also see distinct vertical and some horizontal banding in the example image – this is pattern noise, yet another sensor noise signature.


Under Exposure Noise – pretty much what most photographers think of when they hear the word “noise”.

Read Noise, Pattern Noise, Thermal Noise and, to a degree, Shot Noise all go together to form a 'base line noise signature' for your particular sensor.  So when we put them all together and take a shot where we need to tweak the exposure in the shadow areas a little, we get an overall Under Exposure Noise characteristic for our camera – which, let's not forget, contains other elements of both luminance noise and colour noise derived from the ISO settings we use.

All sensors have a base ISO – this can be thought of as the speed rating which yields the highest Dynamic Range (Dynamic Range falls with increasing ISO values, which is basically under exposure).

At this base ISO the levels of background noise generated by the sensor just being active (Pattern,Read & Thermal) will be at their lowest, and can be thought of as the ‘base noise’ of the sensor.

How visually apparent this base noise level is depends on what is called the Signal to Noise Ratio – the higher the S/N ratio the less you see the noise.

And what is it that gives us a high signal?

MORE Photons – that’s what..!

The more photons each photosite on the sensor can gather during the exposure, the more 'masked' any internal noise will be.

And how do we catch more photons?

By using a sensor with BIGGER photosites, a larger pixel pitch – that's how.  And bigger photosites mean FEWER MEGAPIXELS – allow me to explain.


Here we see a representation of various sized photosites from different sensors.

On the right is the photosite of a Nikon D3s – a massive ‘bucket’ for catching photons in – and 12Mp resolution.

Moving left we have another FX sensor photosite – the D3X at 24Mp, and then the crackpot D800 and its mental 36Mp tiny photosite – can you tell I dislike the D800 yet?

On the extreme left is the photosite from the 1.5x APS-C D7100, just for comparison.

Now cast your mind back to the start of this post where I said we could tentatively regard photons as particles – well, let’s imagine them as rain drops, and the photosites in the diagram above as different sized buckets.

Let’s put the buckets out in the back yard and let’s make the weather turn to rain:


Various sizes of photosites catching photon rain.

Here it comes…


It’s raining

OK – we’ve had 2 inches of rain in 10 seconds! Make it stop!


All buckets have 2 inches of water in them, but which has caught the biggest volume of rain?

Thank God for that..

If we now get back to reality, we can liken the duration of the downpour to shutter speed, the rain drops themselves to photons falling on the sensor, and the consistency of water depth in each 'bucket' to a correct level of exposure.

Which bucket has the largest volume of water, or which photosite has captured the most photons – in other words which sensor has the highest S/N Ratio?   That’s right – the 12Mp D3s.

To put this into practical terms let’s consider the next diagram:


Increased pixel pitch = Increased Signal to Noise Ratio

The importance of S/N ratio and its relevance to camera sensor noise can be seen clearly in the diagram above – but we are talking about base noise at native or base ISO.
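There's some simple physics behind this: photon arrival is random and follows Poisson statistics, so the noise in the light itself equals the square root of the photon count – which means the signal-to-noise ratio also rises as the square root of the count. A quick NumPy sketch (the photon counts are purely illustrative, not manufacturer figures):

```python
import numpy as np

rng = np.random.default_rng(42)

# Same exposure, two 'bucket' sizes - the bigger photosite catches more photons.
for label, mean_photons in [("small photosite", 1_000), ("big photosite", 16_000)]:
    samples = rng.poisson(mean_photons, size=100_000)  # shot noise is Poisson
    snr = samples.mean() / samples.std()
    print(f"{label}: SNR ~{snr:.0f} (theory sqrt(N) = {mean_photons ** 0.5:.0f})")
```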

If we now look at increasing the ISO speed we have a potential problem.

As I mentioned before, increasing ISO is basically UNDER EXPOSURE followed by in-camera “push processing” – now I’m showing my age..


The effect of increased ISO – in-camera "push processing" automatically lifts the exposure value to where the camera thinks it is supposed to be.

By under exposing the image we reduce the overall Signal to Noise Ratio, then the camera internals lift all the levels by a process of amplification – and this includes amplifying the original level of base noise.
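Here's the same idea as a toy calculation with NumPy – the gain lifts signal and noise alike, so it can never win back the SNR lost to the under exposure. All figures are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
read_noise = 5.0  # electrons RMS - an illustrative figure, not a real camera spec

def snr(mean_photons, gain):
    photons = rng.poisson(mean_photons, 100_000)              # shot noise
    electrons = photons + rng.normal(0, read_noise, 100_000)  # plus read noise
    signal = electrons * gain                                 # the ISO 'push'
    return signal.mean() / signal.std()                       # gain cancels out

print(snr(10_000, gain=1))  # base ISO, full exposure: SNR ~100
print(snr(1_250, gain=8))   # 3 stops under + 8x gain: SNR ~35
```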

So now you know WHY and HOW your images look noisy at higher ISOs – or so you'd think.  Again, it's not that simple; take the next two image crops for instance:


Kingfisher – ISO 3200 Nikon D4 – POOR LIGHT – Click for bigger view


Kingfisher – ISO 3200 Nikon D4 – GOOD LIGHT – CLICK for bigger view

If you click on the images (they’ll open up in new browser tabs) you’ll see that the noise from 3200 ISO on the D4 is a lot more apparent on the image taken in poor light than it is on the image taken in full sun.

You’ll also notice that in both cases the noise is less apparent in the high frequency detail (sharp high detail areas) and more apparent in areas of low frequency detail (blurred background).

So here’s “The Andy Approach” to noise and high ISO.

1. It's not a good idea to use higher ISO settings just to combat poor light – in poor light everything looks like crap, and if it looks crap then the image will look even crappier.  When I get in a poor light situation and I'm not faced with a "shot in a million" then I don't take the shot.

2. There’s a big difference between poor light and low light that looks good – if that’s the case shoot as close to base ISO as you can get away with in terms of shutter speed.

3. If you shoot landscapes then shoot at base ISO at all times and use a tripod and remote release – make full use of your sensor's dynamic range.

4. The Important One – don’t get hooked on megapixels and so-called sensor resolution – I’ve made thousands of landscape sales shot on a 12Mp D3 at 100 ISO. If you are compelled to have more megapixels buy a medium format camera which will generate a higher S/N Ratio because the photosites are larger.

5. If you shoot wildlife you’ll find that the necessity for full dynamic range decreases with angle of view/increasing focal length – using a 500mm lens you are looking at a very small section of what your eye can see, and tones contained within that small window will rarely occupy anywhere near the full camera dynamic range.

Under good light this will allow you to use a higher ISO in order to gain that crucial bit of extra shutter speed – remember, wildlife images tend to be at least 30 to 35% high frequency detail – noise will not be as apparent in these areas as it is in the background; hence the ubiquitous saying of wildlife photographers: "Watch your background at all times".

Well, I think that’s enough to be going on with – but there’s oh so much more!


Bit Depth

Bit Depth – What is a Bit?

Good question – from a layman's point of view it's the smallest USEFUL unit of computer/digital information; useful in that it can have one of two values – 0 or 1.

Think of it as a light switch; it has two positions – ON and OFF, 1 or 0.


A bit is like a light switch.

We have 1 switch (bit) with 2 potential positions (bit value 0 or 1) so we have a bit depth of 1. We can arrive at this by simple maths – number of switch positions to the power of the number of switches; in other words 2 to the 1st power.
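That 'positions to the power of switches' rule is all there is to it – here it is for the bit depths this post talks about:

```python
# Number of tones = 2 to the power of the bit depth.
for bits in (1, 2, 4, 8, 12, 14, 16):
    print(f"{bits:2d} bit: {2 ** bits:,} tonal values")
```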

How Does Bit Depth Impact Our Images?

So what would this bit depth of 1 mean in image terms:


An Image with a Bit Depth of 1 bit.

Well, it’s not going to win Wildlife Photographer of the Year is it!

Because each pixel in the image can only be black or white, on or off, 0 or 1 then we only have two tones we can use to describe the entire image.

Now if we were to add another bit to the overall bit depth of the image we would have 2 switches (bits), each with 2 potential values, giving 2 to the 2nd power, or 4 potential output values/tones.


An image with a bit depth of 2 bits.

Not brilliant – but it’s getting there!

If we now double the bit depth again, this time to 4 bit, then we have 2 to the 4th, or 16 potential tones or output values per image pixel:


A bit depth of 4 bits gives us 16 tonal values.

And if we double the bit depth again, up to 8 bit we will end up with 2 to the 8th power, or 256 tonal values for each image pixel:


A bit depth of 8 bits yields what the eye perceives to be continuous unbroken tone.

This range of 256 tones (0 to 255) is the smallest number of tonal values that the human eye can perceive as being continuous in nature; therefore we see an unbroken range of greys from black to white.

More Bits is GOOD

Why do we need to use bit depths HIGHER than 8 bit?

Our modern digital cameras capture and store RAW images to a bit depth of 12 bit, and now in most cases 14 bit – 4096 & 16,384 tonal values respectively.

Just as we use the ProPhotoRGB colour space to preserve as many CAPTURED COLOURS as we can, we need to apply a bit depth to our pixel-based images that is higher than the capture depth in order to preserve the CAPTURED TONAL RANGE.

It's the "bigger bucket" or "more stairs on the staircase" scenario all over again – more information about a pixel's brightness and colour is GOOD.


How Tonal Graduation Increases with Bit Depth.

Black is black, and white is white, but increased bit depth gives us a higher number of steps/tones; tonal graduations, to get from black to white and vice versa.

So, if our camera captures at 14 bit we need a 15 bit or 16 bit “bucket” to keep it in.  And for those who want to know why a 14 bit bucket ISN’T a good idea then try carrying 2 gallons of water in a 2 gallon bucket without spillage!

The 8 bit Image Killer

Below we have two identical grey scale images open in Photoshop – simple graduations from black to white; one is a 16 bit image, the other 8 bit:


16 bit greyscale at the top. 8 bit greyscale below – CLICK Image to view full size.

Now everything looks OK at this “fit to screen” magnification; and it doesn’t look so bad at 1:1 either, but let’s increase the magnification to 1600% so we can see every pixel:

 


CLICK Image to view full size. At 1600% magnification we can see that the 8 bit file is degraded.

At this degree of magnification we can see a huge amount of image degradation in the lower, 8 bit image whereas the upper, 16 bit image looks tonally smooth in its graduation.

The degradation in the 8 bit image is simply due to the fact that the total number of tones is "capped" at 256, and 256 steps to get from the black to the white values of the image are not sufficient – this leaves gaps in the image that Photoshop has to fill with "invented" tonal information based on its own internal "logic"….mmmmmm….
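You can put numbers on this 'capping' without even opening Photoshop. The sketch below quantises a smooth ramp to 8 bit and 16 bit, applies the same mild levels move to both, and counts how many of the 256 display levels survive – the missing ones are the gaps you see as banding:

```python
import numpy as np

# A smooth black-to-white ramp, quantised two ways:
ramp = np.linspace(0.0, 1.0, 100_000)
ramp8 = np.round(ramp * 255) / 255        # 8 bit:  256 possible levels
ramp16 = np.round(ramp * 65535) / 65535   # 16 bit: 65,536 possible levels

def surviving_levels(r):
    s = np.clip((r - 0.04) / 0.92, 0, 1)      # a mild levels/curves stretch
    return np.unique(np.round(s * 255)).size  # re-quantise for an 8 bit display

print(surviving_levels(ramp8))   # < 256: missing levels = visible banding
print(surviving_levels(ramp16))  # all 256 display levels survive the edit
```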

There was a time when I thought “girlies” were the most illogical things on the planet; but since Photoshop, now I’m not so sure…!

The image is a GREYSCALE – RGB ratios are supposedly equal in every pixel, but as you can see, Photoshop begins to skew the ratios where it has to do its “inventing” so we not only have luminosity artifacts, but we have colour artifacts being generated too.

You might look upon this as “pixel peeping” and “geekey”, but when it comes to image quality, being a pixel-peeping Geek is never a bad thing.

Of course, we all know 8bit as being “jpeg”, and these artifacts won’t show up on a web-based jpeg for your website; but if you are in the business of large scale gallery prints, then printing from an 8 bit image file is never going to be a good idea as these artifacts WILL show on the final print.


Colour Space & Profiles


From Camera to Print
copyright 2013 Andy Astbury/Wildlife in Pixels

Colour space and device profiles seem to cause a certain degree of confusion for a lot of people; and a feeling of dread, panic and total fear in others!

The reality of colour spaces and device profiles is that they are really simple things, and that how and why we use them in a colour managed work flow is perfectly logical and easy to understand.

Up to a point colour spaces and device profiles are one and the same thing – they define a certain “volume” of colours from red to green to blue, and from black to white – and all the colours that lie in between those five points.

The colour spaces that most photographers are by now familiar with are ProPhotoRGB, AdobeRGB(1998) and sRGB – these are classed as "working colour spaces" and are standards of colour set by the International Color Consortium, or ICC; and they all have one thing in common: where red, green and blue are present in equal amounts, the colour produced will be NEUTRAL.

The only real differences between these three working colour spaces are the "distances" between the five set points of red, green, blue, black and white.  The greater the distance between the three primary colours, the greater the degree of graduation between them, and hence the greater the number of potential colours.  In the diagram below we can see the sRGB & ProPhoto working colour spaces displayed on the same axes:


The sRGB & ProPhoto colour spaces. The larger volume of ProPhoto contains more colour variety between red, green & blue than sRGB.
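To put some rough numbers on those relative sizes, you can compare the 2D footprints of the spaces on the CIE xy chromaticity diagram – the triangles spanned by each space's red, green and blue primaries (the chromaticity values below are the published standards). It's a flat simplification of the 3D volumes plotted above, but the ranking is the same:

```python
def triangle_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# CIE xy chromaticities of each space's R, G, B primaries:
primaries = {
    "sRGB":     [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)],
    "AdobeRGB": [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)],
    "ProPhoto": [(0.7347, 0.2653), (0.1596, 0.8404), (0.0366, 0.0001)],
}

srgb_area = triangle_area(primaries["sRGB"])
for name, pts in primaries.items():
    print(f"{name}: {triangle_area(pts) / srgb_area:.2f} x the sRGB footprint")
# sRGB 1.00x, AdobeRGB ~1.35x, ProPhoto ~2.5x
```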

If we were to mark five different points on the surface of a partially inflated balloon, and then inflate it some more, the points in relation to the balloon's surface would NOT change: the points remain the same.  But the spatial distances between the points would change, as would the internal volume.  It's the same with our five points of colour reference – red, green, blue, black & white – they do NOT change between colour spaces; red is red no matter what the working colour space.  But the range of potential colours between our 5 points of reference increases due to increased colour space volume.

So now we have dealt with the basics of the three main working colour spaces, we need to consider the volume of colour our camera sensor can capture – if you like, its colour space; but I’d rather use the word “gamut”.

Let’s take the Canon 5DMk3 as an example, and look at the volume, or gamut, of colour that its sensor can capture, in direct comparison with our 3 quantifiable working colour spaces:


The Canon 5DMk3 sensor gamut (black) in comparison to ProPhoto (largest), AdobeRGB1998 & sRGB (smallest) working colour spaces.

In a previous blog article I wrote – see here – I mentioned how to set up the colour settings in Photoshop, and this is why.  If you want to keep the greatest proportion of your camera sensor's captured colour then you need to contain the image within the ProPhotoRGB working colour space.  If you don't, and you use AdobeRGB or sRGB as Photoshop's working colour space, then you will lose a certain proportion of those captured colours – as I've heard it put before, it's like a sex change operation – certain colours get chopped off, and once that's happened you can't get them back!

To keep things really simple just think of the 3 standard working colour spaces as buckets – the bigger the bucket, the more colour it contains; and you can’t tip the colours captured by your camera into a smaller bucket without getting spillage and making a mess on the floor!
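That 'pouring' can be scripted if you ever need to batch it. Here's a sketch using Pillow's ImageCms module – the file name is a placeholder and the ProPhoto profile path is an assumption, so point it at wherever your .icc/.icm profiles actually live:

```python
from PIL import Image, ImageCms

im = Image.open("kingfisher_prophoto.tif")  # placeholder file name

prophoto = ImageCms.getOpenProfile("ProPhoto.icm")  # assumed path on disk
srgb = ImageCms.createProfile("sRGB")               # sRGB is built in

# Pour the big bucket into the small one - out-of-gamut colours get clipped.
small_bucket = ImageCms.profileToProfile(im, prophoto, srgb, outputMode="RGB")
small_bucket.save("kingfisher_srgb.tif")
```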

As I said before, working colour spaces are neutral; but seldom does our camera ever capture a scene that contains pure neutrals.  Even though an item in the scene may well be neutral in colour, camera sensors quite often skew these colours ever so slightly; most Canon RAW files always look a teeny-weeny, ever so slight bit magenta to me when I import them – but then again I'm a Nikon shooter, and Nikon files seem to have a minute greenish tinge to them before processing.

Throughout our imaging work flow we have 3 stages:

1. Input (camera or scanner).

2. Working Process (Lightroom, Photoshop etc).

3. Output (printer for example).

And each stage has its representative type of colour space – we have input profiles, working colour spaces and output profiles.

So we have our camera capture gamut (colour space if you like) and we’ve opened our image in Photoshop or Lightroom in the ProPhoto working colour space – there’s NO SPILLAGE!

We now come to the crux of colour management; before we can do anything else we need to profile our “window onto our image” – the monitor.

In order to see the reality of what the camera captured we need to ensure that our monitor is in line with our WORKING COLOUR SPACE in terms of colour neutrality – not that of the camera as some people seem to think.

All 3 working colour spaces possess the same degree of colour neutrality, where red, green & blue are present at the same values irrespective of the physical size of the colour space.

So as long as our monitor is profiled to be:

1. Accurately COLOUR NEUTRAL

2. Displaying maximum brightness only in the presence of true white – which you'll hardly ever photograph; even snow isn't white.

then we will see a highly workable representation of image colour neutrality and luminosity on our monitor.  Only by working this way can we actually tell if the camera has captured the image correctly in terms of colour balance and overall exposure.

And the fact that our monitor CANNOT display all the colours contained within our big ProPhoto bucket is, to all intents and purposes, a fairly moot point; though seeing as many of them as possible is never a bad thing.

And using a monitor that does NOT display the volume of colour approximating or exceeding that of the Adobe working space can be highly detrimental for the reasons discussed in my previous post.

Now that we’ve covered input profiles and working colour spaces we need to move on and outline the basics of output profiles, and printer profiles in particular.


Adobe & sRGB working spaces in comparison to the colours contained in the Kingfisher image, and the profile for Permajet Oyster paper using the Epson 7900 printer. (CLICK image for full sized view).

In the image above we can see both the Adobe and sRGB working spaces and the full distribution of colours contained in the Kingfisher image which is a TIFF file in our big ProPhoto bucket of colour;  and a black trace which is the colour profile (or space if you like) for Permajet Oyster paper using Epson UltraChrome HDR ink on an Epson 7900 printer.

As we can see, some of the colours contained in the image fall outside the gamut of the sRGB working colour space; notably some oranges and “electric blues” which are basically colours of the subject and are most critical to keep in the print.

However, all those ProPhoto colours are capable of being reproduced on the Epson 7900 using Permajet Oyster paper because, as the black trace shows, the printer/ink/paper combination can reproduce colours that lie outside of the Adobe working colour space.

The whole purpose of that particular profile is to ensure that the print matches what we can see on the monitor both in terms of colour and brightness – in other words, what we see is what we get – WYSIWYG!

The beauty of a colour managed workflow is that it's economical – assuming the image is processed correctly, printing via an accurate printer profile can give you a perfect printed rendition of your screen image using just a single sheet of paper – and only one sheet's worth of ink.


The difference between colour profiles for the same printer paper on different printers. Epson 3000 printer profile trace in Red (CLICK image for full size view).

If we were to switch printers to an Epson 3000 using UltraChrome K3 ink on the very same paper, the area circled in white shows us that there are a couple of orange hue colours that are a little problematic – they lie either close to or outside the colour gamut of this printer/ink/paper combination, and so they need to be changed in order to ‘fit’, either by localised adjustment or variation of rendering intent – but that’s a story for later!

Why is it different? Well, it’s not to do with the paper for sure, so it’s down to either the ink change or printer head.  Using the same K3 ink in an Epson 4800 brings the colours back into gamut, so the difference is in the printer head itself, or the printer driver, but as I said, it’s a small problem easily fixed.
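That 'variation of rendering intent' can be previewed in code as well as in Photoshop's soft-proofing. Another sketch with Pillow's ImageCms – both profile paths are assumptions, stand-ins for whatever printer profiles you actually have installed:

```python
from PIL import Image, ImageCms

im = Image.open("kingfisher_prophoto.tif")              # placeholder file name
prophoto = ImageCms.getOpenProfile("ProPhoto.icm")      # assumed path
printer = ImageCms.getOpenProfile("Oyster_EP3000.icc")  # assumed path

# Perceptual compresses the whole gamut to fit the printer; relative
# colorimetric clips only the out-of-gamut colours and leaves the rest alone.
for intent in (ImageCms.INTENT_PERCEPTUAL, ImageCms.INTENT_RELATIVE_COLORIMETRIC):
    out = ImageCms.profileToProfile(im, prophoto, printer, renderingIntent=intent)
    out.save(f"proof_intent_{int(intent)}.tif")
```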

When you consider the low cost of achieving an accurate monitor profile – see this previous post – and combine that with an accurate printer output profile or two to match your chosen printer papers, and then deploy these assets correctly you have a proper colour managed workflow.  Add to that the cost savings in ink and paper and it becomes a bit of a “no-brainer” doesn’t it?

In this post I set out to hopefully ‘demystify’ colour spaces and profiles in terms of what they are and how they are used – I hope I’ve succeeded!


Monitor Calibration with ColorMunki

Monitor Calibration with ColorMunki Photo

Following on from my previous posts on the subject of monitor calibration I thought I’d post a fully detailed set of instructions, just to make sure we’re all “singing from the same hymn sheet” so to speak.

Basic Setup

[Photo: the ColorMunki in its holder]

Put the ColorMunki spectrophotometer into the cover/holder and attach the USB cable.

[Photo: the sliding dust cover closed for storage]

Always keep the sliding dust cover closed when storing the ColorMunki in its holder – this prevents dust ingress which will affect the device's performance.

BUT REMEMBER – slide the cover out of the way before you begin the calibration process!

[Photo: the spectro cover slid open for measurement]

Install the ColorMunki software on your machine, register it via the internet, then check for any available updates.

Once the software is fully installed and working you are ready to begin.

Plug the USB cable into an empty USB port on your computer – NOT an external hub port as this can sometimes cause device/system communication problems.

Launch the ColorMunki software.

The VERY FIRST THING YOU NEED TO DO is open the ColorMunki software preferences and ensure that it looks like the following screen:

PC: File > Preferences

Mac: ColorMunki Photo > Preferences

[Screenshot: the ColorMunki preferences panel]

The value for the Tone Response Curve MUST be set to 2.2 which is the default value.

The ICC Profile Version number MUST be set to v2 for best results – this is NOT the default.

Ensure the two check boxes are “ticked”.**

** These settings can be something of a contentious issue. DDC & LUT check boxes should only be “ticked” if your Monitor/Graphics card combination offers support for these modes.

If you find these settings make your monitor become excessively dark once profiling has been completed, start again ensuring BOTH check boxes are “unticked”.

Untick both boxes if you are working on an iMac or laptop as for the most part these devices support neither function.

For more information on this, a good starting point is a page on the X-Rite website available on the link below:

http://xritephoto.com/ph_product_overview.aspx?ID=1115&Action=Support&SupportID=5561

If you are going to use the ColorMunki to make printer profiles then ensure the ICC Profile Version is set to v2.

By default the ColorMunki writes profiles in ICC v4 – not all computer operating systems handle v4 profiles correctly from a graphics colour standpoint, but they can all function perfectly with ICC v2.

You should only need to do this operation once, but any updates from X-Rite, or a re-installation of the software will require you to revisit the preferences panel just to check all is well.

Once this panel is set as above Click OK and you are ready to begin.

 

Monitor Calibration

This is the main ColorMunki GUI, or graphic user interface:

[Screenshot: the main ColorMunki GUI]

Click Profile My Display

[Screenshot: display selection]

Select the display you want to profile.

I use what is called a “double desktop” and have two monitors running side by side; if you have just a single monitor connected then that will be the only display you see listed.

Click Next>.

[Screenshot: display type selection]

Select the type of display – we are talking here about monitor calibration of a screen attached to a PC or Mac so select LCD.

Laptops – it never hurts a laptop to be calibrated for luminance and colour, but in most cases the graphics output LUT (colour Look Up Table) is barely 8 bit to begin with; the calibration process will usually reduce that to less than 8 bit. This will normally result in the laptop screen colour range being reduced in size and you may well see “virtual” colour banding in your images.

Remedy: DON’T PROCESS ON A LAPTOP – otherwise “me and the boys” will be paying you a visit!

Select Advanced.

Deselect the ambient light measurement option – it can be expensive to set yourself up with proper lighting in order to have an ICC standard viewing/processing environment; daylight (D65) bulbs are fairly cheap and do go a long way towards helping, but the correct amount of light and the colour of the walls and ceiling, and the exclusion of extraneous light sources of incorrect colour temperature (e.g. windows) can prove somewhat more problematic and costly.

Processing in a darkened room is by far the easiest and cheapest way of obtaining correct working conditions.

Set the Luminance target Value to 120 (that’s 120 candelas per square meter if you’re interested!).

Set the Target White Point to D65 (that’s 6500 degrees Kelvin – mean average daylight).

Click Next>.

[Screenshot: device calibration prompt – rotate the dial]

With the ColorMunki connected to your system this is the screen you will be greeted with.

You need to calibrate the device itself, so follow the illustration and rotate the ColorMunki dial to the indicated position.

Once the device has calibrated itself to its internal calibration tile you will see the displayed GUI change to:

[Screenshot: the GUI after device self-calibration]

Follow the illustration and return the ColorMunki dial to its measuring position.

[Screenshot: the dial returned to its measuring position]

Click Next>.

[Screenshot: place the ColorMunki on the monitor]

With the ColorMunki in its holder and with the spectrophotometer cover OPEN for measurement, place the ColorMunki on the monitor as indicated on screen and in the image below:

[Photo: the ColorMunki hanging on the monitor face]

We are now ready to begin the monitor calibration.

Click Next>.

The first thing the ColorMunki does is measure the luminosity of the screen. If you get a manual adjustment prompt such as this (indicates non-support/disabling of DDC preferences option):

[Screenshot: the manual brightness adjustment prompt]

Simply adjust the monitor brightness slowly until the indicator line is level with the central datum line; you should see a "tick" suddenly appear when the luminance value of 120 is reached by your adjustments.

LCDs are notoriously slow to respond to changes in “backlight brightness” so make an adjustment and give the monitor a few seconds to settle down.

You may have to access your monitor controls via the screen OSD menu, or on Mac via the System Preferences > Display menu.

Once the Brightness/Luminance of the monitor is set correctly, ColorMunki will proceed with its monitor output colour measurements.

To help you understand monitor calibration and what is going on, here is a sequence of slides from one of my workshops on colour management:

[Workshop slides 1–4: how monitor calibration works]

Once the measurements are complete the GUI will return to the screen in this form.

[Screenshot: profile naming and save]

Either use the default profile name, or one of your own choice and click Save.

NOTE: Under NO CIRCUMSTANCES can you rename the profile after it has been saved, or any other .icc profile for that matter, otherwise the profile will not work.

Click Next>.

[Screenshot: save confirmation and reminder interval]

Click Save again to commit the new monitor profile to your operating system as the default monitor profile.

You can set the profile reminder interval from the drop down menu.

Click Next>.

[Screenshot: back at the main ColorMunki GUI]

Monitor calibration is now complete and you are now back to the ColorMunki startup GUI.

Quit or Exit the ColorMunki application – you are done!


Screen Capture logos denoting ColorMunki & X-Rite are the copyright of X-Rite.

Monitor Calibration Devices

Colour management is the simple process of maintaining colour accuracy and consistency between the ACTUAL COLOURS in your image, in terms of Hue, Saturation and Luminosity; and those reproduced on your RGB devices; in this case, displayed on your monitor. Each and every pixel in your image has its very own individual RGB colour values and it is vital to us as photographers that we “SEE” these values accurately displayed on our monitors.

If we were to visit The National Gallery and gaze upon Turner's "Fighting Temeraire" we would see all those sumptuous colours on the canvas just as J.M.W. intended; but could we see the same colours if we had a pair of Ray Bans on?

No, we couldn’t; because the sunglasses behave as colour filters and so they would add a “tint” to every colour of light that passes through them.

What you need to understand about your monitor is that it behaves like a filter between your eyes and the recorded colours in your image; and unless that “filter” is 100% neutral in colour, then it will indeed “tint” your displayed image.

So, the first effect of monitor calibration is that the process NEUTRALIZES any colour tint in the monitor display and so shows us the “real colours” in our images; the correct values of Hue and Saturation.

Now imagine we have an old fashioned Kodak Ektachrome colour slide sitting in a projector. If we have the correct wattage bulb in the projector we will see the correct LUMINOSITY of the slide when it is projected.

But if the bulb wattage is too high then the slide will project too brightly, and if the bulb wattage is too low then the projected image will not be bright enough.

All our monitors behave just like a projector, and as such they all have a brightness adjustment which we can directly correlate to our old fashioned slide projector bulb, and this brightness, or backlight control is another aspect of monitor calibration.

Have you done a print that comes out DARKER than the image displayed on the screen?

If you have then your monitor backlight is too bright!

And so, the second effect of monitor calibration is the setting of the correct level of brightness or back lighting of our monitor in order for us to see the true Luminosity of the pixels in our images.

Without accurate Monitor Calibration your ability to control the accuracy of colour and overall brightness of your images is severely limited.

I get asked all the time "what's the best monitor calibration device to use?" – so above is a short video (no sound) I've made showing the 3D and 2D plots of profiles I've just made for the same monitor, using two different monitor calibration devices/spectrophotometers from opposite ends of the pricing scale.

The first plot you see in black is the AdobeRGB1998 working colour space – this is only shown as a standard by which you can judge the other two profiles; if you like, monitor working colour spaces.

The yellow plot that shows up as an overlay is a profile done with an X-Rite ColorMunki Photo, which usually retails for around £300 – and it clearly shows this particular monitor rendering a greater number of colours in certain areas than are contained in the Adobe1998 reference space.

The cyan plot is the same monitor, but profiled with the i1Photo Pro 2 spectro – not much change out of £1300 thank you very much – and the resulting profile is virtually an identical twin of the one obtained with the ColorMunki, which retails for a quarter of the price!

Don’t get me wrong, the i1 is a far more efficient monitor calibration device if you want to produce custom PRINTER profiles as well, but if you are happy using OEM profiles and just want perfect monitor calibration then I’d say the ColorMunki Photo is the more sensible purchase; or better still the ColorMunki Display at only around £110.


Monitor, Is Yours Up To The Job?

Is Your Monitor Actually Up To The Job?

As photographers we have to take something of a “leap of faith” that the monitor we use to view and process our images on is actually up to the job – or do we?

No – is the short answer!  As a Photoshop & Lightroom educator I try and teach this mystical thing called “Colour Management” – note the correct spelling of the word COLOUR!

The majority of amateur photographers (and a few so-called pros come to that!) seem to think that colour management is some great complicated edifice; or even some sort of “re-invention of the wheel” – and so they either bury their head in the sand or generally “pooh-pooh” the idea as unnecessary.

Well, it’s certainly NOT complicated, but it certainly IS necessary.

The first stage in a colour managed workflow is to ensure that your monitor is calibrated – in other words it is working at the correct brightness level, and the correct colour balance or white point – this will ensure that when your computer sends pure red to your monitor, pure red is seen on the screen; not red with a blue tint to it!

But correct calibration of your monitor is fairly useless if your monitor cannot reproduce a large variation of colour – in other words, if its colour gamut is too small.

And it’s Monitor Colour Gamut that I want to look at in this post.

The first thing I’d like you to do is open up Photoshop and go to the Colour Settings – that’s Edit>Colour Settings, or shift+cmd+K on Mac, or shift+Ctrl+K on PC.

Once this dialogue box is open, set it up as follows:

[Screenshot: Photoshop Colour Settings dialogue]

This is the optimum setup of Photoshop for digital photography as ProPhoto is the best colour space for preserving the largest number of colours captured by your dslr sensor; far better than AdobeRGB1998 – but that’s another story.

If you like you can click the SAVE button and then give this settings profile a name – I call mine ProPhoto_Balanced_CC

Now that you are working with the largest colour palette possible inside Photoshop, I want you to go to File>New and create a new 500×500 pixel square image with a resolution of 300 pixels per inch, with the settings as follows:

[Screenshot: the New document dialogue]

Click OK and you should now have a white square.

Now go to your foreground colour, click it to bring the colour palette dialogue box into view and manually add the following values indicated by the small red arrows:

[Screenshot: the colour picker with the required RGB values]

The colour will look a little different than it does in the jpeg above.

So now we have a rather lurid sickly-looking green square in the ProPhoto colour space.

Now duplicate the image TWICE and then go to Window>Arrange>3up Vertical and you should end up with a display looking like this:

[Screenshot: three identical green squares, all in ProPhoto]

Now comes the point of the exercise – click on the tab for the centre image and go Edit>Convert to Profile and choose AdobeRGB(1998) as the destination space (colour space).

Then click on the tab for the left hand image and go Edit>Convert to Profile and choose sRGB as the destination space.

Here’s the thing – if your display DOES NOT look like this:

[Screenshot: the three squares now showing three visibly different greens]

and all three squares look the same as the square on the left then your monitor only has a small sRGB colour gamut and is going to severely inhibit your ability to process your images properly or with any degree of colour accuracy.

Monitors rely on their Colour Look-up Table or LUT in order to display colour. Calibration of the monitor can reduce the size of the available range of colours in the LUT if it’s not big enough in the first place, and so calibration can indeed make things worse from a colour point of view; BUT, it will still ensure the monitor is set to the correct levels of brightness and colour neutrality; so calibration is still a good idea.

Laptops are usually the best illustration of this small LUT problem; normally their display gamuts are barely 8bit sRGB to begin with, and if calibration drops the LUT to below 8bit then the commonest problem you see is colour banding in your images.

If however, your display looks like the image above then you’re laughing!

Why is a large monitor colour gamut essential for digital photography?  Well it’s all to do with those colour spaces:

[Diagram: ProPhoto, AdobeRGB(1998), sRGB and Canon 1Dx gamuts overlaid]

If you look at the image above you’ll see the three standard primary working colour spaces of ProPhoto, AdobeRGB(1998) and sRGB overlaid for comparison with each other.  There’s also a 4th plot – this is the input space of the Canon 1Dx dslr – in other words, it encompasses all the colours the sensor of that camera can record.

In actual fact, some colours can be recorded by the camera that lie OUTSIDE even the ProPhoto colour space!

But you can clearly see that the Adobe space loses more camera-captured colour than ProPhoto – hence RAW file handlers like Lightroom work in ProPhoto (or strictly speaking MelissaRGB – but that's yet another story!) in order to at least preserve as many of the colours captured by the camera as possible.

Even more camera colour is lost to the sRGB colour space.

So this is why we should always have Photoshop set to a default ProPhoto working space – the archival images we produce will therefore retain as much of the original colours captured by the camera as possible.

If we now turn our attention back to monitors – the windows on to our images – we can now deduce that:

a. If a monitor can only display sRGB at best, then we will only be able to see a small portion of the cameras captured colour.

b. However, if the monitor has a larger colour gamut and a bigger LUT, both in terms of colour spectrum and bit depth, then we will see a lot more of the original capture colours – and the more we can see, the more effectively we can colour manage.

Monitors are available that can display the Adobe colour gamut, and indeed quite a few can display more colours – but if you are on a tight budget these can seem expensive to say the least.

A good monitor that I recommend quite a lot – indeed I use one myself – is the HP LP2475W, well worth the price if you can find one; with a bit of tweaking it will display 98%+ of the AdobeRGB colour space in all three primary colours, and even some of the warmer colours that only ProPhoto contains:

[Diagram: AdobeRGB vs the HP LP2475W display gamut]

The green plot is the Adobe space, the red plot is the HP LP2475W display colour space.

So it’s a good buy if you can find one.

However, there's a catch – there always is!  This monitor relies on the LUT of the graphics card driving it – plugged into the modest 512MB nVidia GT 120 in my Mac Pro it is brilliant, and competes at every level with the likes of Eizo ColorEdge and NEC SpectraView for all practical purposes.  But plugged into the back of a laptop it can only reproduce what the lower specification graphics chips can supply it with.

So there we have it, a simple way to test if your monitor is giving you the best advantage when it comes to processing your images – food for thought?
