Raw File Compression

Today I’m going to give you my point of view on that most vexatious question – is LOSSLESS raw file compression TRULY lossless?

I’m going to upset one heck of a lot of people here, and my chances of Canon letting me have any new kit to test are going to disappear over the horizon at a great rate of knots, but I feel compelled to post!

What prompts me to commit this act of potential suicide?

It’s this shot from my recent trip to Norway:

Direct from Camera

Processed in Lightroom

I had originally intended to shoot Nikon on this trip using a hire 400mm f2.8, but right at the last minute there was a problem with the lens that couldn’t be sorted out in time, so Calumet supplied me with a 1DX and a 200-400 f4 to basically get me out of a sticky situation.

As you should all know by now, the only problems I have with Canon cameras are their limited dynamic range, and Canon’s steadfast refusal to allow for uncompressed raw recording.

The less experienced shooter/processor might look at the shot “ex camera” and be disappointed – it looks like crap, with far too much contrast, overly dark shadows and near-blown highlights.

Shot on Nikon the same image would look more in keeping with the processed version IF SHOT using the uncompressed raw option, which is something I always do without fail; and the extra 3/4 stop dynamic range of the D4 would make a world of difference too.

Would the AF have done as good a job – who knows!

The lighting in the shot is epic from a visual PoV, but bad from a camera exposure one. A wider dynamic range and zero raw compression on my Nikon D4 would allow me to have a little more ‘cavalier attitude’ to lighting scenarios like this – usually I’d shoot with +2/3Ev permanently dialled into the camera.  Overall the extra dynamic range would give me less contrast, and I’d have more highlight detail and less need to bump up the shadow areas in post.

In other words processing would be easier, faster and a lot less convoluted.

But I can’t stress enough just how much detrimental difference LOSSLESS raw file compression CAN SOMETIMES make to a shot.

Now there is a lot – and I mean A LOT – of opinionated garbage written all over the internet on various forums etc about lossless raw file compression, and it drives me nuts.  Some say it’s bad, most say it makes no difference – and both camps are WRONG!

Sometimes there is NO visual difference between UNCOMPRESSED and LOSSLESS, and sometimes there IS.  It all depends on the lighting and the nature of the scene/subject colours and how they interact with said lighting.

The main problem with the ‘it makes no difference’ camp is that they never substantiate their claims; and if they are Canon shooters they can’t – because they can’t produce an image with zero raw file compression to compare their standard lossless CR2 files to!

So I’ve come up with a way of illustrating visually the differences between various levels of raw file compression on Nikon using the D800E and Photoshop.

But before we ‘get to it’ let’s first refresh your understanding.  A camera raw file is basically a gamma 1.0, or LINEAR gamma file:

Linear (top) vs Encoded Gamma

The right hand 50% of the linear gamma gradient represents the brightest whole stop of exposure – that’s one heck of a lot of potential for recording subtle highlight detail in a raw file.

It also represents the area of tonal range that is frequently most affected by any form of raw file compression.
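To put some rough numbers on that – my own illustration, nothing to do with either maker’s actual code – here’s how a linear 14 bit file allocates its code values per stop:

```python
# Why the brightest stop dominates a linear raw file: a 14 bit linear
# encoding has 2**14 = 16384 code values, and each stop down from clipping
# spans half the values of the stop above it.
levels = 2 ** 14
for stop in range(1, 8):
    top = levels // (2 ** (stop - 1))    # upper bound of this stop's values
    bottom = levels // (2 ** stop)       # lower bound of this stop's values
    print(f"Stop {stop} below clipping: values {bottom}..{top - 1} "
          f"({top - bottom} levels)")
# Stop 1 below clipping gets 8192 levels - half of everything the sensor
# records - while stop 7 is left with just 128.
```

Half of all the data in the file describes that top stop – no wonder it’s where compression schemes go hunting.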

Neither Nikon nor Canon will reveal to the world the algorithm-based methods they use for lossless or lossy raw file compression, but it usually works by a process of ‘Bayer Binning’.

The Bayer pattern

If we take a 2×2 block, it contains 2 green, 1 red and 1 blue photosite photon values – if we average the 2 green values and then interpolate new values for red and blue output we will successfully compress the raw file.  But the data will be ‘faux’ data, not real data.
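Here’s a minimal NumPy sketch of that binning idea – purely hypothetical, remember, since neither maker publishes their actual algorithm:

```python
import numpy as np

# Hypothetical 'Bayer binning' on an RGGB mosaic: average the two green
# photosites in each 2x2 block, halving the amount of green data stored.
raw = np.random.randint(0, 2 ** 14, size=(4, 4)).astype(np.float64)

r  = raw[0::2, 0::2]        # red photosites
g1 = raw[0::2, 1::2]        # first green photosite in each 2x2 block
g2 = raw[1::2, 0::2]        # second green photosite
b  = raw[1::2, 1::2]        # blue photosites

g_avg = (g1 + g2) / 2.0     # the averaged - 'faux' - green value
# A raw handler would later interpolate full R/G/B pixels from r, g_avg, b.
```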

The other method we could use is to compress the tonal values in that brightest stop of recorded highlight tone – which is massive, don’t forget – but this will result in a ‘rounding up or down’ of certain bright tonal values, thus potentially reducing some of the more subtle highlight details.

We could also use some variant of the same type of algorithm to ‘rationalise’ shadow detail as well – with pretty much the same result.
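As an equally hypothetical sketch, here’s what ‘rationalising’ that massive top stop might look like – quantising its values to coarser steps while leaving everything darker untouched:

```python
import numpy as np

# Illustrative only: in a 14 bit linear file the brightest stop spans code
# values 8192..16383; round those to coarser steps and subtle highlight
# detail quietly disappears.
step = 16                                        # hypothetical quantisation step
values = np.arange(0, 2 ** 14)
compressed = np.where(values >= 2 ** 13,
                      (values // step) * step,   # round highlight values down
                      values)                    # shadows/midtones kept exact
lost = np.count_nonzero(values != compressed)
print(f"{lost} of {values.size} code values altered")  # 7680 of 16384
```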

In the face of Nikon’s and Canon’s refusal to divulge their methodologies behind raw file compression, especially lossless, we can only guess what is actually happening.

I read somewhere that with lossless raw file compression the compression algorithms leave a trace instruction about what they have done and where they’ve done it, in order that a raw handler program such as Lightroom can actually ‘undo’ the compression effects – that sounds like a recipe for disaster if you ask me!

Personally I neither know nor care – I know that lossless raw file compression CAN be detrimental to images shot under certain conditions, and here’s the proof – after a fashion:

Let’s look at the following files:

Image 1: 14 bit UNCOMPRESSED

Image 2: 14 bit UNCOMPRESSED

Image 3: 14 bit LOSSLESS compression

Image 4: 14 bit LOSSY compression

Image 5: 12 bit UNCOMPRESSED

Yes, there are 2 files shot at identical settings – both 14 bit UNCOMPRESSED – and there’s a reason for that which will become apparent in a minute.

First, some basic Photoshop ‘stuff’.  If I open TWO images in Photoshop as separate layers in the same document, and change the blend mode of the top layer to DIFFERENCE I can then see the differences between the two ‘images’.  It’s not a perfect way of proving my point because of the phenomenon of photon flux.

Photon Flux Andy??? WTF is that?

Well, here’s where shooting two identical 14 bit uncompressed files comes in – they themselves are NOT identical!:

The result of overlaying the two identical uncompressed raw files (above left) – it looks almost black all over indicating that the two shots are indeed pretty much the same in every pixel.  But if I amplify the image with a levels layer (above right) you can see the differences more clearly.

So there you have it – Photon Flux! The difference between two 14 bit UNCOMPRESSED raw files shot at the same time, same ISO, shutter speed AND with a FULLY MANUAL APERTURE.  The only difference between the two shots is the ratio and number of photons striking the subject and being reflected into the lens.
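If you fancy running the same Difference-plus-levels comparison outside Photoshop, a few lines of NumPy will do it.  This assumes two aligned 16 bit TIFF exports of the raws – the filenames are hypothetical:

```python
import numpy as np
import imageio.v3 as iio    # pip install imageio

# Hypothetical filenames: two aligned 16 bit TIFFs exported from the raws.
a = iio.imread("uncompressed_1.tif").astype(np.int64)   # int64 so the
b = iio.imread("uncompressed_2.tif").astype(np.int64)   # subtraction can't wrap

diff = np.abs(a - b)                        # Photoshop's Difference blend mode
amplified = np.clip(diff * 32, 0, 65535)    # crude 'levels' boost so the
                                            # residue becomes visible
iio.imwrite("difference_amplified.tif", amplified.astype(np.uint16))
```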

Firstly, 14 bit UNCOMPRESSED compared to 14 bit LOSSLESS (the important one!):

14 bit UNCOMPRESSED vs 14 bit LOSSLESS

Please remember, the above ‘difference’ image contains photon flux variations too, but if you look carefully you will see greater differences than in the ‘flux only’ image above.

The two images above illustrate the differences between 14 bit uncompressed and 14 bit LOSSY compression (left) and 14 bit UNCOMPRESSED and 12 bit UNCOMPRESSED (right) just for good measure!

In Conclusion

As I indicated earlier in the post, this is not a definitive testing method; sequential shots will always contain a photon flux variation that ‘pollutes’ the ‘difference’ image.

I purposefully chose this white subject with textured aluminium fittings and a blackish LED screen because the majority of sensor response will lie in that brightest gamma 1.0 stop.

The exposure was a constant +1Ev, 1/30th @ f18 and 100 ISO – nearly maximum dynamic range for the D800E, and f18 was set manually to avoid any aperture flicker caused by auto stop down.

You can see from all the ‘difference’ images that the part of the subject that seems to suffer the most is the aluminium part, not the white areas.  The aluminium has a stippled texture causing a myriad of small specular highlights – brighter than the white parts of the subject.

What would 14 bit uncompressed minus 14 bit lossless minus photon flux look like?  In a perfect world I’d be able to show you accurately, but we don’t live in one of those so I can’t!

We can try it using the flux shot from earlier:

14 bit UNCOMPRESSED minus 14 bit LOSSLESS, minus the flux ‘control’

But this is wildly inaccurate as the flux component is not pertinent to the photons at the actual time the lossless compression shot was taken.  But the fact that you CAN see an image does HINT that there is a real difference between UNCOMPRESSED and LOSSLESS compression – in certain circumstances at least.

If you have never used a camera that offers the zero raw file compression option then basically what you’ve never had you never miss.  But as a Nikon shooter I shoot uncompressed all the time – 90% of the time I don’t need to, but it just saves me having to remember something when I do need the option.

Would this 1DX shot be served any better through UNCOMPRESSED raw recording?  Most likely NO – why?  Low Dynamic Range caused in the main by flat low contrast lighting means no deep dark shadows and nothing approaching a highlight.

I don’t see it as a costly option in terms of buffer capacity or on-board storage, and when it comes to processing I would much rather have a surfeit of sensor data rather than a lack of it – no matter how small that deficit might be.

Lossless raw file compression has NO positive effect on your images, and its sole purpose in life is to allow you to fit more shots on the storage media – that’s it, pure and simple.  If you have the option to shoot uncompressed then do so, and buy a bigger card!

What pisses me off about Canon is that it would only take, I’m sure, a firmware upgrade to give the 1DX et al the ability to record with zero raw file compression – and, whether needed or not, it would stop miserable grumpy gits like me banging on about it!


Colour in Photoshop

Understanding colour inside Photoshop is riddled with confusion for the majority of users.  This is due to the perpetual misuse of certain words and terms.  Adobe themselves use incorrect terminology – which doesn’t help!

The aim of this post is to understand the attributes or properties of colour inside the Photoshop environment – “…is that right Andy?”  “Yeh, it is!”

So, the first colour attribute we’re going to look at is HUE:

A colour wheel showing point-sampled HUES (colours) at 30 degree increments.

HUE can be construed as meaning ‘colour’ – or color for the benefit of our American friends “come on guys, learn to spell – you’ve had long enough!”

The colour wheel begins at 0 degrees with pure Red (255,0,0 in 8bit RGB terms), and moves clockwise through all the HUES/colours to end up back at pure Red – simple!

Above, we can see samples of primary red and secondary yellow together with their respective HUE degree values which are Red 0 degrees and Yellow 60 degrees.  You can also see that the colour channel values for Red are 255,0,0 and Yellow 255,255,0.  This shows that Yellow is a mix of Red light and Green light in equal proportions.
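If you fancy checking those numbers yourself, Python’s colorsys module will oblige – note that it wants hue as a fraction of the wheel rather than in degrees:

```python
import colorsys

# Hue in degrees -> 8 bit RGB, at full saturation and full brightness.
for hue_deg in (0, 60, 120, 180, 240, 300):
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
    print(f"{hue_deg:3d} degrees -> RGB({round(r * 255)}, "
          f"{round(g * 255)}, {round(b * 255)})")
# 0 degrees  -> RGB(255, 0, 0)    pure Red
# 60 degrees -> RGB(255, 255, 0)  Yellow = equal Red and Green light
```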

I told you it was easy!

Inside Photoshop the colour wheel starts and ends at 180 degrees CYAN, and is flattened out into a horizontal bar as in the Hue/Saturation adjustment:

Overall, there is no ambiguity over the meaning of the term HUE; it is what it is, and it is usually taken as meaning ‘what colour’ something is.

The same can be said for the next attribute of colour – SATURATION.

Or can it?

How do we define saturation?

Two different SATURATION values (100% & 50%) of the same HUE.

Above we can see two different saturation values for the same HUE (0 degrees Hue, 100% and 50% Saturation). I suppose the burning question is, do we have two different ‘colours’?

As photographers we mainly work with additive colour; that is, we add Red, Green and Blue coloured light to black in order to attain white.  But in the world of painting, for instance, subtractive colour is used; pigments are overlaid on white (thus subtracting white) to make black.  Printing uses the same model – CMY+K inks overlaid on ‘white’ paper …..mmm see here

If we take a particular ‘colour’ of paint and we mix it with BLACK we have a different SHADE of the same colour.  If we instead add WHITE we end up with what’s called a TINT of the same colour; and if we add GREY to the original paint we arrive at a different TONE of the same colour.

Let’s look at that 50% saturated Red again:

Hue Red 0 degrees with 50% saturation.

We’ve basically added 128 Green and 128 Blue to 255 Red. Have we kept the same HUE – yes we have.

Is it the same colour? Be honest – you don’t know do you!

The answer is NO – they are two different ‘colours’, and the hexadecimal codes prove it – those are the hash-tag values ff0000 and ff8080.  But in our world of additive colour we should only think of the word ‘colour’ as a generalisation because it is somewhat ambiguous and imprecise.
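Easily verified – the same 0 degree hue at 100% and 50% saturation really does give those two codes:

```python
import colorsys

def hsb_to_hex(h_deg, s, v):
    """Hue in degrees, saturation and brightness as 0-1 -> hex colour code."""
    r, g, b = colorsys.hsv_to_rgb(h_deg / 360.0, s, v)
    return f"{round(r * 255):02x}{round(g * 255):02x}{round(b * 255):02x}"

print(hsb_to_hex(0, 1.0, 1.0))   # ff0000 - fully saturated red
print(hsb_to_hex(0, 0.5, 1.0))   # ff8080 - same HUE at 50% saturation
```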

But we can quantify the SATURATION of a HUE – so we’re all good up to this point!

So we beaver away in Photoshop in the additive RGB colour mode, but what you might not realise is that we are working in a colour model within that mode, and quite frankly this is where the whole shebang turns to pooh for a lot of folk.

There are basically two colour models for, dare I use the word, ‘normal’ photography work; HSB (also known as HSV) and HSL, and both are cylindrical co-ordinate colour models:

HSB (HSV) and HSL colour models for additive RGB.

Without knowing one single thing about either, you can tell they are different just by looking at them.

All Photoshop default colour picker referencing is HSB – that is Hue, Saturation & Brightness; with equivalent RGB, Lab, CMYK and hexadecimal values:

But in the Hue/Sat adjustment for example, we see the adjustments are HSL:

The HSL model references colour in terms of Hue, Saturation & Lightness – not flaming LUMINOSITY as so many people wrongly think!

And it’s that word luminosity that’s the single largest purveyor of confusion and misunderstanding – luminosity masking, luminosity blending mode are both terms that I and oh so many others use – and we’re all wrong.

I have an excuse – I know everything, but I have to use the wrong terminology otherwise no one else knows what I’m talking about!!!!!!!!!  Plausible story and I’m sticking to it your honour………

Anyway, within Photoshop, HSB is used to select colours, and HSL is used to change them.

The reason for this is somewhat obvious when you take a close look at the two models again:

HSB (HSV) and HSL colour models for additive RGB. (V stands for Value = B in HSB).

In the HSB model look where the “whiteness” information is; it’s radial, and bound up in the ‘S’ saturation co-ordinate.  But the “blackness” information is vertical, on the ‘B’ brightness co-ordinate.  This is great when we want to pick/select/reference a colour.

But surely it would be more beneficial for the “whiteness” and “blackness” information to be attached to the same axis or dimension, especially when we need to increase or decrease that “white” or “black” co-ordinate value in processing – and in HSL they are, both living on the ‘L’ lightness axis.

So within the two models the ‘H’ hue co-ordinates are pretty much the same, but the ‘S’ saturation co-ordinates are different.
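You can see those different ‘S’ co-ordinates in action with colorsys again – with the gotcha that Python calls the second model HLS and returns its values in that order:

```python
import colorsys

rgb = (1.0, 0.5, 0.5)      # our 50% saturated red, as 0-1 floats

h, s, v = colorsys.rgb_to_hsv(*rgb)
print(f"HSB/HSV: H={h * 360:.0f}  S={s:.2f}  B={v:.2f}")   # S=0.50  B=1.00

h, l, s = colorsys.rgb_to_hls(*rgb)      # note the H-L-S return order
print(f"HSL:     H={h * 360:.0f}  S={s:.2f}  L={l:.2f}")   # S=1.00  L=0.75
```

Same pixel, same hue – yet ‘saturation’ is 50% in one model and 100% in the other.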

So this leaves us with that most perennial of questions – what is the difference between Brightness and Lightness?

Firstly, there is a massive visual difference between the Brightness and Lightness information contained within an image, as you will see now:

The ‘Brightness’ channel of HSB.

The ‘L’ channel of HSL

Straight off the bat you can see that there is far more “white detail” information contained in the ‘L’ lightness map of the image than in the brightness map.  Couple that with the fact that Lightness controls both black and white values for every pixel in your image – and you should now be able to comprehend the difference between Lightness and Brightness, and so be better at understanding colour inside Photoshop.

We’ll always use the highly bastardised terms like luminosity, luminance etc – but please be aware that you may be using them to describe something to which they DO NOT APPLY.

Luminosity is a measure of the magnitude of a light source – typically stars; but could loosely be applied to the lumens output power of any light source.  Luminance is a measure of the reflected light from a subject being illuminated by a light source; and varies with distance from said light source – a la the inverse square law etc.

Either way, neither of them have got anything to do with the pixel values of an image inside Photoshop!

But LIGHTNESS certainly does.


Night Sky Imaging

Night Sky Photography – A Brief Introduction

I really get a massive buzz from photographing the night sky – PROPERLY.

By properly I mean using your equipment to the best of its ability, and using correct techniques in terms of both ‘shooting’ and post processing.

The majority of the vast plethora of night sky images on Google etc, and the methods described for shooting them, are, to be frank, PANTS!

Those 800 pixel long-edge jpegs hide a multitude of shooting and processing sins – such as HUGE amounts of sensor noise and the biggest sin of all – elongated stars.

Top quality full resolution imagery of the night sky demands pin-prick stars, not trails that look like blown out sausages – unless of course, you are wanting them for visual effect.

Pin sharp stars require extremely precise MANUAL FOCUS in conjunction with a shutter speed that is short enough to arrest the perceived movement of the night sky across the camera’s field of view.

They also demand that the lens is ‘shot’ pretty much wide open in terms of aperture – this allows the sensor to ‘see and gather’ as many photons of light as possible from each point-source (star) in the night sky.

So we are in the situation where we have to use manual focus and exposure with f2.8 as an approximate working aperture – and high ISO values, because of the demand for a relatively fast shutter speed.

And when it comes to our shutter speed the much-vaunted ‘500 Rule’ needs to be consigned to the waste bin – it’s just not a good enough standard to work to, especially considering modern high megapixel count sensors such as Nikon’s D800E/D810/D810A and Canon’s 5DS.

Leaving the shutter open for just 10 seconds using a 14mm lens will elongate stars EVER SO SLIGHTLY – so the ‘500 Rule’ speed of 500/14 = 35.71 seconds is just going to make a total hash of things.
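If you want to sanity-check that claim, here’s the back-of-an-envelope arithmetic.  The sky drifts at the sidereal rate of roughly 15 arc seconds per second of time, and I’m assuming a D800E pixel pitch of about 4.9 microns – both figures approximate:

```python
import math

focal_mm = 14.0      # lens focal length
pixel_um = 4.9       # approximate D800E pixel pitch, microns

sidereal = 360.0 / 86164.0          # sky drift in degrees per second of time

def trail_px(seconds):
    """Worst-case star trail length in pixels (stars at the celestial equator)."""
    drift_rad = math.radians(sidereal * seconds)
    return focal_mm * 1000.0 * drift_rad / pixel_um

print(f"10 second exposure: {trail_px(10.0):.1f} px of trailing")                 # ~2 px
print(f"'500 Rule' ({500 / focal_mm:.1f} s): {trail_px(500 / focal_mm):.1f} px")  # ~7 px
```

Two pixels of smear at 10 seconds – ever so slight; seven-plus at the ‘500 Rule’ speed – a total hash.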

In the shot below – a crop from the image top left – I’ve used a 10 second exposure, but in preference I’ll use 5 seconds if I can get away with it:

Nikon D800E,14-24 f2.8@14mm,10 seconds exposure,f2.8,ISO 6400
RAW, Unprocessed, Full Resolution Crop

WOW….look at all that noise…well, it’s not going to be there for long folks; and NO, I won’t make it vanish with any Noise Reduction functions or plugins either!

5 consecutive frames put through Starry Landscape Stacker – now we have something we can work with!

Download Starry Landscape Stacker from the App Store:

Huge amounts of ‘noise’ can be eradicated using Median Stacking within Photoshop, but Mac users can circumnavigate the ‘aggro’ of layer alignment and layer masking by using this great ‘app’ Starry Landscape Stacker – which does all the ‘heavy lifting’ for you.  Click the link above to download it from the App Store.  Just ignore any daft iTunes pop-ups and click ‘View in Mac App Store’!
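For the curious, the maths at the heart of median stacking really is that simple – the hard part, which Starry Landscape Stacker automates, is aligning the moving star field first.  A minimal sketch assuming pre-aligned frames and hypothetical filenames:

```python
import numpy as np
import imageio.v3 as iio    # pip install imageio

# Median-stack pre-aligned frames: random sensor noise differs from frame
# to frame, so the per-pixel median suppresses it while real stars survive.
frames = [iio.imread(f"frame_{i}.tif") for i in range(1, 6)]
stack = np.stack(frames).astype(np.float64)

result = np.median(stack, axis=0)       # per-pixel median across the frames
iio.imwrite("median_stacked.tif", result.astype(np.uint16))
```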

I have a demonstration of Median Stacking on my YouTube channel:

This video is best viewed on YouTube in full screen mode.

In a manner of speaking, the ‘shooting aspect’ of Milky Way/Night Sky/Wide-field Astro is pretty straightforward.  You are working within some very hard constraints with little margin for error.

  • The Earth’s rotation makes the stars track across our frame – so this dictates our shutter speed for any given focal length of lens – shorter focal length = longer shutter speed.
  • Sensor megapixel count – more megapixels = shorter shutter speed.
  • We NEED to shoot with a ‘wide open’ aperture, so our ISO speed takes over as our general exposure control.
  • Focusing – this always seems to be the big ‘sticking point’ for most folk – and despite what you read to the contrary, you can’t reliably use the ‘hyperfocal’ method with wide open apertures – it especially will not work with wide-angle zoom lenses!
  • The Earth’s ‘seasonal tilt’ dictates what we can and can’t see from a particular latitude; and in conjunction with time of day, dictates the direction and orientation of a particular astral object such as the Milky Way.
  • Light pollution can mask even the camera’s ability to record all the stars, and it affects the overall scene luminance level.
  • The position and phase of the moon – a full moon frequently throws far too much light into the entire sky – my advice is to stay at home!
  • A moon in between its last quarter and new moon is frequently diagonally opposite the Milky Way, and can be useful for illuminating your foreground.

And there are quite a few other considerations to take into account, like dew point and relative humidity – and of course, the bloody clouds!

The point I’m trying to make is that these shots take PLANNING.

Using applications and utilities like Stellarium and The Photographer’s Ephemeris in conjunction with Google Earth has always been a great way of planning shots.  But for me, the best planning aid is Photopills – especially because of its augmented reality feature.  This allows you to pre-visualise your shot from your current location, and it will compute the dates and times that the shot is ‘on’.

Download Photopills from the App Store:

But it won’t stop the clouds from rolling in!

Even with the very best planning the weather conditions can ruin the whole thing!

I’m hoping that before the end of the year I’ll have a full training video finished about shooting perfect ‘wide field astro’ images – it’ll cover planning as well as BOTH shooting AND processing.

I will show you how to:

  • Effectively use Google Earth in conjunction with Stellarium and Photopills for forward planning.
  • Ensure perfect focus on those stars – every time – the easy way.
  • Shoot for an improved foreground.
  • Decide when, and when NOT, to deploy in-camera LONG EXPOSURE noise reduction – black frame shooting.
  • Process RAW files in Lightroom for correct colour balance.
  • Properly use both Median Stacking in Photoshop and Starry Landscape Stacker to reduce ISO noise.
  • And much more!

One really useful FREE facility on the net is the Light Pollution Map website – I suggest using the latest 2015 VIIRS overlay and the Bing Map Hybrid mode in order to get a rough idea of your foreground and the background light pollution affecting your chosen location.

Don’t forget – if you shoot vertical (portrait?) with a 14mm lens, the top part of the frame can be slightly behind you!


Monitor Brightness.

Monitor Brightness & Room Lighting Levels.

I had promised myself I was going to do a video review of my latest purchase – the Lee SW150 Mk2 system and Big and Little Stopper filters I’ve just spent a king’s ransom on for my Nikon 14-24mm and D800E:

PURE SEX – and I’ve bloody well paid for this! My new Lee SW150 MkII filter system for the Nikon 14-24. Just look at those flashy red anodised parts – bound to make me a better photographer!

But I think that’ll have to wait while I address a question that keeps cropping up lately.  What’s the question?

Well, that’s the tricky bit because it comes in many guises. But they all boil down to “what monitor brightness or luminance level should I calibrate to?”

Monitor brightness is as critical as monitor colour when it comes to calibration.  If you look at previous articles on this blog you’ll see that I always quote the same calibration values, those being:

White Point: D65 – that figure takes care of colour.

Gamma: 2.2 – that value covers monitor contrast.

Luminance: 120 cdm2 (candelas per square meter) – that takes care of brightness.

Simple in’it….?!

However, when you’ve been around all this photography nonsense as long as I have you can overlook the possibility that people might not see things as being quite so blindingly obvious as you do.

And one of those ‘omissions on my part’ has been to do with monitor brightness settings COMBINED with working lighting levels in ‘the digital darkroom’.  So I suppose I’d better correct that failing on my part now.

What does a Monitor Profile Do for your image processing?

A correctly calibrated monitor and its .icc profile do a really simple but very mission-critical job.

If we open a new document in Photoshop and fill it with flat 255 white we need to see that it’s white.  If we hold an ND filter in front of our eye then the image won’t look white, it’ll look grey.

If we hold a blue filter in front of our eye the image will not look white – it’ll look blue.

That white image doesn’t exist ‘inside the monitor’ – it’s on our computer!  It only gets displayed on the monitor because of the graphics output device in our machine.

So, if you like, we’re on the outside looking in; and we are looking through a window on to our white image.  The colour and brightness level in our white image are correct on the inside of the system – our computer – but the viewing window or monitor might be too bright or too dark, and/or might be exhibiting a colour tint or cast.

Unless our monitor is a totally ‘clean window’ in terms of colour neutrality, then our image colour will not be displayed correctly.

And if the monitor is not running at the correct brightness then the colours and tones in our images will appear to be either too dark or too bright.  Please note the word ‘appear’…

Let’s get a bit fancy and make a greyscale in Photoshop:

The dots represent Lab 50 to Lab 95 – the most valuable tonal range between midtone and highlight detail.

Look at the distance between Lab 50 & Lab 95 on the three greyscales above – the biggest ‘span’ is on the correctly calibrated monitor.  In both the ‘too bright & contrasty’ and the ‘too dark low contrast’ calibrations, that valuable tonal range is compressed.

In reality the colours and tones in, say an unprocessed RAW file on one of our hard drives, are what they are.  But if our monitor isn’t calibrated correctly, what we ‘see’ on our monitor IS NOT REALITY.

Reality is what we need – the colours and tones in our images need to be faithfully reproduced on our monitor.

And so basically a monitor profile ensures that we see our images correctly in terms of colour and brightness; it ensures that we look at our images through a clean window that displays 100% of the luminance being sent to it – not 95% and not 120% – and that all our primary colours are being displayed with 100% fidelity.

In a nutshell, on an uncalibrated monitor, an image might look like crap, when in reality it isn’t.  The shit really starts to fly when you start making adjustments in an uncalibrated workspace – what you see becomes even further removed from reality.

“My prints come out too dark Andy – why?”

Because your monitor is too bright – CALIBRATE it!

“My pics look great on my screen, but everyone on Nature Photographers Network keeps telling me they’ve got too much contrast and they need a levels adjustment.  One guy even reprocessed one – everyone thought his version was better, but frankly it looked like crap to me – why is this happening Andy?”

Because your monitor brightness is too low but your gamma is too high – CALIBRATE it!  If you want your images to look like mine then you’ve got to do ALL the things I do, not just some of ’em – do you think I do all this shit for fun??????????……………grrrrrrr….

But there’s a potential problem; just because your monitor is calibrated to perfection, that does NOT mean that everything will be golden from this point on.

Monitor Viewing Conditions

So we’re outside taking a picture on a bright sunny day, but we can’t see the image on the back of the camera because there’s too much daylight, and we have to dive under a coat with our camera to see what’s going on.

But if we review that same image on the camera in the dark then it looks epic.

Now you have all experienced that…….

The monitor on the back of your camera has a set brightness level – if we view the screen in a high level of ambient light the image looks pale, washed out and in a general state of ultra low contrast.  Turn the ambient light down and the image on the camera screen becomes more vivid and the contrast increases.

But the image hasn’t changed, and neither has the camera monitor.

What HAS changed is your PERCEPTION of the colour and luminance values contained within the image itself.

Now come on kids – join the dots will you!

It does not matter how well your monitor is calibrated, if your monitor viewing conditions are not within specification.

Just like with your camera monitor, if there is too much ambient light in your working environment then your precisely calibrated monitor brightness and gamma will fail to give you a correct visualization or ‘perception’ of your image.

And the problems don’t end there either; coloured walls and ceilings reflect that colour onto the surface of your monitor, as does that stupid luminous green shirt you’re wearing – yes, I can see you!  And if you are processing on an iMac then THAT problem just got 10 times worse because of the glossy screen!

Nope – bead-blasting your 27 inches of Apple goodness is not the answer!

Right, now comes the serious stuff, so READ, INGEST and ACT.

ISO Standard 3664:2009 is the puppy we need to work to (sort of) – you can actually go and purchase this publication HERE should you feel inclined to dump 138 CHF on 34 pages of light bedtime reading.

There are actually two ISO standards that are relevant to us as image makers; ISO 12646:2015(draft) being the other.

12646 pertains to digital image processing where screens are to be compared to prints side by side (that does not necessarily refer to ‘desktop printer prints from your Epson 3000’).

3664:2009 applies to digital image processing where screen output is INDEPENDENT of print output.

We work to this standard (for the most part) because we want to process for the web as well as for print.

If we employ a print work flow involving modern soft-proofing and otherwise keep within the bounds of 3664 then we’re pretty much on the dance-floor.

ISO 3664 sets out one or two interesting and highly critical working parameters:

Ambient Light White Point: D50 – that means that the colour temperature of the light in your editing/working environment should be 5000 Kelvin (not your monitor) – and in particular this means the light FALLING ON TO YOUR MONITOR from within your room.  So room décor has to be colour neutral as well as the light source.

Ambient Light Value in your Editing Area: 32 to 64 Lux or lower.  Now this is what shocks so many of you guys – lower than 32 lux is basically processing in the dark!

Ambient Light Glare Permissible: 0 – this means NO REFLECTIONS on your monitor and NO light from windows or other light sources falling directly on the monitor.

Monitor White Point – D65 (under 3664) and D50 (under 12646) – we go with D65.

Monitor Luminance – 75 to 100 cdm2 (under 3664) and 80 to 120 cdm2 (under 12646) – here we begin to deviate from 3664.

We appear to be dealing with mixed reference units here.  Lux measures the light falling on a surface (illuminance), while cdm2 – candelas per square metre – measures the light a surface emits or reflects towards your eye (luminance).  They are related, but not interchangeable: for a matte, evenly lit surface, luminance is roughly illuminance times surface reflectance, divided by pi.
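As a rough worked example of that relationship – a Lambertian approximation with made-up numbers, not anything from the ISO documents:

```python
import math

# Approximate luminance of a matte (Lambertian) surface:
#   luminance (cd/m^2) ~= illuminance (lux) * reflectance / pi
ambient_lux = 40.0     # a correctly subdued editing room
reflectance = 0.5      # hypothetical mid-grey wall

luminance = ambient_lux * reflectance / math.pi
print(f"{luminance:.1f} cd/m^2")   # ~6.4 - way below a 120 cd/m^2 monitor
```

Which is the point: keep the room that dim and the monitor remains comfortably the brightest thing in your field of view.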

The way Monitor Brightness or Luminance relates to ambient light levels is perhaps a little counter-intuitive for some folk.  Basically the LOWER your editing area Lux value the LOWER your Monitor Brightness or luminance needs to be.

Now comes the point in the story where common sense gets mixed with experience, and the outcome can be proved by looking at displayed images and prints; aesthetics as opposed to numbers.

Like all serious photographers I process my own images on a wide-gamut monitor, and I print on a wide-gamut printer.

Wide gamut monitors display pretty much 90% to 100% of the AdobeRGB1998 colour space.

What we might refer to as Standard Gamut monitors display something a little larger than the sRGB colour space, which as we know is considerably smaller than AdobeRGB1998.

Left is a standard gamut/sRGB monitor and right is a typical wide gamut/AdobeRGB1998 monitor – if you can call any NEC ‘typical’!

Find all the gory details about monitors on this great resource site – TFT Central.

At workshops I process on a 27 inch non-Retina iMac – this is to all intents and purposes a ‘standard gamut’ monitor.

I calibrate my monitors with a ColorMunki Photo – which is a spectrophotometer.  Spectros have a tendency to be slow, and slightly problematic in the very darkest tones, and exhibit something of a low contrast reaction to ‘blacks’ below around Lab 6.3 (RGB 20,20,20).

If you own a ColorMunki Display or i1 Display you do NOT own a spectro, you own a colorimeter!  A very different beast in the way it works, but from a colour point of view they give the same results as a spectro of the same standard – plus, for the most part, they work faster.

However, from a monitor brightness standpoint, they differ from spectros in their slightly better response to those ultra-dark tones.

So from a spectrophotometer standpoint I prefer to calibrate to the ISO 12646 standard of 120 cdm2 and control my room lighting to around 35-40 Lux.

Just so that you understand just how ‘nit-picking’ these standards are, the difference between 80 cdm2 and 120 cdm2 is just over half a stop Ev in camera exposure terms – log2(120/80) ≈ 0.58!

However, to put this monitor brightness standard into context, my 27 inch iMac came from Apple running at 290 cdm2 – and cranked up fully it’ll thump out 340 cdm2.

Most stand-alone monitors you buy, especially those that fall under the ‘standard gamut’ banner, will all be running at massively high monitor brightness levels and will require some severe turning down in the calibration process.

You will find that most monitor tests and reviews are done with calibration to the same figures that I have quoted – D65, 120cdm2 and Gamma 2.2 – in fact this non-standard set up has become so damn common it is now ‘standard’ – despite what the ISO chaps may think.

Using these values, printing out of Lightroom for example, becomes a breeze when using printer profiles created to the ICC v2 standard as long as you ‘soft proof’ the image in a fit and proper manner – that means CAREFULLY, take your time.  The one slight shortcoming of the set up is that side by side print/monitor comparisons may look ever so slightly out of kilter because of the D65 monitor white point – 6,500K transmitted white point as opposed to a 5,000K reflective white point.  But a shielded print-viewer should bring all that back into balance if such a thing floats your boat.

But the BIG THING you need to take away from this rather long article is the LOW LUX VALUE of your editing/working area ambient illumination.

Both the ColorMunki Photo and i1Pro2 spectrophotometers will measure your ambient light, as will the ColorMunki Display and i1 Display colorimeters, to name but a few.

But if you measure your ambient light and find the device gives you a reading of more than 50-60 lux then DO NOT ask the device to profile for your ambient light; in fact I would not recommend doing this AT ALL – here’s why.

I have a main office light that is colour corrected to 5000K and it chucks out 127 Lux at the monitor.  If I select the ‘measure and calibrate to ambient’ option on the ColorMunki Photo it eventually tells me I need a monitor brightness or luminance of 80 cdm2 – the only problem is that it gives me the same figure if I drop the ambient lux value to 100.

Now that smells a tad fishy to me……..

So my advice to anyone is to remove the variables, calibrate to 120 cdm2 and work in a very subdued ambient condition of 35 to 40 Lux. I find it easier to control my low lux working ambient light levels than bugger about with over-complex calibration.

To put a final perspective on this figure there is an interesting page on the Apollo Energytech website which quotes the lux levels that comply with the law for different work environments – so don’t go to B&Q or Walmart to do a spot of processing; and seeing as our editing dens fall well short of legal workplace lighting levels, we’re all going to end up doing hard time at Her Madge’s Pleasure – law breakers that we are!


HDR in Lightroom CC (2015)

Lightroom CC (2015) – exciting stuff!

New direct HDR MERGE for bracketed exposure sequences inside the Develop Module of Lightroom CC 2015 – nice one Adobe!  I can see Eric Chan’s finger-prints all over this one…!

Twilight at Porth Y Post, Anglesey.

After a less than exciting 90 minutes on the phone with Adobe this very morning – that’s about 10 minutes of actual conversation and an eternity of crappy ‘Muzak’ – I’ve managed to switch from my expensive old single app PsCC subscription to the Photography Plan – yay!

They wouldn’t let me upgrade my old stand-alone Lr4/Lr5 to Lr6 ‘on the cheap’ so now they’ve given me two apps for half the price I was paying for 1 – mental people, but I’ll not be arguing!

I was really eager to try out the new internal ‘Merge’ script/command for HDR sequences – and boy am I impressed.

I picked a twilight seascape scene I shot last year:


I’ve taken the 6 shot exposure bracketed sequence of RAW files above into the Develop Module of Lightroom CC and done 3 simple adjustments to all 6 under Auto Sync:

  1. Change camera profile from Adobe Standard to Camera Neutral.
  2. ‘Tick’ Remove Chromatic Aberration in the Lens Corrections panel.
  3. Change the colour temperature from ‘as shot’ to a whopping 13,400K – this neutralises the huge ‘twilight’ blue cast.

You have to remember that NOT ALL adjustments you can make in the Develop Module will carry over in this process, but these 3 will.


Ever since Lr4 came out we have had the ability to take a bracketed sequence in Lightroom and send them to Photoshop to produce what’s called a ’32 bit floating point TIFF’ file – HDR without any of the stupid ‘grunge effects’ so commonly associated with the more normal styles of HDR workflow.

The resulting TIFF file would then be brought back into Lightroom where some very fancy processing limits were given to us – namely the exposure latitude above all else.

‘Normal’ range images, be they RAW or TIFF etc, have a potential 10 stops of exposure adjustment, +5 to -5 stops, both in the Basics Panel, and with Linear and Radial graduated filters.

But 32 bit float TIFFs had a massive 20 stops of adjustment, +10 to -10 stops – making for some very fancy and highly flexible processing.

Now then, what’s a ‘better’ file type than pixel-based TIFF?  A RAW file……


So, after selecting the six RAW images, right-clicking and selecting ‘Photomerge>HDR’…


…and selecting ‘NONE’ from the ‘de-ghost’ options, I was amazed to find the resulting ‘merged file’ was a DNG – not a TIFF – yet it still carries the 20 stop exposure adjustment latitude.


This is the best news for ages, and grunge-free, ‘real-looking’ HDR workflow time has just been axed by at least 50%.  I can’t really say any more about it, except that, IMHO of course, this is the best thing to happen to Adobe RAW workflow since the advent of PV2012 itself – BRILLIANT!

Note: Because all the shots in this sequence featured ‘blurred water’, applying any de-ghosting would be detrimental to the image, causing some weird artefacts where water met static rocks etc.

But if you have image sequences that have moving objects in them you can select from 3 de-ghost pre-sets to try and combat the artefacts caused by them, and you can check the de-ghost overlay tick-box to pre-visualise the de-ghosting areas in the final image.


Switch up to Lightroom CC 2015 – it’s worth it for this facility alone.



Image Sharpness

I spent the other afternoon in the Big Tower at Gigrin, in the very pleasant company of Mr. Jeffrey “Jeffer-Cakes” Young.  Left arm feeling better yet Jeff?

I think I’m fairly safe in saying that once feeding time commenced at 3pm it didn’t take too long before Jeff got a firm understanding of just how damn hard bird flight photography truly is – if you are shooting for true image sharpness at 1:1 resolution.

I’d warned Jeff beforehand that his Canon 5Dmk3 would make his session somewhat more difficult than a 1Dx, due to its slightly less tractable autofocus adjustments.  But with his 300mm f2.8 – even with the 1.4x converter mounted – his equipment was easily up to the job at hand.

I on the other hand was back on the Nikon gear – my 200-400 f4; but using a D4S I’d borrowed from Paul Atkins for some real head-to-head testing against the D4 (there’s a barrow load of Astbury venom headed Nikon’s way shortly I can tell you….watch this space as they say).

Amongst the many topics discussed and pondered upon, I was trying to explain to Jeff the fundamental difference between ‘perceived’ and ‘real’ image sharpness.

Gigrin is a good place to find vast armies of ‘photographers’ who have ZERO CLUE that such an argument or difference even exists.

As a ‘teacher’ I can easily tell when I’m sharing hide space with folk like this because they develop quizzical frowns and slightly self-righteous smirks as they eavesdrop on the conversation between my client and me.

“THEY” don’t understand that my client is wanting to achieve the same goal as the one I’m always chasing after; and that that goal is as different from their goal as a fillet of oak-smoked Scottish salmon is from a tin of John West mush.

I suppose I’d better start explaining myself at this juncture; so below are two 800 pixel long edge jpeg files that you typically see posted on a nature photography forum, website or blog:

IMAGE 1. Red Kite – Nikon D4S+200-400 f4


IMAGE 2. Red Kite – Nikon D4S+200-400 f4

“THEY” would be equally as pleased with either…..!

Both images look pretty sharp, well exposed and have pretty darn good composition from an editorial point of view too – so we’re all golden aren’t we!

Or are we?

Both images would look equally as good in terms of image sharpness at 1200 pixels on the long edge, and because I’m a smart-arse I could easily print both images to A4 – and they’d still look as good as each other.

But, one of them would also readily print to A3+ and in its digital form would get accepted at almost any stock agency on the planet, but the other one would most emphatically NOT pass muster for either purpose.

That’s because one of them has real, true image sharpness, while the other has none; all its image sharpness is perceptual, and artificially induced through image processing.

Guessed which is which yet?

IMAGE 1 at 1:1 native resolution

Image 1. has true sharpness because it is IN FOCUS.

IMAGE 2 at 1:1 native resolution

And you don’t need glasses to see that image 2 is simply OUT OF FOCUS.

The next question is: which image is the cropped one – number 2?

Wrong…it’s number 1…

Image 1 uncropped is 4928 pixels long edge, and cropped is 3565, in other words a 28% crop, which will yield a 15+ inch print without any trouble whatsoever.

Image 2 is NOT cropped – it has just been SHRUNK to around 16% of its original size in the Lightroom export utility, with standard screen output sharpening.  So you can make a ‘silk purse from a sow’s ear’ – and no one would be any the wiser, as long as they never saw anything approaching the full resolution image!

Given that both images were shot at 400mm focal length, it’s obvious that the bird in image 1 (now you know it’s cropped a bit) is FURTHER AWAY than the bird in image 2.

So why is one IN FOCUS and the other not?

The bird in image 1 is ‘crossing’ the frame more than it is ‘closing in’ on the camera.

The bird in image 2 is closer to the camera to begin with, and is getting closer by the millisecond.

These two scenarios impose totally different work-loads on the autofocus system.

The ability of the autofocus system to cope with ANY imposed work-load is totally dependent upon the control parameters you have set in the camera.

The ‘success’ rate of these adjustable autofocus parameter settings is affected by:

  1. Changing spatial relationship between camera and subject during a burst of frames.
  2. Subject-to-camera closing speed.
  3. Pre-shot tracking time.
  4. Frame rate.

And a few more things besides…!

The autofocus workloads for images 1 & 2 are poles apart, but the control parameter settings are identical.

The Leucistic Red Kite in the shot below is chugging along at roughly the same speed as its non-leucistic cousin in image 2. It’s also at pretty much the same focus distance:

Image 3. Leucistic Red Kite – same distance, closing speed and focal length as image 2.

So why is image 3 IN FOCUS when, given a similar scenario, image 2 is out of focus?

Because the autofocus control parameters are set differently – that’s why.

FACT: no single combination of autofocus control parameter settings will be your ‘magic bullet’ and give you nothing but sharp images with no ‘duds’ – unless you use a 12mm fish-eye lens that is!

Problems and focus errors INCREASE in frequency in direct proportion to increasing focal length.

They will also increase in frequency THE INSTANT you switch from a prime lens to a zoom lens, especially if the ‘zoom ratio’ exceeds 3:1.

Then we have to consider the accuracy and speed of the cameras autofocus system AND the speed of the lens autofocus motor – and sadly these criteria generally become more favourable with an increased price tag.

So if you’re using a Nikon D800 with an 80-400, or a Canon 70D with a 100-400 then there are going to be more than a few bumps in your road.  And if you stick to just one set of autofocus control settings all the time then those bumps are going to turn into mountains – some of which are going to kill you off before you make their summit….metaphorically speaking of course!

And God forbid that you try this image 3 ‘head on close up’ malarkey with a Sigma 50-500 – if you want that level of shot quality then you might just as well stay at home and save yourself the hide fees and petrol money !

Things don’t get any easier if you do spend the ‘big bucks’ either.

Fast glass and a pro body ‘speed machine’ will offer you more control adjustments for sure.  But that just means more chances to ‘screw things up’ unless you know EXACTLY how your autofocus system works, exactly what all those different controls actually DO, and you know how to relate those controls to what’s happening in front of you.

Whatever lens and camera body combination any of us use, we have to first of all find, then learn to work within, its ‘effective envelope of operation’ – and by that I mean the REAL one, which is not necessarily always on a par with what the manufacturer might lead you to believe.

Take my Nikon 200-400 for example.  If I used autofocus on a static subject, let alone a moving one, at much past 50 metres using the venerable old D3 body and 400mm focal length, things in the critical image sharpness department became somewhat sketchy to say the least.  But put it on a D4 or D4S and I can shoot tack sharp focussing targets at 80 to 100 metres all day long……not that I make a habit of this most meaningless of photographic pastimes.

That discrepancy is due to the old D3 autofocus system lacking the ability to accurately discriminate between distances from much over 50 metres out to infinity when that particular lens was being used.  But swap the lens out for a 400 f2.8 prime and things were far better!

Using the lens on either a D4 or D4S on head-on fast moving/closing subjects such as Mr.Leucistic above, we hit another snag at 400mm – once the subject is less than 20 metres away the autofocus system can’t keep up and the image sharpness effectively drops off the proverbial cliff.  But zoom out to 200mm and that ‘cut-off’ distance will reduce to 10 metres or so. Subjects closing at slower speeds can get much closer to the camera before sharp focus begins to fail.

As far as I’m concerned this problem is more to do with the speed of the autofocus motor inside the lens than anything else.  Nikon brought out an updated version of this lens a few years back – amongst its ‘star qualities’ was a new nano-coating that stopped the lens from flaring.  But does it focus any faster – does it heck!  And my version doesn’t suffer from flare either….!

Getting to know your equipment and how it all works is critical if you want your photography to improve in terms of image sharpness.

Shameless Plug Number 1.

I keep mentioning it – my ebook on Canon & Nikon Autofocus with long glass.

Understanding Canon & Nikon Autofocus for Bird in Flight Photography


View Autofocus Points in Lightroom

Mr. Malcolm Clayton sent me a link last week to a free plug-in for Lightroom that displays the autofocus points used for the shot, plus other very useful information such as focus distance, f-number and shutter speed, depth of field (DoF) values and other bits and bobs.

The plug-in is called “Show Focus Points” and you can download it HERE

Follow the installation instructions to the letter!

Once installed you can only launch it from the LIBRARY MODULE:

Accessing the Plug-in via the Library>Plug-in Extras menu

You will see this sort of thing:

The "Show Focus Points" for Lightroom plug-in window.

The “Show Focus Points” for Lightroom plug-in window. CLICK to view LARGER.

It’s a useful tool to have because, short of running the rather clunky Canon DPP or Nikon ViewNX software, it’s the easiest way of getting hold of autofocus information without sending the image to Photoshop and looking through the mind-numbing RAW schema data – something I do out of habit!

It displays a ton of useful data about your camera focus settings and exposure, and the autofocus point used – be it set by you, or chosen by the camera.

As far as I can see, the plug-in only displays the main active autofocus point on Nikon D4 and D4S files, but all the autofocus group points as well as the active points seem to display when viewing .CR2 Canon files, as we can see on this very impressive car number plate!:

Screen grab of an unprocessed 1Dx/200-400/TC shot I did while testing the tracking capabilities of the Canon lens with the TC active – the REAL image looks more impressive than this!  I’m actually zooming out while tracking too – this is around 200mm + the 1.4x TC.

Canon 1Dx in AI Servo AF Point Expansion 4 point; what I call “1 with 4 friends”.

Canon 1Dx in AI-F autofocus showing all autofocus points used by the camera.

Viewing your autofocus points is a very valid learning tool when trying to become familiar with your camera’s autofocus, and it’s also handy if you want to see why and where you’ve “screwed the pooch” – hey, we ALL DO IT from time to time!

Useful tool to have IMO and it’s FREE – Andy likes free…

Cheers to Malc Clayton for bringing this to my attention.


Autofocus Drill-down

Long Lens Autofocus Considerations.

If you read my previous post about the 1Dx sensor you will have seen that I mentioned my, as yet unfinished, tome about long lens autofocus for wildlife photography.  It’s a frustrating project because I keep having to change various bits to make them simpler, re-order certain paragraphs etc.

But I thought I’d blog-post something here that I expand on in the project, and it’s something an awful lot of people NEVER take into consideration.

As a Nikon user I’m used to the vagaries of the Nikon AF system and I manage to work with it just fine – I have to!

But photographers who don’t shoot wildlife, and don’t use 400mm or 500mm lumps of glass as their “standard lens” might not find the vagaries I bitch about quite so apparent; indeed some might not come across them at all.

As a wildlife photographer I shoot in crappy light, I shoot with slow lenses (both in terms of f-number and focus speed), I shoot low contrast subjects on equally low contrast backgrounds, I’m constantly shooting brown-on-brown, grey on grey etc, I shoot stupidly small subjects….the list goes on!

For years, good wildlife photography has been done by pushing camera/lens capabilities beyond their performance design parameters; and this particularly applies to our “expectations” of our latest and greatest AF system – be it Canon or Nikon.

I find so many people who come to my workshops etc. are not even aware of this one simple fact – sharp focus requires more work AND increased speed of work by the lens AF motor the closer a subject is to the camera.

Just try looking at the distance delineations on the focusing ring of a lens:

Canon 200-400 focused at 20 meters.

Canon 200-400 focused at 20 meters. (Lens porn WARNING: This lens will cause movements in the front-of-trouser department).

Look at the scale and note the distance between 20m and 50m marks – that distance is indicative of the amount of work required of the autofocus controller and motor to move from 20m to 50m or vice versa.

Now look where the 10m mark is – it requires FAR MORE work from the focus controller and motor to move from 20m to 10m, than it did to move the 30 meters from 50m to 20m.

On top of that extra work there’s the speed issue: a subject approaching at 10 meters per second takes 3 seconds to travel from 50m to 20m, but only 1 second to travel from 20m to 10m – so the focus mechanism has to deliver an even bigger chunk of travel in a third of the time.
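If you want to put rough numbers on that, a simple thin-lens sketch makes the point – real internal-focus designs move their glass differently, but the non-linearity is what matters. Assuming a 400mm prime purely for illustration:

    # Thin-lens sketch of focus travel vs subject distance. Real internal-
    # focus lenses move their groups differently, but the non-linearity is
    # the point: extension beyond infinity focus = f^2 / (d - f).
    def extension_mm(focal_m, distance_m):
        return 1000 * focal_m ** 2 / (distance_m - focal_m)

    f = 0.4  # a 400mm lens, purely for illustration
    for d in (50, 20, 10):
        print(f"{d:>2} m -> {extension_mm(f, d):5.2f} mm of focus travel")

    # Subject closing at 10 m/s: 50m->20m takes 3 s, 20m->10m takes 1 s.
    far = extension_mm(f, 20) - extension_mm(f, 50)
    near = extension_mm(f, 10) - extension_mm(f, 20)
    print(f"50->20 m: {far:.1f} mm in 3 s ({far / 3:.1f} mm/s)")
    print(f"20->10 m: {near:.1f} mm in 1 s ({near:.1f} mm/s)")

Run that and the 50m-to-20m leg needs around 1.6mm per second of focus travel, while the 20m-to-10m leg needs about 8.5mm per second – more than five times the speed, on top of the extra distance.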

Then you wonder why your Nikon D40 + Sigma 50-500mm is crap at doing “birds in flight”; you never realise that your autofocus system is a bag of spanners powered by a hamster on a wheel… it’s just not fast enough, kids.

Autofocus accuracy is nothing without speed if you are wanting to do productive wildlife photography.

As I alluded to before, as wildlife photographers you and I will always encounter problems that users in other photographic disciplines may not; or if they do, the problems have a lot less impact for them than they do for us.

Think of it this way – a sports photographer will use a 500mm f4 to photograph a 6 foot tall overpaid git who’s 25m to 70m away, on a sunny Saturday afternoon or under a squillion watts of flood lighting; and he’s looking for a 6×12 for the back page of the Sunday Sport.  I’ll use the same lens to photograph a cute Red Squirrel at 5m to 7m in a gloomy wood in the middle of winter and I’m looking for a full size, full resolution image for stock.

Red Squirrel – this is basically the FURTHEST DISTANCE you could shoot at with a 500mm lens and still get a meaningful composition.

Note the distance – 631/100 – that’s the EXIF way of saying 6.31 meters. Aperture is f8, so DoF is around 7 centimeters.

The image is UNCROPPED, as are all the other images in this post.

We don’t really want to be any further away because “his cuteness” will be too small in the frame:

The factors affecting subject distance choice are:

  1. lens resolving power – small, fine details need to be as close as possible.*
  2. sensor resolving power – we need as many pixels as possible covering the subject.*
  3. autofocus point placement accuracy – if the subject is too small in the frame, point placement is inaccurate.
  4. general “in camera” composition.

*These two are inextricably intertwined.

I’ve indicated the active focus point on the above image too, because here’s a depth of field “point of note” – autofocus wastes DoF.  Where is the plane of focus? Right between the eyes of the squirrel.

Assuming the accepted modern norm of DoF distribution – 50/50 – that’s 3.5 centimeters in front of the plane of focus, or indicated AF point, that will be sharp.  The only problem is that the squirrel’s nose is just around 1 centimeter closer to the camera than the AF point, so the remaining 2.5 centimeters of DoF is wasted on a sharp rendition of the fresh air between its nose and the camera!!

Now let’s change camera orientation and go a bit closer to get the very TIGHTEST shot composition:

Red Squirrel – this is basically the CLOSEST DISTANCE you could shoot at with a 500mm lens and still get a meaningful composition.

The subject distance is 5.62 meters. Aperture is f6.3 so DoF is around 4.4 centimeters.

Now let’s change photographic hats and imagine we are a sports photographer and we are spending a Saturday afternoon photographing a bunch of over-paid 6 foot tall gits chasing a ball around a field, using the very same camera and lens:

He’s not over-paid or chasing a ball, but this is the CLOSEST distance we can shoot at with this orientation and still get a “not too tight” composition of a 6 foot git! “Shep’s” not a git really – well, not much!

The distance for this shot is 29.9 meters. Aperture is f6.3 so DoF is around 1.34 meters.

And here we are at the CLOSEST distance for this horizontal camera orientation – still not too tight.

The distance here is 50.1 meters. Aperture is f6.3 so DoF is around 3.79 meters.
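Incidentally, all four DoF figures above can be sanity-checked with the standard near/far limit formulas. Here’s a rough Python sketch assuming a 0.03mm full-frame circle of confusion – different DoF calculators will disagree by a centimeter or two:

    # Rough check of the DoF figures quoted above, using the standard
    # near/far limit formulas and an assumed 0.03 mm full-frame circle
    # of confusion - different calculators will disagree slightly.
    def dof_limits(f_mm, N, d_m, c_mm=0.03):
        f_m = f_mm / 1000
        H = f_mm ** 2 / (N * c_mm) / 1000 + f_m  # hyperfocal distance in meters
        near = d_m * (H - f_m) / (H + d_m - 2 * f_m)
        far = d_m * (H - f_m) / (H - d_m)
        return near, far

    for N, d in ((8.0, 6.31), (6.3, 5.62), (6.3, 29.9), (6.3, 50.1)):
        near, far = dof_limits(500, N, d)
        print(f"500mm f/{N} at {d} m: DoF ~ {100 * (far - near):.1f} cm "
              f"({100 * (d - near):.1f} cm in front, {100 * (far - d):.1f} cm behind)")

It also prints the front/behind split – which, at these telephoto distances, really is close to the 50/50 norm I mentioned with the squirrel.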

So with this new “sports shooter” hat on, have we got an easier job than the cold, wet squirrel photographer?

You bet your sweet life we have!

The “Shepster” can basically jump around and move about like an idiot on acid and stay in sharp focus because:

  1. the depth of field at those distances is large.
  2. more importantly, the autofocus has VERY little work to do along the lens axis, because 1 or 2 meters of subject movement closer to the camera requires very small movements of the lens focus mechanicals.

But the poor wildlife photographer with his cute squirrel has a much harder time getting good sharp shots because:

  1. he/she has got little or no depth of field
  2. small subject movements along the lens axis require very large and very fast movement of the lens focus mechanicals.

So the next time you watch a video by Canon or Nikon demonstrating the effectiveness of their new AF system on some new camera body or other, or you go trawling the internet looking for what AF settings the pros use, just bear in mind that “one man’s fruit may be another man’s poison” – simply because he or she photographs bigger subjects at longer average distances.

Equipment choice, and the manner of its deployment and use, is just not a level playing field, is it… but it’s something a lot of folk don’t realise or think about.

And how many folk would ever consider that a desired “in camera” image composition has such a massive set of implications for autofocus performance – not many – but if you put your brain in gear it’s blindingly obvious.


Colormunki Photo Update

Both my Mac Pro and non-Retina iMac used to be on Mountain Lion, or OS X 10.8, and nope, I never updated to Mavericks as I’d heard so many horror stories, and I basically couldn’t be bothered – hey, if it ain’t broke don’t fix it!

But I wanted to install Capture One Pro on the iMac for its live-view capabilities – studio product shot lighting training being the biggest draw on that score.

So I downloaded the 60 day free trial and, whadyaknow, I can’t install it on anything lower than OS X 10.9!

Bummer, thinks I – so I upgrade the iMac to OS X 10.10 – YOSEMITE.

Now I was quite impressed with the upgrade and I had no problems in the aftermath of the Yosemite installation; so after a week or so muggins here decided to do the very same upgrade to his late 2009 Mac Pro.

OHHHHHHH DEARY ME – what a pig’s ear of a move that turned out to be!

Needless to say, I ended up making a Yosemite boot installer and setting up on a fresh HDD.  After re-installing all the necessary software like Lightroom and Photoshop, iShowU HD Pro and all the other crap I use, the final task arrived of sorting colour management out and profiling the monitors.

So off we trundle to X-Rite and download the Colormunki Photo software – v1.2.1.  I then proceeded to profile the 2 monitors I have attached to the Mac Pro.

Once the colour measurement stage got underway I started to think that it was all looking a little different and perhaps a bit more comprehensive than it did before.  Anyway, once the magic had been done and the profile saved I realised that I had no way of checking the new profile against the old one – t’was on the old hard drive!

So I go to the iMac and bring up the Colormunki software version number – 1.1.1 – and tell the software to check for updates. “None available” came the reply.

Colormunki software downloads

Colormunki v1.2.1 for Yosemite

So I download 1.2.1, remove the 1.1.1 software and restart the iMac as per X-Rite’s instructions, and then install said 1.2.1 software.

Once installation was finished I profiled the iMac and found something quite remarkable!

Check out the screen grab below:

iMac screen profile comparisons.

On the left is a profile comparison done in the ColorThink 2D grapher, and on the right one done in the iMac’s own ColorSync Utility.

In the left image the RED gamut projection is the new Colormunki v1.2.1 profile. This also corresponds to the white mesh grid in the ColorSync image.

Now, the smaller WHITE gamut projection was produced with an i1Pro 2 using the maximum number of calibration colours; this corresponds to the coloured projection in the ColorSync window image.

The GREEN gamut projection is the supplied iMac system monitor profile – which is slightly “pants” due to its obviously smaller size.

What’s astonished me is that the Colormunki Photo with the new v1.2.1 software has produced a larger gamut for the display than the i1Pro 2 did under Mountain Lion, OS X 10.8.

I’ve only done a couple of test prints via soft proofing in Lightroom, but so far the new monitor profile has led to a small improvement in screen-to-print matching of some subtle yellow-green and green-blue mixes, as well as those yellowish browns which I often found tricky to match when printing from the iMac.

So my advice is this: if you own a Colormunki Photo and have upgraded your Mac to Yosemite, CHECK your X-Rite software version number. Checking for updates doesn’t always work, and the new 1.2.1 Mac version is well worth the trouble of installing.


Camera Calibration

Custom Camera Calibration

The other day I had an email fall into my inbox from a leading UK online retailer… whose name escapes me but is very short… that made my blood pressure spike.  It was basically offering me 20% off the cost of something that will revolutionise my photography – ColorChecker Passport camera calibration profiling software.

I got annoyed for two reasons:

  1. Who the “f***” do they think they’re talking to sending ME this – I’ve forgotten more about this colour management malarkey than they’ll ever know….do some customer research you idle bastards and save yourselves a mauling!
  2. Much more importantly – tens of thousands of you guys ‘n gals will get the same email and some will believe the crap and buy it – and you will get yourselves into the biggest world of hurt imaginable!

Don’t misunderstand me, a ColorChecker Passport makes for a very sound purchase indeed and I would not like life very much if I didn’t own one.  What made me seethe is the way it’s being marketed, and to whom.

Profile all your cameras for accurate colour reproduction…..blah,blah,blah……..

If you do NOT fully understand the implications of custom camera calibration you’ll be in so much trouble when it comes to processing you’ll feel like giving up the art of photography.

The problems lie in a few areas:

First, a camera profile is a SENSOR/ASIC OUTPUT profile – think about that for a minute.

Two things influence sensor/ASIC output – ISO and lens colour shift. Yep, that’s right: no lens is colour-neutral, and all lenses produce colour shifts either by tint or spectral absorption. And higher ISO settings usually produce a cooler, bluer image.
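You can test the ISO half of that claim yourself without any profiling software: develop two raw files of a grey card – one low ISO, one high – with an identical, fixed white balance, and compare the average RGB of the card. Here’s a sketch using the rawpy package (a libraw wrapper); the file names, patch coordinates and white balance multipliers are all hypothetical:

    # Sketch: test the "higher ISO = cooler/bluer" claim on your own raws.
    # Assumes the rawpy package (a libraw wrapper); file names, patch
    # coordinates and white balance multipliers are hypothetical.
    import rawpy

    def patch_mean(path, y0, y1, x0, x1):
        with rawpy.imread(path) as raw:
            # Fixed white balance, so any shift comes from the sensor/ASIC
            # rather than auto white balance second-guessing each frame.
            rgb = raw.postprocess(user_wb=[2.0, 1.0, 1.5, 1.0],
                                  no_auto_bright=True)
        return rgb[y0:y1, x0:x1].reshape(-1, 3).mean(axis=0)

    low = patch_mean("grey_iso200.NEF", 1000, 1100, 2000, 2100)
    high = patch_mean("grey_iso3200.NEF", 1000, 1100, 2000, 2100)
    print("ISO 200  grey patch RGB:", low)
    print("ISO 3200 grey patch RGB:", high)
    print("blue/red ratio, 200 vs 3200:", low[2] / low[0], high[2] / high[0])

If the blue/red ratio climbs at 3200 ISO, there’s your cooler, bluer rendering – and your reason why one profile per camera doesn’t cut it.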

Let’s take a look at ISO and its influence on custom camera calibration profiling. I’m using a far better bit of software for the job – “IN MY OPINION” – the Adobe DNG Profile Editor, a free download for both Mac and Windows – but you do need the ColorChecker Passport itself!

I prefer the Adobe product because I found the camera calibration profiles the ColorChecker software produced were, well, pretty vile – especially in terms of increased contrast; not my cup of tea at all.


5 images shot at 1 stop increments of ISO on the same camera/lens combination.

Now this is NOT a demo of software – a video tutorial of camera profiling will be on my next photography training video coming sometime soon-ish, doubtless with a somewhat verbose narrative explaining why you should or should not do it!

Above, we have 5 images shot on a D4 with a 24-70 f2.8 at 70mm under consistent overcast daylight, at 1 stop increments of ISO between 200 and 3200.

Below, we can see the resultant profile and distribution of known colour reference points on the colour wheel.


Here’s the 200 ISO custom camera calibration profile – the portion of interest to us is the colour wheel on the left and the points of known colour distribution (the black squares and circled dot).

Next, we see the result of the image shot at 3200 ISO:


Here’s the result of the custom camera profile based on the shot taken at 3200 ISO.

Now let’s super-impose one over t’other – if ISO doesn’t matter to a camera calibration profile then we should see NO DIFFERENCE………….


The 3200 ISO profile colour distribution overlaid onto the 200 ISO profile colour distribution – it’s different and they do not match up.

……..well would you bloody believe it!  Embark on custom camera calibration profiling of your camera, then apply that profile to an image shot with the same lens under the same lighting conditions but at a different ISO, and your colours will not be right.

So now my assertions about ISO have been vindicated, let’s take a look at skinning the cat another way, by keeping ISO the same but switching lenses.

Below is the result of a 500mm f4 at 1000 ISO:


Profile result of a 500mm f4 at 1000 ISO

And below we have the 24-70mm f2.8 @ 70mm and 1000 ISO:


Profile result of a 24-70mm f2.8 @ 70mm at 1000 ISO

Let’s overlay those two and see if there’s any difference:


Profile results of a 500mm f4 at 1000 ISO and the 24-70 f2.8 at 1000 ISO – as massively different as day and night.

Whoops….it’s all turned to crap!

Just take a moment to look at the info here.  There is movement in the orange/red/red magentas, but even bigger movements in the yellows/greens and the blues and blue/magentas.

Because these comparisons are done simply in Photoshop layers with the top layer at 50% opacity, you can even see there’s an overall difference in the Hue and Saturation slider values for the two profiles – the 500mm profile’s are 2 and -10 respectively, while the 24-70mm’s are 1 and -9.

The basic upshot of this information is that the two lenses apply a different colour cast to your image AND that cast is not always uniformly applied to all areas of the colour spectrum.

And if you really want to “screw the pooch”, here’s the above comparison side by side with the 500mm f4 at 1000 ISO against the 24-70mm f2.8 at 200 ISO view:


500mm f4/24-70mm f2.8 1000 ISO comparison versus 500mm f4 1000 ISO and 24-70mm f2.8 200 ISO.

A totally different spectral distribution of colour reference points again.

And I’m not even going to bother showing you that the same camera/lens/ISO combo will give different results under different lighting conditions – you should by now be able to envisage that little nugget yourselves.

So, custom camera calibration: do it right and you’ll be profiling every body/lens combo you own, at every conceivable ISO value and under every lighting condition.  It’s one of those things where, if you can’t do it all, you’d in most cases be best off not doing it at all.

I can think of a few instances where I would do it as a matter of course – scientific work, photo-microscopy, artwork photography/copystand work and the like – but these are well outside the remit of more normal photographic practice.

As I said earlier, the Passport device itself is worth far more than its weight in gold – set up and light your shot and include the Passport in a prominent place, take a second shot without it, then use shot 1 to custom white balance shot 2 – a dead easy process that makes the device invaluable for portrait and studio work etc.

But I hope by now you can begin to see the futility of trying to use a custom camera calibration profile on a “one size fits all” basis – it just won’t work correctly; and yet for the most part this is how it’s marketed – especially by third party retailers.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.