Monitor Calibration and Profile Validation

In this video I show you my usual calibration procedure for my monitor, thus ensuring a solid foundation for good colour management.

I’m using Eizo ColorNavigator 6 software and the X-Rite ColorMunki Photo spectrophotometer, which I seem to have had forever and which still functions better than nearly any other calibrator on the market today.

The final part of the procedure is Profile Validation to ISO 12646, in order to obtain the DeltaE2000 values of the new profile.

As ever folks I hope you find this content useful, and if you have any questions then please just ask!

Many thanks to all my Patreon members, without whose contributions making this content would be difficult to say the least.

Help keep this content advert-free by supporting me on Patreon https://www.patreon.com/join/andyastbury?

Exposure Value – What does it mean?


I get asked this question every now and again because I frequently use it in the description annotations of image shot data here on the blog.

And I have to say from the outset that Exposure Value comes in two flavours – relative and absolute – and here I’m mainly talking about the former.

So, let’s start with basic exposure.

Exposure can be thought of as Intensity x Time.

Intensity is controlled by our aperture, and time is controlled by our shutter speed.

This image was shot at 0.5sec (time), f11 (intensity) and ISO 100.

[image]

We can think of the f11 intensity of light striking the sensor for 0.5sec as a ‘DOSAGE’ – and if that dosage results in the desired scene exposure then that dosage can be classed as the exposure value.

Let’s consider two exposure settings – 0.5sec at f11, ISO 100, and 1sec at f16, ISO 100.

Technically speaking they are two different exposures, but BOTH result in the same light dosage at the sensor.  The second exposure is TWICE the length of time but HALF the intensity.

So both exposures have the same Exposure Value or Ev.

The following exposure of the same scene is 1sec at f11 ISO 100:

[image]

The image was shot at the same intensity (f11) but the shutter speed (time) was twice as long, and so the dosage was doubled.  Double the dose = +1Ev!

And in this version the exposure was 0.25sec at f11 ISO 100:

[image]

Here the light dosage at the sensor is HALF that of the correct/desired exposure because the time factor was halved while using the same intensity.

So half the dose = -1Ev!
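
If you like to see the sums behind all that, the whole thing boils down to a few lines of Python – a throwaway sketch, with function names of my own invention:

```python
import math

def absolute_ev(aperture, shutter_time):
    # Absolute EV = log2(N^2 / t): a SMALLER number means a BIGGER light dose.
    return math.log2(aperture ** 2 / shutter_time)

def relative_ev(aperture, shutter_time, base_aperture, base_time):
    # Relative Ev against a baseline exposure: +1 means double the dosage.
    return absolute_ev(base_aperture, base_time) - absolute_ev(aperture, shutter_time)

# Nominal f-numbers are rounded for the dial; exact full stops are powers of
# sqrt(2), so 'f11' is really f11.31.
base = (11.31, 0.5)                     # 0.5sec at f11 - the desired dosage
print(relative_ev(16.0, 1.0, *base))    #  ~0.0 -> 1sec at f16: same dosage, same Ev
print(relative_ev(11.31, 1.0, *base))   #  +1.0 -> double the dose = +1Ev
print(relative_ev(11.31, 0.25, *base))  #  -1.0 -> half the dose   = -1Ev
```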

Now some of you will be thinking that -1Ev is 1 stop under exposure – and you’d be right!

But Ev, or exposure value, is just a cleaner way of thinking about exposure because it doesn’t tie you to any specific camera setting – and it’s more easily transferable between cameras.

What Do I Mean by that?

Example – If I use say a 50mm prime lens on my Nikon D800E with the metering in matrix mode, ISO 100 and f14 I might get a metered exposure shutter speed of 1/10th of a second.

But if I replace the D800E with a D4 set at 100 ISO, matrix and f14 I’ll guarantee the metered shutter speed requirement will be either 1/13 or 1/15th of a second.

The D4 meters between 1/3Ev and 2/3Ev (in other words, roughly 1/2 stop) faster/brighter than the D800E when fitted with the same lens and set to the same aperture and ISO, and shooting exactly the same framing/composition.

Yet the ‘as metered’ shots from both cameras look pretty much the same with respect to light dosage – exposure value.

Exposure settings don’t transfer between camera models very well, because the meter in a camera is calibrated to the response curve of that camera’s sensor.

A Canon 1DX Mk2 will usually generate an evaluative-metered shutter speed 1/3rd of a stop faster than a matrix-metered Nikon D4S for the same given focal length, aperture and ISO setting.

Both setups’ ‘as metered’ shots will look pretty much the same, but transposing the Canon settings to the Nikon will result in -1/3 stop under exposure – which on a digital camera is definitely NOT the way to go!

‘As Metered’ can be regarded as +/-0Ev for any camera (Note: this does NOT mean Ev=0!)

Any exposure compensation you use in order to achieve the ‘desired’ exposure on the other hand can be thought of as ‘metered + or – xEv’.

[image]

Shot with the D4 plus 70-200 f2.8@70mm in manual exposure mode, 1/2000th sec, f8 and ISO 400 using +2/3Ev compensation.

The matrix metered exposure indicated by the camera before the exposure value compensation was 1/3200th – this would have made the Parasitic Jaeger (posh name for an Arctic Skua!) too dark.

A 1DXMk2 using the corresponding lens and focal length, f8, ISO 400 and evaluative metering would have wanted to generate a shutter speed of at least 1/4000th sec without any exposure compensation, and 1/2500th with +2/3Ev exposure compensation.

And if shot at those settings the Canon image would look pretty much like the above.

But if the Nikon D4 settings had been fully replicated on the Canon then the shot would be between 1/3 and 1/2 stop over exposed, risking ‘blowing’ of some of the under-wing and tail highlights.

So the simple lesson here is don’t use other photographers’ settings – they never work unless you’re on identical gear!

But if you are out with me and I tell you “matrix/evaluative plus 1Ev” then your exposure will have pretty much the same ‘light dosage’ as mine – irrespective of whether or not you’re using the right shutter speed, aperture or ISO for the job!

I was brought up to think in terms of exposure value and Ev units, and to use light meters that had Ev scales on them – hell, the good ones still have ’em!

If you look up the ‘tech-specs’ for your camera you’ll find that metering sensitivity is normally quoted as an Ev range.  And that’s not all – your auto focus may well have a low light Ev limit quoted too!

To all intents and purposes Ev units and your more familiar ‘f-stops’ amount to one and the same thing.

As we’ve seen before, different exposures in terms of intensity and time can have the same exposure value, and all Ev is concerned with is the cumulative outcome of our shutter speed, aperture and ISO choices.

Most of you will take exposures at ‘what the camera meter says’ settings, or you will use the meter indicated exposure as a baseline and modify the exposure settings with either positive or negative ‘weighting’ via your exposure compensation dial.

That’s Ev compensation relative to your meter’s baseline.

But have you ever asked yourself just how accurate your camera meter is?

So I’ve just stepped outside my front door and taken these two frames:

[image: EV=15/Sunny 16 Rule – 1/100th sec, f16, ISO 100]

[image: Matrix Metering, no exposure compensation – 1/200th sec, f16, ISO 100]

These two raw files have been brought into Lightroom and THE ONLY adjustment has been to change the profile from Adobe Color to Camera Neutral.

Members of my subscription site can download the raw files and see for themselves.

Look at the histogram in both images!

The exposure for xxx164.NEF (the top image) is perfection personified while xxx162.NEF is under exposed by ONE WHOLE STOP – why?

Because the bottom image has been shot at the camera-specified matrix metered exposure, while the top image has been shot using the good old ‘Sunny 16 Rule’ that’s been around since God knows when!

“Yeah, but I could just use the shadow recovery slider on the bottom shot Andy….”  Yes, you could, if you wanted to be an idle tit, and even then the top image would still be better because there’s no ‘recovery’ being used on it in the first place.  Remember, more work at the camera means less work in processing!

Recovery of either shadows or highlights is ‘poor form’ and no substitute for correct exposure in the first place. Digital photography is just like shooting colour transparency film – you need to ‘peg the highlights’ as highlights BUT without over exposing them and causing them to ‘blow’.

In other words – ETTR, expose to the right!

And seeing as your camera meter wants to turn everything into midtone grey shite it’s the very last thing you should ever allow to dictate your final exposure settings – as the two images above prove beyond argument.

And herein lies the problem.

Even if you use the spot metering function the meter will read the brightness of what is covered by the ‘spot’ and then calculate the exposure required to expose that tonal brightness AS A MID TONE GREY.

That’s all fine ‘n dandy – if the metered area is actually an exact mid tone.  But what if you were metering a highlight?

Then the metered exposure would want to expose said highlight as a midtone and the overall highlight exposure would be far too dark.  And you can guess what would happen if you trusted your meter to spot-read a shadow.

A proper hand-held spot meter has an angle of view or AoV of 1 degree.

Your camera spot meter angle of view is dictated by the focal length of the lens you have fitted.

On my D800E for example, I need a lens of around 130mm focal length for my spot to cover 1 degree, because the ‘spot’ is 4mm in diameter – total stupidity.

But the camera spot meter does function fairly well with wider angle lenses and exposure calculations when used in conjunction with the live view histogram.  And that will be the subject of my next blog post – or perhaps I’ll do a video for YouTube!

So I doubt this blog post about relative exposure compensation is going to light your world on fire – it began as an explanation to a recurring question about my exif annotation habits and snowballed somewhat from there!

But I’ll leave you with this little guide to the aforementioned Sunny 16 Rule, which has been around since Noah took up boat-building:

To use this table just set your ISO to 100.

Your shutter speed needs to be the reciprocal of your ISO – in other words 1/100 sec for use with the stated aperture values:

Aperture | Lighting conditions | Shadow properties
f/22* | Snow/sand | Dark with sharp edges
f/16 | Sunny | Distinct
f/11 | Slight overcast | Soft around edges
f/8 | Overcast | Barely visible
f/5.6** | Heavy overcast | No shadows
f/4 | Open shade/sunset | No shadows

* – I would not shoot at f22 because of diffraction – try 1/200th f16

** – let’s try some cumulative Ev thinking here and go for more depth of field using f11 and sticking with 100 ISO. -2Ev intensity (f5.6 to f11) requires +2Ev on time, so 1/100th sec becomes 1/25th sec.
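
And if you want to sanity-check that ** footnote, here’s the same Ev bookkeeping as a throwaway Python sketch:

```python
import math

def aperture_stops(n_from, n_to):
    # Stops of intensity lost stopping down from N_from to N_to: 2*log2(N_to/N_from).
    return 2 * math.log2(n_to / n_from)

base_time = 1 / 100                      # Sunny 16 at ISO 100: 1/100th sec
lost = aperture_stops(5.6, 11.0)         # heavy overcast f5.6 -> f11: ~2 stops
new_time = base_time * 2 ** round(lost)  # time must gain what intensity lost
print(round(lost), new_time)             # 2 stops, 0.04 sec = 1/25th at f11
```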

Over the years I’ve taken many people out on photo training days, and a lot of them seem to think I’m some sort of magician when I turn their camera on, switch it to manual, dial in a couple of settings and produce a half decent image without ever looking at the meter on their camera.

It ain’t magic – I just had this table burnt into the back of my eyeballs years ago.

Works a charm – if you can do the calculations in your head, and that’s easy with practice.  The skill is in evaluating your shooting conditions and relating them to the lighting and shadow descriptions.

And here’s a question for you; we know our camera meter wants to ‘peg’ what it’s measuring as a midtone irrespective of whether it’s measuring a midtone or not.  But what do you think the Sunny 16 Rule is ‘pegging’ and where is it pegging it on the exposure curve?

If you can answer that question correctly then the other flavour of exposure value – absolute – might well be of distinct interest to you!

Give it a try, and if you use it correctly you’ll never be more than 1/3rd of a stop out, if that.  Then you can go and unsubscribe from all those twats on YouTube who told you it was out-dated and defunct or never told you about it in the first place!

I hope you’ve found the information in this post useful.

I don’t monetize my YouTube videos or fill my blog posts with masses of affiliate links, and I rely solely on my patrons to help cover my time and server costs. If you would like to help me to produce more content please visit my Patreon page on the button above.

Many thanks and best light to you all.

Astro Landscape Photography

One of my patrons, Paul Smith, and I ventured down to Shropshire and the spectacular quartzite ridge of The Stiperstones to get this image of the Milky Way and Mars (the large bright ‘star’ above the rocks on the left).

I always work the same way for astro landscape photography, beginning with getting into position just before sunset.

Using the PhotoPills app on my phone I can see where the Milky Way will be positioned in my field of view at the time of peak sky darkness.  This enables me to position the camera exactly where I want it for the best composition.

The biggest killer in astro landscape photography is excessive noise in the foreground.

The other problem is that foregrounds in most images of this genre are not sharp due to a lack of depth of field at the wide apertures you need to shoot the night sky at – f2.8 for example.

To get around this problem we need to shoot a separate foreground image at a lower ISO, a narrower aperture and focused closer to the camera.

Some photographers change focus, engage long exposure noise reduction and then shoot a very long exposure.  But that’s an eminently risky thing to do in my opinion, both from a technical standpoint and one of time – a 60 minute exposure will take 120 minutes to complete.

The length of exposure is chosen to allow the very low photon count from the foreground to ‘build up’ on the sensor and produce a usable level of exposure from what little natural light is around.

From a visual perspective, when it works, the method produces images that can be spectacular because the light in the foreground matches the light in the sky in terms of directionality.

Light Painting

To get around the inconvenience of time and super-long exposures a lot of folk employ the technique of light painting their foregrounds.

Light painting – in my opinion – destroys the integrity of the finished image because it’s so bloody obvious!  The direction of the light that’s ‘painted’ onto the foreground bears no resemblance to that of the sky.

The other problem with light painting is this – those that employ the technique hardly ever CHECK to see if they are in the field of view of another photographer – think about that one for a second or two!

My Method

As I mentioned before, I set up just before sunset.  In the shot above I knew the Milky Way and Mars were not going to be where I wanted them until just after 1am, but I was set up by 9.20pm – yep, a long wait ahead, but always worth the effort.

[image]

As we move towards the latter half of civil twilight I start shooting my foreground exposure, and I’ll shoot a few of these at regular intervals between then and mid nautical twilight.

Because I shoot raw the white balance set in camera is irrelevant, and can be balanced with that of the sky in Photoshop during post processing.

The key things here are that I get shadowless, even illumination of my foreground, shot at a low ISO, in perfect focus, and at say f8 for great depth of field.

Once deep into blue hour and astronomical twilight the brighter stars are visible, so I now use full magnification in live view and focus on a bright star in the camera’s field of view.

Then it’s a waiting game – waiting for the sky to darken to its maximum and the Milky Way to come into my desired position for my chosen composition.

Shooting the Sky

Astro landscape photography is all about showing the sky in context with the foreground – I have absolutely ZERO time for those popular YouTube photographers who composite a shot of the night sky into a landscape image shot in a different place or from a different angle.

Good astro landscape photography HAS TO BE A COMPOSITE though – there is no way around that.

And by GOOD I mean producing a full resolution image that will sell through the agencies and print BIG if needed.

The key things that contribute to an image being classed good in my book are simple:

  • Pin-point stars with no trailing
  • Low noise
  • Sharp from ‘back’ to ‘front’.

Pin-point stars are solely down to the correct shutter speed for your sensor size and megapixel count.
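
I’m not going to hand over a formula here, but as a rough ceiling there’s always the old ‘500 rule’ – bear in mind a dense modern sensor will show trailing well before it says so, which is why my exposures below are only 4 to 6 seconds:

```python
def max_shutter_500_rule(focal_length_mm, crop_factor=1.0):
    # The old '500 rule': longest shutter time (seconds) before stars start
    # to trail. High megapixel sensors will show trailing sooner than this.
    return 500.0 / (focal_length_mm * crop_factor)

print(max_shutter_500_rule(14))       # ~35.7s at 14mm on full frame (optimistic)
print(max_shutter_500_rule(14, 1.5))  # ~23.8s on a 1.5x crop body
```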

Low noise is covered by shooting a low ISO foreground and a sequence of high ISO sky images, and using Starry Landscape Stacker on Mac (Sequator on PC appears to be very similar) in conjunction with a mean or median stacking mode.

Further noise cancelling is achieved by the shooting of dark frames, and the typical wide-aperture vignetting is cancelled out by the creation of a flat field frame.

And ‘back to front’ image sharpness should be obvious to you from what I’ve already written!

So, I’ll typically shoot a sequence of 20 to 30 exposures – all one after the other with no breaks or pauses – and then a sequence of 20 to 30 dark frames.
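
What the stacker does internally is its own business, but the arithmetic of why a mean stack plus a master dark kills noise is easy to sketch in numpy, with some made-up stand-in numbers:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: in reality the frames come from your raw converter and the
# sky frames are star-aligned before stacking.
scene         = rng.uniform(0.0, 0.2, (100, 100))   # what we want to keep
fixed_pattern = rng.normal(0.02, 0.01, (100, 100))  # hot pixels, amp glow etc.
sky_frames  = scene + fixed_pattern + rng.normal(0, 0.05, (25, 100, 100))
dark_frames = fixed_pattern + rng.normal(0, 0.05, (25, 100, 100))

master_dark = np.median(dark_frames, axis=0)         # estimate of the fixed pattern
stacked = np.mean(sky_frames - master_dark, axis=0)  # random noise falls ~1/sqrt(N)

print(np.std(sky_frames[0] - fixed_pattern - scene))  # single frame noise: ~0.05
print(np.std(stacked - scene))                        # stacked noise: much lower
```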

Shutter speeds usually range from 4 to 6 seconds.

Watch this video on my YouTube Channel about shutter speed:

Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.

Putting it all Together

Shooting all the frames for astro landscape photography is really quite simple.

Putting it all together is fairly simple and straightforward too – but it’s TEDIOUS and time-consuming if you want to do it properly.

The shot above took me a little over 4 hours!

And 80% of it is retouching in Photoshop.

I produce a very extensive training title – Complete Milky Way Photography Workflow – which teaches you EVERYTHING you need to know about the shooting and processing of astro landscape photography images – you can purchase it here – and if you use the offer code MWAY15 at the checkout you’ll get £15 off the purchase price.

But I wanted to try RawTherapee for this Stiperstones image, and another of my patrons – Frank – wanted a video of processing methodology in RawTherapee.

Easier said than done, cramming 4 hours into a typical YouTube video!  But after about six attempts I think I’ve managed it, and you can see it here, but I warn you now that it’s 40 minutes long:

Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.

I hope you’ve found the information in this post useful, together with the YouTube videos.

I don’t monetize my YouTube videos or fill my blog posts with masses of affiliate links, and I rely solely on my patrons to help cover my time and server costs.  If you would like to help me to produce more content please visit my Patreon page on the button above.

Many thanks and best light to you all.

ETTR Processing in Lightroom


When we shoot ETTR (expose to the right) in bright, harsh light, Lightroom can sometimes get the wrong idea and make a real ‘hash’ of rendering the raw file.

Sometimes it can be so bad that the less experienced photographer can get the wrong impression of their raw file exposure – and in some extreme cases they may even ‘bin’ the image thinking it irretrievably over exposed.

I’ve just uploaded a video to my YouTube channel which shows you exactly what I’m talking about:

The image was shot by my client and patron Paul Smith when he visited the Mara back in October last year, and it’s a superb demo image of just how badly Lightroom can demosaic a straightforward +1.6Ev ETTR shot.

Importing the raw file directly into Lightroom gives us this:

[image]

But importing the raw file directly into RawTherapee with no adjustments gives us this:

[image]

Just look at the two histogram versions – Lightroom is doing some crazy stuff to the image ‘in the background’ as there are ZERO develop settings applied.

But if you watch the video you’ll see that it’s quite straightforward to regain all that apparent ‘blown detail’.

And here’s the important bit – we do so WITHOUT the use of the shadow or highlight recovery sliders.  Anyone who has purchased my sharpening videos HERE knows that those two sliders can VERY EASILY cause undesirable ‘pseudo-sharpening’ halos, and they should only be used with caution.

[image]

The way I process this +1.6 stop ETTR exposure inside Lightroom has revealed all the superb mid tone detail and given us a really good image that we could take into Photoshop and improve with some precision localized adjustments.

So don’t let Lightroom control you – you need to control IT!

Thanks for reading and watching.

You can also view this post on the free section of my Patreon pages HERE

If you feel this article and video has been beneficial to you and would like to see more per week, then supporting my Patreon page for as little as $1 per month would be a massive help.  Thanks everyone!

 

Professional Grade Image Sharpening

Professional Grade Image Sharpening for Archive, Print & Web – my latest training video collection.

[image]

View the overview page on my download store HERE

Over 11 hours of video training, spread across 58 videos…well, I told you it was going to be big!

And believe me, I could have made it even bigger, because there is FAR MORE to image sharpening than 99% of photographers think.

And you don’t need ANY stupid sharpener plugins – or noise reduction ones, come to that.  Because Photoshop does it ALL anyway, and is far more customizable and controllable than any plugin could hope to be.

So don’t waste your money any more – spend it instead, on some decent training to show you how to do the job properly in the first place!

You won’t find a lot of these methods anywhere else on the internet – free or paid for – because ‘teachers cannot teach what they don’t know’ – and I know more than most!

[image: the list of lessons]

As you can see from the list of lessons above, I cover more than just ‘plain old sharpening’.

Traditionally, image sharpening produces artifacts – usually white and black halos – if it’s overdone. And image sharpening emphasizes ‘noise’ in areas of shadow and other low frequency detail when it’s applied to an image in the ‘traditional’, often taught, blanket manner.

Why sharpen what isn’t in focus – to do so is madness, because all you do is sharpen the noise, and cause more artifacts!

Maximum sharpening should only be applied to detail in the image that is ‘fully in focus’.

So, as ‘focus sharpness’ falls off, so too should the level of applied sharpening.  That way, noise and other artifacts CANNOT build up in an image.

And the same can be said for noise reduction, but ‘in reverse’.

So image sharpening needs to be applied in a differential manner – and that’s what this training is all about.

Using a brush in Lightroom etc. to ‘brush in’ some sort of differential sharpening is NOT a good idea, because it’s imprecise, and something of a fool’s task.

Why do I say that? Simple… because the ‘differential factor’ is contained within the image itself – and it’s just sitting there on your computer screen WAITING for you to get stuck in and use it.
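
I’m not about to give the course away here, but the bare-bones principle – let the image’s own high-frequency content be the differential factor – can be sketched in a few lines (an illustration only, nowhere near the full Photoshop method):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
img = gaussian_filter(rng.random((200, 200)), 2)  # stand-in greyscale image

blur = gaussian_filter(img, 1.5)
high_freq = img - blur                            # the fine detail layer

# The 'differential factor': local high-frequency energy. Strong detail means
# in-focus areas, which earn full sharpening; smooth or defocused areas get
# next to none - so noise never gets amplified where there's nothing to sharpen.
mask = gaussian_filter(np.abs(high_freq), 5)
mask /= mask.max()

sharpened = img + 1.5 * high_freq * mask          # differential unsharp mask
print(sharpened.shape)
```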

But, like everything else in modern digital photography, the knowledge and skill to do so has somehow been lost in the last 12 to 15 years, and the internet is full of ‘teachers’ who have never had these skills in the first place – hence they can’t teach ’em!

However, everyone who buys this training of mine WILL have those skills by the end of the course.

It’s been a real hard slog to produce these videos.  Recording the lessons is easy – it’s the editing and video call-outs that take a lot of time.  And I’ve edited all the audio in Audacity to remove breath sounds and background noise – many thanks to Curtis Judd for putting those great lessons on YouTube!

The price is £59.99. So right now, that’s over 11 hours of training for less than £5.50 per hour – that’s way cheaper than a 1to1, or even a workshop day with a crowd of other people!

So head off over to my download store and buy it, because what you’ll learn will improve your image processing, whether it’s for big prints or just jpegs on the web – guaranteed – just click here!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

 

Adobe Lightroom Classic and Photoshop CC 2018 tips

Adobe Lightroom Classic and Photoshop CC 2018 tips – part 1

So, you’ve either upgraded to Lightroom Classic CC and Photoshop CC 2018, or you are thinking of doing so.

Well, here are a couple of things I’ve found – I’ve called this part 1, because I’m sure there will be other problems/irritations!

Lightroom Classic CC GPU Acceleration problem

If you are having problems with shadow areas appearing too dark and somewhat ‘choked’ in the Develop module – but things look fine in the Library module – then just follow the simple steps in the video above and TURN OFF GPU Acceleration in the Lightroom preferences panel under the Performance tab.

[image: Turn OFF GPU Acceleration]

UPDATE: I have subsequently done another video on this topic that illustrates the fact that the problem did not exist in Lr CC 2015 v.12/Camera Raw v.9.12

In the new Photoshop CC 2018 there is an irritation/annoyance with the brush tool, and something called the ‘brush leash’.

Now why on earth you need your brush on a leash God ONLY KNOWS!

But the brush leash manifests itself as a purple/magenta line that follows your brush tool everywhere.

You have a smoothing slider for your brush – its default setting is 10%.  If we increase that value then the leash line gets even longer, and even more bloody irritating.

And why we would need an indicator (which is what the leash is) of smoothness amount and direction for our brush strokes is a bit beyond me – because we can see it anyway.

So, if you want to change the leash length, use the smoothing slider.

If you want to change the leash colour just go to Photoshop>Preferences>Cursors

[image]

Here, you can change the colour, or better still, get rid of it completely by unticking the “show brush leash while smoothing” option.

So there are a couple of tips from my first 24 hours with the latest 2018 ransomware versions from Adobe!

But I’m sure there will be more, so stay tuned, and consider heading over to my YouTube channel and hitting the subscribe button, and hit the ‘notifications bell’ while you’re at it!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

 

Monitors & Color Bit Depth

Monitors and Color Bit Depth – yawn, yawn – Andy’s being boring again!

Well, perhaps I am, but I know ‘stuff’ you don’t – and I’m telling YOU that you need to know it if you want to get the best out of your photography – so there!

Let me begin by saying that NOTHING monitor-related has any effect on your captured images.  But  EVERYTHING monitor-related DOES have an effect on the way you SEE your images, and therefore definitely has an effect on your image adjustments and post-processing.

So anything monitor-related can have either a positive or negative effect on your final image output.

Bit Depth

I’m going to begin with a somewhat disconnected analogy, but bear with me here.

We live in the ‘real and natural world’, and everything that we see around us is ANALOGUE.  Nature exists on a natural curve and is full of infinite variation. In the digital world though, everything has to be put in a box.

We’ll begin with two dogs – a Labrador and a Poodle.  In this instance both natural  and digital worlds can cope with the situation, because nature just regards them for what they are, and digital can put the Labrador in a box named ‘Labrador’ and the Poodle in a separate box just for Poodles.

Let’s now imagine for a fleeting second that Mr. Lab and Miss Poodle ‘get jiggy’, with the result being dog number 3 – a Labradoodle.  Nature just copes with the new dog because it sits on nature’s ‘doggy curve’ halfway between Mum and Dad.

But digital is having a bloody hissy-fit in the corner because it can’t work out what damn box to put the new dog in.  The only way we can placate digital is to give it another box, one for 50% Labrador and 50% Poodle.

Now if our Labradoodle grows up a bit, then starts dating and makes out with a Labrador, we end up with a fourth dog that is 75% Labrador and 25% Poodle.  Again, nature just takes it all in her stride, but digital is now having a stroke because it’s got no box for that gene mix.

Every time we give digital a new box we have effectively given it a greater bit depth.

Now imagine this process of cross-breed gene dilution continues until the glorious day arrives when a puppy is born that is 99% Labrador and only 1% Poodle.  It’ll be obvious to you that by this time digital has a flaming warehouse full of boxes that can cope with just about any gene mix, but alas, the last time bit depth was increased was to accommodate 98% Lab 2% Poodle.

Digital is by now quite old and grumpy and just can’t be arsed anymore, so instead of filling in triplicate forms to request a bit depth upgrade it just lumps our new dog in the same classification box as the previous one.

So our new dog is put in the wrong box.

Digital hasn’t been slap-dash though and put the pup in any old box, oh no.  Digital has put the pup in the nearest suitable box – the box with the closest match to reality.

Please note that the above mentioned boxes are strictly metaphorical, and no puppies were harmed during the making of this analogy.

Digital images are made up of pixels, and a pixel can be thought of as a data point.  That single data point contains information about luminance and colour.  The precision of that information is determined by the bit depth of the data.

Very little in our ‘real world’ has a surface that looks flat and uniform.  Even a supposedly flat, uniform white wall on a building has subtle variations and graduations of colour and brightness/luminance caused by the angular direction of light and its own surface texture. That’s nature for you in the analogy above.

We are all familiar with RGB values for white being 255,255,255 and black being 0,0,0, but those are only 8 bit values.

8 bit allows for 256 discrete levels of information (or gene mix classification boxes for our Labradoodles), and a scale from 0 to 255 contains 256 values – think about it for a second!

At all bit depth values black is always 0,0,0 but white is another matter entirely:

8 bit = 256 discrete values so image white is 255,255,255

10 bit = 1,024 discrete values so image white is 1023,1023,1023

12 bit = 4,096 discrete values so image white is 4095,4095,4095

14 bit = 16,384 discrete values so image white is 16383,16383,16383

15 bit = 32,768 discrete values so image white is 32767,32767,32767

16 bit = 65,536 discrete values so image white should be 65535,65535,65535 – but it isn’t – more later!

And just for giggles here are some higher bit depth potentials:

24 bit = 16,777,216 discrete values

28 bit = 268,435,456 discrete values

32 bit = 4,294,967,296 discrete values

So you can see a pattern here.  If we double the bit depth we square the value of the information, and if we halve the bit depth the information we are left with is the square root of what we started with.

And if we convert to a lower or smaller bit depth “digital has fewer boxes to put the different dogs in to, so Labradoodles of varying genetic make-ups end up in the same boxes.  They are no longer sorted in such a precise manner”.

The same applies to our images. Where we had two adjacent pixels of slightly differing value in 16 bit, those same two adjacent pixels can very easily become totally identical if we do an 8 bit conversion and so we lose fidelity of colour variation and hence definition.

This is why we should archive our processed images as 16 bit TIFFS instead of 8 bit JPEGs!

In an 8 bit image we have black 0,0,0 and white 255,255,255 and ONLY 254 available shades or tones to graduate from one to the other.

[image]

Whereas, in a 16 bit image black is 0,0,0 and white is 65535,65535,65535 with 65,534 intervening shades of grey to make the same black to white transition:

[image]

But we have to remember that whatever the bit depth value is, it applies to all 3 colour channels:

[images: one gradient per colour channel]

So a 16 bit image should contain a potential of 65536 values per colour channel.

How Many Colours?

So how many colours can our bit depth describe Andy?

The simple answer is to cube the number of discrete values per channel, so:

8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.

10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours or EXACTLY 64x the value of 8 bit!

16 bit = 65536x65536x65536 = 281,474,976,710,656 colours. Or is it?
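
Check the sums yourself if you like – a trivial sketch, nothing more:

```python
def levels(bits):   # discrete values per colour channel
    return 2 ** bits

def colours(bits):  # total colours: levels cubed (R x G x B)
    return levels(bits) ** 3

assert levels(16) == levels(8) ** 2    # doubling the bit depth squares the levels
assert colours(10) == 64 * colours(8)  # 10 bit = exactly 64x the 8 bit colour count

for bits in (8, 10, 16):
    print(bits, levels(bits), colours(bits))
# 8  -> 256     16,777,216          ('16.7 million colours')
# 10 -> 1024    1,073,741,824       ('1.07 billion colours')
# 16 -> 65536   281,474,976,710,656
```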

Confusion Reigns Supreme

Now here’s where folks get confused.

Photoshop does not WORK  in 16 bit, but in 15 bit + 1 level.  Don’t believe me? Go New Document, RGB, 16 bit and select white as the background colour.

Open up your info panel, stick your cursor anywhere in the image area and look at the 16 bit RGB read out and you will see a value of 32768 for all 3 colour channels – that’s 15 bit folks! Now double the 32768 value – yup, that’s right, you get 16 bit or 65,536!

Why does Photoshop do this?  The simple answer is ‘for speed’ – or so they say at Adobe!  There are numerous other reasons that you’ll find on various forums etc. – signed and unsigned integers, mid-points, float-points etc. – but really, do we care?

Things are what they are, and rumor has it that once you hit the save button on a 16 bit TIFF it does actually save out at 16 bit.

So how many potential colours in 16 bit Photoshop?  Dunno! But it’ll be somewhere between 35,184,372,088,832 and 281,474,976,710,656, and to be honest either value is plenty enough for me!

The second line of confusion usually comes from PC users under Windows, and the Windows ‘24 bit High Color’ and ‘32 bit True Color’ labels, which a lot of PC users mistakenly think mean something they SERIOUSLY DO NOT!

Windows 24 bit means 24 bit TOTAL – in short, 8 bits per channel, not 24!

Windows 32 bit True Color is something else again. Correctly known as 32 bit RGBA it contains 4 channels of 8 bits each; three 8 bit colour channels and an 8 bit Alpha channel used for transparency.

The same 32 bit RGBA colour (Apple calls it ARGB) has been utilised on Mac OS forever, but most Mac users never questioned it because it’s not quite so obvious in OSX as it is in Windows, unless you look at the Graphics/Displays section of your System report – and who the Hell ever goes there apart from twats like me:

[image]

Above you can see the pixel depth being reported as 32 bit colour ARGB8888 – that’s Apple-speak for Windows 32 bit True Colour RGBA.  But like a lot of ‘things Mac’ the numbers give you the real information.  The channels are ordered Alpha, Red, Green, Blue and the four ‘8’s give you the bit depth of each pixel, or as Apple put it ‘pixel depth’.

However, in the latter part of 2015 Apple gave OSX 10.11 El Capitan a 10 bit colour capability, though hardly anyone knew – including ‘yours truly’.  I’ve never understood why they kept it ‘on the down-low’, but there was no fanfare, that’s for sure.

[image]

Now you can see the pixel depth being reported as 30 bit ARGB2101010 – meaning that the transparency Alpha channel has been reduced from 8 bit to 2 bit and the freed-up 6 bits have been distributed evenly between the Red, Green and Blue colour channels.

Monitor Display

Your computer has a maximum display bit depth output capability that is defined by:

  • a. the operating system
  • b. the GPU fitted

Your system might well support 10 bit colour, but will only output 8 bit if the GPU is limited to 8 bit.

Likewise, you could be running a 10 bit GPU but if your OS only supports 8 bit, then 8 bit is all you will get out of the system (that’s if the OS will support the GPU in the first place).

Monitors have their own panel display bit depth, and panel bit depth costs money.

A lot of LCD panels on the market are only capable of displaying 8 bit, even if you run an OS and GPU that output 10 bit colour.

And then again certain monitors such as Eizo ColorEdge, NEC MultiSynch and the odd BenQ for example, are capable of displaying 10 bit colour from a 10 bit OS/GPU combo, but only if the monitor-to-system connection has 10 bit capability.  This basically means Display Port or HDMI connection.
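
Put crudely, the display path behaves like a min() over its links – an over-simplified sketch with made-up example numbers:

```python
def effective_display_bits(os_bits, gpu_bits, link_bits, panel_bits):
    # The weakest link caps the whole chain: OS, GPU, cable/connection, panel.
    return min(os_bits, gpu_bits, link_bits, panel_bits)

print(effective_display_bits(10, 8, 10, 10))   # 8  - an 8 bit GPU caps a 10 bit panel
print(effective_display_bits(10, 10, 8, 10))   # 8  - ditto an 8 bit connection
print(effective_display_bits(10, 10, 10, 10))  # 10 - the whole chain has to comply
```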

As photographers we really should be looking to maximise our visual capabilities by viewing the maximum number of colour graduations captured by our cameras.  This means operating with the greatest available colour bit depth on a properly calibrated monitor.

Just to reiterate the fundamental difference between 8 bit and 10 bit monitor display pixel depth:

  • 8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.
  • 10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours.

So 10 bit colour allows us to see exactly 64 times more colour on our display than 8 bit colour. (please note the word ‘see’).

It certainly does NOT add a whole new spectrum of colour to what we see; nor does it ‘add’ anything physical to our files.  It’s purely a ‘visual’ improvement that allows us to see MORE of what we ALREADY have.

I’ve made a pound or two from my images over the years and I’ve been happily using 8 bit colour right up until I bought my Eizo the other month, even though my system has been 10 bit capable since I upgraded the graphics card back in August last year.

The main reason for the upgrade was NOT 10 bit capability either, but the 4GB of ‘heavy lifting power’ for Photoshop.

But once I splashed the cash on a 10 bit display I of course made instant use of the systems 10 bit capability and all its benefits – of which there’s really only one!

The Benefits

The ability to see 64 times more colour means that I can see 64x more subtle variations of the same colours I could see before.

With my wildlife images I find very little benefit if I’m honest, but with landscapes – especially sunset and twilight shots – it’s a different story.  Sunset and twilight images have massive graduations of similar hues.  Quite often an 8 bit display will not be able to display every colour variant in a graduation, and so will replace it with the nearest neighbour that it can display – (putting the 99% Lab pup in the 98% Lab box!).

This leads to a visual ‘banding’ on the display:

[image]

The banding in the shot above is greatly exaggerated but you get the idea.

A 10 bit colour display also helps me to soft proof slightly faster for print too, and for the same reason.  I can now see much more subtle shifts in proofing when making the same tiny adjustments as I made when using 8 bit.  It doesn’t bring me to a different place, but it allows me to get there faster.

For me the switch to 10 bit colour hasn’t really improved my product, but it has increased my productivity.

If you can’t afford a 10 bit display then don’t stress as 8 bit ARGB has served me well for years!

But if you are still needing a new monitor display then PLEASE be careful what you are buying, as some displays are not even true 8 bit.

A good place to research your next monitor (if not taking the Eizo, NEC 10 bit route) is TFT Central

If you select the panel size you fancy and then look at the Colour Depth column you will see the bit depth values for the display.

You should also check the Tech column and only consider H-IPS panel tech.

Beware of 10 bit panels that are listed as 8 bit + FRC, and 8 bit panels listed as 6 bit + FRC.

FRC is the acronym for FRAME RATE CONTROL – also known as Temporal Dithering.  In very simple terms FRC involves making the pixels flash different colours at you at a frame rate faster than your eye can see.  Therefore you are fooled into seeing what is to all intents and purposes an out ‘n out lie.

It’s a tech that’s okay for gamers and watching movies, but certainly not for any form of colour management or photography workflow.

Do not entertain the idea of anything that isn’t an IPS, H-IPS or other IPS derivative.  IPS is the acronym for In Plane Switching technology.  This is the type of panel that doesn’t visually change if you move your head when looking at it!

So there we go, that’s been a bit of a ramble hasn’t it, but I hope now that you all understand bit depth and how it relates to a monitor’s display colour.  And let’s not forget that you are all up to speed on Labradoodles!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitor Calibration Update


Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit, on the critical list…

What have you gone for Andy?  Well if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favoring the Eizo CS2420, on the grounds of cost, both in terms of monitor and calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.

[image]

But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common type of back light, but they are now less common, being replaced by LED for added colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, and the size of the monitors CLUT (colour look-up table) can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cdm² or candelas per square meter

Gamma: 2.2

[image: The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen]

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:

[image]

The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:

[image]

Now before anyone gets their knickers in a knot over the brightness value discrepancy, there’s a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cdm² is only about 1/4 stop in camera Ev terms – so not a lot.
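
For the record, here’s that brightness gap expressed in camera terms – just the one sum:

```python
import math
print(math.log2(120 / 100))  # ~0.26 - the 100 vs 120 cdm² gap is about 1/4 stop
```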

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to (yes – warmer – don’t let that temp slider in Lightroom cloud your thinking!).

[image]

After all, 1000K is a noticeable variation – unlike the 20cdm² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:

[image]

The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP ?

Well, cheaper…..

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Good Contrast Control in Lightroom CC

Contrast Control in Lightroom

Learning how to deploy proper contrast control in Lightroom brings with it two major benefits:

  • It allows you to reveal more of your camera sensor’s dynamic range.
  • It will allow you to reveal considerably more image detail.

[image]

I have posted on this subject before, under the guise of neutralising Lightroom’s ‘hidden background adjustments’.  But as Lightroom CC 2015 evolves, trying to ‘nail’ the best way of doing something becomes like trying to hit a moving target.

For the last few months I’ve been using this (for me) new method – and to be honest it works like a charm!

It involves the use of the ‘zero’ preset together with a straight process version swap around, as illustrated in the before/after shot above and in the video linked below.  This video is best viewed on my YouTube channel:

The process might seem a little tedious at first, but it’s really easy when you get used to it, and it works on ALL images from ALL cameras.

Here is a step-by-step guide to the various Lightroom actions you need to take in order to obtain good contrast control:

Contrast Control Workflow Steps:

1. Develop Module Presets: Choose ZEROED
2. Camera Calibration Panel: Choose CAMERA NEUTRAL
3. Camera Calibration Panel: Choose Process Version 2010
4. Camera Calibration Panel: Choose Process Version 2012
5. Basic Panel: Double-click Exposure (it goes from -1 to 0)
6. Basic Panel: Adjust the Blacks setting to taste if needed
7. Detail Panel: Reset Sharpening to the default +25
8. Detail Panel: Reset Color Noise Reduction to the default +25
9. Lens Corrections Panel: Tick Remove Chromatic Aberration

Now that you’ve got good contrast control you can set about processing your image – just leave the contrast slider well alone!

Why is contrast control important, and why does it ‘add’ so much to my images Andy?

We are NOT really reducing the contrast of the raw file we captured.  We are simply reducing the EXCESSIVE CONTRAST that Lightroom ADDS to our files.

  • Lightroom typically ADDS a +33 contrast adjustment but ‘calls it’ ZERO.
  • Lightroom typically ADDS a medium contrast tone curve but ‘calls it’ LINEAR.

Both of these are contrast INCREASES, and any increase in contrast can be seen as a ‘compression’ of the tonal space between BLACK and WHITE.  This is a dynamic range visualisation killer because it crushes the ends of the midtone range.

It’s also a detail killer, because 99% of the subject detail is in the mid tone range.  Typically the Lightroom tonal curve range for midtones is 25% to 75%, but Lightroom is quite happy to accept a midtone range of 10% to 90% – check those midtone arrow adjusters at the bottom edge of the parametric tone curve!

I hope you find this post useful folks, and don’t forget to watch the video at full resolution on my YouTube Channel.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

 

Raw File Compression


Today I’m going to give you my point of view over that most vexatious question – is LOSSLESS raw file compression TRULY lossless?

I’m going to upset one heck of a lot of people here, and my chances of Canon letting me have any new kit to test are going to disappear over the horizon at a great rate of knots, but I feel compelled to post!

What prompts me to commit this act of potential suicide?

It’s this shot from my recent trip to Norway:

[image: FW1Q1351-2 – Direct from Camera]

[image: FW1Q1351 – Processed in Lightroom]

I had originally intended to shoot Nikon on this trip using a hire 400mm f2.8, but right at the last minute there was a problem with the lens that couldn’t be sorted out in time, so Calumet supplied me with a 1DX and a 200-400 f4 to basically get me out of a sticky situation.

As you should all know by now, the only problems I have with Canon cameras are their short dynamic range, and Canon’s steadfast refusal to allow for uncompressed raw recording.

The less experienced shooter/processor might look at the shot “ex camera” and be disappointed – it looks like crap, with far too much contrast, overly dark shadows and near-blown highlights.

Shot on Nikon the same image would look more in keeping with the processed version IF SHOT using the uncompressed raw option, which is something I always do without fail; and the extra 3/4 stop dynamic range of the D4 would make a world of difference too.

Would the AF have done as good a job – who knows!

The lighting in the shot is epic from a visual PoV, but bad from a camera exposure one. A wider dynamic range and zero raw compression on my Nikon D4 would allow me to have a little more ‘cavalier attitude’ to lighting scenarios like this – usually I’d shoot with +2/3Ev permanently dialled into the camera.  Overall the extra dynamic range would give me less contrast, and I’d have more highlight detail and less need to bump up the shadow areas in post.

In other words processing would be easier, faster and a lot less convoluted.

But I can’t stress enough just how much detrimental difference LOSSLESS raw file compression CAN SOMETIMES make to a shot.

Now there is a lot – and I mean A LOT – of opinionated garbage written all over the internet on various forums etc about lossless raw file compression, and it drives me nuts.  Some say it’s bad, most say it makes no difference – and both camps are WRONG!

Sometimes there is NO visual difference between UNCOMPRESSED and LOSSLESS, and sometimes there IS.  It all depends on the lighting and the nature of the scene/subject colours and how they interact with said lighting.

The main problem with the ‘it makes no difference’ camp is that they never substantiate their claims; and if they are Canon shooters they can’t – because they can’t produce an image with zero raw file compression to compare their standard lossless CR2 files to!

So I’ve come up with a way of illustrating visually the differences between various levels of raw file compression on Nikon using the D800E and Photoshop.

But before we ‘get to it’ let’s firstly refresh your understanding. A camera raw file is basically a gamma 1.0, or LINEAR gamma file:

[image: Linear (top) vs Encoded Gamma]

The right hand 50% of the linear gamma gradient represents the brightest whole stop of exposure (halving any linear value is a one stop reduction, so the top half of all recorded values covers just that single brightest stop) – that’s one heck of a lot of potential for recording subtle highlight detail in a raw file.

It also represents the area of tonal range that is frequently most effected by any form of raw file compression.

Neither Nikon nor Canon will reveal to the world the algorithm-based methods they use for lossless or lossy raw file compression, but it usually works by a process of ‘Bayer binning’.

[image: Bayer pattern]

If we take a 2×2 block, it contains 2 green, 1 red and 1 blue photosite photon value – if we average the two green values and then interpolate new values for red and blue output we will successfully compress the raw file.  But the data will be ‘faux’ data, not real data.
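
Remember this is guesswork on my part, but a toy version of that 2×2 binning might look something like this in numpy (RGGB layout assumed, data invented):

```python
import numpy as np

raw = np.arange(16, dtype=np.float32).reshape(4, 4)  # fake 4x4 RGGB Bayer mosaic

r = raw[0::2, 0::2]                            # red photosites
g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2.0  # the two greens, averaged away
b = raw[1::2, 1::2]                            # blue photosites

binned = np.stack([r, g, b], axis=-1)  # one 'faux' RGB value per 2x2 block
print(binned.shape)                    # (2, 2, 3): a quarter of the photosite data
```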

The other method we could use is to compress the tonal values in that brightest stop of recorded highlight tone – which is massive, don’t forget – but this will result in a ‘rounding up or down’ of certain bright tonal values, thus potentially reducing some of the more subtle highlight details.

We could also use some variant of the same type of algorithm to ‘rationalise’ shadow detail as well – with pretty much the same result.

In the face of Nikon’s and Canon’s refusal to divulge their methodologies behind raw file compression, especially lossless, we can only guess at what is actually happening.

I read somewhere that with lossless raw file compression the compression algorithms leave a trace instruction about what they have done and where they’ve done it in order that a raw handler programme such as Lightroom can actually ‘undo’ the compression effects – that sounds like a recipe for disaster if you ask me!

Personally I neither know nor do I care – I know that lossless raw file compression CAN be detrimental to images shot under certain conditions, and here’s the proof – of a fashion:

Let’s look at the following files:

[image 1: 14 bit UNCOMPRESSED]

[image 2: 14 bit UNCOMPRESSED]

[image 3: 14 bit LOSSLESS compression]

[image 4: 14 bit LOSSY compression]

[image 5: 12 bit UNCOMPRESSED]

Yes, there are 2 files which are identical – that is, both 14 bit uncompressed – and there’s a reason for that which will become apparent in a minute.

First, some basic Photoshop ‘stuff’.  If I open TWO images in Photoshop as separate layers in the same document, and change the blend mode of the top layer to DIFFERENCE I can then see the differences between the two ‘images’.  It’s not a perfect way of proving my point because of the phenomenon of photon flux.

Photon Flux Andy??? WTF is that?

Well, here’s where shooting two identical 14 bit uncompressed files comes in – they themselves are NOT identical!:

[images: the difference overlay unamplified (left) and amplified with a levels layer (right)]

The result of overlaying the two identical uncompressed raw files (above left) looks almost black all over, indicating that the two shots are indeed pretty much the same in every pixel.  But if I amplify the image with a levels layer (above right) you can see the differences more clearly.

So there you have it – Photon Flux! The difference between two 14 bit UNCOMPRESSED raw files shot at the same time, same ISO, shutter speed AND with a FULLY MANUAL APERTURE.  The only difference between the two shots is the ratio and number of photons striking the subject and being reflected into the lens.
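
And if you don’t fancy Photoshop, the DIFFERENCE-blend-plus-levels trick is nothing more than this bit of arithmetic – numpy stand-ins, numbers invented:

```python
import numpy as np

rng = np.random.default_rng(1)
frame_a = rng.uniform(0, 1, (100, 100, 3))
frame_b = frame_a + rng.normal(0, 0.002, frame_a.shape)  # 'photon flux' wobble

difference = np.abs(frame_a - frame_b)      # DIFFERENCE blend: black if identical
amplified = np.clip(difference * 50, 0, 1)  # crude 'levels layer' gain

print(difference.max(), amplified.max())    # tiny residue, made visible
```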

Firstly 14 Bit UNCOMPRESSED compared to 14 bit LOSSLESS (the important one!):

[image: 14 bit UNCOMPRESSED vs 14 bit LOSSLESS]

Please remember, the above ‘difference’ image contains photon flux variations too, but if you look carefully you will see greater differences than in the ‘flux only’ image above.

[images: 14 bit UNCOMPRESSED vs 14 bit LOSSY (left), and 14 bit UNCOMPRESSED vs 12 bit UNCOMPRESSED (right)]

The two images above illustrate the differences between 14 bit uncompressed and 14 bit LOSSY compression (left) and 14 bit UNCOMPRESSED and 12 bit UNCOMPRESSED (right) just for good measure!

In Conclusion

As I indicated earlier in the post, this is not a definitive testing method, sequential shots will always contain a photon flux variation that ‘pollutes’ the ‘difference’ image.

I purposefully chose this white subject with textured aluminium fittings and a blackish LED screen because the majority of sensor response will lie in that brightest gamma 1.0 stop.

The exposure was a constant +1Ev, 1/30th @ f18 and ISO 100 – nearly maximum dynamic range for the D800E, and f18 was set manually to avoid any aperture flicker caused by auto stop-down.

You can see from all the ‘difference’ images that the part of the subject that seems to suffer the most is the aluminium part, not the white areas.  The aluminium has a stippled texture causing a myriad of small specular highlights – brighter than the white parts of the subject.

What would 14 bit uncompressed minus 14 bit lossless minus photon flux look like?  In a perfect world I’d be able to show you accurately, but we don’t live in one of those so I can’t!

We can try it using the flux shot from earlier:

[image]

But this is wildly inaccurate as the flux component is not pertinent to the photons at the actual time the lossless compression shot was taken.  But the fact that you CAN see an image does HINT that there is a real difference between UNCOMPRESSED and LOSSLESS compression – in certain circumstances at least.

If you have never used a camera that offers the zero raw file compression option then basically what you’ve never had you never miss.  But as a Nikon shooter I shoot uncompressed all the time – 90% of the time I don’t need to, but it just saves me having to remember something when I do need the option.

[image]

Would this 1DX shot be served any better through UNCOMPRESSED raw recording?  Most likely NO – why?  Low Dynamic Range caused in the main by flat low contrast lighting means no deep dark shadows and nothing approaching a highlight.

I don’t see it as a costly option in terms of buffer capacity or on-board storage, and when it comes to processing I would much rather have a surfeit of sensor data rather than a lack of it – no matter how small that deficit might be.

Lossless raw file compression has NO positive effect on your images; its sole purpose in life is to allow you to fit more shots on the storage media – that’s it, pure and simple.  If you have the option to shoot uncompressed then do so, and buy a bigger card!

What pisses me off about Canon is that it would only take, I’m sure, a firmware upgrade to give the 1DX et al the ability to record with zero raw file compression – and, whether needed or not, it would stop miserable grumpy gits like me banging on about it!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.