Monitors & Color Bit Depth

Monitors and Color Bit Depth – yawn, yawn – Andy’s being boring again!

Well, perhaps I am, but I know ‘stuff’ you don’t – and I’m telling YOU that you need to know it if you want to get the best out of your photography – so there!

Let me begin by saying that NOTHING monitor-related has any effect on your captured images.  But  EVERYTHING monitor-related DOES have an effect on the way you SEE your images, and therefore definitely has an effect on your image adjustments and post-processing.

So anything monitor-related can have either a positive or negative effect on your final image output.

Bit Depth

I’m going to begin with a somewhat disconnected analogy, but bear with me here.

We live in the ‘real and natural world’, and everything that we see around us is ANALOGUE.  Nature exists on a natural curve and is full of infinite variation. In the digital world though, everything has to be put in a box.

We’ll begin with two dogs – a Labrador and a Poodle.  In this instance both the natural and digital worlds can cope with the situation, because nature just regards them for what they are, and digital can put the Labrador in a box named ‘Labrador’ and the Poodle in a separate box just for Poodles.

Let’s now imagine for a fleeting second that Mr. Lab and Miss Poodle ‘get jiggy’ with the result of dog number 3 – a Labradoodle.  Nature just copes with the new dog because it sits on nature’s ‘doggy curve’ halfway between Mum and Dad.

But digital is having a bloody hissy-fit in the corner because it can’t work out what damn box to put the new dog in.  The only way we can placate digital is to give it another box, one for 50% Labrador and 50% Poodle.

Now if our Labradoodle grows up a bit then starts dating and makes out with another Labrador then we end up with a fourth dog that is 75% Labrador and 25% Poodle.  Again, nature just takes all in her stride, but digital is now having a stroke because it’s got no box for that gene mix.

Every time we give digital a new box we have effectively given it a greater bit depth.

Now imagine this process of cross-breed gene dilution continues until the glorious day arrives when a puppy is born that is 99% Labrador and only 1% Poodle.  It’ll be obvious to you that by this time digital has a flaming warehouse full of boxes that can cope with just about any gene mix, but alas, the last time bit depth was increased was to accommodate 98% Lab 2% Poodle.

Digital is by now quite old and grumpy and just can’t be arsed anymore, so instead of filling in triplicate forms to request a bit depth upgrade it just lumps our new dog in the same classification box as the previous one.

So our new dog is put in the wrong box.

Digital hasn’t been slap-dash though and put the pup in any old box, oh no.  Digital has put the pup in the nearest suitable box – the box with the closest match to reality.

Please note that the above mentioned boxes are strictly metaphorical, and no puppies were harmed during the making of this analogy.

Digital images are made up of pixels, and a pixel can be thought of as a data point.  That single data point contains information about luminance and colour.  The precision of that information is determined by the bit depth of the data.

Very little in our ‘real world’ has a surface that looks flat and uniform.  Even a supposedly flat, uniform white wall on a building has subtle variations and graduations of colour and brightness/luminance caused by the angular direction of light and its own surface texture. That’s nature for you in the analogy above.

We are all familiar with RGB values for white being 255,255,255 and black being 0,0,0, but those are only 8 bit values.

8 bit allows for 256 discrete levels of information (or gene mix classification boxes for our Labradoodles), and a scale from 0 to 255 contains 256 values – think about it for a second!

At all bit depth values black is always 0,0,0 but white is another matter entirely:

8 bit = 256 discrete values so image white is 255,255,255

10 bit = 1,024 discrete values so image white is 1023,1023,1023

12 bit = 4,096 discrete values so image white is 4095,4095,4095

14 bit = 16,384 discrete values so image white is 16383,16383,16383

15 bit = 32,768 discrete values so image white is 32767,32767,32767

16 bit = 65,536 discrete values so image white should be 65535,65535,65535 – but it isn’t – more later!

And just for giggles here are some higher bit depth potentials:

24 bit = 16,777,216 discrete values

28 bit = 268,435,456 discrete values

32 bit = 4,294,967,296 discrete values

So you can see a pattern here.  If we double the bit depth we square the number of discrete values, and if we halve the bit depth we are left with the square root of the number we started with.
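
If you want to sanity-check that pattern, the arithmetic is nothing more exotic than 2 raised to the power of the bit depth – a quick Python sketch:

```python
# Discrete levels per channel for a given bit depth: 2 ** bits
for bits in (8, 10, 12, 14, 15, 16):
    levels = 2 ** bits
    print(f"{bits} bit = {levels:,} levels, so image white = {levels - 1}")

# Doubling the bit depth squares the number of levels: 65,536 = 256 squared
assert 2 ** 16 == (2 ** 8) ** 2
```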

And if we convert to a lower or smaller bit depth, digital has fewer boxes to put the different dogs into, so Labradoodles of varying genetic make-ups end up in the same boxes.  They are no longer sorted in such a precise manner.

The same applies to our images. Where we had two adjacent pixels of slightly differing value in 16 bit, those same two adjacent pixels can very easily become totally identical if we do an 8 bit conversion and so we lose fidelity of colour variation and hence definition.
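
As a rough sketch of that loss (the pixel values below are made up purely for illustration), here are two 16 bit tones that stay distinct right up until the 8 bit conversion merges them:

```python
# Two neighbouring 16 bit pixel values - hypothetical numbers for illustration only
a16, b16 = 40000, 40100            # two distinct tones in 16 bit

def to_8bit(v16):
    """Scale a 0..65535 value down to the 0..255 range."""
    return round(v16 * 255 / 65535)

print(to_8bit(a16), to_8bit(b16))  # both come out as 156 - the tones are now identical
```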

This is why we should archive our processed images as 16 bit TIFFs instead of 8 bit JPEGs!

In an 8 bit image we have black 0,0,0 and white 255,255,255 and ONLY 254 available shades or tones to graduate from one to the other.


Whereas, in a 16 bit image black is 0,0,0 and white is 65535,65535,65535 with 65,534 intervening shades of grey to make the same black to white transition:


But we have to remember that whatever the bit depth value is, it applies to all 3 colour channels:


So a 16 bit image should contain a potential of 65536 values per colour channel.

How Many Colours?

So how many colours can our bit depth describe Andy?

Simple answer is to cube the number of discrete values per channel, so:

8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.

10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours or EXACTLY 64x the value of 8 bit!

16 bit = 65536x65536x65536 = 281,474,976,710,656 colours. Or does it?
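
The cube arithmetic is easy enough to verify yourself – a minimal sketch (whether Photoshop actually hands you the full 16 bit version is the subject of the next section):

```python
# Total colours = (levels per channel) cubed, because there are 3 colour channels
for bits in (8, 10, 16):
    levels = 2 ** bits
    print(f"{bits} bit: {levels:,}^3 = {levels ** 3:,} colours")
# 8 bit:  256^3    = 16,777,216
# 10 bit: 1,024^3  = 1,073,741,824
# 16 bit: 65,536^3 = 281,474,976,710,656
```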

Confusion Reigns Supreme

Now here’s where folks get confused.

Photoshop does not WORK  in 16 bit, but in 15 bit + 1 level.  Don’t believe me? Go New Document, RGB, 16 bit and select white as the background colour.

Open up your info panel, stick your cursor anywhere in the image area and look at the 16 bit RGB read out and you will see a value of 32768 for all 3 colour channels – that’s 15 bit folks! Now double the 32768 value – yup, that’s right, you get 16 bit or 65,536!

Why does Photoshop do this?  Simple answer is ‘for speed’ – or so they say at Adobe!  There are numerous other reasons that you’ll find on various forums etc – signed and unsigned integers, mid-points, float-points etc – but really, do we care?

Things are what they are, and rumour has it that once you hit the save button on a 16 bit TIFF it does actually save out at 16 bit.

So how many potential colours in 16 bit Photoshop?  Dunno! But it’ll be somewhere between 35,184,372,088,832 and 281,474,976,710,656, and to be honest either value is plenty enough for me!

The second line of confusion usually comes from PC users under Windows, and the Windows 24 bit and 32 bit ‘True Color’ display settings that a lot of PC users mistakenly think mean something they SERIOUSLY DO NOT!

Windows 24 bit means 24 bit TOTAL – in short, 8 bits per channel, not 24!

Windows 32 bit True Color is something else again. Correctly known as 32 bit RGBA it contains 4 channels of 8 bits each; three 8 bit colour channels and an 8 bit Alpha channel used for transparency.

The same 32 bit RGBA colour (Apple calls it ARGB) has been utilised on Mac OS forever, but most Mac users never questioned it because it’s not quite so obvious in OSX as it is in Windows unless you look at the Graphics/Displays section of your System report – and who the Hell ever goes there apart from twats like me:


Above you can see the pixel depth being reported as 32 bit colour ARGB8888 – that’s Apple-speak for Windows 32 bit True Colour RGBA.  But like a lot of ‘things Mac’ the numbers give you the real information.  The channels are ordered Alpha, Red, Green, Blue and the four ‘8’s give you the bit depth of each pixel, or as Apple put it ‘pixel depth’.

However, in the latter part of 2015 Apple gave OSX 10.11 El Capitan a 10 bit colour capability, though hardly anyone knew – including ‘yours truly’.  I’ve never understood why they kept it ‘on the down-low’, but there was no fanfare, that’s for sure.


Now you can see the pixel depth being reported as 30 bit ARGB2101010 – meaning that the transparency Alpha channel has been reduced from 8 bit to 2 bit and the freed-up 6 bits have been distributed evenly between the Red, Green and Blue colour channels.
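
If the ‘8888’ and ‘2101010’ naming seems cryptic, this hedged Python sketch shows how the 32 bits per pixel are carved up in each layout.  The bit positions follow the conventional A-R-G-B reading of the name; the actual in-memory byte order on any given system may differ.

```python
# ARGB8888: 8 bit alpha + three 8 bit colour channels = 32 bits per pixel
def unpack_argb8888(pixel):
    a = (pixel >> 24) & 0xFF
    r = (pixel >> 16) & 0xFF
    g = (pixel >> 8) & 0xFF
    b = pixel & 0xFF
    return a, r, g, b           # each colour channel 0..255 (256 levels)

# ARGB2101010: 2 bit alpha + three 10 bit colour channels = still 32 bits per pixel
def unpack_argb2101010(pixel):
    a = (pixel >> 30) & 0x3
    r = (pixel >> 20) & 0x3FF
    g = (pixel >> 10) & 0x3FF
    b = pixel & 0x3FF
    return a, r, g, b           # each colour channel 0..1023 (1024 levels)
```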

Monitor Display

Your computer has a maximum display bit depth output capability that is defined by:

  • a. the operating system
  • b. the GPU fitted

Your system might well support 10 bit colour, but will only output 8 bit if the GPU is limited to 8 bit.

Likewise, you could be running a 10 bit GPU but if your OS only supports 8 bit, then 8 bit is all you will get out of the system (that’s if the OS will support the GPU in the first place).

Monitors have their own panel display bit depth, and panel bit depth costs money.

A lot of LCD panels on the market are only capable of displaying 8 bit, even if you run an OS and GPU that output 10 bit colour.

And then again certain monitors such as Eizo ColorEdge, NEC MultiSync and the odd BenQ for example, are capable of displaying 10 bit colour from a 10 bit OS/GPU combo, but only if the monitor-to-system connection has 10 bit capability.  This basically means a DisplayPort or HDMI connection.
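
In other words the chain only ever delivers its weakest link.  A trivial sketch of that idea (the values are hypothetical examples, not read from a real system):

```python
# The display chain delivers the LOWEST bit depth of any link in it.
chain = {
    "operating system": 10,
    "GPU": 10,
    "connection (DisplayPort/HDMI)": 10,
    "monitor panel": 8,          # an 8 bit panel caps the whole chain
}
effective_bit_depth = min(chain.values())
print(f"Effective display bit depth: {effective_bit_depth} bit")   # 8 bit
```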

As photographers we really should be looking to maximise our visual capabilities by viewing the maximum number of colour graduations captured by our cameras.  This means operating with the greatest available colour bit depth on a properly calibrated monitor.

Just to reiterate the fundamental difference between 8 bit and 10 bit monitor display pixel depth:

  • 8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.
  • 10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours.

So 10 bit colour allows us to see exactly 64 times more colour on our display than 8 bit colour. (please note the word ‘see’).

It certainly does NOT add a whole new spectrum of colour to what we see; nor does it ‘add’ anything physical to our files.  It’s purely a ‘visual’ improvement that allows us to see MORE of what we ALREADY have.

I’ve made a pound or two from my images over the years and I’ve been happily using 8 bit colour right up until I bought my Eizo the other month, even though my system has been 10 bit capable since I upgraded the graphics card back in August last year.

The main reason for the upgrade was NOT 10 bit capability either, but the 4GB of ‘heavy lifting power’ it gives Photoshop.

But once I splashed the cash on a 10 bit display I of course made instant use of the system’s 10 bit capability and all its benefits – of which there’s really only one!

The Benefits

The ability to see 64 times more colour means that I can see 64x more subtle variations of the same colours I could see before.

With my wildlife images I find very little benefit if I’m honest, but with landscapes – especially sunset and twilight shots – it’s a different story.  Sunset and twilight images have massive graduations of similar hues.  Quite often an 8 bit display will not be able to display every colour variant in a graduation and so will replace it with its nearest neighbour that it can display – (putting the 99% Lab pup in the 98% Lab box!).

This leads to a visual ‘banding’ on the display:


The banding in the shot above is greatly exaggerated but you get the idea.

A 10 bit colour display also helps me to soft proof slightly faster for print too, and for the same reason.  I can now see much more subtle shifts in proofing when making the same tiny adjustments as I made when using 8 bit.  It doesn’t bring me to a different place, but it allows me to get there faster.

For me the switch to 10 bit colour hasn’t really improved my product, but it has increased my productivity.

If you can’t afford a 10 bit display then don’t stress as 8 bit ARGB has served me well for years!

But if you do still need a new monitor then PLEASE be careful what you are buying, as some displays are not even true 8 bit.

A good place to research your next monitor (if not taking the Eizo or NEC 10 bit route) is TFT Central.

If you select the panel size you fancy and then look at the Colour Depth column you will see the bit depth values for the display.

You should also check the Tech column and only consider H-IPS panel tech.

Beware of 10 bit panels that are listed as 8 bit + FRC, and 8 bit panels listed as 6 bit + FRC.

FRC is the acronym for FRAME RATE CONTROL – also known as Temporal Dithering.  In very simple terms FRC involves making the pixels flash different colours at you at a frame rate faster than your eye can see.  Therefore you are fooled into seeing what is to all intents and purposes an out ‘n out lie.

It’s a tech that’s okay for gamers and watching movies, but certainly not for any form of colour management or photography workflow.

Do not entertain the idea of anything that isn’t an IPS, H-IPS or other IPS derivative.  IPS is the acronym for In Plane Switching technology.  This is the type of panel that doesn’t visually change if you move your head when looking at it!

So there we go, that’s been a bit of a ramble hasn’t it, but I hope now that you all understand bit depth and how it relates to a monitor’s display colour.  And let’s not forget that you are all up to speed on Labradoodles!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Good Contrast Control in Lightroom CC

Contrast Control in Lightroom

Learning how to deploy proper contrast control in Lightroom brings with it two major benefits:

  • It allows you to reveal more of your camera sensor’s dynamic range.
  • It will allow you to reveal considerably more image detail.


I have posted on this subject before, under the guise of neutralising Lightroom’s ‘hidden background adjustments’.  But as Lightroom CC 2015 evolves, trying to ‘nail’ the best way of doing something becomes like trying to hit a moving target.

For the last few months I’ve been using this (for me) new method – and to be honest it works like a charm!

It involves the use of the ‘zero’ preset together with a straight process version swap around, as illustrated in the before/after shot above and in the video linked below.  This video is best viewed on my YouTube channel:

The process might seem a little tedious at first, but it’s really easy when you get used to it, and it works on ALL images from ALL cameras.

Here is a step-by-step guide to the various Lightroom actions you need to take in order to obtain good contrast control:

Contrast Control Workflow Steps:

1. Develop Module Presets: Choose ZEROED
2. Camera Calibration Panel: Choose CAMERA NEUTRAL
3. Camera Calibration Panel: Choose Process Version 2010
4. Camera Calibration Panel: Choose Process Version 2012
5. Basic Panel: Double Click Exposure (goes from -1 to 0)
6. Basic Panel: Adjust Black Setting to taste if needed.
7. Detail Panel: Reset Sharpening to default +25
8. Detail Panel: Reset Colour Noise to default +25
9. Lens Corrections Panel: Tick Remove Chromatic Aberration.

Now that you’ve got good contrast control you can set about processing your image – just leave the contrast slider well alone!

Why is contrast control important, and why does it ‘add’ so much to my images Andy?

We are NOT really reducing the contrast of the raw file we captured.  We are simply reducing the EXCESSIVE CONTRAST that Lightroom ADDS to our files.

  • Lightroom typically ADDS a +33 contrast adjustment but ‘calls it’ ZERO.
  • Lightroom typically ADDS a medium contrast tone curve but ‘calls it’ LINEAR.

Both of these are contrast INCREASES, and any increase in contrast can be seen as a ‘compression’ of the tonal space between BLACK and WHITE.  This is a dynamic range visualisation killer because it crushes the ends of the midtone range.

It’s also a detail killer, because 99% of the subject detail is in the mid tone range.  Typically the Lightroom tonal curve range for midtones is 25% to 75%, but Lightroom is quite happy to accept a midtone range of 10% to 90% – check those midtone arrow adjusters at the bottom edge of the parametric tone curve!

I hope you find this post useful folks, and don’t forget to watch the video at full resolution on my YouTube Channel.


Photoshop Save for Web

Save for Web in Photoshop CC – where the Chuff has it gone?

“Who’s moved my freakin’ cheese?”

Adobe have moved it……..

For years Photoshop has always offered the same ‘Save for Web’ or ‘Save for Web & Devices’ option and dialogue box:


The traditional route to the ‘Save for Web’ dialogue in all versions of Photoshop prior to CC 2015.

But Adobe have embarked on a cheese-moving exercise with CC 2015 and moved ‘save for web’ out of the traditional navigation pathway:


Adobe have ‘moved your cheese’ to here, though the dialogue and options are the same.

If we take a closer look at that new pathway:


…we see that wonderful Adobe term ‘Legacy’ – which secretly means crap, shite, old-fashioned, outdated, substandard and scheduled for abandonment and/or termination.

‘THEY’ don’t want you to use it!

I have no idea why they have done this, though there are plenty of excuses being posted by Adobe on the net.  But what is interesting is this page HERE and more to the point this small ‘afterthought’:


That sounds really clever – especially the bit about ‘may be’……. let’s chuck colour management out the freakin’ window and be done!

So if we don’t use the ‘legacy’ option of save for web, let’s see what happens.  Here’s our image, in the ProPhotoRGB colour space open in Photoshop CC 2015:


So let’s try the Export>Quick Export as JPG option and bring the result back into Photoshop:


Straight away we can see that the jpg is NOT tagged with a colour space, but it looks fine inside the Photoshop CC 2015 work space:


“Perfect” – yay!…………NOT!

Let’s open it with an internet browser……


Whoopsy – doopsy…!  Looks like a severe colour management problem is happening somewhere……..but Adobe did tell us:


Might the Export Preferences help us:


In a word……..NO

Let’s try Export>Export As:


Oh Hell No!

If we open the original image in Photoshop CC 2015 in the ProPhotoRGB colour space and then go Edit>Convert to Profile and select sRGB; then select Export>Quick Export as JPG, the resulting image will look fine in a browser.  But it will still be ‘untagged’ with any colour space – which is never a good idea.

And if you’ve captioned and keyworded the image then all that hard work is lost too.

So if you must make your web jpeg images via Photoshop you will only achieve a quick and accurate workflow by using the Save for Web (Legacy) option.  That way you’ll have a correctly ‘tagged’ and converted image complete with all your IPTC keywords, caption and title.
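
If you want to double-check whether any exported JPEG actually carries an embedded profile, one quick way outside Photoshop is a couple of lines with the Pillow library – ‘my_export.jpg’ is just a placeholder for your own file:

```python
from PIL import Image   # pip install Pillow

img = Image.open("my_export.jpg")        # placeholder filename
icc = img.info.get("icc_profile")        # empty/None when the file is untagged

if icc:
    print(f"Tagged: embedded ICC profile found ({len(icc)} bytes)")
else:
    print("Untagged: no embedded colour profile - a browser will just guess")
```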

Of course you could adopt the same work flow as me, and always export as jpeg out of Lightroom; thus avoiding this mess entirely.

I seriously don’t know what the devil Adobe are thinking of here, and doubtless there is or will be a work around for the problem, but whatever it is it’ll be more work for the photographer.

Adobe – if it ain’t broke then don’t fix it !!


Brilliant Supreme Lustre Ultimate Paper

Brilliant Supreme Lustre Paper Review

(26/07/2015: Important update added at end of post re: Canon Pixma Pro 1 .icc profile from the Brilliant website).

Printing an image is the final part of the creative process, and I don’t think there are many of my peers who would disagree with me on that score.

Whenever I’m teaching printing, be it a 1to1 session or a workshop group, I invariably get asked what my recommendation for a good general purpose printing paper would be – one that would suit the widest spread of image styles and subjects.

Until quite recently that recommendation was always the same – Permajet Oyster.

It’s a wide gamut paper – it reproduces a lot of colour and hue variation – that has a high level of brightness and is really easy to soft-proof to in Lightroom.  And even though it’s not absolutely colour neutral, its natural base tint isn’t so cool that it destroys the atmosphere in a hazy orange sunset seascape.

But, after months of printing and testing I have now changed my mind – and for good reason.

Calumet Brilliant Supreme Lustre Paper

Brilliant Supreme Lustre Ultimate paper from Calumet is my new recommendation for general printing, and for anyone who wants printing with the minimum of fuss and without the hassle of trying to decide what paper to choose.

Let’s look at how the two papers stack up:

Paper Weight:

Permajet Oyster 271gsm

Brilliant Supreme Lustre Ultimate 300gsm

A heavier paper is a good thing in my book; heavier means thicker, and that means a bit more structural stability; a boon when it comes to matting and mounting, and general paper handling.

Paper Tint & Base Neutrality:

Permajet Oyster:     RGB 241,246,243

Brilliant Supreme Lustre Ultimate:     RGB 241,245,245

The above RGB values are measured using a ColorMunki Photo in spot colour picker mode, as are the L,a,b values below.

L,a,b Luminosity Value:

Permajet Oyster:     96.1

Brilliant Supreme Lustre Ultimate:     95.8

So both papers have the same red value in their ‘paper white’, but both have elevated green and blue values, and yes, green + blue = cyan!

But the green/blue ratios are different – they are skewed in the Permajet Oyster, but 1:1 in the Brilliant paper – so where does this leave us in terms of paper proofing?
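
Before we look at the proofing itself, a trivial sketch just to put numbers on that green/blue skew, using the measured paper whites quoted above:

```python
# Measured 'paper white' values from the ColorMunki readings above (8 bit RGB)
papers = {
    "Permajet Oyster": (241, 246, 243),
    "Brilliant Supreme Lustre Ultimate": (241, 245, 245),
}

for name, (r, g, b) in papers.items():
    red_deficit = (g + b) / 2 - r    # how far red falls below green/blue (the cyan cast)
    print(f"{name}: red deficit {red_deficit}, green/blue skew {g - b}")
# Oyster:    red deficit 3.5, skew 3  -> cyan cast, biased towards green
# Brilliant: red deficit 4.0, skew 0  -> cyan cast, but evenly balanced
```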

The image below is a fully processed TIFF open in Lightroom and ready for soft-proofing:


Now if we load the image into the Permajet Oyster colour space – that’s all soft proofing is by the way – we can see a number of changes, all to the detriment of the image:


The image has lost luminance and become slightly cooler overall, but there is a big colour ‘skew’ in the browns, reds and oranges of both the eagle and the muted background colours.

Now look at what happens when we send the image into the Brilliant Supreme Lustre Ultimate colour space:


Yes the image has lost luminance, and there is an overall colour temperature change; but the important thing is that it’s nowhere near as skewed as it was in the Permajet Oyster soft-proofing environment.

The more uniform the colour change, the easier it is to remove!


The only adjustments I’ve needed to make to put me in the middle of the right ball park are a +6 Temp and +2 Clarity – and we are pretty much there, ready to press the big “print me now” button.

The image below just serves to show the difference between the proof adjusted and unadjusted image:


But here is the same image soft-proofed to pretty much the same level, but for Permajet Oyster paper – click the image to see it at full size, just look at the number of adjustments I’ve had to do to get basically the same effect:


Couple of things – firstly, apologies for the somewhat violent image – the wife just pointed that out to me!  Secondly though, after testing various images of vastly differing colour distributions and gamuts, I consistently find I’m having to do less work in soft-proofing with the Brilliant Supreme Lustre Ultimate paper than with its rival.  Though I must stress that the adjustments don’t always follow the same direction, for obvious reasons.

Media Settings:

These are important.  The Oyster paper has a media setting recommendation on Epson printers (someone once told me there were other makes that used bubbles – ewee, yuck) of Premium Gloss Photo Paper or PGPP.  But I find that PSPP (Premium Semi Gloss Photo Paper) works best on my 4800, and I know that it’s the recommended media setting for the Epson SC-P600.

See update below for Canon Pixma Pro 1 media settings and new updated .icc profile

Conclusion:

Buy a 25 sheet box A3 HERE or 50 sheet box A4 size HERE

They say time is money, so anything that saves time is a no-brainer, especially if it costs no more than its somewhat more labour-intensive alternative.


The gamuts or colour spaces of the two papers’ ‘canned profiles’ are shown above – the red plot is the Brilliant Supreme Lustre Ultimate and the white is Oyster – both profiles being for the Epson 4800.  Yes, the Calumet paper gamut is slightly smaller, but in real terms, with real-world images and the relative colorimetric rendering intent, I’ve not noticed any shortcomings whatsoever.

I have little doubt that the gamut of the paper would be expanded further with the application of a custom profile, but that’s a whole other story.

Running at around £1 per sheet of A3 it’s no more expensive than any other top quality general printing paper, and it impresses the heck out of me with its relatively neutral base tint.

So easy to print to – so buy some!

I’ll be demonstrating just how well this paper works at a series of Print Workshops for Calumet later in the year, where we’ll be using the Epson SC-P600 printer, which is the replacement for the venerable R3000.

UPDATE:

Canon Pixma Pro One .ICC Profile

If anyone has tried using the Lustre profile BriLustreCanPro1.icc that was available for download on the Brilliant website, then please STOP trying to use it – it’s an abomination and whoever produced it should be shot.

I discovered just how bad it was when I was doing a print 1to1 day and the client had a PixmaPro1 printer.  I spoke to Andy Johnson at Calumet and within a couple of days a new profile was sorted out and it works great.

Now that same new profile is available for download at the Brilliant website HERE – just click and download the zip file.  In the file you will find the new .icc profile which goes by the name of BriLustreCanonPro1_PPPL_1.icc

I got them to add the media settings acronym in the profile name – a la Permajet – so set the paper type to Photo Paper Pro Lustre when using this paper on the Pixma Pro 1.


Lumenzia for Wildlife

The Lumenzia Photoshop extension

Yet more on the usefulness of the Lumenzia Photoshop extension – the shortcut to great-looking images of all types and styles.

I had an email from client and blog follower David Sparks after my last post about this mighty useful Photoshop tool.

He sent these before and after rail shots:


Before adding Lumenzia. Click for larger view.


After adding Lumenzia. Click for larger view.


Comparison overlay – see how the left side of the image has that extra presence – and that’s just with the click of a couple of buttons in the Lumenzia GUI. Click to view larger.

Here is what David had to say in his email:

Andy, here is a before and after.  Processing was much, much faster than usual, using Lumenzia.

Thanks for bringing it to my attention….I’m working my way through your Image Processing in LR4 & Photoshop + LR5 bundle and enjoying it very much.

And as my friend and blog follower Frank Etchells put it:

Excellent recommendation this Andy. Bought it first time from your previous posting… at just over £27 it’s marvellous :)

What gets me puzzled is the fact that these Lumenzia posts have had over 500 separate page views in the last few days but less than 3% of you have bought it – WTF are you guys waiting for…

Get it BOUGHT – NOW – HERE

UPDATE: Greg Benz (the plugin author) has launched a comprehensive Lumenzia training course – see my post here for more information.


Camera Calibration

Custom Camera Calibration

The other day I had an email fall into my inbox from leading UK online retailer…whose name escapes me but is very short… that made my blood pressure spike.  It was basically offering me 20% off the cost of something that will revolutionise my photography – ColorChecker Passport Camera Calibration Profiling software.

I got annoyed for two reasons:

  1. Who the “f***” do they think they’re talking to sending ME this – I’ve forgotten more about this colour management malarkey than they’ll ever know….do some customer research you idle bastards and save yourselves a mauling!
  2. Much more importantly – tens of thousands of you guys ‘n gals will get the same email and some will believe the crap and buy it – and you will get yourselves into the biggest world of hurt imaginable!

Don’t misunderstand me, a ColorChecker Passport makes for a very sound purchase indeed and I would not like life very much if I didn’t own one.  What made me seethe is the way it’s being marketed, and to whom.

Profile all your cameras for accurate colour reproduction…..blah,blah,blah……..

If you do NOT fully understand the implications of custom camera calibration you’ll be in so much trouble when it comes to processing you’ll feel like giving up the art of photography.

The problems lie in a few areas:

First, a camera profile is a SENSOR/ASIC OUTPUT profile – think about that a minute.

Two things influence sensor/ASIC output – ISO and lens colour shift.  Yep, that’s right, no lens is colour-neutral, and all lenses produce colour shifts either by tint or spectral absorption.  And higher ISO settings usually produce a cooler, bluer image.

Let’s take a look at ISO and its influence on custom camera calibration profiling.  I’m using a far better bit of software for doing the job – “IN MY OPINION” – the Adobe DNG Profile Editor, a free download for Mac and Windows – but you do need the ColorChecker Passport itself!

I prefer the Adobe product because I found the camera calibration profiles produced by the ColorChecker software were, well, pretty vile – especially in terms of increased contrast; not my cup of tea at all.


5 images shot at 1 stop increments of ISO on the same camera/lens combination.

Now this is NOT a demo of software – a video tutorial of camera profiling will be on my next photography training video coming sometime soon-ish, doubtless with a somewhat verbose narrative explaining why you should or should not do it!

Above, we have 5 images shot on a D4 with a 24-70 f2.8 at 70mm, under consistent overcast daylight, at 1 stop increments of ISO between 200 and 3200.

Below, we can see the resultant profile and distribution of known colour reference points on the colour wheel.


Here’s the 200 ISO custom camera calibration profile – the portion of interest to us is the colour wheel on the left and the points of known colour distribution (the black squares and circled dot).

Next, we see the result of the image shot at 3200 ISO:


Here’s the result of the custom camera profile based on the shot taken at 3200 ISO.

Now let’s super-impose one over t’other – if ISO doesn’t matter to a camera calibration profile then we should see NO DIFFERENCE………….


The 3200 ISO profile colour distribution overlaid onto the 200 ISO profile colour distribution – it’s different and they do not match up.

……..well would you bloody believe it!  Embark on custom camera calibration profiling of your camera and then apply that profile to an image shot with the same lens under the same lighting conditions but at a different ISO, and your colours will not be right.

So now my assertions about ISO have been vindicated, let’s take a look at skinning the cat another way, by keeping ISO the same but switching lenses.

Below is the result of a 500mm f4 at 1000 ISO:


Profile result of a 500mm f4 at 1000 ISO

And below we have the 24-70mm f2.8 @ 70mm and 1000 ISO:


Profile result of a 24-70mm f2.8 @ 70mm at 1000 ISO

Let’s overlay those two and see if there’s any difference:


Profile results of a 500mm f4 at 1000 ISO and the 24-70 f2.8 at 1000 ISO – as massively different as day and night.

Whoops….it’s all turned to crap!

Just take a moment to look at the info here.  There is movement in the orange/red/red magentas, but even bigger movements in the yellows/greens and the blues and blue/magentas.

Because these comparisons are done simply in Photoshop layers with the top layer at 50% opacity you can even see there’s an overall difference in the Hue and Saturation slider values for the two profiles – the 500mm profile is 2 and -10 respectively and the 24-70mm is actually 1 and -9.

The basic upshot of this information is that the two lenses apply a different colour cast to your image AND that cast is not always uniformly applied to all areas of the colour spectrum.

And if you really want to “screw the pooch” then here’s the above comparison side by side with the 500 f4 1000 ISO against the 24-70mm f2.8 200 ISO view:


500mm f4/24-70mm f2.8 1000 ISO comparison versus 500mm f4 1000 ISO and 24-70mm f2.8 200 ISO.

A totally different spectral distribution of colour reference points again.

And I’m not even going to bother showing you that the same camera/lens/ISO combo will give different results under different lighting conditions – you should by now be able to envisage that little nugget yourselves.

So, Custom Camera Calibration – if you do it right then you’ll be profiling every body/lens combo you have, at every conceivable ISO value and lighting condition.  It’s one of those things that if you can’t do it all then, in most cases, you’d be best off not doing it at all.

I can think of a few instances where I would do it as a matter of course, such as scientific work, photo-microscopy, and artwork photography/copystand work etc, but these are well outside the remit of more normal photographic practice.

As I said earlier, the Passport device itself is worth far more than its weight in gold – set up and light your shot and include the Passport device in a prominent place.  Take a second shot without it and use shot 1 to custom white balance shot 2 – a dead easy process that makes the device invaluable for portrait and studio work etc.

But I hope by now you can begin to see the futility of trying to use a custom camera calibration profile on a “one size fits all” basis – it just won’t work correctly; and yet for the most part this is how it’s marketed – especially by third party retailers.


Desktop Printing 101

Understanding Desktop Printing – part 1

Desktop printing is what all photographers should be doing.

Holding a finished print of your epic image is the final part of the photographic process, and should be enjoyed by everyone who owns a camera and loves their photography.

But desktop printing has a “bad rap” amongst the general hobby photography community – a process full of cost, danger, confusion and disappointment.

Yet there is no need for it to be this way.

Desktop printing is not a black art full of ‘ju-ju men’ and bear-traps  – indeed it’s exactly the opposite.

But if you refuse to take on board a few simple basics then you’ll be swinging in the wind and burning money for ever.

Now I’ve already spoken at length on the importance of monitor calibration & monitor profiling on this blog HERE and HERE so we’ll take that as a given.

But in this post I want to look at the basic material we use for printing – paper media.

Print Media

A while back I wrote a piece entitled “How White is Paper White” – it might be worth you looking at this if you’ve not already done so.

Over the course of most of my blog posts you’ll have noticed a recurring undertone: contrast needs controlling.

Contrast is all about the relationship between blacks and whites in our images, and the tonal separation between them.

This is where we, as digital photographers, can begin to run into problems.

We work on our images via a calibrated monitor, normally calibrated to a gamma of 2.2 and a D65 white point.  Modern monitors can readily display true black and true white (Lab 0 to Lab 100/RGB 0 to 255 in 8 bit terms).

Our big problem lies in the fact that you can print NEITHER of these luminosity values in any of the printer channels – the paper just will not allow it.

A paper’s ability to reproduce white is obviously limited by the brightness and background colour tint of the paper itself – there is no such thing as ‘white’ paper.

But a paper’s ability to render ‘black’ is the other vitally important consideration – and it comes as a major shock to a lot of photographers.

Let’s take 3 commonly used Permajet papers as examples:

  • Permajet Gloss 271
  • Permajet Oyster 271
  • Permajet Portrait White 285

The following measurements have been made with a ColorMunki Photo & Colour Picker software.

L* values are the luminosity values in the L*ab colour space where 0 = pure black (0RGB) and 100 = pure white (255RGB)

Gloss paper:

  • Black/Dmax = 4.4 L* or 14,16,15 in 8 bit RGB terms
  • White/Dmin = 94.4 L* or 235,241,241 (paper white)

From these measurements we can see that the deepest black we can reproduce has an average 8bit RGB value of 15 – not zero.

We can also see that “paper white” has a leaning towards cyan – its red value is 6 points lower than the 241 green & blue values – and that cool bias carries over into the blacks too.

Oyster paper:

  • Black/Dmax = 4.7 L* or 15,17,16 in 8 bit RGB terms
  • White/Dmin = 94.9 L* or 237,242,241 (paper white)

We can see that the Oyster maximum black value is slightly lighter than the Gloss paper (L* values offer far better accuracy than 8 bit RGB values).

We can also see that the paper has a slightly brighter white value.

Portrait White Matte paper:

  • Black/Dmax = 25.8 L* or 59,62,61 in 8 bit RGB terms
  • White/Dmin = 97.1 L* or 247,247,244 (paper white)

You can see that paper white is brighter than either Gloss or Oyster.

The paper white is also deficient in blue, but the Dmax black is deficient in red.

It’s quite common to find this skewed cool/warm split between dark tones and light tones when printing, and sometimes it can be the other way around.

And if you don’t think there’s much of a difference between 247,247,244 & 247,247,247 you’d be wrong!

The image below (though exaggerated slightly due to jpeg compression) effectively shows the difference – 247 neutral being at the bottom.


247,247,244 (top) and 247,247,247 (below) – slightly exaggerated by jpeg compression.

See how much ‘warmer’ the top of the square is?

But the real shocker is the black or Dmax value:


Portrait White matte finish paper plotted against wireframe sRGB on L*ab axes.

The wireframe above is the sRGB colour space plotted on the L*ab axes; the shaded volume is the profile for Portrait White.  The sRGB profile has a maximum black density of 0RGB and so reaches the bottom of the vertical L axis.

However, that 25.8 L* value of the matte finish paper has a huge ‘gap’ underneath it.

The higher the black L* value, the larger the gap.

What does this gap mean for our desktop printing output?

It’s simple – any tones in our image that are DARKER, or have a lower L* value than the Dmax of the destination media will be crushed into “paper black” – so any shadow detail will be lost.

Equally the same can be said for gaps at the top of the L* axis where “paper white” or Dmin is lower than the L* value of the brightest tones in our image – they too will get homogenized into the all-encompassing paper white!
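
A crude way to picture that crushing is to clamp a set of image tones to the paper’s measured black and white points.  The L* limits below are the Portrait White values quoted above; the image tones are made-up examples, and a real rendering intent remaps tones more gracefully than a hard clamp – but the principle is the same:

```python
# Paper limits measured above for Portrait White (L* values)
paper_black, paper_white = 25.8, 97.1

# Hypothetical image tones (L*) that a calibrated monitor displays quite happily
image_tones = [2, 10, 20, 30, 50, 70, 90, 96, 99, 100]

# Anything darker than Dmax or brighter than Dmin ends up at paper black/white
printed = [min(max(tone, paper_black), paper_white) for tone in image_tones]
print(printed)
# [25.8, 25.8, 25.8, 30, 50, 70, 90, 96, 97.1, 97.1]
# -> three distinct shadow tones and two highlights have been homogenised
```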

Imagine we’ve just processed an image that makes maximum use of our monitor’s display gamut in terms of luminosity – it looks magnificent, and will no doubt look equally as good in any form of electronic/digital distribution.

But if we send this image straight to a printer it’ll look really disappointing, if only for the reasons mentioned above – because basically the image will NOT fit on the paper in terms of contrast and tonal distribution, let alone colour fidelity.

It’s at this point where everyone gives up the idea of desktop printing:

  • It looks like crap
  • It’s a waste of time
  • I don’t know what’s happened.
  • I don’t understand what’s gone wrong

Well, in response to the latter, now you do!

But do we have to worry about all this tech stuff ?

No, we don’t have to WORRY about it – that’s what a colour managed workflow & soft proofing are for.

But it never hurts to UNDERSTAND things, otherwise you just end up in a “monkey see monkey do” situation.

And that’s as dangerous as it can get – change just one thing and you’re in trouble!

But if you can ‘get the point’ of this post then believe me you are well on your way to understanding desktop printing and the simple processes we need to go through to ensure accurate and realistic prints every time we hit the PRINT button.



Gamma Encoding – Under the Hood

Gamma, Gamma Encoding & Decoding

Gamma – now there’s a term I see cause so much confusion and misunderstanding.

So many people use the term without knowing what it means.

Others get gamma mixed up with contrast, which is the worst mistake anyone could ever make!

Contrast controls the tonal relationship between black and white; in other words the number of grey tones.  Higher contrast spreads black into the darker mid tones and white into the upper mid tones.  In other words, both the black point and white point are moved.

The only tones that are not affected by changes in image gamma are the black point and white point – that’s why getting gamma mixed up with contrast is the mark of a “complete idiot” who should be taken outside and summarily shot before they have chance to propagate this shocking level of misunderstanding!

What is Gamma?

Any device that records an image does so with a gamma value.

Any device which displays/reproduces said image does so with a gamma value.

We can think of gamma as the proportional distribution of tones recorded by, or displayed on, a particular device.

Because different devices have different gamma values problems would arise were we to display an image that has a gamma of X on a display with a gamma of Y:

Ever wondered what a RAW file would look like displayed on a monitor without any fancy colour & gamma managed software such as LR or ACR?


A raw file displayed on the back of the camera (left) and as it would look on a computer monitor calibrated to a gamma of 2.2 & without any colour & gamma management (right).

The right hand image looks so dark because it has a native gamma of 1.0 but is being displayed on a monitor with a native gamma of 2.2

RAW file Gamma

To all intents and purposes ALL RAW files have a gamma of 1.0


Camera Sensor/Linear Gamma (Gamma 1.0)

Digital camera sensors work in a linear fashion:

If we have “X” number of photons striking a sensor photosite then “Y” amount of electrons will be generated.

Double the number of photons by doubling the amount of light, then 2x “Y” electrons will be generated.

Halve the number of photons by reducing the light on the scene by 50% then 0.5x “Y” electrons will be generated.

We have two axes on the graph; the horizontal x axis represents the actual light values in the scene, and the vertical y axis represents the output or recorded tones in the image.

So, if we apply Lab L* values to our graph axes above, then 0 equates to black and 1.0 equates to white.

The “slope” of the graph is a straight line giving us an equal relationship between values for input and output.

It’s this relationship between input and output values in digital imaging that helps define GAMMA.

In our particular case here, we have a linear relationship between input and output values and so we have LINEAR GAMMA, otherwise known as gamma 1.0.

Now let’s look at a black to white graduation in gamma 1.0 in comparison to one in what’s called an encoding gamma:


Linear (top) vs Encoded Gamma

The upper gradient is basically the way our digital cameras see and record a scene.

There is an awful lot of information about highlights and yet the darker tones and ‘shadow’ areas are seemingly squashed up together on the left side of the gradient.

Human vision does not see things in the same way that a camera sensor does; we do not see linearly.

If the amount of ambient light falling on a scene suddenly doubles we will perceive the increase as an unquantifiable “it’s got brighter”; whereas our sensors response will be exactly double and very quantifiable.

Our eyes see a far more ‘perceptually even’ tonal distribution with much greater tonal separation in the darker tones and a more compressed distribution of highlights.

In other words we see a tonal distribution more like that contained in the gamma encoded gradient.

Gamma encoding can be best illustrated with another graph:


Linear Gamma vs Gamma Encoding 1/2.2 (0.4545)

Now sadly this is where things often get misunderstood, and why you need to be careful about where you get information from.

The cyan curve is NOT gamma 2.2 – we’ll get to that shortly.

Think of the graph above as the curves panel in Lightroom, ACR or Photoshop – after all, that’s exactly what it is.

Think of our dark, low contrast linear gamma image as displayed on a monitor – what would we need to do to the linear slope  to improve contrast and generally brighten the image?

We’d bend the linear slope to something like the cyan curve.

The cyan curve is the encoding gamma 1/2.2.

There’s a direct numerical relationship between the two gamma curves – linear and 1/2.2 – and it’s a simple power law:

  •  VO = VI^γ where VO = output value, VI = input value and γ = gamma

Any input value (VI) on the linear gamma curve to the power of γ equals the output value of the cyan encoding curve; and γ as it works out equals 0.4545

  •  VI 0 → VO 0
  •  VI 0.25 → VO 0.532
  •  VI 0.50 → VO 0.729
  •  VI 0.75 → VO 0.878
  •  VI 1.0 → VO 1.0

Now isn’t that bit of maths sexy………………..yeah!
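
Here’s that power law as a few lines of Python, reproducing the values listed above and showing that the monitor’s 2.2 decode puts the values straight back where they started:

```python
ENCODE_GAMMA = 1 / 2.2     # the encoding gamma (0.4545...)
DECODE_GAMMA = 2.2         # the monitor/decoding gamma

for vi in (0.0, 0.25, 0.50, 0.75, 1.0):
    vo = vi ** ENCODE_GAMMA        # encode: lifts the mid tones
    back = vo ** DECODE_GAMMA      # decode: the display's gamma undoes the lift
    print(f"VI {vi:.2f} -> VO {vo:.3f} -> decoded {back:.2f}")
# Matches the list above to within rounding: 0.25 -> 0.532, 0.75 -> 0.878, etc.
```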

Basically the gamma encoding process remaps all the tones in the image and redistributes them in a non-linear ratio which is more familiar to our eye.

Note: the gamma of human vision is not really 1/2.2 (gamma 0.4545).  It would be near impossible to actually quantify a gamma for our eye due to the behaviour of the iris etc, but to all intents and purposes modern photographic principles regard it as being ‘similar to’ that.

So the story so far equates to this:


Gamma encoding redistributes tones in a non-linear manner.

But things are never quite so straight forward are they…?

Firstly, if gamma < 1 (less than 1) the encoding curve goes upwards – as does the cyan curve in the graph above.

But if gamma > 1 (greater than 1) the curve goes downwards.

A calibrated monitor has (or should have) a calibrated device gamma of 2.2:


Linear, Encoding & Monitor gamma curves.

As you can now see, the monitor device gamma of 2.2 is the opposite of the encoding gamma – after all, the latter is the reciprocal of the former.

So what happens when we apply the decoding gamma/monitor gamma of 2.2 to our gamma encoded image?


The net effect of Encode & Decode gamma – Linear.

That’s right, we end up back where we started!

Now, are you thinking:

  • Don’t understand?
  • We are back with our super dark image again?

Welcome to the worlds biggest Bear-Trap!

The “Learning Gamma Bear Trap”

Hands up those who are thinking this is what happens:


If your arm so much as twitched then you are not alone!

I’ll admit to being naughty and leading you to edge of the pit containing the bear trap – but I didn’t push you!

While you’ve been reading this post have you noticed the occasional random bold and underlined text?

Them’s clues folks!

The super dark images – both seascape and the rope coil – are all “GAMMA 1.0 displayed on a GAMMA 2.2 device without any management”.

That doesn’t mean a gamma 1.0 RAW file actually LOOKS like that in its own gamma environment!

That’s the bear trap!


Gamma 1.0 to gamma 2.2 encoding and decoding

Our RAW file actually looks quite normal in its own gamma environment (2nd from left) – but look at the histogram and how all those darker mid tones and shadows are piled up to the left.

Gamma encoding to 1/2.2 (gamma 0.4545) redistributes and remaps all those tones and lightens the image by pushing the curve up BUT leaves the black and white points where they are.  No tones have been added or taken away, the operation just redistributes what’s already there.  Check out the histogram.

Then the gamma decode operation takes place and we end up with the image on the right – looks perfect and ready for processing, but notice the histogram, we keep the encoding redistribution of tones.

So, are we back where we started?  No.

Luckily for us gamma encoding and decoding is all fully automatic within a colour managed work flow and RAW handlers such as Lightroom, ACR and CapOnePro etc.

Image gamma changes are required when an image is moved from one RGB colour space to another:

  • ProPhoto RGB has a gamma of 1.8
  • Adobe RGB 1998 has a gamma of 2.2
  • sRGB has an oddball gamma that equates to an average of 2.2 but is nearly 1.8 in the deep shadow tones (see the sketch just after this list).
  • Lightroom’s working colour space is ProPhoto linear, in other words gamma 1.0
  • Lightroom’s viewing space is MelissaRGB, which equates to ProPhoto with an sRGB gamma.
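
For the curious, that ‘oddball’ sRGB gamma is actually a two-part curve: a short linear toe for the deepest shadows, then a 2.4 power section offset and scaled so the overall average behaves close to 2.2.  A sketch of the standard encoding formula:

```python
def srgb_encode(linear):
    """Standard sRGB transfer curve: linear toe, then a 2.4 power-law section."""
    if linear <= 0.0031308:
        return 12.92 * linear                        # linear segment (deep shadows)
    return 1.055 * (linear ** (1 / 2.4)) - 0.055     # power-law segment

# Compare against a plain 2.2 gamma - close over most of the range,
# but the deep shadows behave quite differently:
for v in (0.001, 0.18, 0.5, 1.0):
    print(v, round(srgb_encode(v), 3), round(v ** (1 / 2.2), 3))
```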

Image gamma changes need to occur when images are sent to a desktop printer – the encode/decode characteristics are actually part and parcel of the printer profile information.

Gamma awareness should be exercised when it comes to monitors:

  • Most plug & play monitors are set to far too high a gamma ‘out the box’ – get it calibrated properly ASAP; it’s not just about colour accuracy.
  • Laptop screen gamma changes with viewing position – God they are awful!

Anyway, that just about wraps up this brief explanation of gamma; believe me it is brief and somewhat simplified – but hopefully you get the picture!


Accurate Camera Colour within Lightroom

Obtaining accurate camera colour within Lightroom 5 – in other words making the pics in your Lr Library look like they did on the back of the camera – is a problem that I’m asked about more and more since the advent of Lightroom 5 AND the latest camera marks – especially Nikon!

UPDATE NOTE: Please feel free to read this post THEN go HERE for a further post on achieving image NEUTRALITY in Lightroom 6/CC 2015

Does this problem look familiar?


Back of the camera (left) to Lightroom (right) – click to enlarge.

The image looks fine (left) on the back of the camera, fine in the import dialogue box, and fine in the library module grid view UNTIL the previews have been created – then it looks like the image on the right.

I hear complaints that the colours are too saturated and the contrast has gone through the roof, the exposure has gone down etc etc.

All the visual descriptions are correct, but what’s responsible for the changes is mostly down to a shift in contrast.

Let’s have a closer look at the problem:


Back of the camera (left) to Lightroom (right) – click to enlarge.

The increase in contrast has resulted in “choking” of the shadow detail under the wing of the Red Kite, loss of tonal separation in the darker mid tones, and a slight increase in the apparent luminance noise level – especially in that out-of-focus blue sky.

And of course, the other big side effect is an apparent increase in saturation.

You should all be aware of my saying that “Contrast Be Thine Enemy” by now – and so we’re hardly getting off to a good start with a situation like this are we…………

So how do we go about obtaining accurate camera colour within Lightroom?

Firstly, we need to understand just what’s going on inside the camera with regard to various settings, and what happens to those settings when we import the image into Lightroom.

Camera Settings & RAW files

Let’s consider all the various settings with regard to image control that we have in our cameras:

  • White Balance
  • Active D lighting
  • Picture Control – scene settings, sharpening etc:
  • Colour Space
  • Distortion Control
  • Vignette Control
  • High ISO NR
  • Focus Point/Group
  • Uncle Tom Cobbly & all…………..

All these are brought to bear to give us the post-view jpeg on the back of the camera.

And let’s not forget

  • Exif
  • IPTC

That post-view/review jpeg IS subjected to all the above image control settings, and is embedded in the RAW file; and the image control settings are recorded in what is called the raw file “header”.

It’s actually a lot more complex than that, with IFD & MakerNote tags and other “scrummy” tech stuff – see this ‘interesting’ article HERE – but don’t fall asleep!

If we ship the raw file to our camera manufacturers RAW file handler software such as Nikon CapNX then the embedded jpeg and the raw header data form the image preview.

However, to equip Lightroom with the ability to read headers from every digital camera on the planet would be physically impossible, and in my opinion, totally undesirable as it’s a far better raw handler than any proprietary offering from Nikon or Canon et al.

So, in a nutshell, Lightroom – and ACR – bin the embedded jpeg preview and ignore the raw file header settings, with the exception of white balance; the Exif & IPTC data are carried across.

However, we still need to value the post jpeg on the back of the camera because we use it to judge many things – exposure, DoF, focus point etc – so the impact of the various in-camera image settings upon that jpeg has to be assessed.

Now here’s the thing about image control settings “in camera”.

For the most part they increase contrast, saturation and vibrancy – and as a consequence can DECREASE apparent DYNAMIC RANGE.  Now I’d rather have total control over the look and feel of my image rather than hand that control over to some poxy bit of cheap post-ASIC circuitry inside my camera.

So my recommendations are always the same – all in-camera ‘picture control’ type settings should be turned OFF; and those that can’t be turned off are set to LOW or NEUTRAL as applicable.

That way, when I view the post jpeg on the back of the camera I’m viewing the very best rendition possible of what the sensor has captured.

And it’s pointless having it any other way, because when you’re shooting RAW both Lightroom and Photoshop ACR ignore those settings anyway!

Accurate Camera Colour within Lightroom

So how do we obtain accurate camera colour within Lightroom?

We can begin to understand how to achieve accurate camera colour within Lightroom if we look at what happens when we import a raw file; and it’s really simple.

Lightroom needs to be “told” how to interpret the data in the raw file in order to render a viewable preview – let’s not forget folks, a raw file is NOT a visible image, just a matrix full of numbers.

In order to do this seemingly simple job Lightroom uses process version and camera calibration settings that ship inside it, telling it how to do the “initial process” of the image – if you like, it’s a default process setting.
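To make that concrete, here’s a minimal sketch – again using the third-party rawpy library with a placeholder filename – showing that a raw file really is just an array of numbers, and that a viewable image only appears once something picks a particular set of rendering parameters:

    # Minimal sketch: a raw file is a matrix of photosite values until something
    # chooses HOW to render it. Requires the third-party 'rawpy' package.
    import rawpy

    with rawpy.imread("DSC_0001.NEF") as raw:
        mosaic = raw.raw_image  # the 'matrix full of numbers' - one value per photosite
        print(mosaic.shape, mosaic.dtype, mosaic.min(), mosaic.max())

        # ONE possible 'initial process' of those numbers into a viewable RGB image.
        # Different parameters = a different-looking render of the SAME data,
        # which is all a default camera profile amounts to.
        rgb = raw.postprocess(use_camera_wb=True, no_auto_bright=True, gamma=(2.222, 4.5))

    print(rgb.shape)  # roughly (3280, 4928, 3) for a D4 NEF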

And what do you think the default camera calibration setting is?

The ‘contrasty’ result of the Lightroom Nikon D4 Adobe Standard camera profile.

Lightroom defaults to a camera profile displayed as “Adobe Standard”, irrespective of the make and model of camera that recorded the raw file.

Importantly – you need to bear in mind that this ‘standard’ profile is camera-specific in its effect.  The displayed name is the same whether it’s handling, say, D800E NEF files or 1DX CR2 files, but the background functionality is totally different and specific to the make and model of camera.

What it says on the tin is NOT what’s inside – so to speak!

So this “Adobe Standard” has as many differing effects on the overall image look as there are cameras that Lightroom supports – is it any wonder that some of them are a bit crap?!

Some files, such as Nikon D800 and Canon 5D3 raws, seem to suffer very little if any change – in my experience at any rate – but as a D4 shooter this ‘glitch in the system’ drives me nuts.

But the work-around is so damned easy it’s not worth stressing about:

  1. Bring said image into Lightroom (as above).
  2. Move the image to the DEVELOP module
  3. Go to the bottom settings panel – Camera Calibration.
  4. Select “Camera Neutral” from the drop-down menu:

    Change camera profile from ‘Adobe Standard’ to ‘Camera Neutral’ – see the difference!

    You can see that I’ve added a -25 contrast adjustment in the basics panel here too – you might not want to do that.

  5. Scoot over to the source panel side of the Lightroom GUI and open up the Presets Panel


    Open Presets Panel (indicated) and click the + sign to create a new preset.

  6. Give the new preset a name, and then check the Process Version and Calibration options (because of the -25 contrast adjustment I’ve added here the Contrast option is ticked).
  7. Click CREATE and the new “camera profile preset” will be stored in the USER PRESETS across ALL your Lightroom 5 catalogs.
  8. The next time you import RAW files you can ADD this preset as a DEVELOP SETTING in the import dialogue box:

    Choose new preset


    Begin the import

  9. Your images will now look like they did on the back of the camera (if you adopt my approach to camera settings at least!).

You can play around with this procedure as much as you like – I have quite a few presets for this “initial process”, depending on variables such as light quality and the ISO used, to name but two (as you can see in the first screenshot at step 8 above).

The big thing I need you to understand is that the camera profile in the Camera Calibration panel of Lightroom acts merely as Lightroom’s own internal guide to the initial process settings it needs to apply to the raw file when generating its library module previews.

There’s nothing complicated, mysterious or sinister going on, and no changes are being made to your raw images – there’s nothing to change.

In fact, I don’t even bother switching to Camera Neutral half the time; I just do a rough initial process in the Develop module to negate the contrast in the image, and perhaps noise if I’ve been cranking the ISO a bit – then save that out as a preset.

Then again, there are occasions when I find switching to Camera Neutral is all that’s needed – shooting low ISO wide angle landscapes when I’m using the full extent of the sensor’s dynamic range springs to mind.

But at least now you’ve got shots within your Lightroom library that look like they did on the back of the camera, and you haven’t got to start undoing the mess Lightroom has made on import before you get on with the proper task at hand – processing – and keeping that contrast under control.

Some twat on a forum somewhere slagged this post off the other day saying that I was misleading folk into thinking that the shot on the back of the camera was “neutral” – WHAT A PRICK…………

All we are trying to do here is to make the image previews in Lr5 look like they did on the back of the camera – after all, it is this BACK OF CAMERA image that made us happy with the shot in the first place.

And by ‘neutralising’ the in-camera sharpening and colour/contrast picture control ramping, that crappy ‘in camera’ jpeg becomes the best rendition we have of what the sensor saw while the shutter was open.

Yes, we are going to process the image and make it look even better, so our Lr5 preview starting point is somewhat irrelevant in the long run; but a lot of folk freak out because Lr5 can make some really bad changes to the look of their images before they start.  All we are doing in this article is stopping Lr5 from making those unwanted changes.


Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here are those two snake jpegs again:

European Adder – 900 x 599 pixels with a pixel resolution of 300 PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72 PPI

Digital display size is driven by pixel dimension, NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops and I know it’s hard for some folk to get their heads around print size and printer output, but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:


Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste, but it’s a seller so boo-hoo – what do I know!!

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (this is the effective imaging area – total photosite area is 4992 x 3292 according to DXO Labs).

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300  x  3280/300 inches  or  16.43″ x 10.93″

or

  • 417.24 x 277.71 mm for those of you with a metric inclination – numbers you can sanity-check in the little sketch below!
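Here’s that arithmetic as a minimal sketch – my own illustration, nothing more:

    # Minimal sketch: pixel dimensions / pixel resolution = linear print dimensions.
    MM_PER_INCH = 25.4

    def linear_size(px_long, px_short, ppi):
        """Return print dimensions as ((long_in, short_in), (long_mm, short_mm))."""
        inches = (px_long / ppi, px_short / ppi)
        mm = tuple(round(v * MM_PER_INCH, 2) for v in inches)
        return inches, mm

    inches, mm = linear_size(4928, 3280, 300)  # Nikon D4 file at the default 300 PPI
    print(inches)  # (16.4266..., 10.9333...) - i.e. 16.43" x 10.93"
    print(mm)      # (417.24, 277.71)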

So how big CAN we print this image?

 

Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”

Well, the strict answer would be No, and No – not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor can easily be derived by dividing the long-edge ‘pixel’ count of the FX frame by its linear length – just shy of 36mm, or roughly 1.4″ – which gives 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.
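Using the same rounded figures as above (1.4″ x 0.9″ for the imaging area – my assumption, matching the rounding in the text), a quick sketch of that ‘effective’ sensor PPI:

    # Minimal sketch: treat the D4's photosites as 'pixels' and its imaging area
    # as roughly 1.4 x 0.9 inches (the rounded figures used above).
    px_long = 4928
    sensor_long_inches = 1.4

    effective_ppi = px_long / sensor_long_inches
    print(round(effective_ppi))  # 3520 - 'pixels per inch' straight off the sensor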

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?

 

Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:


Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:


Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor, Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm.

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
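Here’s the diagonal arithmetic as a minimal sketch – the D800/D800E pixel dimensions (7360 x 4912) and the nominal 43.2mm full-frame diagonal are my assumed inputs:

    # Minimal sketch: magnification factor = image diagonal / sensor diagonal.
    import math

    MM_PER_INCH = 25.4
    FULL_FRAME_DIAG_MM = 43.2  # nominal 36 x 24 mm 'FX' sensor diagonal

    def magnification(px_long, px_short, ppi, sensor_diag_mm=FULL_FRAME_DIAG_MM):
        long_in, short_in = px_long / ppi, px_short / ppi
        image_diag_mm = math.hypot(long_in, short_in) * MM_PER_INCH
        return image_diag_mm / sensor_diag_mm

    print(round(magnification(4928, 3280, 300), 1))  # Nikon D4     -> 11.6x
    print(round(magnification(7360, 4912, 300), 1))  # Nikon D800E  -> 17.3x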

So you can now begin to see why pixel resolution is so important when we print.

 

How To Blow Up A Squirrel!

Let’s get back to ‘his cuteness’ and open him up in Photoshop:

Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically a full sheet of A3 (420 x 297 mm).

What happens if we hit print using A3 paper?

Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:

Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:

Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:

Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed the pixel dimensions and we haven’t resampled the image – all that’s changed is the linear print size.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
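A minimal sketch of what that one change does to the print dimensions – same pixels, different PPI tag:

    # Minimal sketch: re-tagging the pixel resolution changes the print size
    # without touching a single pixel.
    MM_PER_INCH = 25.4

    def print_size_mm(px_long, px_short, ppi):
        return (round(px_long / ppi * MM_PER_INCH, 1),
                round(px_short / ppi * MM_PER_INCH, 1))

    print(print_size_mm(4928, 3280, 300))  # (417.2, 277.7) - fills A3 (420 x 297) with no usable margin
    print(print_size_mm(4928, 3280, 240))  # (521.5, 347.1) - sits comfortably inside A2 (594 x 420)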

Let’s have a look at the final outcome:

Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality between the two.  Indeed, 240 PPI was the Adobe Lightroom/ACR default pixel resolution until fairly recently.
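As a very rough illustration of why – and it is rough, because printer ‘DPI’ figures describe ink droplet positions, not image pixels:

    # Minimal sketch: image PPI and printer DPI are not the same thing.
    image_ppi = 240
    printer_dpi = 2880  # the printer's finest droplet pitch along one axis

    dots_per_pixel = printer_dpi / image_ppi
    print(dots_per_pixel)       # 12.0 droplet positions per image pixel along that axis
    print(dots_per_pixel ** 2)  # 144.0 - plenty of dots with which to build each pixel's tone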

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of the “murky stuff” in photography suddenly becomes very simple!
