Members Image Processing

Members Image Processing – Members Only Content

In this 30-minute video I take you step-by-step through my processing of one of my members’ raw files.

This shot was taken by member Phil Ellard and is of the famous landmark of South Stack Lighthouse on Holy Island, Anglesey, in North Wales.

Phil slightly over-exposed the highlights in this shot, and even though we can’t recover them in Lightroom, we can bring them 99.9% back in line using the highlight reconstruction feature inside RawTherapee.

Once that is done it’s a simple question of building visual depth into the image using some very simple techniques inside Photoshop and a final polish back in Lightroom.

The one KEY MESSAGE I want you to take from this is that I’m using THREE separate applications:

RawTherapee

Photoshop

Lightroom

and all three have something they can bring to the table!

You can only watch the video by becoming a member – click the image above, or this link: https://www.patreon.com/posts/members-photo-21106877

Photoshop View Magnification

View Magnification in Photoshop (Patreon Only).

A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE.

But there is way more to pixels per inch (PPI) resolution values than just the general coverage I gave the subject in that video.

And this post is about a major impact of PPI resolution that seems to have evaded the comprehension of perhaps 95% of Photoshop users – and Lightroom users too, for that matter.

I am talking about image view magnification, and the connection this has to your monitor.

Let’s make a new document in Photoshop:

View Magnification

We’ll make the new document 5 inches by 4 inches, 300ppi:

View Magnification

I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…

Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.

Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:

View Magnification

Yes, my 5″ long image is in reality 13.5 inches long on the display!

The minute you do this, you may well get very confused!

Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.

Doing this on a 13″ MacBook Pro Retina, the 5″ edge is actually 6.875″, giving us a magnification factor of 1.375:1.

On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long giving us a magnification factor of 3.2:1

And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1.

The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.

The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP’s.

And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
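If you fancy sanity-checking your own setup without the plastic ruler, the arithmetic is simple – here’s a minimal sketch (plug in your own panel’s pixel count and measured edge length):

```python
# Monitor pixel pitch (ppi) and the apparent size of a 100% view.
def pixel_pitch(px_long_edge, inches_long_edge):
    return px_long_edge / inches_long_edge

def magnification(image_ppi, monitor_ppi):
    # How much bigger a 100% view looks than the image's 'real' print size.
    return image_ppi / monitor_ppi

hp24 = pixel_pitch(1920, 20.5)     # ~94 ppi
eizo27 = pixel_pitch(2560, 23.49)  # ~109 ppi

print(f"24in HP:   {hp24:.0f} ppi -> {magnification(300, hp24):.2f}:1")
print(f"27in Eizo: {eizo27:.0f} ppi -> {magnification(300, eizo27):.2f}:1")
```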

So WTF Gives with 1:1 or 100% View Magnification Andy?

Well, it’s simple.

The vast majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size, and some think it’s a full ppi resolution view – that they are looking at the image at 300ppi.

WRONG – on BOTH counts !!

A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL.  In other words, the image-to-display pixel ratio is now 1:1.

So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.

Why do I say that?  Because the majority of desktop machine users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!

When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it at a lowly resolution of 109ppi.  In essence, with regard to properties such as sharpness and inter-tonal detail, it looks only about a third as good as it is in reality.

Hands up those who think this is a BAD THING.

Did you put your hand up?  If you did, then see me after school….

It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.

This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.

Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.

And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.

This brings me to a point where I’d like to draw your attention to my latest YouTube video:

Did you like that – cheeky little trick isn’t it!

Anyway, back to the topic at hand.

If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:

  • 1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
  • 2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.

Seriously, if you are a stills photographer with a hankering for the latest 4K or 5K monitor, then grow up and learn to understand things for goodness sake!

Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras.  But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.

Working on a monitor with a ppi resolution between the mid 90s and low 100s, viewed at 100% magnification, will always give you the most flexible and easy processing workflow.

Just remember, Photoshop’s linear physical dimensions always ‘appear’ to be larger than ‘real inches’!

And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel.  At 50% view magnification 1 SCREEN pixel is actually displaying the dithered average of a 2×2 block of IMAGE pixels.  At 25% magnification each monitor pixel is displaying the average of a 4×4 block of 16 image pixels.
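Here’s a toy sketch of that block averaging – real display scaling is more sophisticated than a plain box filter, but the principle is the same:

```python
import numpy as np

img = np.arange(16, dtype=float).reshape(4, 4)  # a tiny 4x4 'image'

# 50% view: each screen pixel shows the average of a 2x2 block of image pixels.
half = img.reshape(2, 2, 2, 2).mean(axis=(1, 3))

# 25% view: each screen pixel averages a 4x4 block - here, the whole image.
quarter = img.mean()

print(half)     # four screen pixels standing in for sixteen image pixels
print(quarter)  # one screen pixel standing in for all sixteen
```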

Anyway, that’s about it from me until the New Year folks.  Though I am the world’s biggest Grinch, I might well do another video or two on YouTube over the ‘festive period’, so don’t forget to subscribe over there.

Thanks for reading, thanks for watching my videos, and Have a Good One!

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

The Importance of Finished Image Previsualization

The Importance of Finished Image Previsualization (Patreon Only).

For those of you who haven’t yet subscribed to my YouTube channel, I uploaded a video describing how I shot and processed the Lone Tree at Llyn Padarn in North Wales the other day.

You can view the video here:

Image previsualization is hugely important in all photography, but especially so in landscape photography.

Most of us do it in some way or other.  Looking at other photographers’ images of a location is the commonest form of image previsualization I come across amongst hobby photographers – and up to a point there’s nothing intrinsically wrong in that, as long as you put your own ‘slant’ on the shot.

But relying on this method alone has one massive Achilles Heel – nature does not always ‘play nice’ with the light!

You set off for your chosen location, certain in the knowledge that the weather forecast is correct and that you are guaranteed the perfect light for the shot you have in mind.

Three hours later, you arrive at your destination, and the first thought that enters your head is “how do I blow up the Met Office” – how could they have lied to me so badly?

If you rely solely on ‘other folks’ images’ for what your shot should look like, then you now have a severe problem.  Nature is railing against your preconceptions, and unless you make some mental modifications you are deep into a punch-up with nature that you will never win.

Just such an occasion transpired for me the other day at Llyn Padarn in North Wales.

The forecast was for low level cloud with no wind, just perfect for a moody shot of the famous Lone Tree on the south shore of the lake.

So, arriving at the location to be greeted by this was a surprise to say the least:

image previsualization

This would have been disastrous for some, simply because the light does not comply with their initial expectations.  I’ve seen many people get a ‘fit of the sulks’ when this happens, and they abandon the location without even getting out of the car.

Alternatively, there are folk who will get their gear set up and make an attempt, but their initial disappointment becomes a festering ‘mental block’, and they cannot see a way to turn this bad situation into something good.

But, here’s the thing – there is no such thing as a bad situation!

There are however, multiple BAD REACTIONS to a situation.

And every adverse reaction has its roots buried in either:

  • Rigid, inflexible preconceptions.
  • Poor understanding of photographic equipment and post-processing.

Or both!

On this occasion, I was expecting a rather heavy, flat-ish light scenario; but was greeted by the exact opposite.

But instead of getting ‘stroppy’ about it, experience and knowledge allow me to change my expectations and come up with a new ‘finished image previsualization’ on the fly, so to speak.

image previsualization

Instead of the futility of trying to produce my original idea – which would never work out – I simply change my image previsualization, based on what’s in front of me.

It’s then up to me to identify what I need to do in order to bring this new idea to fruition.

The capture workflow for both ‘anticipated’ and ‘reality’ would involve bracketing due to excessive subject brightness range, but there the similarity ends.

The ‘anticipated’ capture workflow would only require perhaps 3 or 4 shots – one for the highlights, and the rest for the mid tones and shadow detail.

But the ‘reality’ capture workflow is very different.  The scene has massive contrast and the image looks like crap BECAUSE of that excessive contrast. Exposing for the brightest highlights gives us a very dark image:

image previsualization

But I know that the contrast can be reduced in post to give me this:

image previsualization

So, while I’m shooting I can previz in my head what the image I’ve shot will look like in post.

This then allows me to capture the basic bracket of shots to capture all my shadow and mid tone detail.

If you watch the video, you’ll see that I only use TWO shots from the bracket sequence to produce the basic exposure blend – and they are basically 5 stops apart. The other shots I use are just for patching blown highlights.

Because the clouds are moving, the sun is in and out like a yo-yo.  Obviously, when it’s fully uncovered, it will flare across the lens.  But when it is partially to fully covered, I’m doing shot after shot to try and get the best exposures of the reflected highlights in the water.

By shooting through a polarizer AND a 6 stop ND, I’m getting relatively smooth water in all these shots – with the added bonus of blurring out the damn canoeists!

And it’s the ‘washed out colour, low contrast previsualization’ of the finished image that is driving me to take all the shots – I’m gathering enough pixel data to enable me to create the finished image without too much effort in Lightroom or Photoshop.

Anyway, go and watch the video as it will give you a much better idea of what I’m talking about!

But remember, always take your time and try to reappraise what’s in front of you when the lighting conditions differ from what you were expecting.  You will often be amazed at the awesome images you can ‘pull’ from what ostensibly appears to be a write-off situation.

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitors & Color Bit Depth

Monitors and Color Bit Depth – yawn, yawn – Andy’s being boring again!

Well, perhaps I am, but I know ‘stuff’ you don’t – and I’m telling YOU that you need to know it if you want to get the best out of your photography – so there!

Let me begin by saying that NOTHING monitor-related has any effect on your captured images.  But  EVERYTHING monitor-related DOES have an effect on the way you SEE your images, and therefore definitely has an effect on your image adjustments and post-processing.

So anything monitor-related can have either a positive or negative effect on your final image output.

Bit Depth

I’m going to begin with a somewhat disconnected analogy, but bear with me here.

We live in the ‘real and natural world’, and everything that we see around us is ANALOGUE.  Nature exists on a natural curve and is full of infinite variation. In the digital world though, everything has to be put in a box.

We’ll begin with two dogs – a Labrador and a Poodle.  In this instance both natural  and digital worlds can cope with the situation, because nature just regards them for what they are, and digital can put the Labrador in a box named ‘Labrador’ and the Poodle in a separate box just for Poodles.

Let’s now imagine for a fleeting second that Mr. Lab and Miss Poodle ‘get jiggy’, with the result being dog number 3 – a Labradoodle.  Nature just copes with the new dog because it sits on nature’s ‘doggy curve’ halfway between Mum and Dad.

But digital is having a bloody hissy-fit in the corner because it can’t work out what damn box to put the new dog in.  The only way we can placate digital is to give it another box, one for 50% Labrador and 50% Poodle.

Now if our Labradoodle grows up a bit, then starts dating and makes out with another Labrador, we end up with a fourth dog that is 75% Labrador and 25% Poodle.  Again, nature just takes it all in her stride, but digital is now having a stroke because it’s got no box for that gene mix.

Every time we give digital a new box we have effectively given it a greater bit depth.

Now imagine this process of cross-breed gene dilution continues until the glorious day arrives when a puppy is born that is 99% Labrador and only 1% Poodle.  It’ll be obvious to you that by this time digital has a flaming warehouse full of boxes that can cope with just about any gene mix, but alas, the last time bit depth was increased was to accommodate 98% Lab 2% Poodle.

Digital is by now quite old and grumpy and just can’t be arsed anymore, so instead of filling in triplicate forms to request a bit depth upgrade it just lumps our new dog in the same classification box as the previous one.

So our new dog is put in the wrong box.

Digital hasn’t been slap-dash though and put the pup in any old box, oh no.  Digital has put the pup in the nearest suitable box – the box with the closest match to reality.

Please note that the above mentioned boxes are strictly metaphorical, and no puppies were harmed during the making of this analogy.

Digital images are made up of pixels, and a pixel can be thought of as a data point.  That single data point contains information about luminance and colour.  The precision of that information is determined by the bit depth of the data.

Very little in our ‘real world’ has a surface that looks flat and uniform.  Even a supposedly flat, uniform white wall on a building has subtle variations and graduations of colour and brightness/luminance caused by the angular direction of light and its own surface texture. That’s nature for you in the analogy above.

We are all familiar with RGB values for white being 255,255,255 and black being 0,0,0, but those are only 8 bit values.

8 bit allows for 256 discrete levels of information (or gene mix classification boxes for our Labradoodles), and a scale from 0 to 255 contains 256 values – think about it for a second!

At all bit depth values black is always 0,0,0 but white is another matter entirely:

8 bit = 256 discrete values so image white is 255,255,255

10 bit = 1,024 discrete values so image white is 1023,1023,1023

12 bit = 4,096 discrete values so image white is 4095,4095,4095

14 bit = 16,384 discrete values so image white is 16383,16383,16383

15 bit = 32,768 discrete values so image white is 32767,32767,32767

16 bit = 65,536 discrete values so image white should be 65535,65535,65535 – but it isn’t – more later!

And just for giggles here are some higher bit depth potentials:

24 bit = 16,777,216 discrete values

28 bit = 268,435,456 discrete values

32 bit = 4,294,967,296 discrete values

So you can see a pattern here.  If we double the bit depth we square the number of discrete values, and if we halve the bit depth we are left with the square root of the number of values we started with.

And if we convert to a lower or smaller bit depth “digital has fewer boxes to put the different dogs in to, so Labradoodles of varying genetic make-ups end up in the same boxes.  They are no longer sorted in such a precise manner”.

The same applies to our images. Where we had two adjacent pixels of slightly differing value in 16 bit, those same two adjacent pixels can very easily become totally identical if we do an 8 bit conversion and so we lose fidelity of colour variation and hence definition.

This is why we should archive our processed images as 16 bit TIFFS instead of 8 bit JPEGs!
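A minimal sketch of that collapse, assuming the usual divide-by-257 mapping from 16 bit down to 8 bit (65535 / 255 = 257):

```python
# Two adjacent 16 bit pixel values that differ slightly...
a16, b16 = 40000, 40100

# ...collapse to the same value after an 8 bit conversion.
a8, b8 = round(a16 / 257), round(b16 / 257)
print(a8, b8)  # 156 156 - the tonal difference is gone for good
```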

In an 8 bit image we have black 0,0,0 and white 255,255,255 and ONLY 254 available shades or tones to graduate from one to the other.

Monitor Display Bit Depth

Whereas, in a 16 bit image black is 0,0,0 and white is 65535,65535,65535 with 65,534 intervening shades of grey to make the same black to white transition:

Monitor Display Bit Depth

But we have to remember that whatever the bit depth value is, it applies to all 3 colour channels:

Monitor Display Bit Depth

So a 16 bit image should contain a potential of 65536 values per colour channel.

How Many Colours?

So how many colours can our bit depth describe Andy?

The simple answer is to cube the number of levels per channel, so:

8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.

10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours or EXACTLY 64x the value of 8 bit!

16 bit = 65536x65536x65536 = 281,474,976,710,656 colours. Or does it?
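Before we pick at that question, here’s a two-line sketch that churns out the per-channel levels and total colour counts:

```python
# Levels per channel = 2**bits; total colours = levels cubed.
for bits in (8, 10, 16):
    levels = 2 ** bits
    print(f"{bits} bit: {levels:,} levels/channel, {levels ** 3:,} colours")
# 8 bit: 256 levels/channel, 16,777,216 colours
# 10 bit: 1,024 levels/channel, 1,073,741,824 colours
# 16 bit: 65,536 levels/channel, 281,474,976,710,656 colours
```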

Confusion Reigns Supreme

Now here’s where folks get confused.

Photoshop does not WORK in 16 bit, but in 15 bit + 1 level.  Don’t believe me?  Go to New Document, RGB, 16 bit, and select white as the background colour.

Open up your info panel, stick your cursor anywhere in the image area and look at the 16 bit RGB read out and you will see a value of 32768 for all 3 colour channels – that’s 15 bit folks! Now double the 32768 value – yup, that’s right, you get 16 bit or 65,536!

Why does Photoshop do this?  The simple answer is ‘for speed’ – or so they say at Adobe!  There are numerous other reasons that you’ll find on various forums etc. – signed and unsigned integers, mid-points, floating points – but really, do we care?

Things are what they are, and rumour has it that once you hit the save button on a 16 bit TIFF it does actually save out at 16 bit.

So how many potential colours in 16 bit Photoshop?  Dunno! But it’ll be somewhere between 35,184,372,088,832 and 281,474,976,710,656, and to be honest either value is plenty enough for me!

The second line of confusion usually comes from PC users, and the Windows ‘24 bit True Color’ and ‘32 bit True Color’ display settings that a lot of PC users mistakenly think mean something they SERIOUSLY DO NOT!

Windows 24 bit means 24 bits TOTAL – in short, 8 bits per channel, not 24!

Windows 32 bit True Color is something else again. Correctly known as 32 bit RGBA it contains 4 channels of 8 bits each; three 8 bit colour channels and an 8 bit Alpha channel used for transparency.

The same 32 bit RGBA colour (Apple calls it ARGB) has been utilised in Mac OS forever, but most Mac users never questioned it because it’s not quite so obvious in OSX as it is in Windows – unless you look at the Graphics/Displays section of your System Report, and who the Hell ever goes there apart from twats like me:

bit depth

Above you can see the pixel depth being reported as 32 bit colour ARGB8888 – that’s Apple-speak for Windows 32 bit True Colour RGBA.  But like a lot of ‘things Mac’, the numbers give you the real information.  The channels are ordered Alpha, Red, Green, Blue, and the four ‘8’s give you the bit depth of each channel – adding up to what Apple calls the ‘pixel depth’ of 32.

However, in the latter part of 2015 Apple gave OSX 10.11 El Capitan a 10 bit colour capability, though hardly anyone knew – including yours truly.  I never have understood why they kept it ‘on the down-low’, but there was no fanfare, that’s for sure.

bit depth

Now you can see the pixel depth being reported as 30 bit ARGB2101010 – meaning that the transparency Alpha channel has been reduced from 8 bit to 2 bit and the freed-up 6 bits have been distributed evenly between the Red, Green and Blue colour channels.

Monitor Display

Your computer has a maximum display bit depth output capability that is defined by:

  • a. the operating system
  • b. the GPU fitted

Your system might well support 10 bit colour, but will only output 8 bit if the GPU is limited to 8 bit.

Likewise, you could be running a 10 bit GPU but if your OS only supports 8 bit, then 8 bit is all you will get out of the system (that’s if the OS will support the GPU in the first place).

Monitors have their own panel display bit depth, and panel bit depth costs money.

A lot of LCD panels on the market are only capable of displaying 8 bit, even if you run an OS and GPU that output 10 bit colour.

And then again, certain monitors such as the Eizo ColorEdge, NEC MultiSync and the odd BenQ, for example, are capable of displaying 10 bit colour from a 10 bit OS/GPU combo, but only if the monitor-to-system connection has 10 bit capability.  This basically means a DisplayPort or HDMI connection.

As photographers we really should be looking to maximise our visual capabilities by viewing the maximum number of colour graduations captured by our cameras.  This means operating with the greatest available colour bit depth on a properly calibrated monitor.

Just to reiterate the fundamental difference between 8 bit and 10 bit monitor display pixel depth:

  • 8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.
  • 10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours.

So 10 bit colour allows us to see exactly 64 times more colour on our display than 8 bit colour. (please note the word ‘see’).

It certainly does NOT add a whole new spectrum of colour to what we see; nor does it ‘add’ anything physical to our files.  It’s purely a ‘visual’ improvement that allows us to see MORE of what we ALREADY have.

I’ve made a pound or two from my images over the years and I’ve been happily using 8 bit colour right up until I bought my Eizo the other month, even though my system has been 10 bit capable since I upgraded the graphics card back in August last year.

The main reason for the upgrade was NOT 10 bit capability either, but the 4GB of ‘heavy lifting power’ it gave Photoshop.

But once I splashed the cash on a 10 bit display, I of course made instant use of the system’s 10 bit capability and all its benefits – of which there’s really only one!

The Benefits

The ability to see 64 times more colour means that I can see 64x more subtle variations of the same colours I could see before.

With my wildlife images I find very little benefit if I’m honest, but with landscapes – especially sunset and twilight shots – it’s a different story.  Sunset and twilight images have massive graduations of similar hues.  Quite often an 8 bit display will not be able to display every colour variant in a graduation, and so will replace it with the nearest neighbour that it can display (putting the 99% Lab pup in the 98% Lab box!).

This leads to a visual ‘banding’ on the display:

bit depth

The banding in the shot above is greatly exaggerated but you get the idea.

A 10 bit colour display also helps me to soft proof slightly faster for print too, and for the same reason.  I can now see much more subtle shifts in proofing when making the same tiny adjustments as I made when using 8 bit.  It doesn’t bring me to a different place, but it allows me to get there faster.

For me the switch to 10 bit colour hasn’t really improved my product, but it has increased my productivity.

If you can’t afford a 10 bit display then don’t stress as 8 bit ARGB has served me well for years!

But if you are still needing a new monitor display then PLEASE be careful what you are buying, as some displays are not even true 8 bit.

A good place to research your next monitor (if not taking the Eizo or NEC 10 bit route) is TFT Central.

If you select the panel size you fancy and then look at the Colour Depth column you will see the bit depth values for the display.

You should also check the Tech column and only consider H-IPS panel tech.

Beware of 10 bit panels that are listed as 8 bit + FRC, and 8 bit panels listed as 6 bit + FRC.

FRC is the acronym for FRAME RATE CONTROL – also known as Temporal Dithering.  In very simple terms, FRC involves making the pixels flash different colours at you at a frame rate faster than your eye can see, so you are fooled into seeing what is, to all intents and purposes, an out ‘n out lie.

It’s a tech that’s okay for gamers and watching movies, but certainly not for any form of colour management or photography workflow.

Do not entertain the idea of anything that isn’t an IPS, H-IPS or other IPS derivative.  IPS is the acronym for In-Plane Switching technology.  This is the type of panel that doesn’t visually change if you move your head when looking at it!

So there we go, that’s been a bit of a ramble hasn’t it, but I hope now that you all understand bit depth and how it relates to a monitor’s display colour.  And let’s not forget that you are all up to speed on Labradoodles!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Color Temperature

Lightroom Color Temperature (or Colour Temperature if you spell correctly!)

“Andy – why the heck is Lightroom’s temperature slider the wrong way around?”

That’s a question that I used to get asked quite a lot, and it’s started again since I mentioned it in passing a couple of posts ago.

The short answer is “IT ISN’T… it’s just you who doesn’t understand what it is and how it functions”.

But in order to give the definitive answer I feel the need to get back to basics – so here goes.

The Spectrum Locus

Let’s get one thing straight from the start – LOCUS is just a posh word for PATH!

Visible light is just part of the electro-magnetic energy spectrum typically between 380nm (nanometers) and 700nm:

Color Temperature

In the first image below is what’s known as the Spectrum Locus – as defined by the CIE (Commission Internationale de l’Éclairage, or International Commission on Illumination).

In a nutshell the locus represents the range of colors visible to the human eye – or I should say chromaticities:

Color Temperature

The blue numbers around the locus are simply the nanometer values from that same horizontal scale above.  The reasoning behind the unit values of the x and y axes is complex and irrelevant to us in this post – otherwise it’ll go on for ages.

The human eye is a fickle thing.

It will always perceive, say, 255 green as being lighter than 255 red or 255 blue, and 255 blue as being the darkest of the three.  And the same applies to any value of the three primaries, as long as all three are the same.

Color Temperature

This stems from the fact that the human eye has around twice the response to green light as it does red or blue – crazy but true.  And that’s why your camera sensor – if it’s a Bayer type – has twice the number of green photosites on it as red or blue.

In rather over-simplified terms the CIE set a standard by which all colors in the visible spectrum could be expressed in terms of ‘chromaticity’ and ‘brightness’.

Brightness can be thought of as a grey ramp from black to white.

Any color space is a 3 dimensional shape with 3 axes x, y and z.

Z is the grey ramp from black to white, and the shape is then defined by the colour positions in terms of their chromaticity on the x and y axes, and their brightness on the z axis:

Color Temperature

But if we just take the chromaticity values of all the colours visible to the human eye we end up with the CIE1931 spectrum locus – a two dimensional plot if you like, of the ‘perceived’ color space of human vision.

Now here’s where the confusion begins for the majority of ‘uneducated photographers’ – and I mean that in the nicest possible way, it’s not a dig!

Below is the same spectrum locus with an addition:

Color Temperature

This additional TcK curve is called the Planckian Locus, or black body locus.  Now please don’t give up here folks – after all, you’ve got this far – but it’ll get worse before it gets better!

The Planckian Locus simply represents the color temperature, in degrees Kelvin, of the light emitted by a ‘black body’ – think lump of pure carbon – as it is heated.  Its color temperature begins to visibly rise as its thermal temperature rises.

Up to a certain thermal temperature it’ll stay visibly black, then it will begin to glow a deep red.  Warm it up some more and the red color temperature turns to orange, then yellow and finally it will be what we can call ‘white hot’.

So the Planckian Locus is the 2D chromaticity plot of the colours emitted by a black body as it is heated.

Here’s point of confusion number 1: do NOT jump to the conclusion that this is in any way a greyscale. “Well it starts off BLACK and ends up WHITE” – I’ve come across dozens of folk who think that – as they say, a little knowledge is a dangerous thing indeed!

What the Planckian Locus IS indicative of though is WHITE POINT.

Our commonly used colour management white points of D65, D55 and D50 all lie along the Planckian Locus, as do all the other CIE standard illuminant types – of which there are more than a few.

The standard monitor calibration white point of D65 is actually 6500 Kelvin – it’s a standardized classification for ‘mean Noon Daylight’, and can be found on the Spectrum Locus/Planckian Locus at 0.31271x, 0.32902y.

D55 or 5500 Kelvin is classed as Mid Morning/Mid Afternoon Daylight and can be found at 0.33242x, 0.34743y.

D50 or 5000 Kelvin is classed as Horizon Light, with co-ordinates of 0.34567x, 0.35850y.

But we can also equate Planckian Locus values to our ‘picture taking’ in the form of white balance.

FACT: The HIGHER the color temperature the BLUER the light.  Lower color temperatures shift from blue to yellow, then orange (studio type L photofloods, 3200K), then more red (standard incandescent bulb, 2400K), down to candle flame at around 1850K.  Sunset and sunrise are typically standardized at 1850K, and LPS Sodium street lights can be as low as 1700K.

And a clear polar sky can be upwards of 27,000K – now there’s blue for you!

And here’s where we find confusion point number 2!

Take a look at this shot taken through a Lee Big Stopper:

Color Temperature

I’m an idle git and always have my camera set to a white balance of Cloudy B1, and here I’m shooting through a filter that notoriously adds a pretty severe bluish cast to an image anyway.

If you look at the TEMP and TINT sliders you will see Cloudy B1 is interpreted by Lightroom as 5550 Kelvin and a tint of +5 – that’s why the notation is ‘AS SHOT’.

Officially a Cloudy white balance is anywhere between 6000 Kelvin and 10,000 kelvin depending on your definition, and I’ve stuck extra blue in there with the Cloudy B1 setting, which will make the effective temperature go up even higher.

So either way, you can see that Lightroom’s idea of 5550 Kelvin is somewhat ‘OFF’ to say the least, but it’s irrelevant at this juncture.

Where the real confusion sets in is shown in the image below:

Color Temperature

“Andy, now you’ve de-blued the shot why is the TEMP slider value saying 8387 Kelvin ? Surely it should be showing a value LOWER than 5550K – after all, tungsten is warm and 3200K”….

How right you are…..and wrong at the same time!

What Lightroom is saying is that I’ve added YELLOW to the tune of 8387-5550 or 2837.

FACT – the color temperature controls in Lightroom DO NOT work by adjusting the Planckian or black body temperature of light in our image.  They are used to COMPENSATE for the recorded Planckian/black body temperature.

If you load an image in the Develop module of Lightroom and use any of the preset values, the value itself is ball-park correct(ish).

The Daylight preset loads values of 5500K and +10. The Shade preset will jump to 7500K and +10, and Tungsten will drop to 2850K and +/-0.

But the Tungsten preset puts the TEMP slider in the BLUE part of the slider Blue/Yellow graduated scale, and the Shade preset puts the slider in the YELLOW side of the scale, thus leading millions of people into mistakenly thinking that 7500K is warmer/yellower than 2850K when it most definitely is NOT!

This kind of self-induced bad learning leaves people wide open to all sorts of misunderstandings when it comes to other aspects of color theory and color management.

My advice has always been the same: just ignore the numbers in Lightroom and do your adjustments subjectively – do what looks right!

But for heaven’s sake don’t try and build an understanding of color temperature based on the color balance control values in Lightroom – otherwise you’ll get in one heck of a mess.

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitor Calibration Update

Monitor Calibration Update

Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit on the critical list..

What have you gone for Andy?  Well, if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favouring the Eizo CS2420 on the grounds of cost, both in terms of monitor and calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.

monitor calibration update

But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common type of back light, but they are now less common, being replaced by LED for added colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, together with the size of the monitor’s CLUT (colour look-up table), can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cdm² or candelas per square meter

Gamma: 2.2

monitor calibration update

The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen.

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:

monitor calibration update

The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:

monitor calibration update

Now before anyone gets their knickers in a knot over the brightness value discrepancy, there are a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cdm² is only about 1/4 stop in camera Ev terms – so not a lot.

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to (yes, warmer – don’t let that temp slider in Lightroom cloud your thinking!).

monitor calibration update

After all, 1000K is a noticeable variation – unlike the 20cdm² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:

monitor calibration update

The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space – click the image to view larger in a separate window.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP ?

Well, cheaper…..

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Camera ISO Settings

The Truth About ISO


The effect of increased ISO – in-camera ‘push processing’ automatically lifts the exposure value to where the camera thinks it is supposed to be.

Back in the days of ‘wet photography’, we had rolls and sheets of film that carried various ISO/ASA/DIN numbers.

ISO stands for International Standards Organisation

ASA stands for American Standards Association

DIN – well, that’s ‘Deutsches Institut für Normung’ or German Institute for Standardisation

ISO and ASA were basically identical values, and DIN = 10 × log₁₀(ISO) + 1, so ASA/ISO 100 equated to DIN 21… nope, I’m not going to say anything!
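If you ever want to check the DIN number on an old film box against its ASA speed, here’s a quick sketch of that formula:

```python
import math

# DIN = 10 * log10(ASA) + 1, rounded to the nearest whole degree.
def din(asa):
    return round(10 * math.log10(asa) + 1)

for asa in (50, 100, 125, 400):
    print(f"ASA/ISO {asa} = DIN {din(asa)}")
# ASA/ISO 50 = DIN 18, 100 = DIN 21, 125 = DIN 22, 400 = DIN 27
```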

These numbers were the film ‘speed’ values.  Film speed was critical to exposure metering as it specified the film sensitivity to light.  Metering a scene properly at the correct ISO/ASA/DIN gave us an overall exposure value that ensured the film got the correct ‘dose’ of light from the shutter speed and aperture combination.

Low ISO/ASA/DIN values meant the film was LESS sensitive to light (SLOW FILM) and high values meant MORE sensitivity to light (FAST FILM).

Ilford Pan F was a very slow mono negative film at ASA 50, while Ilford HP5 was a fast 400 ASA mono negative film.

The other characteristic of film speed was ‘grain’.  Correctly exposed, Pan F was extremely fine grained, whereas correctly exposed HP5 was ‘visibly grainy’ on an 8×10 print.

Another Ilford mono negative film I used a lot was FP4.  The stated speed for this film was 125 ASA/ISO, but I always rated it (set the meter ASA speed dial) at 100 ASA on my 35mm Canon A1 and F1 (yup, you read that right!) because they both slightly over-metered most scenes.

If we needed to shoot at 1/1000th and f8, but 100ASA only gave us 1/250th at f8, we would switch to 400ASA film – two stops greater sensitivity to light means we can use a shutter speed two stops shorter for the same aperture, and thus get our required 1/1000th sec.
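The stop arithmetic behind that film swap, as a minimal sketch:

```python
import math

# Each doubling of film speed buys one stop - a halving of shutter time
# at the same aperture.
stops = math.log2(400 / 100)            # 2.0 stops faster film
new_shutter = (1 / 250) / (2 ** stops)  # shutter time we can now afford
print(stops, round(1 / new_shutter))    # 2.0 1000 -> i.e. 1/1000th at f8
```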

But, what if we were already set up with 400ASA film, but the meter (set at 400ASA) was only giving us 1/250th?

Prior to the release of films like Delta 1600/3200 we would put a fresh roll of 400ASA film in the camera and set the meter to a whopping 1600ASA! We would deliberately UNDER EXPOSE Ilford HP5 or Kodak Tri-X by 2 stops to give us our required 1/1000th at f8.

The two stops underexposed film would then be ‘push processed’, which basically meant it was given a longer time in the developer.  This ‘push processing’ always gave us a grainy image, because of the manner in which photographic chemistry worked.

And just to confuse you even more, very occasionally a situation might arise where we would over expose film and ‘pull process’ it – but that’s another story.

We are not here for a history lesson, but the point you need to understand is this – we had a camera body into which we inserted various sensitivities of film, and sometimes those sensitivities were chemically manipulated in processing.

That Was Then, This Is Now!

ISO/ASA/DIN was SENSITIVITY of FILM.

It is NOT SENSITIVITY of your DSLR SENSOR….!!! Understand that once and for all!

The sensitivity of your sensor IS FIXED.

It is set in Silicon when the sensor is manufactured.  Just like the sensitivity of Kodak Tri-X Pan was ‘fixed’ at 400ASA/ISO when it was made at the factory.

How is the sensitivity of a digital sensor fixed?  By the SIZE of the individual PHOTOSITES on the sensor.

Larger photosites will gather more photons from a given exposure than small ones – it’s that simple.

The greater number of photons captured means that the output signal from a larger photosite is GREATER than the output signal from a smaller photosite for the same exposure value (EV being a combination of shutter speed and aperture/f-number).
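As a rough sketch of the scale involved – and assuming, all else being equal, that signal simply scales with photosite area – compare the D4/D4S and D5 photosite pitches quoted further down this post:

```python
import math

# Signal gathered is roughly proportional to photosite AREA (pitch squared).
d4_pitch, d5_pitch = 7.30, 6.45  # microns, as quoted later in this post

area_ratio = (d4_pitch / d5_pitch) ** 2
print(f"{area_ratio:.2f}x the photons per photosite")     # ~1.28x
print(f"= {math.log2(area_ratio):.2f} stops head start")  # ~0.36 stop
```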

All sensors have a base level of noise – we can refer to this as the sensor ‘noise floor’.

This noise floor is an amalgamation of the noise floors of each photosite on the sensor.

But the noise floor of each photosite on the sensor is masked/obscured by the photosite signal output; therefore the greater the signal, the larger the signal to noise (S/N) ratio is said to be.

In general, larger photosites yield a higher S/N ratio than smaller ones given the same exposure.

This is why the Nikon D3 had such success being full frame but just over 12 megapixels, and it’s the reason that some of us don’t get overly excited about seeing more megapixels being crammed into our 36mm x 24mm sensors.

Anyway, the total output from a photosite contains both signal and noise floor, and the signal component can be thought of as ‘gain’ over the noise floor – natural gain.

As manufacturers put more megapixels on our sensors this natural gain DECREASES because the photosites get SMALLER – they have to in order to fit more of them into the finite sensor area.

Natural gain CAN be brought back in certain sensor designs by manipulating the design of the micro lenses that sit on top of the individual photosites – re-designing these micro lenses to ‘suck in’ more tangential photons, rather like putting a funnel in a bottle to make filling it easier and more efficient.

There is a brilliantly simple illustration of how a sensor fits into the general scheme of things, courtesy of digital camera world:

Camera ISO Settings

The main item of note in this image is perhaps not quite so obvious, but it’s the boundary between the analogue and digital parts of the system.

We have 3 component arrays forward of this boundary:

  1. Mosaic Filter including Micro Lenses & Moire filter if fitted.
  2. Sensor Array of Photosites – these suck in photons and release proportional electrons/charge.
  3. Analogue Electronics – this holds the charge record of the photosite output.

Everything forward of the Analogue/Digital Converter – ADC – is just that, analogue! And the variety of attributes that a manufacturer puts on the sensor forward of this boundary can be thought of mostly as modifying/enhancing natural gain.

So What About My ISO Control Settings Andy?

All sensors have a BASE ISO. In other words they have an ISO sensitivity/speed rating just like film!  And as I said before THIS IS A FIXED VALUE.

The base ISO of a sensor photosite array can be defined as that ISO setting that yields the best dynamic range across the whole array, and it is the ISO setting that carries NO internal amplification.

Your chosen ISO setting has absolutely ZERO effect on what happens forward of the Analogue/Digital boundary – NONE.

So, all those idiots who tell you that ISO affects/governs exposure are WRONG – it has nothing to do with it, for the simple reason that ISO affecting sensor sensitivity is a total misconception… end of!

Now I’ll bet that’s going to set off a whole raft of negative comments and arguments – and they will all be wrong, because they don’t know what they’re talking about!

The ‘digital side’ of the boundary is where all the ‘voodoo’ happens, and it’s where your ISO settings come into play.

At the end of an exposure the Analogue Digital Converter, or ADC, comes along and makes a ‘count’ of the contents of the ‘analogue electronics’ mosaic (as Digital Camera World like to call it – nice and unambiguous!).

Remember, it’s counting/measuring TOTAL OUTPUT from each photosite – and that comprises both signal and noise floor outputs.

Camera ISO Settings

If the exposure has been carried out at ‘base ISO’ then we have the maximum S/N ratio, as in column 1.

However, if we increase our ISO setting above ‘base’ then the total sensor array output looks like column 2.  We have in effect UNDER EXPOSED the shot, resulting in a reduced signal.  But we have the same value for the noise floor, so we have a lower S/N ratio.

In principle, the ADC cannot discriminate between noise floor and signal outputs, and so all it sees is one output value for each photosite.

At base ISO this isn’t a problem, but once we begin to shoot at ISO settings above base – under-exposing, in other words – the camera’s internal image processors apply gain to boost the output values handed to them by the ADC.

Yes, this boosts the signal output, but it also amplifies the noise floor component of the signal at the same time – hence that perennial problem we all like to call ‘high ISO noise’.

So your ISO control behaves in exactly the same way as the ‘gain switch’ on a CB or long wave radio, or indeed the dB gain on a microphone – ISO is just applied gain.
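Here’s a toy model of that applied gain – emphatically NOT a literal sensor pipeline, just the arithmetic of boosting an under-exposed signal that’s sitting on a fixed noise floor (all units arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
noise_floor = rng.normal(0, 2, 10_000)  # fixed sensor noise floor

base_signal = 100.0             # a 'correct' base-ISO exposure
under_signal = base_signal / 4  # 2 stops under-exposed (ISO dialled 2 stops up)
gain = 4                        # gain the image processor applies to compensate

base_out = base_signal + noise_floor
pushed_out = gain * (under_signal + noise_floor)

# Same mean brightness either way, but the pushed version's noise floor
# has been amplified right along with the signal:
print(base_out.mean(), base_out.std())      # ~100, ~2
print(pushed_out.mean(), pushed_out.std())  # ~100, ~8
```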

Things You Should Know

My first digital camera had a CCD (charge coupled device) sensor, it was made by Fuji and it cost a bloody fortune.

Cameras today for the most part use CMOS (complementary metal oxide semiconductor) sensors.

  • CCD sensors create high-quality, low-noise images.
  • CMOS sensors, traditionally, are more susceptible to noise.
  • Because each photosite on a CMOS sensor has a series of transistors located next to it, the light sensitivity of a CMOS chip tends to be lower. Many of the photons striking the sensory photosite array hit the transistors instead of the photosites.  This is where the newer micro lens designs come in handy.
  • A CMOS sensor consumes less power. CCD sensors can consume up to 100 times more power than an equivalent CMOS sensor.
  • CMOS chips can be produced easily, making them cheaper to manufacture than CCD sensors.

Basic CMOS tech has changed very little over the years – by that I’m referring to the actual ‘sensing’ bit of the sensor.  Yes, the individual photosites are now manufactured with more precision and consistency, but the basic methodology is pretty much ‘same as it ever was’.

But what HAS changed are the bits they stick in front of it – most notably micro-lens design; and the stuff that goes behind it, the ADC and image processors (IPs).

The ADC used to be 12 bit, now they are 14 bit on most digital cameras, and even 16 bit on some.  Increasing the bit depth accuracy in the ADC means it can detect smaller variations in output signal values between adjacent photosites.

As long as the ‘bits’ that come after the ADC can handle these extended values then the result can extend the cameras dynamic range.

But the ADC and IPs are firmware based in their operation, and so when you turn your ISO above base you are relying on a set of algorithms to handle the business of compensating for your under exposure.

All this takes place AFTER the shutter has closed – so again, ISO settings have less than nothing to do with the exposure of the image; said exposure has been made and finished with before any ISO applied gain occurs.

For a camera to be revolutionary in terms of high ISO image quality it must deliver a lower noise floor than its predecessor whilst maintaining or bettering its predecessors low ISO performance in terms of noise and dynamic range.

This is where Nikon have screwed their own pooch with the D5.  At ISOs below 3200 it has poorer IQ and narrower dynamic range than either the D4 or D4S.  Perhaps some of this problem could be due to the sensor photosite pitch (diameter) of 6.45 microns compared to the D4/D4S’s 7.30 microns – but I think it’s mostly due to poor ADC and S/N firmware, which of course can be corrected in the future.

Can I Get More Photons Onto My Sensor Andy?

You can get more photons onto your sensor by changing to a lens that lets in more light.

You might now be thinking that I mean switching glass based on a lower f-number or f-stop.

If so you’re half right.  I’m actually talking about t-stops.

The f-number of a lens is basically an expression of the relationship between maximum aperture diameter and focal length, and is an indication of the amount of light the lens lets in.

T-stops are slightly different. They are a direct indicator of how much light is transmitted by the lens – in other words how much light is actually being allowed to leave the rear element.

We could have two lenses of identical focal length and f-number, but one contains 17 lens elements and the other only 13. Assuming the glass and any coatings are of equal quality then the lens with fewer elements will have a higher transmission value and therefore lower T-number.

As an example, the Canon 85mm f1.2 actually has a t-number of 1.4, and so it’s letting in pretty much HALF a stop less light than you might think it is.
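A quick sketch of that f-stop/t-stop arithmetic – transmitted light scales with the square of the ratio:

```python
import math

f_num, t_num = 1.2, 1.4  # the Canon 85mm f1.2 example from above

# Loss in stops between the marked f-number and the measured t-number:
loss_stops = 2 * math.log2(t_num / f_num)
print(f"{loss_stops:.2f} stops less light than the f-number suggests")  # ~0.45
```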

In Conclusion

I’ve deliberately not embellished this post with lots of images taken at high ISO – I’ve posted and published enough of those in the past.

I’ve given you this information so that you can digest it and hopefully understand more about how your camera works and what’s going on.  Only by understanding how something works can you deploy or use it to your best advantage.

I regularly take, market and sell images taken at ISO speeds that a lot of folk wouldn’t go anywhere near – even when they are using the same camera as me.

The sole reason I opt for high ISO settings is to obtain very fast shutter speeds with big glass in order to freeze action, especially of subjects close to the camera.  You can freeze very little action with a 500mm lens using speeds in the hundredths of a second.

Picture buyers love frozen high speed action and they don’t mind some noise if the shot is a bit special. Noise doesn’t look anywhere near as severe in a print as it does on your monitor either, so high ISO values are nothing to shy away from – especially if to do so would be at the expense of the ‘shot of a lifetime’.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Colour in Photoshop

Colour in Photoshop.

Understanding colour inside Photoshop is riddled with confusion for the majority of users.  This is due to the perpetual misuse of certain words and terms.  Adobe themselves use incorrect terminology – which doesn’t help!

The aim of this post is to understand the attributes or properties of colour inside the Photoshop environment – “…is that right Andy?”  “Yeh, it is!”

So, the first colour attribute we’re going to look at is HUE:

Understanding Colour in Photoshop.

A colour wheel showing point-sampled HUES (colours) at 30 degree increments.

HUE can be construed as meaning ‘colour’ – or color for the benefit of our American friends “come on guys, learn to spell – you’ve had long enough!”

The colour wheel begins at 0 degrees with pure Red (255,0,0 in 8-bit RGB terms), and moves clockwise through all the HUES/colours to arrive back at pure Red at 360 degrees – simple!

Understanding Colour in Photoshop.

Above, we can see samples of primary red and secondary yellow together with their respective HUE degree values which are Red 0 degrees and Yellow 60 degrees.  You can also see that the colour channel values for Red are 255,0,0 and Yellow 255,255,0.  This shows that Yellow is a mix of Red light and Green light in equal proportions.
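If you fancy proving those numbers to yourself, here's a quick sketch using Python's standard colorsys module – my illustration only, nothing to do with how Photoshop does it internally – that point-samples the wheel at the same 30-degree increments:

```python
import colorsys

# Walk the colour wheel at 30-degree increments, full saturation and
# brightness, and print the 8-bit RGB triplet for each HUE.
for hue_deg in range(0, 360, 30):
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360, 1.0, 1.0)
    print(f"{hue_deg:3d} degrees -> RGB({round(r*255)}, {round(g*255)}, {round(b*255)})")

#   0 degrees -> RGB(255, 0, 0)    pure Red
#  60 degrees -> RGB(255, 255, 0)  Yellow, and so on round the wheel
```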

I told you it was easy!

Inside Photoshop, the colour wheel is flattened out into a horizontal bar that starts and ends at 180 degrees CYAN, as in the Hue/Saturation adjustment:

Understanding Colour in Photoshop.

Overall, there is no ambiguity over the meaning of the term HUE; it is what it is, and it is usually taken as meaning ‘what colour’ something is.

The same can be said for the next attribute of colour – SATURATION.

Or can it?

How do we define saturation?


Two different SATURATION values (100% & 50%) of the same HUE.

Above we can see two different saturation values for the same HUE (0 degrees Hue, 100% and 50% Saturation). I suppose the burning question is, do we have two different ‘colours’?

As photographers we mainly work with additive colour; that is, we add Red, Green and Blue light to black in order to attain white.  But in the world of painting, for instance, subtractive colour is used; pigments are overlaid on white (thus subtracting the white) to make black.  Printing uses the same model – CMY+K inks overlaid on ‘white’ paper.

If we take a particular ‘colour’ of paint and we mix it with BLACK we get a different SHADE of the same colour.  If we instead add WHITE we end up with what’s called a TINT of the same colour; and if we add grey to the original paint we arrive at a different TONE of the same colour.

Let’s look at that 50% saturated Red again:


Hue Red 0 degrees with 50% saturation.

We’ve basically added 128 Green and 128 Blue to 255 Red. Have we kept the same HUE – yes we have.

Is it the same colour? Be honest – you don’t know do you!

The answer is NO – they are two different ‘colours’, and the hexadecimal codes prove it: #ff0000 and #ff8080.  But in our world of additive colour we should only think of the word ‘colour’ as a generalisation, because it is somewhat ambiguous and imprecise.
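Don't just take my word for those hex codes either – here's the same derivation sketched with Python's colorsys module (hue held at 0 degrees, brightness at 100%, saturation at 100% then 50%):

```python
import colorsys

# Red at hue 0 degrees, 100% brightness, at two saturation values.
for sat in (1.0, 0.5):
    r, g, b = colorsys.hsv_to_rgb(0.0, sat, 1.0)
    print(f"#{round(r*255):02x}{round(g*255):02x}{round(b*255):02x}")

# -> #ff0000 then #ff8080
```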

But we can quantify the SATURATION of a HUE – so we’re all good up to this point!

So we beaver away in Photoshop in the additive RGB colour mode, but what you might not realise is that we are working in a colour model within that mode – and quite frankly this is where the whole shebang turns to pooh for a lot of folk.

There are basically two colour models for – dare I use the word – ‘normal’ photography work: HSB (also known as HSV) and HSL, and both are cylindrical co-ordinate colour models:


HSB (HSV) and HSL colour models for additive RGB.

Without knowing one single thing about either, you can tell they are different just by looking at them.

All Photoshop default colour picker referencing is HSB – that is Hue, Saturation & Brightness – with equivalent RGB, Lab and CMYK values, plus the hexadecimal code:

Understanding Colour in Photoshop.

But in the Hue/Sat adjustment for example, we see the adjustments are HSL:

Understanding Colour in Photoshop.

The HSL model references colour in terms of Hue, Saturation & Lightness – not flaming LUMINOSITY as so many people wrongly think!

And it’s that word luminosity that’s the single largest purveyor of confusion and misunderstanding – luminosity masking, luminosity blending mode are both terms that I and oh so many others use – and we’re all wrong.

I have an excuse – I know everything, but I have to use the wrong terminology otherwise no one else knows what I’m talking about!  Plausible story, and I’m sticking to it, your honour………

Anyway, within Photoshop, HSB is used to select colours, and HSL is used to change them.

The reason for this is somewhat obvious when you take a close look at the two models again:


HSB (HSV) and HSL colour models for additive RGB. (V stands for Value = B in HSB).

In the HSB model, look where the “whiteness” information is: it’s radial, bound up in the ‘S’ saturation co-ordinate.  But the “blackness” information is vertical, on the ‘B’ brightness co-ordinate.  This is great when we want to pick/select/reference a colour.

But surely it would be more beneficial for the “whiteness” and “blackness” information to be attached to the same axis or dimension, especially when we need to increase or decrease that “white” or “black” co-ordinate value in processing.

So within the two models the ‘H’ hue co-ordinates are pretty much the same, but the ‘S’ saturation co-ordinates are different.
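You can watch those two ‘S’ co-ordinates disagree with a couple of lines of Python – a sketch only, and note that the standard colorsys module calls HSB “HSV” and hands HSL back in H-L-S order:

```python
import colorsys

rgb = (1.0, 0.5, 0.5)                              # our 50%-saturation red, #ff8080

h, s, v = colorsys.rgb_to_hsv(*rgb)
print(f"HSB: H={h*360:.0f} S={s:.0%} B={v:.0%}")   # HSB: H=0 S=50% B=100%

h, l, s = colorsys.rgb_to_hls(*rgb)
print(f"HSL: H={h*360:.0f} S={s:.0%} L={l:.0%}")   # HSL: H=0 S=100% L=75%
```

Same pixel, same HUE – yet 50% saturated in one model and fully saturated in the other.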

So this leaves us with that most perennial of questions – what is the difference between Brightness and Lightness?

Firstly, there is a massive visual difference between the Brightness and Lightness information contained within an image, as you will see now:


The ‘Brightness’ channel of HSB.


The ‘L’ channel of HSL

Straight off the bat you can see that there is far more “whites detail” information contained in the ‘L’ lightness map of the image than in the brightness map.  Couple that with the fact that Lightness controls both the black and white values for every pixel in your image, and you should now be able to comprehend the difference between Lightness and Brightness – and so be better at understanding colour inside Photoshop.
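For the curious, those two maps fall straight out of the model definitions: per pixel, Brightness is simply the largest of the three channel values, while Lightness is the average of the largest and smallest.  A minimal NumPy sketch of my own (not how Photoshop derives its previews):

```python
import numpy as np

def brightness_channel(img: np.ndarray) -> np.ndarray:
    """HSB 'B': the maximum of R, G and B per pixel (img is HxWx3, 0..1)."""
    return img.max(axis=-1)

def lightness_channel(img: np.ndarray) -> np.ndarray:
    """HSL 'L': the mean of the largest and smallest channel values per pixel."""
    return (img.max(axis=-1) + img.min(axis=-1)) / 2.0
```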

We’ll always use the highly bastardised terms like luminosity, luminance etc – but please be aware that you may be using them to describe something to which they DO NOT APPLY.

Luminosity is a measure of the magnitude of a light source – typically stars – but could loosely be applied to the lumen output power of any light source.  Luminance is a measure of the light reflected from a subject being illuminated by a light source, and it varies with distance from said light source – à la the inverse square law etc.

Either way, neither of them has anything to do with the pixel values of an image inside Photoshop!

But LIGHTNESS certainly does.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Camera Calibration

Custom Camera Calibration

The other day I had an email fall into my inbox from a leading UK online retailer…whose name escapes me but is very short…that made my blood pressure spike.  It was basically offering me 20% off the cost of something that will revolutionise my photography – ColorChecker Passport camera calibration profiling software.

I got annoyed for two reasons:

  1. Who the “f***” do they think they’re talking to sending ME this – I’ve forgotten more about this colour management malarkey than they’ll ever know….do some customer research you idle bastards and save yourselves a mauling!
  2. Much more importantly – tens of thousands of you guys ‘n gals will get the same email and some will believe the crap and buy it – and you will get yourselves into the biggest world of hurt imaginable!

Don’t misunderstand me, a ColorChecker Passport makes for a very sound purchase indeed and I would not like life very much if I didn’t own one.  What made me seethe is the way it’s being marketed, and to whom.

Profile all your cameras for accurate colour reproduction…..blah,blah,blah……..

If you do NOT fully understand the implications of custom camera calibration you’ll be in so much trouble when it comes to processing you’ll feel like giving up the art of photography.

The problems lie in a few areas:

First, a camera profile is a SENSOR/ASIC OUTPUT profile – think about that a minute.

Two things influence sensor/ASIC output – ISO and lens colour shift.  Yep, that’s right: no lens is colour-neutral, and all lenses produce colour shifts, either by tint or by spectral absorption.  And higher ISO settings usually produce a cooler, bluer image.

Let’s take a look at ISO and its influence on custom camera calibration profiling.  I’m using a far better bit of software for the job – “IN MY OPINION” – the Adobe DNG Profile Editor, a free download for both Mac and Windows – but you do need the ColorChecker Passport itself!

I prefer the Adobe product because I found the camera calibration profiles produced by the ColorChecker software to be, well, pretty vile – especially in terms of increased contrast; not my cup of tea at all.


5 images shot at 1 stop increments of ISO on the same camera/lens combination.

Now, this is NOT a software demo – a video tutorial on camera profiling will be in my next photography training video, coming sometime soon-ish, doubtless with a somewhat verbose narrative explaining why you should or should not do it!

Above, we have 5 images shot on a D4 with a 24-70 f2.8 at 70mm, under consistent overcast daylight, at 1-stop increments of ISO between 200 and 3200.

Below, we can see the resultant profile and distribution of known colour reference points on the colour wheel.


Here’s the 200 ISO custom camera calibration profile – the portion of interest to us is the colour wheel on the left and the points of known colour distribution (the black squares and circled dot).

Next, we see the result of the image shot at 3200 ISO:


Here’s the result of the custom camera profile based on the shot taken at 3200 ISO.

Now let’s super-impose one over t’other – if ISO doesn’t matter to a camera calibration profile then we should see NO DIFFERENCE………….


The 3200 ISO profile colour distribution overlaid onto the 200 ISO profile colour distribution – they are different and do not match up.

……..well, would you bloody believe it!  Embark on custom camera calibration profiling of your camera, then apply that profile to an image shot with the same lens under the same lighting conditions but at a different ISO, and your colours will not be right.

So, now that my assertions about ISO have been vindicated, let’s take a look at skinning the cat another way: keeping ISO the same but switching lenses.

Below is the result of a 500mm f4 at 1000 ISO:


Profile result of a 500mm f4 at 1000 ISO

And below we have the 24-70mm f2.8 @ 70mm and 1000 ISO:


Profile result of a 24-70mm f2.8 @ 70mm at 1000 ISO

Let’s overlay those two and see if there’s any difference:


Profile results of a 500mm f4 at 1000 ISO and the 24-70 f2.8 at 1000 ISO – as different as day and night.

Whoops….it’s all turned to crap!

Just take a moment to look at the info here.  There is movement in the oranges/reds/red-magentas, but even bigger movements in the yellows/greens, and in the blues and blue-magentas.

Because these comparisons are done simply in Photoshop layers with the top layer at 50% opacity you can even see there’s an overall difference in the Hue and Saturation slider values for the two profiles – the 500mm profile is 2 and -10 respectively and the 24-70mm is actually 1 and -9.

The basic upshot of this information is that the two lenses apply a different colour cast to your image AND that cast is not always uniformly applied to all areas of the colour spectrum.

And if you really want to “screw the pooch”, here’s the above comparison side by side with the 500mm f4 1000 ISO versus 24-70mm f2.8 200 ISO view:


500mm f4/24-70mm f2.8 1000 ISO comparison versus 500mm f4 1000 ISO and 24-70mm f2.8 200 ISO.

A totally different spectral distribution of colour reference points again.

And I’m not even going to bother showing you that the same camera/lens/ISO combo will give different results under different lighting conditions – you should by now be able to envisage that little nugget yourselves.

So, Custom Camera Calibration: if you do it right you’ll be profiling every body/lens combo you own, at every conceivable ISO value and under every lighting condition – it’s one of those things that, if you can’t do it all, you’d in most cases be best off not doing at all.

I can think of a few instances where I would do it as a matter of course – scientific work, photo-microscopy, artwork photography/copystand work etc. – but these lie well outside the remit of more normal photographic practice.

As I said earlier, the Passport device itself is worth far more than its weight in gold: set up and light your shot and include the Passport in a prominent place, take a second shot without it, and use shot 1 to custom white balance shot 2 – a dead easy process that makes the device invaluable for portrait and studio work etc.

But I hope by now you can begin to see the futility of trying to use a custom camera calibration profile on a “one size fits all” basis – it just won’t work correctly; and yet for the most part this is how it’s marketed – especially by third party retailers.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Metering Modes Explained

Camera Metering Modes

Become a Patron!

I always get asked which camera metering mode I use, and to be honest I think sometimes the folk doing the asking just can’t get their heads around my simplistic, and sometimes quite brutal, answers!

“Andy, it’s got to be more complicated than that surely….otherwise why does the camera give me so many options…?”

Well, I always like to keep things really simple, mainly because I’m not the brightest diamond in the jewellery shop, and because I’m getting old and more often than not my memory keeps buggering off on holiday without telling me!

But before I expound on “metering the Uncle Andy way”, let’s take a quick look at exactly how the usual metering options work and their effects on exposure.

The Metering Modes

  • Average (a setting usually buried in the center-weighted menu)
  • Spot
  • Center-weighted
  • 3D Matrix (Nikon) or Evaluative (Canon)

Metering Mode Icons

You can continue reading this article FREE over on my public Patreon posts pages.  Just CLICK HERE