Photoshop View Magnification

View Magnification in Photoshop (Patreon Only).

A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE.

But there is way more to pixels per inch (PPI) resolution values than just the general coverage I gave it in that video.

And this post is about a major impact of PPI resolution that seems to have evaded the understanding of perhaps 95% of Photoshop users – and Lightroom users too for that matter.

I am talking about image view magnification, and the connection this has to your monitor.

Let’s make a new document in Photoshop:

View Magnification

We’ll make the new document 5 inches by 4 inches, 300ppi:

View Magnification

I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…

Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.

Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:

View Magnification

Yes, my 5″ long image is in reality 13.5 inches long on the display!

The minute you do this, you may well get very confused!

Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.

Doing this on a 13″ MacBook Pro Retina the 5″ edge is actually 6.875″ giving us a magnification factor of 1.375:1

On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long giving us a magnification factor of 3.2:1

And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1

The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.

The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP’s.

And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
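If you want to check the arithmetic for your own screen, here’s a quick Python sketch of my own, using the approximate monitor figures quoted above – your plastic-ruler measurements will differ slightly:

```python
# At 100% zoom, 1 image pixel = 1 screen pixel, so the apparent
# magnification is simply the ratio of the two resolutions.

def magnification_factor(image_ppi, monitor_ppi):
    return image_ppi / monitor_ppi

def on_screen_inches(image_inches, image_ppi, monitor_ppi):
    """Physical size of the image on the display at 100% zoom."""
    return image_inches * magnification_factor(image_ppi, monitor_ppi)

# Approximate pixel pitches of the three displays discussed above.
for name, monitor_ppi in [("24in HP (1920px / 20.5in)", 1920 / 20.5),
                          ("27in Eizo (2560px / 23.49in)", 2560 / 23.49),
                          ("13in MacBook Pro Retina", 227)]:
    factor = magnification_factor(300, monitor_ppi)
    size = on_screen_inches(5, 300, monitor_ppi)
    print(f"{name}: 5in edge shows as {size:.2f}in (factor {factor:.2f}:1)")
```

Run that and the HP and Eizo figures land right on the ruler measurements above; the Retina result is close but not exact, because real-world pitches are approximations.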

So WTF Gives with 1:1 or 100% View Magnification Andy?

Well, it’s simple.

The great majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size, and some think it’s a full ppi resolution view – that they are looking at the image at 300ppi.

WRONG – on BOTH counts!!

A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL.  In other words, the image-to-display pixel ratio is now 1:1.

So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.

Why do I say that?  Because the majority of desktop machine users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!

When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it in a lowly resolution of 109ppi.  With regard to its properties such as sharpness and inter-tonal detail, in essence, it looks only 1/3rd as good as it is in reality.

Hands up those who think this is a BAD THING.

Did you put your hand up?  If you did, then see me after school….

It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.

This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.

Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.

And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.

This brings me to a point where I’d like to draw your attention to my latest YouTube video:

Did you like that – cheeky little trick isn’t it!

Anyway, back to the topic at hand.

If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:

  • 1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
  • 2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.

Seriously, if you are a stills photographer with a hankering for the latest 4K or 5K monitor, then grow up and learn to understand things for goodness sake!

Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras.  But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.

Working on a monitor with a ppi resolution between the mid 90s and low 100s, at 100% view magnification, will always give you the most flexible and easy processing workflow.

Just remember, Photoshop linear physical dimensions always ‘appear’ to be larger than ‘real inches’ !

And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel.  At 50% view magnification 1 SCREEN pixel is actually displaying the dithered average of a 2×2 block of 4 IMAGE pixels.  At 25% magnification each monitor pixel is displaying the average of a 4×4 block of 16 image pixels.
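Because the magnification percentage is linear, per axis, the number of image pixels averaged into each screen pixel goes with the square – a quick back-of-envelope Python sketch of my own:

```python
# At view magnification z (1.0 = 100%), each screen pixel covers a
# (1/z) x (1/z) block of image pixels, i.e. (1/z)**2 pixels in total.

def image_pixels_per_screen_pixel(zoom):
    per_axis = 1 / zoom
    return per_axis ** 2

for zoom in (1.0, 0.5, 0.25):
    n = image_pixels_per_screen_pixel(zoom)
    print(f"{zoom:.0%} view: {n:.0f} image pixel(s) per screen pixel")
```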

Anyway, that’s about it from me until the New Year folks. Though I am the world’s biggest Grinch, I might well do another video or two on YouTube over the ‘festive period’, so don’t forget to subscribe over there.

Thanks for reading, thanks for watching my videos, and Have a Good One!


Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitors & Color Bit Depth

Monitors and Color Bit Depth – yawn, yawn – Andy’s being boring again!

Well, perhaps I am, but I know ‘stuff’ you don’t – and I’m telling YOU that you need to know it if you want to get the best out of your photography – so there!

Let me begin by saying that NOTHING monitor-related has any effect on your captured images.  But  EVERYTHING monitor-related DOES have an effect on the way you SEE your images, and therefore definitely has an effect on your image adjustments and post-processing.

So anything monitor-related can have either a positive or negative effect on your final image output.

Bit Depth

I’m going to begin with a somewhat disconnected analogy, but bear with me here.

We live in the ‘real and natural world’, and everything that we see around us is ANALOGUE.  Nature exists on a natural curve and is full of infinite variation. In the digital world though, everything has to be put in a box.

We’ll begin with two dogs – a Labrador and a Poodle.  In this instance both natural  and digital worlds can cope with the situation, because nature just regards them for what they are, and digital can put the Labrador in a box named ‘Labrador’ and the Poodle in a separate box just for Poodles.

Let’s now imagine for a fleeting second that Mr. Lab and Miss Poodle ‘get jiggy’, with the result being dog number 3 – a Labradoodle.  Nature just copes with the new dog because it sits on nature’s ‘doggy curve’ halfway between Mum and Dad.

But digital is having a bloody hissy-fit in the corner because it can’t work out what damn box to put the new dog in.  The only way we can placate digital is to give it another box, one for 50% Labrador and 50% Poodle.

Now if our Labradoodle grows up a bit, then starts dating and makes out with another Labrador, we end up with a fourth dog that is 75% Labrador and 25% Poodle.  Again, nature just takes it all in her stride, but digital is now having a stroke because it’s got no box for that gene mix.

Every time we give digital a new box we have effectively given it a greater bit depth.

Now imagine this process of cross-breed gene dilution continues until the glorious day arrives when a puppy is born that is 99% Labrador and only 1% Poodle.  It’ll be obvious to you that by this time digital has a flaming warehouse full of boxes that can cope with just about any gene mix, but alas, the last time bit depth was increased was to accommodate 98% Lab 2% Poodle.

Digital is by now quite old and grumpy and just can’t be arsed anymore, so instead of filling in triplicate forms to request a bit depth upgrade it just lumps our new dog in the same classification box as the previous one.

So our new dog is put in the wrong box.

Digital hasn’t been slap-dash though and put the pup in any old box, oh no.  Digital has put the pup in the nearest suitable box – the box with the closest match to reality.

Please note that the above mentioned boxes are strictly metaphorical, and no puppies were harmed during the making of this analogy.

Digital images are made up of pixels, and a pixel can be thought of as a data point.  That single data point contains information about luminance and colour.  The precision of that information is determined by the bit depth of the data.

Very little in our ‘real world’ has a surface that looks flat and uniform.  Even a supposedly flat, uniform white wall on a building has subtle variations and graduations of colour and brightness/luminance caused by the angular direction of light and its own surface texture. That’s nature for you in the analogy above.

We are all familiar with RGB values for white being 255,255,255 and black being 0,0,0, but those are only 8 bit values.

8 bit allows for 256 discrete levels of information (or gene mix classification boxes for our Labradoodles), and a scale from 0 to 255 contains 256 values – think about it for a second!

At all bit depth values black is always 0,0,0 but white is another matter entirely:

8 bit = 256 discrete values so image white is 255,255,255

10 bit = 1,024 discrete values so image white is 1023,1023,1023

12 bit = 4,096 discrete values so image white is 4095,4095,4095

14 bit = 16,384 discrete values so image white is 16383,16383,16383

15 bit = 32,768 discrete values so image white is 32767,32767,32767

16 bit = 65,536 discrete values so image white should be 65535,65535,65535 – but it isn’t – more later!

And just for giggles here are some higher bit depth potentials:

24 bit = 16,777,216 discrete values

28 bit = 268,435,456 discrete values

32 bit = 4,294,967,296 discrete values

So you can see a pattern here.  If we double the bit depth we square the value of the information, and if we halve the bit depth the information we are left with is the square root of what we started with.
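That doubling-and-squaring pattern is trivial to verify with a throwaway Python snippet (mine, not anything official):

```python
# Discrete levels per channel at a given bit depth: 2 to the power of bits.
def levels(bits):
    return 2 ** bits

for bits in (8, 10, 12, 14, 15, 16):
    print(f"{bits:2d} bit: {levels(bits):>6,} levels, white = {levels(bits) - 1:,}")

# Doubling the bit depth squares the number of levels:
assert levels(16) == levels(8) ** 2
```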

And if we convert to a lower or smaller bit depth “digital has fewer boxes to put the different dogs in to, so Labradoodles of varying genetic make-ups end up in the same boxes.  They are no longer sorted in such a precise manner”.

The same applies to our images. Where we had two adjacent pixels of slightly differing value in 16 bit, those same two adjacent pixels can very easily become totally identical if we do an 8 bit conversion and so we lose fidelity of colour variation and hence definition.

This is why we should archive our processed images as 16 bit TIFFS instead of 8 bit JPEGs!

In an 8 bit image we have black 0,0,0 and white 255,255,255 and ONLY 254 available shades or tones to graduate from one to the other.

Monitor Display Bit Depth

Whereas, in a 16 bit image black is 0,0,0 and white is 65535,65535,65535 with 65,534 intervening shades of grey to make the same black to white transition:

Monitor Display Bit Depth

But we have to remember that whatever the bit depth value is, it applies to all 3 colour channels:

Monitor Display Bit Depth

So a 16 bit image should contain a potential of 65536 values per colour channel.

How Many Colours?

So how many colours can our bit depth describe Andy?

The simple answer is to cube the number of levels per channel, so:

8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.

10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours or EXACTLY 64x the value of 8 bit!

16 bit = 65536x65536x65536 = 281,474,976,710,656 colours. Or does it?

Confusion Reigns Supreme

Now here’s where folks get confused.

Photoshop does not WORK  in 16 bit, but in 15 bit + 1 level.  Don’t believe me? Go New Document, RGB, 16 bit and select white as the background colour.

Open up your info panel, stick your cursor anywhere in the image area and look at the 16 bit RGB read out and you will see a value of 32768 for all 3 colour channels – that’s 15 bit folks! Now double the 32768 value – yup, that’s right, you get 16 bit or 65,536!

Why does Photoshop do this?  The simple answer is ‘for speed’ – or so they say at Adobe!  There are numerous other reasons that you’ll find on various forums etc – signed and unsigned integers, mid-points, floating points etc – but really, do we care?

Things are what they are, and rumour has it that once you hit the save button on a 16 bit TIFF it does actually save out at 16 bit.

So how many potential colours in 16 bit Photoshop?  Dunno! But it’ll be somewhere between 35,184,372,088,832 and 281,474,976,710,656, and to be honest either value is plenty enough for me!
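If you want the colour counts, and the 15 bit + 1 wrinkle, worked out in code, here’s my own back-of-envelope Python – the exact figure Photoshop uses internally is Adobe’s business, as I said:

```python
# Colours = (levels per channel) ** 3, for the three RGB channels.
def colours(levels):
    return levels ** 3

print(f"8 bit:  {colours(256):,}")     # 16,777,216
print(f"10 bit: {colours(1024):,}")    # 1,073,741,824
print(f"16 bit: {colours(65536):,}")   # 281,474,976,710,656

# Photoshop's '16 bit' mode really runs 0..32768 - 15 bit plus one level -
# so its working range is at least:
print(f"15 bit: {colours(32768):,}")   # 35,184,372,088,832
```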

The second line of confusion usually comes from PC users under Windows, and the Windows 24 bit High Color and 32 bit True Color labels that a lot of PC users mistakenly think mean something they SERIOUSLY DO NOT!

Windows 24 bit means 24 bit TOTAL – in short, 8 bits per channel, not 24!

Windows 32 bit True Color is something else again. Correctly known as 32 bit RGBA it contains 4 channels of 8 bits each; three 8 bit colour channels and an 8 bit Alpha channel used for transparency.

The same 32 bit RGBA colour (Macs call it ARGB) has been utilised on Mac OS forever, but most Mac users never questioned it because it’s not quite so obvious in OSX as it is in Windows – unless you look at the Graphics/Displays section of your System Report, and who the Hell ever goes there apart from twats like me:

bit depth

Above you can see the pixel depth being reported as 32 bit colour ARGB8888 – that’s Apple-speak for Windows 32 bit True Colour RGBA.  But like a lot of ‘things Mac’ the numbers give you the real information.  The channels are ordered Alpha, Red, Green, Blue and the four ‘8’s give you the bit depth of each pixel, or as Apple put it ‘pixel depth’.

However, in the latter part of 2015 Apple gave OSX 10.11 El Capitan a 10 bit colour capability, though hardly anyone knew including ‘yours truly’.  I never have understood why they kept it ‘on the down-low’ but there was no fan-fare that’s for sure.

bit depth

Now you can see the pixel depth being reported as 30 bit ARGB2101010 – meaning that the transparency Alpha channel has been reduced from 8 bit to 2 bit and the freed-up 6 bits have been distributed evenly between the Red, Green and Blue colour channels.
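For the curious, here’s how those two 32 bit packings shake out in code – a hypothetical Python sketch of the bit fields only; the actual in-memory byte ordering on any given system is an assumption here:

```python
# ARGB8888 (8+8+8+8) vs ARGB2101010 (2+10+10+10), both 32 bits per pixel.

def unpack_argb8888(pixel):
    """Split a 32-bit value into A, R, G, B fields of 8 bits each."""
    return ((pixel >> 24) & 0xFF, (pixel >> 16) & 0xFF,
            (pixel >> 8) & 0xFF, pixel & 0xFF)

def unpack_argb2101010(pixel):
    """Split a 32-bit value into a 2-bit alpha and three 10-bit channels."""
    return ((pixel >> 30) & 0x3, (pixel >> 20) & 0x3FF,
            (pixel >> 10) & 0x3FF, pixel & 0x3FF)

# Fully-set bits ('white', fully opaque) in each packing:
print(unpack_argb8888(0xFFFFFFFF))     # (255, 255, 255, 255)
print(unpack_argb2101010(0xFFFFFFFF))  # (3, 1023, 1023, 1023)
```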

Monitor Display

Your computer has a maximum display bit depth output capability that is defined by:

  • a. the operating system
  • b. the GPU fitted

Your system might well support 10 bit colour, but will only output 8 bit if the GPU is limited to 8 bit.

Likewise, you could be running a 10 bit GPU but if your OS only supports 8 bit, then 8 bit is all you will get out of the system (that’s if the OS will support the GPU in the first place).

Monitors have their own panel display bit depth, and panel bit depth costs money.

A lot of LCD panels on the market are only capable of displaying 8 bit, even if you run an OS and GPU that output 10 bit colour.

And then again certain monitors such as Eizo ColorEdge, NEC MultiSynch and the odd BenQ for example, are capable of displaying 10 bit colour from a 10 bit OS/GPU combo, but only if the monitor-to-system connection has 10 bit capability.  This basically means Display Port or HDMI connection.

As photographers we really should be looking to maximise our visual capabilities by viewing the maximum number of colour graduations captured by our cameras.  This means operating with the greatest available colour bit depth on a properly calibrated monitor.

Just to reiterate the fundamental difference between 8 bit and 10 bit monitor display pixel depth:

  • 8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.
  • 10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours.

So 10 bit colour allows us to see exactly 64 times more colour on our display than 8 bit colour. (please note the word ‘see’).

It certainly does NOT add a whole new spectrum of colour to what we see; nor does it ‘add’ anything physical to our files.  It’s purely a ‘visual’ improvement that allows us to see MORE of what we ALREADY have.

I’ve made a pound or two from my images over the years and I’ve been happily using 8 bit colour right up until I bought my Eizo the other month, even though my system has been 10 bit capable since I upgraded the graphics card back in August last year.

The main reason for the upgrade was NOT its 10 bit capability either, but the 4GB of ‘heavy lifting power’ it gives Photoshop.

But once I splashed the cash on a 10 bit display I of course made instant use of the system’s 10 bit capability and all its benefits – of which there’s really only one!

The Benefits

The ability to see 64 times more colour means that I can see 64x more subtle variations of the same colours I could see before.

With my wildlife images I find very little benefit if I’m honest, but with landscapes – especially sunset and twilight shots – it’s a different story.  Sunset and twilight images have massive graduations of similar hues.  Quite often an 8 bit display will not be able to display every colour variant in a graduation and so will replace it with the nearest neighbour that it can display – (putting the 99% Lab pup in the 98% Lab box!).

This leads to a visual ‘banding’ on the display:

bit depth

The banding in the shot above is greatly exaggerated but you get the idea.

A 10 bit colour display also helps me to soft proof slightly faster for print too, and for the same reason.  I can now see much more subtle shifts in proofing when making the same tiny adjustments as I made when using 8 bit.  It doesn’t bring me to a different place, but it allows me to get there faster.

For me the switch to 10 bit colour hasn’t really improved my product, but it has increased my productivity.

If you can’t afford a 10 bit display then don’t stress as 8 bit ARGB has served me well for years!

But if you do still need a new monitor then PLEASE be careful what you are buying, as some displays are not even true 8 bit.

A good place to research your next monitor (if not taking the Eizo, NEC 10 bit route) is TFT Central

If you select the panel size you fancy and then look at the Colour Depth column you will see the bit depth values for the display.

You should also check the Tech column and only consider H-IPS panel tech.

Beware of 10 bit panels that are listed as 8 bit + FRC, and 8 bit panels listed as 6 bit + FRC.

FRC is the acronym for FRAME RATE CONTROL – also known as Temporal Dithering.  In very simple terms FRC involves making the pixels flash different colours at you at a frame rate faster than your eye can see.  Therefore you are fooled into seeing what is to all intents and purposes an out ‘n out lie.

It’s a tech that’s okay for gamers and watching movies, but certainly not for any form of colour management or photography workflow.

Do not entertain the idea of anything that isn’t an IPS, H-IPS or other IPS derivative.  IPS is the acronym for In Plane Switching technology.  This is the type of panel that doesn’t visually change if you move your head when looking at it!

So there we go, that’s been a bit of a ramble hasn’t it, but I hope now that you all understand bit depth and how it relates to a monitor’s display colour.  And let’s not forget that you are all up to speed on Labradoodles!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitor Calibration Update


Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit on the critical list…

What have you gone for Andy?  Well if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favouring the Eizo CS2420, on the grounds of cost – both in terms of the monitor itself and the calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.

monitor calibration update

But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common types of back light, but they are now less common, having been replaced by LED for better colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and a slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, and the size of the monitor’s CLUT (colour look-up table), can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cd/m² (candelas per square metre)

Gamma: 2.2

monitor calibration update

The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen.

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:

monitor calibration update

The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:

monitor calibration update

Now before anyone gets their knickers in a knot over the brightness value discrepancy, there are a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cd/m² is only about a quarter of a stop in camera Ev terms – so not a lot.
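The Ev arithmetic is a one-liner if you want to check it yourself (plain Python, nothing clever):

```python
import math

# Exposure difference in photographic stops (Ev) between two luminances.
def stops(brighter, dimmer):
    return math.log2(brighter / dimmer)

print(f"{stops(120, 100):.2f} stops")  # ~0.26 - roughly a quarter of a stop
```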

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to. (yes- warmer – don’t let that temp slider in Lightroom cloud your thinking!).

After all, 1000K is a noticeable variation – unlike the 20 cd/m² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:

monitor calibration update

The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space – click the image to view larger in a separate window.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP ?

Well, cheaper…..

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Your Monitor – All You Ever Wanted to Know

Your Monitor – All You Ever Wanted to Know, and the stuff you didn’t – but need to!

I need a new monitor, but am undecided which to buy.  I know exactly which one I’d go for if money was no object – the NEC Spectraview Reference 302, but money is a very big object in that I ain’t got any spare!

But spend it I’ll have to – your monitor is the window on to your images and so is just about THE most important tool in your photographic workflow.  I do wish people would realize/remember that!

Right now my decision is between 24″ and 27″, Eizo or BenQ.  The monitor that needs replacement due to backlight degradation is my trusty HP LP2475W – a wide gamut monitor that punched way above its original price weight, and if I could find a new one I’d buy it right now – it was THAT good.

Now I know more than most about the ‘numbers bit’ of photography, and this current dilemma made me think about how much potential for money-wasting this situation could be for those that don’t ‘understand the tech’ quite as much as I do.

So I thought I’d try and lay things out for you in a simple and straightforward blog post – so here goes.

The Imaging Display Chain

Image Capture:

Let’s take my landscape camera – the Nikon D800E.  It is a 36 megapixel DSLR set to record UNCOMPRESSED 14 bit Raw files.

The RAW image produced by this camera has pixel dimensions of 7360 x 4912 – a total of 36,152,320 pixels.

The horizontal resolution of this beastly sensor is approximately 5200 pixels per inch, each pixel being 4.88 µm (microns) across – that’s known as pixel pitch.

During the exposure, the ANALOGUE part of the sensor sees the scene in full spectrum colour and tone through its Bayer Array – it gathers an analogue image.

When the shutter closes, the DIGITAL side of the imaging sensor then basically converts the analogue image into a digital render with a reproduction accuracy of 14 bits per pixel.

And let’s not forget the other big thing – colour space.  All DSLR cameras capture their images in their very own unique sensor colour space.  This bears little to no resemblance to any of the three commonly used digital colour management workflow colour spaces of sRGB, AdobeRGB1998 or ProPhotoRGB.

But for the purposes of digital RAW workflow, RAW editors such as Lightroom do an exceptional job of conserving the majority if not all the colours captured by the camera sensor, by converting the capture colour space to that of ProPhotoRGB – basically because it’s by far the largest industry standard space with the greatest spread of HSL values.

So this RAW file that sits on my CF card, then gets ingested by my Mac Pro for later display on my monitor is:

  • 1.41 inches on its long edge
  • has a resolution of around 5,200 pixels per inch
  • has a reproduction accuracy for Hue, Saturation & Luminance of 14 bits
  • has a colour space unique to the camera, which can best be reproduced by the ProPhotoRGB working colour space.
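Those bullet points are easy to sanity-check with a few lines of Python (the 7360 pixel count is from the camera spec; the 1.41 inch long edge is my approximation of the physical sensor size):

```python
# Back-of-envelope check of the sensor resolution and pixel pitch above.
pixels_long_edge = 7360
long_edge_inches = 1.41          # approximate physical sensor width

ppi = pixels_long_edge / long_edge_inches   # pixels per inch on the sensor
pitch_microns = 25400 / ppi                 # 25,400 microns per inch

print(f"{ppi:.0f} ppi, pixel pitch {pitch_microns:.2f} microns")
```

Which lands right on the ~5200 ppi and 4.88 µm figures quoted earlier.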

Image Display:

Now comes the tricky bit!

In order to display an image on a monitor, said monitor has to be connected to your computer via your graphics card or GPU output.  This creates a large number of pitfalls and bear traps for the unsuspecting and naive!

Physical attributes of a monitor you need to bear in mind:

  1. Panel Display Colour Bit Depth
  2. Panel Technology – IPS etc
  3. Monitor Panel Backlight – CCFL, WCCFL, LED etc
  4. Monitor Colour Look-Up Table – Monitor On-Board LUT (if applicable)
  5. Monitor connectivity
  6. Reliance on dedicated calibration device or not

The other consideration is your graphics card Colour Look-Up Table – GPU LUT

1. Monitor Panel Display Colour Bit Depth – All display monitors have a panel display colour bit depth – 8 bit or 10 bit.

I had a client turn up here last year with his standard processing setup – an oldish Acer laptop and an Eizo Colour Edge monitor – he was very proud of this setup, and equally gutted at his stupidity when it was pointed out to him.

The Eizo was connected to the laptop via a DVI to VGA lead, so he had paid a lot of good money for a 10 bit display monitor which he was feeding via a connection that was barely 8 bit.

Sat next to the DVI input on the Eizo was a Display Port input – which is native 10 bit. A Display Port lead doesn’t cost very much at all and is therefore the ONLY sensible way to connect to a 10 bit display – provided of course that your machine HAS a Display Port output – which his Acer laptop did not!

So if you are looking at buying a new monitor make sure you buy one with a display bit depth that your computer is capable of supporting.

There is visually little difference between 10 bit and 8 bit displays until you view an image at 100% magnification or above – then you will usually see something of an increase in colour variation and tonal shading, provided that the image you are viewing has a bit depth of 10+.  The difference is often quoted at its theoretical value of 64x –  (1,073,741,824 divided by 16,777,216).
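That 64x figure is just channel arithmetic – here’s a quick sanity check in Python (nothing monitor-specific here, it’s pure maths):

```python
# Colours available at a given bit depth: an N-bit channel holds 2**N
# levels, and an RGB pixel has three channels.
def colours(bits_per_channel: int) -> int:
    return (2 ** bits_per_channel) ** 3

eight_bit = colours(8)    # 16,777,216 colours
ten_bit = colours(10)     # 1,073,741,824 colours
print(ten_bit // eight_bit)  # 64 - the theoretical difference quoted above
```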

So, yes, your RAW files will LOOK and APPEAR slightly better on a 10 bit monitor – but WAIT!

There’s more….how does the monitor display panel achieve its 10 bit display depth?  Is it REAL or is it pseudo? Enter FRC, or Frame Rate Control.

The FRC spoof 10 bit display – frame rate control quite literally ‘flickers’ individual pixels between two different HSL values at a rate fast enough to be undetectable by the human eye – the viewer’s brain gets fooled into seeing an HSL value that isn’t really there!
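Here’s a toy sketch in Python of the idea behind FRC – the levels and duty cycles are made up for illustration, not taken from any real panel:

```python
# A toy model of Frame Rate Control (FRC): the panel can only show whole
# 8-bit levels, but by alternating a pixel between two adjacent levels over
# successive frames the time-averaged output lands in between - a 'level'
# the viewer perceives but the panel never actually displayed.
def frc_average(level_a: int, level_b: int, frames_a: int, frames_b: int) -> float:
    total = level_a * frames_a + level_b * frames_b
    return total / (frames_a + frames_b)

# Flicker equally between levels 128 and 129:
print(frc_average(128, 129, 1, 1))  # 128.5 - impossible on a true 8-bit panel
# A 3:1 duty cycle gets you quarter-steps:
print(frc_average(128, 129, 3, 1))  # 128.25
```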

Here’s why I hate FRC !

Personally I have zero time for FRC technology in panels – I’d much prefer a good solid 8 bit wide gamut panel without it to a pseudo 10 bit, which is pretty much the same 8 bit panel with FRC tech and a higher price tag…Caveat Emptor!

2. Panel Technology – for photography there is only really one tech to use, that of IPS or In Plane Switching.  The main reasons for this are viewing angle and full colour gamut.

The more common – and cheaper – monitors most often use TN tech (Twisted Nematic), and from a viewing angle point of view these are bloody awful because the display colour and contrast vary hugely with even just an inch or two of head movement.

Gamers don’t like IPS panels because the response time is slow in comparison to TN – so don’t buy a gaming monitor for your photo work!

There are also Vertical Alignment (VA) and Plane-to-Line Switching (PLS) technologies out there, VA being perhaps marginally better than TN, and PLS being close to (and in certain cases better than) IPS.

But all major colour work monitor manufacturers use IPS derivative tech.

3. Monitor Panel Backlight – CCFL, WCCFL, LED

All types of TFT (thin film transistor) monitor require a back light in order to view what is on the display.

Personally I like – or liked before it started to get knackered – the wide cold cathode fluorescent (WCCFL) backlight on the HP LP2475W, but these seem to have fallen by the wayside somewhat in favour of LED backlights.

The WCCFL backlight enabled me to wring 99% of the AdobeRGB1998 colour space out of a plain 8 bit panel on the old HP, and it was a very even light across the whole of the monitor surface.  The monitor itself is nearly 11 years old, but it wasn’t until just over 12 months ago that it started to fade at the corners.  Only since the start of this year (2017) has it really begun to show signs of more severe failure on the right hand 20% – hence I’ll be needing a new one soonish!

But modern LED backlights have a greater degree of uniformity – hence they have generally superseded WCCFL.

4. Colour Look-Up Tables or LUTs

Now this is a bit of an awkward one for some folk to get their heads around, but really it’s simple.

Most monitors that you can buy have an 8 bit LUT which is either fixed, or variable via a number of presets available within the monitor OSD menu.

When it comes to calibrating a ‘standard gamut with fixed LUT’ monitor, the calibration software makes its alterations to the LUT of the GPU – not that of the monitor.

With monitors and GPUs that are barely 8 bit to begin with, the act of calibration can lead to problems.

A typical example would be an older laptop screen.  A laptop screen is driven by the on-board graphics component or chipset within the laptop motherboard.  Older MacBooks were the epitome of this setup’s failure for photographers.

The on-board graphics in older MacBooks were barely 8 bit from the Apple factory, and when you calibrated them they fell to something like 6 bit, and so a lot of images that contained varied tones of a similar Hue displayed colour banding:

An example of image colour banding due to low GPU LUT bit depth.
The banding is NOT really there, it just illustrates the lack of available colours and tones for the monitor display.

This phenomenon used to be a pain in the bum when choosing images for a presentation, but was never anything to panic over because the banding is NOT in the image itself.
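If you want to see why a LUT that falls to roughly 6 bits bands so badly, here’s a minimal Python sketch – quantise a smooth 8 bit ramp and count the levels left to paint it with:

```python
# Banding from a low bit depth LUT: a smooth 0-255 gradient quantised to
# 6 bits has only 64 distinct levels left, so nearby tones collapse into
# visible bands.
def quantise(value: int, bits: int) -> int:
    step = 256 // (2 ** bits)          # size of each quantisation step
    return (value // step) * step

ramp = range(256)                      # a smooth 8-bit gradient
eight_bit_levels = len({quantise(v, 8) for v in ramp})  # 256 levels
six_bit_levels = len({quantise(v, 6) for v in ramp})    # 64 levels
print(eight_bit_levels, six_bit_levels)
```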

Now if I display this same RAW file in Lightroom on my newer calibrated 15″ Retina MacBook Pro I still see a tiny bit of banding, though it’s not nearly this bad.  However, if I connect an Eizo CS2420 using a DisplayPort to HDMI cable via the 10 bit HDMI port on the MBP then there is no banding at all.

And here’s where folk get confused – none of what we are talking about has a direct effect on your image – just on how it appears on the monitor.

When I record a particular shade of say green on my D800E the camera records that green in its own colour space with an accuracy of 14 bits per colour channel.  Lightroom will display its own interpretation of that colour green.  I will make adjustments to that green in HSL terms and then ask Lightroom to export the result as say a TIFF file with 16 bits of colour accuracy per channel – and all the time this is going on I’m viewing the process on a monitor which has a display colour bit depth of 8 bit or 10 bit and that is deriving its colour from a LUT which could be 8 bit, 14 bit or 16 bit depending on what make and model monitor I’m using!

Some people get into a state of major confusion when it comes to bits and bit depth, and to be honest there’s no need for it.  All we are talking about here is ‘fidelity of reproduction’ on the monitor of colours which are FIXED and UNALTERABLE in your RAW file, and of the visual impact of your processing adjustments.

The colours contained in our image are just numbers – nothing more than that.

Lightroom will display an image by sending colour numbers through the GPU LUT to the monitor.  I can guarantee you that even with the best monitor in the world in conjunction with the most accurate calibration hardware money can buy, SOME of those colour numbers will NOT display correctly!  They will be replaced, in a ‘relative colorimetric’ manner, by their nearest neighbour in the MONITOR LUT – the colours the monitor CAN display.

Expensive monitors with 14 bit or 16 bit LUTs mean fewer colours will be ‘replaced’ than when using a monitor that has an 8 bit LUT, and even more colours will be replaced if we scale back our ‘spend’ even further and purchase a standard gamut sRGB monitor.
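As a rough illustration of that ‘nearest neighbour’ replacement (the real rendering-intent maths is far more involved – this is purely conceptual, with a made-up 5-level LUT):

```python
# A crude sketch of what happens to colours the monitor can't show: each
# requested value gets replaced by the nearest level the LUT can actually
# produce. A real LUT has thousands of entries per channel; five levels
# here just makes the replacement obvious.
def nearest_displayable(requested: float, displayable: list[float]) -> float:
    return min(displayable, key=lambda level: abs(level - requested))

coarse_lut = [0.0, 0.25, 0.5, 0.75, 1.0]   # hypothetical, very coarse LUT
print(nearest_displayable(0.61, coarse_lut))  # 0.5 - the requested tone is lost
```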

Another advantage of the pricier 14/16 bit wide gamut dedicated photography monitors from the likes of Eizo, NEC and BenQ is the ability to do ‘hardware calibration’.

Whereas the ‘standard’ monitor calibration mentioned earlier makes its calibration changes primarily to the GPU LUT, and therefore somewhat ‘stifles’ its output bit depth, with hardware calibration we can internally calibrate the monitor itself and leave the GPU running as intended.

That’s a slight over-simplification, but it makes the point!

5. Monitor Connectivity. By this I mean connection type:

VGA or D-Sub 15. Awful method of connection – went out with the Ark. If you are using this then “stop it”!

DVI – nothing wrong with this connection format whatsoever, but bear in mind it’s an 8 bit connection.

Dual Link DVI – still only 8 bit.

DisplayPort – 10 bit monitor input connection.

HDMI left, DisplayPort right – both 10 bit connections.

6. Reliance on dedicated calibration device or not – this is something that has me at the thin end of a sharp wedge if I consider the BenQ option.

I own a perfectly serviceable ColorMunki Photo, and as far as I can see, hardware calibration on the Eizo is feasible with this device. However, hardware calibration on BenQ system software does not appear to support the use of my ColorMunki Photo – so I need to purchase an i1 Display, which is not a corner I really want to be backed into!

Now remember how we defined my D800E Raw file earlier on:

  • has a pixel dimension of 7360 x 4912 and a pixel area (or resolution) of 36,152,320 pixels.
  • 1.41 inches on its long edge
  • has a resolution of around 5,200 pixels per inch
  • has a reproduction accuracy for Hue, Saturation & Luminance of 14 bits
  • has a colour space unique to the camera, which can best be reproduced by the ProPhotoRGB working colour space.

So let’s now take a look at the resolution spec for, say, the NEC Spectraview Reference 302 monitor.  It’s a 30″ panel with an optimum resolution of 2560 x 1600 pixels – that’s 4Mp!

The ubiquitous Eizo ColorEdge CG2420 has a standard 24 inch resolution of 1920 x 1200 pixels – that’s 2.3Mp!

The BenQ SW2700PT Pro 27in IPS has 2560 x 1440, or 3.68Mp resolution.
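Putting those quoted figures side by side against the D800E file from earlier (simple arithmetic, no assumptions beyond the resolutions listed above):

```python
# Monitor pixel counts vs a 36Mp camera file - the figures quoted above:
monitors = {
    "NEC Spectraview Reference 302 (2560x1600)": 2560 * 1600,
    "Eizo ColorEdge CG2420 (1920x1200)": 1920 * 1200,
    "BenQ SW2700PT (2560x1440)": 2560 * 1440,
}
camera_pixels = 7360 * 4912  # Nikon D800E - 36,152,320 pixels

for name, pixels in monitors.items():
    print(f"{name}: {pixels / 1e6:.2f}Mp - "
          f"the RAW file has {camera_pixels / pixels:.0f}x more pixels")
```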

Yes, monitor resolution is WAY BELOW that of the image – and that’s a GOOD THING.

I HATE viewing unedited images/processing on my 13″ Retina MBP screen – not just because of any possible calibration issue, or indeed that of its diminutive size – but because of its whopping 2560 x 1600, 4Mp resolution crammed into such a small space.

The individual pixels are so damn tiny they lull you into a false sense of security about one thing above all else – critical image sharpness.

Images that ‘appear tack sharp’ on a high resolution monitor MIGHT prove a slight disappointment when viewed on another monitor with a more conventional resolution!

So there we have it, and I hope you’ve learned something you didn’t know about monitors.

And remember, understanding what you already have, and what you want to buy is a lot more advantageous to you than the advice of some bloke in a shop who’s on a sales commission!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

If this post has been useful to you then please consider chucking me a small donation – or a big one if you are that way inclined!

Many thanks to the handful of readers who contributed over the last week or so – you’ve done your bit and I’m eternally grateful to you.

Colormunki Photo Update

Colormunki Photo Update

Both my Mac Pro and non-retina iMac used to be on Mountain Lion, or OSX 10.8, and nope, I never updated to Mavericks as I’d heard so many horror stories, and I basically couldn’t be bothered – hey, if it ain’t broke don’t fix it!

But, I wanted to install CapOne Pro on the iMac for the live-view capabilities – studio product shot lighting training being the biggest draw on that score.

So I downloaded the 60 day free trial, and whadyaknow, I can’t install it on anything lower than OSX 10.9!

Bummer thinks I – and I upgrade the iMac to OSX 10.10 – YOSEMITE.

Now I was quite impressed with the upgrade and I had no problems in the aftermath of the Yosemite installation; so after a week or so muggins here decided to do the very same upgrade to his late 2009 Mac Pro.

OHHHHHHH DEARY ME – what a pig’s ear of a move that turned out to be!

Needless to say, I ended up making a Yosemite boot installer and setting up on a fresh HDD.  After re-installing all the necessary software like Lightroom and Photoshop, iShowU HD Pro and all the other crap I use, the final task arrived of sorting colour management out and profiling the monitors.

So off we trundle to X-Rite and download the Colormunki Photo software – v1.2.1.  I then proceeded to profile the 2 monitors I have attached to the Mac Pro.

Once the colour measurement stage got underway I started to think that it was all looking a little different and perhaps a bit more comprehensive than it did before.  Anyway, once the magic had been done and the profile saved I realised that I had no way of checking the new profile against the old one – t’was on the old hard drive!

So I go to the iMac and bring up the Colormunki software version number – 1.1.1 – so I tell the software to check for updates – “none available” came the reply.

Colormunki software downloads

Colormunki v1.2.1 for Yosemite

So I download 1.2.1, remove the 1.1.1 software and restart the iMac as per X-Rite’s instructions, and then install said 1.2.1 software.

Once installation was finished I profiled the iMac and found something quite remarkable!

Check out the screen grab below:

iMac screen profile comparisons. You need to click this to open full size in a new tab.

On the left is a profile comparison done in the ColorThink 2-D grapher, and on the right one done in the iMac’s own ColorSync Utility.

In the left image the RED gamut projection is the new Colormunki v1.2.1 profile. This also corresponds to the white mesh grid in the ColorSync image.

Now the smaller WHITE gamut projection was produced with an i1Pro 2 using the maximum number of calibration colours; this corresponds to the coloured projection in the ColorSync window image.

The GREEN gamut projection is the supplied iMac system monitor profile – which is slightly “pants” due to its obviously smaller size.

What’s astonished me is that the Colormunki Photo with the new software v1.2.1 has produced a larger gamut for the display than the i1 Pro 2 did under Mountain Lion OSX 10.8.

I’ve only done a couple of test prints via softproofing in Lightroom, but so far the new monitor profile has led to a small improvement in screen-to-print matching of some subtle yellow-green and green-blue mixes, as well as those yellowish browns which I often found tricky to match when printing from the iMac.

So, my advice is this: if you own a Colormunki Photo and have upgraded your iMac to Yosemite, CHECK your X-Rite software version number. Checking for updates doesn’t always work, and the new 1.2.1 Mac version is well worth the trouble to install.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Desktop Printing 101

Understanding Desktop Printing – part 1

 

Desktop printing is what all photographers should be doing.

Holding a finished print of your epic image is the final part of the photographic process, and should be enjoyed by everyone who owns a camera and loves their photography.

But desktop printing has a “bad rap” amongst the general hobby photography community – a process full of cost, danger, confusion and disappointment.

Yet there is no need for it to be this way.

Desktop printing is not a black art full of ‘ju-ju men’ and bear-traps  – indeed it’s exactly the opposite.

But if you refuse to take on board a few simple basics then you’ll be swinging in the wind and burning money for ever.

Now I’ve already spoken at length on the importance of monitor calibration & monitor profiling on this blog HERE and HERE so we’ll take that as a given.

But in this post I want to look at the basic material we use for printing – paper media.

Print Media

A while back I wrote a piece entitled “How White is Paper White” – it might be worth you looking at this if you’ve not already done so.

Over the course of most of my blog posts you’ll have noticed a recurring undertone of “contrast needs controlling”.

Contrast is all about the relationship between blacks and whites in our images, and the tonal separation between them.

This is where we, as digital photographers, can begin to run into problems.

We work on our images via a calibrated monitor, normally calibrated to a gamma of 2.2 and a D65 white point.  Modern monitors can readily display true black and true white (Lab 0 to Lab 100/RGB 0 to 255 in 8 bit terms).

Our big problem lies in the fact that you can print NEITHER of these luminosity values in any of the printer channels – the paper just will not allow it.

A paper’s ability to reproduce white is obviously limited to the brightness and background colour tint of the paper itself – there is no such thing as ‘white’ paper.

But a paper’s ability to render ‘black’ is the other vitally important consideration – and it comes as a major shock to a lot of photographers.

Let’s take 3 commonly used Permajet papers as examples:

  • Permajet Gloss 271
  • Permajet Oyster 271
  • Permajet Portrait White 285

The following measurements have been made with a ColorMunki Photo & Colour Picker software.

L* values are the luminosity values in the L*ab colour space where 0 = pure black (0RGB) and 100 = pure white (255RGB)

Gloss paper:

  • Black/Dmax = 4.4 L* or 14,16,15 in 8 bit RGB terms
  • White/Dmin = 94.4 L* or 235,241,241 (paper white)

From these measurements we can see that the deepest black we can reproduce has an average 8bit RGB value of 15 – not zero.

We can also see that “paper white” leans towards cyan due to the higher 241 green & blue values – the red channel is 6 points deficient – and this cast carries over into the blacks, which are also slightly deficient in red.

Oyster paper:

  • Black/Dmax = 4.7 L* or 15,17,16 in 8 bit RGB terms
  • White/Dmin = 94.9 L* or 237,242,241 (paper white)

We can see that the Oyster maximum black value is slightly lighter than the Gloss paper (L* values give far better accuracy than 8 bit RGB values).

We can also see that the paper has a slightly brighter white value.

Portrait White Matte paper:

  • Black/Dmax = 25.8 L* or 59,62,61 in 8 bit RGB terms
  • White/Dmin = 97.1 L* or 247,247,244 (paper white)

You can see that paper white is brighter than either Gloss or Oyster.

The paper white is also deficient in blue, but the Dmax black is deficient in red.

It’s quite common to find this skewed cool/warm split between dark tones and light tones when printing, and sometimes it can be the other way around.

And if you don’t think there’s much of a difference between 247,247,244 & 247,247,247 you’d be wrong!

The image below (though exaggerated slightly due to jpeg compression) effectively shows the difference – 247 neutral being at the bottom.

247,247,244 (top) and 247,247,247 (below) – slightly exaggerated by jpeg compression.

See how much ‘warmer’ the top of the square is?

But the real shocker is the black or Dmax value:

Portrait White matte finish paper plotted against wireframe sRGB on L*ab axes.

The wireframe above is the sRGB colour space plotted on the L*ab axes; the shaded volume is the profile for Portrait White.  The sRGB profile has a maximum black density of 0RGB and so reaches the bottom of vertical L axis.

However, that 25.8 L* value of the matte finish paper has a huge ‘gap’ underneath it.

The higher the black L* value the larger is the gap.

What does this gap mean for our desktop printing output?

It’s simple – any tones in our image that are DARKER, or have a lower L* value than the Dmax of the destination media will be crushed into “paper black” – so any shadow detail will be lost.

The same can equally be said for gaps at the top of the L* axis, where “paper white” or Dmin is lower than the L* value of the brightest tones in our image – they too will get homogenized into the all-encompassing paper white!
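Here’s the ‘crushing’ in Python form, using the Portrait White L* measurements from above – a deliberately crude sketch, since real printer profiles remap tones far more gracefully than a hard clamp:

```python
# What the 'gap' does in practice: any tone darker than the paper's Dmax,
# or lighter than its Dmin, simply becomes paper black / paper white.
# Values are the Permajet Portrait White measurements quoted above.
PAPER_DMAX_L = 25.8   # deepest black the matte paper can print (L*)
PAPER_DMIN_L = 97.1   # brightest 'white' - the paper itself (L*)

def print_tone(image_l_star: float) -> float:
    """Crush an image L* value into the paper's printable range."""
    return max(PAPER_DMAX_L, min(PAPER_DMIN_L, image_l_star))

shadow_detail = [5.0, 12.0, 20.0, 30.0]        # distinct tones on a monitor...
print([print_tone(l) for l in shadow_detail])  # first three crushed to 25.8
```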

Imagine we’ve just processed an image that makes maximum use of our monitor’s display gamut in terms of luminosity – it looks magnificent, and will no doubt look equally as such for any form of electronic/digital distribution.

But if we send this image straight to a printer it’ll look really disappointing, if only for the reasons mentioned above – because basically the image will NOT fit on the paper in terms of contrast and tonal distribution, let alone colour fidelity.

It’s at this point that everyone gives up the idea of desktop printing:

  • It looks like crap
  • It’s a waste of time
  • I don’t know what’s happened.
  • I don’t understand what’s gone wrong

Well, in response to the latter, now you do!

But do we have to worry about all this tech stuff ?

No, we don’t have to WORRY about it – that’s what a colour managed work flow & soft proofing is for.

But it never hurts to UNDERSTAND things, otherwise you just end up in a “monkey see monkey do” situation.

And that’s as dangerous as it can get – change just one thing and you’re in trouble!

But if you can ‘get the point’ of this post then believe me you are well on your way to understanding desktop printing and the simple processes we need to go through to ensure accurate and realistic prints every time we hit the PRINT button.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Gamma Encoding – Under the Hood

Gamma, Gamma Encoding & Decoding

Gamma – now there’s a term I see cause so much confusion and misunderstanding.

So many people use the term without knowing what it means.

Others get gamma mixed up with contrast, which is the worst mistake anyone could ever make!

Contrast controls the spatial relationship between black and white; in other words the number of grey tones.  Higher contrast spreads black into the darker mid tones and white into the upper mid tones.  In other words, both the black point and white point are moved.

The only tones that are not affected by changes in image gamma are the black point and white point – that’s why getting gamma mixed up with contrast is the mark of a “complete idiot” who should be taken outside and summarily shot before they have chance to propagate this shocking level of misunderstanding!

What is Gamma?

Any device that records an image does so with a gamma value.

Any device which displays/reproduces said image does so with a gamma value.

We can think of gamma as the proportional distribution of tones recorded by, or displayed on, a particular device.

Because different devices have different gamma values problems would arise were we to display an image that has a gamma of X on a display with a gamma of Y:

Ever wondered what a RAW file would look like displayed on a monitor without any fancy colour & gamma managed software such as LR or ACR?

A raw file displayed on the back of the camera (left) and as it would look on a computer monitor calibrated to a gamma of 2.2 & without any colour & gamma management (right).

The right hand image looks so dark because it has a native gamma of 1.0 but is being displayed on a monitor with a native gamma of 2.2

RAW file Gamma

To all intents and purposes ALL RAW files have a gamma of 1.0

Camera Sensor/Linear Gamma (Gamma 1.0)

Digital camera sensors work in a linear fashion:

If we have “X” number of photons striking a sensor photosite then “Y” amount of electrons will be generated.

Double the number of photons by doubling the amount of light, then 2x “Y” electrons will be generated.

Halve the number of photons by reducing the light on the scene by 50% then 0.5x “Y” electrons will be generated.

We have two axes on the graph; the horizontal x axis represents the actual light values in the scene, and the vertical y axis represents the output or recorded tones in the image.

So, if we apply Lab L* values to our graph axes above, then 0 equates to black and 1.0 equates to white.

The “slope” of the graph is a straight line giving us an equal relationship between values for input and output.

It’s this relationship between input and output values in digital imaging that helps define GAMMA.

In our particular case here, we have a linear relationship between input and output values and so we have LINEAR GAMMA, otherwise known as gamma 1.0.
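The linear relationship just described can be sketched in a couple of lines – a toy model only, ignoring noise, quantum efficiency and sensor saturation:

```python
# The linear behaviour described above: electrons generated are directly
# proportional to photons received. The 0.5 conversion factor is made up
# purely for illustration.
def electrons(photons: float, y: float = 0.5) -> float:
    return photons * y   # 'Y' electrons for 'X' photons

base = electrons(1000)
assert electrons(2000) == 2 * base    # double the light, double the signal
assert electrons(500) == 0.5 * base   # halve the light, halve the signal
print(base, electrons(2000), electrons(500))
```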

Now let’s look at a black to white graduation in gamma 1.0 in comparison to one in what’s called an encoding gamma:

Linear (top) vs Encoded Gamma

The upper gradient is basically the way our digital cameras see and record a scene.

There is an awful lot of information about highlights and yet the darker tones and ‘shadow’ areas are seemingly squashed up together on the left side of the gradient.

Human vision does not see things in the same way that a camera sensor does; we do not see linearly.

If the amount of ambient light falling on a scene suddenly doubles we will perceive the increase as an unquantifiable “it’s got brighter”; whereas our sensors response will be exactly double and very quantifiable.

Our eyes see a far more ‘perceptually even’ tonal distribution with much greater tonal separation in the darker tones and a more compressed distribution of highlights.

In other words we see a tonal distribution more like that contained in the gamma encoded gradient.

Gamma encoding can be best illustrated with another graph:

Linear Gamma vs Gamma Encoding 1/2.2 (0.4545)

Now sadly this is where things often get misunderstood, and why you need to be careful about where you get information from.

The cyan curve is NOT gamma 2.2 – we’ll get to that shortly.

Think of the graph above as the curves panel in Lightroom, ACR or Photoshop – after all, that’s exactly what it is.

Think of our dark, low contrast linear gamma image as displayed on a monitor – what would we need to do to the linear slope  to improve contrast and generally brighten the image?

We’d bend the linear slope to something like the cyan curve.

The cyan curve is the encoding gamma 1/2.2.

There’s a direct numerical relationship between the two gamma curves, linear and 1/2.2, and it’s a simple power law:

  •  VO = VI^γ where VO = output value, VI = input value and γ = gamma

Any input value (VI) on the linear gamma curve to the power of γ equals the output value of the cyan encoding curve; and γ as it works out equals 0.4545

  •  VI 0.00 → VO 0.000
  •  VI 0.25 → VO 0.532
  •  VI 0.50 → VO 0.729
  •  VI 0.75 → VO 0.878
  •  VI 1.00 → VO 1.000

Now isn’t that bit of maths sexy………………..yeah!
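If you fancy checking that table yourself, a few lines of Python will do it – this is just the power law above, nothing more:

```python
# Encode with gamma 1/2.2 (~0.4545), decode with gamma 2.2 - the two
# operations are reciprocal powers, so the round trip lands back on linear.
ENCODE_GAMMA = 1 / 2.2

def encode(v_in: float) -> float:
    return v_in ** ENCODE_GAMMA

def decode(v_out: float) -> float:
    return v_out ** 2.2

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"VI {v:.2f} -> VO {encode(v):.3f}")   # matches the table above

# Encode then decode gets us (near enough) back where we started:
assert abs(decode(encode(0.25)) - 0.25) < 1e-9
```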

Basically the gamma encoding process remaps all the tones in the image and redistributes them in a non-linear ratio which is more familiar to our eye.

Note: the gamma of human vision is not really 1/2.2 (gamma 0.4545).  It would be near impossible to actually quantify gamma for our eye due to the behaviour of the iris etc., but to all intents and purposes modern photographic principles regard it as being ‘similar to’ it.

So the story so far equates to this:

Gamma encoding redistributes tones in a non-linear manner.

But things are never quite so straight forward are they…?

Firstly, if gamma < 1 (less than 1) the encoding curve goes upwards – as does the cyan curve in the graph above.

But if gamma > 1 (greater than 1) the curve goes downwards.

A calibrated monitor has (or should have) a calibrated device gamma of 2.2:

Linear, Encoding & Monitor gamma curves.

As you can now see, the monitor device gamma of 2.2 is the opposite of the encoding gamma – after all, the latter is the reciprocal of the former.

So what happens when we apply the decoding gamma/monitor gamma of 2.2 to our gamma encoded image?

The net effect of Encode & Decode gamma – Linear.

That’s right, we end up back where we started!

Now, are you thinking:

  • Don’t understand?
  • We are back with our super dark image again?

Welcome to the world’s biggest Bear-Trap!

The “Learning Gamma Bear Trap”

Hands up those who are thinking this is what happens:

If your arm so much as twitched then you are not alone!

I’ll admit to being naughty and leading you to edge of the pit containing the bear trap – but I didn’t push you!

While you’ve been reading this post have you noticed the occasional random bold and underlined text?

Them’s clues folks!

The super dark images – both seascape and the rope coil – are all “GAMMA 1.0 displayed on a GAMMA 2.2 device without any management”.

That doesn’t mean a gamma 1.0 RAW file actually LOOKS like that in its own gamma environment!

That’s the bear trap!

Gamma 1.0 to gamma 2.2 encoding and decoding

Our RAW file actually looks quite normal in its own gamma environment (2nd from left) – but look at the histogram and how all those darker mid tones and shadows are piled up to the left.

Gamma encoding to 1/2.2 (gamma 0.4545) remaps and redistributes all those tones and lightens the image by pushing the curve up, BUT leaves the black and white points where they are.  No tones have been added or taken away; the operation just redistributes what’s already there.  Check out the histogram.

Then the gamma decode operation takes place and we end up with the image on the right – looks perfect and ready for processing, but notice the histogram, we keep the encoding redistribution of tones.

So, are we back where we started?  No.

Luckily for us gamma encoding and decoding is all fully automatic within a colour managed work flow and RAW handlers such as Lightroom, ACR and CapOnePro etc.

Image gamma changes are required when an image is moved from one RGB colour space to another:

  • ProPhoto RGB has a gamma of 1.8
  • Adobe RGB 1998 has a gamma of 2.2
  • sRGB has an oddball gamma that equates to an average of 2.2 but is nearly 1.8 in the deep shadow tones.
  • Lightroom’s working colour space is ProPhoto linear, in other words gamma 1.0
  • Lightroom’s viewing space is MelissaRGB, which equates to ProPhoto with an sRGB gamma.
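The gamma half of such a colour space move amounts to ‘decode with the source gamma, re-encode with the destination gamma’. The sketch below ignores the primaries side of a real conversion and treats the gammas as plain power laws (sRGB’s true curve is piecewise, remember):

```python
# The gamma portion of moving a tone value between two RGB colour spaces:
# decode to linear light with the source gamma, then re-encode with the
# destination gamma. Real conversions also remap the primaries - this is
# the tonal half only, for illustration.
def regamma(value: float, source_gamma: float, dest_gamma: float) -> float:
    linear = value ** source_gamma          # decode to linear light
    return linear ** (1 / dest_gamma)       # re-encode for the destination

# A ProPhoto (gamma 1.8) mid value re-encoded for a gamma 2.2 space:
v = regamma(0.5, 1.8, 2.2)
print(round(v, 4))  # ~0.5672 - same pixel, new number, identical 'colour'
```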

Image gamma changes need to occur when images are sent to a desktop printer – the encode/decode characteristics are actually part and parcel of the printer profile information.

Gamma awareness should be exercised when it comes to monitors:

  • Most plug & play monitors are set to far too high a gamma ‘out the box’ – get it calibrated properly ASAP; it’s not just about colour accuracy.
  • Laptop screen gamma changes with viewing position – God they are awful!

Anyway, that just about wraps up this brief explanation of gamma; believe me it is brief and somewhat simplified – but hopefully you get the picture!

Become a Patron!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution  I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here’s those two snake jpegs again:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

Digital display size is driven by pixel dimension – NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops, and I know it’s hard for some folk to get their heads around print size and printer output – but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:


Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste, but it’s a seller so boo-hoo – what do I know?!

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (This is the effective imaging area – the total photosite area is 4992 x 3292, according to DXO Labs.)

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc. will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300  x  3280/300 inches  or  16.43″ x 10.93″

or

  • 417.24 x 277.71 mm for those of you with a metric inclination!
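If you want to check those numbers yourself, the sums are trivial – here is a quick Python sketch using the D4 figures from above:

```python
# Linear (physical) size = pixel dimension / pixel resolution.
# D4 figures from above: 4928 x 3280 pixels at the default 300 PPI.
def linear_size(px_long, px_short, ppi):
    """Return the (long, short) edge lengths in inches."""
    return px_long / ppi, px_short / ppi

long_in, short_in = linear_size(4928, 3280, 300)
print(f"{long_in:.2f} x {short_in:.2f} inches")            # 16.43 x 10.93 inches
print(f"{long_in * 25.4:.2f} x {short_in * 25.4:.2f} mm")  # 417.24 x 277.71 mm
```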

So how big CAN we print this image?

 

Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”
  • Well, the strict answer would be No, and No, not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor is easily derived: the ‘pixel’ count of the FX frame’s long edge divided by its linear length, which is just shy of 36mm or 1.4″ – giving 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.
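The arithmetic behind that ‘effective’ sensor PPI is a one-liner, really – using the rounded 1.4″ edge length from above:

```python
# 'Effective' sensor PPI = long-edge photosite count / long-edge length
# in inches (the D4's FX frame is just shy of 36 mm, i.e. roughly 1.4").
sensor_ppi = 4928 / 1.4
print(round(sensor_ppi))  # 3520
```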

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?

 

Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:


Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:


Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor,  Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
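Those magnification figures are easy to verify – here is a short Python sketch of the diagonal calculation, using the D4 numbers from above:

```python
import math

# Magnification is calculated on the diagonal: the 300 PPI image
# diagonal versus the 43.2 mm full-frame sensor diagonal (D4 figures).
image_diag_in = math.hypot(16.43, 10.93)   # Pythagoras: ~19.73"
image_diag_mm = image_diag_in * 25.4       # ~501 mm
magnification = image_diag_mm / 43.2       # full-frame diagonal is 43.2 mm
print(f"~{magnification:.1f}x")            # ~11.6x
```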

So you can now begin to see why pixel resolution is so important when we print.

 

How To Blow Up A Squirrel !

Let’s get back to ‘his cuteness’ and open him up in Photoshop:

Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically A3.

What happens if we hit print using A3 paper?

Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:

Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:

Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:

Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed any physical dimension and we haven’t resampled the image.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
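To see what that one change does to the print size – with the pixel data completely untouched – here is the arithmetic as a quick Python sketch:

```python
# With resampling OFF, changing the PPI value changes only the implied
# print size; the 4928 x 3280 pixels themselves are untouched.
px_long, px_short = 4928, 3280

for ppi in (300, 240):
    w, h = px_long / ppi, px_short / ppi
    print(f"{ppi} PPI -> {w:.2f} x {h:.2f} inches")
# 300 PPI -> 16.43 x 10.93 inches
# 240 PPI -> 20.53 x 13.67 inches
```

At 240 PPI the print is roughly 20.5″ x 13.7″, which sits comfortably inside A2 (23.4″ x 16.5″) with workable margins.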

Let’s have a look at the final outcome:

Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality.  Indeed, 240 PPI was the default pixel resolution in Adobe Lightroom and ACR until fairly recently.

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of  the “murky stuff” in photography suddenly becomes very simple!


Lightroom Tutorials #2

 


Image Processing in Lightroom & Photoshop

 

In this Lightroom tutorial preview I take a close look at the newly evolved Clone/Heal tool and dust spot removal in Lightroom 5.

This newly improved tool is simple to use and highly effective – a vast improvement over what was already a great tool in Lightroom 4.

 

Lightroom Tutorials sample video link below (video will open in a new window):

 

https://vimeo.com/64399887

 

This 4 disc Lightroom Tutorials DVD set is available from my website at http://wildlifeinpixels.net/dvd.html

How White is Paper White?

What is Paper White?

We should all know by now that, in RGB terms, BLACK is 0,0,0 and that WHITE is 255,255,255 when expressed in 8 bit colour values.

White can also be 32768, 32768, 32768 when viewed in Photoshop as part of a 16 bit image (though those values are actually 15 bit – yet another story!).

Either way, WHITE is WHITE; or is it?


Arctic Fox in Deep Snow ©Andy Astbury/Wildlife in Pixels

Take this Arctic Fox image – is anything actually white?  No, far from it!  The brightest area of snow is around 238,238,238, which is neutral – but that’s not white, it’s a very light grey.  And we won’t even discuss the “whiteness” of the fox itself.


Hen Pheasant in Snow ©Andy Astbury/Wildlife in Pixels

The Hen Pheasant above was shot very late on a winter’s afternoon, when the sun was at a very low angle directly behind me – the colour temperature has gone through the roof and everything has taken on a very warm glow, which adds to the atmosphere of the image.


Extremes of colour temperature – Snow Drift at Sunset ©Andy Astbury/Wildlife in Pixels

We can take the ‘snow at sunset’ idea even further: where the sun’s rays strike the snow it lights up pink, but the shadows go a deep, rich aquamarine blue – what we might call a ‘crossed curves’ scenario, where the shadows and lower mid tones sit at a low Kelvin temperature, and the upper mid tones and highlights sit at a much higher one.

All three of these images might look a little bit ‘too much’ – but try clicking one and viewing it on a darker background without the distractions of the rest of the page – GO ON, TRY IT.

Showing you these three images has a couple of purposes:

Firstly, to show you that “TRUE WHITE” is something you will rarely, if ever, photograph.

Secondly, to show you that viewing the same image in a different environment changes the eye’s perception of the image.

The second purpose is the most important – it’s all to do with perception; and to put it bluntly, the pack of lies that your eyes and brain lead you to believe is the truth.

Only Mother Nature, wildlife and cameras tell the truth!

So Where’s All This Going Andy, and What’s it got to do with Paper White?

Fair question, but bear with me!

If we go to the camera shop and peruse a selection of printer papers or unprinted paper samplers, our eyes tell us that we are looking at blank sheets of white paper;  but ARE WE?

Each individual sheet of paper appears to be white, but we see very subtle differences which we put down to paper finish.

But if we put a selection of, say, Permajet papers together and compare them with ‘true RGB white’ we see the truth of the matter:


Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8bit.

Holy Mary Mother of God!!!!!!!!!!!!!!!!

I’ll bet that’s come as a bit of a shocker………

No paper is WHITE; some papers are “warm”; and some are “cool”.

So, if we have a “warmish” toned image it’s going to be a lot easier to “soft proof” that image to a “warm paper” than a cool one – with the result of greater colour reproduction accuracy.

If we were to try and print a “cool” image on to “warm paper” then we’ve got to shift the whole colour balance of the image – in other words, warm it up – in order for the final print to be perceived as neutral. Don’t forget, that sheet of paper looked neutral to you when you stuck it in the printer!

Well, that’s simple enough you might think, but you’d be very, very wrong…

We see colour on a print because the inks allow us to see the paper white through them, but only up to a point.  As colours and tones become darker on our print we see less “paper white” and more reflected colour from the ink surface.

If we shift the colour balance of the entire image – in this case warm it up – we shift the highlight areas so they match the paper white; but we also shift the shadows and darker tones.  These darker areas hide paper white so the colour shift in those areas is most definitely NOT desirable because we want them to be as perceptually neutral as the highlights.

What we need to do in truth is to somehow warm up the higher tonal values while at the same time keep the lowest tonal values the same, and then somehow match all the tones in between the shadows and highlights to the paper.

This is part of the process called SOFT PROOFING – but the job would be a lot easier if we chose to print on a paper whose “paper white” matched the overall image a little more closely.

The Other Kick in the Teeth

Not only are we battling the hue of paper white, or tint if you like, but we also have to take into account the luminance values of the paper – in other words just how “bright” it is.

Those RGB values of paper whites across a spread of Permajet papers – here they are again to save you scrolling back:


Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8bit.

not only tell us that there is a tint to the paper due to the three colour channel values being unequal, but they also tell us the brightest value we can “print” – in other words not lay any ink down!

Take Oyster for example; a cracking all-round general printer paper that has a very large colour gamut and is excellent value for money – Permajet deserve a medal for this paper in my opinion because it’s economical and epic!

Its paper white is, on average, 240 Red, 245 Green, 244 Blue.  If we have any detail in areas of our image that sit above 240,240,240 then part of that detail will be lost in the print, because the red channel minimum density (d-min) tops out at 240; anything that is 241 Red or higher will simply not be printed, and will show as 240 Red – the paper white.
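The clipping behaviour is easy to picture in code – here is a small sketch using Oyster’s average paper white from the text above (illustrative only, not how a printer driver actually works):

```python
# Any channel value brighter than the paper white cannot be printed -
# it simply collapses to paper white (Oyster: R240 G245 B244 on average).
PAPER_WHITE = (240, 245, 244)

def printable(rgb):
    """Clamp an 8-bit RGB value to what the paper can actually render."""
    return tuple(min(v, w) for v, w in zip(rgb, PAPER_WHITE))

print(printable((250, 250, 250)))  # (240, 245, 244) - highlight detail lost
print(printable((230, 230, 230)))  # (230, 230, 230) - prints as intended
```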

Again, this is a problem mitigated in the soft proofing process.

But it’s also one of the reasons why the majority of photographers are disappointed with their prints – they look good on screen because they are being displayed with a tonal range of 0 to 255, but printed they just look dull, flat and generally awful.

Just another reason for adopting a Colour Managed Work Flow!
