Good Contrast Control in Lightroom CC

Contrast Control in Lightroom

Learning how to deploy proper contrast control in Lightroom brings with it two major benefits:

  • It allows you to reveal more of your camera sensor's dynamic range.
  • It allows you to reveal considerably more image detail.


I have posted on this subject before, under the guise of neutralising Lightroom's 'hidden background adjustments'.  But as Lightroom CC 2015 evolves, trying to 'nail' the best way of doing something becomes like trying to hit a moving target.

For the last few months I’ve been using this (for me) new method – and to be honest it works like a charm!

It involves the use of the ‘zero’ preset together with a straight process version swap around, as illustrated in the before/after shot above and in the video linked below.  This video is best viewed on my YouTube channel:

The process might seem a little tedious at first, but it’s really easy when you get used to it, and it works on ALL images from ALL cameras.

Here is a step-by-step guide to the various Lightroom actions you need to take in order to obtain good contrast control:

Contrast Control Workflow Steps:

1. Develop Module Presets: Choose ZEROED
2. Camera Calibration Panel: Choose CAMERA NEUTRAL
3. Camera Calibration Panel: Choose Process Version 2010
4. Camera Calibration Panel: Choose Process Version 2012
5. Basic Panel: Double Click Exposure (goes from -1 to 0)
6. Basic Panel: Adjust Black Setting to taste if needed.
7. Detail Panel: Reset Sharpening to default +25
8. Detail Panel: Reset Colour Noise to default +25
9. Lens Corrections Panel: Tick Remove Chromatic Aberration.

Now that you’ve got good contrast control you can set about processing your image – just leave the contrast slider well alone!

Why is contrast control important, and why does it 'add' so much to my images, Andy?

We are NOT really reducing the contrast of the raw file we captured.  We are simply reducing the EXCESSIVE CONTRAST that Lightroom ADDS to our files.

  • Lightroom typically ADDS a +33 contrast adjustment but ‘calls it’ ZERO.
  • Lightroom typically ADDS a medium contrast tone curve but ‘calls it’ LINEAR.

Both of these are contrast INCREASES, and any increase in contrast can be seen as a 'compression' of the tonal space between BLACK and WHITE.  This is a dynamic range visualisation killer because it crushes the ends of the midtone range.

It's also a detail killer, because 99% of the subject detail is in the midtone range.  Typically the Lightroom tonal curve range for midtones is 25% to 75%, but Lightroom is quite happy to accept a midtone range of 10% to 90% – check those midtone arrow adjusters at the bottom edge of the parametric tone curve!
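If you want to see that 'compression' for yourself, here's a minimal Python sketch.  It is NOT Lightroom's actual maths – just a naive linear contrast boost about mid-grey – but it shows how pushing contrast shoves tones off both ends of the scale:

```python
import numpy as np

# A 0-1 tonal ramp standing in for the full black-to-white range.
tones = np.linspace(0.0, 1.0, 256)

def add_contrast(t, amount):
    """Naive linear contrast boost about mid-grey, clipped to 0-1."""
    return np.clip((t - 0.5) * amount + 0.5, 0.0, 1.0)

# A hypothetical "+33"-style boost modelled as a 1.33x gain.
boosted = add_contrast(tones, 1.33)

print((boosted == 0.0).sum(), "tones crushed to pure black")  # 32
print((boosted == 1.0).sum(), "tones blown to pure white")    # 32
```

Thirty-odd tones gone from each end of the scale before you've touched a single slider yourself – that's the 'hidden' contrast at work.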

I hope you find this post useful folks, and don’t forget to watch the video at full resolution on my YouTube Channel.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

 

Speed Light Photography

Speed Light Photography – part 1

First things first, apologies for the gap in blog entries – I’ve been a bit “in absentia” of late for one reason or another.  I’ve got a few gear reviews to do between now and the end of the year, video tutorial ideas and requests are crawling out of the woodwork, and my ability to organise myself has become something of a crumbling edifice!

I blame the wife myself………………..

But I've come to the conclusion that for one reason or another I've become somewhat pigeon-holed as a wildlife/natural history photographer – going under the moniker of Wildlife in Pixels it's hardly a big surprise, is it?

But I cut my photographic teeth on studio product/pack shot and still life work – I loved it then and I still do.  And there's NOTHING that teaches you more about light than studio work – it pays dividends in all aspects of photography; wildlife and landscape work are no exception.  Understanding how light behaves, when it'll look good and when it'll look like a bag of spanners is what helps capture mood and atmosphere in a shot.

The interaction between light and subject is what makes a great image, and I do wish photographers would understand this – sadly most don’t.

To this end I've begun to teach workshops that try to give those attending a flavour of the basic concepts of light by introducing them to the idea of using their speed lights to produce images they can shoot 365 days a year, come rain or shine – high speed flash, and simple product still life.

Both styles demand a high level of attention to detail in the way the light produced by the speed lights bends and wraps around the subject.  Full-blown studio lights have the benefit of modelling lights so that you can see this before you take the shot, but using speed lights means you have to imagine what the light is doing, so its level of difficulty begins high, but decreases with practical experience.


A basic 4 light setup with speed lights can produce some really soft and moody lighting with ease.

This Black Label shot went a bit bonkers in the final stages with the addition of smoke, but it gives you an idea of the subtlety of lighting that can be achieved with speed lights.

As for the setup, here’s a shot before I introduced the glass….


Simple setup for the Black Label shot – note the well-appointed studio!

…featuring that most valuable of studio photographer's tools, the Voice Activated Light Stand..!

Four SB800s in all; the one on the right is running at 1/2 power, fitted with an Interfit Strobies softbox, and is double diffused using a Calumet 42″ frame (available here) and white diffuser – this constitutes the main light.

Just look at the size of the diffused disc on the face of that 42″ frame – all that from a poxy 2″x1″ flash head in less than 16″ – epic!

The SB800 on the left, fitted with another softbox, is turned down to 1/64th power, and is there solely to illuminate the label where it wraps around the left edge of the bottle, and to get a second neck highlight. Although there is light emanating from it, its greatest effect is that of “bouncing” light from the right hand source back in to the bottle.

The V.A.L.S. is fitted with a third speed light that has a diffused snoot – note the expensive diffusion material and the highly engineered attachment method – kitchen towel and rubber band!  The sole purpose of this tiny soft light is to just help pull out the left side of the bottle cap from the intensely dark background towards the top of the shot.

The 4th SB800 is fitted with a 30 degree honeycomb and a “tits 'n ass” (or TNA2 to be more correct) filter, just to give a subtle warm graduation to the background.

Speaking of the background, this is a roll of high grade tracing paper – one of the most versatile materials any studio has, both as a front lit or back lit background, or as a diffusion material – just brilliant stuff, second only to Translum plastic, and a shed-load cheaper.

At the other end of the speed light photography spectrum is the most enjoyable and fascinating pastime of high speed liquid motion photography – a posh way of saying “making a mess”!

It doesn’t have to be too messy – just don’t do it on your best Axminster!

By utilising the IGBT (Insulated Gate Bipolar Transistor) circuitry given to us in speed lights we can deploy the very fast tube burn times, or flash durations, obtained at lower output power settings to our advantage.

Simple shots of water, both dyed and clear, can produce some stunning captures:


Streams of water captured back lit against a white background illuminated by two speed lights.

The background for this shot (above) is an A1 sized sheet of white foam board illuminated by a pair of SB910s.  The internal reflector angle is set to 35mm and the two speed lights are placed on stands about three feet from the background, just out of shot left and right, and aimed pretty much at the center of the board to facilitate a fairly even spread of light.

The power output settings for both speed lights are set to 1/16th, which gives us a 1/10,000th of a second flash duration.

Switching to tracing paper as a back lit background immediately puts us at a disadvantage in that it'll cut the amount of light we see at the camera. But a back lit background always looks just that little bit better, as it makes your lighting easier to shape and control.

Doubling the speed light count behind the trace background to 4 now gives us guide number power equal to your average studio light – but with full IGBT advantages.
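For the curious, here's a rough sketch of the maths behind that claim.  Flash power adds linearly, and guide number scales with the square root of power thanks to the inverse square law; the GN of 38 below is the commonly quoted SB800 figure (metres, ISO 100, 35mm zoom head) – treat it as illustrative:

```python
import math

def combined_guide_number(gn_single, count):
    # Power adds linearly; guide number scales with the square
    # root of total power (inverse square law).
    return gn_single * math.sqrt(count)

print(combined_guide_number(38, 2))  # ~54 - two speed lights
print(combined_guide_number(38, 4))  # ~76 - four, one stop up on two
```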

Working a little closer to the background than we were with the white board/reflected light method we can very easily generate a smooth white field of 255RGB which will make our liquid splash shots really punchy:


Working about 3 feet from a translucent background illuminated by 4 SB800’s gives us a much flatter white background, especially when deploying a 150mm or 180mm macro lens.

Shot with a 180mm macro lens at ISO 260 and f16 we have bags of depth of field on this shot.

Using 4x SB800s we can dial in the correct background exposure using the flash output power and camera ISO – we want a background that's just on the verge of “blinkies”.  If we overexpose the background too much the light will wrap around the liquid edges too much, washing out the contrast and flaring – that's something that muppet on Adorama TV doesn't tell you!

Taking a few shots holding the glass by the rim gives us a clean foot to the glass, so we can now go and make a nice composite in Photoshop:


Composite of a couple of splash shots and a couple of “clean foot” images….

Happy sodding Valentines day for next year everyone……..yuck, but it’ll sell all day bloomin’ long!

A while ago I posted an entry on this blog about doing splash shots using a method I call “long flash short shutter” HERE.

All the shots on this entry have been taken using the “short flash long shutter” method.

This latter method is the more versatile of the two because it has more effective “motion freezing” power; the former method is speed-limited by the 1/8000th shutter speed – and it's more costly on batteries!

BUT………there’s always one of those isn’t there…?

Short flash long shutter utilises the maximum X-synch speed of the camera.  This is the fastest speed we can use where the sensor is FULLY open, and it's most commonly 1/250th sec.

Sussed the massive potential pitfall yet?

That’s right – AMBIENT LIGHT.

If any ambient light reaches the sensor during our 1/250th sec exposure time then WE WILL GET MOTION BLUR that will visually amount to the same sort of effect as slow synch – a sharp image with underexposed blur trails.

So we need to make sure that the ambient light is low enough to render a totally black frame.

The “long flash short shutter” method works well in conditions of high ambient provided that the action can be frozen in 1/8000th sec.  If your camera only does 1/4000th sec then the method becomes somewhat less useful.

Freezing action depends on a number of things:

  1. Is the subject falling under gravity or rising against it?
  2. How far away is the subject?

A body falling under gravity is doing nearly 8mph after it's fallen 2 feet from a dead start, and a car doing 100mph looks a lot slower when it's 200 yards down the road than it does when it's 20 yards away.

Similarly, if we have a cascade of liquid falling under gravity through the frame of our camera and (to avoid the jug or pouring vessel) the liquid has fallen 6 inches when it enters the top of the frame, and 30 inches when it vacates the bottom of the frame; we have to take a few things into consideration.

  • The liquid is faster at the bottom of the frame than at the top – think Angel Falls – the water pulls itself apart (that’s why the images can look so amazing).
  • If we shoot close with a short lens the speed differential across the frame will be the same BUT the overall speed will be a little more apparent than if we shoot with a longer lens from further away.

An SB910 has a 1/16th power output duration of 1/10000th sec and an SB800 1/10,900th at the same output setting (OEM-quoted values). With a 70mm lens close up this can make a subtle difference in image sharpness, but fit a 180mm and move further away from the subject to maintain composition, and the difference is non-existent.

If you are throwing liquid upwards against gravity, then it’s slowing down, and will eventually stop before falling back under the effects of gravity – quite often, 1/8000th is sufficient to freeze this sort of motion.
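To put some rough numbers on all of the above, here's a little Python sketch using nothing more exotic than v = √(2gh).  The flash duration is the quoted 1/16th-power figure from earlier, and the fall distances are the 6 inch/30 inch frame example:

```python
import math

G = 9.81  # m/s^2

def fall_speed(drop_m):
    """Speed (m/s) after falling drop_m metres from a dead start."""
    return math.sqrt(2 * G * drop_m)

def blur_mm(drop_m, flash_duration_s):
    """Distance (mm) the liquid travels during one flash pop."""
    return fall_speed(drop_m) * flash_duration_s * 1000

# 2 feet (~0.61 m) of fall - the figure quoted in the text:
print(fall_speed(0.61) * 2.237, "mph")   # ~7.7 mph

# Top vs bottom of frame: 6 in (~0.15 m) vs 30 in (~0.76 m) of fall,
# frozen by a 1/10,000 sec burn (1/16th power):
print(blur_mm(0.15, 1 / 10000))  # ~0.17 mm of travel
print(blur_mm(0.76, 1 / 10000))  # ~0.39 mm - over twice the blur
```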

Both “long shutter short flash” and “short shutter long flash” are valid methods, each with their own pluses and minuses; but the method I always recommend people start with is the former “long shutter” method – it’s easier!

When a shot features a glass remember one thing – drinking glasses were invented by a race of photographer-hating beings! Glasses transmit, reflect and refract light through a full 360 degrees and you can really end up chasing your tail trying to find the source of an errant reflection if you don’t go about lighting it in the correct manner.

And if you put liquid in it then things can get a whole lot worse!

I’ll be doing some very specific workshops with Calumet in the near future that will be all about lighting glass and metal, gloss and matte surfaces, so keep your eye open if this sort of thing interests you – IT SHOULD ‘cos it’ll make you a better photographer….!

The simplest “proper” glass lighting method is what we call “bright field illumination” and guess what – that’s the method used in all the above liquid shots.


Glass Photography – Bright Field & Dark Field illumination.

In the image above, I’ve photographed the same glass using the two ancient and venerable methods of glass photography – one is easy, the other a total pain in the ass; guess which is which!

I'm not going to go into this in detail here, that'll be in a later post; but BRIGHT FIELD defines the outline of the glass with DARK lines, and DARK FIELD defines the glass with lines of WHITE or highlight.

If you guessed DARK FIELD is the pain in the bum then you were right – unless you get this absolutely spot on and 100% correct you will see bits of your “studio” reflected in the glass that you didn't even know existed.

The nice thing about studio-style photography is that you have thinking time, without pressure from working with people, animals or weather and a constantly moving sun. You can start to work up a shot and then leave it overnight; when you come back the next day and click the shutter everything is as you left it – unless you've had burglars.

You do develop a habit of needing more “grips” gear – you’ve NEVER got the right bit! But then again it’s far cheaper than the bad habit of tripod accumulation like my friend Malc is afflicted with!

Later Folks!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Auto Focus & Shooting Speed

Firstly, an apology to my blog followers for the weird blog post notification this morning – I had one of those “senior moments” where I confused the Preview button with Publish – DOH!

There is truly no hope………..!  But let’s get on….

The effectiveness of auto focus and its ability to track and follow a moving subject IS INFLUENCED by frame rate.

Why is this, I hear you ask?

Well, it’s simple, and logical if you think about it – where are your AF sensors?

They're in the bottom of your camera's mirror box.

Most folk think that the mirror just sits there, reflecting at 45 degrees all the light that comes through the lens up to the focus screen and viewfinder.  The fact that the mirror is still DOWN when they are using the auto focus leads most people into thinking the AF sensor array is elsewhere – that's if they can be bothered to think about it in the first place.

 

So how does the AF array SEE the scene?

Because the center area of the main mirror is only SEMI-silvered, and in reality light from the lens does actually pass through it.

 


Main mirror of a Nikon D2Xs in the down position.

 

Now I don't recommend you jam a ballpoint pen under your own main mirror, but that's how the mirror was lifted for the next image:

 


Main mirror of a Nikon D2Xs lifted so you can see the secondary mirror.

 

Now there's a really good diagram of the mechanics at http://www.reikan.co.uk/ – makers of FoCal software, and I'll perhaps get my goolies cut off for linking to it, but here it is:

 

This image belongs to Reikan

 

As you can now hopefully understand, light passes through the mirror and is reflected downwards by the secondary mirror into the AF sensor array.

As long as the mirror is DOWN the auto focus sensor array can see – and so do its job.

Unless the MAIN mirror is fully down, the secondary mirror is not in the correct position to send light to the auto focus sensor array – SO GUESS WHAT – that’s right, your AF ain’t working; or at least it’s just guessing.

So how do we go about giving the main mirror more “down time”?  Simply by slowing the frame rate down is how!

When I'm shooting wildlife using a continuous auto focus mode I tend to shoot at 5 frames per second in Continuous LOW (Nikon-speak) and have the Continuous HIGH setting in reserve, set for 9 frames per second.

 

The Scenario Forces Auto Focus Settings Choices

From a photography perspective we are mainly concerned with subjects CROSSING or subjects CLOSING our camera position.

Once focus is acquired on a CROSSING subject (one that’s not changing its distance from the camera) then I might elect to use a faster frame rate as mirror-down-time isn’t so critical.

But subjects that are either CLOSING or CROSSING & CLOSING are far more common; and head on CLOSING subjects are the ones that give our auto focus systems the hardest workout – and show the system failures and short-comings the most.

Consider the focus scale on any lens you happen to have handy – as you focus closer to you the scale divisions get further apart; in other words the lens focus unit has to move further to change from say 10 meters to 5 meters than it does to move from 15 meters to 10 meters – it’s a non-linear scale of change.
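You can see that non-linearity with the simple thin-lens equation.  A real telephoto focuses by moving internal groups rather than racking the whole optic, so the millimetre figures below are purely illustrative – but the shape of the change is the point:

```python
def image_distance_mm(f_mm, subject_m):
    """Thin lens: 1/f = 1/u + 1/v  =>  v = f*u / (u - f)."""
    u = subject_m * 1000.0  # subject distance in mm
    return f_mm * u / (u - f_mm)

f = 500.0  # a 500mm telephoto
print(image_distance_mm(f, 10) - image_distance_mm(f, 15))  # ~9.1 mm of focus travel
print(image_distance_mm(f, 5) - image_distance_mm(f, 10))   # ~29.2 mm - over 3x more
```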

So the closer a subject comes to your camera position the greater is the need for the auto focus sensors to see the subject AND react to its changed position – and yes, by the time it’s acquired focus and is ready to take the next frame the subject is now even closer – and things get very messy!

That’s why high grade dSLR auto focus systems have ‘predictive algorithms’ built into them.

Also, the amount of light on the scene AND the contrast between subject and background ALL affect the ability of the auto focus to do its job.  Even though most prosumer and all pro body systems use phase detection auto focus, contrast between the subject to be tracked and its background does impact the efficiency of the overall system.

A swan against a dark background is a lot easier on the auto focus system than a panther in the jungle or a white-tailed eagle against a towering granite cliff in Norway, but the AF system in most cameras is perfectly capable of acquiring, locking on and tracking any of the above subjects.

So as a basic rule of thumb the more CLOSING a subject is then the LOWER your frame rate needs to be if you are looking for a sharp sequence of shots.  Conversely the more CROSSING a subject is then the higher the frame rate can be and you might still get away with it.

 

Points to Clarify

The mechanical actions of an exposure are:

  1. Mirror lifts
  2. Front shutter curtain falls
  3. Rear shutter curtain falls
  4. Mirror falls closed (down)

Here’s the thing; the individual time taken for each of these actions is the same ALL the time – irrespective of whether the shutter speed is 1/8000th sec or 8 sec; it’s the gap in between 2. & 3. that makes the difference.

And it’s the ONLY thing shutter-related we’ve got any control over.

So one full exposure takes t1 + t2 + shutter speed + t3 + t4, and the gap between t4 and the repeat of t1 on the next frame is what gives us our mirror down time between shots for any given frame rate.  It's this time gap between t4 and the repeat of t1 that we lengthen by dropping the shooting speed frame rate.
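Here's that arithmetic as a rough Python illustration.  The ~15ms lumped together for t1 to t4 is an assumed figure purely for the sketch – real bodies vary – but it shows how dramatically mirror down time grows as you back off the frame rate:

```python
def mirror_down_ms(fps, shutter_s, mech_s=0.015):
    """Rough mirror-down time per frame at a given frame rate.

    mech_s lumps together t1..t4 (mirror up, both curtains,
    mirror down) - an ASSUMED ~15 ms total, for illustration only.
    """
    frame_period = 1.0 / fps
    return (frame_period - mech_s - shutter_s) * 1000

for fps in (5, 9, 11):
    print(fps, "fps ->", round(mirror_down_ms(fps, 1 / 1000), 1), "ms for the AF to see")
# 5 fps -> 184.0 ms, 9 fps -> 95.1 ms, 11 fps -> 74.9 ms
```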

There’s another problem with using 10 or 11 frames per second with Nikon D3/D4 bodies.

10 fps on a D3 LOCKS the exposure to the values/settings of the first frame in the burst.

11 fps on a D3 LOCKS both exposure AND auto focus to the values/settings of the first frame in the burst.

11 fps on a D4 LOCKS both exposure AND auto focus* to those of the first frame in the burst – and it’s one heck of a burst to shoot where all the shots can be out of focus (and badly exposed) except the first one!

*Page 112 of the D4 manual says that at 11fps the second and subsequent shots in a burst may not be in focus or exposed correctly.

That’s Nikon-speak for “If you are photographing a statue or a parked car ALL your shots will be sharp and exposed the same; but don’t try shooting anything that’s getting closer to the camera, and don’t try shooting things where the frame exposure value changes”.

 

There's a really cool video of 11 fps slowed right down with 5000fps slo-mo HERE – but for Christ's sake turn your volume down, because the soundtrack is some Marlene Dietrich wannabe!

So if you want to shoot action sequences that are sharp from the first frame to the last then remember – DON’T be greedy – SLOW DOWN!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Lens Performance

I have a friend – yes, a strange concept I know, but I do have some – we’ll call him Steve.

Steve is a very talented photographer – when he’ll give himself half a chance; but impatience can sometimes get the better of him.

He’ll have a great scene in front of him but then he’ll forget things such as any focus or exposure considerations the scene demands, and the resulting image will be crap!

Quite often, a few of Steve’s character flaws begin to emerge at this juncture.

Firstly, Steve only remembers his successes; this leads to the unassailable ‘fact’ that he couldn’t possibly have ‘screwed up’.

So now we can all guess the conclusive outcome of that scenario can’t we……..that’s right; his camera gear has fallen short in the performance department.

Clairvoyance department would actually be more accurate!

So this ‘error in his camera system’ needs to be stamped on – hard and fast!

This leads to Steve embarking on a massive information-gathering exercise from various learned sources on ‘that there inter web’ – where another of Steve’s flaws shows up; that of disjointed speed reading…..

The terrifying outcome of these situations usually concludes with Steve’s confident affirmation that some piece of his equipment has let him down; not just by becoming faulty but sometimes, more worryingly by initial design.

These conclusions are always arrived at in the same manner – the various little snippets of truth and random dis-associated facts that Steve gathers, all get forcibly hammered into some hellish, bastardized ‘factual’ jigsaw in his head.

There was a time when Steve used to ask me first, but he gave up on that because my usual answer contravened the outcome of his first mentioned character flaw!

Lately one of Steve’s biggest peeves has been the performance of one or two of his various lenses.

Ostensibly you’ll perhaps think there’s nothing wrong in that – after all, the image generated by the camera is only as good as the lens used to gather the light in the scene – isn’t it?

 

But there’s a potential problem, and it  lies in what evidence you base your conclusions on……………

 

For Steve, at present, it’s manufacturers MTF charts, and comparisons thereof, coupled with his own images as they appear in Lightroom or Photoshop ACR.

Again, this might sound like a logical methodology – but it isn’t.

It’s flawed on so many levels.

 

The Image Path from Lens to Sensor

We could think of the path that light travels along in order to get to our camera sensor as a sort of Grand National horse race – a steeplechase for photons!

“They’re under starters orders ladies and gentlemen………………and they’re off!”

As light enters the lens it comes across its first set of hurdles – the various lens elements and element groups that it has to pass through.

Then they arrive at Becher’s Brook – the aperture, where there are many fallers.

Carefully staying clear of the inside rail and being watchful of any loose photons that have unseated their riders at Becher's, we move on over Foinavon – the rear lens elements – and we then arrive at the infamous Canal Turn – the Optical Low Pass filter, also known as the Anti-alias filter.

Crashing on past the low pass filter and on over Valentines, only the bravest photons are left to tackle the last big fence on their journey – The Chair – our camera sensor itself.

 

Okay, I’ll behave myself now, but you get the general idea – any obstacle that lies in the path of light between the front surface of our lens and the photo-voltaic surface of our sensor is a BAD thing.


The various obstacles to light as it passes through a camera (ASIC = Application Specific Integrated Circuit)

The problems are many, but let’s list a few:

  1. Every element reduces the level of transmitted light.
  2. Because the lens elements have curved surfaces, light is refracted or bent; the trick is to make all wavelengths of light refract to the same degree – failure results in either lateral or longitudinal chromatic aberration – or worse still, both.
  3. The aperture causes diffraction – already discussed HERE

We have already seen in that same previous post on Sensor Resolution that the number of megapixels can affect overall image quality in terms of perceived sharpness due to pixel-pitch; so all things considered, using photographs of any 3 dimensional scene is not always a wise method of judging lens performance.

And here is another reason why it's not a good idea – the effect on image quality/perceived lens resolution of the anti-alias (moiré) or optical low pass filter, and any other pre-filtering.

I’m not going to delve into the functional whys and wherefores of an AA filter, save to say that it’s deemed a necessary evil on most sensors, and that it can make your images take on a certain softness because it basically adds blur to every edge in the image projected by the lens onto your sensor.

The reasoning behind it is that it stops ‘moire patterning’ in areas of high frequency repeated detail.  This it does, but what about the areas in the image where its effect is not required – TOUGH!

 

Many photographers have paid service suppliers for AA filter removal just to squeeze the last bit of sharpness out of their sensors, and Nikon of course offer the ‘sort of AA filter-less’ D800E.

Side bar note:  I've always found that with Nikon cameras at least, the pro-body range seem to suffer a lot less from undesirable AA filtration softening than their “amateur” and “semi pro” bodies – most notably the D2X compared to a D200, and the D3 compared to the D700 & D300.  Perhaps this is due to a 'thinner' filter, or a higher quality filter – I don't know, and to be honest I've never had the desire to 'poke Nikon with a sharp stick' in order to find out.

 

Back in the days of film things were really simple – image resolution was governed by just two things; lens resolution and film resolution:

1/image resolution = 1/lens resolution + 1/film resolution

Film resolution was a variable depending on the Ag Halide distribution and structure,  dye coupler efficacy within the film emulsion, and the thickness of the emulsion or tri-pack itself.

But today things are far more complicated.

With digital photography we have all those extra hurdles to jump over that I mentioned earlier, so we end up with a situation whereby:

1/Image Resolution = 1/lens resolution + 1/AA filter resolution + 1/sensor resolution + 1/image processor/imaging ASIC resolution
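Here's a quick sketch of how that reciprocal formula behaves.  The numbers below are purely illustrative – not measured values for any real lens, filter or sensor – but notice that the system result is always worse than its weakest component:

```python
def combined_resolution(*parts_lpmm):
    """1/total = 1/r1 + 1/r2 + ... (resolutions in line pairs per mm)."""
    return 1.0 / sum(1.0 / r for r in parts_lpmm)

# Film era: lens + film.
print(combined_resolution(100, 80))            # ~44.4 lp/mm

# Digital era: lens + AA filter + sensor + imaging ASIC.
print(combined_resolution(100, 150, 90, 200))  # ~30.5 lp/mm
```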

Steve is chasing after lens resolution under the slightly misguided idea that resolution equates to sharpness, which is not strictly true; and he is basing his conception of lens sharpness on the detail content and perceived detail 'sharpness' of his images, which are 'polluted' if you like by the effects of the AA filter, sensor and imaging ASIC.

What it boils down to, in very simplified terms, is this:

You can have one particular lens that, in combination with one camera sensor produces a superb image, but in combination with another sensor produces a not-quite-so-superb image!

On top of the “fixed system” hurdles I’ve outlined above, we must not forget the potential for errors introduced by lens-to-body mount flange inaccuracies, and of course, the big elephant-in-the-room – operator error – ehh Steve.

So attempting to quantify the pure ‘optical performance’ of a lens using your ‘taken images’ is something of a pointless exercise; you cannot see the pure lens sharpness or resolution unless you put the lens on a fully equipped optical test bench – and how many of us have got access to one of those?

The truth of the matter is that the average photographer has to trust the manufacturers to supply accurately put together equipment, and he or she has to assume that all is well inside the box they’ve just purchased from their photographic supplier.

But how can we judge a lens against an assumed standard of perfection before we part with our cash?

A lot of folk, including Steve – look at MTF charts.

 

The MTF Chart

Firstly, MTF stands for Modulation Transfer Function – modu-what I hear you ask!

OK – let's deal with the modulation bit.  Forget colour for a minute and consider yourself living in a black & white world.  Dark objects in a scene reflect few photons of light – 'tis why they appear dark!  Conversely, bright objects reflect loads of the little buggers, hence these objects appear bright.

Imagine now that we are in a sealed room totally impervious to the ingress of any light from outside, and that the room is painted matte white from floor to ceiling – what is the perceived colour of the room? Black is the answer you are looking for!

Now turn on that 2 million candle-power 6500k searchlight in the corner.  The split second before your retinas melted, what was the perceived colour of the room?

Note the use of the word ‘perceived’ – the actual colour never changed!

The luminosity value of every surface in the room changed from black to white/dark to bright – the luminosity values MODULATED.

Now back in reality we can say that a set of alternating black and white lines of equal width and crisp clean edges represent a high degree of contrast, and therefore tonal modulation; and the finer the lines the higher is the modulation frequency – which we measure in lines per millimeter (lpmm).

A lens takes in a scene of these alternating black and white lines and, just like it does with any other scene, projects it into an image circle; in other words it takes what it sees in front of it and ‘transfers’ the scene to the image circle behind it.

With a bit of luck and a fair wind this image circle is being projected sharply into the focal plane of the lens, and hopefully the focal plane matches up perfectly with the plane of the sensor – what used to be referred to as the film plane.

The efficacy with which the lens carries out this ‘transfer’ in terms of maintaining both the contrast ratio of the modulated tones and the spatial separation of the lines is its transfer function.

So now you know what MTF stands for and what it means – good this isn’t it!
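For the geeks, the arithmetic is genuinely simple.  Modulation is just Michelson contrast, (max − min)/(max + min), and the transfer function is the ratio of the modulation the lens delivers to the modulation it was given – a toy example with made-up luminance values:

```python
import numpy as np

def modulation(signal):
    """Michelson contrast: (max - min) / (max + min)."""
    return (signal.max() - signal.min()) / (signal.max() + signal.min())

# A near-perfect black/white line target...
target = np.array([0.05, 0.95] * 50)
# ...and the same lines after the lens has smeared them together a touch.
projected = np.array([0.25, 0.75] * 50)

print(round(modulation(projected) / modulation(target), 2))  # MTF ~ 0.56
```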

 

Let’s look at an MTF chart:


Nikon 500mm f4 MTF chart

Now what does all this mean?

 

Firstly, the vertical axis – this can be regarded as that ‘efficacy’ I mentioned above – the accuracy of tonal contrast and separation reproduction in the projected image; 1.0 would be perfect, and 0 would be crappier than the crappiest version of a crap thing!

The horizontal axis – this requires a bit of brain power! It is scaled in increments of 5 millimeters from the lens axis AT THE FOCAL PLANE.

The terminus value at the right hand end of the axis is unmarked, but equates to 21.63mm – half the opposing corner-to-corner dimension of a 35mm frame.
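That 21.63mm isn't a magic number by the way – it's just Pythagoras applied to the 36mm x 24mm frame:

```python
import math
print(math.hypot(36, 24) / 2)  # 21.633... mm - half the frame diagonal
```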

Now consider the diagram below:


The radial dimensions of the 35mm format.

These are the radial dimensions, in millimeters, of a 35mm format frame (solid black rectangle).

The lens axis passes through the center axis of the sensor, so the radii of the green, yellow and dashed circles correspond to values along the horizontal axis of an MTF chart.

Let’s simplify what we’ve learned about MTF axes:


MTF axes hopefully made simpler!

Now we come to the information data plots; firstly the meaning of Sagittal & Meridional.   From our perspective in this instance I find it easier for folk to think of them as ‘parallel to’ and ‘at right angles to’ the axis of measurement, though strictly speaking Meridional is circular and Sagittal is radial.

This axis of measurement is from the lens/film plane/sensor center to the corner of a 35mm frame – in other words, along that 21.63mm radius.


The axis of MTF measurement and the relative axial orientation of Sagittal & Meridional lines. NOTE: the target lines are ONLY for illustration.

Separate measurements are taken for each modulation frequency along the entire measurement axis:


Thin Meridional MTF measurement. (They should be concentric circles but I can’t draw concentric circles!).

Let's look at that MTF curve for the 500mm f4 Nikon together with a legend of 'sharpness' – the 300mm f2.8:


Nikon MTF comparison between the 500mm f4 & 300mm f2.8

Nikon say on their website that they measure MTF at maximum aperture, that is, wide open; so the 300mm chart is for an aperture of f2.8 (though they don’t say so) and the 500mm is for an f4 aperture – which they do specify on the chart – don’t ask me why ‘cos I’ve no idea.

As we can see, the best transfer values for the two lenses (and all other lenses) are at 10 lines per millimeter, and generally speaking sagittal orientation usually performs slightly better than meridional, but not always.

10 lpmm is always going to give a good transfer value because it's very coarse and represents a lower frequency of detail than 30 lpmm.

Funny thing, 10 lines per millimeter is 5 line pairs per millimeter – and where have we heard that before? HERE – it’s the resolution of the human eye at 25 centimeters.

 

Another interesting thing to bear in mind is that, as the charts clearly show, better transfer values occur closer to the lens axis/sensor center, and that performance falls as you get closer to the frame corners.

This is simply down to the fact that you are getting closer to the inner edge of the image circle (the dotted line in the diagrams above).  If manufacturers made lenses that threw a larger image circle then corner MTF performance would increase – it can be done – that's the basis upon which PCE/TS lenses work.

One way to take advantage of center MTF performance is to use a cropped sensor – I still use my trusty D2Xs for a lot of macro work; not only do I get the benefit of center MTF performance across the majority of the frame but I also have the ability to increase the lens to subject distance and get the composition I want, so my depth of field increases slightly for any given aperture.

Back to the matter at hand, here's my first problem with the likes of Nikon, Canon etc: they don't specify the lens-to-target distance. A lens that gives a transfer value of 90% plus on a target of 10 lpmm sagittal at 2 meters distance is one thing; one that did the same but at 25 meters would be something else again.

You might look at the MTF chart above and think that the 300mm f2.8 lens is poor on a target resolution of  30 lines per millimeter compared to the 500mm, but we need to temper that conclusion with a few facts:

  1. A 300mm lens is a lot wider in Field of View (FoV) than a 500mm so there is a lot more ‘scene width’ being pushed through the lens – detail is ‘less magnified’.
  2. How much ‘less magnified’ –  40% less than at 500mm, and yet the 30 lpmm transfer value is within 6% to 7% that of the 500mm – overall a seemingly much better lens in MTF terms.
  3. The lens is f2.8 – great for letting light in but rubbish for everything else!

Most conventional lenses have one thing in common – their best working aperture for overall image quality is around f8.

But we have to counter balance the above with the lack of aforementioned target distance information.  The minimum focus distances for the two comparison lenses are 2.3 meters and 4.0 meters respectively so obviously we know that the targets are imaged and measured at vastly different distances – but without factual knowledge of the testing distances we cannot really say that one lens is better than the other.

 

My next problem with most manufacturers' MTF charts is that the values are supplied 'a la white light'.

I mentioned earlier – much earlier! – that lens elements refracted light, and the importance of all wavelengths being refracted to the same degree, otherwise we end up with either lateral or longitudinal chromatic aberration – or worse still – both!

Longitudinal CA will give us different focal planes for different colours contained within white light – NOT GOOD!

Lateral CA gives us the same plane of focus but this time we get lateral shifts in the red, green and blue components of the image, as if the 3 colour channels have come out of register – again NOT GOOD!

Both CA types are most commonly seen along defined edges of colour and/or tone, and as such they both affect transferred edge definition and detail.

So why do manufacturers NOT publish this information – there is to my knowledge only one that does – Schneider (read ‘proper lens’).

They produce some very meaningful MTF data for their lenses with modulation frequencies in excess of 90 to 150 lpmm; separate R,G & B curves; spectral weighting variations for different colour temperatures of light and all sorts of other ‘geeky goodies’ – I just love it all!

 

SHAME ON YOU NIKON – and that goes for Canon and Sigma just as much.

 

So you might now be asking WHY they don’t publish the data – they must have it – are they treating us like fools that wouldn’t be able to understand it; OR – are they trying to hide something?

You guys think what you will – I’m not accusing anyone of anything here.

But if they are trying to hide something then that ‘something’ might not be what you guys are thinking.

What would you think if I told you that if you were a lens designer you could produce an MTF plot with a calculator – ‘cos you can, and they do!

So, in a nutshell, most manufacturers' MTF charts as published for us to see are worse than useless.  We can't effectively use them to compare one lens against another because of missing data; we can't get an idea of CA performance because of missing red, green and blue MTF curves; and finally we can't even trust that the bit of data they do impart is even bloody genuine.

Please don’t get taken in by them next time you fancy spending money on glass – take your time and ask around – better still try one; and try it on more than 1 camera body!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the physical dimensions by the pixel dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
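The whole relationship fits in two trivial Python functions, using the numbers above:

```python
def pixel_resolution_ppi(pixels, inches):
    """PPI = pixel dimension / linear dimension."""
    return pixels / inches

def linear_inches(pixels, ppi):
    """And inverted: linear dimension = pixel dimension / PPI."""
    return pixels / ppi

print(pixel_resolution_ppi(3000, 10), pixel_resolution_ppi(2400, 8))  # 300.0 300.0
print(linear_inches(3000, 300), linear_inches(2400, 300))             # 10.0 8.0
```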

 

How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:


The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 square images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 PPI (pixels per inch), or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital work flow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

All RAW file handlers like Lightroom, ACR etc default to a native pixel resolution of 300PPI. * (this 300ppi myth annoys the hell out of me and I'll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.

 

Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:


European Adder – 900 x 599 pixels with a pixel resolution of 300PPI


European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:


The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop monitors etc are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.

Just FYI – when I decide I'm going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER 'down sample' to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:

 

PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:


The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
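Here's the arithmetic behind that, assuming for illustration a driver set to 5760 DPI – drivers and print heads vary, so treat the figure as hypothetical:

```python
def dots_per_pixel_edge(printer_dpi, image_ppi):
    """How many printer dots span one image pixel edge."""
    return printer_dpi / image_ppi

edge = dots_per_pixel_edge(5760, 300)  # hypothetical 5760 DPI driver setting
print(edge)              # 19.2 dots across a single 300 PPI pixel
print(round(edge ** 2))  # ~369 dots to fill it - "nearly 400"
```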

See you in my next post!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Bit Depth

Bit Depth – What is a Bit?

Good question – from a layman’s point of view it’s the smallest USEFUL unit of computer/digital information; useful in the fact that it can have two values – 0 or 1.

Think of it as a light switch; it has two positions – ON and OFF, 1 or 0.


A bit is like a light switch.

We have 1 switch (bit) with 2 potential positions (bit value 0 or 1) so we have a bit depth of 1. We can arrive at this by simple maths – number of switch positions to the power of the number of switches; in other words 2 to the 1st power.

How Does Bit Depth Impact Our Images:

So what would this bit depth of 1 mean in image terms:


An Image with a Bit Depth of 1 bit.

Well, it’s not going to win Wildlife Photographer of the Year is it!

Because each pixel in the image can only be black or white, on or off, 0 or 1 then we only have two tones we can use to describe the entire image.

Now if we were to add another bit to the overall bit depth of the image we would have 2 switches (bits), each with 2 potential values, giving 2 to the 2nd power – 4 potential output values/tones.


An image with a bit depth of 2 bits.

Not brilliant – but it’s getting there!

If we now double the bit depth again, this time to 4 bit, then we have 2 to the 4th, or 16 potential tones or output values per image pixel:


A bit depth of 4 bits gives us 16 tonal values.

And if we double the bit depth again, up to 8 bit we will end up with 2 to the 8th power, or 256 tonal values for each image pixel:


A bit depth of 8 bits yields what the eye perceives to be continuous unbroken tone.

This range of 256 tones (0 to 255) is the smallest number of tonal values that the human eye can perceive as being continuous in nature; therefore we see an unbroken range of greys from black to white.

More Bits is GOOD

Why do we need to use bit depths HIGHER than 8 bit?

Our modern digital cameras capture and store RAW images to a bit depth of 12 bit, and now in most cases 14 bit – 4096 & 16,384 tonal values respectively.
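The doubling is easier to appreciate as a table – one loop of Python:

```python
for bits in (1, 2, 4, 8, 12, 14, 16):
    print(f"{bits:>2} bit -> {2 ** bits:,} tonal values")
```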

Just as we use the ProPhotoRGB colour space to preserve as many CAPTURED COLOURS as we can, we need to apply a bit depth to our pixel-based images that is higher than the capture depth in order to preserve the CAPTURED TONAL RANGE.

It's the “bigger bucket” or “more stairs on the staircase” scenario all over again – more information about a pixel's brightness and colour is GOOD.


How Tonal Graduation Increases with Bit Depth.

Black is black, and white is white, but increased bit depth gives us a higher number of steps/tones; tonal graduations, to get from black to white and vice versa.

So, if our camera captures at 14 bit we need a 15 bit or 16 bit “bucket” to keep it in.  And for those who want to know why a 14 bit bucket ISN’T a good idea then try carrying 2 gallons of water in a 2 gallon bucket without spillage!

The 8 bit Image Killer

Below we have two identical grey scale images open in Photoshop – simple graduations from black to white; one is a 16 bit image, the other 8 bit:


16 bit greyscale at the top. 8 bit greyscale below – CLICK Image to view full size.

Now everything looks OK at this “fit to screen” magnification; and it doesn’t look so bad at 1:1 either, but let’s increase the magnification to 1600% so we can see every pixel:

 


CLICK Image to view full size. At 1600% magnification we can see that the 8 bit file is degraded.

At this degree of magnification we can see a huge amount of image degradation in the lower, 8 bit image whereas the upper, 16 bit image looks tonally smooth in its graduation.

The degradation in the 8 bit image is simply due to the fact that the total number of tones is “capped” at 256, and 256 steps to get from the black to the white values of the image are not sufficient – this leaves gaps in the image that Photoshop has to fill with “invented” tonal information based on its own internal “logic”….mmmmmm….

There was a time when I thought “girlies” were the most illogical things on the planet; but since Photoshop, now I’m not so sure…!

The image is a GREYSCALE – RGB ratios are supposedly equal in every pixel, but as you can see, Photoshop begins to skew the ratios where it has to do its “inventing” so we not only have luminosity artifacts, but we have colour artifacts being generated too.

You might look upon this as “pixel peeping” and “geekey”, but when it comes to image quality, being a pixel-peeping Geek is never a bad thing.

Of course, we all know 8bit as being “jpeg”, and these artifacts won’t show up on a web-based jpeg for your website; but if you are in the business of large scale gallery prints, then printing from an 8 bit image file is never going to be a good idea as these artifacts WILL show on the final print.
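You can reproduce the 'gap' problem numerically.  Here's a minimal numpy sketch that squeezes a smooth high-bit gradient down to the 256 levels of 8 bit – exactly what happens when you convert for jpeg:

```python
import numpy as np

# A smooth black-to-white ramp with plenty of tonal headroom...
ramp_16bit = np.linspace(0, 65535, 4096).astype(np.uint16)

# ...and the same ramp squeezed into 8 bit's 256 available levels.
ramp_8bit = (ramp_16bit // 257).astype(np.uint8)

print(len(np.unique(ramp_16bit)))  # 4096 distinct tones - smooth graduation
print(len(np.unique(ramp_8bit)))   # 256 distinct tones - the steps band up
```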

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Lightroom Tutorials #2

 


Image Processing in Lightroom & Photoshop

 

In this Lightroom tutorial preview I take a close look at the newly evolved Clone/Heal tool and dust spot removal in Lightroom 5.

This newly improved tool is simple to use and highly effective – a vast improvement over the great tool that it was already in Lightroom 4.

 

Lightroom Tutorials sample video link below (video will open in a new window):

 

https://vimeo.com/64399887

 

This 4 disc Lightroom Tutorials DVD set is available from my website at http://wildlifeinpixels.net/dvd.html

Colour Space & Profiles


From Camera to Print
copyright 2013 Andy Astbury/Wildlife in Pixels

Colour space and device profiles seem to cause a certain degree of confusion for a lot of people; and a feeling of dread, panic and total fear in others!

The reality of colour spaces and device profiles is that they are really simple things, and that how and why we use them in a colour managed work flow is perfectly logical and easy to understand.

Up to a point colour spaces and device profiles are one and the same thing – they define a certain “volume” of colours from red to green to blue, and from black to white – and all the colours that lie in between those five points.

The colour spaces that most photographers are by now familiar with are ProPhotoRGB, AdobeRGB(1998) and sRGB – these are classed as “working colour spaces” and are standards of colour set by the International Color Consortium, or ICC; and they all have one thing in common; where red, green and blue are present in equal amounts the colour produced will be NEUTRAL.

The only real differences between these three working colour spaces is the “distances” between the five set points of red, green, blue, black and white.  The greater the distance between the three primary colours then the greater is the degree of graduation between them, hence the greater the number of potential colours.  In the diagram below we can see the sRGB & ProPhoto working colour spaces displayed on the same axes:


The sRGB & ProPhoto colour spaces. The larger volume of ProPhoto contains more colour variety between red, green & blue than sRGB.

If we were to mark five different points on the surface of a partially inflated balloon, and then inflate it some more, the points in relation to the balloon's surface would NOT change: the points remain the same.  But the spatial distances between the points would change, as would the internal volume.  It's the same with our five points of colour reference – red, green, blue, black & white – they do NOT change between colour spaces; red is red no matter what the working colour space.  But the range of potential colours between our 5 points of reference increases due to increased colour space volume.

So now we have dealt with the basics of the three main working colour spaces, we need to consider the volume of colour our camera sensor can capture – if you like, its colour space; but I’d rather use the word “gamut”.

Let’s take the Canon 5DMk3 as an example, and look at the volume, or gamut, of colour that its sensor can capture, in direct comparison with our 3 quantifiable working colour spaces:


The Canon 5DMk3 sensor gamut (black) in comparison to ProPhoto (largest), AdobeRGB1998 & sRGB (smallest) working colour spaces.

In a previous blog article I wrote – see here – I mentioned how to set up the colour settings in Photoshop, and this is why.  If you want to keep the greatest proportion of your camera sensor's captured colour then you need to contain the image within the ProPhotoRGB working colour space.  If you don't, and you use AdobeRGB or sRGB as Photoshop's working colour space, then you will lose a certain proportion of those captured colours – as I've heard it put before, it's like a sex change operation: certain colours get chopped off, and once that's happened you can't get them back!

To keep things really simple just think of the 3 standard working colour spaces as buckets – the bigger the bucket, the more colour it contains; and you can’t tip the colours captured by your camera into a smaller bucket without getting spillage and making a mess on the floor!
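For those who like to see things in code, here's a minimal sketch of that “bucket pour” using Pillow's ImageCms module (the LittleCMS bindings).  A recent version of Pillow is assumed, and the file names are purely hypothetical – you would need to supply your own ProPhotoRGB .icc/.icm profile, since LittleCMS only builds sRGB, Lab and XYZ profiles itself:

```python
# Minimal sketch: "pouring" a ProPhoto image into the smaller sRGB bucket.
# File names are hypothetical -- substitute your own image and a real
# ProPhotoRGB profile from your system.
from PIL import Image, ImageCms

prophoto = ImageCms.getOpenProfile("ProPhoto.icm")  # assumed profile path
srgb = ImageCms.createProfile("sRGB")               # built in to LittleCMS

im = Image.open("kingfisher.tif")  # assumed: a ProPhoto-encoded RGB TIFF

# Relative colorimetric intent clips any colour outside the sRGB bucket to
# its nearest in-gamut neighbour -- that clipping is the "spillage".
smaller_bucket = ImageCms.profileToProfile(
    im, prophoto, srgb,
    renderingIntent=ImageCms.Intent.RELATIVE_COLORIMETRIC,
)
smaller_bucket.save("kingfisher_srgb.tif")
```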

As I said before, working colour spaces are neutral; but seldom does our camera ever capture a scene that contains pure neutrals.  Even though an item in the scene may well be neutral in colour, camera sensors quite often skew these colours ever so slightly: most Canon RAW files look a teeny-weeny bit magenta to me when I import them – but then again I'm a Nikon shooter, and Nikon files seem to have a minute greenish tinge to them before processing.

Throughout our imaging work flow we have 3 stages:

1. Input (camera or scanner).

2. Working Process (Lightroom, Photoshop etc).

3. Output (printer for example).

And each stage has its representative type of colour space – we have input profiles, working colour spaces and output profiles.

So we have our camera capture gamut (colour space if you like) and we’ve opened our image in Photoshop or Lightroom in the ProPhoto working colour space – there’s NO SPILLAGE!

We now come to the crux of colour management; before we can do anything else we need to profile our “window onto our image” – the monitor.

In order to see the reality of what the camera captured we need to ensure that our monitor is in line with our WORKING COLOUR SPACE in terms of colour neutrality – not that of the camera as some people seem to think.

All three working colour spaces possess the same degree of colour neutrality – wherever red, green & blue are present at equal values the result is neutral – irrespective of the physical size of the colour space.

So as long as our monitor is profiled to be:

1. Accurately COLOUR NEUTRAL

2. Displaying maximum brightness only in the presence of true white – which you'll hardly ever photograph; even snow isn't white.

then we will see a highly workable representation of image colour neutrality and luminosity on our monitor.  Only by working this way can we actually tell if the camera has captured the image correctly in terms of colour balance and overall exposure.

And the fact that our monitor CANNOT display all the colours contained within our big ProPhoto bucket is, to all intents and purposes, a fairly moot point; though seeing as many of them as possible is never a bad thing.

And using a monitor that does NOT display a volume of colour approximating or exceeding that of the Adobe working space can be highly detrimental, for the reasons discussed in my previous post.

Now that we’ve covered input profiles and working colour spaces we need to move on and outline the basics of output profiles, and printer profiles in particular.

colour space, profile, print profile

Adobe & sRGB working spaces in comparison to the colours contained in the Kingfisher image, and the profile for Permajet Oyster paper using the Epson 7900 printer. (CLICK image for full sized view).

In the image above we can see the Adobe and sRGB working spaces, the full distribution of colours contained in the Kingfisher image – a TIFF file sitting in our big ProPhoto bucket of colour – and a black trace, which is the colour profile (or space, if you like) for Permajet Oyster paper using Epson UltraChrome HDR ink on an Epson 7900 printer.

As we can see, some of the colours contained in the image fall outside the gamut of the sRGB working colour space – notably some oranges and “electric blues”, which are key colours of the subject and the most critical to keep in the print.

However, all those ProPhoto colours are capable of being reproduced on the Epson 7900 using Permajet Oyster paper because, as the black trace shows, the printer/ink/paper combination can reproduce colours that lie outside of the Adobe working colour space.

The whole purpose of that particular profile is to ensure that the print matches what we can see on the monitor both in terms of colour and brightness – in other words, what we see is what we get – WYSIWYG!

The beauty of a colour managed workflow is that it's economical – assuming the image is processed correctly, printing via an accurate printer profile can give you a perfect printed rendition of your screen image using just a single sheet of paper – and only one sheet's worth of ink.

colour space, colour profile

The difference between colour profiles for the same printer paper on different printers. Epson 3000 printer profile trace in Red (CLICK image for full size view).

If we were to switch printers to an Epson 3000 using UltraChrome K3 ink on the very same paper, the area circled in white shows us that a couple of orange hues are a little problematic – they lie either close to or outside the colour gamut of this printer/ink/paper combination, and so they need to be changed in order to ‘fit’, either by localised adjustment or by varying the rendering intent – but that's a story for later!

Why is it different?  Well, it's certainly not the paper, so it's down to either the ink change or the print head.  Using the same K3 ink in an Epson 4800 brings the colours back into gamut, so the difference lies in the print head itself or the printer driver – but as I said, it's a small problem and easily fixed.
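If you'd like to see for yourself which colours fall outside a given printer/ink/paper gamut, here's a hedged sketch of a soft proof with a gamut warning, again using Pillow's ImageCms.  A recent Pillow is assumed, and every profile file name below is a hypothetical stand-in for your own working space, monitor and printer/paper profiles:

```python
# Sketch of a soft proof with a gamut warning: view the image on the monitor
# while simulating the printer/ink/paper profile. File names are hypothetical.
from PIL import Image, ImageCms

working = ImageCms.getOpenProfile("ProPhoto.icm")          # image's own space
monitor = ImageCms.getOpenProfile("my_monitor.icc")        # assumed
printer = ImageCms.getOpenProfile("Epson3000_Oyster.icc")  # assumed

proof = ImageCms.buildProofTransform(
    inputProfile=working,   # the colour space the image lives in
    outputProfile=monitor,  # what we actually view the result on...
    proofProfile=printer,   # ...while simulating the printer/paper/ink combo
    inMode="RGB",
    outMode="RGB",
    renderingIntent=ImageCms.Intent.RELATIVE_COLORIMETRIC,
    flags=ImageCms.Flags.SOFTPROOFING | ImageCms.Flags.GAMUTCHECK,
)

im = Image.open("kingfisher.tif")          # assumed RGB image
ImageCms.applyTransform(im, proof).show()  # out-of-gamut areas get flagged
```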

When you consider the low cost of achieving an accurate monitor profile – see this previous post – combine that with an accurate printer output profile or two to match your chosen printer papers, and then deploy these assets correctly, you have a proper colour managed workflow.  Add to that the cost savings in ink and paper, and it becomes a bit of a “no-brainer”, doesn't it?

In this post I set out to hopefully ‘demystify’ colour spaces and profiles in terms of what they are and how they are used – I hope I’ve succeeded!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Monitor Calibration with ColorMunki

Monitor Calibration with ColorMunki Photo

Following on from my previous posts on the subject of monitor calibration I thought I’d post a fully detailed set of instructions, just to make sure we’re all “singing from the same hymn sheet” so to speak.

Basic Setup

_D4R7794

Put the ColorMunki spectrophotometer into the cover/holder and attach the USB cable.

_D4R7798

Always keep the sliding dust cover closed when storing the ColorMunki in its holder – this prevents dust ingress, which will affect the device performance.

BUT REMEMBER – slide the cover out of the way before you begin the calibration process!

colormunkiSpecCover

Install the ColorMunki software on your machine, register it via the internet, then check for any available updates.

Once the software is fully installed and working you are ready to begin.

Plug the USB cable into an empty USB port on your computer – NOT an external hub port as this can sometimes cause device/system communication problems.

Launch the ColorMunki software.

The VERY FIRST THING YOU NEED TO DO is open the ColorMunki software preferences and ensure they match the following screen:

PC: File > Preferences

Mac: ColorMunki Photo > Preferences

Screen Shot 2013-10-17 at 11.28.32

The value for the Tone Response Curve MUST be set to 2.2 which is the default value.

The ICC Profile Version number MUST be set to v2 for best results – this is NOT the default.

Ensure the two check boxes are “ticked”.**

** These settings can be something of a contentious issue. DDC & LUT check boxes should only be “ticked” if your Monitor/Graphics card combination offers support for these modes.

If you find these settings make your monitor become excessively dark once profiling has been completed, start again ensuring BOTH check boxes are “unticked”.

Untick both boxes if you are working on an iMac or laptop as for the most part these devices support neither function.

For more information on this, a good starting point is a page on the X-Rite website available on the link below:

http://xritephoto.com/ph_product_overview.aspx?ID=1115&Action=Support&SupportID=5561

If you are going to use the ColorMunki to make printer profiles then ensure the ICC Profile Version is set to v2.

By default the ColorMunki writes profiles in ICC v4 format.  Not all operating systems handle v4 profiles correctly where graphics colour is concerned, but they can all function perfectly using ICC v2.
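If you ever want to check which version a given profile actually is, the ICC header makes it trivial – byte 8 of any .icc/.icm file holds the major version number.  A tiny pure-Python check (the file name is hypothetical):

```python
# Byte 8 of the ICC profile header holds the major version number, so
# checking for v2 vs v4 needs nothing more than reading the file itself.
def icc_major_version(path):
    with open(path, "rb") as f:
        return f.read(12)[8]  # 2 for an ICC v2 profile, 4 for v4

print(icc_major_version("MyMonitorProfile.icc"))  # hypothetical file name
```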

You should only need to do this operation once, but any updates from X-Rite, or a re-installation of the software will require you to revisit the preferences panel just to check all is well.

Once this panel is set as above Click OK and you are ready to begin.

 

Monitor Calibration

This is the main ColorMunki GUI, or graphic user interface:

Screen Shot 2013-10-17 at 12.32.58

Click Profile My Display

Screen Shot 2013-10-17 at 11.17.49

Select the display you want to profile.

I use what is called a “double desktop” and have two monitors running side by side; if you have just a single monitor connected then that will be the only display you see listed.

Click Next>.

Screen Shot 2013-10-17 at 11.18.18

Select the type of display – we are talking here about monitor calibration of a screen attached to a PC or Mac so select LCD.

Laptops – it never hurts a laptop to be calibrated for luminance and colour, but in most cases the graphics output LUT (colour Look Up Table) is barely 8 bit to begin with; the calibration process will usually reduce that to less than 8 bit. This will normally result in the laptop screen colour range being reduced in size and you may well see “virtual” colour banding in your images.

Remedy: DON’T PROCESS ON A LAPTOP – otherwise “me and the boys” will be paying you a visit!
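If you want a feel for why reduced LUT precision shows up as banding, this little sketch simulates a smooth greyscale ramp being pushed through a hypothetical 6-bit pipeline:

```python
# A 0-255 greyscale ramp squeezed through a simulated 6-bit pipeline:
# 256 smooth input levels collapse into 64 coarse steps -- visible banding.
ramp = list(range(256))
six_bit = [(v >> 2) << 2 for v in ramp]  # drop the two least significant bits
print(len(set(ramp)), "input levels ->", len(set(six_bit)), "output levels")
# 256 input levels -> 64 output levels
```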

Select Advanced.

Deselect the ambient light measurement option – it can be expensive to set yourself up with proper lighting in order to have an ICC standard viewing/processing environment; daylight (D65) bulbs are fairly cheap and go a long way towards helping, but the correct amount of light, the colour of the walls and ceiling, and the exclusion of extraneous light sources of incorrect colour temperature (e.g. windows) can prove somewhat more problematic and costly.

Processing in a darkened room, with extraneous light excluded, is by far the easiest and cheapest way of obtaining correct working conditions.

Set the Luminance target value to 120 (that's 120 candelas per square metre, if you're interested!).

Set the Target White Point to D65 (that's 6500 Kelvin – average daylight).

Click Next>.

Screen Shot 2013-10-17 at 11.19.44

With the ColorMunki connected to your system this is the screen you will be greeted with.

You need to calibrate the device itself, so follow the illustration and rotate the ColorMunki dial to the indicated position.

Once the device has calibrated itself to its internal calibration tile you will see the displayed GUI change to:

Screen Shot 2013-10-17 at 11.20.26

Follow the illustration and return the ColorMunki dial to its measuring position.

Screen Shot 2013-10-17 at 11.20.49

Click Next>.

Screen Shot 2013-10-17 at 11.21.11

With the ColorMunki in its holder and with the spectrophotometer cover OPEN for measurement, place the ColorMunki on the monitor as indicated on screen and in the image below:

XR-CLRMNK-01

We are now ready to begin the monitor calibration.

Click Next>.

The first thing the ColorMunki does is measure the luminosity of the screen.  If you get a manual adjustment prompt such as this (it indicates that the DDC preferences option is unsupported or disabled):

ColorMunki-Photo-display-screen-111

Simply adjust the monitor brightness slowly until the indicator line is level with the central datum line; a “tick” will suddenly appear when your adjustments reach the luminance value of 120.

LCDs are notoriously slow to respond to changes in “backlight brightness” so make an adjustment and give the monitor a few seconds to settle down.

You may have to access your monitor controls via the screen OSD menu, or on Mac via the System Preferences > Display menu.

Once the Brightness/Luminance of the monitor is set correctly, ColorMunki will proceed with its monitor output colour measurements.

To help you understand monitor calibration and what is going on, here is a sequence of slides from one of my workshops on colour management:

moncal1

moncal2

moncal3

moncal4

Once the measurements are complete the GUI will return to the screen in this form.

Screen Shot 2013-10-17 at 11.26.29

Either use the default profile name, or one of your own choice and click Save.

NOTE: Under NO CIRCUMSTANCES should you rename the profile after it has been saved – or any other .icc profile for that matter – otherwise the profile will not work.

Click Next>.

Screen Shot 2013-10-17 at 11.27.00

Click Save again to commit the new monitor profile to your operating system as the default monitor profile.

You can set the profile reminder interval from the drop down menu.

Click Next>.

Screen Shot 2013-10-17 at 12.32.58

Monitor calibration is now complete and you are now back to the ColorMunki startup GUI.

Quit or Exit the ColorMunki application – you are done!

Please consider supporting this blog.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Screen Capture logos denoting ColorMunki & X-Rite are the copyright of X-Rite.

Monitor Calibration Devices

Colour management is the simple process of maintaining colour accuracy and consistency between the ACTUAL COLOURS in your image, in terms of Hue, Saturation and Luminosity; and those reproduced on your RGB devices; in this case, displayed on your monitor. Each and every pixel in your image has its very own individual RGB colour values and it is vital to us as photographers that we “SEE” these values accurately displayed on our monitors.

If we were to visit The National Gallery and gaze upon Turner's “Fighting Temeraire” we would see all those sumptuous colours on the canvas just as J.M.W. intended; but could we see the same colours if we had a pair of Ray-Bans on?

No, we couldn't, because the sunglasses behave as colour filters and so add a “tint” to every colour of light that passes through them.

What you need to understand about your monitor is that it behaves like a filter between your eyes and the recorded colours in your image; and unless that “filter” is 100% neutral in colour, then it will indeed “tint” your displayed image.

So, the first effect of monitor calibration is that the process NEUTRALIZES any colour tint in the monitor display and so shows us the “real colours” in our images; the correct values of Hue and Saturation.
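As a purely conceptual sketch – the numbers are invented, and a real profile applies a full tone curve per channel rather than a single gain – this is the arithmetic behind that neutralisation:

```python
# Invented numbers: a monitor asked to show neutral grey (128,128,128)
# actually measures slightly warm. Per-channel gains pull it back to neutral.
measured = {"R": 131.0, "G": 128.0, "B": 122.0}  # hypothetical measurement
target = sum(measured.values()) / 3              # the neutral we asked for

gains = {ch: target / value for ch, value in measured.items()}
corrected = {ch: measured[ch] * gains[ch] for ch in measured}

print({ch: round(g, 3) for ch, g in gains.items()})  # R gain < 1, B gain > 1
print(corrected)  # all three channels now equal -> the tint is gone
```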

Now imagine we have an old fashioned Kodak Ektachrome colour slide sitting in a projector. If we have the correct wattage bulb in the projector we will see the correct LUMINOSITY of the slide when it is projected.

But if the bulb wattage is too high then the slide will project too brightly, and if the bulb wattage is too low then the projected image will not be bright enough.

All our monitors behave just like a projector, and as such they all have a brightness adjustment which we can directly correlate to our old fashioned slide projector bulb; this brightness, or backlight, control is another aspect of monitor calibration.

Have you ever made a print that came out DARKER than the image displayed on screen?

If you have then your monitor backlight is too bright!

And so, the second effect of monitor calibration is the setting of the correct level of brightness or back lighting of our monitor in order for us to see the true Luminosity of the pixels in our images.

Without accurate Monitor Calibration your ability to control the accuracy of colour and overall brightness of your images is severely limited.

I get asked all the time “what's the best monitor calibration device to use?” – so above is a short video (no sound) I've made showing the 3D and 2D plots of profiles I've just made for the same monitor using two different monitor calibration devices/spectrophotometers from opposite ends of the pricing scale.

The first plot you see, in black, is the AdobeRGB1998 working colour space – this is only shown as a standard by which you can judge the other two profiles; think of those as the monitor's working colour spaces, if you like.

The yellow plot that shows up as an overlay is a profile done with an X-Rite ColorMunki Photo, which usually retails for around £300 – and it clearly shows this particular monitor rendering a greater number of colours in certain areas than are contained in the Adobe1998 reference space.

The cyan plot is the same monitor, but profiled with the i1Photo Pro 2 spectro – not much change out of £1300, thank you very much – and the resulting profile is virtually an identical twin of the one obtained with the ColorMunki, which retails for a quarter of the price!

Don’t get me wrong, the i1 is a far more efficient monitor calibration device if you want to produce custom PRINTER profiles as well, but if you are happy using OEM profiles and just want perfect monitor calibration then I’d say the ColorMunki Photo is the more sensible purchase; or better still the ColorMunki Display at only around £110.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.