Photoshop CC Update

Installing a new Photoshop CC update is supposed to be a simple matter of clicking a button and the job gets done.

This morning both my Mac systems were telling me to update from v14.1.2 to v14.2.

I have two Macs, a late 2012 iMac and a mid 2009 Mac Pro.  The Mac Pro used to run Snow Leopard but was upgraded to Mountain Lion because of Lightroom 5 dropping Snow Leopard support.

Now I never have any problems with Cloud Updates from Adobe on the iMac, but sometimes the Mac Pro can do some strange things – and this morning was no exception!

The update installed on the iMac without a hitch, but when the update completed on the Mac Pro I was greeted with a message telling me that some components had not installed correctly.  On opening Photoshop CC I found that the version had rolled back to v14.0, and that hitting UPDATE in both the app and my CC control panel simply informed me that my software was up to date and no updates were available!

So I just thought I’d do a blog entry on what to do if this ever happens to you!

 

Remove Photoshop CC

The first thing to do is UNINSTALL Photoshop CC with the supplied uninstaller.

You’ll find this in the main Photoshop CC root directory:

Locate the Photoshop CC Uninstaller.

Take my advice and put a tick in the check box to “Remove Preferences” – the Photoshop preferences file can be a royal pain in the ass sometimes, so dump it – a new one will get written as soon as you fire Photoshop up after the new install.

Click UNINSTALL.

Once this action is complete YOU MUST RESTART THE MACHINE.

 

After the restart wait for the Creative Cloud to connect then open your CC control panel.

Under the Apps tab you’ll see that Photoshop CC is no longer listed.

Scroll down past all the apps Adobe have listed and you’ll come to Photoshop CC;  it’ll have an INSTALL button next to it – click the install button:

Install Photoshop CC from the Cloud control panel.

If you are installing the 14.1.2 to 14.2 update (the current one as of today’s date) you might find a couple of long ‘stuck bits’ during the installation process – notably between 1 and 20% and a long one at 90% – just let the machine do its thing.

When the update is complete I’d recommend you do a restart – it might not be necessary, but I do it anyway.

Once the machine has restarted fire up Photoshop, click on ‘About Photoshop’ and you should see:

Photoshop “about screen” showing version number.

Because we dumped the preferences file we need to go and change the defaults for best working practice:

Preferences Interface tab.

If you want to change the BG colour then do it here.

Next, click File Handling:

File handling tab in Photoshop Preferences.

Remove the tick from the SAVE IN BACKGROUND check box – like the person who put it there, you too might think background auto-save is a good idea – IT ISN’T – think about it!

Finally, go to Performance:

Photoshop preferences Performance tab.

and change the Scratch Disk to somewhere other than your system drive if you have additional internal drives fitted.  If you only have one internal drive then leave it “as is”.  You ‘could’ use an external drive as a scratch disk, but to be honest it really does need to be a fast drive over a fast connection – USB 2 to an old 250Gb portable isn’t really going to cut it!

You can go and check your Colour Settings, though these should not have changed – assuming you had ’em set right in the first place!

Here’s what they SHOULD look like:

Photoshop PROPER COLOUR SETTINGS!

That’s it – you’re done!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.

 

Accurate Camera Colour within Lightroom

Obtaining accurate camera colour within Lightroom 5 – in other words, making the pics in your Lr Library look like they did on the back of the camera – is a problem that I’m asked about more and more since the advent of Lightroom 5 AND the latest camera marks – especially Nikon!

UPDATE NOTE: Please feel free to read this post THEN go HERE for a further post on achieving image NEUTRALITY in Lightroom 6/CC 2015

Does this problem look familiar?

Back of the camera (left) to Lightroom (right) – click to enlarge.

The image looks fine (left) on the back of the camera, fine in the import dialogue box, and fine in the library module grid view UNTIL the previews have been created – then it looks like the image on the right.

I hear complaints that the colours are too saturated and the contrast has gone through the roof, the exposure has gone down etc etc.

All the visual descriptions are correct, but what’s responsible for the changes is mostly down to a shift in contrast.

Let’s have a closer look at the problem:

Back of the camera (left) to Lightroom (right) – click to enlarge.

The increase in contrast has resulted in “choking” of the shadow detail under the wing of the Red Kite, loss of tonal separation in the darker mid tones, and a slight increase in the apparent luminance noise level – especially in that out-of-focus blue sky.

And of course, the other big side effect is an apparent increase in saturation.

You should all be aware of my saying that “Contrast Be Thine Enemy” by now – so we’re hardly getting off to a good start with a situation like this, are we…

So how do we go about obtaining accurate camera colour within Lightroom?

Firstly, we need to understand just what’s going on inside the camera with regard to various settings, and what happens to those settings when we import the image into Lightroom.

Camera Settings & RAW files

Let’s consider all the various settings with regard to image control that we have in our cameras:

  • White Balance
  • Active D lighting
  • Picture Control – scene settings, sharpening etc:
  • Colour Space
  • Distortion Control
  • Vignette Control
  • High ISO NR
  • Focus Point/Group
  • Uncle Tom Cobbly & all…………..

All these are brought to bear to give us the post-view jpeg on the back of the camera.

And let’s not forget

  • Exif
  • IPTC

That post-view/review jpeg IS subjected to all the above image control settings, and is embedded in the RAW file; and the image control settings are recorded in what is called the raw file “header”.

It’s actually a lot more complex than that, with IFD & MakerNote tags and other “scrummy” tech stuff – see this ‘interesting’ article HERE – but don’t fall asleep!

If we ship the raw file to our camera manufacturer’s RAW file handler software such as Nikon Capture NX then the embedded jpeg and the raw header data form the image preview.

However, to equip Lightroom with the ability to read headers from every digital camera on the planet would be physically impossible, and in my opinion, totally undesirable as it’s a far better raw handler than any proprietary offering from Nikon or Canon et al.

So, in a nutshell, Lightroom – and ACR – bin the embedded jpeg preview and ignore the raw file header, with the exception of white balance, together with Exif & IPTC data.

However, we still need to value the post jpeg on the camera because we use it to decide many things about exposure, DoF, focus point etc – so the impact of the various camera image settings upon that image has to be assessed.

Now here’s the thing about image control settings “in camera”.

For the most part they increase contrast, saturation and vibrancy – and as a consequence can DECREASE apparent DYNAMIC RANGE.  Now I’d rather have total control over the look and feel of my image rather than hand that control over to some poxy bit of cheap post-ASIC circuitry inside my camera.

So my recommendations are always the same – all in-camera ‘picture control’ type settings should be turned OFF; and those that can’t be turned off are set to LOW or NEUTRAL as applicable.

That way, when I view the post jpeg on the back of the camera I’m viewing the very best rendition possible of what the sensor has captured.

And it’s pointless having it any other way because when you’re shooting RAW then both Lightroom and Photoshop ACR ignore them anyway!

Accurate Camera Colour within Lightroom

So how do we obtain accurate camera colour within Lightroom?

We can begin to understand how to achieve accurate camera colour within Lightroom if we look at what happens when we import a raw file; and it’s really simple.

Lightroom needs to be “told” how to interpret the data in the raw file in order to render a viewable preview – let’s not forget folks, a raw file is NOT a visible image, just a matrix full of numbers.

In order to do this seemingly simple job Lightroom uses process version and camera calibration settings that ship inside it, telling it how to do the “initial process” of the image – if you like, it’s a default process setting.

And what do you think the default camera calibration setting is?

The ‘contrasty’ result of the Lightroom Nikon D4 Adobe Standard camera profile.

Lightroom defaults to this displayed “Adobe Standard” camera profile irrespective of the make and model of camera that recorded the raw file.

Importantly, you need to bear in mind that this ‘standard’ profile is camera-specific in its effect: even though the displayed name is the same when handling, say, D800E NEF files as it is when handling 1DX CR2 files, the background functionality is totally different and specific to the make and model of camera.

What it says on the tin is NOT what’s inside – so to speak!

So this “Adobe Standard” has as many differing effects on the overall image look as there are cameras that Lightroom supports – is it ever likely that some of them are a bit crap??!!

Some files, such as the Nikon D800 and Canon 5D3 raws seem to suffer very little if any change – in my experience at any rate – but as a D4 shooter this ‘glitch in the system’ drives me nuts.

But the work-around is so damned easy it’s not worth stressing about:

  1. Bring said image into Lightroom (as above).
  2. Move the image to the DEVELOP module.
  3. Go to the bottom settings panel – Camera Calibration.
  4. Select “Camera Neutral” from the drop-down menu:
    Change camera profile from ‘Adobe Standard’ to ‘Camera Neutral’ – see the difference!

    You can see that I’ve added a -25 contrast adjustment in the basics panel here too – you might not want to do that*

  5. Scoot over to the source panel side of the Lightroom GUI and open up the Presets Panel

    Open Presets Panel (indicated) and click the + sign to create a new preset.

  6. Give the new preset a name, and then check the Process Version and Calibration options (because of the -25 contrast adjustment I’ve added here the Contrast option is ticked).
  7. Click CREATE and the new “camera profile preset” will be stored in the USER PRESETS across ALL your Lightroom 5 catalogs.
  8. The next time you import RAW files you can ADD this preset as a DEVELOP SETTING in the import dialogue box:
    Choose new preset

    Begin the import

  9. Your images will now look like they did on the back of the camera (if you adopt my approach to camera settings at least!).

You can play around with this procedure as much as you like – I have quite a few presets for this “initial process” depending on a number of variables such as light quality and ISO used to name but two criteria (as you can see in the first image at 8. above).
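As an aside, there’s nothing magical inside these presets – a develop preset (and the XMP sidecar Lightroom writes once settings are saved out) simply records the profile choice by name among Adobe’s Camera Raw (crs) settings. Here’s a minimal Python sketch that pulls that value out of a sidecar; the crs:CameraProfile tag is standard Camera Raw XMP, but the filename is only an example, and Lightroom can serialise crs tags as XML elements rather than attributes, so treat this as a sketch rather than a robust parser:

import re
from pathlib import Path

def camera_profile(xmp_path):
    # Lightroom stores the calibration profile by name in the Camera Raw namespace.
    text = Path(xmp_path).read_text(encoding="utf-8")
    match = re.search(r'crs:CameraProfile="([^"]+)"', text)
    return match.group(1) if match else None

# Prints "Camera Neutral" for a raw file processed with the preset above.
print(camera_profile("DSC_0001.xmp"))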

The big thing I need you to understand is that the camera profile in the Camera Calibration panel of Lightroom acts merely as Lightroom’s own internal guide to the initial process settings it needs to apply to the raw file when generating its library module previews.

There’s nothing complicated, mysterious or sinister going on, and no changes are being made to your raw images – there’s nothing to change.

In fact, I don’t even bother switching to Camera Neutral half the time; I just do a rough initial process in the Develop module to negate the contrast in the image, and perhaps noise if I’ve been cranking the ISO a bit – then save that out as a preset.

Then again, there are occasions when I find switching to Camera Neutral is all that’s needed – shooting low ISO wide angle landscapes when I’m using the full extent of the sensor’s dynamic range springs to mind.

But at least now you’ve got shots within your Lightroom library that look like they did on the back of the camera, and you haven’t got to start undoing the mess it’s made on import before you get on with the proper task at hand – processing – and keeping that contrast under control.

Some twat on a forum somewhere slagged this post off the other day saying that I was misleading folk into thinking that the shot on the back of the camera was “neutral” – WHAT A PRICK…………

All we are trying to do here is to make the image previews in Lr5 look like they did on the back of the camera – after all, it is this BACK OF CAMERA image that made us happy with the shot in the first place.

And by ‘neutralising’ the in-camera sharpening and colour/contrast picture control ramping, the crappy ‘in camera’ jpeg is the best rendition we have of what the sensor saw while the shutter was open.

Yes, we are going to process the image and make it look even better, so our Lr5 preview starting point is somewhat irrelevant in the long run; but a lot of folk freak-out because Lr5 can make some really bad changes to the look of their images before they start.  All we are doing in this article is stopping Lr5 from making those unwanted changes.


Lens Performance

I have a friend – yes, a strange concept I know, but I do have some – we’ll call him Steve.

Steve is a very talented photographer – when he’ll give himself half a chance; but impatience can sometimes get the better of him.

He’ll have a great scene in front of him but then he’ll forget things such as any focus or exposure considerations the scene demands, and the resulting image will be crap!

Quite often, a few of Steve’s character flaws begin to emerge at this juncture.

Firstly, Steve only remembers his successes; this leads to the unassailable ‘fact’ that he couldn’t possibly have ‘screwed up’.

So now we can all guess the conclusive outcome of that scenario can’t we……..that’s right; his camera gear has fallen short in the performance department.

Clairvoyance department would actually be more accurate!

So this ‘error in his camera system’ needs to be stamped on – hard and fast!

This leads to Steve embarking on a massive information-gathering exercise from various learned sources on ‘that there inter web’ – where another of Steve’s flaws shows up; that of disjointed speed reading…..

The terrifying outcome of these situations usually concludes with Steve’s confident affirmation that some piece of his equipment has let him down; not just by becoming faulty but sometimes, more worryingly by initial design.

These conclusions are always arrived at in the same manner – the various little snippets of truth and random dis-associated facts that Steve gathers, all get forcibly hammered into some hellish, bastardized ‘factual’ jigsaw in his head.

There was a time when Steve used to ask me first, but he gave up on that because my usual answer contravened the outcome of his first mentioned character flaw!

Lately one of Steve’s biggest peeves has been the performance of one or two of his various lenses.

Ostensibly you’ll perhaps think there’s nothing wrong in that – after all, the image generated by the camera is only as good as the lens used to gather the light in the scene – isn’t it?

 

But there’s a potential problem, and it lies in what evidence you base your conclusions on…

 

For Steve, at present, it’s manufacturers’ MTF charts, and comparisons thereof, coupled with his own images as they appear in Lightroom or Photoshop ACR.

Again, this might sound like a logical methodology – but it isn’t.

It’s flawed on so many levels.

 

The Image Path from Lens to Sensor

We could think of the path that light travels along in order to get to our camera sensor as a sort of Grand National horse race – a steeplechase for photons!

“They’re under starters orders ladies and gentlemen………………and they’re off!”

As light enters the lens it comes across its first set of hurdles – the various lens elements and element groups that it has to pass through.

Then they arrive at Becher’s Brook – the aperture, where there are many fallers.

Carefully staying clear of the inside rail and being watchful of any loose photons that have unseated their riders at Becher’s, we move on over Foinavon – the rear lens elements – and we then arrive at the infamous Canal Turn – the Optical Low Pass filter, also known as the Anti-alias filter.

Crashing on past the low pass filter and on over Valentine’s, only the bravest photons are left to tackle the last big fence on their journey – The Chair – our camera sensor itself.

 

Okay, I’ll behave myself now, but you get the general idea – any obstacle that lies in the path of light between the front surface of our lens and the photo-voltaic surface of our sensor is a BAD thing.

The various obstacles to light as it passes through a camera (ASIC = Application Specific Integrated Circuit)

The problems are many, but let’s list a few:

  1. Every element reduces the level of transmitted light.
  2. Because the lens elements have curved surfaces, light is refracted or bent; the trick is to make all wavelengths of light refract to the same degree – failure results in either lateral or longitudinal chromatic aberration – or worse still, both.
  3. The aperture causes diffraction – already discussed HERE

We have already seen in that same previous post on Sensor Resolution that the number of megapixels can affect overall image quality in terms of overall perceived sharpness due to pixel-pitch; so, all things considered, using photographs of any 3-dimensional scene is not always a wise method of judging lens performance.

And here is another reason why it’s not a good idea – the effect on image quality and perceived lens resolution of the anti-alias/moiré/optical low pass filter, and any other pre-filtering.

I’m not going to delve into the functional whys and wherefores of an AA filter, save to say that it’s deemed a necessary evil on most sensors, and that it can make your images take on a certain softness because it basically adds blur to every edge in the image projected by the lens onto your sensor.

The reasoning behind it is that it stops ‘moire patterning’ in areas of high frequency repeated detail.  This it does, but what about the areas in the image where its effect is not required – TOUGH!

 

Many photographers have paid service suppliers for AA filter removal just to squeeze the last bit of sharpness out of their sensors, and Nikon of course offer the ‘sort of AA filter-less’ D800E.

Side bar note: I’ve always found that with Nikon cameras at least, the pro-body range seem to suffer a lot less from undesirable AA filtration softening than their “amateur” and “semi pro” bodies – most notably the D2X compared to a D200, and the D3 compared to the D700 & D300.  Perhaps this is due to a ‘thinner’ filter, or a higher quality filter – I don’t know, and to be honest I’ve never had the desire to ‘poke Nikon with a sharp stick’ in order to find out.

 

Back in the days of film things were really simple – image resolution was governed by just two things; lens resolution and film resolution:

1/image resolution = 1/lens resolution + 1/film resolution

Film resolution was a variable depending on the Ag Halide distribution and structure, dye coupler efficacy within the film emulsion, and the thickness of the emulsion or tri-pack itself.

But today things are far more complicated.

With digital photography we have all those extra hurdles to jump over that I mentioned earlier, so we end up with a situation whereby:

1/Image Resolution = 1/lens resolution + 1/AA filter resolution + 1/sensor resolution + 1/image processor/imaging ASIC resolution
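The reciprocal rule is easy to play with. A quick Python sketch, using invented lp/mm figures purely for illustration – the point being that the system is always worse than its weakest component:

def system_resolution(*components_lpmm):
    # 1/total = 1/r1 + 1/r2 + ... so the chain is worse than any single link.
    return 1.0 / sum(1.0 / r for r in components_lpmm)

# Film days: lens + film.
print(round(system_resolution(100, 80)))            # ~44 lp/mm
# Digital: lens + AA filter + sensor + imaging ASIC.
print(round(system_resolution(100, 150, 90, 200)))  # ~31 lp/mm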

Steve is chasing after lens resolution under the slightly misguided idea that resolution equates to sharpness, which is not strictly true; and he is basing his conception of lens sharpness on the detail content and perceived detail ‘sharpness’ of his images, which are ‘polluted’, if you like, by the effects of the AA filter, sensor and imaging ASIC.

What it boils down to, in very simplified terms, is this:

You can have one particular lens that, in combination with one camera sensor produces a superb image, but in combination with another sensor produces a not-quite-so-superb image!

On top of the “fixed system” hurdles I’ve outlined above, we must not forget the potential for errors introduced by lens-to-body mount flange inaccuracies, and of course, the big elephant-in-the-room – operator error – ehh Steve.

So attempting to quantify the pure ‘optical performance’ of a lens using your ‘taken images’ is something of a pointless exercise; you cannot see the pure lens sharpness or resolution unless you put the lens on a fully equipped optical test bench – and how many of us have got access to one of those?

The truth of the matter is that the average photographer has to trust the manufacturers to supply accurately put together equipment, and he or she has to assume that all is well inside the box they’ve just purchased from their photographic supplier.

But how can we judge a lens against an assumed standard of perfection before we part with our cash?

A lot of folk, including Steve – look at MTF charts.

 

The MTF Chart

Firstly, MTF stands for Modulation Transfer Function – modu-what, I hear you ask!

OK – let’s deal with the modulation bit.  Forget colour for a minute and consider yourself living in a black & white world.  Dark objects in a scene reflect few photons of light – ’tis why they appear dark!  Conversely, bright objects reflect loads of the little buggers, hence these objects appear bright.

Imagine now that we are in a sealed room totally impervious to the ingress of any light from outside, and that the room is painted matte white from floor to ceiling – what is the perceived colour of the room? Black is the answer you are looking for!

Now turn on that 2 million candle-power 6500k searchlight in the corner.  The split second before your retinas melted, what was the perceived colour of the room?

Note the use of the word ‘perceived’ – the actual colour never changed!

The luminosity value of every surface in the room changed from black to white/dark to bright – the luminosity values MODULATED.

Now back in reality we can say that a set of alternating black and white lines of equal width and crisp clean edges represent a high degree of contrast, and therefore tonal modulation; and the finer the lines the higher is the modulation frequency – which we measure in lines per millimeter (lpmm).

A lens takes in a scene of these alternating black and white lines and, just like it does with any other scene, projects it into an image circle; in other words it takes what it sees in front of it and ‘transfers’ the scene to the image circle behind it.

With a bit of luck and a fair wind this image circle is being projected sharply into the focal plane of the lens, and hopefully the focal plane matches up perfectly with the plane of the sensor – what used to be referred to as the film plane.

The efficacy with which the lens carries out this ‘transfer’ in terms of maintaining both the contrast ratio of the modulated tones and the spatial separation of the lines is its transfer function.
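To put a number on that ‘efficacy’: modulation is conventionally measured as (max − min)/(max + min), and the transfer value is simply the image modulation divided by the scene modulation. A toy sketch with invented tone values:

def modulation(i_max, i_min):
    # Michelson contrast of a line pattern, from 0 (flat grey) to 1 (pure B&W).
    return (i_max - i_min) / (i_max + i_min)

scene = modulation(1.00, 0.00)   # perfect black/white target: modulation = 1.0
image = modulation(0.85, 0.15)   # after the lens, the tones have bled together
print(round(image / scene, 2))   # transfer value at this line frequency: 0.7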

So now you know what MTF stands for and what it means – good this isn’t it!

 

Let’s look at an MTF chart:

Nikon 500mm f4 MTF chart

Now what does all this mean?

 

Firstly, the vertical axis – this can be regarded as that ‘efficacy’ I mentioned above – the accuracy of tonal contrast and separation reproduction in the projected image; 1.0 would be perfect, and 0 would be crappier than the crappiest version of a crap thing!

The horizontal axis – this requires a bit of brain power! It is scaled in increments of 5 millimeters from the lens axis AT THE FOCAL PLANE.

The terminus value at the right hand end of the axis is unmarked, but equates to 21.63mm – half the opposing corner-to-corner dimension of a 35mm frame.
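That 21.63mm figure is nothing more mysterious than half the frame diagonal:

import math

# 35mm frame is 36mm x 24mm; the corner-to-corner diagonal is the hypotenuse.
half_diagonal = math.hypot(36.0, 24.0) / 2
print(round(half_diagonal, 2))   # 21.63 (mm)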

Now consider the diagram below:

The radial dimensions of the 35mm format.

These are the radial dimensions, in millimeters, of a 35mm format frame (solid black rectangle).

The lens axis passes through the center axis of the sensor, so the radii of the green, yellow and dashed circles correspond to values along the horizontal axis of an MTF chart.

Let’s simplify what we’ve learned about MTF axes:

MTF axes hopefully made simpler!

Now we come to the information data plots; firstly the meaning of Sagittal & Meridional.  From our perspective in this instance I find it easier for folk to think of them as ‘parallel to’ and ‘at right angles to’ the axis of measurement, though strictly speaking Meridional is circular and Sagittal is radial.

This axis of measurement is from the lens/film plane/sensor center to the corner of a 35mm frame – in other words, along that 21.63mm radius.

The axis of MTF measurement and the relative axial orientation of Sagittal & Meridional lines. NOTE: the target lines are ONLY for illustration.

Separate measurements are taken for each modulation frequency along the entire measurement axis:

Thin Meridional MTF measurement. (They should be concentric circles but I can’t draw concentric circles!).

Let’s look at that MTF curve for the 500mm f4 Nikon together with a legend of ‘sharpness’ – the 300mm f2.8:

Nikon MTF comparison between the 500mm f4 & 300mm f2.8

Nikon say on their website that they measure MTF at maximum aperture, that is, wide open; so the 300mm chart is for an aperture of f2.8 (though they don’t say so) and the 500mm is for an f4 aperture – which they do specify on the chart – don’t ask me why ‘cos I’ve no idea.

As we can see, the best transfer values for the two lenses (and all other lenses) are at 10 lines per millimeter, and generally speaking sagittal orientation usually performs slightly better than meridional, but not always.

10 lpmm is always going to give a good transfer value because it’s very coarse and represents a lower frequency of detail than 30 lpmm.

Funny thing, 10 lines per millimeter is 5 line pairs per millimeter – and where have we heard that before? HERE – it’s the resolution of the human eye at 25 centimeters.

 

Another interesting thing to bear in mind is that, as the charts clearly show, better transfer values occur closer to the lens axis/sensor center, and that performance falls as you get closer to the frame corners.

This is simply down to the fact that you are getting closer to the inner edge of the image circle (the dotted line in the diagrams above).  If manufacturers made lenses that threw a larger image circle then corner MTF performance would increase – it can be done – that’s the basis upon which PCE/TS lenses work.

One way to take advantage of center MTF performance is to use a cropped sensor – I still use my trusty D2Xs for a lot of macro work; not only do I get the benefit of center MTF performance across the majority of the frame but I also have the ability to increase the lens to subject distance and get the composition I want, so my depth of field increases slightly for any given aperture.

Back to the matter at hand, here’s my first problem with the likes of Nikon, Canon etc: they don’t specify the lens-to-target distance.  A lens that gives a transfer value of 90% plus on a target of 10 lpmm sagittal at 2 meters distance is one thing; one that did the same but at 25 meters would be something else again.

You might look at the MTF chart above and think that the 300mm f2.8 lens is poor on a target resolution of  30 lines per millimeter compared to the 500mm, but we need to temper that conclusion with a few facts:

  1. A 300mm lens is a lot wider in Field of View (FoV) than a 500mm so there is a lot more ‘scene width’ being pushed through the lens – detail is ‘less magnified’.
  2. How much ‘less magnified’? 40% less than at 500mm (see the quick sketch after this list), and yet the 30 lpmm transfer value is within 6% to 7% that of the 500mm – overall a seemingly much better lens in MTF terms.
  3. The lens is f2.8 – great for letting light in but rubbish for everything else!
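The quick sketch behind point 2 – magnification of a distant subject scales directly with focal length (the 6% to 7% figure is read off Nikon’s published curves, not calculated here):

ratio = 300.0 / 500.0                              # relative magnification, 300mm vs 500mm
print(f"{1 - ratio:.0%} less magnified at 300mm")  # 40% less magnified at 300mm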

Most conventional lenses have one thing in common – their best working aperture for overall image quality is around f8.

But we have to counterbalance the above with the aforementioned lack of target distance information.  The minimum focus distances for the two comparison lenses are 2.3 meters and 4.0 meters respectively, so obviously we know that the targets are imaged and measured at vastly different distances – but without factual knowledge of the testing distances we cannot really say that one lens is better than the other.

 

My next problem with most manufacturers’ MTF charts is that the values are supplied ‘a la white light’.

I mentioned earlier – much earlier! – that lens elements refracted light, and the importance of all wavelengths being refracted to the same degree, otherwise we end up with either lateral or longitudinal chromatic aberration – or worse still – both!

Longitudinal CA will give us different focal planes for different colours contained within white light – NOT GOOD!

Lateral CA gives us the same plane of focus but this time we get lateral shifts in the red, green and blue components of the image, as if the 3 colour channels have come out of register – again NOT GOOD!

Both CA types are most commonly seen along defined edges of colour and/or tone, and as such they both affect transferred edge definition and detail.

So why do manufacturers NOT publish this information?  There is, to my knowledge, only one that does – Schneider (read ‘proper lens’).

They produce some very meaningful MTF data for their lenses with modulation frequencies in excess of 90 to 150 lpmm; separate R,G & B curves; spectral weighting variations for different colour temperatures of light and all sorts of other ‘geeky goodies’ – I just love it all!

 

SHAME ON YOU NIKON – and that goes for Canon and Sigma just as much.

 

So you might now be asking WHY they don’t publish the data – they must have it – are they treating us like fools that wouldn’t be able to understand it; OR – are they trying to hide something?

You guys think what you will – I’m not accusing anyone of anything here.

But if they are trying to hide something then that ‘something’ might not be what you guys are thinking.

What would you think if I told you that if you were a lens designer you could produce an MTF plot with a calculator – ‘cos you can, and they do!

So, in a nutshell, most manufacturers’ MTF charts as published for us to see are worse than useless.  We can’t effectively use them to compare one lens against another because of missing data; we can’t get an idea of CA performance because of missing red, green and blue MTF curves; and finally we can’t even trust that the bit of data they do impart is even bloody genuine.

Please don’t get taken in by them next time you fancy spending money on glass – take your time and ask around – better still try one; and try it on more than 1 camera body!


Sensor Resolution

In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format dSLRs – there’s nothing wrong with megapixels; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.

 

Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20 year old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?

5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.

Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.
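If you prefer that as an angle, 0.1mm seen from 250mm lands in the same ball park as the classic ‘one arc-minute’ acuity figure – a quick sanity check:

import math

# 0.1mm of detail viewed from 250mm, expressed in arc-minutes.
angle_arcmin = math.degrees(math.atan(0.1 / 250.0)) * 60
print(round(angle_arcmin, 2))   # ~1.38 arc-minutes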

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark becomes reduced and therefore that 0.1mm detail becomes less distinct, until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cms, and so on.

Now if I try and focus on something at 25 cms my eyeballs start to ache,  so we are talking extreme close focus for the eye here.

An interesting side note is that 0.1mm is 100µm (microns) and microns are what we measure the size of sensor photosites in – which brings me nicely to SENSOR resolution.

 

Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, then the smaller each photosite has to be in order to fit them all in there;  and the smaller they are then the lower is their individual signal to noise or S/N ratio.

But there is another problem that comes with increased sensor resolution:

An increased susceptibility to diffraction – the diffraction threshold moves to a wider aperture.

Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.
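Those photosite figures fall straight out of sensor width divided by horizontal pixel count. A quick sketch using the published pixel counts as I recall them – check your own body’s spec sheet; tiny differences from the figures above just reflect the exact active-area dimensions used:

def pitch_um(sensor_width_mm, pixels_across):
    # Ignores the tiny gaps between photosites - close enough for comparison.
    return sensor_width_mm / pixels_across * 1000

print(round(pitch_um(36.0, 4256), 2))   # Nikon D3:   ~8.46 (um)
print(round(pitch_um(35.9, 7360), 2))   # Nikon D800: ~4.88 (um)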

 

What is Diffraction?

Diffraction is basically the bending or reflecting of waves by objects placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’; which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, and so you can’t see the effect unless you hit the stop down button (if you have one) and even then the image in the viewfinder is so small and dark you can’t see it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:

Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.

Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor; further than 21.25mm – therefore they are no longer in focus, but those central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture = less fuzzy; narrower aperture = more fuzzy, and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.

Now here is the problem:

Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:

Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – these photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite on the sensor ends up recording somewhat imprecise tonal values – this is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used an aperture wide enough on the higher resolution sensor – an aperture that generated an Airy Disc that was the same size or smaller than the diameter of the photosites – then only 1 single photosite would be affected and diffraction would not occur.
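You can put rough numbers on all of this with the standard Airy disc formula – first-minimum diameter ≈ 2.44 × wavelength × f-number. The ‘disc spans two photosites’ criterion in the sketch below is only one common rule of thumb; real-world limits, like the f16 to f18 I’ve found on my own D3 (see further down), can sit a stop or so away from the calculated figure, so treat these as guides, not gospel:

WAVELENGTH_UM = 0.55   # green light, roughly the middle of the visible spectrum

def airy_diameter_um(f_number):
    # First-minimum diameter of the Airy disc at the given aperture.
    return 2.44 * WAVELENGTH_UM * f_number

def limiting_f_number(pitch_um, photosites_spanned=2.0):
    # Aperture at which the disc grows to span the given number of photosites.
    return pitch_um * photosites_spanned / (2.44 * WAVELENGTH_UM)

print(round(airy_diameter_um(14), 1))      # f14 -> ~18.8 (um) disc
print(round(limiting_f_number(8.4), 1))    # 8.4um pitch (D3-ish)    -> ~f12.5
print(round(limiting_f_number(4.84), 1))   # 4.84um pitch (D800-ish) -> ~f7.2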

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.

Both images shot with a 24mm TS lens at f3.5. Left image lens is set to zero and behaves as a normal 24mm lens. Right image has 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction causes you to potentially lose the newly acquired level of, say, foreground detail in a landscape, due to lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution you want then by far the best way to go about it is to jump to medium format, if you want image quality to the n’th degree – this way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.


Lightroom Tutorials #2

 


Image Processing in Lightroom & Photoshop

 

In this Lightroom tutorial preview I take a close look at the newly evolved Clone/Heal tool and dust spot removal in Lightroom 5.

This newly improved tool is simple to use and highly effective – a vast improvement over the great tool that it was already in Lightroom 4.

 

Lightroom Tutorials sample video link below (video will open in a new window):

 

https://vimeo.com/64399887

 

This 4 disc Lightroom Tutorials DVD set is available from my website at http://wildlifeinpixels.net/dvd.html

Colour Space & Profiles


From Camera to Print
copyright 2013 Andy Astbury/Wildlife in Pixels

Colour space and device profiles seem to cause a certain degree of confusion for a lot of people; and a feeling of dread, panic and total fear in others!

The reality of colour spaces and device profiles is that they are really simple things, and that how and why we use them in a colour managed work flow is perfectly logical and easy to understand.

Up to a point colour spaces and device profiles are one and the same thing – they define a certain “volume” of colours from red to green to blue, and from black to white – and all the colours that lie in between those five points.

The colour spaces that most photographers are by now familiar with are ProPhotoRGB, AdobeRGB(1998) and sRGB – these are classed as “working colour spaces” and are standards of colour set by the International Color Consortium, or ICC; and they all have one thing in common; where red, green and blue are present in equal amounts the colour produced will be NEUTRAL.

The only real differences between these three working colour spaces is the “distances” between the five set points of red, green, blue, black and white.  The greater the distance between the three primary colours then the greater is the degree of graduation between them, hence the greater the number of potential colours.  In the diagram below we can see the sRGB & ProPhoto working colour spaces displayed on the same axes:

The sRGB & ProPhoto colour spaces. The larger volume of ProPhoto contains more colour variety between red, green & blue than sRGB.

If we were to mark five different points on the surface of a partially inflated balloon, and then inflate it some more, the points in relation to the balloon’s surface would NOT change: the points remain the same.  But the spatial distances between the points would change, as would the internal volume.  It’s the same with our five points of colour reference – red, green, blue, black & white – they do NOT change between colour spaces; red is red no matter what the working colour space.  But the range of potential colours between our 5 points of reference increases due to increased colour space volume.

So now we have dealt with the basics of the three main working colour spaces, we need to consider the volume of colour our camera sensor can capture – if you like, its colour space; but I’d rather use the word “gamut”.

Let’s take the Canon 5DMk3 as an example, and look at the volume, or gamut, of colour that its sensor can capture, in direct comparison with our 3 quantifiable working colour spaces:

The Canon 5DMk3 sensor gamut (black) in comparison to ProPhoto (largest), AdobeRGB1998 & sRGB (smallest) working colour spaces.

In a previous blog article I wrote – see here – I mentioned how to setup the colour settings in Photoshop, and this is why.  If you want to keep the greatest proportion of your camera sensor’s captured colour then you need to contain the image within the ProPhotoRGB working colour space.  If you don’t, and you use AdobeRGB or sRGB as Photoshop’s working colour space, then you will lose a certain proportion of those captured colours – as I’ve heard it put before, it’s like a sex change operation – certain colours get chopped off, and once that’s happened you can’t get them back!

To keep things really simple just think of the 3 standard working colour spaces as buckets – the bigger the bucket, the more colour it contains; and you can’t tip the colours captured by your camera into a smaller bucket without getting spillage and making a mess on the floor!
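When you do have to pour an image into a smaller bucket deliberately – for web output, say – do it as a managed conversion rather than simply assigning the new profile. A minimal sketch using Pillow’s ImageCms module, assuming an 8-bit RGB TIFF and a ProPhoto .icc/.icm file on disk; both file paths are placeholders, and older Pillow versions spell the intent ImageCms.INTENT_PERCEPTUAL:

from PIL import Image, ImageCms

src_profile = ImageCms.getOpenProfile("ProPhoto.icm")  # placeholder path to a ProPhoto profile
dst_profile = ImageCms.createProfile("sRGB")           # sRGB is built into Pillow

# The rendering intent governs how out-of-gamut colours get squeezed into the
# smaller space - perceptual compresses the whole gamut to preserve relationships.
transform = ImageCms.buildTransform(
    src_profile, dst_profile, "RGB", "RGB",
    renderingIntent=ImageCms.Intent.PERCEPTUAL,
)

img = Image.open("kingfisher_prophoto.tif")            # placeholder filename
ImageCms.applyTransform(img, transform).save("kingfisher_srgb.tif")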

As I said before, working colour spaces are neutral; but seldom does our camera ever capture a scene that contains pure neutrals.  Even though an item in the scene may well be neutral in colour, camera sensors quite often skew these colours ever so slightly; most Canon RAW files always look a teeny-weeny, ever so slight bit magenta to me when I import them; but there again I’m a Nikon shooter, and my own files seem to have a minute greenish tinge to them before processing.

Throughout our imaging work flow we have 3 stages:

1. Input (camera or scanner).

2. Working Process (Lightroom, Photoshop etc).

3. Output (printer for example).

And each stage has its representative type of colour space – we have input profiles, working colour spaces and output profiles.

So we have our camera capture gamut (colour space if you like) and we’ve opened our image in Photoshop or Lightroom in the ProPhoto working colour space – there’s NO SPILLAGE!

We now come to the crux of colour management; before we can do anything else we need to profile our “window onto our image” – the monitor.

In order to see the reality of what the camera captured we need to ensure that our monitor is in line with our WORKING COLOUR SPACE in terms of colour neutrality – not that of the camera as some people seem to think.

All 3 working colour spaces possess the same degree of colour neutrality where red, green & blue are present at the same values, irrespective of the physical size of the colour space.

So as long as our monitor is profiled to be:

1. Accurately COLOUR NEUTRAL

2. Displaying maximum brightness only in the presence of true white – which you’ll hardly ever photograph; even snow isn’t white.

then we will see a highly workable representation of image colour neutrality and luminosity on our monitor.  Only by working this way can we actually tell if the camera has captured the image correctly in terms of colour balance and overall exposure.

And the fact that our monitor CANNOT display all the colours contained within our big ProPhoto bucket is, to all intents and purposes, a fairly moot point; though seeing as many of them as possible is never a bad thing.

And using a monitor that does NOT display the volume of colour approximating or exceeding that of the Adobe working space can be highly detrimental for the reasons discussed in my previous post.

Now that we’ve covered input profiles and working colour spaces we need to move on and outline the basics of output profiles, and printer profiles in particular.

Adobe & sRGB working spaces in comparison to the colours contained in the Kingfisher image and the profile for Permajet Oyster paper using the Epson 7900 printer. (CLICK image for full sized view).

In the image above we can see both the Adobe and sRGB working spaces and the full distribution of colours contained in the Kingfisher image which is a TIFF file in our big ProPhoto bucket of colour;  and a black trace which is the colour profile (or space if you like) for Permajet Oyster paper using Epson UltraChrome HDR ink on an Epson 7900 printer.

As we can see, some of the colours contained in the image fall outside the gamut of the sRGB working colour space; notably some oranges and “electric blues” which are basically colours of the subject and are most critical to keep in the print.

However, all those ProPhoto colours are capable of being reproduced on the Epson 7900 using Permajet Oyster paper because, as the black trace shows, the printer/ink/paper combination can reproduce colours that lie outside of the Adobe working colour space.

The whole purpose of that particular profile is to ensure that the print matches what we can see on the monitor both in terms of colour and brightness – in other words, what we see is what we get – WYSIWYG!

The beauty of a colour managed workflow is that it’s economical – assuming the image is processed correctly, printing via an accurate printer profile can give you a perfect printed rendition of your screen image using just a single sheet of paper – and only one sheet’s worth of ink.

The difference between colour profiles for the same printer paper on different printers. Epson 3000 printer profile trace in Red (CLICK image for full size view).

If we were to switch printers to an Epson 3000 using UltraChrome K3 ink on the very same paper, the area circled in white shows us that there are a couple of orange hue colours that are a little problematic – they lie either close to or outside the colour gamut of this printer/ink/paper combination, and so they need to be changed in order to ‘fit’, either by localised adjustment or variation of rendering intent – but that’s a story for later!

Why is it different? Well, it’s not to do with the paper for sure, so it’s down to either the ink change or printer head.  Using the same K3 ink in an Epson 4800 brings the colours back into gamut, so the difference is in the printer head itself, or the printer driver, but as I said, it’s a small problem easily fixed.

When you consider the low cost of achieving an accurate monitor profile – see this previous post – and combine that with an accurate printer output profile or two to match your chosen printer papers, and then deploy these assets correctly you have a proper colour managed workflow.  Add to that the cost savings in ink and paper and it becomes a bit of a “no-brainer” doesn’t it?

In this post I set out to hopefully ‘demystify’ colour spaces and profiles in terms of what they are and how they are used – I hope I’ve succeeded!


Monitor Calibration with ColorMunki Photo

Following on from my previous posts on the subject of monitor calibration I thought I’d post a fully detailed set of instructions, just to make sure we’re all “singing from the same hymn sheet” so to speak.

Basic Setup

Put the ColorMunki spectrophotometer into the cover/holder and attach the USB cable.

Always keep the sliding dust cover closed when storing the ColorMunki in its holder – this prevents dust ingress which will affect the device performance.

BUT REMEMBER – slide the cover out of the way before you begin the calibration process!

Install the ColorMunki software on your machine, register it via the internet, then check for any available updates.

Once the software is fully installed and working you are ready to begin.

Plug the USB cable into an empty USB port on your computer – NOT an external hub port as this can sometimes cause device/system communication problems.

Launch the ColorMunki software.

The VERY FIRST THING YOU NEED TO DO is open the ColorMunki software preferences and ensure that it looks like the following screen:

PC: File > Preferences

Mac: ColorMunki Photo > Preferences


The value for the Tone Response Curve MUST be set to 2.2 which is the default value.

The ICC Profile Version number MUST be set to v2 for best results – this is NOT the default.

Ensure the two check boxes are “ticked”.**

** These settings can be something of a contentious issue. DDC & LUT check boxes should only be “ticked” if your Monitor/Graphics card combination offers support for these modes.

If you find these settings make your monitor become excessively dark once profiling has been completed, start again ensuring BOTH check boxes are “unticked”.

Untick both boxes if you are working on an iMac or laptop as for the most part these devices support neither function.

For more information on this, a good starting point is a page on the X-Rite website available on the link below:

http://xritephoto.com/ph_product_overview.aspx?ID=1115&Action=Support&SupportID=5561
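For the curious, the “Tone Response Curve = 2.2” value above is just a gamma power law – a toy sketch of what the number actually does (the real sRGB curve adds a small linear toe, so this is the simplified version):

def encode(v, gamma=2.2):
    # Linear light (0..1) -> gamma-encoded value sent to the display.
    return v ** (1.0 / gamma)

def decode(v, gamma=2.2):
    # Gamma-encoded value -> linear light.
    return v ** gamma

print(round(encode(0.5), 3))          # 0.73 - mid grey is stored "brighter"
print(round(decode(encode(0.5)), 3))  # 0.5  - the round trip gets you back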

If you are going to use the ColorMunki to make printer profiles then ensure the ICC Profile Version is set to v2.

By default the ColorMunki writes profiles in ICC v4 – not all computer operating systems can handle v4 profiles correctly from a graphics colour aspect; but they can all function perfectly using ICC v2.

You should only need to do this operation once, but any updates from X-Rite, or a re-installation of the software will require you to revisit the preferences panel just to check all is well.

Once this panel is set as above, click OK and you are ready to begin.
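As an aside, and purely as an illustration (this has nothing to do with the ColorMunki software itself), here’s a minimal Python sketch of what a 2.2 Tone Response Curve actually means – the pixel values stored in your files are related to the light your monitor emits non-linearly:

```python
# Purely illustrative: what a gamma 2.2 tone response curve implies.
def encoded_to_linear(v8, gamma=2.2):
    """Map an 8-bit encoded value (0-255) to relative light output (0.0-1.0)."""
    return (v8 / 255.0) ** gamma

# Mid-grey in the file (128) emerges at only ~22% of full screen brightness:
print(f"{encoded_to_linear(128):.2f}")   # -> 0.22
```

So “mid-grey” in a file is nowhere near 50% of full brightness on screen – which is exactly why the curve has to be described in the profile.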

 

Monitor Calibration

This is the main ColorMunki GUI, or graphical user interface:

Screen Shot 2013-10-17 at 12.32.58

Click Profile My Display

Screen Shot 2013-10-17 at 11.17.49

Select the display you want to profile.

I use what is called a “double desktop” and have two monitors running side by side; if you have just a single monitor connected then that will be the only display you see listed.

Click Next>.

Screen Shot 2013-10-17 at 11.18.18

Select the type of display – we are talking here about monitor calibration of a screen attached to a PC or Mac so select LCD.

Laptops – it never hurts a laptop to be calibrated for luminance and colour, but in most cases the graphics output LUT (colour Look-Up Table) is barely 8-bit to begin with; the calibration process will usually reduce that to less than 8-bit. This will normally result in the laptop screen’s colour range being reduced in size, and you may well see “virtual” colour banding in your images.

Remedy: DON’T PROCESS ON A LAPTOP – otherwise “me and the boys” will be paying you a visit!

Select Advanced.

Deselect the ambient light measurement option – it can be expensive to set yourself up with proper lighting in order to have an ICC-standard viewing/processing environment; daylight (D65) bulbs are fairly cheap and do go a long way towards helping, but the correct amount of light, the colour of the walls and ceiling, and the exclusion of extraneous light sources of incorrect colour temperature (e.g. windows) can prove somewhat more problematic and costly.

Processing in a darkened room is by far the easiest and most cost-effective way of obtaining correct working conditions.

Set the Luminance target Value to 120 (that’s 120 candelas per square meter if you’re interested!).

Set the Target White Point to D65 (that’s 6500 Kelvin – average daylight).

Click Next>.

Screen Shot 2013-10-17 at 11.19.44

With the ColorMunki connected to your system this is the screen you will be greeted with.

You need to calibrate the device itself, so follow the illustration and rotate the ColorMunki dial to the indicated position.

Once the device has calibrated itself to its internal calibration tile you will see the displayed GUI change to:

Screen Shot 2013-10-17 at 11.20.26

Follow the illustration and return the ColorMunki dial to its measuring position.

Screen Shot 2013-10-17 at 11.20.49

Click Next>.

Screen Shot 2013-10-17 at 11.21.11

With the ColorMunki in its holder and with the spectrophotometer cover OPEN for measurement, place the ColorMunki on the monitor as indicated on screen and in the image below:

XR-CLRMNK-01

We are now ready to begin the monitor calibration.

Click Next>.

The first thing the ColorMunki does is measure the luminosity of the screen. If you get a manual adjustment prompt such as this (it indicates non-support, or disabling, of the DDC preferences option):

ColorMunki-Photo-display-screen-111

Simply adjust the monitor brightness slowly until the indicator line is level with the central datum line; you should see a “tick” suddenly appear when your adjustments reach the luminance value of 120.

LCDs are notoriously slow to respond to changes in “backlight brightness” so make an adjustment and give the monitor a few seconds to settle down.

You may have to access your monitor controls via the screen OSD menu, or on a Mac via the System Preferences > Displays menu.

Once the Brightness/Luminance of the monitor is set correctly, the ColorMunki will proceed with its monitor output colour measurements.

To help you understand monitor calibration and what is going on, here is a sequence of slides from one of my workshops on colour management:

moncal1

moncal2

moncal3

moncal4

Once the measurements are complete the GUI will return to the screen in this form.

Screen Shot 2013-10-17 at 11.26.29

Either use the default profile name, or one of your own choice and click Save.

NOTE: Under NO CIRCUMSTANCES should you rename the profile after it has been saved – or any other .icc profile for that matter – otherwise the profile will not work.
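If you’re wondering why, one reason is that an ICC profile carries its own name internally – applications read that internal description tag rather than the filename, so a renamed file no longer matches what your system has registered. Here’s a quick sketch using the Pillow (PIL) library to peek at that tag – the profile path is purely hypothetical, so point it at any .icc/.icm file you actually have:

```python
from PIL import ImageCms

# Hypothetical path - substitute any .icc/.icm profile on your own system.
profile = ImageCms.getOpenProfile("/Library/ColorSync/Profiles/MyMonitor.icc")

# The internal description tag is what applications actually display,
# regardless of what the file is called on disk.
print(ImageCms.getProfileDescription(profile))
```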

Click Next>.

Screen Shot 2013-10-17 at 11.27.00

Click Save again to commit the new monitor profile to your operating system as the default monitor profile.

You can set the profile reminder interval from the drop down menu.

Click Next>.

Screen Shot 2013-10-17 at 12.32.58

Monitor calibration is now complete and you are back at the ColorMunki startup GUI.

Quit or Exit the ColorMunki application – you are done!


Screen Capture logos denoting ColorMunki & X-Rite are the copyright of X-Rite.

Monitor Calibration Devices

Colour management is the simple process of maintaining colour accuracy and consistency between the ACTUAL COLOURS in your image – in terms of Hue, Saturation and Luminosity – and those reproduced on your RGB devices; in this case, displayed on your monitor. Each and every pixel in your image has its very own individual RGB colour values, and it is vital to us as photographers that we “SEE” these values accurately displayed on our monitors.
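To put some numbers on those Hue, Saturation and Luminosity values, here’s a tiny sketch using Python’s built-in colorsys module – the pixel values are just an arbitrary example, not taken from any particular image:

```python
import colorsys

# One pixel's RGB values, scaled to the 0.0-1.0 range colorsys expects.
r, g, b = 200 / 255, 120 / 255, 40 / 255   # a warm orange

# Note the H-L-S ordering of the return values.
h, l, s = colorsys.rgb_to_hls(r, g, b)
print(f"Hue {h * 360:.0f} deg, Luminosity {l:.2f}, Saturation {s:.2f}")
# -> Hue 30 deg, Luminosity 0.47, Saturation 0.67
```

Same pixel, two equivalent descriptions – and calibration is about making sure the monitor doesn’t quietly shift any of those three quantities on the way to your eyes.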

If we were to visit The National Gallery and gaze upon Turner’s “Fighting Temeraire” we would see all those sumptuous colours on the canvas just as J.M.W. intended; but could we see the same colours if we had a pair of Ray-Bans on?

No, we couldn’t; because the sunglasses behave as colour filters and so they would add a “tint” to every colour of light that passes through them.

What you need to understand about your monitor is that it behaves like a filter between your eyes and the recorded colours in your image; and unless that “filter” is 100% neutral in colour, then it will indeed “tint” your displayed image.

So, the first effect of monitor calibration is that the process NEUTRALIZES any colour tint in the monitor display and so shows us the “real colours” in our images; the correct values of Hue and Saturation.

Now imagine we have an old fashioned Kodak Ektachrome colour slide sitting in a projector. If we have the correct wattage bulb in the projector we will see the correct LUMINOSITY of the slide when it is projected.

But if the bulb wattage is too high then the slide will project too brightly, and if the bulb wattage is too low then the projected image will not be bright enough.

All our monitors behave just like a projector, and as such they all have a brightness adjustment which we can directly correlate to our old-fashioned slide projector bulb; this brightness, or backlight, control is another aspect of monitor calibration.

Have you ever made a print that came out DARKER than the image displayed on the screen?

If you have then your monitor backlight is too bright!

And so, the second effect of monitor calibration is the setting of the correct level of brightness or back lighting of our monitor in order for us to see the true Luminosity of the pixels in our images.

Without accurate Monitor Calibration your ability to control the accuracy of colour and overall brightness of your images is severely limited.

I get asked all the time “what’s the best monitor calibration device to use?”, so above is a short video (no sound) I’ve made showing the 3D and 2D plots of profiles I’ve just made for the same monitor using two different monitor calibration devices/spectrophotometers from opposite ends of the pricing scale.

The first plot you see in black is the AdobeRGB1998 working colour space – this is only shown as a standard by which you can judge the other two profiles; the monitors’ working colour spaces, if you like.

The yellow plot that shows up as an overlay is a profile done with an X-Rite ColorMunki Photo, which usually retails for around £300 – and it clearly shows this particular monitor rendering a greater number of colours in certain areas than are contained in the AdobeRGB1998 reference space.

The cyan plot is the same monitor, but profiled with the i1Photo Pro 2 spectro – not much change out of £1300 thank you very much – and the resulting profile is virtually an identical twin of the one obtained with the ColorMunki, which retails for a quarter of the price!

Don’t get me wrong, the i1 is a far more efficient monitor calibration device if you want to produce custom PRINTER profiles as well, but if you are happy using OEM profiles and just want perfect monitor calibration then I’d say the ColorMunki Photo is the more sensible purchase; or better still the ColorMunki Display at only around £110.


Wildlife Photography – Common Kestrel

Wildlife Photography How To – Common Kestrel – “Flaps 30, Gear Down”

As a specialist in natural history and wildlife photography it’s always difficult to decide which are your favourite images from all the frames you shoot – after all, you are quite “emotionally close” to every single one of them!

Being in it to make money in order to live makes the job a little more difficult for the simple reason that, being a photographer, the images you REALLY like are hardly ever the images the picture buyers like. So in order to make a living you have to devote the majority of your camera time to producing commercially viable images – not gallery images.

But occasionally you’ll come up with a shot that satisfies both sides of the equation – you love it yourself and are really proud of it; and it SELLS WELL!

So I thought I’d post a series of my own images that satisfy both myself and the picture buyers, and I’m going to start with one of my top 5 sellers in the last 18 months – your Uncle Andy’s infamous Kestrel shot.

wildlife photography, common Kestrel, photography technique

Common Kestrel Landing
©Andy Astbury/Wildlife in Pixels

The shot was taken in June of 2012 at Poolbridge Farm in Yorkshire, and I approached the entire shoot day with this particular shot in mind – you have to have a goal set even with wildlife photography, otherwise you just end up shooting at random; and you HAVE to be in control of at least something other than the camera!

I’d seen all the usual “kestrel perched” shots that were coming out of Poolbridge, but I wanted something a little different – and I got this, which was just what I wanted.

Remember PPPPP – positive planning prevents poor performance!

So here’s how the shot was planned and executed:

This position in the Kestrel’s flight to the perch is BEHIND the perch – in this case an old wooden farm gate – so it happens BEFORE the bird lands on the perch.

So primary focus has to be BEHIND the perch.

OK, we’re all good so far, but there are some very important factors to take into consideration.  We want a head-on shot, the bird is flying at about 7 meters per second, and we need to take the shot when the bird is around 1 meter behind the perch.

So here’s our main problem – head-on means that the closing speed between bird and lens is at its fastest possible, and sadly there isn’t an autofocus system on the planet that will keep up with this small target flying straight down the lens axis and guarantee you the shot.

Therefore, sad to say, but AF is out and manual focus is in!

The bird itself is a mature female so she has a wingspan of about 30 inches.

So the shot calls for the following criteria – set the camera at a distance that will capture a 30-inch-wide target about 30 inches behind the perch, with a 500mm f/4 lens, at about 80% of full frame width.  The lens needs to be manually pre-focused at the required distance, and an aperture set that will give sufficient depth of field for a good degree of sharpness over the nearest parts of the bird – beak to feet.

Simple maths tells me I need to have the bird arriving at “position X” about 40 feet or 12 meters in front of the lens.
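For the curious, here’s that “simple maths” as a quick Python sketch. Note the 36mm sensor width is my assumption for a full-frame body, purely for illustration – the figures are back-of-envelope, not a claim about the exact rig used on the day:

```python
# Back-of-envelope framing maths; sensor width is an assumed full-frame 36mm.
SENSOR_WIDTH_M = 0.036
FOCAL_LENGTH_M = 0.500        # the 500mm lens
WINGSPAN_M = 30 * 0.0254      # 30 inches in metres
FRAME_FRACTION = 0.8          # bird to fill ~80% of the frame width

frame_width_m = WINGSPAN_M / FRAME_FRACTION               # scene width needed
distance_m = frame_width_m * FOCAL_LENGTH_M / SENSOR_WIDTH_M
print(f"Pre-focus at ~{distance_m:.1f} m ({distance_m * 3.281:.0f} ft)")
# -> ~13.2 m (43 ft), the same ballpark as the 12 m / 40 ft above
```

It also shows why AF had no chance: at 7 meters per second the bird covers the final metre to the perch in about a seventh of a second – barely one or two frames, even at maximum frame rate.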

So now it’s easy; I just get my mate Mike, who was with me on the day, to stand about a meter behind the gate post with his hands outstretched 30 inches apart, and frame up so his hands are both well in frame and about a third of the frame from its top edge.  Then I manually focus on his cammo-patterned shirt front, making sure that both lens and camera body are in MF mode, and I’m all set to take the shot from a lens point of view.

Set the camera to maximum frame rate (usually never a good idea on a Nikon as it locks the AF, but we are not using AF so it doesn’t matter in this instance), and now I’m all set.

The bird is 100% wild and has a nest full of screaming hungry kids to feed, but she knows that if she’s seen people about then there’s usually a tasty morsel of food on the old gate post. She perches in one of two trees while she’s deciding if it’s safe to come to the perch, but her approach is only head-on if she’s coming in from one of them.

So now it’s just a case of sitting and waiting until she’s in that particular tree, and then waiting some more until she begins her approach.

Once she’s on her way I pick her up in the viewfinder when she’s about half way across the field (she’s out of focus and very fuzzy when I begin to follow her), keep her fuzzy shape in frame as she gets sharper the closer she comes, then just as she starts to get some definition in the viewfinder I press and hold down the shutter to shoot an entire buffer full of frames – remembering to keep the camera moving as before, otherwise the composition will be a bit off!

It’s a technique rather like shotgun shooting – you need to follow through while squeezing the trigger, otherwise you miss behind!

Don’t get me wrong, the shot wasn’t “in the can” on the first attempt, and nor was it on the fourth! But the fifth time she came in I nailed it. After that, all I had to do was try to repeat the shot over and over again and get it all to come together with some good light – we got there in the end.

All in all the shot has made over 500 sales in the last 12 months or so, in all guises from small website jpegs to full size prints – so buyers like it – and I’m pleased with the shot from both an aesthetic and technical standpoint.

And it’s even been on the TV – 4 times now!

So, the job’s a good ‘un!


Monitor, Is Yours Up To The Job?

Is Your Monitor Actually Up To The Job?

As photographers we have to take something of a “leap of faith” that the monitor we use to view and process our images on is actually up to the job – or do we?

No – is the short answer!  As a Photoshop & Lightroom educator I try and teach this mystical thing called “Colour Management” – note the correct spelling of the word COLOUR!

The majority of amateur photographers (and a few so-called pros come to that!) seem to think that colour management is some great complicated edifice; or even some sort of “re-invention of the wheel” – and so they either bury their head in the sand or generally “pooh-pooh” the idea as unnecessary.

Well, it’s certainly NOT complicated, but it certainly IS necessary.

The first stage in a colour managed workflow is to ensure that your monitor is calibrated – in other words it is working at the correct brightness level, and the correct colour balance or white point – this will ensure that when your computer sends pure red to your monitor, pure red is seen on the screen; not red with a blue tint to it!

But correct calibration of your monitor is fairly useless if your monitor cannot reproduce a large variation of colour – in other words, if its colour gamut is too small.

And it’s Monitor Colour Gamut that I want to look at in this post.

The first thing I’d like you to do is open up Photoshop and go to the Colour Settings – that’s Edit>Colour Settings, or shift+cmd+K on Mac, or shift+Ctrl+K on PC.

Once this dialogue box is open, set it up as follows:

Screen Shot 2013-11-18 at 13.47.30

This is the optimum setup of Photoshop for digital photography as ProPhoto is the best colour space for preserving the largest number of colours captured by your dslr sensor; far better than AdobeRGB1998 – but that’s another story.

If you like you can click the SAVE button and then give this settings profile a name – I call mine ProPhoto_Balanced_CC

Now that you are working with the largest colour palette possible inside Photoshop, I want you to go to File>New and create a new 500×500 pixel square with a resolution of 300 pixels per inch, with the settings as follows:

Screen Shot 2013-11-18 at 13.58.34

Click OK and you should now have a white square.

Now go to your foreground colour, click it to bring the colour palette dialogue box into view and manually add the following values indicated by the small red arrows:

Screen Shot 2013-11-18 at 14.06.52

The colour will look a little different from how it does in the jpeg above.

So now we have a rather lurid sickly-looking green square in the ProPhoto colour space.

Now duplicate the image TWICE and then go to Window>Arrange>3-up Vertical and you should end up with a display looking like this:

unconverted

Now comes the point of the exercise – click on the tab for the centre image and go Edit>Convert to Profile and choose AdobeRGB(1998) as the destination space (colour space).

Then click on the tab for the left hand image and go Edit>Convert to Profile and choose sRGB as the destination space.
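Incidentally, if you’d rather script the same three-square experiment, here’s a rough sketch using the Pillow (PIL) library. The green values here are placeholders (the real ones live in the screenshot above), and the ProPhoto profile path is an assumption – point it at wherever your system keeps the ProPhoto.icm profile that ships with Photoshop:

```python
from PIL import Image, ImageCms

# Assumed path to the ProPhoto profile; sRGB Pillow can synthesise itself.
prophoto = "ProPhoto.icm"
srgb = ImageCms.createProfile("sRGB")

# The lurid green square - placeholder values, not the screenshot's exact ones.
square = Image.new("RGB", (500, 500), (120, 255, 80))

# ProPhoto -> sRGB: out-of-gamut greens get squeezed, which is the
# difference the three-up comparison makes visible on a wide-gamut monitor.
converted = ImageCms.profileToProfile(
    square, prophoto, srgb,
    renderingIntent=ImageCms.INTENT_RELATIVE_COLORIMETRIC,
)
converted.save("green_srgb.png")
```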

Here’s the thing – if your display DOES NOT look like this:

MonitorColourDisplay

and all three squares look the same as the square on the left then your monitor only has a small sRGB colour gamut and is going to severely inhibit your ability to process your images properly or with any degree of colour accuracy.

Monitors rely on their Colour Look-up Table or LUT in order to display colour. Calibration of the monitor can reduce the size of the available range of colours in the LUT if it’s not big enough in the first place, and so calibration can indeed make things worse from a colour point of view; BUT, it will still ensure the monitor is set to the correct levels of brightness and colour neutrality; so calibration is still a good idea.

Laptops are usually the best illustration of this small LUT problem; normally their display gamuts are barely 8-bit sRGB to begin with, and if calibration drops the LUT below 8-bit then the commonest problem you will see is colour banding in your images.
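If you want to see the “smaller LUT equals banding” effect in raw numbers, here’s a quick numpy sketch – the 6-bit figure is just an assumed post-calibration depth for the sake of the demonstration:

```python
import numpy as np

# A smooth 0-255 grey ramp, sampled finely.
ramp = np.linspace(0, 255, 1024)

# Quantise it down to an assumed 6-bit LUT depth.
bits = 6
step = 256 / (2 ** bits)
banded = np.round(ramp / step) * step

print(len(np.unique(np.round(ramp))), "grey levels before,",
      len(np.unique(banded)), "after")   # -> 256 grey levels before, 65 after
```

Sixty-five distinct steps instead of 256 is precisely the stair-stepping you see as banding in smooth skies and gradients.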

If however, your display looks like the image above then you’re laughing!

Why is a large monitor colour gamut essential for digital photography?  Well it’s all to do with those colour spaces:

Screen Shot 2013-11-18 at 14.56.11

If you look at the image above you’ll see the three standard primary working colour spaces of ProPhoto, AdobeRGB(1998) and sRGB overlaid for comparison with each other.  There’s also a 4th plot – this is the input space of the Canon 1Dx dslr – in other words, it encompasses all the colours the sensor of that camera can record.

In actual fact, some colours can be recorded by the camera that lie OUTSIDE even the ProPhoto colour space!

But you can clearly see that the Adobe space loses more camera-captured colour than ProPhoto – hence RAW file handlers like Lightroom work in ProPhoto (or, strictly speaking, MelissaRGB – but that’s yet another story!) in order to preserve as many of the colours captured by the camera as possible.

Even more camera colour is lost to the sRGB colour space.

So this is why we should always have Photoshop set to a default ProPhoto working space – the archival images we produce will therefore retain as much of the original colours captured by the camera as possible.

If we now turn our attention back to monitors – the windows on to our images – we can now deduce that:

a. If a monitor can only display sRGB at best, then we will only be able to see a small portion of the camera’s captured colour.

b. However, if the monitor has a larger colour gamut and a bigger LUT, both in terms of colour spectrum and bit depth, then we will see a lot more of the original capture colours – and the more we can see, the more effectively we can colour manage.

Monitors are available that can display the Adobe colour gamut, and indeed quite a few can display more colours – but if you are on a tight budget these can seem expensive to say the least.

A good monitor that I recommend quite a lot – indeed I use one myself – is the HP LP2475W, well worth the price if you can find one; with a bit of tweaking it will display 98%+ of the AdobeRGB colour space in all three primary colours, and even some of the warmer colours that are ProPhoto-only:

Screen Shot 2013-11-18 at 15.40.07

The green plot is the Adobe space, the red plot is the HP LP2475W display colour space.

So it’s a good buy if you can find one.

However, there’s a catch – there always is! This monitor relies on the LUT of the graphics card driving it – plugged into the modest 512Mb nVidia GT120 in my Mac Pro it is brilliant, and competes at every level with the likes of Eizo ColorEdge and NEC SpectraView for all practical purposes.  But plugged into the back of a laptop, it can only reproduce what the lower-specification graphics chips can supply it with.

So there we have it, a simple way to test if your monitor is giving you the best advantage when it comes to processing your images – food for thought?
