YouTube Channel Latest Video Training

The latest photography video training from my YouTube channel.

I’ve been busy this week adding more content to the old YouTube channel.

Adding content is really time-consuming: recording takes around twice the length of the final video.

Then there’s the editing, which usually takes around the same time or a bit longer, and the encoding, compression and uploading take around the same again.

So yes, a 25-minute video takes A LOT more than 25 minutes to make and put live for the world to view.

This week’s video training uploads are:

This video deals with the sadly overlooked topic of raw file demosaicing.

Next up is:

This video is a refreshed version of getting contrast under control in Lightroom – particularly Lightroom Classic CC.

Then we have:

This video is something of a follow-up to the previous one, where I explain the essential differences between contrast and clarity.

And finally, one from yesterday – which is me restraining myself from embarking on a full-blown ‘rant’ all about the differences between DPI (dots per inch) and PPI (pixels per inch):

Important Note

Viewing these videos is essential for the betterment of your understanding – yes it is!  And all I ask in return is that you:

  1. Click the main channel subscribe button HERE https://www.youtube.com/c/AndyAstbury
  2. Give the video a ‘like’ by clicking the thumbs up!

YouTube is a funny old thing, but a substantial subscriber base and plenty of video likes will bring me closer to laying my hands on the latest gear to review for you!

If all my blog subscribers subscribed to my YouTube channel my subs would more than treble – so go on, what are you waiting for?

I do like creating free YouTube content, but I do have to put food on the table, so I have to do ‘money making stuff’ as well – I can’t afford to become a full-time YouTuber yet!  But wow, would I like to be in that position.

So that’s that – appeal over.

Watch the videos, and if you have any particular topic you would like me to do a video on, then please just let me know.  Either email me, or you can post in the comments below – no comment goes live here unless I approve it, so if you have a request but don’t want anyone else to see it, then just say.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Good Contrast Control in Lightroom CC

Contrast Control in Lightroom

Learning how to deploy proper contrast control in Lightroom brings with it two major benefits:

  • It allows you to reveal more of your camera sensor’s dynamic range.
  • It allows you to reveal considerably more image detail.


I have posted on this subject before, under the guise of neutralising Lightroom’s ‘hidden background adjustments’.  But as Lightroom CC 2015 evolves, trying to ‘nail’ the best way of doing something becomes like trying to hit a moving target.

For the last few months I’ve been using this (for me) new method – and to be honest it works like a charm!

It involves the use of the ‘zero’ preset together with a straight process version swap around, as illustrated in the before/after shot above and in the video linked below.  This video is best viewed on my YouTube channel:

The process might seem a little tedious at first, but it’s really easy when you get used to it, and it works on ALL images from ALL cameras.

Here is a step-by-step guide to the various Lightroom actions you need to take in order to obtain good contrast control:

Contrast Control Workflow Steps:

1. Develop Module Presets: Choose ZEROED
2. Camera Calibration Panel: Choose CAMERA NEUTRAL
3. Camera Calibration Panel: Choose Process Version 2010
4. Camera Calibration Panel: Choose Process Version 2012
5. Basic Panel: Double-click Exposure (goes from -1 to 0)
6. Basic Panel: Adjust the Blacks setting to taste if needed.
7. Detail Panel: Reset Sharpening to its default of +25
8. Detail Panel: Reset Colour Noise to its default of +25
9. Lens Corrections Panel: Tick Remove Chromatic Aberration.

Now that you’ve got good contrast control you can set about processing your image – just leave the contrast slider well alone!

Why is contrast control important, and why does it ‘add’ so much to my images, Andy?

We are NOT really reducing the contrast of the raw file we captured.  We are simply reducing the EXCESSIVE CONTRAST that Lightroom ADDS to our files.

  • Lightroom typically ADDS a +33 contrast adjustment but ‘calls it’ ZERO.
  • Lightroom typically ADDS a medium contrast tone curve but ‘calls it’ LINEAR.

Both of these are contrast INCREASES, and any increase in contrast can be seen as a ‘compression’ of the tonal space between BLACK and WHITE.  This is a dynamic range visualisation killer because it crushes the ends of the midtone range.

It’s also a detail killer, because 99% of the subject detail is in the mid tone range.  Typically the Lightroom tonal curve range for midtones is 25% to 75%, but Lightroom is quite happy to accept a midtone range of 10% to 90% – check those midtone arrow adjusters at the bottom edge of the parametric tone curve!
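To see what that ‘crushing’ means in numbers, here’s a toy sketch – a deliberately crude linear gain, NOT Lightroom’s actual tone curve – pushing two distinct highlight tones through a contrast boost and watching their separation vanish:

```python
# A crude linear contrast gain around mid grey, clipped to the 0..1
# range - just the principle, not Lightroom's real curve.
def add_contrast(tone, gain):
    """tone in 0..1; gain > 1.0 increases contrast."""
    return min(1.0, max(0.0, 0.5 + (tone - 0.5) * gain))

# Two distinct highlight tones...
a, b = 0.90, 0.95
# ...after a +33%-style boost both clip to pure white, so the
# tonal separation between them is destroyed:
print(add_contrast(a, 1.33), add_contrast(b, 1.33))  # -> 1.0 1.0
```

Lightroom’s real adjustment is an S-curve rather than a straight gain, but the effect at the ends of the range is the same idea: separate tones get squeezed together.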

I hope you find this post useful folks, and don’t forget to watch the video at full resolution on my YouTube Channel.


 

Lumenzia for Wildlife

The Lumenzia Photoshop extension

Yet more on the usefulness of the Lumenzia Photoshop extension – the shortcut to great-looking images of all types and styles.

I had an email from client and blog follower David Sparks after my last post about this mighty useful Photoshop tool.

He sent these before and after rail shots:


Before adding Lumenzia. Click for larger view.


After adding Lumenzia. Click for larger view.


Comparison overlay – see how the left side of the image has that extra presence – and that’s just with the click of a couple of buttons in the Lumenzia GUI. Click to view larger.

Here is what David had to say in his email:

Andy, here is a before and after.  Processing was much, much faster than usual, using Lumenzia.

Thanks for bringing it to my attention….I’m working my way through your Image Processing in LR4 & Photoshop + LR5 bundle and enjoying it very much.

And as my friend and blog follower Frank Etchells put it:

Excellent recommendation this Andy. Bought it first time from your previous posting… at just over £27 it’s marvellous :)

What puzzles me is the fact that these Lumenzia posts have had over 500 separate page views in the last few days, but less than 3% of you have bought it – WTF are you guys waiting for…

Get it BOUGHT – NOW – HERE

UPDATE: Greg Benz (the plugin author) has launched a comprehensive Lumenzia training course – see my post here for more information.


Lumenzia for Easy Luminosity Masking


I’m a really BIG user of luminosity masking techniques, and of the ease with which you can use them to create really powerful adjustments to your image inside Photoshop – adjustments that are IMPOSSIBLE to make in Lightroom.

For a while now I’ve been selling a luminosity mask action set for Photoshop, and up until a week ago I had plans to upgrade said action set to produce even more custom masks.

That is until a good friend of mine, Mr. Omar Jabr, asked me if I’d come across this new product, LUMENZIA, that made the production and deployment of luminosity masks and their derivatives EVEN EASIER.


An original RAW file open in Lightroom (right) together with the final image (left) – 99% of the “heavy lifting” being done in Photoshop using the Lumenzia Extension and its luminosity masking functions.

In all honesty I am so excited about this amazing software extension that I’ve abandoned all plans to further develop my own action set for Photoshop – to do so would be a truly pointless exercise.

There is so much more to Lumenzia than the production of the standard 4 or 5 Darks, Lights and Midtone luminosity masks that my own and other available action sets produce.
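For the curious, the basic idea behind those Darks/Lights/Midtone masks can be sketched in a few lines – the key trick is that intersecting a selection with itself multiplies it, narrowing the mask. This is just an illustration of the classic technique, not Lumenzia’s actual code:

```python
import numpy as np

def luminosity_masks(rgb):
    """Build basic luminosity masks from an RGB float image (0..1, shape HxWx3).
    A sketch of the classic technique, not Lumenzia's implementation."""
    # Rec.709 luma acts as the base 'Lights' selection
    lights = 0.2126 * rgb[..., 0] + 0.7152 * rgb[..., 1] + 0.0722 * rgb[..., 2]
    darks = 1.0 - lights
    return {
        "L1": lights,            # basic Lights
        "L2": lights * lights,   # narrower Lights (selection intersected with itself)
        "D1": darks,             # basic Darks
        "D2": darks * darks,     # narrower Darks
        "M1": (1.0 - lights) * (1.0 - darks),  # Midtones = All minus Lights minus Darks
    }

# Mid grey ends up 25% selected by the basic Midtones mask:
grey = np.full((1, 1, 3), 0.5)
masks = luminosity_masks(grey)
print(float(masks["M1"][0, 0]))
```

Each mask is itself a greyscale image, which is exactly why they are “self-feathering” – every pixel is selected in proportion to its own brightness.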

To get an idea of just how powerful Lumenzia is, just click HERE to visit the application’s home page – and buy it while you are there; the purchase is a “no brainer” and one of those digital imaging JDI’s (just do it)!

The inclusion of a luminosity masking function based on the Zone System gives you instant recourse to masks based on Ansel Adams’ 11-zone system of scene brightness – a classic approach to the quantification of subject brightness range, created by arguably the greatest landscape photographer the world has ever known – IMHO of course.


In order to install Lumenzia you will need to install the correct Photoshop Extension Manager for whichever version of Photoshop you are running – CS6, CC, or CC2014 (it is not intended to be installed on CS5 or lower).

1. Buy Lumenzia

2. Follow the download link, and download the .Zip folder.

3. Extract the folder contents.

4. Locate the “com.lumenzia.zxp” file in the extracted contents, right click and choose Open with: Adobe Extension Manager v.xx

You should see:

[screenshot: Adobe Extension Manager]

Click Install, and you should see:

[screenshot: Adobe Extension Manager]

If you are running Mac OS 10.10x Yosemite you may have a slight problem with the CC2014 Extension Manager not being able to find the application pathway to Ps CC2014.  If you get a message from the Extension Manager waffling on about needing Photoshop v11 or higher, don’t stress – the fix is a little brutal but really simple:

Go Applications>Utilities>Adobe Installers and UNINSTALL (that’s right!) BOTH Photoshop CC2014 and Extension Manager CC2014, then log back in to your CC account, go to the Apps tab and re-install Photoshop CC2014 AND Extension Manager CC2014 sequentially – that will cure the problem and only take about 5 or 6 minutes.

Open a RAW file in CameraRAW, or better still Lightroom. Get your camera calibration and contrast under control as I’ve banged on about so many times before, negate any chromatic aberration and do a bit of effective noise reduction if needed, then send the image to Photoshop:


Go Window>Extensions>Lumenzia and the Lumenzia interface will appear – I like to drag it into the right hand tools palette so it’s not encroaching on the work area.

The first thing that amazed me about Lumenzia is that you can create luminosity masks without storing 12 or 15 separate Alpha channels in the image.  This makes a HUGE difference to the file size – not just from the disc space PoV, but it can also bring file handling speed benefits in terms of tile rendering and scratch disc usage.  If you don’t understand that, just think of it as a GOOD thing!

For example:


The final adjusted image (prior to a couple of tweaks in Lightroom) on the left is 271MB with all layers intact; the image on the right, though not yet processed, has been prepared for processing by running a luminosity mask action set and generating a stack of Alpha channels – it is now over 458MB:


…just because of the Alpha channels.  And we have also got 50 steps of History that have to be retained by Photoshop.  As you’ve now realised, the joke is that it’s getting on for double the size of the Lumenzia-processed image, and we haven’t even begun to make any adjustments yet!
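The arithmetic behind that file-size bloat is simple enough – each stored Alpha channel is one full greyscale plane at the image’s bit depth. A rough sketch (the Nikon D4 pixel dimensions are an assumed example, not taken from this particular file):

```python
# Back-of-envelope estimate of the extra weight of stored alpha channels:
# each saved channel is one greyscale plane at the image's bit depth.
def alpha_overhead_mb(width, height, n_channels, bit_depth=16):
    """Approximate uncompressed size (MiB) added by n alpha channels."""
    return width * height * (bit_depth // 8) * n_channels / 2**20

# e.g. a 16-bit file at Nikon D4 dimensions (4928 x 3280):
print(round(alpha_overhead_mb(4928, 3280, 12), 1))  # -> 370.0 (MiB for 12 masks)
```

So a dozen 16-bit masks at full-frame resolution can easily add hundreds of megabytes before a single adjustment layer exists – which is why generating masks on demand, as Lumenzia does, keeps the file so much leaner.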

There is lots more to Lumenzia, such as surface sharpening and easy dodge and burn layer creation – it’s going to take me a week to digest it all.

Prior to working with Lumenzia my one question was “how good are the masks?” – well, they are pixel-perfect.

Creating pixel-perfect luminosity masks is the most tedious of jobs if you do it the manual way – so much so that most folk take one look at the process and go “No thanks…”

Photographers like myself couldn’t really help alleviate the tedium until the advent of CS6, which gave us the ability to record an ACTION that ran a PREVIOUSLY recorded action – and so the luminosity mask action set was born.

But the developer of Lumenzia has topped it all by the proverbial country mile and given us a totally unique way of making the tedious and complex very easy and simple.

Once you have made your purchase you’d do well to go and watch the developer videos that are available online; you will get links to the training and support pages in your purchase receipt.

And to top it all off we can even generate Alpha channels and selections if we want or need to, and we can mask on the basis of Vibrancy and Saturation; yet another processing wheeze known by few, and used by fewer still.

The developer has given me permission to demonstrate and teach the deployment of Lumenzia, and to promote it as an affiliate.  I’ve been offered affiliate-ships before but have rejected them in the past because basically what was being peddled was either crap or too expensive; or BOTH.

But whatever you think the opinion of yours truly is worth, I can honestly say that Lumenzia is most definitely NEITHER of the above – it’s that good I’ll never use anything else ever again, and at under 40 bucks you’re going to make one hell of a difference to your images with so little effort it’s unreal.

Click HERE to buy and download

LUMENZIA – BUY IT – go on, get on with it!


Lumenzia GUI for Photoshop CC2014

UPDATE June 2018: Greg Benz (the plugin author) has launched a comprehensive Lumenzia training course – see my post here for more information.


Please consider supporting this blog.

Camera Calibration

Custom Camera Calibration

The other day I had an email fall into my inbox from a leading UK online retailer…whose name escapes me but is very short… that made my blood pressure spike.  It was basically offering me 20% off the cost of something that will revolutionise my photography – ColorChecker Passport camera calibration profiling software.

I got annoyed for two reasons:

  1. Who the “f***” do they think they’re talking to sending ME this – I’ve forgotten more about this colour management malarkey than they’ll ever know….do some customer research you idle bastards and save yourselves a mauling!
  2. Much more importantly – tens of thousands of you guys ‘n gals will get the same email and some will believe the crap and buy it – and you will get yourselves into the biggest world of hurt imaginable!

Don’t misunderstand me, a ColorChecker Passport makes for a very sound purchase indeed and I would not like life very much if I didn’t own one.  What made me seethe is the way it’s being marketed, and to whom.

Profile all your cameras for accurate colour reproduction…..blah,blah,blah……..

If you do NOT fully understand the implications of custom camera calibration you’ll be in so much trouble when it comes to processing you’ll feel like giving up the art of photography.

The problems lie in a few areas:

First, a camera profile is a SENSOR/ASIC OUTPUT profile – think about that a minute.

Two things influence sensor/ASIC output – ISO and lens colour shift.  Yep, that’s right: no lens is colour-neutral, and all lenses produce colour shifts, either by tint or spectral absorption.  And higher ISO settings usually produce a cooler, bluer image.

Let’s take a look at ISO and its influence on custom camera calibration profiling.  I’m using a far better bit of software for the job – “IN MY OPINION” – the Adobe DNG Profile Editor, free to all (Mac download and Windows download) – but you do need the ColorChecker Passport itself!

I prefer the Adobe product because I found the camera calibration profiles produced by the ColorChecker software to be, well, pretty vile – especially in terms of increased contrast; not my cup of tea at all.


5 images shot at 1 stop increments of ISO on the same camera/lens combination.

Now this is NOT a demo of software – a video tutorial of camera profiling will be on my next photography training video coming sometime soon-ish, doubtless with a somewhat verbose narrative explaining why you should or should not do it!

Above, we have 5 images shot on a D4 with a 24-70mm f2.8 at 70mm, under consistent overcast daylight, at 1-stop increments of ISO between 200 and 3200.

Below, we can see the resultant profile and distribution of known colour reference points on the colour wheel.


Here’s the 200 ISO custom camera calibration profile – the portion of interest to us is the colour wheel on the left and the points of known colour distribution (the black squares and circled dot).

Next, we see the result of the image shot at 3200 ISO:


Here’s the result of the custom camera profile based on the shot taken at 3200 ISO.

Now let’s superimpose one over t’other – if ISO doesn’t matter to a camera calibration profile then we should see NO DIFFERENCE…


The 3200 ISO profile colour distribution overlaid onto the 200 ISO profile colour distribution – it’s different and they do not match up.

…well, would you bloody believe it!  Embark on custom camera calibration profiling of your camera, then apply that profile to an image shot with the same lens under the same lighting conditions but at a different ISO, and your colours will not be right.

So, now my assertions about ISO have been vindicated, let’s take a look at skinning the cat another way: keeping ISO the same but switching lenses.

Below is the result of a 500mm f4 at 1000 ISO:


Profile result of a 500mm f4 at 1000 ISO

And below we have the 24-70mm f2.8 @ 70mm and 1000 ISO:


Profile result of a 24-70mm f2.8 @ 70mm at 1000 ISO

Let’s overlay those two and see if there’s any difference:


Profile results of a 500mm f4 at 1000 ISO and the 24-70 f2.8 at 1000 ISO – as massively different as day and night.

Whoops….it’s all turned to crap!

Just take a moment to look at the info here.  There is movement in the orange/red/red magentas, but even bigger movements in the yellows/greens and the blues and blue/magentas.

Because these comparisons are done simply in Photoshop layers, with the top layer at 50% opacity, you can even see there’s an overall difference in the Hue and Saturation slider values for the two profiles – the 500mm profile is 2 and -10 respectively, while the 24-70mm is 1 and -9.

The basic upshot of this information is that the two lenses apply a different colour cast to your image AND that cast is not always uniformly applied to all areas of the colour spectrum.

And if you really want to “screw the pooch”, here’s the above comparison side by side with the 500mm f4 1000 ISO against the 24-70mm f2.8 200 ISO view:


500mm f4/24-70mm f2.8 1000 ISO comparison versus 500mm f4 1000 ISO and 24-70mm f2.8 200 ISO.

A totally different spectral distribution of colour reference points again.

And I’m not even going to bother showing you that the same camera/lens/ISO combo will give different results under different lighting conditions – you should by now be able to envisage that little nugget yourselves.

So, custom camera calibration: if you do it right you’ll be profiling every body/lens combo you have, at every conceivable ISO value and lighting condition – it’s one of those things where, if you can’t do it all, you’d in most cases be best off not doing it at all.
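Put some hypothetical numbers on “every conceivable combination” and the scale of the job becomes obvious (these kit counts are purely illustrative, not my own bag):

```python
# Hypothetical kit, purely for illustration:
bodies = 2
lenses = 5
iso_stops = 6    # e.g. 100-3200 in whole stops
lighting = 4     # e.g. overcast, full sun, open shade, tungsten

# One custom profile per combination:
profiles_needed = bodies * lenses * iso_stops * lighting
print(profiles_needed)  # -> 240 separate custom profiles
```

Two hundred-odd profiles to build, name, and then pick correctly at processing time – which is exactly why the “one profile fixes everything” marketing pitch falls apart.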

I can think of a few instances where I would do it as a matter of course, such as scientific work, photo-microscopy, and artwork photography/copystand work etc., but these are well outside the remit of more normal photographic practice.

As I said earlier, the Passport device itself is worth far more than its weight in gold – set up and light your shot and include the Passport device in a prominent place, take a second shot without it, and use shot 1 to custom white balance shot 2.  A dead easy process that makes the device invaluable for portrait and studio work etc.

But I hope by now you can begin to see the futility of trying to use a custom camera calibration profile on a “one size fits all” basis – it just won’t work correctly; and yet for the most part this is how it’s marketed – especially by third party retailers.


Black Ink Type

Black ink type, and black ink switching when moving from matte to luster and gloss papers – here are my thoughts on this, initially triggered by Frank’s reply to my previous article HERE.

And I quote:

Another great and instructive article Andy. I have the r3000 but get slightly annoyed with the black ink changes from one to the other. Some further guidance on the use of these re paper ‘types’ would be appreciated by moi ~ please ♡

Look, he’s even put a heart in there – bless you Frank, that’s more than I’ve got out of ‘her indoors’ for years!

Now the basic school of thought over this switching of black ink type is this:

  • PK, or Photo Black ink type supposedly produces a smooth, highly glossy black.
  • MK or Matte Black ink type produces a dull, flat black.
  • Using a matte finish paper requires the MATTE black ink type.
  • Using Luster or Gloss paper requires the Photo black ink type.

The PK black ink type really only produces a HIGH GLOSS finish when chucked onto HIGH GLOSS media.  It’s got a rather less glossy, more ‘eggshell’ finish when used on a luster-finish paper.  There does come a “tipping point”, though, where it will look a little shinier than the finish of the paper – and it’s at this tipping point that theory, clever-dicks and user guides tell you there’s a need to switch to the matte black ink type.

The Matte black ink type does exactly what point two says it does.

The third point – replace the word “requires” with the phrase “can cope with” and we’d be about right.

The fourth point is absolutely true; get this wrong by printing with the MK black ink type on high gloss paper and you’ll just waste consumables, and potentially end up with the type of clean-up operation normally the preserve of Exxon & BP.  Dot gain on steroids!

There’s also an argument that the MK black ink type produces a deeper black on matte finish paper than the PK black ink type – this is also true:


Permajet canned profiles for Museum paper on the Epson 4800 printer using PK and MK black ink types.

As we can clearly see, the Matte black ink type does indeed accommodate a deeper black point than its counterpart Photo black ink type.

Adopting the Common Sense Approach

There are a few things we need to think about here, and the first one is my constant mantra that the choice of paper is governed by the “overall look, feel and atmosphere of the finished image” when it’s sitting there on your monitor.

Paper choice IS the final part of the creative process; for all the reasons I’ve mentioned in past blog posts.

You will also know by now that in my world there is little room for high gloss paper – it’s a total pain in the bum because of its highly reflective surface; but that same surface can allow you to print the very finest of details.

But here’s common sense point number 1 – the majority of people reading this blog, attending my workshops and coming to me for 1to1 tuition CAN NOT produce images with detail fine enough to warrant this single benefit of high gloss paper.

That’s not because they’re daft or rubbish at processing either – it’s simply due to the fact that they shoot 35mm format dSLR, not £30K medium format.  The sensors we commonly use can’t record enough ultra fine detail.  There’s a really good comparison between the Nikon D800 and an IQ160 here, it’s well worth having a look – then you’ll see what I’m on about.

The point I’m trying to make is this: print on gloss from 35mm if you like, but you are saddling yourself with its problems without truthfully getting any of the benefit – though you can kid yourself if you like!

I Lust After Luster Papers But How Lusty Is That Luster?

As I mentioned in the previous post, Calumet Brilliant Museum Satin Matte Natural is NOT a matte finish paper.

True matte papers never really hold much appeal for me if I’m honest, because they are very dull, flat and relatively lifeless.  Yes, a 12×12 inch monochromatic image might look stunning, especially hanging in an area where reflections might prove difficult for any other print surface.

But that same image printed 8 foot square might well “kill” any room you hang it in, just because it’s so dull and so damned BIG.

True matte papers do have their uses that’s for sure, but in the main you need to discriminate between matte and what I call matte “effect”.

Permajet Fine Art Museum 310, Matte Plus and Portrait 300 are papers that spring to mind as falling into this matte effect category – and wouldn’t you know it, there are canned profiles for these papers for both PK and MK black ink types, as you can see from the image earlier in the post.

So, with regard to black ink type switching you have to ask yourself:

  • Am I using a paper that ACTUALLY NEEDS the MK black ink type?  Chances are you’re probably not!
  • If I am, do I really want to – how big a print am I doing?

In my own print portfolio I only have two images that benefit from being printed on a “dead” media surface, and they are both printed to Permajet Museum using the PK black ink type.

I had another one that looked “nearly there”, but the heavy texture of the paper detracted from the image, so it was re-proofed and printed to Matt Plus, again using PK ink.  It looked just the same from a colour/luminance standpoint, but worse from a ‘style’ point of view because of the zero texture.

Along comes Calumet Museum Satin Matte Natural!

The subtle texture gets me where I wanted to be on that score, and that ever-so-soft luster just makes the colours come to life that tiny bit more, giving me a print variation that I love and hadn’t even envisaged at the time I did the original print.

Ink Type Switching

I have to say at the outset that I do NOT own an R3000 printer – I use wide format Epson printers and so have no commercial need for the 3000 DT format.  But I always advise people looking for a printer to buy one – it’s a stunning machine that punches well above its weight at its price point.

My Epson wide format does not hold both black ink types.  Switching entails a rather tedious and highly wasteful process, which I have neither the desire nor the need to embark upon.

But if you have any brand of printer that carries both types on board then I’d highly recommend you set the black ink type to PK and turn any auto-switching OFF – that is, set switching to manual.

Right, now the super-pessimist in me shines through!

I’m not a fan of Epson papers on the whole, and there’s a lot more choice and far better quality available from third party suppliers ranging from Fotospeed to Hahnemuhle, Canson, Red River and all points in between.

Now third party suppliers in the main will tell you to use one black ink type or the other – or either – and give you the correct media settings (Brilliant – are you reading this??).

But, if you have auto switching enabled, and use Epson paper, the print head sees the paper surface and automatically switches the ink to the ‘supposed’ correct type.  This switching process requires the printer to purge the black ink line and refill it with the ‘correct’ black ink type before printing commences.

Now these figures are the stats quoted from Epson:

Black ink conversion times:

  • Matte to Photo Black: approx. 3 min. 30 sec
  • Photo to Matte Black: approx. 2 min.

Ink used during conversion:

  • Matte to Photo Black approx. 3 ml
  • Photo to Matte Black approx. 1 ml

Now why the times and volumes aren’t the same in both directions is a bit of a mystery to me and doesn’t make sense.  But what is killer is that the carts are only 26 (25.9)ml and around £24 each, so even a handful of black ink type changes is going to burn through several pounds’ worth of ink – and that’s without doing any bloody printing!!!
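As a sanity check on the cost, here’s the arithmetic using only Epson’s quoted purge volumes above (real-world waste per switch can be higher once any accompanying cleaning cycles are counted, so treat this as a lower bound):

```python
# Rough cost of black-ink switching, from the figures quoted above.
CART_ML = 25.9          # ml per cartridge
CART_PRICE_GBP = 24.0   # approximate price per cartridge
MK_TO_PK_ML = 3.0       # ml purged, Matte -> Photo
PK_TO_MK_ML = 1.0       # ml purged, Photo -> Matte

price_per_ml = CART_PRICE_GBP / CART_ML

def switch_cost(mk_to_pk_swaps, pk_to_mk_swaps):
    """Pounds' worth of black ink purged by a run of conversions."""
    ml = mk_to_pk_swaps * MK_TO_PK_ML + pk_to_mk_swaps * PK_TO_MK_ML
    return ml * price_per_ml

# Three round trips (six conversions in all):
print(round(switch_cost(3, 3), 2))  # -> 11.12
```

Eleven-plus quid of ink down the drain for six swaps, before a single sheet has gone through the printer – which is the whole argument for leaving auto-switching OFF.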

Whenever I demo this printer at a workshop I never use Epson paper, auto-switching is OFF, and I never get a head sensor warning telling me to switch ink – even if I load Permajet Museum, the head sensor doesn’t warn me about the fact that I’m using PK ink.

Yes the printer could be up the spout, but using a canned PK profile the resulting print would tend to indicate otherwise.

Or something slightly more dark and sinister might be happening – or rather NOT, because I’m not using OEM paper… What was that I heard you say?  Good gracious me…you might think that but I couldn’t possibly comment!

One thing to bear in mind is this: for the most part, the majority of print media will work exceptionally well with the PK black ink type – BUT NOT THE OTHER WAY AROUND – you’ve been warned.  If you want to know how the captain of the Exxon Valdez felt, and be up to your ass in black stuff, then go ahead and give it a try – but don’t send the cleaning bills to me!

I did it once years ago with an HP printer – I can still see matte black ink tide marks on the skirting board in my office… it wasn’t pretty!  And it screwed the printer up totally.

Using PK on matte media will only affect the D-max and lower the overall contrast a wee bit; unless it’s a very low key image with vast areas of blackish tones in it, for the most part you’d perhaps struggle to notice.  Sometimes you might even find that the drop in contrast works to your advantage.

But don’t forget, you might not be using a matte media at all, even though it visually looks like it and says the word matte in the paper name.  If the paper manufacturer supplies a PK and an MK profile for the same paper then save yourself time and money and use the PK profile to soft-proof to AND to control the printer colour management.

Did that answer your question Frank – FRANK – can you hear me Frank??!!


Desktop Printing 101

Understanding Desktop Printing – part 1

 

Desktop printing is what all photographers should be doing.

Holding a finished print of your epic image is the final part of the photographic process, and should be enjoyed by everyone who owns a camera and loves their photography.

But desktop printing has a “bad rap” amongst the general hobby photography community – a process full of cost, danger, confusion and disappointment.

Yet there is no need for it to be this way.

Desktop printing is not a black art full of ‘ju-ju men’ and bear-traps  – indeed it’s exactly the opposite.

But if you refuse to take on board a few simple basics then you’ll be swinging in the wind and burning money for ever.

Now I’ve already spoken at length on the importance of monitor calibration & monitor profiling on this blog HERE and HERE so we’ll take that as a given.

But in this post I want to look at the basic material we use for printing – paper media.

Print Media

A while back I wrote a piece entitled “How White is Paper White” – it might be worth you looking at this if you’ve not already done so.

Over the course of most of my blog posts you’ll have noticed a recurring undertone: contrast needs controlling.

Contrast is all about the relationship between blacks and whites in our images, and the tonal separation between them.

This is where we, as digital photographers, can begin to run into problems.

We work on our images via a calibrated monitor, normally calibrated to a gamma of 2.2 and a D65 white point.  Modern monitors can readily display true black and true white (Lab 0 to Lab 100/RGB 0 to 255 in 8 bit terms).

Our big problem lies in the fact that you can print NEITHER of these luminosity values in any of the printer channels – the paper just will not allow it.

A paper’s ability to reproduce white is obviously limited by the brightness and background colour tint of the paper itself – there is no such thing as ‘white’ paper.

But a paper’s ability to render ‘black’ is the other vitally important consideration – and it comes as a major shock to a lot of photographers.

Let’s take 3 commonly used Permajet papers as examples:

  • Permajet Gloss 271
  • Permajet Oyster 271
  • Permajet Portrait White 285

The following measurements have been made with a ColorMunki Photo & Colour Picker software.

L* values are the luminosity values in the L*ab colour space where 0 = pure black (0RGB) and 100 = pure white (255RGB)

Gloss paper:

  • Black/Dmax = 4.4 L* or 14,16,15 in 8 bit RGB terms
  • White/Dmin = 94.4 L* or 235,241,241 (paper white)

From these measurements we can see that the deepest black we can reproduce has an average 8bit RGB value of 15 – not zero.

We can also see that “paper white” has a leaning towards cyan due to the higher 241 green & blue RGB values – red is 6 points down – and a similar cool cast carries over into the blacks.
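You can see those casts at a glance just by comparing each channel with the triplet’s average – a trivial Python sketch (the `cast` helper is purely illustrative):

```python
def cast(rgb):
    """Deviation of each channel from the triplet's mean:
    negative = channel is deficient, positive = channel is strong."""
    mean = sum(rgb) / 3
    return [round(c - mean, 1) for c in rgb]

print(cast((235, 241, 241)))   # Gloss paper white: red sits below green & blue
print(cast((14, 16, 15)))      # Gloss Dmax black: the same cool bias, milder
```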

Oyster paper:

  • Black/Dmax = 4.7 L* or 15,17,16 in 8 bit RGB terms
  • White/Dmin = 94.9 L* or 237,242,241 (paper white)

We can see that the Oyster maximum black value is slightly lighter than the Gloss paper’s (L* values reflect such differences with far better accuracy than 8 bit RGB values).

We can also see that the paper has a slightly brighter white value.

Portrait White Matte paper:

  • Black/Dmax = 25.8 L* or 59,62,61 in 8 bit RGB terms
  • White/Dmin = 97.1 L* or 247,247,244 (paper white)

You can see that paper white is brighter than either Gloss or Oyster.

The paper white is also deficient in blue, but the Dmax black is deficient in red.

It’s quite common to find this skewed cool/warm split between dark tones and light tones when printing, and sometimes it can be the other way around.

And if you don’t think there’s much of a difference between 247,247,244 & 247,247,247 you’d be wrong!

The image below (though exaggerated slightly due to jpeg compression) effectively shows the difference – 247 neutral being at the bottom.


247,247,244 (top) and 247,247,247 (below) – slightly exaggerated by jpeg compression.

See how much ‘warmer’ the top of the square is?

But the real shocker is the black or Dmax value:


Portrait White matte finish paper plotted against wireframe sRGB on L*ab axes.

The wireframe above is the sRGB colour space plotted on the L*ab axes; the shaded volume is the profile for Portrait White.  The sRGB profile has a maximum black density of 0RGB and so reaches the bottom of the vertical L* axis.

However, that 25.8 L* value of the matte finish paper has a huge ‘gap’ underneath it.

The higher the black L* value, the larger the gap.

What does this gap mean for our desktop printing output?

It’s simple – any tones in our image that are DARKER, or have a lower L* value than the Dmax of the destination media will be crushed into “paper black” – so any shadow detail will be lost.

The same can equally be said for gaps at the top of the L* axis, where “paper white” or Dmin is lower than the L* value of the brightest tones in our image – they too will get homogenized into the all-encompassing paper white!
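A crude Python sketch of that ‘crushing’ (the clamp function is purely illustrative; the limits are the Portrait White L* figures measured above):

```python
def clamp_to_paper(l_star_tones, d_max=25.8, d_min=97.1):
    """Crudely simulate a paper's tonal limits: tones darker than the
    paper's Dmax collapse to 'paper black', tones lighter than its
    Dmin collapse to 'paper white'."""
    return [min(max(l, d_max), d_min) for l in l_star_tones]

# A tonal sweep from true black (L* 0) to true white (L* 100):
print(clamp_to_paper([0, 10, 25.8, 40, 70, 95, 100]))
# Everything at or below L* 25.8 merges into one paper-black tone,
# and the L* 100 highlight is pulled down to paper white at 97.1.
```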

Imagine we’ve just processed an image that makes maximum use of our monitor’s display gamut in terms of luminosity – it looks magnificent, and will no doubt look equally good in any form of electronic/digital distribution.

But if we send this image straight to a printer it’ll look really disappointing, if only for the reasons mentioned above – because basically the image will NOT fit on the paper in terms of contrast and tonal distribution, let alone colour fidelity.
It’s at this point where everyone gives up the idea of desktop printing:

  • It looks like crap
  • It’s a waste of time
  • I don’t know what’s happened
  • I don’t understand what’s gone wrong

Well, in response to the latter, now you do!

But do we have to worry about all this tech stuff ?

No, we don’t have to WORRY about it – that’s what a colour managed workflow & soft proofing are for.

But it never hurts to UNDERSTAND things, otherwise you just end up in a “monkey see monkey do” situation.

And that’s as dangerous as it can get – change just one thing and you’re in trouble!

But if you can ‘get the point’ of this post then believe me you are well on your way to understanding desktop printing and the simple processes we need to go through to ensure accurate and realistic prints every time we hit the PRINT button.



Gamma Encoding – Under the Hood

Gamma, Gamma Encoding & Decoding

Gamma – now there’s a term I see cause so much confusion and misunderstanding.

So many people use the term without knowing what it means.

Others get gamma mixed up with contrast, which is the worst mistake anyone could ever make!

Contrast controls the tonal relationship between black and white – the number of grey tones between them.  Higher contrast spreads black up into the darker mid tones and white down into the upper mid tones; in other words, both the black point and white point are moved.

The only tones that are not affected by changes in image gamma are the black point and white point – that’s why getting gamma mixed up with contrast is the mark of a “complete idiot” who should be taken outside and summarily shot before they have the chance to propagate this shocking level of misunderstanding!

What is Gamma?

Any device that records an image does so with a gamma value.

Any device which displays/reproduces said image does so with a gamma value.

We can think of gamma as the proportional distribution of tones recorded by, or displayed on, a particular device.

Because different devices have different gamma values problems would arise were we to display an image that has a gamma of X on a display with a gamma of Y:

Ever wondered what a RAW file would look like displayed on a monitor without any fancy colour & gamma managed software such as LR or ACR?


A raw file displayed on the back of the camera (left) and as it would look on a computer monitor calibrated to a gamma of 2.2 & without any colour & gamma management (right).

The right hand image looks so dark because it has a native gamma of 1.0 but is being displayed on a monitor with a native gamma of 2.2

RAW file Gamma

To all intents and purposes ALL RAW files have a gamma of 1.0


Camera Sensor/Linear Gamma (Gamma 1.0)

Digital camera sensors work in a linear fashion:

If we have “X” number of photons striking a sensor photosite then “Y” amount of electrons will be generated.

Double the number of photons by doubling the amount of light, then 2x “Y” electrons will be generated.

Halve the number of photons by reducing the light on the scene by 50% then 0.5x “Y” electrons will be generated.

We have two axes on the graph; the horizontal x axis represents the actual light values in the scene, and the vertical y axis represents the output or recorded tones in the image.

So, if we apply Lab L* values to our graph axes above, then 0 equates to black and 1.0 equates to white.

The “slope” of the graph is a straight line giving us an equal relationship between values for input and output.

It’s this relationship between input and output values in digital imaging that helps define GAMMA.

In our particular case here, we have a linear relationship between input and output values and so we have LINEAR GAMMA, otherwise known as gamma 1.0.

Now let’s look at a black to white graduation in gamma 1.0 in comparison to one in what’s called an encoding gamma:


Linear (top) vs Encoded Gamma

The upper gradient is basically the way our digital cameras see and record a scene.

There is an awful lot of information about highlights and yet the darker tones and ‘shadow’ areas are seemingly squashed up together on the left side of the gradient.

Human vision does not see things in the same way that a camera sensor does; we do not see linearly.

If the amount of ambient light falling on a scene suddenly doubles we will perceive the increase as an unquantifiable “it’s got brighter”; whereas our sensors response will be exactly double and very quantifiable.

Our eyes see a far more ‘perceptually even’ tonal distribution with much greater tonal separation in the darker tones and a more compressed distribution of highlights.

In other words we see a tonal distribution more like that contained in the gamma encoded gradient.

Gamma encoding can be best illustrated with another graph:


Linear Gamma vs Gamma Encoding 1/2.2 (0.4545)

Now sadly this is where things often get misunderstood, and why you need to be careful about where you get information from.

The cyan curve is NOT gamma 2.2 – we’ll get to that shortly.

Think of the graph above as the curves panel in Lightroom, ACR or Photoshop – after all, that’s exactly what it is.

Think of our dark, low contrast linear gamma image as displayed on a monitor – what would we need to do to the linear slope  to improve contrast and generally brighten the image?

We’d bend the linear slope to something like the cyan curve.

The cyan curve is the encoding gamma 1/2.2.

There’s a direct numerical relationship between the two gamma curves – linear and 1/2.2 – and it’s a simple power law:

  •  VO = VI^γ  where VO = output value, VI = input value and γ = gamma

Any input value (VI) on the linear gamma curve, raised to the power γ, equals the output value on the cyan encoding curve; and γ here works out as 0.4545 (i.e. 1/2.2):

  •  VI 0 = VO 0
  •  VI 0.25 = VO 0.532
  •  VI 0.50 = VO 0.729
  •  VI 0.75 = VO 0.878
  •  VI 1.0 = VO 1.0
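That table is easy to verify for yourself; a quick Python sketch (0.4545 is just 1/2.2 rounded):

```python
encode_gamma = 1 / 2.2   # ≈ 0.4545

# Reproduces the VI -> VO table above:
for v_in in (0.0, 0.25, 0.5, 0.75, 1.0):
    v_out = v_in ** encode_gamma
    print(f"VI {v_in:.2f} -> VO {v_out:.3f}")
```

Note that 0 and 1 map to themselves – exactly why a gamma change never moves the black point or white point.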

Now isn’t that bit of maths sexy………………..yeah!

Basically the gamma encoding process remaps all the tones in the image and redistributes them in a non-linear ratio which is more familiar to our eye.

Note: the gamma of human vision is not really 1/2.2 (gamma 0.4545).  It would be near impossible to quantify a true gamma for the eye, due to the behaviour of the iris etc., but to all intents and purposes modern photographic practice regards it as ‘similar to’ that value.

So the story so far equates to this:


Gamma encoding redistributes tones in a non-linear manner.

But things are never quite so straight forward are they…?

Firstly, if gamma < 1 (less than 1) the encoding curve goes upwards – as does the cyan curve in the graph above.

But if gamma > 1 (greater than 1) the curve goes downwards.

A calibrated monitor has (or should have) a calibrated device gamma of 2.2:


Linear, Encoding & Monitor gamma curves.

As you can now see, the monitor device gamma of 2.2 is the opposite of the encoding gamma – after all, the latter is the reciprocal of the former.
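Numerically the two curves really are exact inverses, which is easy to demonstrate with a couple of lines of Python (values are illustrative):

```python
def encode(v, g=2.2):
    """File/encoding side: gamma 1/2.2."""
    return v ** (1 / g)

def decode(v, g=2.2):
    """Monitor/decoding side: gamma 2.2."""
    return v ** g

mid_grey = 0.5
print(decode(encode(mid_grey)))   # back to 0.5 - encode and decode cancel out
print(decode(mid_grey))           # ~0.218 - linear data sent to a 2.2 monitor
                                  # unencoded, i.e. the 'dark raw file' effect
```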

So what happens when we apply the decoding gamma/monitor gamma of 2.2 to our gamma encoded image?


The net effect of Encode & Decode gamma – Linear.

That’s right, we end up back where we started!

Now, are you thinking:

  • Don’t understand?
  • Are we back with our super dark image again?

Welcome to the world’s biggest Bear-Trap!

The “Learning Gamma Bear Trap”

Hands up those who are thinking this is what happens:


If your arm so much as twitched then you are not alone!

I’ll admit to being naughty and leading you to the edge of the pit containing the bear trap – but I didn’t push you!

While you’ve been reading this post have you noticed the occasional random bold and underlined text?

Them’s clues folks!

The super dark images – both seascape and the rope coil – are all “GAMMA 1.0 displayed on a GAMMA 2.2 device without any management”.

That doesn’t mean a gamma 1.0 RAW file actually LOOKS like that in its own gamma environment!

That’s the bear trap!


Gamma 1.0 to gamma 2.2 encoding and decoding

Our RAW file actually looks quite normal in its own gamma environment (2nd from left) – but look at the histogram and how all those darker mid tones and shadows are piled up to the left.

Gamma encoding to 1/2.2 (gamma 0.4545) redistributes and remaps all those tones and lightens the image by pushing the curve up, BUT leaves the black and white points where they are.  No tones have been added or taken away; the operation just redistributes what’s already there.  Check out the histogram.

Then the gamma decode operation takes place and we end up with the image on the right – looks perfect and ready for processing, but notice the histogram, we keep the encoding redistribution of tones.

So, are we back where we started?  No.

Luckily for us, gamma encoding and decoding is all fully automatic within a colour managed workflow and RAW handlers such as Lightroom, ACR and Capture One Pro etc.

Image gamma changes are required when an image is moved from one RGB colour space to another:

  • ProPhoto RGB has a gamma of 1.8
  • Adobe RGB 1998 has a gamma of 2.2
  • sRGB has an oddball gamma that equates to an average of 2.2 but is nearly 1.8 in the deep shadow tones.
  • Lightroom’s working colour space is linear ProPhoto – in other words, gamma 1.0
  • Lightroom’s viewing space is MelissaRGB, which equates to ProPhoto with an sRGB gamma.
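That ‘oddball’ sRGB gamma is, strictly speaking, a piecewise curve: a short linear toe (slope 12.92) in the deepest shadows, then a 2.4-exponent power segment, the whole thing averaging out at roughly 2.2.  A sketch of the standard encode function:

```python
def srgb_encode(linear):
    """sRGB transfer function (per IEC 61966-2-1): linear 0-1 in, encoded 0-1 out."""
    if linear <= 0.0031308:
        return 12.92 * linear                   # linear toe - deep shadows
    return 1.055 * linear ** (1 / 2.4) - 0.055  # power segment - everything else

print(srgb_encode(0.001))   # toe region
print(srgb_encode(0.5))     # ~0.735 - close to a plain 2.2 gamma encode
```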

Image gamma changes need to occur when images are sent to a desktop printer – the encode/decode characteristics are actually part and parcel of the printer profile information.

Gamma awareness should be exercised when it comes to monitors:

  • Most plug & play monitors are set to far too high a gamma ‘out the box’ – get it calibrated properly ASAP; it’s not just about colour accuracy.
  • Laptop screen gamma changes with viewing position – God they are awful!

Anyway, that just about wraps up this brief explanation of gamma; believe me it is brief and somewhat simplified – but hopefully you get the picture!


Accurate Camera Colour within Lightroom

Obtaining accurate camera colour within Lightroom 5 – in other words, making the pics in your Lr Library look like they did on the back of the camera – is a problem I’m asked about more and more since the advent of Lightroom 5 AND the latest camera marks – especially Nikon!

UPDATE NOTE: Please feel free to read this post THEN go HERE for a further post on achieving image NEUTRALITY in Lightroom 6/CC 2015

Does this problem look familiar?


Back of the camera (left) to Lightroom (right) – click to enlarge.

The image looks fine (left) on the back of the camera, fine in the import dialogue box, and fine in the library module grid view UNTIL the previews have been created – then it looks like the image on the right.

I hear complaints that the colours are too saturated and the contrast has gone through the roof, the exposure has gone down etc etc.

All the visual descriptions are correct, but what’s responsible for the changes is mostly down to a shift in contrast.

Let’s have a closer look at the problem:


Back of the camera (left) to Lightroom (right) – click to enlarge.

The increase in contrast has resulted in “choking” of the shadow detail under the wing of the Red Kite, loss of tonal separation in the darker mid tones, and a slight increase in the apparent luminance noise level – especially in that out-of-focus blue sky.

And of course, the other big side effect is an apparent increase in saturation.

You should all be aware of my saying that “Contrast Be Thine Enemy” by now – and so we’re hardly getting off to a good start with a situation like this are we…………

So how do we go about obtaining accurate camera colour within Lightroom?

Firstly, we need to understand just what’s going on inside the camera with regard to various settings, and what happens to those settings when we import the image into Lightroom.

Camera Settings & RAW files

Let’s consider all the various settings with regard to image control that we have in our cameras:

  • White Balance
  • Active D lighting
  • Picture Control – scene settings, sharpening etc:
  • Colour Space
  • Distortion Control
  • Vignette Control
  • High ISO NR
  • Focus Point/Group
  • Uncle Tom Cobbly & all…………..

All these are brought to bear to give us the post-view jpeg on the back of the camera.

And let’s not forget

  • Exif
  • IPTC

That post-view/review jpeg IS subjected to all the above image control settings, and is embedded in the RAW file; and the image control settings are recorded in what is called the raw file “header”.

It’s actually a lot more complex than that, with IFD & MakerNote tags and other “scrummy” tech stuff – see this ‘interesting’ article HERE – but don’t fall asleep!

If we ship the raw file to our camera manufacturer’s RAW file handler software, such as Nikon Capture NX, then the embedded jpeg and the raw header data form the image preview.

However, to equip Lightroom with the ability to read headers from every digital camera on the planet would be physically impossible, and in my opinion, totally undesirable as it’s a far better raw handler than any proprietary offering from Nikon or Canon et al.

So, in a nutshell, Lightroom – and ACR – bin the embedded jpeg preview and ignore the raw file header, with the exception of white balance, together with Exif & IPTC data.

However, we still need to value the post jpeg on the camera because we use it to decide many things about exposure, DoF, focus point etc – so the impact of the various camera image settings upon that image have to be assessed.

Now here’s the thing about image control settings “in camera”.

For the most part they increase contrast, saturation and vibrancy – and as a consequence can DECREASE apparent DYNAMIC RANGE.  Now I’d rather have total control over the look and feel of my image rather than hand that control over to some poxy bit of cheap post-ASIC circuitry inside my camera.

So my recommendations are always the same – all in-camera ‘picture control’ type settings should be turned OFF; and those that can’t be turned off are set to LOW or NEUTRAL as applicable.

That way, when I view the post jpeg on the back of the camera I’m viewing the very best rendition possible of what the sensor has captured.

And it’s pointless having it any other way because when you’re shooting RAW then both Lightroom and Photoshop ACR ignore them anyway!

Accurate Camera Colour within Lightroom

So how do we obtain accurate camera colour within Lightroom?

We can begin to understand how to achieve accurate camera colour within Lightroom if we look at what happens when we import a raw file; and it’s really simple.

Lightroom needs to be “told” how to interpret the data in the raw file in order to render a viewable preview – let’s not forget folks, a raw file is NOT a visible image, just a matrix full of numbers.

In order to do this seemingly simple job Lightroom uses process version and camera calibration settings that ship inside it, telling it how to do the “initial process” of the image – if you like, it’s a default process setting.

And what do you think the default camera calibration setting is?


The ‘contrasty’ result of the Lightroom Nikon D4 Adobe Standard camera profile.

Lightroom defaults to this displayed nomenclature “Adobe Standard” camera profile irrespective of what camera make and model the raw file is recorded by.

Importantly, you need to bear in mind that this ‘standard’ profile is camera-specific in its effect: even though the displayed name is the same when handling, say, D800E NEF files as it is when handling 1DX CR2 files, the background functionality is totally different and specific to the make and model of camera.

What it says on the tin is NOT what’s inside – so to speak!

So this “Adobe Standard” has as many differing effects on the overall image look as there are cameras that Lightroom supports – is it ever likely that some of them are a bit crap??!!

Some files, such as the Nikon D800 and Canon 5D3 raws seem to suffer very little if any change – in my experience at any rate – but as a D4 shooter this ‘glitch in the system’ drives me nuts.

But the work-around is so damned easy it’s not worth stressing about:

  1. Bring said image into Lightroom (as above).
  2. Move the image to the DEVELOP module
  3. Go to the bottom settings panel – Camera Calibration.
  4. Select “Camera Neutral” from the drop-down menu:

    Change camera profile from ‘Adobe Standard’ to ‘Camera Neutral’ – see the difference!

    You can see that I’ve added a -25 contrast adjustment in the basics panel here too – you might not want to do that*

  5. Scoot over to the source panel side of the Lightroom GUI and open up the Presets Panel


    Open Presets Panel (indicated) and click the + sign to create a new preset.

  6. Give the new preset a name, and then check the Process Version and Calibration options (because of the -25 contrast adjustment I’ve added here the Contrast option is ticked).
  7. Click CREATE and the new “camera profile preset” will be stored in the USER PRESETS across ALL your Lightroom 5 catalogs.
  8. The next time you import RAW files you can ADD this preset as a DEVELOP SETTING in the import dialogue box:

    Choose new preset


    Begin the import

  9. Your images will now look like they did on the back of the camera (if you adopt my approach to camera settings at least!).

You can play around with this procedure as much as you like – I have quite a few presets for this “initial process” depending on a number of variables such as light quality and ISO used to name but two criteria (as you can see in the first image at 8. above).

The big thing I need you to understand is that the camera profile in the Camera Calibration panel of Lightroom acts merely as Lightroom’s own internal guide to the initial process settings it needs to apply to the raw file when generating its Library module previews.

There’s nothing complicated, mysterious or sinister going on, and no changes are being made to your raw images – there’s nothing to change.

In fact, I don’t even bother switching to Camera Neutral half the time; I just do a rough initial process in the Develop module to negate the contrast in the image, and perhaps noise if I’ve been cranking the ISO a bit – then save that out as a preset.

Then again, there are occasions when I find switching to Camera Neutral is all that’s needed – shooting low ISO wide angle landscapes, when I’m using the full extent of the sensor’s dynamic range, springs to mind.

But at least now you’ve got shots within your Lightroom library that look like they did on the back of the camera, and you haven’t got to start undoing the mess it’s made on import before you get on with the proper task at hand – processing – and keeping that contrast under control.

Some twat on a forum somewhere slagged this post off the other day saying that I was misleading folk into thinking that the shot on the back of the camera was “neutral” – WHAT A PRICK…………

All we are trying to do here is to make the image previews in Lr5 look like they did on the back of the camera – after all, it is this BACK OF CAMERA image that made us happy with the shot in the first place.

And by ‘neutralising’ the in-camera sharpening and colour/contrast picture control ramping the crappy ‘in camera’ jpeg is the best rendition we have of what the sensor saw while the shutter was open.

Yes, we are going to process the image and make it look even better, so our Lr5 preview starting point is somewhat irrelevant in the long run; but a lot of folk freak-out because Lr5 can make some really bad changes to the look of their images before they start.  All we are doing in this article is stopping Lr5 from making those unwanted changes.


MTF, Lens & Sensor Resolution


I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words used like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff then things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used looked rather like this:

5 line pairs per millimeter with a contrast ratio of 100% or 1.0


Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide.  These lines are easily differentiated because of the excessive contrast ratio between each line in a pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………

5 line pairs at bottom threshold of human vision - a 9% contrast ratio.


The Rayleigh Criterion basically stipulates that each line in a pair remains discernible down to a lower limit of 9% line-pair contrast for average human vision – that is, when each line pair is 0.2mm wide and viewed from 25cms.  Obviously they are reproduced much larger here – hence you can see ’em!

Low contrast limit for Human vision (left) & camera sensor (right).


However, it is said in some circles that dslr sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.
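Contrast ratios like that 9% figure are usually expressed as Michelson contrast – (max − min) / (max + min).  A Python sketch of the thresholds being discussed (the helper names are mine):

```python
def michelson_contrast(l_max, l_min):
    """Contrast between the light and dark lines of a pair, 0 to 1."""
    return (l_max - l_min) / (l_max + l_min)

def resolvable(l_max, l_min, threshold=0.09):
    """0.09 = Rayleigh-ish limit for average human vision (see above);
    a typical DSLR sensor needs more like 0.12-0.15."""
    return michelson_contrast(l_max, l_min) >= threshold

print(michelson_contrast(1.0, 0.0))            # 1.0 - the 100% pairs shown first
print(resolvable(0.60, 0.50))                  # True  - ~9.1%, just visible
print(resolvable(0.60, 0.50, threshold=0.12))  # False - lost to the sensor
```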

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. But we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which the lines/details begin to merge together and become less distinct.

So, what is this crucial factor that we are missing; well, it’s dead simple – the line pair per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’, that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense, doesn’t it?

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x the pixel row count per millimeter – referred to as the Nyquist Rate, simply because we have to define (sample) 2 lines in order to see/resolve 1 line.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5184 x 3456 pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
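The calculation above is easy to reproduce; a Python sketch of the Nyquist logic just described (published sensor dimensions, so expect small rounding differences from the figures quoted):

```python
import math

def sensor_lp_per_mm(px_w, px_h, mm_w, mm_h):
    """Nyquist-limited sensor resolution along the imaging-area diagonal:
    2 photosites per line pair, i.e. 0.5 lp per pixel."""
    pixel_diagonal = math.hypot(px_w, px_h)
    physical_diagonal = math.hypot(mm_w, mm_h)
    return 0.5 * pixel_diagonal / physical_diagonal

# Canon 1DX: 5184 x 3456 photosites on a 36 x 24 mm sensor
print(round(sensor_lp_per_mm(5184, 3456, 36.0, 24.0), 2))   # ~72 lp/mm
```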

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The optical low-pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power makes the sensor more prone to diffraction
  4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the MTF lp/mm values quoted by the likes of Nikon and Canon et al of 10 and 30 lp/mm bear little or no connection to the resolving power of their sensors – as I said in my previous post HERE – they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast – because it’s contrast that allows us to “see” the lines of detail?
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark where we can find out all sorts of more meaningful data about our potential lens purchase performance.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.

MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.

As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).
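For what it’s worth, those percentage values are modulation figures. Assuming the common Michelson definition of contrast, the sums look like this – the sample intensity values are invented purely for illustration:

```python
def modulation(i_max, i_min):
    """Michelson contrast of a line target: (max - min) / (max + min)."""
    return (i_max - i_min) / (i_max + i_min)

print(modulation(255, 0))              # full black-to-white target: 1.0 (MTF 100 input)
print(round(modulation(150, 105), 3))  # a washed-out reproduction: ~0.176, near the 10-15% floor
```

When the reproduced maximum and minimum intensities close up on each other, modulation heads towards zero and the lines merge into flat monotone – exactly the Rayleigh territory described above.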

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would underexpose the sun, so, if you like, MTF would still be sub 100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).

A lens can transfer full contrast but be unsharp.

A lens can have low contrast transmission (transfer function) but still be sharp.

In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate the word SHARPNESS; it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.

How acutance affects perceived “sharpness” – note that it is independent of contrast.

So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).
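A crude numerical illustration of that difference: both of the made-up edge profiles below span the full 0–255 range (identical contrast), but the steepness of the transition – our stand-in for acutance here – differs wildly:

```python
# Two hypothetical intensity profiles across the same edge; same overall contrast.
sharp_edge = [0, 0, 0, 255, 255, 255]     # abrupt transition: high acutance
soft_edge  = [0, 40, 110, 180, 230, 255]  # gradual transition: low acutance

def max_gradient(profile):
    """Largest step between neighbouring pixels -- a crude acutance proxy."""
    return max(abs(b - a) for a, b in zip(profile, profile[1:]))

print(max_gradient(sharp_edge))  # 255
print(max_gradient(soft_edge))   # 70
```

Same endpoints, very different gradients – which is precisely why contrast alone tells you nothing about perceived sharpness.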

Seeing as it’s possible for a lens to have a high degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!).

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do, exactly? It increases ACUTANCE – that’s what it does!

Yes, it gives us increased perceptual sharpness of parts of the subject in front of and behind the plane of sharp focus – but forget that bit for now – the perceived sharpness/acutance of the plane of focus itself increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.
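As a rough sketch of point 1., here’s one common rule of thumb (there are others): treat the sensor as diffraction-limited once the Airy disk (~2.44 × wavelength × f-number) spans about two pixel pitches. The 0.55µm wavelength and the two-pitch threshold are assumptions on my part, so treat the outputs as ballpark figures only:

```python
def diffraction_limited_fstop(mm_width, px_width, wavelength_um=0.55):
    """f-number at which the Airy disk (~2.44 * wavelength * N) grows to
    roughly two pixel pitches -- a ballpark figure, not gospel."""
    pitch_um = mm_width / px_width * 1000.0
    return 2.0 * pitch_um / (2.44 * wavelength_um)

print(round(diffraction_limited_fstop(35.9, 7360), 1))  # D800-class pixel pitch: ~f7.3
print(round(diffraction_limited_fstop(36.0, 4896), 1))  # coarser-pitch sensor: ~f11
```

Same lens, same aperture – but the denser sensor runs out of headroom a stop and a half earlier, which is exactly the “f14 on one sensor, f9 on another” effect.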

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition, but then switch to a 1.5x crop body, we have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – detail that needed 50 lp/mm of resolving power now effectively needs 75 lp/mm, so it’s harder for the lens to resolve it, even if the sensor’s resolution is capable of doing so.
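The arithmetic behind that 50-to-75 jump is just the crop factor; a minimal sketch:

```python
def required_lens_frequency(full_frame_lp_mm, crop_factor):
    """Spatial frequency the lens must resolve after stepping back
    to match the full-frame framing on a crop body."""
    return full_frame_lp_mm * crop_factor

print(required_lens_frequency(50, 1.5))  # 75.0 lp/mm on a 1.5x (DX) body
```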

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!
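Converting between the three conventions is simple enough – here’s a sketch, where the 50 lp/mm and 24mm frame height are just example values:

```python
def lp_per_image_height(lp_mm, image_height_mm):
    """lp/mm -> line pairs per image height (lp/IH)."""
    return lp_mm * image_height_mm

def lw_per_image_height(lp_mm, image_height_mm):
    """lp/mm -> line widths per image height (LW/IH): 2 line widths per pair."""
    return 2.0 * lp_per_image_height(lp_mm, image_height_mm)

print(lp_per_image_height(50, 24))  # 1200 lp/IH on a 24mm-high FX frame
print(lw_per_image_height(50, 24))  # 2400.0 LW/IH
```

Keep an eye on which unit a review site is using – an LW/IH number is double the lp/IH number for the same lens, and comparing the two raw is a recipe for bad conclusions.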

Alas, there is no single source of the empirical data we need in order to evaluate pure lens performance anymore.  And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you, as lens buyers, are left with a confusing myriad of test results, all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better, then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, and angle of subject to the lens axis, to name but three.  Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable – not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canons old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If on the other hand, our sensors have the capability to resolve more lines per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses resolution and acutance, then we would be in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’  in the first place!

Become a Patron!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.