Nikon D4S

The new Nikon D4S announced today

 

Nikon D4S left & D4 right


Well, that’s about right – my sexy Nikon D4 is officially out of date, and thanks to the Nikon D4S I’ve just lost a grand off the resale value of my camera. Cheers, chaps…

Is Uncle Andy stressed at all about being kitted out with yesterday’s gear?

Nope, not really.

So what’s new on the Nikon D4S ?

  • Well, for starters, there have been a few ergonomic tweaks which basically mean nothing.
  • Seemingly dispelled are the rumours that it would have a higher Mp count – apparently this stays the same at 16.2Mp.
  • I was expecting some major change in AF but no, they’ve kept the venerable Multi-Cam 3500FX system.
  • New sensor design.
  • BUT – they’ve changed the image processor to Expeed 4 from Expeed 3.
  • AND – they’ve changed the battery from EN-EL18 to an EN-EL18a.

Bear in mind all I’m going on is the web – perish the thought that Nikon would ever think my opinion worthy of note and ACTUALLY SEND ME ONE.

Other changes:

  • A new Group Area AF mode – which from my own photography PoV is fairly meaningless, seeing as we already have 9 point dynamic AF – I can’t see it’ll make much difference. Plus, the Group AF mode always focusses on the nearest point – something you rarely want the camera to do!
  • 6 possible white balance presets as opposed to 3 on the D4 – I jam all my cameras into Cloudy B1 custom WB and leave them there – so this improvement isn’t worth jumping up and down about either.
  • Fairly gimmicky S Raw
  • Spot White Balance

On the storage front most reports say that the D4S carries over the D4’s crazy arrangement of 1x CF plus 1x XQD.

My Basic Thoughts:

New Sensor – well, the benefits can’t be seen by yours truly until I see a few RAW files from it – preferably taken by myself.

I’m glad they’ve kept it to 16.2Mp – if you crunch the numbers this is the optimum Mp count for an FX sensor – as Canon worked out aeons ago with the 1DsMk2; but then joined the stupid Mp race.

Image Processor changes – well, it’s reportedly 30% faster than the Expeed 3, which basically means that the D4S fires off images to storage 30% faster.

Now I can go out with the D4 and shoot getting on for 100 uncompressed 14bit RAW files in one continuous burst at 8 or 9 fps – do I want to chew through my storage any faster?  NO!

The Expeed 4 gives better high ISO performance?

Well perhaps it does, but I look at it this way.  If light is so damn low that you need to shoot at crackpot ISO numbers then you can say one thing – the light is crap.

If the light is crap then the image will look like crap – it’s just that with the Expeed 4 it’ll be slightly less noisy crap.

If I can pull 1/8000th sec at f7 or f8 at 3200ISO in half-decent looking light using a D4 – which I do regularly – then why do I need a higher ISO capability?

The Red Squidger images you’ve seen in the previous blog articles are all 2000ISO and there is ZERO noise degradation – so again, why do I need more ISO capability?

Now if I was a ‘jobbing’ photojournalist, or I was embedded with the troops in Afghanistan or something of that ilk, then I’d perhaps have a much different attitude.

But I’m not, and from my own perspective of wildlife & natural history photography these changes are of little interest to me – especially when they have a £5k price tag.

Battery Changes

There was always a persistent gripe about the battery life of the D4 EN-EL18 power cell – well, I’ve got two of them and have had no problems AT ALL with batteries running low.

I was REALLY annoyed that they switched from EN-EL4A D2/D3 style batteries – I’d got a handful of those already, and now when I go to Norway in June I’ve got to take 2 bloody chargers with me: yes the venerable D3 will be getting a summer holiday this year as second camera.

So, for me at least, the increased battery life of the new Nikon D4S 18a batteries is somewhat inconsequential – why do I want a battery that lasts longer than ‘forever’?

Other Changes/Additions

I can’t see anything that excites me:  spot white balance?  Go and buy a Colour Checker Passport and do the job right – and that doesn’t cost £5k either (though they are a bit pricey).

Group Area AF – do me a favour (see above).

6 White Balance presets – what’s the point?

All of the above could be given away by Nikon as a firmware update for the D4 if they fancied being generous!

What I Would Have Got Excited About.

Twin UDMA 7 CF card slots and an XQD slot for dedicated video recording.

An improved AF module.

The ability to select ‘matched pairs’ of sensors – Canon offered this years ago and it was brilliant.

Internally recorded FX video of EXACTLY the same quality as that of a Canon 5D3, or at least the same quality as internal 1080p CROP.

AF mode selector back WHERE IT SHOULD BE!

Me being put in charge at Nikon!

In Conclusion

Do I want to buy one (even if I had the dough) – NO!

Do I wish I could afford one – NO!

Would I swap my D4 for a D4s – well of course I would.

Seriously though, I can just see an awful lot of people getting “hot under the collar” and stressing over this latest incarnation of this pro body from Nikon; but seriously, if you are then you need to just take a quiet step back and think about things calmly.

There is nothing – IMHO of course – on the D4S that warrants upgrading from the D4 – unless you have a penchant for spending your money that is.

But if you are still on a D3 or something older, and were thinking about buying a D4 – then hold off a while until the D4S is available; it makes better fiscal sense.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Flash Photography


 


Really Cute Red Squirrel

 

On Sunday myself and my buddy Mark Davies made a short foray up to the Lake District and our small Red Squirrel site.  The weather was horrible – sleet, sun, rain, cloud, sun, then rain again – in other words just not conducive to a half-decent session on the D4.

The one Achilles Heel with this site is the fact that it’s hard to get a decent background for your shots – it’s in the middle of a small wooded valley and you just can’t get away from tree trunks in the background.

This is further complicated by the fact that the “Squidgers” have a propensity for keeping in the ‘not so sunny’ bits, so frequently you end up with a scenario where backgrounds are brighter than foregrounds – which just won’t DO!

So what’s needed is some way to switch the lighting balance around to give a brighter foreground/subject AND a darker background.

Now that sounds all very well BUT; how do we achieve it?

Reflectors perhaps?  They’d do the trick but have one big problem; they rely on AMBIENT light  – and in the conditions we were shooting in the other day the value of the ambient light was up and down like a Yo-Yo.

Wouldn’t it be cool if we could have a consistent level of subject/foreground illumination AND at the same time have some degree of control over the exposure of the background?

Well with flash we can do just that!

Let’s look at a shot without flash:

 


No FLASH, AMBIENT light only – 1/320th @ f7.1

 

I don’t suppose this shot is too bad because the background isn’t strongly lit by the sun (it’s gone behind a cloud again!) but the foreground and background are pretty much the same exposure-wise.  For me there is not enough tonal separation between the two areas of the image, and the lighting is a bit flat.

If we could knock a stop or so out of the background; under expose it, then the image would have more tonal separation between foreground and background, and would look a lot better, but of course if we’re just working with ambient light then our adjusted exposure would under expose the foreground as well, so we’d be no better off.

Now look at the next image – we’ve got a background that’s under exposed by around  -1.5Ev, but the subject and foreground are lit pretty much to the same degree as before, and we’ve got a little more shape and form to the squirrel itself – it’s not quite so flat-looking.

 


With FLASH added – 1/800th @ f7.1

 

The image also has the slight sense that it’s been shot in more sunny conditions – which I can promise you it wasn’t !

And both images are basically straight off the camera, just with my neutral camera profile applied to them on import.

 

The Set Up


The Setup – shocking iPhone 3 quality!

 

The first secret to good looking flash photography OF ANY KIND is to get the damn flash OFF the camera.

If we were in a totally dark studio with the sexiest looking model on the planet we’d NOT be lighting her with one light from the camera position now would we?

So we use basic studio lighting layouts where ever we can.

There are two other things to consider too:

  •   It’s broad daylight, so our exposure will contain both FLASH and an element of AMBIENT light – so we are working along the premise of ADDING to what’s already there.
  •   If we put the flash closer to the subject (off camera) then the output energy has less distance to travel in order to do its job – so it doesn’t have to have as much power behind it as it would have if emanating from the camera position.

 

You can see in the horrible iPhone 3 shot I took of the setup that I’m using two flash guns with white Lambency diffusers on them: one on a stand to the left and slightly in front of the log where the squirrels will sit, and one placed on the set base (Mr. Davies’ old knackered Black & Decker Workmate!) slightly behind the log, about the same distance from where I anticipate a squirrel will sit as the left flash.

The thing to note here is that I’m using the SIDE output of these Lambency diffuser domes and NOT the front – that’s why they are pointed up at the sky. The side output of these diffusers is very soft – just what the flash photography doctor ordered in terms of ‘keeping it real’.

The left light is going to be my MAIN light, the right is my FILL light.

The sun, when & if it decides to pop its head out, will be behind me and to my left so I place my MAIN light in a position where it will ‘simulate’ said ball in the sky.

The FILL light basically exists to ‘counter balance’ the ‘directionality’ of the MAIN light, and to weaken any shadows thrown by the MAIN light.

Does this flash bother a subject? For the most part NOT SO YOU’D NOTICE!

Take a look at the shot below – the caption will be relevant shortly.

This SB800 has just fired in “front curtain synch” and the balance of the exposure is from the ambient light – the shutter is still open after the flash has died. Does the squirrel look bothered?

Settings & The Black Art!

Before we talk about anything else I need to address the shutter curtain synch question.

We have two curtain synch options, FRONT & REAR.

Front Curtain (as in the shot above) – this means that the flash will fire as the front curtain starts to move, and most likely, the flash will be finished long before the rear curtain closes. If your subject reacts to the flash then some element of subject movement might be present in the shot due to the ambient light part of the exposure.

Rear Curtain Synch – my recommended ‘modus operandi’ – the ‘ambient only’ part of the exposure gets done first, then the flash fires as the rear curtain begins to close the exposure. This way, if the subject reacts to the flash the exposure will be over before it has chance to – MOSTLY!

The framing I want, and the depth of field I want, dictate my camera position and aperture – in this case f7 or f8; f7.1 is what I actually went for.

 

I elect to go with 2000 iso on the D4.

So now my only variable is shutter speed.

Ambient light dictates that to be 1/320th on average, and I want to UNDER EXPOSE that background by at least a stop and a bit (technical terms indeed!) so I elect to use a shutter speed of 1/800th.

So that’s it – I’m done; seeing as the light from the flashes will be constant my foreground/subject will ALWAYS be exposed correctly. In rear curtain synch I’ll negate the risk of subject movement ‘ghosting’ in the image, and at 1/800th I’ll have a far better chance of eliminating motion blur caused by a squirrel chewing food or twitching its whiskers etc.
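The arithmetic behind that shutter speed choice is just a base-2 logarithm of the two speeds. Here’s a minimal sketch (the function name is my own, not anything from a camera or real library):

```python
import math

def stops_of_underexposure(ambient_shutter: float, chosen_shutter: float) -> float:
    """How many stops darker the ambient-lit background goes when we
    shorten the shutter from the metered ambient speed."""
    return math.log2(ambient_shutter / chosen_shutter)

# Metered ambient exposure: 1/320th; chosen speed: 1/800th
drop = stops_of_underexposure(1 / 320, 1 / 800)
print(f"Background under-exposed by {drop:.2f} stops")  # ≈ 1.32
```

Which is exactly the “stop and a bit” of background under-exposure described above, while the flash keeps the subject’s exposure constant.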

 

Triggering Off-Camera Flashes

 

We can fire off-camera flashes in a number of ways, but distance, wet ground, occasional rain and squirrels with a propensity for chewing everything they see means CORDS ain’t one of ’em!

With the Nikon system that I obviously use we could employ another flash on-camera in MASTER/COMMANDER mode with the flash pulse deactivated; a dedicated commander such as the SU800; or the built-in flash, if your camera has one with a commander mode in the menu.

The one problem with the Nikon CLS triggering system – and Canon’s too, as far as I know – is the reliance upon infra-red as the communication band. This is prone to a degree of unreliability in what we might term ‘dodgy’ conditions outdoors.

I use a Pocket Wizard MiniTT1 atop the camera and a FlexTT5 under my main light. The beauty of this system is that the comms is RADIO – far more reliable outdoors than IR.

Because a. I’m poor and can’t afford another TT5, and b. the proximity of my MAIN and FILL light, I put the SB800 FILL light in SU mode so it gets triggered by the flash from the MAIN light.

What I wouldn’t give for a dozen Nikon SB-910s and 12 TT5s – I’d kill for them!

The MAIN light itself is in TTL FP mode.

The beauty of this setup is that the MAIN light ‘thinks’ the TT5 is a camera, and the camera ‘thinks’ the MiniTT1 is a flash gun, so I have direct communication between camera and flash of iso and aperture information.

Also, I can turn the flash output down by up to -3Ev using the flash exposure compensation button without it having an effect on the background ambient exposure.

Don’t forget, seeing as my exposure is always going to be 1/800th @ f7.1 at 2000 iso, the CAMERA is in MANUAL exposure mode. So as long as the two flashes output enough light to expose the subject correctly at those settings (which they always will until the batteries die!) I basically can’t go wrong.

When shooting like this I also have a major leaning towards shooting in single servo – one shot at a time with just one AF point active.

 

Flash Photography – Flash Duration or Burn Time

Now here’s what you need to get your head around: as you vary the output of a flash like the SB800, the DURATION of the flash or BURN TIME of the tube changes.

Below are the quoted figures for the Nikon SB800, burn time/output:

1/1050 sec. at M1/1 (full) output
1/1100 sec. at M1/2 output
1/2700 sec. at M1/4 output
1/5900 sec. at M1/8 output
1/10900 sec. at M1/16 output
1/17800 sec. at M1/32 output
1/32300 sec. at M1/64 output
1/41600 sec. at M1/128 output

On top of that there’s something else we need to take into account – and this goes for Canon shooters too; though Canon terminology is different.

Shutter Speed & The FP Option

35mm format cameras all have a falling curtain shutter with two curtains, a front one, and a rear one.

As you press the shutter button the FRONT curtain starts to fall, then the rear curtain starts to chase after it; the two meet at the bottom of the shutter plane and the exposure is over.

The LONGER or slower the shutter speed, the greater the head-start the front curtain has!

At speeds of 1/250th and slower the front curtain has reached the end of its travel BEFORE the rear curtain wakes up and decides to move – in other words THE SENSOR is FULLY exposed.

The fastest shutter speed that results in a FULLY EXPOSED film plane/sensor is the basic camera-to-flash synch speed; X synch as it used to be called, and when I started learning about photography this was usually 1/60th; and on some really crap cameras it was 1/30th!

But with modern technology and light weight materials these curtains can now get moving a lot faster, so basic synch now runs at 1/250th for a full frame DSLR.
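The decision that follows from this is a one-liner: any shutter speed faster than X-sync means the sensor is never fully uncovered, so the flash has to pulse. A minimal sketch, assuming the 1/250th full-frame sync speed mentioned above:

```python
X_SYNC = 1 / 250  # typical full-frame sync speed, as noted above

def needs_hss(shutter_speed: float) -> bool:
    """True when the shutter is faster than X-sync - the curtains form a
    travelling slot, so the flash must strobe (Nikon Auto FP / Canon HSS)."""
    return shutter_speed < X_SYNC

print(needs_hss(1 / 800))  # True  - the 1/800th flash shots above need FP mode
print(needs_hss(1 / 200))  # False - a single flash pulse is fine
```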

If you go into your camera’s flash menu you’ll find an AUTO FP setting on Nikon; Canon refers to this as HSS or High Speed Synch – which makes far more sense (Nikon please take note, Canon got something right so please replicate!).

There’s something of an argument as to whether FP stands for Focal Plane or Flash Pulse; and frankly both are applicable, but it means the same as Canon’s HSS or High Speed Synch.

At speeds above/faster than 1/250th the sensor/film plane is NOT fully exposed. The gap between the front and rear curtains forms a slot or ‘letter box’ that travels downwards across the face of the sensor, so the image is, if you like, ‘scanned’ onto the imaging plane.

Obviously this is going to cause one heck of an exposure problem if the flash output is ‘dumped’ as a single pulse.

So FP/HSS mode physically pulses or strobes the flash output to the point where it behaves like a continuous light source.

If the flash was to fire with a single pulse then the ‘letterbox slot’ would receive the flash exposure, but you’d end up with bands of under exposure at the bottom or top of the image depending on the curtain synch mode – front or rear.

In FP/HSS mode the power output of each individual pulse in the sequence will drop as the shutter speed shortens, so even though you might have 1:1 power selected on the back of the flash itself (which I usually do on the MAIN light, and 1/2 on the FILL light) the pulses of light will be of lower power, but their cumulative effect gives the desired result.

By reviewing the shot on the back of the camera we can compensate for changes in ambient in the entire scene (we might want to dilute the effect of the main light somewhat if the sun suddenly breaks out on the subject as well as the background) by raising the shutter speed a little – or we might want to lighten the shot globally by lowering the shutter speed if it suddenly goes very gloomy.

We might want to change the balance between ambient and flash; this again can be done from the camera with the flash exposure compensation controls; or, if needs be, by physically getting up and moving the flash units a little nearer to or further away from the subject.

All in all, using flash is really easy, and always has been.

Except nowadays manufacturers tend to put far more controls and modes on things than are really necessary; the upshot of which is to frighten the uninitiated and then confuse them even further with instruction manuals that appear to be written by someone under the influence of Class A drugs!

 


“Trouble Brewing..” Confrontation over the right to feed between two Red Squirrels.

 

The whole idea of flash is that it should do its job but leave no obvious trace to the viewer.

But its benefits to you as the photographer are invaluable – higher shutter speeds, more depth of field and better isolation of the subject from its background are the three main ones that you need to be taking advantage of right now.

If you have the gear and don’t understand how to use it then why not book a tuition day with me – then perhaps I could afford some more TT5s!


Paper White – Desktop Printing 101

Paper White video

A while back I posted an article called “How White is Paper White”.

As a follow-up to my last post on the basic properties of printing paper media I thought I’d post this video to refresh the idea of “white”.

In this video we basically look at a range of 10 Permajet papers and simply compare their tints and brightness – it’s an illustration I give at my print workshops which never fails to amaze all the attendees.

I know I keep ‘banging on’ about this but you must understand:

  • Very few paper whites are even close to being neutral.
  • No paper is WHITE in terms of luminosity – RGB 255 in 8 bit colour terms.
  • No paper can hold a true black – RGB 0 in 8 bit colour terms.

In real-world terms ALL printing paper is a TINTED GREY – some cool, some warm.


If we attempted to print the image above on a cool tinted paper then we would REDUCE or even CANCEL OUT the warm tonal effects and general ‘atmosphere’ of the image.

Conversely, print it to a warmer tinted ‘paper white’ and the atmosphere would be enhanced.

Would this enhancement be a good thing?  Well, er NO – not if we were happy with our original ‘on screen’ processing.

You need to look upon ‘paper white’ as another TOOL to help you achieve your goal of great looking photographs, with a minimum of fuss and effort on your part.

We have to ‘soft proof’ our images if we want to get a print off the printer that matches what we see on our monitor.

But we can’t soft proof until we have made a decision about what paper we are going to soft-proof to.

Choosing a paper whose characteristics match our finished ‘on screen’ image – in terms of TINT especially – will make the job of soft proofing much easier.

How, why?

Proper soft proofing requires us to make a copy of our original image (there’s most people’s first mistake – not making a copy) and then make adjustments to said copy, in a soft proof environment, so that it renders correctly on the print – in other words it matches our original processed image.

Printing from Photoshop requires a hard copy, printing from Lightroom is different – it relies on VIRTUAL copies.

Either way, this copy and its proof adjustments are what get sent to the printer along what we call the PRINT PIPELINE.

The print pipeline has to do a lot of work:

  • It has to transpose our adjusted/soft proofed image colour values from additive RGB to print CMYK
  • It has to up sample or interpolate the image dpi instructions to the print head, depending on print output size.
  • It has to apply the correct droplet size instructions to each nozzle in the print head hundreds of times per second.
  • And it has to do a lot of other ‘stuff’ besides!!
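To give a feel for that first pipeline step, here’s the textbook RGB-to-CMYK transposition in miniature. This is purely illustrative – real printer drivers use ICC profiles and ink-specific lookup tables, not this naive formula:

```python
def rgb_to_cmyk_naive(r: float, g: float, b: float):
    """Textbook additive RGB -> subtractive CMYK conversion on 0-1 values.
    Real print pipelines use ICC profiles, not this simple arithmetic."""
    k = 1 - max(r, g, b)  # black ink replaces the common grey component
    if k == 1.0:
        return (0.0, 0.0, 0.0, 1.0)  # pure black: ink only in the K channel
    c = (1 - r - k) / (1 - k)
    m = (1 - g - k) / (1 - k)
    y = (1 - b - k) / (1 - k)
    return (c, m, y, k)

print(rgb_to_cmyk_naive(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0, 0.0) - red = magenta + yellow
```

Even this toy version shows why the driver’s workload matters: every pixel goes through a colour-space transposition before a single droplet is fired.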

The key component is the Printer Driver – and printer drivers are basically CRAP at carrying out all but the simplest of instructions.

In other words they don’t like hard work.

Printing to a paper white that matches our image:

  • Warm image to warm tint paper white
  • Cool image to cool paper white

will reduce the amount of adjustments we have to make under soft proofing and therefore REDUCE the printer driver workload.

The less work the print driver has to do, the lower the risk of things ‘getting lost in translation’ – and if nothing gets lost then the print matches the on-screen image – assuming of course that your eyes haven’t let you down at the soft proofing stage!


If we try to print this squirrel on the left to Permajet Gloss 271 (warmish image to very cool tint paper white) we can see what will happen.

We have got to make a couple of tweaks in terms of luminosity, BUT we’ve also got to make a global change to the overall colour temperature of the image – this will most likely present us with a need for further opposing colour channel adjustments between light and dark tones.

 


Whereas if the same image is sent to Permajet Fibre Base Gloss Warmtone, all we’ll have to do is tweak the luminosity up a tiny bit and the saturation down a couple of points, and basically we’ll be sorted.

So less work, and less work means less room for error in our hardware drivers; this leads to more efficient printing and reduced print production costs.

And reduced cost leads to a happy photographer!

Printing images is EASY –  as long as you get all your ducks in a row – and you’ve only got a handful of ducks to control.

Understanding print media and grasping the implications of paper white is one of those ducks………


Desktop Printing 101

Understanding Desktop Printing – part 1

 

Desktop printing is what all photographers should be doing.

Holding a finished print of your epic image is the final part of the photographic process, and should be enjoyed by everyone who owns a camera and loves their photography.

But desktop printing has a “bad rap” amongst the general hobby photography community – a process full of cost, danger, confusion and disappointment.

Yet there is no need for it to be this way.

Desktop printing is not a black art full of ‘ju-ju men’ and bear-traps  – indeed it’s exactly the opposite.

But if you refuse to take on board a few simple basics then you’ll be swinging in the wind and burning money for ever.

Now I’ve already spoken at length on the importance of monitor calibration & monitor profiling on this blog HERE and HERE so we’ll take that as a given.

But in this post I want to look at the basic material we use for printing – paper media.

Print Media

A while back I wrote a piece entitled “How White is Paper White” – it might be worth you looking at this if you’ve not already done so.

Over the course of most of my blog posts you’ll have noticed a recurring undertone of ‘contrast needs controlling’.

Contrast is all about the relationship between blacks and whites in our images, and the tonal separation between them.

This is where we, as digital photographers, can begin to run into problems.

We work on our images via a calibrated monitor, normally calibrated to a gamma of 2.2 and a D65 white point.  Modern monitors can readily display true black and true white (Lab 0 to Lab 100/RGB 0 to 255 in 8 bit terms).

Our big problem lies in the fact that you can print NEITHER of these luminosity values in any of the printer channels – the paper just will not allow it.

A paper’s ability to reproduce white is obviously limited to the brightness and background colour tint of the paper itself – there is no such thing as ‘white’ paper.

But a paper’s ability to render ‘black’ is the other vitally important consideration – and it comes as a major shock to a lot of photographers.

Let’s take 3 commonly used Permajet papers as examples:

  • Permajet Gloss 271
  • Permajet Oyster 271
  • Permajet Portrait White 285

The following measurements have been made with a ColorMunki Photo & Colour Picker software.

L* values are the luminosity values in the L*ab colour space where 0 = pure black (0RGB) and 100 = pure white (255RGB)

Gloss paper:

  • Black/Dmax = 4.4 L* or 14,16,15 in 8 bit RGB terms
  • White/Dmin = 94.4 L* or 235,241,241 (paper white)

From these measurements we can see that the deepest black we can reproduce has an average 8bit RGB value of 15 – not zero.

We can also see that “paper white” has a leaning towards cyan due to the higher 241 green & blue RGB values, and this carries over to the blacks, which are also deficient in red.

Oyster paper:

  • Black/Dmax = 4.7 L* or 15,17,16 in 8 bit RGB terms
  • White/Dmin = 94.9 L* or 237,242,241 (paper white)

We can see that the Oyster maximum black value is slightly lighter than the Gloss paper (L* values offer far better accuracy than 8 bit RGB values).

We can also see that the paper has a slightly brighter white value.

Portrait White Matte paper:

  • Black/Dmax = 25.8 L* or 59,62,61 in 8 bit RGB terms
  • White/Dmin = 97.1 L* or 247,247,244 (paper white)

You can see that paper white is brighter than either Gloss or Oyster.

The paper white is also deficient in blue, but the Dmax black is deficient in red.

It’s quite common to find this skewed cool/warm split between dark tones and light tones when printing, and sometimes it can be the other way around.
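If you want to check readings like the ones above yourself, the L*-to-8-bit relationship is just the standard CIE lightness and sRGB transfer functions back to back. A minimal sketch (because the measured papers carry slight channel tints, only the neutral average of each RGB triple will line up):

```python
def lstar_to_srgb8(lstar: float) -> int:
    """Convert a neutral CIE L* value to the nearest 8-bit sRGB grey,
    using the standard CIE and sRGB transfer functions."""
    # L* -> relative luminance Y (inverse CIE lightness function)
    if lstar > 8:
        y = ((lstar + 16) / 116) ** 3
    else:
        y = lstar / 903.3
    # Y -> gamma-encoded sRGB value
    if y <= 0.0031308:
        v = 12.92 * y
    else:
        v = 1.055 * y ** (1 / 2.4) - 0.055
    return round(v * 255)

print(lstar_to_srgb8(4.4))   # 15  - Gloss Dmax, matching the ~(14,16,15) reading
print(lstar_to_srgb8(94.4))  # 239 - Gloss paper white, average of (235,241,241)
print(lstar_to_srgb8(25.8))  # 61  - Portrait White matte Dmax
```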

And if you don’t think there’s much of a difference between 247,247,244 & 247,247,247 you’d be wrong!

The image below (though exaggerated slightly due to jpeg compression) effectively shows the difference – 247 neutral being at the bottom.


247,247,244 (top) and 247,247,247 (below) – slightly exaggerated by jpeg compression.

See how much ‘warmer’ the top of the square is?

But the real shocker is the black or Dmax value:


Portrait White matte finish paper plotted against wireframe sRGB on L*ab axes.

The wireframe above is the sRGB colour space plotted on the L*ab axes; the shaded volume is the profile for Portrait White.  The sRGB profile has a maximum black density of 0RGB and so reaches the bottom of vertical L axis.

However, that 25.8 L* value of the matte finish paper has a huge ‘gap’ underneath it.

The higher the black L* value the larger is the gap.

What does this gap mean for our desktop printing output?

It’s simple – any tones in our image that are DARKER, or have a lower L* value than the Dmax of the destination media will be crushed into “paper black” – so any shadow detail will be lost.

Equally the same can be said for gaps at the top of the L* axis where “paper white” or Dmin is lower than the L* value of the brightest tones in our image – they too will get homogenized into the all-encompassing paper white!

Imagine we’ve just processed an image that makes maximum use of our monitors display gamut in terms of luminosity – it looks magnificent, and will no doubt look equally as such for any form of electronic/digital distribution.

But if we send this image straight to a printer it’ll look really disappointing, if only for the reasons mentioned above – because basically the image will NOT fit on the paper in terms of contrast and tonal distribution, let alone colour fidelity.

It’s at this point where everyone gives up the idea of desktop printing:

  • It looks like crap
  • It’s a waste of time
  • I don’t know what’s happened.
  • I don’t understand what’s gone wrong

Well, in response to the latter, now you do!

But do we have to worry about all this tech stuff ?

No, we don’t have to WORRY about it – that’s what a colour managed work flow & soft proofing is for.

But it never hurts to UNDERSTAND things, otherwise you just end up in a “monkey see monkey do” situation.

And that’s as dangerous as it can get – change just one thing and you’re in trouble!

But if you can ‘get the point’ of this post then believe me you are well on your way to understanding desktop printing and the simple processes we need to go through to ensure accurate and realistic prints every time we hit the PRINT button.



Photoshop CC Update


Installing a new Photoshop CC update is supposed to be a simple matter of clicking a button and the job gets done.

This morning both my Mac systems were telling me to update from v14.1.2 to v14.2

I have two Macs, a late 2012 iMac and a mid 2009 Mac Pro.  The Mac Pro used to run Snow Leopard but was upgraded to Mountain Lion because of Lightroom 5 dropping Snow Leopard support.

Now I never have any problems with Cloud Updates from Adobe on the iMac, but sometimes the Mac Pro can do some strange things – and this morning was no exception!

The update installed on the iMac without a hitch, but when it completed on the Mac Pro I was greeted with a message telling me that some components had not installed correctly.  On opening Photoshop CC I found the version had rolled back to v14.0, and hitting UPDATE in both the app and my CC control panel simply informed me that my software was up to date and no updates were available!

So I just thought I’d do a blog entry on what to do if this ever happens to you!

 

Remove Photoshop CC

The first thing to do is UNINSTALL  Photoshop CC with the supplied uninstaller.

You’ll find this in the main Photoshop CC root directory:

Locate the Photoshop CC Uninstaller.

Take my advice and put a tick in the check box to “Remove Preferences” – the Photoshop preferences file can be a royal pain in the ass sometimes, so dump it – a new one will get written as soon as you fire Photoshop up after the new install.

Click UNINSTALL.

Once this action is complete YOU MUST RESTART THE MACHINE.

 

After the restart wait for the Creative Cloud to connect then open your CC control panel.

Under the Apps tab you’ll see that Photoshop CC is no longer listed.

Scroll down past all the apps Adobe have listed and you’ll come to Photoshop CC;  it’ll have an INSTALL button next to it – click the install button:

Install Photoshop CC from the Cloud control panel.

If you are installing the 14.1.2 to 14.2 update (the current one as of today’s date) you might find a couple of long ‘sticky bits’ during the installation process – notably between 1 and 20%, and a long one at 90% – just let the machine do its thing.

When the update is complete I’d recommend you do a restart – it might not be necessary, but I do it anyway.

Once the machine has restarted fire up Photoshop, click on ‘About Photoshop’ and you should see:

Photoshop “about screen” showing version number.

Because we dumped the preferences file we need to go and change the defaults for best working practice:

Preferences Interface tab.

If you want to change the BG colour then do it here.

Next, click File Handling:

File handling tab in Photoshop Preferences

Remove the tick from the SAVE IN BACKGROUND check box – like the person who put it there, you too might think background auto-save is a good idea – IT ISN’T – think about it!

Finally, go to Performance:

Photoshop preferences Performance tab

and change the Scratch Disk to somewhere other than your system drive if you have multiple internal drives fitted.  If you only have 1 internal drive then leave it “as is”.  You ‘could’ use an external drive as a scratch disk, but to be honest it really does need to be a fast drive over a fast connection – USB 2 to an old 250Gb portable isn’t really going to cut it!

You can go and check your Colour Settings, though these should not have changed – assuming you had ’em set right in the first place!

Here’s what they SHOULD look like:

Photoshop PROPER COLOUR SETTINGS!

That’s it – you’re done!

Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.

 

MTF, Lens & Sensor Resolution

I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words used like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff then things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used for those line pairs looked rather like this:

5 line pairs per millimeter with a contrast ratio of 100% or 1.0

Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide.  These lines are easily differentiated because of the excessive contrast ratio between each line in a pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………

5 line pairs at bottom threshold of human vision – a 9% contrast ratio.

The Rayleigh Criterion basically stipulates that the ‘discernability’ of each line in a pair is low end limited to a line pair contrast ratio of 9% or above, for average human vision – that is, when each line pair is 0.2mm wide and viewed from 25cms.  Obviously they are reproduced much larger here, hence you can see ’em!

Low contrast limit for Human vision (left) & camera sensor (right).

However, it is said in some circles that DSLR sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.
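To put a number on these contrast ratios, here’s a quick Python sketch using the Michelson definition of contrast, (bright − dark) / (bright + dark) – the grey values 150 and 125 are purely illustrative, not measured from anything:

```python
def michelson_contrast(bright, dark):
    """Contrast ratio of a line pair: (max - min) / (max + min)."""
    return (bright - dark) / (bright + dark)

# A pure black/white line pair: full 1.0 (100%) contrast.
print(michelson_contrast(255, 0))  # 1.0

# Two close grey tones sitting just above the 9% Rayleigh threshold...
pair = michelson_contrast(150, 125)
print(round(pair, 3))  # 0.091
# ...so average human vision can still just about separate the lines.
print(pair >= 0.09)  # True
```

Anything below that 0.09 figure and the pair merges into a single monotone as far as the eye is concerned.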

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. but we can’t see one line (detail) without another line (detail) next to it that has a different tonal value ( our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.

So, what is this crucial factor that we are missing; well, it’s dead simple – the line pair per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’ – that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense doesn’t it.

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x the number of pixels per millimeter – referred to as the Nyquist Rate – simply because we have to sample 2 pixels in order to see/resolve 1 line pair.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5184 x 3456 pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
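If you fancy checking my arithmetic, the sums are easily done in Python – any small differences from the figures above just come down to rounding of the quoted sensor dimensions:

```python
import math

def sensor_lp_per_mm(px_w, px_h, mm_w, mm_h):
    """Nyquist-limited sensor resolution in lp/mm, taken along the
    imaging-area diagonal: (pixel diagonal / physical diagonal) * 0.5."""
    return 0.5 * math.hypot(px_w, px_h) / math.hypot(mm_w, mm_h)

# Canon 1D IV: 4896 x 3264 pixels on a 27.9 x 18.6 mm sensor.
print(round(sensor_lp_per_mm(4896, 3264, 27.9, 18.6), 2))  # 87.74

# Nikon D7000: 4928 x 3264 pixels on a 23.6 x 15.6 mm sensor.
print(round(sensor_lp_per_mm(4928, 3264, 23.6, 15.6), 2))  # 104.47
```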

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The Optical Low Pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power brings an increased susceptibility to diffraction
  4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with cramming 36Mp+ into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the MTF lp/mm values of 10 and 30 lp/mm quoted by the likes of Nikon, Canon et al bear little or no relation to the resolving power of their sensors – as I said in my previous post HERE – they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast? – because it’s contrast that allows us to “see” the lines of detail.
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark, where we can find out all sorts of more meaningful data about the performance of our potential lens purchase.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.

MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.

As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would underexpose the sun, so, if you like, MTF would still be sub-100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).

A lens can have a high contrast transfer function but be unsharp.

A lens can have low contrast transfer function but still be sharp.

In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate that word SHARPNESS; and it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.

How acutance affects perceived “sharpness” – independently of contrast.

So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).

Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!)

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows have come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do exactly; it increases ACUTANCE is what it does – exactly!

Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.
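As a rough illustration of point 1, here’s a common rule-of-thumb sketch in Python – diffraction starts to bite once the Airy disk (diameter 2.44 × wavelength × f-number) grows wider than one Nyquist line pair, i.e. two photosites. The 550nm green wavelength and the approximate pixel pitches are my assumptions, and the exact limit depends on which criterion you pick:

```python
def diffraction_limited_aperture(pixel_pitch_um, wavelength_um=0.55):
    """Approximate f-number at which the Airy disk diameter
    (2.44 * wavelength * N) spans two pixel widths."""
    return 2 * pixel_pitch_um / (2.44 * wavelength_um)

# Nikon D4 (~7.3um pitch) vs Nikon D800 (~4.9um pitch).
print(round(diffraction_limited_aperture(7.3), 1))  # 10.9
print(round(diffraction_limited_aperture(4.9), 1))  # 7.3
```

So an aperture that’s perfectly safe on a 16Mp FX sensor is already softening things on a 36Mp one.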

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition but then we switch to a 1.5x crop body we then have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – that 50 lp/mm detail has just effectively risen to 75 lp/mm at the sensor, so it’s harder for the lens to resolve it, even if the sensor’s resolution is capable of doing so.

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!

Alas, there is no longer a single source of the empirical data we need in order to evaluate pure lens performance.  And because the outcome of any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you, as lens buyers, are left with a confusing myriad of test results, all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, and the angle of subject to the lens axis, to name but three.  Learning how to recognise these potential pitfalls and to work around them is what separates a good photographer from an average one – and by good I mean knowledgeable – not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canons old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If, on the other hand, our sensors have the capability to resolve more lines per millimeter than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we would be in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!

Lens Performance

I have a friend – yes, a strange concept I know, but I do have some – we’ll call him Steve.

Steve is a very talented photographer – when he’ll give himself half a chance; but impatience can sometimes get the better of him.

He’ll have a great scene in front of him but then he’ll forget things such as any focus or exposure considerations the scene demands, and the resulting image will be crap!

Quite often, a few of Steve’s character flaws begin to emerge at this juncture.

Firstly, Steve only remembers his successes; this leads to the unassailable ‘fact’ that he couldn’t possibly have ‘screwed up’.

So now we can all guess the conclusive outcome of that scenario can’t we……..that’s right; his camera gear has fallen short in the performance department.

Clairvoyance department would actually be more accurate!

So this ‘error in his camera system’ needs to be stamped on – hard and fast!

This leads to Steve embarking on a massive information-gathering exercise from various learned sources on ‘that there inter web’ – where another of Steve’s flaws shows up; that of disjointed speed reading…..

The terrifying outcome of these situations usually concludes with Steve’s confident affirmation that some piece of his equipment has let him down; not just by becoming faulty but sometimes, more worryingly by initial design.

These conclusions are always arrived at in the same manner – the various little snippets of truth and random dis-associated facts that Steve gathers, all get forcibly hammered into some hellish, bastardized ‘factual’ jigsaw in his head.

There was a time when Steve used to ask me first, but he gave up on that because my usual answer contravened the outcome of his first mentioned character flaw!

Lately one of Steve’s biggest peeves has been the performance of one or two of his various lenses.

Ostensibly you’ll perhaps think there’s nothing wrong in that – after all, the image generated by the camera is only as good as the lens used to gather the light in the scene – isn’t it?

 

But there’s a potential problem, and it  lies in what evidence you base your conclusions on……………

 

For Steve, at present, it’s manufacturers’ MTF charts, and comparisons thereof, coupled with his own images as they appear in Lightroom or Photoshop ACR.

Again, this might sound like a logical methodology – but it isn’t.

It’s flawed on so many levels.

 

The Image Path from Lens to Sensor

We could think of the path that light travels along in order to get to our camera sensor as a sort of Grand National horse race – a steeplechase for photons!

“They’re under starters orders ladies and gentlemen………………and they’re off!”

As light enters the lens it comes across its first set of hurdles – the various lens elements and element groups that it has to pass through.

Then they arrive at Becher’s Brook – the aperture, where there are many fallers.

Carefully staying clear of the inside rail and being watchful of any loose photons that have unseated their riders at Becher’s, we move on over Foinavon – the rear lens elements – and then arrive at the infamous Canal Turn – the Optical Low Pass filter, also known as the Anti-alias filter.

Crashing on past the low pass filter and on over Valentines, only the bravest photons are left to tackle the last big fence on their journey – The Chair – our camera sensor itself.

 

Okay, I’ll behave myself now, but you get the general idea – any obstacle that lies in the path of light between the front surface of our lens and the photo-voltaic surface of our sensor is a BAD thing.

The various obstacles to light as it passes through a camera (ASIC = Application Specific Integrated Circuit)

The problems are many, but let’s list a few:

  1. Every element reduces the level of transmitted light.
  2. Because the lens elements have curved surfaces, light is refracted or bent; the trick is to make all wavelengths of light refract to the same degree – failure results in either lateral or longitudinal chromatic aberration – or worse still, both.
  3. The aperture causes diffraction – already discussed HERE

We have already seen in that same previous post on Sensor Resolution that the number of megapixels can affect overall image quality in terms of perceived sharpness due to pixel pitch; so, all things considered, using photographs of any 3-dimensional scene is not always a wise method of judging lens performance.

And here is another reason why it’s not a good idea – the effect on image quality and perceived lens resolution of the anti-alias/moiré/optical low pass filter, and any other pre-filtering.

I’m not going to delve into the functional whys and wherefores of an AA filter, save to say that it’s deemed a necessary evil on most sensors, and that it can make your images take on a certain softness because it basically adds blur to every edge in the image projected by the lens onto your sensor.

The reasoning behind it is that it stops ‘moire patterning’ in areas of high frequency repeated detail.  This it does, but what about the areas in the image where its effect is not required – TOUGH!

 

Many photographers have paid service suppliers for AA filter removal just to squeeze the last bit of sharpness out of their sensors, and Nikon of course offer the ‘sort of AA filter-less’ D800E.

Side bar note:  I’ve always found that with Nikon cameras at least, the pro-body range seem to suffer a lot less from undesirable AA filtration softening than their “amateur” and “semi pro” bodies – most notably the D2X compared to a D200, and the D3 compared to the D700 & D300.  Perhaps this is due to a ‘thinner’ filter, or a higher quality filter – I don’t know, and to be honest I’ve never had the desire to ‘poke Nikon with a sharp stick’ in order to find out.

 

Back in the days of film things were really simple – image resolution was governed by just two things; lens resolution and film resolution:

1/image resolution = 1/lens resolution + 1/film resolution

Film resolution was a variable depending on the Ag Halide distribution and structure,  dye coupler efficacy within the film emulsion, and the thickness of the emulsion or tri-pack itself.

But today things are far more complicated.

With digital photography we have all those extra hurdles to jump over that I mentioned earlier, so we end up with a situation whereby:

1/Image Resolution = 1/lens resolution + 1/AA filter resolution + 1/sensor resolution + 1/image processor/imaging ASIC resolution
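Those reciprocal formulae are easy to play with in Python – the component figures below are purely illustrative, not measured values:

```python
def system_resolution(*component_lp_mm):
    """Combine component resolutions reciprocally:
    1/R_image = 1/R_1 + 1/R_2 + ... (all in lp/mm)."""
    return 1 / sum(1 / r for r in component_lp_mm)

# Film era: a 100 lp/mm lens on a 100 lp/mm emulsion.
print(round(system_resolution(100, 100), 1))  # 50.0

# Digital era: the same lens behind an AA filter, sensor and ASIC.
print(round(system_resolution(100, 150, 90, 200), 1))  # 30.5
```

Notice how every extra hurdle in the chain drags the final image resolution below that of the weakest single component.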

Steve is chasing after lens resolution under the slightly misguided idea that resolution equates to sharpness – which is not strictly true; and he is basing his conception of lens sharpness on the detail content and perceived detail ‘sharpness’ of his images, which are ‘polluted’, if you like, by the effects of the AA filter, sensor and imaging ASIC.

What it boils down to, in very simplified terms, is this:

You can have one particular lens that, in combination with one camera sensor produces a superb image, but in combination with another sensor produces a not-quite-so-superb image!

On top of the “fixed system” hurdles I’ve outlined above, we must not forget the potential for errors introduced by lens-to-body mount flange inaccuracies, and of course, the big elephant-in-the-room – operator error – ehh Steve.

So attempting to quantify the pure ‘optical performance’ of a lens using your ‘taken images’ is something of a pointless exercise; you cannot see the pure lens sharpness or resolution unless you put the lens on a fully equipped optical test bench – and how many of us have got access to one of those?

The truth of the matter is that the average photographer has to trust the manufacturers to supply accurately put together equipment, and he or she has to assume that all is well inside the box they’ve just purchased from their photographic supplier.

But how can we judge a lens against an assumed standard of perfection before we part with our cash?

A lot of folk, including Steve – look at MTF charts.

 

The MTF Chart

Firstly, MTF stands for Modulation Transfer Function – modu-what? I hear you ask!

OK – let’s deal with the modulation bit.  Forget colour for a minute and consider yourself living in a black & white world.  Dark objects in a scene reflect few photons of light – ’tis why they appear dark!  Conversely, bright objects reflect loads of the little buggers, hence these objects appear bright.

Imagine now that we are in a sealed room totally impervious to the ingress of any light from outside, and that the room is painted matte white from floor to ceiling – what is the perceived colour of the room? Black is the answer you are looking for!

Now turn on that 2 million candle-power 6500k searchlight in the corner.  The split second before your retinas melted, what was the perceived colour of the room?

Note the use of the word ‘perceived’ – the actual colour never changed!

The luminosity value of every surface in the room changed from black to white/dark to bright – the luminosity values MODULATED.

Now back in reality we can say that a set of alternating black and white lines of equal width and crisp clean edges represent a high degree of contrast, and therefore tonal modulation; and the finer the lines the higher is the modulation frequency – which we measure in lines per millimeter (lpmm).

A lens takes in a scene of these alternating black and white lines and, just like it does with any other scene, projects it into an image circle; in other words it takes what it sees in front of it and ‘transfers’ the scene to the image circle behind it.

With a bit of luck and a fair wind this image circle is being projected sharply into the focal plane of the lens, and hopefully the focal plane matches up perfectly with the plane of the sensor – what used to be referred to as the film plane.

The efficacy with which the lens carries out this ‘transfer’ in terms of maintaining both the contrast ratio of the modulated tones and the spatial separation of the lines is its transfer function.

So now you know what MTF stands for and what it means – good this isn’t it!

 

Let’s look at an MTF chart:

Nikon 500mm f4 MTF chart

Now what does all this mean?

 

Firstly, the vertical axis – this can be regarded as that ‘efficacy’ I mentioned above – the accuracy of tonal contrast and separation reproduction in the projected image; 1.0 would be perfect, and 0 would be crappier than the crappiest version of a crap thing!

The horizontal axis – this requires a bit of brain power! It is scaled in increments of 5 millimeters from the lens axis AT THE FOCAL PLANE.

The terminus value at the right hand end of the axis is unmarked, but equates to 21.63mm – half the opposing corner-to-corner dimension of a 35mm frame.
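That unmarked terminus is just half the frame diagonal – a one-liner in Python confirms it:

```python
import math

# Half the corner-to-corner diagonal of a 36 x 24 mm frame.
half_diagonal = math.hypot(36, 24) / 2
print(round(half_diagonal, 2))  # 21.63
```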

Now consider the diagram below:

The radial dimensions of the 35mm format.

These are the radial dimensions, in millimeters, of a 35mm format frame (solid black rectangle).

The lens axis passes through the center axis of the sensor, so the radii of the green, yellow and dashed circles correspond to values along the horizontal axis of an MTF chart.

Let’s simplify what we’ve learned about MTF axes:

MTF axes hopefully made simpler!

Now we come to the information data plots; firstly the meaning of Sagittal & Meridional.   From our perspective in this instance I find it easier for folk to think of them as ‘parallel to’ and ‘at right angles to’ the axis of measurement, though strictly speaking Meridional is circular and Sagittal is radial.

This axis of measurement is from the lens/film plane/sensor center to the corner of a 35mm frame – in other words, along that 21.63mm radius.

The axis of MTF measurement and the relative axial orientation of Sagittal & Meridional lines. NOTE: the target lines are ONLY for illustration.

Separate measurements are taken for each modulation frequency along the entire measurement axis:

Thin Meridional MTF measurement. (They should be concentric circles but I can’t draw concentric circles!).

Let’s look at that MTF curve for the 500mm f4 Nikon together with a legend of ‘sharpness’ – the 300mm f2.8:

Nikon MTF comparison between the 500mm f4 & 300mm f2.8

Nikon say on their website that they measure MTF at maximum aperture, that is, wide open; so the 300mm chart is for an aperture of f2.8 (though they don’t say so) and the 500mm is for an f4 aperture – which they do specify on the chart – don’t ask me why ‘cos I’ve no idea.

As we can see, the best transfer values for the two lenses (and all other lenses) are at 10 lines per millimeter, and sagittal orientation generally performs slightly better than meridional, but not always.

10 lpmm is always going to give a good transfer value because it’s very coarse and represents a lower frequency of detail than 30 lpmm.

Funny thing, 10 lines per millimeter is 5 line pairs per millimeter – and where have we heard that before? HERE – it’s the resolution of the human eye at 25 centimeters.

 

Another interesting thing to bear in mind is that, as the charts clearly show, better transfer values occur closer to the lens axis/sensor center, and performance falls as you get closer to the frame corners.

This is simply down to the fact that you are getting closer to the inner edge of the image circle (the dotted line in the diagrams above).  If manufacturers made lenses that threw a larger image circle then corner MTF performance would increase – it can be done – that’s the basis upon which PCE/TS lenses work.

One way to take advantage of center MTF performance is to use a cropped sensor – I still use my trusty D2Xs for a lot of macro work; not only do I get the benefit of center MTF performance across the majority of the frame but I also have the ability to increase the lens to subject distance and get the composition I want, so my depth of field increases slightly for any given aperture.

Back to the matter at hand, here’s my first problem with the likes of Nikon, Canon etc: they don’t specify the lens-to-target distance. A lens that gives a transfer value of 90% plus on a target of 10 lpmm sagittal at 2 meters distance is one thing; one that did the same at 25 meters would be something else again.

You might look at the MTF chart above and think that the 300mm f2.8 lens is poor on a target resolution of  30 lines per millimeter compared to the 500mm, but we need to temper that conclusion with a few facts:

  1. A 300mm lens is a lot wider in Field of View (FoV) than a 500mm so there is a lot more ‘scene width’ being pushed through the lens – detail is ‘less magnified’.
  2. How much ‘less magnified’? 40% less than at 500mm, and yet the 30 lpmm transfer value is within 6% to 7% of that of the 500mm – overall a seemingly much better lens in MTF terms.
  3. The lens is f2.8 – great for letting light in but rubbish for everything else!
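The ‘40% less magnified’ figure in point 2 comes straight from the focal length ratio – image scale at a given subject distance is proportional to focal length:

```python
# Image scale at a given subject distance is proportional to focal length
scale_ratio = 300 / 500
print(f"300mm renders detail at {scale_ratio:.0%} of the 500mm scale")  # 60%
print(f"i.e. {1 - scale_ratio:.0%} 'less magnified'")                   # 40%
```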

Most conventional lenses have one thing in common – their best working aperture for overall image quality is around f8.

But we have to counter balance the above with the lack of aforementioned target distance information.  The minimum focus distances for the two comparison lenses are 2.3 meters and 4.0 meters respectively so obviously we know that the targets are imaged and measured at vastly different distances – but without factual knowledge of the testing distances we cannot really say that one lens is better than the other.

 

My next problem with most manufacturers’ MTF charts is that the values are supplied ‘a la white light’.

I mentioned earlier – much earlier! – that lens elements refracted light, and the importance of all wavelengths being refracted to the same degree, otherwise we end up with either lateral or longitudinal chromatic aberration – or worse still – both!

Longitudinal CA will give us different focal planes for different colours contained within white light – NOT GOOD!

Lateral CA gives us the same plane of focus but this time we get lateral shifts in the red, green and blue components of the image, as if the 3 colour channels have come out of register – again NOT GOOD!

Both CA types are most commonly seen along defined edges of colour and/or tone, and as such they both affect transferred edge definition and detail.

So why do manufacturers NOT publish this information? To my knowledge there is only one that does – Schneider (read ‘proper lens’).

They produce some very meaningful MTF data for their lenses with modulation frequencies in excess of 90 to 150 lpmm; separate R,G & B curves; spectral weighting variations for different colour temperatures of light and all sorts of other ‘geeky goodies’ – I just love it all!

 

SHAME ON YOU NIKON – and that goes for Canon and Sigma just as much.

 

So you might now be asking WHY they don’t publish the data – they must have it – are they treating us like fools that wouldn’t be able to understand it; OR – are they trying to hide something?

You guys think what you will – I’m not accusing anyone of anything here.

But if they are trying to hide something then that ‘something’ might not be what you guys are thinking.

What would you think if I told you that if you were a lens designer you could produce an MTF plot with a calculator – ‘cos you can, and they do!

So, in a nutshell, most manufacturers’ MTF charts as published for us to see are worse than useless.  We can’t effectively use them to compare one lens against another because of missing data; we can’t get an idea of CA performance because of missing red, green and blue MTF curves; and finally we can’t even trust that the bit of data they do impart is even bloody genuine.

Please don’t get taken in by them next time you fancy spending money on glass – take your time and ask around – better still try one; and try it on more than 1 camera body!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution  I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here’s those two snake jpegs again:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

Digital display size is driven by pixel dimension, NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops and I know it’s hard for some folk to get their heads around print size and printer output, but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:

Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste but it’s a seller so boo-hoo – what do I know ! !

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (this is the effective imaging area – total photosite area is 4992 x 3292 according to DXO Labs).

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300  x  3280/300 inches  or  16.43″ x 10.93″

or

  • 417.24 x 277.71 mm for those of you with a metric inclination!
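Those linear dimensions are just the pixel dimensions divided by the attached 300 PPI resolution; checking the arithmetic:

```python
pixels = (4928, 3280)  # D4 effective pixel dimensions
ppi = 300              # default pixel resolution attached on import

inches = [p / ppi for p in pixels]
mm = [i * 25.4 for i in inches]

print([round(i, 2) for i in inches])  # → [16.43, 10.93]
print([round(m, 2) for m in mm])      # → [417.24, 277.71]
```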

So how big CAN we print this image?

 

Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”
  • Well, the strict answer would be No, and No – not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor can easily be derived from the long edge ‘pixel’ count of the FX frame divided by its linear length – just shy of 36mm, or 1.4″ – giving 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.
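Using the same ‘just shy of 36mm, or 1.4″’ approximation, that effective sensor PPI works out as:

```python
long_edge_pixels = 4928   # D4 FX frame, long edge
long_edge_inches = 1.4    # just shy of 36 mm

effective_ppi = long_edge_pixels / long_edge_inches
print(round(effective_ppi))  # → 3520
```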

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?

 

Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:

Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:

Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor,  Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
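Both magnification factors can be checked with Pythagoras; the D800 line assumes its 7360 x 4912 pixel dimensions:

```python
import math

def magnification(pixels, ppi, sensor_diag_mm):
    """Diagonal of the native-PPI image divided by the sensor diagonal."""
    diag_inches = math.hypot(pixels[0] / ppi, pixels[1] / ppi)
    return (diag_inches * 25.4) / sensor_diag_mm

# FX sensor diagonal = 43.2 mm
print(round(magnification((4928, 3280), 300, 43.2), 1))  # D4   → 11.6
print(round(magnification((7360, 4912), 300, 43.2), 1))  # D800 → 17.3
```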

So you can now begin to see why pixel resolution is so important when we print.

 

How To Blow Up A Squirrel !

Let’s get back to ‘his cuteness’ and open him up in Photoshop:

Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically A3.

What happens if we hit print using A3 paper?

Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:

Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:

Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:

Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed any physical dimension and we haven’t resampled the image.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
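Against a 594 x 420mm A2 sheet, the effect of that one PPI change on the print footprint works out like this:

```python
pixels = (4928, 3280)   # the image is NOT resampled - pixel count is fixed
a2_mm = (594, 420)      # A2 sheet: long edge x short edge

for ppi in (300, 240):
    w_mm, h_mm = (p / ppi * 25.4 for p in pixels)
    margins_mm = ((a2_mm[0] - w_mm) / 2, (a2_mm[1] - h_mm) / 2)
    print(ppi, "PPI:", round(w_mm), "x", round(h_mm), "mm,",
          "margins", [round(m) for m in margins_mm], "mm")
```

At 240 PPI the same pixels cover roughly 522 x 347mm, leaving workable margins of about 36mm all round.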

Let’s have a look at the final outcome:

Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality.  Indeed, 240 PPI was the Adobe Lightroom, ACR default pixel resolution until fairly recently.

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of  the “murky stuff” in photography suddenly becomes very simple!


Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
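The same division, as a trivially short sketch:

```python
pixel_dims = (3000, 2400)   # pixels: long edge, short edge
linear_dims = (10, 8)       # inches: long edge, short edge

ppi = [px / inch for px, inch in zip(pixel_dims, linear_dims)]
print(ppi)  # → [300.0, 300.0]
```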

 

How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:

The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 square images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 PPI (pixels per inch), or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital work flow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

RAW file handlers like Lightroom, ACR etc. all default to a native pixel resolution of 300PPI. * (this 300ppi myth annoys the hell out of me and I’ll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.

 

Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:

The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop monitors etc. are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.

Just FYI – when I decide I’m going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER ‘down sample’ to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:

 

PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:

The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
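That dot count is easy to estimate. Assuming the driver’s finest 5760 DPI setting (my assumption – the exact figure depends on the printer and mode):

```python
printer_dpi = 5760   # assumed finest output setting - printer dependent
image_ppi = 300      # native pixel resolution of the image

dots_per_pixel_side = printer_dpi / image_ppi
print(round(dots_per_pixel_side, 1))    # → 19.2 dots per pixel edge
print(round(dots_per_pixel_side ** 2))  # → 369 dots per square pixel
```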

See you in my next post!


Noise and the Camera Sensor

Camera sensors all suffer from two major afflictions: diffraction and noise; and between them these two afflictions cause more consternation amongst photographers than anything else.

In this post I’m going to concentrate on NOISE, that most feared of sensor afflictions, and its biggest influencer – LIGHT, and its properties.

What Is Light?

As humans we perceive light as being a constant continuous stream or flow of electromagnetic energy, but it isn’t!   Instead of flowing like water it behaves more like rain, or indeed, bullets from a machine gun!   Here’s a very basic physics lesson:

Below is a diagram showing the Bohr atomic model.

We have a single positively charged proton (black) forming the nucleus, and a single negatively charged electron (green) orbiting the nucleus.

The orbit distance n1 is defined by the electrostatic balance of the two opposing charges.

The Bohr Atomic Model

If we apply energy to the system then a ‘tipping point’ is reached and the electron is forced to move away from the nucleus – n2.

Apply even more energy and the system tips again and the electron is forced to move to an even higher energy level – n3.

Now here’s the fun bit – stop applying energy to the system.

As the system is no longer needing to cope with the excess energy it returns to its natural ‘ground’ state and the electron falls back to n1.

In the process the electron sheds the energy it has absorbed – the red squiggly bit – as a quantum, or packet, of electromagnetic energy.

This is basically how a flash gun works.

This ‘packet’ has a start and an end; the start happens as the electron begins its fall back to its ground state; and the end occurs once the electron arrives at n1 – therefore it can perhaps be tentatively thought of as being particulate in nature.

So now you know what Prof. Brian Cox knows – CERN here we come!

Right, so what’s this got to do with photography and camera sensor noise?

Camera Sensor Noise

All camera sensors are affected by noise, and this noise comes in various guises:

Firstly, the ‘noise control’ sections of most processing software we use tend to break it down into two components; luminosity, or luminance noise; and colour noise.  Below is a rather crappy image that I’m using to illustrate what we might assume is the reality of noise:

This shot shows both Colour & Luminance noise.
The insert shows the shot and the small white rectangle is the area we’re concentrating on.

Now let’s look at the two basic components: Firstly the LUMINANCE component

Here we see the LUMINANCE noise component – colour & colour noise components have been removed for clarity.

Next, the COLOUR NOISE bit:

The COLOUR NOISE component of the area we’re looking at. All luminance noise has been removed.

I must stress that the majority of colour noise you see in your files inside LR,ACR,CapOne,PS etc: is ‘demosaicing colour noise’, which occurs during the demosaic processes.

But the truth is, it’s not that simple.

Localised random colour errors are generated ‘on sensor’ due to individual sensor characteristics, as we’ll see in a moment, because noise, in truth, comes in various guises that collectively affect luminosity and colour:

Shot Noise

This first type of noise is Shot Noise – called so because it’s basically an intrinsic part of the exposure, and is caused by photon flux in the light reflected by the subject/scene.

Remember – we see light in a different way to our camera. What we don’t notice is the fact that photon streams rise and fall in intensity – they ‘flux’ – these variations happen far too fast for our eyes to notice, but they do affect the sensor output.

On top of this ‘fluxing’ problem we have something more obvious to consider.

Lighter subjects reflect more light (more photons), darker subjects reflect less light (fewer photons).

Your exposure is always going to be some sort of ‘average’, and so is only going to be ‘accurate’ for certain areas of the scene.

Lighter areas will be leaning towards over exposure; darker areas towards under exposure – your exposure can’t be perfect for all tones contained in the scene.

Tonal areas outside of the ‘average exposure perfection’ – especially the darker ones – may well contain more shot noise.

Shot noise is therefore quite regular in its distribution, but in certain areas it becomes irregular – so it’s often described as ‘pseudo random’.

Read Noise

Read Noise – now we come to a different category of noise completely.

The image is somewhat exaggerated so that you can see it, but basically this is a ‘zero light’ exposure; take a shot with the lens cap on and this is what happens!

What you can see here is the background sensor noise when you take any shot.

Certain photosites on the sensor are actually generating electrons even in the complete absence of light – seeing as they’re photo-voltaic they shouldn’t be doing this – but they do.

Added to this are AD Converter errors and general ‘system noise’ generated by the camera – so we can regard Read Noise as being like the background hiss, hum and rumble we can hear on a record deck when we turn the Dolby off.

Thermal & Pattern Noise

In the same category as Read Noise are two other types of noise – thermal and pattern.

Both again have nothing to do with light falling on the sensor, as this too was shot under a duvet with the lens cap on – a 30 minute exposure at ISO 100 – not as daft as it sounds when you think of astro photography and star trail shots in particular.

You can see in the example that there are lighter and darker areas especially over towards the right side and top right corner – this is Thermal Noise.

During long exposures the sensor actually heats up, which in turn increases the response of photosites in those areas and causes them to release more electrons.

You can also see distinct vertical and some horizontal banding in the example image – this is pattern noise, yet another sensor noise signature.

Under Exposure Noise – pretty much what most photographers think of when they hear the word “noise”.

Read Noise, Pattern Noise, Thermal Noise and, to a degree, Shot Noise all go together to form a ‘base line noise signature’ for your particular sensor.  So when we put them all together and take a shot where we need to tweak the exposure in the shadow areas a little, we get an overall Under Exposure Noise characteristic for our camera – which, let’s not forget, contains other elements of both luminance noise and colour noise derived from the ISO settings we use.

All sensors have a base ISO – this can be thought of as the speed rating which yields the highest Dynamic Range (Dynamic Range falls with increasing ISO values, which is basically under exposure).

At this base ISO the levels of background noise generated by the sensor just being active (Pattern,Read & Thermal) will be at their lowest, and can be thought of as the ‘base noise’ of the sensor.

How visually apparent this base noise level is depends on what is called the Signal to Noise Ratio – the higher the S/N ratio the less you see the noise.

And what is it that gives us a high signal?

MORE Photons – that’s what..!

The more photons each photosite on the sensor can gather during the exposure then the more ‘masked’ will be any internal noise.

And how do we catch more photons?

By using a sensor with BIGGER photosites – a larger pixel pitch – that’s how.  And bigger photosites mean FEWER MEGAPIXELS – allow me to explain.

Buckets in the Rain A

Here we see a representation of various sized photosites from different sensors.

On the right is the photosite of a Nikon D3s – a massive ‘bucket’ for catching photons in – and 12Mp resolution.

Moving left we have another FX sensor photosite – the D3X at 24Mp, and then the crackpot D800 and its mental 36Mp tiny photosite – can you tell I dislike the D800 yet? 

On the extreme left is the photosite from the 1.5x APS-C D7100, just for comparison.

Now cast your mind back to the start of this post where I said we could tentatively regard photons as particles – well, let’s imagine them as rain drops, and the photosites in the diagram above as different sized buckets.

Let’s put the buckets out in the back yard and let’s make the weather turn to rain:

Various sizes of photosites catching photon rain.

Here it comes…

It’s raining

OK – we’ve had 2 inches of rain in 10 seconds! Make it stop!

All buckets have 2 inches of water in them, but which has caught the biggest volume of rain?

Thank God for that..

If we now get back to reality, we can liken the duration of the rain downpour to shutter speed, the rain drops themselves to photons falling on the sensor, and the consistency of water depth in each ‘bucket’ to a correct level of exposure.

Which bucket has the largest volume of water, or which photosite has captured the most photons – in other words which sensor has the highest S/N Ratio?   That’s right – the 12Mp D3s.
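To put some rough numbers on the bucket analogy – the photosite pitches below are my approximate figures, purely for illustration:

```python
# Approximate photosite pitches in microns (illustrative figures only)
pitch_um = {"D3s": 8.45, "D3X": 5.95, "D800": 4.9, "D7100": 3.9}

# At a given exposure, photon capture scales with photosite AREA
d3s_area = pitch_um["D3s"] ** 2
for body, pitch in pitch_um.items():
    relative = pitch ** 2 / d3s_area
    print(f"{body}: catches ~{relative:.0%} of the D3s photons per photosite")
```

The D800 ‘bucket’, on these figures, catches roughly a third of the photons per photosite that the D3s does at the same exposure.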

To put this into practical terms let’s consider the next diagram:

Increased pixel pitch = Increased Signal to Noise Ratio

The importance of S/N ratio and its relevance to camera sensor noise can be seen clearly in the diagram above – but we are talking about base noise at native or base ISO.

If we now look at increasing the ISO speed we have a potential problem.

As I mentioned before, increasing ISO is basically UNDER EXPOSURE followed by in-camera “push processing” – now I’m showing my age..

The effect of increased ISO – in-camera “push processing” automatically lifts the exposure value to where the camera thinks it is supposed to be.

By under exposing the image we reduce the overall Signal to Noise Ratio, then the camera internals lift all the levels by a process of amplification – and this includes amplifying the original level of base noise.
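A simple shot-noise model shows why that amplification can’t help: signal-to-noise before amplification is roughly photons divided by the square root of (photons plus read noise squared), and amplification scales signal and noise together. The 40,000-electron signal and 5-electron read noise below are illustrative assumptions, not any particular camera’s figures:

```python
import math

def snr(photons, read_noise=5.0):
    # Shot noise = sqrt(photons); read noise is a fixed floor (both in electrons)
    return photons / math.sqrt(photons + read_noise ** 2)

# Each stop of ISO above base halves the photons actually captured;
# the amplification that follows lifts base noise right along with the signal.
for stops in range(5):
    photons = 40000 / 2 ** stops    # assumed photon count at base ISO
    print(f"+{stops} stops under: SNR ~ {snr(photons):.0f}:1")
```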

So now you know WHY and HOW your images look noisy at higher ISOs – or so you’d think – again, it’s not that simple; take the next two image crops for instance:

Kingfisher – ISO 3200 Nikon D4 – POOR LIGHT – Click for bigger view

Kingfisher – ISO 3200 Nikon D4 – GOOD LIGHT – CLICK for bigger view

If you click on the images (they’ll open up in new browser tabs) you’ll see that the noise from 3200 ISO on the D4 is a lot more apparent on the image taken in poor light than it is on the image taken in full sun.

You’ll also notice that in both cases the noise is less apparent in the high frequency detail (sharp high detail areas) and more apparent in areas of low frequency detail (blurred background).

So here’s “The Andy Approach” to noise and high ISO.

1. It’s not a good idea to use higher ISO settings just to combat poor light – in poor light everything looks like crap, and if it looks crap then the image will look even crappier. When I get into a poor light situation and I’m not faced with a “shot in a million”, I don’t take the shot.

2. There’s a big difference between poor light and low light that looks good – in the latter case, shoot as close to base ISO as your shutter speed will let you get away with.

3. If you shoot landscapes then shoot at base ISO at all times and use a tripod and remote release – make full use of your sensor’s dynamic range.

4. The Important One – don’t get hooked on megapixels and so-called sensor resolution – I’ve made thousands of landscape sales shot on a 12Mp D3 at 100 ISO. If you are compelled to have more megapixels buy a medium format camera which will generate a higher S/N Ratio because the photosites are larger.

5. If you shoot wildlife you’ll find that the necessity for full dynamic range decreases with angle of view/increasing focal length – using a 500mm lens you are looking at a very small section of what your eye can see, and tones contained within that small window will rarely occupy anywhere near the full camera dynamic range.

Under good light this will allow you to use a higher ISO in order to gain that crucial bit of extra shutter speed – remember, wildlife images tend to be at least 30 to 35% high frequency detail, and noise will not be as apparent in these areas as it is in the background; hence the ubiquitous saying of wildlife photographers: “Watch your background at all times”.

Well, I think that’s enough to be going on with – but there’s oh so much more!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.