Parallel Horizontals.

Quite often when shooting landscapes, or more commonly seascapes, you may run into a problem with parallel horizontals and distortion between far and near horizontal features such as in the image below.

Parallel horizontals that are not parallel - but should be!

This sort of error cannot be fully corrected in Lightroom alone; we have to send the image to Photoshop in order to make the corrections in the most efficient manner.

Here’s a video lesson on how to effectively do just that, using the simplest, easiest and quickest of methods:

You can watch the video at full size HERE – make sure you click the HD icon.

This is something which commonly happens when photographing water with a regular-shaped man-made structure in the foreground and a foreshortened horizon line, such as the receding opposite shore in this shot. But with a little logical thought, these problems with parallel horizontals being “out of kilter” can be easily cured.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Flash Output Power

Flash output power raises a lot of questions when you are trying to decide how to spend your money.

A lot of people writing on the internet decry the versatility of portable speedlights and their use as studio-type lighting – something which is entirely wrong in my opinion, as there is nothing that can’t be done with them, as long as you have enough of them!

And you don’t have to take my word for it – just go and watch the world’s best exponent of the art (in my opinion anyway), Joe McNally – then tell me if I’m wrong!

But with a top-of-the-line Nikon SB910 running at £340 and Canon’s new 600EX-RT a cool £400 plus here in the UK, purchasing 10 to 15 of these puppies is a wallet-emptying proposition; though given the cash or sponsorship it’s the way I’d go all day long.

A lot of folk come to me with the same quandary – studio flash heads are a lot more cost-effective; notwithstanding their big limiting factor – lack of portability.

Leaving aside the other problems of many studio-style flash heads, namely the lack of TTL and HSS/FP facility (though this can be worked around on certain models with Pocket Wizards and the dark art of HyperSync), they do give one big advantage – more photons for your buck.

But just how does one compare the flash output power of one unit/type with another – after all, this is what we want to know:

  • Can I get more light from flash A than I can from flash B?
  • How many speedlights do I NOT have to buy if I get studio-style flash head C, which costs 1.5x the price of one of my speedlights?

The problem is that manufacturers don’t make it easy to do direct comparisons of flash output power between brands and formats, and they tend to try and confuse the buyer with meaningless numbers and endless amounts of jargon.

Back in the days of manual-everything, we used to use flash in a very simple way, using the unit’s Guide Number.

The guide number is usually quoted as being at 100 ISO and at two values, one for metres and one for feet, and we use it with the following equation:

GUIDE No: = Distance x Aperture

So we might see a flash unit with a guide number quoted as 40/131 at 100 ISO. This means, for example, that at 100 ISO and a flash-to-subject distance of 2.5 metres or 8.2 feet, the correct aperture to use would be:

Guide No: divided by distance – in this case 40/2.5m or 131/8.2ft.

Either way the answer is 16, so we would set the shutter speed to the flash synch speed and the aperture to F16.

Simple!
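The arithmetic above is trivial to sketch in code. A minimal example using the worked numbers from the text (the function name is my own):

```python
# Guide No. = Distance x Aperture, so Aperture = Guide No. / Distance.
# Figures are the 40/131 (metres/feet) at 100 ISO example from above.

def aperture_for(guide_number, distance):
    """Return the f-number for a given guide number and flash-to-subject distance."""
    return guide_number / distance

print(aperture_for(40, 2.5))          # metres: 16.0, i.e. f16
print(round(aperture_for(131, 8.2)))  # feet: 16, i.e. f16 again
```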

Where things used to go a bit pear-shaped was when we introduced any form of output modifier such as a bounce board or diffuser because these spread and smooth the light and so reduce the number of photons falling on the subject by one or two stops.

But TTL flash metering soon put paid to all that grief.

Camera OEM Speedlights

Let’s compare a Nikon SB800 & SB910 – these have 100 ISO guide numbers of 38/125 & 34/112 respectively (published) – that’s right folks, the new one is weaker than the old one.

But by how much?

Well the old SB800 has a guide number that is 11.7% higher than the newer SB910, but what does this mean in terms of exposure value?

At a flash-to-subject distance of 3.4 metres, doing the maths says that our correct aperture would be 38/3.4 and 34/3.4 respectively. So the SB800 would put us at f11 (11.18 to be precise) while the SB910 would give us f10 – a gain of roughly 1/3rd of a stop using the older unit.

When working with long lenses and wide apertures this extra 1/3rd of a stop gives me just that little bit more depth of field – and folk wonder why I don’t change mine!
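If you want to put an exact figure on that difference, remember that guide number is proportional to f-number at a fixed distance, and one stop is a factor of √2 in f-number – so the difference in stops is twice the base-2 log of the guide number ratio. A quick sketch (the function name is mine):

```python
import math

def gn_difference_in_stops(gn_a, gn_b):
    # GN is proportional to f-number, and one stop is a factor of
    # sqrt(2) in f-number, so: stops = 2 * log2(gn_a / gn_b)
    return 2 * math.log2(gn_a / gn_b)

# SB800 (GN 38) vs SB910 (GN 34), metres, 100 ISO, 35mm zoom:
print(round(gn_difference_in_stops(38, 34), 2))  # 0.32 - roughly a third of a stop
```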

Complications & Caveats

Nikon quote the two units above with guide numbers based on the head “zoom feature” being set to 35mm, which gives a fairly wide angle of lighting. Someone said to me the other day that the new Canon 600EX was twice the power of the Nikon units I’ve already mentioned, simply because Canon quote the guide number for that device as a whopping 60!

The world is full of fools…

Canon, in their infinite wisdom, quote that 60 value at a zoom head setting of 200mm.  The reality is that the guide number of this Canon unit varies between 26 with the zoom head at 20mm and 60 at 200mm – so in other words, give or take a bit, it’s pretty much in the same ball park as the Nikon units previously mentioned.

Canon speedlight naming policy tells you the units MAXIMUM guide number:

  • 600EX = 60 (metres)
  • 580EX = 58 (metres)
  • 550EX = 55 (metres)

The 550 specs also give you zoom length variations:

  • 105mm = 55 (metres)
  • 50mm = 42 (metres)
  • 17mm = 15 (metres)

Canon 600EX vs Nikon SB800 zoom lengths:

  • 105mm = 58 vs 56 (metres)
  • 50mm  = 42 vs 44 (metres)
  • 14mm = 15 vs 17 (metres)

Light leaves a flash unit in a cone of sorts, and the zoom heads on speedlight style units gather this cone of light so it basically matches the angle of view of the lens you are using and results in an efficient distribution of light across the image area – that’s the theory anyway.

Making the cone “tighter” forces the photons released by the flash into a more concentrated area, thus increasing the number falling on the subject and so increasing the overall exposure value.

So when we use guide numbers to compare various flash units we must ensure that we are comparing them on a level playing field – in other words, that the values we use are for the same “cone or reflector angle”. And if the manufacturers use different reflector angles when assessing their flash guide numbers for promotion to the public, then you guys ’n gals run the risk of being hoodwinked into buying something that ain’t strictly what you thought it was when you ordered it.

So how do speed light style flash units stack up against studio type units?

Notwithstanding the lack of FP/HSS and any TTL metering problems, studio-type flash heads have guide numbers that are usually quoted as being “with standard reflector”.  This standard reflector is something which gathers those photons and shovels them out in a 50-55 degree spread; think “standard lens” on the image diagonal.

Current top-end Nikon (and Canon) speedlights have guide numbers of sub-40 at 35mm reflector angles, and those equate to roughly 64 degrees diagonal coverage. So if we were to “tighten them up” to 50 or 55 degrees we could, as a rough guide, round the guide numbers up to 42m or 44m.

Now we are on a more even playing field.

A Bowens Gemini 500R is quoted by Bowens as having a guide number of 85 with a standard reflector, so let’s be a bit cavalier with the numbers and say that it’s double the guide number of SB800/910 or 580EX etc.

So roughly how many speed lights is this puppy going to be equivalent to in terms of real flash output power ?

Hands up those who think two… wrong!

This is where everything you thought you knew about exposure turns to shit in front of your very eyes (but not really!), and it’s called the Inverse Square Law.

Inverse Square Law

Now listen folks, this is as simple or as complicated as you care to make it!

When we capture a scene we capture a 2 dimensional plane filled with photons travelling towards us.

When we shine any light on an object we are actually throwing a flat sheet of light at it. This sheet is expanding outwards as it travels towards the subject because the photons in that sheet of light are all diverging.

So, let’s look at something tangible as an analogy – metric paper sizes!

How many sheets of A3 paper fit on a sheet of A2 paper?

That’s right, TWO – we’ve effectively doubled the surface area of the paper.

Now exposure works in stops – and making a 1 stop change in exposure effectively doubles or halves the exposure value depending on which way we’ve made the adjustment.

So moving from A3 to A2 is like making a 1 stop change in exposure; we’ve doubled the surface area of the paper. BUT – we’ve not doubled the paper’s physical dimensions.

What paper size is twice the width AND twice the height of A3 – yep, that’s right, A1.

And how many sheets of A3 fit on a sheet of A1 – right again, 4.

So we have quadrupled the paper’s surface area – in exposure terms that would equate to 2 stops.

Now imagine a projector throwing an image onto a big screen and the screen to projector distance is 4 metres.  We go to the screen and measure the size of the projected image and it’s 1.5 metres by 2 metres.

How big will the image be if we move the projector to 8 metres from the screen?

Answer – 3 metres x 4 metres. (and the brightness of the image will have gone down by 2 stops).

And if we move the projector to 2 metres from the screen the image will be 0.75 metres x 1 metre. (and the brightness of the image will have increased by 2 stops!).
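The projector example is the inverse square law in action, and the arithmetic is easy to verify. A minimal sketch (the function names are my own):

```python
import math

def image_size(width, height, base_dist, new_dist):
    # Both linear dimensions of the projected image scale with distance...
    scale = new_dist / base_dist
    return width * scale, height * scale

def brightness_change_stops(base_dist, new_dist):
    # ...so the lit area scales with distance squared, and every
    # doubling of area costs one stop of brightness.
    return -2 * math.log2(new_dist / base_dist)

print(image_size(1.5, 2.0, 4, 8))     # (3.0, 4.0) metres at 8 m
print(brightness_change_stops(4, 8))  # -2.0 stops (dimmer)
print(brightness_change_stops(4, 2))  # 2.0 stops (brighter)
```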

Inverse Square Law, Lights & Distances

Let’s say we have a theoretical flash with a metres guide number of 80.

If the subject is 10 metres from the light we need an aperture of f8 because 80/10 = 8.

If we now move the light to 5 metres from the subject our aperture decreases to 80/5 = f16.

Halving the light-to-subject distance means we increase the overall intensity of the light (its effective flash output power) by 2 stops, so we have to reduce our overall exposure by two stops to compensate; otherwise we’ll just end up with 2 stops of over exposure.

And of course if we move the light away to 20 metres from the subject the inverse applies and we effectively reduce the flash output power by two stops and we’ll have to open the aperture up by two stops to avoid under exposure.

But what do we have to do in order to use f16 at 10 metres AND get correct exposure?

Use a flash with a guide number of 160 is what we’d need to do – it really is that simple.

Reality

So, how many guide number 45 speed lights would we need to equal one guide number 90 studio flash head in terms of effective flash output power?

Well it isn’t two – oh that we should be so lucky!

If we have two speed lights mounted together their cumulative guide number is equal to the square root of the sum of the squares of their individual guide numbers!

Sounds scary, but the answer is 63 or thereabouts.

But here’s the thing about photo-maths – it usually ends up as something really simple and this is no exception.

If you want to double the guide number you always need 4 identical units.
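That “square root of the sum of the squares” rule looks scarier than it is – intensities add when units fire together, and guide number goes as the square root of intensity. A quick sketch (the function name is mine):

```python
import math

def combined_gn(*guide_numbers):
    # Light intensities add when flashes fire together, and guide
    # number is proportional to the square root of intensity.
    return math.sqrt(sum(gn ** 2 for gn in guide_numbers))

print(round(combined_gn(45, 45)))   # 64 - two GN 45 units, not 90!
print(combined_gn(45, 45, 45, 45))  # 90.0 - four identical units double the GN
```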

Do not forget what I’ve said above about published guide numbers – you have to ensure that the values were obtained using equal criteria, and manufacturers don’t always like to furnish you with the information you need in order to make easy comparisons.

Have they got something to hide – you may think that, but I couldn’t possibly comment!

What really does piss me off is the meaningless crap they do furnish you with – watt-second, w/s, watt/sec, or if you like, Joules values.

The only thing these values do is inform you of the “potential energy” available at the capacitor; it’s no measure of how efficiently the flash tube converts that power into photons – and the photons is ALL we’re really interested in.

Other things such as tube temperature can have dramatic effects on both light output and the colour of that light.

Conclusion

This post has been a bit of a ramble but I’ve tried as best I can to give you a rough guide on how to compare one flash source with another.

Different photographers require different things – if all you want to do is shoot portraits and still life then shutter speeds above 1/250th synch are of little importance in general terms, so access to HSS/AutoFP via speed lights isn’t needed, and normal studio lights would be a far more economical proposition.

But on the other hand 8 speed lights in one bank, and two more banks of 4 speed lights each – all HSS/AutoFP compliant – crikey, the photographic possibilities are endless, and readily achievable – if your bank balance is endless too!


Flash Duration – How Fast Can We Go

Flash duration – how long the burst of photons from the flash actually lasts – does seem to get a lot of people confused.

Earlier this year I posted an article on using flash HERE where the prime function of the flash was as a fill light. As a fill, flash should not be obvious in the images, as the main lighting is still the ambient light from the sun, and we’re just using the flash to “tickle” the foreground with a little extra light.

Flash as “fill” where the main lighting is still ambient daylight, and a moderate shutter speed is all that’s required. 1/800th sec @ f8 is plenty good enough for this shot.

Taking pictures is NEVER a case of just “rocking up”, seeing a shot and pressing the shutter; for me it’s a far more complex process whereby there’s a possible bucket-load of decisions to be made in between the “seeing the shot” bit and the “pressing the shutter” bit.

My biggest influencers are always the same – shutter speed and aperture, and the driving force behind these two things is light, and a possible lack thereof.

Once I make the decision to “add light” I then have to decide what role that additional light is going to take – fill, or primary source.

Obviously, in the shot above the decision was fill, and everything was pretty straightforward from there on; aperture/shutter speed selection is still dictated by the ambient lighting – I use the flash as a “light modifier”.

The duration of the flash is controlled by the TTL metering system, and its duration is fairly irrelevant.

Let’s take a look at a different scenario.

The lovely Jo doing her 1930’s screen icon “pouty thing”. Flash is the ONLY light source in this image. 1/250th @ f9 ISO 100.

In this shot the lighting source is pure flash.  There’s very little in the way of ambient light present in this dark set, and what bit there is was completely over-powered by the flash output – so the lighting from the Elinchrom BX 500 monoblocks being used here is THE SOLE light source.

Considerations over the lighting itself are not the purpose of this post – what we are concerned with here are the implications for shutter speed due to flash synchronization.

The flash units were the standard type of studio flash unit offering no TTL interface with the camera being used, so it’s manual everything!

But the exposure in terms of shutter speed is capped at 1/250th of a second by the CAMERA – that is its highest synch speed.

The focal length of the lens is 50mm and I need to shoot at around f8 or f9 to obtain workable depth of field, so the basic exposure settings are dictated. This particular shot was achieved by balancing the light-to-subject distance along the lines of the inverse square law for each light.

But from the point of view of this post the big consideration is this – can I afford to have movement in the subject?

At 1/250th sec you’d think not. Then you’d think “hang on, flash durations are a lot faster than that” – so perhaps I can… or can I?

Flash Duration & Subject Movement

Flash duration, in terms of action-stopping power, is not as simple or straight forward as you might think.

Consider the diagram below:

Flash Power Output curve plotted against Output duration (time).

The grey shaded area in the diagram is the “power output curve” of the flash.

Most folk think that a flash is an “instant on, instant off” kind of thing – how VERY wrong they are!

When we set the power output on either the back panel of our SB800/580EX etc, or on the power pack of a studio flash unit, or indeed any other flash unit, we are setting a peak output limit.

We might set a Nikon SB800 to 1/4 power, or we might set channel B output on a Quadra Ranger to 132Ws, but either way we are dictating the maximum flash output power – the peak output limit. The “t 5 time” – or to be more correct, the “t 0.5 time” – is the total time duration where the flash output is at 50% or above of the selected peak output limit we set.

Just to clarify: we set, say, 1/4 power output on the back of a Canon 580EX – this is the selected peak output limit. The t5 time for this is the total time duration where the light output is at or above 50% of that selected 1/4 power – NOT 50% of the flash unit’s full power output – do not get confused over this!

So when it comes to total “light emission duration” we’ve got 3 different ways of looking at things:

  1. Total – and I mean TOTAL – duration; the full span of the output curve.
  2. T 0.5 – the duration of the flash where its output is at 50% or above that level set by the user – the peak output limit.
  3. T 0.1 – the duration of the flash where its output is at 10% or above that level set by the user.

Anyone looking at the diagram above can see that the total output emission time/flash duration is A LOT LONGER than the t5 time.  Usually you find that t5 times are somewhere around 1/3rd of the total emission time, or flash duration.

Getting back to our shot of Jo above, if my memory serves me correctly the BX heads I used for the shot had a t5 time of around 1/1500th sec.  So the TOTAL duration of the flash output would be around 1/500th sec.

So I can’t afford to have any movement in the subject that isn’t going to be arrested by 1/500th sec flash duration, let alone the 1/250th shutter speed.

Why? Well, that 1/250th sec the shutter is open will consist of 1/500th sec of flash photons entering the lens, and 1/500th sec of NOTHING entering the lens but AMBIENT LIGHT photons.

Let us break flash output down a bit more:

In the previous article I mentioned, I quoted a table of Nikon SB800 duration times.  At the top of the table was the SB800 1/1 or full output power flash duration.  All times quoted in that table were t5 times.

The one I want to concentrate on is that 1/1 full power t5 time of 1/1050th sec.

Even though Nikon try to tempt you into believing that the flash only emits light for 1/1050th sec, it does in fact light the scene for a full 1/350th sec – most manufacturers’ units are quoted as t5 times.

Now, in most cases where you might employ flash – which, let’s face it, is usually as some sort of fill light in a general ambient/flash mixed exposure – this isn’t in reality a big problem. Reduced-power multiple-pulse AutoFP/HSS also makes it a non-issue.

But if you are trying to stop high speed action – in other words “freeze time”, then it can become a major headache; especially when you need all the flash power you can get hold of.

Why? Let’s break the diagram above down to basics.

The darker shaded area represents the “tail” of the flash output – the area that can cause many problems when trying to stop high speed action.

  • The first 50% of the total light output is over and finished in the first 1/1050th sec of the total flash duration.
  • The other 50% of the total light output takes place over a further 1/525th sec, and is represented by the dark grey area – let’s call this area the flash “output tail”. Some publications & websites refer to this tail as after-glow. I always thought that “after-glow” was something ladies did after a certain type of energetic activity!
  • The light will continue to decay for a full 1/525th sec after t5, until the output of light has died down to 0% and the full “burn time” of 1/350th sec has been reached.

That’s right – 1/1050th + 1/525th = 1/350th.

So, if our shutter speed is 1/350th sec or longer we are going to see some ghosting in our image caused by the movement of the subject during that extra 1/525th sec post t5 time.
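The burn-time arithmetic above checks out exactly if you run it through with fractions:

```python
from fractions import Fraction

t_half = Fraction(1, 1050)  # first 50% of the light: the t0.5 time
tail = Fraction(1, 525)     # the "after-glow" tail: twice as long again
total = t_half + tail

print(total)  # 1/350 - the full burn time
```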

I need to point out that most speedlight-type flash units are “insulated-gate bipolar transistor” devices – that’s IGBT to you and me. Einstein studio flash units are also IGBT units – I’ll cover the implications of this in a later post, but for now you just need to know that the IGBT circuitry works to eliminate sub-t5 output BUT doesn’t work if your speedlight is set to output at maximum power. And if you need access to full 1/1 power with your speedlights for any reason then IGBT won’t help you.

Let’s see the problem in action as it were:

A bouncing golf ball shot at 1/250th sec using full power output on an SB800.
The ball is moving UPWARDS.
The blur between points A & B is caused by the “tail” or “after-glow” of the flash.

And the problem will be further exacerbated if there is ANY ambient light in the scene – from a window for instance – as this will boost the general scene illumination during that “tail end” 1/525th sec.

We would be well advised, if using any form of non-TTL flash mode, to use a shutter speed equal to, or shorter in duration than, the t5 time, as in the shot below:

A bouncing golf ball shot at 1/2000th sec using full power output on an SB800.

All I’ve done in this second shot is go -3Ev on the shutter speed, +1Ev on the aperture and +2Ev on ISO speed.

Don’t forget, the flash is in MANUAL mode with a full power output.
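Note that the -3Ev/+1Ev/+2Ev juggle between the two golf-ball shots sums to zero – the overall exposure is unchanged; only the shutter’s slice of the flash burn differs. A quick check (the shutter values are from the captions; the other deltas are as stated above):

```python
import math

# 1/250 sec -> 1/2000 sec is a 3-stop cut in shutter duration:
shutter_stops = -math.log2((1 / 250) / (1 / 2000))  # -3.0
aperture_stops = +1  # aperture opened one stop
iso_stops = +2       # ISO raised two stops

print(shutter_stops + aperture_stops + iso_stops)  # 0.0 - exposure-neutral
```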

With the D4 in front-curtain synch the full power, 1/350th sec flash pulse begins as the front shutter curtain starts to move, and it “burns” continuously while the 1/2000th sec “letter-box” shutter-slot travels across the sensor.

In both shots you may be wondering how I triggered the exposure. Sitting on the desk you can see a small black box with a jack plug sticking out the back – this is the audio sensor of a TriggerSmart audio/light/Infra Red combined trigger system.  As the golf ball strikes the desk the audio sensor picks up the noise and the control box triggers the camera shutter and hence the flash.

Hardy, down at the distributors, Flaghead, has been kind enough to send me one of these systems for incorporation into some long-term photography projects, and into a series of high speed flash workshops and training tutorials. And I have to say that I’m mighty impressed with the system, and at the retail pricing point ownership of this product is a no-brainer. The unit is going to feature in quite a few blog posts in the near future, but click HERE to email Hardy for more details.

Even though I constantly extol the virtues of the Nikon CLS system, there comes a time when its automatic calculations fight AGAINST you – and easy high speed photography becomes something of a chore.

Any form of flash exposure automation makes assumptions about what you are trying to do.  In certain circumstances these assumptions are pretty much correct.  But in others they can be so far wide of the mark that if you don’t turn the automation OFF you’ll never get the shot you want.

Wresting full control over speedlights from the likes of Nikon’s CLS gives you access to super-high-speed flash durations AND high shutter speeds without a lot of the synching problems incurred with studio monoblocks.

Liquid in Motion – arrested at 1/8000th sec shutter speed using SB800’s at full 1/1 power.

Liquid in Motion – arrested at 1/8000th sec shutter speed using SB800’s at full 1/1 power. A 100% crop from the shot above.

“Scotch & Rocks All Over The Place”
Simple capture with manual speed lights at full power and 1/8000th shutter speed.

The shots above are all taken with 2x SB800s lighting the white background and 1 heavily diffused SB800 acting as a top light.

One background light is set at 1/1 manual FP, the other to manual 1/1 SU-4 remote.  The top light is set to 1/8 power SU-4 remote.

The majority of the light in the shot is in fact that white background – it’s punching light back through the glass and liquid splash – the subject is backlit.

So, that background is being lit for a full 1/350th of a second.

But shooting in front curtain synch I’m using 1/8000th sec as a shutter speed, an exposure duration 3 stops shorter than the flash unit t5 time for full power. So in effect I’m using the combined background flash units as a very short-term continuous light source which lasts for 1/350th of a second, but the camera is only recording the very first 1/8000th sec – in other words, photons are still leaving the flash AFTER the rear shutter curtain has closed and the exposure is finished.

Finally, the shutter and flash are triggered by dropping the faux crushed ice through the IR sensor beam of the TriggerSmart unit.

This is very much along the lines of what’s termed HyperSync – a technique you can use with conventional slow-burn studio flash units and certain types of 3rd party trigger units such as Pocket Wizards – but that’s yet another story, and it’s fraught with synch problems that you have to program out of the system using the Pocket Wizard utility.

So, there’s more to come from me about flash in future posts, but for now just remember – there’s not a lot you can’t do with speed lights – as long as you’ve got enough of the little darlings!


Trap Focus

Trap Focus on the Nikon D4

Trap focus comes to my D4 – Yay!!!!!!!!

What was it Nikon said – “we left it off the D4 because no one wanted it”… or words to that effect.

Well, with today’s (March 18th 2014) update version 1.10 trap focus is back – in a fashion.

What is trap focus, some may ask? Well, it’s basically pre-focusing on a particular distance or spot where you expect the subject to be, or to pass through.

As the subject enters the frame and gets closer to the camera it’s also getting closer to the pre-focused distance, and when it reaches the set focus distance the camera actually detects the subject/image is sharp and so takes the shot.

Basically you sit there with the shutter button fully depressed, but no shots get taken until the camera AF system deems the subject is now in focus.

It’s a technique that a lot of sports photographers find very useful, but I find it has very limited use for my wildlife & natural history work.  Having said that, it’s got me out of a bind more than once over the years, but ever since the D4 came out you’ve not been able to use it.

The failing lay in the flawed D4 focus priority – even if you told it to only trip the shutter when the image was deemed ‘in focus’ by setting CS a1 & a2 to FOCUS, it would still fire as if a1 and a2 were set to release priority.

The new firmware update v1.10 has given trap focus back to the D4, but before you start jumping up and down and getting all excited you need to know how to set it up – and bear in mind that, as a technique, trap focus might not suit what you had in mind.

Setup for D4 Trap Focus

  1. Update firmware to v1.10 – read the instructions FULLY before you attempt this, otherwise you may need another camera!
  2. Go to Custom Settings a2 AF-S priority selection and set to FOCUS.
  3. Go to Custom Settings a4 AF activation and set to AF-ON only – this takes AF activation away from the shutter release button.
  4. Put a wide angle lens on the camera.
  5. Set the lens focus switch to M/A.
  6. Set the D4 focus mode selector (the lever on the left side of the body front) to AF.
  7. Press the AF mode button and rotate the Command Dial (back one) to select AFS and NOT AFC.
  8. Rotate the Sub Command Dial (front one) to select S (single) and NOT Auto.
  9. Focus on your computer’s monitor screen using either the manual focus ring of the lens or the rear AF-ON button next to the Command Dial.
  10. If you’ve pressed the latter TAKE your thumb OFF!
  11. Move the camera directly away from the computer monitor screen so the image in the viewfinder goes soft.
  12. Jam your finger down on the shutter release. Nothing happens (if it does then start again!).
  13. Keeping that shutter button depressed and NOT touching the lens or AF button, move back towards the computer’s monitor screen – the shutter will fire when the monitor screen is sharp.

Got that?  Good!  Oh, and by the way, the award-winning shot you just missed – it would have been epic!

Now you’ve got a D4 that does trap focus.

Now for the trap focus caveats:

Trap Focus only works in AFS – not in AFC.

Trap Focus only works with a single AF sensor (AF-S, Single-point) – so correct prediction of that one AF sensor/subject alignment, to get the required “bits” in sharp focus and DoF, is going to be difficult.

Common Kestrel Landing
©Andy Astbury/Wildlife in Pixels

Do NOT think you can pull this wildlife shot off using TRAP FOCUS.

By the time the camera has detected the sharp focus and got over the system lock time and triggered the shutter, the bird will be way closer to the camera – and sharp focus in the resulting image will be behind the tail!

This shot is done with a manual focus trap – a completely different technique, as described HERE

The subject is too small, and too close to the camera and 500mm lens, for trap focus to work effectively.

However, if you are doing sports photography for instance, you are imaging subjects that are much bigger and a lot further away.

A 500mm f4 on an FX body has over 2 meters depth of field at f5.6 when focused at 40 meters.  Take a baseball match for instance – not that I’ve ever covered one mind!

Set the single AF sensor focus distance at home plate.

Then tilt the camera up slightly, or move the sensor with the D-pad, so it can’t see/is not overlaying what you just focused on. Hold the shutter button down and wait for a player to make a dive for home plate. As he enters the area of the AF sensor the camera will fire continually if you’re in continuous shooting mode, and will only stop when the camera detects focus has been lost.

Works like a charm!

The key thing is that the depth of field generated by the focus distance makes trap focus work for you. At much shorter distances, where depth of field is down to an inch or so if you’re lucky, couple that with a fast subject approach speed and trap focus falls down as a reliable method.
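The “over 2 metres at 40 metres” figure from the baseball example can be sanity-checked with the standard thin-lens depth-of-field approximation. A rough sketch – the 0.03 mm circle of confusion for an FX sensor is a textbook assumption of mine, not a figure from the post:

```python
def depth_of_field_m(focal_mm, f_number, distance_m, coc_mm=0.03):
    """Approximate total depth of field in metres (thin-lens formulas)."""
    s = distance_m * 1000.0  # work in millimetres
    hyperfocal = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = hyperfocal * s / (hyperfocal + s - focal_mm)
    far = hyperfocal * s / (hyperfocal - s + focal_mm)
    return (far - near) / 1000.0

# 500mm at f5.6, focused at 40 m:
print(round(depth_of_field_m(500, 5.6, 40), 2))  # just over 2 m
```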

If I’m doing studio flash work like this:


which is never often enough any more! – I sometimes find it useful to use trap focus, because it can help when doing hand-held work under the lowish flash unit modelling lights and you want to make sure eyes are sharp.

Using Trap Focus in a sort of ‘bastardised’ manner can help you maintain sharp focus on a model’s eyes whilst giving you freedom to move around, change composition, zoom etc. by controlling the sharpness of the image with the lens focus ring.

Like I said earlier, it’s a technique that can get you out of trouble every now and again, but up until today you hadn’t got recourse to it on the D4.

But you seriously need to understand the limitations of trap focus deployment before you rush out and use it – you could be very disappointed with the results, and it’ll be all your own fault for trying to bang a square peg through a round hole.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

What Shutter Speed?

Shutter speed, and the choices we make over it, can have a profound effect on the outcome of the final image.

Now everyone has a grasp of shutter speed and how it relates to subject movement – at least I hope they do!

We can either use a fast shutter speed to freeze constant action, or we can use a slow shutter speed to:

  • Allow us to capture movement of the subject for creative purposes
  • Allow us to use a lower ISO/smaller aperture when shooting a subject with little or no movement.

 

Fast Shutter Speed – I need MORE LIGHT Barry!

1/8000th sec @ f8, Nikon D4 and 500mm f4

Good strongish sunlight directly behind the camera floods this Red Kite with light when it rolls over into a dive.  I’m daft enough to be doing this session with a 500mm f4 that has very little in the way of natural depth-of-field, so I opt to shoot at f8.  Normally I’d expect to be shooting the D4 at ISO 2000 for action like this, but my top-end shutter speed is 1/8000th and this shutter speed at f8 was slightly too hot on the exposure front, so I knocked the ISO down to 1600 just to protect the highlights a little more.

Creative Slow Shutter Speed – getting rid of light.

1/5th sec @ f22

I wanted to capture the movement in a flock of seagulls taking off from the water, so now I have to think the opposite way to the Kite shot above.

Firstly I need to think carefully about the length of shutter speed I choose: too short and I won’t capture enough movement; and too long will bring a vertical movement component into the image from me not being able to hold the camera still – so I opt for 1/5th sec.

Next to consider is aperture.  Diffraction has little impact on a deliberate motion blur, but believe it or not focus and depth of field DO – go figure!

So I can run the lens at f16/20/22 without much of a worry, and 100 ISO gets me the 1/5th sec shutter speed I need at f22.

 

Slow Shutter + Rear Curtain Synch Flash

We can use a combination of both techniques in one SINGLE exposure with the employment of flash, rear curtain synch and a relatively slow shutter speed:

6/10th sec @ f3.5 -1Ev rear curtain synch flash

A technique the “Man Cub” uses to great effect in his nightclub photography, here he’s rotated the camera whilst the shutter is open, thus capturing the glowing LEDs and other highlights as circular trails.  As the shutter begins to close, the scene is lit by the 1/10,000th sec burst of light from the reduced power, rear curtain synched SB800 flash unit.

But things are not always quite so cut-and-dried – are they ever?

Assuming the lens you use is tack sharp and the subject is perfectly focused there are two factors that have a direct influence upon how sharp the shot will be:

  • System Vibration – caused by internal vibrations, most notably from the mirror being activated.
  • Camera Shake – caused by external forces like wind, ground vibration or you not holding the camera properly.

Shutter Speed and System Vibration

There was a time when we operated on the old adage that the slowest shutter speed you needed for general hand held shooting was equal to 1/focal length.

So if you were using a 200mm lens you shot with a minimum shutter speed of 1/200th sec, and, for the most part, that rule served us all rather well with 35mm film; assuming of course that 1/200th sec was sufficient to freeze the action!

Now this is a somewhat optimistic rule and assumes that you are hand holding the camera using a good average technique.  But put the camera on a tripod and trigger it with a cable or remote release, and it’s a whole new story.

Why?  Because sticking the camera on a tripod and not touching it during the exposure means that we have taken away the “grounding effect” of our mass from the camera and lens; thus leaving the door open for system vibration to ruin our image.

 

How Does System Vibration Affect an Image?

Nowadays we live in a digital world, with very high resolution sensors instead of film, and the very nature of a sensor – its pixel structure, to use the common parlance – has a direct influence on minimum shutter speed.

So many camera owners today have the misguided notion that using a tripod is the answer to all their prayers in terms of getting sharp images – sadly this ain’t necessarily so.

They also have the other misguided notion that “more megapixels” makes life easier – well, that definitely isn’t true!

The smallest detail that can be recorded by a sensor is a point of light in the projected image that has the same dimensions as one photosite/pixel on that sensor. So, even if a point is SMALLER than the photosite it strikes, its intensity or luminance will affect the whole photosite.

A point of light smaller than 1 photosite (left) has an effect on the whole photosite (right).

If the lens is capable of resolving this tiny detail, our sensor – in this case (right) – isn’t, and so the lens out-resolves the sensor.

But let’s now consider this tiny point detail and how it affects a sensor of higher resolution; in other words, a sensor with smaller photosites:

The same detail projected onto a higher resolution sensor (right). Though not shown, the entire photosite will be affected, but its surface area represents a much smaller percentage of the whole sensor area – the sensor now matches the lens resolution.

Now this might seem like a good thing; after all, we can resolve smaller details.  But, there’s a catch when it comes to vibration:

A certain level of vibration causes the small point of light to vibrate. The extremes of this vibration are represented by the outline circles.

The degree of movement/vibration/oscillation is identical on both sensors; but the resulting effect on the exposure is totally different:

The same level of vibration has more effect on the higher resolution sensor.

If you read the earlier post on sensor resolution and diffraction HERE you’ll soon identify the same concept.

The upshot of it all is that “X” level of internal system vibration has a greater effect on a higher resolution sensor than it does on a lower resolution sensor.
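To put some illustrative numbers on that, here’s a small sketch comparing the same physical blur on a 12Mp D3 sensor and a 36Mp D800-class sensor. The pixel counts are the published horizontal resolutions; the 10 micron blur amplitude is an arbitrary figure chosen purely for the example:

```python
# Illustrative only: the same physical vibration blur, measured in pixels,
# on two FX sensors of different resolution. The 10 micron blur amplitude
# is an invented example value, not a measured one.

SENSOR_WIDTH_MM = 36.0   # FX sensor width

def blur_in_pixels(h_pixels, blur_microns):
    """How many pixels a given physical blur smears across."""
    pitch_microns = SENSOR_WIDTH_MM * 1000.0 / h_pixels   # pixel pitch
    return blur_microns / pitch_microns

for name, px in [("D3 (12Mp, 4256px wide)", 4256), ("D800 (36Mp, 7360px wide)", 7360)]:
    print(f"{name}: {blur_in_pixels(px, 10):.2f} px of blur")
```

The identical physical movement smears across roughly 1.2 pixels on the D3 but about 2 pixels on the higher resolution sensor – which is exactly why more megapixels make vibration more visible, not less.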

Now what’s all this got to do with shutter speed, I hear you ask.  Well, whereas 1/focal length used to work pretty well back in the day, we need to advance the theory a little.

Let’s look at four shots from a Nikon D3, shot with a 300mm f2.8, mounted on a tripod and activated by a remote (so no finger-jabbing on the shutter button to effect the images).

Also please note that the lens is MANUALLY FOCUSED just once, so is sharply on the same place for all 4 shots.

These images are full resolution crops, I strongly recommend that you click on all four images to open them in new tabs and view them sequentially.

Shutter = 1/1x (1/320th) Focal Length. No VR, No MLU (Mirror Lock Up). Camera on Tripod+remote release.

Shutter = 1/2x (1/640th) Focal length. No VR. No MLU. Camera on Tripod+remote release.

Shutter = 1/2x Focal length + VR. No MLU. Camera on Tripod+remote release.

Shutter = 1/2x Focal length. Camera on Tripod+remote release + MLU – NO VR + Sandbag.

Now the thing is, the first shot at 1/320th looks crap because it’s riddled with system vibration – mainly a result of what’s termed ‘mirror slap’.  These vibrations travel up the lens barrel and are then reflected back by the front of the lens.  You basically end up with a packet of vibrations running up and down the lens barrel until they eventually die out.

These vibrations in effect make the sensor and the image being projected onto it ‘buzz, shimmy and shake’ – thus we get a fuzzy image; and all the fuzziness is down to internal system vibration.

We would actually have got a sharper shot hand holding the lens – the act of hand holding kills the vibrations!

As you can see in shot 2 we get a big jump in vibration reduction just by cranking the shutter speed up to 2x focal length (actually 1/640th).

The shot would be even sharper at 3x or 4x, because the vibrations are of a set frequency and thus speed of travel, and the faster the shutter speed we use the sooner we can get the exposure over and done with before the vibrations have any effect on the image.

We can employ ‘mirror up shooting’ as a technique to combat these vibrations; by lifting the mirror and then pausing to give the vibrations time to decay; and we could engage the lens VR too, as with the 3rd shot.  Collectively there has been another significant jump in overall sharpness of shot 3; though frankly the VR contribution is minimal.

I’m not a very big fan of VR !

In shot 4 you might get some idea why I’m no fan of VR.  Everything is the same as shot 3 except that the VR is OFF, and we’ve added a 3lb sandbag on top of the lens.  This does the same job as hand holding the lens – it kills the vibrations stone dead.

When you are shooting landscapes with much longer exposures/shutter speeds THE ONLY way to work is tripod plus mirror up shooting AND if you can stand to carry the weight, a good heavy sand bag!

Shot 4 would have been just as sharp if the shutter had been open for 20 seconds, just as long as there was no movement at all in the subject AND there was no ground vibration from a passing heavy goods train (there’s a rail track between the camera and the subject!).

For general tripod shooting of fairly static subjects I was always confident of sharp shots on the D3 (12Mp) at 2x focal length.

But since moving to a 16Mp D4 I’ve now found that sometimes this lets me down, and that 2.5x focal length is a safer minimum to use.

But that’s nothing compared to what some medium format shooters have told me; where they can still detect the effects of vibration on super high resolution backs such as the IQ180 etc at as much as 5x focal length – and that’s with wide angle landscape style lenses!
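If you want to turn that rule of thumb into numbers, a trivial helper does it. The multipliers are the ones quoted above, and the 300mm focal length is just an example:

```python
# Rule-of-thumb helper: minimum tripod shutter speed as a multiple of focal
# length, where the multiplier grows with sensor resolution (2x for a 12Mp D3,
# 2.5x for a 16Mp D4, reportedly up to 5x on very high resolution MF backs).

def min_shutter_denominator(focal_mm, multiplier):
    """Slowest 'safe' shutter speed, expressed as the x in 1/x sec."""
    return focal_mm * multiplier

for body, mult in [("D3 (2x)", 2), ("D4 (2.5x)", 2.5), ("IQ180-class (5x)", 5)]:
    print(f"300mm on {body}: at least 1/{min_shutter_denominator(300, mult):g} sec")
```

So the same 300mm lens that was “safe” at 1/600th on a D3 wants 1/750th on a D4, and a reported 1/1500th on an IQ180-class back.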

So, overall my advice is to ALWAYS push for the highest shutter speed you can possibly obtain from the lighting conditions available.

Where this isn’t possible you really do need to perfect the skill of hand holding – once mastered you’ll be amazed at just how slow a shutter speed you can use WITHOUT employing the VR system (VR/IS often causes far more problems than it would apparently solve).

For long lens shooters the technique of killing vibration at low shutter speeds when the gear is mounted on a tripod is CRITICAL, because without it, the images will suffer just because of the tripod!

The remedy is simple – it’s what your left arm is for.

So, to recap:

  • If you shot without a tripod, the physical act of hand holding – properly – has a tendency to negate internal system vibrations caused by mirror slap etc just because your physical mass is in direct contact with the camera and lens, and so “damps” the vibrations.
  • If you shoot without a tripod you need to ensure that you are using a shutter speed fast enough to negate camera shake.
  • If you shoot without a tripod you need to ensure that you are using a shutter speed fast enough to FREEZE the action/movement of your subject.

 

Camera Shake and STUPID VR!

Now I’m going to have to say at the outset that this is only my opinion, that it’s pointed at Nikon’s VR system, and that I don’t strictly know whether Canon’s IS system works on the same math.

And this is not relevant to sensor-based stabilization, only the ‘in the lens’ type of VR.

The mechanics of how it works are somewhat irrelevant, but what is important is its working methodology.

Nikon VR works at a frequency of 1000Hz.

What is a “hertz”?  Well 1Hz = 1 full frequency cycle per second.  So 1000Hz = 1000 cycles per second, and each cycle is 1/1000th sec in duration.

Full cycle sine wave showing 1, 0.5 & 0.25 cycles.

Now then, here’s the thing.  The VR unit is measuring the angular momentum of the lens movement at a rate of 1000 times per second. So in other words it is “sampling” movement every 1/1000th of a second and attempting to compensate for that movement.

But Nyquist-Shannon sampling theory – if you’re up for some mind-warping click HERE – says that a system sampling at 1000Hz can only effectively capture movement up to half its working frequency – 500 cycles per second.

What is the time duration of one cycle at a frequency of 500Hz?  That’s right – 1/500th sec.

So basically, for normal photography, VR ceases to be of any real use at any shutter speed faster than 1/500th.
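The arithmetic behind that claim can be sketched like this – it simply counts how many “effective” 500Hz cycles fit inside a given exposure, following the Nyquist argument above:

```python
# Sketch of the reasoning above: Nikon VR samples movement at 1000Hz, and by
# the Nyquist argument is only effective at half that rate. An exposure needs
# to span at least one full "effective" cycle for VR to have anything to work on.

VR_SAMPLE_HZ = 1000
EFFECTIVE_HZ = VR_SAMPLE_HZ / 2          # Nyquist: half the sampling frequency

def effective_cycles(shutter_denominator):
    """How many effective (500Hz) cycles fit inside a 1/x sec exposure."""
    exposure_s = 1.0 / shutter_denominator
    return exposure_s * EFFECTIVE_HZ

for denom in (2000, 640, 500, 250, 60):
    print(f"1/{denom} sec -> {effective_cycles(denom):.2f} effective cycles")
```

At 1/2000th only a quarter of a cycle fits in the exposure; at 1/640th (the test shot earlier) not even one full cycle; at 1/250th you get two – which lines up with the “fairly useful at 1/4 working frequency” comment below.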

Remember shot 3 with the 300mm f2.8 earlier – I said the VR contribution at 1/640th was minimal?  Now you know why I said it!

Looking again at the frequency diagram above, we may get a fairly useful sample at 1/4 working frequency – 1/250th sec; but other than that my personal feeling about VR is that it’s junk – under normal circumstances it should be turned OFF.

What circumstances do I class as abnormal? Sitting on the floor of a heli doing aerial shots out of the open door springs to mind.

If you are working in an environment where something is vibrating YOU while you hand hold the camera then VR comes into its own.

But if it’s YOU doing the vibrating/shaking then it’s not going to help you very much in reality.

Yes, it looks good when you try it in the shop, and the sales twat tells you it’ll buy you three extra stops in shutter speed so now you can get shake-free shots at 1/10th of a second.

But unless you are photographing an anaesthetized sloth or a statue, that 1/10th sec shutter speed is about as much use to you as a hole in the head. VR/IS only stabilizes the lens image – it doesn’t freeze time and stop a bird from flapping its wings, or indeed a bride’s veil from billowing in the breeze.

Don’t get me wrong; I’m not saying VR/IS is a total waste of time in ALL circumstances.  But I am saying that it’s a tool that should only be deployed when you need it, and YOU need to understand WHEN that time is; AND you need to be aware that it can cause major image problems if you use it in the wrong situation.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

In Conclusion

1/2000th sec is sufficient to pretty much freeze the forward motion of this eagle, but not the downward motion of the primary feathers.

This rather crappy shot of a White-tailed eagle might give you food for thought, especially if compared with the Red Kite at the start of the post.

The primary feathers are soft because we’ve run out of depth of field.  But, notice the motion blur on them too?  Even though 1/2000th sec in conjunction with a good panning technique is ample to freeze the forward motion of the bird, that same 1/2000th sec is NOT fast enough to freeze the speed of the descending primary feathers on the end of that 4 foot lever called a wing.

Even though your subject as a whole might be still for 1/60th sec or longer, unless it’s dead, some small part of it will move.  The larger the subject is in the frame, the more apparent that movement will be.

Getting good sharp shots without motion blur in part of the subject, or camera shake and system vibration screwing up the entire image is easy; as long as you understand the basics – and your best tool to help you on your way is SHUTTER SPEED.

A tack sharp shot without blur but full of high ISO noise is vastly superior to a noiseless shot full of blur and vibration artefacting.

Unless it’s done deliberately of course – “H-arty Farty” as my mate Ole Martin Dahle calls it!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Metering Modes Explained

Camera Metering Modes

Become a Patron!

I always get asked about which camera metering mode I use,  and to be honest, I think sometimes the folk doing the asking just can’t get their heads around my simplistic, and sometimes quite brutal answers!

“Andy, it’s got to be more complicated than that surely….otherwise why does the camera give me so many options…?”

Well, I always like to keep things really simple, mainly because I’m not the brightest diamond in the jewellery shop, and because I’m getting old and most often times my memory keeps buggering off on holiday without telling me!

But before I espouse on “metering the Uncle Andy way” let’s take a quick look at exactly how the usual metering options work and their effects on exposure.

The Metering Modes

  • Average (a setting usually buried in the center-weighted menu)
  • Spot
  • Center-weighted
  • 3D Matrix (Nikon) or Evaluative (Canon)

Metering Mode Icons

You can continue reading this article FREE over on my public Patreon posts pages.  Just CLICK HERE

Auto Focus & Shooting Speed

Auto Focus & Shooting Speed

Firstly, an apology to my blog followers for the weird blog post notification this morning – I had one of those “senior moments” where I confused the Preview button with Publish – DOH!

There is truly no hope………..!  But let’s get on….

The effectiveness of auto focus and its ability to track and follow a moving subject IS INFLUENCED by frame rate.

Why is this, I hear you ask?

Well, it’s simple, and logical if you think about it – where are your AF sensors?

They’re in the bottom of your camera’s mirror box.

Most folk think that the mirror just sits there, reflecting at 45 degrees all the light that comes through the lens up to the focus screen and viewfinder.  The fact that the mirror is still DOWN when they are using the auto focus leads most people into thinking the AF sensor array is elsewhere – that’s if they can be bothered to think about it in the first place.

 

So how does the AF array SEE the scene?

Because the center area of the main mirror is only SEMI-silvered, and in reality light from the lens does actually pass through it.

 

Main mirror of a Nikon D2Xs in the down position.

 

Now I don’t recommend you jam a ball point pen under your own main mirror, but in the next image:

 

Main mirror of a Nikon D2Xs lifted so you can see the secondary mirror.

 

Now there’s a really good diagram of the mechanics at http://www.reikan.co.uk/ – makers of FoCal software, and I’ll perhaps get my goolies cut off for linking to it, but here it is:

 

This image belongs to Reikan

 

As you can now hopefully understand, light passes through the mirror and is reflected downwards by the secondary mirror into the AF sensor array.

As long as the mirror is DOWN the auto focus sensor array can see – and so do its job.

Unless the MAIN mirror is fully down, the secondary mirror is not in the correct position to send light to the auto focus sensor array – SO GUESS WHAT – that’s right, your AF ain’t working; or at least it’s just guessing.

So how do we go about giving the main mirror more “down time”?  Simply by slowing the frame rate down is how!

When I’m shooting wildlife using a continuous auto focus mode I tend to shoot at 5 frames per second in Continuous LOW (Nikon-speak), and have the Continuous HIGH setting in reserve set for 9 frames per second.

 

The Scenario Forces Auto Focus Settings Choices

From a photography perspective we are mainly concerned with subjects CROSSING or subjects CLOSING our camera position.

Once focus is acquired on a CROSSING subject (one that’s not changing its distance from the camera) then I might elect to use a faster frame rate as mirror-down-time isn’t so critical.

But subjects that are either CLOSING or CROSSING & CLOSING are far more common; and head on CLOSING subjects are the ones that give our auto focus systems the hardest workout – and show the system failures and short-comings the most.

Consider the focus scale on any lens you happen to have handy – as you focus closer to you the scale divisions get further apart; in other words the lens focus unit has to move further to change from say 10 meters to 5 meters than it does to move from 15 meters to 10 meters – it’s a non-linear scale of change.

So the closer a subject comes to your camera position the greater is the need for the auto focus sensors to see the subject AND react to its changed position – and yes, by the time it’s acquired focus and is ready to take the next frame the subject is now even closer – and things get very messy!

That’s why high grade dSLR auto focus systems have ‘predictive algorithms’ built into them.

Also, the amount of light on the scene AND the contrast between subject and background ALL affect the ability of the auto focus to do its job.  Even though most prosumer and all pro body systems use phase detection auto focus, contrast between the subject to be tracked and its background does impact the efficiency of the overall system.

A swan against a dark background is a lot easier on the auto focus system than a panther in the jungle or a white-tailed eagle against a towering granite cliff in Norway, but the AF system in most cameras is perfectly capable of acquiring, locking on and tracking any of the above subjects.

So as a basic rule of thumb the more CLOSING a subject is then the LOWER your frame rate needs to be if you are looking for a sharp sequence of shots.  Conversely the more CROSSING a subject is then the higher the frame rate can be and you might still get away with it.

 

Points to Clarify

The mechanical actions of an exposure are:

  1. Mirror lifts
  2. Front shutter curtain falls
  3. Rear shutter curtain falls
  4. Mirror falls closed (down)

Here’s the thing; the individual time taken for each of these actions is the same ALL the time – irrespective of whether the shutter speed is 1/8000th sec or 8 sec; it’s the gap in between 2. & 3. that makes the difference.

And it’s the ONLY thing shutter-related we’ve got any control over.

So one full exposure takes t1 + t2 + shutter speed + t3 +t4, and the gap between t4 and the repeat of t1 on the next frame is what gives us our mirror down time between shots for any given frame rate.  So it’s this time gap between t4 and the repeat of t1 that we lengthen by dropping the shooting speed frame rate.
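As a sketch of that arithmetic: the mechanical timings below are invented placeholders (the real t1–t4 values aren’t published), purely to show how dropping the frame rate stretches the mirror-down gap the AF system gets between shots:

```python
# Sketch of the t1..t4 arithmetic above. These mechanical timings are
# HYPOTHETICAL placeholders, not real camera specs - the point is only the
# relationship between frame rate and mirror-down time.

T_MIRROR_UP = 0.010       # t1, seconds (hypothetical)
T_FRONT_CURTAIN = 0.003   # t2 (hypothetical)
T_REAR_CURTAIN = 0.003    # t3 (hypothetical)
T_MIRROR_DOWN = 0.010     # t4 (hypothetical)

def mirror_down_gap(fps, shutter_speed_s):
    """Time per frame left with the mirror down, for the AF array to see."""
    frame_interval = 1.0 / fps
    busy = (T_MIRROR_UP + T_FRONT_CURTAIN + shutter_speed_s
            + T_REAR_CURTAIN + T_MIRROR_DOWN)
    return frame_interval - busy

for fps in (5, 9, 11):
    gap = mirror_down_gap(fps, 1 / 1000)
    print(f"{fps} fps -> {gap * 1000:.0f} ms of mirror-down time per frame")
```

With these example figures, 5 fps leaves the AF array well over twice the viewing time per frame that 11 fps does – which is the whole argument for slowing down on closing subjects.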

There’s another problem with using 10 or 11 frames per second with Nikon D3/D4 bodies.

10 fps on a D3 LOCKS the exposure to the values/settings of the first frame in the burst.

11 fps on a D3 LOCKS both exposure AND auto focus to the values/settings of the first frame in the burst.

11 fps on a D4 LOCKS both exposure AND auto focus* to those of the first frame in the burst – and it’s one heck of a burst to shoot where all the shots can be out of focus (and badly exposed) except the first one!

*Page 112 of the D4 manual says that at 11fps the second and subsequent shots in a burst may not be in focus or exposed correctly.

That’s Nikon-speak for “If you are photographing a statue or a parked car ALL your shots will be sharp and exposed the same; but don’t try shooting anything that’s getting closer to the camera, and don’t try shooting things where the frame exposure value changes”.

 

There’s a really cool video of 11 fps slowed right down with 5000fps slo-mo HERE – but for Christ’s sake turn your volume down, because the soundtrack is some Marlene Dietrich wannabe!

So if you want to shoot action sequences that are sharp from the first frame to the last then remember – DON’T be greedy – SLOW DOWN!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Flash Photography

Flash Photography

 

Really Cute Red Squirrel

 

On Sunday myself and my buddy Mark Davies made a short foray up to the Lake District and our small Red Squirrel site.  The weather was horrible – sleet, sun, rain, cloud, sun, then rain again – in other words just not conducive to a half-decent session on the D4.

The one Achilles heel with this site is the fact that it’s hard to get a decent background for your shots – it’s in the middle of a small wooded valley and you just can’t get away from tree trunks in the background.

This is further complicated by the fact that the “Squidgers” have a propensity for keeping in the ‘not so sunny’ bits, so frequently you end up with a scenario where backgrounds are brighter than foregrounds – which just won’t DO!

So what’s needed is some way to switch the lighting balance around to give a brighter foreground/subject AND a darker background.

Now that sounds all very well BUT; how do we achieve it?

Reflectors perhaps?  They’d do the trick but have one big problem; they rely on AMBIENT light  – and in the conditions we were shooting in the other day the value of the ambient light was up and down like a Yo-Yo.

Wouldn’t it be cool if we could have a consistent level of subject/foreground illumination AND at the same time have some degree of control over the exposure of the background?

Well with flash we can do just that!

Let’s look at a shot without flash:

 

No FLASH, AMBIENT light only – 1/320th @ f7.1

 

I don’t suppose this shot is too bad because the background isn’t strongly lit by the sun (it’s gone behind a cloud again!) but the foreground and background are pretty much the same exposure-wise.  For me there is not enough tonal separation between the two areas of the image, and the lighting is a bit flat.

If we could knock a stop or so out of the background; under expose it, then the image would have more tonal separation between foreground and background, and would look a lot better, but of course if we’re just working with ambient light then our adjusted exposure would under expose the foreground as well, so we’d be no better off.

Now look at the next image – we’ve got a background that’s under exposed by around  -1.5Ev, but the subject and foreground are lit pretty much to the same degree as before, and we’ve got a little more shape and form to the squirrel itself – it’s not quite so flat-looking.

 

With FLASH added – 1/800th @ f7.1

 

The image also has the slight sense that it’s been shot in more sunny conditions – which I can promise you it wasn’t !

And both images are basically straight off the camera, just with my neutral camera profile applied to them on import.

 

The Set Up

The Setup – shocking iPhone 3 quality!

 

The first secret to good looking flash photography OF ANY KIND is to get the damn flash OFF the camera.

If we were in a totally dark studio with the sexiest looking model on the planet we’d NOT be lighting her with one light from the camera position now would we?

So we use basic studio lighting layouts where ever we can.

There are two other things to consider too:

  •   It’s broad daylight, so our exposure will contain both FLASH and an element of AMBIENT light – so we are working along the premise of ADDING to what’s already there.
  •   If we put the flash closer to the subject (off camera) then the output energy has less distance to travel in order to do its job – so it doesn’t have to have as much power behind it as it would have if emanating from the camera position.

 

You can see in the horrible iPhone 3 shot I took of the setup that I’m using two flash guns with white Lambency diffusers on them; one on a stand to the left and slightly in front of the log where the squirrels will sit, and one placed on the set base (Mr. Davies old knackered Black & Decker Workmate!) slightly behind the log and about the same distance away from where I anticipate a squirrel will sit on the log as the left flash.

The thing to note here is that I’m using the SIDE output of these Lambency diffuser domes and NOT the front – that’s why they are pointed up at the sky. The side output of these diffusers is very soft – just what the flash photography doctor ordered in terms of ‘keeping it real’.

The left light is going to be my MAIN light, the right is my FILL light.

The sun, when & if it decides to pop its head out, will be behind me and to my left so I place my MAIN light in a position where it will ‘simulate’ said ball in the sky.

The FILL light basically exists to ‘counter balance’ the ‘directionality’ of the MAIN light, and to weaken any shadows thrown by the MAIN light.

Does this flash bother a subject? For the most part NOT SO YOU’D NOTICE!

Take a look at the shot below – the caption will be relevant shortly.

This SB800 has just fired in “front curtain synch” and the balance of the exposure is from the ambient light – the shutter is still open after the flash has died. Does the squirrel look bothered?

Settings & The Black Art!

Before we talk about anything else I need to address the shutter curtain synch question.

We have two curtain synch options, FRONT & REAR.

Front Curtain (as in the shot above) – this means that the flash will fire as the front curtain starts to move, and most likely, the flash will be finished long before the rear curtain closes. If your subject reacts to the flash then some element of subject movement might be present in the shot due to the ambient light part of the exposure.

Rear Curtain Synch – my recommended ‘modus operandi’ – the ‘ambient only’ part of the exposure gets done first, then the flash fires as the rear curtain begins to close the exposure. This way, if the subject reacts to the flash, the exposure will be over before it has a chance to – MOSTLY!

The framing and the depth of field I want dictate my camera position and aperture – in this case f7 or f8; f7.1 is what I actually went for.

 

I elect to go with ISO 2000 on the D4.

So now my only variable is shutter speed.

Ambient light dictates that to be 1/320th on average, and I want to UNDER EXPOSE that background by at least a stop and a bit (technical terms indeed!) so I elect to use a shutter speed of 1/800th.

So that’s it – I’m done; seeing as the light from the flashes will be constant my foreground/subject will ALWAYS be exposed correctly. In rear curtain synch I’ll negate the risk of subject movement ‘ghosting’ in the image, and at 1/800th I’ll have a far better chance of eliminating motion blur caused by a squirrel chewing food or twitching its whiskers etc.
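That “stop and a bit” of under-exposure is easy to sanity-check. Here’s a rough sketch (the function name is mine, not a camera term): the difference in stops between two shutter speeds is just a base-2 logarithm of their ratio.

```python
import math

def stops_between(metered, chosen):
    """Stops of under-exposure when moving from the metered ambient
    shutter speed to a faster chosen speed (both in seconds)."""
    return math.log2(metered / chosen)

# Ambient meters at 1/320th; shooting at 1/800th under-exposes
# the background by roughly a stop and a third:
print(round(stops_between(1/320, 1/800), 2))  # 1.32
```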

 

Triggering Off-Camera Flashes

 

We can fire off-camera flashes in a number of ways, but distance, wet ground, occasional rain and squirrels with a propensity for chewing everything they see means CORDS ain’t one of ’em!

With the Nikon system that I obviously use, we could employ another flash on-camera in MASTER/COMMANDER mode with the flash pulse deactivated; a dedicated commander such as the SU800; or the built-in flash, if your camera has one with a commander mode in the menu.

The one problem with Nikon’s CLS triggering system, and Canon’s as far as I know, is the reliance upon infra-red as the communication band. This is prone to a degree of unreliability in what we might term ‘dodgy’ conditions outdoors.

I use a Pocket Wizard MiniTT1 atop the camera and a FlexTT5 under my main light. The beauty of this system is that the comms is RADIO – far more reliable outdoors than IR.

Because a. I’m poor and can’t afford another TT5, and b. my MAIN and FILL lights are close together, I put the SB800 FILL light in SU-4 mode so it gets triggered optically by the flash from the MAIN light.

What I wouldn’t give for a dozen Nikon SB910s and 12 TT5s – I’d kill for them!

The MAIN light itself is in TTL FP mode.

The beauty of this setup is that the MAIN light ‘thinks’ the TT5 is a camera, and the camera ‘thinks’ the MiniTT1 is a flash gun, so I have direct communication between camera and flash of ISO and aperture information.

Also, I can turn the flash output down by up to -3Ev using the flash exposure compensation button without it having an effect on the background ambient exposure.

Don’t forget, seeing as my exposure is always going to be 1/800th @ f7.1 at ISO 2000, the CAMERA is in MANUAL exposure mode. So as long as the two flashes output enough light to expose the subject correctly at those settings (which they always will until the batteries die!) I basically can’t go wrong.

When shooting like this I also have a major leaning towards shooting in single servo – one shot at a time with just one AF point active.

 

Flash Photography – Flash Duration or Burn Time

Now here’s what you need to get your head around: as you vary the output of a flash like the SB800, the DURATION of the flash or BURN TIME of the tube changes.

Below are the quoted figures for the Nikon SB800, burn time/output:

1/1050 sec. at M1/1 (full) output
1/1100 sec. at M1/2 output
1/2700 sec. at M1/4 output
1/5900 sec. at M1/8 output
1/10900 sec. at M1/16 output
1/17800 sec. at M1/32 output
1/32300 sec. at M1/64 output
1/41600 sec. at M1/128 output
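The relationship is easy to see if you wrap Nikon’s quoted table in a quick lookup – just a sketch of the figures above, nothing more:

```python
# Nikon's quoted SB800 burn times (seconds) per manual power setting
SB800_BURN_TIME = {
    '1/1': 1/1050, '1/2': 1/1100, '1/4': 1/2700, '1/8': 1/5900,
    '1/16': 1/10900, '1/32': 1/17800, '1/64': 1/32300, '1/128': 1/41600,
}

# Dropping the power shortens the pulse - at 1/16 power the tube
# burns for roughly a tenth of the time it does at full power:
print(round(SB800_BURN_TIME['1/1'] / SB800_BURN_TIME['1/16'], 1))  # 10.4
```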

On top of that there’s something else we need to take into account – and this goes for Canon shooters too; though Canon terminology is different.

Shutter Speed & The FP Option

35mm format cameras all have a falling-curtain focal-plane shutter with two curtains: a front one and a rear one.

As you press the shutter button the FRONT curtain starts to fall, then the rear curtain chases after it; the two meet at the bottom of the shutter plane and the exposure is over.

The LONGER or slower the shutter speed the greater head-start the front curtain has!

At speeds of 1/250th and slower the front curtain has reached the end of its travel BEFORE the rear curtain wakes up and decides to move – in other words THE SENSOR is FULLY exposed.

The fastest shutter speed that results in a FULLY EXPOSED film plane/sensor is the basic camera-to-flash synch speed; X synch as it used to be called, and when I started learning about photography this was usually 1/60th; and on some really crap cameras it was 1/30th!

But with modern technology and light weight materials these curtains can now get moving a lot faster, so basic synch now runs at 1/250th for a full frame DSLR.

If you go into your camera’s flash menu you’ll find an AUTO FP setting on Nikon; Canon refers to this as HSS or High Speed Synch – which makes far more sense (Nikon please take note, Canon got something right so please replicate!).

There’s something of an argument as to whether FP stands for Focal Plane or Flash Pulse; and frankly both are applicable, but it means the same as Canon’s HSS or High Speed Synch.

At speeds above/faster than 1/250th the sensor/film plane is NOT fully exposed. The gap between the front and rear curtains forms a slot or ‘letter box’ that travels downwards across the face of the sensor, so the image is, if you like, ‘scanned’ onto the imaging plane.

Obviously this is going to cause one heck of an exposure problem if the flash output is ‘dumped’ as a single pulse.

So FP/HSS mode physically pulses or strobes the flash output to the point where it behaves like a continuous light source.

If the flash was to fire with a single pulse then the ‘letterbox slot’ would receive the flash exposure, but you’d end up with bands of under exposure at the bottom or top of the image depending on the curtain synch mode – front or rear.

In FP/HSS mode the power output of each individual pulse in the sequence will drop as the shutter speed shortens, so even though you might have 1:1 power selected on the back of the flash itself (which I usually do on the MAIN light, and 1/2 on the FILL light) the pulses of light will be of lower power, but their cumulative effect gives the desired result.

By reviewing the shot on the back of the camera we can compensate for changes in ambient in the entire scene (we might want to dilute the effect of the main light somewhat if the sun suddenly breaks out on the subject as well as the background) by raising the shutter speed a little – or we might want to lighten the shot globally by lowering the shutter speed if it suddenly goes very gloomy.

We might want to change the balance between ambient and flash; this again can be done from the camera with the flash exposure compensation controls; or if needs be, by physically getting up and moving the flash units a little nearer to, or further from, the subject.

All in all, using flash is really easy, and always has been.

Except nowadays manufacturers tend to put far more controls and modes on things than are really necessary; the upshot of which is to frighten the uninitiated and then confuse them even further with instruction manuals that appear to be written by someone under the influence of Class A drugs!

 

"Trouble Brewing.." Confrontation over the right to feed between two Red Squirrels.


 

The whole idea of flash is that it should do its job but leave no obvious trace to the viewer.

But its benefits to you as the photographer are invaluable – higher shutter speeds, more depth of field and better isolation of the subject from its background are the three main ones that you need to be taking advantage of right now.

If you have the gear and don’t understand how to use it then why not book a tuition day with me – then perhaps I could afford some more TT5s!


Accurate Camera Colour within Lightroom

Obtaining accurate camera colour within Lightroom 5 – in other words, making the pics in your Lr Library look like they did on the back of the camera – is a problem I’m asked about more and more since the advent of Lightroom 5 AND the latest camera marks – especially Nikon!

UPDATE NOTE: Please feel free to read this post THEN go HERE for a further post on achieving image NEUTRALITY in Lightroom 6/CC 2015

Does this problem look familiar?


Back of the camera (left) to Lightroom (right) – click to enlarge.

The image looks fine (left) on the back of the camera, fine in the import dialogue box, and fine in the library module grid view UNTIL the previews have been created – then it looks like the image on the right.

I hear complaints that the colours are too saturated and the contrast has gone through the roof, the exposure has gone down etc etc.

All the visual descriptions are correct, but what’s responsible for the changes is mostly down to a shift in contrast.

Let’s have a closer look at the problem:


Back of the camera (left) to Lightroom (right) – click to enlarge.

The increase in contrast has resulted in “choking” of the shadow detail under the wing of the Red Kite, loss of tonal separation in the darker mid tones, and a slight increase in the apparent luminance noise level – especially in that out-of-focus blue sky.

And of course, the other big side effect is an apparent increase in saturation.

You should all be aware of my saying that “Contrast Be Thine Enemy” by now – and so we’re hardly getting off to a good start with a situation like this are we…………

So how do we go about obtaining accurate camera colour within Lightroom?

Firstly, we need to understand just what’s going on inside the camera with regard to various settings, and what happens to those settings when we import the image into Lightroom.

Camera Settings & RAW files

Let’s consider all the various settings with regard to image control that we have in our cameras:

  • White Balance
  • Active D lighting
  • Picture Control – scene settings, sharpening etc:
  • Colour Space
  • Distortion Control
  • Vignette Control
  • High ISO NR
  • Focus Point/Group
  • Uncle Tom Cobbly & all…………..

All these are brought to bear to give us the post-view jpeg on the back of the camera.

And let’s not forget

  • Exif
  • IPTC

That post-view/review jpeg IS subjected to all the above image control settings, and is embedded in the RAW file; and the image control settings are recorded in what is called the raw file “header”.

It’s actually a lot more complex than that, with IFD & MakerNote tags and other “scrummy” tech stuff – see this ‘interesting’ article HERE – but don’t fall asleep!

If we ship the raw file to our camera manufacturer’s RAW file handler software, such as Nikon CapNX, then the embedded jpeg and the raw header data form the image preview.

However, to equip Lightroom with the ability to read headers from every digital camera on the planet would be physically impossible, and in my opinion, totally undesirable as it’s a far better raw handler than any proprietary offering from Nikon or Canon et al.

So, in a nutshell, Lightroom – and ACR – bin the embedded jpeg preview and ignore the raw file header, with the exception of white balance, together with Exif & IPTC data.

However, we still need to value the post jpeg on the camera because we use it to decide many things about exposure, DoF, focus point etc – so the impact of the various camera image settings upon that image has to be assessed.

Now here’s the thing about image control settings “in camera”.

For the most part they increase contrast, saturation and vibrancy – and as a consequence can DECREASE apparent DYNAMIC RANGE.  Now I’d rather have total control over the look and feel of my image rather than hand that control over to some poxy bit of cheap post-ASIC circuitry inside my camera.

So my recommendations are always the same – all in-camera ‘picture control’ type settings should be turned OFF; and those that can’t be turned off are set to LOW or NEUTRAL as applicable.

That way, when I view the post jpeg on the back of the camera I’m viewing the very best rendition possible of what the sensor has captured.

And it’s pointless having it any other way because when you’re shooting RAW then both Lightroom and Photoshop ACR ignore them anyway!

Accurate Camera Colour within Lightroom

So how do we obtain accurate camera colour within Lightroom?

We can begin to understand how to achieve accurate camera colour within Lightroom if we look at what happens when we import a raw file; and it’s really simple.

Lightroom needs to be “told” how to interpret the data in the raw file in order to render a viewable preview – let’s not forget folks, a raw file is NOT a visible image, just a matrix full of numbers.

In order to do this seemingly simple job Lightroom uses process version and camera calibration settings that ship inside it, telling it how to do the “initial process” of the image – if you like, it’s a default process setting.

And what do you think the default camera calibration setting is?


The ‘contrasty’ result of the Lightroom Nikon D4 Adobe Standard camera profile.

Lightroom defaults to this displayed nomenclature “Adobe Standard” camera profile irrespective of what camera make and model the raw file is recorded by.

Importantly – you need to bear in mind that this ‘standard’ profile is camera-specific in its effect: even though the displayed name is the same when handling, say, D800E NEF files as it is when handling 1DX CR2 files, the background functionality is totally different and specific to the make and model of camera.

What it says on the tin is NOT what’s inside – so to speak!

So this “Adobe Standard” has as many differing effects on the overall image look as there are cameras that Lightroom supports – is it ever likely that some of them are a bit crap??!!

Some files, such as the Nikon D800 and Canon 5D3 raws seem to suffer very little if any change – in my experience at any rate – but as a D4 shooter this ‘glitch in the system’ drives me nuts.

But the workaround is so damned easy it’s not worth stressing about:

  1. Bring said image into Lightroom (as above).
  2. Move the image to the DEVELOP module
  3. Go to the bottom settings panel – Camera Calibration.
  4. Select “Camera Neutral” from the drop-down menu:

    Change camera profile from ‘Adobe Standard’ to ‘Camera Neutral’ – see the difference!

    You can see that I’ve added a -25 contrast adjustment in the basics panel here too – you might not want to do that*

  5. Scoot over to the source panel side of the Lightroom GUI and open up the Presets Panel


    Open Presets Panel (indicated) and click the + sign to create a new preset.

  6. Give the new preset a name, and then check the Process Version and Calibration options (because of the -25 contrast adjustment I’ve added here the Contrast option is ticked).
  7. Click CREATE and the new “camera profile preset” will be stored in the USER PRESETS across ALL your Lightroom 5 catalogs.
  8. The next time you import RAW files you can ADD this preset as a DEVELOP SETTING in the import dialogue box:

    Choose new preset


    Begin the import

  9. Your images will now look like they did on the back of the camera (if you adopt my approach to camera settings at least!).

You can play around with this procedure as much as you like – I have quite a few presets for this “initial process” depending on a number of variables such as light quality and ISO used to name but two criteria (as you can see in the first image at 8. above).

The big thing I need you to understand is that the camera profile in the Camera Calibration panel of Lightroom acts merely as Lightroom’s own internal guide to the initial process settings it needs to apply to the raw file when generating its library module previews.

There’s nothing complicated, mysterious or sinister going on, and no changes are being made to your raw images – there’s nothing to change.

In fact, I don’t even bother switching to Camera Neutral half the time; I just do a rough initial process in the Develop module to negate the contrast in the image, and perhaps noise if I’ve been cranking the ISO a bit – then save that out as a preset.

Then again, there are occasions when I find switching to Camera Neutral is all that’s needed – shooting low ISO wide angle landscapes when I’m using the full extent of the sensor’s dynamic range springs to mind.

But at least now you’ve got shots within your Lightroom library that look like they did on the back of the camera, and you haven’t got to start undoing the mess it’s made on import before you get on with the proper task at hand – processing – and keeping that contrast under control.

Some twat on a forum somewhere slagged this post off the other day saying that I was misleading folk into thinking that the shot on the back of the camera was “neutral” – WHAT A PRICK…………

All we are trying to do here is to make the image previews in Lr5 look like they did on the back of the camera – after all, it is this BACK OF CAMERA image that made us happy with the shot in the first place.

And by ‘neutralising’ the in-camera sharpening and colour/contrast picture control ramping the crappy ‘in camera’ jpeg is the best rendition we have of what the sensor saw while the shutter was open.

Yes, we are going to process the image and make it look even better, so our Lr5 preview starting point is somewhat irrelevant in the long run; but a lot of folk freak-out because Lr5 can make some really bad changes to the look of their images before they start.  All we are doing in this article is stopping Lr5 from making those unwanted changes.


MTF, Lens & Sensor Resolution


I’ve been ‘banging on’ about resolution, lens performance and MTF over the last few posts, so I’d like to start bringing all these various bits of information together with at least a modicum of simplicity.

If this is your first visit to my blog I strongly recommend you peruse HERE and HERE before going any further!

You might well ask the question “Do I really need to know this stuff – you’re a pro Andy and I’m not, so I don’t think I need to…”

My answer is “Yes you bloody well do need to know, so stop whinging – it’ll save you time and perhaps stop you wasting money…”

Words used like ‘resolution’ do tend to get used out of context sometimes, and when you guys ‘n gals are learning this stuff then things can get a mite confusing – and nowhere does terminology get more confusing than when we are talking ‘glass’.

But before we get into the idea of bringing lenses and sensors together I want to introduce you to something you’ve all heard of before – CONTRAST – and how it affects our ability to see detail, our lens’s ability to transfer detail, and our camera sensor’s ability to record detail.

Contrast & How It Affects the Resolving of Detail

In an earlier post HERE I briefly mentioned that the human eye can resolve 5 line pairs per millimeter, and the illustration I used looked rather like this:

5 line pairs per millimeter with a contrast ratio of 100% or 1.0


Now don’t forget, these line pairs are highly magnified – in reality each pair should be 0.2mm wide.  These lines are easily differentiated because of the excessive contrast ratio between each line in a pair.

How far can contrast between the lines fall before we can’t tell the difference any more and all the lines blend together into a solid monotone?

Enter John William Strutt, the 3rd Baron Rayleigh…………

5 line pairs at bottom threshold of human vision - a 9% contrast ratio.


The Rayleigh Criterion basically stipulates that the ‘discernability’ of each line in a pair is limited at the low end to a line pair contrast ratio of 9% or above for average human vision – that is, when each line pair is 0.2mm wide and viewed from 25cms.  Obviously they are reproduced much larger here, hence you can see ’em!

Low contrast limit for Human vision (left) & camera sensor (right).


However, it is said in some circles that dslr sensors are typically limited to a 12% to 15% minimum line pair contrast ratio when it comes to discriminating between the individual lines.
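The contrast ratio being talked about here is the modulation between the light and dark lines of a pair. As a sketch (Michelson-style contrast; the 0-255 tonal values are just an illustrative assumption):

```python
def modulation(l_max, l_min):
    """Michelson contrast between the light and dark lines of a pair."""
    return (l_max - l_min) / (l_max + l_min)

# Black lines on white: the full 100% (1.0) contrast ratio
print(modulation(255, 0))              # 1.0

# Two close greys - below the ~9% Rayleigh threshold, so to the
# average eye the line pairs merge into a single monotone:
print(round(modulation(140, 120), 3))  # 0.077
```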

Now before you start getting in a panic and misinterpreting this revelation you must realise that you are missing one crucial factor; but let’s just recap what we’ve got so far.

  1. A ‘line’ is a detail.
  2. But we can’t see one line (detail) without another line (detail) next to it that has a different tonal value (our line pair).
  3. There is a limit to the contrast ratio between our two lines, below which our lines/details begin to merge together and become less distinct.

So, what is this crucial factor that we are missing; well, it’s dead simple – the line pair per millimeter (lp/mm) resolution of a camera sensor.

Now there’s something you won’t find in your camera’s ‘tech specs’, that’s for sure!

Sensor Line Pair Resolution

The smallest “line” that can be recorded on a sensor is 1 photosite in width – now that makes sense doesn’t it.

But in order to see that line we must have another line next to it, and that line must have a higher or lower tonal value to a degree where the contrast ratio between the two lines is at or above the low contrast limit of the sensor.

So now we know that the smallest line pair our sensor can record is 2 photosites/pixels in width – the physical width is governed by the sensor pixel pitch; in other words the photosite diameter.

In a nutshell, the lp/mm resolution of a sensor is 0.5x the pixel row count per millimeter – referred to as the Nyquist Rate, simply because we have to define (sample) 2 lines in order to see/resolve 1 line.

The maximum resolution of an image projected by the lens that can be captured at the sensor plane – in other words, the limit of what can be USEFULLY sampled – is the Nyquist Limit.

Let’s do some practical calculations:

Canon 1DX 18.1Mp

Imaging Area = 36mm x 24mm / 5184 x 3456 effective pixels/photosites OR LINES.

I actually do this calculation based on the imaging area diagonal

So sensor resolution in lp/mm = (pixel diagonal/physical diagonal) x 0.5 = 72.01 lp/mm

Nikon D4 16.2Mp = 68.62 lp/mm

Nikon D800 36.3Mp = 102.33 lp/mm

PhaseOne P40 40Mp medium format = 83.15 lp/mm

PhaseOne IQ180 80Mp medium format = 96.12 lp/mm

Nikon D7000 16.2mp APS-C (DX) 4928×3264 pixels; 23.6×15.6mm dimensions  = 104.62 lp/mm

Canon 1D IV 16.1mp APS-H 4896×3264 pixels; 27.9×18.6mm dimensions  = 87.74 lp/mm
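The same diagonal-based arithmetic can be wrapped up in a few lines of Python if you want to check your own camera (the function name is mine; small differences from the figures above come down to the exact pixel counts and sensor dimensions you feed in):

```python
import math

def sensor_lp_per_mm(px_w, px_h, mm_w, mm_h):
    """Nyquist-rate sensor resolution in line pairs per millimeter,
    worked along the imaging-area diagonal as in the examples above."""
    px_diag = math.hypot(px_w, px_h)   # diagonal in pixels (lines)
    mm_diag = math.hypot(mm_w, mm_h)   # diagonal in millimeters
    return 0.5 * px_diag / mm_diag     # 2 photosites = 1 line pair

# Nikon D7000: 4928 x 3264 pixels on a 23.6 x 15.6 mm sensor
print(round(sensor_lp_per_mm(4928, 3264, 23.6, 15.6), 2))  # 104.47
```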

Taking the crackpot D800 as an example, that 102.33 lp/mm figure means that the sensor is capable of resolving 204.66 lines, or points of detail, per millimeter.

I say crackpot because:

  1. The Optical Low Pass filter “fights” against this high degree of resolving power
  2. This resolving power comes at the expense of S/N ratio
  3. This resolving power comes at the cost of an earlier onset of diffraction
  4. The D800E is a far better proposition because it negates 1. above but it still leaves 2. & 3.
  5. Both sensors would purport to be “better” than even an IQ180 – newsflash – they ain’t; and not by a bloody country mile!  But the D800E is an exceptional sensor as far as 35mm format (36×24) sensors go.

A switch to a 40Mp medium format is BY FAR the better idea.

Before we go any further, we need a reality check:

In the scene we are shooting, and with the lens magnification we are using, can we actually “SEE” detail as small as 1/204th of a millimeter?

We know that detail finer than that exists all around us – that’s why we do macro/micro photography – but shooting a landscape with a 20mm wide angle where the nearest detail is 1.5 meters away ??

And let’s not forget the diffraction limit of the sensor and the attendant reduction in depth of field that comes with 36Mp+ crammed into a 36mm x 24mm sensor area.

The D800 gives you something with one hand and takes it away with the other – I wouldn’t give the damn thing house-room!  Rant over………

Anyway, getting back to the matter at hand, we can now see that the MTF lp/mm values quoted by the likes of Nikon and Canon et al of 10 and 30 lp/mm bear little or no connection to the resolving power of their sensors – as I said in my previous post HERE – they are meaningless.

The information we are chasing after is all about the lens:

  1. How well does it transfer contrast? – because it’s contrast that allows us to “see” the lines of detail.
  2. How “sharp” is the lens?
  3. What is the “spread” of 1. and 2. – does it perform equally across its FoV (field of view) or is there a monstrous fall-off of 1. and 2. between 12 and 18mm from the center on an FX sensor?
  4. Does the lens vignette?
  5. What is its CA performance?

Now we can go to data sites on the net such as DXO Mark where we can find out all sorts of more meaningful data about our potential lens purchase performance.

But even then, we have to temper what we see because they do their testing using Imatest or something of that ilk, and so the lens performance data is influenced by sensor, ASIC and basic RAW file demosaicing and normalisation – all of which can introduce inaccuracies in the data; in other words they use camera images in order to measure lens performance.

The MTF 50 Standard

Standard MTF (MTF 100) charts do give you a good idea of the lens CONTRAST transfer function, as you may already have concluded. They begin by measuring targets with the highest degree of modulation – black to white – and then illustrate how well that contrast has been transferred to the image plane, measured along a corner radius of the frame/image circle.

MTF 1.0 (100%) left, MTF 0.5 (50%) center and MTF 0.1 (10%) right.


As you can see, contrast decreases with falling transfer function value until we get to MTF 0.1 (10%) – here we can guess that if the value falls any lower than 10% then we will lose ALL “perceived” contrast in the image and the lines will become a single flat monotone – in other words we’ll drop to 9% and hit the Rayleigh Criterion.

It’s somewhat debatable whether or not sensors can actually discern a 10% value – as I mentioned earlier in this post, some favour a value more like 12% to 15% (0.12 to 0.15).

Now then, here’s the thing – what dictates the “sharpness” of edge detail in our images?  That’s right – EDGE CONTRAST.  (Don’t mistake this for overall image contrast!)

Couple that with:

  1. My well-used adage of “too much contrast is thine enemy”.
  2. “Detail” lies in midtones and shadows, and we want to see that detail, and in order to see it the lens has to ‘transfer’ it to the sensor plane.
  3. The only “visual” I can give you of MTF 100 would be something like power lines silhouetted against the sun – even then you would under expose the sun, so, if you like, MTF would still be sub 100.

Please note: 3. above is something of a ‘bastardisation’ and certain so-called experts will slag me off for writing it, but it gives you guys a view of reality – which is the last place some of those aforementioned experts will ever inhabit!

Hopefully you can now see that maybe measuring lens performance with reference to MTF 50 (50%, 0.5) rather than MTF 100 (100%, 1.0) might be a better idea.

Manufacturers know this but won’t do it, and the likes of Nikon can’t do it even if they wanted to because they use a damn calculator!

Don’t be trapped into thinking that contrast equals “sharpness” though; consider the two diagrams below (they are small because at larger sizes they make your eyes go funny!).

A lens can transfer full contrast but be unsharp.


A lens can have low contrast transmission (transfer function) but still be sharp.


In the first diagram the lens has RESOLVED the same level of detail (the same lp/mm) in both cases, and at pretty much the same contrast transfer value; but the detail is less “sharp” on the right.

In the lower diagram the lens has resolved the same level of detail with the same degree of  “sharpness”, but with a much reduced contrast transfer value on the right.

Contrast is an AID to PERCEIVED sharpness – nothing more.

I actually hate that word SHARPNESS; and it’s a nasty word because it’s open to all sorts of misconceptions by the uninitiated.

A far more accurate term is ACUTANCE.

How Acutance affects perceived "sharpness" and is contrast independent.


So now hopefully you can see that LENS RESOLUTION is NOT the same as lens ACUTANCE (perceived sharpness..grrrrrr).

Seeing as it is possible to have a lens with a higher degree of resolving power but a lower degree of acutance, you need to be careful – low acutance tends to make details blur into each other even at high contrast values, which tends to negate the positive effects of the resolving power. (Read as CHEAP LENS!).

Lenses need to have high acutance – they need to be sharp!  We’ve got enough problems trying to keep the sharpness once the sensor gets hold of the image, without chucking it a soft one in the first place – and I’ll argue this point with the likes of Mr. Rockwell until the cows have come home!

Things We Already Know

We already know that stopping down the aperture increases Depth of Field; and we already know that we can only do this to a certain degree before we start to hit diffraction.

What does increasing DoF do exactly? It increases ACUTANCE is what it does – exactly!

Yes it gives us increased perceptual sharpness of parts of the subject in front and behind the plane of sharp focus – but forget that bit – we need to understand that the perceived sharpness/acutance of the plane of focus increases too, until you take things too far and go beyond the diffraction limit.

And as we already know, that diffraction limit is dictated by the size of photosites/pixels in the sensor – in other words, the sensor resolution.

So the diffraction limit has two effects on the MTF of a lens:

  1. The diffraction limit changes with sensor resolution – you might get away with f14 on one sensor, but only f9 on another.
  2. All this goes “out the window” if we talk about crop-sensor cameras because their sensor dimensions are different.
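As a rough illustration of point 1 (this rule of thumb is mine, not from any manufacturer): take diffraction as becoming visible once the Airy disk diameter, 2.44 × wavelength × N, grows beyond about two pixel widths. For green light that gives:

```python
def diffraction_limited_aperture(pixel_pitch_um, wavelength_um=0.55):
    """Aperture N at which the Airy disk diameter (2.44 * wavelength * N)
    reaches two pixel widths - a rule-of-thumb onset of visible
    diffraction softening, not a hard cliff."""
    return 2 * pixel_pitch_um / (2.44 * wavelength_um)

# Approximate pixel pitches: Nikon D800 ~4.9um, Nikon D4 ~7.3um
print(round(diffraction_limited_aperture(4.9), 1))  # 7.3  (f/7.3)
print(round(diffraction_limited_aperture(7.3), 1))  # 10.9 (f/10.9)
```

The exact f-number depends on the blur convention you pick, which is why quoted diffraction limits vary from source to source; the point is simply that a denser sensor hits its limit at a wider aperture.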

We all know about “loss of wide angles” with crop sensors – if we put a 28mm lens on an FX body and like the composition but then we switch to a 1.5x crop body we then have to stand further away from the subject in order to achieve the same composition.

That’s good from a DoF PoV because DoF for any given aperture increases with distance; but from a lens resolving power PoV it’s bad – that 50 lp/mm detail has effectively become 75 lp/mm, so it’s harder for the lens to resolve, even if the sensor’s resolution is capable of doing so.
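That 50-to-75 lp/mm jump is just the crop factor at work – a one-line sketch:

```python
def required_lp_mm(full_frame_lp_mm, crop_factor):
    """Detail needing N lp/mm on full frame must be resolved at
    N * crop_factor lp/mm on a crop body framed identically."""
    return full_frame_lp_mm * crop_factor

print(required_lp_mm(50, 1.5))  # 75.0
```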

There is yet another way of quantifying MTF – just to confuse the issue for you – and that is line pairs per frame size, usually based on image height and denoted as lp/IH.

Imatest uses MTF50 but quotes the frequencies not as lp/mm, or even lp/IH; but in line widths per image height – LW/IH!
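Converting between the three conventions is simple once you know the sensor height: a line pair is two line widths, and lp/IH is just lp/mm scaled by the image height in mm.  Here’s a quick sketch, assuming an FX/full-frame short edge of 24mm as the “image height”:

```python
def lp_per_ih(lp_per_mm, sensor_height_mm=24.0):
    """Line pairs per image height; 24mm is the FX/full-frame short edge."""
    return lp_per_mm * sensor_height_mm

def lw_per_ih(lp_per_mm, sensor_height_mm=24.0):
    """Line widths per image height: one line pair = two line widths."""
    return 2 * lp_per_ih(lp_per_mm, sensor_height_mm)

print(lp_per_ih(50))  # 1200.0 lp/IH
print(lw_per_ih(50))  # 2400.0 LW/IH
```

So a 50 lp/mm figure and a 2400 LW/IH figure are the same performance on full frame – different testers are just quoting different units.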

Alas, there is no longer a single source of the empirical data we need in order to evaluate pure lens performance.  And because any particular lens’s performance in terms of acutance and resolution is now so inextricably intertwined with that of the sensor behind it, you as lens buyers are left with a confusing myriad of test results, all freely available on the internet.

What does Uncle Andy recommend? – well a trip to DXO Mark is not a bad starting point all things considered, but I do strongly suggest that you take on board the information I’ve given you here and then scoot over to the DXO test methodology pages HERE and read them carefully before you begin to examine the data and draw any conclusions from it.

But do NOT make decisions just on what you see there; there is no substitute for hands-on testing with your camera before you go and spend your hard-earned cash.  Proper testing and evaluation is not as simple as you might think, so it’s a good idea to perhaps find someone who knows what they are doing and is prepared to help you out.   Do NOT ask the geezer in the camera shop – he knows bugger all about bugger all!

Do Sensors Out Resolve Lenses?

Well, that’s the loaded question isn’t it – you can get very poor performance from what is ostensibly a superb lens, and to a degree vice versa.

It all depends on what you mean by the question, because in reality a sensor can only resolve what the lens chucks at it.

If you somehow chiseled the lens out of your iPhone and Sellotaped it to your shiny new 1DX then I’m sure you’d notice that the sensor did indeed out resolve the lens – but if you were a total divvy who didn’t know any better, then in reality all you’d be aware of is that you had a crappy image – and you’d possibly blame the camera, not the lens – ‘cos it took way better pics on your iPhone 4!

There are so many external factors that affect the output of a lens – available light, subject brightness range, and the angle of the subject to the lens axis, to name but three.  Learning how to recognise these potential pitfalls and work around them is what separates a good photographer from an average one – and by good I mean knowledgeable, not necessarily someone who takes pics for a living.

I remember when the 1DX specs were first ‘leaked’ and everyone was getting all hot and bothered about having to buy the new Canon glass because the 1DX was going to out resolve all Canon’s old glass – how crackers do you need to be nowadays to get a one way ticket to the funny farm?

If they were happy with the lens’s optical performance pre 1DX then that’s what they would get post 1DX…duh!

If you still don’t get it then try looking at it this way – if lenses out resolve your sensor then you are up “Queer Street” – what you see in the viewfinder will be far better than the image that comes off the sensor, and you will not be a happy camper.

If, on the other hand, our sensors have the capability to resolve more lines per millimetre than our lenses can throw at them, and we are more than satisfied with our lenses’ resolution and acutance, then we are in a happy place, because we’d be wringing the very best performance from our glass – always assuming we know how to ‘drive the juggernaut’ in the first place!
