The Importance of Finished Image Previsualization

For those of you who haven’t yet subscribed to my YouTube channel, I uploaded a video describing how I shot and processed the Lone Tree at Llyn Padarn in North Wales the other day.

You can view the video here:

Image previsualization is hugely important in all photography, but especially so in landscape photography.

Most of us do it in some way or other.  Looking at images of a location by other photographers is the commonest form of image previsualization that I come across amongst hobby photographers – and up to a point there’s nothing intrinsically wrong in that, as long as you put your own ‘slant’ on the shot.

But relying on this method alone has one massive Achilles Heel – nature does not always ‘play nice’ with the light!

You set off for your chosen location with a certain knowledge that the weather forecast is correct, and you are guaranteed to get the perfect light for the shot you have in mind.

Three hours later, you arrive at your destination, and the first thought that enters your head is “how do I blow up the Met Office” – how could they have lied to me so badly?

If you rely solely on other folks’ images for what your shot should look like, then you now have a severe problem.  Nature is railing against your preconceptions, and unless you make some mental modifications then you are deep into a punch-up with nature that you will never win.

Just such an occasion transpired for me the other day at Llyn Padarn in North Wales.

The forecast was for low level cloud with no wind, just perfect for a moody shot of the famous Lone Tree on the south shore of the lake.

So, arriving at the location to be greeted by this was a surprise to say the least:

This would have been disastrous for some, simply because the light does not comply with their initial expectations.  I’ve seen many people get a ‘fit of the sulks’ when this happens, and they abandon the location without even getting out of the car.

Alternatively, there are folk who will get their gear set up and make an attempt, but their initial disappointment becomes a festering ‘mental block’, and they cannot see a way to turn this bad situation into something good.

But, here’s the thing – there is no such thing as a bad situation!

There are however, multiple BAD REACTIONS to a situation.

And every adverse reaction has its roots buried in either:

  • Rigid, inflexible preconceptions.
  • Poor understanding of photographic equipment and post-processing.

Or both!

On this occasion, I was expecting a rather heavy, flat-ish light scenario, but was greeted by the exact opposite.

But instead of getting ‘stroppy about it’, experience and knowledge allow me to change my expectation and come up with a new ‘finished image previsualization’ on the fly, so to speak.

Instead of the futility of trying to produce my original idea – which would never work out – I simply change my image previsualization, based on what’s in front of me.

It’s then up to me to identify what I need to do in order to bring this new idea to fruition.

The capture workflow for both ‘anticipated’ and ‘reality’ would involve bracketing due to excessive subject brightness range, but there the similarity ends.

The ‘anticipated’ capture workflow would only require perhaps 3 or 4 shots – one for the highlights, and the rest for the mid tones and shadow detail.

But the ‘reality’ capture workflow is very different.  The scene has massive contrast and the image looks like crap BECAUSE of that excessive contrast. Exposing for the brightest highlights gives us a very dark image:

But I know that the contrast can be reduced in post to give me this:

So, while I’m shooting I can previz in my head what the image I’ve shot will look like in post.

This then allows me to shoot the basic bracket needed to capture all my shadow and mid tone detail.

If you watch the video, you’ll see that I only use TWO shots from the bracket sequence to produce the basic exposure blend – and they are basically 5 stops apart. The other shots I use are just for patching blown highlights.

Because the clouds are moving, the sun is in and out like a yo-yo.  Obviously, when it’s fully uncovered, it will flare across the lens.  But when it is partially to fully covered, I’m doing shot after shot to try and get the best exposures of the reflected highlights in the water.

By shooting through a polarizer AND a 6 stop ND, I’m getting relatively smooth water in all these shots – with the added bonus of blurring out the damn canoeists!
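
For the number-crunchers, here’s a rough sketch of what that filter stack does to shutter speed. The 1.5-stop figure for the polarizer is my assumption – real-world polarizers cost anywhere from about 1.3 to 2 stops:

```python
# Rough shutter-speed extension from stacked filters.
def extended_shutter(base_shutter_s, nd_stops, polarizer_stops=1.5):
    # Each stop doubles the required exposure time.
    return base_shutter_s * 2 ** (nd_stops + polarizer_stops)

base = 1 / 60                                # metered shutter speed, unfiltered
print(f"{extended_shutter(base, 6):.1f} s")  # ~3 s with a 6 stop ND + polarizer
```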

And it’s the ‘washed out colour, low contrast previsualization’ of the finished image that is driving me to take all the shots – I’m gathering enough pixel data to enable me to create the finished image without too much effort in Lightroom or Photoshop.

Anyway, go and watch the video as it will give you a much better idea of what I’m talking about!

But remember, always take your time and try to reappraise what’s in front of you when the lighting conditions differ from what you were expecting.  You will often be amazed at the awesome images you can ‘pull’ from what ostensibly appears to be a write-off situation.

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

FX versus DX

It amazes me that people still don’t understand the relationship between FX and DX format sensors.

Millions of people across the planet still think that when they put an FX lens on a DX body and turn the camera on, something magic happens and the lens somehow becomes a different beast.

NO…it doesn’t!

There is so much crap out there on the web, resulting in the blind being led by the stupid – and that is a hardcore fact.  Some of the ‘stuff’ I get sent links to on the likes of ‘diaper review’ (to coin Ken W.’s name for it) and others, leaves me totally aghast at the number of fallacies that are being promoted and perpetuated within the content of these high-traffic websites.

FFS – this has GOT to STOP.

Fallacy 1.  Using a DX crop sensor gives me more magnification.

Oh no it doesn’t!

If we arm an FX and a DX body with identical lenses, let’s say 500mm f4’s, and go and take the same picture, at the same time and from the same subject distance with both setups, we get the following images:

FX versus DX: FX + 500mm f4 image – 36mm x 24mm frame area FoV

FX versus DX: DX + 500mm f4 image – 24mm x 16mm frame area FoV

FX versus DX: With both cameras at the same distance from the subject, the Field of View of the DX body+500mm f4 combo is SMALLER – but the subject is EXACTLY the SAME SIZE.

Let’s overlay the two images:

FX versus DX: The DX field of view (FoV) is indicated by the black line. HOWEVER, this line only denotes the FoV area. It should NOT be taken as indicative of pixel dimensions.

The subject APPEARS larger in the DX frame because the frame FoV is SMALLER than that of the FX frame.

But I will say it again – the subject is THE SAME DAMN SIZE.  Any FX lens projects an image onto the focal plane that is THE SAME SIZE irrespective of whether the sensor is FX or DX – end of story.

Note: If such a thing existed, a 333mm prime on a DX crop body would give us the same COMPOSITION, at the same subject distance, as our 500mm prime on the FX body.  But at the same aperture and distance, this fictitious 333mm lens would give us MORE DoF due to it being a shorter focal length.
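
If you want to check the geometry for yourself, here’s a minimal sketch: angle of view depends on sensor dimension and focal length, while the projected subject size depends only on focal length and subject distance:

```python
import math

# Angle of view for a given sensor dimension and focal length.
def aov_deg(sensor_mm, focal_mm):
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_mm)))

print(f"FX 36mm + 500mm: {aov_deg(36, 500):.2f} deg")  # ~4.12 deg
print(f"DX 24mm + 500mm: {aov_deg(24, 500):.2f} deg")  # ~2.75 deg - smaller FoV, same subject size
print(f"DX 24mm + 333mm: {aov_deg(24, 333):.2f} deg")  # ~4.13 deg - matches the FX + 500mm FoV
```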

Fallacy 2.  Using a DX crop sensor gives me more Depth of Field for any given aperture.

The other common variant of this fallacy is:

Using a DX crop sensor gives me less Depth of Field for any given aperture.

Oh no it doesn’t – not in either case!

Understand this, people – depth of field is, as we all know, governed by the aperture diaphragm – in other words, the f number.  Now everyone understands this, surely to God.

But here’s the thing – where’s the damn aperture diaphragm?  Inside the LENS – not the camera!

Depth of field is REAL or TRUE focal length, aperture and subject distance dependent, so our two identical 500mm f4 lenses at say 30 meters subject distance and f8 are going to yield the same DoF.  That’s irrespective of the physical dimensions of the sensor – be they 36mm x 24mm, or 24mm x 16mm.

But, in order for the FX setup to obtain the same COMPOSITION as that of the DX, the FX setup will need to be CLOSER to the subject – and so using the same f number/aperture value will yield an image with LESS DoF than that of the DX, because DoF decreases with decreased distance, for any given f number.

To obtain the same COMPOSITION with the DX as that of the FX, then said DX camera would need to move further away from the subject.  Therefore the same aperture value would yield MORE DoF, because DoF increases with increased distance, for any given f number.

The DX format does NOT change DoF – it’s the pixel pitch/CoC that alters the total DoF in the final image.  In other words it’s total megapixels that alter DoF, and that applies evenly across FX and DX.
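
Here’s a back-of-envelope check on the distance argument, using the common approximation total DoF ≈ 2 × N × c × s² / f² (valid when the subject distance is well short of the hyperfocal distance, as it is here). Holding the CoC constant at 0.030mm is my simplification, purely to isolate the effect of distance:

```python
# Approximate total depth of field in metres.
def dof_m(f_mm, N, s_m, coc_mm=0.030):
    s_mm = s_m * 1000
    return (2 * N * coc_mm * s_mm ** 2 / f_mm ** 2) / 1000

print(f"500mm f8 at 30m: {dof_m(500, 8, 30):.2f} m")  # ~1.73m - same for FX and DX
print(f"500mm f8 at 20m: {dof_m(500, 8, 20):.2f} m")  # ~0.77m - FX moved closer to match composition
```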

Fallacy 3.  An FX format sensor sees more light, or lets more light in, giving me more exposure because it’s a bigger ‘eye’ on the scene.

Oh no it doesn’t!

Now this crap really annoys the hell out of me.

Exposure has nothing to do with sensor size WHATSOEVER.  The intensity of light falling onto the focal plane is THE SAME, irrespective of sensor size.  Exposure is a function of Intensity x Time, and so for the same intensity (aperture) and time (shutter speed) the resulting exposure will be the SAME.  Total exposure is per unit area, NOT volume.

It’s the buckets full of rain water again:

The level of water in each bucket is the same, and represents total exposure.  There is no difference in exposure between sensor sizes.

There is a huge difference in volume, but your sensor does not work on total volume – it works per unit area.  Each and every square millimeter, or square micron, of the focal plane sees the same exposure from the image projected into it by the lens, irrespective of the dimensions of the sensor.

The smallest unit area of the sensor is a photosite. And each photosite receives the same exposure value, no matter how big the sensor it is embedded in.

HOWEVER, it is how those individual photosites COPE with that exposure that makes the difference. And that leads us neatly on to the next fallacy.

Fallacy 4.  FX format sensors have better image quality because they are bigger.

Oh no they don’t – well, not just because they are bigger!

It’s all to do with pixel pitch, and pixel pitch governs VOLUME.

FX format sensors usually give a better image because their photosites have a larger diameter, or pitch. You should read HERE  and HERE for more detail.

Larger photosites don’t really ‘see’ more light during an exposure than small ones, but because they are larger, each one has a better potential signal to noise ratio.  This can, in turn, allow for greater subtle variation in recorded light values amongst other things, such as low light response.  Think of a photosite as an eyeball, then think of all the animals that mess around in the dark – they all have big eyes!

That’s not the most technological statement I’ve ever made, but it’s fact, and makes for a good analogy at this point.

Everyone wants a camera sensor that sees in the dark, generates zero noise at ISO 1 Million, has zero diffraction at f22, and has twice the resolution of a £35K medium format back.

Well kids, I hate to break it to you, but such a beast does not exist, and nor will it for many a year to come.

The whole FX versus DX format  ‘thing’ is really a meaningless argument, and the DX format has no advantage over the FX format apart from less weight and lower price (perhaps).

Yes, if we shoot a DX format camera using an FX lens we get the ‘illusion’ of a magnified subject – but that’s all it is – an illusion.

Yes, if we shoot the same shot on a 20Mp FX and crop it to look like the shot from a 20Mp DX, then the subject in the DX shot will have twice as many pixels in it, because of the higher pixel density – but at what cost?

Cramming more megapixels into either a 36mm x 24mm or 24mm x 16mm area results in one thing only – smaller photosites.  Smaller photosites come with one single benefit – greater detail resolution.  Every other attribute that comes with smaller photosites is a negative one:

  • Greater susceptibility to subject motion blur – the bane of landscape and astro photographers.
  • Greater susceptibility to diffraction due to lower CoC.
  • Lower CoC also reduces DoF.
  • Lower signal to noise ratio and poorer high ISO performance – see the sketch below this list.
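
To put some illustrative numbers on that last point: for a given exposure, photons collected scale with photosite area (pitch squared), and shot-noise-limited SNR scales with the square root of the photon count. The photon density below is arbitrary, and read noise, quantum efficiency and micro-lens differences are deliberately ignored – this is a sketch, not a sensor model:

```python
import math

# Relative shot-noise-limited SNR for different photosite pitches.
def relative_snr(pitch_um, photons_per_um2=100):   # photon density is arbitrary
    photons = photons_per_um2 * pitch_um ** 2      # signal scales with photosite area
    return math.sqrt(photons)                      # SNR = S / sqrt(S) = sqrt(S)

for pitch_um in (7.3, 6.45, 4.35):
    print(f"{pitch_um} micron pitch: relative SNR {relative_snr(pitch_um):.0f}")
```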

Note: Quite timely this! With the new leaked info about the D850, we see it’s supposed to have a BSI sensor.  This makes it impossible to make a direct comparison between it and the D500, even though the photosites are pretty much the same size/pitch.  Any comparison is made harder still by the different micro-lens tech sported by the D850.  Also, the functionality of the ADC/SNR firmware is bound to be different from the D500 too.

Variations in AA filter type/properties and micro lens design, wiring substrate thickness, AF system algorithms and performance, ADC/SNR and other things all go towards making FX versus DX comparisons difficult, because we use our final output images to draw our conclusions; and they are affected by all of the above.

But facts are facts – DX does not generate either greater magnification or greater/less depth of field than FX when used with identical FX lenses at the same distance and aperture.

Sensor format affects nothing other than FoV – everything else is purely down to pixel pitch.

Color Temperature

Lightroom Color Temperature (or Colour Temperature if you spell correctly!)

“Andy – why the heck is Lightroom’s temperature slider the wrong way around?”

That’s a question that I used to get asked quite a lot, and it’s started again since I mentioned it in passing a couple of posts ago.

The short answer is “IT ISN’T….it’s just you who doesn’t understand what it is and how it functions”.

But in order to give the definitive answer I feel the need to get back to basics – so here goes.

The Spectrum Locus

Let’s get one thing straight from the start – LOCUS is just a posh word for PATH!

Visible light is just part of the electro-magnetic energy spectrum typically between 380nm (nanometers) and 700nm:

Below is what’s known as the Spectrum Locus – as defined by the CIE (Commission Internationale de l’Éclairage, or International Commission on Illumination).

In a nutshell the locus represents the range of colors visible to the human eye – or I should say chromaticities:

The blue numbers around the locus are simply the nanometer values from that same 380nm to 700nm scale. The reasoning behind the unit values of the x and y axes is complex and irrelevant to us in this post, otherwise it’ll go on for ages.

The human eye is a fickle thing.

It will always perceive, say, 255 green as being lighter than 255 red or 255 blue, and 255 blue as being the darkest of the three.  And the same applies to any value of the three primaries, as long as all three are the same.

This stems from the fact that the human eye has around twice the response to green light as it does red or blue – crazy but true.  And that’s why your camera sensor – if it’s a Bayer type – has twice the number of green photosites on it as red or blue.
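
You can see that green bias baked into imaging standards too. Taking the ITU-R BT.601 luma weights as one common model of perceived brightness:

```python
# ITU-R BT.601 luma weights - green carries roughly twice the weight
# of red, and about five times the weight of blue.
def luma601(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(luma601(255, 0, 0))   # pure red   -> ~76
print(luma601(0, 255, 0))   # pure green -> ~150 (perceived lightest)
print(luma601(0, 0, 255))   # pure blue  -> ~29  (perceived darkest)
```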

In rather over-simplified terms the CIE set a standard by which all colors in the visible spectrum could be expressed in terms of ‘chromaticity’ and ‘brightness’.

Brightness can be thought of as a grey ramp from black to white.

Any color space is a 3 dimensional shape with 3 axes x, y and z.

Z is the grey ramp from black to white, and the shape is then defined by the colour positions in terms of their chromaticity on the x and y axes, and their brightness on the z axis:

Color Temperature

But if we just take the chromaticity values of all the colours visible to the human eye we end up with the CIE 1931 spectrum locus – a two-dimensional plot, if you like, of the ‘perceived’ color space of human vision.

Now here’s where the confusion begins for the majority of ‘uneducated photographers’ – and I mean that in the nicest possible way, it’s not a dig!

Below is the same spectrum locus with an addition:

This additional TcK curve is called the Planckian Locus, or black body locus.  Now please don’t give up here folks, after all you’ve got this far, but it’ll get worse before it gets better!

The Planckian Locus simply represents the color temperature in degrees Kelvin of the colour emitted by a ‘black body’ – think lump of pure carbon – as it is heated.  Its color temperature begins to visibly rise as its thermal temperature rises.

Up to a certain thermal temperature it’ll stay visibly black, then it will begin to glow a deep red.  Warm it up some more and the red color temperature turns to orange, then yellow and finally it will be what we can call ‘white hot’.

So the Planckian Locus is the 2D chromaticity plot of the colours emitted by a black body as it is heated.

Here’s point of confusion number 1: do NOT jump to the conclusion that this is in any way a greyscale. “Well it starts off BLACK and ends up WHITE” – I’ve come across dozens of folk who think that – as they say, a little knowledge is a dangerous thing indeed!

What the Planckian Locus IS indicative of though is WHITE POINT.

Our commonly used colour management white points of D65, D55 and D50 all lie along the Planckian Locus, as do all the other CIE standard illuminant types, of which there are more than a few.

The standard monitor calibration white point of D65 is actually 6500 Kelvin – it’s a standardized classification for ‘mean Noon Daylight’, and can be found on the Spectrum Locus/Planckian Locus at 0.31271x, 0.32902y.

D55 or 5500 Kelvin is classed as Mid Morning/Mid Afternoon Daylight and can be found at 0.33242x, 0.34743y.

D50 or 5000 Kelvin is classed as Horizon Light with co-ordinates of 0.34567x, 0.35850y.

But we can also equate Planckian Locus values to our ‘picture taking’ in the form of white balance.

FACT: The HIGHER the color temperature the BLUER the light, and lower color temperatures shift from blue to yellow, then orange (studio type L photofloods 3200K), then more red (standard incandescent bulb 2400K), down to candle flame at around 1850K.  Sunset and sunrise are typically standardized at 1850K, and LPS Sodium street lights can be as low as 1700K.

And a clear polar sky can be upwards of 27,000K – now there’s blue for you!

And here’s where we find confusion point number 2!

Take a look at this shot taken through a Lee Big Stopper:

I’m an idle git and always have my camera set to a white balance of Cloudy B1, and here I’m shooting through a filter that notoriously adds a pretty severe bluish cast to an image anyway.

If you look at the TEMP and TINT sliders you will see Cloudy B1 is interpreted by Lightroom as 5550 Kelvin and a tint of +5 – that’s why the notation is ‘AS SHOT’.

Officially a Cloudy white balance is anywhere between 6000 and 10,000 Kelvin depending on your definition, and I’ve stuck extra blue in there with the Cloudy B1 setting, which will make the effective temperature go up even higher.

So either way, you can see that Lightroom’s idea of 5550 Kelvin is somewhat ‘OFF’ to say the least, but it’s irrelevant at this juncture.

Where the real confusion sets in is shown in the image below:

“Andy, now you’ve de-blued the shot why is the TEMP slider value saying 8387 Kelvin?  Surely it should be showing a value LOWER than 5550K – after all, tungsten is warm and 3200K”….

How right you are…..and wrong at the same time!

What Lightroom is saying is that I’ve added YELLOW to the tune of 8387-5550 or 2837.

FACT – the color temperature controls in Lightroom DO NOT work by adjusting the Planckian or black body temperature of light in our image.  They are used to COMPENSATE for the recorded Planckian/black body temperature.
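
If you want to see that ‘compensation’ idea in numbers, colour balance shifts are conveniently measured in mireds (1,000,000 ÷ Kelvin). This is just a sketch of the principle, not Adobe’s actual maths:

```python
# Raising the Temp value tells Lightroom the scene light was BLUER than
# first assumed, so the rendering is pushed the opposite way - warmer.
def mired(kelvin):
    return 1_000_000 / kelvin

as_shot, corrected = 5550, 8387
shift = mired(corrected) - mired(as_shot)
print(f"{shift:+.0f} mired")   # ~-61 mired: a compensation towards yellow
```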

If you load an image in the Develop module of Lightroom and use any of the preset values, the value itself is ball park correct(ish).

The Daylight preset loads values of 5500K and +10. The Shade preset will jump to 7500K and +10, and Tungsten will drop to 2850K and +/-0.

But the Tungsten preset puts the TEMP slider in the BLUE part of the slider Blue/Yellow graduated scale, and the Shade preset puts the slider in the YELLOW side of the scale, thus leading millions of people into mistakenly thinking that 7500K is warmer/yellower than 2850K when it most definitely is NOT!

This kind of self-induced bad learning leaves people wide open to all sorts of misunderstandings when it comes to other aspects of color theory and color management.

My advice has always been the same, just ignore the numbers in Lightroom and do your adjustments subjectively – do what looks right!

But for heaven’s sake don’t try and build an understanding of color temperature based on the color balance control values in Lightroom – otherwise you’ll get in one heck of a mess.

 

Monitor Calibration Update

Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit on the critical list.

What have you gone for, Andy?  Well if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favouring the Eizo CS2420, on the grounds of cost, both in terms of monitor and calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.

But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common type of back light, but they are now less common, being replaced by LED for added colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, and the size of the monitor’s CLUT (colour look-up table), can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cdm² or candelas per square meter

Gamma: 2.2

The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen.

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:

The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:

Now before anyone gets their knickers in a knot over the brightness value discrepancy, there are a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cdm² is about a 1/4 stop in camera Ev terms – so not a lot (see the quick check below).
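
And here’s the quick check:

```python
import math

# Brightness difference between the two calibration targets in Ev (stops):
print(f"{math.log2(120 / 100):.2f} stops")   # ~0.26 - about a 1/4 stop
```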

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to (yes – warmer – don’t let that temp slider in Lightroom cloud your thinking!).

After all, 1000K is a noticeable variation – unlike the 20cdm² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:

The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP?

Well, cheaper…..

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!

Camera ISO Settings

The Truth About ISO

The effect of increased ISO – in-camera ‘push processing’ automatically lifts the exposure value to where the camera thinks it is supposed to be.

Back in the days of ‘wet photography’, we had rolls and sheets of film that carried various ISO/ASA/DIN numbers.

ISO stands for the International Organization for Standardization

ASA stands for American Standards Association

DIN – well, that’s ‘Deutsches Institut für Normung’ or German Institute for Standardisation

ISO and ASA were basically identical values, and DIN = 10 × log10(ISO) + 1, so ASA/ISO 100 equated to DIN 21….nope, I’m not going to say anything!

These numbers were the film ‘speed’ values.  Film speed was critical to exposure metering as it specified the film sensitivity to light.  Metering a scene properly at the correct ISO/ASA/DIN gave us an overall exposure value that ensured the film got the correct ‘dose’ of light from the shutter speed and aperture combination.

Low ISO/ASA/DIN values meant the film was LESS sensitive to light (SLOW FILM) and high values meant MORE sensitivity to light (FAST FILM).

Ilford Pan F was a very slow mono negative film at ASA 50, while Ilford HP5 was a fast 400 ASA mono negative film.

The other characteristic of film speed was ‘grain’.  Correctly exposed, Pan F was extremely fine grained, whereas correctly exposed HP5 was ‘visibly grainy’ on an 8×10 print.

Another Ilford mono negative film I used a lot was FP4.  The stated ASA for this film was 125ASA/ISO, but I always rated it (set the meter ASA speed dial) to 100ASA on my 35mm Canon A1 and F1 (yup, you read that right!) because they both slightly over-metered most scenes.

If we needed to shoot at 1/1000th and f8 but 100ASA only gave us 1/250th at f8 we would switch to 400ASA film – two stops greater sensitivity to light means we can take a shutter speed two stops shorter for the same aperture and thus get our required 1/1000th sec.

But, what if we were already set up with 400ASA film, but the meter (set at 400ASA) was only giving us 1/250th?

Prior to the release of films like Delta 1600/3200 we would put a fresh roll of 400ASA film in the camera and set the meter to a whopping 1600ASA! We would deliberately UNDER EXPOSE Ilford HP5 or Kodak Tri-X by 2 stops to give us our required 1/1000th at f8.

The two stops underexposed film would then be ‘push processed’, which basically meant it was given a longer time in the developer.  This ‘push processing’ always gave us a grainy image, because of the manner in which photographic chemistry worked.

And just to confuse you even more, very occasionally a situation might arise where we would over expose film and ‘pull process’ it – but that’s another story.

We are not here for a history lesson, but the point you need to understand is this – we had a camera body into which we inserted various sensitivities of film, and that sometimes those sensitivities were chemically manipulated in processing.

That Was Then, This Is Now!

ISO/ASA/DIN was SENSITIVITY of FILM.

It is NOT SENSITIVITY of your DSLR SENSOR….!!! Understand that once and for all!

The sensitivity of your sensor IS FIXED.

It is set in Silicon when the sensor is manufactured.  Just like the sensitivity of Kodak Tri-X Pan was ‘fixed’ at 400ASA/ISO when it was made at the factory.

How is the sensitivity of a digital sensor fixed?  By the SIZE of the individual PHOTOSITES on the sensor.

Larger photosites will gather more photons from a given exposure than small ones – it’s that simple.

A greater number of photons captured means that the output signal from a larger photosite is GREATER than the output signal from a smaller photosite for the same exposure value (EV being a combination of shutter speed and aperture/f number).

All sensors have a base level of noise – we can refer to this as the sensor ‘noise floor’.

This noise floor is an amalgamation of the noise floors of each photosite on the sensor.

But the noise floor of each photosite on the sensor is masked/obscured by the photosite signal output; therefore the greater the signal, the larger the signal to noise (S/N) ratio is said to be.

In general, larger photosites yield a higher S/N ratio than smaller ones given the same exposure.

This is why the Nikon D3 had such success being full frame but just over 12 megapixels, and it’s the reason that some of us don’t get overly excited about seeing more megapixels being crammed into our 36mm x 24mm sensors.

Anyway, the total output from a photosite contains both signal and noise floor, and the signal component can be thought of as ‘gain’ over the noise floor – natural gain.

As manufacturers put more megapixels on our sensors this natural gain DECREASES because the photosites get SMALLER – they have to in order to fit more of them into the finite sensor area.

Natural gain CAN be brought back in certain sensor designs by manipulating the design of the micro lenses that sit on top of the individual photosites.  Re-designing these micro lenses to ‘suck in’ more tangential photons is rather like putting a funnel in a bottle to make filling it easier and more efficient.

There is a brilliantly simple illustration of how a sensor fits into the general scheme of things, courtesy of Digital Camera World:

The main item of note in this image is perhaps not quite so obvious, but it’s the boundary between the analogue and digital parts of the system.

We have 3 component arrays forward of this boundary:

  1. Mosaic Filter including Micro Lenses & Moire filter if fitted.
  2. Sensor Array of Photosites – these suck in photons and release proportional electrons/charge.
  3. Analogue Electronics – this holds the charge record of the photosite output.

Everything forward of the Analogue/Digital Converter – ADC – is just that, analogue! And the variety of attributes that a manufacturer puts on the sensor forward of this boundary can be thought of mostly as modifying/enhancing natural gain.

So What About My ISO Control Settings Andy?

All sensors have a BASE ISO. In other words they have an ISO sensitivity/speed rating just like film!  And as I said before THIS IS A FIXED VALUE.

The base ISO of a sensor photosite array can be defined as that ISO setting that yields the best dynamic range across the whole array, and it is the ISO setting that carries NO internal amplification.

Your chosen ISO setting has absolutely ZERO effect on what happens forward of the Analogue/Digital boundary – NONE.

So, all those idiots who tell you that ISO affects/governs exposure are WRONG – it has nothing to do with it, for the simple reason that the notion of ISO affecting sensor sensitivity is a total misconception….end of!

Now I’ll bet that’s going to set off a whole raft of negative comments and arguments – and they will all be wrong, because they don’t know what they’re talking about!

The ‘digital side’ of the boundary is where all the ‘voodoo’ happens, and it’s where your ISO settings come into play.

At the end of an exposure the Analogue Digital Converter, or ADC, comes along and makes a ‘count’ of the contents of the ‘analogue electronics’ mosaic (as Digital Camera World like to call it – nice and unambiguous!).

Remember, it’s counting/measuring TOTAL OUTPUT from each photosite – and that comprises both signal and noise floor outputs.

If the exposure has been carried out at ‘base ISO’ then we have the maximum S/N ratio, as in column 1.

However, if we increase our ISO setting above ‘base’ then the total sensor array output looks like column 2.  We have in effect UNDER EXPOSED the shot, resulting in a reduced signal.  But we have the same value for the noise floor, so we have a lower S/N ratio.

In principle, the ADC cannot discriminate between noise floor and signal outputs, and so all it sees is one output value for each photosite.

At base ISO this isn’t a problem, but once we begin to shoot at ISO settings above base – under exposing, in other words – the camera’s internal image processors apply gain to boost the output values handed to them by the ADC.

Yes, this boosts the signal output, but it also amplifies the noise floor component of the signal at the same time – hence that perennial problem we all like to call ‘high ISO noise’.

So your ISO control behaves in exactly the same way as the ‘gain switch’ on a CB or long wave radio, or indeed the dB gain on a microphone – ISO is just applied gain.
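
Here’s a toy model of that applied gain. The numbers are arbitrary, but the shape of the result is the point – gain applied after the exposure boosts signal and noise floor together, so the S/N ratio only ever gets worse:

```python
import numpy as np

rng = np.random.default_rng(42)
noise_floor = rng.normal(0, 2.0, 10_000)   # fixed sensor noise floor

base_signal = 100.0                        # photosite output at base ISO
under_signal = base_signal / 4             # 2 stops under-exposed

for label, signal, gain in (("base ISO", base_signal, 1),
                            ("base + 2 stops of gain", under_signal, 4)):
    output = gain * (signal + noise_floor)
    print(f"{label}: mean {output.mean():.0f}, S/N {output.mean() / output.std():.1f}")
```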

Things You Should Know

My first digital camera had a CCD (charge coupled device) sensor, it was made by Fuji and it cost a bloody fortune.

Cameras today for the most part use CMOS (complementary metal oxide semi-conductor) sensors.

  • CCD sensors create high-quality, low-noise images.
  • CMOS sensors, traditionally, are more susceptible to noise.
  • Because each photosite on a CMOS sensor has a series of transistors located next to it, the light sensitivity of a CMOS chip tends to be lower. Many of the photons striking the sensor’s photosite array hit the transistors instead of the photosites.  This is where the newer micro lens designs come in handy.
  • A CMOS sensor consumes less power. CCD sensors can consume up to 100 times more power than an equivalent CMOS sensor.
  • CMOS chips can be produced easily, making them cheaper to manufacture than CCD sensors.

Basic CMOS tech has changed very little over the years – by that I’m referring to the actual ‘sensing’ bit of the sensor.  Yes, the individual photosites are now manufactured with more precision and consistency, but the basic methodology is pretty much ‘same as it ever was’.

But what HAS changed are the bits they stick in front of it – most notably micro-lens design; and the stuff that goes behind it, the ADC and image processors (IPs).

The ADC used to be 12 bit, now they are 14 bit on most digital cameras, and even 16 bit on some.  Increasing the bit depth accuracy in the ADC means it can detect smaller variations in output signal values between adjacent photosites.

As long as the ‘bits’ that come after the ADC can handle these extended values then the result can extend the camera’s dynamic range.

But the ADC and IPs are firmware based in their operation, and so when you turn your ISO above base you are relying on a set of algorithms to handle the business of compensating for your under exposure.

All this takes place AFTER the shutter has closed – so again, ISO settings have less than nothing to do with the exposure of the image; said exposure has been made and finished with before any ISO applied gain occurs.

For a camera to be revolutionary in terms of high ISO image quality it must deliver a lower noise floor than its predecessor whilst maintaining or bettering its predecessors low ISO performance in terms of noise and dynamic range.

This is where Nikon have screwed their own pooch with the D5. At ISOs below 3200 it has poorer IQ and narrower dynamic range than either the D4 or D4S.  Perhaps some of this problem could be due to the sensor photosite pitch (diameter) of 6.45 microns compared to the D4/4S’s 7.30 microns – but I think it’s mostly due to poor ADC and S/N firmware; which of course can be corrected in the future.

Can I Get More Photons Onto My Sensor Andy?

You can get more photons onto your sensor by changing to a lens that lets in more light.

You might now be thinking that I mean switching glass based on a lower f-number or f-stop.

If so you’re half right.  I’m actually talking about t-stops.

The f-number of a lens is basically an expression of the relationship between maximum aperture diameter and focal length, and is an indication of the amount of light the lens lets in.

T-stops are slightly different. They are a direct indicator of how much light is transmitted by the lens – in other words how much light is actually being allowed to leave the rear element.

We could have two lenses of identical focal length and f-number, but one contains 17 lens elements and the other only 13. Assuming the glass and any coatings are of equal quality then the lens with fewer elements will have a higher transmission value and therefore lower T-number.

As an example, the Canon 85mm f1.2 actually has a t-number of 1.4, and so it’s letting in pretty much HALF a stop less light than you might think it is.
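
The arithmetic behind that half-stop claim, if you want to check other lenses:

```python
import math

# Light lost to the optics, in stops, from the f-number vs T-number gap.
def stops_lost(f_number, t_number):
    return 2 * math.log2(t_number / f_number)

print(f"{stops_lost(1.2, 1.4):.2f} stops")   # ~0.44 - pretty much half a stop
```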

In Conclusion

I’ve deliberately not embellished this post with lots of images taken at high ISO – I’ve posted and published enough of those in the past.

I’ve given you this information so that you can digest it and hopefully understand more about how your camera works and what’s going on.  Only by understanding how something works can you deploy or use it to your best advantage.

I regularly take, market and sell images taken at ISO speeds that a lot of folk wouldn’t go anywhere near – even when they are using the same camera as me.

The sole reason I opt for high ISO settings is to obtain very fast shutter speeds with big glass in order to freeze action, especially of subjects close to the camera.  You can freeze very little action with a 500mm lens using speeds in the hundredths of a second.

Picture buyers love frozen high speed action and they don’t mind some noise if the shot is a bit special. Noise doesn’t look anywhere near as severe in a print as it does on your monitor either, so high ISO values are nothing to shy away from – especially if to do so would be at the expense of the ‘shot of a lifetime’.

Twilight & Astro Landscape Photography

Everyone likes a nice moody sunset, but great images await those camera operators who start shooting after most folks have started packing their gear away and heading home.

For me, twilight is where the fun starts.

The rock stack lying off the boulder-strewn beach of Porth Saint, Rhoscolyn Head, Anglesey.

The low light levels on a scene once ‘civil daylight’ has ended mean you get awesome light with lower contrast shadows, subtle skies, and nice long shutter speeds for dreamy water effects without needing expensive 10 stop ND filters.

However, that awesome light vanishes very quickly, so you have to be ready!  I waited nearly 90 minutes for the shot above.

But that time was spent doing ‘dry runs’ and rehearsals – once the composition was set how I wanted it, the foreground was outside of DoF, so I knew I needed to shoot a focus stack as well as an exposure blend…mmmm….yummy!

Once we have made the long transition from civil daylight end to astronomical daylight end the fun really begins though.

Astro Landscape Photography

The Milky Way over the derelict buildings of Magpie Mine in Derbyshire.

Astro landscape photography, or wide field astro as it’s sometimes known, is not as difficult as a lot of photographers imagine.

But astro landscape photography IS very demanding of your familiarity with your gear, and will require some expenditure on additional bits of kit if disappointment is to be avoided.

Here’s the kit I usually venture out at night with:

Dew Heater Band (A).

An essential bit of kit for astro landscape photography – it’s amazing how rapidly a lot of lenses, especially super-wides like the Nikon 14-24 f2.8, encounter a problem with dew at night.  This will in effect fog the front element, starting at its centre, and if left unchecked it can spread across the entire face of the lens.

Heating the lens front sufficiently to keep its temperature above the dew point for your current location and time will prevent a ruined session – don’t leave home without one!
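
If you want a feel for the numbers before you head out, the Magnus approximation gives a serviceable dew point estimate from air temperature and relative humidity – a rough guide only:

```python
import math

# Magnus approximation for dew point (deg C) from air temp and RH.
def dew_point_c(temp_c, rel_humidity_pct, b=17.62, c=243.12):
    gamma = math.log(rel_humidity_pct / 100) + (b * temp_c) / (c + temp_c)
    return c * gamma / (b - gamma)

print(f"{dew_point_c(10, 85):.1f} C")   # a damp UK night: dew point ~7.6 C
```

If the lens front is allowed to cool to that figure, the fogging starts.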

This dew heater is powered by a battery (C) via a dew heater controller (D), which is basically a simple rotary rheostat that controls the level of current driving the heater band.

I use mine at about 75% of ‘full chat’ and it seems to work just fine.

A final note on dew heater bands – these are designed for use by those strange folk who spend hours behind telescopes.  They tape the bands in place and leave them there.  As photographers we need to add or remove them as needed.  The bands can prove fragile, need I say more?

Yes, it pays to carry a spare, and it pays to treat them with care and not just throw them in the camera bag – I’m on band number 3 with number 4 in reserve!

Intervalometer (B).

You will need to shoot a long exposure of your scene foreground, slightly re-focused closer to you, at a much lower ISO, and perhaps at a slightly narrower aperture; this shot might well be 20 minutes long or more, and with long exposure NR engaged to produce a black subtraction.

Yes, a lockable cable release and the timer on your watch will do the job, hence (F) and (G) in case (B) stops working!

But an intervalometer will make this easier – as long as you’ve read the instructions..doh!

If you want to shoot star trails the external intervalometer is vastly superior to your camera’s built-in one.  That’s because the in-camera intervalometer on nearly all cameras except the Nikon D810A is limited to a 30 second shutter speed.

An hour’s worth of star rotation is barely enough:

But at 30 seconds shutter speed you will end up with 120 frames at fairly high ISO.

Far better to shoot at ‘bulb’ with a 5 minute exposure and lower ISO – then you’ll only have 12 frames – your computer will thank you for the lower number when it comes to stacking the shots in Photoshop.

There is also another problem for certain marks of Nikon camera.  The D800E that I use has a stupid cap on continuous shooting.  The much-touted method of setting the shutter to 30 seconds, putting the camera in continuous low speed shooting mode and locking the cable release button down does NOT work – it only allows you to take 100 frames, then the camera just STOPS taking pictures.

But if you use an external intervalometer set to a 30 second exposure and continuous shooting, and just drop the camera into BULB and Single Shot, then the D800E and its like will sit there and fill your cards up with frames.

Other Essentials.

Micro fibre cloths, bin liners and gaffer tape (B,I and J).

After a couple of hours of full darkness your gear (and I mean all of it) will most likely be wet with dew, especially here in the UK.  Micro fibre cloths are great for getting the majority of this dampness off your camera gear when you put it away for the trip home.

Bin liners are great for keeping any passing rain shower off your camera gear when it’s set up – just drop one (opened of course) over your camera and tape it to the tripod legs with a bit of gaffer tape. Leave the dew heater ON.

Also, stick the battery supply in one – rain water and 13 volts DC at 4000mAh don’t mix well.

Photopills on your iPhone (G) is incredibly useful for showing you where the Milky Way is during that extended period between civil and astronomical daylight end.  Being able to see it in relationship to your scene with the Night Augmented Reality feature certainly makes shot composition somewhat easier.

Head Lamp (H) – preferably one which has a red light mode.  Red light does not kill off your carefully tuned night vision when you need to see some camera setting control lever or button.

Accurate GPS positioner (K).  Not entirely an ‘essential’ but it’s mighty useful for all sorts of reasons, especially when forward planning a shot, or getting to a set position in the dark.

The Milky Way towering over the National Coastwatch Institution (NCI) station at Rhoscolyn on Anglesey.

 

I love taking someone who’s never seen the Milky Way out at night to capture it with their own equipment – the constant stream of ‘WOWS’ makes me all warm ‘n fuzzy!  This year has seen me take more folk out than ever; and even though we are going to lose the galactic centre in the next few weeks, the opportunities for night photography get better as the nights grow longer.

The Milky Way over the derelict buildings of Magpie Mine in Derbyshire.

So if you want to get out there with me then just give me a shout at tution@wildlifeinpixels.net

The Milky Way will still be a prominent feature in the sky until October, and will be in a more westerly position, so lots of great bays on the North Wales & Anglesey coast will come into their own as locations.

The Milky Way over the Afon Glaslyn Valley looking towards Beddgelert and Porthmadog. The patchy green colour of the sky is caused by a large amount of airglow, another natural phenomenon that very few people actually see.

And just look at that star detail:

Over the next few weeks I’m going to be putting together a training video title on processing astro landscape photography images, and if the next new moon phase at the end of this month comes with favourable weather I’m going to try and supplement it with a couple of practical shooting videos – so fingers crossed.

Night Sky Imaging

Night Sky Photography – A Brief Introduction

I really get a massive buzz from photographing the night sky – PROPERLY.

By properly I mean using your equipment to the best of its ability, and using correct techniques in terms of both ‘shooting’ and post processing.

The majority of the vast plethora of night sky images on Google etc – and the methods described for making them – are, to be frank, PANTS!

Those 800 pixel long-edge jpegs hide a multitude of shooting and processing sins – such as HUGE amounts of sensor noise and the biggest sin of all – elongated stars.

Top quality full resolution imagery of the night sky demands pin-prick stars, not trails that look like blown out sausages – unless of course, you are wanting them for visual effect.

Pin sharp stars require extremely precise MANUAL FOCUS in conjunction with a shutter speed that is short enough to arrest the perceived movement of the night sky across the camera’s field of view.

They also demand that the lens is ‘shot’ pretty much wide open in terms of aperture – this allows the sensor to ‘see and gather’ as many photons of light as possible from each point-source (star) in the night sky.

So we are in the situation where we have to use manual focus and exposure with f2.8 as an approximate working aperture – and high ISO values, because of the demand for a relatively fast shutter speed.

And when it comes to our shutter speed the much-vaunted ‘500 Rule’ needs to be consigned to the waste bin – it’s just not a good enough standard to work to, especially considering modern high megapixel count sensors such as Nikon’s D800E/D810/D810A and Canons 5DS.

Leaving the shutter open for just 10 seconds using a 14mm lens will elongate stars EVER SO SLIGHTLY – so the ‘500 Rule’ speed of 500/14 = 35.71 seconds is just going to make a total hash of things.
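
Here’s a rough worst-case drift calculation (a star on the celestial equator) that shows why. The only inputs are the D800E’s ~4.88 micron pixel pitch and the sidereal drift rate of ~15.04 arc-seconds per second:

```python
import math

# Star trail length in pixels for a given focal length and exposure.
def trail_px(focal_mm, exposure_s, pitch_um=4.88, declination_deg=0.0):
    pixel_scale = 206265 * (pitch_um / 1000) / focal_mm       # arcsec per pixel
    drift = 15.04 * exposure_s * math.cos(math.radians(declination_deg))
    return drift / pixel_scale

print(f"10 s at 14mm:   {trail_px(14, 10):.1f} px")    # ~2 px - ever so slightly elongated
print(f"35.7 s at 14mm: {trail_px(14, 35.7):.1f} px")  # ~7.5 px - a total hash
```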

In the shot below – a crop from the image top left – I’ve used a 10 second exposure, but in preference I’ll use 5 seconds if I can get away with it:

Nikon D800E,14-24 f2.8@14mm,10 seconds exposure,f2.8,ISO 6400
RAW, Unprocessed, Full Resolution Crop

WOW….look at all that noise…well, it’s not going to be there for long folks; and NO, I won’t make it vanish with any Noise Reduction functions or plugins either!

5 consecutive frames put through Starry Landscape Stacker – now we have something we can work with!

Download Starry Landscape Stacker from the App Store:

Huge amounts of ‘noise’ can be eradicated using Median Stacking within Photoshop, but Mac users can circumnavigate the ‘aggro’ of layer alignment and layer masking by using this great ‘app’ Starry Landscape Stacker – which does all the ‘heavy lifting’ for you.  Click the link above to download it from the App Store.  Just ignore any daft iTunes pop-ups and click ‘View in Mac App Store’!
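
For the curious, the principle of median stacking fits in a few lines of numpy. The data here is synthetic, and it assumes the frames are already aligned on the stars – which is precisely the ‘heavy lifting’ the app does for you:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0, 1, (100, 100))               # stand-in 'scene'
frames = clean + rng.normal(0, 0.2, (6, 100, 100))  # six noisy exposures of it

single_noise = (frames[0] - clean).std()
median_noise = (np.median(frames, axis=0) - clean).std()
print(f"single frame noise: {single_noise:.3f}")    # ~0.20
print(f"6-frame median:     {median_noise:.3f}")    # ~0.10 - roughly halved
```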

I have a demonstration of Median Stacking on my YouTube channel:

This video is best viewed on YouTube in full screen mode.

In a manner of speaking, the ‘shooting aspect’ of Milky Way/Night Sky/Wide-field Astro is pretty straightforward.  You are working within some very hard constraints with little margin for error.

  • The Earth’s rotation makes the stars track across our frame – so this dictates our shutter speed for any given focal length of lens – shorter focal length = longer shutter speed.
  • Sensor Megapixel count – more megs = shorter shutter speed.
  • We NEED to shoot with a ‘wide open’ aperture, so our ISO speed takes over as our general exposure control.
  • Focusing – this always seems to be the big ‘sticking point’ for most folk – and despite what you read to the contrary, you can’t reliably use the ‘hyperfocal’ method with wide open apertures – it especially will not work with wide-angle zoom lenses!
  • The Earth’s ‘seasonal tilt’ dictates what we can and can’t see from a particular latitude; and in conjunction with time of day, dictates the direction and orientation of a particular astral object such as the Milky Way.
  • Light pollution can mask even the camera’s ability to record all the stars, and it affects the overall scene luminance level.
  • The position and phase of the moon – a full moon frequently throws far too much light into the entire sky – my advice is to stay at home!
  • A moon in between its last quarter and new moon is frequently diagonally opposite the Milky Way, and can be useful for illuminating your foreground.

And there are quite a few other considerations to take into account, like dew point and relative humidity – and of course, the bloody clouds!

The point I’m trying to make is that these shots take PLANNING.

Using applications and utilities like Stellarium and The Photographer’s Ephemeris in conjunction with Google Earth has always been a great way of planning shots.  But for me, the best planning aid is Photopills – especially because of its augmented reality feature.  This allows you to pre-visualise your shot from your current location, and it will compute the dates and times that the shot is ‘on’.

Download Photopills from the App Store:

But it won’t stop the clouds from rolling in!

Even with the very best planning the weather conditions can ruin the whole thing!

I’m hoping that before the end of the year I’ll have a full training video finished about shooting perfect ‘wide field astro’ images – it’ll cover planning as well as BOTH shooting AND processing.

I will show you how to:

  • Effectively use Google Earth in conjunction with Stellarium and Photopills for forward planning.
  • The easiest way to ensure perfect focus on those stars – every time.
  • How to shoot for improved foreground.
  • When, and when NOT to deploy LONG EXPOSURE noise reduction in camera – black frame shooting.
  • How to process RAW files in Lightroom for correct colour balance.
  • How to properly use both Median Stacking in Photoshop and Starry Landscape Stacker to reduce ISO noise.
  • And much more!

One really useful FREE facility on the net is the Light Pollution Map website – I suggest using the latest 2015 VIIRS overlay and the Bing Map Hybrid mode in order to get a rough idea of your foreground and the background light pollution affecting your chosen location.

Don’t forget – if you shoot vertical (portrait?) with a 14mm lens, the top part of the frame can be slightly behind you!
