🤘🤔Nikon D800E, D810 and D850 Usable Dynamic Range Test and Budget Buying Advice👌🤘

Like it or not, this video compares the real usable dynamic range of Nikon’s three most-used cameras for landscape photography. Everyone bangs on endlessly about dynamic range when in fact most of them have no clue what they’re talking about. If you want to see the truth about dynamic range improvements since 2012 then the results of this video may well come as a shock!

If you want to see the tonal response curves of the three Nikon models AND the Canon 5DMk3 then click the image below to view at full size:

usable dynamic range
As you can see, there is very little difference between the three Nikon cameras in the highlight to midtone zone, and the biggest difference between all 4 cameras comes on the left side of the chart, the shadows and lower midtones.

Camera Imaging Area


For God’s Sake! Another bloody idiot YouTuber uploaded a video the other day where they were trying out the Fuji GFX 50, and just moments into said video he came out with the same old pile of junk which amounts to “it’s a bigger sensor so it soaks up more light”.

So now his 76,000+ subscribers are misled into believing something that is plain WRONG.

For anyone who didn’t manage to grasp what I was saying in my previous post HERE let’s try attacking this crackpot concept from a different angle.

Devotees of this farcical belief quite often liken larger sensors to bigger windows in a room.

“A bigger window lets in more light” they say.

Erm…no it doesn’t.  A bigger window has an increased SURFACE AREA that just lets in a larger area of THE SAME LIGHT VALUE.

A 6 foot square pane of glass has a transmission value that is EXACTLY the same as a 3 foot square pane of the same glass, therefore a ‘BIGGER WINDOW’ onto the outside world does NOT let in more light.

Imagine we have a room that has a 6×6 foot and 3×3 foot window in one wall.  Now go press your nose up to both windows – does the world outside look any different?

No, of course it doesn’t.

The only property that ‘window size’ has any bearing on is the area of the ‘illumination footprint’.

So basically the window analogy has ZERO bearing on the matter!

What lets light into the camera is the LENS, not the damn sensor!

The ‘illuminant value’ – or Ev – of the light leaving the back of the lens and entering the imaging plane DOES NOT CHANGE if we swap out our FX body for a crop body – DOES IT!!??!!

So why do these bloody idiots seem to think physics changes when we put a bigger sensor behind the lens?  It’s pure abject stupidity.

The imaging area of a sensor has ZERO effect on the intensity of light striking it – that is something that is only influenced by aperture (intensity) and shutter speed (time).

With digital photography, exposure is ‘per unit area’ NOT total area.  A DSLR sensor is NOT a single unit but an amalgamation of individual units called PHOTOSITES or pixels.  Hence it is the photosite area that governs exposure, NOT the sensor’s total area.
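This ‘per unit area’ point can be put into numbers. The sketch below uses a simplified form of the standard camera equation (the function name and the 0.9 transmission figure are my own illustrative choices, and vignetting is ignored); the thing to notice is that sensor size appears nowhere in it:

```python
import math

def image_plane_illuminance(scene_luminance, f_number, transmission=0.9):
    """Simplified camera equation: illuminance (lux) at the imaging plane.
    Only LENS properties appear in the formula - there is no sensor-size term."""
    return math.pi * scene_luminance * transmission / (4 * f_number ** 2)

# Same scene, same lens, same aperture - FX body vs DX body:
fx = image_plane_illuminance(4000, 8)  # full-frame sensor behind the lens
dx = image_plane_illuminance(4000, 8)  # crop sensor behind the same lens
print(fx == dx)  # True - per-unit-area light is identical;
                 # only the illuminated 'footprint' area differs
```

Swap the sensor behind the lens and the per-unit-area light value stays the same; only the total illuminated area changes.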

There is a sensor on the market that blows all this ‘sucks in more light’ crap clean out of the water, and that sensor is the Sony IMX224MQV.  This is a 1/3-inch class sensor with a diagonal dimension of just 6.09mm and 1.27Mp.  By definition this is one hell of a small ‘window’, yet it can ‘see’ light down to 0.005 lux with good enough SNR to allow the image processor to capture 120 10-bit images per second.

A camera’s ‘window onto the world’ is the lens – end of!

Imagine going back to film for a moment – correct exposure value was the same for say Ilford FP4 irrespective of whether you were using 35mm, 120/220 roll film, 5×4 or 10×8 sheet film.

The size of the recording media within the imaging plane was and still is completely irrelevant to exposure.

Bigger recording media never have and never will ‘suck in’ more light, because they can’t suck in more light than the lens is transmitting!

The only properties of the sensor within the imaging area that WILL change how it reacts to the light transmitted by the lens are:

  • Photosite surface area – number of megapixels
  • Sensor construction – FSI vs BSI
  • Micro lens design
  • CFA array absorption characteristics

After my previous post some stroppy idiot emailed me saying Ken Wheeler AKA The Angry Photographer says the Nikon D850 is a full frame Nikon D500, and that because the D850 is an FX camera and has better dynamic range then this proves I’m talking bollocks.

Well, Ken never said this in terms of anything other than approximate pixel density – he’s not that stupid and dick-heads should listen more carefully!

The D500 has an FSI sensor while the D850 has a BSI sensor with a totally different micro lens design, A-D converter and image processor.

Out of the 4 characteristics listed above, 3 are DRASTICALLY different between the two cameras and the fourth is different enough to have definite implications – so you cannot compare them ‘like for like’.

But using the same lens, shutter speed, ISO and aperture while imaging a flat white or grey scene the sensor in a D850 will ‘see’ no higher light value than the sensor in a D500.

Why?  Because the light emanating from the scene doesn’t change and neither does the light transmitted by the lens.

I own what was the best light meter on the planet – the Sekonic 758.  Nowhere does it have a sensor size function/conversion button on it, and neither does its superseding brother the 858!

There are numerous advantages and disadvantages between bigger and smaller sensors but bigger ‘gathering’ or ‘soaking up’ more light isn’t one of them!

So the next time you hear someone say that increased size of the imaging area – bigger sensor size – soaks up more photons you need to stop listening to them because they do not know what the hell they’re talking about.

But if you choose to believe what they say then so be it – in the immortal words of Forrest Gump, “Momma says stoopid is as stoopid does………”

Post Script:

Imaging Area

Above you can see the imaging area for various digital sensor formats. You can click the image to view it bigger.

Each imaging area is accurately proportional to the others.

Compare FX to the PhaseOne IQ4.  Never, repeat never think that any FX format sensor will ever deliver the same image fidelity as a 645 sensor – it won’t.

Why?

Because look at how much the fine detail in a scene has to be crushed down by the lens to make it ‘fit’ into the sensor imaging area on FX compared to 645.

Andy, you’re talking crap!  Am I?  Why do you think the world’s top product and landscape photographers shoot medium format digital?

Here’s the skinny – it’s not because they can afford to, but rather they can’t afford NOT TO.

As for the GFX50 – its imaging area is around 66% that of true MF and it’s smaller than a lot of its ‘wannabe’ owners imagine.

Sensor Size Myth – Again!

Sensor Size Myth – “A bigger sensor gathers more light.”

If I hear this crap one more time either my head’s going to explode or I’m going to do some really nasty things to someone!

A larger sensor size does NOT necessarily gather any more light than a smaller sensor – END OF!

What DOES gather more light is BIGGER PHOTOSITES – those individual light receptors that cumulatively ‘make up’ the photosensitive surface plane of our camera sensor.

sensor size

Above we have two fictional sensors, one with smaller physical dimensions and one with larger dimensions – the bottom one is a ‘larger sensor size’ than the top one, and the bottom one has TWICE as many photosites as the top one (analogous to more megapixels).

But the individual photosites in BOTH sensors are THE SAME SIZE.

Ignoring the factors of:

  • Micro Lens design
  • Variations in photosite design such as resistivity
  • Wiring Substrate
  • SNR & ADC

the photosites in both sensors will have exactly the same pixel pitch, reactivity to light, saturation capacity and base noise level.

However, if we now try to cram the same number of photosites (megapixels) into the area of the SMALLER sensor – to increase its resolution:

sensor size

we end up with SMALLER photosites.
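Both points are easy to verify numerically. A quick sketch (the sensor dimensions and megapixel counts are made up purely for illustration, assuming a uniform square photosite grid with no border losses):

```python
def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate photosite pitch in microns, assuming a uniform square grid."""
    area_um2 = (width_mm * 1000) * (height_mm * 1000)
    return (area_um2 / (megapixels * 1e6)) ** 0.5

small = pixel_pitch_um(18, 12, 10)  # smaller sensor, 10Mp
large = pixel_pitch_um(36, 12, 20)  # twice the area AND twice the photosites
print(round(small, 2), round(large, 2))  # 4.65 4.65 - identical photosite size

# Now cram the larger sensor's 20Mp into the SMALLER area instead:
crammed = pixel_pitch_um(18, 12, 20)
print(round(crammed, 2))  # 3.29 - smaller pitch = smaller photosites
```

Double the area with double the photosite count and the pitch is unchanged; cram the higher count into the smaller area and the photosites shrink.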

We have a HIGHER pixel resolution but this comes with a multi-faceted major penalty:

  • Decreased Dynamic Range
  • Increased susceptibility to specular highlight clipping
  • Lower photosite SNR (signal to noise ratio)
  • Increased susceptibility to diffraction – f-stop limiting

And of course EXACTLY the same penalties are incurred when we increase the megapixel count of LARGER sensors too – the megapixel race – fueled by FOOLS and KNOW-NOTHING IDIOTS and accommodated by camera manufacturers trying to make a profit.

But this perennial argument that a sensor behaves like a window is stupid – it doesn’t matter if I look outside through a small window or a big one, the light value of the scene outside is the same.

Just because I make the window bigger the intensity of the light coming through it does NOT INCREASE.

And the ultimate proof of the stupidity and futility of the ‘big window vs small window’ argument lies with the ‘proper photographers’ like Ben Horne, Nick Carver and Steve O’Nions to name but three – those who shoot FILM!

A 10″x8″ sheet of Provia 100 has exactly the same exposure characteristics as a roll of 35mm or 120/220 Provia 100, and yet the 10″x 8″ window is 59.73x the size of the 35mm window.
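That 59.73x figure is easy to verify from the nominal frame dimensions (sheet size taken at its nominal 10″x8″, ignoring film holder margins):

```python
# Frame areas in mm^2
area_35mm = 36 * 24                   # standard 35mm frame: 864 mm^2
area_10x8 = (10 * 25.4) * (8 * 25.4)  # 254mm x 203.2mm sheet: 51612.8 mm^2

print(round(area_10x8 / area_35mm, 2))  # 59.74x the 'window' area...
# ...yet Provia 100 wants exactly the same exposure on both formats
```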

And don’t even get me started on the other argument the ‘bigger = more light’ idiots use – that of the solar panel!

“A bigger solar panel pumps out more volts because it gathers more light, so a bigger sensor gathers more light and must pump out better images………”

What a load of shite…………

Firstly, solar panels (SPs) are cumulative devices, and they increase their ‘megapixel count’ by growing in physical dimensions, not by making their ‘photosites’ smaller.

But if you cover half of one with a thick tarpaulin then the cumulative output of the panel drops dramatically!

Also, we want SPs to hit their clip point for maximum voltage generation (the clip point being the point where more light does NOT produce more volts!).

Our camera sensor CANNOT be thought of in the same way:

sensor size

We are not interested in a cumulative output, and we don’t want all the photosites on our sensors to ‘max out’ otherwise we’ll have no tonal variation in our image will we…..!

The shot above is from a D800E fitted with a 21mm prime, ISO 100 and 2secs @f13.

If I’d shot this with the same lens on the D500 and framed the same composition I’d have had to use a SHORTER exposure to prevent the highlights from clipping.

But if bigger sensors gathered more light (FX gathering more than DX) I’d theoretically have had to expose LONGER……….and that would have been a disaster.

Seriously folks, when it comes to sensor size bigger ones (FX) do not gather more light than smaller (DX) sensors.

It’s not the sensor total area that does the light gathering, but the photosites contained therein – bigger photosites gather more light, have better SNR, are less prone to diffraction and result in a higher cumulative dynamic range for the sensor as a whole.

Do NOT believe anyone anywhere on any website, forum or YouTube channel who tells you any different, because they are plain WRONG!

Where does this shite originate from you may ask?

Well, some while back FX dSLR cameras were not made and everything from Canon and Nikon was APS-C 1.5x or 1.6x, or APS-H 1.3x. Canon was first with an FX digital, then Nikon joined the fray with the D3.

Prior to the D3 we Nikon folk had the D300 DX, which was 12.3Mp with a photosite area of 30.36 square microns (µm²).

The D3 FX came along with 12.1Mp but with a photosite area of 70.9µm².
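Those photosite areas follow straight from the published sensor dimensions and pixel counts. A rough check (nominal dimensions, ignoring non-imaging border pixels, so the results land close to – not exactly on – the quoted figures):

```python
def photosite_area_um2(width_mm, height_mm, megapixels):
    """Approximate photosite area in square microns."""
    return (width_mm * height_mm) / megapixels  # mm^2 per Mp == um^2 per pixel

print(round(photosite_area_um2(23.6, 15.8, 12.3), 1))  # D300 DX -> ~30.3
print(round(photosite_area_um2(36.0, 23.9, 12.1), 1))  # D3 FX   -> ~71.1
```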

Better in low light than its DX counterpart due to these MASSIVE photosites, it gave the dick-heads, fools and know-nothing idiots the crackpot idea that a bigger sensor size gathers more light – and you know what……it stuck; and for some there’s no shifting it!

Hope this all makes sense folks.

Don’t forget, any questions or queries then just ask!

If you feel I deserve some support for putting this article together then please consider joining my membership site over on Patreon by using the link below.

Support me on Patreon

Alternatively you could donate via PayPal to tuition@wildlifeinpixels.net

You can also find this article on the free-to-view section of my Patreon channel by clicking this link https://www.patreon.com/posts/sensor-size-myth-22242406

If you are not yet a member of my Patreon site then please consider it as members get benefits, with more membership perks planned over the next 3 months.  Your support would be very much appreciated and rewarded.

Before I go, there’s a new video up on my YouTube Channel showing the sort of processing video I do for my Patreon Members.

You can see it here (it’s 23 minutes long so be warned!):

Please leave a comment on the video if you find it useful, and if you fancy joining my other members over on Patreon then I could be doing these for you too!

All the best

Andy

Dynamic Range, Mid Tones, Metering and ETTR


I recently uploaded a video to my YouTube channel showing you an easy way to find the ‘usable dynamic range’ of your dSLR:

 

The other day I was out with Paul Atkins for a landscape session in the awesome Dinorwic Quarry in Llanberis, Snowdonia.  Highly dynamic clouds and moody light made the place look more like Mordor!

dynamic range

Looking towards the top of the Llanberis Pass from the middle level of Dinorwic Quarry and Electric Mountain.

Here are the 6 unedited shots that make up this finished panoramic view:

dynamic range

As you can see, the images are shot in a vertical aspect ratio.  Shooting at 200mm on the D800E yields an assembled pano that is 16,000 x 7,000 pixels; the advantages for both digital sales and print should be obvious to you!

As you can see, the bright parts of the sky are a lot brighter in the captures than they are in the finished image, but they are not ‘blown’.  Also the shadows in the foreground are not choked or blocked.

In other words the captures are shot ETTR.

Meter – in camera or external.

Any light meter basically looks at a scene (or part thereof) and AVERAGES the tones that it sees.  This average value is then classed by the meter as MID GREY and the exposure is calculated in terms of the 3 variables you set – Time, Intensity and Applied Gain, or shutter, aperture and ISO.

But this leads to all sorts of problems.

All meters are calibrated to an ANSI standard of 12% grey (though this gets a bit ambiguous between manufacturers and testers).  But you can get a good idea of what ‘light meter mid grey/mid tone’ looks like by mentally assigning an RGB value of 118,118,118 to it.

However, we – humans – find 18% grey a more acceptable ‘mid tone grey’ both in print and on our modern monitors.

NOTE: 18% grey refers to the level of REFLECTANCE – it reflects 18% of the light falling on it.  It can also be reproduced in Photoshop using a grey with 128,128,128 RGB values.

So problem number 1 is that of mid tone perception and the difference between what you ‘see’ and what the camera sees and then does in terms of exposure (if you let the camera make a decision for you).

dynamic range

128RGB grey versus 118RGB meter mid grey

Click on the pano image from Dinorwic to view it bigger, then try to FIND a mid grey that you could point your camera meter at – you can’t.

Remember, the grey you try to measure MUST be exactly mid-grey – try it, it’ll drive you nuts trying to find it!

This leads us to problem number 2.

Take your camera outside, find a white wall.  Fill your frame with it and take a shot using ZERO exposure compensation – the wall will look GREY in the resulting shot not WHITE.

Next, find something matte black or near to it.  Fill your frame with it and take another shot – the black will look grey in the shot not black(ish).

Problem number 3 is this – and it’s a bit of a two-headed serpent.  An exposure meter of any kind is COLOUR BLIND, whereas YOU can SEE colours but are tonally blind to them to some degree or other:

Simple primary red, green and blue translate to vastly different grey tones which comes as a big surprise to a lot of folk, especially how tonally light green is.
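The standard Rec.709 luma weights show just how lopsided that colour-to-grey translation is (a rough sketch – strictly speaking these weights apply to linear light values, not gamma-encoded ones, but the relative tonal ranking holds):

```python
def rec709_grey(r, g, b):
    """Collapse an RGB triple to one grey tone using Rec.709 luma weights."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

print(round(rec709_grey(255, 0, 0)))  # pure red   -> 54  (fairly dark)
print(round(rec709_grey(0, 255, 0)))  # pure green -> 182 (surprisingly light)
print(round(rec709_grey(0, 0, 255)))  # pure blue  -> 18  (very dark indeed)
```

Green alone carries over 70% of the perceived brightness, which is exactly why its grey tone surprises so many folk.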

Scene or Subject Brightness Range

Any scene in front of you and your camera has a range of tones from brightest to darkest, and this tonal range is the subject brightness range or SBR for short.  Some folk even refer to it as the scene dynamic range.

If you put your camera meter into spot mode you can meter around your chosen scene and make note of the different exposure values for the brightest and darkest areas of your potential shot.

Your camera spot meter isn’t the most accurate of spot meters because its ‘spot’ is just too big, typically between 4mm and 5mm, but it will serve to give you a pretty good idea of your potential SBR.

A 1 degree spot meter will, with correct usage, yield a somewhat more accurate picture (pun intended) of the precise SBR of the scene in front of you.

Right about now some of you will be thinking I’m hair-splitting and talking about unnecessary things in today’s modern world of post-processing shadow and highlight recovery.

Photography today is full of folk who are prepared to forego the CRAFT of the expert photographer in favour of getting it half-right in camera and then using the crutch of software recovery to correct their mistakes.

Here’s the news – recovery of popped highlights is IMPOSSIBLE, and recovery of shadows to any more than a small degree results in pixel artifacting.  Get this: two WRONGS do NOT make a RIGHT!

If the Mercedes F1 team went racing with the same attitude as the majority of camera users take pictures with, then F1 would be banned because drivers would die at an alarming rate and no car would ever make the finish line!

So, one way or another we can quantify our potential scene SBR.

“But Andy I don’t need to do that because my camera meter does that for me…….”

Oh no it does NOT, it just averages it to what IT THINKS is a correct mid tone grey – which it invariably isn’t!

This whole mid tone/mid grey ‘thing’ is a complete waste of time because:

  • It’s near impossible to find a true mid tone in your scene to take a reading off.
  • What you want as a mid tone will be at odds with your camera meter by at least 1/2 stop.
  • If you are shooting wildlife or landscapes you can’t introduce a ‘grey card’.
  • Because of the above, your shot WILL BE UNDER EXPOSED.
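For the record, the mismatch between an 18% ‘human’ mid grey and a 12%-calibrated meter is worth a touch over half a stop:

```python
import math

# Stop difference between an 18% and a 12% grey reference
offset_stops = math.log2(0.18 / 0.12)
print(round(offset_stops, 2))  # 0.58 stop - hence 'at least 1/2 stop' out
```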

“Yeah, but I can always bracket my shots and do an exposure blend Andy so you’re still talking crap….”

Two answers to that one:

  1. You can’t bracket shots and blend if your MAIN subject is moving – de-ghosting is only effective on small parts of a scene with minimal movement between frames.
  2. The popular “shoot and bracket two each end” makes you look like a total dickhead and illustrates that you know less than zero about exposure.  Try doing that on a paying job in front of the client and see how long you last in a commercial environment.

By far the BEST way of calculating exposure is the ETTR method.

ETTR, Expose to the Right.

If you meter for a highlight, your camera will treat that as a mid tone because your camera ASSUMES it’s a mid tone.

Your camera meter is a robot programmed to react to anything it sees in EXACTLY the same way.  It doesn’t matter if your subject is a black cat in the coal house or a snow man in a snow storm, the result will be the same 118,118,118 grey sludge.

Mid tones are, as we’ve already ascertained, difficult to pin down and full of ambiguity, but highlights are not.  So let’s meter the brightest area of the image and expose it hard over to the right of the histogram.

The simplest way to achieve this is to use your live view histogram with the camera in full manual mode.

Unlike the post-shot review histogram, the live-view histogram is not subject to jpeg compression, and can be thought of as something of a direct readout of scene tonality/brightness.

Using your exposure controls (usually shutter speed for landscape photography) you can increase your exposure to push the highlight peak of the histogram to the right as far as you can go before ‘hitting the wall’ on the right hand side of the histogram axis – in other words the camera sensor highlight clipping point.

Of course, this has the added benefit of shifting ALL the other tones (mids and shadows) to the right as well, resulting in far less clipping potential in your shadow areas.

So back to Dinorwic again and here’s a shot that has been exposed ETTR on the live view histogram using spot metering over what I deemed to be the brightest area of the sky:

The red square indicates the approximate size of the spot meter area.

I was a naughty boy not recording this on video for you but I forgot to pack the HDMI lead for the video recorder – I’ll do one shortly!

The problem with using the Live View Histogram is that it can be a bit of a struggle to see it.  Your live view screen itself can be hard to see in certain light conditions outside, and the live view histogram itself is usually a bit on the small side – nowhere near as big as the image review histogram you can see here.

But looking at the review histogram above you can see that there’s a ‘little bit more juice’ to be had in terms of exposure of the highlights because of that tiny gap between the right end of the histogram and the ‘wall’ at the end of the axis.

Going back to the video the maximum ETTR ‘tipping point’ was centered around these three shots:

Clipped

Not Clipped (the one we allocated the star rating to). Exposure is -1/3rd stop below clipped.

Safe, but -2/3rd stop below Clipped.

The review histogram puts the Dinorwic shot highlights firmly in the same exposure bracket as ‘Safe, but -2/3rd stop below Clipped, and tells us there is another 1/3rd stop ‘more juice’ to be had!

So lengthening the exposure by 1/3rd stop and changing from 1/60th sec to 1/50th sec gives us this:

The red square indicates the approximate size of the spot meter area.

Live View Histogram ETTR

Live View Histogram plus 1/3 stop more juice! Highlights STILL below Clipping Point and shadows get 1/3rd stop more exposure.
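For anyone wondering where 1/50th comes from: shutter speeds move in nominal third-stop steps, and a third of a stop longer than 1/60th lands on the 1/50th mark:

```python
base = 1 / 60
one_third_longer = base * 2 ** (1 / 3)  # an exact third-stop increase in time
print(round(1 / one_third_longer, 1))   # 47.6 - i.e. the nominal '1/50' step
```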

That’s what it’s all about baby – MORE JUICE!

And you will not be in a position to confidently acquire more juice unless you find the USABLE DYNAMIC RANGE of your camera sensor.

The whole purpose of finding that usable DR is to discover where your highlight and shadow clipping points are – and they are very different between camera models.

For instance, the highlight clipping point value of the Nikon D850 is different from that of the Nikon D800E, but the shadow clipping point is pretty similar.

There is an awful lot more use to discovering your camera’s usable dynamic range than a lot of folk imagine.

And if you do it the precise way then you can acquire a separate meter that will accept camera profiling:

dynamic range

You can create a dynamic range profile for your camera (and lens combo*) and then load it into the meter:

and then have your camera’s usable dynamic range as part of the metering scale – so then you have NO EXCUSE for producing a less than optimum exposure.

(*)Note: yes, the lens does have an effect on dynamic range due to micro-contrast and light transmission variables – if you want to be super-picky!

AND THEY SAY HANDHELD METERS ARE DEAD, OLD TECH and of NO USE!!!

Anyone who says or even thinks that is a total KNOB.

Your camera dynamic range, the truthful one – FIND IT, KNOW IT, USE IT.

And don’t listen to the idiots and know-nothings, just listen and heed the advice of those of us who actually know what we’re doing.

NOTE:  The value of grey (gray) cards and how to use them for accurate measurement is a subject in its own right and provides the curious with some really interesting reading.  Believe me, it’s far more expansive than the info I’ve given here.  But adopting an ETTR approach when exposing to a sensor whose physical behaviour you KNOW (dynamic response to light/dynamic range) can relieve you of all critical mid-tone concerns.

This article has taken me over 8 hours to produce in total, and is yours to view for FREE.  If you feel I deserve some support for doing this then please consider joining my membership site over on Patreon by using the link below.

Support me on Patreon

Alternatively you could donate via PayPal to tuition@wildlifeinpixels.net

Exposure Value – What does it mean?

Exposure Value (Ev) – what does Ev mean?

I get asked this question every now and again because I frequently use it in the description annotations of image shot data here on the blog.

And I have to say from the outset that Exposure Value comes in two flavours – relative and absolute – and here I’m mainly talking about the former.

So, let’s start with basic exposure.

Exposure can be thought of as Intensity x Time.

Intensity is controlled by our aperture, and time is controlled by our shutter speed.

This image was shot at 0.5sec (time), f11 (intensity) and ISO 100.

exposure value

We can think of the f11 intensity of light striking the sensor for 0.5sec as a ‘DOSAGE’ – and if that dosage results in the desired scene exposure then that dosage can be classed as the exposure value.

Let’s consider two exposure settings – 0.5sec at f11 ISO100 and 1sec at f16 ISO 100.

Technically speaking they are two different exposures, but BOTH result in the same light dosage at the sensor.  The second exposure is TWICE the length of time but HALF the intensity.

So both exposures have the same Exposure Value or Ev.

The following exposure of the same scene is 1sec at f11 ISO 100:

exposure value

The image was shot at the same intensity (f11) but the shutter speed (time) was twice as long, and so the dosage was doubled.  Double the dose = +1Ev!

And in this version the exposure was 0.25sec at f11 ISO 100:

exposure value

Here the light dosage at the sensor is HALF that of the correct/desired exposure because the time factor was halved while using the same intensity.

So half the dose = -1Ev!

Now some of you will be thinking that -1Ev is 1 stop under exposure – and you’d be right!

But Ev, or exposure value, is just a cleaner way of thinking about exposure because it doesn’t tie you to any specific camera setting – and it’s more easily transferable between cameras.
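Those dosage relationships can be checked against the standard absolute Ev formula, Ev = log2(N²/t) at ISO 100. One thing to note: the absolute scale runs the ‘other way’ – doubling the dose LOWERS the absolute Ev number by 1, which is what the relative ‘+1Ev’ above describes:

```python
import math

def absolute_ev(f_number, shutter_s, iso=100):
    """Absolute exposure value: Ev = log2(N^2 / t), referenced to ISO 100."""
    return math.log2(f_number ** 2 / shutter_s) - math.log2(iso / 100)

print(round(absolute_ev(11, 0.5), 1))   # 0.5s  @ f11 -> 7.9
print(round(absolute_ev(16, 1.0), 1))   # 1s    @ f16 -> 8.0 (same dosage,
                                        #   within rounding of nominal f-numbers)
print(round(absolute_ev(11, 1.0), 1))   # 1s    @ f11 -> 6.9 (double dose = +1 stop)
print(round(absolute_ev(11, 0.25), 1))  # 0.25s @ f11 -> 8.9 (half dose = -1 stop)
```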

What Do I Mean by that?

Example – If I use say a 50mm prime lens on my Nikon D800E with the metering in matrix mode, ISO 100 and f14 I might get a metered exposure shutter speed of 1/10th of a second.

But if I replace the D800E with a D4 set at 100 ISO, matrix and f14 I’ll guarantee the metered shutter speed requirement will be either 1/13 or 1/15th of a second.

The D4 meters between -1/3Ev and -2/3Ev (in other words 1/2 stop) faster/brighter than the D800E when fitted with the same lens and set to the same aperture and ISO, and shooting exactly the same framing/composition.

Yet the ‘as metered’ shots from both cameras look pretty much the same with respect to light dosage – exposure value.
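The size of that calibration offset can be worked out directly from the two metered shutter speeds (using the illustrative D800E/D4 numbers from above):

```python
import math

def stop_offset(shutter_a_s, shutter_b_s):
    """How many stops apart two metered shutter speeds are
    (same aperture, ISO and framing assumed)."""
    return math.log2(shutter_a_s / shutter_b_s)

print(round(stop_offset(1 / 10, 1 / 13), 2))  # 0.38 - about 1/3 stop
print(round(stop_offset(1 / 10, 1 / 15), 2))  # 0.58 - about 2/3 stop
```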

Exposure Settings Don’t Transfer between camera models very well, because the meter in a camera is calibrated to the response curve of the sensor.

A Canon 1DX Mk2 will usually generate an evaluative-metered shutter speed 1/3rd of a stop faster than a matrix-metered Nikon D4S for the same given focal length, aperture and ISO setting.

Both setups’ ‘as metered’ shots will look pretty much the same, but transposing the Canon settings to the Nikon will result in -1/3 stop under exposure – which on a digital camera is definitely NOT the way to go!

‘As Metered’ can be regarded as +/-0Ev for any camera (Note: this does NOT mean Ev=0!)

Any exposure compensation you use in order to achieve the ‘desired’ exposure on the other hand can be thought of as ‘metered + or – xEv’.

exposure compensation

Shot with the D4 plus 70-200 f2.8@70mm in manual exposure mode, 1/2000th sec, f8 and ISO 400 using +2/3Ev compensation.

The matrix metered exposure indicated by the camera before the exposure value compensation was 1/3200th – this would have made the Parasitic Jaeger (posh name for an Arctic Skua!) too dark.

A 1DXMk2 using the corresponding lens and focal length, f8, ISO 400 and evaluative metering would have wanted to generate a shutter speed of at least 1/4000th sec without any exposure compensation, and 1/2500th with +2/3Ev exposure compensation.

And if shot at those settings the Canon image would look pretty much like the above.

But if the Nikon D4 settings had been fully replicated on the Canon then the shot would be between 1/3 and 1/2 stop over exposed, risking ‘blowing’ of some of the under-wing and tail highlights.

So the simple lesson here is don’t use other photographers’ settings – they never work unless you’re on identical gear!

But if you are out with me and I tell you “matrix/evaluative plus 1Ev” then your exposure will have pretty much the same ‘light dosage’ as mine irrespective of you using the right shutter speed, aperture or ISO for the job or not!

I was brought up to think in terms of exposure value and Ev units, and to use light meters that had Ev scales on them – hell, the good ones still have ’em!

If you look up the ‘tech-specs’ for your camera you’ll find that metering sensitivity is normally quoted as an Ev range.  And that’s not all – your auto focus may well have a low light Ev limit quoted too!

To all intents and purposes Ev units and your more familiar ‘f-stops’ amount to one and the same thing.

As we’ve seen before, different exposures in terms of intensity and time can have the same exposure value, and all Ev is concerned with is the cumulative outcome of our shutter speed, aperture and ISO choices.

Most of you will take exposures at ‘what the camera meter says’ settings, or you will use the meter indicated exposure as a baseline and modify the exposure settings with either positive or negative ‘weighting’ via your exposure compensation dial.

That’s Ev compensation relative to your meters baseline.

But have you ever asked yourself just how accurate your camera meter is?

So I’ve just stepped outside my front door and taken these two frames:

exposure value

EV=15/Sunny 16 Rule 1/100th sec, f16, 100 ISO – click to view large.

exposure value

Matrix Metering, no exposure compensation 1/200th sec, f16, ISO 100 – click to view large

These two raw files have been brought into Lightroom and THE ONLY adjustment has been to change the profile from Adobe Color to Camera Neutral.

Members of my subscription site can download the raw files and see for themselves.

Look at the histogram in both images!

The exposure for xxx164.NEF (the top image) is perfection personified while xxx162.NEF is under exposed by ONE WHOLE STOP – why?

Because the bottom image has been shot at the camera-specified matrix metered exposure, while the top image has been shot using the good old ‘Sunny 16 Rule’ that’s been around since God knows when!

“Yeah, but I could just use the shadow recovery slider on the bottom shot Andy….”  Yes, you could, if you wanted to be an idle tit, and even then the top image would still be better because there’s no ‘recovery’ being used on it in the first place.  Remember, more work at the camera means less work in processing!

Recovery of either shadows or highlights is ‘poor form’ and no substitute for correct exposure in the first place. Digital photography is just like shooting colour transparency film – you need to ‘peg the highlights’ as highlights BUT without over exposing them and causing them to ‘blow’.

In other words – ETTR, expose to the right!

And seeing as your camera meter wants to turn everything into midtone grey shite it’s the very last thing you should ever allow to dictate your final exposure settings – as the two images above prove beyond argument.

And herein lies the problem.

Even if you use the spot metering function the meter will read the brightness of what is covered by the ‘spot’ and then calculate the exposure required to expose that tonal brightness AS A MID TONE GREY.

That’s all fine ‘n dandy – if the metered area is actually an exact mid tone.  But what if you were metering a highlight?

Then the metered exposure would want to expose said highlight as a midtone and the overall highlight exposure would be far too dark.  And you can guess what would happen if you trusted your meter to spot-read a shadow.
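The compensation you'd need is simple Ev arithmetic. Here's a rough Python sketch (the 18% mid-grey target and the reflectance figures are illustrative assumptions, not measured values from any real meter):

```python
# A rough sketch (illustrative numbers, not any real meter's calibration):
# a reflective meter maps whatever tone it reads to ~18% middle grey, so to
# place the metered patch at its TRUE tone you add the Ev difference between
# its real reflectance and 18%.
from math import log2

def metering_compensation(true_reflectance, meter_target=0.18):
    """Ev compensation needed so the metered patch renders at its true tone."""
    return log2(true_reflectance / meter_target)

# Spot-meter a bright white highlight (~90% reflectance):
print(round(metering_compensation(0.90), 2))   # ≈ +2.32 Ev - open UP
# Spot-meter a deep shadow (~4.5% reflectance):
print(round(metering_compensation(0.045), 2))  # ≈ -2.0 Ev - close DOWN
```

In other words, trust the meter's reading of a highlight and you'll be a couple of stops under; trust it on a shadow and you'll be a couple of stops over.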

A proper hand-held spot meter has an angle of view or AoV of 1 degree.

Your camera spot meter angle of view is dictated by the focal length of the lens you have fitted.

On my D800E for example, the ‘spot’ is 4mm in diameter, so I need a lens of around 130mm focal length equivalent before my spot covers just 1 degree – total stupidity.

But it does function fairly well with wider angle lenses and exposure calculations when used in conjunction with the live view histogram.  And that will be the subject of my next blog post – or perhaps I’ll do a video for YouTube!

So I doubt this blog post about relative exposure compensation is going to light your world on fire – it began as an explanation to a recurring question about my exif annotation habits and snowballed somewhat from there!

But I’ll leave you with this little guide to the aforementioned Sunny 16 Rule, which has been around since Noah took up boat-building:

To use this table just set your ISO to 100.

Your shutter speed needs to be the reciprocal of your ISO – in other words 1/100 sec for use with the stated aperture values:

Aperture   Lighting conditions   Shadow properties
f/22*      Snow/sand             Dark with sharp edges
f/16       Sunny                 Distinct
f/11       Slight overcast       Soft around edges
f/8        Overcast              Barely visible
f/5.6**    Heavy overcast        No shadows
f/4        Open shade/sunset     No shadows

* – I would not shoot at f/22 because of diffraction – try 1/200th sec at f/16 instead.

** – let’s try some cumulative Ev thinking here and go for more depth of field using f11 and sticking with 100 ISO. -2Ev intensity (f5.6 to f11) requires +2Ev on time, so 1/100th sec becomes 1/25th sec.
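The cumulative-Ev arithmetic in that second footnote can be sketched in a few lines of Python (a toy illustration of the trade, nothing camera-specific – note the marked f-numbers are rounded, so the result isn't to-the-millisecond exact):

```python
# A hedged sketch of cumulative-Ev thinking: every Ev you lose in aperture
# (intensity) must be repaid with an Ev of time, and vice versa.
from math import log2

def stops_between(f_from, f_to):
    """Ev change in intensity between two f-numbers (intensity ∝ 1/N²)."""
    return 2 * log2(f_to / f_from)

def compensated_shutter(base_shutter, f_from, f_to):
    """New shutter time after an aperture change, total exposure held constant."""
    return base_shutter * 2 ** stops_between(f_from, f_to)

# Heavy overcast is f/5.6 @ 1/100 ISO 100; stopping down to f/11 for more
# DoF costs ~2 Ev of intensity, so time must gain ~2 Ev:
print(compensated_shutter(1/100, 5.6, 11))  # ≈ 1/25 sec
```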

Over the years I’ve taken many people out on photo training days, and a lot of them seem to think I’m some sort of magician when I turn their camera on, switch it to manual, dial in a couple of settings and produce a half decent image without ever looking at the meter on their camera.

It ain’t magic – I just had this table burnt into the back of my eyeballs years ago.

Works a charm – if you can do the mental calculations in your head, and that’s easy with practice.  The skill is in evaluating your shooting conditions and relating them to the lighting and shadow descriptions.

And here’s a question for you; we know our camera meter wants to ‘peg’ what it’s measuring as a midtone irrespective of whether it’s measuring a midtone or not.  But what do you think the Sunny 16 Rule is ‘pegging’ and where is it pegging it on the exposure curve?

If you can answer that question correctly then the other flavour of exposure value – absolute – might well be of distinct interest to you!

Give it a try, and if you use it correctly you’ll never be more than 1/3rd of a stop out, if that.  Then you can go and unsubscribe from all those twats on YouTube who told you it was out-dated and defunct or never told you about it in the first place!

I hope you’ve found the information in this post useful.

I don’t monetize my YouTube videos or fill my blog posts with masses of affiliate links, and I rely solely on my patrons to help cover my time and server costs. If you would like to help me to produce more content please visit my Patreon page on the button above.

Many thanks and best light to you all.

Astro Landscape Photography


One of my patrons, Paul Smith, and I ventured down to Shropshire and the spectacular quartzite ridge of The Stiperstones to get this image of the Milky Way and Mars (the large bright ‘star’ above the rocks on the left).

I always work the same way for astro landscape photography, beginning with getting into position just before sunset.

Using the PhotoPills app on my phone I can see where the milky way will be positioned in my field of view at the time of peak sky darkness.  This enables me to position the camera exactly where I want it for the best composition.

The biggest killer in astro landscape photography is excessive noise in the foreground.

The other problem is that foregrounds in most images of this genre are not sharp due to a lack of depth of field at the wide apertures you need to shoot the night sky at – f2.8 for example.

To get around this problem we need to shoot a separate foreground image at a lower ISO, a narrower aperture and focused closer to the camera.

Some photographers change focus, engage long exposure noise reduction and then shoot a very long exposure.  But that’s an eminently risky thing to do in my opinion, both from a technical standpoint and one of time – a 60 minute exposure will take 120 minutes to complete.

The length of exposure is chosen to allow the very low photon-count from the foreground to ‘build up’ on the sensor and produce a usable level of exposure from what little natural light is around.

From a visual perspective, when it works, the method produces images that can be spectacular because the light in the foreground matches the light in the sky in terms of directionality.

Light Painting

To get around the inconvenience of time and super-long exposures a lot of folk employ the technique of light painting their foregrounds.

Light painting – in my opinion – destroys the integrity of the finished image because it’s so bloody obvious!  The direction of light that’s ‘painted’ on the foreground bears no resemblance to that of the sky.

The other problem with light painting is this – those that employ the technique hardly ever CHECK to see if they are in the field of view of another photographer – think about that one for a second or two!

My Method

As I mentioned before, I set up just before sunset.  In the shot above I knew the milky way and Mars were not going to be where I wanted them until just after 1am, but I was set up by 9.20pm – yep, a long wait ahead, but always worth the effort.

Astro Landscape Photography

As we move towards the latter half of civil twilight I start shooting my foreground exposure, and I’ll shoot a few of these at regular intervals between then and mid nautical twilight.

Because I shoot raw the white balance set in camera is irrelevant, and can be balanced with that of the sky in Photoshop during post processing.

The key things here are that I have a shadowless, even illumination of my foreground, which is shot at a low ISO, in perfect focus, and – shot at say f8 – has great depth of field.

Once deep into blue hour and astronomical twilight the brighter stars are visible, so I now use full magnification in live view and focus on a bright star in the camera’s field of view.

Then it’s a waiting game – waiting for the sky to darken to its maximum and the Milky Way to come into my desired position for my chosen composition.

Shooting the Sky

Astro landscape photography is all about showing the sky in context with the foreground – I have absolutely ZERO time for those popular YouTube photographers who composite a shot of the night sky into a landscape image shot in a different place or from a different angle.

Good astro landscape photography HAS TO BE A COMPOSITE though – there is no way around that.

And by GOOD I mean producing a full resolution image that will sell through the agencies and print BIG if needed.

The key things that contribute to an image being classed good in my book are simple:

  • Pin-point stars with no trailing
  • Low noise
  • Sharp from ‘back’ to ‘front’.

Pin-point stars are solely down to the correct shutter speed for your sensor size and megapixel count.
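One rough way of putting numbers to this is the old ‘500 rule’ – and I stress this is a generic rule of thumb, not my exact method; high megapixel counts demand a much smaller constant:

```python
# A hedged sketch of the classic '500 rule' of thumb: the longest exposure
# (seconds) before stars visibly trail, for a given focal length and crop
# factor. The constant 500 suits low-resolution sensors; high-MP bodies
# resolve trailing much sooner and need a smaller constant (e.g. 200-300).
def max_star_exposure(focal_length_mm, crop_factor=1.0, rule=500):
    """Rule-of-thumb maximum star exposure in seconds."""
    return rule / (focal_length_mm * crop_factor)

print(round(max_star_exposure(14), 1))                   # FX + 14mm → ~35.7 s
print(round(max_star_exposure(14, crop_factor=1.5), 1))  # DX + 14mm → ~23.8 s
```

Drop the constant to 200 or so for a dense 36Mp sensor and the numbers come much closer to the short exposures I actually use.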

Low noise is covered by shooting a low ISO foreground and a sequence of high ISO sky images, and using Starry Landscape Stacker on Mac (Sequator on PC appears to be very similar) in conjunction with a mean or median stacking mode.

Further noise cancelling is achieved by the shooting of Dark Frames, and the typical wide-aperture vignetting is cancelled out by the creation of a flat field frame.

And ‘back to front’ image sharpness should be obvious to you from what I’ve already written!

So, I’ll typically shoot a sequence of 20 to 30 exposures – all one after the other with no breaks or pauses – and then a sequence of 20 to 30 dark frames.
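Why stacking that many frames works can be shown with a toy simulation – my own grossly simplified model, nothing to do with Starry Landscape Stacker’s actual algorithm: mean-averaging N frames knocks random noise down by roughly the square root of N while the signal stays put.

```python
# Toy simulation (illustrative model only): mean-stacking N noisy frames
# reduces random noise by ~sqrt(N) while leaving the signal unchanged.
import random

random.seed(42)

def noisy_frame(signal=100.0, noise=10.0, n_px=10_000):
    """One fake 'frame': constant signal plus Gaussian read/shot noise."""
    return [signal + random.gauss(0, noise) for _ in range(n_px)]

def mean_stack(frames):
    """Per-pixel mean of a list of frames."""
    n = len(frames)
    return [sum(px) / n for px in zip(*frames)]

def stdev(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / len(xs)) ** 0.5

single = noisy_frame()
stacked = mean_stack([noisy_frame() for _ in range(25)])
print(round(stdev(single), 1))   # ≈ 10 - one frame's noise
print(round(stdev(stacked), 1))  # ≈ 2  - 25 frames → ~sqrt(25) = 5x less
```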

Shutter speeds usually range from 4 to 6 seconds.

Watch this video on my YouTube Channel about shutter speed:

Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.

Putting it all Together

Shooting all the frames for astro landscape photography is really quite simple.

Putting it all together is fairly simple and straightforward too – but it’s TEDIOUS and time-consuming if you want to do it properly.

The shot above took me a little over 4 hours!

And 80% of it is retouching in Photoshop.

I produce a very extensive training title – Complete Milky Way Photography Workflow – which teaches you EVERYTHING you need to know about the shooting and processing of astro landscape photography images – you can purchase it here – and if you use the offer code MWAY15 at the checkout you’ll get £15 off the purchase price.

But I wanted to try Raw Therapee for this Stiperstones image, and another of my patrons – Frank – wanted a video of processing methodology in Raw Therapee.

Easier said than done, cramming 4 hours into a typical YouTube video!  But after about six attempts I think I’ve managed it, and you can see it here, but I warn you now that it’s 40 minutes long:

Best viewed on the channel itself, and click the little cog icon to choose 1080pHD as the resolution.

I hope you’ve found the information in this post useful, together with the YouTube videos.


Photoshop View Magnification

View Magnification in Photoshop (Patreon Only).

A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE .

But there is way more to pixel per inch (PPI) resolution values than just the general coverage I gave it in that video.

And this post is about a major impact of PPI resolution that seems to have evaded the understanding and comprehension of perhaps 95% of Photoshop users – and Lightroom users too for that matter.

I am talking about image view magnification, and the connection this has to your monitor.

Let’s make a new document in Photoshop:

View Magnification

We’ll make the new document 5 inches by 4 inches, 300ppi:

View Magnification

I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…..

Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.

Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:

View Magnification

Yes, my 5″ long image is in reality 13.5 inches long on the display!

The minute you do this, you may well get very confused!

Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.

Doing this on a 13″ MacBook Pro Retina, the 5″ edge is actually 6.875″, giving us a magnification factor of 1.375:1.

On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long, giving us a magnification factor of 3.2:1.

And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1.

The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.

The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP.

And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
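All three magnification factors fall out of one bit of arithmetic: on-screen size at 100% zoom is just image pixels divided by the monitor’s pixel pitch. A quick Python sketch (the ppi figures are the approximations quoted above, not exact measurements):

```python
# A hedged sketch of the view-magnification arithmetic: at 100% zoom the
# on-screen length of an edge is (image inches x image ppi) / monitor ppi.
def on_screen_inches(image_inches, image_ppi, monitor_ppi):
    """Physical on-screen length of an image edge at 100% view magnification."""
    return image_inches * image_ppi / monitor_ppi

def magnification(image_ppi, monitor_ppi):
    """Apparent magnification factor of the image at 100% zoom."""
    return image_ppi / monitor_ppi

# The 5-inch edge of the 5x4 @ 300ppi document on the three displays:
print(round(on_screen_inches(5, 300, 94), 2))   # 24" HP     → ~15.96"
print(round(on_screen_inches(5, 300, 109), 2))  # 27" Eizo   → ~13.76"
print(round(on_screen_inches(5, 300, 227), 2))  # 13" Retina → ~6.61"
```

The computed figures sit close to the ruler measurements above – the small differences are down to rounded ppi values and ruler error.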

So WTF Gives with 1:1 or 100% View Magnification Andy?

Well, it’s simple.

The vast majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size; others think it’s a full ppi resolution view and that they are looking at the image at 300ppi.

WRONG – on BOTH counts !!

A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL.  In other words, the image-to-display pixel ratio is now 1:1.

So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.

Why do I say that?  Because the majority of desktop machine users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!

When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it in a lowly resolution of 109ppi.  With regard to its properties such as sharpness and inter-tonal detail, in essence, it looks only 1/3rd as good as it is in reality.

Hands up those who think this is a BAD THING.

Did you put your hand up?  If you did, then see me after school….

It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.

This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.

Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.

And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.

This brings me to a point where I’d like to draw your attention to my latest YouTube video:

Did you like that – cheeky little trick isn’t it!

Anyway, back to the topic at hand.

If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:

  • 1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
  • 2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.

Seriously, if you are a stills photographer with a hankering for the latest 4 or 5k monitor, then grow up and learn to understand things for goodness sake!

Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras.  But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.

Working on a monitor with a ppi resolution between the mid-90s and the low 100s, at 100% view magnification, will always give you the most flexible and easy processing workflow.

Just remember, Photoshop linear physical dimensions always ‘appear’ to be larger than ‘real inches’!

And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel.  At 50% view magnification 1 SCREEN pixel is actually displaying the dithered average of 2 IMAGE pixels along each axis (4 by area).  At 25% magnification each monitor pixel is displaying the average of 4 image pixels per axis (16 by area).
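That zoom arithmetic, sketched in Python (a trivial illustration, nothing more – it reports both the per-axis and the by-area image-to-screen pixel ratios):

```python
# Trivial sketch: at a view magnification m (e.g. 0.5 for 50%), each screen
# pixel stands in for 1/m image pixels per axis, i.e. (1/m)**2 by area.
def image_px_per_screen_px(magnification):
    """Return (per-axis, by-area) image pixels represented by one screen pixel."""
    linear = 1 / magnification
    return linear, linear ** 2

print(image_px_per_screen_px(1.0))   # (1.0, 1.0)  - true 1:1
print(image_px_per_screen_px(0.5))   # (2.0, 4.0)  - 2 per axis, 4 by area
print(image_px_per_screen_px(0.25))  # (4.0, 16.0) - 4 per axis, 16 by area
```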

Anyway, that’s about it from me until the New Year folks, though I am the world’s biggest Grinch, so I might well do another video or two on YouTube over the ‘festive period’ – so don’t forget to subscribe over there.

Thanks for reading, thanks for watching my videos, and Have a Good One!

 

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

FX vs DX

FX versus DX

It amazes me that people still don’t understand the relationship between FX and DX format sensors.

Millions of people across the planet still think that when they put an FX lens on a DX body and turn the camera on, something magic happens and the lens somehow becomes a different beast.

NO…it doesn’t!

There is so much crap out there on the web, resulting in the blind being led by the stupid – and that is a hardcore fact.  Some of the ‘stuff’ I get sent links to on the likes of ‘diaper review’ (to coin Ken W.’s name for it) and others, leaves me totally aghast at the number of fallacies that are being promoted and perpetuated within the content of these high-traffic websites.

FFS – this has GOT to STOP.

Fallacy 1.  Using a DX crop sensor gives me more magnification.

Oh no it doesn’t!

If we arm an FX and a DX body with identical lenses, let’s say 500mm f4’s, and go and take the same picture, at the same time and from the same subject distance with both setups, we get the following images:

FX versus DX

FX versus DX: FX + 500mm f4 image – 36mm x 24mm frame area FoV

FX versus DX

FX versus DX: DX + 500mm f4 image – 24mm x 16mm frame area FoV

FX versus DX

FX versus DX: With both cameras at the same distance from the subject, the Field of View of the DX body+500mm f4 combo is SMALLER – but the subject is EXACTLY the SAME SIZE.

Let’s overlay the two images:

FX versus DX

FX versus DX: The DX field of view (FoV) is indicated by the black line. HOWEVER, this line only denotes the FoV area. It should NOT be taken as indicative of pixel dimensions.

The subject APPEARS larger in the DX frame because the frame FoV is SMALLER than that of the FX frame.

FX versus DX

But I will say it again – the subject is THE SAME DAMN SIZE.  Any FX lens projects an image onto the focal plane that is THE SAME SIZE irrespective of whether the sensor is FX or DX – end of story.

Note: If such a thing existed, a 333mm prime on a DX crop body would give us the same COMPOSITION, at the same subject distance, as our 500mm prime on the FX body.  But at the same aperture and distance, this fictitious 333mm lens would give us MORE DoF due to it being a shorter focal length.
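That note is easy to check with a bit of thin-lens trigonometry. A simple sketch using the standard field-of-view formula (real lenses deviate slightly from the ideal thin-lens model, so treat the numbers as approximate):

```python
# A hedged sketch: the focal length matching the FX composition on DX is
# FX focal / crop factor, and horizontal FoV follows from sensor width and
# focal length via the standard thin-lens formula.
import math

def equivalent_focal(fx_focal_mm, crop_factor=1.5):
    """Focal length giving the same composition on DX at the same distance."""
    return fx_focal_mm / crop_factor

def horizontal_fov_deg(sensor_width_mm, focal_mm):
    """Horizontal angle of view for a given sensor width and focal length."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_mm)))

print(round(equivalent_focal(500), 1))          # 333.3 mm - the fictitious lens
print(round(horizontal_fov_deg(36, 500), 2))    # FX + 500mm
print(round(horizontal_fov_deg(24, 333.3), 2))  # DX + '333mm' - same FoV
```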

Fallacy 2.  Using a DX crop sensor gives me more Depth of Field for any given aperture.

The other common variant of this fallacy is:

Using a DX crop sensor gives me less Depth of Field for any given aperture.

Oh no it doesn’t – not in either case!

Understand this people – depth of field is, as we all know, governed by the aperture diaphragm – in other words the f number.  Now everyone understands this, surely to God.

But here’s the thing – where’s the damn aperture diaphragm?  Inside the LENS – not the camera!

Depth of field is REAL or TRUE focal length, aperture and subject distance dependent, so our two identical 500mm f4 lenses at say 30 meters subject distance and f8 are going to yield the same DoF.  That’s irrespective of the physical dimensions of the sensor – be they 36mm x 24mm, or 24mm x 16mm.

But, in order for the FX setup to obtain the same COMPOSITION as that of the DX, the FX setup will need to be CLOSER to the subject – and so using the same f number/aperture value will yield an image with LESS DoF than that of the DX, because DoF decreases with decreased distance, for any given f number.

To obtain the same COMPOSITION with the DX as that of the FX, then said DX camera would need to move further away from the subject.  Therefore the same aperture value would yield MORE DoF, because DoF increases with increased distance, for any given f number.
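You can put numbers to the distance effect with the standard thin-lens DoF approximations – textbook formulas, not my own measurements, and the 0.03mm CoC is an assumed generic FX value:

```python
# Hedged sketch using standard hyperfocal/DoF approximations: same 500mm
# lens at f/8, same CoC, two subject distances - DoF grows with distance.
def dof_mm(focal_mm, f_number, subject_mm, coc_mm=0.03):
    """Total depth of field (mm) via the standard thin-lens approximations."""
    h = focal_mm ** 2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

print(round(dof_mm(500, 8, 30_000)))  # 500mm f/8 at 30 m
print(round(dof_mm(500, 8, 45_000)))  # same lens at 45 m → noticeably more DoF
```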

The DX format does NOT change DoF; it’s the pixel pitch/CoC that alters the total DoF in the final image.  In other words it’s total megapixels that alter DoF, and that applies evenly across FX and DX.

Fallacy 3.  An FX format sensor sees more light, or lets more light in, giving me more exposure because it’s a bigger ‘eye’ on the scene.

Oh no it doesn’t!

Now this crap really annoys the hell out of me.

Exposure has nothing to do with sensor size WHATSOEVER.  The intensity of light falling onto the focal plane is THE SAME, irrespective of sensor size.  Exposure is a function of Intensity x Time, and so for the same intensity (aperture) and time (shutter speed) the resulting exposure will be the SAME.  Total exposure is per unit area, NOT volume.

It’s the buckets full of rain water again:

FX versus DX

The level of water in each bucket is the same, and represents total exposure.  There is no difference in exposure between sensor sizes.

There is a huge difference in volume, but your sensor does not work on total volume – it works per unit area.  Each and every square millimeter, or square micron, of the focal plane sees the same exposure from the image projected onto it by the lens, irrespective of the dimensions of the sensor.

The smallest unit area of the sensor is a photosite, and each photosite receives the same said exposure value, no matter how big the sensor it is embedded in.
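The bucket analogy boils down to very simple arithmetic – here’s a toy model with made-up intensity units, purely to show that exposure per unit area is identical while only total ‘volume’ differs:

```python
# Toy model (made-up units): exposure is Intensity x Time PER UNIT AREA,
# so two sensors of different sizes under the same aperture and shutter
# collect the same exposure per square millimetre - only volume differs.
def exposure_per_mm2(intensity, seconds):
    """The 'water level' in the bucket - identical for any sensor size."""
    return intensity * seconds

def total_light(intensity, seconds, width_mm, height_mm):
    """The 'volume of water' - scales with sensor area."""
    return intensity * seconds * width_mm * height_mm

fx = total_light(1.0, 1 / 100, 36, 24)    # the bigger bucket
dx = total_light(1.0, 1 / 100, 24, 16)    # the smaller bucket
print(exposure_per_mm2(1.0, 1 / 100))     # same 'water level' for both
print(round(fx / dx, 2))                  # 2.25x the VOLUME, not the exposure
```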

HOWEVER, it is how those individual photosites COPE with that exposure that makes the difference. And that leads us neatly on to the next fallacy.

Fallacy 4.  FX format sensors have better image quality because they are bigger.

Oh no they don’t – well, not because they are just bigger !

It’s all to do with pixel pitch, and pixel pitch governs VOLUME.

FX versus DX

FX format sensors usually give a better image because their photosites have a larger diameter, or pitch. You should read HERE  and HERE for more detail.

Larger photosites don’t really ‘see’ more light during an exposure than small ones, but because they are larger, each one has a better potential signal to noise ratio.  This can, in turn, allow for greater subtle variation in recorded light values amongst other things, such as low light response.  Think of a photosite as an eyeball, then think of all the animals that mess around in the dark – they all have big eyes!

That’s not the most technological statement I’ve ever made, but it’s fact, and makes for a good analogy at this point.
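The big-eyeball point is really just shot-noise statistics – generic physics, with illustrative photon counts rather than any particular sensor’s figures:

```python
# Hedged sketch of photon shot noise (generic physics, illustrative counts):
# shot noise is sqrt(signal), so a photosite with 4x the area collects 4x
# the photons under the SAME exposure and doubles its signal-to-noise ratio.
from math import sqrt

def shot_noise_snr(photons):
    """SNR limited by photon shot noise: signal / sqrt(signal)."""
    return photons / sqrt(photons)

small_site = 1_000              # photons caught by a small photosite
big_site = 4 * small_site       # 2x the pitch → 4x the area → 4x the photons
print(round(shot_noise_snr(small_site), 1))  # ≈ 31.6
print(round(shot_noise_snr(big_site), 1))    # ≈ 63.2 - twice the SNR
```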

Everyone wants a camera sensor that sees in the dark, generates zero noise at ISO 1 Million, has zero diffraction at f22, and has twice the resolution of a £35K medium format back.

Well kids, I hate to break it to you, but such a beast does not exist, and nor will it for many a year to come.

The whole FX versus DX format  ‘thing’ is really a meaningless argument, and the DX format has no advantage over the FX format apart from less weight and lower price (perhaps).

Yes, if we shoot a DX format camera using an FX lens we get the ‘illusion’ of a magnified subject – but that’s all it is – an illusion.

Yes, if we shoot the same shot on a 20Mp FX and crop it to look like the shot from a 20Mp DX, then the subject in the DX shot will have twice as many pixels in it, because of the higher pixel density – but at what cost?

Cramming more mega pixels into either a 36mm x 24mm or 24mm x 16mm area results in one thing only – smaller photosites.  Smaller photosites come with one single benefit – greater detail resolution.  Every other attribute that comes with smaller photosites is a negative one:

  • Greater susceptibility to subject motion blur – the bane of landscape and astro photographers.
  • Greater susceptibility to diffraction due to lower CoC.
  • Lower CoC also reduces DoF.
  • Lower signal to noise ratio and poorer high ISO performance.
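Pixel pitch itself is trivial to work out from published specs. A quick sketch – the sensor widths and pixel counts below are approximate published figures, so treat the results as ballpark – and notice the FX D850 and DX D500 come out nearly identical:

```python
# Hedged arithmetic (approximate published specs): pixel pitch is simply
# sensor width divided by horizontal pixel count.
def pixel_pitch_um(sensor_width_mm, horizontal_px):
    """Photosite pitch in microns."""
    return sensor_width_mm * 1000 / horizontal_px

print(round(pixel_pitch_um(35.9, 8256), 2))  # D850 (FX, ~45.7Mp) → ~4.35 µm
print(round(pixel_pitch_um(23.5, 5568), 2))  # D500 (DX, ~20.9Mp) → ~4.22 µm
```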

Note: Quite timely this! With the new leaked info about the D850, we see it’s supposed to have a BSI sensor.  This makes it impossible to make a comparison between it and the D500, even though the photosites are pretty much the same size/pitch.  Any comparison is made even more impossible with the different micro-lens tech sported by the D850.  Also, the functionality of the ADC/SNR firmware is bound to be different from the D500 too.

Variations in: AA filter type/properties and micro lens design, wiring substrate thickness, AF system algorithms and performance, ADC/SNR and other things, all go towards making FX versus DX comparisons difficult, because we use our final output images to draw our conclusions; and they are affected by all of the above.

But facts are facts – DX does not generate either greater magnification or greater/less depth of field than FX when used with identical FX lenses at the same distance and aperture.

Sensor format affects nothing other than FoV; everything else is purely down to pixel pitch.


Monitor Calibration Update

Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit on the critical list.

What have you gone for Andy?  Well if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favouring the Eizo CS2420, on the grounds of cost, both in terms of monitor and calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.

monitor calibration update

But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common type of back light, but they are now less common, being replaced by LED for added colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, and the size of the monitor’s CLUT (colour look-up table) can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cdm² or candelas per square meter

Gamma: 2.2

monitor calibration update

The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen.

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:

monitor calibration update

The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:

monitor calibration update

Now before anyone gets their knickers in a knot over the brightness value discrepancy there are a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cdm² is less than 1/4 stop in camera Ev terms – so not a lot.
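Point 2 is a ten-second check – plain Ev arithmetic, nothing more:

```python
# Quick check of the brightness discrepancy: the Ev gap between two
# luminance values is just log2 of their ratio.
from math import log2

def ev_difference(cd_a, cd_b):
    """Ev (stops) between two luminance values in cd/m²."""
    return log2(cd_b / cd_a)

print(round(ev_difference(100, 120), 2))  # ≈ 0.26 Ev - about 1/4 stop
```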

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to. (yes- warmer – don’t let that temp slider in Lightroom cloud your thinking!).

monitor calibration update

After all, 1000K is a noticeable variation – unlike the 20cdm² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:

monitor calibration update

The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space – click the image to view larger in a separate window.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP ?

Well, cheaper…..

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

More ISO Settings Misinformation


This WAS going to be a post about exposure…!

But, this morning I was on the Facebook page of a friend where I came across a link he’d shared to this page which makes a feature of this diagram:

exposure

Please Note: I’m “hot linking” this image so’s not to be accused of theft!

This style of schematic for the Exposure Triangle is years old and so is nothing new.

When using FILM the ISO value IS a measure of sensitivity to light – that of the film, in other words its SPEED.  Higher ISO film is more sensitive to light than lower ISO film, and the increased sensitivity brings about larger ‘grain’ in the image.

When we talk ‘digital photography’ however, the ISO value HAS NOTHING TO DO WITH SENSITIVITY TO LIGHT – not of anything inside your camera, including the damn sensor.

ISO in digital cameras is APPLIED GAIN. Applied after the exposure has been made… after the fact… after Elvis has left the freaking building!

Your sensor’s sensitivity to light is FIXED, dictated by the size of the photosites that make up the sensor – that is, by the sensor’s pixel pitch.
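To make the point concrete, here’s a minimal Python sketch of the idea – the numbers are made up for illustration and imply no real camera: the electrons collected during the exposure depend only on the light and the sensor’s fixed efficiency, while the ISO setting is merely a multiplier applied to that signal afterwards.

```python
# Illustrative sketch: digital ISO as post-capture gain.
# All figures are invented for demonstration - no real camera is modelled.

def capture(photons_per_photosite: float, quantum_efficiency: float = 0.5) -> float:
    """The exposure itself: electrons collected depend only on the light
    reaching the photosite and the sensor's FIXED quantum efficiency.
    The ISO dial plays no part in this step."""
    return photons_per_photosite * quantum_efficiency

def apply_iso_gain(electrons: float, iso: int, base_iso: int = 100) -> float:
    """ISO is applied AFTER the exposure: a simple gain multiplier
    on the signal that has already been collected."""
    gain = iso / base_iso
    return electrons * gain

signal = capture(photons_per_photosite=1000)   # fixed by the light and the sensor
print(apply_iso_gain(signal, iso=100))   # 500.0
print(apply_iso_gain(signal, iso=800))   # 4000.0 - same light, 8x the gain
```

Note that raising the ISO multiplies the recorded values – and the noise already baked into the signal – but does not capture a single extra photon. That’s why ISO is gain, not sensitivity.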

People who persist in leading you guys into thinking that ISO controls sensor sensitivity should be shot, or better still strapped over the muzzle of an artillery piece……..

The article then goes on to advise the following pile of horse crap:

Recommended ISO settings:

  • ISO 100 or 200 for sunny and bright daylight 
  • ISO 400 for cloudy days, or indoors 
  • ISO 800 for indoors (without a flash) 
  • ISO 1600+ for very low light situations 

WTF??? What year are we in – 2007??

And this pile of new 2017 junk is on a website dedicated to a certain camera manufacturer whose cameras have, for ages, produced superb images at ISO settings way higher than the parameters stated above.

Take this shot from a Canon 1DX Mk1 – old tech/off-sensor ADCs etc:

Canon 1DX Mark 1 ISO 10,000 1/8000th @ f7.1 – click for the full size image.

ISO settings are at the bottom of the pile when it comes to good action photography – the overriding priorities at all times are SHUTTER SPEED and AF performance.

I don’t care about ‘ISO noise’ anywhere near as much as I care about focus and freezing the action, and neither should you guys.

What have the above and below shots got in common – apart from the wildlife category?

More ISO Settings Misinformation

1/8000th shutter speed and an aperture of 7.1 – aperture for DoF and shutter speed to freeze the action – stuff the ‘noise’.
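To put numbers on that trade-off, here’s a quick sketch of the standard exposure-value arithmetic (my own illustration, not anything from the cameras mentioned) – it shows why a 1/8000th shutter at f/7.1 forces a four-figure ISO even in decent light:

```python
import math

def ev100(f_number: float, shutter_s: float, iso: int) -> float:
    """Exposure value referenced to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100)
    where N is the f-number and t the shutter time in seconds."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# The settings from the 1DX shot above:
print(round(ev100(7.1, 1/8000, 10_000), 1))  # → 12.0
```

An EV100 of about 12 is roughly bright-overcast light – so freezing action at 1/8000th with workable depth of field demands ISO 10,000 territory even on a reasonable day. Drop the ISO to the ‘recommended’ 400 and you’d need to give up either the shutter speed or the DoF.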

And speaking of ‘noise’ – there isn’t anywhere near enough to screw the shot up for stock sale even at full size, and I’ll tell you again, noise hardly prints at all!

Here’s another ‘old tech’ Canon 1DX Mk1 shot:

More ISO Settings Misinformation

I don’t really want to wheel the same shots out over and over, but don’t forget the Canon 5D Mk4 Great Tit at ISO 10,000 or the 1DX Mk2 Musk Ox at ISO 16,000 either!

Don’t get me wrong, when I want maximum Dynamic Range I shoot at base ISO, but otherwise you’ll never find me shooting at any fixed ISO – except when shooting astro landscapes.  Everything else is Auto ISO.

So a fan website, in 2017, is basically telling you not to use the ISO speeds that I use all the damn time – and they are justifying that with bad information.

Please people, 90%-plus of what you see on the web is total garbage – don’t take it as gospel truth until you check with someone who actually knows what they are talking about.

Do I know what I’m talking about?  Well, only you can judge that one.  But everything I do tell you can be justified with full resolution images – not meaningless little jpegs on a website.


Anyway, that’s it – rant over!

As ever, if you like the info in this post hit the subscribe button. Hop over to my YouTube channel and subscribe there too and if you are feeling generous then a couple of bucks donation via PayPal to tuition@wildlifeinpixels.net would be gratefully appreciated!

Thanks Folks!