FX vs DX

It amazes me that people still don’t understand the relationship between FX and DX format sensors.

Millions of people across the planet still think that when they put a DX body on an FX lens and turn the camera on, something magic happens and the lens somehow becomes a different beast.

NO…it doesn’t!

There is so much crap out there on the web, resulting in the blind being led by the stupid – and that is a hardcore fact.  Some of the ‘stuff’ I get sent links to on the likes of ‘diaper review’ (to coin Ken W.’s name for it) and others, leaves me totally aghast at the number of fallacies that are being promoted and perpetuated within the content of these high-traffic websites.

FFS – this has GOT to STOP.

Fallacy 1.  Using a DX crop sensor gives me more magnification.

Oh no it doesn’t!

If we arm an FX and a DX body with identical lenses, let’s say 500mm f4’s, and go and take the same picture, at the same time and from the same subject distance with both setups, we get the following images:


FX versus DX: FX + 500mm f4 image – 36mm x 24mm frame area FoV


FX versus DX: DX + 500mm f4 image – 24mm x 16mm frame area FoV


FX versus DX: With both cameras at the same distance from the subject, the Field of View of the DX body+500mm f4 combo is SMALLER – but the subject is EXACTLY the SAME SIZE.

Let’s overlay the two images:


FX versus DX: The DX field of view (FoV) is indicated by the black line. HOWEVER, this line only denotes the FoV area. It should NOT be taken as indicative of pixel dimensions.

The subject APPEARS larger in the DX frame because the frame FoV is SMALLER than that of the FX frame.


But I will say it again – the subject is THE SAME DAMN SIZE.  Any FX lens projects an image onto the focal plane that is THE SAME SIZE irrespective of whether the sensor is FX or DX – end of story.

Note: If such a thing existed, a 333mm prime on a DX crop body would give us the same COMPOSITION, at the same subject distance, as our 500mm prime on the FX body.  But at the same aperture and distance, this fictitious 333mm lens would give us MORE DoF due to it being a shorter focal length.

Fallacy 2.  Using a DX crop sensor gives me more Depth of Field for any given aperture.

The other common variant of this fallacy is:

Using a DX crop sensor gives me less Depth of Field for any given aperture.

Oh no it doesn’t – not in either case!

Understand this people – depth of field is, as we all know, governed by the aperture diaphragm – in other words the f number.  Now everyone understands this, surely to God.

But here’s the thing – where’s the damn aperture diaphragm?  Inside the LENS – not the camera!

Depth of field is dependent on REAL or TRUE focal length, aperture and subject distance, so our two identical 500mm f4 lenses at, say, 30 meters subject distance and f8 are going to yield the same DoF.  That’s irrespective of the physical dimensions of the sensor – be they 36mm x 24mm, or 24mm x 16mm.

But, in order for the FX setup to obtain the same COMPOSITION as that of the DX, the FX setup will need to be CLOSER to the subject – and so using the same f number/aperture value will yield an image with LESS DoF than that of the DX, because DoF decreases with decreased distance, for any given f number.

To obtain the same COMPOSITION with the DX as that of the FX, then said DX camera would need to move further away from the subject.  Therefore the same aperture value would yield MORE DoF, because DoF increases with increased distance, for any given f number.

The DX format does NOT change DoF – it’s the pixel pitch/CoC that alters the total DoF in the final image.  In other words it’s total megapixels that alters DoF, and that applies evenly across FX and DX.
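If you want to sanity-check all of this with numbers, here’s a quick sketch using the standard hyperfocal-distance DoF approximation – a back-of-envelope model of my own, not gospel:

```python
# Depth of field via the standard hyperfocal approximation.
# All distances in millimetres; valid while the subject is closer
# than the hyperfocal distance.

def dof(focal_mm, f_number, distance_mm, coc_mm):
    """Return (near limit, far limit, total DoF) in millimetres."""
    h = focal_mm**2 / (f_number * coc_mm) + focal_mm   # hyperfocal distance
    near = h * distance_mm / (h + (distance_mm - focal_mm))
    far = h * distance_mm / (h - (distance_mm - focal_mm))
    return near, far, far - near

# Two identical 500mm lenses at f8 and 30 metres, judged with the SAME
# CoC - the sensor behind the lens changes nothing:
print(dof(500, 8, 30_000, 0.03))   # ~29.2m to ~30.9m, ~1.7m total
# Only when the CoC criterion changes (the stricter value conventionally
# used for DX-sized frames) do the DoF numbers move:
print(dof(500, 8, 30_000, 0.02))   # ~29.4m to ~30.6m, ~1.1m total
```

Same lens, same distance, same f number – the optics don’t care what’s behind them; it’s only the CoC you judge the final image by that moves the numbers, which is exactly the pixel pitch/CoC point made above.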

Fallacy 3.  An FX format sensor sees more light, or lets more light in, giving me more exposure because it’s a bigger ‘eye’ on the scene.

Oh no it doesn’t!

Now this crap really annoys the hell out of me.

Exposure has nothing to do with sensor size WHATSOEVER.  The intensity of light falling onto the focal plane is THE SAME, irrespective of sensor size.  Exposure is a function of Intensity x Time, and so for the same intensity (aperture) and time (shutter speed) the resulting exposure will be the SAME.  Total exposure is per unit area, NOT volume.

It’s the buckets full of rain water again:


The level of water in each bucket is the same, and represents total exposure.  There is no difference in exposure between sensor sizes.

There is a huge difference in volume, but your sensor does not work on total volume – it works per unit area.  Each and every square millimeter, or square micron, of the focal plane sees the same exposure from the image projected into it by the lens, irrespective of the dimensions of the sensor.
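If the buckets don’t convince you, run the numbers yourself – a trivial sketch of my own, using a made-up light intensity figure:

```python
# Exposure = Intensity x Time, per unit area - sensor size never appears.
intensity = 1000       # photons per square micron per second (made-up figure)
time = 1 / 250         # shutter speed in seconds

exposure_per_um2 = intensity * time   # identical for ANY sensor behind the lens

fx_area_um2 = 36_000 * 24_000         # 36mm x 24mm frame in microns
dx_area_um2 = 24_000 * 16_000         # 24mm x 16mm frame in microns

print(exposure_per_um2)                 # the same 'water level' in both buckets
print(exposure_per_um2 * fx_area_um2)   # total 'volume' - bigger bucket...
print(exposure_per_um2 * dx_area_um2)   # ...smaller bucket - irrelevant!
```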

The smallest unit area of the sensor is a photosite. And each photosite receives the same said exposure value, no matter how big the sensor it is embedded in.

HOWEVER, it is how those individual photosites COPE with that exposure that makes the difference. And that leads us neatly on to the next fallacy.

Fallacy 4.  FX format sensors have better image quality because they are bigger.

Oh no they don’t – well, not because they are just bigger !

It’s all to do with pixel pitch, and pixel pitch governs VOLUME.


FX format sensors usually give a better image because their photosites have a larger diameter, or pitch. You should read HERE  and HERE for more detail.

Larger photosites don’t really ‘see’ more light during an exposure than small ones, but because they are larger, each one has a better potential signal to noise ratio.  This can, in turn, allow for greater subtle variation in recorded light values, amongst other things such as low light response.  Think of a photosite as an eyeball, then think of all the animals that mess around in the dark – they all have big eyes!

That’s not the most technological statement I’ve ever made, but it’s fact, and makes for a good analogy at this point.
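But if you want the ‘big eyeball’ point in numbers: where shot noise dominates, the signal to noise ratio is the square root of the photon count, and the photon count scales with photosite area.  A rough sketch – my own idealisation, ignoring read noise, quantum efficiency and all the other real-world factors mentioned further down:

```python
import math

def shot_noise_snr(pitch_um, photons_per_um2):
    """Idealised shot-noise-limited SNR for a square photosite."""
    photons = photons_per_um2 * pitch_um**2   # photon count scales with area
    return math.sqrt(photons)                 # SNR = N / sqrt(N) = sqrt(N)

for pitch in (4.0, 6.0, 8.0):                 # typical small-to-large pitches
    print(f"{pitch} micron pitch: SNR ~ {shot_noise_snr(pitch, 100):.0f}")
# 4um ~ 40, 6um ~ 60, 8um ~ 80 - bigger photosites, cleaner signal.
```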

Everyone wants a camera sensor that sees in the dark, generates zero noise at ISO 1 Million, has zero diffraction at f22, and has twice the resolution of a £35K medium format back.

Well kids, I hate to break it to you, but such a beast does not exist, and nor will it for many a year to come.

The whole FX versus DX format  ‘thing’ is really a meaningless argument, and the DX format has no advantage over the FX format apart from less weight and lower price (perhaps).

Yes, if we shoot a DX format camera using an FX lens we get the ‘illusion’ of a magnified subject – but that’s all it is – an illusion.

Yes, if we shoot the same shot on a 20Mp FX and crop it to look like the shot from a 20Mp DX, then the subject in the DX shot will have over twice as many pixels in it (2.25x, to be exact – the crop factor squared), because of the higher pixel density – but at what cost?

Cramming more mega pixels into either a 36mm x 24mm or 24mm x 16mm area results in one thing only – smaller photosites.  Smaller photosites come with one single benefit – greater detail resolution.  Every other attribute that comes with smaller photosites is a negative one:

  • Greater susceptibility to subject motion blur – the bane of landscape and astro photographers.
  • Greater susceptibility to diffraction due to lower CoC.
  • Lower CoC also reduces DoF.
  • Lower signal to noise ratio and poorer high ISO performance.

Note: Quite timely this! With the newly leaked info about the D850, we see it’s supposed to have a BSI sensor.  This makes a straight comparison between it and the D500 impossible, even though the photosites are pretty much the same size/pitch.  Any comparison is made harder still by the different micro-lens tech sported by the D850.  Also, the functionality of the ADC/SNR firmware is bound to be different from the D500’s too.

Variations in: AA filter type/properties and micro lens design, wiring substrate thickness, AF system algorithms and performance, ADC/SNR and other things, all go towards making FX versus DX comparisons difficult, because we use our final output images to draw our conclusions; and they are affected by all of the above.

But facts are facts – DX does not generate either greater magnification or greater/less depth of field than FX when used with identical FX lenses at the same distance and aperture.

Sensor format affects nothing other than FoV – everything else is purely down to pixel pitch.

Monitor Calibration Update

Okay, so I no longer NEED a new monitor, because I’ve got one – and my wallet is in Leighton Hospital Intensive Care Unit on the critical list…

What have you gone for Andy?  Well if you remember, in my last post I was undecided between 24″ and 27″, Eizo or BenQ.  But I was favouring the Eizo CS2420, on the grounds of cost, both in terms of monitor and calibration tool options.

But I got offered a sweet deal on a factory-fresh Eizo CS270 by John Willis at Calumet – so I got my desire for more screen real-estate fulfilled, while keeping the costs down by not having to buy a new calibrator.


But it still hurt to pay for it!

Monitor Calibration

There are a few things to consider when it comes to monitor calibration, and they are mainly due to the physical attributes of the monitor itself.

In my previous post I did mention one of them – the most important one – the back light type.

CCFL and WCCFL – cold cathode fluorescent lamps, or LED.

CCFL & WCCFL (wide CCFL) used to be the common type of back light, but they are now less common, being replaced by LED for added colour reproduction, improved signal response time and reduced power consumption.  Wide CCFL gave a noticeably greater colour reproduction range and slightly warmer colour temperature than CCFL – and my old monitor was fitted with WCCFL back lighting, hence I used to be able to do my monitor calibration to near 98% of AdobeRGB.

CCFL back lights have one major property – that of being ‘cool’ in colour, and LEDs commonly exhibit a slightly ‘warmer’ colour temperature.

But there’s LEDs – and there’s LEDs, and some are cooler than others, some are of fixed output and others are of a variable output.

The colour temperature of the backlighting gives the monitor a ‘native white point’.

The ‘brightness’ of the backlight is really the only true variable on a standard type of LCD display, and the inter-relationship between backlight brightness and colour temperature, and the size of the monitor’s CLUT (colour look-up table), can have a massive effect on the total number of colours that the monitor can display.

Industry-standard documentation by folk a lot cleverer than me has for years recommended the same calibration target settings as I have alluded to in previous blog posts:

White Point: D65 or 6500K

Brightness: 120 cdm² or candelas per square meter

Gamma: 2.2


The ubiquitous ColorMunki Photo ‘standard monitor calibration’ method setup screen.

This setup for ‘standard monitor calibration’ works extremely well, and has stood me in good stead for more years than I care to add up.

As I mentioned in my previous post, standard monitor calibration refers to a standard method of calibration, which can be thought of as ‘software calibration’, and I have done many print workshops where I have used this method to calibrate Eizo ColorEdge and NEC Spectraviews with great effect.

However, these more specialised colour management monitors have the added bonus of giving you a ‘hardware monitor calibration’ option.

To carry out a hardware monitor calibration on my new CS270 ColorEdge – or indeed any ColorEdge – we need to employ the Eizo ColorNavigator.

The start screen for ColorNavigator shows us some interesting items:


The recommended brightness value is 100 cdm² – not 120.

The recommended white point is D55 not D65.

Thank God the gamma value is the same!

Once the monitor calibration profile has been done we get a result screen of the physical profile:


Now before anyone gets their knickers in a knot over the brightness value discrepancy, there are a couple of things to bear in mind:

  1. This value is always slightly arbitrary and very much dependent on working/viewing conditions.  The working environment should be somewhere between 32 and 64 lux or cdm² ambient – think Bat Cave!  The ratio of ambient to monitor output should always remain at between 32:75/80 and 64:120/140 (ish) – in other words between 1:2 and 1:3 – see earlier post here.
  2. The difference between 100 and 120 cdm² is only about 1/4 of a stop in camera Ev terms (log2(120/100) ≈ 0.26) – so not a lot.

What struck me as odd though was the white point setting of D55 or 5500K – that’s 1000K warmer than I’m used to (yes – warmer – don’t let that temp slider in Lightroom cloud your thinking!).

After all, 1000K is a noticeable variation – unlike the 20cdm² brightness shift.

Here’s the funny thing though; if I ‘software calibrate’ the CS270 using the ColorMunki software with the spectro plugged into the Mac instead of the monitor, I visually get the same result using D65/120cdm² as I do ‘hardware calibrating’ at D55 and 100cdm².

The same that is, until I look at the colour spaces of the two generated ICC profiles:


The coloured section is the ‘software calibration’ colour space, and the wire frame the ‘hardware calibrated’ Eizo custom space – click the image to view larger in a separate window.

The hardware calibration profile is somewhat larger and has a slightly better black point performance – this will allow the viewer to SEE just that little bit more tonality in the deepest of shadows, and those perennially awkward colours that sit in the Blue, Cyan, Green region.

It’s therefore quite obvious that monitor calibration via the hardware/ColorNavigator method on Eizo monitors does buy you that extra bit of visual acuity, so if you own an Eizo ColorEdge then it is the way to go for sure.

Having said that, the differences are small-ish so it’s not really worth getting terrifically evangelical over it.

But if you have the monitor then you should have the calibrator, and if said calibrator is ‘on the list’ of those supported by ColorNavigator then it’s a bit of a JDI – just do it.

You can find the list of supported calibrators here.

Eizo and their ColorNavigator are basically making a very effective ‘mash up’ of the two ISO standards 3664 and 12646 which call for D65 and D50 white points respectively.

Why did I go CHEAP?

Well, cheaper…

Apart from the fact that I don’t like spending money – the stuff is so bloody hard to come by – I didn’t want the top end Eizo in either 27″ or 24″.

With the ‘top end’ ColorEdge monitors you are paying for some things that I, at least, have little or no use for:

  • 3D CLUT – I’m a general sort of image maker who gets a bit ‘creative’ with my processing and printing.  If I was into graphics and accurate repro of Pantone and the like, or I specialised in archival work for the V & A say, then super-accurate colour reproduction would be critical.  The advantage of the 3D CLUT is that it allows a greater variety of SUBTLY different tones and hues to be SEEN and therefore it’s easier to VISUALLY check that they are maintained when shifting an image from one colour space to another – eg softproofing for print.  I’m a wildlife and landscape photographer – I don’t NEED that facility because I don’t work in a world that requires a stringent 100% colour accuracy.
  • Built-in Calibrator – I don’t need one ‘cos I’ve already got one!
  • Built-in Self-Correction Sensor – I don’t need one of those either!

So if your photography work is like mine, then it’s worth hunting out a ‘zero hours’ CS270 if you fancy the extra screen real-estate, and you want to spend less than if buying its replacement – the CS2730.  You won’t notice the extra 5 milliseconds slower response time, and the new CS2730 eats more power – but you do get a built-in carrying handle!


More ISO Settings Misinformation

This WAS going to be a post about exposure…….!

But this morning I was on the Facebook page of a friend, where I came across a link he’d shared to a page which makes a feature of this:


Please Note: I’m “hot linking” this image so’s not to be accused of theft!

This style of schematic for the Exposure Triangle is years old and so is nothing new.

When using FILM the ISO value IS a measure of sensitivity to light – that of the film, in other words its SPEED.  Higher ISO film is more sensitive to light than lower ISO film, and the increased sensitivity brings about larger ‘grain’ in the image.

When we talk ‘digital photography’ however, the ISO value HAS NOTHING TO DO WITH SENSITIVITY TO LIGHT – of anything inside your camera, including the damn sensor.

ISO in digital cameras is APPLIED GAIN. Applied ‘after the exposure has been made’… after the fact… after Elvis has left the freaking building!

Your sensor’s sensitivity to light is FIXED and dictated by the size of the photosites that make up the sensor – that is, the sensor pixel pitch.
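A toy model of my own makes the point – the capture is fixed, and the ‘ISO’ is just a multiplier applied afterwards, amplifying the noise right along with the signal:

```python
import numpy as np

rng = np.random.default_rng(42)

# The sensor's response to a given exposure is FIXED: it collects whatever
# photoelectrons the light delivers - shot noise included.
photoelectrons = rng.poisson(lam=200, size=5).astype(float)

# 'ISO' is gain applied to that fixed signal AFTER capture:
for iso, gain in [(100, 1.0), (400, 4.0), (1600, 16.0)]:
    print(f"ISO {iso}: {photoelectrons * gain}")
```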

People who persist in leading you guys into thinking that ISO controls sensor sensitivity should be shot, or better still strapped over the muzzle of an artillery piece……..

The article then goes on to advise the following pile of horse crap:

Recommended ISO settings:

  • ISO 100 or 200 for sunny and bright daylight 
  • ISO 400 for cloudy days, or indoors
  • ISO 800 for indoors (without a flash) 
  • ISO 1600+ for very low light situations 

WTF??? What year are we in – 2007??

And this pile of new 2017 junk is on a website dedicated to a certain camera manufacturer whose cameras have produced superb images at ISO settings way higher than the parameters stated above for ages.

Take this shot from a Canon 1DX Mk1 – old tech/off-sensor ADCs etc:


Canon 1DX Mark 1 ISO 10,000 1/8000th @ f7.1 – click for the full size image.

ISO settings are at the bottom of the pile when it comes to good action photography – the overriding importance at all times is SHUTTER SPEED and AF performance.

I don’t care about ‘ISO noise’ anywhere near as much as I care about focus and freezing the action, and neither should you guys.

What have the above and below shots got in common – apart from the wildlife category?


Nikon D4 – a meagre ISO 3200 1/8000th @ f7.1 – click for full size image.

1/8000th shutter speed and an aperture of 7.1 – aperture for DoF and shutter speed to freeze the action – stuff the ‘noise’.

And speaking of ‘noise’ – there isn’t anywhere near enough to screw the shot up for stock sale even at full size, and I’ll tell you again, noise hardly prints at all!

Here’s another ‘old tech’ Canon 1DX Mk1 shot:


And here’s where the rubber really meets the road – low light, ISO 4000, 1/200th @ f6.3 – click for full size image.

I don’t really want to wheel the same shots out over and over but don’t forget the Canon 5D Mk4 Great Tit at 10,000ISO or 1DX Mk2 Musk Ox at 16,000ISO either!

Don’t get me wrong, when I want maximum Dynamic Range I shoot at base ISO, but generally you’ll never find me shooting at any other fixed ISO – except when shooting astro landscapes.  Everything else is Auto ISO.

So a fan website, in 2017, is basically telling you not to use the ISO speeds that I use all the damn time – and they are justifying that with bad information.

Please people, 90% plus of what you see on the web is total garbage, please don’t take it as gospel truth until you check with someone who actually knows what they are talking about.

Do I know what I’m talking about, well, only you can judge that one.  But everything I do tell you can be justified with full resolution images – not meaningless little jpegs on a web site.

Anyway, that’s it – rant over!

As ever, if you like the info in this post hit the subscribe button. Hop over to my YouTube channel and subscribe there too and if you are feeling generous then a couple of bucks donation via PayPal to tuition@wildlifeinpixels.net would be gratefully appreciated!

Thanks Folks!

Raw File Compression

Today I’m going to give you my point of view over that most vexatious question – is LOSSLESS raw file compression TRULY lossless?

I’m going to upset one heck of a lot of people here, and my chances of Canon letting me have any new kit to test are going to disappear over the horizon at a great rate of knots, but I feel compelled to post!

What prompts me to commit this act of potential suicide?

It’s this shot from my recent trip to Norway:


Direct from Camera


Processed in Lightroom

I had originally intended to shoot Nikon on this trip using a hire 400mm f2.8, but right at the last minute there was a problem with the lens that couldn’t be sorted out in time, so Calumet supplied me with a 1DX and a 200-400 f4 to basically get me out of a sticky situation.

As you should all know by now, the only problems I have with Canon cameras are their short Dynamic Range, and Canon’s steadfast refusal to allow for uncompressed raw recording.

The less experienced shooter/processor might look at the shot “ex camera” and be disappointed – it looks like crap, with far too much contrast, overly dark shadows and near-blown highlights.

Shot on Nikon the same image would look more in keeping with the processed version IF SHOT using the uncompressed raw option, which is something I always do without fail; and the extra 3/4 stop dynamic range of the D4 would make a world of difference too.

Would the AF have done as good a job – who knows!

The lighting in the shot is epic from a visual PoV, but bad from a camera exposure one. A wider dynamic range and zero raw compression on my Nikon D4 would allow me to have a little more ‘cavalier attitude’ to lighting scenarios like this – usually I’d shoot with +2/3Ev permanently dialled into the camera.  Overall the extra dynamic range would give me less contrast, and I’d have more highlight detail and less need to bump up the shadow areas in post.

In other words processing would be easier, faster and a lot less convoluted.

But I can’t stress enough just how much detrimental difference LOSSLESS raw file compression CAN SOMETIMES make to a shot.

Now there is a lot – and I mean A LOT – of opinionated garbage written all over the internet on various forums etc about lossless raw file compression, and it drives me nuts.  Some say it’s bad, most say it makes no difference – and both camps are WRONG!

Sometimes there is NO visual difference between UNCOMPRESSED and LOSSLESS, and sometimes there IS.  It all depends on the lighting and the nature of the scene/subject colours and how they interact with said lighting.

The main problem with the ‘it makes no difference’ camp is that they never substantiate their claims; and if they are Canon shooters they can’t – because they can’t produce an image with zero raw file compression to compare their standard lossless CR2 files to!

So I’ve come up with a way of illustrating visually the differences between various levels of raw file compression on Nikon using the D800E and Photoshop.

But before we ‘get to it’ let’s firstly refresh your understanding. A camera raw file is basically a gamma 1.0, or LINEAR gamma file:


Linear (top) vs Encoded Gamma

The right hand 50% of the linear gamma gradient represents the brightest whole stop of exposure – that’s one heck of a lot of potential for recording subtle highlight detail in a raw file.

It also represents the area of tonal range that is frequently most affected by any form of raw file compression.
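To see just how massive that brightest stop is, count the code values in a 14 bit raw file – simple arithmetic:

```python
# In a linear-gamma file each stop down halves the signal, so each stop
# occupies half of the remaining code values.
levels = 2**14                    # 16384 levels in a 14 bit raw file
for stop in range(1, 6):
    print(f"stop {stop} below clipping: {levels // 2**stop} code values")
# The brightest stop alone gets 8192 of the 16384 levels - half the file.
```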

Neither Nikon nor Canon will reveal to the world the algorithm-based methods they use for lossless or lossy raw file compression, but it usually works by a process of ‘Bayer Binning’.


If we take a 2×2 block, it contains 2 green, 1 red and 1 blue photosite photon value – if we average the green value and then interpolate new values for red and blue output we will successfully compress the raw file.  But the data will be ‘faux’ data, not real data.
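To be absolutely clear, nobody outside Nikon or Canon knows the real algorithm – but the ‘binning’ idea described above looks something like this toy sketch:

```python
import numpy as np

# A 2x2 RGGB Bayer block of raw photosite values:
#   R  G1
#   G2 B
block = np.array([[1200, 1510],
                  [1490, 640]], dtype=float)

r, g1, g2, b = block[0, 0], block[0, 1], block[1, 0], block[1, 1]

g_avg = (g1 + g2) / 2    # average the two green photosites...
# ...and carry one value per channel forward instead of four raw counts.
print((r, g_avg, b))     # (1200.0, 1500.0, 640.0) - 'faux' data, as noted
```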

The other method we could use is to compress the tonal values in that brightest stop of recorded highlight tone – which is massive don’t forget – but this will result in a ’rounding up or down’ of certain bright tonal values thus potentially reducing some of the more subtle highlight details.

We could also use some variant of the same type of algorithm to ‘rationalise’ shadow detail as well – with pretty much the same result.

In the face of Nikon and Canon’s refusal to divulge their methodologies behind raw file compression, especially lossless, we can only guess what is actually happening.

I read somewhere that with lossless raw file compression the compression algorithms leave a trace instruction about what they have done and where they’ve done it in order that a raw handler programme such as Lightroom can actually ‘undo’ the compression effects – that sounds like a recipe for disaster if you ask me!

Personally I neither know nor do I care – I know that lossless raw file compression CAN be detrimental to images shot under certain conditions, and here’s the proof – after a fashion:

Let’s look at the following files:


Image 1: 14 bit UNCOMPRESSED


Image 2: 14 bit UNCOMPRESSED


Image 3: 14 bit LOSSLESS compression


Image 4: 14 bit LOSSY compression


Image 5: 12 bit UNCOMPRESSED

Yes, there are 2 files which are identical, that is 14 bit uncompressed – and there’s a reason for that which will become apparent in a minute.

First, some basic Photoshop ‘stuff’.  If I open TWO images in Photoshop as separate layers in the same document, and change the blend mode of the top layer to DIFFERENCE I can then see the differences between the two ‘images’.  It’s not a perfect way of proving my point because of the phenomenon of photon flux.
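If you’d rather do the DIFFERENCE comparison outside Photoshop, the equivalent in Python/numpy is trivial – a minimal sketch, assuming you’ve exported the two raws as same-sized 8 bit files with the (hypothetical) names below:

```python
import numpy as np
from PIL import Image

a = np.asarray(Image.open("uncompressed.png"), dtype=np.int16)
b = np.asarray(Image.open("lossless.png"), dtype=np.int16)

diff = np.abs(a - b)                      # Photoshop's DIFFERENCE blend mode
amplified = np.clip(diff * 16, 0, 255)    # crude 'levels' boost to make the
                                          # residual differences visible
Image.fromarray(amplified.astype(np.uint8)).save("difference_x16.png")
```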

Photon Flux Andy??? WTF is that?

Well, here’s where shooting two identical 14 bit uncompressed files comes in – they themselves are NOT identical!:


The result of overlaying the two identical uncompressed raw files (above left) – it looks almost black all over indicating that the two shots are indeed pretty much the same in every pixel.  But if I amplify the image with a levels layer (above right) you can see the differences more clearly.

So there you have it – Photon Flux! The difference between two 14 bit UNCOMPRESSED raw files shot at the same time, same ISO, shutter speed AND with a FULLY MANUAL APERTURE.  The only difference between the two shots is the ratio and number of photons striking the subject and being reflected into the lens.

Firstly 14 Bit UNCOMPRESSED compared to 14 bit LOSSLESS (the important one!):


14 bit UNCOMPRESSED vs 14 bit LOSSLESS

Please remember, the above ‘difference’ image contains photon flux variations too, but if you look carefully you will see greater differences than in the ‘flux only’ image above.


The two images above illustrate the differences between 14 bit uncompressed and 14 bit LOSSY compression (left) and 14 bit UNCOMPRESSED and 12 bit UNCOMPRESSED (right) just for good measure!

In Conclusion

As I indicated earlier in the post, this is not a definitive testing method, sequential shots will always contain a photon flux variation that ‘pollutes’ the ‘difference’ image.

I purposefully chose this white subject with textured aluminium fittings and a blackish LED screen because the majority of sensor response will lie in that brightest gamma 1.0 stop.

The exposure was a constant +1Ev, 1/30th @ f18 and ISO 100 – nearly maximum dynamic range for the D800E, and f18 was set manually to avoid any aperture flicker caused by auto stop-down.

You can see from all the ‘difference’ images that the part of the subject that seems to suffer the most is the aluminium part, not the white areas.  The aluminium has a stippled texture causing a myriad of small specular highlights – brighter than the white parts of the subject.

What would 14 bit uncompressed minus 14 bit lossless minus photon flux look like?  In a perfect world I’d be able to show you accurately, but we don’t live in one of those so I can’t!

We can try it using the flux shot from earlier:


But this is wildly inaccurate as the flux component is not pertinent to the photons at the actual time the lossless compression shot was taken.  But the fact that you CAN see an image does HINT that there is a real difference between UNCOMPRESSED and LOSSLESS compression – in certain circumstances at least.

If you have never used a camera that offers the zero raw file compression option then basically what you’ve never had you never miss.  But as a Nikon shooter I shoot uncompressed all the time – 90% of the time I don’t need to, but it just saves me having to remember something when I do need the option.


Would this 1DX shot be served any better through UNCOMPRESSED raw recording?  Most likely NO – why?  Low Dynamic Range caused in the main by flat low contrast lighting means no deep dark shadows and nothing approaching a highlight.

I don’t see it as a costly option in terms of buffer capacity or on-board storage, and when it comes to processing I would much rather have a surfeit of sensor data rather than a lack of it – no matter how small that deficit might be.

Lossless raw file compression has NO positive effect on your images, and its sole purpose in life is to allow you to fit more shots on the storage media – that’s it, pure and simple.  If you have the option to shoot uncompressed then do so, and buy a bigger card!

What pisses me off about Canon is that it would only take, I’m sure, a firmware upgrade to give the 1DX et al the ability to record with zero raw file compression – and, whether needed or not, it would stop miserable grumpy gits like me banging on about it!


Colour in Photoshop

Understanding colour inside Photoshop is riddled with confusion for the majority of users.  This is due to the perpetual misuse of certain words and terms.  Adobe themselves use incorrect terminology – which doesn’t help!

The aim of this post is to understand the attributes or properties of colour inside the Photoshop environment – “…is that right Andy?”  “Yeh, it is!”

So, the first colour attribute we’re going to look at is HUE:


A colour wheel showing point-sampled HUES (colours) at 30 degree increments.

HUE can be construed as meaning ‘colour’ – or color for the benefit of our American friends “come on guys, learn to spell – you’ve had long enough!”

The colour wheel begins at 0 degrees with pure Red (255,0,0 in 8bit RGB terms), and moves clockwise through all the HUES/colours to end up back at pure Red – simple!


Above, we can see samples of primary red and secondary yellow together with their respective HUE degree values which are Red 0 degrees and Yellow 60 degrees.  You can also see that the colour channel values for Red are 255,0,0 and Yellow 255,255,0.  This shows that Yellow is a mix of Red light and Green light in equal proportions.

I told you it was easy!
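Don’t take my word for it – Python’s standard colorsys module will confirm those hue angles (colorsys reports hue on a 0 to 1 scale, so multiply by 360 for degrees):

```python
import colorsys

for name, (r, g, b) in [("Red", (255, 0, 0)), ("Yellow", (255, 255, 0))]:
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    print(f"{name}: hue {h * 360:.0f} degrees, saturation {s:.0%}")
# Red: hue 0 degrees, saturation 100%
# Yellow: hue 60 degrees, saturation 100%
```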

Inside Photoshop the colour wheel starts and ends at 180 degrees CYAN, and is flattened out into a horizontal bar as in the Hue/Saturation adjustment:


Overall, there is no ambiguity over the meaning or terminology HUE; it is what it is, and it is usually taken as meaning ‘what colour’ something is.

The same can be said for the next attribute of colour – SATURATION.

Or can it?

How do we define saturation?


Two different SATURATION values (100% & 50%) of the same HUE.

Above we can see two different saturation values for the same HUE (0 degrees Hue, 100% and 50% Saturation). I suppose the burning question is, do we have two different ‘colours’?

As photographers we mainly work with additive colour; that is we add Red, Green and Blue coloured light to black in order to attain white.  But in the world of painting for instance, subtractive colour is used; pigments are overlaid on white (thus subtracting white) to make black.  Printing uses the same model – CMY+K inks overlaid on ‘white’ paper …..mmm see here

If we take a particular ‘colour’ of paint and we mix it with BLACK we have a different SHADE of the same colour.  If we instead add WHITE we end up with what’s called a TINT of the same colour; and if add grey to the original paint we arrive at a different TONE of the same colour.

Let’s look at that 50% saturated Red again:


Hue Red 0 degrees with 50% saturation.

We’ve basically added 128 Green and 128 Blue to 255 Red. Have we kept the same HUE – yes we have.

Is it the same colour? Be honest – you don’t know do you!

The answer is NO – they are two different ‘colours’, and the hexadecimal codes prove it – those are the hex values ff0000 and ff8080.  But in our world of additive colour we should only think of the word ‘colour’ as a generalisation, because it is somewhat ambiguous and imprecise.

But we can quantify the SATURATION of a HUE – so we’re all good up to this point!

So we beaver away in Photoshop in the additive RGB colour mode, but what you might not realise is that we are working in a colour model within that mode – and quite frankly this is where the whole shebang turns to pooh for a lot of folk.

There are basically two colour models for, dare I use the word, ‘normal’ photography work: HSB (also known as HSV) and HSL, and both are cylindrical co-ordinate colour models:


HSB (HSV) and HSL colour models for additive RGB.

Without knowing one single thing about either, you can tell they are different just by looking at them.

All Photoshop default colour picker referencing is HSB – that is Hue, Saturation & Brightness – with equivalent RGB, Lab and CMYK values, plus the hexadecimal code:


But in the Hue/Sat adjustment for example, we see the adjustments are HSL:


The HSL model references colour in terms of Hue, Saturation & Lightness – not flaming LUMINOSITY as so many people wrongly think!

And it’s that word luminosity that’s the single largest purveyor of confusion and misunderstanding – luminosity masking, luminosity blending mode are both terms that I and oh so many others use – and we’re all wrong.

I have an excuse – I know everything, but I have to use the wrong terminology otherwise no one else knows what I’m talking about!!!!!!!!!  Plausible story and I’m sticking to it your honour………

Anyway, within Photoshop, HSB is used to select colours, and HSL is used to change them.

The reason for this is somewhat obvious when you take a close look at the two models again:


HSB (HSV) and HSL colour models for additive RGB. (V stands for Value = B in HSB).

In the HSB model look where the “whiteness” information is; it’s radial, and bound up in the ‘S’ saturation co-ordinate.  But the “blackness” information is vertical, on the ‘B’ brightness co-ordinate.  This is great when we want to pick/select/reference a colour.

But surely it would be more beneficial for the “whiteness” and “blackness” information to be attached to the same axis or dimension, especially when we need to increase or decrease that “white” or “black” co-ordinate value in processing.

So within the two models the ‘H’ hue co-ordinates are pretty much the same, but the ‘S’ saturation co-ordinates are different.
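You can prove it with that 50% saturated red from earlier – one and the same ff8080 pixel reports completely different saturation values depending on which model you ask (note that colorsys returns HLS in the order Hue, Lightness, Saturation):

```python
import colorsys

r, g, b = 0xff / 255, 0x80 / 255, 0x80 / 255   # ff8080

h, s, v = colorsys.rgb_to_hsv(r, g, b)
print(f"HSB: hue {h * 360:.0f}, S {s:.0%}, B {v:.0%}")   # S ~50%, B 100%

h, l, s = colorsys.rgb_to_hls(r, g, b)
print(f"HSL: hue {h * 360:.0f}, S {s:.0%}, L {l:.0%}")   # S 100%, L ~75%
```

Same pixel, same hue – but ‘how saturated’ it is depends entirely on which model is asking, which is precisely why picking in HSB and adjusting in HSL both make sense.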

So this leaves us with that most perennial of questions – what is the difference between Brightness and Lightness?

Firstly, there is a massive visual difference between the Brightness and Lightness  information contained within an image as you will see now:


The ‘Brightness’ channel of HSB.


The ‘L’ channel of HSL

Straight off the bat you can see that there is far more “whites detail” information contained in the ‘L’ lightness map of the image than in the brightness map.  Couple that with the fact that Lightness controls both black and white values for every pixel in your image – and you should now be able to comprehend the difference between Lightness and Brightness, and so be better at understanding colour inside Photoshop.

We’ll always use the highly bastardised terms like luminosity, luminance etc – but please be aware that you may be using them to describe something to which they DO NOT APPLY.

Luminosity is a measure of the magnitude of a light source – typically stars; but could loosely be applied to the lumens output power of any light source.  Luminance is a measure of the reflected light from a subject being illuminated by a light source; and varies with distance from said light source – a la the inverse square law etc.

Either way, neither of them has got anything to do with the pixel values of an image inside Photoshop!

But LIGHTNESS certainly does.

Night Sky Photography – A Brief Introduction


I really get a massive buzz from photographing the night sky – PROPERLY.

By properly I mean using your equipment to the best of its ability, and using correct techniques in terms of both ‘shooting’ and post processing.

The majority of images within the vast plethora of night sky images on Google etc, and methods described, are to be frank PANTS!

Those 800 pixel long-edge jpegs hide a multitude of shooting and processing sins – such as HUGE amounts of sensor noise and the biggest sin of all – elongated stars.

Top quality full resolution imagery of the night sky demands pin-prick stars, not trails that look like blown out sausages – unless of course, you are wanting them for visual effect.

Pin sharp stars require extremely precise MANUAL FOCUS in conjunction with a shutter speed that is short enough to arrest the perceived movement of the night sky across the camera’s field of view.

They also demand that the lens is ‘shot’ pretty much wide open in terms of aperture – this allows the sensor to ‘see and gather’ as many photons of light as possible from each point-source (star) in the night sky.

So we are in the situation where we have to use manual focus and exposure with f2.8 as an approximate working aperture – and high ISO values, because of the demand for a relatively fast shutter speed.

And when it comes to our shutter speed, the much-vaunted ‘500 Rule’ needs to be consigned to the waste bin – it’s just not a good enough standard to work to, especially considering modern high megapixel count sensors such as Nikon’s D800E/D810/D810A and Canon’s 5DS.

Leaving the shutter open for just 10 seconds using a 14mm lens will elongate stars EVER SO SLIGHTLY – so the ‘500 Rule’ speed of 500/14 = 35.71 seconds is just going to make a total hash of things.
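Here’s the back-of-envelope arithmetic I work from – a model of my own, worst case for stars near the celestial equator, using the D800E’s roughly 4.9 micron pixel pitch:

```python
import math

SIDEREAL_RATE = 2 * math.pi / 86164   # radians per second of sky rotation

def star_trail_seconds(focal_mm, pixel_pitch_um, max_drift_px):
    """Seconds before a star (worst case, at the celestial equator)
    drifts across max_drift_px photosites."""
    drift_mm_per_s = focal_mm * SIDEREAL_RATE
    return (max_drift_px * pixel_pitch_um / 1000) / drift_mm_per_s

print(500 / 14)                         # the '500 Rule': ~35.7s - far too long
print(star_trail_seconds(14, 4.9, 2))   # ~9.6s - the '10 second' mark
print(star_trail_seconds(14, 4.9, 1))   # ~4.8s - the '5 second' preference
```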

In the shot below – a crop from the image top left – I’ve used a 10 second exposure, but in preference I’ll use 5 seconds if I can get away with it:


Nikon D800E, 14-24mm f2.8 @ 14mm, 10 second exposure, f2.8, ISO 6400
RAW, Unprocessed, Full Resolution Crop

WOW….look at all that noise…well, it’s not going to be there for long folks; and NO, I won’t make it vanish with any Noise Reduction functions or plugins either!


5 consecutive frames put through Starry Landscape Stacker – now we have something we can work with!

Download Starry Landscape Stacker from the App Store:

Huge amounts of ‘noise’ can be eradicated using Median Stacking within Photoshop, but Mac users can circumnavigate the ‘aggro’ of layer alignment and layer masking by using this great ‘app’ Starry Landscape Stacker – which does all the ‘heavy lifting’ for you.  Click the link above to download it from the App Store.  Just ignore any daft iTunes pop-ups and click ‘View in Mac App Store’!
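The core of median stacking itself is genuinely simple – here’s a minimal numpy sketch of the idea, assuming five already-aligned frames exported under the (hypothetical) file names below; the alignment is precisely the ‘heavy lifting’ the app does for you:

```python
import numpy as np
from PIL import Image

# Load 5 consecutive, already-aligned frames of the same sky.
frames = np.stack([np.asarray(Image.open(f"frame_{i}.png"), dtype=np.float32)
                   for i in range(1, 6)])

# Per-pixel median: random sensor noise differs from frame to frame and is
# rejected, while the stars (constant across frames) survive untouched.
stacked = np.median(frames, axis=0)

Image.fromarray(stacked.astype(np.uint8)).save("median_stack.png")
```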

I have a demonstration of Median Stacking on my YouTube channel:

This video is best viewed on YouTube in full screen mode.

In a manner of speaking, the ‘shooting aspect’ of Milky Way/Night Sky/Wide-field Astro is pretty straightforward.  You are working between some very hard constraints, with little margin for error.

  • The Earth’s rotation makes the stars track across our frame – so this dictates our shutter speed for any given focal length of lens – shorter focal length = longer shutter speed.
  • Sensor Megapixel count – more megs = shorter shutter speed.
  • We NEED to shoot with a ‘wide open’ aperture, so our ISO speed takes over as our general exposure control.
  • Focusing – this always seems to be the big ‘sticking point’ for most folk – and despite what you read to the contrary, you can’t reliably use the ‘hyperfocal’ method with wide open apertures – it especially will not work with wide-angle zoom lenses!
  • The Earth’s ‘seasonal tilt’ dictates what we can and can’t see from a particular latitude; and in conjunction with time of day, dictates the direction and orientation of a particular astral object such as the Milky Way.
  • Light pollution can mask even the camera’s ability to record all the stars, and it affects the overall scene luminance level.
  • The position and phase of the moon – a full moon frequently throws far too much light into the entire sky – my advice is to stay at home!
  • A moon in between its last quarter and new moon is frequently diagonally opposite the Milky Way, and can be useful for illuminating your foreground.

And there are quite a few other considerations to take into account, like dew point and relative humidity – and of course, the bloody clouds!

The point I’m trying to make is that these shots take PLANNING.

Using applications and utilities like Stellarium and Photographers Ephemeris in conjunction with Google Earth has always been a great way of planning shots.  But for me, the best planning aid is Photopills – especially because of its augmented reality feature.  This allows you to pre-visualise your shot from your current location, and it will compute the dates and times that the shot is ‘on’.

Download Photopills from the App Store:


But it won’t stop the clouds from rolling in!

Even with the very best planning the weather conditions can ruin the whole thing!

I’m hoping that before the end of the year I’ll have a full training video finished about shooting perfect ‘wide field astro’ images – it’ll cover planning as well as BOTH shooting AND processing.

I will show you how to:

  • Effectively use Google Earth in conjunction with Stellarium and Photopills for forward planning.
  • The easiest way to ensure perfect focus on those stars – every time.
  • How to shoot for improved foreground.
  • When, and when NOT to deploy LONG EXPOSURE noise reduction in camera – black frame shooting.
  • How to process RAW files in Lightroom for correct colour balance.
  • How to properly use both Median Stacking in Photoshop and Starry Landscape Stacker to reduce ISO noise.
  • And much more!

One really useful FREE facility on the net is the Light Pollution Map website – I suggest using the latest 2015 VIIRS overlay and the Bing Map Hybrid mode in order to get a rough idea of your foreground and the background light pollution affecting your chosen location.

Don’t forget – if you shoot vertical (portrait?) with a 14mm lens, the top part of the frame can be slightly behind you!


Monitor Brightness & Room Lighting Levels

I had promised myself I was going to do a video review of my latest purchase – the Lee SW150 Mk2 system and Big and Little Stopper filters I’ve just spent a King’s ransom on for my Nikon 14-24mm and D800E:


PURE SEX – and I’ve bloody well paid for this! My new Lee SW150 MkII filter system for the Nikon 14-24. Just look at those flashy red anodised parts – bound to make me a better photographer!

But I think that’ll have to wait while I address a question that keeps cropping up lately.  What’s the question?

Well, that’s the tricky bit because it comes in many guises. But they all boil down to “what monitor brightness or luminance level should I calibrate to?”

Monitor brightness is as critical as monitor colour when it comes to calibration.  If you look at previous articles on this blog you’ll see that I always quote the same calibration values, those being:

White Point: D65 – that figure takes care of colour.

Gamma: 2.2 – that value covers monitor contrast.

Luminance: 120 cdm2 (candelas per square meter) – that takes care of brightness.

Simple in’it….?!

However, when you’ve been around all this photography nonsense as long as I have you can overlook the possibility that people might not see things as being quite so blindingly obvious as you do.

And one of those ‘omissions on my part’ has been to do with monitor brightness settings COMBINED with working lighting levels in ‘the digital darkroom’.  So I suppose I’d better correct that failing on my part now.

What does a Monitor Profile Do for your image processing?

A correctly calibrated monitor and its .icc profile do a really simple but very mission-critical job.

If we open a new document in Photoshop and fill it with flat 255 white we need to see that it’s white.  If we hold an ND filter in front of our eye then the image won’t look white, it’ll look grey.

If we hold a blue filter in front of our eye the image will not look white – it’ll look blue.

That white image doesn’t exist ‘inside the monitor’ – it’s on our computer!  It only gets displayed on the monitor because of the graphics output device in our machine.

So, if you like, we’re on the outside looking in; and we are looking through a window on to our white image.  The colour and brightness level in our white image are correct on the inside of the system – our computer – but the viewing window or monitor might be too bright or too dark, and/or might be exhibiting a colour tint or cast.

Unless our monitor is a totally ‘clean window’ in terms of colour neutrality, then our image colour will not be displayed correctly.

And if the monitor is not running at the correct brightness then the colours and tones in our images will appear to be either too dark or too bright.  Please note the word ‘appear’…

Let’s get a bit fancy and make a greyscale in Photoshop:

Untitled 1 Monitor Brightness.

The dots represent Lab 50 to Lab 95 – the most valuable tonal range between midtone and highlight detail.

Look at the distance between Lab 50 & Lab 95 on the three greyscales above – the biggest ‘span’ is the correctly calibrated monitor.  In both the ‘too bright & contrasty’ and the ‘too dark low contrast’ calibration, that valuable tonal range is compressed.

In reality the colours and tones in, say an unprocessed RAW file on one of our hard drives, are what they are.  But if our monitor isn’t calibrated correctly, what we ‘see’ on our monitor IS NOT REALITY.

Reality is what we need – the colours and tones in our images need to be faithfully reproduced on our monitor.

And so basically a monitor profile ensures that we see our images correctly in terms of colour and brightness; it ensures that we look at our images through a clean window that displays 100% of the luminance being sent to it – not 95% and not 120% – and that all our primary colours are being displayed with 100% fidelity.

In a nutshell, on an uncalibrated monitor, an image might look like crap, when in reality it isn’t.  The shit really starts to fly when you start making adjustments in an uncalibrated workspace – what you see becomes even further removed from reality.

“My prints come out too dark Andy – why?”

Because your monitor is too bright – CALIBRATE it!

“My pics look great on my screen, but everyone on Nature Photographers Network keeps telling me they’ve got too much contrast and they need a levels adjustment.  One guy even reprocessed one – everyone thought his version was better, but frankly it looked like crap to me – why is this happening Andy?”

Because your monitor brightness is too low but your gamma is too high – CALIBRATE it!  If you want your images to look like mine then you’ve got to do ALL the things I do, not just some of ’em – do you think I do all this shit for fun??????????……………grrrrrr….

But there’s a potential problem; just because your monitor is calibrated to perfection, that does NOT mean that everything will be golden from this point on.

Monitor Viewing Conditions

So we’re outside taking a picture on a bright sunny day, but we can’t see the image on the back of the camera because there’s too much daylight, and we have to dive under a coat with our camera to see what’s going on.

But if we review that same image on the camera in the dark then it looks epic.

Now you have all experienced that…….

The monitor on the back of your camera has a set brightness level – if we view the screen in a high level of ambient light the image looks pale, washed out and in a general state of ultra low contrast.  Turn the ambient light down and the image on the camera screen becomes more vivid and the contrast increases.

But the image hasn’t changed, and neither has the camera monitor.

What HAS changed is your PERCEPTION of the colour and luminance values contained within the image itself.

Now come on kids – join the dots will you!

It does not matter how well your monitor is calibrated, if your monitor viewing conditions are not within specification.

Just like with your camera monitor, if there is too much ambient light in your working environment then your precisely calibrated monitor brightness and gamma will fail to give you a correct visualization or ‘perception’ of your image.

And the problems don’t end there either; coloured walls and ceilings reflect that colour onto the surface of your monitor, as does that stupid luminous green shirt you’re wearing – yes, I can see you!  And if you are processing on an iMac then THAT problem just got 10 times worse because of the glossy screen!

Nope – bead-blasting your 27 inches of Apple goodness is not the answer!

Right, now comes the serious stuff, so READ, INGEST and ACT.

ISO Standard 3664:2009 is the puppy we need to work to (sort of) – you can actually go and purchase this publication HERE should you feel inclined to dump 138 CHF on 34 pages of light bedtime reading.

There are actually two ISO standards that are relevant to us as image makers; ISO 12646:2015(draft) being the other.

12646 pertains to digital image processing where screens are to be compared to prints side by side (that does not necessarily refer to ‘desktop printer prints from your Epson 3000’).

3664:2009 applies to digital image processing where screen output is INDEPENDENT of print output.

We work to this standard (for the most part) because we want to process for the web as well as for print.

If we employ a print work flow involving modern soft-proofing and otherwise keep within the bounds of 3664 then we’re pretty much on the dance-floor.

ISO 3664 sets out one or two interesting and highly critical working parameters:

Ambient Light White Point: D50 – that means that the colour temperature of the light in your editing/working environment should be 5000 Kelvin (not your monitor) – and in particular this means the light FALLING ON TO YOUR MONITOR from within your room. So room décor has to be colour neutral as well as the light source.

Ambient Light Value in your Editing Area: 32 to 64 Lux or lower.  Now this is what shocks so many of you guys – lower than 32 lux is basically processing in the dark!

Ambient Light Glare Permissible: 0 – this means NO REFLECTIONS on your monitor and NO light from windows or other light sources falling directly on the monitor.

Monitor White Point – D65 (under 3664) and D50 (under 12646) – we go with D65.

Monitor Luminance – 75 to 100 cdm2 (under 3664) and 80 to 120 cdm2 (under 12646) – here we begin to deviate from 3664.

We appear to be dealing with mixed reference units here. Strictly speaking, lux measures the light falling on a surface (illuminance) while cdm2 measures the light a surface emits (luminance), but in practice the two figures are treated as directly comparable for this purpose.

The way Monitor Brightness or Luminance relates to ambient light levels is perhaps a little counter-intuitive for some folk.  Basically the LOWER your editing area Lux value the LOWER your Monitor Brightness or luminance needs to be.

Now comes the point in the story where common sense gets mixed with experience, and the outcome can be proved by looking at displayed images and prints; aesthetics as opposed to numbers.

Like all serious photographers I process my own images on a wide-gamut monitor, and I print on a wide-gamut printer.

Wide gamut monitors display pretty much 90% to 100% of the AdobeRGB1998 colour space.

What we might refer to as Standard Gamut monitors display something a little larger than the sRGB colour space, which as we know is considerably smaller than AdobeRGB1998.


Left is a standard gamut/sRGB monitor and right is a typical wide gamut/AdobeRGB1998 monitor – if you can call any NEC ‘typical’!

Find all the gory details about monitors on this great resource site – TFT Central.

At workshops I process on a 27 inch non-Retina iMac – this is to all intents and purposes a ‘standard gamut’ monitor.

I calibrate my monitors with a ColorMunki Photo – which is a spectrophotometer.  Spectros have a tendency to be slow, are slightly problematic in the very darkest tones, and exhibit something of a low-contrast reaction to ‘blacks’ below around Lab 6.3 (RGB 20,20,20).

If you own a ColorMunki Display or i1Display you do NOT own a spectro, you own a colorimeter!  A very different beast in the way it works, but from a colour point of view they give the same results as a spectro of the same standard – plus, for the most part, they work faster.

However, from a monitor brightness standpoint, they differ from spectros in their slightly better response to those ultra-dark tones.

So from a spectrophotometer standpoint I prefer to calibrate to ISO 12646 standard of 120cdm2 and control my room lighting to around 35-40 Lux.

Just so that you understand just how ‘nit-picking’ these standards are, the difference between 80cdm2 and 120cdm2 is only just over 1/2 a stop Ev in camera exposure terms (log2(120/80) ≈ 0.58)!

However, to put this monitor brightness standard into context, my 27 inch iMac came from Apple running at 290 cdm2 – and cranked up fully it’ll thump out 340 cdm2.

Most stand-alone monitors you buy, especially those that fall under the ‘standard gamut’ banner, will all be running at massively high monitor brightness levels and will require some severe turning down in the calibration process.

You will find that most monitor tests and reviews are done with calibration to the same figures that I have quoted – D65, 120cdm2 and Gamma 2.2 – in fact this non-standard set up has become so damn common it is now ‘standard’ – despite what the ISO chaps may think.

Using these values, printing out of Lightroom for example, becomes a breeze when using printer profiles created to the ICC v2 standard as long as you ‘soft proof’ the image in a fit and proper manner – that means CAREFULLY, take your time.  The one slight shortcoming of the set up is that side by side print/monitor comparisons may look ever so slightly out of kilter because of the D65 monitor white point – 6,500K transmitted white point as opposed to a 5,000K reflective white point.  But a shielded print-viewer should bring all that back into balance if such a thing floats your boat.

But the BIG THING you need to take away from this rather long article is the LOW LUX VALUE of your editing/working area ambient illumination.

Both the ColorMunki Photo and i1Pro2 spectrophotometers will measure your ambient light, as will the ColorMunki Display and i1 Display colorimeters, to name but a few.

But if you measure your ambient light and find the device gives you a reading of more than 50-60 Lux then DO NOT ask the device to profile for your ambient light; in fact I would not recommend doing this AT ALL – and here’s why.

I have a main office light that is colour corrected to 5000K and it chucks out 127 Lux at the monitor.  If I select the ‘measure and calibrate to ambient’ option on the ColorMunki Photo it eventually tells me I need a monitor brightness or luminance of 80 cd/m² – the only problem is that it gives me the same figure if I drop the ambient Lux value to 100.

Now that smells a tad fishy to me…

So my advice to anyone is to remove the variables: calibrate to 120 cd/m² and work in a very subdued ambient condition of 35 to 40 Lux.  I find it easier to control my low-Lux working ambient light levels than bugger about with over-complex calibration.

To put a final perspective on this figure, there is an interesting page on the Apollo Energytech website which quotes the Lux levels that comply with the law for different work environments – so don’t go to B&Q or Walmart to do a spot of processing, or we’ll all end up doing hard time at Her Madge’s pleasure, law-breakers that we are!

Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.

 

HDR in Lightroom CC (2015)

Lightroom CC (2015) – exciting stuff!

New direct HDR MERGE for bracketed exposure sequences inside the Develop Module of Lightroom CC 2015 – nice one Adobe!  I can see Eric Chan’s fingerprints all over this one…!


Twilight at Porth Y Post, Anglesey.

After a less than exciting 90 minutes on the phone with Adobe this very morning – that’s about 10 minutes of actual conversation and an eternity of crappy ‘Muzak’ – I’ve managed to switch from my expensive old single-app PsCC subscription to the Photography Plan – yay!

They wouldn’t let me upgrade my old stand-alone Lr4/Lr5 to Lr6 ‘on the cheap’, so now they’ve given me two apps for half the price I was paying for one – mental people, but I’ll not be arguing!

I was really eager to try out the new internal ‘Merge’ script/command for HDR sequences – and boy am I impressed.

I picked a twilight seascape scene I shot last year.


I took the 6 shot exposure-bracketed sequence of RAW files into the Develop Module of Lightroom CC and made 3 simple adjustments to all 6 under Auto Sync:

  1. Change camera profile from Adobe Standard to Camera Neutral.
  2. ‘Tick’ Remove Chromatic Aberration in the Lens Corrections panel.
  3. Change the colour temperature from ‘as shot’ to a whopping 13,400K – this neutralises the huge ‘twilight’ blue cast.

You have to remember that NOT ALL adjustments you can make in the Develop Module will carry over in this process, but these 3 will.


Ever since Lr4 came out we have had the ability to take a bracketed sequence in Lightroom and send it to Photoshop to produce what’s called a ’32-bit floating point TIFF’ file – HDR without any of the stupid ‘grunge effects’ so commonly associated with the more usual styles of HDR workflow.

The resulting TIFF file would then be brought back into Lightroom, where some very fancy processing limits were opened up to us – exposure latitude above all else.

‘Normal’ range images, be they RAW or TIFF etc., have a potential 10 stops of exposure adjustment – +5 to -5 stops – both in the Basic panel and with the Graduated and Radial filters.

But 32-bit float TIFFs had a massive 20 stops of adjustment – +10 to -10 stops – making for some very fancy and highly flexible processing.

Now then, what’s a ‘better’ file type than a pixel-based TIFF?  A RAW file…


So, after selecting the six RAW images, right-clicking and selecting ‘Photomerge>HDR’…


…and selecting ‘NONE’ from the ‘de-ghost’ options, I was amazed to find the resulting ‘merged’ file was a DNG – not a TIFF – yet it still carries the 20-stop exposure adjustment latitude.


This is the best news for ages – grunge-free, ‘real-looking’ HDR workflow time has just been axed by at least 50%.  I can’t say much more about it really, except that, IMHO of course, this is the best thing to happen to Adobe RAW workflow since the advent of PV2012 itself – BRILLIANT!

Note: Because all the shots in this sequence featured ‘blurred water’, applying any de-ghosting would be detrimental to the image, causing some weird artefacts where water met static rocks etc.

But if you have image sequences with moving objects in them you can select from 3 de-ghost pre-sets to try and combat the artefacts they cause, and you can check the de-ghost overlay tick-box to pre-visualise the de-ghosted areas in the final image.


Switch up to Lightroom CC 2015 – it’s worth it for this facility alone.


Image Sharpness

Image Sharpness

I spent the other afternoon in the Big Tower at Gigrin, in the very pleasant company of Mr. Jeffrey “Jeffer-Cakes” Young.  Left arm feeling better yet, Jeff?

I think I’m fairly safe in saying that once feeding time commenced at 3pm it didn’t take too long before Jeff got a firm understanding of just how damn hard bird flight photography truly is – if you are shooting for true image sharpness at 1:1 resolution.

I’d warned Jeff beforehand that his Canon 5Dmk3 would make his session somewhat more difficult than a 1Dx, due to its slightly less tractable autofocus adjustments – but that with his 300mm f2.8, even with his 1.4x converter mounted, his equipment was easily up to the job at hand.

I, on the other hand, was back on the Nikon gear – my 200-400 f4, but using a D4S I’d borrowed from Paul Atkins for some real head-to-head testing against the D4 (there’s a barrow-load of Astbury venom headed Nikon’s way shortly, I can tell you… watch this space, as they say).

Amongst the many topics discussed and pondered upon, I was trying to explain to Jeff the fundamental difference between ‘perceived’ and ‘real’ image sharpness.

Gigrin is a good place to find vast armies of ‘photographers’ who have ZERO CLUE that such an argument or difference even exists.

As a ‘teacher’ I can easily tell when I’m sharing hide space with folk like this, because they develop quizzical frowns and slightly self-righteous smirks as they eavesdrop on the conversation between my client and me.

“THEY” don’t understand that my client wants to achieve the same goal I’m always chasing after; and that goal is as different from theirs as a fillet of oak-smoked Scottish salmon is from a tin of John West mush.

I suppose I’d better start explaining myself at this juncture; so below are two 800-pixel long-edge JPEG files of the sort you typically see posted on a nature photography forum, website or blog:


IMAGE 1. Red Kite – Nikon D4S + 200-400 f4.



IMAGE 2. Red Kite – Nikon D4S + 200-400 f4.

“THEY” would be equally pleased with either…!

Both images look pretty sharp, are well exposed, and have pretty darn good composition from an editorial point of view too – so we’re all golden, aren’t we!

Or are we?

Both images would look equally good in terms of image sharpness at 1200 pixels on the long edge, and because I’m a smart-arse I could easily print both to A4 – and they’d still look as good as each other.

But one of them would also readily print to A3+, and in its digital form would get accepted at almost any stock agency on the planet; the other would most emphatically NOT pass muster for either purpose.

That’s because one of them has real, true image sharpness, while the other has none; all its image sharpness is perceptual, artificially induced through image processing.

Guessed which is which yet?


IMAGE 1 at 1:1 native resolution.

Image 1 has true sharpness because it is IN FOCUS.


IMAGE 2 at 1:1 native resolution.

And you don’t need glasses to see that image 2 is simply OUT OF FOCUS.

The next question is: which image is the cropped one – number 2?

Wrong…it’s number 1…


Image 1 uncropped is 4928 pixels on the long edge, and cropped is 3565 – in other words a 28% crop – which will still yield a 15+ inch print without any trouble whatsoever.
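
For anyone checking my homework, the arithmetic runs like this (assuming a fairly typical 240 ppi print resolution – adjust to taste):

$$1 - \frac{3565}{4928} \approx 0.28 \;\;\text{(a 28\% crop)}, \qquad \frac{3565\ \text{px}}{240\ \text{ppi}} \approx 14.9\ \text{inches}$$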

Image 2 is NOT cropped – it has just been SHRUNK to around 16% of its original size in the Lightroom export utility, with standard screen output sharpening.  So you CAN make a ‘silk purse from a sow’s ear’ – and no one would be any the wiser, as long as they never saw anything approaching the full-resolution image!
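
If you want to reproduce the ‘silk purse’ trick outside of Lightroom, a rough stand-in for that export – downsample hard, then apply output sharpening – takes only a few lines of Python with the Pillow library.  The file names are purely illustrative, and the UnsharpMask settings are just a plausible approximation of Lightroom’s ‘screen’ output sharpening:

```python
# Shrink a soft full-resolution image to 800px on the long edge and add
# output sharpening - roughly what the Lightroom export did to IMAGE 2.
from PIL import Image, ImageFilter

img = Image.open("soft_original.jpg")            # out of focus at 1:1
w, h = img.size
small = img.resize((800, round(h * 800 / w)), Image.LANCZOS)
sharp = small.filter(ImageFilter.UnsharpMask(radius=1, percent=80, threshold=2))
sharp.save("web_version.jpg", quality=90)        # looks 'sharp' at this size
```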

Given that both images were shot at 400mm focal length, it’s obvious that the bird in image 1 (now you know it’s cropped a bit) is FURTHER AWAY than the bird in image 2.

So why is one IN FOCUS and the other not?

The bird in image 1 is ‘crossing’ the frame more than it is ‘closing in’ on the camera.

The bird in image 2 is closer to the camera to begin with, and is getting closer by the millisecond.

These two scenarios impose totally different work-loads on the autofocus system.

The ability of the autofocus system to cope with ANY imposed work-load is totally dependent upon the control parameters you have set in the camera.

The ‘success’ rate of these adjustable autofocus parameter settings is affected by:

  1. Changing spatial relationship between camera and subject during a burst of frames.
  2. Subject-to-camera closing speed.
  3. Pre-shot tracking time.
  4. Frame rate.

And a few more things besides…!

The autofocus workloads for images 1 & 2 are poles apart, but the control parameter settings are identical.

The Leucistic Red Kite in the shot below is chugging along at roughly the same speed as its non-leucistic cousin in image 2. It’s also at pretty much the same focus distance:


Image 3. Leucistic Red Kite – same distance, closing speed and focal length as image 2.

So why is image 3 IN FOCUS when, given a similar scenario, image 2 is out of focus?

Because the autofocus control parameters are set differently – that’s why.

FACT: no single combination of autofocus control parameter settings will be your ‘magic bullet’ and give you nothing but sharp images with no ‘duds’ – unless you use a 12mm fish-eye lens that is!

Problems and focus errors INCREASE in frequency as focal length INCREASES.

They will also increase in frequency THE INSTANT you switch from a prime lens to a zoom lens, especially if the ‘zoom ratio’ exceeds 3:1.

Then we have to consider the accuracy and speed of the camera’s autofocus system AND the speed of the lens autofocus motor – and sadly these criteria generally become more favourable with an increased price tag.

So if you’re using a Nikon D800 with an 80-400, or a Canon 70D with a 100-400 then there are going to be more than a few bumps in your road.  And if you stick to just one set of autofocus control settings all the time then those bumps are going to turn into mountains – some of which are going to kill you off before you make their summit….metaphorically speaking of course!

And God forbid that you try this image 3 ‘head on close up’ malarkey with a Sigma 50-500 – if you want that level of shot quality then you might just as well stay at home and save yourself the hide fees and petrol money !

Things don’t get any easier if you do spend the ‘big bucks’ either.

Fast glass and a pro body ‘speed machine’ will offer you more control adjustments for sure.  But that just means more chances to ‘screw things up’ unless you know EXACTLY how your autofocus system works, exactly what all those different controls actually DO, and you know how to relate those controls to what’s happening in front of you.

Whatever lens and camera body combination any of us use, we have to first find, then learn to work within, its ‘effective envelope of operation’ – and by that I mean the REAL one, which is not necessarily always on a par with what the manufacturer might lead you to believe.

Take my Nikon 200-400 for example.  If I used autofocus on a static subject, let alone a moving one, at much past 50 metres with the venerable old D3 body at 400mm focal length, things in the critical image sharpness department became somewhat sketchy to say the least.  But put the lens on a D4 or D4S and I can shoot tack-sharp focussing targets at 80 to 100 metres all day long… not that I make a habit of this most meaningless of photographic pastimes.

That discrepancy is down to the old D3 autofocus system lacking the ability to accurately discriminate between distances from about 50 metres out to infinity when that particular lens was being used.  But swap the lens out for a 400 f2.8 prime and things were far better!

Using the lens on either a D4 or D4S on head-on, fast-moving/closing subjects such as Mr. Leucistic above, we hit another snag at 400mm – once the subject is less than 20 metres away the autofocus system can’t keep up, and image sharpness effectively drops off the proverbial cliff.  But zoom out to 200mm and that ‘cut-off’ distance reduces to 10 metres or so.  Subjects closing at slower speeds can get much closer to the camera before sharp focus begins to fail.
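
There is, I’d suggest, a simple bit of optics hiding in those numbers.  To a thin-lens approximation – with d the subject distance, f the focal length and v the image distance – the rate at which the focusing elements must travel grows with the square of focal length over subject distance:

$$v = \frac{fd}{d-f} \quad\Rightarrow\quad \left|\frac{dv}{dt}\right| \approx \frac{f^2}{d^2}\left|\frac{dd}{dt}\right|$$

So for the same closing speed, 400mm at 20 metres and 200mm at 10 metres demand exactly the same focus-travel speed – (400/20)² = (200/10)² – which tallies rather neatly with the cut-off distance halving when I zoom out from 400mm to 200mm.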

As far as I’m concerned this problem is more to do with the speed of the autofocus motor inside the lens than anything else.  Nikon brought out an updated version of this lens a few years back – amongst its ‘star qualities’ was a new nano-coating that stops the lens from flaring.  But does it focus any faster – does it heck!  And my version doesn’t suffer from flare either…!

Getting to know your equipment and how it all works is critical if you want your photography to improve in terms of image sharpness.

Shameless Plug Number 1.

I keep mentioning it – my ebook on Canon & Nikon Autofocus with long glass.

I’ll finish it one day soon – I need the money!


Shameless Plug Number 2.

1 to 1 Tuition Day

Understanding Canon & Nikon Autofocus

for

Bird in Flight Photography


View Autofocus Points in Lightroom

Mr. Malcolm Clayton sent me a link last week to a free plug-in for Lightroom that displays the autofocus points used for the shot, plus other very useful information such as focus distance, f-number and shutter speed, depth of field (DoF) values and other bits and bobs.

The plug-in is called “Show Focus Points” and you can download it HERE

Follow the installation instructions to the letter!

Once installed you can only launch it from the LIBRARY MODULE:


Accessing the plug-in via the Library > Plug-in Extras menu.

You will see this sort of thing:


The “Show Focus Points” for Lightroom plug-in window.

It’s a useful tool to have because, short of running the rather clunky Canon DPP or Nikon ViewNX software, it’s the easiest way of getting hold of autofocus information without sending the image to Photoshop and looking through the mind-numbing RAW schema data – something I do out of habit!
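
As an aside, if you’re comfortable at a command line, Phil Harvey’s exiftool will dig much of the same data out of the maker notes.  Here’s a minimal sketch, assuming exiftool is installed and on your PATH – tag names vary by manufacturer, so treat the ones below as Canon/Nikon examples rather than gospel, and the file name is made up:

```python
# Pull some autofocus-related metadata from a RAW file via exiftool.
# Tag names differ by camera maker; these are common Canon/Nikon ones.
import json
import subprocess

tags = ["-AFPointsInFocus", "-FocusDistanceUpper", "-FNumber", "-ExposureTime"]
result = subprocess.run(
    ["exiftool", "-json", *tags, "example.CR2"],  # illustrative file name
    capture_output=True, text=True, check=True,
)
print(json.dumps(json.loads(result.stdout)[0], indent=2))
```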

It displays a ton of useful data about your camera focus settings and exposure, and the autofocus point used – be it set by you, or chosen by the camera.

As far as I can see, the plug-in only displays the main active autofocus point on Nikon D4 and D4S files, but the whole autofocus group as well as the active points seems to display when viewing Canon .CR2 files – as we can see on this very impressive car number plate:


Screen grab of an unprocessed 1Dx/200-400/TC shot I did while testing the tracking capabilities of the Canon lens with the TC active – the REAL image looks more impressive than this! I’m actually zooming out while tracking too – this is around 200mm plus the 1.4x TC.


Canon 1Dx in AI Servo, AF Point Expansion 4-point – what I call “1 with 4 friends”.


Canon 1Dx in AI-F autofocus, showing all autofocus points used by the camera.

Viewing your autofocus points is a very valid learning tool when trying to become familiar with your camera’s autofocus, and it’s also handy if you want to see why and where you’ve “screwed the pooch” – hey, we ALL DO IT from time to time!

Useful tool to have IMO and it’s FREE – Andy likes free…

Cheers to Malc Clayton for bringing this to my attention.
