FX vs DX


It amazes me that people still don’t understand the relationship between FX and DX format sensors.

Millions of people across the planet still think that when they mount an FX lens on a DX body and turn the camera on, something magic happens and the lens somehow becomes a different beast.

NO…it doesn’t!

There is so much crap out there on the web, resulting in the blind being led by the stupid – and that is a hardcore fact.  Some of the ‘stuff’ I get sent links to on the likes of ‘diaper review’ (to coin Ken W.’s name for it) and others, leaves me totally aghast at the number of fallacies that are being promoted and perpetuated within the content of these high-traffic websites.

FFS – this has GOT to STOP.

Fallacy 1.  Using a DX crop sensor gives me more magnification.

Oh no it doesn’t!

If we arm an FX and a DX body with identical lenses, let’s say 500mm f4’s, and go and take the same picture, at the same time and from the same subject distance with both setups, we get the following images:


FX versus DX: FX + 500mm f4 image – 36mm x 24mm frame area FoV


FX versus DX: DX + 500mm f4 image – 24mm x 16mm frame area FoV


FX versus DX: With both cameras at the same distance from the subject, the Field of View of the DX body+500mm f4 combo is SMALLER – but the subject is EXACTLY the SAME SIZE.

Let’s overlay the two images:


FX versus DX: The DX field of view (FoV) is indicated by the black line. HOWEVER, this line only denotes the FoV area. It should NOT be taken as indicative of pixel dimensions.

The subject APPEARS larger in the DX frame because the frame FoV is SMALLER than that of the FX frame.


But I will say it again – the subject is THE SAME DAMN SIZE.  Any FX lens projects an image onto the focal plane that is THE SAME SIZE irrespective of whether the sensor is FX or DX – end of story.
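You can sanity-check this with the standard angle-of-view formula – the DX frame sees a narrower slice of the scene, but the image projected onto the focal plane is identical. A minimal sketch (the 36mm/24mm frame widths are the standard FX/DX dimensions):

```python
import math

def angle_of_view_deg(sensor_mm: float, focal_mm: float) -> float:
    """Horizontal angle of view for a given sensor dimension and focal length."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

fx = angle_of_view_deg(36.0, 500.0)   # FX frame width with a 500mm lens
dx = angle_of_view_deg(24.0, 500.0)   # DX frame width with the same lens

# The DX angle of view is narrower, but the image scale (mm of subject
# per mm of sensor) is unchanged -- the lens projects the same image
# regardless of what sensor sits behind it.
print(f"FX: {fx:.2f} deg, DX: {dx:.2f} deg")
```

The narrower angle is the whole story: the same projected image, with a smaller window cut out of it.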

Note: If such a thing existed, a 333mm prime on a DX crop body would give us the same COMPOSITION, at the same subject distance, as our 500mm prime on the FX body.  But at the same aperture and distance, this fictitious 333mm lens would give us MORE DoF due to it being a shorter focal length.
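That 333mm figure is simply the 500mm divided by Nikon’s 1.5x DX crop factor – matching composition, not magnification:

```python
NIKON_DX_CROP = 1.5  # 36mm FX frame width / 24mm DX frame width

def equivalent_focal_mm(fx_focal_mm: float, crop: float = NIKON_DX_CROP) -> float:
    """Focal length that gives a DX body the same framing an FX body
    gets from fx_focal_mm, at the same subject distance."""
    return fx_focal_mm / crop

print(equivalent_focal_mm(500.0))  # ~333mm
```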

Fallacy 2.  Using a DX crop sensor gives me more Depth of Field for any given aperture.

The other common variant of this fallacy is:

Using a DX crop sensor gives me less Depth of Field for any given aperture.

Oh no it doesn’t – not in either case!

Understand this people – depth of field is, as we all know, governed by the aperture diaphragm – in other words the f number.  Now everyone understands this, surely to God.

But here’s the thing – where’s the damn aperture diaphragm?  Inside the LENS – not the camera!

Depth of field is REAL or TRUE focal length, aperture and subject distance dependent, so our two identical 500mm f4 lenses at say 30 meters subject distance and f8 are going to yield the same DoF.  That’s irrespective of the physical dimensions of the sensor – be they 36mm x 24mm, or 24mm x 16mm.

But, in order for the FX setup to obtain the same COMPOSITION as that of the DX, the FX setup will need to be CLOSER to the subject – and so using the same f number/aperture value will yield an image with LESS DoF than that of the DX, because DoF decreases with decreased distance, for any given f number.

To obtain the same COMPOSITION with the DX as that of the FX, then said DX camera would need to move further away from the subject.  Therefore the same aperture value would yield MORE DoF, because DoF increases with increased distance, for any given f number.

The DX format does NOT change DoF – it’s the pixel pitch/CoC that alters the total DoF in the final image.  In other words it’s total megapixels that alters DoF, and that applies evenly across FX and DX.
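These relationships can be sketched with the standard hyperfocal-distance approximations. The CoC values of 0.030mm and 0.020mm below are the common FX and DX conventions – the point being that identical lenses at identical settings give identical DoF, and only the CoC criterion moves the number:

```python
def depth_of_field_mm(focal_mm, f_number, subject_mm, coc_mm):
    """Total depth of field via the standard hyperfocal approximation."""
    h = focal_mm**2 / (f_number * coc_mm) + focal_mm  # hyperfocal distance
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = subject_mm * (h - focal_mm) / (h - subject_mm)
    return far - near

# Two identical 500mm f4 lenses at f/8 and 30m: the optics are the same,
# so with the same CoC criterion the DoF is the same on either body.
same = depth_of_field_mm(500, 8, 30_000, 0.030)

# What differs between formats is the CoC criterion (tied to pixel
# pitch and enlargement), not the lens:
tighter_coc = depth_of_field_mm(500, 8, 30_000, 0.020)
print(f"{same:.0f} mm vs {tighter_coc:.0f} mm")
```

Run the same call twice with the same CoC and you get the same answer every time – the sensor dimensions never enter the formula.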

Fallacy 3.  An FX format sensor sees more light, or lets more light in, giving me more exposure because it’s a bigger ‘eye’ on the scene.

Oh no it doesn’t!

Now this crap really annoys the hell out of me.

Exposure has nothing to do with sensor size WHATSOEVER.  The intensity of light falling onto the focal plane is THE SAME, irrespective of sensor size.  Exposure is a function of Intensity x Time, and so for the same intensity (aperture) and time (shutter speed) the resulting exposure will be the SAME.  Total exposure is per unit area, NOT volume.

It’s the buckets full of rain water again:


The level of water in each bucket is the same, and represents total exposure.  There is no difference in exposure between sensor sizes.

There is a huge difference in volume, but your sensor does not work on total volume – it works per unit area.  Each and every square millimeter, or square micron, of the focal plane sees the same exposure from the image projected into it by the lens, irrespective of the dimensions of the sensor.
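Putting the bucket analogy into numbers (the intensity and shutter values below are purely illustrative):

```python
# Exposure is per unit area: intensity x time, independent of sensor size.
intensity = 100.0    # illustrative light intensity at the focal plane (set by aperture)
time_s = 1 / 250     # shutter speed in seconds

exposure_per_mm2 = intensity * time_s  # identical for every square mm of any sensor

fx_area = 36 * 24    # mm^2
dx_area = 24 * 16    # mm^2

# The *total* light gathered differs (the volume of water in the bucket),
# but the water LEVEL -- exposure per unit area -- is exactly the same.
fx_total = exposure_per_mm2 * fx_area
dx_total = exposure_per_mm2 * dx_area
```

The totals differ by the area ratio (864/384 = 2.25x), yet a meter reading, histogram, or grey card would look identical on both.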

The smallest unit area of the sensor is a photosite, and each photosite receives the same exposure value, no matter how big the sensor it is embedded in.

HOWEVER, it is how those individual photosites COPE with that exposure that makes the difference. And that leads us neatly on to the next fallacy.

Fallacy 4.  FX format sensors have better image quality because they are bigger.

Oh no they don’t – well, not just because they are bigger!

It’s all to do with pixel pitch, and pixel pitch governs VOLUME.


FX format sensors usually give a better image because their photosites have a larger diameter, or pitch. You should read HERE and HERE for more detail.

Larger photosites don’t really ‘see’ more light during an exposure than small ones, but because they are larger, each one has a better potential signal to noise ratio.  This can, in turn, allow for subtler variation in recorded light values, amongst other things such as low light response.  Think of a photosite as an eyeball, then think of all the animals that mess around in the dark – they all have big eyes!

That’s not the most technical statement I’ve ever made, but it’s a fact, and makes for a good analogy at this point.

Everyone wants a camera sensor that sees in the dark, generates zero noise at ISO 1 Million, has zero diffraction at f22, and has twice the resolution of a £35K medium format back.

Well kids, I hate to break it to you, but such a beast does not exist, and nor will it for many a year to come.

The whole FX versus DX format ‘thing’ is really a meaningless argument, and the DX format has no advantage over the FX format apart from less weight and (perhaps) lower price.

Yes, if we shoot a DX format camera using an FX lens we get the ‘illusion’ of a magnified subject – but that’s all it is – an illusion.

Yes, if we shoot the same shot on a 20Mp FX and crop it to look like the shot from a 20Mp DX, then the DX subject will have roughly twice as many pixels in it as the cropped FX one, because of the higher pixel density – but at what cost?

Cramming more megapixels into either a 36mm x 24mm or a 24mm x 16mm area results in one thing only – smaller photosites.  Smaller photosites come with one single benefit – greater detail resolution.  Every other attribute that comes with smaller photosites is a negative one:

  • Greater susceptibility to subject motion blur – the bane of landscape and astro photographers.
  • Greater susceptibility to diffraction due to lower CoC.
  • Lower CoC also reduces DoF.
  • Lower signal to noise ratio and poorer high ISO performance.
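A quick back-of-the-envelope sketch of how pitch falls out of format and megapixel count (assuming square pixels and no gaps between them – real sensors differ slightly):

```python
import math

def pixel_pitch_microns(width_mm: float, height_mm: float, megapixels: float) -> float:
    """Approximate photosite pitch, assuming square pixels covering the full frame."""
    pixels = megapixels * 1e6
    return math.sqrt((width_mm * height_mm) / pixels) * 1000  # mm -> microns

fx_20mp = pixel_pitch_microns(36, 24, 20)   # ~6.6 um pitch
dx_20mp = pixel_pitch_microns(24, 16, 20)   # ~4.4 um pitch

# Cropping a 20Mp FX frame to the DX field of view keeps only the central
# 24x16mm, i.e. (24*16)/(36*24) = 44% of the pixels -- roughly 8.9Mp --
# which is why the native 20Mp DX frame puts over twice as many pixels
# on the subject.
cropped_mp = 20 * (24 * 16) / (36 * 24)
```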

Note: Quite timely this! With the newly leaked info about the D850, we see it’s supposed to have a BSI sensor.  This makes it impossible to compare it with the D500, even though the photosites are pretty much the same size/pitch.  Any comparison is made harder still by the different micro-lens tech sported by the D850, and the functionality of the ADC/SNR firmware is bound to be different from the D500’s too.

Variations in AA filter type/properties, micro lens design, wiring substrate thickness, AF system algorithms and performance, ADC/SNR and other things all go towards making FX versus DX comparisons difficult, because we use our final output images to draw our conclusions – and they are affected by all of the above.

But facts are facts – DX does not generate either greater magnification or greater/less depth of field than FX when used with identical FX lenses at the same distance and aperture.

Sensor format affects nothing other than FoV; everything else is purely down to pixel pitch.

Color Temperature

Lightroom Color Temperature (or Colour Temperature if you spell correctly!)

“Andy – why the heck is Lightroom’s temperature slider the wrong way around?”

That’s a question that I used to get asked quite a lot, and it’s started again since I mentioned it in passing a couple of posts ago.

The short answer is “IT ISN’T…it’s just you who doesn’t understand what it is and how it functions”.

But in order to give the definitive answer I feel the need to get back to basics – so here goes.

The Spectrum Locus

Let’s get one thing straight from the start – LOCUS is just a posh word for PATH!

Visible light is just part of the electro-magnetic energy spectrum typically between 380nm (nanometers) and 700nm:


Below is what’s known as the Spectrum Locus – as defined by the CIE (Commission Internationale de l’Éclairage, or International Commission on Illumination).

In a nutshell the locus represents the range of colors visible to the human eye – or I should say chromaticities:


The blue numbers around the locus are simply the nanometer values from that same horizontal scale above. The reasoning behind the unit values of the x and y axes is complex and irrelevant to us in this post, otherwise it’ll go on for ages.

The human eye is a fickle thing.

It will always perceive, say, 255 green as being lighter than 255 red or 255 blue, and 255 blue as being the darkest of the three.  And the same applies to any value of the three primaries, as long as all three are the same.


This stems from the fact that the human eye has around twice the response to green light as it does red or blue – crazy but true.  And that’s why your camera sensor – if it’s a Bayer type – has twice the number of green photosites on it as red or blue.
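That double response to green is exactly why the standard relative-luminance weights lean so heavily on the green channel. A minimal sketch using the Rec. 709 coefficients (strictly these apply to linear-light RGB values, but the relative ordering is the point here):

```python
def rec709_luminance(r: float, g: float, b: float) -> float:
    """Relative luminance using the Rec. 709 weights: green dominates
    because the eye is most sensitive to it, blue contributes least."""
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

# Equal 8-bit primaries, very different perceived brightness:
green = rec709_luminance(0, 255, 0)   # lightest of the three
red   = rec709_luminance(255, 0, 0)
blue  = rec709_luminance(0, 0, 255)   # darkest of the three
```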

In rather over-simplified terms the CIE set a standard by which all colors in the visible spectrum could be expressed in terms of ‘chromaticity’ and ‘brightness’.

Brightness can be thought of as a grey ramp from black to white.

Any color space is a 3 dimensional shape with 3 axes x, y and z.

Z is the grey ramp from black to white, and the shape is then defined by the colour positions in terms of their chromaticity on the x and y axes, and their brightness on the z axis:


But if we just take the chromaticity values of all the colours visible to the human eye we end up with the CIE1931 spectrum locus – a two dimensional plot if you like, of the ‘perceived’ color space of human vision.

Now here’s where the confusion begins for the majority of ‘uneducated photographers’ – and I mean that in the nicest possible way, it’s not a dig!

Below is the same spectrum locus with an addition:


This additional TcK curve is called the Planckian Locus, or black body locus.  Now please don’t give up here folks – after all, you’ve got this far – but it’ll get worse before it gets better!

The Planckian Locus simply represents the color temperature, in Kelvin, of the light emitted by a ‘black body’ – think lump of pure carbon – as it is heated.  Its color temperature begins to visibly rise as its thermal temperature rises.

Up to a certain thermal temperature it’ll stay visibly black, then it will begin to glow a deep red.  Warm it up some more and the red color temperature turns to orange, then yellow and finally it will be what we can call ‘white hot’.

So the Planckian Locus is the 2D chromaticity plot of the colours emitted by a black body as it is heated.

Here’s point of confusion number 1: do NOT jump to the conclusion that this is in any way a greyscale. “Well it starts off BLACK and ends up WHITE” – I’ve come across dozens of folk who think that – as they say, a little knowledge is a dangerous thing indeed!

What the Planckian Locus IS indicative of though is WHITE POINT.

Our commonly used colour management white points of D65, D55 and D50 all lie along the Planckian Locus, as do all the other CIE standard illuminant types, of which there are more than a few.

The standard monitor calibration white point of D65 is actually 6500 Kelvin – it’s a standardized classification for ‘mean Noon Daylight’, and can be found on the Spectrum Locus/Planckian Locus at 0.31271x, 0.32902y.

D55 or 5500 Kelvin is classed as Mid Morning/Mid Afternoon Daylight and can be found at 0.33242x, 0.34743y.

D50 or 5000 Kelvin is classed as Horizon Light, with co-ordinates of 0.34567x, 0.35850y.
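If you want to sanity-check those coordinates, McCamy’s well-known approximation recovers the correlated colour temperature straight from the CIE 1931 xy values. (The D illuminants’ true CCTs sit a touch above their nominal figures – D65 is really ~6504K – thanks to a later revision of the radiation constant.)

```python
def mccamy_cct(x: float, y: float) -> float:
    """McCamy's (1992) cubic approximation of correlated colour
    temperature from CIE 1931 xy chromaticity coordinates."""
    n = (x - 0.3320) / (0.1858 - y)
    return 449 * n**3 + 3525 * n**2 + 6823.3 * n + 5520.33

print(round(mccamy_cct(0.31271, 0.32902)))  # D65 -> ~6504 K
print(round(mccamy_cct(0.33242, 0.34743)))  # D55 -> ~5503 K
```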

But we can also equate Planckian Locus values to our ‘picture taking’ in the form of white balance.

FACT: The HIGHER the color temperature the BLUER the light.  As the temperature falls, the light shifts from blue through white to yellow, then orange (studio type L photofloods, 3200K), then red (standard incandescent bulb, 2400K), down to candle flame at around 1850K.  Sunset and sunrise are typically standardized at 1850K, and LPS sodium street lights can be as low as 1700K.

And a clear polar sky can be upwards of 27,000K – now there’s blue for you!

And here’s where we find confusion point number 2!

Take a look at this shot taken through a Lee Big Stopper:


I’m an idle git and always have my camera set to a white balance of Cloudy B1, and here I’m shooting through a filter that notoriously adds a pretty severe bluish cast to an image anyway.

If you look at the TEMP and TINT sliders you will see Cloudy B1 is interpreted by Lightroom as 5550 Kelvin and a tint of +5 – that’s why the notation is ‘AS SHOT’.

Officially a Cloudy white balance is anywhere between 6000 Kelvin and 10,000 Kelvin depending on your definition, and I’ve stuck extra blue in there with the Cloudy B1 setting, which will make the effective temperature go up even higher.

So either way, you can see that Lightroom’s idea of 5550 Kelvin is somewhat ‘OFF’ to say the least, but it’s irrelevant at this juncture.

Where the real confusion sets in is shown in the image below:


“Andy, now you’ve de-blued the shot why is the TEMP slider value saying 8387 Kelvin ? Surely it should be showing a value LOWER than 5550K – after all, tungsten is warm and 3200K”….

How right you are…..and wrong at the same time!

What Lightroom is saying is that I’ve added YELLOW to the tune of 8387 − 5550, or 2837 Kelvin.

FACT – the color temperature controls in Lightroom DO NOT work by adjusting the Planckian or black body temperature of light in our image.  They are used to COMPENSATE for the recorded Planckian/black body temperature.

If you load an image into the Develop module of Lightroom and use any of the preset values, the value itself is ballpark correct(ish).

The Daylight preset loads values of 5500K and +10. The Shade preset will jump to 7500K and +10, and Tungsten will drop to 2850K and +/-0.

But the Tungsten preset puts the TEMP slider in the BLUE part of the slider’s Blue/Yellow graduated scale, and the Shade preset puts it in the YELLOW side of the scale, thus leading millions of people into mistakenly thinking that 7500K is warmer/yellower than 2850K – when it most definitely is NOT!
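As a toy model of that compensation logic – emphatically NOT Lightroom’s actual code – the slider states the colour temperature of light to be neutralised, so moving it ABOVE the as-shot value warms the render, and moving it below cools it:

```python
def slider_effect(new_temp_k: float, as_shot_k: float) -> str:
    """Hypothetical sketch of a WB slider's compensation direction:
    the Temp value names the light to neutralise, so a HIGHER (bluer)
    stated temperature means the render must ADD yellow, and vice versa."""
    delta = new_temp_k - as_shot_k
    if delta > 0:
        return f"adds yellow (compensating for {new_temp_k:.0f}K bluer light)"
    if delta < 0:
        return f"adds blue (compensating for {new_temp_k:.0f}K warmer light)"
    return "no change"

print(slider_effect(8387, 5550))  # the de-blued Big Stopper shot: yellow added
print(slider_effect(2850, 5550))  # dropping to the Tungsten value: blue added
```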

This kind of self-induced bad learning leaves people wide open to all sorts of misunderstandings when it comes to other aspects of color theory and color management.

My advice has always been the same: just ignore the numbers in Lightroom and do your adjustments subjectively – do what looks right!

But for heaven’s sake don’t try to build an understanding of color temperature on the color balance control values in Lightroom – otherwise you’ll get in one heck of a mess.