Parallel Horizontals.

Quite often when shooting landscapes, or more commonly seascapes, you may run into a problem with parallel horizontals and distortion between far and near horizontal features such as in the image below.

Parallel horizontals that are not parallel – but should be!

This sort of error cannot be fully corrected in Lightroom alone; we have to send the image to Photoshop in order to make the corrections in the most efficient manner.

Here’s a video lesson on how to effectively do just that, using the simplest, easiest and quickest of methods:

You can watch the video at full size HERE – make sure you click the HD icon.

This is something which commonly happens when photographing water with a regular-shaped man-made structure in the foreground and a foreshortened horizon line, such as the receding opposite shore in this shot.  But with a little logical thought these problems with parallel horizontals being “out of kilter” can be easily cured.

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Paper White – Desktop Printing 101

Paper White video

A while back I posted an article called How White is Paper White.

As a follow-up to my last post on the basic properties of printing paper media I thought I’d post this video to refresh the idea of “white”.

In this video we basically look at a range of 10 Permajet papers and simply compare their tints and brightness – it’s an illustration I give at my print workshops which never fails to amaze all the attendees.

I know I keep ‘banging on’ about this but you must understand:

  • Very few paper whites are even close to being neutral.
  • No paper is WHITE in terms of luminosity – RGB 255 in 8 bit colour terms.
  • No paper can hold a true black – RGB 0 in 8 bit colour terms.

In real-world terms ALL printing paper is a TINTED GREY – some cool, some warm.

If we attempted to print the image above on a cool tinted paper then we would REDUCE or even CANCEL OUT the warm tonal effects and general ‘atmosphere’ of the image.

Conversely, print it to a warmer tinted ‘paper white’ and the atmosphere would be enhanced.

Would this enhancement be a good thing?  Well, er NO – not if we were happy with our original ‘on screen’ processing.

You need to look upon ‘paper white’ as another TOOL to help you achieve your goal of great looking photographs, with a minimum of fuss and effort on your part.

We have to ‘soft proof’ our images if we want to get a print off the printer that matches what we see on our monitor.

But we can’t soft proof until we have made a decision about what paper we are going to soft-proof to.

Choosing a paper whose characteristics match our finished ‘on screen’ image – in terms of TINT especially – will make the job of soft proofing much easier.

How, why?

Proper soft proofing requires us to make a copy of our original image (there’s most people’s first mistake – not making a copy) and then make adjustments to said copy, in a soft proof environment, so that it renders correctly on the print – in other words it matches our original processed image.

Printing from Photoshop requires a hard copy; printing from Lightroom is different – it relies on VIRTUAL copies.

Either way, this copy and its proof adjustments are what get sent to the printer along what we call the PRINT PIPELINE.

The print pipeline has to do a lot of work:

  • It has to transpose our adjusted/soft proofed image colour values from additive RGB to print CMYK
  • It has to upsample or interpolate the image to suit the dpi instructions sent to the print head, depending on print output size.
  • It has to apply the correct droplet size instructions to each nozzle in the print head hundreds of times per second.
  • And it has to do a lot of other ‘stuff’ besides!!

The key component is the Printer Driver – and printer drivers are basically CRAP at carrying out all but the simplest of instructions.

In other words they don’t like hard work.

Printing to a paper white that matches our image:

  • Warm image to warm tint paper white
  • Cool image to cool paper white

will reduce the amount of adjustments we have to make under soft proofing and therefore REDUCE the printer driver workload.

The less work the print driver has to do, the lower the risk of things ‘getting lost in translation‘, and if nothing gets lost then the print matches the on-screen image – assuming of course that your eyes haven’t let you down at the soft proofing stage!

If we try to print this squirrel on the left to Permajet Gloss 271 (warmish image to very cool tint paper white) we can see what will happen.

We have got to make a couple of tweaks in terms of luminosity BUT we’ve also got to make a global change to the overall colour temperature of the image – this will most likely present us with a need for further opposing colour channel adjustments between light and dark tones.

 

Whereas if the same image is sent to Permajet Fibre Base Gloss Warmtone, all we’ll have to do is tweak the luminosity up a tiny bit and the saturation down a couple of points, and basically we’ll be sorted.

So less work, and less work means less room for error in our hardware drivers; this leads to more efficient printing and reduced print production costs.

And reduced cost leads to a happy photographer!

Printing images is EASY –  as long as you get all your ducks in a row – and you’ve only got a handful of ducks to control.

Understanding print media and grasping the implications of paper white is one of those ducks………


Photoshop CC Update

Installing a new Photoshop CC update is supposed to be a simple matter of clicking a button and the job gets done.

This morning both my Mac systems were telling me to update from v14.1.2 to v14.2.

I have two Macs, a late 2012 iMac and a mid 2009 Mac Pro.  The Mac Pro used to run Snow Leopard but was upgraded to Mountain Lion because of Lightroom 5 dropping Snow Leopard support.

Now I never have any problems with Cloud Updates from Adobe on the iMac, but sometimes the Mac Pro can do some strange things – and this morning was no exception!

The update installed on the iMac without a hitch, but when it completed on the Mac Pro I was greeted with a message telling me that some components had not installed correctly.  On opening Photoshop CC I found that the version had rolled back to v14.0, and hitting UPDATE in both the app and my CC control panel simply informed me that my software was up to date and no updates were available!

So I just thought I’d do a blog entry on what to do if this ever happens to you!

 

Remove Photoshop CC

The first thing to do is UNINSTALL  Photoshop CC with the supplied uninstaller.

You’ll find this in the main Photoshop CC root directory:

Locate the Photoshop CC Uninstaller.

Take my advice and put a tick in the check box to “Remove Preferences” – the Photoshop preferences file can be a royal pain in the ass sometimes, so dump it – a new one will get written as soon as you fire Photoshop up after the new install.

Click UNINSTALL.

Once this action is complete YOU MUST RESTART THE MACHINE.

 

After the restart wait for the Creative Cloud to connect then open your CC control panel.

Under the Apps tab you’ll see that Photoshop CC is no longer listed.

Scroll down past all the apps Adobe have listed and you’ll come to Photoshop CC;  it’ll have an INSTALL button next to it – click the install button:

Install Photoshop CC from the Cloud control panel.

If you are installing the 14.1.2 to 14.2 update (the current one as of today’s date) you might find a couple of long ‘sticking points’ during the installation process – notably between 1 and 20% and a long one at 90% – just let the machine do its thing.

When the update is complete I’d recommend you do a restart – it might not be necessary, but I do it anyway.

Once the machine has restarted fire up Photoshop, click on ‘About Photoshop’ and you should see:

Photoshop “about screen” showing version number.

Because we dumped the preferences file we need to go and change the defaults for best working practice:

Preferences Interface tab.

If you want to change the BG colour then do it here.

Next, click File Handling:

File handling tab in Photoshop Preferences

Remove the tick from the SAVE IN BACKGROUND check box – like the person who put it there, you too might think background auto-save is a good idea – IT ISN’T – think about it!

Finally, go to Performance:

Photoshop preferences Performance tab

and change the Scratch Disc to somewhere other than your system drive if you have additional internal drives fitted.  If you only have 1 internal drive then leave it “as is”.  You ‘could’ use an external drive as a scratch disk, but to be honest it really does need to be a fast drive over a fast connection – USB 2 to an old 250Gb portable isn’t really going to cut it!

You can go and check your Colour Settings, though these should not have changed – assuming you had ’em set right in the first place!

Here’s what they SHOULD look like:

Photoshop PROPER COLOUR SETTINGS!

That’s it – you’re done!


Please consider supporting this blog.

This blog really does need your support. All the information I put on these pages I do freely, but it does involve costs in both time and money.

If you find this post useful and informative please could you help by making a small donation – it would really help me out a lot – whatever you can afford would be gratefully received.

Your donation will help offset the costs of running this blog and so help me to bring you lots more useful and informative content.

Many thanks in advance.

 

Sensor Resolution

In my previous two posts on this subject HERE and HERE I’ve been looking at pixel resolution as it pertains to digital display and print, and the basics of how we can manipulate it to our benefit.

You should also be aware by now that I’m not the world’s biggest fan of high sensor resolution 35mm format dSLRs – there’s nothing wrong with megapixels as such; you can’t have enough of them in my book!

BUT, there’s a limit to how many you can cram into a 36 x 24 millimeter sensor area before things start getting silly and your photographic life gets harder.

So in this post I want to explain the reasoning behind my thoughts.

But before I get into that I want to address something else to do with resolution – the standard by which we judge everything we see around us – the resolution of the eye.

 

Human Eye – How Much Can We See?

In very simple terms, because I’m not an optician, the answer goes like this.

Someone with what some call 20/20/20 vision – 20/20 vision in a 20 year old – has a visual acuity of 5 line pairs per millimeter at a distance of 25 centimeters.

What’s a line pair?

5 line pairs per millimeter. Each line pair is 0.2mm and each line is 0.1mm.

Under ideal viewing conditions in terms of brightness and contrast the human eye can at best resolve 0.1mm detail at a distance of 25 centimeters.

Drop the brightness and the contrast and black will become less black and more grey, and white will become greyer; the contrast between light and dark is reduced and that 0.1mm detail becomes less distinct, until the point comes where the same eye can’t resolve detail any smaller than 0.2mm at 25cms, and so on.
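
Incidentally, you can turn that acuity figure into print numbers with a bit of simple Python – a back-of-envelope sketch only, and the assumption that resolvable detail scales linearly with viewing distance is mine, but it shows why the traditional ‘photo quality’ print figure sits around 250-300 PPI:

```python
# Sketch: scale the 0.1mm-at-25cm acuity figure with viewing distance
# (linear scaling assumed) and convert it to the print resolution
# needed for individual pixels to be indistinguishable to the eye.

def resolvable_detail_mm(distance_cm, base_detail_mm=0.1, base_distance_cm=25.0):
    """Smallest detail (mm) the 20/20/20 eye resolves at distance_cm."""
    return base_detail_mm * (distance_cm / base_distance_cm)

def required_ppi(distance_cm):
    """Pixels per inch needed so individual pixels can't be picked out."""
    return 25.4 / resolvable_detail_mm(distance_cm)

for d in (25, 50, 100):
    print(f"{d}cm: {resolvable_detail_mm(d):.2f}mm detail -> {required_ppi(d):.0f} PPI")
# 25cm: 0.10mm detail -> 254 PPI
# 50cm: 0.20mm detail -> 127 PPI
# 100cm: 0.40mm detail -> 64 PPI
```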

Now if I try and focus on something at 25 cms my eyeballs start to ache,  so we are talking extreme close focus for the eye here.

An interesting side note is that 0.1mm is 100µm (microns) and microns are what we measure the size of sensor photosites in – which brings me nicely to SENSOR resolution.

 

Sensor Resolution – Too Many Megapixels?

As we saw in the post on NOISE we do not give ourselves the best chances by employing sensors with small photosite diameters.  It’s a basic fact of physics and mathematics – the more megapixels on a sensor, then the smaller each photosite has to be in order to fit them all in there; and the smaller they are, the lower their individual signal to noise or S/N ratio.

But there is another problem that comes with increased sensor resolution:

Increased diffraction threshold.

Schematic of identical surface areas on lower and higher megapixel sensors.

In the above schematic we are looking at the same sized tiny surface area section on two sensors.

If we say that the sensor resolution on the left is that of a 12Mp Nikon D3, and the ‘area’ contains 3 x 3 photosites which are each 8.4 µm in size, then we can say we are looking at an area of about 25µm square.

On the right we are looking at that same 25µm (25 micron) square, but now it contains 5.2 x 5.2 photosites, each 4.84µm in size – a bit like the sensor resolution of a 36Mp D800.

 

What is Diffraction?

Diffraction is basically the bending of waves around objects or edges placed in their path (not to be confused with refraction).  As it pertains to our camera sensor, and overall image quality, it causes a general softening of every single point of sharp detail in the image that is projected onto the sensor during the exposure.

I say during the exposure because diffraction is ‘aperture driven’ and its effects only occur when the aperture is ‘stopped down’; which on modern cameras only happens during the time the shutter is open.

At all other times you are viewing the image with the aperture wide open, and so you can’t see the effect unless you hit the stop down button (if you have one) and even then the image in the viewfinder is so small and dark you can’t see it.

As I said, diffraction is caused by aperture diameter – the size of the hole that lets the light in:

Diffraction has a low presence in the system at wider apertures.

Light enters the lens, passes through the aperture and strikes the focal plane/sensor causing the image to be recorded.

Light waves passing through the center of the aperture and light waves passing through the periphery of the aperture all need to travel the same distance – the focal distance – in order for the image to be sharp.

The potential for the peripheral waves to be bent by the edge of the aperture diaphragm increases as the aperture becomes smaller.

Diffraction has a greater presence in the system at narrower apertures.

If I apply some randomly chosen numbers to this you might understand it a little better:

Let’s say that the focal distance of the lens (not focal length) is 21.25mm.

As long as light passing through all points of the aperture travels 21.25mm and strikes the sensor then the image will be sharp; in other words, the more parallel the central and peripheral light waves are, then the sharper the image.

Making the aperture narrower by ‘stopping down’ increases the divergence between central and peripheral waves.

This means that peripheral waves have to travel further before they strike the sensor; further than 21.25mm – therefore they are no longer in focus, but those central waves still are.  This effect gives a fuzzy halo to every single sharply focused point of light striking our sensor.

Please remember, the numbers I’ve used above are meaningless and random.

The amount of fuzziness varies with aperture – wider aperture =  less fuzzy; narrower aperture = more fuzzy, and the circular image produced by a single point of sharp focus is known as an Airy Disc.

As we ‘stop down’ the aperture the edges of the Airy Disc become softer and more fuzzy.

Say for example, we stick a 24mm lens on our camera and frame up a nice landscape, and we need to use f14 to generate the amount of depth of field we need for the shot.  The particular lens we are using produces an Airy Disc of a very particular size at any given aperture.
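
We can put rough numbers on that.  A common approximation puts the Airy Disc central-spot diameter at 2.44 x wavelength x f-number; the little Python sketch below compares that estimate with the photosite sizes quoted earlier.  Treat it as illustrative only – the ‘multiples of photosite pitch’ comparison is my simplification, and real-world diffraction limits are rather more forgiving than the raw numbers suggest (demosaicing, AA filters and the rest all muddy the water):

```python
# Sketch: Airy Disc central-spot diameter ~ 2.44 * wavelength * f-number,
# compared with the photosite sizes quoted in the text.
# Green light (0.55 microns) assumed; the comparison is illustrative only.

def airy_disc_um(f_number, wavelength_um=0.55):
    return 2.44 * wavelength_um * f_number

photosites_um = {"D3 (8.4um)": 8.4, "D800 (4.84um)": 4.84}

for f in (8, 11, 14, 16, 22):
    disc = airy_disc_um(f)
    ratios = "  ".join(f"{name}: {disc / pitch:.1f}x pitch"
                       for name, pitch in photosites_um.items())
    print(f"f/{f}: disc {disc:.1f}um   {ratios}")
```

Whatever threshold you pick, the trend is the point: the D800’s smaller photosites cross it a stop or two earlier than the D3’s.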

Now here is the problem:

Schematic of identical surface areas on lower and higher megapixel sensors and the same diameter Airy Disc projected on both of them.

As you can see, the camera with the lower sensor resolution and larger photosite diameter contains the Airy Disc within the footprint of ONE photosite; but the disc affects NINE photosites on the camera with the higher sensor resolution.

Individual photosites basically record one single flat tone which is the average of what they see; so the net outcome of the above scenario is:

Schematic illustrating the tonal output effect of a particular size Airy Disc on higher and lower resolution sensors

On the higher resolution sensor the Airy Disc has produced what we might think of as ‘response pollution’ in the 8 surrounding photosites – these photosites need to record the values of their own ‘bits of the image jigsaw’ as well – so you end up with a situation where each photosite on the sensor records somewhat imprecise tonal values – this is diffraction in action.

If we were to stop down to f22 or f32 on the lower resolution sensor then the same thing would occur.

If we used an aperture wide enough on the higher resolution sensor – an aperture that generated an Airy Disc the same size or smaller than the diameter of the photosites – then only 1 single photosite would be affected and diffraction would not occur.

But that would leave us with a reduced depth of field – getting around that problem is fairly easy if you are prepared to invest in something like a Tilt-Shift lens.

Both images shot with a 24mm TS lens at f3.5. Left image lens is set to zero and behaves as normal 24mm lens. Right image has 1 degree of down tilt applied.

Above we see two images shot with a 24mm Tilt-Shift lens, and both shots are at f3.5 – a wide open aperture.  In the left hand image the lens controls are set to zero and so it behaves like a standard construction lens of 24mm and gives the shallow depth of field that you’d expect.

The image on the right is again, shot wide open at f3.5, but this time the lens was tilted down by just 1 degree – now we have depth of field reaching all the way through the image.  All we would need to do now is stop the lens down to its sharpest aperture – around f8 – and take the shot;  and no worries about diffraction.

Getting back to sensor resolution in general, if you move into high megapixel counts on 35mm format then you are in a ‘Catch 22’ situation:

  • Greater sensor resolution enables you to theoretically capture greater levels of detail.

but that extra level of detail is somewhat problematic because:

  • Diffraction renders it ‘soft’.
  • Eliminating the diffraction causes you to potentially lose the newly acquired level of, say, foreground detail in a landscape, due to lack of depth of field.

All digital sensors are susceptible to diffraction at some point or other – they are ‘diffraction limited’.

Over the years I’ve owned a Nikon D3 I’ve found it diffraction limited to between f16 & f18 – I can see it at f18 but can easily rescue the situation.  When I first used a 24Mp D3X I forgot what I was using and spent a whole afternoon shooting at f16 & f18 – I had to go back the next day for a re-shoot because the sensor is diffraction limited to f11 – the pictures certainly told the story!

Everything in photography is a trade-off – you can’t have more of one thing without having less of another.  Back in the days of film we could get by with one camera and use different films because they had very different performance values, but now we buy a camera and expect its sensor to perform all tasks with equal dexterity – sadly, this is not the case.  All modern consumer sensors are jacks of all trades.

If it’s sensor resolution and image quality to the nth degree you want, then by far the best way to go about it is to jump to medium format – this way you get the ‘pixel resolution’ without many of the attendant problems I’ve mentioned, simply because the sensors are twice the size; or invest in a TS/PC lens and take the Scheimpflug route to more depth of field at a wider aperture.


Pixel Resolution – part 2

More on Pixel Resolution

In my previous post on pixel resolution  I mentioned that it had some serious ramifications for print.

The major one is PHYSICAL or LINEAR image dimension.

In that previous post I said:

  • Pixel dimension divided by pixel resolution = linear dimension

Now, as we saw in the previous post, linear dimension has zero effect on ‘digital display’ image size – here’s those two snake jpegs again:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

Digital display size is driven by pixel dimension, NOT linear dimension or pixel resolution.

Print on the other hand is directly driven by image linear dimension – the physical length and width of our image in inches, centimeters or millimeters.

Now I teach this ‘stuff’ all the time at my Calumet workshops and I know it’s hard for some folk to get their heads around print size and printer output, but it really is simple and straightforward if you just think about it logically for a minute.

Let’s get away from snakes and consider this image of a cute Red Squirrel:

Red Squirrel with Bushy Tail – what a cutey!
Shot with Nikon D4 – full frame render.

Yeah yeah – he’s a bit big in the frame for my taste but it’s a seller so boo-hoo – what do I know!!

Shot on a Nikon D4 – the relevance of which is this:

  • The D4 has a sensor with a linear dimension of 36 x 24 millimeters, but more importantly a photosite dimension of 4928 x 3280. (this is the effective imaging area – total photosite area is 4992 x 3292 according to DXO Labs).

Importing this image into Lightroom, ACR, Bridge, CapOne Pro etc will take that photosite dimension as a pixel dimension.

They also attach the default standard pixel resolution of 300 PPI to the image.

So now the image has a set of physical or linear dimensions:

  • 4928/300  x  3280/300 inches  or  16.43″ x 10.93″

or

  • 417.24 x 277.71 mm for those of you with a metric inclination!
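
If you want to check my arithmetic, here’s the sum as a trivial Python sketch – pixel dimension divided by pixel resolution gives linear dimension:

```python
# Pixel dimension / pixel resolution = linear dimension.

def linear_size(px_long, px_short, ppi):
    inches = (px_long / ppi, px_short / ppi)
    mm = tuple(side * 25.4 for side in inches)
    return inches, mm

inches, mm = linear_size(4928, 3280, 300)   # D4 file at the default 300 PPI
print(f'{inches[0]:.2f}" x {inches[1]:.2f}"')   # 16.43" x 10.93"
print(f"{mm[0]:.2f} x {mm[1]:.2f} mm")          # 417.24 x 277.71 mm
```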

So how big CAN we print this image?

 

Pixel Resolution & Image Physical Dimension

Let’s get back to that sensor for a moment and ask ourselves a question:

  • “Does a sensor contain pixels, and can it have a PPI resolution attached to it?”
  • Well, the strict answer would be No, and no, not really.

But because the photosite dimensions end up being ‘converted’ to pixel dimensions then let’s just for a moment pretend that it can.

The ‘effective’ PPI value for the D4 sensor could be easily derived from its long edge ‘pixel’ count of the FX frame divided by the linear length which is just shy of 36mm or 1.4″ – 3520 PPI or thereabouts.

So, if we take this all literally our camera captures and stores a file that has linear dimensions of  1.4″ x 0.9″, pixel dimensions of  4928 x 3280 and a pixel resolution of 3520 PPI.

Import this file into Lightroom for instance, and that pixel resolution is reduced to 300 PPI.  It’s this very act that renders the image on our monitor at a size we can work with.  Otherwise we’d be working on postage stamps!

And what has that pixel resolution done to the linear image dimensions?  Well it’s basically ‘magnified’ the image – but by how much?

 

Magnification & Image Size

Magnification factors are an important part of digital imaging and image reproduction, so you need to understand something – magnification factors are always calculated on the diagonal.

So we need to identify the diagonals of both our sensor, and our 300 PPI image before we can go any further.

Here is a table of typical sensor diagonals:

Table of Sensor Diagonals for Digital Cameras.

And here is a table of metric print media sizes:

Metric Paper Sizes including diagonals.

To get back to our 300 PPI image derived from our D4 sensor,  Pythagoras tells us that our 16.43″ x 10.93″ image has a diagonal of 19.73″ – or 501.14mm

So with a sensor diagonal of 43.2mm we arrive at a magnification factor of around 11.6x for our 300 PPI native image as displayed on our monitor.

This means that EVERYTHING on the sensor – photosites/pixels, dust bunnies, logs, lumps of coal, circles of confusion, Airy Discs – the lot – are magnified by that factor.

Just to add variety, a D800/800E produces native 300 PPI images at 24.53″ x 16.37″ – a magnification factor of 17.3x over the sensor size.
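
Here’s that diagonal arithmetic as a short Python sketch – note the D800 pixel dimensions (7360 x 4912) are simply inferred from the 24.53″ x 16.37″ figure above:

```python
# Magnification factor = image diagonal / sensor diagonal,
# image diagonal found via Pythagoras. 43.2mm is the FX sensor
# diagonal from the table above.
import math

def mag_factor(px_long, px_short, ppi, sensor_diag_mm=43.2):
    diag_inches = math.hypot(px_long / ppi, px_short / ppi)
    return (diag_inches * 25.4) / sensor_diag_mm

print(f"D4:   {mag_factor(4928, 3280, 300):.1f}x")   # ~11.6x
print(f"D800: {mag_factor(7360, 4912, 300):.1f}x")   # ~17.3x
```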

So you can now begin to see why pixel resolution is so important when we print.

 

How To Blow Up A Squirrel!

Let’s get back to ‘his cuteness’ and open him up in Photoshop:

Our Squirrel at his native 300 PPI open in Photoshop.

See how I keep you on your toes – I’ve switched to millimeters now!

The image is 417 x 277 mm – in other words it’s basically A3.

What happens if we hit print using A3 paper?

Red Squirrel with Bushy Tail. D4 file at 300 PPI printed to A3 media.

Whoops – that’s not good at all because there is no margin.  We need workable margins for print handling and for mounting in cut mattes for framing.

Do not print borderless – it’s tacky, messy and it screws your printer up!

What happens if we move up a full A size and print A2:

Red Squirrel D4 300 PPI printed on A2

Now that’s just overkill.

But let’s open him back up in Photoshop and take a look at that image size dialogue again:

Our Squirrel at his native 300 PPI open in Photoshop.

If we remove the check mark from the resample section of the image size dialogue box (circled red) and make one simple change:

Our Squirrel at a reduced pixel resolution of 240 PPI open in Photoshop.

All we need to do is to change the pixel resolution figure from 300 PPI to 240 PPI and click OK.

We make NO apparent change to the image on the monitor display because we haven’t changed any physical dimension and we haven’t resampled the image.

All we have done is tell the print pipeline that every 240 pixels of this image must occupy 1 linear inch of paper – instead of 300 pixels per linear inch of paper.
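
Here’s the effect of that resample-off resolution change in plain numbers – a quick Python sketch (A2 being 420 x 594 mm, from the paper size table in the previous post):

```python
# Same pixels, different pixel resolution -> different linear print size.

def print_size_mm(px_long, px_short, ppi):
    return (px_long / ppi * 25.4, px_short / ppi * 25.4)

for ppi in (300, 240):
    w, h = print_size_mm(4928, 3280, ppi)
    print(f"{ppi} PPI -> {w:.0f} x {h:.0f} mm")
# 300 PPI -> 417 x 278 mm   (fills A3 edge to edge; swims on A2)
# 240 PPI -> 521 x 347 mm   (sits on A2 with workable margins)
```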

Let’s have a look at the final outcome:

Red Squirrel D4 240 PPI printed on A2.

Perfick… as Pop Larkin would say!

Now we have workable margins to the print for both handling and mounting purposes.

But here’s the big thing – printed at 2880+ DPI printer output resolution you would see no difference in visual print quality.  Indeed, 240 PPI was the Adobe Lightroom, ACR default pixel resolution until fairly recently.

So there we go, how big can you print?? – Bigger than you might think!

And it’s all down to pixel resolution – learn to understand it and you’ll find a lot of  the “murky stuff” in photography suddenly becomes very simple!


Pixel Resolution

What do we mean by Pixel Resolution?

Digital images have two sets of dimensions – physical size or linear dimension (inches, centimeters etc) and pixel dimensions (long edge & short edge).

The physical dimensions are simple enough to understand – the image is so many inches long by so many inches wide.

Pixel dimension is straightforward too – ‘x’ pixels long by ‘y’ pixels wide.

If we divide the pixel dimensions by the physical dimensions we arrive at the PIXEL RESOLUTION.

Let’s say, for example, we have an image with pixel dimensions of 3000 x 2400 pixels, and a physical, linear dimension of 10 x 8 inches.

Therefore:

3000 pixels/10 inches = 300 pixels per inch, or 300PPI

and obviously:

2400 pixels/8 inches = 300 pixels per inch, or 300PPI

So our image has a pixel resolution of 300PPI.
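
And that really is all the maths there is to it – as a trivial Python sketch:

```python
# Pixel resolution = pixel dimension / linear dimension.

def pixel_resolution(pixels, inches):
    return pixels / inches

print(pixel_resolution(3000, 10))   # 300.0 PPI
print(pixel_resolution(2400, 8))    # 300.0 PPI
```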

 

How Does Pixel Resolution Influence Image Quality?

In order to answer that question let’s look at the following illustration:

The number of pixels contained in an image of a particular physical size has a massive effect on image quality. CLICK to view full size.

All 7 square images are 0.5 x 0.5 inches square.  The image on the left has 128 pixels per 0.5 inch of physical dimension, therefore its PIXEL RESOLUTION is 2 x 128 PPI (pixels per inch), or 256PPI.

As we move from left to right we halve the number of pixels contained in the image whilst maintaining the physical size of the image – 0.5″ x 0.5″ – so the pixels in effect become larger, and the pixel resolution becomes lower.

The fewer the pixels we have then the less detail we can see – all the way down to the image on the right where the pixel resolution is just 4PPI (2 pixels per 0.5 inch of edge dimension).

The thing to remember about a pixel is this – a single pixel can only contain 1 overall value for hue, saturation and brightness, and from a visual point of view it’s as flat as a pancake in terms of colour and tonality.

So, the more pixels we can have between point A and point B in our image the more variation of colour and tonality we can create.

Greater colour and tonal variation means we preserve MORE DETAIL and we have a greater potential for IMAGE SHARPNESS.

REALITY

So we have our 3 variables; image linear dimension, image pixel dimension and pixel resolution.

In our typical digital work flow the pixel dimension is derived from the photosite dimension of our camera sensor – so this value is fixed.

RAW file handlers like Lightroom, ACR etc. all default to a native pixel resolution of 300PPI* (this 300ppi myth annoys the hell out of me and I’ll explain all in another post).

So basically the pixel dimension and default resolution SET the image linear dimension.

If our image is destined for PRINT then this fact has some serious ramifications; but if our image is destined for digital display then the implications are very different.

 

Pixel Resolution and Web JPEGS.

Consider the two jpegs below, both derived from the same RAW file:

European Adder – 900 x 599 pixels with a pixel resolution of 300PPI

European Adder – 900 x 599 pixels with a pixel resolution of 72PPI

In order to illustrate the three values of linear dimension, pixel dimension and pixel resolution of the two images let’s look at them side by side in Photoshop:

The two images opened in Photoshop – note the image size dialogue contents – CLICK to view full size.

The two images differ in one respect – their pixel resolutions.  The top Adder is 300PPI, the lower one has a resolution of 72PPI.

The simple fact that these two images appear to be exactly the same size on this page means that, for DIGITAL display the pixel resolution is meaningless when it comes to ‘how big the image is’ on the screen – what makes them appear the same size is their identical pixel dimensions of 900 x 599 pixels.

Digital display devices such as monitors, iPads, laptop screens etc. are all PIXEL DIMENSION dependent.  They do not understand inches or centimeters, and they display images AT THEIR OWN resolution.

Typical displays and their pixel resolutions:

  • 24″ monitor = typically 75 to 95 PPI
  • 27″ iMac display = 109 PPI
  • iPad 3 or 4 = 264 PPI
  • 15″ Retina Display = 220 PPI
  • Nikon D4 LCD = 494 PPI

Just so that you are sure to understand the implication of what I’ve just said – you CAN NOT see your images at their NATIVE 300 PPI resolution when you are working on them.  Typically you’ll work on your images whilst viewing them at about 1/3rd native pixel resolution.

Yes, you can see 2/3rds native on a 15″ MacBook Pro Retina – but who the hell wants to do this – the display area is minuscule and its display gamut is pathetically small. 😉
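
To put some numbers on that, here’s a quick Python sketch of how wide a 900 pixel image appears on each of the displays above – I’ve taken the 24″ monitor as 85 PPI, mid-range of the 75 to 95 quoted:

```python
# Displayed size = pixel dimension / the display's own pixel resolution.

displays_ppi = {"24in monitor": 85, "27in iMac": 109, "iPad 3/4": 264,
                "15in Retina": 220, "Nikon D4 LCD": 494}

for name, ppi in displays_ppi.items():
    print(f"{name}: {900 / ppi:.1f} inches wide")
# 24in monitor: 10.6 ... 27in iMac: 8.3 ... Nikon D4 LCD: 1.8
```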

Getting back to the two Adder images, you’ll notice that the one thing that does change with pixel resolution is the linear dimensions.

Whilst the 300 PPI version is a tiny 3″ x 2″ image, the 72 PPI version is a whopping 12″ x 8″ by comparison – now you can perhaps understand why I said earlier that the implications of pixel resolution for print are fundamental.

Just FYI – when I decide I’m going to create a small jpeg to post on my website, blog, a forum, Flickr or whatever – I NEVER ‘down sample’ to the usual 72 PPI that gets touted around by idiots and know-nothing fools as “the essential thing to do”.

What a waste of time and effort!

Exporting a small jpeg at ‘full pixel resolution’ misses out the unnecessary step of down sampling and has an added bonus – anyone trying to send the image direct from browser to a printer ends up with a print the size of a matchbox, not a full sheet of A4.

It won’t stop image theft – but it does confuse ’em!

I’ve got a lot more to say on the topic of resolution and I’ll continue in a later post, but there is one thing related to PPI that is my biggest ‘pet peeve’:

 

PPI and DPI – They Are NOT The Same Thing

Nothing makes my blood boil more than the persistent ‘mix up’ between pixels per inch and dots per inch.

Pixels per inch is EXACTLY what we’ve looked at here – PIXEL RESOLUTION; and it has got absolutely NOTHING to do with dots per inch, which is a measure of printer OUTPUT resolution.

Take a look inside your printer driver; here we are inside the driver for an Epson 3000 printer:

The Printer Driver for the Epson 3000 printer. Inside the print settings we can see the output resolutions in DPI – Dots Per Inch.

Images would be really tiny if those resolutions were anything to do with pixel density.

It surprises a lot of people when they come to the realisation that pixels are huge in comparison to printer dots – yes, it can take nearly 400 printer dots (20 dots square) to print 1 square pixel in an image at 300 PPI native.
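
A quick sanity check on that dot count – assuming a printer output resolution of 5760 DPI, a common maximum on Epson desktop models (substitute whatever your own driver reports):

```python
# Printer dots needed to render one image pixel.

printer_dpi = 5760   # assumed output resolution - check your own driver
image_ppi = 300

dots_per_edge = printer_dpi / image_ppi
print(f"{dots_per_edge:.0f} dots per pixel edge, "
      f"{dots_per_edge ** 2:.0f} dots per square pixel")
# 19 dots per pixel edge, 369 dots per square pixel
```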

See you in my next post!


Bit Depth

Bit Depth – What is a Bit?

Good question – from a layman’s point of view it’s the smallest USEFUL unit of computer/digital information; useful in that it can have one of two values – 0 or 1.

Think of it as a light switch; it has two positions – ON and OFF, 1 or 0.

A bit is like a light switch.

We have 1 switch (bit) with 2 potential positions (bit value 0 or 1) so we have a bit depth of 1. We can arrive at this by simple maths – number of switch positions to the power of the number of switches; in other words 2 to the 1st power.

How Does Bit Depth Impact Our Images:

So what would this bit depth of 1 mean in image terms:

An Image with a Bit Depth of 1 bit.

Well, it’s not going to win Wildlife Photographer of the Year is it!

Because each pixel in the image can only be black or white, on or off, 0 or 1 then we only have two tones we can use to describe the entire image.

Now if we were to add another bit to the overall bit depth of the image we would have 2 switches (bits), each with 2 potential values, so the total number of potential values would be 2 to the 2nd power – 4 potential output values/tones.

An image with a bit depth of 2 bits.

Not brilliant – but it’s getting there!

If we now double the bit depth again, this time to 4 bit, then we have 2 to the 4th, or 16 potential tones or output values per image pixel:

A bit depth of 4 bits gives us 16 tonal values.

And if we double the bit depth again, up to 8 bit we will end up with 2 to the 8th power, or 256 tonal values for each image pixel:

A bit depth of 8 bits yields what the eye perceives to be continuous unbroken tone.

This range of 256 tones (0 to 255) is the smallest number of tonal values that the human eye can perceive as being continuous in nature; therefore we see an unbroken range of greys from black to white.

More Bits is GOOD

Why do we need to use bit depths HIGHER than 8 bit?

Our modern digital cameras capture and store RAW images to a bit depth of 12 bit, and now in most cases 14 bit – 4096 & 16,384 tonal values respectively.
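
The ‘2 to the power of the bits’ rule tells the whole story – a one-loop Python sketch:

```python
# Tonal values per channel = 2 to the power of the bit depth.

for bits in (1, 2, 4, 8, 12, 14, 16):
    print(f"{bits:>2} bit: {2 ** bits:,} tonal values")
# 1 bit: 2 ... 8 bit: 256 ... 12 bit: 4,096 ... 14 bit: 16,384
```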

Just as we use the ProPhotoRGB colour space to preserve as many CAPTURED COLOURS as we can, we need to apply a bit depth to our pixel-based images that is higher than the capture depth in order to preserve the CAPTURED TONAL RANGE.

It’s the “bigger bucket” or “more stairs on the staircase” scenario all over again – more information about a pixel’s brightness and colour is GOOD.

How Tonal Graduation Increases with Bit Depth.

Black is black, and white is white, but increased bit depth gives us a higher number of steps/tones; tonal graduations, to get from black to white and vice versa.

So, if our camera captures at 14 bit we need a 15 bit or 16 bit “bucket” to keep it in.  And for those who want to know why a 14 bit bucket ISN’T a good idea then try carrying 2 gallons of water in a 2 gallon bucket without spillage!

The 8 bit Image Killer

Below we have two identical grey scale images open in Photoshop – simple graduations from black to white; one is a 16 bit image, the other 8 bit:

16 bit greyscale at the top. 8 bit greyscale below – CLICK Image to view full size.

Now everything looks OK at this “fit to screen” magnification; and it doesn’t look so bad at 1:1 either, but let’s increase the magnification to 1600% so we can see every pixel:

 

CLICK Image to view full size. At 1600% magnification we can see that the 8 bit file is degraded.

At this degree of magnification we can see a huge amount of image degradation in the lower, 8 bit image whereas the upper, 16 bit image looks tonally smooth in its graduation.

The degradation in the 8 bit image is simply due to the fact that the total number of tones is “capped” at 256, and 256 steps to get from the black to the white values of the image are not sufficient – this leaves gaps in the image that Photoshop has to fill with “invented” tonal information based on its own internal “logic”….mmmmmm….

There was a time when I thought “girlies” were the most illogical things on the planet; but since Photoshop, now I’m not so sure…!

The image is a GREYSCALE – RGB ratios are supposedly equal in every pixel, but as you can see, Photoshop begins to skew the ratios where it has to do its “inventing” so we not only have luminosity artifacts, but we have colour artifacts being generated too.
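
You don’t need Photoshop to demonstrate the ‘gap’ problem either – here’s a little numpy sketch that builds a smooth gradient and counts how many distinct tones survive quantisation at each bit depth:

```python
# Quantise a smooth gradient to 16 bit and 8 bit and count the
# distinct tones that survive - the 8 bit version merges steps
# into visible bands.
import numpy as np

grad = np.linspace(0.0, 1.0, 4096)                  # a smooth gradient
as_16bit = np.round(grad * 65535).astype(np.uint16)
as_8bit = np.round(grad * 255).astype(np.uint8)

print(len(np.unique(as_16bit)))   # 4096 - every step still distinct
print(len(np.unique(as_8bit)))    # 256  - steps collapse into bands
```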

You might look upon this as “pixel peeping” and “geekey”, but when it comes to image quality, being a pixel-peeping Geek is never a bad thing.

Of course, we all know 8bit as being “jpeg”, and these artifacts won’t show up on a web-based jpeg for your website; but if you are in the business of large scale gallery prints, then printing from an 8 bit image file is never going to be a good idea as these artifacts WILL show on the final print.


Lightroom Tutorials #2

 

Image Processing in Lightroom & Photoshop

 

In this Lightroom tutorial preview I take a close look at the newly evolved Clone/Heal tool and dust spot removal in Lightroom 5.

This newly improved tool is simple to use and highly effective – a vast improvement over what was already a great tool in Lightroom 4.

 

Lightroom Tutorials sample video link below (video will open in a new window):

 

https://vimeo.com/64399887

 

This 4 disc Lightroom Tutorials DVD set is available from my website at http://wildlifeinpixels.net/dvd.html

How White is Paper White?

What is Paper White?

We should all know by now that, in RGB terms, BLACK is 0,0,0 and that WHITE is 255,255,255 when expressed in 8 bit colour values.

White can also be 32768, 32768, 32768 when viewed in Photoshop as part of a 16 bit image (though those values are actually 15 bit – yet another story!).

Either way, WHITE is WHITE; or is it?

Arctic Fox in Deep Snow ©Andy Astbury/Wildlife in Pixels

Take this Arctic Fox image – is anything actually white?  No, far from it! The brightest area of snow is around 238,238,238, which is neutral, but it isn’t white – it’s a very light grey.  And we won’t even discuss the “whiteness” of the fox itself.

Hen Pheasant in Snow ©Andy Astbury/Wildlife in Pixels

The Hen Pheasant above was shot very late on a winter’s afternoon when the sun was at a very low angle directly behind me – the colour temperature has gone through the roof and everything has taken on a very warm glow which adds to the atmosphere of the image.

Extremes of colour temperature – Snow Drift at Sunset ©Andy Astbury/Wildlife in Pixels

We can take the ‘snow at sunset’ idea even further: where the sun’s rays strike the snow it lights up pink, but the shadows go a deep rich aquamarine blue – what we might call a ‘crossed curves’ scenario, where shadow and lower mid tones are at a low Kelvin temperature, and upper mid tones and highlights are at a much higher Kelvin.

All three of these images might look a little bit ‘too much’ – but try clicking one and viewing it on a darker background without the distractions of the rest of the page – GO ON, TRY IT.

Showing you these three images has a couple of purposes:

Firstly, to show you that “TRUE WHITE” is something you will rarely, if ever, photograph.

Secondly, to show you that viewing the same image in a different environment changes the eye’s perception of the image.

The secondary purpose is the most important – and it’s all to do with perception; and to put it bluntly, the pack of lies that your eyes and brain lead you to believe is the truth.

Only Mother Nature, wildlife and cameras tell the truth!

So Where’s All This Going Andy, and What’s it got to do with Paper White?

Fair question, but bear with me!

If we go to the camera shop and peruse a selection of printer papers or unprinted paper samplers, our eyes tell us that we are looking at blank sheets of white paper;  but ARE WE?

Each individual sheet of paper appears to be white, but we see very subtle differences which we put down to paper finish.

But if we put a selection of, say Permajet papers together and compare them with ‘true RGB white’ we see the truth of the matter:

Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8bit.

Holy Mary Mother of God!!!!!!!!!!!!!!!!

I’ll bet that’s come as a bit of a shocker………

No paper is WHITE; some papers are “warm”; and some are “cool”.

So, if we have a “warmish” toned image it’s going to be a lot easier to “soft proof” that image to a “warm paper” than a cool one – with the result of greater colour reproduction accuracy.

If we were to try and print a “cool” image on to “warm paper” then we’ve got to shift the whole colour balance of the image, in other words warm it up in order for the final print to be perceived as neutral – don’t forget, that sheet of paper looked neutral to you when you stuck it in the printer!

Well, that’s simple enough you might think, but you’d be very, very wrong…

We see colour on a print because the inks allow us to see the paper white through them, but only up to a point.  As colours and tones become darker on our print we see less “paper white” and more reflected colour from the ink surface.

If we shift the colour balance of the entire image – in this case warm it up – we shift the highlight areas so they match the paper white; but we also shift the shadows and darker tones.  These darker areas hide paper white so the colour shift in those areas is most definitely NOT desirable because we want them to be as perceptually neutral as the highlights.

What we need to do in truth is to somehow warm up the higher tonal values while at the same time keep the lowest tonal values the same, and then somehow match all the tones in between the shadows and highlights to the paper.

This is part of the process called SOFT PROOFING – but the job would be a lot easier if we chose to print on a paper whose “paper white” matched the overall image a little more closely.

The Other Kick in the Teeth

Not only are we battling the hue of paper white, or tint if you like, but we also have to take into account the luminance values of the paper – in other words just how “bright” it is.

Those RGB values of paper whites across a spread of Permajet papers – here they are again to save you scrolling back:

Paper whites of a few Permajet papers in comparison to RGB white – all colour values are 8bit.

not only tell us that there is a tint to the paper due to the three colour channel values being unequal, but they also tell us the brightest value we can “print” – in other words not lay any ink down!

Take Oyster for example; a cracking all-round general printer paper that has a very large colour gamut and is excellent value for money – Permajet deserve a medal for this paper in my opinion because it’s economical and epic!

Its paper white is on average 240 Red, 245 Green, 244 Blue.  If we have any detail in areas of our image that are above 240, 240, 240 then part of that detail will be lost in the print because the red channel minimum density (d-min) tops out at 240; so anything that is 241 red or higher will just not be printed and will show as 240 Red in the paper white.
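
Conceptually, what happens to those out-of-range highlights looks like the little Python sketch below – note this shows the ‘ceiling’ effect only; the real conversion is handled by the ICC profile and rendering intent, not a simple clamp:

```python
# Any channel value brighter than the paper's d-min simply can't be
# printed - it gets capped at the paper white value.

OYSTER_WHITE = (240, 245, 244)   # average 8 bit figures from above

def clip_to_paper(rgb, paper_white=OYSTER_WHITE):
    return tuple(min(channel, white) for channel, white in zip(rgb, paper_white))

print(clip_to_paper((248, 248, 248)))   # bright snow detail -> (240, 245, 244)
print(clip_to_paper((230, 230, 230)))   # below the ceiling -> unchanged
```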

Again, this is a problem mitigated in the soft proofing process.

But it’s also one of the reasons why the majority of photographers are disappointed with their prints – they look good on screen because they are being displayed with a tonal range of 0 to 255, but printed they just look dull, flat and generally awful.

Just another reason for adopting a Colour Managed Work Flow!


Colour Space & Profiles

From Camera to Print
copyright 2013 Andy Astbury/Wildlife in Pixels

Colour space and device profiles seem to cause a certain degree of confusion for a lot of people; and a feeling of dread, panic and total fear in others!

The reality of colour spaces and device profiles is that they are really simple things, and that how and why we use them in a colour managed work flow is perfectly logical and easy to understand.

Up to a point colour spaces and device profiles are one and the same thing – they define a certain “volume” of colours from red to green to blue, and from black to white – and all the colours that lie in between those five points.

The colour spaces that most photographers are by now familiar with are ProPhotoRGB, AdobeRGB(1998) and sRGB – these are classed as “working colour spaces” and are standards of colour set by the International Color Consortium, or ICC; and they all have one thing in common; where red, green and blue are present in equal amounts the colour produced will be NEUTRAL.

The only real differences between these three working colour spaces are the “distances” between the five set points of red, green, blue, black and white.  The greater the distance between the three primary colours, the greater the degree of graduation between them, hence the greater the number of potential colours.  In the diagram below we can see the sRGB & ProPhoto working colour spaces displayed on the same axes:

The sRGB & ProPhoto colour spaces. The larger volume of ProPhoto contains more colour variety between red, green & blue than sRGB.

If we were to mark five different points on the surface of a partially inflated balloon, and then inflate it some more, the points in relation to the balloon’s surface would NOT change: the points remain the same.  But the spatial distances between the points would change, as would the internal volume.  It’s the same with our five points of colour reference – red, green, blue, black & white – they do NOT change between colour spaces; red is red no matter what the working colour space.  But the range of potential colours between our 5 points of reference increases due to increased colour space volume.

So now we have dealt with the basics of the three main working colour spaces, we need to consider the volume of colour our camera sensor can capture – if you like, its colour space; but I’d rather use the word “gamut”.

Let’s take the Canon 5DMk3 as an example, and look at the volume, or gamut, of colour that its sensor can capture, in direct comparison with our 3 quantifiable working colour spaces:

The Canon 5DMk3 sensor gamut (black) in comparison to ProPhoto (largest), AdobeRGB1998 & sRGB (smallest) working colour spaces.

In a previous blog article I wrote – see here – I mentioned how to setup the colour settings in Photoshop, and this is why.  If you want to keep the greatest proportion of your camera sensor’s captured colour then you need to contain the image within the ProPhotoRGB working colour space.  If you don’t, and you use AdobeRGB or sRGB as Photoshop’s working colour space, then you will lose a certain proportion of those captured colours – as I’ve heard it put before, it’s like a sex change operation – certain colours get chopped off, and once that’s happened you can’t get them back!

To keep things really simple just think of the 3 standard working colour spaces as buckets – the bigger the bucket, the more colour it contains; and you can’t tip the colours captured by your camera into a smaller bucket without getting spillage and making a mess on the floor!

As I said before, working colour spaces are neutral; but seldom does our camera ever capture a scene that contains pure neutrals.  Even though an item in the scene may well be neutral in colour, camera sensors quite often skew these colours ever so slightly; most Canon RAW files always look a teeny-weeny, ever so slight bit magenta to me when I import them; but then again I’m a Nikon shooter, and my files seem to have a minute greenish tinge to them before processing.

Throughout our imaging work flow we have 3 stages:

1. Input (camera or scanner).

2. Working Process (Lightroom, Photoshop etc).

3. Output (printer for example).

And each stage has its representative type of colour space – we have input profiles, working colour spaces and output profiles.

So we have our camera capture gamut (colour space if you like) and we’ve opened our image in Photoshop or Lightroom in the ProPhoto working colour space – there’s NO SPILLAGE!

We now come to the crux of colour management; before we can do anything else we need to profile our “window onto our image” – the monitor.

In order to see the reality of what the camera captured we need to ensure that our monitor is in line with our WORKING COLOUR SPACE in terms of colour neutrality – not that of the camera as some people seem to think.

All 3 working colour spaces possess the same degree of colour neutrality where red, green & blue are present at the same values, irrespective of the physical size of the colour space.

So as long as our monitor is profiled to be:

1. Accurately COLOUR NEUTRAL

2. Displaying maximum brightness only in the presence of true white – which you’ll hardly ever photograph; even snow isn’t white.

then we will see a highly workable representation of image colour neutrality and luminosity on our monitor.  Only by working this way can we actually tell if the camera has captured the image correctly in terms of colour balance and overall exposure.

And the fact that our monitor CANNOT display all the colours contained within our big ProPhoto bucket is, to all intents and purposes, a fairly moot point; though seeing as many of them as possible is never a bad thing.

And using a monitor that does NOT display the volume of colour approximating or exceeding that of the Adobe working space can be highly detrimental for the reasons discussed in my previous post.

Now that we’ve covered input profiles and working colour spaces we need to move on and outline the basics of output profiles, and printer profiles in particular.

Adobe & sRGB working spaces in comparison to the colours contained in the Kingfisher image and the profile for Permajet Oyster paper using the Epson 7900 printer. (CLICK image for full sized view).

In the image above we can see both the Adobe and sRGB working spaces and the full distribution of colours contained in the Kingfisher image which is a TIFF file in our big ProPhoto bucket of colour;  and a black trace which is the colour profile (or space if you like) for Permajet Oyster paper using Epson UltraChrome HDR ink on an Epson 7900 printer.

As we can see, some of the colours contained in the image fall outside the gamut of the sRGB working colour space; notably some oranges and “electric blues” which are basically colours of the subject and are most critical to keep in the print.

However, all those ProPhoto colours are capable of being reproduced on the Epson 7900 using Permajet Oyster paper because, as the black trace shows, the printer/ink/paper combination can reproduce colours that lie outside of the Adobe working colour space.

The whole purpose of that particular profile is to ensure that the print matches what we can see on the monitor both in terms of colour and brightness – in other words, what we see is what we get – WYSIWYG!

The beauty of a colour managed workflow is that it’s economical – assuming the image is processed correctly then printing via an accurate printer profile can give you a perfect printed rendition of your screen image using just a single sheet of paper – and only one sheet’s worth of ink.

The difference between colour profiles for the same printer paper on different printers. Epson 3000 printer profile trace in Red (CLICK image for full size view).

If we were to switch printers to an Epson 3000 using UltraChrome K3 ink on the very same paper, the area circled in white shows us that there are a couple of orange hue colours that are a little problematic – they lie either close to or outside the colour gamut of this printer/ink/paper combination, and so they need to be changed in order to ‘fit’, either by localised adjustment or variation of rendering intent – but that’s a story for later!

Why is it different? Well, it’s not to do with the paper for sure, so it’s down to either the ink change or printer head.  Using the same K3 ink in an Epson 4800 brings the colours back into gamut, so the difference is in the printer head itself, or the printer driver, but as I said, it’s a small problem easily fixed.

When you consider the low cost of achieving an accurate monitor profile – see this previous post – and combine that with an accurate printer output profile or two to match your chosen printer papers, and then deploy these assets correctly you have a proper colour managed workflow.  Add to that the cost savings in ink and paper and it becomes a bit of a “no-brainer” doesn’t it?

In this post I set out to hopefully ‘demystify’ colour spaces and profiles in terms of what they are and how they are used – I hope I’ve succeeded!
