Topaz JPEG to RAW AI Review

Frank asked me last week about the Topaz JPEG to RAW AI plugin and if it was any good.

Well, my response after downloading the free trial and messing about with it is this – it’s CRAP!

All it does is create a DNG file or 16 bit TIFF (you pick) from an 8 bit jpeg.

Should you opt for DNG then the DNG is NOT a raw file, it’s just a DNG with 8 bits per channel worth of information rattling around inside it. And if you opt for 16 bit TIFF then you end up with an 8 bit jpeg sitting inside a 16 bit TIFF container.
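If you want to prove that to yourself, here’s a quick numpy sketch (my own illustration, nothing to do with Topaz’s code): rescaling 8 bit data into a 16 bit container changes the numbers, not the amount of information.

```python
import numpy as np

# Every possible 8 bit value, then the same data "promoted" to 16 bit.
eight_bit = np.arange(256, dtype=np.uint8)        # 0..255
sixteen_bit = eight_bit.astype(np.uint16) * 257   # 255 * 257 = 65535, full 16 bit scale

print(len(np.unique(eight_bit)))     # 256 distinct levels
print(len(np.unique(sixteen_bit)))   # still only 256 distinct levels
```

The container got bigger; the number of distinct tonal levels did not.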

For the love of God, how the hell does the marketing department at Topaz think they can get away with this total bullshit – AI learning my arse…!

Are they trying to make the inexperienced and unknowledgeable actually believe that this expensive bit of garbage can re-engineer 32,000+ tonal levels per channel from a poxy 8 bits (256 levels) per channel?

If you believe that then all I’ll say is that ‘a fool and their money are easily parted’………


Click the image above and it will open in a new window. We are looking at a 200% magnification of an area of the original raw file (right), a full resolution jpeg created from that raw file (middle), and a shitty DNG created by this abomination from Topaz on the left.

Notice something about the Topaz image – it’s got more artifacts in it than the original jpeg (middle) because there has been some form of sharpening applied to it by the Topaz software – and yet there are NO controls for any application of sharpening in the Topaz UI.

Let’s take something different – a jpeg of a Clouded Ermine.

Click the image to see what the original jpeg looks like.

One of the most stunningly beautiful moths in the UK, this Clouded Ermine is resting on tree bark.

Now let’s feed that jpeg into the Topaz JPEG to RAW AI GUI

And now let’s look at a sectional 400% magnification comparison, shall we?

Sharpening artifacts again and noise that wasn’t there to begin with.

Seriously, you’d be better off tweaking your jpegs in Lightroom!

I do NOT recommend anyone purchase this Topaz product because it’s rubbish. But what concerns me is the manner in which it is being marketed; I see the marketing as just plain misleading.

I’m not dismissing all Topaz software – DeNoise is brilliant for certain tasks.

But Topaz JPEG to RAW AI is nothing short of misleading junk with a huge price tag – Christ, you could buy 12 months subscription to my Patreon channel for less!

Two Blend Modes in Photoshop EVERY Photographer Should Know!


The other day one of my members over on my Patreon suggested I do a video on Blending Modes in Photoshop.

Well, that would take a whole heap of time as it’s quite a big subject because Blending Modes don’t just apply to layers. Brushes of all descriptions have their own unique blend modes, and so do layer groups.

There is no need to go into a great deal of detail over blend modes in order for you to start reaping their benefits.

There are TWO blend modes – Multiply and Screen – which you can start using straight away to vary the apparent exposure of your images.

And seeing as my last few videos have been concerned with exposing for highlights and ETTR in general, the use of the Multiply Layer Blending Mode will be clear to see once you’ve watched the video.
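For reference, the standard maths behind those two modes – on a normalised 0.0 to 1.0 scale, and my own sketch rather than anything taken from the video – looks like this:

```python
import numpy as np

def multiply(base, blend):
    """Multiply blend mode: always darkens (or leaves a pixel unchanged)."""
    return base * blend

def screen(base, blend):
    """Screen blend mode: always lightens (or leaves a pixel unchanged)."""
    return 1.0 - (1.0 - base) * (1.0 - blend)

# Blending an image with a copy of itself is the classic exposure tweak:
pixel = np.array([0.25, 0.5, 0.75])   # some mid-tone values
print(multiply(pixel, pixel))         # [0.0625 0.25   0.5625] - darker
print(screen(pixel, pixel))           # [0.4375 0.75   0.9375] - lighter
```

That asymmetry is exactly why Multiply pairs so well with ETTR captures – it pulls an intentionally bright exposure back down without clipping anything.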

Hope the video gives you some more insight folks!

My Members over on Patreon get the benefit of being able to download the raw files used in this video.

All the best.

Lumenzia – New Training

Lumenzia – New Training Course Available

Regular subscribers to my blog and YouTube channel should know by now that I highly recommend Greg Benz’s Lumenzia plugin for Photoshop.


I know many readers of my blog have downloaded the Lumenzia plugin from my links dotted around the site, and previous posts such as HERE and HERE

Lumenzia is just about the best tool you can buy to help you master exposure blending using luminosity masks, but its uses do not stop there – I use it on quite a lot of my images for making ‘controlled tweaks’ in Photoshop.

But it is most readily associated with landscape photography exposure blending.

An awful lot of people have asked me if I’d do a set of comprehensive training videos on how to use Lumenzia, but that would be a little difficult to do without on-going additions as the plugin is frequently updated with new facilities.

But I’m pleased to say that Greg Benz (the plugin author) has just launched a comprehensive training course for Lumenzia, and I have bought the course myself!

Yes, that’s right – I’ve bought someone else’s training!


After watching the videos that Greg has put together I can honestly say that the course is excellent – as you would expect.

The course is hosted on Teachable – so you don’t have to download any huge chunky videos either.

For those of you who already have the Lumenzia Photoshop plugin you can get the full course by clicking on the following link:


Exposure Blending Master Course

And for those of you who have NOT already got the plugin itself, you can buy it bundled with the training course on the link below:


Lumenzia + Exposure Blending Master Course

If you only want the plugin, you can still get that on its own by clicking below:


Lumenzia Plugin on its own – click here.

Greg covers everything you need to know in order to leverage the power of Lumenzia.  And anything that gets people to use Photoshop gets an extra ‘thumbs up’ from me!

Greg is the one trainer I know of who does what I do with my training videos – supply RAW files to support each of the lessons.

You will get raw files from various cameras including some D850 files, so you will have the added bonus of seeing how these cameras perform in the hands of an expert photographer.

So, I strongly urge you to use the links above and purchase this great training course from Greg Benz, and get to grips with Lumenzia.

You might be wondering why the heck I’m promoting training from someone else. 

Well, the reasons are two-fold: I’ve already said that logistically it would be a nightmare, because of the frequent updates.

But more importantly, I’d never be able to teach you how Lumenzia works any better than Greg himself – he IS the plugin author, so it stands to reason!

Become a patron from as little as $1 per month, and help me produce more free content.

Patrons gain access to a variety of FREE rewards, discounts and bonuses.

Professional Grade Image Sharpening

Professional Grade Image Sharpening for Archive, Print & Web – my latest training video collection.

image sharpening

View the overview page on my download store HERE

Over 11 hours of video training, spread across 58 videos…well, I told you it was going to be big!

And believe me, I could have made it even bigger, because there is FAR MORE to image sharpening than 99% of photographers think.

And you don’t need ANY stupid sharpener plugins – or noise reduction ones, come to that.  Because Photoshop does it ALL anyway, and is far more customizable and controllable than any plugin could hope to be.

So don’t waste your money any more – spend it instead on some decent training to show you how to do the job properly in the first place!

You won’t find a lot of these methods anywhere else on the internet – free or paid for – because ‘teachers cannot teach what they don’t know’ – and I know more than most!

image sharpening

As you can see from the list of lessons above, I cover more than just ‘plain old sharpening’.

Traditionally, image sharpening produces artifacts – usually white and black halos – if it’s over done. And image sharpening emphasizes ‘noise’ in areas of shadow and other low frequency detail, when it’s applied to an image in the ‘traditional’, often taught, blanket manner.

Why sharpen what isn’t in focus – to do so is madness, because all you do is sharpen the noise, and cause more artifacts!

Maximum sharpening should only be applied to detail in the image that is ‘fully in focus’.

So, as ‘focus sharpness’ falls off, so too should the level of applied sharpening.  That way, noise and other artifacts CANNOT build up in an image.

And the same can be said for noise reduction, but ‘in reverse’.

So image sharpening needs to be applied in a differential manner – and that’s what this training is all about.
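To make the idea concrete, here’s a toy numpy sketch of differential sharpening – a simple unsharp mask whose strength is weighted per pixel by a ‘focus’ mask. It’s purely illustrative: the helper names are mine, and this is NOT the Photoshop technique taught in the course.

```python
import numpy as np

def box_blur(img, k=3):
    """Crude k x k box blur of a 2D float image, edges clamped."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def differential_sharpen(img, mask, amount=1.0):
    """Unsharp-mask sharpening, scaled per pixel by `mask`
    (0.0 = no sharpening, 1.0 = full sharpening)."""
    blurred = box_blur(img)
    sharpened = img + amount * (img - blurred)   # classic unsharp mask
    return img + mask * (sharpened - img)        # blend by the focus mask
```

Where the mask is zero – out-of-focus areas, skies, shadows – the pixels come back untouched, so no noise or halo build-up there at all.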

Using a brush in Lightroom etc. to ‘brush in’ some sort of differential sharpening is NOT a good idea, because it’s imprecise, and something of a fool’s task.

Why do I say that? Simple… because the ‘differential factor’ is contained within the image itself – and it’s just sitting there on your computer screen WAITING for you to get stuck in and use it.

But, like everything else in modern digital photography, the knowledge and skill to do so has somehow been lost in the last 12 to 15 years, and the internet is full of ‘teachers’ who have never had these skills in the first place – hence they can’t teach ’em!

However, everyone who buys this training of mine WILL have those skills by the end of the course.

It’s been a real hard slog to produce these videos.  Recording the lessons is easy – it’s the editing and video call-outs that take a lot of time.  And I’ve edited all the audio in Audacity to remove breath sounds and background noise – many thanks to Curtis Judd for putting those great lessons on YouTube!

The price is £59.99. So right now, that’s over 11 hours of training for less than £5.50 per hour – that’s way cheaper than a 1to1, or even a workshop day with a crowd of other people!

So head off over to my download store and buy it, because what you’ll learn will improve your image processing, whether it’s for big prints or just jpegs on the web – guaranteed – just click here!


 

Image Sharpening

Image Sharpening and Raw Conversion.

A lot of people imagine that there is some sort of ‘magic bullet’ method for sharpening images.

Well, here’s the bad news – there isn’t !

Even if you shoot the same camera and lens combo at the same settings all the time, your images will exhibit an array of various properties.

And those properties, and the ratio/mix thereof, can and will affect the efficacy of various sharpening methods and techniques.

And, those properties will rarely be the same from shoot to shoot.

Add interchangeable lenses, varied lighting conditions, and assorted scene brightness and contrast ranges to the mix – now the range of image properties has increased exponentially.

What are the properties of an image that can determine your approach to sharpening?

I’m not even going to attempt to list them all here, because that would be truly frightening for you.

But sharpening is all about pixels, edges and contrast.  And our first ‘port of call’ with regard to all three of those items is ‘demosaicing’ and raw file conversion.

“But Andy, surely the first item should be the lens” I hear you say.

No, it isn’t.

And if that were the case, then we would go one step further than that, and say that it’s the operator’s ability to focus the lens!

So we will take it as a given, that the lens is sharp, and the operator isn’t quite so daft as they look!

Now we have a raw file, taken with a sharp lens and focused to perfection.

Let’s hand that file to two raw converters, Lightroom and Raw Therapee:


I am Lightroom – Click me!


I am Raw Therapee – Click me!

In both raw converters there is ZERO SHARPENING being applied. (and yes, I know the horizon is ‘wonky’!).

Now check out the 800% magnification shots:


Lightroom at 800% – Click me!


Raw Therapee at 800% – Click me!

What do we see on the Lightroom shot at 800%?

A sharpening halo, but hang on, there is NO sharpening being applied.

But in Raw Therapee there is NO halo.

The halo in Lightroom is not a sharpening halo, but a demosaicing artifact that LOOKS like a sharpening halo.

It is a direct result of the demosaicing algorithm that Lightroom uses.

Raw Therapee on the other hand, has a selection of demosaicing algorithms to choose from.  In this instance, it’s using its default AMaZE (Aliasing Minimization and Zipper Elimination) algorithm.  All told, there are 10 different demosaic options in RT, though some of them are a bit ‘old hat’ now.

There is no way of altering the base demosaic in Lightroom – it is something of a fixed quantity.  And while it works in an acceptable manner for the majority of shots from an ever burgeoning mass of digital camera sensors, there will ALWAYS be exceptions.

Let’s call a spade a bloody shovel and be honest – Lightroom’s demosaicing algorithm is in need of an overhaul.  And why something we have to pay for uses a methodology worse than something we get for free, God only knows.

It’s a common problem in Lightroom, and it’s the single biggest reason why, for example, landscape exposure blends using luminosity masks fail to work quite as smoothly as you see demonstrated on the old Tube of You.

If truth be told – and this is only my opinion – Lightroom is by no means the best raw file processor in existence today.

I say that with a degree of reservation though, because:

  1. It’s very user friendly
  2. It’s an excellent DAM (digital asset management) tool, possibly the best.
  3. On the surface, it only shows its problems with very high contrast edges.

As a side note, my Top 4 raw converters/processors are:

  1. Iridient Developer
  2. Raw Therapee
  3. Capture One Pro
  4. Lightroom

Iridient is expensive and complex – but if you shoot Fuji X-Trans you are crazy if you don’t use it.

Raw Therapee is very complex (and slightly ‘clunky’ on Mac OSX) but it is very good once you know your way around it. And it’s FREEEEEEEEE!!!!!!!

Iridient and RT have zero DAM capability that’s worth talking about.

Capture One Pro is a better raw converter on the whole than Lightroom, but it’s more complex, and its DAM structure looks like it was created by crack-smoking monkeys when you compare it to the effective simplicity of Lightroom.

If we look at Lightroom as a raw processor (as opposed to raw converter) it encourages the user to employ ‘recovery’ in shadow and highlight areas.

Using BOTH can cause halos along high contrast edges, and edges where high frequency detail sits next to very low frequency detail of a contrasting colour – birds in flight against a blue sky spring to mind.

Why do I keep ‘banging on’ about edges?

Because edges are critical – and most of you guys ‘n gals hardly ever look at them close up.

All images contain areas of high and low frequency detail, and these areas require different process treatments, if you want to obtain the very best results AND want to preserve the ability to print.

Cleanly defined edges between these areas allow us to use layer masks to separate them in an image and obtain selective control.

Clean inter-tonal boundaries also allow us to separate shadows, various mid tone ranges, and highlights for yet finer control.

Working on 16 bit images (well, 15 bit plus 1 level if truth be told) means we can control our adjustments in Photoshop within a range of 32,768 tones.  And there is no way in hell that localised adjustments in Lightroom can be carried out to that degree of accuracy – fact.

I’ll let you in to a secret here!  You all watch the wrong stuff on YouTube!  You sit and watch a video by God knows what idiot, and then wonder why what you’ve just seen them do does NOT work for you.

That’s because you’ve not noticed one small detail – 95% of the time they are working on jpegs!  And jpegs only have a tonal range of 256.  It’s really easy to make luminosity selections etc on such a small tonal range work flawlessly.  You try the same settings on a 16 bit image and they don’t work.
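The arithmetic behind that gap is easy to sanity-check:

```python
# Levels available per channel: a jpeg has 8 bits, while Photoshop's
# "16 bit" mode is really 15 bits plus 1 level (values 0..32768).
jpeg_levels = 2 ** 8          # 256
ps16_levels = 2 ** 15 + 1     # 32769

print(jpeg_levels, ps16_levels)      # 256 32769
# Each jpeg level spans roughly 128 of the 16 bit levels, which is why
# luminosity selection settings tuned on a jpeg don't transfer to a
# 16 bit file:
print(ps16_levels // jpeg_levels)    # 128
```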

So you end up thinking it’s your fault – your image isn’t as ‘perfect’ as theirs – wrong!

It’s a tale I hear hundreds of times every year when I have folk on workshops and 1to1 tuition days.  And without fail, they all wish they’d paid for the training instead of trying to follow the free stuff.

You NEVER see me on a video working with anything but raw files and full resolution 16 bit images.

My only problem is that I don’t ‘fit into’ today’s modern ‘cult of personality’!

Most adjustments in Lightroom have a global effect.  Yes, we have range masks and eraser brushes.  But they are very poor relations of the pixel-precise control you can have in Photoshop.

Lightroom is – in my opinion of course – becoming polluted by the ‘one stop shop, instant gratification ideology’ that seems to pervade photography today.

Someone said to me the other day that I had not done a YouTube video on the new range masking option in Lightroom.  And they are quite correct.

Why?

Because it’s a gimmick – and real crappy one at that, when compared to what you can do in Photoshop.

Photoshop is the KING of image manipulation and processing.  And that is a hard core, irrefutable fact.  It has NO equal.

But Photoshop is a raster image editor, which means it needs to be fed a diet of real pixels.  Raw converters like Lightroom use ‘virtual pixels’ – in a manner of speaking.

And of course, Lightroom and the Camera Raw plug-in for Photoshop amount to the same thing.  So folk who use either Lightroom or Photoshop EXCLUSIVELY are both suffering from the same problems – if they can be bothered to look for them.

It Depends on the Shot


The landscape image is, by its very nature, a low ISO, high resolution shot with huge depth of field, and bags of high frequency inter-tonal detail that needs sharpening correctly to its very maximum.  We don’t want to sharpen the sky, as it’s sharp enough through depth of field, as is the water, and we require ZERO sharpening artifacts, and no noise amplification.

If we utilise the same sharpening workflow on the center image, then we’ll all get our heads kicked in!  No woman likes to see their skin texture sharpened – in point of fact we have to make it even more unsharp, smooth and diffuse in order to avoid a trip to our local A&E department.

The cheeky Red Squirrel requires a different approach again.  For starters, it’s been taken on a conventional ‘wildlife camera’ – a Nikon D4.  This camera sensor has a much lower resolution than either of the camera sensors used for the previous two shots.

It is also shot from a greater distance than the foreground subjects in either of the preceding images.  And most importantly, it’s at a far higher ISO value, so it has more noise in it.

All three images require SELECTIVE sharpening.  But most photographers think that global sharpening is a good idea, or at least something they can ‘get away with’.

If you are a photographer who wants to do nothing else but post to Facebook and Flickr then you might as well stop reading this post.  Good luck to you and enjoy your photography,  but everything you read in this post, or anywhere on this blog, is not for you.

But if you want to maximize the potential of your thousands of pounds worth of camera gear, and print or sell your images, then I hate to tell you, but you are going to have to LEARN STUFF.

Photoshop is where the magic happens.

As I said earlier, Photoshop is a raster image processor.  As such, it needs to be fed an original image that is of THE UTMOST QUALITY.  By this I mean a starting raw file that has been demosaiced and normalized to:

  1. Contain ZERO demosaic artifacts of any kind.
  2. Have the correct white and black points – in other words ZERO blown highlights or blocked shadows.  In other words, getting contrast under control.
  3. Maximize the midtones to tease out the highest amount of those inter-tonal details, because this is where your sharpening is going to take place.
  4. Contain no more sharpening than you can get away with, and certainly NOT the amount of sharpening you require in the finished image.

With points 1 thru 3 the benefits should be fairly obvious to you, but if you think about it for a second, the image described is rather ‘flattish’-looking.
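As a rough sanity check for point 2, here’s a minimal numpy sketch – a hypothetical helper of my own, not part of any Adobe tool – that reports what fraction of pixels are blown or blocked:

```python
import numpy as np

def clipping_report(img):
    """Return (blown, blocked) as fractions of total pixels.
    `img` is assumed to be normalised to the 0.0-1.0 range."""
    blown = float(np.mean(img >= 1.0))    # pixels at or above pure white
    blocked = float(np.mean(img <= 0.0))  # pixels at or below pure black
    return blown, blocked

sample = np.array([0.0, 0.3, 0.6, 1.0])   # one blocked and one blown value
print(clipping_report(sample))            # (0.25, 0.25)
```

Both fractions should be at (or vanishingly close to) zero before the file ever reaches Photoshop.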

But point 4 is somewhat ambiguous.  What Adobe-philes like to call capture or input sharpening is very dependent on three variables:

  1. Sensor megapixels
  2. Demosaic efficiency
  3. Sharpening method – namely Unsharp Mask or Deconvolution

The three are inextricably intertwined – so basically it’s a balancing act.

To learn this requires practice!

And to that end I’m embarking on the production of a set of videos that will help you get to grips with the variety of sharpening techniques that I use, and why I use them.

I’ll give you fair warning now – when finished it will be neither CHEAP nor SHORT, but it will be very instructive!

I want to get it to you as soon as possible, but you wouldn’t believe how long tuition videos take to produce.  So right now I’m going to say it should be ready at the end of February or early March.

UPDATE:  The new course is ready and on sale now, over on my digital download site.


The link to the course page is HERE.

Hopefully I’ve given you a few things to think about in this post.

Don’t forget, I provide 1to1 and group tuition days in this and all things photography related.

And just in case you’ve missed it, here’s a demo of how useful Photoshop Smart Sharpen can be:


Photoshop View Magnification

View Magnification in Photoshop (Patreon Only).

A few days ago I uploaded a video to my YouTube channel explaining PPI and DPI – you can see that HERE .

But there is way more to pixel per inch (PPI) resolution values than just the general coverage I gave it in that video.

And this post is about a major impact of PPI resolution that seems to have evaded the understanding and comprehension of perhaps 95% of Photoshop users – and Lightroom users too for that matter.

I am talking about image view magnification, and the connection this has to your monitor.

Let’s make a new document in Photoshop:

View Magnification

We’ll make the new document 5 inches by 4 inches, 300ppi:

View Magnification

I want you to do this yourself, then get a plastic ruler – not a steel tape like I’ve used…..

Make sure you are viewing the new image at 100% magnification, and that you can see your Photoshop rulers along the top and down the left side of the workspace – and right click on one of the rulers and make sure the units are INCHES.

Take your plastic ruler and place it along the upper edge of your lower monitor bezel – not quite like I’ve done in the crappy GoPro still below:

View Magnification

Yes, my 5″ long image is in reality 13.5 inches long on the display!

The minute you do this, you may well get very confused!

Now then, the length of your 5×4 image, in “plastic ruler inches” will vary depending on the size and pixel pitch of your monitor.

Doing this on a 13″ MacBook Pro Retina the 5″ edge is actually 6.875″ giving us a magnification factor of 1.375:1

On a 24″ 1920×1200 HP monitor the 5″ edge is pretty much 16″ long giving us a magnification factor of 3.2:1

And on a 27″ Eizo ColorEdge the 5″ side is 13.75″ or thereabouts, giving a magnification factor of 2.75:1

The 24″ HP monitor has a long edge of not quite 20.5 inches containing 1920 pixels, giving it a pixel pitch of around 94ppi.

The 27″ Eizo has a long edge of 23.49 inches containing 2560 pixels, giving it a pixel pitch of 109ppi – this is why its magnification factor is less than the 24″ HP.

And the 13″ MacBook Pro Retina has a pixel pitch of 227ppi – hence the magnification factor is so low.
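You can sanity-check those magnification factors with a few lines of arithmetic. The monitor figures below are the approximate ones from this post – your own display will differ, and real-world measurements (like my Retina one) won’t match perfectly:

```python
# A "5 inch" edge of a 300ppi document contains 1500 image pixels.
# At 100% zoom each image pixel gets exactly one screen pixel, so the
# physical on-screen size depends only on the monitor's pixel pitch.
image_inches = 5
image_ppi = 300
image_pixels = image_inches * image_ppi          # 1500 image pixels

for name, monitor_ppi in [("24in HP (~94ppi)", 94),
                          ("27in Eizo (~109ppi)", 109),
                          ("13in Retina (~227ppi)", 227)]:
    on_screen = image_pixels / monitor_ppi       # inches of "plastic ruler"
    factor = on_screen / image_inches            # apparent magnification
    print(f"{name}: {on_screen:.2f} in on screen, {factor:.2f}:1")
```

The lower the monitor’s pixel pitch, the longer your 5″ edge measures on the bezel, and the bigger the apparent magnification.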

So WTF Gives with 1:1 or 100% View Magnification Andy?

Well, it’s simple.

The greatest majority of Ps users ‘think’ that a view magnification of 100% or 1:1 gives them a view of the image at full physical size, and some think it’s a full ppi resolution view, and they are looking at the image at 300ppi.

WRONG – on BOTH counts !!

A 100% or 1:1 view magnification gives you a view of your image using ONE MONITOR or display PIXEL to RENDER ONE IMAGE PIXEL.  In other words, the image-to-display pixel ratio is now 1:1.

So at a 100% or 1:1 view magnification you are viewing your image at exactly the same resolution as your monitor/display – which for the majority of desktop users means sub-100ppi.

Why do I say that?  Because the majority of desktop machine users run a 24″, sub-100ppi monitor – Hell, this time last year even I did!

When I view a 300ppi image at 100% view magnification on my 27″ Eizo, I’m looking at it in a lowly resolution of 109ppi.  With regard to its properties such as sharpness and inter-tonal detail, in essence, it looks only 1/3rd as good as it is in reality.

Hands up those who think this is a BAD THING.

Did you put your hand up?  If you did, then see me after school….

It’s a good thing, because if I can process it to look good at 109ppi, then it will look even better at 300ppi.

This also means that if I deliberately sharpen certain areas (not the whole image!) of high frequency detail until they are visually right on the ragged edge of being over-sharp, then the minuscule halos I might have generated will actually be 3 times less obvious in reality.

Then when I print the image at 1440, 2880 or even 5760 DOTS per inch (that’s Epson stuff), that print is going to look so sharp it’ll make your eyeballs fall to bits.

And that dpi print resolution, coupled with sensible noise control at monitor ppi and 100% view magnification, is why noise doesn’t print to anywhere near the degree folk imagine it will.

This brings me to a point where I’d like to draw your attention to my latest YouTube video:

Did you like that – cheeky little trick isn’t it!

Anyway, back to the topic at hand.

If I process on a Retina display at over 200ppi resolution, I have a two-fold problem:

  1. I don’t have as big a margin or ‘fudge factor’ to play with when it comes to things like sharpening.
  2. Images actually look sharper than they are in reality – my 13″ MacBook Pro is horrible to process on, because of its excessive ppi and its small dimensions.

Seriously, if you are a stills photographer with a hankering for the latest 4 or 5k monitor, then grow up and learn to understand things for goodness sake!

Ultra-high resolution monitors are valid tools for video editors and, to a degree, stills photographers using large capacity medium format cameras.  But for us mere mortals on 35mm format cameras, they can actually ‘get in the way’ when it comes to image evaluation and processing.

Working on a monitor with a ppi resolution between the mid 90s and low 100s at 100% view magnification will always give you the most flexible and easy processing workflow.

Just remember, Photoshop linear physical dimensions always ‘appear’ to be larger than ‘real inches’ !

And remember, at 100% view magnification, 1 IMAGE pixel is displayed by 1 SCREEN pixel.  At 50% view magnification, 1 SCREEN pixel is actually displaying the dithered average of a 2×2 block of IMAGE pixels.  At 25% magnification, each monitor pixel is displaying the average of a 4×4 block of image pixels.
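As a minimal numpy illustration of that downsampled view – a sketch of the principle only, not Photoshop’s actual resampling code:

```python
import numpy as np

# At a 50% view, one screen pixel has to stand in for a 2x2 block of
# image pixels; a plain average is the simplest way to do that.
img = np.array([[10.0, 20.0],
                [30.0, 40.0]])                 # a 2x2 block of image pixels
screen_pixel = img.reshape(1, 2, 1, 2).mean(axis=(1, 3))
print(screen_pixel)                            # [[25.]] - the block average
```

That averaging is exactly why fine noise and tiny sharpening halos seem to ‘disappear’ at reduced view magnifications – which is also why you should always judge sharpening at 100%.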

Anyway, that’s about it from me until the New Year folks, though I am the world’s biggest Grinch, so I might well do another video or two on YouTube over the ‘festive period’ so don’t forget to subscribe over there.

Thanks for reading, thanks for watching my videos, and Have a Good One!

 


YouTube Channel Latest Video Training

My YouTube Channel Latest Photography Video Training.

I’ve been busy this week adding more content to the old YouTube channel.

Adding content is really time-consuming, with recording times taking around twice the length of the final video.

Then there’s the editing, which usually takes around the same time, or a bit longer.  Then encoding and compression and uploading takes around the same again.

So yes, a 25 minute video takes A LOT more than 25 minutes to make and make live for the world to view.

This week’s video training uploads are:

This video deals with the badly overlooked topic of raw file demosaicing.

Next up is:

This video is a refreshed version of getting contrast under control in Lightroom – particularly Lightroom Classic CC.

Then we have:

This video is something of a follow-up to the previous one, where I explain the essential differences between contrast and clarity.

And finally, one from yesterday – which is me, restraining myself from embarking on a full blown ‘rant’, all about the differences between DPI (dots per inch) and PPI (pixels per inch):

Important Note

Viewing these videos is essential for the betterment of your understanding – yes it is!  And all I ask for in terms of repayment from yourselves is that you:

  1. Click the main channel subscribe button HERE https://www.youtube.com/c/AndyAstbury
  2. Give the video a ‘like’ by clicking the thumbs up!

YouTube is a funny old thing, but a substantial subscriber base and liked videos will bring me closer to laying my hands on the latest gear to review for you!

If all my blog subscribers would subscribe to my YouTube channel then my subs would more than treble – so go on, what are you waiting for?

I do like creating YouTube free content, but I do have to put food on the table, so I have to do ‘money making stuff’ as well, so I can’t afford to become a full-time YouTuber yet!  But wow, would I like to be in that position.

So that’s that – appeal over.

Watch the videos, and if you have any particular topic you would like me to do a video on, then please just let me know.  Either email me, or you can post in the comments below – no comment goes live here unless I approve it, so if you have a request but don’t want anyone else to see it, then just say.


The Importance of Finished Image Previsualization

The Importance of Finished Image Previsualization (Patreon Only).

For those of you who haven’t yet subscribed to my YouTube channel, I uploaded a video describing how I shot and processed the Lone Tree at Llyn Padarn in North Wales the other day.

You can view the video here:

Image previsualization is hugely important in all photography, but especially so in landscape photography.

Most of us do it in some way or other.  Looking at images of a location by other photographers is the commonest form of image previsualization I come across amongst hobby photographers – and up to a point, there’s nothing intrinsically wrong in that – as long as you put your own ‘slant’ on the shot.

But relying on this method alone has one massive Achilles Heel – nature does not always ‘play nice’ with the light!

You set off for your chosen location with a certain knowledge that the weather forecast is correct, and you are guaranteed to get the perfect light for the shot you have in mind.

Three hours later, you arrive at your destination, and the first thought that enters your head is “how do I blow up the Met Office” – how could they have lied to me so badly?

If you rely solely on ‘other folks images’ for what your shot should look like, then you now have a severe problem.  Nature is railing against your preconceptions, and unless you make some mental modifications then you are deep into a punch-up with nature that you will never win.

Just such an occasion transpired for me the other day at Llyn Padarn in North Wales.

The forecast was for low level cloud with no wind, just perfect for a moody shot of the famous Lone Tree on the south shore of the lake.

So, arriving at the location to be greeted by this was a surprise to say the least:

image previsualization

This would have been disastrous for some, simply because the light does not comply with their initial expectations.  I’ve seen many people get a ‘fit of the sulks’ when this happens, and they abandon the location without even getting out of the car.

Alternatively, there are folk who will get their gear set up and make an attempt, but their initial disappointment becomes a festering ‘mental block’, and they cannot see a way to turn this bad situation into something good.

But, here’s the thing – there is no such thing as a bad situation!

There are however, multiple BAD REACTIONS to a situation.

And every adverse reaction has its roots buried in either:

  • Rigid, inflexible preconceptions.
  • Poor understanding of photographic equipment and post-processing.

Or both!

On this occasion, I was expecting a rather heavy, flat-ish light scenario; but was greeted by the exact opposite.

But instead of getting ‘stroppy’ about it, I can draw on experience and knowledge to change my expectations, and come up with a new ‘finished image previsualization’ on the fly, so to speak.

image previsualization

Instead of the futility of trying to produce my original idea – which would never work out – I simply change my image previsualization, based on what’s in front of me.

It’s then up to me to identify what I need to do in order to bring this new idea to fruition.

The capture workflow for both ‘anticipated’ and ‘reality’ would involve bracketing due to excessive subject brightness range, but there the similarity ends.

The ‘anticipated’ capture workflow would only require perhaps 3 or 4 shots – one for the highlights, and the rest for the mid tones and shadow detail.

But the ‘reality’ capture workflow is very different.  The scene has massive contrast and the image looks like crap BECAUSE of that excessive contrast. Exposing for the brightest highlights gives us a very dark image:

image previsualization

But I know that the contrast can be reduced in post to give me this:

image previsualization

So, while I’m shooting I can previz in my head what the image I’ve shot will look like in post.

This then allows me to capture the basic bracket of shots to capture all my shadow and mid tone detail.

If you watch the video, you’ll see that I only use TWO shots from the bracket sequence to produce the basic exposure blend – and they are basically 5 stops apart. The other shots I use are just for patching blown highlights.

Because the clouds are moving, the sun is in and out like a yo-yo.  Obviously, when it’s fully uncovered, it will flare across the lens.  But when it is partially to fully covered, I’m doing shot after shot to try and get the best exposures of the reflected highlights in the water.

By shooting through a polarizer AND a 6 stop ND, I’m getting relatively smooth water in all these shots – with the added bonus of blurring out the damn canoeists!

And it’s the ‘washed out colour, low contrast previsualization’ of the finished image that is driving me to take all the shots – I’m gathering enough pixel data to enable me to create the finished image without too much effort in Lightroom or Photoshop.

Anyway, go and watch the video as it will give you a much better idea of what I’m talking about!

But remember, always take your time and try to reappraise what’s in front of you when the lighting conditions differ from what you were expecting.  You will often be amazed at the awesome images you can ‘pull’ from what ostensibly appears to be a write-off situation.

 


Adobe Lightroom Classic and Photoshop CC 2018 tips

Adobe Lightroom Classic and Photoshop CC 2018 tips – part 1

So, you’ve either upgraded to Lightroom Classic CC and Photoshop CC 2018, or you are thinking of doing so.

Well, here are a couple of things I’ve found – I’ve called this part 1, because I’m sure there will be other problems/irritations!

Lightroom Classic CC GPU Acceleration problem

If you are having problems with shadow areas appearing too dark and somewhat ‘choked’ in the Develop module – but things look fine in the Library module – then just follow the simple steps in the video above and TURN OFF GPU Acceleration in the Lightroom preferences panel under the Performance tab.

Adobe Lightroom Classic and Photoshop CC 2018 tips

Turn OFF GPU Acceleration

UPDATE: I have subsequently done another video on this topic that illustrates the fact that the problem did not exist in Lr CC 2015 v.12/Camera Raw v.9.12

In the new Photoshop CC 2018 there is an irritation/annoyance with the brush tool, and something called the ‘brush leash’.

Now why on earth you need your brush on a leash God ONLY KNOWS!

But the brush leash manifests itself as a purple/magenta line that follows your brush tool everywhere.

You have a smoothness slider for your brush – its default setting is 10%.  If we increase that value then the leash line gets even longer, and even more bloody irritating.

And why we would need an indicator (which is what the leash is) of smoothness amount and direction for our brush strokes is a bit beyond me – because we can see it anyway.

So, if you want to change the leash length, use the smoothing slider.

If you want to change the leash colour just go to Photoshop>Preferences>Cursors

Adobe Lightroom Classic and Photoshop CC 2018 tips

Here, you can change the colour, or better still, get rid of it completely by unticking the “show brush leash while smoothing” option.

So there are a couple of tips from my first 24 hours with the latest 2018 ransomware versions from Adobe!

But I’m sure there will be more, so stay tuned, and consider heading over to my YouTube channel and hitting the subscribe button, and hit the ‘notifications bell’ while you’re at it!


 

Monitors & Color Bit Depth

Monitors and Color Bit Depth – yawn, yawn – Andy’s being boring again!

Well, perhaps I am, but I know ‘stuff’ you don’t – and I’m telling YOU that you need to know it if you want to get the best out of your photography – so there!

Let me begin by saying that NOTHING monitor-related has any effect on your captured images.  But  EVERYTHING monitor-related DOES have an effect on the way you SEE your images, and therefore definitely has an effect on your image adjustments and post-processing.

So anything monitor-related can have either a positive or negative effect on your final image output.

Bit Depth

I’m going to begin with a somewhat disconnected analogy, but bear with me here.

We live in the ‘real and natural world’, and everything that we see around us is ANALOGUE.  Nature exists on a natural curve and is full of infinite variation. In the digital world though, everything has to be put in a box.

We’ll begin with two dogs – a Labrador and a Poodle.  In this instance both natural  and digital worlds can cope with the situation, because nature just regards them for what they are, and digital can put the Labrador in a box named ‘Labrador’ and the Poodle in a separate box just for Poodles.

Let’s now imagine for a fleeting second that Mr. Lab and Miss Poodle ‘get jiggy’, with the result being dog number 3 – a Labradoodle.  Nature just copes with the new dog because it sits on nature’s ‘doggy curve’ halfway between Mum and Dad.

But digital is having a bloody hissy-fit in the corner because it can’t work out what damn box to put the new dog in.  The only way we can placate digital is to give it another box, one for 50% Labrador and 50% Poodle.

Now if our Labradoodle grows up a bit, then starts dating and makes out with another Labrador, we end up with a fourth dog that is 75% Labrador and 25% Poodle.  Again, nature just takes it all in her stride, but digital is now having a stroke because it’s got no box for that gene mix.

Every time we give digital a new box we have effectively given it a greater bit depth.

Now imagine this process of cross-breed gene dilution continues until the glorious day arrives when a puppy is born that is 99% Labrador and only 1% Poodle.  It’ll be obvious to you that by this time digital has a flaming warehouse full of boxes that can cope with just about any gene mix, but alas, the last time bit depth was increased was to accommodate 98% Lab 2% Poodle.

Digital is by now quite old and grumpy and just can’t be arsed anymore, so instead of filling in triplicate forms to request a bit depth upgrade it just lumps our new dog in the same classification box as the previous one.

So our new dog is put in the wrong box.

Digital hasn’t been slap-dash though and put the pup in any old box, oh no.  Digital has put the pup in the nearest suitable box – the box with the closest match to reality.

Please note that the above mentioned boxes are strictly metaphorical, and no puppies were harmed during the making of this analogy.

Digital images are made up of pixels, and a pixel can be thought of as a data point.  That single data point contains information about luminance and colour.  The precision of that information is determined by the bit depth of the data.

Very little in our ‘real world’ has a surface that looks flat and uniform.  Even a supposedly flat, uniform white wall on a building has subtle variations and graduations of colour and brightness/luminance caused by the angular direction of light and its own surface texture. That’s nature for you in the analogy above.

We are all familiar with RGB values for white being 255,255,255 and black being 0,0,0, but those are only 8 bit values.

8 bit allows for 256 discrete levels of information (or gene mix classification boxes for our Labradoodles), and a scale from 0 to 255 contains 256 values – think about it for a second!

At all bit depth values black is always 0,0,0 but white is another matter entirely:

8 bit = 256 discrete values so image white is 255,255,255

10 bit = 1,024 discrete values so image white is 1023,1023,1023

12 bit = 4,096 discrete values so image white is 4095,4095,4095

14 bit = 16,384 discrete values so image white is 16383,16383,16383

15 bit = 32,768 discrete values so image white is 32767,32767,32767

16 bit = 65,536 discrete values so image white should be 65535,65535,65535 – but it isn’t – more later!

And just for giggles here are some higher bit depth potentials:

24 bit = 16,777,216 discrete values

28 bit = 268,435,456 discrete values

32 bit = 4,294,967,296 discrete values

So you can see a pattern here.  If we double the bit depth we square the number of discrete values, and if we halve the bit depth we are left with the square root of the number we started with.
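If you want to sanity-check that table yourself, it all boils down to powers of two – here’s a quick, purely illustrative Python sketch:

```python
# Number of discrete levels per channel for a given bit depth is 2**bits,
# and the maximum ("white") value is one less, i.e. 2**bits - 1.
def levels(bits):
    return 2 ** bits

def white_value(bits):
    return 2 ** bits - 1

for bits in (8, 10, 12, 14, 15, 16):
    print(f"{bits:2d} bit = {levels(bits):,} discrete values, "
          f"white = {white_value(bits):,}")

# Doubling the bit depth squares the number of discrete values:
assert levels(16) == levels(8) ** 2  # 65,536 == 256 squared
```

Run it and you get exactly the figures listed above – 8 bit white at 255, 16 bit white at 65,535, and so on.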

And if we convert to a lower or smaller bit depth “digital has fewer boxes to put the different dogs in to, so Labradoodles of varying genetic make-ups end up in the same boxes.  They are no longer sorted in such a precise manner”.

The same applies to our images. Where we had two adjacent pixels of slightly differing value in 16 bit, those same two adjacent pixels can very easily become totally identical if we do an 8 bit conversion and so we lose fidelity of colour variation and hence definition.

This is why we should archive our processed images as 16 bit TIFFS instead of 8 bit JPEGs!
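Here’s a minimal Python sketch of why that happens, assuming a simple integer-scaling conversion (real converters may round slightly differently):

```python
# Convert a 16 bit channel value (0..65535) down to 8 bit (0..255).
# Integer scaling means many distinct 16 bit values land in the same
# 8 bit "box" - the Labradoodle problem in reverse.
def to_8bit(v16):
    return v16 * 255 // 65535

a, b = 32768, 32800   # two slightly different 16 bit grey values
print(to_8bit(a), to_8bit(b))   # both quantise to 127
assert to_8bit(a) == to_8bit(b) == 127   # the difference is gone for good
```

Once those two pixels have been merged into the same 8 bit value, no amount of later processing can tell them apart again.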

In an 8 bit image we have black 0,0,0 and white 255,255,255 and ONLY 254 available shades or tones to graduate from one to the other.

Monitor Display Bit Depth

Whereas, in a 16 bit image black is 0,0,0 and white is 65535,65535,65535 with 65,534 intervening shades of grey to make the same black to white transition:

Monitor Display Bit Depth

But we have to remember that whatever the bit depth value is, it applies to all 3 colour channels:

Monitor Display Bit Depth Monitor Display Bit Depth Monitor Display Bit Depth

So a 16 bit image should contain a potential of 65536 values per colour channel.

How Many Colours?

So how many colours can our bit depth describe Andy?

Simple answer is to cube the number of discrete values per channel, so:

8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.

10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours or EXACTLY 64x the value of 8 bit!

16 bit = 65536x65536x65536 = 281,474,976,710,656 colours. Or does it?
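Those totals are easy to verify – a one-liner per bit depth in Python (illustrative only):

```python
# Total colours = (levels per channel) cubed, one factor each for R, G and B.
def total_colours(bits):
    return (2 ** bits) ** 3

print(f"{total_colours(8):,}")    # 16,777,216
print(f"{total_colours(10):,}")   # 1,073,741,824
print(f"{total_colours(16):,}")   # 281,474,976,710,656

# And the "EXACTLY 64x" claim for 10 bit vs 8 bit:
assert total_colours(10) == total_colours(8) * 64
```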

Confusion Reigns Supreme

Now here’s where folks get confused.

Photoshop does not WORK in 16 bit, but in 15 bit + 1 level.  Don’t believe me? Create a new RGB document at 16 bit and select white as the background colour.

Open up your info panel, stick your cursor anywhere in the image area and look at the 16 bit RGB read out and you will see a value of 32768 for all 3 colour channels – that’s 15 bit folks! Now double the 32768 value – yup, that’s right, you get 16 bit or 65,536!

Why does Photoshop do this?  Simple answer is ‘for speed’ – or so they say at Adobe!  There are numerous other reasons that you’ll find on various forums etc – signed and unsigned integers, mid-points, floating point etc – but really, do we care?

Things are what they are, and rumor has it that once you hit the save button on a 16 bit TIFF it does actually save out at 16 bit.

So how many potential colours in 16 bit Photoshop?  Dunno! But it’ll be somewhere between 35,184,372,088,832 and 281,474,976,710,656, and to be honest either value is plenty enough for me!
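The arithmetic behind those two bounds, sketched in Python:

```python
# Photoshop's internal "16 bit" mode is really 15 bit + 1: channel values
# run from 0 to 32768 inclusive, i.e. 2**15 + 1 = 32,769 possible levels.
PS_WHITE = 2 ** 15        # 32768 - what the info panel reports for white
PS_LEVELS = PS_WHITE + 1  # 32769 levels per channel
print(PS_WHITE, PS_LEVELS)

# The two bounds quoted above:
print(f"{(2 ** 15) ** 3:,}")   # 35,184,372,088,832  (true 15 bit)
print(f"{(2 ** 16) ** 3:,}")   # 281,474,976,710,656 (true 16 bit)
```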

The second line of confusion usually comes from PC users under Windows, with the Windows 24 bit and 32 bit ‘True Color’ settings that a lot of PC users mistakenly think mean something they SERIOUSLY DO NOT!

Windows 24 bit means 24 bit TOTAL – in short, 8 bits per channel, not 24!

Windows 32 bit True Color is something else again. Correctly known as 32 bit RGBA it contains 4 channels of 8 bits each; three 8 bit colour channels and an 8 bit Alpha channel used for transparency.

The same 32 bit RGBA colour (Apple calls it ARGB) has been utilised on Mac OS forever, but most Mac users never questioned it because it’s not quite so obvious in OSX as it is in Windows – unless you look at the Graphics/Displays section of your System report, and who the Hell ever goes there apart from twats like me:

bit depth

Above you can see the pixel depth being reported as 32 bit colour ARGB8888 – that’s Apple-speak for Windows 32 bit True Colour RGBA.  But like a lot of ‘things Mac’ the numbers give you the real information.  The channels are ordered Alpha, Red, Green, Blue and the four ‘8’s give you the bit depth of each pixel, or as Apple put it ‘pixel depth’.

However, in the latter part of 2015 Apple gave OSX 10.11 El Capitan 10 bit colour capability, though hardly anyone knew – including ‘yours truly’.  I have never understood why they kept it ‘on the down-low’, but there was no fan-fare, that’s for sure.

bit depth

Now you can see the pixel depth being reported as 30 bit ARGB2101010 – meaning that the transparency Alpha channel has been reduced from 8 bit to 2 bit and the freed-up 6 bits have been distributed evenly between the Red, Green and Blue colour channels.
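Just to illustrate how those 32 bits get carved up, here’s a hypothetical bit-packing sketch in Python – purely illustrative, not Apple’s actual API, and the function name is my own invention:

```python
# Illustrative only: pack one ARGB2101010 pixel into 32 bits -
# 2 bits of alpha at the top, then 10 bits each for R, G and B.
def pack_argb2101010(a, r, g, b):
    assert 0 <= a < 4 and all(0 <= c < 1024 for c in (r, g, b))
    return (a << 30) | (r << 20) | (g << 10) | b

# Opaque pixel, full red, mid green, no blue:
pixel = pack_argb2101010(3, 1023, 512, 0)
print(hex(pixel))   # 0xfff80000
```

Two alpha bits only give four transparency levels, but that’s fine for a desktop display – the point is the extra two bits per colour channel.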

Monitor Display

Your computer has a maximum display bit depth output capability that is defined by:

  • a. the operating system
  • b. the GPU fitted

Your system might well support 10 bit colour, but will only output 8 bit if the GPU is limited to 8 bit.

Likewise, you could be running a 10 bit GPU but if your OS only supports 8 bit, then 8 bit is all you will get out of the system (that’s if the OS will support the GPU in the first place).

Monitors have their own panel display bit depth, and panel bit depth costs money.

A lot of LCD panels on the market are only capable of displaying 8 bit, even if you run an OS and GPU that output 10 bit colour.

And then again certain monitors such as Eizo ColorEdge, NEC MultiSync and the odd BenQ for example, are capable of displaying 10 bit colour from a 10 bit OS/GPU combo, but only if the monitor-to-system connection has 10 bit capability.  This basically means a DisplayPort or HDMI connection.

As photographers we really should be looking to maximise our visual capabilities by viewing the maximum number of colour graduations captured by our cameras.  This means operating with the greatest available colour bit depth on a properly calibrated monitor.

Just to reiterate the fundamental difference between 8 bit and 10 bit monitor display pixel depth:

  • 8 bit = 256x256x256 = 16,777,216 often quoted as 16.7 million colours.
  • 10 bit = 1024x1024x1024 = 1,073,741,824 or 1.07 billion colours.

So 10 bit colour allows us to see exactly 64 times more colour on our display than 8 bit colour. (please note the word ‘see’).

It certainly does NOT add a whole new spectrum of colour to what we see; nor does it ‘add’ anything physical to our files.  It’s purely a ‘visual’ improvement that allows us to see MORE of what we ALREADY have.

I’ve made a pound or two from my images over the years and I’ve been happily using 8 bit colour right up until I bought my Eizo the other month, even though my system has been 10 bit capable since I upgraded the graphics card back in August last year.

The main reason for the upgrade was NOT 10 bit capability either, but the 4Gb of ‘heavy lifting power’ it gave Photoshop.

But once I splashed the cash on a 10 bit display I of course made instant use of the system’s 10 bit capability and all its benefits – of which there’s really only one!

The Benefits

The ability to see 64 times more colour means that I can see 64x more subtle variations of the same colours I could see before.

With my wildlife images I find very little benefit if I’m honest, but with landscapes – especially sunset and twilight shots – it’s a different story.  Sunset and twilight images have massive graduations of similar hues.  Quite often an 8 bit display will not be able to display every colour variant in a graduation, and so will replace it with the nearest neighbour it can display – (putting the 99% Lab pup in the 98% Lab box!).

This leads to a visual ‘banding’ on the display:

bit depth

The banding in the shot above is greatly exaggerated but you get the idea.

A 10 bit colour display also helps me to soft proof slightly faster for print too, and for the same reason.  I can now see much more subtle shifts in proofing when making the same tiny adjustments as I made when using 8 bit.  It doesn’t bring me to a different place, but it allows me to get there faster.

For me the switch to 10 bit colour hasn’t really improved my product, but it has increased my productivity.

If you can’t afford a 10 bit display then don’t stress as 8 bit ARGB has served me well for years!

But if you do need a new monitor then PLEASE be careful about what you are buying, as some displays are not even true 8 bit.

A good place to research your next monitor (if not taking the Eizo, NEC 10 bit route) is TFT Central

If you select the panel size you fancy and then look at the Colour Depth column you will see the bit depth values for the display.

You should also check the Tech column and only consider H-IPS panel tech.

Beware of 10 bit panels that are listed as 8 bit + FRC, and 8 bit panels listed as 6 bit + FRC.

FRC is the acronym for FRAME RATE CONTROL – also known as Temporal Dithering.  In very simple terms FRC involves making the pixels flash different colours at you at a frame rate faster than your eye can see.  Therefore you are fooled into seeing what is to all intents and purposes an out ‘n out lie.
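Here’s a simplified Python sketch of the FRC idea – real panels do this in hardware, per pixel, per frame; this just shows the averaging trick in miniature:

```python
# Frame Rate Control (temporal dithering), simplified: an 8 bit panel
# fakes an in-between level by alternating the two real levels either
# side of it over successive frames, so the eye averages them out.
def frc_frames(target, n_frames=4):
    lo, hi = int(target), int(target) + 1
    hi_count = round((target - lo) * n_frames)  # frames shown at the higher level
    return [hi] * hi_count + [lo] * (n_frames - hi_count)

frames = frc_frames(127.75)      # a level the panel can't truly display
print(frames)                    # [128, 128, 128, 127]
assert sum(frames) / len(frames) == 127.75   # the eye sees the average
```

So the panel never actually displays 127.75 – it just flashes 128 and 127 at you fast enough that you can’t tell.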

It’s a tech that’s okay for gamers and watching movies, but certainly not for any form of colour management or photography workflow.

Do not entertain the idea of anything that isn’t an IPS, H-IPS or other IPS derivative.  IPS is the acronym for In Plane Switching technology.  This is the type of panel that doesn’t visually change if you move your head when looking at it!

So there we go, that’s been a bit of a ramble hasn’t it, but I hope now that you all understand bit depth and how it relates to a monitor’s display colour.  And let’s not forget that you are all up to speed on Labradoodles!
