What to do with Luminance?

Have you ever wondered why imaging Luminance with a monochrome setup is useful at all? Looking at the transmission curves of regular filters, it is quite obvious that Luminance represents more or less the sum of all three color channels. While some RGB filter sets intentionally contain overlaps or gaps, we assume an even split for the rest of this page.

Through the Luminance filter the camera captures about three times as many photons in the same timeframe as through any individual color filter. As a consequence the overall level of detail increases significantly after stacking, but the result is still a black-and-white image.
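For the photon-counting minded: under a simple shot-noise model, the signal-to-noise ratio grows with the square root of the collected photons, so three times the photons buy roughly a 1.7x cleaner image. A quick back-of-envelope in Python (the photon count is purely illustrative):

```python
import math

# Shot-noise-limited model: SNR = signal / sqrt(signal), so SNR scales
# with the square root of the collected photons. A Luminance filter
# passing ~3x the photons of a single color filter therefore gains
# ~sqrt(3) in SNR for the same total exposure time.
photons_color = 10_000                 # through one color filter (illustrative)
photons_lum = 3 * photons_color        # through the Luminance filter
snr_color = photons_color / math.sqrt(photons_color)
snr_lum = photons_lum / math.sqrt(photons_lum)
print(round(snr_lum / snr_color, 2))   # → 1.73
```

Real filters and cameras add read noise and sky glow on top, so treat this as the theoretical ceiling.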

Luckily we produce images for humans. Our eyes use two quite different receptors to capture photons: the highly sensitive rods, which determine brightness, and the cones, which detect color but occur at a significantly lower density. In camera terms: our eyes have a much higher resolution for brightness than for color.

This is utilized, for example, in any kind of television transmission. The brightness, or "Luminance", is always encoded with a much higher resolution and bandwidth than the colors, the "Chrominance".

A regular RGB image may be split into these components using the L*a*b Color Space:
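If you want to experiment with such a split yourself, here is a minimal NumPy sketch. As a stand-in for a full L*a*b conversion I use the simpler YCbCr-style luma/chroma transform (BT.601 weights), which separates brightness from color in the same spirit; the function names are my own:

```python
import numpy as np

def split_luma_chroma(rgb):
    """Split an RGB image (floats in 0..1) into luma Y and chroma Cb/Cr.
    BT.601 weights; a simple stand-in for an L*a*b separation."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b   # brightness
    cb = (b - y) * 0.564                    # scaled blue-difference chroma
    cr = (r - y) * 0.713                    # scaled red-difference chroma
    return y, cb, cr

def merge_luma_chroma(y, cb, cr):
    """Exact inverse of split_luma_chroma."""
    r = y + cr / 0.713
    b = y + cb / 0.564
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)
```

The round trip is lossless, so the two chroma planes plus the luma plane carry exactly the same information as the original RGB image.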

In this example the RGB channels were exposed for about three hours each. Each individual channel could only capture a third of the available photons, so I used an exposure time of four minutes (f/5.8 optics) per sub.

In addition, Luminance was captured for another 3.5 hours, using a reduced exposure time of 3 minutes per sub. The larger number of subs reduced the sensor-induced noise further.

This Luminance stack is now used to replace the Luminance from the RGB stack, which cleans up the result significantly:

Several methods exist to replace the Luminance of an image.

If your image editor supports working in L*a*b mode, simply convert the image into L*a*b first. Then select the Luminance channel and replace it with the Luminance image using Copy&Paste. Convert the image back to RGB and you're done.
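For the curious, that Copy&Paste step boils down to something like the following NumPy sketch. Instead of a full L*a*b round trip it simply shifts each RGB channel by the luminance difference, which preserves the color offsets; `replace_luminance` is a name I made up for illustration, and the BT.601 luma is again a stand-in for the L channel:

```python
import numpy as np

def replace_luminance(rgb, new_l):
    """Replace the brightness of `rgb` (floats in 0..1) with `new_l`,
    keeping the chrominance. BT.601 luma stands in for L of L*a*b."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # Shift every channel by the luminance difference; the per-channel
    # color offsets (and thus the chrominance) stay untouched.
    delta = new_l - y
    return np.clip(rgb + delta[..., None], 0.0, 1.0)
```

Dedicated tools do this in a proper color space with better handling of saturated pixels, but the principle is the same: only the brightness plane is swapped out.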

Alternatively, place the Luminance integration as a layer on top of the RGB layer and set its blending mode to "Luminance". That way you can easily compare the differences by toggling the layer's visibility.

Interestingly, both approaches produce slightly different results, depending on the target and source images. I personally prefer the L*a*b variant; its results seem a little smoother and do not increase contrast.

But isn't it a little sad to throw away the luminance information captured through the color channels? In fact it is possible to stack the RGB subs along with the luminance subs to enhance that stack a little, primarily in the darker areas. But don't expect this to be a game changer; the differences are tiny. Even in theory the RGB subs only contribute at about a third of the luminance rate. In reality the enhancement is probably smaller still, due to the filter curves.
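A back-of-envelope calculation with the exposure times from this page illustrates the theoretical ceiling. It assumes an idealized shot-noise model and color filters that pass exactly a third of the Luminance photon rate, both simplifications:

```python
import math

# Exposure times from this article: 3.5 h Luminance, 3 h per color
# channel. Assumption: each color filter passes ~1/3 of the Luminance
# photon rate, and noise is purely shot noise.
rate_lum = 1.0                        # relative photon rate through L
rate_color = rate_lum / 3             # per color filter
signal_lum = 3.5 * rate_lum           # photons in the L stack
signal_rgb = 3 * (3.0 * rate_color)   # three channels, 3 h each
gain = math.sqrt((signal_lum + signal_rgb) / signal_lum)
print(round(gain, 2))                 # → 1.36
```

So even in this idealized model the combined stack gains at most ~36% in SNR; read noise, sky glow and the real filter curves eat into that number, which matches the small differences seen here.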

In this sequence the RGB sums seem to have slightly more contrast. My guess is that my Antlia color filters block the light from the urban illumination, which is recorded in the luminance in its full glory. Spotting differences in the details requires some pixel peeping.

That series is taken from my image of the Iris Nebula.

You may have noticed that these are all starless images. The reason is simple: The details of interest are located in the starless version only.

Of course you are free to apply the Luminance stars to the RGB stars as a Luminance layer as well:

The difference in this example is too small to be of any significance. Looking more closely at the details, stars get slightly "fluffier" and darker stars are enhanced a little. Whether this benefits a particular target needs to be decided individually.

But why not replace the stars' Luminance from a different layer, like Hydrogen Alpha in this example? I additionally imaged a single H-Alpha sub for 5 minutes and generated a star mask (using StarXTerminator). Using this as the Luminance channel reduces the star density quite dramatically:

Whether an image benefits from such small stars needs to be decided individually.

Editing Chrominance separately from Luminance opens other possibilities as well. RGB images produced with a monochrome camera often show some color noise in darker areas. When increasing saturation or pushing the curves too far, this can get quite annoying. And a common denoise tool applied to the final RGB image most often smooths away smaller details as well.

Reducing noise only in the Chrominance channels at an early stage may circumvent this issue. First convert the RGB image to L*a*b, select the Luminance channel and fill it with 50% gray (draw a rectangle or use a large brush). Still in L*a*b mode, apply a significant blur to the image, then replace the Luminance channel with the proper Luminance image and convert back to RGB. The result may look like this:

That way the Luminance channel is not affected by the denoise process, so the finer details stay intact.
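The whole chain can be sketched in a few lines of NumPy. Again I use a BT.601 luma/chroma split as a stand-in for L*a*b, and a crude repeated-averaging blur instead of a proper Gaussian; all function names are my own:

```python
import numpy as np

def blur(img, passes=8):
    """Crude blur by repeated neighbor averaging (sketch only;
    edges wrap around, which a real editor would handle properly)."""
    out = img.astype(float)
    for _ in range(passes):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def chroma_denoise(rgb, luminance, passes=8):
    """Blur only the chrominance of `rgb` (floats in 0..1), then
    rebuild the image with the given `luminance` plane untouched."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    cb = (rgb[..., 2] - y) * 0.564
    cr = (rgb[..., 0] - y) * 0.713
    cb, cr = blur(cb, passes), blur(cr, passes)   # denoise chroma only
    r = luminance + cr / 0.713
    b = luminance + cb / 0.564
    g = (luminance - 0.299 * r - 0.114 * b) / 0.587
    return np.clip(np.stack([r, g, b], axis=-1), 0.0, 1.0)
```

Because the luminance plane is passed through unchanged, only the color noise is smoothed; a uniformly colored region comes out exactly as it went in.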

This is what my edit of the Seahorse Nebula B150/LDN1082 looks like: