From Mono to Color

From time to time I get asked how these colorful images are created from a monochromatic camera. Searching the Internet or YouTube, you can find several tutorials, which differ in both workflow and recommended tools. My personal workflow may be a little different as well, as it pushes most of the creative work into the photo editor.

But first we have to prepare the collected data and convert everything into a format suitable for editing. This is a quick outline of my workflow and the tools I use; other routes and tools will probably work as well.

  • Stacking
    First of all, the captured subs need to be calibrated (using Bias, Flat and Dark frames) and integrated into the master images. I still perform this step in Astro Pixel Processor ("APP") since it simply works for me; of course other tools like PixInsight ("PI") or the free Deep Sky Stacker ("DSS") will work as well.
  • Combine Red, Green and Blue
    The color channels are then combined into an RGB image (still in APP).
  • Gradient Removal
    In order to perform good stretches later, it is mandatory to remove even the smallest gradient (i.e. light pollution) from the stacks. I still perform this step in APP, since I am kind of lazy; of course this may be done in other tools like GraXpert.
  • Color Calibration
    To obtain nice and realistic stars, the colors need to be calibrated. For me Photometric Color Calibration in PixInsight is the tool of choice.
  • Detail Enhancements
    To emphasize finer structures in nebulae and reduce most of the residual noise, I use BlurXTerminator and NoiseXTerminator from RC Astro. These AI-based tools simplified my workflow significantly, but other techniques like deconvolution work as well. I consider this step optional; it does not have a significant influence on colors anyway.
  • Stretching
    The images are still in the linear state and require some stretching to reveal the finer structures of the target, which would otherwise stay hidden in the dark when imported into the photo editor. Stretching is more or less applying a steep gradation curve to push the faint details into the center of the data range without overexposing the much brighter stars. In my workflow a quick and simple stretch is sufficient, since details are developed later, so I use a regular Screen Transfer Function and reduce contrast if brighter or darker regions lose details.
    Afterwards I use the Histogram Transfer Function to convert the data into a 16 bit image, a format which can be used with the image editor.
  • Star Separation
    As a final step the stars need to be separated from the background. I use StarXTerminator from RC Astro; of course other tools like StarNet will work as well. The stars from the narrowband images are not used.
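The stretch and the 16 bit conversion can be sketched numerically. Below is a minimal numpy illustration of a midtone transfer function (the curve family behind PixInsight's Screen Transfer Function), not the actual APP/PI implementation; the `midtone_stretch` helper and its parameter names are my own:

```python
import numpy as np

def midtone_stretch(img, midtone=0.15):
    """Midtone transfer function: maps 0 -> 0, midtone -> 0.5, 1 -> 1,
    strongly brightening the faint end of linear data."""
    m = midtone
    return ((m - 1.0) * img) / ((2.0 * m - 1.0) * img - m)

def to_uint16(img):
    """Quantize a stretched [0, 1] float image to 16 bit for the editor."""
    return np.round(np.clip(img, 0.0, 1.0) * 65535.0).astype(np.uint16)

linear = np.random.default_rng(0).random((4, 4)) * 0.05  # faint fake data
stretched = midtone_stretch(linear)                      # faint detail lifted
print(to_uint16(stretched).dtype)                        # uint16
```

The curve is steep near zero and flat near one, which is exactly the "push faint details up without blowing out the stars" behavior described above.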

Finally these five images are saved in TIFF format and loaded into the editor:

If you would like to do your own experiments with a monochrome workflow, you'll find these files here: SH2-119 raw images

I intentionally did not spend time on Luminance subs. For typical narrowband targets like this, I only do a short exposure of RGB to obtain nice stars. Luminance is crucial for targets like dark nebulae, to bring out fainter structures, but this does not apply here. For more information check out my page What to do with Luminance?.

To follow my workflow your image editor needs to be capable of:

  • Working with 16 Bit Color Channels
    Since we perform some more stretching via gradation curves, the image needs to provide some "Bit Reserve" to avoid color banding.
  • Layers and Layer Groups
    My non-destructive workflow requires support for layers and layer groups.
  • Layer Modes
    It is mandatory that blend modes like "Screen" can be set on layer groups.
  • Layer Filters
    Layer filters are used to control the gradation curve or colors of the underlying layers within their group.

Of course all of this is available in Adobe Photoshop, but, for example, in Affinity Photo as well.
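The "Bit Reserve" point can be demonstrated in a few lines of Python: quantize the same faint gradient to 8 and 16 bit, apply a crude curve, and count the surviving levels. This is a toy example with made-up values, not a real image:

```python
import numpy as np

# A smooth dark gradient quantized to 8 and 16 bit, then boosted 4x
# (a stand-in for a steep gradation curve).
gradient = np.linspace(0.0, 0.1, 10_000)

eight   = np.round(gradient * 255)   / 255
sixteen = np.round(gradient * 65535) / 65535

def boost(img):
    return np.clip(img * 4.0, 0.0, 1.0)

print(len(np.unique(boost(eight))))    # a few dozen levels -> visible banding
print(len(np.unique(boost(sixteen))))  # thousands of levels -> smooth
```

After the boost, the 8 bit version only has a handful of distinct brightness steps left in the shadows, which shows up as banding; the 16 bit version keeps thousands.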

Our goal is to establish a layer structure like the following, which I quickly created in Affinity Photo:

The basic idea is to assign two control layers to each image layer (RGB background, Hydrogen Alpha, Sulphur II, Oxygen III): a curve to control gradation, and one to control its color (narrowband layers) or saturation (RGB stars and background).

Each set of these three layers is placed into a layer group, so that only the particular image layer is affected by the adjustments.

The mode of each layer group needs to be set to "Screen" (possibly called "Negative Multiply" in your editor). You may imagine each of these groups as a separate slide projector with individual curve and color adjustments, with the underlying RGB background as the screen.
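The "Screen" mode itself is simple arithmetic: invert both layers, multiply, invert again. A minimal numpy sketch (the function name and pixel values are mine, purely for illustration):

```python
import numpy as np

def screen(bottom, top):
    """'Screen' / 'Negative Multiply' blend: invert, multiply, invert.
    Both inputs are float arrays in [0, 1]."""
    return 1.0 - (1.0 - bottom) * (1.0 - top)

rgb_background = np.array([0.2, 0.1, 0.05])
ha_layer       = np.array([0.5, 0.0, 0.0])   # a red-tinted narrowband layer

print(screen(rgb_background, ha_layer))      # -> [0.6, 0.1, 0.05]
```

Note two properties that make the projector analogy work: Screen can only brighten, never darken, and a black top layer leaves the background untouched — just like a projector that adds no light.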

In my final edit the five layers looked like this:

Feel free to download the final layers, place each one on top of the RGB Background and switch each layer mode to "Screen" (or "Negative Multiply"). The colors should be identical to the image at the bottom.

This may look simple, but the greatest effort in this workflow lies in tweaking each curve to obtain a pleasing result.

First of all we need to decide which colors to choose for the narrowband data. For me, something turquoise (about 195° on the color wheel) for Oxygen is kind of mandatory. The color pick for Sulphur and Hydrogen is much more difficult and highly depends on the target. I normally start with a kind of natural, reddish color for Hydrogen (about 350°). The natural color of ionized Sulphur would be a deep red, quite similar to Hydrogen; for that reason I usually pick something yellow for Sulphur just to place some highlights (dropping most of the darker areas). For this target I went the opposite way and colored Sulphur in red and Hydrogen in a greenish yellow. Whatever you decide to choose, the result will always be a false-color image which does not reflect reality (if we could ever see that with our own eyes).

Regarding colors you may also go with the so-called "Hubble Palette", selecting pure red (0°) for Sulphur, green (120°) for Hydrogen and blue (240°) for Oxygen.
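Picking a color by its angle on the wheel and tinting a mono channel with it can be sketched with Python's standard `colorsys` module; the `tint` helper below is a hypothetical stand-in for the editor's color adjustment layer, not anything from the tools above:

```python
import colorsys
import numpy as np

def tint(channel, hue_deg, saturation=1.0):
    """Colorize a mono narrowband channel with a hue from the color wheel.
    channel is a float array in [0, 1]; returns an RGB array."""
    r, g, b = colorsys.hsv_to_rgb(hue_deg / 360.0, saturation, 1.0)
    return np.stack([channel * r, channel * g, channel * b], axis=-1)

oiii = np.array([0.0, 0.5, 1.0])
print(tint(oiii, 195.0))   # turquoise Oxygen: each pixel * (0.0, 0.75, 1.0)
print(tint(oiii, 0.0))     # Hubble-Palette Sulphur: each pixel * (1, 0, 0)
```

Lowering `saturation` pulls the tint towards white, which is one way to keep the stars from looking artificially colored.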

Since all of this happens non-destructively (with regard to the image data), my workflow lets you easily try different approaches. On some targets I spent some time "understanding" the layers first before I could decide which way to go. Studying the results of other photographers may help as well.

Fine-tuning curves and colors can be fiddly on some targets and a real test of patience. Even tiny changes to a curve may result in a quite different composition. Here I only play a little with the curve of the Sulphur layer:

To avoid losing track with that many curves, I first switch off visibility for all layers except the background, create a somewhat dull image, and reduce saturation a little.

Then each narrowband layer is evaluated individually in combination with the background, and its curve is roughly adjusted so it does not flood the image while keeping details and highlights. Now it's time to activate all narrowband layers along with the background and ensure that no brighter areas get overexposed, so details stay intact. You may need to reduce the intensity of all layers to obtain a good overall exposure.
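The whole stack from the steps above — curve, tint, intensity, Screen blend per layer — could be sketched like this. It is a toy model with made-up values: a gamma exponent stands in for the editor's gradation curve, and all names are my own:

```python
import colorsys
import numpy as np

def composite(background_rgb, layers):
    """Screen-combine tinted, curve-adjusted narrowband layers onto an RGB base.

    layers: list of (mono_channel, hue_deg, gamma, intensity) tuples.
    gamma mimics a gradation curve; intensity scales a layer down
    when the combination would overexpose bright regions.
    """
    out = background_rgb.astype(float)
    for mono, hue, gamma, intensity in layers:
        r, g, b = colorsys.hsv_to_rgb(hue / 360.0, 1.0, 1.0)
        curved = np.clip(mono, 0.0, 1.0) ** gamma * intensity
        tinted = np.stack([curved * r, curved * g, curved * b], axis=-1)
        out = 1.0 - (1.0 - out) * (1.0 - tinted)   # "Screen" blend
    return out

bg   = np.full((2, 2, 3), 0.05)   # dull, desaturated RGB background
ha   = np.full((2, 2), 0.6)       # fake narrowband stacks
sii  = np.full((2, 2), 0.3)
oiii = np.full((2, 2), 0.4)

# Red Sulphur, greenish-yellow Hydrogen, turquoise Oxygen (as for this target)
img = composite(bg, [(sii, 0.0, 1.2, 0.8),
                     (ha, 55.0, 1.0, 0.8),
                     (oiii, 195.0, 0.9, 0.8)])
print(img.shape)   # (2, 2, 3)
```

Because Screen is commutative, the order of the narrowband layers does not matter here; in the editor it only matters for the adjustment layers sitting inside each group.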

While constantly monitoring the "problematic spots", both curves and colors are fine-tuned until the image is pleasing. Activating the Stars layer from time to time and controlling the star density by adjusting its curve should be part of this step as well.

In my version of the Clamshell Nebula I decided to go with a kind of flat image to reserve more "space" for the stars: