My First Trip with Remote Imaging
I got into astrophotography at the same time I got into observational astronomy: July 2015, when I was given my first telescope while I was in graduate school. That telescope was an 8-inch Celestron Schmidt-Cassegrain, and it came on a Celestron NexStar SE alt-az mount. It was a fairly difficult place to start astrophotography, but I picked it up quickly, and have since acquired even better equipment and cameras, some given to me and some sold to me at a good discount. It has been a fun adventure learning the hobby over the last three years! Recently, an awesome new opportunity came my way: the chance to image on a high-caliber rig under the very dark New Mexico skies, from afar.
When I was at the 2018 Texas Star Party, I met some folks from The Astro Imaging Channel, an online video series about astrophotography, who were going around interviewing people with astrophotography rigs about their setups for a video they were putting together. They asked if I would do a presentation on their weekly show, and I had a great time presenting "Astrophotography Joyride: A Newbie's Perspective," which can be found on YouTube.
I stayed on as a panel member for the channel and have gotten to know the other members. For example, another presenter, Cary Chleborad, president of Optical Structures (which owns JMI, Farpoint, Astrodon, and Lumicon), asked if I would test a new design of a Lumicon off-axis guider. In late October, Cary and Tolga Gumusayak collaborated to give me five hours of telescope time on a Deep Sky West scope owned by Tolga of TolgaAstro, with a new FLI camera on loan and some sweet Astrodon filters, and asked me to write about the experience! Deep Sky West is located in Rowe, New Mexico, under some really dark Bortle 2 skies.
The telescope rig in question is the following:
- Mount: Software Bisque Paramount Taurus 400 fork mount with absolute encoders
- Telescope: Planewave CDK14 (14-inch corrected Dall-Kirkham astrograph)
- Camera: Finger Lakes Instrumentation (FLI) Kepler KL4040
- Filters: Astrodon, suite of wideband and narrowband
- Focuser: MoonLite NiteCrawler WR35
The whole thing is worth about $70k!
And you will notice the lack of autoguiding gear: you do not need to autoguide this mount. It is just that good once you are precisely polar aligned.
After getting the camera specs, I needed to select a target. With a sensor size of 36x36mm and a focal length of 2563mm, my field of
view was going to be 48x48 arcmin (or 0.8x0.8 degrees). It sounded like I was going to get the time soon, so I needed a target that was in a good position this time of year. While I was tempted to do a nebula with narrowband filters, I have not processed narrowband images before, so I wanted to stick with LRGB or LRGB + Ha (hydrogen alpha). I decided that I should do a galaxy. Some ideas that came to mind were M81, the Fireworks Galaxy, the Silver Dollar Galaxy, and M33. M74 was also recommended by a colleague.
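For anyone who wants to check that field-of-view number, it follows from the sensor size and focal length with a little trigonometry. A quick sketch in Python (the function name is my own):

```python
import math

def fov_arcmin(sensor_mm: float, focal_length_mm: float) -> float:
    """Angular field of view spanned by one sensor dimension, in arcminutes."""
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_length_mm))) * 60

# The 36x36mm Kepler KL4040 sensor behind the 2563mm CDK14:
print(round(fov_arcmin(36, 2563), 1))  # ~48.3 arcmin per side
```

At angles this small, the small-angle approximation (sensor / focal length, converted to arcminutes) gives essentially the same answer.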
I finally settled on M33, which is difficult for me to get a good image of from my light-polluted home location because of its large angular size on the sky, and which has some nice H II nebula regions that I have not been able to satisfactorily capture. Messier 33 is also known as the Triangulum Galaxy for its location in the small constellation Triangulum, between Aries and Andromeda. It is about 2.7 million light-years from Earth, and while it is the third-largest galaxy in our Local Group at 40% of the Milky Way's size, it is the smallest spiral galaxy in the group.
As far as how to use the five hours went, I originally proposed 30x300s L and 10x300s each of RGB. But then Tolga told me that this camera (like my ZWO ASI1600MM Pro) has very low read noise but somewhat high dark current, and it is also very sensitive, so shorter exposures would be better. He also told me that the dynamic range was so good on this camera that he shot 5-minute exposures of the Orion Nebula with it, and the core was not blown out! Even on my ZWO, the core was blown out after only a minute.
So I revised my plan to be 33x180s L, 16x180s RGB each, and I also wanted some Ha, so I asked for 10x300s of that.
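The revised plan does fit the allotted time, which is easy to sanity-check (the dictionary below is just my tally of the numbers above):

```python
# (frame count, exposure seconds) per filter from the revised plan
plan = {
    "L":  (33, 180),
    "R":  (16, 180),
    "G":  (16, 180),
    "B":  (16, 180),
    "Ha": (10, 300),
}
total_s = sum(n * t for n, t in plan.values())
print(total_s, round(total_s / 3600, 2))  # 17580 s, about 4.88 h
```

That comes to 17,580 seconds, or roughly 4.9 hours, just under the five-hour allotment.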
The very next night after I decided on my target, November 7, Tolga messaged me saying he was getting the M33 data and asked if I wanted to join him on the VPN! He had me install TeamViewer, which is free for non-commercial users, and then he sent me the login information for the telescope control computer out at the remote site. It was a little laggy, but workable.
This was really cool! We could control the computer as if we were sitting in front of it. The software, TheSkyX with CCDCommander, lets you automate everything. The list shown on the screen contains the actions for the scope to follow, which are event-based rather than time-based.
The first instruction is "Wait for Sun to set below -10d altitude." This way, you do not have to figure out all the times yourself every night; it just looks at the sky model for that night at that location. It turns the camera on, lets it cool, and then goes to the next action, which is to run a sublist for imaging M33 in LRGB until M33 sets below 30 degrees altitude.
It has the exposure times and filter changes and everything else in there.
It also has how often to dither; dithering is when you move the scope just a few pixels every frame or two so that hot pixels do not land in the same place in every frame. I have not had to do this yet, since I have never been perfectly polar aligned enough, or had a scope with good enough gears, for the frames not to already be drifting around a little bit on their own.
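To illustrate the event-based idea, here is a toy sketch of my own (not CCDCommander's actual scripting interface): each step blocks on a sky condition instead of a wall-clock time.

```python
# A toy, event-driven imaging loop in the spirit of CCDCommander's action list.
# All names here are illustrative stand-ins, not the real software's API.
def run_plan(sun_altitudes, target_altitudes, lrgb=("L", "R", "G", "B")):
    frames, t = [], 0
    # "Wait for Sun to set below -10d altitude" -- an event, not a time
    while t < len(sun_altitudes) and sun_altitudes[t] >= -10:
        t += 1
    # Run the LRGB sublist until the target sinks below 30 degrees altitude
    while t < len(target_altitudes) and target_altitudes[t] > 30:
        frames.append(lrgb[len(frames) % len(lrgb)])
        t += 1  # one time-step per frame; a dither command would go here too
    return frames

# Sun drops below -10d at step 2; the target drops below 30d at step 6:
print(run_plan([0, -5, -12, -15, -20, -25, -30],
               [60, 55, 50, 45, 40, 35, 25]))  # → ['L', 'R', 'G', 'B']
```

The same plan then works any night of the year: only the altitude curves change, not the script.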
He only took some of the luminance frames and red frames; the rest he would get on another night soon. Then he switched to green.
On the second green frame, the stars had jumped! Tolga thought at first a cable might be getting caught, so he switched to the live camera feed and moved the scope around a bit, but everything looked fine. He mentioned that it had been hitching in this same spot about a month ago.
It turned out to be a snagged cable, which was eventually fixed. Anyway, the mount moved past that trouble spot, and the rest of the frames came out fine. I logged off because it was getting late.
He collected the rest of the frames, and then on November 11, sent me the stacked L, R, G, and B images.
Then it was time to process!
Preparing for Combination
I have recently graduated from processing in DeepSkyStacker and Photoshop to processing in PixInsight, which has been a huge but amazing step up. Since I am still learning PixInsight, I will be following the Light Vortex Astronomy tutorials, starting with "Preparing Monochrome Images for Colour-Combination and Further Post-processing."
These tutorials provide excellent step by step instructions, complete with screenshots, and are organized very well.
First, I opened up the stacked frames in PixInsight and applied a screen stretch so I could see them.
The first processing step I did was DynamicBackgroundExtraction (DBE) to remove the background from each of the four stacked images. It may be very dark out in Rowe, NM, but there is likely still some background light. Since the images were aligned, I could use the same point model for each one, so I started with the luminance frame and then applied the process to each of the others. This step can also be done on the combined RGB image.
Following the tutorial's advice, I set the "default sample radius" to 15 and "samples per row" to 15 in the Sample Generation tab. I hit Generate, but there were still a lot of points missing from the corners, so I increased the tolerance (in the Model Parameters (1) tab) to 1.000.
After increasing all the way to 1.5, there were still points missing from the corners, but I decided just to add some by hand. I also decided there were too many points per row, so I reduced that from 15 to 12. Then I went through and checked every point, moving each to make sure it was not overlapping a star, and deleting points that were on the galaxy. You want only background: none of the points should sit on any part of the galaxy or nebulosity in your image.
Next, I lowered the tolerance until I started getting red points (ones that DBE is rejecting), making sure to hit "Resize All" and not "Generate" so I did not lose all my work! I stopped at 0.500, and all my points were still valid.
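My loose mental model of the tolerance setting, as an illustration of the concept only (this is not PixInsight's actual algorithm), is that a sample turns red when it deviates from the background level by more than some number of sigmas:

```python
import statistics

# Illustrative sketch: reject any background sample whose value strays from
# the median by more than `tolerance` standard deviations.
def valid_samples(samples, tolerance):
    med = statistics.median(samples)
    sigma = statistics.pstdev(samples) or 1.0
    return [s for s in samples if abs(s - med) <= tolerance * sigma]

background = [0.10, 0.11, 0.10, 0.12, 0.11, 0.45]  # last sample sits on the galaxy
print(len(valid_samples(background, tolerance=1.0)))  # → 5 (the outlier is rejected)
```

Lowering the tolerance shrinks the accepted band, which is why samples start going red as you dial it down.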
I opened the "Target Image Correction" tab, selected "Subtraction" in the "Correction" dropdown, and then hit Execute. After I autostretched the result, this is what I had:
Hmm, maybe a little too aggressive; there are some dark areas that I do not think are real. I backed off the tolerance to 1.000 and tried again.
The result looked pretty much the same, so I decided to run with it and see what happened. I saved the process to my workspace so I can adjust it later if needed (and I also needed to apply it to my RGB frames). This is what the extracted background looked like:
I put a New Instance icon for the DBE process in the workspace (by clicking and dragging the New Instance triangle icon at the bottom of the DBE window into the workspace), and then closed the DBE process. Then I minimized the DBE'd luminance image, opened up the red image, and double-clicked the process I had just put into the workspace, which applied the sample points to the red image. None were colored red for invalid, so I executed the process, and the resulting image looked good. I did the same for the green and blue, and saved out all of the DBE'd images for later reference, if needed. I also saved the process to the same folder for possible later use.
Next, I opened up the LinearFit process, which levels the LRGB frames with each other to account for differences in background that are a result of imaging on different nights, different times of the night, the different background levels you can get from the different filters, etc. For this process, you want to select the brightest image as your reference image. It is probably luminance, but you can check with HistogramTransformation.
I selected L, R, G, and B (the ones I had applied DBE to) and zoomed in on the peak (in the lower histogram). It is so dark at the Deep Sky West observatory that, especially after background extraction, there is essentially no background, and pretty much all the peaks are in the same place. Even the non-DBE'd original images have basically no background (which would show up as space between the left edge of the peak and the left side of the histogram window). So I selected the luminance image as reference, and then applied the LinearFit process to each of the R, G, and B frames by opening them back up and hitting the Apply button. I needed to re-apply the auto stretch to the images afterwards.
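Conceptually, LinearFit finds a simple linear function that maps each channel onto the reference and then applies it. A pure-Python stand-in of my own (not PixInsight's implementation) might look like:

```python
# Illustrative sketch of LinearFit's idea: find a, b minimizing the squared
# error between (a * channel + b) and the reference, then rescale the channel.
def linear_fit(reference, channel):
    n = len(channel)
    mx = sum(channel) / n
    my = sum(reference) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(channel, reference))
    var = sum((x - mx) ** 2 for x in channel)
    a = cov / var
    b = my - a * mx
    return [a * x + b for x in channel]

ref = [0.2, 0.4, 0.6, 0.8]
red = [0.1, 0.2, 0.3, 0.4]   # same structure as ref, but dimmer
print(linear_fit(ref, red))  # the fitted channel now matches the reference
```

After this, the channel histograms line up, which is exactly what the matched peaks in HistogramTransformation showed.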
Combining the RGB Images
Once the average background and brightness levels were matched, it was time to combine the LRGB images. For that, I went to the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial. First, I opened the ChannelCombination process and made sure that "RGB" was selected as the color space. Then I assigned the R, G, and B images that I had background-extracted and linearly fitted to each of those channels, and hit the Apply Global button, which is the circular icon at the bottom of the process window.
It was showing some noise at that point, but that would be fixed soon. Remember, this is just a screen stretch, which tends to be less refined than the actual stretch I will do later. I will come back to this tutorial later to combine the luminance image with the RGB, since it is a good idea to process them separately and then bring them together, as they bring different features to the table.
To properly white balance the color image, I turned to the PhotometricColorCalibration process, which I have absolutely fallen in love with. This process plate solves the image and uses Sun-type stars it finds there as a white reference for re-balancing the colors. In order to plate solve, you need to tell it where the camera is looking and what your pixel resolution is.
To tell it where this image is looking, I simply clicked "Search Coordinates," entered "M33," and it grabbed the celestial coordinates
for that object. After hitting "Get," I entered in the focal length and pixel size. Focal length on the Planewave CDK14 is 2563mm, and the pixel size on the FLI Kepler KL4040 is a whopping 9 microns! I entered these values and hit the Apply button, then waited.
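The pixel resolution it needs is the plate scale, which follows directly from pixel size and focal length. A quick helper of my own:

```python
def image_scale_arcsec_per_px(pixel_um: float, focal_length_mm: float) -> float:
    """Plate scale in arcsec/pixel. There are 206265 arcsec per radian;
    the mixed um/mm units absorb a factor of 1000, giving 206.265."""
    return 206.265 * pixel_um / focal_length_mm

# 9-micron pixels on the 2563mm CDK14:
print(round(image_scale_arcsec_per_px(9, 2563), 3))  # ~0.724 arcsec/pixel
```

About 0.72 arcseconds per pixel: a nicely sampled scale for the seeing at a dark site.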
A few minutes later, the result appeared.
The change is small this time, but other times I have used this process, it has made a huge difference, especially on my DSLR images. It looks like these Astrodon filters are already color balanced. My own Astronomik filters are too, but sometimes they still require a small bit of tweaking.
Time to deal with the background noise! I followed the "Noise Reduction" and "Producing Masks" tutorials. First, since I wanted to reduce noise without blurring fine details in the brighter parts of the image, I used a mask that protects the brighter parts, where the signal-to-noise ratio is already high, so that I could attack the dark areas more heavily. Since I have a luminance image that matches the color data, I used that as my mask. (You can also create a luminance frame from your RGB image, which the "Producing Masks" tutorial explains how to do.) Now, masks work better when they are bright and non-linear, so I first duplicated my luminance image by clicking and dragging the tab with the name of the image (once I had re-maximized it so I could see it) into the workspace. Then I turned off the auto screen stretch and opened up the ScreenTransferFunction process. I hit the little radioactive icon to apply an auto stretch again, and I opened the HistogramTransformation process. I then clicked and dragged the "New Instance" icon (triangle) from the ScreenTransferFunction process to the bottom bar of the HistogramTransformation window. This applies the same parameters that the auto stretch calculated to the actual histogram of the image; it is a quick and dirty way to stretch an image. Then I hit the Reset button on the ScreenTransferFunction window, closed it, and hit the Apply button in HistogramTransformation to apply the stretch.
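That drag-the-STF-into-HistogramTransformation trick works because both are built around PixInsight's midtones transfer function (MTF), the curve that maps a chosen midtones balance m to mid-gray. A rough illustration of the curve itself (my own sketch, not PixInsight's code):

```python
# Midtones transfer function: a rational curve with mtf(m, 0) = 0,
# mtf(m, 1) = 1, and mtf(m, m) = 0.5, so pixel value m lands at mid-gray.
def mtf(m: float, x: float) -> float:
    if x <= 0:
        return 0.0
    if x >= 1:
        return 1.0
    return (m - 1) * x / ((2 * m - 1) * x - m)

# A faint linear pixel value of 0.01, stretched with midtones balance 0.01:
print(mtf(0.01, 0.01))  # → 0.5, i.e. that faint signal becomes mid-gray
```

A midtones balance of 0.5 leaves the image untouched; dragging it toward zero is what lifts the faint data out of the dark end.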
To apply the mask to my color image, I selected the color image to make the window active again, I went up to Mask > Select Mask, and I selected my cloned, stretched luminance image.
Now, the red areas are the areas the mask is protecting, so since I wanted to apply the noise reduction to the dark areas, I inverted the mask by going to Mask > Invert Mask.
There we go.
I opened up MultiscaleLinearTransform for the noise reduction. Since I did not need to see the mask anymore, I went up to Mask > Show Mask to hide it. Now, do not forget that the mask is still applied! A few times I have tried to do a stretch or other processing and it looked really weird or did not work, and it was because I left the mask on! Following the tutorial's recommendation, I set the settings for the four layers and hit Apply.
If you want to see the effect of parameter changes without running the full process many times, you can create a small preview by clicking the "New Preview Mode" button at the top of PixInsight, selecting a portion of the image (I would pick one with both some bright and some dark areas), and then hitting the "Real Time Preview" (open circle) icon at the bottom of the MultiscaleLinearTransform window. It still takes a bit to apply, but less time, and once you are happy, you can go back and apply it to the whole image. I think this worked pretty well here. I removed the mask before I forgot it was still applied.
While I had the window open, I applied the same mask I created to the luminance channel as well, and ran the same MultiscaleLinearTransform on it.
Sharpening Fine Details
I decided to try a process here that I had not used before for bringing out finer details: deconvolution with DynamicPSF. I followed that section of the "Sharpening Fine Details" tutorial, working on the luminance image.
Deconvolution is awesome because it helps mitigate the blurring effects of the atmosphere, as is easily seen when processing planetary images. It is magical! I opened up the DynamicPSF process and hand selected about 85 stars "not too big, not too little" according to the tutorial.
I then made the window bigger and sorted the list by MAD (mean absolute difference), and scrolled through to see where most of the stars were clustered: 1.5e-03 to 2.5e-03 seemed to be about the range. I deleted the ones outside this range. Next, I re-sorted the list by A (amplitude). The tutorial recommends excluding stars outside the range of 0.25-0.75 amplitude, but the brightest star I had left was 0.285 in amplitude, so I just cut the ones below 0.1. Next I sorted by r (aspect ratio). The tutorial recommends keeping stars between 0.6-0.8, and all of mine were already pretty tight in that range, between 0.649 and 0.746, so I kept all 20 of them. Then I hit the "Export" icon (picture of a camera below the list of star data), and a tiny model star appeared underneath the window.
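Those culling passes amount to filtering the star list on three fitted quantities. A sketch of my own with made-up star records (the cutoff values are the ones from my session, but the star data below is invented for illustration):

```python
# Keep only stars whose DynamicPSF fit statistics fall inside the given bands.
# A bound of None means "no limit on that side".
def cull(stars, mad=(1.5e-3, 2.5e-3), amp=(0.1, None), ratio=(0.6, 0.8)):
    def ok(value, bounds):
        lo, hi = bounds
        return (lo is None or value >= lo) and (hi is None or value <= hi)
    return [s for s in stars
            if ok(s["MAD"], mad) and ok(s["A"], amp) and ok(s["r"], ratio)]

stars = [
    {"MAD": 2.0e-3, "A": 0.28, "r": 0.70},  # keeper
    {"MAD": 4.0e-3, "A": 0.30, "r": 0.70},  # poor fit (MAD too high)
    {"MAD": 2.0e-3, "A": 0.05, "r": 0.70},  # too dim
    {"MAD": 2.0e-3, "A": 0.20, "r": 0.95},  # outside the 0.6-0.8 aspect band
]
print(len(cull(stars)))  # → 1: only the first star survives
```

The surviving stars are the ones averaged into the exported PSF model.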
I had noticed that the stars, even in the center of the image, looked ever-so-slightly stretched. You can see that here with this model star. I closed the DynamicPSF process but kept the star image open. First, I needed to make another kind of mask, involving RangeSelection. In all honesty, I am a little out of my depth when it comes to masks, but I am sure if I use them more, I will start to get a better feeling for them. For this, I just relied on what the tutorial recommends. I re-opened the stretched luminance image I used earlier as a mask, and then opened the RangeSelection process and tweaked the settings as suggested in the "Producing Masks" tutorial until the galaxy was selected.
Next, I needed to include a star mask with this as well, so I minimized the range mask for the moment and opened the StarMask process, as described in part 5 of that same tutorial. I stretched it a bit with HistogramTransformation to reveal some dimmer stars. According to the tutorial, it helps to make the stars a little bit bigger before convolving this with the range mask, so I opened up MorphologicalTransformation and copied the tutorial's instructions.
I skipped the part of the tutorial that makes the super-bright stars all black, because none of mine were over the nebulous region of the galaxy. I skipped ahead to making the more pronounced stars over nebulosity have more protection.
Next came smoothing the mask using ATrousWaveletTransform; I applied it twice with the recommended settings to blur the mask.
Finally I could apply the mask.
After all of this, I lost track of what I was doing! I had to scroll back up as I was writing this to remember: deconvolution! I opened up the Deconvolution process, clicked on the External PSF tab, and gave it the star model I made earlier with DynamicPSF. I set the other settings recommended by the tutorial and created a preview so I could play with the number of iterations without waiting forever for it to complete. All the way up to 50, it had not converged yet, so I went ahead and ran 50 iterations on the whole image.
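For intuition about why the iteration count matters: the classic engine behind this is Richardson-Lucy iteration, one of the algorithms PixInsight's Deconvolution process offers. Here is a bare-bones 1-D version of my own (no regularization, no 2-D PSF) where each pass pushes the blurred profile back toward a point:

```python
# Minimal 1-D Richardson-Lucy deconvolution sketch. The PSF here is symmetric,
# so convolution and correlation coincide and edge handling stays simple.
def convolve(signal, kernel):
    k = len(kernel) // 2
    return [sum(signal[i + j - k] * kernel[j]
                for j in range(len(kernel))
                if 0 <= i + j - k < len(signal))
            for i in range(len(signal))]

def richardson_lucy(blurred, psf, iterations=50):
    estimate = [1.0] * len(blurred)          # flat starting guess
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        predicted = convolve(estimate, psf)
        ratio = [b / p if p > 0 else 0.0 for b, p in zip(blurred, predicted)]
        correction = convolve(ratio, psf_flipped)
        estimate = [e * c for e, c in zip(estimate, correction)]
    return estimate

psf = [0.25, 0.5, 0.25]                      # a blurry "seeing" kernel
sharp = [0, 0, 0, 10, 0, 0, 0]               # one star
blurred = convolve(sharp, psf)               # the star smeared to 3 pixels wide
restored = richardson_lucy(blurred, psf, iterations=50)
print(round(restored[3], 2))                 # the peak climbs back toward 10
```

More iterations sharpen further but also amplify noise, which is why the real process pairs the iteration count with regularization and a protective mask.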
The difference is not enormous for all the work I did to get there, but you can definitely tell that the image is sharper. Pretty cool! All right, time to stretch! Do not forget to remove the mask! I almost did.
Stretching
When I use Photoshop to process, the very first thing I do is stretch. But in PixInsight, there are a lot of processes that work better on unstretched (linear) data. When your image comes out of stacking, all the brightness data are compressed into a very small region.
Stretching makes the data in that peak fill up more of the range of brightnesses so that you can actually see it. All the data is there; it just appears very dark in its linear state.
I opened up HistogramTransformation, turned off the screen stretch, and reset the histogram window. The image was quite dark. I opened up a real-time preview so I could see what I was doing. I moved the gray point slider (the middle one) toward the left, and then I zoomed in on the lower histogram. The upper one shows what the resulting histogram will look like after the stretch is applied, and the preview window shows what the image will look like.
Stretching is a multi-step process; I hit Apply, then Reset, and then I moved the gray point slider some more. The histogram changes each time as the data fill up more of the brightness range. As you stretch, the histogram will move off the far-left edge, and you can kill some extra background there if needed by moving the black point up to the base of the peak. Do not go too crazy, though: astro images with perfectly black backgrounds tend to look "fake." After a few iterations, my image was non-linear, and a screen stretch was no longer required.
Then I did the same process with the RGB image. With those two killer images, it was time to combine the luminance with the RGB!
Applying L to RGB
Since the luminance filter passes all visible wavelengths, that image tends to have a higher SNR (signal-to-noise ratio), and thus finer detail, because the detail is not lost in the noise.
While you can make a good image with just RGB, applying a luminance channel can really make the fine details come out, plus give you more flexibility with contrast, since you can act on the L alone and not do weird things to the color. The application process is simple and is described in part 3 of the "Combining Monochrome R, G and B Images into a Colour RGB Image and applying Luminance" tutorial. I opened up LRGBCombination and disabled the R, G, and B channels, since they were already combined. I selected the L image, left the channel weights set to 1, and left the other settings as they were, since I would play with saturation and lightness in CurvesTransformation later on. I did tick "Chrominance Noise Reduction." Then I made sure the RGB image window was active and hit Apply.
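For intuition, applying L boils down to replacing the color image's brightness with the cleaner luminance while keeping the hue. A deliberately crude sketch of my own (real LRGB combination works in a proper lightness-preserving color space, which this is not):

```python
# Crude illustration: rescale each RGB pixel so its mean brightness matches
# the corresponding pixel of the high-SNR luminance channel.
def apply_luminance(l, rgb):
    out = []
    for lum, (r, g, b) in zip(l, rgb):
        current = (r + g + b) / 3 or 1e-9  # naive brightness of the color pixel
        scale = lum / current              # how much to rescale toward L
        out.append((r * scale, g * scale, b * scale))
    return out

# One pixel: the noisy color data says brightness 0.2, but the smooth
# luminance channel says 0.3, so the color ratios are kept and rescaled.
print(apply_luminance([0.3], [(0.1, 0.2, 0.3)]))
```

Because the color ratios are preserved, only the detail and contrast change, which is why you can process L aggressively without wrecking the hues.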
This is getting awesome!!
Almost there, folks, almost there... First, I applied HDRMultiscaleTransform to boost the dynamic range, just to see what it does. It works best with a range selection mask, and so I re-used the combined range-selection and star mask I made earlier.
You know, I was not a fan. The core is gone! Undo! I tried fewer layers (4), but it was even worse. So I increased to 10, and that looked much better!
It provided more definition in the arms. Definitely awesome. And, lastly, I applied a CurvesTransformation (after removing the mask first...). I used a real-time preview window to see what I was doing. I moved the whole RGB/K curve into more of an S-shape, and then bumped up the saturation in the middle. Drumroll please... All right, here it is!
Now that is a dang fine image! When can I get me one of these telescopes and one of these cameras??
As far as processing goes, this data was easy to work with. I could have done nothing to it but stretch and combine, and it would have been awesome.
The colors were very nearly balanced already; I think PhotometricColorCalibration only did a tiny bit.
I just cannot get enough of the detail in this image! I have zoomed in and looked all around, and everywhere there is something sharp and marvelous. Excellent detail in the H II regions and other nebulous regions, several tiny background galaxies, and so many stars resolved in the galaxy! I showed off the image to my coworkers, and they also could not get over the level of detail that was revealed. The fact that I can see structure in the nebulae of another galaxy 2.7 million light-years away just leaves my jaw on the floor. Thanks again to Tolga and Cary for letting me have this data and this experience!