More short exposures or fewer long exposures –
Which is better?
When I first started out with deep sky imaging, I too wanted to know the answer to this question. Would more short exposures or fewer long exposures be the best way to go for deep sky imaging? There were so many different opinions, depending on who I asked, and at the time I thought I’d never get to the bottom of it! Eventually, after doing my homework and plenty of research, I managed to get the answers I was looking for. So I decided to put together the following article, so that anyone else wondering the same thing will also have the answer!
So what do we mean by “more short exposures or fewer long exposures?”
For example, you can take 12 subs at 10 seconds each, giving a total exposure time of 120 seconds. Or you can collect the same overall exposure time using fewer long exposures, e.g. 2 subs at 60 seconds each (again totalling 120 seconds). So the simple question is this: which method gives you the more complete image, with the most detail and the least noise?
First I’m going to go through a few fundamentals, explaining how an image actually builds up on the camera’s sensor during an imaging session. This will help you understand how we arrive at the final answer (don’t cheat by skipping to the end!).
What do we mean by noise?
When we talk about noise, we mean the elements of an image recorded by the camera’s sensor (chip) which we don’t want. In an ideal world we would only want to receive the pure photons of light from the object, and that’s it! Unfortunately, the reality is that other information (the noise) is recorded at the same time. When this mixes in with the good information (the signal), it degrades the image. This noise comes in three main forms: photon noise, read noise and dark noise.
Photon noise, or “shot noise”, is actually created by the object itself. The photons received by the camera’s chip arrive at random intervals, so over a set period of time the number of photons collected will naturally vary. This means that each sub taken over the same length of time will have received a slightly different number of photons, and it’s this randomness that produces this type of noise.
Read noise is caused by the internal electronics of the camera. It arises from the various processes which happen between the chip receiving the photons and converting them into the final digital values which form the image.
Thermal noise, sometimes referred to as dark noise or dark current, is created by thermal variations in the camera’s chip. It will be significantly higher when using an uncooled DSLR camera, whereas cooled CCD cameras remove almost all of this type of noise.
What can we do about all this noise then?
Here are the main steps we can take to remove a good amount of this unwanted noise.
Take dark frames
For example, thermal noise can mostly be removed by taking dark frames. These must be taken with the camera lens or scope cap fitted so no light can reach the camera’s chip, and with the same exposure time and temperature as the light frames (the subs). The information recorded in these dark frames can then be subtracted from the light frames during the stacking process. This is an effective method, but it’s difficult to keep the temperatures matched during an imaging session.
As you can imagine, a session running over a period of 2 hours, for example, could experience quite large temperature variations. The ideal would be to take a dark sub straight after every light sub, but this will lose you precious clear sky time! For this reason, a lot of people tend to take their dark frames at the end of the imaging session.
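To make the dark subtraction step concrete, here is a minimal sketch in Python with NumPy. The frame values are made up for illustration (in practice you would load real FITS or RAW data); the 200-count offset stands in for light from the sky and object:

```python
import numpy as np

# Hypothetical frames as 2-D arrays (real frames would be loaded from files).
# Each pixel carries a fixed thermal/offset level of 1000 counts plus
# Poisson-distributed thermal noise; the light frame adds 200 counts of signal.
rng = np.random.default_rng(0)
darks = [1000 + rng.poisson(50, (4, 4)).astype(float) for _ in range(3)]
light = 1000 + rng.poisson(50, (4, 4)).astype(float) + 200.0

# Combine the darks into a "master dark" (a median is robust to hot pixels).
master_dark = np.median(darks, axis=0)

# Subtract the master dark from the light frame, leaving mostly real signal.
calibrated = light - master_dark
```

Stacking software such as DeepSkyStacker does essentially this for you once you supply the dark frames alongside the lights.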
Higher signal to noise ratio
The main thing we can do to reduce the effect of noise is to record more signal in the imaging process. The longer the exposure time, the lower the overall percentage of noise present (a higher signal to noise ratio, or SNR). This is because of something called the square root rule: as the signal increases with longer exposure times, the shot noise only increases to the value of the square root of the signal.
In other words, if the signal increases by 100 times, the noise only increases by 10 times, the square root of the signal increase (so the noise is just 10% of the signal). Whereas if the signal increases by 400 times, the noise increases by 20 times, again the square root of the signal increase (so the noise is only 5% of the signal). You can see here that even though the absolute amount of noise is higher, as a percentage of the overall signal it’s a much lower ratio! This in turn produces a much better image result.
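The square root rule above is easy to check with a couple of lines of Python. The function below just encodes “noise = square root of signal”, so SNR works out to the square root of the signal too:

```python
import math

def snr(signal):
    # Shot noise grows as the square root of the signal,
    # so SNR = signal / sqrt(signal) = sqrt(signal).
    return signal / math.sqrt(signal)

# Signal x100 -> noise x10, i.e. noise is 10% of the signal.
print(snr(100.0))  # 10.0
# Signal x400 -> noise x20, i.e. noise is only 5% of the signal.
print(snr(400.0))  # 20.0
```

Quadrupling the signal only doubles the noise, which is exactly why longer exposures look cleaner.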
Stacking the subs
The next thing to consider is the stacking process. Apart from subtracting the bad data using the dark frames, flat frames and bias frames, taking a number of exposures and stacking them together mainly achieves two things: increased signal to noise ratio and increased dynamic range.
Increasing signal to noise ratio
The reason stacking increases the signal to noise ratio is that every exposure carries random noise. Because the noise is random while the signal is not, the stacking software can average a lot of this noise out, smoothing or diluting it.
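You can see this averaging effect in a quick simulation (illustrative numbers only): 16 “subs” of the same pixels, each with the same signal but fresh random noise, averaged into one stack. The signal survives untouched while the noise shrinks by roughly the square root of the number of subs:

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 100.0
noise_sigma = 10.0
n_subs = 16

# Simulate 16 exposures of 1000 pixels: constant signal plus random noise.
subs = true_signal + rng.normal(0.0, noise_sigma, size=(n_subs, 1000))

# Averaging the stack keeps the signal at ~100 but shrinks the random
# noise by about sqrt(n_subs): 10 / sqrt(16) = 2.5.
stacked = subs.mean(axis=0)
print(stacked.std())  # ~2.5
```

This is the “smoothing out” the stacking software gives you, and why 16 subs leave you with roughly a quarter of the per-sub noise.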
Increasing dynamic range
Simply put, dynamic range is the difference between the brightest and dimmest values a pixel on the camera’s chip can record. Each pixel receives a number of photons during an exposure, and it’s the build-up of these photons which gives the brightness value. Stacking multiple images combines all these values together, increasing the pixel values and therefore the range of brightness levels, leading to an image which reveals the fainter areas.
Coaxing that detail out
After the stacking process has been carried out, the image will then need to be processed. It’s during this final processing stage that all the hidden data, or “signal”, is revealed. This is known as stretching the image, mainly done by adjusting the curves and levels settings in whichever software you are using. The problem here is that you won’t just be bringing out the “signal”, you’ll be bringing out the “noise” too.
Going through all the processes outlined above will go a long way towards ensuring this noise is reduced to a minimum. The less noise there is, the more you will be able to stretch the image without spoiling the end result. Keeping the image nice and smooth with minimal grainy effects is what we’re looking for here!
So going back to the original question…..
We have already said that longer exposures lead to a higher signal to noise ratio, which for the reasons outlined above is more beneficial for producing a smooth and detailed image. This will enable you to stretch an image further, eking that lovely signal out without the noise rearing its ugly head and degrading the end result!
The next thing we can say is that using shorter exposure times gives you limited signal collection. In other words, the fainter parts of the object won’t have registered on the camera’s chip at all, because the chip will not have had time to collect enough photons from those parts of the object to build a usable signal. So no matter how many images you then stack, zero multiplied by any number of subs is still zero data! I have heard people say “the data is there, it’s just very faint”. The answer to this is simply that the silicon chip receiving the signal from the object has its limits and will always miss some signal.
There is also something called the “absolute sensitivity threshold”. This is the point at which the signal received (the number of photons) is equivalent to the amount of noise. Only signal greater than this threshold will be of any use to you and the image. Less signal and a low signal to noise ratio will also lose you colour accuracy in the image, as this weak signal will be swamped by the increased noise.
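The whole argument can be put into one toy formula. For a stack totalling the same exposure time, the object’s shot noise is the same either way, but every sub you add contributes its own dose of read noise. This sketch uses a simplified model with made-up numbers (5 photons/second, 10 electrons of read noise, no sky glow or dark current), not any real camera’s figures:

```python
import math

def stacked_snr(rate, total_time, n_subs, read_noise):
    """SNR of a stack of n_subs subs totalling total_time seconds.

    Simplified model: shot noise from the object's own signal plus one
    helping of read noise per sub; sky and dark current are ignored.
    """
    signal = rate * total_time
    noise = math.sqrt(signal + n_subs * read_noise ** 2)
    return signal / noise

# Same 120 s total: 12 x 10 s versus 2 x 60 s.
print(stacked_snr(rate=5.0, total_time=120.0, n_subs=12, read_noise=10.0))  # ~14.1
print(stacked_snr(rate=5.0, total_time=120.0, n_subs=2, read_noise=10.0))   # ~21.2
```

With this model the 2 x 60 s stack comes out noticeably ahead of the 12 x 10 s stack, purely because it pays the read noise penalty only twice instead of twelve times.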
Hopefully I have explained this well enough and you can now see the obvious answer: fewer longer exposures give better results!
The two images below are quick and dirty unguided sub stacks of M101, the Pinwheel Galaxy, without any dark, flat or bias frames included. The total exposure times are the same, both at ISO 800, and I carried out some straightforward stretches in Photoshop, trying to pull out as much detail in both images as I could.
You can clearly see that the 2-sub version has more detail, and even though both images are very noisy, there is less noise in this image.
Obviously these images are very rough looking, as a 2 minute total exposure time using an unmodded DSLR is nowhere near enough to get any kind of decent image. Remember you still need to take as many subs as you can to build up total exposure time and get the best results possible.
There are of course limits to how long you will be able to expose for. This all depends on the type of camera you have, the accuracy of the tracking, the type of objects you are imaging, the scope’s or lens’s optics and so on. You will also get to a point where you are saturating the camera’s chip, either with too much light from certain parts of the image or with too much thermal noise coming into play. You will need to find the sweet spot to get the best results. Remember, “signal” is king! Just make sure these subs are long enough to collect enough signal without literally “overcooking” things!
Hope you have found this article enlightening and that it’s answered the question clearly. Maybe next time someone asks you whether more short exposures or fewer long exposures is better, you will be able to explain, or even point them to this article.
If you have any questions please leave a comment and I will answer the best I can.
May your long exposures be plentiful and long!
Oh… and did I mention, don’t forget to share. Clear skies!