Long story short - I was looking for a way to detect more or less reliably whether a user-selected region contains a star. I wanted to be able to clearly distinguish between the following two images:

After a long journey I finally ended up with the following working solution based on the CImg library:

```cpp
CImg<uint16_t> image;
...
double q = image.variance(0) / image.variance_noise(0);
double qClip = (q > 1 ? q : 1);
double snr = std::sqrt(qClip - 1);
```

For the two images above the code gives the following results:

```
~/snr$ ./snr no_star.fits
SNR: 0.0622817
~/snr$ ./snr test_star.fits
SNR: 1.5373
```

For many people this is where the journey ends. But for some of you it may just begin 🙂 Follow me into the rabbit hole and find out why the solution shown above actually works…

Before I decided to write this article I knew the term SNR basically from my studies in the field of electrical engineering. At that point in time it was mostly a theoretical definition which stated that the SNR is defined as the **ratio of signal power** to the **noise power**. This sounds logical but it is also a bit abstract and may not help you much.

Basically **the problem** is that **you need both** - the "actual signal" and the "noise signal" - to calculate their powers and finally the resulting SNR. But more about that later…

First, I would like to dig further into the actual meaning of the SNR in image processing. For electrical signals measured in voltage it may be hard to imagine exactly what the definition above means, but for 2D signals - i.e. images - it is much easier to visualize the meaning of the signal-to-noise ratio.

During my research I found some more or less valuable discussions about the calculation of the SNR - some of them on researchgate.net and on dspguide.com. But let's start with the definitions given by Wikipedia (at least for scientific purposes it is still a valuable source of information).

Traditionally, the SNR has been defined as the ratio of the average signal value $\mu_{sig}$ to the standard deviation $\sigma_{bg}$ of the background:

$$\mathrm{SNR} = \frac{\mu_{sig}}{\sigma_{bg}}$$

with

$$\mu_{sig} = \frac{1}{N \cdot M} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} sig(x,y)$$

and

$$\sigma_{bg} = \sqrt{\frac{1}{N \cdot M} \sum_{x=0}^{N-1} \sum_{y=0}^{M-1} \left( bg(x,y) - \mu_{bg} \right)^2}$$

So basically for the **average** signal all pixel values are summed up and divided by the number of pixels. This results in the average "brightness" of the image, which goes into the numerator.

The denominator of the equation is the **standard deviation** of the background $\sigma_{bg}$. In this case the standard deviation measures the strength of the noise: the higher the standard deviation - i.e. the deviation of each single pixel from the mean pixel value - the stronger the noise. This way the denominator increases with stronger noise and the SNR decreases.

So far, so good… but as already mentioned above, we only have the one image with the star, which includes both - the signal (star) and the noise (background). Without additional effort (automatic detection or user interaction) a program that should calculate the SNR will not be able to separate the two.

The first approach to get rid of this chicken-and-egg problem is proposed in the same Wikipedia article (even though the intention behind it is a different one):

In case of a "high-contrast" scene, many imaging systems set the background to uniform black, forcing $\mu_{bg}$ to 0. This way the previous definition of the SNR no longer works. Instead, $\sigma_{bg}$ is replaced by the standard deviation $\sigma_{sig}$ of the signal. This way the new SNR is defined as:

$$\mathrm{SNR} = \frac{\mu_{sig}}{\sigma_{sig}}$$

If you look at the equation you may notice that this SNR only depends on the one image signal and no longer requires an extraction of the background signal.

So I thought this already is a possible solution. I implemented the following small program to test this algorithm on the given star image.

```cpp
CImg<float> image;
...
double meanOfImage = image.mean();
double varianceOfImage = image.variance(0 /* 0 = second moment */);

double snr = meanOfImage / varianceOfImage;

std::cout << "SNR: " << snr << std::endl;
```

Executing this program gives the following result:

```
~/snr$ ./snr_high_contrast no_star.fits
SNR: 1.76239
~/snr$ ./snr_high_contrast test_star.fits
SNR: 0.604044
~/snr$ ./snr_high_contrast high_contrast_star.fits
SNR: 0.000659178
```

As you can see, it is already possible to distinguish between having a star and not having a star. That's actually good news. But the value is somehow inversely proportional and thus a little bit counter-intuitive: at least from my perspective it would be nicer if the SNR increased when the signal increases compared to the noise. You could of course achieve that by inverting the fraction. But one problem that remains, in my eyes, is the great range in which the values can move. The following picture illustrates the different values for each star:

Then I stumbled upon the following PDF document

https://www2.ph.ed.ac.uk/~wjh/teaching/dia/documents/noise.pdf

which defines another SNR as:

$$\mathrm{SNR} = \sqrt{\frac{\sigma^2_{img}}{\sigma^2_{noise}} - 1}$$

The document explains in detail how the SNR can be calculated **IF** the noise can be measured somewhere in the image - e.g. in the background - **or** the noise **is known** at least. Basically this is the same requirement as in the Wikipedia article, but this time I realized that it is sufficient to know what the noise looks like (this is of course equally true for the equation shown in the Wikipedia article). I did not put additional effort into researching this, but I guess the noise in astronomical images is similar in most cases.

Based on this assumption we would be able to calculate a value for $\sigma^2_{noise}$ even if we have no separate noise image. Instead we *estimate* this value. When I looked closer at the CImg API I realized that this idea (of course…) is not new: there is even a function which does exactly that. The CImg library provides a variance estimator function, *variance_noise()*:

http://cimg.eu/reference/structcimg__library_1_1CImg.html#a1c6097b1ed4f54e4b3ff331f6a2dc7d6

With this function a more or less typical noise variance is estimated, so that a more complicated background extraction and noise measurement does not need to be implemented. It is even possible to choose between different estimators. For my purpose I decided to use the "second moment" (*variance_noise(0)*) to calculate the noise estimation.

This in the end led to the final piece of code, which turned out to work quite well:

```cpp
#include <cmath>    // std::sqrt()
#include <memory>
#include <iostream>

#include <CCfits/CCfits>

// Disable X11 dependency
#define cimg_display 0
#include <CImg.h>

using ImageT = cimg_library::CImg<uint16_t>;

void readFits(ImageT * outImg, const std::string & inFilename) {
    std::unique_ptr<CCfits::FITS> pInfile(new CCfits::FITS(inFilename, CCfits::Read, true));
    CCfits::PHDU & image = pInfile->pHDU();

    // Read all user-specified, coordinate, and checksum keys in the image
    image.readAllKeys();

    // Set image dimensions
    outImg->resize(image.axis(0) /*x*/, image.axis(1) /*y*/, 1 /*z*/, 1 /*1 color*/);

    // At this point the assumption is that there is only 1 layer
    std::valarray<unsigned long> imgData;
    image.read(imgData);

    cimg_forXY(*outImg, x, y) {
        (*outImg)(x, outImg->height() - 1 - y) = imgData[outImg->offset(x, y)];
    }
}

int main(int argc, char *argv[]) {
    ImageT image;

    if (argc != 2) {
        std::cerr << "Usage: ./snr <my_fits_file.fits>" << std::endl;
        return 1;
    }

    std::string filename = argv[1];

    try {
        readFits(&image, filename);

        double varianceOfImage = image.variance(0 /* 0 = Calc. variance as "second moment" */);
        double estimatedVarianceOfNoise = image.variance_noise(0 /* Uses "second moment" to compute noise variance */);

        double q = varianceOfImage / estimatedVarianceOfNoise;

        // The estimated variance of the noise will differ from the noise in the
        // real image. Therefore it can happen that q becomes < 1. If that
        // happens it should be limited to 1.
        double qClip = (q > 1 ? q : 1);
        double snr = std::sqrt(qClip - 1);

        std::cout << "SNR: " << snr << std::endl;
    } catch (CCfits::FitsException & exc) {
        std::cerr << "Error loading FITS file: " << exc.message() << std::endl;
        return 1;
    }

    return 0;
}
```

The example above can be compiled as shown below. It requires three additional libraries. *libCCfits* and *libcfitsio* are just used for loading a FITS example file. I think the dependency on *libpthread* is introduced by *libcfitsio*.

```
~/snr$ g++ snr.cpp -lCCfits -lcfitsio -lpthread -o snr
```

The execution of the resulting program for the different images as shown above gives the following result:

```
~/snr$ ./snr no_star.fits
SNR: 0.0622817
~/snr$ ./snr weak_star.fits
SNR: 1.5373
~/snr$ ./snr high_contrast_star.fits
SNR: 12.2823
```

When you look at those numbers they seem quite intuitive. For pure noise we have a small value. For a weak star the value is already about 25 times bigger, and for a "high-contrast" star it is still significantly bigger. The following picture shows the different SNR values for each star image:

The three test files can be downloaded here:

- https://www.lost-infinity.com/wp-content/uploads/no_star.fits
- https://www.lost-infinity.com/wp-content/uploads/weak_star.fits
- https://www.lost-infinity.com/wp-content/uploads/high_contrast_star.fits

In the end, when it turned out to be that easy, I was wondering if it was even worth writing an article about it. However, if you read up to here, I hope it saved you some time 🙂 For further research I added the most important resources below:

- https://en.wikipedia.org/wiki/Signal-to-noise_ratio_(imaging)
- https://www.researchgate.net/post/How_is_SNR_calculated_in_images
- https://www.dspguide.com/ch25/3.htm
- https://www2.ph.ed.ac.uk/~wjh/teaching/dia/documents/noise.pdf
- https://nptel.ac.in/courses/117104069/chapter_1/1_11.html
- https://nptel.ac.in/courses/117104069/chapter_1/1_12.html
- https://kogs-www.informatik.uni-hamburg.de/~neumann/BV-WS-2010/Folien/BV-4-10.pdf

Clear skies!