# Easy 2D Signal-to-Noise Ratio (SNR) calculation for images to find stars without extracting the background noise (C++)

This article describes how a 2D signal-to-noise ratio (SNR) can be used to decide if there is a star in a selected image region. Long story short - I was looking for a way to detect more or less reliably whether a user selected a region which contains a star. I wanted to be able to clearly distinguish between the following two images:

#### Solution with the CImg library

After a long journey I finally ended up with the following working solution based on the CImg library which in a way calculates the Signal-to-noise ratio (SNR):

For the two images above the code gives the following results:

For many people this is where the journey ends. But for some of you it may just begin 🙂 Follow me into the rabbit hole and find out why the solution shown above actually works…

#### SNR = SNR?

Before I decided to write this article I knew the term SNR basically from my studies in the field of electrical engineering. At that point in time it was basically a theoretical definition which stated that the SNR is defined as the ratio of signal power to noise power. This sounds logical but it is also a bit abstract and may not help you much.
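Spelled out, that textbook definition is a plain power ratio, often quoted in decibels:

$$\mathrm{SNR} = \frac{P_{signal}}{P_{noise}}, \qquad \mathrm{SNR}_{dB} = 10 \cdot \log_{10}\!\left(\frac{P_{signal}}{P_{noise}}\right)$$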

Basically the problem is that you need both - the "actual signal" and the "noise signal" - to calculate their powers and finally the resulting SNR. But more about that later…

First, I would like to get further down to the actual meaning of the SNR in image processing. For electrical signals measured in voltage it may be hard to imagine exactly what the definition above means, but for 2D signals - i.e. images - it is much easier to visualize the meaning of the signal-to-noise ratio.

During my research I found some more or less valuable discussions about the calculation of the SNR - some of them on researchgate.net and on dspguide.com. But let's start with the definitions given by Wikipedia (at least for scientific purposes it is still a valuable source of information).

Traditionally, the SNR has been defined as the ratio of the average signal value $\mu_{sig}$ to the standard deviation $\sigma_{bg}$ of the background:

$$\mathrm{SNR} = \frac{\mu_{sig}}{\sigma_{bg}}$$

with

$$\mu_{sig} = \frac{1}{N \cdot M} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} x(i,j)$$

and

$$\sigma_{bg} = \sqrt{\frac{1}{N \cdot M} \sum_{i=0}^{N-1} \sum_{j=0}^{M-1} \big(x_{bg}(i,j) - \mu_{bg}\big)^2}$$
So basically for the average signal $\mu_{sig}$ all pixel values are summed up and divided by the number of pixels. This results in an average "brightness" of the image, which goes into the numerator.

The denominator of the equation is the standard deviation of the background $\sigma_{bg}$. In this case the standard deviation measures the strength of the noise: the higher the standard deviation - i.e. the deviation of each single pixel from the mean background value - the stronger the noise. This way the denominator increases with stronger noise and the SNR decreases.

#### Getting rid of the chicken-and-egg problem

So far, so good… but as already mentioned above, we only have the one image with the star, which includes both - the signal (star) and the noise (background). Without additional effort (automatic detection or user interaction) a program which should calculate the SNR has no way to determine $\sigma_{bg}$.

The first approach to get rid of this chicken-and-egg problem is proposed in the same Wikipedia article (even though the intention behind it is a different one):

In case of a "high-contrast" scene, many imaging systems set the background to uniform black, forcing $\sigma_{bg}$ to zero. This way the previous definition of the SNR no longer works. Instead $\sigma_{bg}$ is replaced by the standard deviation $\sigma_{sig}$ of the signal. The new SNR is then defined as:

$$\mathrm{SNR} = \frac{\mu_{sig}}{\sigma_{sig}}$$

If you look at the equation you may notice that this SNR only depends on the one image signal and no longer requires an extraction of the background signal.

So I thought this might already be a possible solution. I implemented the following small program to test this algorithm on the given star image.

Executing this program gives the following result:

As you can see, it is already possible to distinguish between having a star and not having one. That's actually good news. But the value is somehow inversely proportional and thus a little counter-intuitive: at least from my perspective it would be nicer if the SNR increased as the signal grows relative to the noise. You could of course achieve that by inverting the fraction. But one problem that remains, in my eyes, is the great range in which the values can move. The following picture illustrates the different SNR values for each star:

#### Yet another definition of SNR

Then I stumbled upon the following PDF document

https://www2.ph.ed.ac.uk/~wjh/teaching/dia/documents/noise.pdf

which defines another SNR as:

$$\mathrm{SNR} = \frac{\sigma^2_{sig}}{\sigma^2_{noise}}$$
The document explains in detail how the SNR can be calculated IF the noise can be measured somewhere in the image - e.g. in the background - or the noise is otherwise known. Basically this is the same requirement as in the Wikipedia article, but this time I realized that it is sufficient to know what the noise looks like (this is of course equally true for the equation shown in the Wikipedia article). I did not put additional effort into researching this, but I guess the noise in astronomical images is similar in most cases.

Based on this assumption we would be able to come up with a value for the noise variance even without measuring it in the image - instead we estimate it. When I looked closer at the CImg API I realized that this idea (of course…) is not new; there is even a function which does exactly that:

The CImg library provides a variance estimator function (variance_noise()):

http://cimg.eu/reference/structcimg__library_1_1CImg.html#a1c6097b1ed4f54e4b3ff331f6a2dc7d6

With this function a more or less typical noise variance is estimated, so that a more complicated background extraction and noise measurement does not need to be implemented. It is even possible to choose between different estimators. For my purpose I decided to use the "second moment" (variance_noise(0)) to calculate the noise estimate.

#### The final solution

This in the end led to the final piece of code, which turned out to work quite well:

The example above can be compiled as shown below. It requires three additional libraries: libCCfits and libcfitsio are just used for loading a FITS example file; I think the dependency on libpthread is introduced by libcfitsio.
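The original compile line is not preserved; based on the listed dependencies it would have looked roughly like this (the file names are hypothetical):

```sh
# Hypothetical file name; links against CCfits, cfitsio and pthread.
g++ -o snr_example snr_example.cpp -lCCfits -lcfitsio -lpthread
```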

The execution of the resulting program for the two different images as shown above gives the following result:

#### Conclusion and summary

When you look at those numbers they look somehow intuitive. For pure noise we have a small SNR value. For a weak star the value is already several times bigger, and for a "high-contrast" star it is significantly bigger still. The following picture shows the different SNR values for each star image: