Segmenting and quantifying objects in images

From Endrov

One of the most common tasks is detecting objects in images (segmentation) and quantifying their properties. Endrov allows you to do this in several ways. This is a short introduction to some of the most common methods and how to apply them with flows.

Basics

The process usually follows this pattern:

Basicsegmentationflow.png

The first step is grabbing the image you want to process (here called ch0). Then you usually need to "massage" the image before the different objects can be separated. Finally, the objects are measured and the output is stored somewhere.

Example approach 1: Thresholding

Threshold1D.svg

One of the classic techniques is a method called "thresholding". There are many such algorithms, but they all boil down to defining an intensity level where pixels above the level are "foreground" and the rest are "background". This level can either be picked automatically using some optimality criterion, or it can be set manually. It is very important that you keep your exposure time settings fixed when using this technique (but you should always do so before comparing images).

The most classic automatic algorithm is called Otsu thresholding:

Otsuthreshold.png

However, we prefer setting the threshold manually, as it gives more control and as it can then be set to the same level for all the images. This is done using a purely mathematical operation:

Manualthreshold.png
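In other words, the manual threshold is just a "greater than" comparison between the image and a constant. A NumPy sketch of the same operation (the level and image values are made up):

```python
import numpy as np

level = 100.0                      # the manually chosen cut-off
img = np.array([[50.0, 150.0],
                [99.0, 101.0]])
mask = img > level                 # True = foreground, False = background
```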


Showing the output

In both examples above we have created a new output channel. To do so, you need only create a new reference, with an unused name. To get the output, right-click this channel and select evaluate.

Then use e.g. the 2D viewer. Auto-adjust the contrast/brightness to be able to see the image. You can use an overlay to evaluate the performance:

Manualvsotsu.svg

With this channel setup:

Overlaybar.png

Noise reduction

Thresholding is hypersensitive to noise:

Threshold1Dnoise.png

You can get much better regions by first smoothing the image. This is what it looks like if you first perform a simple Gaussian blur. The sigma of the blur decides how much the image is evened out. Evening out the image too much will, however, also remove features.
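The blur-then-threshold idea can be sketched as follows. This is a plain separable Gaussian blur in NumPy; the noisy test image, sigma, and cut-off level are all invented for illustration:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur; larger sigma evens out the image more."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # convolve rows, then columns (the Gaussian kernel is separable)
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)
    return out

rng = np.random.default_rng(1)
noisy = np.zeros((64, 64))
noisy[24:40, 24:40] = 100.0                  # a bright square object
noisy += rng.normal(0, 30, noisy.shape)      # heavy noise

mask_raw = noisy > 50                        # thresholding the raw image
mask_blur = gaussian_blur(noisy, 2.0) > 50   # thresholding after blurring
```

The raw mask is littered with isolated false-positive pixels from the noise; after the blur the mask is essentially just the square.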


Background removal

Another problem with thresholding is that it inherently cannot cope with varying background levels: it is simply not possible to find a suitable cut-off level (see image below). A varying background is easy to induce by just having a slightly misaligned lamp or an unevenly thick sample.

Threshold1Dvaryingbg.svg

The most common and simplest solution to the problem is to do a local background subtraction. To calculate the background one can use a "rolling average". For example, you can calculate the average in a rectangle around each point and then subtract it:

Rollingaveragerect.png
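A minimal sketch of rectangular rolling-average background subtraction, with a made-up tilted background and object (border pixels are handled here by repeating the edge value, which is one of several reasonable choices):

```python
import numpy as np

def rolling_average(img, radius):
    """Mean over a (2*radius+1)^2 rectangle around each pixel (box filter)."""
    k = np.ones(2 * radius + 1) / (2 * radius + 1)
    def smooth(v):
        padded = np.pad(v, radius, mode="edge")   # repeat border values
        return np.convolve(padded, k, mode="valid")
    out = np.apply_along_axis(smooth, 1, img)
    return np.apply_along_axis(smooth, 0, out)

# A small dim object on a strongly tilted background: the object
# (about 77-82) is darker than the bright side of the background
# (about 94), so no global cut-off on the raw image can isolate it.
y, x = np.mgrid[0:64, 0:64]
img = 1.5 * x                       # background ramps from 0 to ~94.5
img[30:34, 18:22] += 50.0           # the object itself

corrected = img - rolling_average(img, 8)   # local background subtraction
mask = corrected > 25               # a single global cut-off now works
```

After subtraction the ramp cancels almost exactly and the object stands ~47 units above zero, so the mask contains only the 16 object pixels.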

Sometimes you get a better result using a circle around your point instead; however, it is slightly slower to execute. You have to try and see what makes most sense in your case.

Note on rolling average: if you work with floating point (likely), you will obtain a very large number of intensity levels. However, the automatic threshold calculators are sensitive to having too many levels. To get around this problem, you should quantize the intensities, essentially generating an image with fewer levels. This is done like this:

Quantizeflow.png

The more levels, the better the precision, but also the longer the calculation time.
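The quantization step amounts to binning the floating-point range into a fixed number of levels; a hedged NumPy sketch (the example values and the choice of 4 levels are made up):

```python
import numpy as np

def quantize(img, levels):
    """Map a floating-point image onto a fixed number of intensity levels."""
    lo, hi = img.min(), img.max()
    step = (hi - lo) / levels
    q = np.floor((img - lo) / step)             # bin index of each pixel
    return np.clip(q, 0, levels - 1).astype(int)

img = np.array([0.0, 0.1, 3.7, 5.2, 9.99, 10.0])
q = quantize(img, 4)    # only 4 distinct levels remain: 0, 1, 2, 3
```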

Example approach 2: Watershedding

Watershed.svg

Watershedding is a different approach from thresholding. It is based on the idea of filling basins, starting from a number of seed points. In the example on the left there are two seed points, and time evolves downwards. Once the red and green basins meet, they stop growing laterally, and this defines the final region boundary. Without the second seed point, the green basin would simply have filled up the right basin as well, covering one large area.

Watershedding is less sensitive to noise and varying background than thresholding, but the same tricks can be applied here as well. Instead, a good watershed ultimately depends on having good seed points. These can be chosen in many ways, but usually as the local minima or maxima.
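The basin-filling idea can be sketched as a seeded priority flood: pixels are claimed in order of increasing intensity, and each newly claimed pixel inherits the label of the basin that reached it first. This is a generic textbook-style sketch, not Endrov's implementation, and the ridge image and seed positions are invented:

```python
import heapq
import numpy as np

def watershed(img, seeds):
    """Seeded watershed: grow basins from the seeds in order of increasing
    intensity; growth stops wherever two basins meet."""
    labels = np.zeros(img.shape, dtype=int)        # 0 = unassigned
    heap = []
    for label, (y, x) in enumerate(seeds, start=1):
        labels[y, x] = label
        heapq.heappush(heap, (img[y, x], y, x))
    while heap:
        _, y, x = heapq.heappop(heap)
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1] \
                    and labels[ny, nx] == 0:
                labels[ny, nx] = labels[y, x]      # inherit the basin label
                heapq.heappush(heap, (img[ny, nx], ny, nx))
    return labels

# Two basins (minima at columns 2 and 8) separated by a ridge at column 5
x = np.arange(10)
img = np.tile(np.minimum(np.abs(x - 2), np.abs(x - 8)) * 1.0, (10, 1))
labels = watershed(img, seeds=[(5, 2), (5, 8)])
```

Each basin fills up to the ridge; the boundary between label 1 and label 2 ends up at the ridge column, just as in the figure.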

Watershed with a simple seed selector

The simplest way is to just create a flow using the find local maxima operator:

Findminfeatureflow.png

However, it will pick up too many seed points to be useful. You can reduce this problem by first applying a Gaussian blur:

Findminfeatureflowavg.png

These are two example outputs, where only a mild blur has been applied. Note that some nearby seeds disappear, giving fewer seeds per peak.

Watershedblurcomparison.svg
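To make the effect concrete, here is a sketch of both steps on synthetic data: a brute-force local-maxima scan run with and without a prior Gaussian blur. The peak image, sigma, and the blur implementation are all illustrative assumptions, not Endrov's flow operators:

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (same sketch as in the noise-reduction section)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def local_maxima(img):
    """Interior pixels strictly greater than all 8 neighbours."""
    h, w = img.shape
    maxima = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            patch = img[y - 1:y + 2, x - 1:x + 2]
            if img[y, x] == patch.max() and np.sum(patch == patch.max()) == 1:
                maxima.append((y, x))
    return maxima

rng = np.random.default_rng(2)
img = rng.normal(0, 1, (32, 32))       # noise alone creates many spurious maxima
img[16, 16] += 50.0                    # one real peak

raw = local_maxima(img)                # many seeds, mostly noise
smoothed = local_maxima(gaussian_blur(img, 2.0))   # far fewer seeds
```

The blur removes most of the spurious seeds while a maximum near the real peak survives.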

Generating seeds with scale space theory

There is a whole lot of theory behind scale spaces, but for our purposes it boils down to this: Convolve the image with a Mexican hat wavelet. This is done like this:

The parameter sigma, as in the averaging example, is there to select the size of the feature you are looking for. Objects larger or smaller than the given size will be suppressed. Thus, if you have a large range of object sizes, you may have to combine several of these images to cover them all.
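A sketch of the convolution, assuming a discrete Mexican hat (Laplacian-of-Gaussian-shaped) kernel and a made-up blob image; the kernel formula and sizes are illustrative choices, not Endrov's exact operator:

```python
import numpy as np

def mexican_hat_kernel(sigma, radius=None):
    """Discrete Mexican hat: positive centre lobe, negative surround."""
    if radius is None:
        radius = int(4 * sigma)
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    r2 = x**2 + y**2
    k = (1 - r2 / (2 * sigma**2)) * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()          # zero total weight: flat regions respond with 0

def filter2d(img, k):
    """Direct 2-D filtering, 'same' size, zero padding
    (correlation, which equals convolution for this symmetric kernel)."""
    r = k.shape[0] // 2
    padded = np.pad(img, r)
    out = np.zeros_like(img, dtype=float)
    for dy in range(k.shape[0]):
        for dx in range(k.shape[1]):
            out += k[dy, dx] * padded[dy:dy + img.shape[0],
                                      dx:dx + img.shape[1]]
    return out

img = np.zeros((40, 40))
img[18:23, 18:23] = 1.0                      # a blob ~5 px across
response = filter2d(img, mexican_hat_kernel(sigma=2.5))
```

With sigma matched to the blob size, the response peaks at the blob centre; blobs much larger or smaller than the positive lobe give a weak response, which is the size selectivity described above.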


Quantification

Endrov does this kind of quantification using the Particle measure, which has its own article.