Segmentation is the process of identifying objects with similar characteristics in an image and labeling each object differently in the output image.
There are many methods for recognizing the objects in an image. Some use the histogram of the image to determine object thresholds (e.g. the Bhattacharyya method), but in this case I am using a neural network to determine which object each pixel belongs to.
The algorithm computes the mean value and the variance for each pixel over a window whose size can be changed by the user. These values are associated with each pixel and are used as the input to the neural network, which decides which object the pixel belongs to.
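The per-pixel feature extraction could be sketched as follows. This is a minimal illustration, not the original implementation: the function name `pixel_features` and the border handling (edge replication) are assumptions, since the text does not specify them.

```python
import numpy as np

def pixel_features(img, w=1):
    """Per-pixel mean and variance over a (2*w+1) x (2*w+1) window.

    Hypothetical helper illustrating the feature extraction described
    in the text; border pixels are handled by replicating the edges
    (an assumption -- the original behavior is not specified).
    """
    img = np.asarray(img, dtype=float)
    padded = np.pad(img, w, mode="edge")   # replicate borders
    h, wd = img.shape
    mean = np.empty_like(img)
    var = np.empty_like(img)
    for y in range(h):
        for x in range(wd):
            win = padded[y:y + 2 * w + 1, x:x + 2 * w + 1]
            mean[y, x] = win.mean()
            var[y, x] = win.var()
    return mean, var
```

With `w=1` each pixel gets a 3x3 window, matching the parameter values listed further down.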
The number of objects is equal to the number of neurons in the output layer and must be supplied by the user.
The neural network used is a circular Self-Organizing Map (by Teuvo Kohonen), whose planar (i.e. not circular) description can be found here.
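A circular 1-D SOM can be sketched like this: the neurons form a ring, so the neighborhood distance between two neurons is the shorter way around the circle. The Gaussian neighborhood and the exponentially decaying learning rate and radius are assumptions of this sketch, not details taken from the text.

```python
import numpy as np

def train_circular_som(samples, n_neurons=20, epochs=200,
                       lr0=0.5, sigma0=None, seed=0):
    """Minimal sketch of a Kohonen SOM whose neurons form a ring.

    samples: array of shape (n_samples, dim), e.g. per-pixel features.
    Returns the trained weight vectors, shape (n_neurons, dim).
    """
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    dim = samples.shape[1]
    weights = rng.random((n_neurons, dim))
    sigma0 = sigma0 if sigma0 is not None else n_neurons / 4
    idx = np.arange(n_neurons)
    for epoch in range(epochs):
        lr = lr0 * np.exp(-epoch / epochs)        # decaying learning rate
        sigma = sigma0 * np.exp(-epoch / epochs)  # shrinking neighborhood
        for x in samples:
            # best-matching unit
            bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))
            # ring distance: shortest way around the circle
            d = np.minimum(np.abs(idx - bmu), n_neurons - np.abs(idx - bmu))
            h = np.exp(-(d ** 2) / (2 * sigma ** 2))
            weights += lr * h[:, None] * (x - weights)
    return weights

def label_pixel(weights, x):
    """Label = index of the winning neuron, i.e. the object number."""
    return int(np.argmin(np.linalg.norm(weights - np.asarray(x), axis=1)))
```

After training, each pixel's feature vector is assigned the index of its winning neuron, which serves as the object label in the output image.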
original 'dam' image
labeled image (all labels are shown)
only label #9 and label #10 are shown
As you can see, the neural network succeeded in recognizing the 'crack in the dam' object in labels #9 and #10, so it was possible to isolate that region.
Values used to get this result:
- output neurons = 20 (approximated number of objects)
- window size = 1 (this half-width gives a side of 1*2+1 = 3 pixels, i.e. a 3x3 window, used to compute the mean values)
- max epochs = 200 (how many iterations before the algorithm stops)
- input = mean values (the only parameter associated with each pixel)
More epochs normally give a better result, but also take more time to process. It took about 2 minutes to process this image on a Pentium II at 350 MHz.
If the evolution (shown in the output window) is below 250, you can consider the network stabilized. If you obtain a larger value, you will have to increase the max epochs number.
The evolution is the squared error between the last two epochs.
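The stopping criterion above could be computed as follows. This sketch assumes the evolution is the sum of squared differences between the network weights of two consecutive epochs, which is the natural reading for a SOM; the threshold of 250 comes from the text.

```python
import numpy as np

def evolution(prev_weights, weights):
    """Squared error between the weight sets of two consecutive epochs."""
    prev_weights = np.asarray(prev_weights, dtype=float)
    weights = np.asarray(weights, dtype=float)
    return float(np.sum((weights - prev_weights) ** 2))

def is_stabilized(prev_weights, weights, threshold=250.0):
    """True if the network can be considered stabilized (evolution < 250)."""
    return evolution(prev_weights, weights) < threshold
```

In a training loop, the weights would be snapshotted at the end of each epoch and compared against the previous snapshot; training stops early once `is_stabilized` returns true.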