How to COMPLETELY remove a background in MATLAB (image)

I am trying to remove a background using MATLAB.
I have achieved what looks like very good results using the traditional approach:
imsubtracted = im - background;
However, the blackness that has replaced the background is not pure black. Further image processing reveals that there is a significant amount of noise left over. Is it possible to either completely remove the background or make it uniformly the same color?
Please note, I am dealing with very small objects in a rather large black space.

Once you subtract the background, you should threshold the resulting image to create a binary foreground mask: set all the differences less than a threshold to 0 (background), and set the ones greater than or equal to the threshold to 1 (foreground). You can then use morphology such as imopen to get rid of small noisy specks in the background and imclose to get rid of small gaps or holes in the foreground.
Once you are happy with your foreground mask, you can use it as a logical index to set the background pixels to whatever color you choose.
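For illustration, here is a minimal sketch of that pipeline. The question is about MATLAB, but it is written here in Python/OpenCV: cv2.morphologyEx with MORPH_OPEN and MORPH_CLOSE stands in for imopen and imclose, and the filenames and the threshold value are placeholders to tune.

    import cv2
    import numpy as np

    # Load the scene and the background shot as grayscale
    # ('scene.png' and 'background.png' are placeholder filenames).
    im = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
    background = cv2.imread('background.png', cv2.IMREAD_GRAYSCALE)

    # Absolute difference avoids negative values wrapping around in uint8.
    diff = cv2.absdiff(im, background)

    # Threshold the difference into a binary foreground mask
    # (25 is an assumed cutoff; tune it for your images).
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)

    # Opening removes small specks in the background; closing fills
    # small holes in the foreground (the imopen/imclose equivalents).
    kernel = np.ones((3, 3), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)

    # Use the mask as a logical index: force background pixels to pure black.
    result = im.copy()
    result[mask == 0] = 0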

Related

OpenCV algorithm to determine true color (black, white, grey) of black and white scanned image?

I'm working on a small program in OpenCV that can automatically clean scanned manga pages. Here are the theoretical before and afters:
Before:
http://raw.senmanga.com/Bleach/541/7/
After:
http://mangastream.com/read/bleach/26534422/10
The cleaned image in the second link was done manually in photoshop.
As you can see, I only have to work in black, white, and grey, but a comparison between the raw and the finished images shows that some pixels on the scanned image, although supposed to be black, are actually returned as white by the scanner. I was thinking perhaps I could draw on information from the surrounding pixels as well in order to determine the true color of a pixel, but before I work on this idea, I was wondering if there are any algorithms already that can do this true-color determination for me? I cannot find a better scanner, so a hardware improvement is not an option.
Take your 8-bit depth image.
Convert it to grayscale (if it isn't already) and apply histogram equalization first.
Then apply a low-pass filter (Gaussian blur) to reduce noise.
You may use some type of cluster filtering instead of blurring, if you want. The idea is this: slide a window over the whole image starting from the top left, and set all the pixels inside the window to black if there are enough black pixels inside it.
Then group your pixels as follows:
Group 1: pixels with gray < 5
Group 2: pixels with 5 <= gray <= 250
Group 3: pixels with gray > 250
Take back your non-blurred image (the image after histogram equalization) and write:
Group 1 - write 0
Group 2 - write 127
Group 3 - write 255
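A rough Python/OpenCV sketch of these steps, assuming the groups are computed on the blurred image and the constant levels are then written out (the filename and blur kernel size are placeholders):

    import cv2
    import numpy as np

    # Placeholder filename; assumes an 8-bit scan.
    img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)

    # Step 1: histogram equalization.
    eq = cv2.equalizeHist(img)

    # Step 2: low-pass filter (Gaussian blur) to reduce noise.
    blurred = cv2.GaussianBlur(eq, (5, 5), 0)

    # Step 3: group pixels by the blurred values and write the
    # quantized levels for the three groups.
    out = np.full_like(eq, 127)   # Group 2: 5 <= gray <= 250
    out[blurred < 5] = 0          # Group 1: gray < 5
    out[blurred > 250] = 255      # Group 3: gray > 250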
It is difficult to comment because the original image cannot be displayed, but I guess scanning at a higher resolution and applying a median filter can get rid of smaller patches.
You can also look at image inpainting functions; they are used for fixing this type of problem.
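For illustration, a small Python/OpenCV sketch of both suggestions. The damage mask for inpainting is a made-up heuristic here (pixels the median filter changed a lot); you would substitute your own.

    import cv2
    import numpy as np

    img = cv2.imread('scan.png', cv2.IMREAD_GRAYSCALE)  # placeholder filename

    # Median filter: effective against small salt-and-pepper patches.
    denoised = cv2.medianBlur(img, 3)

    # Inpainting repairs regions flagged as damaged (non-zero = repair).
    # Illustrative heuristic only: flag pixels the median filter changed a lot.
    mask = np.uint8(cv2.absdiff(img, denoised) > 50) * 255
    restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)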

Background Extraction in MATLAB

I am working on a project in MATLAB that will extract the background from an image. For example, given an image like this,
it should give me the locations/coordinates of the background (the blue part) or of the person. So far I have calculated
1) edges, using Canny
2) connected components
Is there any detailed work, algorithm, or paper on this, so I can do it?
Edit
The problem I am facing is that edge detection gives me a binary image, so if I assume that all pixels with value 0 (black) are my background, how would I determine whether I(r,c) is part of the person or part of the background?
Note that this is just one way to do it, but it should work.
Assuming you can make a matrix with the following values:
1 if it is (in the range of) your background color
0 otherwise
And assuming the background is only 'outside' the person (though it may still work if there is just a bit of hair around the background), then a simple way to check if something is the background would be to
observe the neighborhood of each pixel in the matrix;
if the average value is high enough (say, over 0.2), assume it is a background pixel; otherwise assume it is a non-background pixel;
store the result in your new matrix, and you have all the locations of background pixels.
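A minimal Python sketch of these steps, where scipy's uniform_filter does the neighborhood averaging (the window size is an assumption; the 0.2 cutoff is the one suggested above):

    import numpy as np
    from scipy.ndimage import uniform_filter

    # bg_color_mask: the matrix described above -- 1.0 where a pixel is in
    # the range of the background colour, 0.0 otherwise.
    def background_locations(bg_color_mask, neighborhood=15, cutoff=0.2):
        # Average the 0/1 values over each pixel's neighborhood.
        local_mean = uniform_filter(bg_color_mask.astype(float), size=neighborhood)
        # High enough average => assume it is a background pixel.
        return local_mean > cutoff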
So far it is quite straightforward and does not even use the fact that you already calculated the edges. Now with those edges you can make the following improvement:
If a pixel is far enough 'inside' the edges (simpler: close enough to the center of them), do not consider it a candidate for background. This should help in case someone has big blue eyes.

Fast way of getting the dominant color of an image

I have a question about how to get the dominant color of an image (a photo). I thought of this algorithm: loop through all pixels and classify each one's color as either red, green, yellow, orange, blue, magenta, cyan, white, grey, or black (with some margin, of course) and its darkness (light, dark, or normal), and afterwards check which colors occurred the most. I think this is slow and not very precise. Is there a better way?
If it matters, it's a UIImage taken from an iPhone or iPod touch camera, which is at most 5 Mpx. The reason it has to be fast is that simply showing a progress indicator doesn't make very much sense, as this is an app for people with bad sight, or no sight at all. Because it's for a mobile device, it may not use very much memory (at most 50 MB).
Your general approach should work, but I'd highlight some details.
Instead of your given list of colors, generate a number of color "bins" in the color spectrum to count pixels. Here's another question that has some algorithms for that: Generating spectrum color palettes. Make the number of bins configurable, so you can experiment to get the results you want.
Next, for each pixel you're considering, you need to find the "nearest" color bin to increment. You'll need to define "nearest"; see this article on "color difference": http://en.wikipedia.org/wiki/Color_difference
For performance, you don't need to look at every pixel. Since image elements usually cover large areas (e.g., the sky, grass, etc.), you can get the result you want by only sampling a few pixels. I'd guess that you could get good results sampling every 10th pixel, or even every 100th. You can experiment with that factor as well.
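As a rough illustration in Python/numpy: here simple RGB quantization stands in for a proper colour-difference metric, and the bin count and sampling step are the tunable factors mentioned above.

    import numpy as np

    def dominant_color(img, bins_per_channel=4, step=10):
        # img: H x W x 3 uint8 RGB array.
        # Sample every `step`-th pixel in both directions.
        sample = img[::step, ::step].reshape(-1, 3)
        # Map each channel value to a bin index (0..bins_per_channel-1).
        w = 256 // bins_per_channel
        idx = (sample // w).astype(int)
        # Flatten the three indices into one bin number and count.
        flat = (idx[:, 0] * bins_per_channel ** 2
                + idx[:, 1] * bins_per_channel
                + idx[:, 2])
        counts = np.bincount(flat, minlength=bins_per_channel ** 3)
        best = counts.argmax()
        # Return the centre colour of the winning bin.
        r = best // bins_per_channel ** 2
        g = (best // bins_per_channel) % bins_per_channel
        b = best % bins_per_channel
        return tuple(int(c * w + w // 2) for c in (r, g, b))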
Averaging pixels can also be done, as in this demo: jsfiddle.net/MUsT8/

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (empty space, not visible) represents a 1 bit.
Does anybody have any idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing, but I tried to apply edge detection using a Canny edge detector implementation (a free Java implementation found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit, and no rectangle as a 1?
The image has 10 rows x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create horizontal and vertical profiles of the image, i.e. sum up the values in all columns and all rows and store them in arrays. For an image of MxN (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form (the rectangles where you expect marks) and get their coordinates; then you just add up the pixel values in each lozenge and, based on that sum, decide whether it contains a mark or not.
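A rough numpy sketch of this simple case, assuming an already thresholded, unskewed binary image where ink pixels are 1 (the cutoffs are assumptions to tune):

    import numpy as np

    # binary: 2-D numpy array, 1 for dark (ink) pixels, 0 for white.
    def profile_bands(binary):
        # Horizontal profile: one sum per column (M cells for an MxN image).
        col_profile = binary.sum(axis=0)
        # Vertical profile: one sum per row (N cells).
        row_profile = binary.sum(axis=1)
        # Rows/columns crossing the mark grid contain many ink pixels;
        # comparing against the mean is an assumed, tunable cutoff.
        return row_profile > row_profile.mean(), col_profile > col_profile.mean()

    # A lozenge spanning rows r0:r1 and columns c0:c1 is "marked" when the
    # ink count inside it exceeds a threshold (min_ink is an assumption).
    def is_marked(binary, r0, r1, c0, c1, min_ink=50):
        return binary[r0:r1, c0:c1].sum() > min_ink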
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant angle and then transform (rotate) the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as the simple case, but the noise is higher and the reliability lower.
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then read the height and width, and segment the image into a grid accordingly, with cells of Height/Rows by Width/Cols. Then just look for black pixels in those regions, or whatever value PIL registers that black to be. This obviously relies on the grid-like nature of the data.
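A sketch of that idea with PIL, using the 10 x 16 grid from the question (the filename and the dark-pixel cutoffs are assumptions):

    from PIL import Image

    ROWS, COLS = 10, 16                         # grid size from the question

    img = Image.open('code.png').convert('L')   # placeholder filename
    w, h = img.size
    cell_w, cell_h = w // COLS, h // ROWS

    bits = []
    for r in range(ROWS):
        for c in range(COLS):
            # Crop one grid cell and count its dark pixels.
            box = (c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h)
            cell = img.crop(box)
            dark = sum(1 for p in cell.getdata() if p < 128)  # assumed cutoff
            # Black rectangle => 0 bit, empty (white) cell => 1 bit.
            bits.append(0 if dark > cell_w * cell_h // 2 else 1)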
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want the boxes darker, just increase this value) and make everything else completely white. Then create bounding boxes from the lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
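A fragment of the binarization step in Python/numpy (180 is the cutoff from above; the filename is a placeholder, and the bounding-box extraction is only outlined):

    from PIL import Image
    import numpy as np

    gray = np.array(Image.open('page.png').convert('L'))  # placeholder filename

    # Values below 180 become black (0), everything else pure white (255).
    bw = np.where(gray < 180, 0, 255).astype(np.uint8)

    # Rows/columns that are entirely white are the gaps between boxes;
    # the runs between them give the bounding boxes.
    white_rows = (bw == 255).all(axis=1)
    white_cols = (bw == 255).all(axis=0)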
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to remove the borders manually (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise.
The previous segmentation method may then be applied: if you preprocess the image into binary form in this manner, you need only count the number of black or white pixels in each cell and pick a threshold from some training samples.

Get dominant colors from image discarding the background

What is the best algorithm (best in terms of result, not performance) to fetch the dominant colors from an image? The algorithm should discard the background of the image.
I know I can build an array of colors and how many times each appears in the image, but I need a way to determine what is the background and what is the foreground, and keep only the foreground in mind while reading the dominant colors.
The problem is very hard, especially for gradient backgrounds or backgrounds with patterns (not plain ones).
Isolating the foreground from the background is beyond the scope of this particular answer, but...
I've found that applying a pixelation filter to an image will draw out a really good set of 'average' colours.
Before
After
I sometimes use this approach to derive a palette of colours with a particular mood. I first find a photograph with the general tones I'm after, pixelate it, and then sample from the resulting image.
(Thanks to Pietro De Grandi for the image, found on unsplash.com)
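One cheap way to get this pixelation effect, sketched here with PIL: shrink the image to a tiny grid so each cell becomes an averaged colour, then blow it back up. The 8 x 8 grid size and the filename are arbitrary choices.

    from PIL import Image

    img = Image.open('photo.jpg')   # placeholder filename

    # Shrinking averages each block into one colour; nearest-neighbour
    # upscaling then shows the blocks as flat "pixels".
    small = img.resize((8, 8), resample=Image.BILINEAR)
    pixelated = small.resize(img.size, resample=Image.NEAREST)

    # The 64 colours of `small` form the palette to sample from.
    palette = list(small.getdata())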
The colour summarizer is a pretty sweet resource for info on this subject, not to mention their seemingly free XML web API that will produce descriptive colour statistics for an image of your choosing, reporting back the following (formatted with swatches in HTML, or as XML):
what is the average color hue, saturation and value in my image?
what is the RGB colour that is most representative of the image?
what do the RGB and HSV histograms look like?
what is the image's human readable colour description (e.g. dark pure blue)?
The purpose of this utility is to generate metadata that summarizes an image's colour characteristics for inclusion in an image database, such as Flickr. In particular this tool is being used to generate metadata for Flickr's Color Fields group.
In my experience, though, this tool still misses the "human-readable" / obvious "main" color a LOT of the time. Silly machines!
I would say this problem is closer to "impossible" than "very hard". The only approach to it that I can think of would be to make the assumption that the background of an image is likely to consist of solid blocks of similar colors, while the foreground is likely to consist of smaller blocks of dissimilar colors.
If this assumption is generally true, then you could scan through the whole image and weight pixels according to how similar or dissimilar they are to neighboring pixels. In other words, if a pixel's neighbors (within some arbitrary radius, perhaps) were all similar colors, you would not incorporate that pixel into the overall estimate. If the neighbors tend to be very different colors, you would weight the pixel heavily, perhaps in proportion to the degree of difference.
This may not work perfectly, but it would definitely at least tend to exclude large swaths of similar colors.
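A rough numpy/scipy sketch of this weighting idea, assuming a plain RGB distance from the local mean colour (the radius is arbitrary):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def foreground_weights(img, radius=5):
        # img: H x W x 3 RGB array. Pixels that differ a lot from their
        # local average colour get high weights; large uniform areas
        # (likely background) get weights near zero.
        img = img.astype(float)
        # Local mean colour per channel over a (2*radius+1) window.
        size = (2 * radius + 1, 2 * radius + 1, 1)
        local_mean = uniform_filter(img, size=size)
        # Euclidean distance of each pixel from its neighbourhood mean.
        return np.sqrt(((img - local_mean) ** 2).sum(axis=2))

The resulting weight map could then scale each pixel's contribution to the colour histogram, so that swaths of similar colours contribute little.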
As far as my knowledge of image processing algorithms extends, there is no certain way to get the "foreground"; it is only possible to detect the borders between objects. You'll probably have to make do with an average, or with your proposed array-count method. In that case, you'll want to give colours with higher saturation a higher "score", as they're much more prominent.
