How to find yellow objects in a given picture? - algorithm

I have this picture:
(this is just a subimage of a bigger image, but only this part matters to me). I need an algorithm that finds all the yellow objects in the image and then picks the object containing the most yellow pixels. This is just one picture out of thousands of similar pictures with more or fewer of these yellow objects. What is the way to do this? I read that a scanline algorithm is good for this, but I haven't found an example that would help me. If you have any ideas, or even an algorithm, that would be perfect. The colored lines are not important; I only added them as a border inside which I need to find the yellow objects.
Thanks a lot for any answers.

There are two basic steps:
Thresholding: Generate an array of yellow and not-yellow pixels. If the images you're working with are all like the example you provided, this should be very easy, but try adaptive thresholding if you have to deal with varying shades and hues. Store, e.g., a value of -1 for pixels that are yellow, and 0 everywhere else.
Segmentation: Initialize an ID value to 1. Scan every pixel of the thresholded image. When you encounter a pixel with a value of -1 (i.e., a yellow pixel), use a flood fill routine to write the ID value into this pixel and all the yellow pixels connected to it. Before the flood fill routine exits, you can store information such as the number of pixels it found and the average X and Y coordinates in an array indexed by the ID value. Then increment the ID value and resume scanning until you've covered the entire image.
Then search the data generated by the flood fill routine to find which yellow areas were the largest, and where they were located.
Here's a program that does something quite similar with red objects instead of yellow ones, and then draws circles around them.
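For what it's worth, here is a rough Python/NumPy sketch of the threshold-then-flood-fill steps described above (the yellow thresholds and 4-connectivity are assumptions you would tune, not values from the question):

    import numpy as np
    from collections import deque
    from PIL import Image

    def largest_yellow_blob(path):
        rgb = np.asarray(Image.open(path).convert("RGB"), dtype=int)
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        # Thresholding: "yellow" = strong red and green, weak blue (tune for your images)
        yellow = (r > 150) & (g > 150) & (b < 100)
        labels = np.where(yellow, -1, 0)   # -1 = yellow, 0 = everything else
        stats = {}                         # blob id -> (pixel count, mean x, mean y)
        blob_id = 1
        h, w = labels.shape
        for sy in range(h):
            for sx in range(w):
                if labels[sy, sx] != -1:
                    continue
                # Flood fill: relabel this connected yellow region with blob_id
                count, sum_x, sum_y = 0, 0, 0
                queue = deque([(sy, sx)])
                labels[sy, sx] = blob_id
                while queue:
                    y, x = queue.popleft()
                    count += 1
                    sum_x += x
                    sum_y += y
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1:
                            labels[ny, nx] = blob_id
                            queue.append((ny, nx))
                stats[blob_id] = (count, sum_x / count, sum_y / count)
                blob_id += 1
        # Return (id, (pixel count, mean x, mean y)) of the biggest blob, or None
        return max(stats.items(), key=lambda kv: kv[1][0]) if stats else None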

It looks like OpenCV has blob detection options. I found this article showing how to detect blobs using the greyscale value, which you should be able to adapt to use your target color instead. It also mentions using the area of the blob as a threshold, so you should be able to use that to find the largest one in the image.
http://www.learnopencv.com/blob-detection-using-opencv-python-c/
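If you go the OpenCV route, a minimal Python sketch could look like the one below. The HSV range for yellow is an assumption to tune, and I'm using contours rather than the article's SimpleBlobDetector, since picking the largest area is then a one-liner:

    import cv2
    import numpy as np

    img = cv2.imread("picture.png")                       # hypothetical file name
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Assumed hue/saturation/value range for yellow; adjust for your images
    mask = cv2.inRange(hsv, np.array([20, 100, 100]), np.array([35, 255, 255]))
    # OpenCV 4.x returns (contours, hierarchy); 3.x returns an extra image first
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        biggest = max(contours, key=cv2.contourArea)      # object with the most yellow
        x, y, w, h = cv2.boundingRect(biggest)
        print("largest yellow object at", (x, y), "size", (w, h))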

One approach would be to generate a quad-tree of the image. Using this quad-tree it's pretty simple to find connected pieces that form a blob (even blobs with holes) and calculate their sizes.
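A minimal sketch of that decomposition, assuming a boolean NumPy mask of yellow pixels (merging adjacent yellow leaves into blobs, e.g. with union-find, is left out):

    def yellow_quadtree_leaves(mask, x0, y0, w, h, out):
        """Recursively split the boolean mask into homogeneous blocks and
        collect (x, y, width, height) for blocks that are entirely yellow."""
        if w <= 0 or h <= 0:
            return
        block = mask[y0:y0 + h, x0:x0 + w]
        if block.all():                      # pure yellow leaf
            out.append((x0, y0, w, h))
            return
        if not block.any():                  # pure background leaf
            return
        hw, hh = max(w // 2, 1), max(h // 2, 1)
        yellow_quadtree_leaves(mask, x0,      y0,      hw,     hh,     out)
        yellow_quadtree_leaves(mask, x0 + hw, y0,      w - hw, hh,     out)
        yellow_quadtree_leaves(mask, x0,      y0 + hh, hw,     h - hh, out)
        yellow_quadtree_leaves(mask, x0 + hw, y0 + hh, w - hw, h - hh, out)

    # Usage: leaves = []; yellow_quadtree_leaves(mask, 0, 0, mask.shape[1], mask.shape[0], leaves)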

Related

Any way to algorithmically remove discolorations from aerial imagery?

I don't know much about image processing so please bear with me if this is not possible to implement.
I have several sets of aerial images of the same area originating from different sources. The pictures have been taken during different seasons, under different lighting conditions, etc. Unfortunately, some images look patchy and suffer from discolorations, or are partially obscured by clouds or pixelated, as in picture1 and picture2 for example.
I would like to take several images of the same area as input and (by some kind of averaging) produce one picture of improved quality. I know some C/C++, so I could use an image processing library.
Can anybody propose an image processing algorithm to achieve this, or point me to any research done in this field?
I would try a "color twist" transform, i.e. a 3x3 matrix applied to the RGB components. To implement it, you need to pick color samples in areas that are split by a border, on both sides. You should find three significantly different reference colors (hence six samples). This will allow you to write the nine linear equations needed to determine the matrix coefficients.
Then you will correct the altered areas by means of this color twist. As the geometry of these areas is intertwined with the field patches, I don't see a better way than contouring the regions by hand.
In the case of the second picture, the limits of the regions are blurred so that you will need to blur the region mask as well and perform blending.
In any case, don't expect a perfect repair of those problems as the transform might be nonlinear, and completely erasing the edges will be difficult. I also think that colors are so washed out at places that restoring them might create ugly artifacts.
For the sake of illustration, here is a quick attempt with Photoshop using manual HLS adjustment (less powerful than a color twist).
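A sketch of fitting such a 3x3 color twist matrix with NumPy; the sample values below are placeholders you would replace with the colors you picked on both sides of a border:

    import numpy as np

    # Three corresponding color samples: altered[i] should map onto reference[i]
    altered = np.array([[180.0,  90.0,  60.0],     # placeholder RGB values
                        [120.0, 140.0,  80.0],
                        [ 60.0,  70.0, 150.0]])
    reference = np.array([[200.0, 110.0,  70.0],   # placeholder RGB values
                          [130.0, 160.0,  90.0],
                          [ 70.0,  80.0, 170.0]])

    # Solve altered @ M = reference; three pairs give the nine linear equations
    # for the nine coefficients (more pairs turn this into a least-squares fit).
    M, *_ = np.linalg.lstsq(altered, reference, rcond=None)

    def color_twist(pixels):
        """Apply the twist to an (N, 3) or (H, W, 3) float RGB array."""
        return np.clip(pixels @ M, 0, 255)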
The first thing I thought of was a kernel matrix of sorts.
Do a first pass over the photo and use an edge detection algorithm to determine the borders between the photos. This should be fairly trivial, but you will need to eliminate any overlap/fading (it looks like there's a bit in picture 2); you'll see why in a minute.
Do a second pass right along each border you've detected, and assume that the pixel on either side of the border should be the same color. Determine the difference between the red, green and blue values and average them along the entire length of the line, then divide it by two. The image with the lower red, green or blue value gets this new value added. The one with the higher red, green or blue value gets this value subtracted.
On either side of this line, every pixel should now be exactly the same. You can remove one of these rows if you'd like, but if the lines don't run the full length of the image this could cause size issues, and the line will likely not be very noticeable anyway.
This could be made far more complicated by generating a filter by passing along this line - I'll leave that to you.
The issue with this could be areas where there was development, fall colors, etc.; these might mess with your algorithm, but there's only one way to find out!
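A rough sketch of that second pass for a single vertical border (Python/NumPy; the column index of the border and the image layout are assumptions):

    import numpy as np

    def balance_across_vertical_border(img, border_x):
        """img: float (H, W, 3) array; border_x: column index of the detected border.
        Moves the two sides toward each other by half the average per-channel gap."""
        left = img[:, border_x - 1, :]     # pixels just left of the border
        right = img[:, border_x, :]        # pixels just right of the border
        half_diff = (left - right).mean(axis=0) / 2.0
        out = img.copy()
        out[:, :border_x, :] -= half_diff  # shift the whole left side by -half_diff
        out[:, border_x:, :] += half_diff  # and the whole right side by +half_diff
        return np.clip(out, 0, 255)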

Background Extraction in MATLAB

I am working on a project in MATLAB which will extract the background from an image. For example, if this is the input image,
it should give me the locations/coordinates of the background (the blue part) or of the person. So far I have calculated:
1) edges using Canny
2) connected components
Is there any detailed work, algorithm, or paper on this that I could follow?
Edit
The problem I am facing: edge detection gives me a binary image, so if I assume that all pixels with value 0 (black) are my background, how would I tell whether I(r,c) is part of the person or part of the background?
Note that this is just one way to do it, but it should work.
Assuming you can make a matrix with the following values:
1 if it is (in the range of) your background color
0 otherwise
And assuming the background is only 'outside' the person (though it may still work if there is just a bit of hair around the background), then a simple way to check if something is the background would be to
observe the neighborhood of each pixel in the matrix
if the average value is high enough (say over 0.2) then assume it is a background pixel, otherwise assume it is a non-background pixel.
Store the result in your new matrix and you have all the locations of background pixels
So far it is quite straightforward and does not even use the fact that you already calculated the edges. Now with those edges you can make the following improvement:
If a pixel is far enough 'inside' the edges (simpler: close enough to the center of them), do not consider it a candidate for background. This should help in case someone has big blue eyes.
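The question is about MATLAB, but the neighborhood-average idea is quick to sketch in Python/NumPy (uniform_filter plays the role of the neighborhood average; the blue-ish color range is an assumption):

    import numpy as np
    from scipy.ndimage import uniform_filter

    def background_mask(rgb, size=15, threshold=0.2):
        """rgb: (H, W, 3) uint8 image. Returns a boolean mask of background pixels."""
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        # 1 where the pixel is (in the range of) the background color, 0 otherwise
        is_background_color = ((b > 120) & (b > r + 30) & (b > g + 30)).astype(float)
        # Average of the 0/1 matrix over a size x size neighborhood of each pixel
        neighborhood_avg = uniform_filter(is_background_color, size=size)
        # A high enough average means the pixel sits in a mostly-background region
        return neighborhood_avg > threshold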

3D-Anaglyph creation algorithm, using depth map image: where to find?

I'm looking for a generic algorithm to calculate a red/cyan anaglyph starting from the original image and its black-and-white depth map (example: http://www.swell3d.com/2008/07/turn-2d-painting-into-3d-anagl.html)
That algorithm is used, for example, in Photoshop, but I can't find a readable explanation that would let me reproduce it.
Thanks
After some research I found what I was looking for.
First, I read some Photoshop/GIMP tutorials that describe how to make anaglyphs from two inputs: an image and its grayscale depth map. The core of the process is the use of the "Displace Tool" with the depth map as a displacement map.
One of the several youtube tutorials: http://www.youtube.com/watch?v=gfYMe_vYhu4
So, I read some documentation about GIMP's Displace Tool at http://docs.gimp.org/en/plug-in-displace.html and looked directly at the source code of the tool (the method is very similar to the one proposed by Asgeir).
This lets us produce two stereo images from the input by looking at the depth map. The red and cyan components of each image are calculated as described on this page: http://3dtv.at/Knowhow/AnaglyphComparison_en.aspx (the "Optimized" matrices are the best ones).
Then, summing the two images into one produces the final anaglyph. Thanks, everybody.
There are two algorithms involved. The first uses the original image and the depth map to produce a left and a right image. The second combines these images into a red-cyan anaglyph.
There are a couple ways to accomplish the first part. One is to take the original image and texture map it onto a fine mesh that lies flat in the XY plane. Then you tweak the Z values of each vertex in the mesh according to the corresponding value in the depth map. You've basically created a textured bas relief. You then use a 3D rendering algorithm to render the image from two vantage points that are offset horizontally by a small amount (essentially from the vantage point of a person's left and right eyes as they would view the bas relief).
There is probably a way to directly shift the pixels left and right which is a good fast approximation to what I described above.
Once you have the left and right images, you pass one through a cyan filter and one through a red filter. If you have RGB sources, that's as simple as taking the red channel from one image and combining it with the green and blue channels from the other image.
Anaglyphs work best with muted colors. If you have strong primaries, it won't look as good. You can use an algorithm to reduce the color saturation of the original image before you begin.
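The channel combination at the end is a one-liner once you have the two views; a small NumPy sketch, assuming left_rgb and right_rgb are (H, W, 3) arrays:

    def red_cyan_anaglyph(left_rgb, right_rgb):
        """Red channel from the left view, green and blue channels from the right view."""
        anaglyph = right_rgb.copy()           # keep G and B from the right image
        anaglyph[..., 0] = left_rgb[..., 0]   # take R from the left image
        return anaglyph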
From the description in the link you provided I would assume that it is something like
for each pixel in depthmap
    x_offset = (depthmap[x][y] / 255.0f) * MAX_PIXEL_OFFSET * DIRECTION
    output[x + x_offset][y] = color_buffer[x][y]
blend output with color_buffer
Where MAX_PIXEL_OFFSET is the maximum shift in pixels and DIRECTION is -1 for one color and 1 for the other. This is assuming that the depthbuffer is one byte per pixel, range [0..255] and that 0 in the depthbuffer represents maximum distance.
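A runnable Python/NumPy version of that pseudocode might look like this; MAX_PIXEL_OFFSET and the 50/50 blend are assumptions, and the depth convention (0 = maximum distance) is the one stated above:

    import numpy as np

    MAX_PIXEL_OFFSET = 20   # assumed maximum horizontal shift in pixels

    def shifted_view(color_buffer, depthmap, direction):
        """color_buffer: (H, W, 3) array, depthmap: (H, W) values in [0, 255],
        direction: -1 for one eye's view, +1 for the other."""
        h, w = depthmap.shape
        output = color_buffer.copy()
        for y in range(h):
            for x in range(w):
                x_offset = int(round(depthmap[y, x] / 255.0 * MAX_PIXEL_OFFSET * direction))
                if 0 <= x + x_offset < w:
                    output[y, x + x_offset] = color_buffer[y, x]
        # "blend output with color_buffer": a simple 50/50 blend to soften gaps
        return (0.5 * output + 0.5 * color_buffer).astype(color_buffer.dtype)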

Algorithm to calculate 'treat as white' value on a scanned image

I've got an image that I'm scanning in from the scanner. There's an area of the image that deliberately doesn't contain anything (so just white). The rest of the image contains data that needs analysing. This plain white area (called the 'reference area') should be used to determine what value the analysis code should treat as "white". Coming from a scanner, white isn't always going to be 255.
The rest of the image (the analysis area) is then scaled to lie between 0 and this white point.
I've tried getting the average (mean) of all pixels in the reference area, but the value isn't always what I want.
Any ideas on the best algorithm to use to calculate this "treat as white" value?
The best way to do it will depend on various things, such as what kind of noise, artefacts or other sources of error occur in your data, and what later processing you need to do. Having said that, given that you have a known reference area, a fairly simple approach should work.
Rather than finding the mean, find the k-th lowest value in the reference area, where k is, say, 15% of the number of pixels in the reference area. The idea of this is to find the dimmest white in the reference area, so that everything brighter than that will be saturated to white when you adjust the image values. You probably don't want to pick the absolute lowest pixel value from the reference area, because then you're very likely to pick a pixel that was not actually white (a speck of dust/smudge/sensor noise or some other artefact).
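In NumPy terms this is just a low percentile of the reference-area pixels; a tiny sketch using the 15% figure suggested above:

    import numpy as np

    def white_point(reference_area, fraction=0.15):
        """reference_area: array of greyscale values from the known-white region.
        Returns roughly the k-th lowest value, with k = fraction of the pixel count."""
        return np.percentile(reference_area.ravel(), fraction * 100)

    # Then map the analysis area onto [0, 1] relative to this white point:
    # normalized = np.clip(analysis_area / white_point(reference_area), 0.0, 1.0)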
More generally, you may want to look into automatic thresholding algorithms, which will give you other (somewhat more sophisticated) ways of selecting the white-point.
I'm assuming greyscale image processing for all of this. Full colour constancy (part of which comes from determining the white point of an image) is a much harder problem, though having a white reference area in your image would certainly help with this too.

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white space (nothing visible) represents a 1 bit.
Does anybody have an idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing. I tried to apply edge detection using a Canny edge detector (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for faster processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit and no rectangle as a 1?
The image has 10 lines x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is recognising good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and all rows and store them in arrays. For an image of M x N (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form: you get the coordinates of the lozenges (rectangles where you expect marks), then add up the pixel values in each lozenge, and based on that sum decide whether it has a mark or not (see the sketch after Case 3 below).
Case 2: Skew (slant in the image)
Use the Fourier transform (FFT) to find the slant angle and then transform the image to correct for it.
Case 3: Stretch or shrink
Pretty much the same as the simple case, but the noise is higher and the reliability lower.
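For the simple case, the profile-and-sum idea could be sketched like this in Python/NumPy, assuming binary is a 0/1 array where 1 means a dark pixel (the thresholds are assumptions to tune):

    import numpy as np

    def find_marks(binary, profile_threshold=0.5, fill_threshold=0.5):
        """binary: (N, M) 0/1 array, 1 = dark pixel. Returns a boolean grid of marks."""
        h_profile = binary.sum(axis=0)   # one value per column (length M)
        v_profile = binary.sum(axis=1)   # one value per row (length N)
        # Columns and rows whose profile is high enough are locations of interest
        cols = np.flatnonzero(h_profile > profile_threshold * h_profile.max())
        rows = np.flatnonzero(v_profile > profile_threshold * v_profile.max())
        # Group consecutive indices into bands; each band is one lozenge column/row
        col_bands = [b for b in np.split(cols, np.where(np.diff(cols) > 1)[0] + 1) if b.size]
        row_bands = [b for b in np.split(rows, np.where(np.diff(rows) > 1)[0] + 1) if b.size]
        marks = np.zeros((len(row_bands), len(col_bands)), dtype=bool)
        for i, rb in enumerate(row_bands):
            for j, cb in enumerate(col_bands):
                lozenge = binary[rb[0]:rb[-1] + 1, cb[0]:cb[-1] + 1]
                marks[i, j] = lozenge.mean() > fill_threshold   # enough dark pixels
        return marks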
Aliostad has made some good comments.
This is OMR and you will find it much easier to get good consistent results with a good image processing library. www.leptonica.com is a free open source 'C' library that would be a very good place to start. It could process the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference / registration marks to allow for skew and stretch especially if you are using document scanning or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then read the height and the width, and segment the image into a grid accordingly, with cells of Height/Rows by Width/Cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data.
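Roughly, with PIL and the 10 x 16 grid from the question, assuming the image has already been cropped to just the grid (sampling the centre of each cell and the darkness threshold are also assumptions):

    from PIL import Image

    ROWS, COLS = 10, 16                       # grid size given in the question

    def read_bits(path, dark_threshold=128):
        img = Image.open(path).convert("L")   # greyscale
        width, height = img.size
        cell_w, cell_h = width / COLS, height / ROWS
        pixels = img.load()
        bits = []
        for r in range(ROWS):
            row = []
            for c in range(COLS):
                # Sample the centre of the cell: black rectangle -> 0, empty -> 1
                x = int((c + 0.5) * cell_w)
                y = int((r + 0.5) * cell_h)
                row.append(0 if pixels[x, y] < dark_threshold else 1)
            bits.append(row)
        return bits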
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like one of those described on Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want the boxes darker, just increase this value) and make everything else completely white. Then create bounding boxes along the lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise.
The previous segmentation method may then be applied: if you preprocess and binarize the image in this manner, you only need to count the number of black or white pixels in each cell and pick a threshold after looking at some training samples.
