I have the original gray scale image below which I dilated and eroded in order to get the binary image. I have two questions:
1) How can I remove the shadows / extra noise (Examples of what I am referring to are circled red part of the images)?
2) In the binary image, the black singular beads are larger than they should be and the white ones are smaller. Other than eroding and dilating, is there a way to normalize their sizes? (see circled blue for clarification).
I've done some work with regionprops, but it has not yielded what I need.
You can try using bwareaopen to remove small objects from the binary image.
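For example, a minimal sketch (the area threshold of 50 pixels is an arbitrary assumption you would tune to the size of your noise):

cleaned = bwareaopen(binaryImage, 50);      % remove white specks smaller than 50 pixels (threshold assumed)
cleaned = ~bwareaopen(~cleaned, 50);        % same trick for the black specks: invert, clean, invert back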
I've seen a few libraries that pixelate images, some of them even feature non-square shapes such as The Pixelator's circle and diamond shapes.
I'm looking however to make a particular shape, I want a "pixel" that is 19x27 px. Essentially, the image would still look pixelated but it would use tallish rectangle shapes as the pixel base.
Are there any libraries out there that do this? If not, what alterations to existing algorithms/functions would I need to make to accomplish this?
Unless I am not understanding your question, the algorithm you need is quite simple!
Just break your image up into a grid of rectangles the size you want (in this case 19x27). Loop over each section of the grid and take the average color of the pixels inside (you can simply take the average of each channel in RGB independently). Then set all of the pixels contained inside to the average color.
This would give you an image that is the same size as your input. You could of course resize your image first to a more appropriate output size.
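A rough MATLAB sketch of that idea (the file names are placeholders, and blocks at the right and bottom borders are simply clipped to the image size):

img = im2double(imread('input.png'));                       % placeholder file name
blockW = 19; blockH = 27;                                   % the requested "pixel" size
out = img;
for r = 1:blockH:size(img,1)
    for c = 1:blockW:size(img,2)
        rows = r:min(r+blockH-1, size(img,1));
        cols = c:min(c+blockW-1, size(img,2));
        avg  = mean(mean(img(rows, cols, :), 1), 2);        % per-channel average of the block
        out(rows, cols, :) = repmat(avg, numel(rows), numel(cols));
    end
end
imwrite(out, 'pixelated.png');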
You might want to look up convolution matrices.
In a shader, you would use your current pixel location to grab a set of nearby pixels from the original image to render to a pixel in a new buffer image.
It is actually just a slight variation of the Box Blur image processing algorithm, except that instead of grabbing the nearby pixels you would sample from the divisions of the original image that correspond to the 19x27 divisions of the resulting image.
I have a mortar image and it has some blackish pixels on the inside edges.
I need to make those blackish pixels the same color as the gray mortar.
Look at the image: I have pointed out some blackish pixels on the edges.
One (time-consuming) solution is to switch to grey the color of every pixel that passes the following condition:
the pixel is part of a component that is connected to a mortar but is not part of the mortar rectangle. This can be achieved with modifications to the "Connected-component labeling" algorithm.
One way to do this modification is:
Scan the image once with the Connected-component labeling algorithm (with a threshold that includes both grey and black pixels).
Discard every component that doesn't contain a mortar (in your case this can be achieved by demanding a minimum number of pixels in a component).
Scan the image a second time: for each pixel that lies on a line with an end and is black, switch its color to grey.
This can be done in a number of ways; one way is to search for the line ends and then change the colors along the whole line (you can flag the line ends during the Connected-component labeling pass to save time).
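A rough MATLAB sketch of the overall idea (it uses a simple intensity threshold inside the kept components instead of the line-end trick, and the grey/black thresholds, the minimum component size and the grey fill value are all assumptions to be tuned):

darkOrGrey = img < 200;                              % mask of grey and black pixels (threshold assumed)
cc = bwconncomp(darkOrGrey);                         % connected-component labeling
numPix = cellfun(@numel, cc.PixelIdxList);
keep = numPix > 500;                                 % keep only components big enough to contain a mortar
mortarMask = false(size(img));
mortarMask(vertcat(cc.PixelIdxList{keep})) = true;
fixMe = mortarMask & (img < 60);                     % blackish pixels inside the kept components
img(fixMe) = 128;                                    % assumed mortar grey level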
hope this helps.
I got a png image like this:
The blue color represents transparency. Each circle is a pixel group. I would like to find the biggest one and remove all the small pixels that are not grouped with the biggest one. In this example, the biggest one is the red circle, so I will keep it. But the green and yellow ones are too small, so I will remove them. After that, I will have something like this:
Any ideas? Thanks.
If you consider only the size of objects, use the following algorithm: label the connected components of the mask image of the objects (all object pixels are white, transparent ones are black). Then compute the areas of the connected components and filter them. At this step, you have a label map and a list of authorized labels. You can then read the label map and overwrite the mask image, setting every pixel to white if it has an authorized label.
OpenCV does not seem to have a labeling function, but cvFloodFill can do the same thing with several calls: for each unlabeled white pixel, call FloodFill with this pixel as the seed. Then you can store the result of this step in an array (of the size of the image) by assigning each newly filled pixel its label. Repeat as long as you have unlabeled pixels.
Otherwise you can re-implement the connected-component function for binary images; this algorithm is well known and easy to implement (maybe start with Matlab's bwlabel).
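A minimal sketch of the label-and-filter step in MATLAB (assuming mask is the binary object mask and that, as in the question, only the single largest component should survive):

[labels, n] = bwlabel(mask);                      % label the connected components
areas = arrayfun(@(k) nnz(labels == k), 1:n);     % pixel count of each component
[~, biggest] = max(areas);
mask = (labels == biggest);                       % overwrite the mask, keeping only the largest object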
The handiest way to filter objects if you have a priori knowledge of their size is to use morphological operators. In your case, with OpenCV, once you've loaded your image (OpenCV supports PNG), you have to do an "opening", that is an erosion followed by a dilation.
The small objects (smaller than the size of the structuring element you chose) will disappear with erosion, while the bigger will remain and be restored with the dilation.
(reference here, cv::morphologyEx).
The shape of the big object might be altered. If you're only doing detection, it is harmless, but if you want your object to avoid transformation you'll need to apply a "top hat" transform.
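If you end up doing this in MATLAB rather than OpenCV, the equivalent opening is a one-liner with imopen (the disk radius is an assumption: it must be larger than the small objects you want to drop but smaller than the big one you want to keep):

SE = strel('disk', 5);            % radius assumed
opened = imopen(mask, SE);        % erosion followed by dilation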
I have a binary image in a matrix that I obtained by some image processing. When I say binary image I mean it has zeros and ones indicating complete black and complete white pixels. The image is mostly white and has some black spots. I now want to expand these black spots by some factor. How can I do it?
Have you tried dilate?
imdilate(1-binary_image, SE)
Where SE is the structuring element.
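For instance (the disk radius here is an assumption standing in for your expansion factor):

SE = strel('disk', 3);                            % radius controls how much the spots grow
expandedBlack = imdilate(1 - binary_image, SE);   % dilate the black spots on the inverted image
result = 1 - expandedBlack;                       % invert back: white background, grown black spots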
Maybe you've noticed but Google Image search now has a feature where you can narrow results by color. Does anyone know how they do this? Obviously, they've indexed information about each image.
I am curious what the best methods are for analyzing an image's color data to allow simple color searching.
Thanks for any and all ideas!
Averaging the colours is a great start. Just downscale your image to 10% of the original size using a Bicubic or Bilinear filter (or something advanced anyway). This will vastly reduce the colour noise and give you a result which is closer to how humans perceive the image. I.e. a pixel-raster consisting purely of yellow and blue pixels would become clean green.
If you don't blur or downsize the image, you might still end up with an average of green, but the deviation would be huge.
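A quick sketch of that in MATLAB (the file name is a placeholder):

img = im2double(imread('photo.jpg'));            % placeholder file name
small = imresize(img, 0.1, 'bicubic');           % downscale to 10% to suppress colour noise
avgColour = squeeze(mean(mean(small, 1), 2));    % per-channel average, [R; G; B]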
The Google feature offers 12 colors with which to match images. So I would calculate the Lab coordinate of each of these swatches and plot the (a*, b*) coordinate of each of these colors on a two dimensional space. I'd drop the L* component because luminance (brightness) of the pixel should be ignored. Using the 12 points in the (a*, b*) space, I'd calculate a partitioning using a Voronoi Diagram. Then for a given image, I'd take each pixel, calculate its (a*, b*) coordinate. Do this for every pixel in the image and so build up the histogram of counts in each Voronoi partition. The partition that contains the highest pixel count would then be considered the image's 'color'.
This would form the basis of the algorithm, although there would be refinements related to ignoring black and white background regions which are perceptually not considered to be part of the subject of the image.
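A condensed sketch of that pipeline in MATLAB (swatches is assumed to be a 12-by-3 matrix of reference RGB colours, the file name is a placeholder, and pdist2 needs the Statistics Toolbox; nearest-swatch assignment in (a*, b*) is the same thing as the Voronoi partitioning described above):

lab = rgb2lab(im2double(imread('photo.jpg')));       % placeholder file name
ab  = reshape(lab(:,:,2:3), [], 2);                  % drop L*, keep (a*, b*) per pixel
swatchLab = rgb2lab(swatches);                       % swatches: assumed 12x3 RGB reference colours
swatchAB  = swatchLab(:, 2:3);
[~, nearest] = min(pdist2(ab, swatchAB), [], 2);     % nearest swatch = Voronoi cell membership
counts = histcounts(nearest, 0.5:1:size(swatches,1)+0.5);
[~, dominant] = max(counts);                         % index of the image's 'color'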
Average color of all pixels? Make a histogram and find the average of the 'n' peaks?