I have an image with plain dots scattered across it. The dots are all the same size and solid (I can read the color to decide whether a point is inside a dot or not). What is the most efficient algorithm to find the exact number of dots?
I thought of Monte Carlo sampling, but I don't know how many random points would be sufficient. Any advice?
Edit: it's a white image that contains dots only.
This is a good case for image processing algorithms.
For example, using the OpenCV library, you could take the following approach:
If the image is in color, convert it to grayscale (cvtColor)
Make the image binary (pure black and white) with color inversion (cvThreshold with THRESH_BINARY_INV) so the dots become white spots on a black background
Find connected components (findContours) - after that, contours.size() gives you the number of dots
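A minimal sketch of that pipeline in Python (assuming OpenCV 4.x's findContours signature and a placeholder filename "dots.png" with dark dots on a light background):

    import cv2

    # load and convert to grayscale
    img = cv2.imread("dots.png")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # invert-threshold so the dots become white spots on a black background
    _, binary = cv2.threshold(gray, 128, 255, cv2.THRESH_BINARY_INV)

    # each external contour is one dot
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    print("number of dots:", len(contours))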
If you don't want to use any libraries, the key concept is connected-component labeling (CCL)
The simplest way to do CCL for small dots is the flood fill algorithm.
Flood fill the background pixels and mark them with 0.
Scan through all pixels. When you meet an unmarked one (at X,Y), start a new flood fill with the next marker value K (1, 2, etc.).
After each flood fill, return to the next coordinate (X+1,Y) and continue scanning.
The last K value is the number of spots.
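A library-free sketch of this in Python, with one small variant: instead of pre-marking the background, it only flood fills dot pixels (img is assumed indexable as img[y][x], True meaning a dot pixel; an explicit stack avoids recursion limits):

    def count_spots(img, w, h):
        labels = [[0] * w for _ in range(h)]   # 0 = unmarked
        k = 0
        for y in range(h):
            for x in range(w):
                if img[y][x] and labels[y][x] == 0:
                    k += 1                     # next marker value K
                    stack = [(x, y)]           # iterative flood fill
                    labels[y][x] = k
                    while stack:
                        cx, cy = stack.pop()
                        for nx, ny in ((cx+1, cy), (cx-1, cy), (cx, cy+1), (cx, cy-1)):
                            if 0 <= nx < w and 0 <= ny < h and img[ny][nx] and labels[ny][nx] == 0:
                                labels[ny][nx] = k
                                stack.append((nx, ny))
        return k                               # the last K is the number of spots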
I am writing code that generates the start and end points of the strokes of a picture (a raster image) so that a robot arm can paint it.
I have written an algorithm, but it produces too many overlapping strokes:
https://github.com/Evrid/Painting-stroke-generation-for-robot-arm-or-CNC-machine
The input of my algorithm:
and the output (which is mirrored and re-assigned to the colors I have) with a ThresholdOfError of 50 (you can see the strokes overlapping):
Things to notice are:
*The strokes need to be non-overlapping (if they overlap, there are too many strokes)
*The painting has different colors, and strokes of the same color are better drawn together
*The strokes are shaped like rectangles
*Some colored areas are disconnected, like below, only the yellow of a sunflower:
I am not sure which algorithm I should use; here are some possibilities I have thought about:
Method 1. Generate 50k (or more) large rectangles with random direction and position; if a rectangle overlaps its own color's area and does not overlap other rectangles, keep it; then decrease the generated rectangle size, and after a couple of rounds decrease it again
Method 2. Extract one color first, then generate large rectangles with random direction and position (less area, so less calculation time)
Method 3. Do edge detection first, then generate rectangles oriented along the edges; if a rectangle overlaps its own color's area and does not overlap other rectangles, keep it; then decrease the generated rectangle size, and after a couple of rounds decrease it again
Method 4. Generate random circles and let the pen draw points instead (but this may result in too many points)
Any suggestions about which algorithm I should use?
I would start with:
Quantize your image to your palette
so reduce the colors to your palette first; see:
Effective gif/image color quantization?
Converting BMP image to set of instructions for a plotter?
segment your image by similar colors
for this you can use flood fill or growth fill to create labels (region indices) in the form of ROIs
see Fracture detection in hand using image processing
for each ROI create an infill path with a thick brush
this is simple hatching: generate a zig-zag-like path with a "big" brush width along the major direction of the ROI. Use either AABB, OBB or PCA to detect the major direction (the direction with the biggest ROI extent) and just AND the path with the ROI polygon (see the sketch after this list)
for each ROI create an outline path with a "thin" brush
IIRC this is also called contour extraction; simply select the boundary pixels of the selected ROI
then you can use A* on the ROI boundary to sort the pixels into 2 halves (or more for a complex shape with holes or thin parts), so backtrack the pixels and then reorder them to form closed loop(s)
this will preserve the details on the boundary (while the infill uses a thick brush)
Something like this:
In case your colors are combinable you can use the CMY color space and subtractive color mixing and process each C, M, Y channel separately (max 3 overlapping strokes) to get a much better color match.
If you want much better colors you can also add dithering; however, that will slow down the painting a lot, as it requires many more path segments, and it is not optimal for a plotter with tool up/down movement (dithering suits printing heads, or printing triggered without additional movements, much better). To partially overcome this you could use partial dithering, where you specify the amount of dithering created (leading to fewer segments).
there are a lot of things you can improve/add to this, like:
remove the outline from the ROI (to limit the overlaps and prevent overpainting details)
do all infills first and then all outlines
set the infill brush width based on ROI size
adjust the infill hatching pattern to better match your arm kinematics
order the ROIs so they are painted faster (a variation of the Traveling Salesman Problem, TSP)
infill with more than just one brush width to preserve details near borders
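To make the infill step concrete, here is a rough Python/OpenCV sketch of the hatching idea from the infill bullet above (assumptions: mask is a uint8 ROI mask and brush is the brush width in pixels; PCA on the pixel coordinates gives the major direction, and runs of ROI pixels along rotated scanlines, i.e. the AND with the ROI, become stroke segments; clipping at the canvas border is ignored for brevity):

    import cv2
    import numpy as np

    def hatch_roi(mask, brush):
        # major direction of the ROI via PCA on its pixel coordinates
        ys, xs = np.nonzero(mask)
        pts = np.column_stack((xs, ys)).astype(np.float64)
        cov = np.cov((pts - pts.mean(axis=0)).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        vx, vy = eigvecs[:, np.argmax(eigvals)]        # biggest-extent direction
        angle = np.degrees(np.arctan2(vy, vx))

        # rotate the mask so the major direction becomes horizontal
        h, w = mask.shape
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
        rot = cv2.warpAffine(mask, M, (w, h), flags=cv2.INTER_NEAREST)
        Minv = cv2.invertAffineTransform(M)

        # zig zag: every brush-th scanline, runs of ROI pixels = strokes
        strokes = []
        for y in range(brush // 2, h, brush):
            xs_on = np.nonzero(rot[y])[0]
            if xs_on.size == 0:
                continue
            breaks = np.nonzero(np.diff(xs_on) > 1)[0]
            starts = np.concatenate(([0], breaks + 1))
            ends = np.concatenate((breaks, [xs_on.size - 1]))
            for s, e in zip(starts, ends):             # AND with the ROI polygon
                p0 = Minv @ [xs_on[s], y, 1.0]         # map back to image space
                p1 = Minv @ [xs_on[e], y, 1.0]
                strokes.append((tuple(p0), tuple(p1)))
        return strokes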
I suggest you use the flood fill algorithm.
Start at the top-right pixel.
Flood fill that pixel's color. https://en.wikipedia.org/wiki/Flood_fill
Fit rectangles into the filled area (see the sketch after this list).
Move on to the next pixel that is not in the filled area.
When the entire picture has been covered, sort the rectangles by color.
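A sketch of the rectangle-fitting step in Python (a simple greedy decomposition: take the top-left uncovered pixel of the flood-filled region, grow right, then grow down; region is assumed to be a 2D boolean numpy array, and the result is correct but not minimal):

    import numpy as np

    def rectangles_from_region(region):
        # region: 2D boolean array, True where the flood fill reached
        todo = region.copy()
        rects = []
        while todo.any():
            # top-left uncovered pixel of the region (row-major order)
            y, x = np.argwhere(todo)[0]
            # grow to the right along the row
            x2 = x
            while x2 + 1 < todo.shape[1] and todo[y, x2 + 1]:
                x2 += 1
            # grow downward while the whole row span stays inside
            y2 = y
            while y2 + 1 < todo.shape[0] and todo[y2 + 1, x:x2 + 1].all():
                y2 += 1
            todo[y:y2 + 1, x:x2 + 1] = False
            rects.append((x, y, x2, y2))
        return rects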
Recently I've been trying to find ways to detect lines in CT scans. I found that the whole Hough Transform family and some other algorithms require contours produced by an edge detector, but the contours are not what I want, and those two steps create a lot of short lines. I'm perplexed by this. Can anyone tell me what to do about it? Are there methods or algorithms that work directly on the grayscale image rather than on a binary image? Using OpenCV or numpy would be perfect! Many thanks!
Below is the test picture. I'm trying to detect the straight lines at the top left and filter out the others.
You have a pretty consistent background, so I would:
detect contours
as any pixel with a non-background color that neighbors a background-colored pixel.
Segment/label the contour points to form ordered "polylines"
create an ID buffer, clear it to 0 (background or object pixels) and set ID=1
find any not-yet-processed contour pixel
if none is found, stop
flood fill the contour in the ID buffer with ID
increment ID
go to 2
now the ID buffer contains your labeled contours
for each contour create an ordered list of the pixels forming the contour "polyline"
to speed this up you can remember each contour's start point from step #2, or even do this step directly in step #2.
detect straight lines in the contour "polylines".
that is simple: straight lines have a similar slope angle between neighboring points. You can also apply regression or whatever ... the slope or unit direction vectors must be computed on pixels that are at least 5 pixels apart from each other, otherwise rasterization pixelation will corrupt the results (see the sketch after the links below).
see some related stuff:
Efficiently calculating a segmented regression on a large dataset
Given n points on a 2D plane, find the maximum number of points that lie on the same straight line
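As a sketch of the line-detection step in Python/OpenCV (with one shortcut: cv2.findContours stands in for the manual ID-buffer labeling, since it already returns each contour as an ordered polyline; the 5-pixel spacing and the angle tolerance are arbitrary choices):

    import cv2
    import numpy as np

    def straight_segments(binary, step=5, angle_tol=np.radians(5)):
        # binary: uint8 image with contour pixels white on black
        contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
        segments = []
        for c in contours:
            pts = c[:, 0, :]                   # ordered contour points
            if len(pts) < 2 * step:
                continue
            # slope angle between points `step` apart, to beat pixelation
            d = pts[step:] - pts[:-step]
            ang = np.arctan2(d[:, 1], d[:, 0])
            # group consecutive samples with a similar angle into segments
            start = 0
            for i in range(1, len(ang)):
                # wrapped angle difference to the run's starting angle
                if abs(np.angle(np.exp(1j * (ang[i] - ang[start])))) > angle_tol:
                    if i - start >= step:      # long enough run = straight line
                        segments.append((tuple(pts[start]), tuple(pts[i])))
                    start = i
            if len(ang) - start >= step:
                segments.append((tuple(pts[start]), tuple(pts[-1])))
        return segments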
I have an image like this (thresholding, noise removal, etc. completed):
My final output should be an image without any of the jagged edges, and smaller than the given image. By this I mean that the only difference between the 2 images must be that in the new one the jagged edges are removed, not filled in. Like so (the final image must be the region within the red border; the red border is shown only for explanation):
I was thinking along the lines of using Hough transforms, or dilations followed by erosions, but nothing seems to be working (probably my fault, because I have not worked with them in much detail before).
Note that the language I'd like to do this in is MATLAB.
There are 2 primary aims to this:
To get the edges themselves, using Hough transforms
So that the 'Extrema' property returns the desired points when using regionprops, like so:
The question, in a more concise form:
How would I go about extracting this T in MATLAB, such that it does not have rugged edges, but the overall figure is not larger than the original, as shown in the second figure above? In other words, what set of transformations (in MATLAB) would I use to smooth the borders of the image with as little area lost as possible (but no area added), so that the ruggedness disappears?
Is there a more efficient way of extracting the corner (extrema) points shown in figure 2 above without going through step 1?
EDIT:
A few more sample images:
NB: All images under consideration will be composed of rectangles approximately at 90° to each other, and no other figures. So smoothing an image with a curved edge, for example, would be beyond the scope of an answer to this question (or even, for that matter, a trapezium, although I think that smoothing 2 straight edges should work the same way, irrespective of whether the edge has another parallel to it or not).
Here are a few more images, for reference:
I'm not sure if my answer would satisfy your requirements. I'm putting it here because I think it's too long for a comment.
Since you want the final output to be smaller than the input image, erode the input image. You can pick an appropriate kernel size.
Perform corner detection on this eroded image. This will give you all strong corners, but without any order.
Trace the boundaries of the eroded image. This should give you an ordered list of boundary pixels.
Now, with the help of these ordered boundary points, you can order the corners you found earlier.
Filter corner points that form an angle of approximately 90 degrees. You can do this by considering each 3 ordered corner points (the two green points and the red point in between in the image below; they are just for illustration, not corner points that I calculated. At the end of this operation, you have all the red points in the image below, which lie at strong corners, in addition to the other yellow and green corner points).
now you can either find the equation of the line connecting 2 consecutive red points
or
fit a least-squares line to the points between (and including) each 2 consecutive red points
Since you did all this processing on an eroded image that is essentially smaller than the original, you should get a smaller shape.
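The question asks for MATLAB (imerode, corner and bwtraceboundary would be the counterparts there), but as a sketch of the first four steps, the Python/OpenCV equivalent might look like this; "shape.png" is a placeholder filename and all parameter values are guesses:

    import cv2
    import numpy as np

    img = cv2.imread("shape.png", cv2.IMREAD_GRAYSCALE)
    _, bw = cv2.threshold(img, 128, 255, cv2.THRESH_BINARY)

    # 1. erode so the result stays inside the original figure
    eroded = cv2.erode(bw, np.ones((5, 5), np.uint8))

    # 2. strong corners (unordered)
    corners = cv2.goodFeaturesToTrack(eroded, maxCorners=50,
                                      qualityLevel=0.1, minDistance=5)
    corners = corners[:, 0, :]

    # 3. boundary trace gives an ordered list of boundary pixels
    contours, _ = cv2.findContours(eroded, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    boundary = contours[0][:, 0, :]

    # 4. order the corners by their position along the boundary
    order = [np.argmin(np.linalg.norm(boundary - c, axis=1)) for c in corners]
    ordered_corners = corners[np.argsort(order)]

The 90-degree filtering and the line fitting described above would then operate on ordered_corners.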
I'm trying to take a source image, and recreate it on a transparent canvas using only overlapping mono-colored squares. The goal is to use as few squares as possible.
In other words, I'm taking a blank transparent image, and drawing squares of various colors until I recreate the source image, with the goal being to use as few squares as possible.
For example:
Here is a source image. It has two colors: red and green. I want to use only squares, that may overlap, to recreate the source image.
The ideal solution would be a large red square, and then two green squares drawn on top - that is what I want my algorithm to find, with any source image - the position, size, color and order of each square.
My target image that I intend to process is this:
(8x enlargement)
It has 1411 non-transparent pixels (the worst case), and with a brute-force solution that does not use overlapping squares, I've recreated the image using 1246 squares.
My current solution is a brute force method along the lines of:
Create a list of all colors used in the source image. Each item is a "layer". A layer has a color and a 2D array representing pixels. The order is important, but I don't know what order the layers need to be in, so it's arbitrary initially.
For each layer in the list, initialize the 2D array. Each element corresponds to a pixel in the source image. Pixels that are the same color as the layer's chosen color are marked '1'. Pixels that are in a layer ABOVE the current layer are marked "don't care". All other pixels are marked '0'.
Use some algorithm to process each layer, using the smallest number of squares to reach every pixel marked '1', without touching any pixels marked '0'.
Rearrange the order of layers and go back to Step 2. Do this for every possible combination of layers, then check to see which ordering uses the least number of squares in total.
Someone may have a better explanation in a response, but brute-force testing every permutation is not viable, because my target image has 31 colors (resulting in 31! permutations).
As for why I'm doing this: I'm trying to create an image in a game (Starbound), where I can only use squares. The lazy solution is to use a square for each pixel, but that's just too many squares.
Just a suggestion for a possible solution. I haven't tried it.
It's a greedy approach.
For every pixel, compute the largest uniform square that contains it.
Then choose the largest of all squares and mark all pixels it covers as "covered".
Then among all unmarked pixels, choose the largest covering square, and so on until no unmarked pixel remains.
Ties do not matter; just take any largest square and mark its pixels.
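A sketch of this greedy pass in Python/OpenCV (with one simplification: the Chebyshev distance transform gives the largest uniform square centered on each pixel, which is cheaper than the true "largest square containing it"; img is assumed to be an HxWx3 uint8 array with no transparency, and the loop is unoptimized):

    import cv2
    import numpy as np

    def greedy_squares(img):
        # returns (x, y, size, color) squares that cover the whole image
        h, w = img.shape[:2]
        uncovered = np.ones((h, w), dtype=bool)
        colors = np.unique(img.reshape(-1, 3), axis=0)
        squares = []
        while uncovered.any():
            best = None
            for color in colors:
                mask = np.all(img == color, axis=2)
                if not (mask & uncovered).any():
                    continue
                # Chebyshev distance (zero-padded) = half-size of the largest
                # uniform square centered on each pixel, kept inside the image
                pad = np.pad(mask, 1).astype(np.uint8)
                dist = cv2.distanceTransform(pad, cv2.DIST_C, 3)[1:-1, 1:-1]
                dist[~uncovered] = 0           # centers must be uncovered
                y, x = np.unravel_index(np.argmax(dist), dist.shape)
                r = int(dist[y, x]) - 1
                if dist[y, x] > 0 and (best is None or r > best[0]):
                    best = (r, x, y, tuple(int(c) for c in color))
            r, x, y, color = best
            squares.append((x - r, y - r, 2 * r + 1, color))
            uncovered[y - r:y + r + 1, x - r:x + r + 1] = False
        return squares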
UPDATE: overlaps offer opportunities for reducing the number of squares.
Consider all possible permutations of the filling order of the shapes. The shapes drawn first, on the bottom layers, can be (partly) hidden by others. Process the shapes starting from the top layer. When you process a shape to associate every pixel with the largest uniform square that contains it, treat all already-covered pixels as don't-care.
In the given example, fill the green squares first; then when filling the red square, the green pixels can be considered red or not, depending on convenience.
If you cannot try all permutations, try them at random. Heuristic approaches such as genetic algorithms or simulated annealing could help. Nothing straightforward here.
It would be hard to guarantee an optimal solution. The brute force search would be huge. This calls for a heuristic.
Start at the edges. Walking the outside edge, find the most frequent color. Draw squares to fill the background.
Iterate, working inwards, drawing smaller and smaller squares which cover the most pixels of the wrong color, ending with single-pixel squares.
Working inwards means reducing the size of the bounding box outside of which all pixels are the correct color. At each step, the upper limit on the size of a square is that it fits in the bounding box. Choose the squares which give the best score.
The score is based on the old vs. new color being wrong or right, so there are 4 possible values for each pixel. One example per-pixel score function would be:
wrong -> wrong: 0
wrong -> right: 1
right -> right: 1
right -> wrong: -2
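As a tiny Python helper (assuming before and after are boolean arrays marking "pixel has the right color" before and after drawing a candidate square):

    import numpy as np

    # wrong->wrong: 0, wrong->right: +1, right->right: +1, right->wrong: -2
    def score(before, after):
        return int(np.sum(after)) - 2 * int(np.sum(before & ~after))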
I think that if you always reduce the number of wrong pixels on the edge of the bounding box and never increase the size of the square, then the algorithm must halt with a solution without needing to backtrack. A backtracking solution could probably do better.
An "erosion-based" heuristic.
Consider all outline pixels, i.e. those having at least one neighbor outside the shape.
Among these pixels, choose a color (the most frequent one?).
For all outline pixels of this color, compute the largest square that does not exceed the shape.
Fill these squares, from larger to smaller, until the complete outline is covered.
Remove the correctly filled pixels and restart the procedure on the eroded shape.
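A small Python sketch of the first two steps (outline pixels of a boolean mask shape, and the most frequent color among them in the image img; 4-connectivity is assumed):

    import numpy as np

    def outline_pixels(shape):
        # a shape pixel is outline if any 4-neighbor lies outside the shape
        padded = np.pad(shape, 1, constant_values=False)
        inside = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                  padded[1:-1, :-2] & padded[1:-1, 2:])
        return shape & ~inside

    def most_frequent_outline_color(img, shape):
        ys, xs = np.nonzero(outline_pixels(shape))
        colors, counts = np.unique(img[ys, xs], axis=0, return_counts=True)
        return colors[np.argmax(counts)]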
In the case of the red square, all outline pixels will be covered by the red square itself, and the first filling will "consume" the whole area.
Then, removing the pixels covered in red, the two green squares will remain.
All green outline pixels will now be covered by the two green squares, and the first two fillings will "consume" all the green area.
I have an image like the one on the left. I want to get the covered areas, or the arc points of the polygons, to produce an image like the one on the right. I have the end-point values of all the lines.
How can I do that (get all the covered areas)? Any algorithms or ideas?
The easiest way to do this is with a recursive fill technique.
Assuming you have a black and white image to start with, you drop a pixel of color in one region. You recursively fill the areas up, down, left, and right of that pixel. When each of those pixels returns (because all surrounding pixels are either colored or black walls), you return.
You can do this iteratively for each x,y coordinate, skipping it if it's already colored by a previous run. In doing this, you can iterate over colors as well, if you so desire.
This is a classic case of binary image segmentation, as far as I can see from the limited resolution of the input image. Invert your image, maybe erode it to fill holes in your lines, and then do an image segmentation. A trivial algorithm for this is to perform a forward scan of the image, assigning each pixel the region value of its backward (directly left or any above-direction) white neighbours, or a new region value if it has only black backward neighbours, joining regions when neighbours carry different region numbers.
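A sketch of that forward scan in Python (simplified to 4-connectivity, checking only the left and up neighbors; a small union-find structure joins regions when the two neighbors carry different region numbers; img is a 2D boolean array, True for room pixels):

    import numpy as np

    def label_regions(img):
        h, w = img.shape
        labels = np.zeros((h, w), dtype=int)
        parent = [0]                           # union-find; parent[0] unused

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path halving
                a = parent[a]
            return a

        next_label = 1
        for y in range(h):
            for x in range(w):
                if not img[y, x]:
                    continue
                left = labels[y, x - 1] if x > 0 else 0
                up = labels[y - 1, x] if y > 0 else 0
                if left == 0 and up == 0:      # only black backward neighbours
                    parent.append(next_label)  # -> new region value
                    labels[y, x] = next_label
                    next_label += 1
                elif left and up:              # join differing regions
                    la, ua = find(left), find(up)
                    parent[max(la, ua)] = min(la, ua)
                    labels[y, x] = min(la, ua)
                else:
                    labels[y, x] = left or up
        # second pass: flatten every label to its region's root
        for y in range(h):
            for x in range(w):
                if labels[y, x]:
                    labels[y, x] = find(labels[y, x])
        return labels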
As a second approach, if you have a list of unbroken lines, you might try a graph approach. Consider each line an edge in a graph and each intersection point a node, and find the minimal cycles in the graph. These are your rooms.