I have pictures of goods. There is a white border around some images. The white color is not uniform and has some shades (poor quality etc.). I need to cut this border off. To remove the white color I use:
bd.threshold(bd, rect, pt, ">", threshold, color, maskColor);
There are some non-transparent pixels left after the threshold, because the threshold color is unique for every image. BitmapData.getColorBoundsRect returns a region that includes these non-transparent pixels. I need the region without these pixels (only the image). Checking each pixel is too slow for big pictures. What is the most economical way to do this (find the green region in the picture below)? Sorry for my bad English and thanks for any help.
There are four edges of the image: left, right, top, bottom. Check each of them, starting from the edge and moving towards the inside of the image.
For example, let's take the top edge (y = 0):
1. Choose any horizontal position on the edge, for example x = 10.
2. Check the pixel at (x, y).
3. If it is transparent, move the edge down: y++.
4. Go to step 2 and repeat until the pixel is not transparent.
5. Choose a different horizontal position and go to step 2.
Repeat several times with different x. If there are only occasional non-transparent pixels, repeating the process 5-10 times will give you a new top edge which will most probably be 100% precise. It doesn't matter if the image is big; you only check several places along the edge. Do the same for the left, right and bottom edges. Then copy the image defined by these edges.
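A minimal sketch of this probing in Python (an ActionScript version would be analogous); is_transparent(x, y) is a hypothetical helper that tests the alpha channel of the thresholded bitmap:

import random
from statistics import median

# Sketch: find the top edge by probing a few random columns and walking down
# until a non-transparent pixel is hit. Taking the median makes the result
# robust against the occasional leftover noise pixels in the border.
def find_top_edge(width, height, is_transparent, samples=8):
    hits = []
    for _ in range(samples):
        x = random.randrange(width)
        y = 0
        while y < height and is_transparent(x, y):
            y += 1
        hits.append(y)
    return int(median(hits))

# The left, right and bottom edges are found the same way, swapping the axis
# and the walking direction.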
If the edge quality is really bad then it's better to edit all images manually.
I am writing code that generates start and end points for the strokes of a picture (raster image) so a robot arm can paint it.
I have written an algorithm, but it produces too many overlapping strokes:
https://github.com/Evrid/Painting-stroke-generation-for-robot-arm-or-CNC-machine
The input of my algorithm:
and the output (which is mirrored and re-assigned to the colors I have) with a ThresholdOfError of 50 (you can see the strokes are overlapping):
Things to notice are:
*The strokes need to be non-overlapping (if they overlap, there are too many strokes)
*The painting has different colors, and the same color is better drawn together
*The strokes are shaped like rectangles
*Some colored areas are disconnected, like below, only the yellow from a sunflower:
I am not sure which algorithm I should use; here are some possible ones I have thought about:
Method 1. Generate 50k (or more) large rectangles with random direction and position; if a rectangle overlaps the area of the same color and does not overlap other rectangles, keep it; then decrease the generated rectangle size and keep decreasing it over a couple more rounds.
Method 2. Extract a certain color first, then generate large rectangles with random direction and position (we have less area and less calculation time).
Method 3. Do edge detection first, then generate rectangles oriented along the edges; if a rectangle overlaps the area of the same color and does not overlap other rectangles, keep it; then decrease the rectangle size and keep decreasing it over a couple more rounds.
Method 4. Generate random circles and let the pen draw points instead (but this may result in too many points).
Any suggestions about which algorithm I should use?
I would start with:
Quantize your image to your palette
so reduce the colors to your palette first; see:
Effective gif/image color quantization?
Converting BMP image to set of instructions for a plotter?
Segment your image by similar colors
for this you can use flood fill or region growing to create labels (region indexes) in the form of ROIs
see Fracture detection in hand using image proccessing
For each ROI create an infill path with a thick brush
this is simple hatching; you do it by generating a zig-zag-like path with a "big" brush width along the major direction of the ROI, so use either AABB, OBB or PCA to detect the major direction (the direction with the biggest extent of the ROI) and just AND it with the ROI polygon (see the sketch further below)
For each ROI create an outline path with a "thin" brush
IIRC this is also called contour extraction; simply select the boundary pixels of the selected ROI
then you can use A* on the ROI boundary to sort the pixels into 2 halves (or more if the shape is complex, with holes or thin parts), so backtrack the pixels and then reorder them to form a closed loop (or loops)
this will preserve details on the boundary (while using a thick brush for the infill)
Something like this:
In case your colors are combinable you can use the CMY color space and subtractive color mixing and process each C, M, Y channel separately (max 3 overlapping strokes) to get a much better color match.
If you want much better colors you can also add dithering; however, that will slow down the painting a lot, as it requires many more path segments and is not optimal for a plotter with tool up/down movement (dithering is better suited for print heads or for printing triggered without additional movements ...). To partially overcome this issue you could use partial dithering, where you specify the amount of dithering created (leading to fewer segments).
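For step 3, here is a minimal Python sketch of hatching a binary ROI mask along its major direction, using PCA on the ROI pixel coordinates to find that direction; the mask, the brush_width value and the straight min-to-max stripes are assumptions (for concave ROIs each stripe should additionally be ANDed with the ROI, as described above):

import numpy as np

# Sketch: hatch a binary ROI mask with parallel strokes along its major direction.
# brush_width is the stroke thickness in pixels (an assumption to tune).
def hatch_roi(mask, brush_width=8):
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    # PCA: eigenvectors of the covariance give the major/minor directions
    evals, evecs = np.linalg.eigh(np.cov((pts - center).T))
    major = evecs[:, np.argmax(evals)]
    minor = evecs[:, np.argmin(evals)]
    a = (pts - center) @ major              # coordinate along the major axis
    b = (pts - center) @ minor              # coordinate across it
    strokes = []
    for lo in np.arange(b.min(), b.max() + brush_width, brush_width):
        sel = (b >= lo) & (b < lo + brush_width)        # one hatch stripe
        if sel.any():
            mid = (lo + brush_width / 2) * minor
            p0 = center + a[sel].min() * major + mid
            p1 = center + a[sel].max() * major + mid
            strokes.append((tuple(p0), tuple(p1)))
    return strokes   # parallel hatch segments; connect alternate ends for the zig-zag path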
There are a lot of things you can improve/add to this, like:
remove the outline from the ROI (to limit the overlaps and prevent overpainting of details)
do all infills first and then all outlines
set the infill brush width based on ROI size
adjust the infill hatching pattern to better match your arm kinematics
order the ROIs so they are painted faster (a variation of the Traveling Salesman Problem, TSP)
infill with more than just one brush width to preserve details near borders
I suggest you use the flood fill algorithm.
Start at top right pixel.
Flood fill that pixel color. https://en.wikipedia.org/wiki/Flood_fill
Fit rectangles into the filled area.
Move on to the next pixel that is not in a filled area.
When the entire picture has been covered, sort the rectangles by color.
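A minimal Python sketch of that loop, assuming the image is a 2D list of palette indices; fit_rectangles is a hypothetical routine standing in for step 3:

from collections import deque

# Sketch: label each 4-connected same-color region with a BFS flood fill,
# then hand the region off to a (hypothetical) rectangle-fitting step.
def flood_and_fit(img, fit_rectangles):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    rects = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue                      # pixel already in a filled area
            color = img[sy][sx]
            region, queue = [], deque([(sx, sy)])
            seen[sy][sx] = True
            while queue:                      # BFS flood fill of this color
                x, y = queue.popleft()
                region.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and img[ny][nx] == color:
                        seen[ny][nx] = True
                        queue.append((nx, ny))
            rects.extend((color, r) for r in fit_rectangles(region))
    rects.sort(key=lambda cr: cr[0])          # finally, sort rectangles by color
    return rects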
I am writing a small image analysis program just for fun. Image analysis has always fascinated me. I am trying to locate regions on a scanned document. These regions are going to be marked by clearly defined filled black rectangles (pre-printed on the page).
My problem is locating the rectangles. I know SIFT/SURF find "features", but I am trying to find something specific. Here is what I was thinking of doing. I am not sure if this is the "right" way or if there is a better idea.
First off, using some library I will turn the image into greyscale, perhaps a PGM since that is what I'm used to working with in school. For the analysis I first plan to run the image through a state-of-the-art deskew algorithm in OpenCV or something else that I find. Once I have my deskewed image I will threshold it at some pretty high threshold. The rectangles are going to be straight black, hence the high threshold. I will then experimentally determine a good-size black rectangle to slide across the image. While sliding my rectangle across the image I will determine the areas where the greatest percentage of pixels are the same. I will have a cutoff, say 90%. If 90% of the pixels contained in my window are black, I must have found a rectangle. My reasoning is that a true black rectangle slid over something that is "pretty much" a black rectangle is most likely a black rectangle. Since I deskewed the image I can assume that the rectangles are straight up and down "enough". I can then track the (x,y) offsets where the rectangles are found on the image and mark them.
Would anyone suggest a better approach?
There are many approaches that might work. (One can easily come up with 10 or more approaches.)
Idea #1 - Canny edge detection; find rectangle fit to contours
cv::Canny
cv::findContours
cv::minAreaRect, or
cv::boundingRect might also work, if the deskewing works as advertised.
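A rough sketch of idea #1 in Python (the C++ calls are the same); the file name and the Canny thresholds are assumptions:

import cv2

# Sketch of idea #1: edges -> contours -> axis-aligned bounding boxes.
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
edges = cv2.Canny(img, 50, 150)                      # thresholds to tune
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)   # OpenCV 4.x signature
boxes = [cv2.boundingRect(c) for c in contours]      # (x, y, w, h) per contour
# cv2.minAreaRect(c) gives a rotated rectangle instead, if deskewing is imperfect.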
Idea #2 - Find all lines using the Hough transform; iterate through all regions created from line intersections.
Idea #3 - (Improvement on #2) Restrict the Hough transform to horizontal and vertical lines by pre-processing.
Idea #4 - Compute Horizontal and Vertical profiles on the entire image; find dips; iterate through all candidate regions.
This idea is based on the assumption that the black rectangles are large enough that they leave a "depression" in both the horizontal and vertical projection profiles, which would be detectable despite other noise objects in the image.
cv::reduce
With dim = 0 or 1 for reducing to a row or column respectively,
With CV_REDUCE_AVG flag
Apply cv::threshold to the horizontal and vertical projection profiles, separately.
For each profile now thresholded into zero/non-zero, find runs of zeroes. These are the possible row ranges and column ranges that could contain the dark rectangles.
For each combination of candidate row range and column range, calculate the average pixel value to decide if it is a true dark rectangle.
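A sketch of idea #4 in Python; the dark rectangles show up as low-valued runs in the row/column averages, and the threshold value is an assumption to tune:

import cv2
import numpy as np

# Sketch of idea #4: horizontal and vertical projection profiles.
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)    # hypothetical file name
col_profile = cv2.reduce(img, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
row_profile = cv2.reduce(img, 1, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()

def dark_runs(profile, thresh=64.0):
    # return (start, end) index ranges where the profile dips below thresh
    dark = profile < thresh
    edges = np.flatnonzero(np.diff(dark.astype(int)))
    bounds = np.concatenate(([0], edges + 1, [len(dark)]))
    return [(a, b) for a, b in zip(bounds[:-1], bounds[1:]) if dark[a]]

candidate_rows = dark_runs(row_profile)
candidate_cols = dark_runs(col_profile)
# Each (row range, column range) pair is then checked with an average-value test.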
Idea #5 - Use integral image (summed area table) to quickly calculate the average pixel value in arbitrary rectangles
cv::integral
To compute the sum (and average) of a rectangle from an integral image, see the Wikipedia article on Summed Area Table
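A sketch of idea #5 in Python; cv2.integral pads the table with a leading row and column of zeros, so the four-corner lookup below works directly:

import cv2

# Sketch of idea #5: average pixel value of any rectangle in O(1)
# via the integral image (summed area table).
img = cv2.imread("scan.png", cv2.IMREAD_GRAYSCALE)   # hypothetical file name
ii = cv2.integral(img)                               # shape (h + 1, w + 1)

def rect_mean(x, y, w, h):
    total = ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]
    return total / float(w * h)

# A candidate region is a dark rectangle if rect_mean(...) is close to zero.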
Preprocessing idea - use morphological dilation (or erosion) to "erase" things that cannot be the large continuous black box.
Preprocessing idea - use pre-processing to enhance horizontal and vertical edges; suppress edges in other directions.
I don't know if it is a better approach, but the first thing that came to mind would be a scan-line solution (assuming black or white pixels): I'd check each scanline from top to bottom, and in each scanline each pixel from left to right. A "first" black pixel would be a possible upper-left corner of a rect. If there were enough following contiguous black pixels on the line to meet my desired minimum width, I'd keep the [left, width] in a list of possible rects. Find all possible rect starts and widths on the line.
For a rect to stay in the list and grow in height, the next scanline would have to have the same [left, width] occurrence, otherwise the rect is finished (if its height meets my desired minimum height) or discarded or ignored as too short in height.
You can easily add logic for situations like two rectangles too close to one another vertically or horizontally. Overlapping rectangles would be trickier but still possible to detect with added code.
Here's some pseudocode:
for s := 1 to scanlinecount do
begin
  pixel := 1
  while pixel <= scanlinewidth do
    if black(s, pixel) then          // possible rect
    begin
      left := pixel
      repeat
        inc(pixel)
      until (pixel > scanlinewidth) or white(s, pixel)
      width := pixel - left
      if width >= MINWIDTH then      // wide enough
        rememberrect(s, left, width) // bumps height if already in list
    end
    else inc(pixel)
end
Your list of found rects stores the starting scanline, leftmost pixel, width, and height for each rect found. The "rememberrect" routine checks each rect in the list:
rememberrect(currentline, left, width):
  for r := 1 to rectlist.count do
    if rectlist[r].left = left
       & rectlist[r].width = width
       & rectlist[r].y + rectlist[r].height = currentline then
    begin                            // found rect continuing on scanline
      inc(rectlist[r].height)
      exit
    end
  inc(rectlist.count)                // add new rect to list
  rectlist[rectlist.count].left := left
  rectlist[rectlist.count].width := width
  rectlist[rectlist.count].y := currentline
  rectlist[rectlist.count].height := 1
If the group of black pixels on the current scanline has the same leftmost pixel and width as a group on the previous scanline (you'll know they're vertically contiguous because the starting scanline of the rect in the list plus its height will equal the current scanline) then rememberrect bumps the height of the found and remembered rect by 1. Otherwise, remember the new rect with initial height 1.
After the last scanline you'll have a long list of rect candidates, many of them only 1 pixel high. Delete or ignore any rects in the list that aren't high enough. To avoid growing a long list of futile candidates: at the start of each scanline mark all rects found so far as "finished". If rememberrect grows an existing rect or adds a new rect, mark that rect as "grown". At the end of each scanline, any rect still marked as finished that isn't tall enough can be deleted from the list.
I am writing a program in Matlab to detect a circle.
I've already managed to detect shapes such as the square, rectangle and the triangle, basically by searching for corners, and determining what shape it is based on the distance between them. The images are black and white, with black being the background and white the shape, so for me to find the corners I just have to search each pixel in the image until I find a white pixel.
However I just can't figure out how I can identify the circle.
Here is an example of what a circle input would look like:
It is difficult to say what the best method is without more information: for example, whether more than one circle may be present, whether it is always centred in the image, and how resilient the algorithm needs to be to distortions. Also whether you need to determine the location and dimensions of the shape or simply a 'yes'/'no' output.
However a really simple approach, assuming only one circle is present, is as follows:
Scan the image from top to bottom until you find the first white pixel at (x1,y1)
Scan the image from bottom to top until you find the last white pixel at (x2,y2)
Derive the diameter of the suspected circle as y2 - y1
Derive the centre of the suspected circle as ((x1+x2)/2, y1+(y2-y1)/2)
Now you are able to score each pixel in the image as to whether it matches this hypothetical circle or not. For example, if a pixel is inside the suspected circle, score 0 if it is white and 1 if it is black, and vice-versa if it is outside the suspected circle.
Sum the pixel scores. If the result is zero then the image contains a perfect circle. A higher score indicates an increasing level of distortion.
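A minimal sketch of that hypothesis-and-score approach in Python (the Matlab version would be a few lines of the same vectorised arithmetic); img is assumed to be a binary array with 1 for the white shape:

import numpy as np

def hypothesise_circle(img):
    ys, xs = np.nonzero(img)
    i1, i2 = ys.argmin(), ys.argmax()       # first (top) and last (bottom) white pixel
    x1, y1, x2, y2 = xs[i1], ys[i1], xs[i2], ys[i2]
    radius = (y2 - y1) / 2.0
    return (x1 + x2) / 2.0, y1 + radius, radius   # (cx, cy, r)

def circle_score(img, cx, cy, radius):
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (xx - cx) ** 2 + (yy - cy) ** 2 <= radius ** 2
    # mismatches: black pixels inside the circle, or white pixels outside it
    return np.count_nonzero(inside != (img > 0))

# score == 0 -> perfect circle; larger scores indicate increasing distortion.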
I think you should read about these two topics:
Theoretical:
Binary images
Hough transform
Matlab:
Circle Detection via Standard Hough Transform
Hough native in matlab
Binary images
I have an image like the one on the left side. I want to get the covered areas, or the arc points of the polygons, so I can produce an image like the one on the right side. I have got the end-point values of all the lines.
How can I do that (get all the covered areas)? Any algorithm or ideas?
The easiest way to do this is with a recursive fill technique.
Assuming you have a black and white image to start with, you drop a pixel of color on one region. You recursively fill the areas above, below, left, and right of that pixel. When each of those pixels returns (because all surrounding pixels are either colored or black wall), you return.
You can do this iteratively for each x,y coordinate, skipping it if it's already colored by a previous run. In doing this, you can iterate over colors as well, if you so desire.
This is a classic case of binary image segmentation, as far as I can see in the limited resolution of the input image. Invert your image, maybe erode it to fill holes in your lines, and then do an image segmentation. A trivial algorithm for this is to perform a forward scan of the image, assigning each pixel the region value of its backward (directly left, or any direction above) white neighbours, or a new region value if it has only black backward neighbours, joining regions when there are neighbours with different region numbers.
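In practice a library labelling routine already does this forward-scan segmentation; a minimal Python sketch, assuming the drawing is given as a boolean wall mask:

import numpy as np
from scipy import ndimage

# Sketch: thicken the lines to close small gaps, invert, then label the
# enclosed regions. The outer background comes out as one label too and
# can simply be discarded.
def label_rooms(walls):
    walls = ndimage.binary_dilation(walls)   # fill small holes in the lines
    rooms = ~walls                           # free space between the lines
    labels, count = ndimage.label(rooms)     # connected-component labelling
    return labels, count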
As a second approach, if you have a list of unbroken lines, you might try a graph approach. Consider each line as an edge in a graph, and each intersection point as a node, and find the minimal cycles in the graph. These are your rooms.
As part of a more complex algorithm I need the following:
let's say I have a circle with radius R1 drawn on a discrete grid (image) (green in the image below)
I want to draw a circle with radius R2 that is bigger than R1 by one pixel (red in the image below).
At each algorithm step I draw circles with increasing radius, in such a way that each time I have a filled circle.
How can I find the points to fill at each step, so that at the end of each step I have a fully filled circle?
I have been thinking of some circle rasterization algorithm, but this will lead to some gaps in the filling. Another way is to use a mathematical morphology operation like dilation, but this seems computationally expensive.
I am generally looking for a way to do this for an arbitrary shape, but initially a circle algorithm will be enough.
Your best option is to draw and fill a slightly larger red circle, and then draw and fill the green circle. Then redo this on the next iteration.
To only draw the 1px border is quite tricky. Your sample image is not even quite consistent. At some places a white pixel occurs diagonally to a green pixel, and in other places that pixel is red.
Edit:
borderPixels = emptySet
for each green pixel p
  for each neighbor n of p
    if n is white
      add n to borderPixels
Do whatever you like with borderPixels (such as color them red)
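A minimal Python sketch of that loop, assuming the filled (green) circle is given as a boolean mask; whether diagonal neighbours count is exactly the 4- vs 8-neighbourhood choice mentioned above:

import numpy as np

# Sketch: collect the white pixels touching the filled region, i.e. the
# roughly 1-pixel ring to be colored red next.
def border_pixels(filled):
    h, w = filled.shape
    border = set()
    ys, xs = np.nonzero(filled)
    for y, x in zip(ys, xs):
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1),
                       (y - 1, x - 1), (y - 1, x + 1),
                       (y + 1, x - 1), (y + 1, x + 1)):   # 8-neighbourhood
            if 0 <= ny < h and 0 <= nx < w and not filled[ny, nx]:
                border.add((ny, nx))
    return border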
My current solution for the circle.
Based on the well-known midpoint circle algorithm:
create the set of points of 1 octant for radius R1 (light green pixels)
create the set of points of 1 octant for radius R2 (dark orange pixels)
for each row in the image, compare the X coordinates of the orange and green pixels and get the 0 or 1 (or however many) pixels in-between (light orange)
repeat for each octant (where for some octants columns instead of rows have to be compared)
This algorithm can be applied to other types of parametric shapes (Bezier-curve based, for example).
For non-parametric (pixel-based) shapes, use image convolution (dilation) with a centrally symmetric kernel (a circle). In other words, for each pixel in the shape look for its neighbours within a small-radius circle and set them to be part of the set (expensive computation).
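A rough Python sketch of the row comparison for one quarter (a slight variation: the first octant plus its mirror is used, so the column comparison mentioned above is folded into the same row lookup); r1 and r2 are the two radii:

# rightmost(r): rightmost x of the rasterised circle for each row 0..r,
# built from the midpoint circle algorithm (first octant + its mirror).
def rightmost(r):
    xs, x, y, d = {}, r, 0, 1 - r
    while y <= x:
        xs[y] = x                        # first-octant point (x, y)
        xs[x] = max(xs.get(x, 0), y)     # mirrored point (y, x) -> row x
        y += 1
        if d < 0:
            d += 2 * y + 1
        else:
            x -= 1
            d += 2 * (y - x) + 1
    return xs

def ring_quarter(r1, r2):
    inner, outer = rightmost(r1), rightmost(r2)
    ring = []
    for row, xo in outer.items():
        xi = inner.get(row, -1)          # rows above R1 start at the centre column
        ring.extend((x, row) for x in range(xi + 1, xo + 1))
    return ring   # pixels to add in the upper-right quarter; mirror in x and y for the rest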
Another option is to draw a circle/shape with a 2-pixel-wide red border, and then draw a green filled circle/shape with NO border. This should leave an approximately 1px wide edge.
It depends on how whatever technique you use resolves lines to pixels.
Circle algorithms tend to be optimised for drawing circles... see the link here.