I am writing a small image analysis program just for fun. Image analysis has always fascinated me. I am trying to locate regions on a scanned document. These regions are going to be marked by clearly defined filled black rectangles (pre-printed on the page).
My problem is locating the rectangles. I know SIFT/SURF find "features", but I am trying to find something specific. Here is what I was thinking of doing; I am not sure if this is the "right" way or if there is a better idea.
First, using some library, I will convert the image to greyscale, perhaps a PGM since that is what I'm used to working with from school. For the analysis, I first plan to run the image through a state-of-the-art deskew algorithm from OpenCV or something else that I find. Once I have my deskewed image I will threshold it at some fairly high threshold; the rectangles are solid black, hence the high threshold. I will then experimentally determine a good-sized black rectangle to slide across the image. While sliding my rectangle across the image I will determine the areas where the greatest percentage of pixels match. I will have a cutoff, say 90%: if 90% of the pixels contained in my window are black, I must have found a rectangle. My reasoning is that a true black rectangle slid over something that is "pretty much" a black rectangle is most likely a black rectangle. Since I deskewed the image I can assume that the rectangles are straight up and down "enough". I can then record the (x, y) offsets where the rectangles are found on the image and mark them.
Would anyone suggest a better approach?
There are many approaches that might work; one can easily come up with ten or more.
Idea #1 - Canny edge detection; fit a rectangle to each contour
cv::Canny
cv::findContours
cv::minAreaRect, or
cv::boundingRect might also work, if the deskewing works as advertised.
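A minimal sketch of Idea #1, assuming a deskewed 8-bit grayscale page. Here a plain threshold stands in for the Canny step (the marks are solid, so edges are not strictly needed), and the threshold, area, and fill-ratio constants are illustrative rather than tuned:

#include <opencv2/opencv.hpp>
#include <vector>

std::vector<cv::Rect> findDarkRectangles(const cv::Mat& gray)
{
    cv::Mat bw;
    // Invert-threshold so the solid black marks become white blobs.
    cv::threshold(gray, bw, 60, 255, cv::THRESH_BINARY_INV);
    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(bw, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
    std::vector<cv::Rect> rects;
    for (const auto& c : contours)
    {
        cv::Rect box = cv::boundingRect(c);
        // A filled rectangle occupies nearly all of its bounding box.
        double fill = cv::contourArea(c) / (double)box.area();
        if (box.area() > 500 && fill > 0.9) rects.push_back(box);
    }
    return rects;
}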
Idea #2 - Find all lines using the Hough transform; iterate through all regions created from line intersections.
Idea #3 - (Improvement on #2) Restrict the Hough transform to horizontal and vertical lines by pre-processing.
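A rough sketch of the pre-filtering in Idea #3 (the Hough parameters are guesses): keep only the near-horizontal and near-vertical segments returned by the probabilistic Hough transform.

#include <opencv2/opencv.hpp>
#include <cstdlib>
#include <vector>

// 'edges' is assumed to be a Canny edge map of the deskewed page.
std::vector<cv::Vec4i> axisAlignedLines(const cv::Mat& edges)
{
    std::vector<cv::Vec4i> lines, kept;
    cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 80, 30, 5);
    for (const auto& l : lines)
    {
        int dx = std::abs(l[2] - l[0]), dy = std::abs(l[3] - l[1]);
        if (dx > 10 * dy || dy > 10 * dx)  // within ~6 degrees of an axis
            kept.push_back(l);
    }
    return kept;
}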
Idea #4 - Compute Horizontal and Vertical profiles on the entire image; find dips; iterate through all candidate regions.
This idea is based on the assumption that the black rectangles are large enough that they leave a "depression" in both the horizontal and vertical projection profiles, which would be detectable despite other noise objects in the image.
cv::reduce
With dim = 0 or 1 for reducing to a single row or column respectively,
With the CV_REDUCE_AVG flag
Apply cv::threshold to the horizontal and vertical projection profiles, separately.
For each profile now thresholded into zero/non-zero, find runs of zeroes. These are the possible row ranges and column ranges that could contain the dark rectangles.
For each combination of candidate row range and column range, calculate the average pixel value to decide if it is a true dark rectangle.
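A sketch of the profile computation (bw is a binarized page, 0 = ink, 255 = background; cv::REDUCE_AVG is the modern spelling of the CV_REDUCE_AVG flag):

cv::Mat rowProfile, colProfile;
cv::reduce(bw, rowProfile, 1, cv::REDUCE_AVG, CV_32F); // dim = 1: one value per row
cv::reduce(bw, colProfile, 0, cv::REDUCE_AVG, CV_32F); // dim = 0: one value per column
// Rows/columns whose average falls well below 255 are the candidate "dips"
// to pair up and verify as described above.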
Idea #5 - Use integral image (summed area table) to quickly calculate the average pixel value in arbitrary rectangles
cv::integral
To compute the sum (and average) of a rectangle from an integral image, see the Wikipedia article on Summed Area Table
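As a sketch, the average over an arbitrary rectangle then costs four table reads:

cv::Mat sum; // (rows+1) x (cols+1) integral image of the 8-bit input 'gray'
cv::integral(gray, sum, CV_32S);
auto rectMean = [&](const cv::Rect& r)
{
    int a = sum.at<int>(r.y, r.x);                      // top-left
    int b = sum.at<int>(r.y, r.x + r.width);            // top-right
    int c = sum.at<int>(r.y + r.height, r.x);           // bottom-left
    int d = sum.at<int>(r.y + r.height, r.x + r.width); // bottom-right
    return double(d - b - c + a) / r.area();            // average gray level in r
};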
Preprocessing idea - use morphological dilation (or erosion) to "erase" things that cannot be the large continuous black box.
Preprocessing idea - use pre-processing to enhance horizontal and vertical edges; suppress edges in other directions.
I don't know if it is a better approach, but the first thing that came to mind is a scan-line solution (assuming black or white pixels): I'd check each scanline from top to bottom, and in each scanline check each pixel from left to right. A "first" black pixel would be a possible upper-left corner of a rect. If enough contiguous black pixels follow on the line to meet my desired minimum width, keep the [left, width] in a list of possible rects. Find all possible rect starts and widths on the line.
For a rect to stay in the list and grow in height, the next scanline would have to have the same [left, width] occurrence, otherwise the rect is finished (if its height meets my desired minimum height) or discarded or ignored as too short in height.
You can easily add logic for situations like two rectangles too close to one another vertically or horizontally. Overlapping rectangles would be trickier but still possible to detect with added code.
Here's some pseudocode:
for s := 1 to scanlinecount do
begin
    pixel := 1
    while pixel <= scanlinewidth do
        if black(s, pixel) then // possible rect
        begin
            left := pixel
            repeat
                inc(pixel)
            until (pixel > scanlinewidth) or white(s, pixel)
            width := pixel - left
            if width >= MINWIDTH then // wide enough
                rememberrect(s, left, width) // bumps height if already in list
        end
        else inc(pixel)
end
Your list of found rects stores the starting scanline, leftmost pixel, width, and height for each rect found. The "rememberrect" routine checks each rect in the list:
rememberrect(currentline, left, width):
    for r := 1 to rectlist.count do
        if rectlist[r].left = left
           & rectlist[r].width = width
           & rectlist[r].y + rectlist[r].height = currentline then
        begin // found rect continuing on scanline
            inc(rectlist[r].height)
            exit
        end
    inc(rectlist.count) // add new rect to list
    rectlist[rectlist.count].left := left
    rectlist[rectlist.count].width := width
    rectlist[rectlist.count].y := currentline
    rectlist[rectlist.count].height := 1
If the group of black pixels on the current scanline has the same leftmost pixel and width as a group on the previous scanline (you'll know they're vertically contiguous because the starting scanline of the rect in the list plus its height will equal the current scanline) then rememberrect bumps the height of the found and remembered rect by 1. Otherwise, remember the new rect with initial height 1.
After the last scanline you'll have a long list of rect candidates, many of them only 1 pixel high. Delete or ignore any rects in the list that aren't high enough. To avoid growing a long list of futile candidates: at the start of each scanline mark all rects found so far as "finished". If rememberrect grows an existing rect or adds a new rect, mark that rect as "grown". At the end of each scanline, any rect still marked as finished that isn't tall enough can be deleted from the list.
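For reference, here is a compact C++ rendering of the scan-line idea, as a sketch: black is assumed to be a pre-thresholded bitmap, and candidates that stop growing are closed or discarded per scanline as described above.

#include <vector>

struct Found { int y, left, width, height; bool grown; };

std::vector<Found> findRects(const std::vector<std::vector<bool>>& black,
                             int minWidth, int minHeight)
{
    std::vector<Found> open, result;
    for (int s = 0; s < (int)black.size(); ++s)
    {
        for (auto& r : open) r.grown = false;
        const auto& line = black[s];
        for (int x = 0; x < (int)line.size(); )
        {
            if (!line[x]) { ++x; continue; }
            int left = x;                          // run of black pixels starts
            while (x < (int)line.size() && line[x]) ++x;
            int width = x - left;
            if (width < minWidth) continue;        // too narrow, ignore
            bool extended = false;
            for (auto& r : open)                   // same [left, width] below an open rect?
                if (r.left == left && r.width == width && r.y + r.height == s)
                { ++r.height; r.grown = extended = true; break; }
            if (!extended) open.push_back({ s, left, width, 1, true });
        }
        // Close candidates that did not continue on this scanline.
        for (size_t i = 0; i < open.size(); )
            if (!open[i].grown)
            {
                if (open[i].height >= minHeight) result.push_back(open[i]);
                open.erase(open.begin() + i);
            }
            else ++i;
    }
    for (const auto& r : open)                     // flush rects still open at the bottom
        if (r.height >= minHeight) result.push_back(r);
    return result;
}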
Related
I am writing code that generates the start and end points of the strokes of a picture (a raster image), to let a robot arm paint it.
I have written an algorithm, but it produces too many overlapping strokes:
https://github.com/Evrid/Painting-stroke-generation-for-robot-arm-or-CNC-machine
The input of my algorithm:
and the output (which is mirrored and re-assigned to the colors I have) with 50 ThresholdOfError (you can see the strokes are overlapping):
Things to notice are:
*The strokes need to be non-overlapping (if they overlap, there are too many strokes)
*The painting has different colors, and strokes of the same color are better drawn together
*The stroke shape is like a rectangle
*Some colored areas are disconnected, like below, only the yellow from a sunflower:
I am not sure which algorithm I should use; here are some possibilities I have thought about:
Method 1: Generate 50k (or more) large rectangles with random direction and position. If a rectangle's area overlaps the same color area and does not overlap other rectangles, keep it; then decrease the generated rectangle size, and after a couple of rounds keep decreasing again.
Method 2: Extract a certain color first, then generate large rectangles with random direction and position (we have less area and calculation time).
Method 3: Do edge detection first, then generate rectangles with their direction along the edge. If a rectangle's area overlaps the same color area and does not overlap other rectangles, keep it; then decrease the generated rectangle size, and after a couple of rounds keep decreasing again.
Method 4: Generate random circles and let the pen draw points instead (but this may result in too many points).
Any suggestions about which algorithm I should use?
I would start with:
Quantize your image to your palette
So reduce the colors to your palette first; see:
Effective gif/image color quantization?
Converting BMP image to set of instructions for a plotter?
Segment your image by similar colors
For this you can use flood fill or growth fill to create labels (region indices) in the form of ROIs;
see Fracture detection in hand using image processing
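As a concrete illustration of this step, a minimal flood-fill labeling sketch (4-connectivity; img is assumed to already hold palette indices from step 1):

#include <queue>
#include <utility>
#include <vector>

void labelRegions(const std::vector<std::vector<int>>& img,
                  std::vector<std::vector<int>>& label)
{
    int h = (int)img.size(), w = (int)img[0].size(), next = 0;
    label.assign(h, std::vector<int>(w, -1));
    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
        {
            if (label[y][x] != -1) continue;       // already part of some ROI
            int id = next++;
            std::queue<std::pair<int,int>> q;
            label[y][x] = id; q.push({ x, y });
            while (!q.empty())                     // grow the region over equal colors
            {
                auto [cx, cy] = q.front(); q.pop();
                for (int d = 0; d < 4; ++d)
                {
                    int nx = cx + dx[d], ny = cy + dy[d];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h &&
                        label[ny][nx] == -1 && img[ny][nx] == img[cy][cx])
                    { label[ny][nx] = id; q.push({ nx, ny }); }
                }
            }
        }
}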
For each ROI create an infill path with a thick brush
This is simple hatching: generate a zig-zag path with a "big" brush width along the major direction of the ROI (use either AABB, OBB, or PCA to detect the major direction, i.e. the direction in which the ROI is biggest) and just AND it with the polygon ROI.
For each ROI create an outline path with a "thin" brush
IIRC this is also called contour extraction: simply select the boundary pixels of the selected ROI.
Then you can use A* on the ROI boundary to sort the pixels into 2 halves (or more, for a complex shape with holes or thin parts): backtrack the pixels and reorder them to form a closed loop (or loops).
This will preserve details on the boundary (while using the thick-brush infill).
Something like this:
In case your colors are combinable, you can use the CMY color space and subtractive color mixing and process each C, M, Y channel separately (at most 3 overlapping strokes) to get a much better color match.
If you want much better colors you can also add dithering; however, that will slow down the painting a lot, as it requires many more path segments, and it is not optimal for a plotter with tool up/down movement (dithering is better suited to print heads or printing triggered without additional movements). To partially overcome this issue you could use partial dithering, where you specify the amount of dithering created (leading to fewer segments).
There are a lot of things you can improve/add to this, like:
remove the outline from the ROI (to limit the overlaps and prevent overpainting of details)
do all infills first and then all outlines
set the infill brush width based on ROI size
adjust the infill hatching pattern to better match your arm kinematics
order the ROIs so they are painted faster (a variation of the Traveling Salesman Problem, TSP)
infill with more than just one brush width to preserve details near borders
I suggest you use the flood fill algorithm:
Start at the top-right pixel.
Flood fill that pixel's color. https://en.wikipedia.org/wiki/Flood_fill
Fit rectangles into the filled area.
Move on to the next pixel that is not in the filled area.
When the entire picture has been covered, sort the rectangles by color.
I'm trying to come up with an algorithm to optimize the shape of a polygon (or multiple polygons) to maximize the value contained within that shape.
I have data with 3 columns:
X: the location on the x axis
Y: the location on the y axis
Value: Value of the block which can have positive and negative values.
This data is from a regular grid so the spacing between each x and y value is consistent.
I want to create a bounding polygon that maximizes the contained value, with one added condition:
There needs to be a minimum radius maintained at all points of the polygon. This means that we will either lose some positive value blocks or gain some negative value blocks.
The current algorithm I'm using does the following:
Finds the maximum block value as a starting point (or user defined)
Finds all blocks within the minimum radius and determines if it is a viable point by checking the overall value is positive
Removes all blocks in the minimum search radius from further value calculations and flags them as part of the final shape
Moves on to the next point, determined by spiraling around the original point (the center is always a grid point, so it moves by deltaX or deltaY)
This appears to be picking up some cells that aren't needed. I'm sure there are shape algorithms out there but I don't have any idea what to look up to find help.
Below is a picture that hopefully helps outline the question. Positive cells are shown in red (negative cells are not shown). The black outline shows the shape my current routine is returning. I believe the left side should be brought in more. The minimum radius is 100 m; the bottom-left black circle is approximately this size.
Right now the code is running in R but I will probably move to something else if I can get the algorithm correct.
In response to the unclear vote: the problem I am trying to solve, without the background or attempted solution, is:
"Create a bounding polygon (or polygons) around a series of points to maximize the contained value, while maintaining a minimum radius of curvature along the polygon"
Edit:
Data
I should have included some data; it can be found here.
The file is a CSV with 4 columns (X, Y, Z [not used], Value); the length is ~25k rows and the size is 800 kB.
Graphical approach
I would approach this graphically. My intuition tells me that the inside points are the ones fully covered by the circles of minimum radius r cast from all of the footprint points nearby. That means that if you cast a circle of radius r from each footprint point, then every point that is inside at least half of all the neighboring circles is inside your polygon. To be less vague: if you are deep inside the polygon, then roughly pi*r^2 such circles overlap at any pixel; if you are on the edge, you get about half of them. This is easily computable.
First I need the dataset. As you provided just a jpg file, I do not have the values, just the plot, so I handle this problem like a binary image. First I needed to recolor the image to remove the jpg color distortions. After that, this is my input:
I chose a black background to easily apply additive math on the image (I also like it more than white) and left the footprint red (maximally saturated). Now the algorithm:
create temp image
It should be the same size and cleared to black (color=0). Handle its pixels like integer counters of overlapping circles.
cast circles
For each red pixel in the source image, add +1 to each pixel inside the circle of minimal radius r around the same position, but in the temp image. The result is like this (blue are the lower bits of my pixel format):
As r I used r=24, as that is the bottom-left circle radius in your example, give or take a pixel.
select inside pixels only
So recolor the temp image: recolor all pixels with a counter < 0.5*pi*r^2 to black and the rest to red. The result is like this:
select polygon circumference points only
Just recolor all red pixels that neighbor a black pixel to some neutral color (blue) and the rest to black. Result:
Now just polygonize the result. To compare with the input image you can combine them both (I OR them together):
[Notes]
You can play with the min radius and the area threshold property to achieve different behavior. But I think this is a pretty close match to your problem.
Here is some C++ source code for this:
//picture pic0,pic1;
// pic0 - source
// pic1 - output/temp
int x,y,xx,yy;
const int r=24;                         // min radius
const int s=float(1.570796*float(r*r)); // half of min radius area
const DWORD c_foot=0x00FF0000;          // red
const DWORD c_poly=0x000000FF;          // blue

// resize and clear temp image
pic1=pic0;
pic1.clear(0);

// add min radius circle to temp around any footprint pixel found in input image
for (y=r;y<pic1.ys-r;y++)
 for (x=r;x<pic1.xs-r;x++)
  if (pic0.p[y][x].dd==c_foot)
   for (yy=-r;yy<=r;yy++)
    for (xx=-r;xx<=r;xx++)
     if ((xx*xx)+(yy*yy)<=r*r)
      pic1.p[y+yy][x+xx].dd++;
pic1.save("out0.png");

// select only pixels which are inside the footprint with min radius
// (at least half of the circle area overlaps them)
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
  if (pic1.p[y][x].dd>=s) pic1.p[y][x].dd=c_foot;
  else                    pic1.p[y][x].dd=0;
pic1.save("out1.png");

// select only polygon circumference pixels
pic1.growfill(c_foot,0,c_poly);
for (y=0;y<pic1.ys;y++)
 for (x=0;x<pic1.xs;x++)
  if (pic1.p[y][x].dd==c_foot) pic1.p[y][x].dd=0;
pic1.save("out2.png");

// combine in and out images to compare
pic1|=pic0;
pic1.save("out3.png");
I use my own picture class for images so some members are:
xs,ys size of image in pixels
p[y][x].dd is pixel at (x,y) position as 32 bit integer type
clear(color) - clears entire image
resize(xs,ys) - resizes image to new resolution
[Edit1] I had a small bug in the source code
I noticed some edges were too sharp, so I checked the code and found I had forgotten the circle condition while filling, so it filled squares instead. I repaired the source code above; I really just added the line if ((xx*xx)+(yy*yy)<=r*r). The results changed slightly, so I also updated the images with the new results.
I played with the inside-area coefficient ratio, and this one:
const int s=float(0.75*1.570796*float(r*r));
leads to an even better match for you. The smaller it is, the more the polygon can overlap outside the footprint. Result:
If the solution set must be a union of disks of given radius, I would try a greedy approach. (I suspect that the problem might be intractable - exponential running time - if you want an exact solution.)
For all pixels (your "blocks"), compute the sum of the values in the disk around it and take the one with the highest sum. Mark this pixel and adjust the sums of all the pixels that are in its disk by deducting its value, because the marked pixel has been "consumed". Then scan all pixels in contact with it by an edge or a corner, and mark the pixel with the highest sum.
Continue this process until all sums are negative. Then the sum cannot increase anymore.
For an efficient implementation, you will need to keep a list of the border pixels, i.e. the unmarked pixels that are neighbors of a marked pixel. After you have picked the border pixel with the largest sum and marked it, you remove it from the list and recompute the sums for the unmarked pixels inside its disk; you also add the unmarked pixels that touch it.
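A brute-force sketch of the initial disk-sum pass (assumptions: the block values sit in a row-major 2-D vector, and the radius r is measured in grid cells):

#include <vector>

void diskSums(const std::vector<std::vector<double>>& value,
              std::vector<std::vector<double>>& sums, int r)
{
    int h = (int)value.size(), w = (int)value[0].size();
    sums.assign(h, std::vector<double>(w, 0.0));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            for (int dy = -r; dy <= r; ++dy)
                for (int dx = -r; dx <= r; ++dx)
                {
                    int ny = y + dy, nx = x + dx;
                    if (dx * dx + dy * dy <= r * r &&      // inside the disk
                        ny >= 0 && ny < h && nx >= 0 && nx < w)
                        sums[y][x] += value[ny][nx];
                }
}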
On the picture, the pixels are marked in blue and the border pixels in green. The highlighted pixels are
the one that gets marked,
the ones for which the sum needs to be recomputed.
The computing time will be proportional to the area of the image times the area of a disk (for the initial computation of the sums), plus the area of the shape times the area of a disk (for the updates of the sums), plus the total of the lengths of the successive perimeters of the shape while it grows (to find the largest sum). [As the latter terms might be costly - on the order of the product of the area of the shape by its perimeter length -, it is advisable to use a heap data structure, which will reduce the sum of the lengths to the sum of their logarithm.]
I have pictures of goods. There is a white border around some images. The white color is not uniform and has some shades (poor quality etc.). I need to cut this border off. To remove the white color I use:
bd.threshold(bd, rect, pt, ">", threshold, color, maskColor);
Some non-transparent pixels remain after the threshold, because the threshold color is unique for every image. BitmapData.getColorBoundsRect returns a region that includes these non-transparent pixels; I need the region without these pixels (only the image). Checking each pixel is bad for big pictures. What is the most economical way to do this (find the green region in the picture below)? Sorry for my bad English, and thanks for any help.
There are four edges of the image: left, right, top, and bottom. Check each of them, starting from the edge and moving towards the inside of the image.
For example, let's take top edge (y = 0).
1. Choose any horizontal position on the edge, for example x = 10.
2. Check the pixel at (x, y).
3. If it is transparent, move the edge down: y++.
4. Go to 2 and repeat until the pixel is not transparent.
5. Choose a different horizontal position and go to 2.
Repeat several times with different x. If there are only occasional non-transparent pixels, repeating the process 5-10 times will give you a new top edge which will most probably be 100% precise. It doesn't matter if the image is big; you only check several places on each edge. Do the same for the left, right and bottom edges, then copy the image defined by these edges.
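A literal C++ transcription of the top-edge probing loop, as a sketch; isTransparent(x, y) is a hypothetical accessor into the thresholded image:

// Returns the first row treated as real image content.
int findTopEdge(int width, int height, int probes = 8)
{
    int y = 0;
    for (int i = 0; i < probes; ++i)
    {
        int x = (i + 1) * width / (probes + 1);    // a different column each pass
        while (y < height && isTransparent(x, y))  // hypothetical pixel test
            ++y;                                   // step 3: move the edge down
    }
    return y;
}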
If the edge quality is really bad then it's better to edit all images manually.
For a game, I'm drawing dense clusters of several thousand randomly-distributed circles with varying radii, defined by a sequence of (x,y,r) triples. Here's an example image consisting of 14,000 circles:
I have some dynamic effects in mind, such as merging clusters, but for this to be possible I'll need to redraw all the circles every frame.
Many (maybe 80-90%) of the circles that are drawn are covered over by subsequent draws. Therefore I suspect that with preprocessing I can significantly speed up my draw loop by eliminating covered circles. Is there an algorithm that can identify them with reasonable efficiency?
I can tolerate a fairly large number of false negatives (i.e. draw some circles that are actually covered), as long as it's not so many that drawing efficiency suffers. I can also tolerate false positives as long as they're almost positive (e.g. remove some circles that are only 99% covered). I'm also amenable to changes in the way the circles are distributed, as long as it still looks okay.
This kind of culling is essentially what hidden surface algorithms (HSAs) do - especially the variety called "object space". In your case the sorted order of the circles gives them an effective constant depth coordinate. The fact that it's constant simplifies the problem.
A classical reference on HSAs is here. I'd give it a read for ideas.
An idea inspired by this thinking is to consider each circle with a "sweep line" algorithm, say a horizontal line moving from top to bottom. The sweep line contains the set of circles that it's touching. Initialize by sorting the input list of the circles by top coordinate.
The sweep advances in "events", which are the top and bottom coordinates of each circle. When a top is reached, add the circle to the sweep. When its bottom occurs, remove it (unless it's already gone as described below). As a new circle enters the sweep, consider it against the circles already there. You can keep events in a max (y-coordinate) heap, adding them lazily as needed: the next input circle's top coordinate plus all the scan line circles' bottom coordinates.
A new circle entering the sweep can do any or all of 3 things.
Obscure circles in the sweep with greater depth. (Since we are identifying circles not to draw, the conservative side of this decision is to use the biggest included axis-aligned box (BIALB) of the new circle to record the obscured area for each existing deeper circle.)
Be obscured by other circles with lesser depth. (Here the conservative way is to use the BIALB of each other relevant circle to record the obscured area of the new circle.)
Have areas that are not obscured.
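For concreteness, the BIALB of a circle is just its inscribed axis-aligned square, so a small helper (Box is a hypothetical type) can produce it:

#include <cmath>

struct Box { float x0, y0, x1, y1; };

Box bialb(float cx, float cy, float r)
{
    float h = r / std::sqrt(2.0f); // half-side of the inscribed square
    return { cx - h, cy - h, cx + h, cy + h };
}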
The obscured area of each circle must be maintained (it will generally grow as more circles are processed) until the scan line reaches its bottom. If at any time the obscured area covers the entire circle, it can be deleted and never drawn.
The more detailed the recording of the obscured area is, the better the algorithm will work. A union of rectangular regions is one possibility (see Android's Region code for example). A single rectangle is another, though this is likely to cause many false positives.
Similarly a fast data structure for finding the possibly obscuring and obscured circles in the scan line is also needed. An interval tree containing the BIALBs is likely to be good.
Note that in practice algorithms like this only produce a win if the number of primitives is huge because fast graphics hardware is so ... fast.
Based on the example image you provided, it seems your circles have a near-constant radius. If their radius cannot be lower than a significant number of pixels, you could take advantage of the simple geometry of circles to try an image-space approach.
Imagine you divide your rendering surface into a grid of squares so that the smallest rendered circle can fit into the grid like this:
the circle radius is sqrt(10) grid units and it covers at least 21 squares, so if you mark the squares entirely overlapped by any circle as already painted, you will have eliminated a fraction 21/(10*pi) of the circle surface, that is, about 2/3.
You can get some ideas of optimal circle coverage by squares here
The culling process would look a bit like a reverse painter's algorithm:
For each circle, from closest to farthest:
    if all squares overlapped (even partially) by the circle are painted
        eliminate the circle
    else
        paint the squares totally overlapped by the circle
You could also 'cheat' by painting grid squares not entirely covered by a given circle (or eliminating circles that overflow slightly from the already painted surface), increasing the number of eliminated circles at the cost of some false positives.
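A sketch of the painting step under these assumptions (step is the side of a grid square; a square is entirely covered exactly when its farthest corner from the centre lies within the radius):

#include <algorithm>
#include <cmath>
#include <vector>

void paintCovered(std::vector<std::vector<bool>>& grid, float step,
                  float cx, float cy, float r)
{
    int x0 = std::max(0, (int)std::floor((cx - r) / step));
    int y0 = std::max(0, (int)std::floor((cy - r) / step));
    int x1 = std::min((int)grid[0].size(), (int)std::ceil((cx + r) / step));
    int y1 = std::min((int)grid.size(),    (int)std::ceil((cy + r) / step));
    for (int gy = y0; gy < y1; ++gy)
        for (int gx = x0; gx < x1; ++gx)
        {
            // Farthest corner of square (gx, gy) from the circle centre.
            float fx = std::max(std::fabs(gx * step - cx),
                                std::fabs((gx + 1) * step - cx));
            float fy = std::max(std::fabs(gy * step - cy),
                                std::fabs((gy + 1) * step - cy));
            if (fx * fx + fy * fy <= r * r) grid[gy][gx] = true;
        }
}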
You can then render the remaining circles with a Z-buffer algorithm (i.e. let the GPU do the rest of the work).
CPU-based approach
This assumes you implement the grid as a memory bitmap, with no help from the GPU.
To determine the squares to be painted, you can use precomputed patterns based on the distance of the circle center relative to the grid (the red crosses in the example images) and the actual circle radius.
If the relative variations of diameter are small enough, you can define a two dimensional table of patterns indexed by circle radius and distance of the center from the nearest grid point.
Once you've retrieved the proper pattern, you can apply it to the appropriate location by using simple symmetries.
The same principle can be used for checking if a circle fits into an already painted surface.
GPU-based approach
It's been a long time since I worked with computer graphics, but if the current state of the art allows, you could let the GPU do the drawing for you.
Painting the grid would be achieved by rendering each circle scaled to fit the grid
Checking elimination would require reading the value of all pixels covering the circle (scaled to grid dimensions).
Efficiency
There should be some sweet spot for the grid dimension. A denser grid will cover a higher percentage of the circles' surface and thus eliminate more circles (fewer false negatives), but the computation cost will grow in O(1/grid_step²).
Of course, if the rendered circles can shrink to about 1 pixel in diameter, you might as well dump the whole algorithm and let the GPU do the work. But the efficiency compared with the GPU pixel-based approach grows as the square of the grid step.
Using the grid in my example, you could probably expect about 1/3 false negatives for a completely random set of circles.
For your picture, which seems to define volumes, 2/3 of the foreground circles and (nearly) all of the background ones should be eliminated. Culling more than 80% of the circles might be worth the effort.
All this being said, it is not easy to beat a GPU in a brute-force computation contest, so I have only the vaguest idea of the actual performance gain you could expect. Could be fun to try, though.
Here's a simple algorithm off the top of my head:
Insert the N circles into a quadtree (bottom circle first)
For each pixel, use the quadtree to determine the top-most circle (if it exists)
Fill in the pixel with the color of the circle
By adding a circle, I mean adding the center of the circle to the quadtree. This creates 4 children for a leaf node. Store the circle in that leaf node (which is now no longer a leaf). Thus each non-leaf node corresponds to a circle.
To determine the top-most circle, traverse the quadtree, testing each node along the way if the pixel intersects the circle at that node. The top-most circle is the one deepest down the tree that intersects the pixel.
This should take O(M log N) time (if the circles are distributed nicely), where M is the number of pixels and N is the number of circles. The worst-case scenario is still O(MN) if the tree is degenerate.
Pseudocode:
quadtree T
for each circle c
    add(T, c)
for each pixel p
    draw color of top_circle(T, p)

def add(quadtree T, circle c)
    if leaf(T)
        append four children to T, split along center(c)
        T.circle = c
    else
        quadtree U = child of T containing center(c)
        add(U, c)

def top_circle(quadtree T, pixel p)
    c = null
    if not leaf(T)
        if intersects(T.circle, p)
            c = T.circle
        quadtree U = child of T containing p
        t = top_circle(U, p)
        if t != null
            c = t
    return c
If a circle is completely inside another circle, then it must follow that the distance between their centres plus the radius of the smaller circle is at most the radius of the larger circle (Draw it out for yourself to see!). Therefore, you can check:
float dx = topCircle.centre.x - bottomCircle.centre.x;
float dy = topCircle.centre.y - bottomCircle.centre.y;
float distanceBetweenCentres = sqrt(dx * dx + dy * dy);

if ((bottomCircle.radius + distanceBetweenCentres) <= topCircle.radius) {
    // The bottom circle is covered by the top circle.
}
To improve the speed of the computation, you can first check whether the top circle has a larger radius than the bottom circle; if it doesn't, it can't possibly cover the bottom circle. Hope that helps!
You don't mention a Z component, so I assume the circles are in Z order in your list and drawn back-to-front (i.e., painter's algorithm).
As the previous posters said, this is an occlusion culling exercise.
In addition to the object space algorithms mentioned, I'd also investigate screen-space algorithms such as Hierarchical Z-Buffer. You don't even need z values, just bitflags indicating if something is there or not.
See: http://www.gamasutra.com/view/feature/131801/occlusion_culling_algorithms.php?print=1
As part of a more complex algorithm, I need the following:
Let's say I have a circle with radius R1 drawn on a discrete grid (image) (green in the image below).
I want to draw a circle with radius R2 that is bigger than R1 by one pixel (red in the image below).
At each algorithm step I want to draw circles of increasing radius, in such a way that after each step I have a filled circle.
How can I find the points to fill at each step so that at the end of each step I have a fully filled circle?
I have thought of some circle rasterization algorithms, but those would leave gaps in the filling. Another way is to use a mathematical morphology operation like dilation, but this seems computationally expensive.
I am generally looking for a way to do this for an arbitrary shape, but initially a circle algorithm will be enough.
Your best option is to draw and fill a slightly larger red circle, and then draw and fill the green circle. Then redo this on the next iteration.
Drawing only the 1 px border is quite tricky. Your sample image is not even quite consistent: in some places a white pixel occurs diagonally to a green pixel, and in other places that pixel is red.
Edit:
borderPixels = emptySet
for each green pixel p
    for each neighbor n of p
        if n is white
            add n to borderPixels

Do whatever you like with borderPixels (such as color them red).
My current solution for a circle, based on the well-known midpoint circle algorithm:
create the set of points for 1 octant for radius R1 (light green pixels)
create the set of points for 1 octant for radius R2 (dark orange pixels)
for each row in the image, compare the X coordinates of the orange and green pixels and get the 0, 1, or however many pixels in between (light orange)
repeat for each octant (where for some octants, columns instead of rows have to be compared)
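Here is an equivalent row-span sketch in C++ (setPixel is a hypothetical plotting call): instead of literally diffing the octant point sets, it computes the R1 and R2 extents per row and fills the pixels in between, which leaves no gaps.

#include <cmath>
#include <cstdlib>

void fillRing(int cx, int cy, int r1, int r2) // assumes r2 > r1
{
    for (int y = -r2; y <= r2; ++y)
    {
        int xOuter = (int)std::floor(std::sqrt(double(r2 * r2 - y * y)));
        int xInner = (std::abs(y) <= r1)     // row may miss the inner circle
                   ? (int)std::floor(std::sqrt(double(r1 * r1 - y * y)))
                   : -1;
        for (int x = xInner + 1; x <= xOuter; ++x)
        {
            setPixel(cx + x, cy + y);             // right half of the ring
            if (x != 0) setPixel(cx - x, cy + y); // mirrored left half
        }
    }
}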
This algorithm can be applied to other types of parametric shapes (Bezier-curve-based, for example).
For non-parametric (pixel-based) shapes, use image convolution (dilation) with a centrally symmetric kernel (a circle): in other words, for each pixel in the shape, look for neighbors within a small-radius circle and add them to the set (an expensive computation).
Another option is to draw a circle/shape with a 2-pixel-wide red border, and then draw a green filled circle/shape with NO border. That should leave an approximately 1 px wide edge.
It depends on how whatever technique you use resolves lines to pixels.
Circle algorithms tend to be optimised for drawing circles; see the link here.