We have axis-aligned rects represented as { top, left, width, height: number }.
We want to test, given:
a list of axis-aligned rects rs
an axis-aligned rect r
whether r is entirely covered by the union of rs.
What's the fastest way to do this?
What I have found is that there are fast data structures for testing intersections between rs and r (e.g. https://github.com/mourner/rbush), so I can first find which rects in rs intersect r, then subtract all of those rects from r and check whether any area remains. This seems to work well when the rects in rs don't overlap much, because you don't end up with a lot of intersecting pieces.
Any better solutions?
You can think of a scanline process.
Sort all rectangles by the ordinate of the top edge. Then move from top edge to top edge, keeping a list of "active rectangles" updated (i.e. all rectangles that cross the current horizontal).
Consider the horizontal extent covered by these rectangles between two successive horizontals, and check whether they fully cover the corresponding slice of the target rectangle.
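For what it's worth, here is a minimal sketch of that band-by-band check in Python (my own code, not tied to rbush; rects use the { top, left, width, height } fields from the question, with y growing downward):

def is_covered(r, rs):
    """True if rect r is entirely covered by the union of rects in rs."""
    r_right, r_bottom = r["left"] + r["width"], r["top"] + r["height"]

    # Clip every rect of rs against r; keep only the non-empty pieces.
    clipped = []
    for s in rs:
        left = max(s["left"], r["left"])
        right = min(s["left"] + s["width"], r_right)
        top = max(s["top"], r["top"])
        bottom = min(s["top"] + s["height"], r_bottom)
        if left < right and top < bottom:
            clipped.append((top, bottom, left, right))

    # Horizontal bands: every y where some clipped rect starts or ends.
    ys = sorted({r["top"], r_bottom} | {c[0] for c in clipped} | {c[1] for c in clipped})
    for y0, y1 in zip(ys, ys[1:]):
        # x-intervals of the rects spanning this whole band, sorted by left edge.
        spans = sorted((l, rt) for t, b, l, rt in clipped if t <= y0 and b >= y1)
        reach = r["left"]
        for l, rt in spans:
            if l > reach:
                return False          # uncovered gap inside this band
            reach = max(reach, rt)
        if reach < r_right:
            return False              # band stops short of r's right edge
    return True

# Example: two rects that together cover a 10 x 10 target.
rs = [{"top": 0, "left": 0, "width": 6, "height": 10},
      {"top": 0, "left": 5, "width": 5, "height": 10}]
print(is_covered({"top": 0, "left": 0, "width": 10, "height": 10}, rs))   # True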
I have the following problem:
Let's say we have a rectangular map containing several rectangles.
I want to find all possible "maximal" rectangles which can be placed on the map without intersecting any of the other rectangles. By maximal I mean rectangles which can't be enlarged in any direction.
Does anyone know an algorithm to perform this task?
I'll suggest a solution that may not be the fastest, but it should do the job.
The idea is to place your rectangles one by one. Every time you add a rectangle to the map, check whether it intersects any of the others. If it does, break both rectangles into the corresponding new rectangles, depending on the shape of the intersection.
It's easy to see that if a new rectangle (green) completely covers the affected rectangle (red), only the new one needs to be broken apart; otherwise, both do.
By adding them one by one and breaking them apart correctly, you will end up with the full set of rectangles, which you then just iterate over to find your "maximal" ones, if that means the ones with the biggest area.
This algorithm should be roughly O(n²), where n is the number of rectangles you add.
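For reference, here's a minimal sketch (my own helper, not from the answer itself) of the core "break apart" step: subtracting one axis-aligned rectangle from another yields at most four pieces.

def subtract(a, b):
    """Subtract rect b from rect a; rects are (x0, y0, x1, y1) with x0 < x1, y0 < y1.
    Returns the (at most four) pieces of a that are not covered by b."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    # No overlap: a survives untouched.
    if bx0 >= ax1 or bx1 <= ax0 or by0 >= ay1 or by1 <= ay0:
        return [a]
    pieces = []
    if ay0 < by0:                                  # strip of a above b
        pieces.append((ax0, ay0, ax1, by0))
    if by1 < ay1:                                  # strip of a below b
        pieces.append((ax0, by1, ax1, ay1))
    mid_top, mid_bottom = max(ay0, by0), min(ay1, by1)
    if ax0 < bx0:                                  # strip of a left of b
        pieces.append((ax0, mid_top, bx0, mid_bottom))
    if bx1 < ax1:                                  # strip of a right of b
        pieces.append((bx1, mid_top, ax1, mid_bottom))
    return pieces                                  # empty list if b covers a entirely

# Example: removing a 2 x 2 square from the middle of a 4 x 4 square leaves 4 pieces.
print(subtract((0, 0, 4, 4), (1, 1, 3, 3)))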
You can do this with simple interval algebra. For illustration, I'll put some numbers on the example you gave us. I'll use the Cartesian first quadrant, origin in the lower-left corner. The entire field is then (0, 0) to (30, 20); rectangles have corners at (10, 0) to (30, 8); (0, 12) to (13, 20); (20, 15) to (25, 17).
To solve this, group rectangle edges by their orientation with respect to their rectangles; specifically, which direction they face (away from the rectangle's center). Edges of the window face inward. This gives us four sets, one for each "facing" direction. An edge is described with the common coordinate in one direction and a range in the other. I'll keep them in the same x-y notation. For instance, sorted by y-coordinate:
face_up = (0-10, 0) [bottom], (10-30, 8) [low-rt], (20-25, 17) [up-rt]
The top edge of the top rectangle coincides with the top of the window; it faces up, but has no room for a white rectangle.
Similarly, the down-facing edges are
face_down = (0-13, 12) [up-left], (20-25, 15) [up-rt], (13-30, 20) [top]
Now it's a simple matter to match each rectangle edge with the first one that blocks its open region. For instance, consider the low-rt rectangle's face_up edge, (10-30, 8). We compare it to face_down edges in order of y-coordinate, starting with y >= 8.
(0-13, 12) is the first edge with y > 8. Checking x intervals, we see that [0, 13] and [10, 30] do overlap; we have a block with corners at (10, 8) and (13, 12) to expand sideways in each direction. This results in your blue rectangle.
Similarly, we will match the (face_up) left side of the window bottom with the same face_down edge. This results in your orange rectangle. Then we match the top of the up-rt (smallest) rectangle with the window top: green rectangle.
Continuing in the opposite direction, we find that two face_down edges match existing rectangles. We also match the bottom of the up-rt rectangle with the top of the low-rt rectangle, yielding the yellow rectangle.
Repeat this process with the face_left and face_right edges to obtain the red and purple rectangles, as well as a second hit of the orange rectangle.
Is that enough of an outline for you to implement?
Sorting the edges gives us the ability to do a linear search from each edge. If I've sketched this correctly, the result is an O(N) matching pass for N rectangles, although the initial edge sorting is O(N log N).
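As a minimal sketch of just the matching step (my own code and representation; edges are (x0, x1, y) tuples corresponding to the lists above):

def first_blocker(up_edge, face_down):
    """Given an up-facing edge and down-facing edges sorted by ascending y,
    return the first edge at or above it whose x-range overlaps, else None."""
    x0, x1, y = up_edge
    for dx0, dx1, dy in face_down:
        if dy >= y and dx0 < x1 and dx1 > x0:
            return (dx0, dx1, dy)
    return None

# The example's face_down edges, sorted by y: up-left bottom, up-rt bottom, window top.
face_down = [(0, 13, 12), (20, 25, 15), (13, 30, 20)]
# The low-rt rectangle's up-facing edge (10-30, 8) is blocked first by (0-13, 12),
# giving an empty block over x in [10, 13] between y = 8 and y = 12 (the blue rectangle).
print(first_blocker((10, 30, 8), face_down))   # (0, 13, 12)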
I am writing a small image analysis program just for fun. Image analysis has always fascinated me. I am trying to locate regions on a scanned document. These regions are going to be marked by clearly defined filled black rectangles (pre-printed on the page).
My problem is locating the rectangles. I know SIFT/SURF find "features", but I am trying to find something specific. Here is what I was thinking of doing; I am not sure if this is the "right" way or whether there is a better idea.
First, using some library, I will turn the image into greyscale, perhaps a PGM since that is what I'm used to working with in school. For the analysis I first plan to run the image through a state-of-the-art deskew algorithm in OpenCV or something else that I find. Once I have my deskewed image I will threshold it at some pretty high threshold; the rectangles are going to be solid black, hence the high threshold. I will then experimentally determine a good-sized black rectangle to slide across the image. While sliding my rectangle across the image I will find the areas where the greatest percentage of pixels are black. I will have a cutoff, say 90%: if 90% of the pixels contained in my window are black, I must have found a rectangle. My reasoning is that a true black rectangle slid over something that is "pretty much" a black rectangle is most likely a black rectangle. Since I deskewed the image I can assume that the rectangles are straight up and down "enough". I can then track the (x, y) offsets where the rectangles are found on the image and mark them.
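Roughly, I imagine the sliding-window check looking something like this (just a sketch with OpenCV/NumPy in Python; the file name, window size, and 90% cutoff are placeholder values):

import cv2
import numpy as np

img = cv2.imread("scanned_page.png", cv2.IMREAD_GRAYSCALE)      # placeholder input
# ... deskew the image here ...
_, black = cv2.threshold(img, 200, 1, cv2.THRESH_BINARY_INV)    # 1 where "black enough"

win_w, win_h = 120, 40        # experimentally chosen rectangle size
cutoff = 0.90                 # required fraction of black pixels in the window

# Mean of the 0/1 mask over each window = fraction of black pixels under the window.
frac = cv2.boxFilter(black.astype(np.float32), -1, (win_w, win_h))
ys, xs = np.where(frac >= cutoff)
print(list(zip(xs, ys))[:10])  # (x, y) centres of candidate windows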
Would anyone suggest a better approach?
There are many approaches that might work. (One can easily come up with 10 or more approaches.)
Idea #1 - Canny edge detection; find rectangle fit to contours
cv::Canny
cv::findContours
cv::minAreaRect, or
cv::boundingRect might also work, if the deskewing works as advertised.
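A minimal sketch of Idea #1 using the Python bindings (file name, thresholds, and the area cut-off are placeholders; the cv2.findContours return shape below is the OpenCV 4 one):

import cv2

img = cv2.imread("deskewed_page.png", cv2.IMREAD_GRAYSCALE)     # placeholder input
edges = cv2.Canny(img, 50, 150)
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

for c in contours:
    x, y, w, h = cv2.boundingRect(c)     # cv2.minAreaRect(c) if deskewing isn't trusted
    if w * h > 1000:                     # ignore small noise contours
        print("candidate rectangle:", x, y, w, h)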
Idea #2 - Find all lines using the Hough transform; iterate through all regions created from line intersections.
Idea #3 - (Improvement on #2) Restrict the Hough transform to horizontal and vertical lines by pre-processing.
Idea #4 - Compute Horizontal and Vertical profiles on the entire image; find dips; iterate through all candidate regions.
This idea is based on the assumption that the black rectangles are large enough that they leave a "depression" in both the horizontal and vertical projection profiles, which would be detectable despite other noise objects in the image.
cv::reduce
With dim = 0 or 1 for reducing to a row or column respectively,
With CV_REDUCE_AVG flag
Apply cv::threshold to the horizontal and vertical projection profiles, separately.
For each profile now thresholded into zero/non-zero, find runs of zeroes. These are the possible row ranges and column ranges that could contain the dark rectangles.
For each combination of candidate row range and column range, calculate the average pixel value to decide if it is a true dark rectangle.
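A minimal sketch of Idea #4 in Python (threshold values are placeholders; low profile values mark the "dips" left by dark bands):

import cv2

img = cv2.imread("deskewed_page.png", cv2.IMREAD_GRAYSCALE)     # placeholder input

# cv::reduce with the AVG flag: one mean value per row (dim = 1) / per column (dim = 0).
row_profile = cv2.reduce(img, 1, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
col_profile = cv2.reduce(img, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()

def dark_runs(profile, thresh=100):
    """Group the indices where the profile dips below thresh into (start, end) runs."""
    runs, start = [], None
    for i, v in enumerate(profile):
        if v < thresh and start is None:
            start = i
        elif v >= thresh and start is not None:
            runs.append((start, i))
            start = None
    if start is not None:
        runs.append((start, len(profile)))
    return runs

# Each (row range, column range) pair is only a candidate; confirm with its mean value.
for r0, r1 in dark_runs(row_profile):
    for c0, c1 in dark_runs(col_profile):
        if img[r0:r1, c0:c1].mean() < 50:
            print("dark rectangle: rows", (r0, r1), "cols", (c0, c1))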
Idea #5 - Use integral image (summed area table) to quickly calculate the average pixel value in arbitrary rectangles
cv::integral
To compute the sum (and average) of a rectangle from an integral image, see the Wikipedia article on Summed Area Table
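A minimal sketch of Idea #5 in Python (cv2.integral returns an (h+1) x (w+1) table, so the four-corner arithmetic below is the standard summed-area-table formula; coordinates are placeholders):

import cv2

img = cv2.imread("deskewed_page.png", cv2.IMREAD_GRAYSCALE)     # placeholder input
ii = cv2.integral(img)                                          # shape (h + 1, w + 1)

def rect_mean(x0, y0, x1, y1):
    """Average pixel value of img[y0:y1, x0:x1] in O(1) time."""
    total = ii[y1, x1] - ii[y0, x1] - ii[y1, x0] + ii[y0, x0]
    return total / ((x1 - x0) * (y1 - y0))

# A candidate region is "dark" if its mean is low, e.g.:
print(rect_mean(100, 50, 220, 90) < 50)     # placeholder coordinates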
Preprocessing idea - use morphological dilation (or erosion) to "erase" things that cannot be the large continuous black box.
Preprocessing idea - use pre-processing to enhance horizontal and vertical edges; suppress edges in other directions.
I don't know if it is a better approach, but the first thing that comes to mind would be a scan-line solution (assuming black or white pixels): I'd check each scanline from top to bottom, and in each scanline check each pixel from left to right. A "first" black pixel would be a possible upper-left corner of a rect. If there were enough following contiguous black pixels on the line to meet my desired minimum width, keep the [left, width] in a list of possible rects. Find all possible rect starts and widths on the line.
For a rect to stay in the list and grow in height, the next scanline would have to have the same [left, width] occurrence, otherwise the rect is finished (if its height meets my desired minimum height) or discarded or ignored as too short in height.
You can easily add logic for situations like two rectangles too close to one another vertically or horizontally. Overlapping rectangles would be trickier but still possible to detect with added code.
Here's some pseudocode:
for s := 1 to scanlinecount do
begin
  pixel := 1
  while pixel <= scanlinewidth do
    if black(s, pixel) then              // possible rect
    begin
      left := pixel
      repeat
        inc(pixel)
      until (pixel > scanlinewidth) or white(s, pixel)
      width := pixel - left
      if width >= MINWIDTH then          // wide enough
        rememberrect(s, left, width)     // bumps height if already in list
    end
    else
      inc(pixel)
end
Your list of found rects stores the starting scanline, leftmost pixel, width, and height for each rect found. The "rememberrect" routine checks each rect in the list:
rememberrect(currentline, left, width):
  for r := 1 to rectlist.count do
    if rectlist[r].left = left
       & rectlist[r].width = width
       & rectlist[r].y + rectlist[r].height = currentline then
    begin                                // found rect continuing on scanline
      inc(rectlist[r].height)
      exit
    end
  inc(rectlist.count)                    // add new rect to list
  rectlist[rectlist.count].left := left
  rectlist[rectlist.count].width := width
  rectlist[rectlist.count].y := currentline
  rectlist[rectlist.count].height := 1
If the group of black pixels on the current scanline has the same leftmost pixel and width as a group on the previous scanline (you'll know they're vertically contiguous because the starting scanline of the rect in the list plus its height will equal the current scanline) then rememberrect bumps the height of the found and remembered rect by 1. Otherwise, remember the new rect with initial height 1.
After the last scanline you'll have a long list of rect candidates, many of them only 1 pixel high. Delete or ignore any rects in the list that aren't high enough. To avoid growing a long list of futile candidates: at the start of each scanline mark all rects found so far as "finished". If rememberrect grows an existing rect or adds a new rect, mark that rect as "grown". At the end of each scanline, any rect still marked as finished that isn't tall enough can be deleted from the list.
For a game, I'm drawing dense clusters of several thousand randomly-distributed circles with varying radii, defined by a sequence of (x,y,r) triples. Here's an example image consisting of 14,000 circles:
I have some dynamic effects in mind, such as merging clusters, but for this to be possible I'll need to redraw all the circles every frame.
Many (maybe 80-90%) of the circles that are drawn are covered over by subsequent draws. Therefore I suspect that with preprocessing I can significantly speed up my draw loop by eliminating covered circles. Is there an algorithm that can identify them with reasonable efficiency?
I can tolerate a fairly large number of false negatives (i.e. drawing some circles that are actually covered), as long as there aren't so many that drawing efficiency suffers. I can also tolerate false positives as long as they're nearly positive (e.g. removing some circles that are only 99% covered). I'm also amenable to changes in the way the circles are distributed, as long as the result still looks okay.
This kind of culling is essentially what hidden surface algorithms (HSAs) do - especially the variety called "object space". In your case the sorted order of the circles gives them an effective constant depth coordinate. The fact that it's constant simplifies the problem.
A classical reference on HSA's is here. I'd give it a read for ideas.
An idea inspired by this thinking is to consider each circle with a "sweep line" algorithm, say a horizontal line moving from top to bottom. The sweep line contains the set of circles that it's touching. Initialize by sorting the input list of the circles by top coordinate.
The sweep advances in "events", which are the top and bottom coordinates of each circle. When a top is reached, add the circle to the sweep. When its bottom occurs, remove it (unless it's already gone as described below). As a new circle enters the sweep, consider it against the circles already there. You can keep events in a max (y-coordinate) heap, adding them lazily as needed: the next input circle's top coordinate plus all the scan line circles' bottom coordinates.
A new circle entering the sweep can do any or all of 3 things.
Obscure circles in the sweep with greater depth. (Since we are identifying circles not to draw, the conservative side of this decision is to use the biggest included axis-aligned box (BIALB) of the new circle to record the obscured area for each existing deeper circle.)
Be obscured by other circles with lesser depth. (Here the conservative way is to use the BIALB of each other relevant circle to record the obscured area of the new circle.)
Have areas that are not obscured.
The obscured area of each circle must be maintained (it will generally grow as more circles are processed) until the scan line reaches its bottom. If at any time the obscured area covers the entire circle, it can be deleted and never drawn.
The more detailed the recording of the obscured area is, the better the algorithm will work. A union of rectangular regions is one possibility (see Android's Region code for example). A single rectangle is another, though this is likely to cause many false positives.
Similarly a fast data structure for finding the possibly obscuring and obscured circles in the scan line is also needed. An interval tree containing the BIALBs is likely to be good.
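For concreteness, here is a heavily simplified sketch of that sweep (my own code, not from the cited reference): it uses a plain set instead of an interval tree, and instead of accumulating an obscured-area union it only culls a circle when a single nearer circle's BIALB contains the farther circle's whole bounding box, which is conservative but misses many cases.

import heapq
import math

def cull_hidden(circles):
    """circles: list of (x, y, r); later list entries are drawn on top.
    Returns the set of indices of circles that are certainly fully covered."""
    def bialb(x, y, r):        # biggest included axis-aligned box of a circle
        h = r / math.sqrt(2)
        return (x - h, y - h, x + h, y + h)

    def bbox(x, y, r):         # axis-aligned bounding box of a circle
        return (x - r, y - r, x + r, y + r)

    def contains(outer, inner):
        return (outer[0] <= inner[0] and outer[1] <= inner[1] and
                outer[2] >= inner[2] and outer[3] >= inner[3])

    # Sweep events: a circle enters at its top coordinate and leaves at its bottom.
    events = []
    for i, (x, y, r) in enumerate(circles):
        heapq.heappush(events, (y - r, 0, i))    # enter
        heapq.heappush(events, (y + r, 1, i))    # leave
    active, hidden = set(), set()
    while events:
        _, kind, i = heapq.heappop(events)
        if kind == 1:
            active.discard(i)
            continue
        # The new circle overlaps vertically with every circle currently in the sweep.
        for j in active:
            front, back = (i, j) if i > j else (j, i)    # larger index = drawn on top
            if contains(bialb(*circles[front]), bbox(*circles[back])):
                hidden.add(back)
        active.add(i)
    return hidden

# Example: the small circle is completely inside the big one drawn after (on top of) it.
print(cull_hidden([(0.0, 0.0, 1.0), (0.0, 0.0, 10.0)]))   # {0}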
Note that in practice algorithms like this only produce a win if the number of primitives is huge because fast graphics hardware is so ... fast.
Based on the example image you provided, it seems your circles have a near-constant radius. If their radius cannot be lower than a significant number of pixels, you could take advantage of the simple geometry of circles to try an image-space approach.
Imagine you divide your rendering surface in a grid of squares so that the smallest rendered circle can fit into the grid like this:
The circle's radius is sqrt(10) grid units, and it entirely covers at least 21 squares. So if you mark the squares entirely overlapped by any circle as already painted, you will have accounted for roughly a 21/(10π) fraction of the circle's surface, which is about 2/3.
You can get some ideas of optimal circle coverage by squares here
The culling process would look a bit like a reverse-painter algorithm:
for each circle, from closest to farthest:
    if all squares overlapped (even partially) by the circle are painted:
        eliminate the circle
    else:
        paint the squares totally overlapped by the circle
You could also 'cheat' by painting grid squares not entirely covered by a given circle (or eliminating circles that overflow slightly from the already painted surface), increasing the number of eliminated circles at the cost of some false positives.
You can then render the remaining circles with a Z-buffer algorithm (i.e. let the GPU do the rest of the work).
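Here's a minimal sketch of that culling pass in Python (my own code; the grid step and the (x, y, r) circle format are assumptions):

import math

def cull_with_grid(circles, step):
    """circles: list of (x, y, r), ordered closest (drawn on top) to farthest.
    Returns the circles that survive the culling pass."""
    painted = set()          # grid squares already known to be fully covered
    survivors = []

    def cells_between(lo, hi):
        return range(int(math.floor(lo / step)), int(math.ceil(hi / step)))

    for x, y, r in circles:                          # closest to farthest
        overlapped, fully_covered = [], []
        for i in cells_between(x - r, x + r):
            for j in cells_between(y - r, y + r):
                cx0, cy0 = i * step, j * step        # square corners
                cx1, cy1 = cx0 + step, cy0 + step
                # Distance from the circle centre to the nearest / farthest point of the square.
                near = math.hypot(max(cx0 - x, 0, x - cx1), max(cy0 - y, 0, y - cy1))
                far = math.hypot(max(abs(cx0 - x), abs(cx1 - x)),
                                 max(abs(cy0 - y), abs(cy1 - y)))
                if near < r:                         # square overlaps the circle
                    overlapped.append((i, j))
                    if far <= r:                     # square entirely inside the circle
                        fully_covered.append((i, j))
        if all(c in painted for c in overlapped):
            continue                                 # circle is hidden: eliminate it
        survivors.append((x, y, r))
        painted.update(fully_covered)
    return survivors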
CPU-based approach
This assumes you implement the grid as a memory bitmap, with no help from the GPU.
To determine the squares to be painted, you can use precomputed patterns based on the distance of the circle center relative to the grid (the red crosses in the example images) and the actual circle radius.
If the relative variations of diameter are small enough, you can define a two dimensional table of patterns indexed by circle radius and distance of the center from the nearest grid point.
Once you've retrieved the proper pattern, you can apply it to the appropriate location by using simple symmetries.
The same principle can be used for checking if a circle fits into an already painted surface.
GPU-based approach
It's been a long time since I worked with computer graphics, but if the current state of the art allows, you could let the GPU do the drawing for you.
Painting the grid would be achieved by rendering each circle scaled to fit the grid
Checking elimination would require reading the value of all pixels containing the circle (scaled to grid dimensions).
Efficiency
There should be some sweet spot for the grid dimension. A denser grid will cover a higher percentage of the circles' surface and thus eliminate more circles (fewer false negatives), but the computation cost grows as O(1/grid_step²).
Of course, if the rendered circles can shrink to about 1 pixel diameter, you could as well dump the whole algorithm and let the GPU do the work. But the efficiency compared with the GPU pixel-based approach grows as the square of the grid step.
Using the grid in my example, you could probably expect about 1/3 false negatives for a completely random set of circles.
For your picture, which seems to define volumes, 2/3 of the foreground circles and (nearly) all of the backward ones should be eliminated. Culling more than 80% of the circles might be worth the effort.
All this being said, it is not easy to beat a GPU in a brute-force computation contest, so I have only the vaguest idea of the actual performance gain you could expect. Could be fun to try, though.
Here's a simple algorithm off the top of my head:
Insert the N circles into a quadtree (bottom circle first)
For each pixel, use the quadtree to determine the top-most circle (if it exists)
Fill in the pixel with the color of the circle
By adding a circle, I mean adding the center of the circle to the quadtree. This appends four children to a leaf node and stores the circle in that node (which is now no longer a leaf). Thus each non-leaf node corresponds to a circle.
To determine the top-most circle, traverse the quadtree, testing each node along the way if the pixel intersects the circle at that node. The top-most circle is the one deepest down the tree that intersects the pixel.
This should take O(M log N) time (if the circles are distributed nicely), where M is the number of pixels and N is the number of circles. The worst-case scenario is still O(MN) if the tree is degenerate.
Pseudocode:
quadtree T
for each circle c:
    add(T, c)
for each pixel p:
    draw color of top_circle(T, p)

def add(quadtree T, circle c):
    if leaf(T):
        append four children to T, split along center(c)
        T.circle = c
    else:
        quadtree U = child of T containing center(c)
        add(U, c)

def top_circle(quadtree T, pixel p):
    if leaf(T):
        return null
    c = null
    if intersects(T.circle, p):
        c = T.circle
    quadtree U = child of T containing p
    deeper = top_circle(U, p)
    return deeper if deeper is not null else c
If a circle is completely inside another circle, then it must follow that the distance between their centres plus the radius of the smaller circle is at most the radius of the larger circle (Draw it out for yourself to see!). Therefore, you can check:
float dx = topCircle.centre.x - bottomCircle.centre.x;
float dy = topCircle.centre.y - bottomCircle.centre.y;
float distanceBetweenCentres = sqrt(dx * dx + dy * dy);

if ((bottomCircle.radius + distanceBetweenCentres) <= topCircle.radius) {
    // The bottom circle is covered by the top circle.
}
To speed up the computation, you can first check whether the top circle has a larger radius than the bottom circle; if it doesn't, it can't possibly cover the bottom circle. Hope that helps!
You don't mention a Z component, so I assume they are in Z order in your list and drawn back-to-front (i.e. painter's algorithm).
As the previous posters said, this is an occlusion culling exercise.
In addition to the object space algorithms mentioned, I'd also investigate screen-space algorithms such as Hierarchical Z-Buffer. You don't even need z values, just bitflags indicating if something is there or not.
See: http://www.gamasutra.com/view/feature/131801/occlusion_culling_algorithms.php?print=1
Say I want to cut some rectangular holes out of a rectangular board. For example,
situation 1, holes intersect:
a board x with holes 0, 1, 2 in it; rectangles 0 and 1 intersect.
xxxxxxxxxxx
xxxxxx222xx
x000xx222xx
x00011222xx
x00011xxxxx
xxx111xxxxx
xxxxxxxxxxx
or simpler, situation 2, no holes intersect:
xxxxxxxxxxx
xxxxx2222xx
x00xx2222xx
x00xx2222xx
x00x111xxxx
xxxx111xxxx
xxxxxxxxxxx
The latter is more like 'invert a set of rectangles within a big rectangle'.
My Question is: How to calculate a set of sub rectangles which exactly cover the board x?
Input: a larger rect, and a set of hole rects
Output: a set of sub rects cover exactly the larger rect with holes
the rect struct may look like CGRect below; the coordinate type is float:
typedef struct {float x; float y;} CGPoint;
typedef struct {float width; float height;} CGSize;
typedef struct {CGPoint origin; CGSize size;} CGRect;
Any great ideas?
There is a missing constraint in your question: how do you want to optimize the result? Are you trying to produce as few resulting rectangles as possible?
Are the edges on a grid?
I would do it like this:
Start with one big rectangle and two methods, one for splitting rectangles and one for joining them.
Split the main rectangle in two for each side of the hole rectangles: extend each border as far as possible and split the plane along that line. Once you've done that, you end up with lots of small rectangles. I guess you want to have as few rectangles as possible.
Pass one - remove holes: for each little rectangle, if it falls inside one of the original hole rectangles, discard it.
Pass two - join the remaining rectangles: for each rectangle, if it can form a rectangle with a neighbor, join them.
Pass two is tricky; there are tons of optimizations to do there.
A simple optimization is to alternate joining vertically and horizontally. This way you will end up with bigger rectangles.
Edit:
You can dramatically speed up pass two if you build a BSP tree during pass one: each time you split, create a new node whose two leaves are the child rectangles. This makes it much faster to find the neighbors in pass two.
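Here's a minimal sketch of the split-and-discard idea in Python (my own code; it splits along every hole edge extended across the board, discards the cells that fall inside a hole, and greedily joins horizontal neighbours, without the BSP speed-up):

def invert_holes(board, holes):
    """board, holes: axis-aligned rects as (x0, y0, x1, y1).
    Returns rects exactly covering board minus the union of the holes."""
    bx0, by0, bx1, by1 = board
    # Split the board along every hole edge, extended across the whole board.
    xs = sorted({bx0, bx1} | {x for h in holes for x in (h[0], h[2]) if bx0 < x < bx1})
    ys = sorted({by0, by1} | {y for h in holes for y in (h[1], h[3]) if by0 < y < by1})

    def in_hole(x0, y0, x1, y1):
        return any(h[0] <= x0 and h[1] <= y0 and x1 <= h[2] and y1 <= h[3] for h in holes)

    # Pass one: drop cells inside a hole; pass two (simplified): join horizontal runs.
    result = []
    for y0, y1 in zip(ys, ys[1:]):
        run = None
        for x0, x1 in zip(xs, xs[1:]):
            if in_hole(x0, y0, x1, y1):
                if run:
                    result.append(run)
                run = None
            else:
                run = (run[0], y0, x1, y1) if run else (x0, y0, x1, y1)
        if run:
            result.append(run)
    return result

# Example: a 10 x 7 board with one 3 x 2 hole.
print(invert_holes((0, 0, 10, 7), [(2, 2, 5, 4)]))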
Given a 2D area containing polygons, I need to find the polygons that are visible along a perpendicular line of sight from a given line segment lying within that area.
Further, what optimizations are possible when the polygons have only vertical and horizontal edges?
I'd suggest the following ...
Rotate the problem so your 'line of sight' segment is aligned to the x axis.
Find the (axis aligned) bounding rectangle (BR) of each polygon.
Sort the polygons using the Y coordinate of the bottom edge of each BR
Create a clipping 'range buffer' to mark the portions of the viewing segment that will no longer be visible.
For each polygon C (current) in the sorted list do ...
Use C's left and right bounds as its initial clipping range.
Trim C's clipping range with the range already marked as clipped in the 'range buffer'.
Now, for each subsequent polygon S of a similar depth (i.e. where S's BR bottom edge starts below C's BR top edge) ...
loop to the next S if it doesn't overlap horizontally with C
determine whether S overlaps from the left or the right (e.g. by comparing the BR horizontal midpoints of S and C). If S overlaps from the right and S's left-most vertex is below C's right-most vertex, then truncate C's clipping range accordingly. (Likewise if S overlaps from the left.)
If the residual clipping range isn't empty, then at least part of C is visible from your viewing segment. Now add C's residual clipping range to the clipping 'range buffer'.
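As a minimal sketch of the 'range buffer' bookkeeping this relies on (my own helper; it just maintains a union of occluded x-intervals along the rotated viewing segment and reports which parts of a candidate range are still visible):

class RangeBuffer:
    """Union of occluded x-intervals along the (rotated) viewing segment."""
    def __init__(self):
        self.occluded = []                    # disjoint, sorted (x0, x1) intervals

    def visible_parts(self, x0, x1):
        """Return the sub-intervals of [x0, x1] not yet occluded."""
        parts, cur = [], x0
        for a, b in self.occluded:
            if b <= cur or a >= x1:
                continue
            if a > cur:
                parts.append((cur, a))
            cur = max(cur, b)
        if cur < x1:
            parts.append((cur, x1))
        return parts

    def add(self, x0, x1):
        """Mark [x0, x1] as occluded, merging with existing intervals."""
        merged = []
        for a, b in self.occluded:
            if b < x0 or a > x1:
                merged.append((a, b))
            else:
                x0, x1 = min(x0, a), max(x1, b)
        merged.append((x0, x1))
        self.occluded = sorted(merged)

# Usage, following the steps above: trim polygon C's initial clipping range with
# visible_parts(...); if anything survives the neighbour checks, C is at least
# partly visible, and its residual range is then marked with add(...).
buffer = RangeBuffer()
print(buffer.visible_parts(0.0, 5.0))   # [(0.0, 5.0)] -> fully visible
buffer.add(1.0, 3.0)
print(buffer.visible_parts(2.0, 4.0))   # [(3.0, 4.0)]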