Handling selection of shapes on a board - algorithm

I have a board (a canvas) with several shapes drawn on it: triangles, circles, rectangles. Each shape is contained inside its own bounding rectangle.
"The circle will be inside a rectangle"
Suppose I put two circles A and B on the board, where A is on top of B and their areas partially overlap.
If I click inside A's container box but outside the actual circle A, I won't select circle A; however, that click is also prevented from selecting B, since A's container overlaps B's container and sits above it.
In an event-based framework, I guess the event goes from the child to the parent, not to siblings.
So my choice was to check all shape containers that have some area at the clicked point, ordered by z-index, and then, for each container, check whether the shape inside it actually collides with the point.
It does not seem super efficient, but is there any other way?
 ---------
|    A    |
|     ----|-----
|    |    |     |
 ----|----      |
     |    B     |
      ----------

You're handling it about as well as it can be handled - windowing systems generally obey Z order (layers).
This will be better in the long run anyway, especially if you want to be able to select multiple items by drawing a selection box around them.
There are algorithms for finding whether rectangles overlap by reducing them to 1D representations (intervals) on both the x and y axes. You can do the same thing, and then compare your point against those intervals to see which objects your point overlaps:
Algorithm to detect intersection of two rectangles?
Just treat your point selection (or rectangle selection if you draw a bounding box to select multiple items) as another rectangle to be compared as overlapping to the others.
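For the click-hit case specifically, a minimal sketch of that scheme (Python; bounds and hit_test are hypothetical names standing in for your container box and your exact per-shape test, not anything from a particular framework):

def pick(shapes, x, y):
    # Walk shapes from topmost to bottommost z-index.
    for shape in sorted(shapes, key=lambda s: s.z, reverse=True):
        left, top, right, bottom = shape.bounds
        if left <= x <= right and top <= y <= bottom:    # cheap container-box test first
            if shape.hit_test(x, y):                     # exact test (circle, triangle, ...)
                return shape                             # first exact hit wins
    return None                                          # the click fell through every container

The same loop extends naturally to rectangle selection: replace the point tests with rectangle-overlap tests.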
-Adam

There are tricks you can play if you really need speed. For example:
If you are working with a deep palette you can use the low-order bits of the color to tag the objects. Then peeking at the pixel gives you the object, or at least lets you quickly and drastically cull the list.
Even at low bit depth, if the objects are all monochrome, you can use the whole color value.
If you're in low enough resolution you can keep an array that says what object owns what pixel.
At higher res you can do the same but use RLE to keep the size down (also look into quad trees)
And so forth...
If it's just simple implementation you're after, one quick trick is to record the X & Y, repaint the screen, and note which object paints that pixel.
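As a rough, framework-neutral illustration of the low-resolution ownership-array idea above (Python; every name here is illustrative):

class OwnerMap:
    # owner[y][x] stores the id of the topmost object that painted that pixel, or None.
    def __init__(self, width, height):
        self.owner = [[None] * width for _ in range(height)]

    def paint(self, obj_id, pixels):
        # Call while drawing each object, back to front, with the pixels it covers.
        for (x, y) in pixels:
            self.owner[y][x] = obj_id

    def pick(self, x, y):
        return self.owner[y][x]          # O(1) hit test per click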
-- MarkusQ

Related

Algorithm for calculating all boxes selected by a rectangular selection

Suppose you've got a large number of boxes drawn, and the user can draw a rectangular area over them.
While I'll be implementing it inside a browser, let's abstract it away and say we've got the coordinates of every point of every rectangle.
What are the most efficient data structures and algorithms here, given I want to check which boxes a) intersect b) are contained by the selection?
My current idea is to:
Sort all boxes by x
Via binsearch, check which boxes overlap x-wise with the selection area; then, for every x-wise overlapping box, check if it overlaps y-wise as well.
or
Sort all boxes by x and y, each in separate array
Via binsearch, first find all x-overlapping boxes, then all y-overlapping boxes, then check which boxes are in both sets,
... though I'm pretty sure there's some well-known algorithm for such a problem.
I suppose by selected via some rectangle you mean either intersects some rectangle or is contained in some rectangle. If the "drawn boxes" are of fixed position, one approach which comes to mind is binary space partition. Roughly speaking, an (ideally balanced) binary space partition tree could be generated for the "drawn boxes". If the selection rectangle is positioned, the positions of its corners would be matched against the binary space partition tree, and large halfspaces could be excluded from explicit checking for intersection.
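Whichever index you use, the per-box tests it accelerates are just the axis-aligned intersect/contain predicates. A minimal sketch of those, plus the brute-force query they plug into (Python; boxes assumed to be (x1, y1, x2, y2) tuples with x1 <= x2 and y1 <= y2):

def intersects(a, b):
    # Axis-aligned boxes overlap iff they overlap on both axes.
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def contains(outer, inner):
    # 'inner' lies completely inside 'outer'.
    return (outer[0] <= inner[0] and inner[2] <= outer[2] and
            outer[1] <= inner[1] and inner[3] <= outer[3])

def select(boxes, selection):
    hits = [b for b in boxes if intersects(b, selection)]
    contained = [b for b in hits if contains(selection, b)]
    return hits, contained

A BSP tree, interval tree, or the sort-and-binary-search idea from the question only narrows down which boxes these predicates are run against.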

Check if a point lies within an axis-aligned rectangle efficiently as possible, including the edge?

I am working on an interactive web application, and I'm currently implementing a multi-select feature similar to the way Windows allows you to select multiple desktop icons by dragging a rectangle.
Due to limitations of the library I'm required to use, implementing this has already become quite resource intensive:
On initial click, store the position of the mouse cursor.
On each pixel that the mouse cursor moves, perform the following:
Destroy the previous selection rectangle, if it exists, so it doesn't appear on the screen anymore.
Calculate the width and height of the new selection rectangle using the original cursor position and the current cursor position.
Create a new selection rectangle using the original cursor position, the width and the height
Display this rectangle on the screen
As you can see, there are quite a few things happening every time the cursor moves a single pixel. I've looked into this as much as I can and there's no way I can make it any more efficient or any faster.
My next step is actually selecting the objects on the screen when the selection rectangle moves over them. I need to implement this algorithm myself so I have the freedom to make it as efficient/fast as possible. What I need to do is iterate through the objects on the screen and check each one to see if it lies in the rectangle, so this loop is going to consume even more resources, and the check itself needs to be done as efficiently as possible.
Each object that can be selected can be represented by a single point, P(x, y).
How can I check if P(x, y) is within the rectangles I create in the fastest/most efficient way?
Here's the relevant information:
There can be an arbitrary number of selectable objects on the screen at any one time
The selection rectangles will always be axis-aligned
The information I have about each rectangle is its origin point, its height, and its width.
How can I achieve what I need to do as fast as possible?
Checking whether point P lies inside rectangle R is simple and fast
(in coordinate system with origin in the top left corner)
(P.X >= R.Left) and (P.X <= R.Right) and (P.Y >= R.Top) and (P.Y <= R.Bottom)
(precalculate Right and Bottom coordinates of rectangle)
Perhaps you could accelerate the overall algorithm if the objects fulfill some conditions that allow you to avoid checking all of them at every step.
Example: sort the object list by X coordinate and check only those objects that lie in the Left..Right range.
More advanced approach: organize objects in some space-partitioning data structure like kd-tree and execute range search very fast
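A small Python sketch of that combination (the naive point-in-rect test plus the sort-by-X candidate filter); the coordinate convention matches the answer, origin in the top-left corner, so Top <= Bottom:

import bisect

def point_in_rect(p, left, top, right, bottom):
    return left <= p[0] <= right and top <= p[1] <= bottom

def select(points_by_x, left, top, right, bottom):
    # points_by_x is the object list kept sorted by X (sort once, not per query;
    # xs could likewise be cached instead of rebuilt here).
    xs = [p[0] for p in points_by_x]
    lo = bisect.bisect_left(xs, left)
    hi = bisect.bisect_right(xs, right)
    # Only candidates in the Left..Right range get the full test.
    return [p for p in points_by_x[lo:hi]
            if point_in_rect(p, left, top, right, bottom)]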
You can iterate through every object on screen and check whether it lies in the rectangle in a Cartesian coordinate system using the following condition:
p.x >= rect.left && p.x <= rect.right && p.y <= rect.top && p.y >= rect.bottom
If you are going to have no more than about 1000 points on screen, just use the naive O(n) method of iterating through each point. If you are completely sure that you need to optimize this further, read on.
Depending on the frequency of updating the points and number of points being updated each frame, you may want to use a different method potentially involving a data structure like Range Trees, or settle for the naive O(n) method.
If the points aren't going to move around much and are sparse (i.e. far apart from each other), you can use a Range Tree or similar for O(log n) checks. Bear in mind though that updating such a spatial partitioning structure is resource intensive, and if you have a lot of points that are going to be moving around quite a bit, you may want to look at something else.
If a few points are going to be moving around over large distances, you may want to look at partitioning the screen into a grid of "buckets", and check only those buckets that are contained by the rectangle. Whenever a point moves from one bucket to another, the grid will have to update the affected buckets.
If memory is a constraint, you may want to look at using a modified Quad Tree which is limited by tree depth instead of bucket size, if the grid approach is not efficient enough.
If you have a lot of points moving around a lot every frame, I think you may be better off with the grid approach or just with the naive O(n) approach. Experiment and choose the approach that best suits your problem.
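If the bucket-grid route sounds right, a rough sketch of what it might look like (Python; CELL and all the names here are illustrative, not from any particular library):

from collections import defaultdict

CELL = 64    # bucket size in pixels

def cell_of(x, y):
    return (int(x) // CELL, int(y) // CELL)

class Grid:
    def __init__(self):
        self.buckets = defaultdict(set)        # (cx, cy) -> set of objects

    def insert(self, obj, x, y):
        self.buckets[cell_of(x, y)].add(obj)

    def move(self, obj, old_xy, new_xy):
        old_c, new_c = cell_of(*old_xy), cell_of(*new_xy)
        if old_c != new_c:                     # only update when the bucket changes
            self.buckets[old_c].discard(obj)
            self.buckets[new_c].add(obj)

    def query(self, left, top, right, bottom):
        # Visit only the buckets the selection rectangle touches; points found in
        # partially covered buckets still need the exact point-in-rect check.
        hits = set()
        for cx in range(int(left) // CELL, int(right) // CELL + 1):
            for cy in range(int(top) // CELL, int(bottom) // CELL + 1):
                hits |= self.buckets.get((cx, cy), set())
        return hits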

Circle covering algorithm with varying radii

For a game, I'm drawing dense clusters of several thousand randomly-distributed circles with varying radii, defined by a sequence of (x,y,r) triples. Here's an example image consisting of 14,000 circles:
I have some dynamic effects in mind, such as merging clusters, but for this to be possible I'll need to redraw all the circles every frame.
Many (maybe 80-90%) of the circles that are drawn are covered over by subsequent draws. Therefore I suspect that with preprocessing I can significantly speed up my draw loop by eliminating covered circles. Is there an algorithm that can identify them with reasonable efficiency?
I can tolerate a fairly large number of false negatives (i.e. draw some circles that are actually covered), as long as it's not so many that drawing efficiency suffers. I can also tolerate false positives as long as they're nearly covered anyway (e.g. remove some circles that are only 99% covered). I'm also amenable to changes in the way the circles are distributed, as long as it still looks okay.
This kind of culling is essentially what hidden surface algorithms (HSAs) do - especially the variety called "object space". In your case the sorted order of the circles gives them an effective constant depth coordinate. The fact that it's constant simplifies the problem.
A classical reference on HSA's is here. I'd give it a read for ideas.
An idea inspired by this thinking is to consider each circle with a "sweep line" algorithm, say a horizontal line moving from top to bottom. The sweep line contains the set of circles that it's touching. Initialize by sorting the input list of the circles by top coordinate.
The sweep advances in "events", which are the top and bottom coordinates of each circle. When a top is reached, add the circle to the sweep. When its bottom occurs, remove it (unless it's already gone as described below). As a new circle enters the sweep, consider it against the circles already there. You can keep events in a max (y-coordinate) heap, adding them lazily as needed: the next input circle's top coordinate plus all the scan line circles' bottom coordinates.
A new circle entering the sweep can do any or all of 3 things.
Obscure circles in the sweep with greater depth. (Since we are identifying circles not to draw, the conservative side of this decision is to use the biggest included axis-aligned box (BIALB) of the new circle to record the obscured area for each existing deeper circle.)
Be obscured by other circles with lesser depth. (Here the conservative way is to use the BIALB of each other relevant circle to record the obscured area of the new circle.)
Have areas that are not obscured.
The obscured area of each circle must be maintained (it will generally grow as more circles are processed) until the scan line reaches its bottom. If at any time the obscured area covers the entire circle, it can be deleted and never drawn.
The more detailed the recording of the obscured area is, the better the algorithm will work. A union of rectangular regions is one possibility (see Android's Region code for example). A single rectangle is another, though this is likely to cause many false positives.
Similarly a fast data structure for finding the possibly obscuring and obscured circles in the scan line is also needed. An interval tree containing the BIALBs is likely to be good.
Note that in practice algorithms like this only produce a win if the number of primitives is huge because fast graphics hardware is so ... fast.
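If it helps to see the sweep structure in code, here is a heavily simplified sketch (Python). It only culls a circle when a single later-drawn circle fully contains it, instead of maintaining the obscured-area/BIALB union described above, so it catches far fewer covered circles, but the top/bottom events and the active set are the same idea:

import math

def fully_covers(outer, inner):
    # inner is completely inside outer (circles are (x, y, r, depth) tuples).
    d = math.hypot(inner[0] - outer[0], inner[1] - outer[1])
    return d + inner[2] <= outer[2]

def cull(circles):
    # circles: list of (x, y, r) in back-to-front draw order; the list index is the depth.
    items = [(x, y, r, i) for i, (x, y, r) in enumerate(circles)]
    items.sort(key=lambda c: c[1] - c[2])                      # sweep by top coordinate
    active, dead = [], set()
    for c in items:
        top = c[1] - c[2]
        active = [a for a in active if a[1] + a[2] >= top]     # drop circles the sweep has passed
        for a in active:
            if a[3] < c[3] and fully_covers(c, a):
                dead.add(a[3])                                  # the new circle hides an older one
            elif a[3] > c[3] and fully_covers(a, c):
                dead.add(c[3])                                  # a later-drawn circle hides the new one
        active.append(c)
    return [c for i, c in enumerate(circles) if i not in dead]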
Based on the example image you provided, it seems your circles have a near-constant radius. If their radius cannot be lower than a significant number of pixels, you could take advantage of the simple geometry of circles to try an image-space approach.
Imagine you divide your rendering surface in a grid of squares so that the smallest rendered circle can fit into the grid like this:
the circle radius is sqrt(10) grid units, so its area is 10*pi (about 31.4) squares, of which at least 21 are entirely covered; if you mark the squares entirely overlapped by any circle as already painted, you will have eliminated approximately a 21/(10*pi) fraction of the circle surface, that is, about 2/3.
You can get some ideas of optimal circle coverage by squares here
The culling process would look a bit like a reverse-painter algorithm:
For each circle, from closest to farthest:
    if all squares overlapped (even partially) by the circle are painted:
        eliminate the circle
    else:
        paint the squares totally overlapped by the circle
You could also 'cheat' by painting grid squares not entirely covered by a given circle (or eliminating circles that overflow slightly from the already painted surface), increasing the number of eliminated circles at the cost of some false positives.
You can then render the remaining circles with a Z-buffer algorithm (i.e. let the GPU do the rest of the work).
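For what it's worth, a rough sketch of that culling pass (Python; circles given as (x, y, r) in grid units and listed closest first, helper names illustrative):

import math

def cells_in_range(x, y, r):
    cx0, cx1 = int(math.floor(x - r)), int(math.floor(x + r))
    cy0, cy1 = int(math.floor(y - r)), int(math.floor(y + r))
    return [(i, j) for i in range(cx0, cx1 + 1) for j in range(cy0, cy1 + 1)]

def square_fully_inside(i, j, x, y, r):
    # All four corners of grid square (i, j)..(i+1, j+1) lie inside the circle.
    return all(math.hypot(cx - x, cy - y) <= r
               for cx in (i, i + 1) for cy in (j, j + 1))

def square_touches(i, j, x, y, r):
    # Closest point of the square to the centre is within the radius.
    nx, ny = min(max(x, i), i + 1), min(max(y, j), j + 1)
    return math.hypot(nx - x, ny - y) <= r

def cull(circles_closest_first):
    painted, keep = set(), []
    for (x, y, r) in circles_closest_first:
        cells = cells_in_range(x, y, r)
        touched = [c for c in cells if square_touches(c[0], c[1], x, y, r)]
        if touched and all(c in painted for c in touched):
            continue                                     # every touched square already painted: cull
        keep.append((x, y, r))
        painted.update(c for c in cells
                       if square_fully_inside(c[0], c[1], x, y, r))
    return keep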
CPU-based approach
This assumes you implement the grid as a memory bitmap, with no help from the GPU.
To determine the squares to be painted, you can use precomputed patterns based on the distance of the circle center relative to the grid (the red crosses in the example images) and the actual circle radius.
If the relative variations of diameter are small enough, you can define a two dimensional table of patterns indexed by circle radius and distance of the center from the nearest grid point.
Once you've retrieved the proper pattern, you can apply it to the appropriate location by using simple symmetries.
The same principle can be used for checking if a circle fits into an already painted surface.
GPU-based approach
It's been a long time since I worked with computer graphics, but if the current state of the art allows, you could let the GPU do the drawing for you.
Painting the grid would be achieved by rendering each circle scaled to fit the grid
Checking elimination would require reading the value of all pixels covered by the circle (scaled to grid dimensions).
Efficiency
There should be some sweet spot for the grid dimension. A denser grid will cover a higher percentage of the circles' surface and thus eliminate more circles (fewer false negatives), but the computation cost grows as O(1/grid_step²).
Of course, if the rendered circles can shrink to about 1 pixel diameter, you could as well dump the whole algorithm and let the GPU do the work. But the efficiency compared with the GPU pixel-based approach grows as the square of the grid step.
Using the grid in my example, you could probably expect about 1/3 false negatives for a completely random set of circles.
For your picture, which seems to define volumes, 2/3 of the foreground circles and (nearly) all of the ones behind them should be eliminated. Culling more than 80% of the circles might be worth the effort.
All this being said, it is not easy to beat a GPU in a brute-force computation contest, so I have only the vaguest idea of the actual performance gain you could expect. Could be fun to try, though.
Here's a simple algorithm off the top of my head:
Insert the N circles into a quadtree (bottom circle first)
For each pixel, use the quadtree to determine the top-most circle (if it exists)
Fill in the pixel with the color of the circle
By adding a circle, I mean add the center of the circle to the quadtree. This creates 4 children to a leaf node. Store the circle in that leaf node (which is now no longer a leaf). Thus each non-leaf node corresponds to a circle.
To determine the top-most circle, traverse the quadtree, testing each node along the way if the pixel intersects the circle at that node. The top-most circle is the one deepest down the tree that intersects the pixel.
This should take O(M log N) time (if the circles are distributed nicely) where M is the number of pixels and N is the number of circles. Worst-case scenario is still O(MN) if the tree is degenerate.
Pseudocode:
quadtree T
for each circle c
    add(T, c)
for each pixel p
    draw color of top_circle(T, p)

def add(quadtree T, circle c)
    if leaf(T)
        append four children to T, split along center(c)
        T.circle = c
    else
        quadtree U = child of T containing center(c)
        add(U, c)

def top_circle(quadtree T, pixel p)
    c = null
    if not leaf(T)
        if intersects(T.circle, p)
            c = T.circle
        quadtree U = child of T containing p
        d = top_circle(U, p)
        if d is not null
            c = d
    return c
If a circle is completely inside another circle, then it must follow that the distance between their centres plus the radius of the smaller circle is at most the radius of the larger circle (Draw it out for yourself to see!). Therefore, you can check:
float dx = topCircle.centre.x - bottomCircle.centre.x;
float dy = topCircle.centre.y - bottomCircle.centre.y;
float distanceBetweenCentres = sqrt(dx * dx + dy * dy);

if ((bottomCircle.radius + distanceBetweenCentres) <= topCircle.radius) {
    // The bottom circle is covered by the top circle.
}
To speed up the computation, you can first check whether the top circle has a larger radius than the bottom circle; if it doesn't, it can't possibly cover the bottom circle. Hope that helps!
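As a side note, the same test can be done without the square root by comparing squared distances; a small Python sketch of that rearrangement (valid because the containment can only hold when the top radius is at least the bottom radius):

def bottom_covered_by_top(bx, by, br, tx, ty, tr):
    if tr < br:
        return False                               # a smaller circle can never cover a larger one
    dx, dy = tx - bx, ty - by
    # d + br <= tr  is equivalent to  d^2 <= (tr - br)^2  once tr >= br.
    return dx * dx + dy * dy <= (tr - br) * (tr - br)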
You don't mention a Z component, so I assume they are in Z order in your list and drawn back-to-front (i.e., the painter's algorithm).
As the previous posters said, this is an occlusion culling exercise.
In addition to the object space algorithms mentioned, I'd also investigate screen-space algorithms such as Hierarchical Z-Buffer. You don't even need z values, just bitflags indicating if something is there or not.
See: http://www.gamasutra.com/view/feature/131801/occlusion_culling_algorithms.php?print=1

How to detect a click on an edge of a multigraph?

I have written a win32 api-based GUI app which uses GDI+ features such as DrawCurve() and DrawLine().
This app draws lines and curves that represent a multigraph.
The data structure for the edge is simply a struct of five int's. (x1, y1, x2, y2, and id)
If there is only one edge between two vertices, a straight line segment is drawn using DrawLine().
If there is more than one edge, curves are drawn using DrawCurve() -- here I spread the straight-line edges out around the midpoint of the two vertices, turning them into curves. A point some number of pixels away from the midpoint is calculated using the normal-line equation. If more edges are added, a point two units from the midpoint is selected, then next time three units, and so on.
Now I have two questions on detecting the click on edges.
In finding straight-line edges, to minimize the search time, what should I do?
It's quite simple to check whether the clicked pixel is on a line segment, but comparing against all edges would be inefficient if the number of edges is large. It seems possible to do it in O(log n), where n is the number of edges.
EDIT: at this point the edges (class Edge) are stored in std::map that maps edge id (int)'s
to Edge objects and I'm considering declaring another container that maps pixels to edge id's.
I'm considering using binary search trees but what can be the key? Or should I use just a 2D pixel array?
Can I get the array of points used by DrawCurve()? If this is impossible, then I should re-calculate the cardinal spline, get the array of points, and check if the point clicked by the user matches any point in that array.
If you have complex shaped lines you can do as follows:
Create an internal bitmap the size of your graph and fill it with black.
When you render your graph also render to this bitmap the edges you want to have click-able, but, render them with a different color. Store these color values in a table together with the corresponding ID. The important thing here is that the colors are different (unique).
When the graph is clicked, transfer the X and Y co-ordinates to your internal bitmap and read the pixel. If non-black, look up the color value in your table and get the associated ID.
This way you don't need to worry about the shape at all, nor is there any need to use your own curve algorithm and so forth. The cost is extra memory; this may be a consideration, but unless it is a huge graph (in which case you can buffer the drawing) it is in most cases not an issue. You can render the internal bitmap in a second pass so the main graphics appear faster (as usual).
Hope this helps!
(tip: you can render the "internal" lines with a wider Pen so it gets more sensitive).
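In case a sketch helps, here is the same idea in framework-neutral form (Python; draw_edge_into stands in for whatever line/curve rasterizer you already use, here writing into a plain 2D array instead of a GDI+ bitmap, and edge.id is assumed to be your edge's integer id):

class HitMap:
    def __init__(self, width, height):
        self.w, self.h = width, height
        self.pixels = [[0] * width for _ in range(height)]    # 0 = "black", i.e. no edge

    def render(self, edges, draw_edge_into):
        for edge in edges:
            # Draw each edge (with a wide pen) using its id as the "color".
            draw_edge_into(self.pixels, edge, value=edge.id)

    def edge_at(self, x, y):
        if 0 <= x < self.w and 0 <= y < self.h and self.pixels[y][x] != 0:
            return self.pixels[y][x]
        return None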

Converting vector-contoured regions (borders) to a raster map (pixel grid)

I have a map that is cut up into a number of regions by borders (contours) like countries on a world map. Each region has a certain surface-cover class S (e.g. 0 for water, 0.03 for grass...). The borders are defined by:
what value of S is on either side of it (0.03 on one side, 0.0 on the other, in the example below)
how many points the border is made of (n=7 in example below), and
n coordinate pairs (x, y).
This is one example.
0.0300 0.0000 7
2660607.5 6332685.5 2660565.0 6332690.5 2660541.5 6332794.5
2660621.7 6332860.5 2660673.8 6332770.5 2660669.0 6332709.5
2660607.5 6332685.5
I want to make a raster map in which each pixel has the value of S corresponding to the region in which the center of the pixel falls.
Note that the borders represent step changes in S. The various values of S represent discrete classes (e.g. grass or water), and are not values that can be averaged (i.e. no wet grass!).
Also note that not all borders are closed loops like the example above. This is a bit like country borders: e.g. the US-Canada border isn't a closed loop, but rather a line joining up at each end with two other borders: the Canada-ocean and the US-ocean "borders". (Closed-loop borders do exist nevertheless!)
Can anyone point me to an algorithm that can do this? I don't want to reinvent the wheel!
The general case for processing this sort of geometry in vector form can be quite difficult, especially since nothing about the structure you describe requires the geometry to be consistent. However, since you just want to rasterize it, then treating the problem as a Voronoi diagram of line segments can be more robust.
Approximating the Voronoi diagram can be done graphically in OpenGL by drawing each line segment as a pair of quads making a tent shape. The z-buffer is used to make the closest quad take precedence, and thus color the pixel based on whichever line is closest. The difference here is that you will want to color the polygons based on which side of the line they are on, instead of which line they represent. A good paper discussing a similar algorithm is Hoff et al's Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware
The 3d geometry will look something like this sketch with 3 red/yellow segments and 1 blue/green segment:
This procedure doesn't require you to convert anything into a closed loop, and doesn't require any fancy geometry libraries. Everything is handled by the z-buffer, and should be fast enough to run in real time on any modern graphics card. A refinement would be to use homogeneous coordinates to make the bases project to infinity.
I implemented this algorithm in a Python script at http://www.pasteall.org/9062/python. One interesting caveat is that using cones to cap the ends of the lines didn't work without distorting the shape of the cone, because the cones representing the end points of the segments were z-fighting. For the sample geometry you provided, the output looks like this:
I'd recommend using a geometry algorithm library like CGAL. Especially the second example in the "2D Polygons" page of the reference manual should provide what you need. You can define each "border" as a polygon and check if certain points are inside the polygons. So basically it would be something like
for every y in raster grid
    for every x in raster grid
        for each defined polygon p
            if point(x,y) is inside polygon p
                pixel[X][Y] = inside_color[p]
I'm not so sure about what to do with the outside_color because the outside regions will overlap, won't they? Anyway, looking at your example, every outside region could be water, so you just could do a final
if pixel[X][Y] still undefined then pixel[X][Y] = water_value
(or as an alternative, set pixel[X][Y] to water_value before iterating through the polygon list)
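For completeness, the "point(x,y) is inside polygon p" step is the classic even-odd (ray-casting) test; a small Python sketch, where poly is a list of (x, y) vertices:

def point_in_polygon(x, y, poly):
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        # Count crossings of a horizontal ray extending to the right of (x, y).
        if (yi > y) != (yj > y):
            x_cross = xi + (y - yi) * (xj - xi) / (yj - yi)
            if x < x_cross:
                inside = not inside
        j = i
    return inside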
first, convert all your borders into closed loops (possibly including the edges of your map), and identify the inside colour. this has to be possible, otherwise you have an inconsistency in your data
use bresenham's algorithm to draw all the border lines on your map, in a single unused colour
store a list of all the "border pixels" as you do this
then for each border:
    triangulate it (delaunay)
    iterate through the triangles till you find one whose centre is inside your border (point-in-polygon test)
    floodfill your map at that point in the border's interior colour
once you have filled in all the interior regions, iterate through the list of border pixels, seeing which colour each one should be
choose two unused colors as markers "empty" and "border"
fill all area with "empty" color
draw all region borders by "border" color
iterate through points to find first one with "empty" color
determine which region it belongs to (google "point inside polygon", probably you will need to make your borders closed as Martin DeMello suggested)
perform flood-fill algorithm from this point with color of the region
go to next "empty" point (no need to restart search - just continue)
and so on, till no "empty" points remain
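A rough Python sketch of that scheme, assuming the borders have already been drawn into the grid with the BORDER marker and that region_value_at(x, y) is a stand-in for your point-in-region lookup:

from collections import deque

EMPTY, BORDER = -1, -2          # two reserved marker values

def fill_regions(grid, region_value_at):
    h, w = len(grid), len(grid[0])
    for y in range(h):
        for x in range(w):
            if grid[y][x] == EMPTY:
                flood_fill(grid, x, y, region_value_at(x, y))

def flood_fill(grid, x, y, value):
    h, w = len(grid), len(grid[0])
    queue = deque([(x, y)])
    while queue:
        cx, cy = queue.popleft()
        if 0 <= cx < w and 0 <= cy < h and grid[cy][cx] == EMPTY:
            grid[cy][cx] = value
            queue.extend([(cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)])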
The way I've solved this is as follows:
March along each segment; stop at regular intervals L.
At each stop, place a tracer point immediately to the left and to the right of the segment (at a certain small distance d from the segment). The tracer points are attributed the left and right S-value, respectively.
Do a nearest-neighbour interpolation. Each point on the raster grid is attributed the S of the nearest tracer point.
This works even when there are non-closed lines, e.g. at the edge of the map.
This is not a "perfect" analytical algorithm. There are two parameters: L and d. The algorithm works beautifully as long as d << L. Otherwise you can get inaccuracies (usually single-pixel) near segment junctions, especially those with acute angles.
