Visiting the points in a triangle in a random order - algorithm

For a right triangle specified by an inequality aX + bY <= c over the integers,
I want to plot each pixel(*) in the triangle once and only once, in a pseudo-random order, and without storing a list of previously hit points.
I know how to do this for a line segment between 0 and X:
pick a random starting point O along the line,
pick a step P that is relatively prime to X,
repeat for up to X times: O_next = (O_cur + P) MOD X
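For example, a quick Python sketch of that 1D traversal (the function name is just illustrative):

from math import gcd
from random import randrange

def visit_line(x):
    """Yield every integer in [0, x) exactly once, in a scrambled order."""
    o = randrange(x)                  # random starting offset O
    p = randrange(1, x + 1)           # candidate stride P
    while gcd(p, x) != 1:             # keep trying until P is coprime to x
        p = randrange(1, x + 1)
    for _ in range(x):
        yield o
        o = (o + p) % x               # O_next = (O_cur + P) MOD x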
To do this for a triangle, I would need to:
1. Count the number of pixels in the triangle without building a list.
2. Map an integer in 0..points to an (x, y) pair that is a valid pixel inside the triangle.
I hope any solution could be generalized to pyramids and higher dimensional shapes.
(*) I use the CG term pixel for a pair of integers (X, Y) such that the inequality is satisfied.

Since you want to guarantee visiting each pixel once and only once, it's probably better to think in terms of pixels rather than the real (continuous) triangle.
You can slice the triangle horizontally and get a bunch of horizontal scan lines. Connect the scan lines end to end and you have converted your "triangle" into one long line. Apply your point-visiting algorithm to that long chain of scan lines.
By the way, this mapping only needs to happen on paper: all you need is a function that returns (x, y) given a position t along the virtual scan line.
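A minimal sketch of such a function in Python, assuming the triangle has already been scan converted into a list of rows (y, x_start, x_end) with x_end inclusive:

from bisect import bisect_right

def make_mapper(scanlines):
    """scanlines: list of (y, x_start, x_end) rows, x_end inclusive."""
    offsets, total = [], 0
    for (y, x0, x1) in scanlines:
        offsets.append(total)             # where this row starts on the virtual line
        total += x1 - x0 + 1

    def pixel_at(t):
        """Map a position t (0 <= t < total) on the virtual line to a pixel (x, y)."""
        i = bisect_right(offsets, t) - 1  # find the scan line containing t
        y, x0, _ = scanlines[i]
        return (x0 + (t - offsets[i]), y)

    return pixel_at, total

Combined with the coprime-stride traversal from the question, pixel_at(t) yields the pixels of the triangle in a scrambled order without storing a list of visited points.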
Edit:
To convert two points into a line segment, look up Bresenham's scan conversion. Once you have converted the 3 edges into series of points, put all the points into a bucket and group them by y. Within the same y value, sort the points by x: the smallest x within a y value is the start point of that scan line and the largest x is its end point. This is called scan converting a triangle; you can find more info if you Google it.
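For illustration, a rough sketch of that bucketing step (the line rasterizer here is a simple stand-in, not Bresenham proper):

from collections import defaultdict

def edge_points(p0, p1):
    """Crude line rasterizer; a real implementation would use Bresenham."""
    (x0, y0), (x1, y1) = p0, p1
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    return [(round(x0 + (x1 - x0) * i / steps), round(y0 + (y1 - y0) * i / steps))
            for i in range(steps + 1)]

def scan_convert(a, b, c):
    """Return scan lines (y, x_min, x_max) covering the triangle with vertices a, b, c."""
    rows = defaultdict(list)
    for p0, p1 in ((a, b), (b, c), (c, a)):
        for (x, y) in edge_points(p0, p1):
            rows[y].append(x)
    return [(y, min(xs), max(xs)) for y, xs in sorted(rows.items())]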

Here's a solution for Triangle Point Picking.
What you have to do is choose two vectors (sides) of your triangle, multiply each by a random number in [0, 1] and add them up. This gives a uniform distribution over the parallelogram spanned by the two vectors. You then have to check whether the result lies inside the original triangle; if it doesn't, either reflect it back in or simply discard it and try again.
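A small Python sketch of this, for a triangle with vertices A, B, C given as (x, y) tuples; here a point falling outside the triangle is reflected back in rather than discarded:

import random

def sample_point(A, B, C):
    """Return a uniformly distributed point inside triangle A-B-C."""
    u, v = random.random(), random.random()
    if u + v > 1.0:                   # landed in the other half of the parallelogram
        u, v = 1.0 - u, 1.0 - v       # reflect back into the triangle
    ab = (B[0] - A[0], B[1] - A[1])   # first edge vector
    ac = (C[0] - A[0], C[1] - A[1])   # second edge vector
    return (A[0] + u * ab[0] + v * ac[0],
            A[1] + u * ab[1] + v * ac[1])

Note that this samples real-valued points; by itself it does not guarantee visiting every integer pixel exactly once.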

One method is to put all of the pixels into an array and then shuffle the array (this is O(n)), then visit the pixels in the order given by the shuffled array. This could require quite a lot of memory, though.
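For example, a straightforward Fisher-Yates shuffle in Python, assuming the pixels can be enumerated up front:

import random

def shuffled_pixels(pixels):
    """pixels: iterable of (x, y) tuples; returns them in a random order."""
    pixels = list(pixels)
    for i in range(len(pixels) - 1, 0, -1):   # Fisher-Yates, O(n)
        j = random.randint(0, i)
        pixels[i], pixels[j] = pixels[j], pixels[i]
    return pixels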

Here's a method which wastes some CPU time but probably doesn't waste as much as a more complicated method would do.
Compute a rectangle that circumscribes the triangle. It is easy to "linearize" that rectangle, each scan line followed by the next. Use the algorithm that you already know to traverse the pixels of the rectangle. When you hit each pixel, check whether the pixel is in the triangle, and if not, skip it.
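A sketch of this rejection idea in Python, reusing the 1D generator sketched in the question and assuming the bounding box is [x0, x0+width) x [y0, y0+height):

def visit_triangle(a, b, c, x0, y0, width, height):
    """Yield each pixel of aX + bY <= c inside the bounding box, once, in scrambled order."""
    for t in visit_line(width * height):   # coprime-stride traversal of the box
        x, y = x0 + t % width, y0 + t // width
        if a * x + b * y <= c:             # keep only pixels inside the triangle
            yield (x, y)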

I would consider the edges of the triangle as a single line that is cut into segments. The segments would be stored in an array, where each segment's length is stored along with its offset into the total length of the line. Then, depending on the value of O, you can select which array element contains the pixel you want to draw at that moment and paint the pixel based on the values in that element.


Find the rectangle with the maximum area, containing a specific point in an occupancy grid

Problem
Given an occupancy grid, for example:
...................*
*...............*...
*..*.............*..
...........*........
....................
..*.......X.........
............*.*.*...
....*..........*....
...*........*.......
..............*.....
where * represents an occupied block, . represents a free block and X represents the point (or block) of interest, what is the most time-efficient algorithm to find the largest rectangle which includes X but does not include any obstacles, i.e. any *?
For example, the solution to the provided grid would be:
.....######........*
*....######.....*...
*..*.######......*..
.....######*........
.....######.........
..*..#####X.........
.....######.*.*.*...
....*######....*....
...*.######.*.......
.....######...*.....
My Thoughts
Given that we have a known starting point X, I can't help but think there must be a straightforward solution that "snaps" lines to the outer boundaries to create the largest rectangle.
My current thinking is to snap lines to the maximum position offsets (i.e. go to the next row or column until you encounter an obstacle) in a cyclic manner. E.g. you propagate a horizontal line down from the point X until there is an obstacle along that line, then you propagate a vertical line left until you encounter an obstacle, then a horizontal line up and a vertical line right. You repeat this, starting with each of the four moving lines, to get four rectangles, and then you select the rectangle with the largest area. However, I do not know whether this is optimal, nor whether it is the quickest approach.
This problem is a well-known one in Computational Geometry. A simplified version of this problem (without a query point) is briefly described here. The problem with a query point can be formulated in the following way:
Let P be a set of n points in a fixed axis-parallel rectangle B in the plane. A P-empty rectangle (or just an empty rectangle for short) is any axis-parallel rectangle that is contained in B and its interior does not contain any point of P. We consider the problem of preprocessing P into a data structure so that, given a query point q, we can efficiently find the largest-area P-empty rectangle containing q.
The paragraph above has been copied from this paper, where the authors describe an algorithm and data structure for a set of N points in the plane which allow finding the maximal empty rectangle for any query point in O(log^4(N)) time. Unfortunately, it's a theoretical paper that doesn't contain any implementation details.
A possible approach could be to somehow (implicitly) rule out irrelevant occupied cells: those that are in the "shadow" of others with respect to the starting point:
 0         1
 01234567890123456789 → X
0....................
1....................
2...*................
3...........*........
4....................
5..*.......X.........
6............*.......
7....*...............
8....................
9....................
↓ Y
Looking at this picture, you could state that:
there are only 3 relevant xmin values for the rectangle: [3, 4, 5], each having an associated (ymin, ymax), respectively [(3,6), (0,6), (0,9)]
there are only 3 relevant xmax values for the rectangle: [10, 11, 19], each having an associated (ymin, ymax), respectively [(0,9), (4,9), (4,5)]
So the problem can be reduced to finding the rectangle with the largest area out of the 3x3 set of unique combinations of xmin and xmax values.
If you take into account the preparation step of selecting the relevant occupied cells, this has a complexity of O(occ_count), not counting sorting if that is still needed, with occ_count being the number of occupied cells.
Finding the best solution (in this case among the 3x3 combinations) is O(min(C,R,occ_count)²). The min(C,R) reflects that you could 'transpose' the approach in case R < C (which is actually true in this example), and that the number of relevant xmins and xmaxs has the number of occupied cells as an upper limit.

Minimum amount of rectangles from multi-colored grid

I've been working for some time on an XNA roguelike game and I can't get my head around the following problem: developing an algorithm to divide a matrix of non-binary values into the fewest rectangles grouping those values.
Example: given the following matrix
  01234567
0 ---##*##
1 ---##*##
2 --------
The algorithm should return:
3x3 rectangle of '-'s starting at (0,0)
2x2 rectangle of '#'s starting at (3, 0)
1x2 rectangle of '*'s starting at (5, 0)
2x2 rectangle of '#'s starting at (6, 0)
5x1 rectangle of '-'s starting at (3, 2)
Why am I doing this: I have a pretty big dungeon map with a size of approximately 500x500. If I were to individually call the "Draw" method for each tile's Sprite, my FPS would be far too low. It is possible to optimize this process by grouping similarly textured tiles and applying texture repetition to them, which would dramatically decrease the number of GPU draw calls. For example, if my map were the previous matrix, instead of calling Draw 24 times, I'd call it only 5 times.
I've looked at some algorithms which can give you the biggest rectangle of a type inside a given binary matrix, but that doesn't fit my problem.
Thanks in advance!
You can use breadth first searches to separate each area of different tile type.
Picking a partitioning within the individual shapes is an NP-hard problem (see https://en.wikipedia.org/wiki/Graph_partition), so you can't find an efficient solution that guarantees the minimum number of rectangles. However if you don't mind an extra rectangle or two for each shape and your shapes are relatively small, you can come up with algorithms that split the shape into a number of rectangles close to the minimum.
An off-the-top-of-my-head guess for something that could potentially work would be to pick a tile with the maximum number of connecting tiles and start growing a rectangle from it, using a recursive algorithm to maximize the size. Remove the resulting rectangle from the shape, then repeat until there are no more tiles left that are not included in a rectangle. Again, this won't produce perfect results; there are shapes on which this will return more than the minimum number of rectangles, but it's an easy-to-implement ballpark solution. With a little more effort I'm sure you will be able to find better heuristics to use and get better results too.
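As a starting point, a Python sketch of the first step, using BFS to split the grid into connected same-tile regions (the rectangle-growing step would then run on each region separately):

from collections import deque

def regions(grid):                     # grid: list of equal-length strings
    h, w = len(grid), len(grid[0])
    seen = [[False] * w for _ in range(h)]
    out = []
    for sy in range(h):
        for sx in range(w):
            if seen[sy][sx]:
                continue
            tile, region, q = grid[sy][sx], [], deque([(sx, sy)])
            seen[sy][sx] = True
            while q:
                x, y = q.popleft()
                region.append((x, y))
                for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                    if 0 <= nx < w and 0 <= ny < h and not seen[ny][nx] \
                            and grid[ny][nx] == tile:
                        seen[ny][nx] = True
                        q.append((nx, ny))
            out.append((tile, region))
    return out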
One possible building block is a routine to check, given two points, whether the rectangle formed by using those points as opposite corners is all of the same type. I think that a fast (but unreliable) way of testing this is to map each type to a large random number and then work out the sum of the numbers within the rectangle modulo a large prime. Take one of the numbers within the rectangle: if the sum of the numbers within the rectangle equals the size of the rectangle times that sampled number, assume that all of the numbers in the rectangle are the same.
In one dimension we can work out all of the cumulative sums a, a+b, a+b+c, a+b+c+d, ... in time O(N) and then, for any two points, work out the sum of the interval between them by subtracting cumulative sums: b+c+d = (a+b+c+d) - a. In two dimensions, we can use cumulative sums to work out, for each point, the sum of all of the numbers at positions whose x and y coordinates are no greater than those of that point. For any rectangle we can then work out the sum of the numbers within it as A - B - C + D, where A, B, C, D are two-dimensional cumulative sums.
So with O(N) pre-processing we can build a table that allows us to compute the sum of the numbers within a rectangle, specified by its opposite corners, in O(1) time. Unless we are very unlucky, checking this sum against the size of the rectangle times a number extracted from within the rectangle will tell us whether the rectangle is all of the same type.
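A Python sketch of that check (the prime, the labelling scheme and the function names are just one possible choice):

import random

P = (1 << 61) - 1                                # a large prime modulus

def build_table(grid):
    """grid: list of equal-length strings. Returns (labels, 2D cumulative sums mod P)."""
    labels = {t: random.randrange(P) for row in grid for t in row}
    h, w = len(grid), len(grid[0])
    S = [[0] * (w + 1) for _ in range(h + 1)]    # S[y][x] = sum over rows < y, cols < x
    for y in range(h):
        for x in range(w):
            S[y + 1][x + 1] = (S[y][x + 1] + S[y + 1][x] - S[y][x]
                               + labels[grid[y][x]]) % P
    return labels, S

def rect_uniform(grid, labels, S, x0, y0, x1, y1):
    """Probabilistic O(1) test: is the rectangle with inclusive corners all one type?"""
    total = (S[y1 + 1][x1 + 1] - S[y0][x1 + 1] - S[y1 + 1][x0] + S[y0][x0]) % P
    expected = (x1 - x0 + 1) * (y1 - y0 + 1) * labels[grid[y0][x0]] % P
    return total == expected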
Based on this, repeatedly start with a random point that is not yet covered. Take a point just to its left and move that point left as long as the interval between the two points is of the same type. Then move that point up as long as the rectangle formed by the two points is of the same type. Now move the first point right and down as long as the rectangle formed by the two points is of the same type. At this stage you think you have a large rectangle covering the original point: check it. In the unlikely event that it is not all of the same type, add that rectangle to a list of "fooled me" rectangles you can check against in future and try again. If it is all of the same type, count it as one extracted rectangle and mark all of the points in it as covered. Continue until all points are covered.
This is a greedy algorithm that makes no attempt at producing the optimal solution, but it should be reasonably fast. The most expensive part is checking that the rectangle really is all of the same type, and, assuming you pass that test, each cell checked is also a cell covered, so the total cost for the whole process should be O(N), where N is the number of cells of input data.

Efficiently project 2D plane onto 1D line

I have an array of [width, height, x, y] vectors, like so: [[width_1, height_1, x_1, y_1], ..., [width_n, height_n, x_n, y_n]], representing a 2D plane of blocks. This array is potentially long (n > 10k).
An example (images omitted): a set of blocks and the line segments they must be projected onto, like a cast shadow.
The problem, however, is that the blocks are not neatly stacked but can be of any shape and in any position.
The criterion for which block should be projected doesn't really matter. In the example I took the first (on the x-axis) largest, which seems reasonable.
What is important is that a list (vector) is maintained of which other blocks were occluded by the projected block. The blocks carry metadata which is important, so I should be able to answer the question "to which line segment was this block projected?"
So, concretely: how can a 2D plane be efficiently projected onto a line, in a sense "cast a shadow", in a way that maintains a method of seeing which blocks take part in each line segment (shadow)?
Edit: while the problem is rather generic, the concrete problem is that I have a document with multiple columns and floating images, for which I would like to generate a "minimap" that indicates where to find certain annotations (colors).
Assuming that the rectangles are always aligned with the axes, as in your example, I would use a sweep line approach:
Sort the rectangle tops/bottoms according to their y value. For every element, keep a reference to the full rectangle data.
Scan the list in increasing y order, maintaining a set S of the rectangles that contain the current y value. For every top of a rectangle r, add r to S; similarly, for every bottom of r, remove r from S. Every time you do this, a segment is being closed and a new one is started. If you inspect S at this point, you have all the rectangles that participate in the segment, so this is the place to apply a policy for choosing the segment's color.
If you need to know later which segments a rectangle belongs to, you can build a mapping between rectangles and segment lists and update it during the scan.
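A compact Python sketch of that sweep, assuming each block is given as (width, height, x, y) with [y, y + height) as its vertical extent:

def project(blocks):
    events = []                                  # (y, is_bottom, block index)
    for i, (w, h, x, y) in enumerate(blocks):
        events.append((y, 0, i))                 # top: block starts covering y
        events.append((y + h, 1, i))             # bottom: block stops covering y
    events.sort()

    segments, active, prev_y = [], set(), None
    for y, is_bottom, i in events:
        if active and prev_y is not None and y > prev_y:
            segments.append((prev_y, y, sorted(active)))   # segment + its blocks
        (active.discard if is_bottom else active.add)(i)
        prev_y = y
    return segments                              # list of (y_start, y_end, block ids)

Each entry of segments answers "which blocks were occluded into this piece of the line", and inverting the mapping answers "to which line segment was this block projected".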

How to find the pixel that is farthest from another in the same pixel group

By "Group", I mean a set of pixels such that every pixel at least have one adjacent pixel in the same set, the drawing shows an example of a group.
I would like to find the pixel which is having the greatest straight line distance from a designated pixel (for example, the green pixel). And the straight line connecting the two pixels (the red line) must not leave the group.
My solution is looping through the degrees and simulating the progress of the lines starting from the green pixel with the degree and see which line travelled the farthest distance.
longestDist = 0
bestDegree = -1
farthestX = -1
farthestY = -1
FOR EACH degree from 0 to 360
    dx = longestDist * cos(degree)
    dy = longestDist * sin(degree)
    IF Point(x+dx, y+dy) does not belong to the group
        // this direction cannot beat the current best, so skip it
        Continue with next degree
    END IF
    (farthestX, farthestY) = simulate(x, y, degree)
    d = findDistance(x, y, farthestX, farthestY)
    IF d > longestDist
        longestDist = d
        bestDegree = degree
    END IF
END FOR
It is obviously not the best algorithm. Thus I am asking for help here.
Thank you and sorry for my poor English.
I wouldn't work with angles. But I'm pretty sure the largest distance will always be between two pixels at the edge of the set, so I'd trace the outline: from any pixel in the set, go in any direction until you reach the edge of the set. Then move (counter)clockwise along the edge. Do this with each pixel as a starting point and you'll be able to find the largest distance. It's still pretty greedy, but I thought it might give you an alternative starting point to improve upon.
Edit: What just came to my mind: say you have a start pixel s and an end pixel e. In the first iteration for s, the corresponding e will be adjacent (the next one along the edge in clockwise direction). As you iterate along the edge, the case might occur that there is no straight line through the set between s and e. In that case the line will hit another part of the set edge (a pixel p), and you can continue the iteration of the edge at that pixel (e = p).
Edit 2: And if you hit a p, you'll know that there can be no longer distance between s and e, so in the next iteration of s you can skip that whole part of the edge (between s and p) and start at p again.
Edit 3: Use the above method to find the first p. Take that p as the next s and continue. Repeat until you reach your first p again. The max distance will be between two of those p's, unless the edge of the set is convex, in which case you won't find a p.
Disclaimer: this is untested and these are just ideas off the top of my head; no drawings have been made to substantiate my claims and everything might be wrong (i.e. think about it for yourself before you implement it ;D)
First, note that the angle discretization in your algorithm may depend on the size of the grid. If the step is too large, you can miss certain cells; if it is too small, you will end up visiting the same cell again and again.
I would suggest that you enumerate the cells in the region and test the condition for each one individually instead. The enumeration can be done using breadth-first or depth-first search (I think the latter would be preferable, since it will allow one to establish a lower bound quickly and do some pruning).
One can maintain the farthest point X found so far and, for each new point in the region, check whether (a) the point is farther away than the one found so far and (b) it is connected to the origin by a straight line passing through cells of the region only. If both conditions are satisfied, update X; otherwise go on with the search. If condition (a) is not satisfied, condition (b) doesn't have to be checked.
The complexity of this solution is O(N*M), where N is the number of cells in the region and M is the larger dimension of the region (max(width, height)). If performance is of the essence, more sophisticated heuristics can be applied, but for a reasonably sized grid this should work fine.
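A rough Python sketch of this approach (the straight-line test samples the line cell by cell, which stands in for a proper Bresenham walk):

def visible(region, origin, target):
    """True if the straight line from origin to target stays inside region, a set of (x, y) cells."""
    (x0, y0), (x1, y1) = origin, target
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(steps + 1):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if (x, y) not in region:
            return False
    return True

def farthest(region, origin):
    best, best_d2 = origin, 0
    for cell in region:
        d2 = (cell[0] - origin[0]) ** 2 + (cell[1] - origin[1]) ** 2
        if d2 > best_d2 and visible(region, origin, cell):   # check (a) before (b)
            best, best_d2 = cell, d2
    return best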
Search for pixel, not for slope. Pseudocode.
bestLength = 0
for each pixel in pixels
    currentLength = findDistance(x, y, pixel.x, pixel.y)
    if currentLength > bestLength
        if goodLine(x, y, pixel.x, pixel.y)
            bestLength = currentLength
            bestX = pixel.x
            bestY = pixel.y
        end
    end
end
You might want to sort the pixels in descending order of |dx| + |dy| before that.
Use two data structures:
One that contains the pixels sorted by angle.
A second one sorted by distance (for fast access; this should also contain "pointers" into the first data structure).
Walk through the angle-sorted one and check for each pixel that the line to it stays within the region. Some pixels will have the same angle, so you can walk from the origin along the line until you leave the region; you can then eliminate all the pixels that lie beyond that point. Also, whenever the maximum distance increases, remove all pixels that have a shorter distance.
Treat your region as a polygon instead of a collection of pixels. From this you can get a list of line segments (the edges of your polygon).
Draw a line from your start pixel to each pixel you are checking. The longest line that does not intersect any of the line segments of your polygon indicates your most distant pixel that is reachable by a straight line from your pixel.
There are various optimizations you can make to this and a few edge cases to check, but let me know if you understand the idea before I post those... in particular, do you understand what I mean by treating it as a polygon instead of a collection of pixels?
To add, this approach will be significantly faster than any angle-based approach, or any approach that requires "walking", for all but the smallest collections of pixels. You can further optimize because your problem is equivalent to finding the most distant endpoint of a polygon edge that can be reached via an unintersected straight line from your start point. This can be done in O(N^2), where N is the number of edges. Note that N will be much, much smaller than the number of pixels, and many of the algorithms proposed that use angles and/or pixel iteration are instead going to depend on the number of pixels.

Find out how to flood fill a polygon with the smallest number of vector lines

Say I have a vector polygon with holes. I need to flood fill it by drawing connected segments. Of course, since there are holes, I can't fill it using a single continuous polyline: I'll need to interrupt my path sometimes, then move to an area which was skipped and start another polyline there.
My goal is to find a set of polylines needed to fill the whole polygon. Better if I can find the smallest set (that is, the way I can fill the polygon with the minimum number of interruptions).
Bonus question: how could I do that for partial-density fills? Say I don't want to fill at 100% density but at 50% (this will require that the fill lines, supposing they're parallel to each other and have a single-unit width, are placed at a distance of two units).
I couldn't find a similar question here, although there are many related to flood-fill algorithms.
Any ideas or pointers?
Update: this picture from Wikipedia shows a good hypothetical flood path. I believe I could do that using a bitmap. However, I've got a vector polygon. Should I rasterize it?
I'm assuming here that the distance between lines is 1 unit.
A crude implementation, with no guarantee of finding the minimum number of polylines, is:
Start with an empty set of polylines.
Determine xmin and xmax of the polygon.
Loop x from xmin to xmax, with a step of 1. Let L be the vertical line at x.
Intersect the vertical line L with your polygon (a quick algorithm that's easy to find; see the sketch at the end of this answer). That will give you a set of segments: {(x,y1)-(x,y2)}.
For all polylines and all segments, try to merge each segment with the end of a polyline (see note 1 below). When you merge a segment into a polyline, append a small stretch at the end of the polyline (to join it to the segment), followed by the segment itself. For every segment that you can't merge this way, add a new polyline to the global set.
At the end, try to merge polylines again where possible (ends close together).
An optimal algorithm for merging new segments into existing polylines should be easy to find (hashing on y), or a brute-force algorithm may suffice:
number of new segments per line scan should not be too high if your polygons do not have zillions of holes,
number of global polylines at every step should not be too large,
you compare only with the end segment of each polyline, not the whole of it.
Added note (1): to cover the case where your polygon has nearly vertical edges, the merge process should not look only at the y delta, but should allow a merge if the two y ranges overlap (that is, the y range of the end of the polyline overlaps the y range of the segment).
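A minimal Python sketch of step 4, intersecting the vertical line x = xs with a polygon given as a list of (x, y) vertices (a hole would simply be another ring handled the same way, with its crossings added to the same list):

def vertical_intersections(poly, xs):
    """Return segments (xs, y_start, y_end) where the line x = xs lies inside the polygon."""
    ys = []
    n = len(poly)
    for i in range(n):
        (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
        if (x0 <= xs < x1) or (x1 <= xs < x0):   # edge crosses the vertical line
            ys.append(y0 + (y1 - y0) * (xs - x0) / (x1 - x0))
    ys.sort()
    # consecutive pairs of crossings bound the parts of the line inside the polygon
    return [(xs, ys[i], ys[i + 1]) for i in range(0, len(ys) - 1, 2)]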
