Calculating the bounding box for geographic coordinates - algorithm

Given a list of coordinates, how do I calculate the minimum bounding rectangle (MBR) while avoiding the global gotchas described in the article "Unlocking the Mysteries of the Bounding Box"?
The Google Maps API method fitBounds() seems to handle these gotchas well.
Edit:
Using an example from the article above, let's say I have to calculate a bounding box for two locations in the imaginary country of Boxtopia: point A(170, 40) and point B(-170, 50). If I construct my bounding box using (xmin, ymin) as the southwest corner and (xmax, ymax) as the northeast corner, I get (-170, 40) and (170, 50) respectively, a box that spans 340 degrees of longitude instead of the minimum 20 degrees.

For the 1D problem, you are looking for the largest empty interval on a 'circle'. Sorting the data and then looking for the largest gap will provide the answer in O(N log N).
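For illustration, a minimal Python sketch of this largest-gap idea (assuming longitudes in degrees; latitude does not wrap, so a plain min/max is enough for it):

    def lon_bounds(lons):
        """Smallest longitude span containing all points, allowing a wrap at the antimeridian."""
        lons = sorted(lons)
        n = len(lons)
        # Find the largest gap between consecutive longitudes, including the
        # wrap-around gap from the largest value back to the smallest (+360).
        best_gap, best_i = -1.0, 0
        for i in range(n):
            nxt = lons[0] + 360 if i == n - 1 else lons[i + 1]
            gap = nxt - lons[i]
            if gap > best_gap:
                best_gap, best_i = gap, i
        # The box starts just after the largest gap and ends where the gap begins.
        west = lons[(best_i + 1) % n]
        east = lons[best_i]
        return west, east  # east < west numerically means the box crosses the antimeridian

    # Boxtopia example from the question: returns (170, -170), a 20-degree box
    print(lon_bounds([170, -170]))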

One easy solution is to sort the x coordinates, then incrementally add 360 to the lowest values and see whether that results in a smaller bound.

Related

Algorithm to optimally fit a sphere between other spheres in a 3D bounding box?

I'm struggling with a 3D problem for which I'm trying to find an efficient algorithm.
I have a bounding box with given width, height, and depth.
I also have a list of spheres. That is, a center coordinate (xi,yi,zi) and radius ri for each sphere.
The spheres are guaranteed to fit within the bounding box and not to overlap each other.
So my situation is like this:
Now I have a new sphere with radius r, which I have to fit inside the bounding box, not overlapping any of the previous spheres.
I also have a target point T = (x,y,z) and my goal is to fit this new sphere (given the conditions above) as close as possible to this target point.
I'm trying to construct an efficient algorithm to find an optimal position for the new sphere. Optimal as in: as close to the target point as possible. Or a "false" result if there is no space to fit this new sphere between or around the existing ones anywhere within the bounding box.
I have thought of all sorts of complex approaches, such as building some sort of parametric description of the remaining volume, starting with the bounding box and subtracting the existing spheres one by one. But it doesn't seem to lead me towards a workable solution.
Note that there are a lot of known 'sphere packing' algorithms, but they tend to just fill volumes with random spheres. They also often use a trial-and-error approach, making a certain number of random attempts and then terminating.
Whereas I have a given specific new sphere size, and I need to fit that in (or find out that it's not possible).
A possible approach is to compute the "distance map" of the spheres, i.e. the function that returns, for every point (x, y, z), the distance to the closest sphere surface: the minimum over all spheres of the distance to the center minus that sphere's radius. The map is made of the intersection of (hyper)conical surfaces.
Then you can explore the distance map around the target point and find the closest point with a value that exceeds the target radius.
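To make the distance-map idea concrete, here is a minimal Python sketch (names and signatures are illustrative) of the function and of the test for whether the new sphere fits at a given point:

    import math

    def clearance(p, spheres):
        """Distance from point p to the nearest existing sphere surface.
        spheres is a list of (cx, cy, cz, r); a negative value means p is inside a sphere."""
        return min(math.dist(p, (cx, cy, cz)) - r for cx, cy, cz, r in spheres)

    def fits(p, r_new, spheres, box_min, box_max):
        """True if a sphere of radius r_new centered at p stays in the box and overlaps no sphere."""
        inside_box = all(lo + r_new <= c <= hi - r_new
                         for c, lo, hi in zip(p, box_min, box_max))
        return inside_box and clearance(p, spheres) >= r_new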
If I am right, the distance map is directly related to the additively weighted Voronoi diagram of the sphere centers (https://en.wikipedia.org/wiki/Weighted_Voronoi_diagram), and the vertices of the diagram correspond to local maxima. Hence the closest Voronoi vertex with a value that exceeds the target radius will give a solution.
Unfortunately, the construction of this diagram won't be a barrel of laughs. Check the article "Euclidean Voronoi diagram of 3D balls and its computation via tracing edges" and its bibliography.
A possibly workable solution to estimate the distance map is to discretize space into a regular grid of cubes and, for every cube, obtain a lower and an upper bound of the distance function.
For a single given sphere and a given cube, it is possible to find the minimum and maximum value analytically. Then considering all spheres, you can find the smallest maximum and smallest minimum, which are an upper and lower bound of the true distance (the largest minimum won't do). Then you keep all the spheres such that the minimum remains below that upper bound and you get a (hopefully short) list of candidates.
Here you can check the distances to the spheres in the list: if the upper bound is smaller than the target radius, you can drop the cube, and if the lower bound exceeds the target radius, every point of the cube works and you have found a solution.
Otherwise, if the uncertainty range on the distance function is too large, subdivide the cube in smaller ones for a more accurate estimate of the upper and lower bounds.
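For the per-cube estimates, a small sketch (the exact minimum and maximum of |p - c| - r over an axis-aligned cube; names are illustrative):

    import math

    def cube_distance_bounds(cube_min, cube_max, center, radius):
        """Min and max, over all points p of the cube, of the distance from p to the sphere surface."""
        # Closest point of the cube to the sphere center: clamp the center to the cube.
        closest = [min(max(c, lo), hi) for c, lo, hi in zip(center, cube_min, cube_max)]
        # Farthest point of the cube: per axis, the corner coordinate farther from the center.
        farthest = [lo if abs(c - lo) > abs(c - hi) else hi
                    for c, lo, hi in zip(center, cube_min, cube_max)]
        return math.dist(closest, center) - radius, math.dist(farthest, center) - radius

Taking, over all spheres, the smallest of the maxima gives the upper bound used to drop a cube, and the smallest of the minima gives the lower bound used to accept it, as described above.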
To obtain a solution close to the target point, you will visit the cubes by increasing distance from the target (using nested digital spheres), until you find a match.
A key point in this process is to quickly find the spheres closest to a given cube, for the initial estimates. A data structure such as a kD-tree or similar might be helpful.

(Elasticsearch) Is there a way to add a ± range to the edges of a geo bounding box query?

I'd like to be able to do a geo bounding box query in Elasticsearch, with some margin for error (mostly in case max and min are given at the same location). Ideally this would give nicely rounded corners to the rectangle, with a quarter circle on each corner, but simply adding x miles to the top right and subtracting them from the bottom left would be sufficient.
I could do this by converting the distance to a difference in lat/long, but this is quite complicated since the mapping changes depending on the actual latitude, so I'd rather not do it that way.
Thanks in advance for any suggestions.
Whilst you cannot define a perfectly rounded rectangle, you can approximate it using a Geo Polygon Query instead of a bounding box; you just need to programmatically calculate some points on the rounded corners, for instance around a top-left corner with coordinates x0, y0, as in the sketch below.
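As a rough sketch (purely illustrative, assuming a corner 'radius' r already expressed in degrees and a locally planar approximation):

    import math

    def rounded_corner(x0, y0, r, start_deg, end_deg, steps=4):
        """Points approximating an arc of radius r swept around the corner (x0, y0)."""
        angles = [start_deg + (end_deg - start_deg) * i / steps for i in range(steps + 1)]
        return [(x0 + r * math.cos(math.radians(a)),
                 y0 + r * math.sin(math.radians(a)))
                for a in angles]

    # Top-left corner: sweep from 90 degrees (top) to 180 degrees (left).
    corner_points = rounded_corner(x0=2.0, y0=41.0, r=0.01, start_deg=90, end_deg=180)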
And so on for the other corners, with as many intermediate points as you want.
However, it seems a little bit of overkill; you might just want to enlarge your bounding box.
UPDATE: another way would be to implement a custom function through Scripting, since a GeoPoint's latitude and longitude can be accessed from scripts.

How to find collision center of two rectangles? Rects can be rotated

I've just implemented collision detection using SAT, with this article as a reference for my implementation. The detection is working as expected, but I need to know where both rectangles are colliding.
I need to find the center of the intersection, the black point on the image above (but I don't have the intersection area either). I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, which I don't need.
The information I have about the rectangles is the four points that represent them: the upper-right, upper-left, lower-right, and lower-left coordinates. I'm trying to find an algorithm that gives me the intersection of the rectangles defined by these points.
I just need to put an image on top of it, like two crashed cars with an image placed at the collision center. Any ideas?
There is another way of doing this: finding the center of mass of the collision area by sampling points.
Create the following function:
bool IsPointInsideRectangle(Rectangle r, Point p);
Define a search rectangle as:
TopLeft = (MIN(x), MAX(y))
TopRight = (MAX(x), MAX(y))
LowerLeft = (MIN(x), MIN(y))
LowerRight = (MAX(x), MIN(y))
where MIN and MAX are taken over the x and y coordinates of both rectangles' corners.
You will now define a step for dividing the search area like a mesh. I suggest you use AVG(W,H)/2 where W and H are the width and height of the search area.
Then, you iterate over the mesh points, checking for each one whether it is inside the collision area:
IsPointInsideRectangle(rectangle1, point) AND IsPointInsideRectangle(rectangle2, point)
Define:
Xi : the ith partition of the mesh in X axis.
CXi: the count of mesh points that are inside the collision area for Xi.
Then:
    X_center = Σ(Xi · CXi) / Σ(CXi)
And you can do the same thing with Y, of course.
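For illustration, a rough Python sketch of this sampling approach (rectangle corners given in order; names are purely illustrative). Note that averaging all hit points directly is equivalent to the CXi-weighted average above:

    def is_point_inside_rectangle(corners, p):
        """corners: the rectangle's 4 vertices in order; works for any convex quadrilateral."""
        signs = []
        for i in range(4):
            (x1, y1), (x2, y2) = corners[i], corners[(i + 1) % 4]
            cross = (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1)
            signs.append(cross >= 0)
        return all(signs) or not any(signs)

    def collision_center(rect1, rect2, step):
        """Average of the mesh points lying inside both rectangles, or None if none were found."""
        xs = [x for x, _ in rect1 + rect2]
        ys = [y for _, y in rect1 + rect2]
        hits = []
        x = min(xs)
        while x <= max(xs):
            y = min(ys)
            while y <= max(ys):
                if is_point_inside_rectangle(rect1, (x, y)) and is_point_inside_rectangle(rect2, (x, y)):
                    hits.append((x, y))
                y += step
            x += step
        if not hits:
            return None
        return (sum(px for px, _ in hits) / len(hits),
                sum(py for _, py in hits) / len(hits))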
You need to do the intersection of the boundaries of the boxes using the line to line intersection equation/algorithm.
http://en.wikipedia.org/wiki/Line-line_intersection
Once you have the points where the edges cross, you might be OK with the average of those points, or the center along a particular direction. The "middle" is a little vague in the question.
Edit: in addition to this, you also need to work out whether any of the corners of either rectangle lie inside the other (this should be easy enough, even from the intersections). These corners should be included along with the intersection points when calculating the "average" center point.
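A minimal sketch of the segment intersection step (standard parametric form, purely illustrative):

    def segment_intersection(p1, p2, p3, p4):
        """Intersection point of segments p1-p2 and p3-p4, or None if they do not cross."""
        d = (p2[0] - p1[0]) * (p4[1] - p3[1]) - (p2[1] - p1[1]) * (p4[0] - p3[0])
        if d == 0:
            return None  # parallel or collinear
        t = ((p3[0] - p1[0]) * (p4[1] - p3[1]) - (p3[1] - p1[1]) * (p4[0] - p3[0])) / d
        u = ((p3[0] - p1[0]) * (p2[1] - p1[1]) - (p3[1] - p1[1]) * (p2[0] - p1[0])) / d
        if 0 <= t <= 1 and 0 <= u <= 1:
            return (p1[0] + t * (p2[0] - p1[0]), p1[1] + t * (p2[1] - p1[1]))
        return None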
This one's tricky because irregular polygons have no defined center. Since your polygons are (in the case of rectangles) guaranteed to be convex, you can probably find the corners of the polygon that comprises the collision (which can include corners of the original shapes or intersections of the edges) and average them to get ... something. It will probably be vaguely close to where you would expect the "center" to be, and for regular polygons it would probably match exactly, but whether it would mean anything mathematically is a bit of a different story.
I've been fiddling mathematically and have come up with the following, which solves the smoothness problem when points appear and disappear (as can happen when the movement of a hitbox causes the overlap to change from a rectangle to a triangle or vice versa). Without this extra bit, adding and removing corners causes the centroid to jump.
Here, take this fooplot.
The plot illustrates 2 rectangles, R and B (for Red and Blue). The intersection sweeps out an area G (for Green). The Unweighted and Weighted Centers (both Purple) are calculated via the following methods:
(0.225, -0.45): Average of corners of G
(0.2077, -0.473): Average of weighted corners of G
A weighted corner of a polygon is defined as the coordinates of the corner, weighted by the sine of the corner's angle.
This polygon has two 90-degree angles, one 59.03-degree angle, and one 120.96-degree angle. (Both of the non-right angles have the same sine, sin(θ) = 0.8574929….)
The coordinates of the weighted center are thus:
( (sin(θ) * (0.3 + 0.6) + 1 - 1) / (2 + 2 * sin(θ)), // x
  (sin(θ) * (1.3 - 1.6) + 0 - 1.5) / (2 + 2 * sin(θ)) ) // y
= (0.2077, -0.473)
With the provided example, the difference isn't very noticeable, but if the 4-gon were much closer to a 3-gon, there would be a significant deviation.
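As a small sketch of this weighting for a general convex intersection polygon (corners given in order; names are illustrative):

    import math

    def weighted_centroid(corners):
        """Centroid of the polygon's corners, each weighted by the sine of its interior angle."""
        wx = wy = wsum = 0.0
        n = len(corners)
        for i in range(n):
            px, py = corners[i - 1]          # previous corner
            cx, cy = corners[i]              # current corner
            nx, ny = corners[(i + 1) % n]    # next corner
            angle = math.atan2(py - cy, px - cx) - math.atan2(ny - cy, nx - cx)
            w = abs(math.sin(angle))         # sine of the interior angle at this corner
            wx += w * cx
            wy += w * cy
            wsum += w
        return wx / wsum, wy / wsum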
If you don't need to know the actual coordinates of the region, you could make two CALayers whose frames are the rectangles, and use one to mask the other. Then, if you set an image in the one being masked, it will only show up in the area where they overlap.

Algorithm for Finding Longest Stretch of a Value at any Angle in a 2D Matrix

I am currently working on a computer-vision program that requires me to determine the "direction" of a color blob in an image. The color blob generally follows an elliptical shape and thus can be used to track direction (with respect to an initially defined/determined orientation) through time.
The means by which I figured I would calculate changes in direction are described as follows:
Quantize possible directions (360 degrees) into N directions (potentially 8, for 45 degree angle increments).
Given a stored matrix representing the initial state (t0) of the color blob, also acquire a matrix representing the current state (tn) of the blob.
Iterate through these N directions and search for the longest stretch of the color value for that given direction (e.g. if the ellipse is rotated 45 degrees, with 0 being vertical, the longest length should be attributed to the 45-degree mark, or equivalently 225 degrees).
The concept itself isn't complicated, but I'm having trouble with the following:
Calculating the longest stretch of a value at any angle in an image. This is simple for angles such as 0, 45, and 90 degrees, but more difficult for the in-between angles; "quantizing" the angles is not as easy as it sounds to me.
Please do not worry about potential issues with distinguishing angles such as 0 and 90. Inertia can be used to determine the most likely direction of the color blob (in other words, based upon past orientation states).
My main concern is identifying the "longest stretch" in the matrix.
Thank you for your help!
You can use image moments as suggested here: Matlab - Image Momentum Calculation.
In matlab you would use regionprops with the property 'Orientation', but the wiki article in the previous answer should give you all of the information you need to code it in the language of your choice.
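For reference, a rough NumPy sketch of the orientation-from-moments formula referenced above (a binary mask of the blob is assumed; this replaces the longest-stretch search rather than implementing it):

    import numpy as np

    def blob_orientation(mask):
        """Orientation of a binary blob, in radians from the x axis, via second central moments."""
        ys, xs = np.nonzero(mask)
        x_bar, y_bar = xs.mean(), ys.mean()
        mu20 = ((xs - x_bar) ** 2).mean()
        mu02 = ((ys - y_bar) ** 2).mean()
        mu11 = ((xs - x_bar) * (ys - y_bar)).mean()
        return 0.5 * np.arctan2(2 * mu11, mu20 - mu02)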

Moving GPS position with a certain distance (in meters) in a known direction

I have some GPS sample data taken from a device. What I need to do is to "move" the data to the "left" by, let's say, 1 to 5 meters. I know how to do the moving part; the only problem is that the moving is not as accurate as I want it to be.
What I currently do:
I take the GPS coordinates (latitude, longitude pairs)
I convert them using plate carrée transformation.
I scale the resulting coordinates to the longitudinal distance (distance on x) and the latitudinal distance (distance on y) - imagine the entire GPS sample data is inside a rectangle being bound by the maximum and minimum latitude/longitude. I compute these distances using the formula for the Great Circle Distance between the extreme values for longitude and latitude.
I move the points x meters in the wanted direction
I convert back to GPS coordinates
I don't really have the accuracy I want. For example, moving to the left by 3 meters actually moves the point less than 3 meters (around 1.8 m, maybe 2 m).
What are the known solutions for doing such things? I need a solution that deviates at most by 0.2-0.5 meters from the real point (not 1.2 like in the current case).
LATER: Is this kind of approach good? By this kind I mean to transform the GPS coordinates into plane coordinates and back to GPS. Is there other way?
LATER2: The approach of converting to a conformal map is probably the one that will be used. Since the rectangle is small, and since there are no roads at the poles, Mercator will probably be used. Opinions?
Thanks,
Iulian
PS: I'm working on small areas - so imagine the bounding rectangle I'm talking about to have the length of each side no more than 5 kilometers. (So a 5x5km rectangle is maximum).
There are two issues with your solution:
the plate carrée transformation is not conformal (i.e. angles are not preserved);
you cannot measure distances along latitude or longitude that way, since those are not great circles (you are off by approximately a factor of cos(lat) for your x).
Within small rectangles you may assume that lon/lat can be linearly mapped to x/y pairs but you have to keep in mind that a "square" in lon/lat maps to a rectangle with aspect ratio of approx cos(lat)/1.
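As a minimal sketch of such a locally linear mapping (spherical-Earth approximation with offsets in meters; constants and names are illustrative, and it is only suitable for areas of a few kilometres):

    import math

    EARTH_RADIUS = 6371000.0  # mean Earth radius in meters

    def move_point(lat, lon, dx_east, dy_north):
        """Shift a lat/lon point (degrees) by dx_east / dy_north meters, flat-Earth approximation."""
        dlat = math.degrees(dy_north / EARTH_RADIUS)
        dlon = math.degrees(dx_east / (EARTH_RADIUS * math.cos(math.radians(lat))))
        return lat + dlat, lon + dlon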
