Approximating time nearest a point from a set - algorithm

I have a set of [x, y, time] values and a reference point [x, y]. The values represent 2D movement of an object over time. I would like to approximate the time at which the trajectory of the object was closest to the reference point. In this diagram, the blue crosses are the set of points and the red cross is the reference point.
My initial approach would be to derive the linear trajectory of the object (green line below), and then find the point on the line that forms a right angle with the reference point (PT). Interpolating by distance between that point's two nearest neighbours along the trajectory line (T2 and T3) would then provide the time value.
Is there a more efficient algorithm, or set of algorithms, for calculating the time value of PT? A linear approximation for trajectory (as opposed to a spline) is acceptable, as is some tolerance in the accuracy of the calculated time. A reference implementation (in pseudo-code/C/Java/whatever) for study would be most welcome. Many thanks.
EDIT: maybe it's easier to re-phrase in terms of GPS. Given a polyline of GPS points and the time of their reading, at what time precisely did the path go closest to this other point?
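For reference, a minimal Python sketch of the projection-and-interpolation approach described above (names are illustrative; clamping the foot of the perpendicular to each segment is my assumption, so points beyond a segment's ends fall back to the nearest endpoint):

def closest_time(points, ref):
    # points: list of (x, y, t) in time order; ref: (x, y)
    rx, ry = ref
    best_d2, best_t = float("inf"), None
    for (x1, y1, t1), (x2, y2, t2) in zip(points, points[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg2 = dx * dx + dy * dy
        # parameter of the perpendicular foot (PT), clamped to [0, 1]
        u = 0.0 if seg2 == 0.0 else max(0.0, min(1.0, ((rx - x1) * dx + (ry - y1) * dy) / seg2))
        px, py = x1 + u * dx, y1 + u * dy   # closest point on this segment
        d2 = (rx - px) ** 2 + (ry - py) ** 2
        if d2 < best_d2:
            best_d2, best_t = d2, t1 + u * (t2 - t1)  # interpolate between T2 and T3
    return best_t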

Related

Data structure for piecewise circular trajectory in plane

I'm trying to design a data structure to hold/express a piecewise circular trajectory in the Euclidean plane. The trajectory is constrained to be continuous and to have finite curvature everywhere, and therefore the circular arcs meet tangentially.
Storing all the circle centers, radii, and touching points would allow for inspecting the geometry anywhere in O(1) but would require explicit enforcement of the continuity and curvature constraints due to data redundancy. In my view, this would make the code messy.
Storing only the circle touching points (which are waypoints along the curve) along with the curve's initial direction would be sufficient in principle, and avoid data redundancy, but then it would be necessary to do an O(n) calculation to inspect the geometry of arc n, since that arc depends on all the arcs preceding it in the trajectory.
I would like to avoid data redundancy, but I also don't want to make the cost of geometric inspection prohibitive.
Does anyone have any high-level idea/advice to share?
For the most efficient traversal of the trajectory, if I am right, you need:
the ending curvilinear abscissas of every arc (cumulative),
the radii,
the starting angles,
the coordinates of the centers,
so that for a given s you find the index of the arc, then the azimuth and the coordinates of the point. (Either incrementally for a sequence of points, or by dichotomy for a single point.) That takes five parameters per arc.
Only the cumulative abscissas are global, but you can't do without them for single-point accesses. You can drop the radii and starting angles and retrieve them for any arc from the difference of curvilinear abscissas and the limit angles (see below). This reduces to three parameters.
On the other hand, knowing just the coordinates of the centers and those of the starting and ending points is enough to recover the whole geometry, and this takes two parameters per arc.
The meeting point of two arcs is found on the line through the centers, and if you know one radius, the other follows. And the limit angle is given by the direction of the line. So for an incremental traversal, this non-redundant description can do.
For convenient computation, knowing s and the arc index i, consider the vectors from the center to the centers of the adjoining arcs. Rotate them so that the first becomes horizontal. The components of the other will give you the amplitude angle. The fraction (s - S(i-1)) / (S(i) - S(i-1)) of the amplitude gives you the azimuth of the point, to which you apply the counter-rotation.
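If it helps, a minimal Python sketch of the five-parameter traversal; the per-arc orientation sign is an assumption I've added, needed to turn arc length into a signed angle:

import bisect, math

def point_at(s, S, r, a0, cx, cy, sign):
    # S: cumulative ending abscissas; r: radii; a0: starting angles;
    # (cx, cy): arc centers; sign: +1/-1 orientation per arc (assumed)
    i = min(bisect.bisect_left(S, s), len(S) - 1)  # arc index by dichotomy
    s0 = S[i - 1] if i > 0 else 0.0
    a = a0[i] + sign[i] * (s - s0) / r[i]          # azimuth at abscissa s
    return cx[i] + r[i] * math.cos(a), cy[i] + r[i] * math.sin(a)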
I'd store items with the data required to get info for any point of that element. For example, an arc needs x, y, initial direction, radius, and length (or end point, or angle difference, or whatever you find easiest).
Because you need continuity (same x, y, same bearing, perhaps same curvature) between two ending points, a node with these properties is needed. Notice these properties are common to arcs and straights (a straight being a special arc, identified e.g. by radius = 0). So you can treat a node the same as an item.
The trajectory should be calculated before any request. So you have all items-data in advance.
The container depends on how you request info.
If the trajectory can somehow be represented in a grid, then you had better use a quad-tree.
I guess you must find the item from an x, y or accumulated-length input. You will have to iterate through the container to find the element closest to the input data. Sorted data may help.
My choice is a simple vector of the consecutive elements, which happens to be sorted on accumulated trajectory length.
Finding by x, y on an x-sorted container (or a tree) is not so simple, because a given x, y may have perpendiculars to several items, consecutive or not, near or not, and you need to select the nearest one.
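As a sketch of that choice (assuming each element stores its accumulated end length as the sort key), lookup by length is then a binary search:

import bisect

def item_at(items, s):
    # items: list of (end_length, item_data), sorted by accumulated length
    i = bisect.bisect_left(items, (s,))
    return items[min(i, len(items) - 1)][1]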

Find the diameter of a set of n points in d-dimensional space

I am interested in finding the diameter of two point sets, in 128 dimensions. The first has 10,000 points and the second 1,000,000. For that reason I would like to do something better than the naive approach, which takes O(n²). The algorithm should be able to handle any number of points and dimensions, but I am currently most interested in these two particular data sets.
I am much more interested in speed than accuracy; based on this, I would find the (approximate) bounding box of the point set by computing the min and max value per coordinate, in O(n·d) time. Then, if I find the diameter of this box, the problem is solved.
In the 3D case, I could find the diameter of one side, since I know its two edges, and then apply the Pythagorean theorem to the edge perpendicular to this side. I am not sure about this, however, and I can't see how to generalize it to d dimensions.
An interesting answer can be found here, but it seems to be specific to 3 dimensions and I want a method for d dimensions.
Interesting paper: On computing the diameter of a point set in high dimensional Euclidean space. Link. However, implementing the algorithm seems too much for me at this stage.
The classic 2-approximation algorithm for this problem, with running time O(nd), is to choose an arbitrary point and then return the maximum distance to another point. The diameter is no smaller than this value and no larger than twice this value.
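In Python, a minimal sketch (math.dist works in any dimension):

import math

def diameter_2approx(points):
    # max distance from an arbitrary point; the true diameter is in [d, 2d]
    p = points[0]
    return max(math.dist(p, q) for q in points[1:])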
I would like to add a comment, but not enough reputation for that...
I just want to warn other readers that the "bounding box" solution is very inaccurate. Take for example the Euclidean ball of radius one. This set has diameter two, but its bounding box is [-1, 1]^d, which has diameter twice the square root of d. For d = 128, this is already a very bad approximation.
For a crude estimate, I would stay with David Eisenstat's answer.
There is a precision-based algorithm which performs very well in any dimension, based on computing the diameter of an axis-aligned bounding box.
The idea is that it's possible to find lower and upper bounds for the axis bounding-box length function, since its partial derivatives are limited and depend on the angle between the axes.
The limit of the local maxima derivatives between two axes in 2D space can be computed as:
sin(a/2)*(1 + tan(a/2))
That means that, for example, for 90° between axes the bound is 1.42 (sqrt(2)).
This reduces to a/2 as a → 0, so the upper bound is proportional to the angle.
For the multidimensional case the formula varies slightly, but it is still easy to compute.
So, the search for local maxima converges in logarithmic time.
The good news is that we can run the search for such local maxima in parallel.
Also, we can filter out regions of the search based on the best result achieved so far, as well as the points themselves which are below the lower limit of the search in the worst region.
The worst case for the algorithm is when all of the points are placed on the surface of a sphere.
This can be further improved: when we detect a local search which operates on just a few points, we switch to brute force for that particular axis. It works fast, because we only need the points which are subject to that particular local search, and those can be determined as the points actually bounded by two opposite spherical cones of a particular angle sharing the same axis.
It's hard to figure out the big-O notation, because it depends on the desired precision and the distribution of points (bad when most of the points are on a sphere's surface).
The algorithm I use is as follows:
1. Set the initial angle a = pi/2.
2. Take one axis for each dimension. The angle and the axes form the initial 'bucket'.
3. For each axis, compute the span on that axis by projecting all the points onto the axis and finding the min and max of the coordinates on the axis (see the sketch after this list).
4. Compute the upper and lower bounds of the diameter of interest. This is based on the formula sin(a/2)*(1 + tan(a/2)), multiplied by an asymmetry coefficient computed from the lengths of the current axis projections.
5. For the next step, kill all of the points which fall under the lower bound in every dimension at the same time.
6. For each axis, if the number of points above the upper bound is less than some reasonable amount (experimentally determined), then run a brute force (O(N^2)) on the set of points in question, adjust the lower bound, and kill the axis for the next step.
7. For the next step, kill all of the axes which have all of their points under the lower bound.
8. If the precision is satisfactory, i.e. (upper bound - lower bound) < epsilon, then return the upper bound as the result.
9. For each of the surviving axes, there is a virtual cone on that axis (actually, two opposite cones) which covers some area on a virtual sphere enclosing a face of the cube. If I'm not mistaken, its angle would be a * sqrt(2). Set the new angle to a / sqrt(2). Create a whole bucket of new axes (2 * the number of dimensions), so that the new cone areas cover the initial cone area. This is the hard part for me, as I don't have enough imagination for the n > 3-dimensional case.
10. Continue from step (3).
You can parallelize the procedure, synchronizing the limits computed so far for the points from steps (5) through (7).
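To make step (3) concrete, here is a sketch of just the projection/span computation in Python with NumPy; generating and refining the axes (steps 2 and 9) is not shown:

import numpy as np

def axis_spans(points, axes):
    # points: (n, d) array; axes: (k, d) array of unit direction vectors.
    # Each span (max - min of the projections) is a lower bound on the diameter.
    proj = points @ axes.T
    return proj.max(axis=0) - proj.min(axis=0)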
I'm going to summarize the algorithm proposed by Timothy Shields.
1. Pick a random point x.
2. Pick the point y furthest from x.
3. If not done, let x = y and go to step 2.
The more times you repeat, the more accurate the result will be... ??
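A minimal Python sketch of those steps (a fixed round count stands in for the "if not done" test, which is my assumption):

import math, random

def iterated_furthest_point(points, rounds=5):
    x = random.choice(points)
    best = 0.0
    for _ in range(rounds):
        y = max(points, key=lambda q: math.dist(x, q))
        best = max(best, math.dist(x, y))
        x = y
    return best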
EDIT: actually this algorithm is not very good. Think of a 2D quadrilateral ABCD whose diagonals AC and BD differ slightly in length. There are two local maxima, AC and BD, separated by a sizable valley. This algorithm will get stuck at one or the other, 50/50. If AC is slightly longer than BD, you'll be getting the wrong answer 50% of the time no matter how many times you iterate. Other near-regular polygons have the same issue, and in higher dimensions it is even worse.

Intersection of a 3D Grid's Vertices

Imagine an enormous 3D grid (procedurally defined, and potentially infinite; at the very least, 10^6 coordinates per side). At each grid coordinate, there's a primitive (e.g., a sphere, a box, or some other simple, easily mathematically defined function).
I need an algorithm to intersect a ray, with origin outside the grid and direction entering it, against the grid's elements. I.e., the ray might travel halfway through this huge grid and then hit a primitive. Because of the scale of the grid, an iterative method [EDIT: (such as ray marching)] is unacceptably slow. What I need is some closed-form [EDIT: constant-time] solution for finding the primitive hit.
One possible approach I've thought of is to determine the amount the ray would converge per time step toward the primitives on each of the eight coordinates surrounding a grid cell, in some modular arithmetic space in each of x, y, and z, then divide by the ray's direction and take the smallest distance. I have no evidence other than intuition to think this might work, and Google is unhelpful: searching for "intersecting a grid" turns up intersecting the grid's faces.
Notes:
I really only care about the surface normal of the primitive (I could easily find that given a distance to intersection, but I don't care about the distance per se).
The type of primitive intersected isn't important at this point. Ideally, it would be a box. Second choice, sphere. However, I'm assuming that whatever algorithm is used might be generalizable to other primitives, and if worst comes to worst, it doesn't really matter for this application anyway.
Here's another idea:
The ray can only hit a primitive when all of the x, y and z coordinates are close to integer values.
If we consider the parametric equation for the ray, where a point on the line is given by
p=p0 + t * v
where p0 is the starting point and v is the ray's direction vector, we can plot the distance from the ray to an integer value on each axis as a function of t. e.g.:
dx = abs( ( p0.x + t * v.x + 0.5 ) % 1 - 0.5 )
This will yield three sawtooth plots whose periods depend on the components of the direction vector (e.g. if the direction vector is (1, 0, 0), the x-plot will vary linearly between 0 and 0.5 with a period of 1, while the other two plots will remain constant at whatever p0's y and z components give).
You need to find the first value of t for which all three plots are below some threshold level, determined by the size of your primitives. You can thus vastly reduce the number of t values to be checked by considering the plot with the longest (non-infinite) period first, before checking the higher-frequency plots.
I can't shake the feeling that it may be possible to compute the correct value of t based on the periods of the three plots, but I can't come up with anything that isn't scuppered by the starting position not being the origin, and the threshold value not being zero. :-/
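For illustration, a literal (unoptimized) Python sketch of that test; the threshold, scan limit and step size are assumptions, and the speed-up of stepping by the longest period is deliberately left out:

def first_hit_t(p0, v, threshold=0.1, t_max=1000.0, dt=0.01):
    # scan t and test all three sawtooth distances against the threshold
    t = 0.0
    while t <= t_max:
        if all(abs((p + t * w + 0.5) % 1.0 - 0.5) <= threshold
               for p, w in zip(p0, v)):
            return t
        t += dt
    return None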
Basically, what you'll need to do is express the line in the form of a function. From there, you just have to calculate mathematically whether the ray intersects each object, and then, if it does, make sure you get the one it collides with closest to the source.
This isn't fast, so you will have to do a lot of optimization here. The most obvious thing is to use bounding boxes instead of the actual shapes. From there, you can do things like use octrees or BSP trees (binary space partitioning).
Well, anyway, there might be something I am overlooking that becomes possible through the extra constraints on your system, but that is how I had to build a ray tracer for a course.
You state in the question that an iterative solution is unacceptably slow - I assume you mean iterative in the sense of testing every object in the grid against the line.
Iterate instead over the grid cubes that the line intersects, and for each cube test the 8 objects that the cube intersects. Look to Bresenham's line drawing algorithm for how to find which cubes the line intersects.
Note that Bresenham's will not return absolutely every cube that the ray intersects, but for finding which primitives to test I'm fairly sure that it'll be good enough.
It also has the nice properties:
Extremely simple - this will be handy if you're running it on the GPU
Returns results iteratively along the ray, so you can stop as soon as you find a hit.
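A sketch of such a grid walk in Python; strictly this is the 3D DDA of Amanatides & Woo rather than Bresenham proper, but it serves the same purpose and does visit every cube the ray passes through:

import math

def traverse(p0, v, n_steps):
    # yields the grid cells pierced by the ray p0 + t*v, in order
    cell = [math.floor(c) for c in p0]
    step = [1 if w > 0 else -1 for w in v]
    # t at which the ray crosses the next cell boundary on each axis
    t_max = [((cell[i] + (step[i] > 0)) - p0[i]) / v[i] if v[i] else math.inf
             for i in range(3)]
    t_delta = [abs(1.0 / w) if w else math.inf for w in v]
    for _ in range(n_steps):
        yield tuple(cell)
        axis = t_max.index(min(t_max))   # advance across the nearest boundary
        cell[axis] += step[axis]
        t_max[axis] += t_delta[axis]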
Try this approach:
Determine the function of the ray;
Say the grid is divided into planes along the z axis. The ray will intersect each 'z plane' (the plane in which the grid nodes at the same height lie), and you can easily compute the coordinates (x, y, z) of the intersection points from the ray function;
Sweep the z planes: you can easily determine which intersection points lie in a cube or a sphere;
But the ray may also intersect cubes/spheres between the z planes, so you need to repeat steps 1-3 along the x and y axes as well. This ensures no intersection is missed.
Throw out the duplicate cubes/spheres found by the x, y, and z direction searches.
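A sketch of step 2 for the z sweep (unit grid spacing and v[2] != 0 are assumptions; the x and y sweeps are the same with the axes swapped):

def z_plane_hits(p0, v, z_min, z_max):
    # intersection of the ray p0 + t*v with each integer z plane
    for z in range(z_min, z_max + 1):
        t = (z - p0[2]) / v[2]
        if t >= 0.0:
            yield (p0[0] + t * v[0], p0[1] + t * v[1], float(z)), t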

What's the best method to compare an original trajectory with two compressed trajectories

Suppose you have a GPS trajectory, i.e. a series of spatio-temporal coordinates, where each coordinate is an (x, y, t) triple: x is longitude, y is latitude, and t is the time stamp.
Suppose each trajectory is identified by 1000 (x, y) points; a compressed trajectory is one with fewer points than the original, for instance 300 points. A compression algorithm (Douglas-Peucker, Bellman, etc.) decides which points will be in the compressed trajectory and which will be discarded.
Each algorithm makes its own choice. Better algorithms choose the points not only by spatial characteristics (x, y) but by spatio-temporal characteristics (x, y, t).
Now I need a way to compare two compressed trajectories against the original to understand which compression algorithm best reduces a spatio-temporal trajectory (the temporal component is really important).
I've thought of the DTW algorithm to check trajectory similarity, but it probably doesn't take the temporal component into account. What algorithm can I use for this comparison?
What the best compression algorithm is depends to a large extent on what you are trying to achieve with it, and on other external variables. Typically, we're going to identify and remove spikes, and then remove redundant data. For example:
Known minimum and maximum velocity, acceleration, and ability to turn will let you remove spikes. Looking at the distance between a pair of points divided by the elapsed time, where
velocity = sqrt((xb - xa)^2 + (yb - ya)^2) / (tb - ta)
we can eliminate points where that distance couldn't have been travelled in the elapsed time, given the speed constraint. We can do the same with acceleration constraints, and with change-of-direction constraints for a given velocity. These constraints differ depending on whether the GPS receiver is static, hand-held, in a car, in an aeroplane, etc.
We can remove redundant points using a moving window over three points: an interpolated (x, y, t) for the middle point is compared with the observed point, and the observed point is removed if it lies within a specified distance-plus-time tolerance of the interpolated point. We can also curve-fit the data and consider the distance to the curve rather than using a moving 3-point window.
The compression may also have differing goals based on the constraints given, e.g. to simply reduce the data size by removing redundant observations and spikes, or to smooth the data as well.
For the former, after checking for spikes based on the defined constraints, we simply check the 3D distance of each point to the polyline connecting the compressed points. This is achieved by finding the pair of points before and after the point that has been removed, interpolating a position on the line connecting those points based on the observed time, and comparing the interpolated position with the observed position. The number of points removed will increase as we allow this distance tolerance to increase.
For the latter we also have to consider how well the smoothed result models the data, the weights imposed by the constraints, and the design shape / curve parameters.
Hope this makes some sense.
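As an illustration, a minimal Python sketch of the velocity-constraint spike test described above; the maximum speed is an assumed parameter to be tuned per platform (static, hand-held, car, aeroplane):

import math

def remove_spikes(track, v_max):
    # track: list of (x, y, t) in metres/seconds, time-ordered
    kept = [track[0]]
    for x, y, t in track[1:]:
        xa, ya, ta = kept[-1]
        # drop any point that would require exceeding v_max to reach
        if t > ta and math.hypot(x - xa, y - ya) / (t - ta) <= v_max:
            kept.append((x, y, t))
    return kept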
Maybe you could use the mean square distance between the trajectories over time.
Probably, simply looking at the distance at times 1s, 2s, ... will be enough, but you can also do it more precisely by integrating (x1(t)-x2(t))^2 + (y1(t)-y2(t))^2 between time stamps. Note that between two time stamps both trajectories are straight lines.
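A sampled sketch of that idea in Python, interpolating each trajectory linearly between its time stamps (the sampling times are an assumed input):

def pos_at(track, t):
    # linear interpolation on a time-ordered list of (x, y, t) fixes
    for (x1, y1, t1), (x2, y2, t2) in zip(track, track[1:]):
        if t1 <= t <= t2:
            u = (t - t1) / (t2 - t1) if t2 > t1 else 0.0
            return x1 + u * (x2 - x1), y1 + u * (y2 - y1)
    raise ValueError("t outside trajectory")

def mean_square_distance(a, b, times):
    total = 0.0
    for t in times:
        (xa, ya), (xb, yb) = pos_at(a, t), pos_at(b, t)
        total += (xa - xb) ** 2 + (ya - yb) ** 2
    return total / len(times)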
I've found what I need to compute the spatio-temporal error.
As written in the paper "Compression and Mining of GPS Trace Data: New Techniques and Applications" by Lawson, Ravi & Hwang:
Synchronized Euclidean distance (SED) measures the distance between two points at identical time stamps. In Figure 1, five time steps (t1 through t5) are shown. The simplified line (which can be thought of as the compressed representation of the trace) is comprised of only two points (P't1 and P't5); thereby, it does not include points P't2, P't3 and P't4. To quantify the error introduced by these missing points, distance is measured at the identical time steps. Since three points were removed between P't1 and P't5, the line is divided into four equal sized line segments using the three points P't2, P't3 and P't4 for the purposes of measuring the error. The total error is measured as the sum of the distance between all points at the synchronized time instants, as shown below. (In the following expression, n represents the total number of points considered.)
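Based on that description, a minimal Python sketch of SED: at each original time stamp, compare the observed point with the position linearly interpolated on the compressed line (both trajectories are assumed to cover the same time range):

import math

def sed_error(original, compressed):
    # original, compressed: time-ordered lists of (x, y, t)
    total = 0.0
    for x, y, t in original:
        for (x1, y1, t1), (x2, y2, t2) in zip(compressed, compressed[1:]):
            if t1 <= t <= t2:
                u = (t - t1) / (t2 - t1) if t2 > t1 else 0.0
                px, py = x1 + u * (x2 - x1), y1 + u * (y2 - y1)
                total += math.hypot(x - px, y - py)   # distance at time t
                break
    return total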

GPS intermediate points

I'm developing an algorithm that involves inserting intermediate points between GPS coordinates. To be precise, let's assume that I have these 2 coordinates:
25.60593,45.65151
25.60662,45.65164000000001
I want to add extra points between these coordinates (on the same "line"). Is it wise to consider a linear variation (in this case I would treat the intermediate points as points on the segment defined by the 2 given coordinates)?
This involves actual GPS data obtained from a computed route, which means the distance between consecutive shape points is less than 1000 meters (on a highway the distance between consecutive points can be quite big, since the road is a straight line).
Is linear variation a good approximation, or are there any other methods, given that the distance between points is less than 1 km?
Thanks,
Iulian
Given that the distance between the points is less than 1 km, AND *ASSUMING* that you're not playing very close to the North or South Pole, linear interpolation is probably good enough.
After linear interpolation is shown in live testing on real data NOT to be good enough, you can try spherical trig interpolation. Math involved.
If both of those fall short, then you will need very specialized expert assistance.
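For what it's worth, a minimal linear-interpolation sketch for such short segments (naive lat/lon interpolation, i.e. the "good enough" case above; n is the number of points to insert):

def intermediate_points(a, b, n):
    # a, b: (lat, lon) pairs; returns n evenly spaced points between them
    return [(a[0] + (b[0] - a[0]) * i / (n + 1),
             a[1] + (b[1] - a[1]) * i / (n + 1))
            for i in range(1, n + 1)]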
If you have two points a and b, and the distance from a to b is less than 1 km, then the maximum distance between the midpoint (coordinates obtained by averaging) and the "middle point" (coordinates obtained by going halfway along the shortest path between a and b) is tiny: about 1 micrometer if a is on the equator, around 5 cm at the Arctic Circle, and 25 cm at 85° north. If the distance between a and b were up to 10 km, the figures above should be multiplied by 100. These figures were calculated using the GPS ellipsoid, WGS84.
PS You have to be a bit careful averaging longitudes. The average of 179 and -179 is 180, not 0.
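One safe way to average longitudes (a sketch): go through unit vectors, so that 179 and -179 average to 180 rather than 0:

import math

def mid_longitude(lon_a, lon_b):
    # average on the circle via unit vectors to handle the 180° wrap
    ax, ay = math.cos(math.radians(lon_a)), math.sin(math.radians(lon_a))
    bx, by = math.cos(math.radians(lon_b)), math.sin(math.radians(lon_b))
    return math.degrees(math.atan2(ay + by, ax + bx))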
