I am developing an application that logs a GPS trace over time.
After the trace is complete, I need to convert the time based data to distance based data, that is to say, where the original trace had a lon/lat record every second, I need to convert that into having a lon/lat record every 20 meters.
Smoothing the original data seems to be a well-understood problem, and I suppose I need something like a smoothing algorithm, but I'm struggling to see how to convert a time-based data set into a distance-based one.
This is an excellent question, and what makes it so interesting is that the data points should be assumed to be random. That means you cannot expect the trace, from beginning to end, to follow a well-behaved curve (like a sine or cosine wave), so you will have to work in small increments such that the values on your x-axis (so to speak) do not oscillate, i.e. Xn cannot be less than Xn-1. The next consideration is the case of overlapping or nearly overlapping data points. Imagine I'm recording my GPS coordinates and we have stopped to chat or rest, and I wander randomly within a twenty-five-foot circle for the next five minutes. The question then is how to ignore this kind of "data noise".
For simplicity, let's consider linear calculations where there is no approximation between two points; it's a straight line. This will probably be more than sufficient for your purposes. Given the comment above regarding random data points, you will want to traverse your data sequentially from the start point to the end point. Traversal terminates when you pass the last data point or when you have exceeded the overall distance for which you want to produce coordinates (a subset). Let's assume your plot precision is X; this would be your 20 meters. As you traverse, there are three conditions:
1. The distance between the two points is greater than your precision. Save the point at startPoint plus the precision X; this saved point also becomes your new start point (and you re-examine the same recorded point from there).
2. The distance between the two points is equal to your precision. Save the point at startPoint plus the precision X (i.e. the end point); this becomes your new start point.
3. The distance between the two points is less than your precision. Reduce the remaining precision by that distance (precision minus the distance to the end point); the end point becomes your new start point.
Here is pseudo-code that might help get you started. Note: point y minus point x gives the distance between them, and point x plus a distance value gives a new point on the line between point x and point y at that distance.
recordedPoints = received from trace;
newPlotPoints = empty list of coordinates;
plotPrecision = 20;
immedPrecision = plotPrecision;
startPoint = recordedPoints[0];

for (int i = 1; i < recordedPoints.Length; i++)
{
    Delta = recordedPoints[i] - startPoint;
    if (immedPrecision < Delta)
    {
        newPlotPoints.Add(startPoint + immedPrecision);
        startPoint = startPoint + immedPrecision;
        immedPrecision = plotPrecision;
        i--;    // re-examine the same recorded point from the new start point
    }
    else if (immedPrecision == Delta)
    {
        newPlotPoints.Add(startPoint + immedPrecision);
        startPoint = startPoint + immedPrecision;
        immedPrecision = plotPrecision;
    }
    else // immedPrecision > Delta
    {
        // Store the last data point regardless
        if (i == recordedPoints.Length - 1)
        {
            newPlotPoints.Add(startPoint + Delta);
        }
        startPoint = recordedPoints[i];
        immedPrecision = immedPrecision - Delta;
    }
}
Previously I mentioned "data noise". You can wrap the "if"/"else if" chain in another "if" which scrubs out this noise. The easiest way is to ignore a data point if it has not moved a given distance. Keep in mind this magic number must be small enough that sequentially recorded data points which are ignored don't add up to something large and meaningful, so putting a limit on the number of consecutively ignored data points might be a benefit.
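As an illustration only (the names filterNoise, minMoveMeters, and maxIgnoredInARow are invented here, and the distance helper is a crude flat-earth approximation, not part of the original answer), such a filter might look roughly like this in Java:

import java.util.ArrayList;
import java.util.List;

public class GpsNoiseFilter {
    // Drop points that moved less than minMoveMeters from the last kept point,
    // but never drop more than maxIgnoredInARow consecutive points.
    // Each point is a double[]{latitude, longitude} in degrees.
    public static List<double[]> filterNoise(List<double[]> points,
                                             double minMoveMeters, int maxIgnoredInARow) {
        List<double[]> kept = new ArrayList<>();
        if (points.isEmpty()) {
            return kept;
        }
        kept.add(points.get(0));
        int ignoredInARow = 0;
        for (int i = 1; i < points.size(); i++) {
            double[] last = kept.get(kept.size() - 1);
            double[] cur = points.get(i);
            if (approxDistanceMeters(last, cur) < minMoveMeters && ignoredInARow < maxIgnoredInARow) {
                ignoredInARow++;      // looks like "standing around" noise, skip it
            } else {
                kept.add(cur);        // real movement (or skip cap reached), keep it
                ignoredInARow = 0;
            }
        }
        return kept;
    }

    // Crude equirectangular distance, fine over a few meters; a haversine
    // (see the link at the end of this answer) is more accurate.
    static double approxDistanceMeters(double[] a, double[] b) {
        double mPerDegLat = 111_320.0;                                  // rough meters per degree of latitude
        double mPerDegLon = 111_320.0 * Math.cos(Math.toRadians(a[0])); // shrinks with latitude
        double dy = (b[0] - a[0]) * mPerDegLat;
        double dx = (b[1] - a[1]) * mPerDegLon;
        return Math.sqrt(dx * dx + dy * dy);
    }
}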
With all this said, there are many ways to accurately perform this operation. One suggestion for taking this subject to the next level is interpolation. For .NET there is an open-source library at http://www.mathdotnet.com. You can use their Numerics library, which contains interpolation routines, at http://numerics.mathdotnet.com/interpolation/. If you choose such a route, your next major hurdle will be deciding on the appropriate interpolation technique. If you are not a math guru, here is a bit of information to get you started: http://en.wikipedia.org/wiki/Interpolation. Frankly, polynomial interpolation using two adjacent points would be more than sufficient for your approximations, provided you keep in mind that Xn must not be less than Xn-1, otherwise your approximation will be skewed.
The last item to note: these calculations are two-dimensional and do not consider altitude (elevation) or the curvature of the earth. Here is some additional information in that regard: Calculate distance between two latitude-longitude points? (Haversine formula).
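For reference, a minimal haversine sketch (assuming a spherical Earth of radius 6,371 km; the class and method names are illustrative):

public class Geo {
    // Great-circle distance in meters between two lat/lon points (in degrees),
    // assuming a spherical Earth of radius 6,371,000 m (a common approximation).
    public static double haversineMeters(double lat1, double lon1, double lat2, double lon2) {
        final double R = 6_371_000.0;
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * R * Math.asin(Math.sqrt(a));
    }
}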
Nevertheless, hopefully this points you in the correct direction. This is by no means a trivial problem, so keeping the data point range as small as possible while still remaining accurate will be to your benefit.
One other consideration might be to approximate using the actual recorded data points, using the precision only to disregard excess data, so that you are not essentially keeping two lists of coordinates.
Cheers,
Jeff
I'm trying to come up with a way to arrive at a "score" based on an integer number of "points" that is adjustable using a small number (3-5?) of parameters. Preferably it would be simple enough to reasonably enter as a function/calculation in a spreadsheet for tuning the parameters by the "designer" (not a programmer or mathematician). The first point has the most value and eventually additional points have a fixed or nearly fixed value. The transition from the initial slope of point value to final slope would be smooth. See example shapes below.
Point values are always positive integers (0 points = 0 score)
At some point the curve becomes linear (or nearly so), and all additional points have a fixed value
Preferably, the parameters are understandable to a lay person, e.g.: "smoothness of the curve", "value of the first point", "place where the additional value of points becomes fixed", etc.
For parameters, an example of something ideal would be:
Value of first point: 10
Value of point #3: 5
Minimum value of additional points: 0.75
The exact shape of the curve is not too important, as long as the corner can be made smoother or sharper.
This is not for a game but more of a rating system in which multiple components (several of which might use this kind of scale) will be combined.
This seems like a non-traditional kind of question for SO/SE. I've done mostly financial software in my career, and I'm hoping there's some domain wisdom for this kind of thing I can tap into.
Implementation of Prune's Solution:
Google Sheet
Parameters:
Initial value (a)
Second value (b)
Minimum value (z)
Your decay ratio is b/a. It's simple from here: iterate through your values, applying the decay at each step, until you "peg" at the minimum:
x[n] = max( z, a * (b/a)^n )
// Take the larger of the computed "decayed" value,
// and the specified minimum.
The sequence x is your values list.
You can also truncate intermediate results if you want integers up to a certain point. Just apply the floor function to each computed value, but still allow z to override that if it gets too small.
Is that good enough? I know there's a discontinuity in the derivative, which will be noticeable if the minimum and the decay aren't pleasantly aligned. You can adjust this with a relative decay, translating the exponential decay curve's asymptote from y = 0 to y = z:
base = z
diff = a-z
ratio = (b-z) / diff
x[n] = z + diff * ratio^n
In this case you don't need the max function, since the decay term has a natural asymptote of 0, so x[n] approaches z on its own.
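For illustration, a small sketch of both variants (the class and method names are made up; a = 10, b = 5, z = 0.75 are just sample parameters):

public class ScoreCurve {
    // Plain decay with a floor: x[n] = max(z, a * (b/a)^n)
    static double scoreFloored(double a, double b, double z, int n) {
        return Math.max(z, a * Math.pow(b / a, n));
    }

    // Translated decay: x[n] = z + (a - z) * ((b - z) / (a - z))^n  -- no max() needed,
    // because the curve approaches z by itself.
    static double scoreTranslated(double a, double b, double z, int n) {
        double diff = a - z;
        double ratio = (b - z) / diff;
        return z + diff * Math.pow(ratio, n);
    }

    public static void main(String[] args) {
        // Sample parameters: first point worth 10, second worth 5, minimum value 0.75.
        for (int n = 0; n < 8; n++) {
            System.out.printf("point %d: %.3f / %.3f%n",
                    n + 1, scoreFloored(10, 5, 0.75, n), scoreTranslated(10, 5, 0.75, n));
        }
    }
}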
Consider an array of points in 2D, 3D (4D, ...) space (e.g. the nodes of an unstructured mesh). Initially the index of a point in the array is not related to its position in space. In the simple case, assume I already know some nearest-neighbor connectivity graph.
I would like some heuristic which increases the probability that two points which are close to each other in space also have similar indices (i.e. are close in the array).
I understand that an exact solution is very hard (perhaps similar to the travelling salesman problem), but I don't need an exact solution, just something which increases the probability.
My ideas for a solution:
A naive solution would be something like:
1. For each point "i", compute a fitness E_i given by the sum of distances in the array (i.e. index-wise) to its spatial neighbors (i.e. space-wise):
E_i = -Sum_k ( abs( index(i) - index(k) ) )
where "k" ranges over the spatial nearest neighbors of "i".
2. For pairs of points (i, j) which have low fitness (E_i, E_j), try to swap them; if the fitness improves, accept the swap.
However, the detailed implementation and its performance optimization are not so clear to me (a rough sketch of this idea follows below).
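For concreteness, here is a rough, unoptimized sketch of that swap heuristic (all names are made up; it greedily compares only the fitness of the two swapped points, ignoring the effect on their neighbors, and visits all pairs, so it is O(n^2) per pass — it is only meant to make the idea precise):

import java.util.List;

public class IndexReorderSketch {
    // E_i = -sum over spatial neighbors k of |position(i) - position(k)|
    // position[p] = current array slot of point p; neighbors.get(p) = spatial neighbors of p.
    static long fitness(int p, int[] position, List<int[]> neighbors) {
        long e = 0;
        for (int k : neighbors.get(p)) {
            e -= Math.abs(position[p] - position[k]);
        }
        return e;
    }

    // Greedy pairwise swapping: accept a swap of the slots of points i and j only if
    // their combined fitness improves. A crude approximation of the idea above.
    static int[] improveOrdering(int nPoints, List<int[]> neighbors, int passes) {
        int[] position = new int[nPoints];
        for (int p = 0; p < nPoints; p++) {
            position[p] = p; // start from the given ordering
        }
        for (int pass = 0; pass < passes; pass++) {
            for (int i = 0; i < nPoints; i++) {
                for (int j = i + 1; j < nPoints; j++) {
                    long before = fitness(i, position, neighbors) + fitness(j, position, neighbors);
                    swap(position, i, j); // tentative swap of the two slots
                    long after = fitness(i, position, neighbors) + fitness(j, position, neighbors);
                    if (after <= before) {
                        swap(position, i, j); // no improvement: undo
                    }
                }
            }
        }
        return position;
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}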
Another solution, which does not need precomputed nearest neighbors, would be based on some form of locality-sensitive hashing.
I think this could be quite a common problem, and good solutions may already exist; I do not want to reinvent the wheel.
Applications:
Improve cache locality, considering that memory access is often the bottleneck in graph traversal.
It could accelerate interpolation on an unstructured grid, more specifically the search for nodes which are near the sample point (e.g. the centers of radial basis functions).
I'd say space-filling curves (SFC) are the standard solution to map proximity in space to a linear ordering. The most common ones are Hilbert curves and Z-curves (Morton order).
Hilbert curves have the best proximity mapping, but they are somewhat expensive to calculate. Z-ordering still has a good proximity mapping and is very easy to calculate: it is sufficient to interleave the bits of each dimension. Assuming integer values, if you have a 64-bit 3D point (x, y, z), the z-value is $x_0,y_0,z_0,x_1,y_1,z_1, \ldots, x_{63},y_{63},z_{63}$, i.e. a 192-bit value consisting of the first bit of every dimension, followed by the second bit of every dimension, and so on. If your array is ordered according to that z-value, points that are close in space are usually also close in the array.
Here are example functions that interleave (merge) values into a z-value (nBitsPerValue is usually 32 or 64):
public static long[] mergeLong(final int nBitsPerValue, long[] src) {
    final int DIM = src.length;
    int intArrayLen = (src.length * nBitsPerValue + 63) >>> 6;
    long[] trg = new long[intArrayLen];

    long maskSrc = 1L << (nBitsPerValue - 1);
    long maskTrg = 0x8000000000000000L;
    int srcPos = 0;
    int trgPos = 0;
    for (int j = 0; j < nBitsPerValue * DIM; j++) {
        if ((src[srcPos] & maskSrc) != 0) {
            trg[trgPos] |= maskTrg;
        } else {
            trg[trgPos] &= ~maskTrg;
        }
        maskTrg >>>= 1;
        if (maskTrg == 0) {
            maskTrg = 0x8000000000000000L;
            trgPos++;
        }
        if (++srcPos == DIM) {
            srcPos = 0;
            maskSrc >>>= 1;
        }
    }
    return trg;
}
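For illustration, a possible call site (the coordinate values are just samples; the unsigned lexicographic comparison gives the Morton order and requires Java 9+):

long[] zA = mergeLong(64, new long[]{12345L, 67890L, 42L});   // point A (x, y, z as non-negative longs)
long[] zB = mergeLong(64, new long[]{12346L, 67891L, 40L});   // point B
int cmp = java.util.Arrays.compareUnsigned(zA, zB);           // <0, 0 or >0: z-curve ordering of A vs B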
You can also interleave the bits of floating-point values (if encoded with IEEE 754, as they usually are on standard computers), but this results in non-Euclidean distance properties. You may have to transform negative values first; see here, section 2.3.
EDIT
To answer the questions from the comments:
1) I understand how to make space filling curve for regular rectangular grid. However, if I have randomly positioned floating points, several points can map into one box. Would that algorithm work in that case?
There are several ways to use floating point (FP) values. The simplest is to convert them to integer values by multiplying them by a large constant. For example multiply everything by 10^6 to preserve 6 digit precision.
Another way is to use the bit-level representation of the FP value to turn it into an integer. This has the advantage that no precision is lost and you don't have to determine a multiplication constant. The disadvantage is that the Euclidean distance metric does not work anymore.
It works as follows: the trick is that floating-point values do not have infinite precision but are limited to 64 bits, hence they automatically form a grid. The difference from integer values is that floating-point values do not form a square grid but a rectangular grid, where the rectangles get bigger with growing distance from (0,0). The grid size is determined by how much precision is available at a given point. Close to (0,0) the precision (= grid size) is 10^-28; close to (1,1) it is 10^-16, see here. This distorted grid still has the proximity mapping, but distances are not Euclidean anymore.
Here is the code to do the transformation (Java, taken from here; in C++ you can reinterpret the bits of the double as a 64-bit integer):
public static long toSortableLong(double value) {
    long r = Double.doubleToRawLongBits(value);
    return (r >= 0) ? r : r ^ 0x7FFFFFFFFFFFFFFFL;
}

public static double toDouble(long value) {
    return Double.longBitsToDouble(value >= 0 ? value : value ^ 0x7FFFFFFFFFFFFFFFL);
}
These conversions preserve the ordering of the converted values, i.e. for any two FP values the resulting integers have the same ordering with respect to <, >, =. The non-Euclidean behaviour is caused by the exponent, which is encoded in the bit string. As mentioned above, this is also discussed here, section 2.3; however, the code there is slightly less optimized.
2) Is there some algorithm how to do iterative update of such space filling curve if my points moves in space? (i.e. without reordering the whole array each time)
The space-filling curve imposes a specific ordering, so for every set of points there is only one valid ordering. If a point is moved, it has to be reinserted at the new position determined by its z-value.
The good news is that small movement will likely mean that a point may often stay in the same 'area' of your array. So if you really use a fixed array, you only have to shift small parts of it.
If you have a lot of moving objects and the array is too cumbersome, you may want to look into 'moving object indexes' (MX-CIF quadtree, etc.). I can personally recommend my very own PH-Tree. It is a kind of bitwise radix quadtree that uses a z-curve for internal ordering. It is quite efficient for updates (and other operations). However, I usually recommend it only for larger datasets; for small datasets a simple quadtree is usually good enough.
The problem you are trying to solve has meaning iff, given a point p and its nearest neighbor q, it is also true that the nearest neighbor of q is p.
That is not trivial, since the two points could, for example, represent positions in a landscape: one point can be high up a mountain, so going from the bottom up to the mountain costs more than going the other way around (from the mountain down to the bottom). So make sure you check that this is not your case.
Since TilmannZ has already proposed a solution, I would like to comment on the LSH you mentioned. I would not choose it, since your points lie in a really low-dimensional space (not even 100 dimensions), so why use LSH?
I would go for CGAL's algorithms in that case, such as 2D nearest-neighbor search (NNS), or even a simple kd-tree. And if speed is critical but space is not, why not go for a quadtree (octree in 3D)? I have built one; it won't go beyond about 10 dimensions with 8 GB of RAM.
If, however, you feel that your data may belong in a higher-dimensional space in the future, then I would suggest using:
LSH from Andoni, really cool guy.
FLANN, which offers another approach.
kd-GeRaF, which is developed by me.
I have a set of n complex numbers that move through the complex plane from time step 1 to nsampl. I want to plot those numbers and their traces over time (the y-axis shows the imaginary part, the x-axis the real part). The numbers are stored in an n x nsampl matrix, but in each time step the order of the n points is random.
So in each time step I pick a point from the previous time step, find its nearest neighbor in the current time step, and put it at the same position as the previous point. Then I repeat that for all other n-1 points and move on to the next time step. This way every point in the previous step is associated with exactly one point in the new step (a 1:1 relation). My current implementation and an example are given below.
However, my implementation is terribly slow (about 10 s for 10 x 4000 complex numbers). As I want to increase both the set size n and the number of time frames nsampl, this is really important to me. Is there a smarter way to implement this to gain some performance?
Example with n=3 and nsampl=2:
%manually create a test vector X
X=zeros(3,2); % zeros(n,nsampl)
X(:,1)=[1+1i; 2+2i; 3+3i];
X(:,2)=[2.1+2i; 5+5i; 1.1+1.1i]; % <-- this is my vector with complex numbers
%vector sort algorithm
for k=2:nsampl
    Xlast=[real(X(:,k-1)) imag(X(:,k-1))]; % x/y coords from the last time step
    Xcur=[real(X(:,k)) imag(X(:,k))];      % x/y coords from the current time step
    for i=1:size(X,1) % loop over all n points
        % find the nearest neighbor to Xlast(i,:) among the not-yet-associated points Xcur(i:end,:)
        idx = knnsearch(Xcur(i:end,:),Xlast(i,:));
        idx = idx + i - 1;
        % swap the nearest neighbor into the same row it occupied in the last time step
        Xcur([i idx],:) = Xcur([idx i],:);
    end
    X(:,k) = Xcur(:,1)+1i*Xcur(:,2); % convert x/y coordinates back to complex numbers
end
Result:
X(:,2)=[1.1+1.1i; 2.1+2i; 5+5i];
Can anyone help me to speed up this code?
The problem you are trying to solve is a combinatorial optimization (assignment) problem, which is solved by the Hungarian algorithm (a.k.a. Munkres). Luckily there is an implementation for MATLAB available for download. Download the file and put it either on your search path or next to your function. The code to use it is:
for k=2:size(X,2)
    % build a cost matrix; here the cost is the distance between two points
    pairwise_distances=abs(bsxfun(@minus,X(:,k-1),X(:,k).'));
    % let the algorithm find the optimal pairing
    permutation=munkres(pairwise_distances);
    % apply it
    X(:,k)=X(permutation,k);
end
This is an update to a previous question that I had about locating peaks and troughs. The previous question was this:
peaks and troughs in MATLAB (but with corresponding definition of a peak and trough)
This time around, I implemented the suggested answer, but I think there is still something wrong with the final algorithm. Can you tell me what I did wrong in my code? Thanks.
function [vectpeak, vecttrough]=peaktroughmodified(x,cutoff)
% This function is a modified version of the algorithm used to identify
% peaks and troughs in a series of prices. This will be used to identify
% the head and shoulders algorithm. The function gives you two vectors:
% PEAKS - an indicator vector that identifies the peaks in the function,
% and TROUGHS - an indicator vector that identifies the troughs of the
% function. The input is the vector of exchange rate series, and the cutoff
% used for refining possible peaks and troughs.
% Finding all possible peaks and troughs of our vector.
[posspeak,possploc]=findpeaks(x);
[posstrough,posstloc]=findpeaks(-x);
posspeak=posspeak';
posstrough=posstrough';
% Initialize vector of peaks and troughs.
numobs=length(x);
prelimpeaks=zeros(numobs,1);
prelimtroughs=zeros(numobs,1);
numpeaks=numel(possploc);
numtroughs=numel(posstloc);
% Indicator for possible peaks and troughs.
for i=1:numobs
    for j=1:numpeaks
        if i==possploc(j)
            prelimpeaks(i)=1;
        end
    end
end
for i=1:numobs
    for j=1:numtroughs
        if i==posstloc(j)
            prelimtroughs(i)=1;
        end
    end
end
% Vector that gives location.
location=1:1:numobs;
location=location';
% From the list of possible peaks and troughs, find the peaks and troughs
% that fit Chang and Osler [1999] definition.
% "A peak is a local minimum at least x percent higher than the preceding
% trough, and a trough is a local minimum at least x percent lower than the
% preceding peak." [Chang and Osler, p.640]
% cutoffs
peakcutoff=1.0+cutoff; % cutoff for peaks
troughcutoff=1.0-cutoff; % cutoff for troughs
% First peak and first trough are initialized as previous peaks/troughs.
prevpeakloc=possploc(1);
prevtroughloc=posstloc(1);
% Initialize vectors of final peaks and troughs.
vectpeak=zeros(numobs,1);
vecttrough=zeros(numobs,1);
% We first check whether we start looking for peaks and troughs.
for i=1:numobs
    if prelimpeaks(i)==1
        if i>prevtroughloc
            ratio=x(i)/x(prevtroughloc);
            if ratio>peakcutoff
                vectpeak(i)=1;
                prevpeakloc=location(i);
            else
                vectpeak(i)=0;
            end
        end
    elseif prelimtroughs(i)==1
        if i>prevpeakloc
            ratio=x(i)/x(prevpeakloc);
            if ratio<troughcutoff
                vecttrough(i)=1;
                prevtroughloc=location(i);
            else
                vecttrough(i)=0;
            end
        end
    else
        vectpeak(i)=0;
        vecttrough(i)=0;
    end
end
end
I just ran it, and it seems to work if you make this change:
peakcutoff= 1/cutoff; % cutoff for peaks
troughcutoff= cutoff; % cutoff for troughs
I tested it with the following code, with a cutoff of 0.1 (peaks must be 10 times larger than troughs), and it looks reasonable
x = randn(1,100).^2;
[vectpeak,vecttrough] = peaktroughmodified(x,0.1);
peaks = find(vectpeak);
troughs = find(vecttrough);
plot(1:100,x,peaks,x(peaks),'o',troughs,x(troughs),'o')
I strongly urge you to read up on vectorization in MATLAB. There are many wasted lines in your program; they make it difficult to read and will also make it very slow with big datasets. For instance, prelimpeaks and prelimtroughs can be completely defined without loops, in a single line each:
prelimpeaks(possploc) = 1;
prelimtroughs(posstloc) = 1;
I think there are better techniques for finding peaks and troughs than the percentage-threshold technique given above. Fit a least-squares parabola to the data set; a technique for doing this is in the 1946 Frank Peters paper, "Parabolic Correlation, a New Descriptive Statistic." The fitted parabola will have an index of curvature, as Peters defines it. Find peaks and troughs by testing which points, when eliminated, minimize the absolute value of the index of curvature of the parabola. Once these points are discovered, determine which are peaks and which are troughs by considering how the index of curvature changes when the point is excluded, which will depend on whether the original parabola had a positive or negative index of curvature. If you become concerned about contiguous points whose elimination achieves the minimum absolute-value curvature, constrain the identified points to be at least some minimum distance from each other. Another constraint would have to be the number of points identified; without it, this algorithm would remove all but two points, leaving a straight line without curvature.
Sometimes there are steep changes between contiguous points, and both should be included among the extreme points. Perhaps a percentage-threshold test for contiguous points that overrides the minimum-distance constraint would be useful.
Another solution might be to compute the fast Fourier transform of the series and remove points that minimize the lower spectra. FFT functions are more readily available than code that finds a least-squares-fit parabola. There is a matrix-manipulation technique for determining the least-squares-fit parabola that is easier to manage than Peters' approach. I saw it documented on the web someplace, but lost the link. Advice from anybody able to arrive at a least-squares-fit parabola using matrix-vector notation would be appreciated.
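For reference, the textbook matrix-vector formulation (the normal equations) for a least-squares parabola $y = c_0 + c_1 x + c_2 x^2$ over points $(x_i, y_i)$, $i = 1, \dots, n$, is:

$$
\begin{pmatrix}
n & \sum_i x_i & \sum_i x_i^2 \\
\sum_i x_i & \sum_i x_i^2 & \sum_i x_i^3 \\
\sum_i x_i^2 & \sum_i x_i^3 & \sum_i x_i^4
\end{pmatrix}
\begin{pmatrix} c_0 \\ c_1 \\ c_2 \end{pmatrix}
=
\begin{pmatrix} \sum_i y_i \\ \sum_i x_i y_i \\ \sum_i x_i^2 y_i \end{pmatrix}
$$

Equivalently, with the design matrix $A$ whose rows are $(1, x_i, x_i^2)$, solve $A^\top A\,c = A^\top y$.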
Say you have a 2D area and you want to generate random points within it, by setting
x = random() * width
y = random() * height
do the points clump around the centre of the area? I remember reading something saying they would, but I can't quite figure out why, and how to prevent it.
Yes. The fewer points you have, the more they will appear to form clusters.
To avoid this, you can use "stratified sampling". It basically means you divide your surface evenly into smaller areas and place your points in them.
For your example, you would divide the square into n*n subsquares. Each point would be placed randomly inside its subsquare. You can even adjust the randomness factor to make the pattern more or less random/regular:
// I assume random() returns a number in the range [0, 1).
float randomnessFactor = 0.5;
int n = 100;
for (int ySub = 0; ySub < n; ++ySub) {
    for (int xSub = 0; xSub < n; ++xSub) {
        float regularity = 0.5 * (1 - randomnessFactor);
        // jitter the point inside its subsquare, then scale down to the unit square
        x = (xSub + regularity + randomnessFactor * random()) / n;
        y = (ySub + regularity + randomnessFactor * random()) / n;
        plot(x, y);
    }
}
The reason this works is that you don't actually want randomness. (Clumps are random.) You want the points evenly spread, but without the regular pattern. Placing the points on a grid and offsetting them a bit hides the regularity.
Truly random points will create clusters (or clumps) - it's the effect that can cause confusion when plotting real world data (like cancer cases) and lead to people thinking that there are "hot spots" which must be caused by something.
However, you also need to be careful when generating random numbers that you don't create a new generator every time you want a new number - doing so reuses the same seed value, which can result in the same values being produced over and over (points stacked in the same places).
It depends on the distribution of the random number generator. Assuming a perfectly even distribution, the points are likely to be spread in a reasonably uniform way.
Also, asking if they clump around the middle is pre-supposing that you don't have the ability to test this!
From my experience, randomly generated points do not clump in the center of the area, since every pixel of your screen has the same probability of being selected.
While numbers generated with random() are not truly random, they will be sufficient for putting objects randomly on your screen.
If the random number generator's random() function yields a Gaussian distribution, then yes.
You get a clump at the origin if you use polar coordinates instead of Cartesian:
r = rand() * Radius;
phi = rand() * 2 * Pi;
The reason is that, statistically, the disk r = [0,1] will contain as many points as the ring r = [1,2], even though the ring's area is three times larger.
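If uniform coverage of the disk is what's wanted, a common fix is to take the square root of the radius sample; a small sketch (class name is illustrative):

import java.util.Random;

public class DiskSample {
    public static void main(String[] args) {
        // Uniform sampling inside a disk of radius R: taking sqrt() of the uniform
        // radius sample compensates for area growing with r^2, which removes the
        // clump at the origin described above.
        Random rng = new Random();
        double R = 1.0;
        double r = R * Math.sqrt(rng.nextDouble());
        double phi = 2 * Math.PI * rng.nextDouble();
        double x = r * Math.cos(phi);
        double y = r * Math.sin(phi);
        System.out.printf("%.4f %.4f%n", x, y);
    }
}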
Pseudorandom points won't necessarily clump "around the center" of an area, but they will tend to cluster in various random points in an area; in fact these clumps often occur more frequently than people think. A more even distribution of space is often achieved by using so-called quasirandom or low-discrepancy sequences, such as the Sobol sequence, whose Wikipedia article shows a graphic illustrating the difference between Sobol and pseudorandom sequences.
They will not clump, but will form various interesting patterns, in 2d or 3d, depending on the generator you use.