How can I combine rational B-spline surfaces into one (or fewer) surfaces? How do parameters such as tolerance, u/v degree, and u/v span count influence the final result, if at all?
In general, there is no way to create a single rational B-spline surface that is the exact merge of the 4 input rational B-spline surfaces, so you will have to settle for an approximation. Since the result is an approximation anyway, there is also no need for the approximating surface to be rational. Approximation schemes typically fall into two categories:
1) Given the degree and the number of spans in the U and V directions, find the "best fit" surface to the 4 surfaces. Typically, the maximum deviation between the output surface and the input surfaces is also computed, so users know how well the surface fits the input.
2) Given the degree in the U and V directions and a tolerance value, find the "best fit" surface to the 4 surfaces such that the maximum deviation between the output and the input is smaller than the tolerance.
The 2nd approach normally uses the algorithm from the 1st approach and iterates over the number of spans in the U/V directions to determine the optimum span count, so it typically takes considerably longer than the 1st approach.
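For illustration, here is a minimal Python sketch of how scheme 2 can be layered on top of scheme 1. The fit and max_deviation callables are placeholders for whatever your geometry kernel provides (they are not a real API), and doubling the span count is just one possible search strategy.

```python
def fit_to_tolerance(surfaces, degree_u, degree_v, tol,
                     fit, max_deviation, max_spans=128):
    """Sketch of scheme 2 built on scheme 1.

    fit(surfaces, degree_u, degree_v, spans_u, spans_v) and
    max_deviation(candidate, surfaces) are placeholders for your
    geometry kernel's routines, not a real API.
    """
    spans = 1
    while spans <= max_spans:
        candidate = fit(surfaces, degree_u, degree_v, spans, spans)
        if max_deviation(candidate, surfaces) <= tol:
            return candidate, spans      # tolerance met at this span count
        spans *= 2                       # refine and try again
    raise ValueError("tolerance not met within max_spans spans per direction")
```

The repeated refitting at increasing span counts is exactly why this scheme is noticeably slower than a single fixed-span fit.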
For my simulation I want to generate k randomly distributed spheres (all with the same radius) in a confined 3D space (a rectangular box), where k is on the order of 1000. The spheres should not impinge on one another.
So, I want to generate k random points in 3D space that are at least a distance d away from one another. Considering the number of points and the frequency at which I need them for the simulation, I don't want to apply brute force; I'm looking for an efficient algorithm to achieve this.
How about just starting with some regular tessellation of the space (i.e. some primitive 3d lattice) and putting a single point somewhere in each tile? You'd then only need to check a small number of neighboring tiles for proximity.
To get a more statistically uniform, i.e. less regular, set of points, you could:
perturb points in space
generate an overly dense lattice and reject some points
"warp" the space so that the lattice was more dense in certain areas
You could perturb the points sequentially, giving you a monte-carlo chain over their coordinates, and potentially saving work elsewhere. Presumably you could tailor this so that the equilibrium distribution was what you wanted.
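A minimal Python sketch of the basic lattice-plus-jitter idea (without the Monte Carlo refinement): one point per cubic cell, with the jitter bounded so that points in neighbouring cells stay at least d apart by construction. The slack factor and the random choice of cells are details of this sketch, not part of the answer above.

```python
import random

def jittered_lattice_points(k, box, d, slack=1.2, seed=0):
    """Place k points in box = (Lx, Ly, Lz) that are pairwise at least d apart.

    One point per cubic cell of side s = slack * d (slack > 1), perturbed by
    at most (s - d) / 2 per axis, so points in neighbouring cells remain
    >= d apart by construction and no distance checks are needed.
    """
    rng = random.Random(seed)
    s = slack * d                          # cell size (must exceed d)
    r = (s - d) / 2.0                      # maximum jitter per axis
    nx, ny, nz = (int(side // s) for side in box)
    if nx * ny * nz < k:
        raise ValueError("box too small for k points at this spacing")
    points = []
    for c in rng.sample(range(nx * ny * nz), k):   # k distinct cells
        i, j, l = c % nx, (c // nx) % ny, c // (nx * ny)
        cx, cy, cz = (i + 0.5) * s, (j + 0.5) * s, (l + 0.5) * s
        points.append((cx + rng.uniform(-r, r),
                       cy + rng.uniform(-r, r),
                       cz + rng.uniform(-r, r)))
    return points
```

For non-overlapping spheres of radius R, call it with d = 2 * R; smaller slack values pack more tightly but leave less room for randomness.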
I have a large number of points in 3D space (x,y,z) represented as an array of 3 float structs. I also have access to a strong graphics card with CUDA capability. I want the following:
Divide the points in the array into clusters so that every point within a cluster has a maximum Euclidean distance of X to at least one other point within the cluster.
The "brute force" way of doing this is of course to calculate the distance between every point and every other point, to see if any of the distances is below the threshold X, and if so mark those points as belonging to the same cluster. This is an O(n²) algorithm.
This can of course be done in parallel in CUDA with n² threads, but is there a better way?
The algorithm can be reduced to O(n) by using binning:
impose a 3D grid with spacing X, i.e. a 3D lattice (each cell of the lattice is a cubic bin);
assign each point in space to the corresponding bin (the bin that geometrically contains that point);
every time you need to evaluate the distances from one point, use only the points in that point's own bin and in the 26 neighbouring bins (3x3x3 = 27).
The points in the other bins are further away than X, so you don't need to evaluate those distances at all.
In this way, assuming a constant point density, you only have to compute the distance for a constant number of pairs per point, i.e. a number of pairs proportional to the total number of points.
Assigning the points to the bins is O(n) as well.
If the points are not uniformly distributed, the bins can be made smaller (and you must then consider more than the 26 neighbours when evaluating the distances) and possibly kept sparse.
This is a typical trick used in molecular dynamics, ray tracing, meshing, ... I know the term binning from molecular dynamics simulation: the name can change (link-cell; kd-trees use the same principle, even if in a more elaborate form), but the algorithm remains the same!
And, good news, the algorithm is well suited for parallel implementation.
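A serial Python sketch of the binning scheme described above (a CUDA version would typically assign one thread per point or per bin, but the data structure is the same). Cluster membership can then be obtained by running union-find over the neighbour lists; that step is omitted here.

```python
from collections import defaultdict

def build_bins(points, x):
    """Hash each point index into the cubic bin of side x that contains it."""
    bins = defaultdict(list)
    for idx, (px, py, pz) in enumerate(points):
        bins[(int(px // x), int(py // x), int(pz // x))].append(idx)
    return bins

def neighbours_within(points, bins, x, idx):
    """Indices of points within distance x of points[idx], checking only the
    point's own bin and its 26 surrounding bins."""
    px, py, pz = points[idx]
    bx, by, bz = int(px // x), int(py // x), int(pz // x)
    found = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                for j in bins.get((bx + dx, by + dy, bz + dz), ()):
                    if j == idx:
                        continue
                    qx, qy, qz = points[j]
                    if (px - qx) ** 2 + (py - qy) ** 2 + (pz - qz) ** 2 <= x * x:
                        found.append(j)
    return found
```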
refs:
https://en.wikipedia.org/wiki/Cell_lists
I am looking for the following type of algorithm:
There are n matched pairs of points in 2D. How can I identify outlying pairs according to an affine / Helmert transformation and omit them from the transformation key? We do not know the exact number of such outlying pairs.
I cannot use the Trimmed Least Squares method because it assumes that a known percentage k of the pairs is correct, and we have no such information about the sample and do not know k: in a given sample all of the pairs could be correct, or almost none of them.
Which types of algorithms are suitable for this problem?
Use RANSAC:
Repeat the following steps a fixed number of times:
Randomly select as many pairs as are needed to compute the transformation parameters.
Compute the parameters.
Compute the subset of pairs that have small projection error (the 'consensus set').
If the consensus set is large enough, compute a projection for it (e.g. with Least Squares).
Compute the consensus set's projection error.
Remember the model if it is the best you found so far.
You have to experiment to find good values for
"a fixed number of times"
"small projection error"
"consensus set is large enough".
The simplest approach is to compute your transformation based on all points, compute the residual for each point, and remove the points with high residuals until you reach an acceptable transformation or hit the minimum acceptable number of input points. The residual for a given point is the join distance between the forward-transformed position of that point and the intended target point.
Note that the residuals from an affine transformation and from a Helmert (conformal) transformation will be very different, as these transformations do different things: the non-uniform scale of the affine gives it more 'stretch' and will hence lead to smaller residuals.
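A sketch of that trimming loop, reusing the fit_affine / apply_affine helpers from the RANSAC sketch above; max_resid is the largest residual you are willing to accept, min_points the minimum number of pairs you insist on keeping, and a Helmert (conformal) fit can be swapped in if that is the model you are adjusting.

```python
import numpy as np

def trim_by_residuals(src, dst, max_resid, min_points=3):
    """Iteratively drop the worst pair until all residuals are acceptable."""
    keep = np.ones(len(src), dtype=bool)
    while keep.sum() > min_points:
        model = fit_affine(src[keep], dst[keep])
        resid = np.linalg.norm(apply_affine(model, src[keep]) - dst[keep], axis=1)
        if resid.max() <= max_resid:
            return model, keep                         # acceptable transformation
        worst = np.flatnonzero(keep)[np.argmax(resid)]  # global index of worst pair
        keep[worst] = False
    return fit_affine(src[keep], dst[keep]), keep       # hit the minimum point count
```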
Suppose you have a GPS trajectory, i.e. a series of spatio-temporal coordinates, where every coordinate is an (x, y, t) triple: x is longitude, y is latitude and t is the time stamp.
Suppose each trajectory is identified by 1000 (x, y) points; a compressed trajectory is a trajectory with fewer points than the original, for instance 300 points. A compression algorithm (Douglas-Peucker, Bellman, etc.) decides which points are kept in the compressed trajectory and which are discarded.
Each algorithm makes its own choice. Better algorithms choose the points not only by their spatial characteristics (x, y) but by their spatio-temporal characteristics (x, y, t).
Now I need a way to compare two compressed trajectories against the original to understand which compression algorithm better reduces a spatio-temporal trajectory (the temporal component is really important).
I've thought of the DTW algorithm to check trajectory similarity, but it probably doesn't take the temporal component into account. What algorithm can I use for this comparison?
Which compression algorithm is best depends to a large extent on what you are trying to achieve with it, and on other external variables. Typically, we are going to identify and remove spikes, and then remove redundant data. For example:
Known minimum and maximum velocity, acceleration, and turning ability will let you remove spikes. If we look at the join distance between a pair of points divided by the elapsed time, where
velocity = sqrt((xb - xa)^2 + (yb - ya)^2) / (tb - ta)
we can eliminate points whose separation could not have been travelled in the elapsed time given the speed constraint. We can do the same with acceleration constraints, and with change-of-direction constraints for a given velocity. These constraints change depending on whether the GPS receiver is static, hand held, in a car, in an aeroplane, etc.
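A minimal sketch of the speed-constraint filter, assuming the fixes have already been projected to metres (for raw lon/lat you would need a haversine or geodesic distance instead) and that v_max reflects the platform (pedestrian, car, aircraft, ...):

```python
import math

def drop_speed_spikes(track, v_max):
    """Drop fixes that imply a speed above v_max (metres per second).

    track is a time-ordered list of (x, y, t) with x, y already projected
    to metres and t in seconds.
    """
    cleaned = [track[0]]
    for x, y, t in track[1:]:
        xa, ya, ta = cleaned[-1]
        if t <= ta:
            continue                          # skip duplicate / out-of-order stamps
        speed = math.hypot(x - xa, y - ya) / (t - ta)
        if speed <= v_max:
            cleaned.append((x, y, t))         # plausible move; keep the fix
    return cleaned
```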
We can remove redundant points using a moving window over three points: a position interpolated (by time) for the middle point is compared with the observed point, and the observed point is removed if it lies within a specified distance and time tolerance of the interpolated point. We can also curve-fit the data and consider the distance to the curve rather than using a moving three-point window.
The compression may also have differing goals based on the constraints given, e.g. to simply reduce the data size by removing redundant observations and spikes, or to smooth the data as well.
For the former, after checking for spikes based on the defined constraints, we simply check the 3D distance of each removed point to the polyline connecting the compressed points. This is done by finding the pair of kept points before and after the removed point, interpolating a position on the line connecting them based on the observed time, and comparing the interpolated position with the observed position. The number of points removed will increase as we allow this distance tolerance to increase.
For the latter we also have to consider how well the smoothed result models the data, the weights imposed by the constraints, and the design shape / curve parameters.
Hope this makes some sense.
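A sketch of the three-point redundancy test described above, again assuming projected coordinates in metres; it uses the previously kept fix as the left neighbour and checks only the distance tolerance (a time tolerance could be added in the same way):

```python
import math

def drop_redundant_points(track, dist_tol):
    """Greedy three-point test: drop a fix when the position interpolated
    (by time) between the previously kept fix and the next fix is within
    dist_tol metres of the observed position. Assumes projected coordinates
    and strictly increasing time stamps.
    """
    if len(track) < 3:
        return list(track)
    kept = [track[0]]
    for i in range(1, len(track) - 1):
        xa, ya, ta = kept[-1]
        xb, yb, tb = track[i]
        xc, yc, tc = track[i + 1]
        f = (tb - ta) / (tc - ta)
        xi, yi = xa + f * (xc - xa), ya + f * (yc - ya)   # interpolated at time tb
        if math.hypot(xb - xi, yb - yi) > dist_tol:
            kept.append(track[i])                          # not redundant; keep it
    kept.append(track[-1])
    return kept
```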
Maybe you could use the mean square distance between the trajectories over time.
Probably simply looking at the distance at times 1 s, 2 s, ... will be enough, but you can also do it more precisely between time stamps by integrating (x1(t) - x2(t))^2 + (y1(t) - y2(t))^2. Note that between two time stamps both trajectories are straight lines.
I've found what I need to compute spatio-temporal error.
As written in the paper "Compression and Mining of GPS Trace Data: New Techniques and Applications" by Lawson, Ravi & Hwang:
Synchronized Euclidean distance (sed) measures the distance between two points at identical time stamps. In Figure 1, five time steps (t1 through t5) are shown. The simplified line (which can be thought of as the compressed representation of the trace) is comprised of only two points (P't1 and P't5); thereby, it does not include points P't2, P't3 and P't4. To quantify the error introduced by these missing points, distance is measured at the identical time steps. Since three points were removed between P't1 and P't5, the line is divided into four equal sized line segments using the three points P't2, P't3 and P't4 for the purposes of measuring the error. The total error is measured as the sum of the distance between all points at the synchronized time instants, as shown below. (In the following expression, n represents the total number of points considered.)
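Following that description, the expression referred to is the sum over the synchronized time instants of the point-to-point distances, SED = sum_{i=1..n} sqrt((x_{t_i} - x'_{t_i})^2 + (y_{t_i} - y'_{t_i})^2), where the primed coordinates are taken on the simplified line at time t_i. A Python sketch of that measure, with trajectories as lists of (x, y, t) tuples and the compressed trajectory being a time-ordered subset that keeps both endpoints:

```python
import math
from bisect import bisect_right

def sed_error(original, compressed):
    """Total Synchronized Euclidean Distance between a trajectory and its
    compressed version: for each original point, interpolate the compressed
    line at the same time stamp and sum the point-to-point distances."""
    times = [t for _, _, t in compressed]
    total = 0.0
    for x, y, t in original:
        j = bisect_right(times, t)
        if j == 0:                                   # before the first kept fix
            xs, ys = compressed[0][:2]
        elif j == len(times):                        # at or after the last kept fix
            xs, ys = compressed[-1][:2]
        else:                                        # interpolate between kept fixes
            x0, y0, t0 = compressed[j - 1]
            x1, y1, t1 = compressed[j]
            f = (t - t0) / (t1 - t0)
            xs, ys = x0 + f * (x1 - x0), y0 + f * (y1 - y0)
        total += math.hypot(x - xs, y - ys)
    return total
```

Running this against the original trajectory for each candidate compression gives a directly comparable spatio-temporal error per algorithm.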
I'm developing an algorithm that involves inserting intermediate points between GPS coordinates. To be precise, let's assume that I have these 2 coordinates:
25.60593,45.65151
25.60662,45.65164000000001
I want to add extra points between these coordinates (on the same "line"). Is it wise to assume a linear variation (i.e. to treat the intermediate points as points on the segment defined by the 2 given coordinates)?
This involves actual GPS data obtained from a computed route, which means the distance between consecutive shape points is less than 1000 metres (on a highway the distance between consecutive points can be quite big, since the road is a straight line).
Is linear variation a good approximation, or are there any other methods I should consider (given that the distance between points is less than 1 km)?
Thanks,
Iulian
Given that the distance between the points is less than 1 km, AND *ASSUMING* that you're not playing very close to the North or South Pole, linear interpolation is probably good enough.
After linear interpolation is shown in live testing on real data NOT to be good enough, you can try spherical trig interpolation. Math involved.
If both of those fall short, then you will need very specialized expert assistance.
If you have two points a and b, and the distance from a to b is less than 1 km, then the maximum distance between the midpoint (coordinates obtained by averaging) and the "middle point" (coordinates obtained by going halfway along the shortest path between a and b) is tiny (1 micrometer) if a is on the equator, around 5 cm at the Arctic Circle and 25 cm at 85 north. If the distance between a and b were up to 10 km, the figures above should be multiplied by 100. These figures were calculated using the GPS ellipsoid, WGS84.
PS You have to be a bit careful averaging longitudes. The average of 179 and -179 is 180, not 0.
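A minimal sketch of the linear interpolation, with the longitude averaging handled as per the PS; the (lon, lat) coordinate order and the number of inserted points are assumptions of this sketch:

```python
def interpolate_lonlat(a, b, n):
    """Insert n evenly spaced points between fixes a and b, given as
    (lon, lat) in degrees, using plain linear interpolation (adequate for
    spacings under ~1 km away from the poles). The longitude difference is
    unwrapped first so interpolating across the 180-degree meridian behaves
    (e.g. 179 and -179 average to the 180 meridian, not 0).
    """
    lon_a, lat_a = a
    lon_b, lat_b = b
    dlon = (lon_b - lon_a + 180.0) % 360.0 - 180.0   # shortest signed lon difference
    points = []
    for i in range(1, n + 1):
        f = i / (n + 1)
        lon = (lon_a + f * dlon + 180.0) % 360.0 - 180.0   # wrap back to [-180, 180)
        points.append((lon, lat_a + f * (lat_b - lat_a)))
    return points
```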