I have two matrices, one containing 3D coordinates that are nominal positions per a CAD model and the other containing 3D coordinates of actual positions measured with a CMM. Every nominal point has a corresponding measurement, so the two matrices have the same dimensions. I'm not sure of the best way to fit the measured points to the nominal points. I need to calculate the translation and rotation to apply to all of the measured points that minimizes the distance between each nominal/measured pair of points without exceeding the allowed tolerance on distance for any pair. This is similar to registration of point clouds, but different in that each pair of nominal/measured points has a unique tolerance/limit on how far apart they are allowed to be; that limit is higher for some pairs and lower for others. I'm programming in .NET and have looked into Point Cloud Library (PCL), OpenCV, Excel, and basic matrix operations as possible approaches.
This is a sample of the data:
X Nom    Y Nom   Z Nom     X Meas   Y Meas  Z Meas    Upper Tol  Lower Tol
118.81   2.24    -14.14    118.68   2.24    -14.14    1.00       -0.50
118.72   1.71    -17.19    118.52   1.70    -17.16    1.00       -0.50
115.36   1.53    -24.19    115.14   1.52    -23.98    0.50       -0.50
108.73   1.20    -27.75    108.66   1.20    -27.41    0.20       -0.20
Below is the type of transformation matrix I need to calculate. I will multiply it by the measured point matrix to best fit the nominal point matrix.
Transformation
0.999897324 -0.000587540 0.014317661
0.000632725 0.999994834 -0.003151567
-0.014315736 0.003160302 0.999892530
-0.000990993 0.001672040 0.001672040
This is indeed a job for a rigid registration algorithm.
In order to handle your tolerances you have a couple of options:
Simple option: run rigid registration, then check afterwards whether the result is within tolerances.
Bit harder option: offset your points in the CAD model where you have imbalanced tolerances; the rest is the same as the previous option.
Hardest option: what you probably want to do is keep the offset from the second option and also add a weight function based on measured position and set tolerance. This weight function should affect the energy function in such a way that the individual function vectors are larger when you have a small tolerance and smaller when you have a large tolerance. A sketch of how such a weight can enter a closed-form rigid fit is below.
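To make the weighting idea concrete, here is a minimal sketch in Python/NumPy (not .NET, but the algebra translates directly to Math.NET Numerics). It is a weighted Kabsch/Umeyama closed-form fit; deriving each weight from the tolerance band width is an assumption for illustration, not a prescribed formula:

    import numpy as np

    def weighted_rigid_fit(meas, nom, weights):
        # Closed-form weighted rigid fit (Kabsch/Umeyama style).
        # meas, nom: (N, 3) arrays of corresponding points.
        # weights:   (N,) array; a larger weight pulls that pair closer.
        # Returns R (3x3) and t (3,) so meas @ R.T + t approximates nom
        # in the weighted least-squares sense.
        w = weights / weights.sum()
        mu_m = w @ meas                  # weighted centroids
        mu_n = w @ nom
        H = (meas - mu_m).T @ (w[:, None] * (nom - mu_n))
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        t = mu_n - R @ mu_m
        return R, t

    # One possible weighting: tighter tolerance band -> larger weight.
    # weights = 1.0 / (upper_tol - lower_tol)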
So now about implementation. For options 1 and 2 your fastest way to a result would probably be:
Use the PCL C++ version in a Visual Studio 2010 environment. There is lots of information about installing PCL with VS2010 and getting it running, and PCL has a nice ICP registration tutorial that should get you going.
Or use VTK for Python, which has an ICP algorithm:
Installing VTK for Python
http://www.vtk.org/Wiki/VTK/Examples/Python/IterativeClosestPoints
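For reference, the VTK route looks roughly like this (a sketch, assuming your point lists are already loaded as measured_points and nominal_points; the vertex-glyph step is there because the ICP target locator wants cells, not bare points):

    import vtk

    def make_polydata(points_xyz):
        pts = vtk.vtkPoints()
        for p in points_xyz:
            pts.InsertNextPoint(p)
        poly = vtk.vtkPolyData()
        poly.SetPoints(pts)
        glyph = vtk.vtkVertexGlyphFilter()  # add a vertex cell per point
        glyph.SetInputData(poly)
        glyph.Update()
        return glyph.GetOutput()

    source = make_polydata(measured_points)   # assumed already defined
    target = make_polydata(nominal_points)    # assumed already defined

    icp = vtk.vtkIterativeClosestPointTransform()
    icp.SetSource(source)
    icp.SetTarget(target)
    icp.GetLandmarkTransform().SetModeToRigidBody()  # rotation + translation
    icp.SetMaximumNumberOfIterations(50)
    icp.Modified()
    icp.Update()
    print(icp.GetMatrix())                    # 4x4 homogeneous transform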
If you really want option 3 you can:
Add the weight function to the PCL library source code and compile it, or
Make the complete ICP algorithm yourself in .NET:
http://www.math.tau.ac.il/~dcor/Graphics/adv-slides/ICP.ppt
Use Math.NET Numerics sparse matrix/vector algebra and solvers to create your own optimizer.
Implement the Levenberg-Marquardt or Gauss-Newton optimizer from:
"Methods for Non-Linear Least Squares Problems", K. Madsen et al., IMM, 2004
Generate your own function vector and Jacobian matrix (with the weight function).
Have quite some patience to get it all working together :)
Post the result for the others on StackOverflow who are waiting for ICP in C# .NET. A rough sketch of the Gauss-Newton core is below.
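To show the shape of the function vector and Jacobian, here is a sketch of the Gauss-Newton inner loop in Python/NumPy (again, the structure carries over to Math.NET Numerics; correspondences are assumed fixed, so this is only the per-iteration alignment step of ICP, not the nearest-neighbour search):

    import numpy as np
    from scipy.spatial.transform import Rotation

    def skew(v):
        # S such that S @ u == np.cross(v, u)
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    def gauss_newton_rigid(meas, nom, weights, iters=20):
        # Weighted residual per pair: sqrt(w_i) * (R @ m_i + t - n_i).
        # For a small left-applied rotation delta, the Jacobian of the
        # residual w.r.t. delta is -skew(p_i), where p_i is the current
        # transformed point; w.r.t. the translation it is the identity.
        R, t = np.eye(3), np.zeros(3)
        sw = np.sqrt(np.asarray(weights, dtype=float))
        for _ in range(iters):
            p = meas @ R.T + t
            r = (sw[:, None] * (p - nom)).ravel()   # function vector
            J = np.zeros((3 * len(meas), 6))        # Jacobian matrix
            for i, pi in enumerate(p):
                J[3*i:3*i+3, :3] = -sw[i] * skew(pi)
                J[3*i:3*i+3, 3:] = sw[i] * np.eye(3)
            delta, *_ = np.linalg.lstsq(J, -r, rcond=None)
            dR = Rotation.from_rotvec(delta[:3]).as_matrix()
            R, t = dR @ R, dR @ t + delta[3:]
        return R, t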
I want to develop a game where the universe is a maximum 65536 x 65536 grid. I do not want the universe to be random, what I want is for it to be procedurally generated according to location. What it should generate is a number from 0 to 15.
0 means empty space. Most of the universe (probably 50-80%) is empty space.
1-9 means a planet of that technology level.
10-15 means various anomalies (black hole, star, etc.).
Given a signed 16-bit address for X (0x8000-0xFFFF for negative values, 0, or 1-0x7FFF for positive) and the same range for the Y address, the function returns a number from 0 to 15. Presumably this would make planets nearer to 0,0 more plentiful than those farther away.
The idea being, the function is called passing the two values and returns the planet number. I used to have a function to do this, but it has gotten lost over various moves.
While the board could be that big, considering how easy it would be to get lost, I'll probably cut the size to 1200 in both directions, -600 to +600. Even that would be huge.
I've tried a number of times, but I've come to the conclusion that I lack sufficient math skills to do this. It's probably no more than 10 lines. As it is intended to be multiplayer, it'll probably be either a PHP application on the back end or a desktop application connecting to a server.
Any help would be appreciated. I can probably read any commonly used programming language you might use.
Paul Robinson
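For what it's worth, a deterministic coordinate hash is one common way to get this behaviour. A minimal Python sketch (the distance falloff and the thresholds are illustrative guesses, not tuned values):

    import hashlib

    def planet_at(x, y, seed=0):
        # Hash the cell coordinates so the same (x, y) always yields
        # the same result, with no stored universe.
        h = hashlib.sha256(f"{seed}:{x}:{y}".encode()).digest()
        roll = h[0] / 255.0                      # repeatable uniform [0, 1]
        dist = (x * x + y * y) ** 0.5
        empty_p = min(0.5 + dist / 2000.0, 0.8)  # 50-80% empty, denser near 0,0
        if roll < empty_p:
            return 0                             # empty space
        if h[1] % 100 < 90:
            return 1 + h[2] % 9                  # planet, tech level 1-9
        return 10 + h[2] % 6                     # anomaly (10-15)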
See How to draw sky chart? for the planetary position math. Pay special attention to the image with equations; you can use it to compute the period of your planet based on its distance, its mass, and the system's central mass. For a simple circular orbit, just match the centripetal force with gravity, like I did in here:
Is it possible to make realistic n-body solar system simulation in matter of size and mass?
So for example:
G = 6.67384e-11;                           // gravitational constant [m^3 kg^-1 s^-2]
v = sqrt(G*M/a);                           // orbital speed for a circular orbit of radius a
T = sqrt((4.0*M_PI*M_PI*a*a*a)/(G*(m+M))); // orbital period (Kepler's third law)
pos = (a,0,0);                             // start position
vel = (0,sqrt(G*M/a),0);                   // start velocity
The distribution of planets and their sizes follows specific (empirically derived) rules (that is one of the reasons why we are still looking for a 10th planet). I can't remember the name of the rule (possibly the Titius-Bode law), but from a quick look on Google Images, the image from here can be used too:
Distribution of the planets in the solar system according to their mass and their distance from the Sun. The distances (X-axis) are in AU and the masses (Y-axis) in units of 10^24 kg.
Jupiter's mass is M = 1.898e27 kg, so the mass units are 10^24 kg.
So just match your PRNG output to such a curve and be done with it; a sketch of one way to do that is below.
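The standard trick for matching a PRNG to an empirical curve is inverse transform sampling. A minimal sketch, assuming you have tabulated the curve as (value, relative frequency) pairs:

    import numpy as np

    def sample_from_curve(xs, ys, n, seed=None):
        # xs: sorted sample points of the quantity (e.g. planet mass)
        # ys: relative frequency at each x (need not be normalized)
        rng = np.random.default_rng(seed)
        cdf = np.cumsum(ys, dtype=float)
        cdf /= cdf[-1]                    # normalize to a proper CDF
        u = rng.random(n)                 # uniform draws
        return np.interp(u, cdf, xs)      # invert the CDF by interpolation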
Looking for an algorithm to compute the actual distance from a latitude/longitude/elevation to the visible horizon, taking into account the actual surrounding terrain and the curvature of the earth. Assume you have enough terrain data for the surrounding several hundred miles from any of the open elevation datasets. The problem can be simplified to an approximation by checking a few cardinal directions. Ideally I'd like to be able to compute the real solution as well.
Disclosure: I'm the developer and maintainer of the below mentioned software package.
I'm not sure if you're still looking for a solution, as this question is already a bit older. However, one solution for your problem would be to apply the open-source package HORAYZON (https://github.com/ChristianSteger/HORAYZON). It's based on the high-performance ray-tracing library Intel Embree (https://www.embree.org), so it is very fast, and it considers Earth's curvature.

With this package you can compute the horizon angle and the distance to the horizon line for one or multiple arbitrary locations on a Digital Elevation Model (DEM), and set various options like the number of cardinal sampling directions, the maximal search distance for the horizon, etc.

However, I'm not sure what you mean by "real solution". Do you mean the "perfect" solution, i.e. considering elevation information from all DEM cells without doing a discrete sampling along the azimuth angle? Unfortunately, this cannot be done with the above-mentioned package (but one could theoretically implement it).
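If you only want the simplified cardinal-direction approximation the question mentions, the underlying geometry is small enough to sketch directly. This is not HORAYZON's API, just a brute-force scan along one azimuth with a curvature correction:

    import math

    def horizon_distance(elevations, step_m, h_obs, R=6371000.0):
        # elevations: terrain heights (m) sampled every step_m metres
        #             outward along one azimuth; elevations[0] is at the
        #             observer's location.
        # h_obs:      observer eye height above elevations[0].
        # The d*d/(2R) term lowers distant terrain to account for curvature.
        eye = elevations[0] + h_obs
        best_angle = -math.inf
        best_d = 0.0
        for i in range(1, len(elevations)):
            d = i * step_m
            drop = d * d / (2.0 * R)
            angle = math.atan2(elevations[i] - drop - eye, d)
            if angle > best_angle:
                best_angle = angle
                best_d = d
        return best_d   # distance (m) to the sample forming the horizon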
I'm working on a game, and I want to place some objects randomly throughout the world. However, I want the objects to be "clustered" in clumps. Is there any random distribution that clusters like this? Or is there some other technique I could use?
Consider using a bivariate normal (a.k.a. Gaussian) distribution. Generate separate normal values for the X and Y location. Bivariate normals are denser towards the center and sparser farther out, so your choice of standard deviation determines how tight the clustering is: along each axis, about 68% of the items will be within 1 standard deviation of the distribution's center, 95% within 2 standard deviations, and almost all within 3 standard deviations.
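In NumPy that is essentially a one-liner (the center and sigma below are arbitrary illustrative values):

    import numpy as np

    rng = np.random.default_rng()
    center = np.array([100.0, 250.0])   # cluster centre (illustrative)
    sigma = 15.0                        # smaller sigma -> tighter cluster
    points = rng.normal(loc=center, scale=sigma, size=(200, 2))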
Not sure if this is valid here on SO, but I was hoping someone could advise on the correct algorithm to use.
I have the following RAW data.
In the image you can see "steps". Essentially I wish to keep these steps but apply a moving average to all the data between them. In the following image, you can see the moving average:
However, you will notice that at the "steps" the moving average flattens the gradient, whereas I wish to keep the steep vertical gradient.
Is there any smoothing technique that will take into account a large vertical "offset", but smooth the other data?
Yup, I had to do something similar with images from a spacecraft.
Simple technique #1: use a median filter with a modest width - say about 5 samples, or 7. This provides an output value that is the median of the corresponding input value and several of its immediate neighbors on either side. It will get rid of those spikes, and do a good job preserving the step edges.
The median filter is provided in all number-crunching toolkits that I know of such as Matlab, Python/Numpy, IDL etc., and libraries for compiled languages such as C++, Java (though specific names don't come to mind right now...)
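In Python, for example, this is one call (a sketch assuming your series is already in a 1-D array named raw):

    import numpy as np
    from scipy.signal import medfilt

    data = np.asarray(raw, dtype=float)       # raw: your 1-D series (assumed)
    despiked = medfilt(data, kernel_size=5)   # median of each sample and its
                                              # two neighbours on either side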
Technique #2, perhaps not quite as good: use a Savitzky-Golay smoothing filter. This works by effectively making least-squares polynomial fits to the data at each output sample, using the corresponding input sample and a neighborhood of points (much like the median filter). The SG smoother is known for being fairly good at preserving peaks and sharp transitions.
The SG filter is usually provided by most signal processing and number crunching packages, but might not be as common as the median filter.
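For instance, SciPy ships one (the window length and polynomial order below are just starting-point guesses):

    from scipy.signal import savgol_filter

    # window_length must be odd and polyorder < window_length
    smoothed = savgol_filter(data, window_length=11, polyorder=3)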
Technique #3, the most work and requiring the most experience and judgement: go ahead and use a smoother (moving box average, Gaussian, whatever), but then create an output that blends the original with the smoothed data. The blend, controlled by a new data series you create, varies from all-original (blending in 0% of the smoothed) to all-smoothed (100%).
To control the blending, start with an edge detector to detect the jumps. You may want to median-filter the data first to get rid of the spikes. Then broaden (dilation, in image-processing jargon) or smooth and renormalize the edge detector's output, and flip it around so it gives 0.0 at and near the jumps and 1.0 everywhere else. Perhaps you want a smooth transition joining them. It is an art to get this right, and it depends on how the data will be used; for me, it's usually images to be viewed by humans. An automated embedded control system might work best if tweaked differently.
The main advantage of this technique is you can plug in whatever kind of smoothing filter you like. It won't have any effect where the blend control value is zero. The main disadvantage is that the jumps, the small neighborhood defined by the manipulated edge detector output, will contain noise.
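A compact sketch of that whole pipeline (the kernel sizes and the diff-based edge detector are arbitrary choices for illustration):

    import numpy as np
    from scipy.signal import medfilt
    from scipy.ndimage import grey_dilation, gaussian_filter1d

    despiked = medfilt(data, kernel_size=5)          # spikes out first
    smoothed = gaussian_filter1d(despiked, sigma=3)  # any smoother works here

    edges = np.abs(np.diff(despiked, prepend=despiked[0]))  # crude step detector
    edges = grey_dilation(edges, size=7)             # broaden around each jump
    blend = 1.0 - np.clip(edges / (edges.max() + 1e-12), 0.0, 1.0)

    # blend is ~0 at the jumps (keep the original) and ~1 elsewhere (smooth).
    result = blend * smoothed + (1.0 - blend) * despiked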
I recommend first detecting the steps and then smoothing each step individually.
You know how to do the smoothing, and edge/step detection is pretty easy also (see here, for example). A typical edge detection scheme is to smooth your data and then convolve/cross-correlate it with some filter (for example the kernel [-1, 1], which will show you where the steps are). In a mathematical context this can be viewed as studying the derivative of your plot to find inflection points (for some of the filters).
An alternative "hackish" solution would be to do a moving average but exclude outliers from the smoothing. You can decide what an outlier is by using some threshold t. In other words, for each point p with value v, take the x points surrounding it, find the subset of those points whose values are between v - t and v + t, and take the average of that subset as the new value of p.
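That hackish version is only a few lines (a sketch; v is assumed to be a 1-D NumPy array):

    import numpy as np

    def robust_moving_average(v, half_window=5, t=2.0):
        # Average only the neighbours within t of the centre value, so
        # samples on the far side of a step never bleed into the average.
        out = np.empty(len(v), dtype=float)
        for i in range(len(v)):
            lo = max(0, i - half_window)
            window = v[lo:i + half_window + 1]
            near = window[np.abs(window - v[i]) <= t]  # always includes v[i]
            out[i] = near.mean()
        return out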
I'm sure the opposite has been asked many times but I couldn't find any answers on how to generate bad random numbers.
I want to write a small program for cluster analysis and want to generate some random points for testing. If I just inserted 1000 points with random coordinates, they would be scattered all over the field, which would make a cluster analysis worthless.
Is there a simple way to generate random numbers that form clusters?
I already thought about using random()*random() instead of random(), which biases the numbers toward zero (I think I read something along those lines here on Stack Overflow).
Second approach would be picking a few areas at random and run the point generation again in this area which would of course produce a cluster in this area.
Do you have a better idea?
If you deliberately want well-formed clusters (rather than completely random scatter), you could combine the two: pick a cluster center at random, then put lots of points around it in a normal distribution.
As well as working in Cartesian coords (x, y), you could use a radial method to distribute points for a particular cluster: choose a random angle (0-2π radians), then choose a radius.
Note that since circumference is proportional to radius, the area distribution will be denser close to the centre, but the distribution per specific radius will be the same. Modify the radial distribution to produce a more tightly packed cluster; a sketch is below.
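A small sketch of that radial method (uniform radius gives the denser-at-the-centre behaviour described above; use max_r * sqrt(u) instead if you ever want uniform area density):

    import numpy as np

    def radial_cluster(cx, cy, n, max_r, seed=None):
        rng = np.random.default_rng(seed)
        theta = rng.uniform(0.0, 2.0 * np.pi, n)  # random angle
        r = max_r * rng.random(n)                 # denser toward the centre
        return cx + r * np.cos(theta), cy + r * np.sin(theta)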
Or you could use real-world data for semi-random point distributions with natural clustering. Recently I've been doing quite a bit of geospatial cluster analysis, and for this I have used real-world data: zipcode centroids (which form natural clusters around cities) and restaurant locations. Another suggestion: you could use a stellar or galactic catalogue.
Generate a few anchors (true random numbers), then generate noise around them:
anchor + dist * (random() - 0.5)
This will generate clustered numbers that are uniformly distributed within an interval of width dist around each anchor.
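Spelled out (a sketch; all parameters are illustrative):

    import random

    def clustered_numbers(n_anchors, per_anchor, lo, hi, dist):
        anchors = [random.uniform(lo, hi) for _ in range(n_anchors)]
        # uniform noise of width dist centred on each anchor
        return [a + dist * (random.random() - 0.5)
                for a in anchors
                for _ in range(per_anchor)]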
Add an additional dimension to your model.
Draw an irregular (i.e. not flat) surface.
Generate numbers in the extended space.
Discard all numbers which are on one side of the surface.
From every number left, drop the additional dimension.
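One way to realize this surface idea in 2-D (the surface f is an arbitrary made-up example; its bumps become the clusters):

    import math
    import random

    def surface_clustered(n, lo, hi):
        def f(x, y):  # irregular "acceptance surface" in [0, 1]
            return 0.5 + 0.5 * math.sin(x * 0.05) * math.cos(y * 0.07)
        pts = []
        while len(pts) < n:
            x, y = random.uniform(lo, hi), random.uniform(lo, hi)
            if random.random() < f(x, y):   # discard points above the surface
                pts.append((x, y))          # drop the extra dimension
        return pts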
Maybe I have misunderstood, but the GNU Scientific Library (written in C) has many distributions written within it; could you not pick coordinates from the Gaussian/Poisson etc. using that library?
http://www.gnu.org/software/gsl/manual/html_node/Random-Number-Distributions.html
They provide a simple example with the Poisson distribution at that link, too.
If you need your distribution to be bounded (for example, the y-coordinate not less than -1), then you can achieve that by rejection sampling from the uniform distribution in the GSL.
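The rejection idea itself is library-agnostic; in Python it is just (a sketch):

    import random

    def bounded_gauss(mu, sigma, y_min):
        # Redraw until the sample satisfies the bound (rejection sampling).
        while True:
            y = random.gauss(mu, sigma)
            if y >= y_min:
                return y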
Blessings, Tom
My first thought was that you could implement your own generator using a linear congruential generator and experiment with the coefficients until you get a period low enough to suit your needs. A really low m coefficient should do the trick.
I also like your second idea of running a good RNG around a few pre-selected points to create clusters. You could either target specific areas for the clusters with this method, or generate those randomly as well.
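For the first idea, a toy LCG with a tiny modulus shows the effect (the coefficients here are arbitrary; with m = 64 every coordinate lands on one of only 64 values, so points pile up):

    def bad_lcg(seed, a=13, c=7, m=64):
        # Tiny modulus m -> at most m distinct outputs -> clumpy points.
        x = seed
        while True:
            x = (a * x + c) % m
            yield x

    gen = bad_lcg(seed=1)
    points = [(next(gen), next(gen)) for _ in range(100)]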