I'm currently developing an application that will alert users of incoming rain. To do this I want to check a certain area around the user's location for rainfall (different pixel colours for rain intensity on a rainfall radar image). I would like the checked area to be a circle, but I don't know how to do this efficiently.
Let's say I want to check a radius of 50 km. My current idea is to take a subset of the image with size 100 km x 100 km (user + 50 km west, user + 50 km east, user + 50 km north, user + 50 km south) and then check, for each pixel in this subset, whether it's closer to the user than 50 km.
My question here is: is there a better solution that is used for this type of problem?
If the occurrence of the event you are searching for (rain or anything else) is relatively rare, then there's nothing wrong with scanning a square of pixels and then, only after detecting rain in that square, checking whether that rain is within the desired 50 km circle. Note that the key point here is that you don't need to check each pixel of the square for being inside the circle (that would be very inefficient); you have to search for your event (rain) first, and only when you have found it, check whether it falls into the 50 km circle. To implement this efficiently you also have to develop some smart strategy for handling multi-pixel "stains" of rain on your image.
However, since you are scanning a raster image, you can easily implement the well-known Bresenham circle algorithm to find the starting and ending point of the circle on each scan line. That way you can easily limit your scan to the desired 50 km radius.
On second thought, you don't even need the Bresenham algorithm for that. For each row of pixels in your square, calculate the points of intersection of that row with the 50 km circle (using the usual schoolbook formula with a square root), and then check all pixels that fall between these intersection points. Process all rows in the same fashion, as in the sketch below, and you are done.
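A minimal C++ sketch of this row-intersection scan; isRain and the conversion of 50 km to a pixel radius are assumptions standing in for whatever your radar image exposes:

#include <cmath>

// Hypothetical pixel test; replace with a lookup into your radar image.
bool isRain(int px, int py);

// Scan every pixel within radiusPx pixels of (cx, cy); returns true on rain.
bool rainInCircle(int cx, int cy, int radiusPx)
{
    for (int dy = -radiusPx; dy <= radiusPx; ++dy)
    {
        // Half-width of the circle on this scan line (schoolbook formula).
        int halfWidth = (int)std::sqrt((double)(radiusPx * radiusPx - dy * dy));
        for (int dx = -halfWidth; dx <= halfWidth; ++dx)
            if (isRain(cx + dx, cy + dy))
                return true;
    }
    return false;
}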
P.S. Unfortunately, the Wikipedia page I linked does not present the Bresenham algorithm at all. It has code for the Michener circle algorithm instead. The Michener algorithm will also work for circle rasterization purposes, but it is less precise than the Bresenham algorithm. If you care about precision, find a true Bresenham implementation somewhere. It is actually surprisingly difficult to find on the net: most search hits erroneously present the Michener algorithm as Bresenham.
There is: you can modify the midpoint circle algorithm to produce, for each y, the x coordinate where the circle starts (and ends — by symmetry they are the same). This array is easy to compute; pseudocode below.
Then you can just iterate over exactly the right part, without checking anything.
Pseudocode:

int *data = new int[radius]; // data[radius - dy] = half-width of the circle at row offset dy
int f = 1 - radius, ddF_x = 1;
int ddF_y = -2 * radius;
int x = 0, y = radius;
while (x < y)
{
    if (f >= 0)
    {
        y--;
        ddF_y += 2; f += ddF_y;
    }
    x++;
    ddF_x += 2; f += ddF_x;
    data[radius - y] = x; // fill rows near the top of the circle
    data[radius - x] = y; // ...and their mirrored counterparts (octant symmetry)
}
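The array then gives, for each vertical offset dy from the centre, the half-width data[radius - dy] of the circle on that row (the centre row itself simply has half-width radius). A hedged usage sketch, with cx, cy and checkPixel standing in for your centre coordinates and per-pixel rain test, assuming the table was filled as above:

for (int dx = -radius; dx <= radius; ++dx)
    checkPixel(cx + dx, cy);                 // centre row, full width
for (int dy = 1; dy <= radius; ++dy)
{
    int halfWidth = data[radius - dy];       // precomputed half-width of this row
    for (int dx = -halfWidth; dx <= halfWidth; ++dx)
    {
        checkPixel(cx + dx, cy - dy);        // row above the centre
        checkPixel(cx + dx, cy + dy);        // mirrored row below
    }
}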
Maybe you can try something that will speed up your algorithm.
In the brute-force algorithm you will probably use the equation:

(x-p)^2 + (y-q)^2 < r^2

(p,q) - center of the circle (the user position)
r - radius (50 km)

If you want to find all pixels (x,y) that satisfy the above condition and check them, your algorithm is O(n^2), where n is the radius in pixels.
Instead of scanning all pixels inside the circle, I would check only the pixels that lie on the border of the circle.
In that case, you can use a more convenient way to define the circle:

x = p + r*cos(a)
y = q + r*sin(a)

a - angle in radians, [0, 2*pi]

Now you can sample some angles, for example twenty of them, iterate, and find all pairs (x,y) that form the border of the 50 km circle. Now check whether they are in the rain zone and alert the user.
For more safety I recommend using multiple radii (smaller than 50 km), because a whole rain cloud could lie inside the circle without touching its border, and your app would not recognize it. For example, use 3 inner circles (r = 5 km, 15 km, 30 km) and do the same thing. The efficiency of this algorithm depends only on the number of angles and the number of inner circles.
The pseudocode would be:
checkRainDanger()
    p, q <- user position
    radius[] <- array of radii
    for c = 1 to length(radius)
        a = 0
        while (a < 2*pi)
            x = p + radius[c]*cos(a)
            y = q + radius[c]*sin(a)
            if rainZone(x, y)
                return true
            a += pi/10          // 20 samples per circle
        end_while
    end_for
    return false                // no danger
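A minimal C++ version of the same sampling idea; rainZone(x, y) is again a stand-in for your radar lookup, and the radii values just follow the example above:

#include <cmath>

bool rainZone(double x, double y);  // hypothetical radar lookup

// Sample points on several concentric circles around (p, q).
bool checkRainDanger(double p, double q)
{
    const double radii[] = { 5.0, 15.0, 30.0, 50.0 };           // km
    for (double r : radii)
        for (double a = 0.0; a < 2.0 * M_PI; a += M_PI / 10.0)  // 20 samples
            if (rainZone(p + r * std::cos(a), q + r * std::sin(a)))
                return true;
    return false;  // no danger
}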
from math import sqrt

r2 = r * r
for x in range(-r, r + 1):
    max_y = int(sqrt(r2 - x * x))      # half-height of the circle in this column
    for y in range(-max_y, max_y + 1):
        # (x, y) is within the radius - check for rain here
        pass
I have a lot of points on the surface of the sphere.
How can I calculate the area/spot of the sphere that has the largest point density?
I need this to be done very fast. If this were a flat square, for example, I guess I could create a grid and let the points vote for which grid cell is best.
I have tried transforming the points to spherical coordinates and then making a grid, but this did not work well, since points near the north pole are close on the sphere but far apart after the transform.
Thanks
There is in fact no real reason to partition the sphere into a regular non-overlapping mesh; try this:
partition your sphere into semi-overlapping circles
see here for generating uniformly distributed points (your circle centers)
Dispersing n points uniformly on a sphere
you can identify the points in each circle very fast by a simple dot product. It really doesn't matter if some points are double-counted; the circle with the most points still represents the highest density
Mathematica implementation
this takes 12 seconds to analyze 5000 points (and took about 10 minutes to write):
testcircles = { RandomReal[ {0, 1}, {3}] // Normalize};
Do[While[ (test = RandomReal[ {-1, 1}, {3}] // Normalize ;
     Select[testcircles , #.test > .9 & , 1] ) == {} ];
   AppendTo[testcircles, test];, {2000}];
vmax = testcircles[[First@
     Ordering[-Table[
        Count[ (testcircles[[i]].#) & /@ points , x_ /; x > .98 ] ,
        {i, Length[testcircles]}], 1]]];
To add some other, alternative schemes to the mix: it's possible to define a number of (almost) regular grids on sphere-like geometries by refining an inscribed polyhedron.
The first option is called an icosahedral grid, which is a triangulation of the spherical surface. By joining the centres of the triangles about each vertex, you can also create a dual hexagonal grid based on the underlying triangulation.
Another option, if you dislike triangles (and/or hexagons), is the cubed-sphere grid, formed by subdividing the faces of an inscribed cube and projecting the result onto the spherical surface.
In either case, the important point is that the resulting grids are almost regular -- so to evaluate the region of highest density on the sphere you can simply perform a histogram-style analysis, counting the number of samples per grid cell.
As a number of commenters have pointed out, to account for the slight irregularity in the grid it's possible to normalise the histogram counts by dividing through by the area of each grid cell. The resulting density is then given as a "per unit area" measure. To calculate the area of each grid cell there are two options: (i) you could calculate the "flat" area of each cell, by assuming that the edges are straight lines -- such an approximation is probably pretty good when the grid is sufficiently dense, or (ii) you can calculate the "true" surface areas by evaluating the necessary surface integrals.
If you are interested in performing the requisite "point-in-cell" queries efficiently, one approach is to construct the grid as a quadtree -- starting with a coarse inscribed polyhedron and refining its faces into a tree of sub-faces. To locate the enclosing cell you can simply traverse the tree from the root, which is typically an O(log(n)) operation.
You can get some additional information regarding these grid types here.
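As a hedged illustration of the histogram idea on a cubed-sphere grid (gnomonic binning, without the area normalisation discussed above; N, histogram, and binPoint are all made-up names):

#include <cmath>

const int N = 32;                  // cells per cube-face edge
int histogram[6][N][N] = {};       // sample counts per cell

// Bin a unit vector (x, y, z) into a cell of the cubed-sphere grid:
// pick the dominant axis (= cube face), then project gnomonically.
void binPoint(double x, double y, double z)
{
    double ax = std::fabs(x), ay = std::fabs(y), az = std::fabs(z);
    int face; double u, v, m;
    if (ax >= ay && ax >= az) { face = (x > 0) ? 0 : 1; m = ax; u = y / m; v = z / m; }
    else if (ay >= az)        { face = (y > 0) ? 2 : 3; m = ay; u = x / m; v = z / m; }
    else                      { face = (z > 0) ? 4 : 5; m = az; u = x / m; v = y / m; }
    // u, v are in [-1, 1] on the face; map to cell indices.
    int i = (int)((u + 1.0) * 0.5 * N); if (i >= N) i = N - 1;
    int j = (int)((v + 1.0) * 0.5 * N); if (j >= N) j = N - 1;
    histogram[face][i][j]++;
}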
Treating points on a sphere as 3D points might not be so bad.
Try either:
Select k, then do an approximate k-NN search in 3D for each point in the data (or each selected point of interest), and weight the results by their distance to the query point. Complexity varies for different approximate k-NN algorithms.
Build a space-partitioning data structure like a k-d tree, then do an approximate (or exact) range-counting query with a ball range centered at each point in the data (or each selected point of interest). The complexity is O(log(n) + epsilon^(-3)) or O(epsilon^(-3)*log(n)) per approximate range query with state-of-the-art algorithms, where epsilon is the range error threshold w.r.t. the size of the query ball. For an exact range query, the complexity is O(n^(2/3)) per query.
Partition the sphere into equal-area regions (bounded by parallels and meridians), as described in my answer there, and count the points in each region.
The aspect ratio of the regions will not be uniform (the equatorial regions will be more "squarish" when N~M, while the polar regions will be more elongated).
This is not a problem, because the diameters of the regions go to 0 as N and M increase.
The computational simplicity of this method trumps the better uniformity of domains in the other excellent answers, which contain beautiful pictures.
One simple modification would be to add two "polar cap" regions to the N*M regions described in the linked answer to improve numeric stability (when a point is very close to a pole, its longitude is not well defined). That way the aspect ratio of the regions is bounded. A hedged sketch of this binning follows below.
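One way to get the equal-area property is to space the parallels equally in z rather than in latitude (by Archimedes' hat-box theorem, bands of equal z-height have equal area). A minimal sketch under that assumption; N, M, counts, and binPoint are made-up names:

#include <cmath>

const int N = 16;                 // latitude bands
const int M = 32;                 // longitude sectors
int counts[N][M] = {};

// Bin a unit vector into one of N*M equal-area regions: bands equally
// spaced in z are equal-area, so every cell covers the same solid angle.
void binPoint(double x, double y, double z)
{
    int i = (int)((z + 1.0) * 0.5 * N);            // latitude band from z
    if (i >= N) i = N - 1;
    double lon = std::atan2(y, x) + M_PI;          // longitude in [0, 2*pi)
    int j = (int)(lon / (2.0 * M_PI) * M);
    if (j >= M) j = M - 1;
    counts[i][j]++;
}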
You can use the Peters projection, which preserves areas.
This will allow you to efficiently count the points in a grid, but also in a sliding window (a box Parzen window), by using the integral-image trick.
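A hedged sketch of that integral-image (summed-area table) counting, assuming the points have already been binned into an equal-area W x H grid (all names here are made up):

// counts[i][j]: per-cell histogram from an equal-area projection (W x H).
// Build a summed-area table so any axis-aligned window sums in O(1).
const int W = 64, H = 32;
long long sat[H + 1][W + 1] = {};  // sat[i][j] = sum of counts above-left of (i, j)

void buildSAT(const int counts[H][W])
{
    for (int i = 0; i < H; ++i)
        for (int j = 0; j < W; ++j)
            sat[i + 1][j + 1] = counts[i][j]
                              + sat[i][j + 1] + sat[i + 1][j] - sat[i][j];
}

// Number of points in the window [r0, r1) x [c0, c1).
long long windowCount(int r0, int c0, int r1, int c1)
{
    return sat[r1][c1] - sat[r0][c1] - sat[r1][c0] + sat[r0][c0];
}

Note that a window wrapping around the longitude seam of the grid would need to be split into two queries.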
If I understand correctly, you are trying to find the densest spot on the sphere, i.e. where the points are most concentrated.
Consider Cartesian coordinates and find the mean X,Y,Z of the points.
Find the closest point to the mean X,Y,Z that is on the sphere (you may consider using spherical coordinates; just extend the radius to the original radius).
Constraints
If the distance between the mean X,Y,Z and the center is less than r/2, then this algorithm may not work as desired.
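A minimal sketch of this mean-direction idea, assuming a hypothetical Vec3 type:

#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

// Mean of the points, projected back onto a sphere of radius r.
// (Degenerates when the mean is near the centre - see the constraint above.)
Vec3 densityDirection(const std::vector<Vec3>& pts, double r)
{
    Vec3 m = { 0, 0, 0 };
    for (const Vec3& p : pts) { m.x += p.x; m.y += p.y; m.z += p.z; }
    double len = std::sqrt(m.x * m.x + m.y * m.y + m.z * m.z);
    return { r * m.x / len, r * m.y / len, r * m.z / len };
}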
I am not a master of mathematics, but maybe it can be solved analytically, something like:
1. Sort the coordinates
2. R = (Σ_{n=0}^{max} Σ_{m=0}^{n} (1/A^diff_in_consecutive) * angle) / Σ angle
A - any constant
This is really just an inverse of this answer of mine
Just invert the equations mapping equidistant sphere-surface vertices to a surface cell index. Don't even try to visualize the cells as anything other than circles or you will go mad. But if someone actually does it, then please post the result here (and let me know).
Now just create a 2D cell map and do the density computation in O(N) (the way histograms are done), similar to what Darren Engwirda proposes in his answer.
This is how the code looks in C++:
//---------------------------------------------------------------------------
const int na=16; // sphere slices
int nb[na]; // cells per slice
const int na2=na<<1;
int map[na][na2]; // surface cells
const double da=M_PI/double(na-1); // latitude angle step
double db[na]; // longitude angle step per slice
// spherical -> orthonormal (Cartesian)
void abr2xyz(double &x,double &y,double &z,double a,double b,double R)
    {
    double r;
    r=R*cos(a);
    z=R*sin(a);
    y=r*sin(b);
    x=r*cos(b);
    }
// spherical -> surface cell
void ab2ij(int &i,int &j,double a,double b)
    {
    i=int(((a+(0.5*M_PI))/da)+0.5);
    if (i>=na) i=na-1;
    if (i< 0) i=0;
    j=int(( b /db[i])+0.5);
    while (j< 0) j+=nb[i];
    while (j>=nb[i]) j-=nb[i];
    }
// spherical <- surface cell
void ij2ab(double &a,double &b,int i,int j)
    {
    if (i>=na) i=na-1;
    if (i< 0) i=0;
    a=-(0.5*M_PI)+(double(i)*da);
    b= double(j)*db[i];
    }
// init variables and clear map
void ij_init()
    {
    int i,j;
    double a;
    for (a=-0.5*M_PI,i=0;i<na;i++,a+=da)
        {
        nb[i]=ceil(2.0*M_PI*cos(a)/da); // compute actual circle cell count
        if (nb[i]<=0) nb[i]=1;
        db[i]=2.0*M_PI/double(nb[i]);   // longitude angle step
        if ((i==0)||(i==na-1)) { nb[i]=1; db[i]=1.0; }
        for (j=0;j<na2;j++) map[i][j]=0; // clear cell map
        }
    }
//---------------------------------------------------------------------------
// this just draws a circle from point x0,y0,z0 with normal nx,ny,nz and radius r
// it needs some vector stuff of mine, so I did not copy the body here (it is not important)
void glCircle3D(double x0,double y0,double z0,double nx,double ny,double nz,double r,bool _fill);
//---------------------------------------------------------------------------
void analyse()
    {
    // n is the number of points; r is just the visual radius of the sphere for rendering
    int i,j,ii,jj,n=1000;
    double x,y,z,a,b,c,cm=1.0/10.0,r=1.0;
    // init
    ij_init();      // init variables and map[][]
    RandSeed=10;    // just to have the same random points generated every frame (no need to store them)
    // generate, draw, and process some random surface points
    for (i=0;i<n;i++)
        {
        a=M_PI*(Random()-0.5);
        b=M_PI* Random()*2.0 ;
        ab2ij(ii,jj,a,b);       // cell coords
        abr2xyz(x,y,z,a,b,r);   // 3D orthonormal coords
        map[ii][jj]++;          // update cell density
        // this just draws the point (x,y,z) as a line in OpenGL, so you can ignore it
        double w=1.1;           // w-1.0 is the rendered line size factor
        glBegin(GL_LINES);
        glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
        glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
        glEnd();
        }
    // draw cell grid (color is a function of density)
    for (i=0;i<na;i++)
     for (j=0;j<nb[i];j++)
        {
        ij2ab(a,b,i,j); abr2xyz(x,y,z,a,b,r);
        c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
        glColor3f(0.2,0.2,0.2); glCircle3D(x,y,z,x,y,z,0.45*da,0); // outline
        glColor3f(0.1,0.1,c ); glCircle3D(x,y,z,x,y,z,0.45*da,1); // fill: the denser the cell, the brighter the blue
        }
    }
//---------------------------------------------------------------------------
The result looks like this:
So now just look at what is in the map[][] array; you can find the global/local min/max of density or whatever you need. Just do not forget that the size is map[na][nb[i]], where i is the first index in the array. The grid size is controlled by the na constant, and cm is just a density-to-color scale ...
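For example, a hedged sketch of pulling the densest cell out of map[][] (using the helper functions from above):

// find the cell with the highest sample count
int best_i = 0, best_j = 0;
for (int i = 0; i < na; i++)
    for (int j = 0; j < nb[i]; j++)
        if (map[i][j] > map[best_i][best_j]) { best_i = i; best_j = j; }
double a, b;          // spherical coordinates of the densest cell's centre
ij2ab(a, b, best_i, best_j);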
[edit1] Added the quad grid, which is a far more accurate representation of the mapping used.
This is with na=16; the worst rounding errors are at the poles. If you want to be precise, you can weight the density by cell surface size. Every non-pole cell is a simple quad; at the poles it is a triangle fan (a regular polygon).
This is the grid draw code:
// draw cell quad grid (color is a function of density)
int i,j,ii,jj;
double x,y,z,a,b,c,cm=1.0/10.0,mm=0.49,r=1.0;
double dx=mm*da,dy;
for (i=1;i<na-1;i++)    // ignore poles
 for (j=0;j<nb[i];j++)
    {
    dy=mm*db[i];
    ij2ab(a,b,i,j);
    c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
    glColor3f(0.2,0.2,0.2);
    glBegin(GL_LINE_LOOP);
    abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
    glEnd();
    glColor3f(0.1,0.1,c );
    glBegin(GL_QUADS);
    abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
    abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
    glEnd();
    }
i=0; j=0; ii=i+1; dy=mm*db[ii];    // north pole cell
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
i=na-1; j=0; ii=i-1; dy=mm*db[ii]; // south pole cell
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
The mm is the grid cell size; mm=0.5 is the full cell size, less creates a gap between cells.
If you want the radial region of greatest density, this is the robust disk covering problem with k = 1 and dist(a, b) = the great-circle distance between a and b (see https://en.wikipedia.org/wiki/Great-circle_distance):
https://www4.comp.polyu.edu.hk/~csbxiao/paper/2003%20and%20before/PDCS2003.pdf
Consider using a geographic method to solve this. GIS tools, geography data types in SQL, etc. all handle the curvature of a spheroid. You might have to find a coordinate system that uses a pure sphere instead of an Earth-like spheroid if you are not actually modelling something on Earth.
For speed, if you have large numbers of points and want the densest location among them, a raster heatmap-type solution might work well. You could create low-resolution rasters first, then zoom into areas of high density and create higher-resolution rasters only for the cells you care about.
I have two sets of 3D points (original and reconstructed) and correspondence information about the pairs, i.e. which point from one set represents which in the second one. I need to find the 3D translation and scaling factor which transforms the reconstructed set so that the sum of squared distances is least (rotation would be nice too, but the points are rotated similarly, so this is not the main priority and may be omitted for the sake of simplicity and speed). And so my question is: is this solved and available somewhere on the Internet? Personally, I would use the least-squares method, but I don't have much time (and although I'm somewhat good at math, I don't use it often, so it would be better for me to avoid it), so I would like to use someone else's solution if it exists. I prefer a solution in C++, for example using OpenCV, but the algorithm alone is good enough.
If there is no such solution, I will calculate it by myself; I don't want to bother you so much.
SOLUTION: (from your answers)
For me it's the Kabsch algorithm.
Base info: http://en.wikipedia.org/wiki/Kabsch_algorithm
General solution: http://nghiaho.com/?page_id=671
STILL NOT SOLVED:
I also need the scale. The scale values from the SVD are not understandable to me; when I expect a scale of about 1-4 for all axes (estimated by me), the SVD scale is about [2000, 200, 20], which is not helping at all.
Since you are already using the Kabsch algorithm, just have a look at Umeyama's paper, which extends it to get the scale. All you need to do is compute the variance of your points and calculate the scale as:

(1/sigma^2) * trace(D*S)

where D is the diagonal matrix in the SVD decomposition from the rotation estimation and S is either the identity matrix or the diag(1, 1, -1) matrix, depending on the sign of the determinant of U*V (which Kabsch uses to correct reflections into proper rotations). So if you have [2000, 200, 20], multiply the last element by +-1 (depending on the sign of the determinant of U*V), sum them, and divide by the variance sigma^2 of your points to get the scale.
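A hedged Eigen sketch of that formula, assuming you already have the singular values and the determinant sign from the rotation estimation, and sigma2, the variance (mean squared deviation) of your source points:

#include <Eigen/Dense>

// d: singular values of the covariance matrix (the diagonal of D);
// detUV: determinant of U*V from the same SVD; sigma2: variance of the points.
double umeyamaScale(const Eigen::Vector3d& d, double detUV, double sigma2)
{
    Eigen::Vector3d s(1.0, 1.0, (detUV < 0.0) ? -1.0 : 1.0); // reflection fix
    return d.dot(s) / sigma2;                                // (1/sigma^2)*trace(D*S)
}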
You can recycle the following code, which uses the Eigen library:
typedef Eigen::Matrix<double, 3, 1, Eigen::DontAlign> Vector3d_U; // microsoft's 32-bit compiler can't put Eigen::Vector3d inside a std::vector. for other compilers or for 64-bit, feel free to replace this by Eigen::Vector3d
/**
 * @brief rigidly aligns two sets of poses
 *
 * This calculates such a relative pose <tt>R, t</tt>, such that:
 *
 * @code
 * _TyVector v_pose = R * r_vertices[i] + t;
 * double f_error = (r_tar_vertices[i] - v_pose).squaredNorm();
 * @endcode
 *
 * The sum of squared errors in <tt>f_error</tt> for each <tt>i</tt> is minimized.
 *
 * @param[in] r_vertices is a set of vertices to be aligned
 * @param[in] r_tar_vertices is a set of vertices to align to
 *
 * @return Returns a relative pose that rigidly aligns the two given sets of poses.
 *
 * @note This requires the two sets of poses to have the corresponding vertices stored under the same index.
 */
static std::pair<Eigen::Matrix3d, Eigen::Vector3d> t_Align_Points(
    const std::vector<Vector3d_U> &r_vertices, const std::vector<Vector3d_U> &r_tar_vertices)
{
    _ASSERTE(r_tar_vertices.size() == r_vertices.size());
    const size_t n = r_vertices.size();

    Eigen::Vector3d v_center_tar3 = Eigen::Vector3d::Zero(), v_center3 = Eigen::Vector3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        v_center_tar3 += r_tar_vertices[i];
        v_center3 += r_vertices[i];
    }
    v_center_tar3 /= double(n);
    v_center3 /= double(n);
    // calculate centers of positions, potentially extend to 3D

    double f_sd2_tar = 0, f_sd2 = 0; // only one of those is really needed
    Eigen::Matrix3d t_cov = Eigen::Matrix3d::Zero();
    for(size_t i = 0; i < n; ++ i) {
        Eigen::Vector3d v_vert_i_tar = r_tar_vertices[i] - v_center_tar3;
        Eigen::Vector3d v_vert_i = r_vertices[i] - v_center3;
        // get both vertices

        f_sd2 += v_vert_i.squaredNorm();
        f_sd2_tar += v_vert_i_tar.squaredNorm();
        // accumulate squared standard deviation (only one of those is really needed)

        t_cov.noalias() += v_vert_i * v_vert_i_tar.transpose();
        // accumulate covariance
    }
    // calculate the covariance matrix

    Eigen::JacobiSVD<Eigen::Matrix3d> svd(t_cov, Eigen::ComputeFullU | Eigen::ComputeFullV);
    // calculate the SVD

    Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
    // compute the rotation

    double f_det = R.determinant();
    Eigen::Vector3d e(1, 1, (f_det < 0)? -1 : 1);
    // calculate determinant of V*U^T to disambiguate rotation sign

    if(f_det < 0)
        R.noalias() = svd.matrixV() * e.asDiagonal() * svd.matrixU().transpose();
    // recompute the rotation part if the determinant was negative

    R = Eigen::Quaterniond(R).normalized().toRotationMatrix();
    // renormalize the rotation (not needed but gives slightly more orthogonal transformations)

    double f_scale = svd.singularValues().dot(e) / f_sd2_tar;
    double f_inv_scale = svd.singularValues().dot(e) / f_sd2; // only one of those is needed
    // calculate the scale

    R *= f_inv_scale;
    // apply scale

    Eigen::Vector3d t = v_center_tar3 - (R * v_center3); // R needs to contain scale here, otherwise the translation is wrong
    // want to align center with ground truth

    return std::make_pair(R, t); // or put it in a single 4x4 matrix if you like
}
For 3D points the problem is known as the absolute orientation problem. A C++ implementation is available in Eigen (http://eigen.tuxfamily.org/dox/group__Geometry__Module.html#gab3f5a82a24490b936f8694cf8fef8e60) and the paper is at http://web.stanford.edu/class/cs273/refs/umeyama.pdf
You can use it from OpenCV by converting the matrices to Eigen with cv::cv2eigen() calls.
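Eigen ships this as Eigen::umeyama(); a minimal usage sketch (the point values here are made up):

#include <Eigen/Geometry>  // Eigen::umeyama
#include <iostream>

int main()
{
    // Each column is one point; src and dst must have corresponding columns.
    Eigen::Matrix3Xd src(3, 4), dst(3, 4);
    src << 1, 2, -2, -1,
           2, 3,  1, -6,
           0, 0,  0,  0;
    // Hypothetical target set: src scaled by 2 and shifted by (1, 0, 0).
    dst = 2.0 * src;
    dst.row(0).array() += 1.0;

    // 4x4 homogeneous transform T minimizing ||dst - T*src||^2 (with scaling).
    Eigen::Matrix4d T = Eigen::umeyama(src, dst, true);
    std::cout << T << std::endl;  // rotation*scale in the 3x3 block, translation in the last column
    return 0;
}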
Start by translating both sets of points so that their centroids coincide with the origin of the coordinate system. The translation vector is just the difference between these centroids.
Now we have two sets of coordinates represented as matrices P and Q. One set of points may be obtained from the other by applying some linear operator (which performs both scaling and rotation). This operator is represented by a 3x3 matrix X:

P * X = Q

To find the proper scale/rotation we just need to solve this matrix equation, find X, and then decompose it into several matrices, each representing some scaling or rotation.
A simple (but probably not numerically stable) way to solve it is to multiply both sides of the equation by the transposed matrix P (to get rid of non-square matrices), and then multiply both sides by the inverse of P^T * P:

P^T * P * X = P^T * Q
X = (P^T * P)^-1 * P^T * Q
Applying singular value decomposition to the matrix X gives two rotation matrices and a matrix with scale factors:

X = U * S * V^T

Here S is a diagonal matrix with scale factors (one scale for each coordinate), and U and V are rotation matrices: one properly rotates the points so that they may be scaled along the coordinate axes, the other rotates them once more to align their orientation with the second set of points.
Example (2D points are used for simplicity):
P = 1 2 Q = 7.5391 4.3455
2 3 12.9796 5.8897
-2 1 -4.5847 5.3159
-1 -6 -15.9340 -15.5511
After solving the equation:
X = 3.3417 -1.2573
2.0987 2.8014
After SVD decomposition:
U = -0.7317 -0.6816
-0.6816 0.7317
S = 4 0
0 3
V = -0.9689 -0.2474
-0.2474 0.9689
Here SVD has properly reconstructed all manipulations I performed on matrix P to get matrix Q: rotate by the angle 0.75, scale X axis by 4, scale Y axis by 3, rotate by the angle -0.25.
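A hedged Eigen sketch of this procedure on the 2D example above, using an LDLT solve of the normal equations instead of the explicit inverse (slightly better numerically):

#include <Eigen/Dense>
#include <iostream>

int main()
{
    // The 2D example sets from above; each row is one point.
    Eigen::MatrixXd P(4, 2), Q(4, 2);
    P <<  1,  2,
          2,  3,
         -2,  1,
         -1, -6;
    Q <<   7.5391,   4.3455,
          12.9796,   5.8897,
          -4.5847,   5.3159,
         -15.9340, -15.5511;

    // X = (P^T * P)^-1 * P^T * Q, via a decomposition rather than the inverse.
    Eigen::MatrixXd X = (P.transpose() * P).ldlt().solve(P.transpose() * Q);

    // Decompose X = U * S * V^T; the singular values are the scale factors.
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(X, Eigen::ComputeFullU | Eigen::ComputeFullV);
    std::cout << "scales: " << svd.singularValues().transpose() << std::endl; // ~[4, 3]
    return 0;
}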
If the sets of points are scaled uniformly (the scale factor is equal along each axis), this procedure may be significantly simplified.
Just use the Kabsch algorithm to get the translation/rotation values. Then perform this translation and rotation (the centroids should coincide with the origin of the coordinate system). Then, for each pair of points (and for each coordinate), estimate a linear regression. The linear regression coefficient is exactly the scale factor.
A good explanation: Finding optimal rotation and translation between corresponding 3D points
The code is in MATLAB but it's trivial to convert to OpenCV using the cv::SVD function.
You might want to try ICP (Iterative closest point).
Given two sets of 3d points, it will tell you the transformation (rotation + translation) to go from the first set to the second one.
If you're interested in a c++ lightweight implementation, try libicp.
Good luck!
The general transformation, as well as the scale, can be retrieved via Procrustes analysis. It works by superimposing the objects on top of each other and estimating the transformation from that setting. It has been used in the context of ICP many times. In fact your preference, the Kabsch algorithm, is a special case of this.
Moreover, Horn's alignment algorithm (based on quaternions) also finds a very good solution while being quite efficient. A MATLAB implementation is also available.
Scale can be inferred without the SVD if your points are uniformly scaled in all directions (I could not make sense of the SVD scale matrix either). Here is how I solved the same problem:
Measure the distances of each point to the other points in the point cloud to get a 2D table of distances, where the entry at (i,j) is norm(point_i - point_j). Do the same thing for the other point cloud, so you get two tables, one for the original and the other for the reconstructed points.
Divide all values in one table by the corresponding values in the other table. Because the points correspond to each other, the distances do too. Ideally, the resulting table would have all values equal to each other, and this is the scale.
The median value of the divisions should be pretty close to the scale you are looking for. The mean value is also close, but I chose the median just to exclude outliers.
Now you can use the scale value to scale all the reconstructed points and then proceed to estimating the rotation.
Tip: if there are too many points in the point clouds to find distances between them all, then a smaller subset of distances will work too, as long as it is the same subset for both point clouds. Ideally, just one distance pair would work if there is no measurement noise, e.g. when one point cloud is directly derived from the other by just rotating it.
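A hedged C++ sketch of the median-ratio estimate, assuming a hypothetical Vec3 type and at least two corresponding points per cloud:

#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { double x, y, z; };

static double dist(const Vec3& a, const Vec3& b)
{
    return std::sqrt((a.x - b.x) * (a.x - b.x)
                   + (a.y - b.y) * (a.y - b.y)
                   + (a.z - b.z) * (a.z - b.z));
}

// Median ratio of corresponding pairwise distances = scale estimate
// (reconstructed -> original).
double estimateScale(const std::vector<Vec3>& original, const std::vector<Vec3>& reconstructed)
{
    std::vector<double> ratios;
    for (size_t i = 0; i < original.size(); ++i)
        for (size_t j = i + 1; j < original.size(); ++j)
        {
            double d = dist(reconstructed[i], reconstructed[j]);
            if (d > 0.0)
                ratios.push_back(dist(original[i], original[j]) / d);
        }
    std::nth_element(ratios.begin(), ratios.begin() + ratios.size() / 2, ratios.end());
    return ratios[ratios.size() / 2];  // median, robust to outliers
}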
You can also use the ScaleRatio ICP proposed by BaoweiLin. The code can be found on GitHub.