I have a lot of points on the surface of a sphere.
How can I calculate the area/spot of the sphere that has the largest point density?
I need this to be done very fast. If this were a square, for example, I guess I could create a grid and then let the points vote for which part of the grid is best.
I have tried transforming the points to spherical coordinates and then building a grid, but this did not work well, since points near the north pole are close together on the sphere but far apart after the transform.
Thanks
There is in fact no real reason to partition the sphere into a regular, non-overlapping mesh. Try this instead:
partition your sphere into semi-overlapping circles
see here for generating uniformly distributed points (your circle centers)
Dispersing n points uniformly on a sphere
you can identify the points in each circle very quickly with a simple dot product. It really doesn't matter if some points are double-counted; the circle with the most points still represents the highest density.
Mathematica implementation
This takes 12 seconds to analyze 5000 points (and took about 10 minutes to write).
testcircles = { RandomReal[ {0, 1}, {3}] // Normalize};
Do[While[ (test = RandomReal[ {-1, 1}, {3}] // Normalize ;
Select[testcircles , #.test > .9 & , 1] ) == {} ];
AppendTo[testcircles, test];, {2000}];
vmax = testcircles[[First@
   Ordering[-Table[
      Count[ (testcircles[[i]].#) & /@ points , x_ /; x > .98 ] ,
{i, Length[testcircles]}], 1]]];
To add some other alternative schemes to the mix: it's possible to define a number of (almost) regular grids on sphere-like geometries by refining an inscribed polyhedron.
The first option is called an icosahedral grid, which is a triangulation of the spherical surface. By joining the centres of the triangles about each vertex, you can also create a dual hexagonal grid based on the underlying triangulation.
Another option, if you dislike triangles (and/or hexagons), is the cubed-sphere grid, formed by subdividing the faces of an inscribed cube and projecting the result onto the spherical surface.
In either case, the important point is that the resulting grids are almost regular -- so to evaluate the region of highest density on the sphere you can simply perform a histogram-style analysis, counting the number of samples per grid cell.
As a number of commenters have pointed out, to account for the slight irregularity in the grid it's possible to normalise the histogram counts by dividing through by the area of each grid cell. The resulting density is then given as a "per unit area" measure. To calculate the area of each grid cell there are two options: (i) you could calculate the "flat" area of each cell, by assuming that the edges are straight lines -- such an approximation is probably pretty good when the grid is sufficiently dense, or (ii) you can calculate the "true" surface areas by evaluating the necessary surface integrals.
If you are interested in performing the requisite "point-in-cell" queries efficiently, one approach is to construct the grid as a quadtree -- starting with a coarse inscribed polyhedron and refining its faces into a tree of sub-faces. To locate the enclosing cell you can simply traverse the tree from the root, which is typically an O(log(n)) operation.
You can get some additional information regarding these grid types here.
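As a rough illustration of the histogram idea on a cubed-sphere grid, here is a sketch in C++ (my own code, not taken from the linked material): map each unit vector to a (face, row, column) cell via the gnomonic projection, accumulate counts, and take the argmax as the densest cell. Normalising by cell area, as described above, would turn the counts into a per-unit-area density.
#include <cmath>

const int N=32;                 // cells per cube-face edge (assumed resolution)
int hist[6][N][N];              // per-cell sample counts

// map a unit vector (x,y,z) to a cubed-sphere cell (face,i,j): pick the dominant
// axis, then gnomonically project onto that cube face and bin the result
void cell_of(double x,double y,double z,int &face,int &i,int &j)
{
 double ax=fabs(x), ay=fabs(y), az=fabs(z), u, v;
 if ((ax>=ay)&&(ax>=az)) { face=(x>0.0)?0:1; u=y/ax; v=z/ax; }
 else if (ay>=az)        { face=(y>0.0)?2:3; u=x/ay; v=z/ay; }
 else                    { face=(z>0.0)?4:5; u=x/az; v=y/az; }
 i=int((u+1.0)*0.5*double(N)); if (i>=N) i=N-1;     // u,v in [-1,1] -> [0,N)
 j=int((v+1.0)*0.5*double(N)); if (j>=N) j=N-1;
}

// histogram the samples and return the densest cell
void densest_cell(const double pts[][3],int n,int &bf,int &bi,int &bj)
{
 int f,i,j;
 for (f=0;f<6;f++) for (i=0;i<N;i++) for (j=0;j<N;j++) hist[f][i][j]=0;
 for (int k=0;k<n;k++) { cell_of(pts[k][0],pts[k][1],pts[k][2],f,i,j); hist[f][i][j]++; }
 bf=0; bi=0; bj=0;
 for (f=0;f<6;f++) for (i=0;i<N;i++) for (j=0;j<N;j++)
  if (hist[f][i][j]>hist[bf][bi][bj]) { bf=f; bi=i; bj=j; }
}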
Treating points on a sphere as 3D points might not be so bad.
Try either:
Select k, do an approximate k-NN search in 3D for each point in the data (or for selected points of interest), then weight the results by their distance to the query point. Complexity may vary for different approximate k-NN algorithms.
Build a space-partitioning data structure like a k-d tree, then do approximate (or exact) range-counting queries with a ball range centered at each point in the data or at selected points of interest (see the sketch below). Complexity is O(log(n) + epsilon^(-3)) or O(epsilon^(-3)*log(n)) for each approximate range query with state-of-the-art algorithms, where epsilon is the range error threshold w.r.t. the size of the query ball. For an exact range query, the complexity is O(n^(2/3)) per query.
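A minimal sketch of the exact range-counting option, using brute force in place of a k-d tree (my own illustration; a real implementation would answer each ball query with the tree instead of the inner loop). An angular radius theta on the unit sphere corresponds to a Euclidean chord radius of 2*sin(theta/2):
#include <cmath>

// for each point, count how many points lie within angular distance `theta`
// of it on the unit sphere; the point with the largest count marks the densest
// neighbourhood. Brute force O(n^2); replace the inner loop with a k-d tree
// ball query for large n.
int densest_point(const double pts[][3],int n,double theta)
{
 double r=2.0*sin(0.5*theta), r2=r*r;       // chord length matching the angle theta
 int best=0, bestcnt=-1;
 for (int i=0;i<n;i++)
    {
    int cnt=0;
    for (int j=0;j<n;j++)
        {
        double dx=pts[i][0]-pts[j][0];
        double dy=pts[i][1]-pts[j][1];
        double dz=pts[i][2]-pts[j][2];
        if (dx*dx+dy*dy+dz*dz<=r2) cnt++;
        }
    if (cnt>bestcnt) { bestcnt=cnt; best=i; }
    }
 return best;                               // index of the point with the most neighbours
}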
Partition the sphere into equal-area regions (bounded by parallels and meridians) as described in my answer there and count the points in each region.
The aspect ratio of the regions will not be uniform (the equatorial regions will be more "squarish" when N~M, while the polar regions will be more elongated).
This is not a problem because the diameters of the regions go to 0 as N and M increase.
The computational simplicity of this method trumps the better uniformity of domains in the other excellent answers, which contain beautiful pictures.
One simple modification would be to add two "polar cap" regions to the N*M regions described in the linked answer to improve the numeric stability (when the point is very close to a pole, its longitude is not well defined). This way the aspect ratio of the regions is bounded.
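A minimal sketch of this kind of scheme (my own code, not the linked answer): N latitude bands made equal-area by spacing their boundaries uniformly in z, each split into M equal longitude sectors, so every one of the N*M cells has surface area 4*pi/(N*M) and the raw counts compare directly.
#include <cmath>

const int N=20, M=20;            // latitude bands x longitude sectors (assumed sizes)
int bins[N][M];                  // per-cell point counts

// cell of a unit vector: bands are uniform in z (equal area),
// sectors are uniform in longitude
void cell_of(double x,double y,double z,int &i,int &j)
{
 i=int((z+1.0)*0.5*double(N));             if (i>=N) i=N-1; if (i<0) i=0;
 double lon=atan2(y,x);                    // longitude in (-pi, pi]
 j=int((lon+M_PI)/(2.0*M_PI)*double(M));   if (j>=M) j=M-1; if (j<0) j=0;
}
Counting the points per cell and taking the cell with the largest count then works exactly like the histogram analyses in the other answers.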
You can use the Peters projection, which preserves the areas.
This will allow you to efficiently count the points not only in a grid, but also in a sliding window (a box Parzen window), by using the integral image trick.
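A minimal sketch of that idea (my own code): project each point to (longitude, z), an equal-area cylindrical mapping in the same spirit as the Peters projection, bin into a W x H grid, build a summed-area table (integral image), and slide a w x h box over it to find the densest window. For brevity the sketch ignores the wrap-around in longitude, which a real implementation would handle.
#include <cmath>
#include <vector>

// return the sample count of the densest w x h cell window on a W x H
// equal-area (longitude, z) grid; bi,bj receive the window's lower-left cell
int densest_window(const double pts[][3],int n,int W,int H,int w,int h,int &bi,int &bj)
{
 std::vector<int> g(W*H,0), sat((W+1)*(H+1),0);
 for (int k=0;k<n;k++)                               // bin the points
 {
  int i=int((atan2(pts[k][1],pts[k][0])+M_PI)/(2.0*M_PI)*double(W)); if (i>=W) i=W-1; if (i<0) i=0;
  int j=int((pts[k][2]+1.0)*0.5*double(H));                          if (j>=H) j=H-1;
  g[j*W+i]++;
 }
 for (int j=1;j<=H;j++) for (int i=1;i<=W;i++)       // summed-area table (integral image)
  sat[j*(W+1)+i]=g[(j-1)*W+(i-1)]+sat[(j-1)*(W+1)+i]+sat[j*(W+1)+i-1]-sat[(j-1)*(W+1)+i-1];
 int best=-1; bi=0; bj=0;
 for (int j=0;j+h<=H;j++) for (int i=0;i+w<=W;i++)   // slide the box window
 {
  int c=sat[(j+h)*(W+1)+(i+w)]-sat[j*(W+1)+(i+w)]-sat[(j+h)*(W+1)+i]+sat[j*(W+1)+i];
  if (c>best) { best=c; bi=i; bj=j; }
 }
 return best;
}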
If I understand correctly, you are trying to find the point on the sphere around which the points are densest.
Consider Cartesian coordinates and find the mean X,Y,Z of the points.
Find the closest point to the mean X,Y,Z that is on the sphere (you may consider using spherical coordinates; just extend the radius to the original radius).
Constraints
If the distance between the mean X,Y,Z and the center is less than r/2, then this algorithm may not work as desired.
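A minimal sketch of this heuristic (my own code, assuming the sphere has radius r and is centred at the origin): average the points, then scale the mean back out to the surface.
#include <cmath>

// project the centroid of the points onto the sphere of radius r
// (unreliable when the centroid falls near the centre, as noted above)
void dense_direction(const double pts[][3],int n,double r,double &x,double &y,double &z)
{
 x=0.0; y=0.0; z=0.0;
 for (int k=0;k<n;k++) { x+=pts[k][0]; y+=pts[k][1]; z+=pts[k][2]; }
 double len=sqrt((x*x)+(y*y)+(z*z));
 if (len>1e-9) { x*=r/len; y*=r/len; z*=r/len; }     // scale the mean out to the surface
}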
I am not a master of mathematics, but maybe it can be solved analytically, something like:
1. Sort the coordinates.
2. R = (Σ(n=0..max) Σ(m=0..n) (1/A^diff_in_consecutive) * angle) / Σ angle
where A may be any constant.
This is really just an inverse of this answer of mine: just invert the equations mapping equidistant sphere-surface vertices to a surface cell index. Don't even try to visualize the cells as anything other than circles or you will go mad. But if someone actually does it, then please post the result here (and let me know).
Now just create a 2D cell map and do the density computation in O(N) (the way histograms are done), similar to what Darren Engwirda proposes in his answer.
This is what the code looks like in C++:
//---------------------------------------------------------------------------
const int na=16; // sphere slices
int nb[na]; // cells per slice
const int na2=na<<1;
int map[na][na2]; // surface cells
const double da=M_PI/double(na-1); // latitude angle step
double db[na]; // longitude angle step per slice
// spherical -> orthonormal
void abr2xyz(double &x,double &y,double &z,double a,double b,double R)
{
double r;
r=R*cos(a);
z=R*sin(a);
y=r*sin(b);
x=r*cos(b);
}
// spherical -> surface cell
void ab2ij(int &i,int &j,double a,double b)
{
i=double(((a+(0.5*M_PI))/da)+0.5);
if (i>=na) i=na-1;
if (i< 0) i=0;
j=double(( b /db[i])+0.5);
while (j< 0) j+=nb[i];
while (j>=nb[i]) j-=nb[i];
}
// spherical <- surface cell
void ij2ab(double &a,double &b,int i,int j)
{
if (i>=na) i=na-1;
if (i< 0) i=0;
a=-(0.5*M_PI)+(double(i)*da);
b= double(j)*db[i];
}
// init variables and clear map
void ij_init()
{
int i,j;
double a;
for (a=-0.5*M_PI,i=0;i<na;i++,a+=da)
{
nb[i]=ceil(2.0*M_PI*cos(a)/da); // compute actual circle cell count
if (nb[i]<=0) nb[i]=1;
db[i]=2.0*M_PI/double(nb[i]); // longitude angle step
if ((i==0)||(i==na-1)) { nb[i]=1; db[i]=1.0; }
for (j=0;j<na2;j++) map[i][j]=0; // clear cell map
}
}
//---------------------------------------------------------------------------
// this just draws a circle from point x0,y0,z0 with normal nx,ny,nz and radius r
// it needs some vector code of mine so I did not copy the body here (it is not important)
void glCircle3D(double x0,double y0,double z0,double nx,double ny,double nz,double r,bool _fill);
//---------------------------------------------------------------------------
void analyse()
{
// n is number of points and r is just visual radius of sphere for rendering
int i,j,ii,jj,n=1000;
double x,y,z,a,b,c,cm=1.0/10.0,r=1.0;
// init
ij_init(); // init variables and map[][]
RandSeed=10; // just to have the same random points generated every frame (do not need to store them)
// generate draw and process some random surface points
for (i=0;i<n;i++)
{
a=M_PI*(Random()-0.5);
b=M_PI* Random()*2.0 ;
ab2ij(ii,jj,a,b); // cell coords
abr2xyz(x,y,z,a,b,r); // 3D orthonormal coords
map[ii][jj]++; // update cell density
// this just draws the point (x,y,z) as a line in OpenGL so you can ignore it
double w=1.1; // w-1.0 is rendered line size factor
glBegin(GL_LINES);
glColor3f(1.0,1.0,1.0); glVertex3d(x,y,z);
glColor3f(0.0,0.0,0.0); glVertex3d(w*x,w*y,w*z);
glEnd();
}
// draw cell grid (color is function of density)
for (i=0;i<na;i++)
for (j=0;j<nb[i];j++)
{
ij2ab(a,b,i,j); abr2xyz(x,y,z,a,b,r);
c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2); glCircle3D(x,y,z,x,y,z,0.45*da,0); // outline
glColor3f(0.1,0.1,c ); glCircle3D(x,y,z,x,y,z,0.45*da,1); // filled with a bluish color; the denser the cell, the brighter it is
}
}
//---------------------------------------------------------------------------
The result looks like this:
So now just look at what is in the map[][] array; you can find the global/local min/max of density, or whatever you need. Just do not forget that the size is map[na][nb[i]], where i is the first index into the array. The grid size is controlled by the na constant, and cm is just a density-to-color scale.
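For example, a minimal sketch (my own addition) of scanning map[][] for the densest cell and converting it back to spherical coordinates:
// find the cell with the highest count (assumes ij_init() and the counting loop above ran)
int bi=0,bj=0;
for (int i=0;i<na;i++)
 for (int j=0;j<nb[i];j++)
  if (map[i][j]>map[bi][bj]) { bi=i; bj=j; }
double a,b;
ij2ab(a,b,bi,bj);            // spherical coordinates of the densest cell's center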
[edit1] I got the quad grid working, which is a far more accurate representation of the mapping used.
This is with na=16; the worst rounding errors are at the poles. If you want to be precise, you can weight the density by the cell surface area (a sketch of that weighting appears after the grid draw code below). For all non-pole cells it is a simple quad; for the poles it is a triangle fan (regular polygon).
This is the grid draw code:
// draw cell quad grid (color is function of density)
int i,j,ii,jj;
double x,y,z,a,b,c,cm=1.0/10.0,mm=0.49,r=1.0;
double dx=mm*da,dy;
for (i=1;i<na-1;i++) // ignore poles
for (j=0;j<nb[i];j++)
{
dy=mm*db[i];
ij2ab(a,b,i,j);
c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_QUADS);
abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a+dx,b+dy,r); glVertex3d(x,y,z);
abr2xyz(x,y,z,a+dx,b-dy,r); glVertex3d(x,y,z);
glEnd();
}
i=0; j=0; ii=i+1; dy=mm*db[ii];
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b-dy,r); glVertex3d(x,y,z); }
glEnd();
i=na-1; j=0; ii=i-1; dy=mm*db[ii];
ij2ab(a,b,i,j); c=map[i][j]; c=0.1+(c*cm); if (c>1.0) c=1.0;
glColor3f(0.2,0.2,0.2);
glBegin(GL_LINE_LOOP);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
glColor3f(0.1,0.1,c );
glBegin(GL_TRIANGLE_FAN); abr2xyz(x,y,z,a ,b ,r); glVertex3d(x,y,z);
for (j=0;j<nb[ii];j++) { ij2ab(a,b,ii,j); abr2xyz(x,y,z,a-dx,b+dy,r); glVertex3d(x,y,z); }
glEnd();
The mm is the grid cell size; mm=0.5 is the full cell size, and anything less creates a space between cells.
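As mentioned above, for precise results the counts can be divided by each cell's surface area. A minimal sketch of that weighting (my own addition, assuming the unit-radius sphere used above): the non-pole cells are latitude/longitude quads, and the two pole cells are spherical caps of angular radius da/2.
// surface area of cell (i,j) on the unit sphere; every cell in slice i has the same area
double cell_area(int i)
{
 double a,b;
 ij2ab(a,b,i,0);
 if ((i==0)||(i==na-1)) return 2.0*M_PI*(1.0-cos(0.5*da));   // polar cap
 return db[i]*(sin(a+0.5*da)-sin(a-0.5*da));                 // latitude/longitude quad
}
// density per unit area of cell (i,j) is then:
// double density = double(map[i][j]) / cell_area(i);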
If you want a radial region of the greatest density, this is the robust disk covering problem with k = 1 and dist(a, b) = great circle distance (a, b) (see https://en.wikipedia.org/wiki/Great-circle_distance)
https://www4.comp.polyu.edu.hk/~csbxiao/paper/2003%20and%20before/PDCS2003.pdf
Consider using a geographic method to solve this. GIS tools, geography data types in SQL, etc., all handle the curvature of a spheroid. You might have to find a coordinate system that uses a pure sphere instead of an Earth-like spheroid if you are not actually modelling something on Earth.
For speed, if you have large numbers of points and want their densest location, a raster heatmap type solution might work well. You could create low-resolution rasters, then zoom to areas of high density and create higher-resolution cells only where you care about them.
I am working on my own deferred rendering engine. I am rendering the scene to the g-buffer containing diffuse color, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position for the shader in view space, because I have only depth in the g-buffer and I can't afford a matrix multiplication in every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't represent the view-space position at all. Its x,y coordinates are off the charts; they are often way out of the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y-axis is inverted. Only if the camera is at (0,0,0) does the light stand at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView; // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
&D3DXVECTOR3 (x,y,z), // the camera position
&D3DXVECTOR3 (xt,yt,zt), // the look-at position
&D3DXVECTOR3 (0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
fov, // the horizontal field of view
asp, // aspect ratio
znear, // the near view-plane
zfar); // the far view-plane
D3DXMATRIX vysl=matView*matProjection;
eff->SetMatrix("worldViewProj",&vysl); //vertices are transformed ok in the shader
//render g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos,&lpos2,&vysl); //transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight",&lpos); //but there is already a mess in lpos at this time
//render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried transforming the light only by matView, without the projection, but the problem is still the same. If I transform the light in a shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view-space position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1..1 screen coordinate, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen,
float viewSpaceZ)
{
float2 screenSpaceRay = float2(positionScreen.x / tP._11,
positionScreen.y / tP._22);
float3 positionView;
positionView.z = viewSpaceZ;
positionView.xy = screenSpaceRay.xy * positionView.z;
return positionView;
}
The result of the transform D3DXVec3Transform(&lpos,&lpos2,&vysl); is a vector in homogeneous space (i.e. a projected vector, but not divided by w). But in your shader you use its xy components without respecting this (w). This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides xyz by the interpolated w.
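For example, a minimal sketch of the first suggestion (dividing by w yourself), applied to the code from the question; this is my own illustration, not a drop-in fix for every setup:
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos,&lpos2,&vysl);     // lpos is now in homogeneous clip space
if (lpos.w!=0.0f)                          // perspective divide -> NDC xy in (-1,1)
{
    lpos.x/=lpos.w;
    lpos.y/=lpos.w;
    lpos.z/=lpos.w;
}
eff->SetVector("poslight",&lpos);          // xy now matches the -1..1 range compared in the shader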