Algorithm for connecting points in a graph with curved lines

I need to develop an algorithm that connects points in a non-linear way, that is, with smooth curves, as in the image below:
The problem is that I cannot find the best solution, whether using Bézier curves, polynomial interpolation, curve fitting, or something else.
In short, I need a formula that interpolates the points according to the figure above, generating N intermediate points between one coordinate and the next.
In the image above, the first coordinate (c1) is (x = 1, y = 220) and the second (c2) is (x = 2, y = 40).
So if I want to create, for example, 4 intermediate coordinates between c1 and c2, I should get an array of (x, y) pairs with 4 elements, something like this:
    
[1.2, 180], [1.4, 140], [1.6, 120], [1.8, 80]
Would anyone have any ideas?

I think any piecewise cubic curve interpolation should do it. Here is a small C++ example:
//---------------------------------------------------------------------------
const int n=7; // points
const int n2=n+n;
float pnt[n2]= // points x,y ...
{
1.0, 220.0,
2.0, 40.0,
3.0,-130.0,
4.0,-170.0,
5.0,- 40.0,
6.0, 90.0,
7.0, 110.0,
};
//---------------------------------------------------------------------------
void getpnt(float *p,float t) // t = <0,n-1>
{
int i,ii;
float *p0,*p1,*p2,*p3,a0,a1,a2,a3,d1,d2,tt,ttt;
// handle t out of range
if (t<= 0.0f){ p[0]=pnt[0]; p[1]=pnt[1]; return; }
if (t>=float(n-1)){ p[0]=pnt[n2-2]; p[1]=pnt[n2-1]; return; }
// select patch
i=floor(t); // start point of patch
t-=i; // parameter <0,1>
i<<=1; tt=t*t; ttt=tt*t;
// control points
ii=i-2; if (ii<0) ii=0; if (ii>=n2) ii=n2-2; p0=pnt+ii;
ii=i ; if (ii<0) ii=0; if (ii>=n2) ii=n2-2; p1=pnt+ii;
ii=i+2; if (ii<0) ii=0; if (ii>=n2) ii=n2-2; p2=pnt+ii;
ii=i+4; if (ii<0) ii=0; if (ii>=n2) ii=n2-2; p3=pnt+ii;
// loop all dimensions
for (i=0;i<2;i++)
{
// compute polynomial coefficients
d1=0.5*(p2[i]-p0[i]);
d2=0.5*(p3[i]-p1[i]);
a0=p1[i];
a1=d1;
a2=(3.0*(p2[i]-p1[i]))-(2.0*d1)-d2;
a3=d1+d2+(2.0*(-p2[i]+p1[i]));
// compute point coordinate
p[i]=a0+(a1*t)+(a2*tt)+(a3*ttt);
}
}
//---------------------------------------------------------------------------
void gl_draw()
{
glClearColor(1.0,1.0,1.0,1.0);
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_DEPTH_TEST);
glDisable(GL_TEXTURE_2D);
// set 2D view
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glScalef(1.0/5.0,1.0/500.0,1.0);
glTranslatef(-4.0,0.0,0.0);
// render lines
glColor3f(1.0,0.0,0.0);
glBegin(GL_LINE_STRIP);
float p[2],t;
for (t=0.0;t<=float(n-1);t+=0.1f)
{
getpnt(p,t);
glVertex2fv(p);
}
glEnd();
// render points
glPointSize(4.0);
glColor3f(0.0,0.0,1.0);
glBegin(GL_POINTS);
for (int i=0;i<n2;i+=2) glVertex2fv(pnt+i);
glEnd();
glPointSize(1.0);
glFinish();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
Here is a preview:
As you can see, it is simple: you just need the n control points pnt (I extracted them from your graph) and then interpolate. The getpnt function computes any point on the curve addressed by the parameter t = <0,n-1>. Internally it just selects which cubic patch to use and evaluates it as a single cubic curve (a Catmull-Rom-style interpolation: each patch passes through p1 and p2, with tangents d1, d2 taken as half the difference of the neighboring control points). In gl_draw you can see how to use it to obtain the points in between.
As your control points are uniformly distributed on the x axis:
x = <1,7>
t = <0,6>
I can write:
x = t+1
t = x-1
so you can compute any point for any x too...
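For example, the 4 intermediate coordinates from the question can be obtained like this (a small sketch reusing the getpnt above):
float p[2];
for (int k=1;k<=4;k++)
{
float x=1.0f+0.2f*float(k); // x = 1.2, 1.4, 1.6, 1.8
getpnt(p,x-1.0f);           // t = x-1 thanks to the uniform x spacing
// p[0] comes out as x, p[1] is the interpolated y
}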
The shape does not match your graph perfectly because the selected control points are not the correct ones. Any local minimum/maximum should be a control point, and sometimes it is safer to use inflection points too. The starting and ending shape of the curve suggests hidden starting and ending control points which are not shown on the graph. You can use any number of points you need, but beware: if you break the uniform distribution on x, you lose the ability to compute t from x directly!
As we do not know how the graph was created we can only guess ...

Related

How should OpenGL matrices be created? Is there something wrong with my multiplication or order of translation/rotation?

So basically I'm trying to make a simple OpenGL 3D graphics engine using my own linear algebra to build projection and transformation matrices. OpenGL has a function called glUniformMatrix4fv() which I use to pass the matrices as a float[].
Here is my "Matrix" class to construct a float[] for that openGl method:
private float[] m= {
1,0,0,0,
0,1,0,0,
0,0,1,0,
0,0,0,1
};
public Matrix() {}
public Matrix(float[] m) {
this.m=m;
}
//gets value at x,y coords of matrix
public float getValue(int x,int y) {
return m[y*4 + x];
}
//sets value of x,y coord to n
public void setValue(int x,int y,float n) {
m[y*4 + x]=n;
}
To construct a transformation for object translation and rotation, I first create a translation Matrix (s is for scale). Also, a vertex is basically just a size-4 float array that holds my vector/vertex info:
public Matrix createTranslationMatrix(Vertex pos,float s) {
Matrix m=new Matrix();
m.setValue(0,0,s);
m.setValue(1,1,s);
m.setValue(2,2,s);
m.setValue(3,0,pos.getValue(0));
m.setValue(3,1,pos.getValue(1));
m.setValue(3,2,pos.getValue(2));
return m;
}
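(Side note: with getValue(x,y) == m[y*4 + x], the translation written by setValue(3,0,...), setValue(3,1,...), setValue(3,2,...) lands in elements 3, 7, 11, i.e. a row-major layout with the translation in the last column of each row. glUniformMatrix4fv reads the array as column-major by default, so a row-major array has to be uploaded with the transpose flag set. In the C signature, with hypothetical names location and m:)
glUniformMatrix4fv(location, 1, GL_TRUE, m); // GL_TRUE = treat m as row-major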
Then I create a rotation matrix which is a combo of x, y, and z rotation of object around origin
public Matrix createRotationMatrix(Vertex rot) {
//if rotation is screwed up maybe mess around with order of these :)
Matrix rotX=createRotationMatrixX(rot.getValue(0));
Matrix rotY=createRotationMatrixY(rot.getValue(1));
Matrix rotZ=createRotationMatrixZ(rot.getValue(2));
Matrix returnValue=multiply(rotX,rotY);
returnValue=multiply(returnValue,rotZ);
return returnValue;
}
private Matrix createRotationMatrixX(float num) {
float n=num;
n=(float)Math.toRadians(n);
Matrix rot=new Matrix();
rot.setValue(1, 1, (float)Math.cos(n));
rot.setValue(1, 2, (float)Math.sin(n));
rot.setValue(2, 1, (float)-Math.sin(n));
rot.setValue(2, 2, (float)Math.cos(n));
return rot;
}
//rotation mat Y
private Matrix createRotationMatrixY(float num) {
float n=num;
n=(float)Math.toRadians(n);
Matrix rot=new Matrix();
rot.setValue(0, 0, (float)Math.cos(n));
rot.setValue(0, 2, (float)-Math.sin(n));
rot.setValue(2, 0, (float)Math.sin(n));
rot.setValue(2, 2, (float)Math.cos(n));
return rot;
}
//rotation mat Z
private Matrix createRotationMatrixZ(float num) {
float n=num;
n=(float)Math.toRadians(n);
Matrix rot=new Matrix();
rot.setValue(0, 0, (float)Math.cos(n));
rot.setValue(0, 1, (float)Math.sin(n));
rot.setValue(1, 0, (float)-Math.sin(n));
rot.setValue(1, 1, (float)Math.cos(n));
return rot;
}
I combine the translation and rotation to create my objectTransform float[] with multiply(rotationMat, translationMat):
public Matrix multiply(Matrix a, Matrix b){
Matrix m=new Matrix();
for(int y=0;y<4;y++) {
for(int x=0;x<4;x++) {
//if this doesn't work maybe try switching x and y around?
m.setValue(x,y,a.getValue(x,0)*b.getValue(0,y) + a.getValue(x,1)*b.getValue(1,y) + a.getValue(x,2)*b.getValue(2,y) + a.getValue(x,3)*b.getValue(3, y));
}
}
return m;
}
And my worldTransform is defined by combining a transformation with negative values for position and rotation (so it moves vertices and rotates opposite from the camera position and rotation), then combining rotation and translation like so: multiply(translationMat, rotationMat). So it theoretically moves opposite the camera position, THEN rotates opposite the camera rotation.
Then I create my projection using this function:
public Matrix createProjectionMatrix(float fov, float aspectRatio, float near, float far) {
float fovRad=1/(float)Math.tan(Math.toRadians(fov*.5));
Matrix projection=new Matrix(base);
projection.setValue(0,0,aspectRatio*fovRad);
projection.setValue(1,1,fovRad);
projection.setValue(2,2,far/(far-near));
projection.setValue(2,3,(-far*near)/(far-near));
projection.setValue(3,2,1);
projection.setValue(3,3,0);
return projection;
}
I combine my projection, worldTransform, and objectTransform with my vec3 position (a vector with the mesh coordinates I import). These are all multiplied together in my OpenGL shader like so:
gl_Position=projection * worldTransform * objectTransform * vec4(position,1);
Right now, if I back my camera up by 3 and rotate it around with hopes of finding the "triangle" mesh I made
float[] verts= {
//top left tri
-.5f,-.5f,0,
0,.5f,0,
.5f,-.5f,0,
};
Then I get a really small pixel moving really fast across my screen from top to bottom. I also have the object spinning, but that (if my code worked properly) shouldn't be an issue; if I don't have the object spinning, then I don't see any pixel at all. So my thinking is that the object transformation is being applied the way the world transformation should be (moving the vertex by "translation", then rotating it), or the triangle is really small and not scaled properly (do I have to offset it somehow?). But then it shouldn't be flying off the screen repeatedly as if it were rotating around the camera. I've tried switching the multiplication of translation and rotation for both types of transforms, but either the triangle doesn't appear at all or I just see a teensy tiny little pixel, almost orbiting the camera at high speed (when I should see the triangle and camera rotating separately).
I know it's a lot to ask, but what am I doing wrong? Do I need to transpose something? Is my projection matrix out of whack? I feel like everything should be right :(

Different Processing rendering between native and online sketch

I get different results when running this sample with Processing directly, and with Processing.js in a browser. Why?
I was happy about my result and wanted to share it on open Processing, but the rendering was totally different and I don't see why. Below is a minimal working example.
/* Program that rotates a triangle and draws an ellipse when the third vertex is on top of the screen */
float y = 3*height/2;
float x = 3*width/2;
float previous_1 = 0.0;
float previous_2 = 0.0;
float current;
float angle = 0.0;
void setup() {
size(1100, 500);
}
void draw() {
fill(0, 30);
// rotate triangle
angle = angle - 0.02;
translate(x, y);
rotate(angle);
// display triangle
triangle(-50, -50, -30, 30, -90, -60);
// detect whether third vertex is on top by comparing its 3 successive positions
current = screenY(-90, -60); // current position of the third vertex
if (previous_1 < previous_2 && previous_1 < current) {
// draw ellipse at the extrema position
fill(128, 9, 9);
ellipse(-90, -60, 7, 10);
}
// update the 2 previous positions of the third vertex
previous_2 = previous_1;
previous_1 = current;
}
In Processing, the ellipse is drawn when a triangle vertex is on top, which is my goal.
In the online sketch, the ellipse is drawn the whole time :/
In order to get the same results online as you get by running Processing locally, you will need to specify the rendering mode as 3D (P3D) when calling size.
For example:
void setup() {
size(1100, 500, P3D);
}
You will also need to specify the z coordinate in the call to screenY()
current = screenY(-90, -60, 0);
With these two changes you should get the same results online as you get running locally.
Online:
Triangle Ellipse Example
Local:
The problem lies in the screenY function. Print out the current variable in your Processing sketch locally and online. In OpenProcessing, the variable current quickly grows above multiple thousands, while it stays between 0 and ~260 locally.
It seems like OpenProcessing has a bug inside this function.
To fix this, however, I would recommend registering differently when you have drawn a triangle at the top of the circle, for example by using your angle variable:
// Calculate angle and modulo it by 2 * PI
angle = (angle - 0.02) % (2 * PI);
// If the sketch has made a full revolution
if (previous_1 < previous_2 && previous_1 < angle) {
// draw ellipse at the extrema position
fill(128, 9, 9);
ellipse(-90, -60, 7, 10);
}
// update the 2 previous angles of the third vertex
previous_2 = previous_1;
previous_1 = angle;
However, because of how you draw the triangles, the ellipse is at an angle of about PI / 3. To fix this, one option would be to rotate the screen by angle + PI / 3 like so:
rotate(angle + PI / 3);
You might have to experiment with the angle offset a bit more to draw the ellipse perfectly at the top of the circle.
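(A side note on the modulo, mine rather than the answerer's: Processing's % keeps the sign of the dividend, and angle here decreases, so the wrapped value stays negative. If a range of [0, 2*PI) is wanted, an extra wrap is needed; a small C-style helper, assuming <math.h>:)
// fmod keeps the sign of its first argument, just like Processing's %,
// so a decreasing angle needs an extra wrap to land in [0, 2*pi)
double normalizeAngle(double a)
{
a = fmod(a, 2.0*M_PI);
if (a < 0.0) a += 2.0*M_PI;
return a;
}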

Having a point from 3 static cameras' perspectives, how to restore its position in 3D space?

We have the same rectangle position relative to 3 statically installed web cameras of the same type that are not on the same line, say on a flat basketball field. Thus we have them all inside one 3D space, with (x, y, z) positions and (ax, ay, az) orientations set for all of them.
We know the ball color and we found its position in all 3 images im1, im2, im3. Now, having its position in the 2D frames (p1x, p1y); (p2x, p2y); (p3x, p3y), and the cameras' positions/orientations, how do we get the ball position in 3D space?
You need to unproject the 2D screen coordinates into rays in 3D space.
Then you need to solve a system of equations to find the real 3D point from the 3 rays you got in the first step.
You can find the source code for gluUnProject here. I also provide my code for it:
public Vector4 Unproject(float x, float y, Matrix4 View)
{
var ndcX = x / Viewport.Width * 2 - 1.0f;
var ndcY = y / Viewport.Height * 2 - 1.0f;
var invVP = Matrix4.Invert(View * ProjectionMatrix);
// We don't know the z-coordinate of the point, so we choose 0.0f for it.
// We are going to find it out later.
var screenPos = new Vector4(ndcX, -ndcY, 0.0f, 1.0f);
var res = Vector4.Transform(screenPos, invVP);
return res / res.W;
}
Ray ComputeRay(Camera camera, Vector2 p)
{
var worldPos = Unproject(p.X, p.Y, camera.View);
var dir = new Vector3(worldPos) - camera.Eye;
return new Ray(camera.Eye, Vector3.Normalize(dir));
}
Now you need to find the intersection of three such rays. Theoretically it would be enough to use only two rays; it depends on the positions of your cameras.
If we had infinite-precision floating-point arithmetic and noise-free input, that would be trivial. But in reality you might need a simple numerical scheme to find the point with appropriate precision.
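For illustration (a sketch of one such scheme, mine rather than the answerer's): the least-squares point, i.e. the p minimizing the summed squared distance to all rays, satisfies the 3x3 linear system sum_k (I - d_k d_k^T) p = sum_k (I - d_k d_k^T) o_k, where o_k, d_k are the ray origins and unit directions. A self-contained C++ version:
#include <math.h>
struct Ray3 { double o[3]; double d[3]; }; // origin, unit direction
double det3(const double m[3][3])
{
return m[0][0]*(m[1][1]*m[2][2]-m[1][2]*m[2][1])
     - m[0][1]*(m[1][0]*m[2][2]-m[1][2]*m[2][0])
     + m[0][2]*(m[1][0]*m[2][1]-m[1][1]*m[2][0]);
}
// least-squares point p minimizing summed squared distance to n rays;
// returns false if the rays are (nearly) parallel and the system is singular
bool closestPointToRays(const Ray3 *r, int n, double p[3])
{
double A[3][3]={{0.0}}, b[3]={0.0};
for (int k=0;k<n;k++)
 for (int i=0;i<3;i++)
  for (int j=0;j<3;j++)
  {
  double m=((i==j)?1.0:0.0)-r[k].d[i]*r[k].d[j]; // I - d d^T
  A[i][j]+=m;
  b[i]+=m*r[k].o[j];
  }
double det=det3(A);
if (fabs(det)<1e-12) return false;
for (int c=0;c<3;c++) // Cramer's rule, one unknown per column
 {
 double M[3][3];
 for (int i=0;i<3;i++)
  for (int j=0;j<3;j++)
   M[i][j]=(j==c)?b[i]:A[i][j];
 p[c]=det3(M)/det;
 }
return true;
}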

Basic space carving algorithm

I have the following problem, as shown in the figure. I have a point cloud and a mesh generated by a tetrahedral algorithm. How would I carve the mesh using that algorithm? Are the landmarks the point cloud?
Pseudo code of the algorithm:
for every 3D feature point
convert it to 2D projected coordinates
for every 2D feature point
cast a ray toward the polygons of the mesh
get intersection point
if zintersection < z of 3D feature point
for ( every triangle vertices )
cull that triangle.
Here is a follow-up implementation of the algorithm mentioned by the guru Spektre :)
Update code for the algorithm:
int i;
for (i = 0; i < out.numberofpoints; i++)
{
Ogre::Vector3 ray_pos = pos; // camera position
Ogre::Vector3 ray_dir = (Ogre::Vector3 (out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]) - pos).normalisedCopy(); // vertex - camera pos
Ogre::Ray ray;
ray.setOrigin(Ogre::Vector3( ray_pos.x, ray_pos.y, ray_pos.z));
ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
Ogre::Vector3 result;
unsigned int u1;
unsigned int u2;
unsigned int u3;
bool rayCastResult = RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);
if ( rayCastResult )
{
Ogre::Vector3 targetVertex(out.pointlist[(i*3)], out.pointlist[(3*i)+1], out.pointlist[(3*i)+2]);
float distanceTargetFocus = targetVertex.squaredDistance(pos);
float distanceIntersectionFocus = result.squaredDistance(pos);
if(abs(distanceTargetFocus) >= abs(distanceIntersectionFocus))
{
if ( u1 != -1 && u2 != -1 && u3 != -1)
{
std::cout << "Remove index "<< "u1 ==> " <<u1 << "u2 ==>"<<u2<<"u3 ==> "<<u3<< std::endl;
updatedIndices.erase(updatedIndices.begin()+ u1);
updatedIndices.erase(updatedIndices.begin()+ u2);
updatedIndices.erase(updatedIndices.begin()+ u3);
}
}
}
}
if ( updatedIndices.size() <= out.numberoftrifaces)
{
{
std::cout << "current face list===> "<< out.numberoftrifaces << std::endl;
std::cout << "deleted face list===> "<< updatedIndices.size() << std::endl;
manual->begin("Pointcloud", Ogre::RenderOperation::OT_TRIANGLE_LIST);
for (int n = 0; n < out.numberofpoints; n++)
{
Ogre::Vector3 vertexTransformed = Ogre::Vector3( out.pointlist[3*n+0], out.pointlist[3*n+1], out.pointlist[3*n+2]) - mReferencePoint;
vertexTransformed *=1000.0 ;
vertexTransformed = mDeltaYaw * vertexTransformed;
manual->position(vertexTransformed);
}
for (int n = 0; n + 2 < (int)updatedIndices.size(); n += 3) // 3 indices per triangle
{
int n0 = updatedIndices[n+0];
int n1 = updatedIndices[n+1];
int n2 = updatedIndices[n+2];
if ( n0 < 0 || n1 <0 || n2 <0 )
{
std::cout<<"negative indices"<<std::endl;
break;
}
manual->triangle(n0, n1, n2);
}
manual->end();
}
Follow-up on the algorithm:
I now have two versions: one is the triangulated one and the other is the carved version.
It's not a surface mesh.
Here are the two files
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_non_triangulated.obj
http://www.mediafire.com/file/cczw49ja257mnzr/ahmed_triangulated.obj
I see it like this:
So you got an image from a camera with known matrix, FOV and focal length.
From that you know exactly where the focal point is and where the image is projected onto the camera chip (Z_near plane). So any vertex, its corresponding pixel and the focal point lie on the same line.
So for each view cast a ray from the focal point to each visible vertex of the point cloud, and test whether any face of the mesh is hit before the ray hits the face containing the target vertex. If yes, remove it, as it would block the visibility.
A landmark in this context is just a feature point corresponding to a vertex from the point cloud. It can be anything detectable (change of intensity, color, pattern, whatever); usually SIFT/SURF is used for this. You should have them located already, as that is the input for point cloud generation. If not, you can peek at the pixel corresponding to each vertex and test for background color.
Not sure how you want to do this without the input images. For that you need to decide which vertex is visible from which side/view. Maybe it is doable from nearby vertices somehow (like using vertex density or correspondence to a planar face...), or the algorithm could be changed somehow to find unused vertices inside the mesh.
To cast a ray do this:
ray_pos=tm_eye*vec4(imgx/aspect,imgy,0.0,1.0);
ray_dir=ray_pos-tm_eye*vec4(0.0,0.0,-focal_length,1.0);
where tm_eye is the camera's direct transform matrix, and imgx,imgy is the 2D pixel position in the image normalized to <-1,+1>, where (0,0) is the middle of the image. The focal_length determines the FOV of the camera, and the aspect ratio is the ratio of the image resolution image_ys/image_xs.
Ray triangle intersection equation can be found here:
Reflection and refraction impossible without recursive ray tracing?
If I extract it:
bool ray_triangle_intersect(vec3 pos,vec3 dir, // ray origin and direction
                            vec3 v0,vec3 v1,vec3 v2, // input triangle vertexes
                            float tt, // tt is distance to target vertex
                            float &t) // output distance to intersection
{
vec3 e1,e2,p,q,r;
float u,v,det,idet;
// compute ray triangle intersection
e1=v1-v0;
e2=v2-v0;
p=cross(dir,e2);
det=dot(e1,p);
// ray parallel to triangle plane?
if (fabs(det)<1e-8) return false; // no intersection
idet=1.0/det;
r=pos-v0;
u=dot(r,p)*idet;
if ((u<0.0)||(u>1.0)) return false; // no intersection
q=cross(r,e1);
v=dot(dir,q)*idet;
if ((v<0.0)||(u+v>1.0)) return false; // no intersection
t=dot(e2,q)*idet;
return (t>1e-8)&&(t<=tt); // intersection only if before the target vertex
}
Follow-ups:
To move between normalized image (imgx,imgy) and raw image (rawx,rawy) coordinates for image of size (imgxs,imgys) where (0,0) is top left corner and (imgxs-1,imgys-1) is bottom right corner you need:
imgx = (2.0*rawx / (imgxs-1)) - 1.0
imgy = 1.0 - (2.0*rawy / (imgys-1))
rawx = (imgx + 1.0)*(imgxs-1)/2.0
rawy = (1.0 - imgy)*(imgys-1)/2.0
[progress update 1]
I finally got to the point where I can compile sample test input data for this to even get started (as you are unable to share valid data at all):
I created a small app with a hard-coded table mesh (gray) and point cloud (aqua) and simple camera control, where I can save any number of views (screenshot + camera direct matrix). When loaded back, it aligns with the mesh itself (the yellow ray goes through the aqua dot in the image and through the table mesh too). The blue lines are cast from the camera focal point to its corners. This will emulate the input you got. The second part of the app will use only these images and matrices together with the point cloud (no mesh surface anymore), tetrahedronize it (already finished), then cast a ray through each landmark in each view (aqua dot) and remove all tetrahedrons hit before the target vertex in the point cloud is hit (this part is not even started yet; maybe on the weekend)... And lastly store only the surface triangles (easy: just use all triangles which are used only once; also already finished except the save part, but writing a Wavefront OBJ from it is easy...).
[Progress update 2]
I added landmark detection and matching with the point cloud.
As you can see, only valid rays are cast (those that are visible in the image), so some points of the point cloud do not cast rays (singular aqua dots). So now just the ray/triangle intersection and tetrahedron removal from the list is what is missing...

Mathematically producing sphere-shaped hexagonal grid

I am trying to create a shape similar to this: hexagons with 12 pentagons, at an arbitrary size.
(Image Source)
The only thing is, I have absolutely no idea what kind of code would be needed to generate it!
The goal is to be able to take a point in 3D space and convert it to a position coordinate on the grid, or vice versa and take a grid position and get the relevant vertices for drawing the mesh.
I don't even know how one would store the grid positions for this. Does each "triangle section" between 3 pentagons get its own set of 2D coordinates?
I will most likely be using C# for this, but I am more interested in which algorithms to use for this and an explanation of how they would work, rather than someone just giving me a piece of code.
The shape you have is one of the so-called "Goldberg polyhedra"; it is also a geodesic polyhedron.
The (rather elegant) algorithm to generate this (and many many more) can be succinctly encoded in something called a Conway Polyhedron Notation.
The construction is easy to follow step by step, you can click the images below to get a live preview.
The polyhedron you are looking for can be generated from an icosahedron -- Initialise a mesh with an icosahedron.
We apply a "Truncate" operation (Conway notation t) to the mesh (the sperical mapping of this one is a football).
We apply the "Dual" operator (Conway notation d).
We apply a "Truncate" operation again. At this point the recipe is tdtI (read from right!). You can already see where this is going.
Apply steps 3 & 4 repeatedly until you are satisfied.
For example below is the mesh for dtdtdtdtI.
This is quite easy to implement. I would suggest using a data structure that makes it easy to traverse the neighbourhood given a vertex, edge etc., such as a winged-edge or half-edge data structure for your mesh (a minimal half-edge record is sketched below). You only need to implement the truncate and dual operators for the shape you are looking for.
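(For reference, a minimal half-edge record might look like this; an illustrative C++ sketch, not tied to any particular library:)
// one record per directed edge; -1 marks a boundary
struct HalfEdge
{
int twin; // opposite half-edge (same edge, other face)
int next; // next half-edge around the same face
int vert; // origin vertex index
int face; // index of the face this half-edge borders
};
// each vertex and each face additionally stores one incident
// half-edge index as a starting point for neighbourhood traversal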
First some analysis of the image in the question: the spherical triangle spanned by neighbouring pentagon centers seems to be equilateral. When five equilateral triangles meet in one corner and cover the whole sphere, this can only be the configuration induced by an icosahedron. So there are 12 pentagons and 20 patches of a triangular cutout of a hexagonal mesh mapped to the sphere.
So this is a way to construct such a hexagonal grid on the sphere:
Create a triangular cutout of the hexagonal grid: a fixed triangle (I chose (-0.5,0), (0.5,0), (0,sqrt(3)/2)) gets a hexagonal grid of the desired resolution n superimposed on it, s.t. the triangle corners coincide with hexagon centers; see the examples for n = 0, 1, 2, 20:
Compute the corners of the icosahedron and define its 20 triangular faces (see code below). The corners of the icosahedron define the centers of the pentagons, and the faces of the icosahedron define the patches of the mapped hexagonal grids. (The icosahedron gives the finest regular division of the sphere surface into triangles, i.e. a division into congruent equilateral triangles. Other such divisions can be derived from a tetrahedron or an octahedron; then at the corners of the triangles one will have triangles or squares, resp. Furthermore, the fewer and bigger triangles would make the inevitable distortion of any mapping of a planar mesh onto a curved surface more visible. So choosing the icosahedron as a basis for the triangular patches helps minimize the distortion of the hexagons.)
Map the triangular cutout of the hexagonal grid to the spherical triangles corresponding to the icosahedron faces: a double slerp based on barycentric coordinates does the trick. Below is an illustration of the mapping of a triangular cutout of a hexagonal grid with resolution n = 10 onto one spherical triangle (defined by one face of an icosahedron), and an illustration of mapping the grid onto all these spherical triangles covering the whole sphere (different colors for different mappings):
Here is Python code to generate the corners (coordinates) and triangles (point indices) of an icosahedron:
from math import sin,cos,acos,sqrt,pi
s,c = 2/sqrt(5),1/sqrt(5)
topPoints = [(0,0,1)] + [(s*cos(i*2*pi/5.), s*sin(i*2*pi/5.), c) for i in range(5)]
bottomPoints = [(-x,y,-z) for (x,y,z) in topPoints]
icoPoints = topPoints + bottomPoints
icoTriangs = [(0,i+1,(i+1)%5+1) for i in range(5)] +\
[(6,i+7,(i+1)%5+7) for i in range(5)] +\
[(i+1,(i+1)%5+1,(7-i)%5+7) for i in range(5)] +\
[(i+1,(7-i)%5+7,(8-i)%5+7) for i in range(5)]
And here is the Python code to map (points of) the fixed triangle to a spherical triangle using a double slerp:
# barycentric coords for triangle (-0.5,0),(0.5,0),(0,sqrt(3)/2)
def barycentricCoords(p):
    x,y = p
    # l3*sqrt(3)/2 = y
    l3 = y*2./sqrt(3.)
    # l1 + l2 + l3 = 1
    # 0.5*(l2 - l1) = x
    l2 = x + 0.5*(1 - l3)
    l1 = 1 - l2 - l3
    return l1,l2,l3

from math import atan2

def scalProd(p1,p2):
    return sum([p1[i]*p2[i] for i in range(len(p1))])

# uniform interpolation of arc defined by p0, p1 (around origin)
# t=0 -> p0, t=1 -> p1
def slerp(p0,p1,t):
    assert abs(scalProd(p0,p0) - scalProd(p1,p1)) < 1e-7
    ang0Cos = scalProd(p0,p1)/scalProd(p0,p0)
    ang0Sin = sqrt(1 - ang0Cos*ang0Cos)
    ang0 = atan2(ang0Sin,ang0Cos)
    l0 = sin((1-t)*ang0)
    l1 = sin(t    *ang0)
    return tuple([(l0*p0[i] + l1*p1[i])/ang0Sin for i in range(len(p0))])

# map 2D point p to spherical triangle s1,s2,s3 (3D vectors of equal length)
def mapGridpoint2Sphere(p,s1,s2,s3):
    l1,l2,l3 = barycentricCoords(p)
    if abs(l3-1) < 1e-10: return s3
    l2s = l2/(l1+l2)
    p12 = slerp(s1,s2,l2s)
    return slerp(p12,s3,l3)
[Complete re-edit 18.10.2017]
The geometry storage is up to you. Either you store it in some kind of mesh or you generate it on the fly. I prefer to store it, in the form of 2 tables: one holding all the vertices (no duplicates) and the other holding the 6 point indices of each hex you got, plus some additional info like spherical position to ease the post-processing.
Now how to generate this:
create hex triangle
The size should be the radius of your sphere. Do not include the corner hexes, and also skip the last line of the triangle (on both radial and axial, so there is a 1-hex gap between neighboring triangles on the sphere), as those would overlap when joining our triangle segments.
convert 60deg hexagon triangle to 72deg pie
Simply convert to polar coordinates (radius, angle) and center the triangle around 0 deg. Then multiply the radius by cos(angle)/cos(30 deg), which will convert the triangle into a pie slice. And then rescale the angle by the ratio 72/60. That will make our triangles joinable...
copy&rotate triangle to fill 5 segments of pentagon
Easy: just rotate the points of the first triangle and store them as a new one.
compute z
Based on this Hexagonal tiling of hemi-sphere, you can convert distance in the 2D map into arc length to limit the distortions as much as possible.
However, when I tried it (example below) the hexagons are a bit distorted, so the depth and scaling need some tweaking, or post-processing later.
copy the half sphere to form a sphere
Simply copy the points/hexes and negate the z axis (or rotate by 180 deg if you want to preserve winding).
add equator and all of the missing pentagons and hexes
You should use the coordinates of the neighboring hexes so no more distortion or overlaps are added to the grid. Here is a preview:
Blue is starting triangle. Darker blue are its copies. Red are pole pentagons. Dark green is the equator, Lighter green are the join lines between triangles. In Yellowish are the missing equator hexagons near Dark Orange pentagons.
Here is a simple C++ OpenGL example (made from the linked answer in step 4):
//$$---- Form CPP ----
//---------------------------------------------------------------------------
#include <vcl.h>
#include <math.h>
#pragma hdrstop
#include "win_main.h"
#include "gl/OpenGL3D_double.cpp"
#include "PolyLine.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TMain *Main;
OpenGLscreen scr;
bool _redraw=true;
double animx= 0.0,danimx=0.0;
double animy= 0.0,danimy=0.0;
//---------------------------------------------------------------------------
PointTab pnt; // (x,y,z)
struct _hexagon
{
int ix[6]; // index of 6 points, last point duplicate for pentagon
int a,b; // spherical coordinate
DWORD col; // color
// inline
_hexagon() {}
_hexagon(_hexagon& a) { *this=a; }
~_hexagon() {}
_hexagon* operator = (const _hexagon *a) { *this=*a; return this; }
//_hexagon* operator = (const _hexagon &a) { ...copy... return this; }
};
List<_hexagon> hex;
//---------------------------------------------------------------------------
// https://stackoverflow.com/a/46787885/2521214
//---------------------------------------------------------------------------
void hex_sphere(int N,double R)
{
const double c=cos(60.0*deg);
const double s=sin(60.0*deg);
const double sy= R/(N+N-2);
const double sz=sy/s;
const double sx=sz*c;
const double sz2=0.5*sz;
const int na=5*(N-2);
const int nb= N;
const int b0= N;
double *q,p[3],ang,len,l,l0,ll;
int i,j,n,a,b,ix;
_hexagon h,*ph;
hex.allocate(na*nb);
hex.num=0;
pnt.reset3D(N*N);
b=0; a=0; ix=0;
// generate triangle hex grid
h.col=0x00804000;
for (b=1;b<N-1;b++) // skip first line b=0
for (a=1;a<b;a++) // skip first and last line
{
p[0]=double(a )*(sx+sz);
p[1]=double(b-(a>>1))*(sy*2.0);
p[2]=0.0;
if (int(a&1)!=0) p[1]-=sy;
ix=pnt.add(p[0]+sz2+sx,p[1] ,p[2]); h.ix[0]=ix; // 2 1
ix=pnt.add(p[0]+sz2 ,p[1]+sy,p[2]); h.ix[1]=ix; // 3 0
ix=pnt.add(p[0]-sz2 ,p[1]+sy,p[2]); h.ix[2]=ix; // 4 5
ix=pnt.add(p[0]-sz2-sx,p[1] ,p[2]); h.ix[3]=ix;
ix=pnt.add(p[0]-sz2 ,p[1]-sy,p[2]); h.ix[4]=ix;
ix=pnt.add(p[0]+sz2 ,p[1]-sy,p[2]); h.ix[5]=ix;
h.a=a;
h.b=N-1-b;
hex.add(h);
} n=hex.num; // remember number of hexs for the first triangle
// distort points to match area
for (ix=0;ix<pnt.nn;ix+=3)
{
// point pointer
q=pnt.pnt.dat+ix;
// convert to polar coordinates
ang=atan2(q[1],q[0]);
len=vector_len(q);
// match area of pentagon (72deg) triangle as we got hexagon (60deg) triangle
ang-=60.0*deg; // rotate so center of generated triangle is angle 0deg
while (ang>+60.0*deg) ang-=pi2;
while (ang<-60.0*deg) ang+=pi2;
len*=cos(ang)/cos(30.0*deg); // scale radius so triangle converts to pie
ang*=72.0/60.0; // scale up angle so rotated triangles merge
// convert back to cartesian
q[0]=len*cos(ang);
q[1]=len*sin(ang);
}
// copy and rotate the triangle to cover pentagon
h.col=0x00404000;
for (ang=72.0*deg,a=1;a<5;a++,ang+=72.0*deg)
for (ph=hex.dat,i=0;i<n;i++,ph++)
{
for (j=0;j<6;j++)
{
vector_copy(p,pnt.pnt.dat+ph->ix[j]);
rotate2d(-ang,p[0],p[1]);
h.ix[j]=pnt.add(p[0],p[1],p[2]);
}
h.a=ph->a+(a*(N-2));
h.b=ph->b;
hex.add(h);
}
// compute z
for (q=pnt.pnt.dat,ix=0;ix<pnt.nn;ix+=pnt.dn,q+=pnt.dn)
{
q[2]=0.0;
ang=vector_len(q)*0.5*pi/R;
q[2]=R*cos(ang);
ll=fabs(R*sin(ang)/sqrt((q[0]*q[0])+(q[1]*q[1])));
q[0]*=ll;
q[1]*=ll;
}
// copy and mirror the other half-sphere
n=hex.num;
for (ph=hex.dat,i=0;i<n;i++,ph++)
{
for (j=0;j<6;j++)
{
vector_copy(p,pnt.pnt.dat+ph->ix[j]);
p[2]=-p[2];
h.ix[j]=pnt.add(p[0],p[1],p[2]);
}
h.a= ph->a;
h.b=-ph->b;
hex.add(h);
}
// create index search table
int i0,i1,j0,j1,a0,a1,ii[5];
int **ab=new int*[na];
for (a=0;a<na;a++)
{
ab[a]=new int[nb+nb+1];
for (b=-nb;b<=nb;b++) ab[a][b0+b]=-1;
}
n=hex.num;
for (ph=hex.dat,i=0;i<n;i++,ph++) ab[ph->a][b0+ph->b]=i;
// add join ring
h.col=0x00408000;
for (a=0;a<na;a++)
{
h.a=a;
h.b=0;
a0=a;
a1=a+1; if (a1>=na) a1-=na;
i0=ab[a0][b0+1];
i1=ab[a1][b0+1];
j0=ab[a0][b0-1];
j1=ab[a1][b0-1];
if ((i0>=0)&&(i1>=0))
if ((j0>=0)&&(j1>=0))
{
h.ix[0]=hex[i1].ix[1];
h.ix[1]=hex[i0].ix[0];
h.ix[2]=hex[i0].ix[1];
h.ix[3]=hex[j0].ix[1];
h.ix[4]=hex[j0].ix[0];
h.ix[5]=hex[j1].ix[1];
hex.add(h);
ab[h.a][b0+h.b]=hex.num-1;
}
}
// add 2x5 join lines
h.col=0x00008040;
for (a=0;a<na;a+=N-2)
for (b=1;b<N-3;b++)
{
// +b hemisphere
h.a= a;
h.b=+b;
a0=a-b; if (a0< 0) a0+=na; i0=ab[a0][b0+b+0];
a0--; if (a0< 0) a0+=na; i1=ab[a0][b0+b+1];
a1=a+1; if (a1>=na) a1-=na; j0=ab[a1][b0+b+0];
j1=ab[a1][b0+b+1];
if ((i0>=0)&&(i1>=0))
if ((j0>=0)&&(j1>=0))
{
h.ix[0]=hex[i0].ix[5];
h.ix[1]=hex[i0].ix[4];
h.ix[2]=hex[i1].ix[5];
h.ix[3]=hex[j1].ix[3];
h.ix[4]=hex[j0].ix[4];
h.ix[5]=hex[j0].ix[3];
hex.add(h);
}
// -b hemisphere
h.a= a;
h.b=-b;
a0=a-b; if (a0< 0) a0+=na; i0=ab[a0][b0-b+0];
a0--; if (a0< 0) a0+=na; i1=ab[a0][b0-b-1];
a1=a+1; if (a1>=na) a1-=na; j0=ab[a1][b0-b+0];
j1=ab[a1][b0-b-1];
if ((i0>=0)&&(i1>=0))
if ((j0>=0)&&(j1>=0))
{
h.ix[0]=hex[i0].ix[5];
h.ix[1]=hex[i0].ix[4];
h.ix[2]=hex[i1].ix[5];
h.ix[3]=hex[j1].ix[3];
h.ix[4]=hex[j0].ix[4];
h.ix[5]=hex[j0].ix[3];
hex.add(h);
}
}
// add pentagons at poles
_hexagon h0,h1;
h0.col=0x00000080;
h0.a=0; h0.b=N-1; h1=h0; h1.b=-h1.b;
p[2]=sqrt((R*R)-(sz*sz));
for (ang=0.0,a=0;a<5;a++,ang+=72.0*deg)
{
p[0]=2.0*sz*cos(ang);
p[1]=2.0*sz*sin(ang);
h0.ix[a]=pnt.add(p[0],p[1],+p[2]);
h1.ix[a]=pnt.add(p[0],p[1],-p[2]);
}
h0.ix[5]=h0.ix[4]; hex.add(h0);
h1.ix[5]=h1.ix[4]; hex.add(h1);
// add 5 missing hexagons at poles
h.col=0x00600060;
for (ph=&h0,b=N-3,h.b=N-2,i=0;i<2;i++,b=-b,ph=&h1,h.b=-h.b)
{
a = 1; if (a>=na) a-=na; ii[0]=ab[a][b0+b];
a+=N-2; if (a>=na) a-=na; ii[1]=ab[a][b0+b];
a+=N-2; if (a>=na) a-=na; ii[2]=ab[a][b0+b];
a+=N-2; if (a>=na) a-=na; ii[3]=ab[a][b0+b];
a+=N-2; if (a>=na) a-=na; ii[4]=ab[a][b0+b];
for (j=0;j<5;j++)
{
h.a=((4+j)%5)*(N-2)+1;
h.ix[0]=ph->ix[ (5-j)%5 ];
h.ix[1]=ph->ix[ (6-j)%5 ];
h.ix[2]=hex[ii[(j+4)%5]].ix[4];
h.ix[3]=hex[ii[(j+4)%5]].ix[5];
h.ix[4]=hex[ii[ j ]].ix[3];
h.ix[5]=hex[ii[ j ]].ix[4];
hex.add(h);
}
}
// add 2*5 pentagons and 2*5 missing hexagons at equator
h0.a=0; h0.b=N-1; h1=h0; h1.b=-h1.b;
for (ang=36.0*deg,a=0;a<na;a+=N-2,ang-=72.0*deg)
{
p[0]=R*cos(ang);
p[1]=R*sin(ang);
p[2]=sz;
i0=pnt.add(p[0],p[1],+p[2]);
i1=pnt.add(p[0],p[1],-p[2]);
a0=a-1;if (a0< 0) a0+=na;
a1=a+1;if (a1>=na) a1-=na;
ii[0]=ab[a0][b0-1]; ii[2]=ab[a1][b0-1];
ii[1]=ab[a0][b0+1]; ii[3]=ab[a1][b0+1];
// hexagons
h.col=0x00008080;
h.a=a; h.b=0;
h.ix[0]=hex[ii[0]].ix[0];
h.ix[1]=hex[ii[0]].ix[1];
h.ix[2]=hex[ii[1]].ix[1];
h.ix[3]=hex[ii[1]].ix[0];
h.ix[4]=i0;
h.ix[5]=i1;
hex.add(h);
h.a=a; h.b=0;
h.ix[0]=hex[ii[2]].ix[2];
h.ix[1]=hex[ii[2]].ix[1];
h.ix[2]=hex[ii[3]].ix[1];
h.ix[3]=hex[ii[3]].ix[2];
h.ix[4]=i0;
h.ix[5]=i1;
hex.add(h);
// pentagons
h.col=0x000040A0;
h.a=a; h.b=0;
h.ix[0]=hex[ii[0]].ix[0];
h.ix[1]=hex[ii[0]].ix[5];
h.ix[2]=hex[ii[2]].ix[3];
h.ix[3]=hex[ii[2]].ix[2];
h.ix[4]=i1;
h.ix[5]=i1;
hex.add(h);
h.a=a; h.b=0;
h.ix[0]=hex[ii[1]].ix[0];
h.ix[1]=hex[ii[1]].ix[5];
h.ix[2]=hex[ii[3]].ix[3];
h.ix[3]=hex[ii[3]].ix[2];
h.ix[4]=i0;
h.ix[5]=i0;
hex.add(h);
}
// release index search table
for (a=0;a<na;a++) delete[] ab[a];
delete[] ab;
}
//---------------------------------------------------------------------------
void hex_draw(GLuint style) // draw hex
{
int i,j;
_hexagon *h;
for (h=hex.dat,i=0;i<hex.num;i++,h++)
{
if (style==GL_POLYGON) glColor4ubv((BYTE*)&h->col);
glBegin(style);
for (j=0;j<6;j++) glVertex3dv(pnt.pnt.dat+h->ix[j]);
glEnd();
}
if (0)
if (style==GL_POLYGON)
{
scr.text_init_pixel(0.1,-0.2);
glColor3f(1.0,1.0,1.0);
for (h=hex.dat,i=0;i<hex.num;i++,h++)
if (abs(h->b)<2)
{
double p[3];
vector_ld(p,0.0,0.0,0.0);
for (j=0;j<6;j++)
vector_add(p,p,pnt.pnt.dat+h->ix[j]);
vector_mul(p,p,1.0/6.0);
scr.text(p[0],p[1],p[2],AnsiString().sprintf("%i,%i",h->a,h->b));
}
scr.text_exit_pixel();
}
}
//---------------------------------------------------------------------------
void TMain::draw()
{
scr.cls();
int x,y;
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-5.0);
glRotated(animx,1.0,0.0,0.0);
glRotated(animy,0.0,1.0,0.0);
hex_draw(GL_POLYGON);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0.0,0.0,-5.0+0.01);
glRotated(animx,1.0,0.0,0.0);
glRotated(animy,0.0,1.0,0.0);
glColor3f(1.0,1.0,1.0);
glLineWidth(2);
hex_draw(GL_LINE_LOOP);
glCirclexy(0.0,0.0,0.0,1.5);
glLineWidth(1);
scr.exe();
scr.rfs();
}
//---------------------------------------------------------------------------
__fastcall TMain::TMain(TComponent* Owner) : TForm(Owner)
{
scr.init(this);
hex_sphere(10,1.5);
_redraw=true;
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormDestroy(TObject *Sender)
{
scr.exit();
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormPaint(TObject *Sender)
{
_redraw=true;
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormResize(TObject *Sender)
{
scr.resize();
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60,float(scr.xs)/float(scr.ys),0.1,100.0);
_redraw=true;
}
//-----------------------------------------------------------------------
void __fastcall TMain::Timer1Timer(TObject *Sender)
{
animx+=danimx; if (animx>=360.0) animx-=360.0; _redraw=true;
animy+=danimy; if (animy>=360.0) animy-=360.0; _redraw=true;
if (_redraw) { draw(); _redraw=false; }
}
//---------------------------------------------------------------------------
void __fastcall TMain::FormKeyDown(TObject *Sender, WORD &Key, TShiftState Shift)
{
Caption=Key;
if (Key==40){ animx+=2.0; _redraw=true; }
if (Key==38){ animx-=2.0; _redraw=true; }
if (Key==39){ animy+=2.0; _redraw=true; }
if (Key==37){ animy-=2.0; _redraw=true; }
}
//---------------------------------------------------------------------------
I know it is a bit of an index mess, and the winding rule is not guaranteed either, as I was too lazy to make uniform indexing. Beware: the a indexes of each hex are not linear, and if you want to use them to map to a 2D map you would need to recompute them using atan2 on the x,y of each hex's center point position, for example as sketched below.
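(A sketch of that recomputation, assuming the pi constant and na from the code above:)
// recompute a linear angular index a from a hex center (cx,cy)
double ang=atan2(cy,cx);              // -pi .. +pi
if (ang<0.0) ang+=2.0*pi;             // wrap to 0 .. 2*pi
int a=int(ang*double(na)/(2.0*pi));   // 0 .. na-1 angular slot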
Here are previews:
Still, some distortions are present. They are caused by the fact that we are using 5 triangles to connect at the equator (so the connection is guaranteed). That means the circumference there is 5*R instead of 6.28*R. However, this can still be improved by a field simulation: just take all the points, add repulsive forces based on their distances, and bind the points to the sphere surface. Run the simulation, and when the oscillations drop below a threshold you have got your sphere grid... A sketch of such a relaxation follows below.
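(For illustration only, my sketch of such a relaxation; it uses all-pairs repulsion for simplicity instead of nearest neighbours:)
#include <math.h>
#include <vector>
struct P3 { double x,y,z; };
// repel every pair of points, then re-project onto the sphere of radius R;
// run for a fixed number of iterations (or until movement falls below a threshold)
void relax_on_sphere(std::vector<P3> &pts,double R,int iters,double k)
{
for (int it=0;it<iters;it++)
 {
 std::vector<P3> f(pts.size(),P3{0.0,0.0,0.0});
 for (size_t i=0;i<pts.size();i++)
  for (size_t j=i+1;j<pts.size();j++)
   {
   double dx=pts[i].x-pts[j].x;
   double dy=pts[i].y-pts[j].y;
   double dz=pts[i].z-pts[j].z;
   double d2=dx*dx+dy*dy+dz*dz+1e-12;
   double s=k/d2; // repulsive force ~ 1/d^2
   f[i].x+=s*dx; f[i].y+=s*dy; f[i].z+=s*dz;
   f[j].x-=s*dx; f[j].y-=s*dy; f[j].z-=s*dz;
   }
 for (size_t i=0;i<pts.size();i++)
  {
  pts[i].x+=f[i].x; pts[i].y+=f[i].y; pts[i].z+=f[i].z;
  double l=R/sqrt(pts[i].x*pts[i].x+pts[i].y*pts[i].y+pts[i].z*pts[i].z);
  pts[i].x*=l; pts[i].y*=l; pts[i].z*=l; // project back onto the sphere
  }
 }
}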
Another option would be to find some equation to remap the grid points (similar to what I did for the triangle-to-pie conversion) that would give better results.
