Let's say I have a set of points in 3D. The points are uniformly spaced on the x and y axes, so one can think of them as a function z = f(x,y). As an example, x can be from {0,1,2} and y from {0,1,2}, giving a total of nine 3D points on a square grid. I am trying to implement a simple algorithm that generates a triangle mesh from these points, given their coordinates. I do not know much about mesh generation, but I do know that my points are evenly spaced on a grid in the x and y dimensions. So suppose my points were of the form:
0 0 0
0 5 0
0 0 0
where the row number represents the y coordinate, the column number represents the x coordinate, and the value represents the z coordinate. This set of points should generate a triangular mesh that looks like a square-based pyramid whose peak is at (1,1,5). I am looking for a simple algorithm that I could code up to generate such a mesh, given the specifics of this problem.
I have heard of Delaunay triangulation, but am not sure if it is applicable to this problem. Thanks.
A very easy solution is to take the four vertices of every grid cell and create two triangles from them: pick one diagonal of the cell, and each triangle is made of that diagonal plus two of the cell's sides.
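Here is a minimal Python sketch of that idea (the function name is mine; it assumes z is a 2D array of heights on a unit-spaced grid and returns indexed triangles):

def grid_to_mesh(z):
    rows, cols = len(z), len(z[0])
    # One vertex per grid point; the vertex index of (x, y) is y * cols + x.
    vertices = [(x, y, z[y][x]) for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            a = y * cols + x              # (x,   y)
            b = y * cols + x + 1          # (x+1, y)
            c = (y + 1) * cols + x        # (x,   y+1)
            d = (y + 1) * cols + x + 1    # (x+1, y+1)
            triangles.append((a, b, d))   # split the cell along the a-d diagonal
            triangles.append((a, d, c))
    return vertices, triangles

# The 3x3 "pyramid" grid from the question: 9 vertices, 8 triangles.
verts, tris = grid_to_mesh([[0, 0, 0], [0, 5, 0], [0, 0, 0]])

For a regular grid like yours this is essentially what a Delaunay triangulation of the (x, y) points would produce anyway (the grid case is degenerate, so either diagonal is valid), so you don't need a general Delaunay library here.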
Here is a problem that will turn your brain inside out; I've been trying to deal with it for quite some time already.
Suppose you have a sphere centered at the origin of 3D space. The sphere is segmented into a grid of equidistant points. The exact procedure that forms the grid isn't that important, but what seems simplest to me is to use the usual computer-graphics sphere generation procedure (the algorithm that forms the sphere is described in the picture below).
Now, once I have such a sphere (i.e. an icosahedron subdivided to some degree), I need a computationally trivial procedure that can snap (the angle of) a random unit vector to its closest icosahedron edge point (vertex). It is also acceptable if the vector is snapped to the center point of the triangle that it intersects.
I would like to emphasise that it is important that the procedure be computationally trivial. This means that procedures which actually create the sphere in memory and then search among every triangle of it are not a good idea, because such a search requires access to the heap and RAM, which is slow, and I need to perform this procedure millions of times on low-end mobile hardware.
The procedure should yield its result through a set of mathematical equations based only on two values: the vector and the degree of the icosahedron (i.e. of the sphere).
Any thoughts? Thank you in advance!
============
Edit
One afterthought that just came to mind: it seems that step 3 in the diagram below (i.e. project each new vertex to the unit sphere) is not important at all, because after bisection, projecting every vertex onto the sphere preserves all the angular characteristics of the bisected shape we are trying to snap to. So the task simplifies to identifying the coordinates of the bisected sub-triangle that the vector penetrates.
Make a table with 20 entries of the top-level icosahedron faces' coordinates (for example, build them from the wiki coordinate set):
The vertices of an icosahedron centered at the origin with an edge length of 2 and a circumscribed sphere radius of 2 sin(2π/5) are described by circular permutations of
V[] = (0, ±1, ±ϕ)
where ϕ = (1 + √5)/2 is the golden ratio (also written τ),
and calculate the corresponding central vectors C[] (the sum of the three vertex vectors of every face).
Find the closest central vector using the maximum dot product (DP) of your vector P with all the C[]. It might be possible to reduce the number of checks by accounting for the components of P (for example, if the dot product of P with some V[i] is negative, there is no sense in considering the faces that are neighbors of V[i]). I am not sure that this elimination takes less time than a direct full comparison of the DPs with all the centers.
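For concreteness, here is a small Python sketch of that table and lookup; recovering the faces from pairwise edge distances and the function names are my own choices, based on the wiki coordinates quoted above:

import itertools, math

PHI = (1 + math.sqrt(5)) / 2

# The 12 icosahedron vertices: cyclic permutations of (0, +/-1, +/-phi).
VERTS = [p for a in (1, -1) for b in (PHI, -PHI)
         for p in ((0, a, b), (a, b, 0), (b, 0, a))]

def dist2(u, v):
    return sum((ui - vi) ** 2 for ui, vi in zip(u, v))

# The 20 faces: vertex triples that are pairwise at edge length 2 (squared: 4).
FACES = [t for t in itertools.combinations(range(12), 3)
         if all(abs(dist2(VERTS[i], VERTS[j]) - 4) < 1e-9
                for i, j in itertools.combinations(t, 2))]

# Central (unnormalized) vector of every face: sum of its three vertices.
CENTERS = [tuple(sum(VERTS[i][k] for i in f) for k in range(3)) for f in FACES]

def closest_face(p):
    # Index of the top-level face whose center is best aligned with p.
    return max(range(len(FACES)),
               key=lambda f: sum(c * pi for c, pi in zip(CENTERS[f], p)))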
When the big triangle face is determined, project P onto the plane of that face and get the coordinates of P' in u-v (decompose AP' along AB and AC, where A, B, C are the face vertices).
Multiply u,v by 2^N (degree of subdivision).
u' = u * 2^N
v' = v * 2^N
iu = Floor(u')
iv = Floor(v')
fu = Frac(u')
fv = Frac(v')
The integer part of u' is the "row" of the small triangle and the integer part of v' is the "column". The fractional parts are barycentric-style coordinates inside the small cell, so the largest of fu, fv and 1-fu-fv tells you which of the cell's corners is the closest vertex. Calculate that closest vertex and normalize the resulting vector if needed.
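Putting those steps together, here is a rough Python/NumPy sketch; the least-squares decomposition and the "test the nearby cell corners" snapping step are my own way of filling in the details, not necessarily the fastest formulation:

import numpy as np

def snap_to_subvertex(p, A, B, C, N):
    # p: query direction; A, B, C: vertices of the chosen top-level face;
    # N: subdivision degree. Returns the snapped unit vector.
    A, B, C, p = (np.asarray(v, dtype=float) for v in (A, B, C, p))
    n = np.cross(B - A, C - A)                  # face plane normal
    hit = p * (A @ n) / (p @ n)                 # intersection of the ray with the face plane
    # Decompose A->hit as u*(B-A) + v*(C-A) by least squares on a 3x2 system.
    M = np.stack([B - A, C - A], axis=1)
    u, v = np.linalg.lstsq(M, hit - A, rcond=None)[0]
    # Scale into the subdivided lattice, then test the corners of the small
    # cell containing (u', v') and keep whichever is closest to the hit point.
    s = 2 ** N
    iu, iv = int(np.floor(u * s)), int(np.floor(v * s))
    best = None
    for du, dv in ((0, 0), (1, 0), (0, 1), (1, 1)):
        cu, cv = iu + du, iv + dv
        if cu >= 0 and cv >= 0 and cu + cv <= s:    # stay inside the big face
            cand = A + (B - A) * cu / s + (C - A) * cv / s
            if best is None or np.linalg.norm(cand - hit) < np.linalg.norm(best - hit):
                best = cand
    return best / np.linalg.norm(best)          # snap back onto the unit sphere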
It's not equidistant; you can see that if you study this version:
It's a problem of geodesic dome frequency, and some people have spent time researching all the known methods for doing that geometry: http://geo-dome.co.uk/article.asp?uname=domefreq (see, that guy is a self-labelled geodesizer :)
One page told me that the vertex-count progression goes like this: 2 + 10·4^N (12, 42, 162, ...)
You can simplify it down to a simple flat fractal triangle, where every triangle divides into 4 smaller triangles, and at each step the subdivision is rotated 12 times around the sphere.
Logically, it is only one triangle rotated 12 times, and if you solve the code for that one triangle, then you have the lowest-computation version of the geodesic spheres.
If you don't want to keep the 12 sides as a series of arrays and you want a lower-memory version, then you can read about midpoint subdivision code; there are many versions of midpoint subdivision (a minimal sketch is at the end of this answer).
I may have completely missed something; it's just that there isn't a truly equidistant geodesic dome, because a flat triangle doesn't map exactly onto a sphere, only the icosahedron itself does.
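For reference, here is a minimal Python sketch of the midpoint subdivision mentioned above (the recursive structure and names are mine): each triangle is split into four by its edge midpoints, and the new vertices are pushed back onto the unit sphere.

import numpy as np

def subdivide(tri, depth):
    # tri: three unit vectors (numpy arrays); returns a flat list of triangles.
    if depth == 0:
        return [tri]
    a, b, c = tri
    # Edge midpoints, re-projected onto the unit sphere.
    ab = (a + b) / np.linalg.norm(a + b)
    bc = (b + c) / np.linalg.norm(b + c)
    ca = (c + a) / np.linalg.norm(c + a)
    out = []
    for child in ((a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)):
        out.extend(subdivide(child, depth - 1))
    return out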
I am having trouble figuring out the math for a problem I am working on.
I have an n by m rectangle and need to place k points in it such that the points are as close to equidistant from each other as possible.
Normally this is pretty easy: I construct a regular polygon with k vertices, center it in the center of my rectangle, stretch it to fit, and voila, its vertices are my output points.
The catch is that distances on this rectangle wrap around. As such, the point (0,0) is just 1 pixel away from the point (n-1,m-1) (using 0-indexed coordinates).
I've been messing around a bit on paper and getting nowhere fast with this. Does anybody have any idea how to calculate this?
tl;dr:
Inputs: n - width of rectangle, m - height of rectangle, k - number of points
Outputs: k pairs of (x,y) coordinates
Constraints: the output coordinates are equidistant from one another under wrapping coordinate systems.
Any advice on where I could look for such a geometry problem?
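To make the metric concrete, here is a minimal Python sketch of the wrapped (toroidal) distance described above, assuming Euclidean distance and coordinates in [0, n) x [0, m); the function name is mine:

import math

def torus_distance(p, q, n, m):
    dx = abs(p[0] - q[0])
    dy = abs(p[1] - q[1])
    dx = min(dx, n - dx)   # wrap horizontally
    dy = min(dy, m - dy)   # wrap vertically
    return math.hypot(dx, dy)

# Example: on a 10x10 grid, (0,0) and (9,0) are 1 apart, not 9.
print(torus_distance((0, 0), (9, 0), 10, 10))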
Given an NxNxN cube, how can I find all the 2x2x2 boxes within it? Of course, if N is even, we can find 2x2x2 boxes without overlapping, but when N is odd, there is overlap between some of the 2x2x2 boxes found in the bigger cube.
So,
1- How can I find all the non-overlapping 2x2x2 boxes in a bigger NxNxN cube where N is even?
input: NxNxN cube
output: all the possible non-overlapping 2x2x2 cubes.
2- How can I find all the overlapping 2x2x2 boxes in a bigger NxNxN cube where N is odd? This time, the overlapped regions of the 2x2x2 boxes should be set to zero on the second (or later) visits; i.e. each overlapped region should be visited (counted) once, not more.
input: NxNxN cube
output: all the possible overlapping 2x2x2 cubes, with zero values for the overlapped voxels on the second or later visits.
I will give you an answer for the part where N is even. The rest can be easily adapted, I hope you can do this yourself :-) Or at least try it - if you've got problems, just come back to us.
I don't have MATLAB installed anymore, so I hope this is free of typo errors. But the idea should be clear:
N = 10;
% First create all possible starting coordinates of 2x2x2 cubes within the big cube
% (could easily be adapted to other small-cube sizes like 3x3x3 if you want to):
coords = 1:2:(N-1);
% Now create all possible combinations of starting coordinates in every direction
% (as it is a cube, the starting points in the x, y, z directions are the same):
sets = {coords, coords, coords};
[x, y, z] = ndgrid(sets{:});
cartProd = [x(:) y(:) z(:)]; % taken from here: http://stackoverflow.com/a/4169488/701049
% You could instead also use this function, which generates all possible combinations:
% https://www.mathworks.com/matlabcentral/fileexchange/10064-allcomb-varargin-
% Now cartProd contains all possible start points of small cubes as row vectors.
% If you want, you can easily calculate the corresponding end points by simply
% adding +1 to every entry, which effectively yields a small-cube size of 2. To
% support e.g. 3x3x3 cubes instead, add +2 in every dimension.
endPoints = cartProd + 1;
% E.g.: the first small cube starts at [x,y,z] = cartProd(1,:) and ends at
% [x_e, y_e, z_e] = endPoints(1,:).
Have fun :-)
Hint for the odd big cube: simply treat it as an evenly-sized cube, e.g. treat a 9x9x9 cube as 10x10x10, apply my algorithm from above, and then move the outermost small cubes one step towards the center. That means: take the small cubes with the biggest x, y or z starting coordinate and subtract 1 in that direction. So the starting coordinate of all small cubes with x_max=9 is changed to x=8, and then the same is done for y_max=9 and z_max=9.
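A sketch of that hint in NumPy (the answer above uses MATLAB; the function name and the clamping step are mine):

import numpy as np

def small_cube_starts(N, size=2):
    # 1-based start coordinates of size^3 small cubes covering an NxNxN cube.
    N_even = N if N % 2 == 0 else N + 1           # treat 9x9x9 as 10x10x10
    coords = np.arange(1, N_even, size)           # 1, 3, 5, ...
    x, y, z = np.meshgrid(coords, coords, coords, indexing='ij')
    starts = np.stack([x.ravel(), y.ravel(), z.ravel()], axis=1)
    if N % 2 == 1:
        # The outermost small cubes would stick out of the odd cube: pull them in.
        starts[starts == coords[-1]] -= 1
    return starts                                 # end points are starts + size - 1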
Can someone tell me an algorithm to find the total number of integral coordinates lying inside or on a quadrilateral? The coordinates of the quadrilateral are given as input, and you have to report the total number of lattice points lying inside or on it.
For example, if the given points are (5,3), (1,1), (3,4), (6,1), then the answer should be 14. If you draw the quadrilateral, you will find that exactly 14 integral coordinates, such as (3,2), (5,1), etc., lie inside or on it.
If your quadrilateral vertices have integer coordinates, then it is possible to use Pick's theorem.
A = i + b/2 - 1
where A is area, i is the number of interior integer points, and b is the number of integer points at the borders (edges).
You can find the quad's area with any method (for example, the shoelace formula), and the number of lattice points on each edge is GCD(|dx|, |dy|) (the +1 term is excluded to avoid counting vertices twice). The count you want is then i + b = A + b/2 + 1.
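A small Python sketch of that recipe, assuming the vertices are given in order around the quadrilateral (the function name is mine):

from math import gcd

def lattice_points_in_polygon(pts):
    area2 = 0   # twice the signed area, via the shoelace formula
    b = 0       # lattice points on the boundary
    for (x1, y1), (x2, y2) in zip(pts, pts[1:] + pts[:1]):
        area2 += x1 * y2 - x2 * y1
        b += gcd(abs(x2 - x1), abs(y2 - y1))   # points per edge, one endpoint excluded
    A = abs(area2) / 2
    i = A - b / 2 + 1                          # Pick's theorem: A = i + b/2 - 1
    return int(i + b)

# The question's example, with the vertices ordered around the quad:
print(lattice_points_in_polygon([(1, 1), (6, 1), (5, 3), (3, 4)]))  # 14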
Normal marching cubes finds 12 edges per cube, but you can compute just 3 edges per cube, save those edges in an array, and then, when going through the cubes again, reference the edges of the adjacent cubes rather than recalculating them.
The process of referencing adjacent cubes isn't clearly discussed on the Internet, so anyone using marching cubes would be welcome to help find the details of the solution. Do you know of an implementation already?
Here is a picture showing, in yellow, the 3 edges you need for each cube instead of 12.
EDIT- I just found this solution, although it's just a part of it:
Imagine the 3 edges coming from the corner of the cube with the lowest coordinates. All other edges then belong to other cubes. If our cube has coordinates (x,y,z), the neighboring cubes have coordinates (x+1,y,z), (x,y+1,z), (x,y,z+1), (x+1,y+1,z), (x+1,y,z+1), (x,y+1,z+1). You can think of each edge as a vector; the low corner of the cube then owns the edges (1,0,0), (0,1,0), (0,0,1). The cube with coordinates (x+1,y,z) owns the edges (0,1,0) and (0,0,1) that also belong to our cube. The cube (x+1,y+1,z) owns only one edge, (0,0,1), that belongs to our cube. So if you store 3 edge entries per cube you can access all 12 like this:
edge1 = cube[x][y][z][0];
edge2 = cube[x][y][z][1];
edge3 = cube[x][y][z][2];
edge4 = cube[x+1][y][z][1];
edge5 = cube[x+1][y][z][2];
edge6 = cube[x][y+1][z][0];
edge7 = cube[x][y+1][z][2];
edge8 = cube[x][y][z+1][0];
edge9 = cube[x][y][z+1][1];
edge10 = cube[x+1][y+1][z][2];
edge11 = cube[x+1][y][z+1][1];
edge12 = cube[x][y+1][z+1][0];
Now, which points does edge7 connect? The answer is (x,y+1,z) and (x,y+1,z)+(0,0,1) = (x,y+1,z+1).
Now, which cubes share edge7? That is a bit harder. The z coordinate changes along the edge, so the neighboring cubes all have the same z coordinate; it is the other coordinates that vary. The edge sits at lattice position (x, y+1), so the cubes sharing it have x or x-1 as their x coordinate and y or y+1 as their y coordinate. The edge therefore connects cubes (x,y,z) and (x-1,y+1,z); the other 2 cubes that share the same edge are (x,y+1,z) and (x-1,y,z).
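Here is a small Python sketch of that bookkeeping (the function names are mine): given a cube (x,y,z) and the index 0, 1 or 2 of one of the three edges it owns, return the edge's end points and the four cubes that share it.

AXES = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def edge_endpoints(x, y, z, e):
    d = AXES[e]
    return (x, y, z), (x + d[0], y + d[1], z + d[2])

def cubes_sharing_edge(x, y, z, e):
    # The edge runs along axis e; the four cubes around it differ by 0 or -1
    # steps along the other two axes.
    a, b = [i for i in range(3) if i != e]
    cubes = []
    for da in (0, -1):
        for db in (0, -1):
            c = [x, y, z]
            c[a] += da
            c[b] += db
            cubes.append(tuple(c))
    return cubes

# edge7 above is edge 2 owned by cube (x, y+1, z); with x = y = z = 0 this gives
# (0,1,0), (0,0,0), (-1,1,0), (-1,0,0), matching the four cubes listed in the text.
print(cubes_sharing_edge(0, 1, 0, 2))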
-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=--=-=
EDIT2-
So I am doing this, and it isn't so simple. I have a loop which simultaneously calculates the 8 points, the 12 edges, the interpolation along the edges, the bit values and the vertex values for the edges, all in one loop.
So I am adding a new loop before it, to calculate as much as possible in advance and place it in arrays to be used in the complicated loop.
I can recycle the interpolated values of the intersection points along edges in an array, although I will still have to recalculate all the corner points again in the complicated loop, because the values of the points are what decide the bit numbers that index into the vertex table. That confuses me! I thought that once I have the edge intersection values, I could use those directly to get to the triangle tables, without having to calculate the corner points all over again!
In fact, no.
Anyway, here is another bit of information from someone who already did it, if only it were readable!
http://www.new-npac.org/projects/sv2all/sv2/vtk/patented/vtkImageMarchingCubes.cxx
Scroll to this line: "Cubes are responsible for edges on their min faces."
A simple way to reduce edge calculations in the way you are suggesting is to compute cubes one axis aligned plane at a time.
If you kept all of the cubes, with their edges, in memory, it would be easy to compute each edge only once and to find adjacent edges by indexing. However, you usually don't want to keep all the cubes in memory at once because of the space requirements.
A solution to this is to compute one plane of cubes at a time, i.e. an axis-aligned cross-section, starting from one side and progressing to the opposite side. You then only need to keep at most two full planes of cubes in memory at a time. As you move through each plane you can reference shared edges in the previous plane, as well as previously computed cubes in the current plane. As you move to the next plane you can deallocate the plane you will no longer need. (A rough sketch of this is below, after the link.)
Edit: This article discusses doing just what I suggest:
http://alphanew.net/index.php?section=articles&site=marchoptim&lang=eng
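A rough Python sketch of that two-plane scheme; everything here (the names and the make_plane/process_cube callbacks) is illustrative rather than a real API:

def march_by_planes(dims, make_plane, process_cube):
    nx, ny, nz = dims
    prev_plane = None
    for z in range(nz - 1):
        # Allocate/compute only the cubes of the current cross-section,
        # e.g. an (nx-1) x (ny-1) grid of per-cube edge records.
        cur_plane = make_plane(z)
        for y in range(ny - 1):
            for x in range(nx - 1):
                # Edges shared with z-1 come from prev_plane; edges shared with
                # smaller x or y come from already-visited cubes in cur_plane.
                process_cube(x, y, z, cur_plane, prev_plane)
        prev_plane = cur_plane   # the plane before that can now be freed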
Funny, because when I implemented my own MCs I came up with a similar solution.
When you start working with MCs you treat them as distinct cubes, but if you want to go for high performance you'll need to create the entire mesh as a whole, and creating vertex indices etc. is not so easy there. It gets even more interesting when you want to add smooth per-vertex normals :).
To solve this I created a simple index-cache mechanism to store the vertex indices for each edge.
Then, for each computed edge I have the cube position x,y,z and the edge index, and I do the following:
For each axis:
    if the edge is on '+' side of axis:
        replace edge index with its '-' side sibling
        increment cube position along axis
This simple operation gives me the correct cube position and an edge index of 0, 1 or 2. Then I compute a total cache index from the x, y, z, edgeIndex values with simple bit rotations.
When I have the cache index I check whether the stored value is bigger than -1. If it is, there is an already-computed vertex at this edge and I can reuse it. If it's -1, I need to create a new vertex and store its index in the cache. This way you'll compute each vertex only once, and you can even add a normal value shared between every triangle containing that vertex.
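A minimal Python sketch of such a cache (I use a dictionary key instead of the bit-rotated index; the class and method names are mine):

class EdgeCache:
    def __init__(self):
        self.cache = {}      # (x, y, z, axis) -> vertex index
        self.vertices = []   # one entry per unique edge crossing

    def vertex_index(self, cube_xyz, edge_axis, edge_offset, make_vertex):
        # cube_xyz:    (x, y, z) of the current cube
        # edge_axis:   0, 1 or 2, the axis the edge runs along
        # edge_offset: 0/1 per axis, which corner of the cube the edge starts
        #              from (its entry along edge_axis is 0)
        # make_vertex: callback that interpolates the crossing on demand
        # Canonical owner: shift '+'-side edges into the neighboring cube so
        # that every physical edge has exactly one key.
        key = (cube_xyz[0] + edge_offset[0],
               cube_xyz[1] + edge_offset[1],
               cube_xyz[2] + edge_offset[2],
               edge_axis)
        if key not in self.cache:
            self.vertices.append(make_vertex())
            self.cache[key] = len(self.vertices) - 1
        return self.cache[key]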
Yes, I think I do it similarly to kolenda. I have a struct with 5 ints: the (cube) index and 4 vertex indices (A, B, C, D).
For the innermost loop (x), I just have lastXCache and nextXCache. On the 4 edges pointing in the -x direction, I check whether lastXCache.A != -1 and, if so, assign the previously calculated value, etc.
In the +x direction I store the calculated vertices in nextXCache. When the cube is done: lastXCache = nextXCache;
For the y and z directions it needs to be a List (the C# type Unity uses for a mutable array): the next y is the next row (so offset sizex) and the next z is the next plane (so offset sizex * sizey).
The only disadvantage is that this way it has to run cube after cube, so serially. But you can calculate different chunks in parallel.
Another way I thought of that could be more parallel would need 2 passes: 1. calculate the 3 edges for every cube; when that is done -> 2. draw the triangles.
I don't really know which is better, but the way it works now seems to be fast enough. It's even better with Unity Jobs: create one IJob per chunk/mesh.