I have to solve a distance problem and I'm getting pretty upset because I don't know how to do it, despite having tried nearly everything I've found on the web. Here's my problem:
I work in the automotive industry and we use tessellated data (like STL; in my case the JT format). I have a part that needs to be welded, and I have the coordinates of the weld point. To ensure that the weld point is located correctly, I want to check whether it collides with the part. If yes, the part can be welded; otherwise the weld point would be in the air and the part couldn't be welded. Therefore I want to calculate the distance between the part (which is basically a set of triangles or polygons in the mentioned format) and the point. If the distance to one of the triangles is less than the given radius of the weld point, then there is a collision and the weld point is located correctly and can be welded.
A how-to, pseudo-code or anything else that could be useful would be very much appreciated. I'm coding in C++ using the JTOpen Toolkit. Please note that the point doesn't necessarily have to lie within the triangle. Maybe an example will help you and me understand the problem/answers (there is no collision in the following example):
Let v1, v2, v3 be the vertices of a triangle and px, py, pz the coordinates of the weld point (radius 1.8). I also get normals (n1, n2, n3) for every vertex, but I don't know what to do with them...
v1x = -273.439
v1y = -787.775
v1z = 854.273
v2x = -274.247
v2y = -788.085
v2z = 855.244
v3x = -272.077
v3y = -787.864
v3z = 855.377
px = 140.99
py = -787.78
pz = 458.93
n1x = -0.113447
n1y = 0.97007
n1z = 0.214693
n2x = -0.113423
n2y = 0.970069
n2z = 0.214712
n3x = -0.110158
n3y = 0.969844
n3z = 0.217413
Thank you in advance!
The locus of points at a given distance from a triangle is a composite surface made of:
two triangles parallel to the original one, at the given distance;
three half-cylinders corresponding to points at equal distance from the edges;
sphere sections corresponding to points at equal distance from the vertices.
If you look at the triangle face-on, you will observe that these surfaces are delimited by
the three triangle sides,
the six normals to the sides drawn at the vertices.
Hence, to find the distance from a given point, you project it orthogonally onto the plane of the triangle and find its location among the 7 regions delimited by those half-lines and segments. Using an appropriate spatial rotation, this classification can be done in 2D. Then, knowing the region, you use either the distance to the plane, to an edge or to a vertex.
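The same region logic can also be carried out directly in 3D without an explicit rotation, using the well-known closest-point-on-triangle formulation (as in Ericson's Real-Time Collision Detection). A minimal C++ sketch, with my own Vec3 type rather than JTOpen's:

struct Vec3 { double x, y, z; };

Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator*(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the point of triangle (v1, v2, v3) closest to p.
Vec3 closestPointOnTriangle(Vec3 p, Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 ab = v2 - v1, ac = v3 - v1, ap = p - v1;
    double d1 = dot(ab, ap), d2 = dot(ac, ap);
    if (d1 <= 0 && d2 <= 0) return v1;                   // vertex region v1

    Vec3 bp = p - v2;
    double d3 = dot(ab, bp), d4 = dot(ac, bp);
    if (d3 >= 0 && d4 <= d3) return v2;                  // vertex region v2

    double vc = d1 * d4 - d3 * d2;
    if (vc <= 0 && d1 >= 0 && d3 <= 0)                   // edge region v1-v2
        return v1 + ab * (d1 / (d1 - d3));

    Vec3 cp = p - v3;
    double d5 = dot(ab, cp), d6 = dot(ac, cp);
    if (d6 >= 0 && d5 <= d6) return v3;                  // vertex region v3

    double vb = d5 * d2 - d1 * d6;
    if (vb <= 0 && d2 >= 0 && d6 <= 0)                   // edge region v1-v3
        return v1 + ac * (d2 / (d2 - d6));

    double va = d3 * d6 - d5 * d4;
    if (va <= 0 && (d4 - d3) >= 0 && (d5 - d6) >= 0)     // edge region v2-v3
        return v2 + (v3 - v2) * ((d4 - d3) / ((d4 - d3) + (d5 - d6)));

    double denom = 1.0 / (va + vb + vc);                 // interior of the face
    return v1 + ab * (vb * denom) + ac * (vc * denom);
}

// Weld-point test: collision if the closest point lies within the weld radius.
bool weldPointHitsTriangle(Vec3 p, double radius, Vec3 v1, Vec3 v2, Vec3 v3)
{
    Vec3 d = closestPointOnTriangle(p, v1, v2, v3) - p;
    return dot(d, d) <= radius * radius;
}

For the example data in the question, the closest point on the triangle is several hundred units away from the weld point, far more than the radius of 1.8, which matches the stated "no collision" case.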
Note that in the case of a tessellation, several triangles have to be considered. If there are many of them, an acceleration structure (e.g. a bounding-volume hierarchy or a spatial grid) will be needed. This is a broad and somewhat technical topic.
I am trying to find a method that quickly finds all cubes of size L that would be contained by a viewing frustum, maybe even using CUDA.
I have made a DDA traversal for raycasting, which is like a 1D case to me and simple, as I only move along the line at a known distance.
My instinct was to create a bounding box of the frustum and subdivide this space into a spatial grid of size-L cubes, then test each cell centre of the grid for being inside the frustum. Considering the frustum is a pyramid, it seems that only about half the cells of the bounding box would actually be inside it, so I feel that this method does too much work. It will surely work, though; I am hoping for a less naive or faster geometric approach.
Perhaps ray cast the left wall first, then the right wall, and then line cast in between these? So, in a nutshell, I'm looking for the R3 version of something like a DDA traversal.
The fastest way to detect whether a vertex resides within a frustum is the dot product. The frustum consists of 4 planes, namely top, bottom, left and right, plus two z values, the front and back clipping distances. For each vertex check two things: first, is it outside the front or back plane? And if not, is it inside the four side planes?
To check whether a vertex is outside the front or back plane, you test vertex.Z against your frustum:
isInsideZ = vertex.Z >= frustum.Zmin && vertex.Z <= frustum.Zmax;
To check whether it's inside the four frustum 'walls' you need to compute the cross vectors for them, oriented towards the frustum's centre. Then check the dot product of each cross vector and the position vector of your vertex relative to the respective plane. You obtain this position vector by subtracting some arbitrary point on the plane from the vertex you test. Should the dot product be positive, the vertex is above that plane.
isAbove[i] = Vector3D.Dot(cross[i], vertex - planeloc[i]) > 0;
Where planeloc[i] is any point located on the respective plane i.
The vertex is inside the frustum if all conditions are met:
isInside = isInsideZ && isAbove[0] && isAbove[1] && isAbove[2] && isAbove[3];
This sounds a bit awkward to handle, but a lot of things can be done outside the grinding loop, such as computing the cross products (i.e. the frustum plane normals) or the plane location vectors. For example, if a plane is spanned by (1,0,0) and (1,1,0), then (1,0,0) already represents a point located on that plane.
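Putting it together, a minimal C++ sketch of the whole test (the Frustum struct and field names are assumptions, not a particular engine's API):

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

struct Frustum {
    float zMin, zMax;      // front/back clipping values
    Vec3  normal[4];       // precomputed inward-facing normals: top, bottom, left, right
    Vec3  planeLoc[4];     // any point lying on the respective plane
};

bool isInsideFrustum(const Frustum& f, Vec3 v)
{
    // Front/back check first: a cheap early-out.
    if (v.z < f.zMin || v.z > f.zMax)
        return false;

    // Side planes: the vertex must be on the inner side of all four.
    for (int i = 0; i < 4; ++i)
        if (dot(f.normal[i], sub(v, f.planeLoc[i])) <= 0.0f)
            return false;

    return true;
}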
Normal marching cubes finds 12 edges per cube, but you can do 3 edges per cube, save the edges in an array, and then go through the cubes again, referencing the edges from the adjacent cubes rather than recalculating them.
The process of referencing adjacent cubes isn't clearly discussed on the Internet, so anyone using marching cubes would be welcome to help work out the details of the solution. Do you know of an existing implementation?
Here is a picture showing the 3 edges in yellow that you need for each cube, instead of 12.
EDIT: I just found this solution, although it's only part of it:
Imagine the 3 edges coming from the corner of the cube with the lowest coordinates. All other edges then belong to other cubes. If our cube has coordinates (x,y,z), the neighbouring cubes have coordinates (x+1,y,z), (x,y+1,z), (x,y,z+1), (x+1,y+1,z), (x+1,y,z+1), (x,y+1,z+1). You can think of each edge as a vector; the corner of our cube then has edges (1,0,0), (0,1,0), (0,0,1). The cube with coordinates (x+1,y,z) has edges (0,1,0) and (0,0,1) that belong to our cube. The cube (x+1,y+1,z) has only one edge, (0,0,1), that belongs to our cube. So if you store 3 elements per cube you can access them like this:
edge1 = cube[x][y][z][0];
edge2 = cube[x][y][z][1];
edge3 = cube[x][y][z][2];
edge4 = cube[x+1][y][z][1];
edge5 = cube[x+1][y][z][2];
edge6 = cube[x][y+1][z][0];
edge7 = cube[x][y+1][z][2];
edge8 = cube[x][y][z+1][0];
edge9 = cube[x][y][z+1][1];
edge10 = cube[x+1][y+1][z][2];
edge11 = cube[x+1][y][z+1][1];
edge12 = cube[x][y+1][z+1][0];
Now, which points does edge7 connect? The answer is (x,y+1,z) and (x,y+1,z)+(0,0,1) = (x,y+1,z+1).
Which cubes does edge7 connect? That is harder. The z coordinate changes along the edge, which means the cubes sharing the edge all have the same z coordinate; it is the other coordinates that vary. Where the edge reference has +1, the opposite cube keeps that larger coordinate; where it has +0, the opposite cube takes the smaller coordinate (i.e. -1). So the edge connects cubes (x,y,z) and (x-1,y+1,z). The other 2 cubes that share the same edge are (x,y+1,z) and (x-1,y,z).
-=-=-=-=-=-=-=-=-=-=-=--=-=-=-=-=-=-=--=-=
EDIT2:
So I am doing this, and it isn't so simple. I have a loop which simultaneously calculates the 8 corner points, the 12 edges, the edge interpolations, the bit values and the vertex values for the edges, all in one loop.
So I am adding a new loop before it to calculate as much as possible and place it in arrays to be used in the complicated loop.
I can recycle the interpolated edge-intersection values in an array, but I will still have to recalculate all the corner points in the complicated loop, because the corner values are what decide the bit numbers that index into the vertex table. That confuses me! I thought that once I had the edge-intersection values, I could use those directly to look up the triangle table, without having to calculate the corner points all over again!
In fact, no.
Anyway, here is another bit of information from someone who has already done it, if only it were readable!
http://www.new-npac.org/projects/sv2all/sv2/vtk/patented/vtkImageMarchingCubes.cxx
Scroll to this line: "Cubes are responsible for edges on their min faces."
A simple way to reduce edge calculations in the way you are suggesting is to compute the cubes one axis-aligned plane at a time.
If you kept all of the cubes, with their edges, in memory, it would be easy to compute each edge only once and to find adjacent edges by indexing. However, you usually don't want to keep all the cubes in memory at once because of the space requirements.
A solution to this is to compute one plane of cubes at a time. i.e. an axis aligned cross-section, starting from one side and progressing to the opposite side. You then only need to keep at most two full planes of cubes in memory at a time. As you move through each plane you can reference shared edges in the previous plane and previously computed cubes in the current plane. As you move to the next plane you can deallocate the plane you will no longer need.
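As a rough sketch of that bookkeeping (all names here are assumptions, not a particular library), each cell stores the vertex indices created on its three "min" edges, and only two slabs of cells are alive at once:

#include <utility>
#include <vector>

// One cell = one cube; e[] holds the vertex indices created on its three
// "min" edges (-1 = no vertex created yet).
struct CellEdges { int e[3] = { -1, -1, -1 }; };

using Slab = std::vector<CellEdges>;               // sizeX * sizeY cells, one z level

int cellIndex(int x, int y, int sizeX) { return y * sizeX + x; }

void marchVolume(int sizeX, int sizeY, int sizeZ)
{
    Slab prev(sizeX * sizeY), curr(sizeX * sizeY);

    for (int z = 0; z < sizeZ; ++z) {
        for (int y = 0; y < sizeY; ++y) {
            for (int x = 0; x < sizeX; ++x) {
                CellEdges& cell = curr[cellIndex(x, y, sizeX)];

                // Real code would create vertices only on this cube's three min
                // edges, e.g. cell.e[0] = makeVertexOnXEdge(x, y, z);
                // (makeVertexOnXEdge is hypothetical). Edges owned by neighbours
                // are looked up instead: the x-edge one row back is
                // curr[cellIndex(x, y - 1, sizeX)].e[0] (if y > 0), and the
                // x-edge one plane below is prev[cellIndex(x, y, sizeX)].e[0]
                // (if z > 0). Triangles are then emitted from these indices.
                (void)cell;
            }
        }
        std::swap(prev, curr);                     // the current slab becomes "previous"
        for (CellEdges& c : curr) c = CellEdges{}; // reset the new current slab
    }
}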
Edit: This article discusses doing just what I suggest:
http://alphanew.net/index.php?section=articles&site=marchoptim&lang=eng
Funny, because when I implemented my own MCs I came up with a similar solution.
When you start working with MCs you treat them as distinct cubes, but if you want to go for high performance you'll need to create the entire mesh as a whole, and creating vertex indices etc. is not so easy here. It gets even more interesting when you want to add smooth per-vertex normals :).
To solve this I created a simple index-cache mechanism to store the vertex index for each edge.
Then, for each computed edge I have the cube position x, y, z and an edge index, and I do the following:
For each axis:
    if the edge is on the '+' side of this axis:
        replace the edge index with its '-' side sibling
        increment the cube position along this axis
This simple operation gives me the correct cube position and an edge index of 0, 1 or 2. Then I compute a total cache index from the x, y, z, edgeIndex values with simple bit rotations.
When I have the cache index I check whether the cached value is bigger than -1. If it is, there is an already-computed vertex at this edge and I can reuse it. If it's -1, I need to create a new vertex and store its index in the cache. This way you compute each vertex only once, and you can even add a normal value shared between every triangle containing your vertex.
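As a hedged sketch of that remapping (the edge encoding, the hash-map cache and the helper names are my simplifications; kolenda's version uses a flat array initialised to -1 and bit rotations instead):

#include <cstdint>
#include <unordered_map>

// An edge is described by the axis it runs along plus flags saying whether it
// sits on the '+' face of each of the other two axes of its cube.
struct EdgeRef {
    int x, y, z;        // cube position (assumed non-negative, < 2^20)
    int axis;           // 0 = runs along x, 1 = along y, 2 = along z
    bool plus[3];       // plus[a]: edge lies on the '+' face of axis a
};

// Move the reference to the cube that owns the edge on its '-' side, so every
// physical edge gets exactly one canonical (x, y, z, axis) key.
void canonicalize(EdgeRef& e)
{
    int* coord[3] = { &e.x, &e.y, &e.z };
    for (int a = 0; a < 3; ++a) {
        if (a != e.axis && e.plus[a]) {
            e.plus[a] = false;      // switch to the '-' side sibling edge...
            ++*coord[a];            // ...which belongs to the next cube along a
        }
    }
}

// Pack the canonical reference into a single cache key.
uint64_t cacheKey(const EdgeRef& e)
{
    return (uint64_t(e.x) << 44) | (uint64_t(e.y) << 24) |
           (uint64_t(e.z) << 4)  |  uint64_t(e.axis);
}

// Look the edge up before creating a vertex; createVertex is a hypothetical
// callback that interpolates the position and appends it to the vertex list.
int getOrCreateVertex(std::unordered_map<uint64_t, int>& cache, EdgeRef e,
                      int (*createVertex)(const EdgeRef&))
{
    canonicalize(e);
    uint64_t key = cacheKey(e);
    auto it = cache.find(key);
    if (it != cache.end())
        return it->second;          // vertex already built on this edge: reuse it
    int idx = createVertex(e);
    cache[key] = idx;
    return idx;
}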
Yes, I think I do it similarly to kolenda. I have a struct with 5 ints: the (cube) index and 4 vertex indices (A, B, C, D).
For the innermost loop (x), I have just lastXCache and nextXCache. On the 4 edges pointing in the -x direction, I check whether lastXCache.A != -1 and, if so, assign the previously calculated value, etc.
In the +x direction I store the calculated vertices in nextXCache. When the cube is done: lastXCache = nextXCache;
For the y and z directions it needs to be a List (the Unity term for a mutable array); the next y is the next row (so sizeX entries ahead) and the next z is the next plane (so sizeX * sizeY entries ahead).
The only disadvantage is that this way it has to run cube after cube, i.e. serially. But you can calculate different chunks in parallel.
Another way I thought of that could be more parallel would need 2 passes: 1. calculate the 3 edges for every cube; when pass 1 is done -> 2. emit the triangles.
I don't really know which is better, but the way it currently works seems to be fast enough, even better with Unity Jobs: create one IJob per chunk/mesh.
We are given a set of triangles. Each triangle is a triplet of points. Each point is a triplet of real numbers. We can calculate the surface normal for each triangle. For Gouraud shading, however, we need vertex normals. Therefore we visit each vertex, look at the triangles that share that vertex, and average their surface normals to get the vertex normal.
What is the most efficient algorithm and data structure to achieve this?
A naive approach is this (pseudo python code):
MAP = dict()
for T in triangles:
    for V in T.vertices:
        key = hash(V)
        if key not in MAP:
            MAP[key] = []
        MAP[key].append(T)

VNORMALS = dict()
for key in MAP:
    VNORMALS[key] = avg([T.surface_normal for T in MAP[key]])
Is there a more efficient approach?
Visit each triangle, calculate the normal for that triangle, and ADD it to the vertex normal of each corner vertex.
Then, at the end, normalise the accumulated normal for each vertex.
That way you only have to traverse the triangles once and you only store one normal per vertex.
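A minimal sketch of that approach, assuming an indexed mesh (Vec3, Triangle and the helpers here are placeholders, not a particular library; if your triangles store raw coordinates instead of indices, first deduplicate the vertices with a map as in the question's pseudocode):

#include <cmath>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };
struct Triangle { int i0, i1, i2; };      // indices into the shared vertex array

Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}

std::vector<Vec3> vertexNormals(const std::vector<Vec3>& verts,
                                const std::vector<Triangle>& tris)
{
    std::vector<Vec3> normals(verts.size());      // all start at (0,0,0)

    // Pass 1: add each face normal to the normal of its three corner vertices.
    for (const Triangle& t : tris) {
        Vec3 n = cross(sub(verts[t.i1], verts[t.i0]),
                       sub(verts[t.i2], verts[t.i0]));
        int corners[3] = { t.i0, t.i1, t.i2 };
        for (int i : corners) {
            normals[i].x += n.x; normals[i].y += n.y; normals[i].z += n.z;
        }
    }

    // Pass 2: normalise the accumulated sums.
    for (Vec3& n : normals) {
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0) { n.x /= len; n.y /= len; n.z /= len; }
    }
    return normals;
}

Note that summing the un-normalised cross products weights each face by its area (the cross product's length is twice the triangle's area), which is usually a reasonable default.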
Each vertex belongs to one or more faces (usually triangles, sometimes quads -- I'll use triangles in this answer).
A triangle that is not attached to any other triangles cannot be 'smoothed'. It is flat. Only when a face has neighbours can you reason about smoothing them together.
For a vertex where multiple faces meet, calculate the normals for each of these faces. The cross product of two vectors returns a perpendicular (normal) vector, which is what we want.
A --- B
\ /
C
v1 = B - A
v2 = C - A
normal = v1 cross v2
Be careful to calculate these vectors consistently across all faces, otherwise your normal may point in the opposite direction to the one you require.
So at a vertex where multiple faces meet, sum the normals of the faces, normalise the resulting vector, and apply it to the vertex.
Sometimes you have a mesh where some parts of it are to be smoothed and others not. An easy-to-picture example is a cylinder made of triangles. The round surface of the cylinder smooths well, but if you include the triangles from the flat ends at the vertices around the sharp rim, it will look strange. To avoid this, you can introduce a rule that ignores normals from faces which deviate too far from the normal of the face you're calculating for.
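As a small sketch of that rule (the cos 60° crease threshold and the names are assumptions, not a fixed standard):

#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return len > 0 ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

// Smoothed normal for one corner of a face: average only those adjacent face
// normals (including this face's own) that lie within the crease angle of it.
Vec3 smoothedCornerNormal(Vec3 ownFaceNormal,
                          const std::vector<Vec3>& adjacentFaceNormals,
                          float cosCreaseAngle = 0.5f /* cos 60 degrees */)
{
    Vec3 own = normalize(ownFaceNormal);
    Vec3 sum{ 0, 0, 0 };
    for (Vec3 n : adjacentFaceNormals) {
        if (dot(normalize(n), own) >= cosCreaseAngle) {   // ignore sharp neighbours
            sum.x += n.x; sum.y += n.y; sum.z += n.z;
        }
    }
    return normalize(sum);
}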
EDIT: there's a really good video showing the technique for calculating Gouraud shading, though it doesn't discuss an actual algorithm.
You might like to take a look at the source of Three.js, specifically the computeVertexNormals function. It does not support maintaining sharp edges. The efficiency of your algorithm depends to a large extent upon the way in which you are modelling your primitives.
If I have a cube whose edges are parallel to the axes and which is centered at the origin, is it correct that the normals are parallel to the axes? In other words, can only one component of a normal vector be non-zero while the other two must be zero? If (x, y, z) is a normal vector and x is not zero, must y and z be zero?
In an OpenGL ES application, how many normals are needed for proper lighting? Do we need one normal per vertex, one normal per triangle, or one normal per surface?
These 2 lines of code are related to this question:
gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
gl.glNormalPointer(GL10.GL_FLOAT, 0, mNormalBuffer);
How does OpenGL ES know which normal corresponds to which triangle, vertex or surface of the mesh being drawn?
Normals are specified per vertex and do not have to be parallel to an axis (although they will be in your cube's case); they must be of unit length and perpendicular to the surface that your mesh is approximating.
Check out this answer to a similar question.
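For illustration, a minimal sketch using the GLES 1.x C API (the data values are placeholders) showing that the normal array is parallel to the vertex array, i.e. normal i is used for vertex i:

#include <GLES/gl.h>

void drawTriangle()
{
    // One normal per vertex, stored in the same order as the vertices.
    static const GLfloat verts[]   = { 0, 0, 0,   1, 0, 0,   0, 1, 0 };
    static const GLfloat normals[] = { 0, 0, 1,   0, 0, 1,   0, 0, 1 };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);

    glVertexPointer(3, GL_FLOAT, 0, verts);    // 3 components per vertex
    glNormalPointer(GL_FLOAT, 0, normals);     // normals are always 3 components

    glDrawArrays(GL_TRIANGLES, 0, 3);          // vertex i is lit with normal i

    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}

If you want flat (per-face) shading with this API, give all three vertices of a triangle the same normal, as in the placeholder data above; for smooth shading, store an averaged normal per shared vertex.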