How to calculate the normals of a box? - algorithm

I am trying to create an algorithm that calculates the normals of a model/mesh. People have been telling me to use the cross product of two edge vectors, which at first seemed like a good idea until I discovered that it might not always work. For instance, just imagine a box with its front face sitting at the origin and its back face down the Z axis. Here is an image:
I do apologize for the bad handwriting, but that shouldn't be of any significance. As you can see, I cross v and u to get the normal pointing toward the positive z axis. However, if I use that same calculation for the back face, then obviously the normal will be a vector pointing into the shape. The result is that I have inaccurate normals when calculating the brightness of a light. I want the normal to be facing away from the model at all times.
I know there has to be a better way to calculate the normals, but I don't know what it is. Can anyone suggest another algorithm that would get rid of this problem? If not, then there has to be a way to check whether or not a normal is facing inside the object/model. If so, can you suggest it in your answer and point to where I could find an explanation of it? I would love to have an intuition for how these methods work.

Most software packages obey a configurable cyclic ordering for triangle indices - clockwise or anti-clockwise. Thus all meshes they export have self-consistent ordering, and as long as your program uses the same convention, you should have nothing to worry about.
Having said that, I imagine you want to know what to do in the hypothetical (?) situation where the index ordering is inconsistent.
One method we could use is ray intersection. The key fact is that a ray whose origin lies outside a closed mesh intersects it an even number of times, and a ray whose origin lies inside intersects it an odd number of times.
To do this, we can do the following:
Calculate the "normal" using the cross product as above (and normalize it) => N
Take any point on the triangle (preferably the midpoint)
Increment this point along the normal by some small epsilon value (depends on your floating point format and size of model - I'd say 1e-4 for single and 1e-8 for double precision) => P
Intersect this ray [dir = N, src = P] with all triangles in the mesh (a good algorithm for this would be Möller–Trumbore)
If the number of intersections is even, then the ray started from outside the mesh; this means that the normal points outward from the mesh (because you offset its origin from a point on the surface) - and of course, vice versa.
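For concreteness, here is a minimal sketch of that procedure in Python/numpy (my own illustrative function names; it assumes the mesh is closed and that each triangle is given as an array of three 3D vertices):

```python
import numpy as np

EPS = 1e-9

def ray_hits_triangle(orig, direction, v0, v1, v2):
    # Möller–Trumbore ray/triangle intersection; counts only hits with t > 0.
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < EPS:                    # ray parallel to the triangle's plane
        return False
    inv_det = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return False
    return np.dot(e2, q) * inv_det > EPS  # hit is in front of the ray origin

def oriented_normal(tri, mesh_tris, eps=1e-4):
    # Returns tri's unit normal, flipped if necessary so it points out of the mesh.
    a, b, c = tri
    n = np.cross(b - a, c - a)
    n /= np.linalg.norm(n)
    origin = (a + b + c) / 3.0 + eps * n          # midpoint nudged along the candidate normal
    hits = sum(ray_hits_triangle(origin, n, *t) for t in mesh_tris)
    return n if hits % 2 == 0 else -n             # even hit count => origin outside => n points outward
```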
Minor (-ish?) digression: a naive approach to the above, looping through all triangles in the mesh, would be O(n) per query, and hence the whole procedure (run for every face) would have quadratic time complexity. This is perfectly fine for very small meshes of ~20 triangles (e.g. a box), but not ideal for anything larger!
You can use spatial sub-division techniques to lower the cost of this intersection step:
K-D trees / Octrees: These require O(n log n) to construct (for the best algorithm, that is - see Ingo Wald's paper), but intersections are guaranteed to be O(log n) if done properly. The overall complexity would then be O(n log n), which is pretty much the best you can get.
Grid: This simply partitions the search space and triangles into smaller boxes. Construction is O(n) and much more memory-efficient. Intersection time is still O(n), but the constant factor is much smaller than that of the naive approach.

The cross product is anticommutative, so v x u is not the same as u x v. In fact, they point in exactly opposite directions.
For the front face, you want to take u x v (assuming you're in a right-handed coordinate system), and for the back face you want v x u.
See right-hand rule for more info on how crossing vectors works.
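For example, in numpy (right-handed coordinates, with u and v being two edges of the front face):

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])   # edge along +x
v = np.array([0.0, 1.0, 0.0])   # edge along +y

print(np.cross(u, v))   # [0. 0. 1.]  -> +z, out of the front face
print(np.cross(v, u))   # [0. 0. -1.] -> the exact opposite direction
```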

Related

Efficient sorting of integer vertices of a convex polygon

I am given as input n pairs of integers which describe points in the 2D plane that are known ahead of time to be vertices of some convex polygon.
I'd like to efficiently sort these points in a clockwise or counter-clockwise fashion.
At first I thought of doing something like the initial step of Graham Scan, but I can't see a simple way to break ties for vertices that make the same angle with the anchor point.
Notice that, as you walk along the sides of the polygon, sometimes these vertices may be getting closer to the anchor point, and sometimes they may be getting farther.
Something that does seem to work is producing a point in the interior of the polygon (for instance, the average of the n points) and using it as an anchor point for radial sorting of the input.
Indeed, because the anchor point lies in the interior, any ray emanating from it contains at most one input point, so there will be no ties.
The overall complexity is not affected: computing the midpoint is an O(n) task, and the bottleneck is the actual sorting.
This does involve a few more operations than the hopeful version of Graham Scan (where we assume there are no ties to be broken), but the bigger loss is leaving integer arithmetic behind by introducing division into the mix.
This in turn can be remedied by scaling everything by a factor of n, but at this point it seems like grasping at straws.
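For what it's worth, here is a sketch of what that scaled-integer radial sort could look like (Python; the helper names are mine). The comparator works purely on integers by comparing half-planes first and then a cross product, using the sum of the points as an anchor scaled by n:

```python
from functools import cmp_to_key

def sort_ccw(points):
    # points: list of (x, y) integer pairs, vertices of a convex polygon.
    n = len(points)
    ax = sum(x for x, _ in points)              # anchor scaled by n: n * centroid
    ay = sum(y for _, y in points)
    scaled = [(x * n, y * n) for x, y in points]

    def half(p):
        # 0 for angles in [0, pi) measured from the anchor, 1 for [pi, 2*pi).
        dx, dy = p[0] - ax, p[1] - ay
        return 0 if (dy > 0 or (dy == 0 and dx > 0)) else 1

    def cmp(p, q):
        hp, hq = half(p), half(q)
        if hp != hq:
            return hp - hq
        # same half-plane: orientation test, integers only (no overflow in Python)
        cross = (p[0] - ax) * (q[1] - ay) - (p[1] - ay) * (q[0] - ax)
        return -1 if cross > 0 else 1           # no ties: the anchor is interior

    order = sorted(range(n), key=cmp_to_key(lambda i, j: cmp(scaled[i], scaled[j])))
    return [points[i] for i in order]
```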
Am I missing something?
Is there a simpler, efficient way to solve this sorting problem, preferably one that avoids floating point calculations?

Optimally filling a 3D sphere with smaller spheres

I'm trying to optimally fill a 3D spherical volume with "particles" (represented by 3D XYZ vectors) that need to maintain a specific distance from each other, while attempting to minimize the amount of free space present in-between them.
There's one catch, though: the particles themselves may fall on the boundary of the spherical volume; they just can't exist outside of it. Ideally, I'd like to maximize the number of particles that fall on this boundary (which makes this a kind of sphere packing problem, I suppose) and then fill the rest of the volume inwards.
Are there any kinds of algorithms out there that can solve this sort of thing? It doesn't need to be exact, but the key here is that the density of the final solution needs to be reasonably accurate (+/- ~5% of a "perfect" solution).
There is not a single formula which fills a sphere optimally with n spheres. On this Wikipedia page you can see the optimal configurations for n <= 12. For the optimal configurations for n <= 500 you can view this site. As you can see on these sites, different numbers of spheres have different optimal symmetry groups.
Your constraints are a bit vague, so it is hard to say for sure, but I would try a field approach for this. First see:
Computational complexity and shape nesting
Path generation for non-intersecting disc movement on a plane
How to implement a constraint solver for 2-D geometry?
and sub-links where you can find some examples of this approach.
Now the algo:
1. Place N particles randomly inside the sphere
N should be safely low so that it is smaller than your solution's particle count.
2. Start the field simulation
Use your solution rules to create attractive and repulsive forces and drive your particles via Newton/d'Alembert physics. Do not forget to add friction (so the movement stops over time) and the sphere volume boundary.
3. Stop when your particles stop moving
That is, stop once max(|particle_velocity|) < threshold.
4. Check whether all particles are correctly placed
i.e. not breaking any of your rules. If yes, then remember this placement as a solution and try again from #1 with N+1 particles. If not, stop and use the last correct solution.
To speed this up, you can add more particles instead of just N+1, similarly to a binary search (add 32 particles until you can ... then just 16 ...). Also, you do not need to use random locations in #1 for the later runs; you can let the particles start at the positions where they were placed in the last run's solution.
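A rough sketch of steps 1-3 in Python/numpy, under the assumed rule "particles repel each other while closer than min_dist and must stay inside the sphere" (the names and force law are mine, not a prescription):

```python
import numpy as np

def relax(n_particles, radius, min_dist, steps=5000, dt=0.01,
          friction=0.98, stop_velocity=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    # 1. random start positions inside the sphere
    p = rng.normal(size=(n_particles, 3))
    p *= (radius * rng.random(n_particles) ** (1 / 3)
          / np.linalg.norm(p, axis=1))[:, None]
    v = np.zeros_like(p)

    for _ in range(steps):
        # 2. repulsive force between particles closer than min_dist
        d = p[:, None, :] - p[None, :, :]
        dist = np.linalg.norm(d, axis=-1) + np.eye(n_particles)  # avoid /0 on the diagonal
        f = (d / dist[..., None] * (dist < min_dist)[..., None]).sum(axis=1)

        v = friction * (v + dt * f)        # friction so the motion dies out
        p += dt * v

        # sphere volume boundary: project escapees back onto the surface
        r = np.linalg.norm(p, axis=1)
        outside = r > radius
        p[outside] *= (radius / r[outside])[:, None]

        # 3. stop once nothing moves any more
        if np.abs(v).max() < stop_velocity:
            break
    return p
```

Step 4 (checking your rules) and the outer loop over N, including the binary-search-style refinement above, would wrap around this function.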
How to determine the accuracy of the solution is an entirely different matter. As you did not provide the exact rules, we can only guess. I would try to estimate the ideal particle density and compute the ideal particle count from the sphere volume. You can also use this as the initial guess of N and then compare it with the final N.

Generating a minimal set of vertices from a spline/curve

In my project, I represent geometry using splines. For physics and rendering I preprocess the splines and convert them into lines, and later polygons, by sampling the splines at a regular interval. However, I want to reduce the number of vertices/lines by ignoring samples that are already well enough represented by a line.
Coming up short when searching, I was wondering if there are any traditional techniques to convert a curve to a set of vertices while reducing the resulting error.
EDIT: To clarify, the result I want to end up with is a set of vertices/line segments that best represents the spline with the fewest vertices/line segments. I'm not sure how to define what "best represents the spline" really means, but the goal is to make it as hard as possible to distinguish the spline from the approximation.
It can be done by recursively refining any part that is not close to the segment between the part's endpoints.
Say we have a curve (spline) C: [0,1] -> R^n. The first approximation is the segment S between the curve's end points, [C(0), C(1)]. Take the point C(0.5) and check how far it is from the segment S. If it is far, we have to include it in the discretization; if not, S is a good approximation. If C(0.5) is far, the next approximation is the polyline [C(0), C(0.5), C(1)], and we apply the same procedure to the parts [C(0), C(0.5)] and [C(0.5), C(1)].
If you are using a polynomial spline of order >= 3 (e.g. a cubic spline), it can have inflection point(s). In that case it is possible that the curve point at the halfway parameter 'falls' right on the segment while the curve around it is far from the segment. In that case it is good to check one more level of sub-parts.
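Here is a small sketch of that recursive refinement in Python/numpy, assuming the spline is available as a callable C(t) for t in [0, 1]; the quarter-point probes are the extra level of checking mentioned for the inflection case:

```python
import numpy as np

def point_segment_distance(p, a, b):
    # distance from point p to the segment [a, b]
    ab = b - a
    denom = np.dot(ab, ab)
    if denom == 0.0:
        return np.linalg.norm(p - a)
    t = np.clip(np.dot(p - a, ab) / denom, 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def flatten(C, t0=0.0, t1=1.0, tol=1e-3, depth=0, max_depth=20):
    # Returns parameter values whose polyline stays within tol of the curve.
    a, b = C(t0), C(t1)
    probes = (0.75 * t0 + 0.25 * t1, 0.5 * (t0 + t1), 0.25 * t0 + 0.75 * t1)
    near = all(point_segment_distance(C(t), a, b) <= tol for t in probes)
    if near or depth >= max_depth:
        return [t0, t1]
    tm = 0.5 * (t0 + t1)
    left = flatten(C, t0, tm, tol, depth + 1, max_depth)
    right = flatten(C, tm, t1, tol, depth + 1, max_depth)
    return left[:-1] + right      # drop the duplicated midpoint parameter
```

The vertex list is then simply [C(t) for t in flatten(C)].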
This is entirely based on my own intuition, so I'm not sure if it coincides AT ALL with best practices. I do have a mathematics degree, so hopefully it's not too far off. Note that the computation involved may outweigh the performance gains from using fewer vertices if the spline needs to be recalculated frequently.
Let's say the vertices are in an array like [v(0), v(1), v(2),..., v(n)] where each v(i) is something like (x, y). By iterating over the vertices starting at v(1) and ending at v(n-1), we can compare a point with its neighbors in order to tell whether or not to discard it. Note that we ignore v(0) and v(n) for two reasons: (I assume) we don't want to remove our endpoints, and also v(0) and v(n) are missing a neighbor that we would need in order to set up our calculation. I can think of a couple possibilities here that might warrant examination, but one in particular seems (in my head) to be the best answer...
Consider the case where we're deciding whether or not to remove v(i) from the vertex array. We could examine the Cartesian distance between v(i) and its neighbors, and remove the point if both distances are below some threshold value T. For example, if v(i-1) = (x1, y1), v(i) = (x2, y2) and v(i+1) = (x3, y3), then we evaluate sqrt((x2-x1)^2 + (y2-y1)^2) < T && sqrt((x3-x2)^2 + (y3-y2)^2) < T, removing v(i) if the evaluation returns true.
In 3+ dimensions, this would become more complicated - the calculation would be similar, but you would require a method of determining a point's neighbors since they might not lie directly next to the examined point in the vertex array.
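A literal transcription of that rule in Python (points as coordinate tuples, endpoints always kept):

```python
import math

def thin(vertices, T):
    if len(vertices) <= 2:
        return list(vertices)
    kept = [vertices[0]]
    for prev, cur, nxt in zip(vertices, vertices[1:], vertices[2:]):
        # drop cur only if it is within T of BOTH of its original neighbours
        if not (math.dist(prev, cur) < T and math.dist(cur, nxt) < T):
            kept.append(cur)
    kept.append(vertices[-1])
    return kept
```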

How can I pick a set of vertices to subtract from a polygon such that the distortion is minimum?

I'm working with a really slow renderer, and I need to approximate polygons so that they look almost the same when confined to a screen area containing very few pixels. That is, I'd need an algorithm to go through a polygon and subtract/move a bunch of vertices until the resulting polygon has a good combination of shape preservation and economy of vertex usage.
I don't know if there's a formal name for this kind of problem, but if anyone knows what it is, it would help me get started with my research.
My untested plan is to remove the vertices that change the polygon area the least, and protect the vertices that touch the bounding box from removal, until the difference in area from the original polygon to the proposed approximate one exceeds a tolerance I specify.
This would all be done only once, not in real time.
Any other ideas?
Thanks!
You're thinking about the problem in a slightly off way. If your goal is to reduce the number of vertices with a minimum of distortion, you should be defining your distortion in terms of those same vertices, which define the shape. There's a very simple solution here, which I believe would solve your problem:
Calculate distance between adjacent vertices
Choose a tolerance between vertices, below which the vertices are resolved into a single vertex
Replace all pairs of vertices with distances lower than your cutoff with a single vertex halfway between the two.
Repeat until no vertices are removed.
Since your area is ultimately decided by the vertex placement, this method preserves shape and minimizes shape distortion. The one drawback is that distance between vertices might be slightly less intuitive than polygon area, but the two are proportional. If you really wish, you could run through the change in area that would result from vertex removal, but that's a lot more work for questionable benefit imo.
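A sketch of that loop in Python (my own helper name; the polygon is a list of (x, y) tuples, treated as closed):

```python
import math

def merge_close_vertices(poly, tol):
    changed = True
    while changed and len(poly) > 3:
        changed = False
        for i in range(len(poly)):
            j = (i + 1) % len(poly)
            a, b = poly[i], poly[j]
            if math.dist(a, b) < tol:
                mid = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
                # replace the pair with its midpoint and rescan
                poly = [p for k, p in enumerate(poly) if k not in (i, j)]
                poly.insert(min(i, j), mid)
                changed = True
                break
    return poly
```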
As mentioned by Angus, if you want a direct solution for the change in area, it's not actually super difficult. I was originally going to leave this as an exercise to the reader, but it's totally possible to solve this exactly, though you need to include the vertices on either side.
1. Assume you're looking at a window of vertices [A, B, C, D] that are connected in that order. In this example we're determining the "cost" of combining B and C.
2. Calculate the angle offset from collinearity from A toward C. Basically you just want to see how far from collinear the two points are. This is |sin(|arctan(B - A)| - |arctan(C - A)|)|, where the pipes are absolute value and the differences are the usual notion of difference.
3. Calculate the total distance over which the angle change will effectively be applied; this is just the euclidean distance from A to B times the euclidean distance from B to C.
4. Multiply the terms from 2 and 3 to get your first term.
5. To get your second term, repeat steps 2 - 4, replacing A with D, B with C, and C with B (just going in the opposite direction).
6. Calculate the geometric mean of the two terms obtained.
The number that results from step 6 represents the full picture minus a couple of constants.
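If it helps, here is a literal transcription of steps 1-6 in Python, under the assumption that "arctan(B - A)" means the direction angle (atan2) of the vector B - A; I haven't verified the formula beyond transcribing it:

```python
import math

def direction_angle(p, q):
    # direction angle of the vector q - p
    return math.atan2(q[1] - p[1], q[0] - p[0])

def merge_cost(A, B, C, D):
    # steps 2-4: offset from collinearity seen from A, weighted by the distances
    term1 = (abs(math.sin(abs(direction_angle(A, B)) - abs(direction_angle(A, C))))
             * math.dist(A, B) * math.dist(B, C))
    # step 5: the same thing walked from the other side (A -> D, B -> C, C -> B)
    term2 = (abs(math.sin(abs(direction_angle(D, C)) - abs(direction_angle(D, B))))
             * math.dist(D, C) * math.dist(C, B))
    # step 6: geometric mean of the two terms
    return math.sqrt(term1 * term2)
```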
I tried my own plan first: protect the vertices touching the bounding box, then remove the rest in the order that changes the resulting area the least, until you can't find a vertex to remove that keeps the new polygon area within X% of the original one. This is the result with X = 5%:
When the user zooms out really far these shapes fit the bill well enough for me. I haven't tried any of the other suggestions. The savings are quite astonishing, sometimes from 80-100 vertices down to 4 or 5.

Linear programming algorithm

Consider the following algorithm for linear programming, minimizing [c,x] with A.x <= b.
(1) Start with a feasible point x_0
(2) Given a feasible point x_k, find the greatest alpha such that x_k - alpha.c is admissible (straightforward: a ratio test of the components of b - A.x_k against those of A.c)
(3) Take the unit normal vector n to the hyperplane we just reached, pointing inwards. Project n onto the plane [c,.] = const, giving r = n - [n,c]/[c,c].c, then look for the greatest beta for which x_k - alpha.c + beta.r is admissible. Set x_{k+1} = x_k - alpha.c + 1/2*beta.r
(4) If x_{k+1} is close enough to x_k (within tolerance), return it; otherwise go to (2)
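Here is roughly what I mean, as a numpy sketch (not tuned or robust; it assumes a strictly feasible x_0 and a bounded problem, and the function names are mine):

```python
import numpy as np

def lp_heuristic(c, A, b, x0, tol=1e-9, max_iter=1000):
    c, A, b = np.asarray(c, float), np.asarray(A, float), np.asarray(b, float)
    x = np.asarray(x0, float)

    def max_step(x, d):
        # largest t >= 0 with A @ (x + t * d) <= b (ratio test), plus the blocking row
        Ad, slack = A @ d, b - A @ x
        rows = np.flatnonzero(Ad > 1e-12)
        if rows.size == 0:
            return np.inf, -1
        ratios = slack[rows] / Ad[rows]
        k = np.argmin(ratios)
        return ratios[k], rows[k]

    for _ in range(max_iter):
        # (2) walk along -c until a constraint becomes active
        alpha, i = max_step(x, -c)
        if not np.isfinite(alpha):
            raise ValueError("unbounded along -c")
        y = x - alpha * c
        # (3) inward unit normal of the hit hyperplane, projected onto [c,.] = const
        n = -A[i] / np.linalg.norm(A[i])
        r = n - (n @ c) / (c @ c) * c
        if np.linalg.norm(r) > 1e-12:
            beta, _ = max_step(y, r)
            if np.isfinite(beta):
                y = y + 0.5 * beta * r
        # (4) stop when the iterates stall
        if np.linalg.norm(y - x) < tol:
            return y
        x = y
    return x
```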
The basic idea is to follow the gradient until one hits a wall. Then, rather than following the shell of the simplex, like the simplex algorithm would do, the solution is kicked back inside the simplex, on a plane where the solutions are no worse, in the direction of the normal vector. The solution moves halfway between the starting point and the next constraint in this direction. It's no worse than before, but now it's more "inside" the simplex, where it has a shot at taking long leaps towards the optimum.
Though the probability of hitting an intersection of more than one hyperplane is 0, if one gets close enough to multiple hyperplanes within a certain tolerance, the average of the normals may be taken.
This can be generalized to any convex objective function by following geodesics on the levels of the function. For quadratic programming in particular, one rotates the solution towards the inside of the simplex.
Questions:
Does this algorithm have a name or fall within a category of linear-programming algorithms?
Does it have an obvious flaw that I'm overlooking?
I am pretty sure this doesn't work, unless I'm missing something: your algorithm will not start moving in most cases.
Assume your variable x is taken in R^n.
A polyhedron of the form Ax <= b is contained in a 'maximal' affine subspace P of dimension p <= n, and usually p is much smaller than n (you will have a lot of equality constraints, which can be implicit or explicit: you cannot assume that the expression of P is simple to obtain from A and b).
Now assume you can find an initial point x_0 (which is far from obvious, by the way); there is very little chance that the direction of the gradient c is a feasible direction. You would need to consider the projection of the direction c on P, and this is very difficult to do in practice (how would you compute such a projection?).
Then, what you want in your step (3) is not the normal direction of the hyperplane you reached, but again its projection on P (visualizing the polyhedron as a 2D polyhedron embedded in a 3D space can help).
There is a very good reason why barrier functions are used in interior point methods: it is very difficult to describe in practice the geometry of high-dimensional convex sets from the constraints (even simple ones like polyhedra), and things that "seem obvious" when you draw a picture on a piece of paper will usually not work when the dimension of the polyhedron increases.
One last point is that your algorithm would not give the exact solution, whereas the simplex method does in theory (and I read somewhere that it can be done in practice by working with exact rational numbers).
Read up on interior point methods: http://en.wikipedia.org/wiki/Interior_point_method
This approach can have nice theoretical properties, but algorithm performance tends to tail off in practice.
