Looking for some pointers for voxelisation strategies - algorithm

To voxelise a mesh basically means being able to determine, for a point (x, y, z), whether it is inside or outside the mesh.
A mesh here is just a raw set of triangles. Being outside the mesh means that there is a ray from the point (in some direction) that does not intersect the mesh.
For a well-behaved, closed, non-intersecting mesh that is easy: trace a ray in any direction; if the number of intersections is odd, the point is inside.
But for a "bad" mesh composed of open parts this breaks down. For example, the mesh could be two cubes connected by an open cylinder that sticks into both of them.
Z-buffer rendering solves this fine for one viewpoint, but the issue is handling any viewpoint. To me the problem is well defined, but not obvious to solve.
I am looking for some pointers on how to approach this problem. There must be tons of research for it. Any links to papers? Or am I missing something in how to think about the problem?

It's possible if all of the surfaces on your meshes have a "sidedness" to them, i.e. they have a front side and a back side.
Then to determine if a point is inside the mesh you can trace a ray from the point in any direction, and keep a count of intersections like this:
if the ray goes through the back side and out the front side (i.e. emerging from the inside to the outside), then add one.
if the ray goes through the front side and out the back side (i.e. entering the inside from the outside), then subtract one.
if the mesh surface is double sided, do not add or subtract anything.
If the final count is positive, or if the point lies exactly on any surface, then the point is inside the mesh.
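A minimal sketch of that signed count in plain JavaScript (illustrative only: rayTriangleIntersect, e.g. Möller-Trumbore returning the hit distance or null, and dot are assumed helpers, and each triangle is assumed to carry an outward normal and a doubleSided flag):

function isInside(point, dir, triangles) {
  let count = 0;
  for (const tri of triangles) {
    const t = rayTriangleIntersect(point, dir, tri); // assumed helper
    if (t === null) continue;
    if (t === 0) return true;            // the point lies exactly on a surface
    if (tri.doubleSided) continue;       // double-sided faces do not change the count
    const facing = dot(tri.normal, dir); // > 0: the ray goes back side in, front side out
    if (facing > 0) count += 1;          // emerging from the inside to the outside
    else if (facing < 0) count -= 1;     // entering the inside from the outside
  }
  return count > 0;
}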
You need to add the restriction that it's never possible to see an exposed back side of a surface from outside the mesh. This is equivalent to saying that the mesh should always render correctly from all exterior viewpoints with back-face culling turned on.
For the example with the cube and open cylinder, for the cube to be solid, its surfaces should be single sided (making them double-sided would mean defining a hollow cube with infinitely thin walls).
The mesh surface of the cylinder would also have to be single-sided, or else it too would have infinitely thin walls, and points inside the cylinder (that aren't inside the cubes) would not be inside the mesh. As long as the ends of the cylinder are stuck inside the cubes, the restriction is not broken, since you can never see an exposed back side of a face.
If one of the cubes is removed, then the restriction is not met and this algorithm will fail.

Related

Threejs - can you use circleBufferGeometry with Points material?

I am setting up a particle system in threejs by adapting the buffer geometry drawcalls example in threejs. I want to create a series of points, but I want them to be round.
The documentation for three.js Points says it accepts a geometry or buffer geometry, but I also noticed there is a CircleBufferGeometry. Can I use this?
Or is there another way to make the points round besides using sprites? I'm not sure, but it seems like loading an image for each particle would cause a lot of unnecessary overhead.
So, in short, is there a more performant or simpler way to make a particle system of round particles (spheres or discs) in three.js without sprites?
If you want to draw each "point"/"particle" as a geometric circle, you can use THREE.InstancedBufferGeometry or take a look at this
The geometry of a Points object defines where the points exist in 3D space. It does not define the shape of the points. Points are also drawn as quads, so they're always going to be a square, though they don't have to appear that way.
Your first option is to (as you pointed out) load a texture for each point. I don't really see how this would introduce "a lot" of overhead, because the texture would only be loaded once, and would be applied to all points. But, I'm sure you have your reasons.
Your other option is to create your own shader to draw each point as a circle. This method keeps the point's square quad and discards any fragments (the per-pixel samples produced during rasterization) that fall outside the circle.
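A minimal sketch of that shader approach in three.js (the geometry and scene variables are assumed to exist; the point size and color are placeholders):

const material = new THREE.ShaderMaterial({
  transparent: true,
  vertexShader: `
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      gl_PointSize = 10.0; // fixed pixel size, just for the sketch
    }`,
  fragmentShader: `
    void main() {
      vec2 uv = gl_PointCoord - 0.5;  // gl_PointCoord runs 0..1 across the point quad
      if (length(uv) > 0.5) discard;  // throw away the square's corners
      gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
    }`
});
const points = new THREE.Points(geometry, material);
scene.add(points);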

Can points or meshes be drawn at infinite distance?

I'm interested in drawing a stardome in THREE.js using either mesh points or a particle system.
I don't want the camera to be able to move any closer to any part of the stardome, since the stars are effectively at infinite distance.
I can think of a couple of ways to do this:
A very large mesh (or very large point/particle distances)
Camera and stardome have their movement exactly linked.
Is there any way to specify that a mesh, point, or particle system is automatically rendered at infinite distance, so it is always drawn behind any foreground objects?
I haven't used three.js, but my guess is no. OpenGL cameras need a "near clipping plane" and a "far clipping plane", which effectively denote the minimum and maximum distances at which things get rendered. If you've played video games where you move too close to a wall and start to see through it, or see things in the distance suddenly vanish as you move away, those were probably the clipping planes at work.
The workaround is usually one of two approaches:
1) Set the far clipping plane distance as high as it'll let you go. I don't know what data type three.js would use for this, but my guess is a 32-bit float.
2) Render it in "layers". Render all the stars first before anything else in the scene.
Option 2 is the one I usually use.
Even if you used option 1, you would still want to synchronize the position of the camera and the skybox.
If you do not depth cull, draw the skybox first and match its position, but not its rotation, to the camera's.
Also disable lighting on the skybox; instead, bake the ambience directly into its texture.
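A hedged three.js sketch of option 2 (renderer, camera, a skyScene containing the stardome, and a mainScene are assumed to be set up elsewhere):

renderer.autoClear = false;
const skyCamera = camera.clone(); // same projection, but we only ever copy rotation

function render() {
  renderer.clear();                             // clear color and depth
  skyCamera.quaternion.copy(camera.quaternion); // follow the view direction, never the position
  renderer.render(skyScene, skyCamera);         // stardome drawn first
  renderer.clearDepth();                        // so it can no longer occlude anything
  renderer.render(mainScene, camera);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);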
You don't want things infinitely far away; you just want them not to move with respect to the viewer and not to appear in front of things. The best way to do that is to prevent the viewer from getting closer to them, which produces the illusion of the object being far away. The second thing is to modify your depth culling so that the skybox is always considered further away than whatever you are currently drawing.
If you create a very large mesh object, you'll have to set your camera's far plane large enough to include the mesh which means you'll end up drawing things that you really do want to cull.

three.js - Overlapping layers flickering

When several objects overlap on the same plane, they start to flicker. How do I tell the renderer to put one of the objects in front?
I tried to use .renderDepth, but it only works partly -
see example here: http://liveweave.com/ahTdFQ
Both boxes have the same size and it works as intended. I can change which of the boxes is visible by setting .renderDepth. But if one of the boxes is a bit smaller (say 40,50,50) the contacting layers are flickering and the render depth doesn't work anymore.
How do I fix this issue?
When .renderDepth doesn't work, you have to set the depths yourself.
Moving whole meshes around is indeed not really efficient.
What you are looking for are offsets bound to materials:
material.polygonOffset = true;
material.polygonOffsetFactor = -0.1;
should solve your issue. See update here: http://liveweave.com/syC0L4
Use negative factors to display and positive factors to hide.
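For context, a hedged sketch of the same idea with two deliberately coplanar planes (modern PlaneGeometry/MeshBasicMaterial names assumed; the factor/units values are illustrative and may need tuning per device):

const base = new THREE.Mesh(
  new THREE.PlaneGeometry(50, 50),
  new THREE.MeshBasicMaterial({ color: 0x808080 })
);
const decalMaterial = new THREE.MeshBasicMaterial({ color: 0xff0000 });
decalMaterial.polygonOffset = true;
decalMaterial.polygonOffsetFactor = -1; // negative: pushed toward the camera, so it displays
decalMaterial.polygonOffsetUnits = -1;
const decal = new THREE.Mesh(new THREE.PlaneGeometry(40, 40), decalMaterial);
decal.position.copy(base.position);     // exactly coplanar with the base plane
scene.add(base, decal);                 // scene is assumed to exist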
For starters, try reducing the far range on your camera; try 1000. Generally speaking, you shouldn't have overlapping faces in your 3D scene unless they are treated in a VERY specific way (look up the term 'decal textures'/'decals'). So basically, you have to create depth offsets, and perhaps even pre-sort the objects when doing this, which all requires pretty low-level tinkering.
If the far range reduction helps, then you're experiencing a lack of precision (depending on the device). Also look up 'z-fighting'.
UPDATE
Don't overlap planes.
How do I tell the renderer to put one of the objects in front?
You put one object in front of the other :)
For example, if you have a camera at 0,0,0 looking at an object at 0,0,10 and you want another object to be behind the first object, put it at 0,0,11 and it should work.
UPDATE2
What is z-buffering:
http://en.wikipedia.org/wiki/Z-buffering
http://msdn.microsoft.com/en-us/library/bb976071.aspx
Take note of "floating point in range of 0.0 - 1.0".
What is z-fighting:
http://en.wikipedia.org/wiki/Z-fighting
...have similar values in the z-buffer. It is particularly prevalent with coplanar polygons, where two faces occupy essentially the same space, with neither in front. Affected pixels are rendered with fragments from one polygon or the other arbitrarily, in a manner determined by the precision of the z-buffer.
"The renderer cannot reposition anything."
I think this is completely untrue. The renderer can reposition everything, and probably does, unless it's something like Shadertoy or a video filter. Every time you move your camera the renderer repositions everything (the camera is actually the only thing that DOES NOT MOVE).
It seems that you are missing some crucial concepts here; I'd start with this:
http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/
About the depth offset mentioned:
Here is how this would work. Say you want to draw a decal on a surface: you can 'draw' another mesh on this surface - by, say, projecting a quad onto it. You want to draw a bullet hole over a concrete wall, and you end up with two coplanar surfaces - the wall and the bullet hole. You can figure out the depth buffer precision, find the smallest representable step, and then move the bullet-hole mesh by that amount towards the camera. The object does not get scaled (you're doing this in NDC, which you can visualize as a cube in which you move planes back and forth by the smallest possible increment), but it does translate in the depth direction, ending up in front of the other surface.
I don't see any flicker; the cube movement in 3D seems to be super smooth. Can you try on a different computer (maybe a faster one)? I used Chrome on a MacBook Pro.

Box backface culling

Assume I have a camera defined by its position and direction, and a box defined by its center and extents (three orthogonal vectors from the box center to the face centers). A face is visible when its outer surface is facing the camera and invisible when its inner surface is facing it.
It seems obvious that, depending on the box position and orientation, 1 to 3 faces of the box may be visible. Is there some clever way to determine which faces are visible? An obvious solution would be to compute, for each of the six faces, the dot product of the face normal with the vector from the face to the camera. Is there a better way?
Note: perspective projection will be used, but I do not think it matters; the property of "facing the camera" seems independent of the projection.
I believe the method you described is the normal way to do this. It's a very fast calculation, so you shouldn't worry too much about speed. This is the same method used to reduce the number of calculations in ray-triangle intersection algorithms: if the front of the face isn't visible, the method doesn't continue calculations for that face. See this paper for a C++ implementation of this algorithm; it's in the first half of the calculations. http://jgt.akpeters.com/papers/MollerTrumbore97/code.html
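A minimal sketch of that dot-product test in plain JavaScript (add, sub, scale, and dot are assumed vector helpers; extents are the three center-to-face vectors from the question):

// A face is visible when its outward normal points against the view vector,
// i.e. dot(normal, faceCenter - cameraPos) < 0.
function visibleFaces(cameraPos, boxCenter, extents) {
  const visible = [];
  extents.forEach((axis, i) => {
    for (const sign of [1, -1]) {
      const normal = scale(axis, sign);          // outward normal direction of this face
      const faceCenter = add(boxCenter, normal); // the extent vector reaches the face center
      if (dot(normal, sub(faceCenter, cameraPos)) < 0) {
        visible.push({ axis: i, sign });         // at most 3 faces end up here
      }
    }
  });
  return visible;
}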
The only cleverness is that if a face of the cube is visible, the opposing face definitely isn't. At least in a regular perspective projection.
Note that the opposite might not be true: if a face is invisible, the opposing face might be invisible too. This is because the type of projection does matter. Imagine the cube being really up close to the camera, which is looking straight at one face. Then rotate the cube slightly, and while with a parallel projection, another face would immediately become visible, in a perspective projection this doesn't happen.

Raytraced Shadows Problem

I've got a problem with shadow rays in my raytracer.
Please have a look at the following two pictures
3DS Max: http://neo.cycovery.com/shadow_problem.gif
My Raytracer: http://neo.cycovery.com/shadow_problem2.jpg
The scene is lit by a very bright light, shining from the back. It's so bright that there is no gradient in the shading, just either white or dark (due to the overexposure).
Both images were rendered using 3D Studio Max and both use the exact same geometry; only in one case the normals are interpolated across the triangles.
Now consider the red dot on the surface. In the unsmoothed version it lies in a dark area. This means that the light source is not visible from this triangle, since the triangle is facing away from it.
In the smoothed version, however, it lies in the lit area, because the interpolated normal suggests that the light would be visible at that point (although the actual geometry of the triangle is facing away from the light source).
My problem starts when raytraced shadows come in. If a shadow ray is shot into the scene from the red dot, to test whether the light source is visible or not (to determine shadowing), the shadow ray will return an intersection regardless of whether normals are interpolated (because intersections only depend on the geometry). Therefore the pixel would be shaded dark.
3ds Max handles this case correctly - the rendered image was generated with raytraced shadows turned on. However, my own raytracer runs exactly into this problem when I turn on raytraced shadows (in my raytracer, the point is dark in both cases, because the raytraced shadow test decides the point lies in shadow), and I don't know how to solve it.
I hope someone knows this problem and how to deal with it.
Thanks!
The 'correct' solutions are either to tessellate the triangles or to solve the equation of the surface the triangle belongs to. I have only seen the tessellation approach used. Tessellation gives you controllable precision, and so on...
Otherwise, you should test the normal at the point (which I believe 3D Studio does) and, if the normal is not facing the light, just mark the point as not lit. It has nothing to do with 'self-shading'. This problem can only be solved easily with tessellation. Good luck!
I'm not sure if I understood your problem correctly. It's kind of hard to tell which version/result is obtained by which method and which result you consider correct.
Isn't it the case that you need to treat the intersection of the shadow ray with the triangle on which the red point ;-) lies as a special case? For that triangle you don't do a geometry intersection, as with any other triangle, but only a direction check between the interpolated normal and the shadow ray.
Or, in a more general sense, you say that the shadow ray stops at a triangle (any triangle) when:
a) they intersect, and
b) the interpolated normal of the triangle at the intersection point has a direction opposite to the shadow ray.
How you can do it:
If the interpolated normal at the point is facing towards the light, then the surface is potentially lit; if it is facing away, you are in shadow.
In the first case, the two things that could cause a shadow are other objects, and the object itself if it is concave. The other-objects case is easy.
Now, in the case of the object itself: when you cast your ray at the light source, if you are at a truly 'convex inside point' you will hit yourself twice as the ray enters and then leaves the object, and you are thus in shadow.
If you hit yourself a single time, then you must be on the edge of where the light is striking, but since we know that at that point we are facing the light (via the smoothed normals), we are not shadowed.
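A rough sketch of that idea in plain JavaScript (rayTriangleIntersect and dot are assumed helpers, lightDir points from the hit point towards the light, and occlusion by other objects is tested separately):

function selfShadowed(hitPoint, interpNormal, lightDir, ownTriangles) {
  if (dot(interpNormal, lightDir) <= 0) return true; // interpolated normal faces away: in shadow
  let selfHits = 0;
  for (const tri of ownTriangles) {
    const t = rayTriangleIntersect(hitPoint, lightDir, tri);
    if (t !== null && t > 1e-4) selfHits++;          // tiny epsilon skips the originating face
  }
  // Entering and then leaving the object again means genuine self-shadowing;
  // a single grazing hit is just the silhouette edge and stays lit.
  return selfHits >= 2;
}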
Mat, I believe your problem could be that your shadow ray hits the same triangle that it originates from because it is so close to being tangential (note that it happens right on the border of the light-shadow transition). There are two ways of solving this problem: One approach is to use a "bias" that tests the distance from the ray origin, but I think a better solution is to store a reference to the originating triangle with the ray. If you do a test against this triangle, simply ignore it.
If you happen to be using a spatial index like a BVH, then you could try stepping out of the "box" containing the triangle before doing any intersection tests, but this is less simple than the previously mentioned approach and must be handled with more care.
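A hedged sketch of the 'remember the originating triangle' idea (plain JavaScript; the ray fields and the rayTriangleIntersect helper are placeholders):

// Tag each shadow ray with the triangle that spawned it, and skip that triangle
// during traversal so a near-tangential ray cannot re-hit its own face.
function occluded(shadowRay, triangles) {
  for (const tri of triangles) {
    if (tri === shadowRay.originTriangle) continue; // ignore the spawning triangle
    const t = rayTriangleIntersect(shadowRay.origin, shadowRay.dir, tri);
    if (t !== null && t > 0 && t < shadowRay.maxDist) return true;
  }
  return false;
}

// Usage sketch: spawn the ray at the hit point, pointing at the light.
// const shadowRay = { origin: hitPoint, dir: toLight, maxDist: lightDistance, originTriangle: hitTri };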
It's normal to shift an intersection by a small amount in the direction of the light source when firing shadow rays.
I.e. shadowRay.origin = intersectionPoint + shadowRay.dir * 0.0001
This avoids the shadow ray intersecting the same primitive as the primary / reflective / refractive ray did.
Perhaps your problem is because you did not do this?
I had the same issue, and in my case an if statement checking whether the dot product of the light direction and the normal is positive did the trick.
The pseudocode is:
bool shadow_hit(light, shader_ctx){
    ....
    if (dot(light.dir, shader_ctx.normal) > 0) return false;
    ...
    return true;
}
