Please help me clear up this question I have about the volume ray casting algorithm:
In the Wikipedia article (link), it says that "For each sampling point, a gradient of illumination values is computed. These represent the orientation of local surfaces within the volume."
My question is: Why a gradient of illumination values? Why not opacity values? Surely the transition from "stuff" to "no stuff" is more accurately described by changes in opacity.
Consider, for instance, two voxels: [1][2]. 1 is bright and transparent, and 2 is dark and opaque. In my mind this corresponds to a surface facing left. Am I missing something?
The gradient vector points along the direction of greatest increase. In this case, that means it points in the direction in which illumination increases most rapidly, and so in your [1][2] example the gradient at 2 indeed points left.
Conversely, if you took the gradient of the opacity you'd get a vector pointing in the direction of increasing opacity, i.e. "inwards". You could negate that to get a vector pointing outwards, but there's still the problem that opacity is local: it tells you only about the structure of the object in the immediate surroundings. When you're shading something, what you want to know is how well-lit a surface is as you travel from point to point. Knowing the surfaces of constant illumination allows you to deduce that, whereas from a surface of constant opacity you could only deduce how well-lit it would be if there were nothing between the surface and the light source.
The gradient is the normal vector to an iso-surface. You may consider a volumetric object as a stack of iso-surfaces (a lot of them ;o), each having an identical scalar field value over its surface. The transfer function sets the correspondence between scalar values and opacity/red/green/blue, so each iso-surface has an assigned transparency and color. Such a representation makes the role of the volumetric gradient in volume rendering simple to understand...
In classical ray tracing, you calculate a ray from your eye through a pixel on a screen in front of you and out into the world to see what it hits. Once you determine the intersection point, you need to determine the lighting characteristics at that point. The lighting characteristics depend on the surface normal. When you think about it, the surface normal is a property that's determined not just by the intersection point itself but by the surrounding surface.
This same principle applies to ray casting. Once you determine that the ray does hit some voxel, you need to analyze surrounding voxels to effectively calculate a "surface normal" that you can use for the lighting calculations.
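To make that concrete, here's a minimal sketch of the usual central-difference gradient estimate (JavaScript; sample(x, y, z) is a hypothetical accessor returning the scalar value at a voxel):

// Estimate the gradient at a voxel by central differences on the sampled
// field, then normalize it to get a unit "surface normal" for lighting.
function gradientAt(sample, x, y, z) {
  const g = [
    (sample(x + 1, y, z) - sample(x - 1, y, z)) / 2,
    (sample(x, y + 1, z) - sample(x, y - 1, z)) / 2,
    (sample(x, y, z + 1) - sample(x, y, z - 1)) / 2,
  ];
  const len = Math.hypot(g[0], g[1], g[2]) || 1; // avoid dividing by zero
  return g.map(c => c / len);
}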
I'd like to add "flying" (3D) arcs to my orthographic projection, as shown here, but with clipping instead of the fade effect. This seems difficult since the arcs are created independently of the projection. (Each arc is defined by three points obtained from the projection: the start, the end, and the great-circle midpoint extended along a line from the center of the canvas. The arc itself, though, is drawn using "2D" cardinal interpolation on the corresponding points on the SVG canvas.)
My first thought was that I might need to do some spherical geometry to get the coordinates where the clipping happens, but now I'm wondering if there's a more straightforward way to accomplish this (I'm new to D3).
This is what my map looks like without clipping:
I'm also very green to d3, but fortunately I'm also fresh from my own search for a decent solution for clipping flight lines in orthographic. The demo you link to is clever in more ways than one:
- The arc is drawn from three points interpolated with a Catmull-Rom curve in the projected 2D coordinates, which happens to visually approximate a true circular curve in 3D nicely
- The line fades with the proximity of either of its points to the clipping plane, as you've pointed out
Drawing the spline in the projected, 2D coordinates eliminates any option to split the line before projection and get visual smoothness for cheap, even if d3 had the functionality natively (which I haven't been able to find anyways). That means that interpolation will have to be a lot more manual.
My first thought was that I might need to do some spherical geometry to get the coordinates where the clipping happens, but now I'm wondering if there's a more straightforward way to accomplish this
I eventually settled on what I consider the most obvious option, which unfortunately is precisely what you're aiming to avoid:
1. Obtain the coordinates of the clipping point from the cross product of the current globe center with the normal to the arc's great-circle plane. Given your origin Po and destination Pd and the globe center C, you're looking for C x (Po x Pd), normalized
2. Interpolate coordinates between your origin and destination using something like d3.geoInterpolate
3. Project each interpolated point at the right scale (read: elevation) above the ground for that fraction of the flight line
4. Draw one [smoothed] line from the origin to the clipping point along the interpolated points in between, and another from the clipping point to the destination, moving one of them to the background accordingly. Watch out for the cases where the whole line is in front of or behind the clipping plane.
To figure out where in the flight path you need to splice in your clipped point, you will probably need to compare the great-circle angle from one end to the clipping point against the end-to-end angle. Note also that performance takes a hit, but you may still be satisfied with the number of flight lines you've drawn in your example.
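For step 1 in code, a hedged sketch (the toCart, toLonLat and cross helpers are my own; only d3.geoInterpolate and d3.range come from d3):

// Convert between [lon, lat] in degrees and unit cartesian vectors
const toCart = ([lon, lat]) => {
  const l = lon * Math.PI / 180, p = lat * Math.PI / 180;
  return [Math.cos(p) * Math.cos(l), Math.cos(p) * Math.sin(l), Math.sin(p)];
};
const toLonLat = ([x, y, z]) => [
  Math.atan2(y, x) * 180 / Math.PI,
  Math.asin(z / Math.hypot(x, y, z)) * 180 / Math.PI, // normalizes implicitly
];
const cross = (a, b) => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// C is the point at the centre of the orthographic view, i.e. the inverse
// of the projection's rotation. The clipping point is C x (Po x Pd).
function clipPoint(projection, Po, Pd) {
  const [lambda, phi] = projection.rotate();
  const C = toCart([-lambda, -phi]);
  return toLonLat(cross(C, cross(toCart(Po), toCart(Pd))));
}

// Step 2: sample the great-circle path between your Po and Pd, e.g. 50 segments
const interp = d3.geoInterpolate(Po, Pd);
const pathPoints = d3.range(0, 1.01, 0.02).map(interp);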
I am trying to draw an area graph with a gradient. This is what I have right now.
If you look at the red-green graph, you will notice that the gradient does not look the way it's supposed to.
EDIT: The gradient should be uniform like this:
I am using OpenGL ES 2.0 and GLKit to draw a bunch of charts. The chart is drawn using GL_TRIANGLES. I understand that the issue is that the gradient is being drawn for each triangle individually.
The only approach I can think of is to use a stencil buffer: I would draw the gradient in a big rectangle and clip it to this shape using the stencil. Is there a better way to do this? If not, could you help me draw a stencil from specified points? I am new to OpenGL and haven't found a good explanation of using the stencil buffer.
You don't need a stencil buffer. I don't think more triangles will help, either — more likely that'd just cause you more confusion because you'd be assigning per-vertex colors to intermediate vertices and having to interpolate them yourself.
Your gradients are coming out that way because of how and where you assign vertex colors for interpolation. Notice the difference in colors between your output and the example of what you're looking for:
You've got 100% red at every vertex along the top edge of your graph, and 100% green at every vertex along the bottom edge. OpenGL interpolates colors linearly across the face of each triangle, which is why you've got more red in the shorter parts of your graph.
In the output you're looking for, the top of the graph starts out less red in the shorter parts, so that it makes the transition to white over a shorter distance.
There are a few different ways to do this, but probably the easiest (for your plan of using GLKBaseEffect instead of writing your own shaders) might be to use a 1D texture for your gradient, and assign a texture coordinate to each vertex that's proportional to its Y coordinate on the graph, like so:
(The example coordinates in my diagram assume your graph vertices cover the range 0.0 to 1.0, but the point stands regardless: the vertical texture coordinate for each point should be a fraction of the graph's total height, between 0.0 and 1.0.)
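If it helps, here's a minimal sketch of that mapping, written in JavaScript for brevity (your GLKit code would do the same arithmetic when filling its vertex buffer):

// Assign each graph vertex a 1D texture coordinate equal to its height
// fraction; sampling a thin gradient texture with these coordinates gives
// a uniform vertical gradient regardless of how the triangles are laid out.
function gradientTexCoords(positions, minY, maxY) {
  // positions: flat [x0, y0, x1, y1, ...] array of graph vertices
  const coords = new Float32Array(positions.length / 2);
  for (let i = 0; i < coords.length; i++) {
    const y = positions[i * 2 + 1];
    coords[i] = (y - minY) / (maxY - minY); // 0.0 at bottom, 1.0 at top
  }
  return coords;
}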
Alternatively, you could look into drawing in two passes: First, draw the shape of your graph, then draw a quad (two triangles) covering the entire screen with your gradient, using the appropriate glBlendFunc so that it only draws over the area you've filled in with your graph shape.
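For reference, the blend setup for that second pass could look like this (sketched in WebGL terms, though GL ES 2.0 has the same constants; it assumes the framebuffer has an alpha channel cleared to 0 and the graph shape was drawn with alpha 1):

// Pass 2: the gradient quad shows only where the shape wrote alpha:
// result = gradient * dstAlpha + background * (1 - dstAlpha)
gl.enable(gl.BLEND);
gl.blendFunc(gl.DST_ALPHA, gl.ONE_MINUS_DST_ALPHA);
// ...then draw the full-screen gradient quad...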
OpenGL ES can do what you want but you need to increase the tessellation of your model. In other words, instead of using just a few large triangles, you need more and smaller triangles, with the vertex color changes spread over them evenly. This will give you better control over the gradients. Triangles are cheap on accelerated OpenGL ES, so even if you increase the number 100 times, it will not have much impact on performance.
You might also consider a different approach, where the entire graph is covered by a single texture which contains the gradient. That would be easier to implement.
I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and webGL, but I'm having trouble finding any information on being able to animate a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
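Here is a minimal sketch of those three pieces using the modern BufferGeometry API (the linked demos use the older API of their time; the names endPosition and uMix are my own):

// Each vertex gets an "endPosition" attribute: where it should land when
// the sphere has flattened out, here (x, y, z) -> (x, 0, z).
const geometry = new THREE.SphereGeometry(1, 64, 32);
const pos = geometry.attributes.position;
const end = new Float32Array(pos.count * 3);
for (let i = 0; i < pos.count; i++) {
  end[i * 3 + 0] = pos.getX(i);
  end[i * 3 + 1] = 0.0;
  end[i * 3 + 2] = pos.getZ(i);
}
geometry.setAttribute('endPosition', new THREE.BufferAttribute(end, 3));

// The uniform uMix carries the animation progress (0 to 1) into the shader,
// where mix() does the linear interpolation per vertex.
const material = new THREE.ShaderMaterial({
  uniforms: { uMix: { value: 0.0 } },
  vertexShader: `
    attribute vec3 endPosition;
    uniform float uMix;
    void main() {
      vec3 p = mix(position, endPosition, uMix);
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }`,
  fragmentShader: `void main() { gl_FragColor = vec4(1.0); }`,
});

// In the render loop: material.uniforms.uMix.value = Math.min(elapsed / duration, 1);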
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that after the animation they all sit on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animating the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way you'd have the sphere shape at the start and the plane shape at the end.
Hope this helps; tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // tell Three.js to re-upload the changed vertices
// myGlobe is an instance of THREE.Mesh; something_calculated is a THREE.Vector3
// that you can calculate in some manner (sphere-plane interpolation over time)
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation; for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way flat will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since they determine where on the map the vertex goes, and hence imply its position) and a uniform variable containing a time value.
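As a rough sketch (I'll use Three.js-style GLSL in a JS string, where uv, projectionMatrix and modelViewMatrix are provided for you; the equirectangular mapping and the plane's 2:1 size are my assumptions):

// Vertex shader: derive both shapes from the texture coordinates alone and
// blend them with a time uniform (0 = sphere, 1 = flat).
const vertexShader = `
  uniform float uTime;
  varying vec2 vUv;
  void main() {
    float lon = (uv.x - 0.5) * 2.0 * 3.14159265;
    float lat = (uv.y - 0.5) * 3.14159265;
    vec3 spherePos = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));
    vec3 flatPos = vec3((uv.x - 0.5) * 4.0, (uv.y - 0.5) * 2.0, 0.0); // 2:1 map
    vUv = uv; // pass the map coordinates on to the fragment shader
    gl_Position = projectionMatrix * modelViewMatrix
                * vec4(mix(spherePos, flatPos, clamp(uTime, 0.0, 1.0)), 1.0);
  }`;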
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
I am trying to create a terrain solution in Three.js and I'm running into some trouble generating the normals. I am approaching the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created I go through each one and set the UVs so that each tile represents a part of the whole. I also generate a height value for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call this problem. Each of the normals on the plane's corners points in the same direction as the face when created. This makes the terrain look like a flat-shaded object. To prevent this, I thought I needed to make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile would have the same normal as all the immediate normals around it from the adjacent planes.
Figure A
Here I am visualising each of the four normals on the mesh. You can see that at each corner the normals are the same (on top of each other).
Figure B
EDIT: Figure C
EDIT: Figure D
Except even when the verts all share the same normals it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.
You're basically right about what should happen. The shading you're getting is not consistent with continuous normals: if all the vertex-face normals at a given location are the same, you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quad meshes don't have a real 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal-averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same; if that were the case you'd get consistent shading across each quad, even though adjacent quads would not be continuous. Is it possible your original vertex-face normals are not in fact lined up with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is being done in your shader you may have incorrect binormals/bitangents rather than bad vert normals.
Instead of averaging the neighborhood normals at each vertex/corner, you should average the four normals that each vertex has (four tiles meet at each vertex).
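Something like this (a sketch using the legacy Three.js Geometry API, to match the computeFaceNormals/computeVertexNormals calls above; keying corners by rounded world position is my own convention):

// Collect every per-face vertex normal that sits at the same world
// position, average them, and write the shared average back.
function smoothSharedNormals(tiles) {
  const groups = new Map(); // "x,y,z" key -> normals meeting at that corner
  for (const tile of tiles) {
    tile.updateMatrixWorld();
    tile.geometry.faces.forEach(face => {
      [face.a, face.b, face.c].forEach((vi, i) => {
        // Key on world position so corners of adjacent tiles coincide
        const w = tile.geometry.vertices[vi].clone()
          .applyMatrix4(tile.matrixWorld);
        const key = `${w.x.toFixed(3)},${w.y.toFixed(3)},${w.z.toFixed(3)}`;
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key).push(face.vertexNormals[i]);
      });
    });
    tile.geometry.normalsNeedUpdate = true;
  }
  for (const normals of groups.values()) {
    const avg = new THREE.Vector3();
    normals.forEach(n => avg.add(n));
    avg.normalize();                   // average direction, unit length
    normals.forEach(n => n.copy(avg)); // share it across all tiles
  }
}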
Starting with a 3D mesh, how would you give a rounded appearance to the edges and corners between the polygons of that mesh?
Without wishing to discourage other approaches, here's how I'm currently approaching the problem:
Given the mesh for a regular polyhedron, I can give the mesh's edges a rounded appearance by scaling each polygon along its plane and connecting the edges using cylinder segments such that each cylinder is tangent to each polygon where it meets that polygon.
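Concretely, the scaling step looks something like this (Three.js-flavored JavaScript; the scale factor s is a placeholder: for a cube of half-extent h and fillet radius r it would be (h - r) / h):

// Shrink a planar face about its centroid to make room for the fillet
// cylinders along its edges.
function insetFace(faceVertices, s) {
  const centroid = new THREE.Vector3();
  faceVertices.forEach(v => centroid.add(v));
  centroid.divideScalar(faceVertices.length);
  return faceVertices.map(v =>
    v.clone().sub(centroid).multiplyScalar(s).add(centroid)
  );
}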
Here's an example involving a cube:
Here's the cube after scaling its polygons:
Here's the cube after connecting the polygons' edges using cylinders:
What I'm having trouble with is figuring out how to deal with the corners between polygons, especially in cases where more than three edges meet at each corner. I'd also like an algorithm that works for all closed polyhedra instead of just those that are regular.
I post this as an answer because I can't put images into comments.
Saddle point
Here's an image of two brothers camping:
They placed their simple tents right beside each other in the middle of a steep valley (that's one bad place for tents, but that's not the point), so one end of each tent points upwards. At the point where the four squares meet you have a saddle point. The two edges on top of each tent can be rounded normally, as can the two downward edges. But at the saddle point you have different curvature in the two directions, so it's not possible to use a sphere there. This rules out Svante's solution.
Self-intersection
The following image shows some 3D polygons viewed from the side. It's some sharp thing with a hole drilled into it from the other side. The left image shows it before rounding, the right one after.
The mass that gets removed from the sharp edge contains the end of the drill hole.
There is something else to see here. The drill hole's sides might be very large polygons (let's say it's not a hole but a slit); still, you only get small radii at the top. You can't just scale your polygons; you have to take the neighboring polygons into account.
Convexity
You say you're only removing mass, but this is only true if your geometry is convex. Look at the image you posted, and now assume that the viewer is inside the volume: the radii turn away from you and therefore add mass.
NURBS
I'm not a NURBS specialist myself, but the constraints would look something like this:
The corners of the NURBS patch must be at the same positions as the corners of the scaled-down polygons. The normal vectors of the NURBS surface at the corners must be equal to the normal of the polygon. This should be sufficient to guarantee that the NURBS edge will be a straight line following the polygon edge. The normals also ensure that no visible edges result at the border between polygon and NURBS patch.
I'd just do the math myself. NURBS are just polynomials: you'll have some unknown coefficients and your constraints, which gives you a system of equations (often linear) that you can solve.
Is there any upper bound on the number of faces that meet at that corner?
You might employ concepts from CAGD; especially Non-Uniform Rational B-Splines (NURBS) might be of interest to you.
Your current approach of gluing together fixed geometric primitives might be too inflexible to solve the problem. NURBS require some mathematical work to get used to, but might be more suitable for your needs.
Extrapolating your cylinder-edge approach, the corners should be spheres (or sphere segments) with the same radius as the cylinders meeting there, centered at the intersection of the cylinders' axes.
Here is a single C++ header for generating triangulated rounded 3D boxes. The code is in C++ but is easy to port to other languages. It is also easy to modify for other primitives such as quads.
https://github.com/nepluno/RoundCornerBox
As @Raymond suggests, I also think the nepluno repo provides a very good implementation for solving this issue; efficient and simple.
To complement his answer, I wrote a solution to this issue in JS, based on the BabylonJS 3D engine. The solution can be found here, and can quite easily be adapted to another 3D engine:
https://playground.babylonjs.com/#AY7B23