Tiled Terrain Normals - three.js

I am trying to create a terrain solution in ThreeJS and I'm running into some trouble with the generation of the normals. I am approaching the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created I go through each one and set the UVs so that each tile represents a part of the whole. I also generate height values for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
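Roughly, my setup looks something like this (an illustrative sketch; heightAt() and the grid parameters stand in for my real code):

for (let ty = 0; ty < rows; ty++) {
  for (let tx = 0; tx < cols; tx++) {
    const geometry = new THREE.PlaneGeometry(tileSize, tileSize, segs, segs);
    geometry.rotateX(-Math.PI / 2); // lay the plane flat, Y up
    geometry.vertices.forEach(v => {
      v.y = heightAt(v.x + tx * tileSize, v.z + ty * tileSize); // the hills
    });
    // ...remap geometry.faceVertexUvs[0] so this tile covers its share of
    // the whole texture, then:
    geometry.computeFaceNormals();
    geometry.computeVertexNormals();
    const tile = new THREE.Mesh(geometry, material);
    tile.position.set(tx * tileSize, 0, ty * tileSize);
    scene.add(tile);
  }
}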
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call this problem. Each of the normals on the plane's corners points in the same direction as the face when created. This makes the terrain look like a flat-shaded object. To prevent this I thought perhaps what I needed to do was make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile has the same normal as all the immediate normals around it from the adjacent planes.
Figure A: here I am visualising each of the four normals on the mesh. You can see that at each corner the normals are the same (on top of each other).
Figures B, C and D (added in later edits) show the shading I'm getting.
Except even when the verts all share the same normals it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.

You're basically right about what should happen. The shading you're getting is not consistent with continuous normals. If all the vertex-face normals at a given location are the same you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quads don't have a true 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal-averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same - if that were the case you'd get consistent shading across each quad, although the quads would not be continuous. Is it possible your original vertex-face normals are not in fact lined up with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is being done in your shader you may have incorrect binormals/bitangents rather than bad vert normals.

Instead of averaging the neighborhood normals at each vertex/corner, you should average the four normals that each vertex has (four tiles meet at each interior vertex).
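As a minimal sketch of that averaging (legacy THREE.Geometry API, matching the question; the tiles array and the rounding tolerance are illustrative):

// Gather every per-corner normal by (rounded) world position, then
// overwrite each group with its normalized average.
function averageSeamNormals(tiles) {
  const groups = new Map(); // position key -> list of THREE.Vector3 normals
  for (const tile of tiles) {
    tile.updateMatrixWorld(true);
    for (const face of tile.geometry.faces) {
      [face.a, face.b, face.c].forEach((vi, corner) => {
        const p = tile.geometry.vertices[vi].clone()
          .applyMatrix4(tile.matrixWorld);
        // round to tolerate floating-point error where tiles meet
        const key = [p.x, p.y, p.z].map(v => Math.round(v * 1000)).join(',');
        if (!groups.has(key)) groups.set(key, []);
        groups.get(key).push(face.vertexNormals[corner]);
      });
    }
  }
  for (const normals of groups.values()) {
    const avg = new THREE.Vector3();
    normals.forEach(n => avg.add(n));
    avg.normalize();                   // keep unit length
    normals.forEach(n => n.copy(avg)); // every copy now agrees
  }
  tiles.forEach(t => { t.geometry.normalsNeedUpdate = true; });
}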

Related

UV-mapping a BufferGeometry's indices in Three.js

I'm having trouble UV mapping each side of a cube. The cube was made as a BufferGeometry where each side is a copy of one side (sort of a working template), rotated into place by applying a quaternion. UVs are copied as well. I'll skip any vertex coordinate I've seen before and rely on indices.
This leaves me with a total of 8 vertices and 12 faces. But I think I'm running short on vertices when I have to set all of my UVs. As is obvious in the screenshot, I've "correctly" mapped each side of the cube. But the top and bottom are lacking. I don't know how to set the vertex UVs for the top and bottom faces.
Can I in some way apply several UVs on the same vertex depending on which face it is used in or have I completely lost the plot?
I could solve the problem by applying 6 PlaneBufferGeometry but that would leave me with 4*6=24 vertices. That is a whole lot more than 8.
I haven't been able to figure this one out. Either I've completely misunderstood how it works or what I'm trying to accomplish is impossible given my constraints.
With BufferGeometry, vertices can only be reused if all the attributes for that vertex match. Since each corner of the cube has 3 perpendicular normals, there must be 3 copies of that vertex.
If you have uvs, it is the same issue -- vertices must be duplicated if the uvs are different.
Study BoxBufferGeometry, which is implemented as "indexed-BufferGeometry".
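For example, here is a minimal sketch (not from the question) of two cube faces that share an edge; the two shared corner positions appear twice in the position attribute because each face needs its own uv and normal there (addAttribute matches the r.90-era API):

// Two faces of a cube that share the edge from (1, -1, 1) to (1, 1, 1).
const positions = new Float32Array([
  // +Z face
  -1, -1, 1,   1, -1, 1,   1, 1, 1,   -1, 1, 1,
  // +X face: re-lists the two corners it shares with the +Z face,
  // because its uvs (and normals) at those corners differ
   1, -1, 1,   1, -1, -1,  1, 1, -1,   1, 1, 1,
]);
const uvs = new Float32Array([
  0, 0,  1, 0,  1, 1,  0, 1,  // +Z face covers the full texture
  0, 0,  1, 0,  1, 1,  0, 1,  // so does the +X face
]);
const geometry = new THREE.BufferGeometry();
geometry.setIndex([0, 1, 2, 0, 2, 3,  4, 5, 6, 4, 6, 7]);
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
geometry.computeVertexNormals();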
three.js r.90

Clipping non-projection elements (orthographic)

I'd like to add "flying" (3D) arcs to my orthographic projection, as shown here, but with clipping instead of the fade effect. This seems difficult since the arcs are created independently of the projection. (Each arc is defined by three points obtained from the projection--the start, end, and great circle midpoint extended along a line from the center of the canvas--but the arc itself is drawn using "2D" cardinal interpolation on the corresponding points on the svg canvas.)
My first thought was that I might need to do some spherical geometry to get the coordinates where the clipping happens, but now I'm wondering if there's a more straightforward way to accomplish this (I'm new to D3).
This is what my map looks like without clipping:
I'm also very green to d3, but fortunately I'm also fresh from my own search for a decent solution for clipping flight lines in orthographic. The demo you link to is clever in more ways than one:
The arc is drawn from three points interpolated with a Catmull-Rom curve in the projected 2D coordinates that happens to visually approximate a true circular curve in 3D nicely
The line fades with proximity of either of its points to the clipping plane, as you've pointed out
Drawing the spline in the projected, 2D coordinates eliminates any option to split the line before projection and get visual smoothness for cheap, even if d3 had the functionality natively (which I haven't been able to find anyways). That means that interpolation will have to be a lot more manual.
My first thought was that I might need to do some spherical geometry to get the coordinates where the clipping happens, but now I'm wondering if there's a more straightforward way to accomplish this
I eventually settled on what I consider the most obvious option, which unfortunately is precisely what you're aiming to avoid:
Obtain the coordinates of the clipping point via the cross product of the current globe center with the normal to the great-circle plane of the arc. Given your origin and destination Po and Pd respectively and the globe center C, you're looking for C x (Po x Pd), normalized (see the sketch after this list)
Interpolate coordinates between your origin and destination using something like d3.geoInterpolate
Project interpolated point at the right scale (read: elevation) above the ground for that fraction of the flight line
Draw one [smoothed] line from the origin to the clipping point along the interpolated points in between, and another from the clipping point to the destination, moving one of them to the background accordingly. Watch for the cases where the whole line is in front of or behind the clipping plane.
To figure out where in the flight path you need to splice in your clipping point, you will probably need to compare the great-circle angle from one end to the clipping point with the end-to-end angle. Note also that performance takes a hit, but you may still be satisfied with the number of flight lines you've drawn in your example.
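As a sketch of the clipping-point computation (function names are mine, not d3's):

// [lon, lat] in degrees -> unit vector on the sphere, and back
function toVec([lon, lat]) {
  const l = lon * Math.PI / 180, p = lat * Math.PI / 180;
  return [Math.cos(p) * Math.cos(l), Math.cos(p) * Math.sin(l), Math.sin(p)];
}
function cross(a, b) {
  return [a[1] * b[2] - a[2] * b[1],
          a[2] * b[0] - a[0] * b[2],
          a[0] * b[1] - a[1] * b[0]];
}
function toLonLat([x, y, z]) {
  const r = Math.hypot(x, y, z); // dividing by r normalizes C x (Po x Pd)
  return [Math.atan2(y, x) * 180 / Math.PI, Math.asin(z / r) * 180 / Math.PI];
}

// For d3.geoOrthographic, the visible hemisphere is centered on the
// antipode of the projection's first two rotation angles.
function clipPoint(projection, origin, destination) {
  const [rl, rp] = projection.rotate();
  const C = toVec([-rl, -rp]); // current globe center
  return toLonLat(cross(C, cross(toVec(origin), toVec(destination))));
}

Step 2 then interpolates along each half, e.g. with d3.geoInterpolate(origin, clip).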

WebGL Change Shape Animation

I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and webGL, but I'm having trouble finding any information on being able to animate a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
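For reference, the core of the technique looks roughly like this (a simplified sketch, not the exact code behind those demos; the plane size 20 x 10 and the 3-second duration are arbitrary, and texture and clock are assumed to exist):

const material = new THREE.ShaderMaterial({
  uniforms: {
    mixAmount: { value: 0.0 }, // the uniform: 0 = sphere, 1 = flat map
    map: { value: texture },   // assumes the map texture was loaded elsewhere
  },
  vertexShader: `
    uniform float mixAmount;
    varying vec2 vUv;
    void main() {
      vUv = uv;
      // end position computed from the uv coordinates (inverse UV mapping)
      vec3 flatPos = vec3((uv.x - 0.5) * 20.0, (uv.y - 0.5) * 10.0, 0.0);
      vec3 p = mix(position, flatPos, mixAmount); // the "mix" function
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() { gl_FragColor = texture2D(map, vUv); }
  `,
});

// In the animation loop, advance the uniform each frame (assumes a THREE.Clock):
material.uniforms.mixAmount.value = Math.min(1.0, clock.getElapsedTime() / 3.0);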
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that after the animation they are all sitting on one planar surface.
Try creating sphere and plane geometries with the same number of vertices, and animate the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way you would have the sphere shape at the start and the plane shape at the end.
Hope this helps; tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // tell three.js to re-upload the changed vertices
// myGlobe is an instance of THREE.Mesh and something_calculated would be a THREE.Vector3
// instance that you can calculate in some manner (sphere-plane interpolation over time)
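Fleshed out a little (a sketch, assuming hypothetical sphereGeo and planeGeo built with matching segment counts so vertex i of one corresponds to vertex i of the other):

const t = Math.min(1, elapsed / duration); // 0 = sphere shape, 1 = plane shape
myGlobe.geometry.vertices.forEach((v, i) => {
  v.copy(sphereGeo.vertices[i]).lerp(planeGeo.vertices[i], t); // interpolate
});
myGlobe.geometry.verticesNeedUpdate = true;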
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation — for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.

2D geometry outline shader

I want to create a shader to outline 2D geometry. I'm using OpenGL ES 2.0. I don't want to use a convolution filter, as the outline is not dependent on the texture, and it is too slow (I tried rendering the textured geometry to another texture and then drawing that with the convolution shader). I've also tried doing 2 passes, the first being single-colored overscaled geometry to represent an outline, with normal drawing on top, but this results in different thicknesses or unaligned outlines. I've looked into how silhouettes in cel-shading are done, but they are all calculated using normals and lights, which I don't use at all.
I'm using Box2D for physics, and have "destructible" objects with multiple fixtures. At any point an object can be broken down (fixtures deleted), and I want the outline to follow the new outer contour.
I'm doing the drawing with a vertex buffer that matches the vertices of the fixtures, preset texture coordinates, and indices to draw triangles. When a fixture is removed, its associated indices in the index buffer are set to 0, so no triangles are drawn there anymore.
The following image shows what this looks like for one object when it is fully intact.
The red points are the vertex positions (texturing isn't shown), the black lines are the fixtures, and the blue lines show the separation of how the triangles are drawn. The gray outline is what I would like the outline to look like in any case.
This image shows the same object with a few fixtures removed.
Is this possible to do this in a vertex shader (or in combination with other simple methods)? Any help would be appreciated.
Thanks :)
Assuming you're able to do something about those awkward points that are slightly inset from the corners (e.g., if you numbered the points in English-reading order, with the first being '1', point 6 would be one)...
If a point is interior then, if you list all the polygon edges connected to it in clockwise order, each pair of edges in sequence will have a polygon in common. If any two edges don't have a polygon in common then it's an exterior point.
Starting from any exterior point you can then get the whole outline by first walking in any direction and subsequently along any edge that connects to an exterior point you haven't visited yet (or, alternatively, that isn't the edge you walked along just now).
Starting from an existing outline and removing some parts, you can obviously start from either exterior point that used to connect to another but no longer does and just walk from there until you get to the other.
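For what it's worth, an equivalent way to implement that outline extraction is to classify edges instead of points: given the triangle index buffer, an edge is on the outline exactly when one (non-degenerate) triangle uses it. A sketch, assuming fixtures that touch reuse the same vertex indices and that deleted fixtures leave the 0,0,0 triangles described in the question:

function outlineEdges(indices) {
  const count = new Map(); // undirected edge -> number of triangles using it
  for (let t = 0; t < indices.length; t += 3) {
    const a = indices[t], b = indices[t + 1], c = indices[t + 2];
    if (a === b || b === c || c === a) continue; // degenerate: deleted fixture
    for (const [u, v] of [[a, b], [b, c], [c, a]]) {
      const key = Math.min(u, v) + ',' + Math.max(u, v);
      count.set(key, (count.get(key) || 0) + 1);
    }
  }
  const edges = [];
  for (const [key, n] of count) {
    if (n === 1) edges.push(key.split(',').map(Number)); // boundary edge
  }
  return edges; // chain these end-to-end to walk the outline loop(s)
}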
You can't handle this stuff in a shader under ES because you don't get connectivity information.
I think the best you could do in a shader is to expand the geometry by pushing vertices outward along their surface normals. Supposing that your data structure is a list of rectangles, each described by, say, a centre, a width and a height, you could achieve the same thing by drawing each with the same centre but with a small amount added to the width and height.
To be completely general you'd need to store normals at vertices, but also to update them as geometry is removed. So there'd be some pushing of new information from the CPU but it'd be relatively limited.
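To sketch that expansion as a GLSL ES vertex shader (attribute and uniform names are illustrative; the per-vertex outward normals are the CPU-updated information just described):

const outlineVertexShader = `
  attribute vec2 aPosition;
  attribute vec2 aNormal;      // outward normal, recomputed when fixtures are removed
  uniform float uOutlineWidth; // outline thickness in world units
  uniform mat4 uMvp;
  void main() {
    // push the vertex outward along its normal, then transform as usual
    vec2 expanded = aPosition + aNormal * uOutlineWidth;
    gl_Position = uMvp * vec4(expanded, 0.0, 1.0);
  }
`;
// Pass 1: draw the expanded geometry in the outline colour.
// Pass 2: draw the normal textured geometry on top.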

OBJ format and Flat vs Smooth Shading

I'm using OpenGL ES 1.1 and working on converting an OBJ export from Blender into some binary files containing vertex data. I actually already have a working tool, but I'm working on changing some things and came across a question.
Even with smooth shading, it seems that correct normals (perpendicular to the face plane) produce a flat appearance for faces. With smooth shading enabled and the proper normals (obtained simply by marking edges as sharp in Blender and applying an edge-split modifier), I can get the effect of smooth parts and sharp edges.
Where I'm going with this brings 2 questions.
Are the "s 1" or "s off" lines where smooth or flat shading is denoted in the OBJ file completely unnecessary from a smooth shading and use of normals standpoint?
When actually set to Flat shading in OpenGL, are normals completely ignored (or just assumed to all be perpendicular to the faces)?
For a vertex to look smooth, its normal has to be the average of the adjacent face normals (or something like it), not perpendicular to the face plane (unless you meant the average plane of all its adjacent faces).
GL_FLAT means the color of a face is not interpolated over the triangle, but taken from a single triangle corner (the provoking vertex, by default the last one). This color comes either from vertex colors or vertex lighting, so in effect you get per-face normals, but the normal used is not necessarily the face's direction; it is the normal of a corner vertex.
If you have per-vertex normals in the OBJ file you do not need the s statements, but you can use them to compute vertex normals. The s statements are the smoothing groups and are to be interpreted as 32-bit bitfields, so there are actually 32 different smoothing groups and every face can be part of more than one. For example, all faces after an "s 5" line are part of smoothing groups 1 and 3 (first and third bits set). When two neighbouring faces are part of the same smoothing group, the edge between them is smooth (the vertices share normals). This way you can reconstruct the necessary per-vertex normals.
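A sketch of that reconstruction (illustrative data layout; naive O(n^2) over faces, a real converter would first index faces by vertex):

// faces: [{ indices: [i0, i1, i2], normal: [nx, ny, nz], groups: bitfield }]
// "s off" faces get groups = 0 and therefore only smooth with themselves.
function vertexNormals(faces) {
  return faces.map(f => f.indices.map(vi => {
    const sum = [0, 0, 0];
    for (const g of faces) {
      // two faces smooth across a shared vertex when their bitfields overlap
      const smooth = g === f || (f.groups & g.groups) !== 0;
      if (smooth && g.indices.includes(vi)) {
        sum[0] += g.normal[0]; sum[1] += g.normal[1]; sum[2] += g.normal[2];
      }
    }
    const len = Math.hypot(sum[0], sum[1], sum[2]) || 1;
    return [sum[0] / len, sum[1] / len, sum[2] / len]; // per-corner normal
  }));
}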
Changing the mode between GL_FLAT and GL_SMOOTH doesn't seem to affect my rendering when I'm using per-vertex normals. Your problem, from what I can tell, is that each face only has one normal. For smooth shading, each face should have three normals, one for each vertex, and they should generally all be different. For example, the normals of a cylinder, if extended inside the cylinder, should all intersect at the axis of the cylinder. If your model has smooth normals, then an OBJ export should export per-vertex normals. It sounds like you are probably assigning per-face normals. As far as rendering in OpenGL ES goes, the smoothing groups aren't used, only normals.
