I'm not sure if the title is the best, but I'll try to explain what my problem is. I'm playing around with Three.js and Physijs to make a simple game where the player can walk around on a field. I'm using a plane geometry as the ground and wanted to use a displacement map to change its shape. I previously manipulated the vertices directly (using some mathematical formulas to displace them) and used a HeightfieldMap for collision detection, and this worked great. But now that I've implemented displacement mapping through a vertex shader, Physijs treats the ground as a flat object. I'm guessing the manipulation of the vertices in the shader happens after the Physijs simulation, but is there any way to get Physijs to take the vertex displacement from the shader into account when doing collision detection? Alternatively, is there a good way to do displacement mapping without shaders?
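For reference, a sketch of what displacement mapping without shaders could look like: sample the map on the CPU and move the vertices before the physics mesh is built, so the physics engine sees the displaced shape. This assumes a legacy THREE.PlaneGeometry and a heightmap already drawn to a canvas; heightCanvas, segments and maxHeight are placeholder names.

var ctx = heightCanvas.getContext('2d');
var pixels = ctx.getImageData(0, 0, heightCanvas.width, heightCanvas.height).data;
var geometry = new THREE.PlaneGeometry(100, 100, segments, segments);
for (var i = 0; i < geometry.vertices.length; i++) {
    // Vertices are laid out row by row, (segments + 1) per row.
    var col = i % (segments + 1);
    var row = Math.floor(i / (segments + 1));
    var px = Math.floor(col / segments * (heightCanvas.width - 1));
    var py = Math.floor(row / segments * (heightCanvas.height - 1));
    // The red channel (0-255) of the heightmap drives the displacement.
    var h = pixels[(py * heightCanvas.width + px) * 4] / 255;
    geometry.vertices[i].z = h * maxHeight; // local Z becomes "up" after rotating the plane
}
geometry.computeFaceNormals();
geometry.computeVertexNormals();
// The geometry is now really displaced, so the heightfield collision
// setup that worked before should work with it unchanged.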
I am a 3D computer graphics learner and I want to implement frustum culling on the GPU. I've already implemented it in my software rendering system. That culling worked on polygons with a minimum of 3 vertices: if part of a polygon fell outside the frustum, it added new points where the polygon crossed the frustum boundary. Here is the algorithm implementation, just in case.
Since I want to implement that algorithm on the GPU, I have several questions about it:
I was thinking of implementing frustum culling with a geometry shader, but it seems that I won't be able to cull the UV coordinates then; is that right? Or is there a way to somehow pass those UV coordinates in, cull them in the geometry shader, and pass them on to the fragment shader?
I also thought about implementing it in the vertex shader, but wouldn't I be unable to access all the vertices of the current triangle? Maybe I just don't know the right way to get all the vertices per face? Wouldn't I only be able to cull vertices one by one?
Maybe I just don't know the right way to implement frustum culling on the GPU. Is there a better algorithm for implementing frustum culling? Thanks.
I believe what you have implemented is 'clipping'.
'Frustum culling' usually refers to a technique where you discard invisible elements as a whole, and I believe that is not what you are asking about here.
Back to clipping: graphics APIs clip polygons in normalized device coordinate space by default. For example, with OpenGL, anything outside the 2x2x2 cube centered at (0,0,0) is clipped automatically after the vertex shader runs.
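To illustrate, a minimal old-style GLSL sketch (modelViewProjection and map are illustrative names): the vertex shader below only transforms the vertex and forwards the UV as a varying. The hardware clips primitives against the frustum after this stage and re-interpolates varyings at the clipped edges, so the UVs need no manual treatment.

// vertex shader
attribute vec3 position;
attribute vec2 uv;
uniform mat4 modelViewProjection;
varying vec2 vUv;
void main() {
    vUv = uv; // just pass the UV along
    gl_Position = modelViewProjection * vec4(position, 1.0);
    // clipping against the 2x2x2 NDC cube happens automatically after this
}

// fragment shader
varying vec2 vUv;
uniform sampler2D map;
void main() {
    gl_FragColor = texture2D(map, vUv); // UVs arrive correctly interpolated
}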
I'm implementing an area light in my ray tracer. On a simple sphere OBJ model (i.e. a sphere that consists of triangles), square patches are displayed. How can I make the sphere surface smooth?
I suspect that the surface normal calculation needs to be fixed. Currently a single normal is computed for each triangle and used for every point it contains.
Here's the sphere:
The square patches are displayed because you are simply seeing the triangles that make up the OBJ model. Either find an OBJ model with more triangles or use texture mapping to smooth the surface. If you are looking for a computationally efficient method for drawing spheres, you can create an intersection (collision) algorithm specifically for spheres. I do not know how low-level you went in your ray-tracing project, but if you defined the reflection algorithm for a triangle, you can pretty easily do one for a sphere. I have created a visualization here:
https://www.geogebra.org/m/g9rrhttp
You can check that out if you want. If you do not want to implement this new algorithm, you can look for an OBJ model with a texture; the texture is what will make it appear smooth.
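For reference, a hedged sketch of such a sphere intersection routine (the vector helpers dot, sub, add, scale and normalize are assumed to exist; this is not taken from the visualization above):

function intersectSphere(origin, dir, center, radius) {
    // Solve |origin + t*dir - center|^2 = radius^2, a quadratic in t.
    var oc = sub(origin, center);
    var a = dot(dir, dir);
    var b = 2.0 * dot(oc, dir);
    var c = dot(oc, oc) - radius * radius;
    var disc = b * b - 4.0 * a * c;
    if (disc < 0.0) return null; // the ray misses the sphere
    var t = (-b - Math.sqrt(disc)) / (2.0 * a);
    if (t < 0.0) t = (-b + Math.sqrt(disc)) / (2.0 * a); // origin inside the sphere
    if (t < 0.0) return null; // sphere is behind the ray
    var hit = add(origin, scale(dir, t));
    // The exact normal is the direction from the center to the hit point,
    // which is what makes the analytic sphere look perfectly smooth.
    return { t: t, point: hit, normal: normalize(sub(hit, center)) };
}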
I implemented a particle system using BufferGeometry, and to make things efficient I pass the velocity and acceleration of each vertex (particle) to the vertex shader and calculate the vertex positions inside the vertex shader. I need a way to get the world coordinates of a given vertex in the JavaScript code, but I could not find any way except maybe passing an FBO from the shader. Is there a better or easier way to achieve this?
Thanks!
Update:
Well, I ended up applying the motion formula of the vertex to its initial position and then applying the matrixWorld of the parent mesh to the resulting vector to get the final world position. It works, but I'm still looking for a better solution.
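In case it helps someone, a sketch of that computation, assuming the shader models position as p = p0 + v*t + 0.5*a*t^2 (initialPosition, velocity, acceleration and particleMesh are illustrative names):

// Reproduce the shader's motion formula on the CPU for one particle...
var pos = initialPosition.clone()
    .add(velocity.clone().multiplyScalar(t))
    .add(acceleration.clone().multiplyScalar(0.5 * t * t));
// ...then bring the result into world space with the parent's matrix.
particleMesh.updateMatrixWorld();
pos.applyMatrix4(particleMesh.matrixWorld);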
I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and WebGL, but I'm having trouble finding any information on animating a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
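To make those three ingredients concrete, here is a minimal hedged sketch, separate from the two linked examples below; it assumes a BufferGeometry, and endPosition, mixAmount, duration and flattenedPositions are illustrative names (on older Three.js versions use addAttribute instead of setAttribute):

geometry.setAttribute('endPosition',
    new THREE.Float32BufferAttribute(flattenedPositions, 3)); // the attribute
var material = new THREE.ShaderMaterial({
    uniforms: {
        mixAmount: { value: 0.0 } // the uniform: 0 = sphere, 1 = flattened
    },
    vertexShader: [
        'attribute vec3 endPosition;',
        'uniform float mixAmount;',
        'void main() {',
        '  vec3 p = mix(position, endPosition, mixAmount); // linear interpolation',
        '  gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
        '}'
    ].join('\n')
});
// In the animation loop, advance the uniform once per frame:
material.uniforms.mixAmount.value = Math.min(elapsed / duration, 1.0);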
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions, so that after the animation they all sit on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animate the sphere's vertices with values interpolated between the sphere's and the plane's original vertex positions. That way, at the start you would have the sphere shape and, at the end, the plane shape.
Hope this helps; tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // flag the geometry for re-upload
// myGlobe is an instance of THREE.Mesh and something_calculated is a THREE.Vector3
// instance that you calculate in some manner (sphere-plane interpolation over time)
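A fuller sketch of the per-frame loop (sphereVerts and planeVerts are assumed to be matching arrays of THREE.Vector3, and t runs from 0 to 1 over the animation):

for (var i = 0; i < myGlobe.geometry.vertices.length; i++) {
    // Interpolate each vertex between its sphere and plane positions.
    myGlobe.geometry.vertices[i].copy(sphereVerts[i]).lerp(planeVerts[i], t);
}
myGlobe.geometry.verticesNeedUpdate = true; // tell Three.js to re-upload the vertices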
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation; for example, one approach that might work well is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since they determine where on the map the vertex goes, and that implies its position) and a uniform variable containing a time value.
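For example, a hedged GLSL sketch of such a vertex shader (it uses Three.js-style built-ins such as uv, modelViewMatrix and projectionMatrix, and assumes an equirectangular mapping; the unwrap math is illustrative, not a finished design):

uniform float time; // 0 = sphere, 1 = flat
varying vec2 vUv;
void main() {
    vUv = uv;
    // Reconstruct the unit-sphere position from the texture coordinates...
    float lon = (uv.x - 0.5) * 2.0 * 3.14159265;
    float lat = (uv.y - 0.5) * 3.14159265;
    vec3 onSphere = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));
    // ...and the flat position directly from them.
    vec3 onPlane = vec3(2.0 * (uv.x - 0.5), 2.0 * (uv.y - 0.5), 0.0);
    vec3 p = mix(onSphere, onPlane, clamp(time, 0.0, 1.0));
    gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
}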
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
I am trying to create a terrain solution in Three.js and I'm running into some trouble with the generation of the normals. I approach the problem by creating a number of mesh objects using the THREE.PlaneGeometry class. Once all of the tiles have been created, I go through each one and set the UVs so that each tile represents a part of the whole. I also generate height values for the vertex Y positions to create some hills. I then call the geometry functions
geometry.computeFaceNormals();
geometry.computeVertexNormals();
This is just so that I have some default face and vertex normals for each tile.
I then go through each tile and try to average out the normals on each corner.
The problem is (I think) with the normals, but I don't really know what to call this problem. Each of the normals at the plane's corners points in the same direction as the face when created, which makes the terrain look like a flat-shaded object. To prevent this, I thought I needed to make sure each vertex normal (each corner) had the same averaged normal as its immediate neighbours' normals, i.e. each corner of each tile has the same normal as all the normals immediately around it from the adjacent planes.
Figure A: here I am visualising each of the 4 normals on the mesh. You can see that at each corner the normals are the same (on top of each other).
Figure B
EDIT: Figure C
EDIT: Figure D
Except that even when the verts all share the same normals, it still comes up all blocky <:/
I don't know how to do this... I think my understanding of what needs to be done is incorrect...?
Any help would be greatly appreciated.
You're basically right about what should happen. The shading you're getting is not consistent with continuous normals: if all the vertex-face normals at a given location are the same, you should not see the clear shading discontinuities in your second image. However, the image doesn't look like simple face normals either, at least not to my eye.
A couple of things to look at:
1) I note that your quads themselves are not planar. Is it possible your algorithm is assuming that they are? Non-planar quads don't have a real 'face normal' to use as a base.
2) Are your normals normalized after you average them? That is, do they have a vector length of 1?
3) Are you confident that the normal-averaging code is actually using the correct normals to average? The shading here does not look like a completely flat-shaded image where each vertex-face normal in a quad is the same; if that were the case, you'd get consistent shading across each quad, although the quads would not be continuous. Is it possible your original vertex-face normals are not in fact aligned with the face normals?
4) Try turning off the bump maps to debug. Depending on how the bump is done in your shader, you may have incorrect binormals/bitangents rather than bad vertex normals.
Instead of averaging the neighbourhood normals at each vertex/corner, you should average the four normals that each vertex has (four tiles meet at each vertex).
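A hedged sketch of that averaging, assuming legacy THREE.Geometry tiles stored in an array called tiles whose edge vertices coincide in world space (all names illustrative):

// Group every vertex-face normal by the world position it sits at.
var groups = {};
tiles.forEach(function (tile) {
    tile.updateMatrixWorld();
    tile.geometry.faces.forEach(function (face) {
        ['a', 'b', 'c'].forEach(function (corner, i) {
            var v = tile.localToWorld(tile.geometry.vertices[face[corner]].clone());
            // Quantize the position so coincident corners share a key.
            var key = v.x.toFixed(3) + ',' + v.y.toFixed(3) + ',' + v.z.toFixed(3);
            (groups[key] = groups[key] || []).push(face.vertexNormals[i]);
        });
    });
});
// Overwrite each group of normals with its normalized average.
Object.keys(groups).forEach(function (key) {
    var avg = new THREE.Vector3();
    groups[key].forEach(function (n) { avg.add(n); });
    avg.normalize(); // re-normalize after averaging
    groups[key].forEach(function (n) { n.copy(avg); });
});
tiles.forEach(function (tile) { tile.geometry.normalsNeedUpdate = true; });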