How do I check for collisions with the terrain mesh in this example? http://stemkoski.github.io/Three.js/Shader-Heightmap-Textures.html
Every intersection is reported as if the terrain were a flat plane.
You can't check for collisions with the raycaster as usual because the geometry is actually a flat plane; it only appears displaced because of the shader code. The best workaround is to write a method that generates a geometry object based on the image texture. Then you can remove the shader code and check for intersections as usual.
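For instance (a rough sketch only, not code from the linked demo: it assumes a recent three.js where PlaneGeometry is a BufferGeometry, and the file name, plane size, height scale and the scene variable are all placeholders), you could read the heightmap pixels through a canvas and build a real displaced plane:

const img = new Image();
img.src = 'heightmap.png';                    // the same image the shader samples (placeholder name)
img.onload = function () {
    // read the pixel data through a canvas
    const canvas = document.createElement('canvas');
    canvas.width = img.width;
    canvas.height = img.height;
    const ctx = canvas.getContext('2d');
    ctx.drawImage(img, 0, 0);
    const data = ctx.getImageData(0, 0, img.width, img.height).data;

    // one segment per pixel (coarser is fine too)
    const segsX = img.width - 1, segsY = img.height - 1;
    const geometry = new THREE.PlaneGeometry(1000, 1000, segsX, segsY);
    const pos = geometry.attributes.position;

    // displace each vertex by the brightness of the matching pixel
    const heightScale = 100;                  // match whatever scale the shader uses
    for (let i = 0; i < pos.count; i++) {
        const px = i % (segsX + 1);
        const py = Math.floor(i / (segsX + 1));
        const brightness = data[(py * img.width + px) * 4] / 255;  // red channel
        pos.setZ(i, brightness * heightScale);                     // PlaneGeometry lies in the XY plane
    }
    pos.needsUpdate = true;
    geometry.computeVertexNormals();

    const terrain = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial());
    terrain.rotation.x = -Math.PI / 2;        // lay it flat, like the shader example
    scene.add(terrain);
    // raycaster.intersectObject(terrain) now reports the real displaced surface
};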
I'm implementing an area light in my ray tracer. On a simple sphere obj model (i.e. a sphere made up of triangles), square patches are visible. How can I make the sphere's surface smooth?
I suspect the surface-normal calculation needs to be fixed.
Currently a single normal is computed per triangle and used for every point it contains.
Here's the sphere:
The square patches are displayed because you are simply seeing the triangles that make up the obj model. Either find an obj model with more triangles or use texture mapping to smooth the surface. If you are looking for a computationally efficient method for drawing spheres, you can create a collision algorithm for spheres. I do not know how low-level you went on your ray-tracing project, but if you defined the reflection algorithm for the triangle, you can pretty easily do one for a sphere. I have created a visualization here:
https://www.geogebra.org/m/g9rrhttp
You can check that out if you want. If you do not want to implement this new algorithm, you can look for an obj model with a texture. The texture is what will make it appear smooth.
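If you do go the analytic-sphere route, a minimal sketch of the ray-sphere intersection test mentioned above (plain JavaScript with simple {x, y, z} vectors; all names are illustrative) could look like this:

// solve |origin + t*dir - center|^2 = radius^2 for t
function intersectSphere(origin, dir, center, radius) {
    const ox = origin.x - center.x, oy = origin.y - center.y, oz = origin.z - center.z;
    const a = dir.x * dir.x + dir.y * dir.y + dir.z * dir.z;
    const b = 2 * (ox * dir.x + oy * dir.y + oz * dir.z);
    const c = ox * ox + oy * oy + oz * oz - radius * radius;
    const disc = b * b - 4 * a * c;
    if (disc < 0) return null;                    // ray misses the sphere
    const t = (-b - Math.sqrt(disc)) / (2 * a);   // nearest root
    if (t < 0) return null;                       // sphere is behind the ray origin
    const hit = {
        x: origin.x + t * dir.x,
        y: origin.y + t * dir.y,
        z: origin.z + t * dir.z
    };
    // the exact normal at the hit point -- this is what removes the faceted look
    const normal = {
        x: (hit.x - center.x) / radius,
        y: (hit.y - center.y) / radius,
        z: (hit.z - center.z) / radius
    };
    return { t: t, hit: hit, normal: normal };
}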
I'm using the displacementMap property of MeshPhongMaterial with a loaded greyscale texture. The extrusion/displacement works fine, but the normals and faces of the affected mesh.geometry do not get updated.
I used
geometry.computeFaceNormals()
and
geometry.computeVertexNormals().
I want to make a walkable character on the terrain by raycasting down and reading the height (y) value of the intersected face/vertex to adjust the character's offset.
Enclosed is an image of the current displaced geometry with the VertexNormalsHelper (red lines) and the FaceNormalsHelper (green lines) applied.
Does someone know how to update them correctly?
The displacement map will only be applied in the vertex shader, meaning the fundamental geometry isn't actually altered, and the fundamental geometry is what the helper is showing.
You would have to iterate and change the actual vertices in the geometry buffer itself to alter them.
The same of course applies for the raycaster intersection test which will use the fundamental vertices/faces.
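For example (a rough sketch only, assuming a recent three.js where the plane is a BufferGeometry, that the displacement texture's image is already loaded, and that "plane" is your existing mesh), you could bake the displacement map into the geometry on the CPU:

const image = plane.material.displacementMap.image;
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0);
const texel = ctx.getImageData(0, 0, image.width, image.height).data;

const pos = plane.geometry.attributes.position;
const uv = plane.geometry.attributes.uv;
const scale = plane.material.displacementScale;
const bias = plane.material.displacementBias;

for (let i = 0; i < pos.count; i++) {
    const px = Math.min(image.width - 1, Math.floor(uv.getX(i) * image.width));
    const py = Math.min(image.height - 1, Math.floor((1 - uv.getY(i)) * image.height));
    const h = texel[(py * image.width + px) * 4] / 255;   // red channel
    // a PlaneGeometry's local normal is +Z, so displace along Z
    pos.setZ(i, pos.getZ(i) + h * scale + bias);
}
pos.needsUpdate = true;
plane.geometry.computeVertexNormals();   // now the normal helpers and raycaster match the surface

// once the geometry really is displaced, remove material.displacementMap,
// otherwise the shader would displace it a second time on top of this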
I want to make a walkable character on the terrain by ray-casting down and receiving the height (y)
A possible workaround to avoid altering the geometry is to first get the UV position of your character on the plane, then map that to a 2D position on the displacement map. Finally pick the value at that point, scale and use that for height relative to the mesh.
This may even be more efficient than using raycasting, but you might have to do interpolation for positions that aren't on a vertex.
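A rough sketch of that lookup (assuming the plane has been rotated flat into the XZ plane at y = 0, is planeWidth by planeDepth world units and centered at the origin, and that the displacement texture's image is already loaded; every name here is a placeholder):

const image = material.displacementMap.image;
const canvas = document.createElement('canvas');
canvas.width = image.width;
canvas.height = image.height;
const ctx = canvas.getContext('2d');
ctx.drawImage(image, 0, 0);
const pixels = ctx.getImageData(0, 0, image.width, image.height).data;

function terrainHeightAt(x, z) {
    // world position -> UV on the plane (0..1), then UV -> pixel
    const u = x / planeWidth + 0.5;
    const v = 1.0 - (z / planeDepth + 0.5);   // v is usually flipped relative to z
    const px = Math.min(image.width - 1, Math.floor(u * image.width));
    const py = Math.min(image.height - 1, Math.floor(v * image.height));
    const brightness = pixels[(py * image.width + px) * 4] / 255;   // red channel
    // same scaling the material applies in its vertex shader
    return brightness * material.displacementScale + material.displacementBias;
}

// each frame:
// character.position.y = terrainHeightAt(character.position.x, character.position.z) + characterOffset;

For smoother movement you can bilinearly interpolate between the four surrounding pixels instead of taking the nearest one.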
I would like to create a vertex shader in Three.js to render the faces of a textured geometry so that all the triangles are face-on to the camera.
This is to emulate the functionality and performance of Three.js Points, but without the size limitation of gl_PointSize.
I'm not really sure what calculation to perform in the vertex shader. Any help appreciated.
You will have to add a custom attribute to your geometry; the easiest one to use would be a vector to the center of the triangle.
In the vertex shader you will have to calculate how to move each vertex. You now have:
vertex position
vector to center
vertex normal == face normal
camera orientation (from matrices)
From those you can calculate the triangle center (which stays static) and the rotation each vertex has to make around that center, about an axis perpendicular to the vector to the center, so that the face normal comes out as the inverse of the camera orientation.
The math is not very complicated, but writing shader code is tedious because of the non-existent debugging, so I advise you to first write code that rotates the geometry's positions (using only the same parameters) and then port it to the shader.
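Here is a rough sketch of one way to set this up. It differs slightly from the exact attribute suggested above: instead of only a vector to the center, it stores the triangle center plus each vertex's 2D offset inside the triangle's own plane, then rebuilds the triangle in view space so it always faces the camera. It assumes a recent three.js and a non-indexed BufferGeometry, and every name is illustrative:

function buildBillboardAttributes(geometry) {
    const pos = geometry.attributes.position;
    const centers = new Float32Array(pos.count * 3);
    const corners = new Float32Array(pos.count * 2);
    const a = new THREE.Vector3(), b = new THREE.Vector3(), c = new THREE.Vector3();
    const center = new THREE.Vector3(), e1 = new THREE.Vector3(), e2 = new THREE.Vector3();
    const n = new THREE.Vector3(), tmp = new THREE.Vector3();

    for (let i = 0; i < pos.count; i += 3) {
        a.fromBufferAttribute(pos, i);
        b.fromBufferAttribute(pos, i + 1);
        c.fromBufferAttribute(pos, i + 2);
        center.copy(a).add(b).add(c).divideScalar(3);

        // build a 2D basis (e1, e2) lying in the triangle's own plane
        e1.copy(b).sub(a).normalize();
        n.copy(c).sub(a).cross(e1).normalize();
        e2.copy(n).cross(e1);

        [a, b, c].forEach(function (v, k) {
            centers.set([center.x, center.y, center.z], (i + k) * 3);
            tmp.copy(v).sub(center);
            corners.set([tmp.dot(e1), tmp.dot(e2)], (i + k) * 2);
        });
    }
    // on older three.js versions use addAttribute instead of setAttribute
    geometry.setAttribute('center', new THREE.BufferAttribute(centers, 3));
    geometry.setAttribute('corner', new THREE.BufferAttribute(corners, 2));
}

const material = new THREE.ShaderMaterial({
    vertexShader: [
        'attribute vec3 center;',
        'attribute vec2 corner;',
        'void main() {',
        '    // move the triangle center into view space, then rebuild the',
        '    // triangle in the view plane so it always faces the camera',
        '    vec4 mvCenter = modelViewMatrix * vec4(center, 1.0);',
        '    mvCenter.xy += corner;',
        '    gl_Position = projectionMatrix * mvCenter;',
        '}'
    ].join('\n'),
    fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});

// usage (geometry must be non-indexed, e.g. geometry = geometry.toNonIndexed()):
// buildBillboardAttributes(geometry);
// scene.add(new THREE.Mesh(geometry, material));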
I'm not sure if the title is the best, but I'll try to explain what my problem is. I'm playing around with Three.js and Physijs to make a simple game where the player can walk around on a field. I'm using a plane geometry as the ground and wanted to use a displacement map to change its shape. I previously manipulated the vertices directly (using just some mathematical formulas to displace them) and used a HeightfieldMap for collision detection, and this worked great, but now that I've implemented displacement mapping through a vertex shader, Physijs treats the ground as a flat object. I'm guessing the manipulation of the vertices in the shader happens after the Physijs simulation, but is there any way to get Physijs to take the vertex displacement from the shader into account when doing collision detection? Alternatively, is there a good way to do displacement mapping without shaders?
I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and webGL, but I'm having trouble finding any information on being able to animate a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
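A stripped-down version of that first idea (not the code from the linked page: the linked example stores each vertex's end position as an attribute, while here the (x, 0, z) target is computed directly in the shader for brevity, and the uniform name is made up):

const material = new THREE.ShaderMaterial({
    uniforms: {
        mixAmount: { value: 0.0 }   // 0 = sphere, 1 = flattened
    },
    vertexShader: [
        'uniform float mixAmount;',
        'void main() {',
        '    vec3 flattened = vec3(position.x, 0.0, position.z);   // end position',
        '    vec3 p = mix(position, flattened, mixAmount);         // interpolate',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: 'void main() { gl_FragColor = vec4(1.0); }'
});

// in the animation loop, drive the uniform with elapsed time:
// material.uniforms.mixAmount.value = Math.min(1, clock.getElapsedTime() / duration);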
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that after the animation they are all sitting on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animating the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way, at the start you would have the sphere shape and at the end, the plane shape.
Hope this helps, tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true;
// myGlobe is an instance of THREE.Mesh (with a legacy THREE.Geometry, whose vertices are THREE.Vector3 objects) and something_calculated is a THREE.Vector3 that you calculate in some manner (sphere-plane interpolation over time)
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation — for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
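For what it's worth, here is a rough three.js/GLSL sketch of that idea: every vertex position is derived purely from its uv coordinates plus a time uniform, exactly as described above. The angle formulas depend on how your globe's UVs were generated, so treat the mapping, sizes and names below as assumptions:

const material = new THREE.ShaderMaterial({
    uniforms: {
        time: { value: 0.0 },                           // drive from 0 (sphere) to 1 (flat map)
        radius: { value: 100.0 },
        mapSize: { value: new THREE.Vector2(400, 200) },
        map: { value: mapTexture }                      // your loaded map texture (placeholder)
    },
    vertexShader: [
        'uniform float time;',
        'uniform float radius;',
        'uniform vec2 mapSize;',
        'varying vec2 vUv;',
        'void main() {',
        '    vUv = uv;',
        '    // sphere position reconstructed from uv (longitude/latitude)',
        '    float lon = uv.x * 2.0 * 3.141592653589793;',
        '    float lat = (uv.y - 0.5) * 3.141592653589793;',
        '    vec3 onSphere = radius * vec3(cos(lat) * cos(lon), sin(lat), cos(lat) * sin(lon));',
        '    // flat position: uv mapped onto a rectangle',
        '    vec3 onPlane = vec3((uv.x - 0.5) * mapSize.x, (uv.y - 0.5) * mapSize.y, 0.0);',
        '    vec3 p = mix(onSphere, onPlane, clamp(time, 0.0, 1.0));',
        '    gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform sampler2D map;',
        'varying vec2 vUv;',
        'void main() { gl_FragColor = texture2D(map, vUv); }'
    ].join('\n')
});

// per frame: material.uniforms.time.value = Math.min(1, clock.getElapsedTime() / duration);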