What is the most efficient way to send my geometry to glsl? - opengl-es

In my scene I have a tree of meshes. Each mesh contains a transformation matrix, a list of vertices, normals, uvs and a list of child meshes.
I would like to send my geometry in an 'array of structures' to glsl using a VBO, but I don't know how to handle updating each mesh's matrix in the event of a parent mesh's matrix transformation. Should I traverse the mesh tree and add a matrix for every vertex every frame? Should I premultiply the vertices each frame? Should I hold a list of matrices in a uniform and reference the current vertex's matrix by index? Should I use one glDrawArrays or glDrawElements for every mesh?

The 'standard' approach would be to use one glDraw call per mesh.
For each mesh, you feed its transform matrix into a single uniform that is applied to the whole mesh. This matrix is then used in the vertex shader to transform the mesh from 'object space' to 'clip space'.
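As a rough sketch of that draw loop (assuming the gl-matrix library for the math, and mesh objects shaped like the question describes, with localMatrix, vbo, vertexCount and children; uMvpLocation and aPositionLocation are illustrative uniform/attribute locations):

// Traverse the mesh tree, combining each node's local matrix with its
// parent's world matrix, then issue one draw call per mesh.
function drawTree(gl, mesh, parentWorld, viewProjection) {
  // Propagate the parent transform: world = parentWorld * local.
  var world = mat4.create();
  mat4.multiply(world, parentWorld, mesh.localMatrix);

  // One uniform upload and one draw call per mesh.
  var mvp = mat4.create();
  mat4.multiply(mvp, viewProjection, world); // object space -> clip space
  gl.uniformMatrix4fv(uMvpLocation, false, mvp);

  gl.bindBuffer(gl.ARRAY_BUFFER, mesh.vbo);
  gl.vertexAttribPointer(aPositionLocation, 3, gl.FLOAT, false, 0, 0);
  gl.drawArrays(gl.TRIANGLES, 0, mesh.vertexCount);

  for (var i = 0; i < mesh.children.length; i++) {
    drawTree(gl, mesh.children[i], world, viewProjection);
  }
}

This also answers the matrix-propagation part of the question: parent transforms are combined on the CPU during the traversal, so nothing per-vertex needs to be touched when a parent matrix changes.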
Having multiple matrices for one mesh (or draw call) is generally used for non-rigid transformations on meshes (deformations induced by bones for example).
Another approach, depending on the number of meshes you have to draw, would be to use 'hardware instancing', where you draw a similar mesh multiple times, provide an array of matrices, and let the hardware pick the matching matrix based on the instance ID being drawn.
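A sketch of the instanced variant (this assumes WebGL 2 / OpenGL ES 3.0, where gl_InstanceID and drawArraysInstanced are available; the array size of 64 is an arbitrary illustrative limit):

var instancedVertexShader = [
  '#version 300 es',
  'uniform mat4 uMatrices[64]; // one transform per instance',
  'in vec3 aPosition;',
  'void main() {',
  '  // gl_InstanceID picks the matrix for the instance being drawn',
  '  gl_Position = uMatrices[gl_InstanceID] * vec4(aPosition, 1.0);',
  '}'
].join('\n');

// Upload all matrices once, then draw N copies of the mesh in one call.
gl.uniformMatrix4fv(uMatricesLocation, false, matrixArray); // 64 * 16 floats
gl.drawArraysInstanced(gl.TRIANGLES, 0, vertexCount, instanceCount);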

Related

Algorithm to create Image Texture from Vertex colors

I know I can go from the 3D space to the 2D space of the mesh by getting the corresponding UV coordinates of each vertex.
When I transform to UV space, each vertex will have its color, and I can put that color at the pixel position the UV coordinate gives for that vertex. The issue is how to derive the pixels that lie in between them; I want a smooth gradient.
For example, say the color at UV coordinate (0.5, 0.5) is RGB [30, 40, 50], at (0.75, 0.75) it is [70, 80, 90], and there is a third vertex at (0.25, 0.6) with [10, 20, 30]. How do I derive the colors that fill the area these three UV/vertex coordinates enclose, i.e. the in-between values for the pixels?
Just draw this mesh on the GPU. You already know that you can replace vertex positions with UVs, so you have the mesh represented in texture space.
Keep your vertex colors unchanged and draw the mesh into your target texture; the GPU will do the color interpolation for every triangle you draw.
And keep in mind that if two or more triangles share the same texture space, the result will depend on the triangle order.
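A minimal three.js sketch of that baking idea (hypothetical names, assuming a recent three.js and that the mesh's UVs, colors and face indices are already available as typed arrays):

// Build a 'flattened' mesh whose positions are the UVs, keeping vertex colors.
var bakeGeometry = new THREE.BufferGeometry();
bakeGeometry.setAttribute('position', new THREE.BufferAttribute(uvsAsXYZ, 3)); // (u, v, 0) per vertex
bakeGeometry.setAttribute('color', new THREE.BufferAttribute(vertexColors, 3));
bakeGeometry.setIndex(new THREE.BufferAttribute(faceIndices, 1));

var bakeMesh = new THREE.Mesh(
  bakeGeometry,
  new THREE.MeshBasicMaterial({ vertexColors: true })
);

// Render the UV-space mesh into a texture; the GPU interpolates the colors.
var target = new THREE.WebGLRenderTarget(1024, 1024);
var bakeCamera = new THREE.OrthographicCamera(0, 1, 1, 0, -1, 1); // frames the [0,1] UV square
var bakeScene = new THREE.Scene();
bakeScene.add(bakeMesh);
renderer.setRenderTarget(target);
renderer.render(bakeScene, bakeCamera);
renderer.setRenderTarget(null); // target.texture now holds the baked color map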

Does three.js count vertices that aren't referenced in the face list?

Let's say I have a geometry with unnecessary vertices in its vertex list, but those vertices are not referenced in the geometry's face list. Will three.js do extra work while rendering this geometry because of the unused vertices, or will it just ignore them?
Three.js technically doesn't care about the contents of the vertex list. It uses that data to create a buffer, which it sends to the GPU. The same is true for the faces list.
The GPU will look at the faces buffer to determine which vertices to use to draw a particular face. If there are extra vertices in the vertex list, they should be ignored, but they do take up extra memory.
Take a look at how BufferGeometry builds its vertex list (BufferGeometry.attributes.position) and faces list (BufferGeometry.index). This is much closer to how GL and the GPU use these lists, and should give you a clearer picture of how the relationships work.
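A small sketch of that relationship (illustrative data, recent three.js API):

var geometry = new THREE.BufferGeometry();

// Four vertices in the position buffer, but the index references only the
// first three, so the fourth is stored (and uploaded) yet never drawn.
var positions = new Float32Array([
  0, 0, 0,   1, 0, 0,   0, 1, 0,   5, 5, 5  // last vertex is unused
]);
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setIndex([0, 1, 2]); // the GPU only fetches vertices named here

var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());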

Getting the vertex coordinate of a BufferGeometry transformed in vertex shader

I implemented a particle system using BufferGeometry, and to make things efficient I pass the velocity and acceleration of each vertex (particle) to the vertex shader and calculate the vertex positions inside the vertex shader. I need a way to get the world coordinates of a given vertex in the JavaScript code, but I could not find any except maybe passing an FBO from the shader. Is there a better or easier way to achieve this?
Thanks!
Update:
Well, I ended up applying the motion formula to the initial position of each vertex and then applying the matrixWorld of the parent mesh to the resulting vector to get the final position. It works, but I'm still looking for a better solution.
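For reference, a sketch of that CPU-side mirror of the shader math (assuming simple constant-acceleration motion and illustrative attribute names velocity and acceleration):

// Reproduce in JavaScript what the vertex shader computes on the GPU:
// p(t) = p0 + v*t + 0.5*a*t^2, then bring the result into world space.
function particleWorldPosition(mesh, index, time) {
  var attrs = mesh.geometry.attributes;
  var p0 = new THREE.Vector3().fromBufferAttribute(attrs.position, index);
  var v  = new THREE.Vector3().fromBufferAttribute(attrs.velocity, index);
  var a  = new THREE.Vector3().fromBufferAttribute(attrs.acceleration, index);

  var p = p0
    .add(v.multiplyScalar(time))
    .add(a.multiplyScalar(0.5 * time * time));

  return p.applyMatrix4(mesh.matrixWorld); // object space -> world space
}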

WebGL Change Shape Animation

I'm creating a 3D globe with a map on it which is supposed to unravel and fill the screen after a few seconds.
I've managed to create the globe using three.js and webGL, but I'm having trouble finding any information on being able to animate a shape change. Can anyone provide any help? Is it even possible?
(Abstract Algorithm's and Kevin Reid's answers are good, and only one thing is missing: some actual Three.js code.)
You basically need to calculate where each point of the original sphere will be mapped to after it flattens out into a plane. This data is an attribute of the shader: a piece of data attached to each vertex that differs from vertex to vertex of the geometry. Then, to animate the transition from the original position to the end position, in your animation loop you will need to update the amount of time that has passed. This data is a uniform of the shader: a piece of data that remains constant for all vertices during each frame of the animation, but may change from one frame to the next. Finally, there exists a convenient function called "mix" that will linearly interpolate between the original position and the end/goal position of each vertex.
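In shader terms, a minimal version might look like this (a sketch assuming a three.js ShaderMaterial, which pre-declares position, modelViewMatrix and projectionMatrix; endPosition and mixAmount are illustrative names):

var morphVertexShader = [
  'uniform float mixAmount;    // 0.0 = sphere, 1.0 = flattened; updated each frame',
  'attribute vec3 endPosition; // per-vertex goal position',
  'void main() {',
  '  // mix() linearly interpolates between the start and goal positions',
  '  vec3 newPosition = mix(position, endPosition, mixAmount);',
  '  gl_Position = projectionMatrix * modelViewMatrix * vec4(newPosition, 1.0);',
  '}'
].join('\n');

The endPosition attribute would be filled per vertex on the geometry, and mixAmount would be advanced in the animation loop.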
I've written two examples for you: the first just "flattens" a sphere, sending the point (x,y,z) to the point (x,0,z).
http://stemkoski.github.io/Three.js/Shader-Attributes.html
The second example follows Abstract Algorithm's suggestion in the comments: "unwrapping the sphere's vertices back on plane surface, like inverse sphere UV mapping." In this example, we can easily calculate the ending position from the UV coordinates, and so we actually don't need attributes in this case.
http://stemkoski.github.io/Three.js/Sphere-Unwrapping.html
Hope this helps!
In 3D, anything and everything is possible. ;)
Your sphere geometry has its own vertices, and basically you just need to animate their positions so that after the animation they all sit on one planar surface.
Try creating a sphere and a plane geometry with the same number of vertices, and animate the sphere's vertices with values interpolated between the sphere's and the plane's original positions. That way you would have the sphere shape at the start and the plane shape at the end.
Hope this helps; tell me if you need more directions on how to do it.
myGlobe.geometry.vertices[index].copy(something_calculated);
myGlobe.geometry.verticesNeedUpdate = true; // make three.js re-upload the buffer
// myGlobe is an instance of THREE.Mesh; something_calculated is a THREE.Vector3
// computed in some manner (sphere-plane interpolation over time).
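Expanded into a loop, a sketch of that interpolation (this uses the legacy THREE.Geometry API of the question's era and assumes sphereGeometry and planeGeometry were built with matching vertex counts; t runs from 0 to 1 over the animation):

// Precompute start and end positions once.
var startPositions = sphereGeometry.vertices.map(function (v) { return v.clone(); });
var endPositions = planeGeometry.vertices;

function updateMorph(t) { // t in [0, 1]
  for (var i = 0; i < myGlobe.geometry.vertices.length; i++) {
    // lerpVectors sets the vertex to start * (1 - t) + end * t
    myGlobe.geometry.vertices[i].lerpVectors(startPositions[i], endPositions[i], t);
  }
  myGlobe.geometry.verticesNeedUpdate = true;
}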
(Abstract Algorithm's answer is good, but I think one thing needs improvement: namely using vertex shaders.)
You make a set of vertices textured with the map image. Then, design a calculation for interpolating between the sphere shape and the flat shape. It doesn't have to be linear interpolation — for example, one way that might be good is to put the map on a small portion of a sphere of increasing radius until it looks flat (getting it all the way there will be tricky).
Then, write that calculation in your vertex shader. The position of each vertex can be computed entirely from the texture coordinates (since that determines where-on-the-map the vertex goes and implies its position) and a uniform variable containing a time value.
Using the vertex shader will be much more efficient than recomputing and re-uploading the coordinates using JavaScript, allowing perfectly smooth animation with plenty of spare resources to do other things as well.
Unfortunately, I'm not familiar enough with Three.js to describe how to do this in detail, but all of the above is straightforward in basic WebGL and should be possible in any decent framework.
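For what it's worth, a sketch of such a vertex shader (illustrative only; it assumes an equirectangular map, a three.js ShaderMaterial with its built-in uv attribute, and a time uniform driven from JavaScript):

var unwrapVertexShader = [
  'uniform float time; // 0.0 = sphere, 1.0 = flat map',
  'const float PI = 3.141592653589793;',
  'void main() {',
  '  // Sphere position derived purely from the texture coordinates.',
  '  float lon = (uv.x - 0.5) * 2.0 * PI;',
  '  float lat = (uv.y - 0.5) * PI;',
  '  vec3 onSphere = vec3(cos(lat) * sin(lon), sin(lat), cos(lat) * cos(lon));',
  '  vec3 onPlane = vec3(lon, lat, 0.0); // flat map in the XY plane',
  '  gl_Position = projectionMatrix * modelViewMatrix *',
  '                vec4(mix(onSphere, onPlane, time), 1.0);',
  '}'
].join('\n');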

rendering n-point polygon with face in 3D

Newbie to three.js. I have multiple n-sided polygons to be displayed as faces (I want the polygon faces to be opaque). Each polygon faces a different direction in 3D space (essentially these faces are part of some building).
Here are a couple of methods I tried, but they do not fit the bill:
Used the Geometry object, added the n vertices, and used a line mesh. It created the polygon as a hollow polygon. As my number of points is not just 3 or 4, I could not use the Face3 or Face4 objects; essentially I need a Face-n object.
I looked at the WebGL geometric shapes example. The Shape object works in 2D with extrusion, and all the objects in the example lie on one plane, while each of my polygons has a different 3D normal vector. Should I use a 2D shape, take note of the face normal, and rotate the 2D shape after rendering?
Or is there a better way to render multiple 3D flat polygons with opaque faces, given just the x, y, z vertices?
As long as your polygons are convex, you can still use the Face3 object. If you take one n-sided polygon, let's say a hexagon, you can create Face3 polygons by taking vertices (0,1,2) as one face, vertices (0,2,3) as another face, vertices (0,3,4) as the next face and vertices (0,4,5) as the last face. I think you will get the idea if you draw it on paper. But this works only for convex polygons.
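A sketch of that fan triangulation for an arbitrary convex polygon, using the legacy THREE.Geometry/Face3 API the answer refers to (points is assumed to be an array of THREE.Vector3 in winding order):

// Triangulate a convex n-gon as a fan around vertex 0:
// faces (0,1,2), (0,2,3), ..., (0, n-2, n-1).
function convexPolygonGeometry(points) {
  var geometry = new THREE.Geometry();
  geometry.vertices = points;
  for (var i = 1; i < points.length - 1; i++) {
    geometry.faces.push(new THREE.Face3(0, i, i + 1));
  }
  geometry.computeFaceNormals(); // each polygon keeps its own 3D normal
  return geometry;
}

var mesh = new THREE.Mesh(convexPolygonGeometry(points),
                          new THREE.MeshBasicMaterial({ side: THREE.DoubleSide }));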
