Is there a reasonable place to store computed data for a vertex shader that is computed once, and then used many times (i.e. for each vertex)?
I'm writing a shader that follows a Catmull-Rom curve, and I need to pre-compute (just once!) a series of evenly spaced positions along the curve so that I can plot text glyphs correctly. Once computed, I intend to use the evenly spaced array of positions as a fast lookup.
It's possible there could be hundreds of vec3 or vec4 points in this cache, depending on how finely the spline is sliced into arc lengths.
Would such data best be placed in a uniform? A texture? Something else?
This question is pretty broad, but if you're thinking of performing calculations on the GPU, then you're looking for a THREE.WebGLRenderTarget. Instead of rendering a shader to the <canvas>, you can render it to a render target, which stores the result in a texture that you can attach to other materials later.
Take a look at this example:
They perform position calculations in a fragment shader.
These positions get stored in a render target's texture.
The texture is then passed to a plane to displace the vertex.y positions.
Here's some pseudocode on how it could be achieved:
// Create renderTarget
const renderTarget = new THREE.WebGLRenderTarget(width, height);
// Perform GPU calculations, store result in renderTarget.texture
renderer.setRenderTarget(renderTarget);
renderer.render(calculationScene, calculationCamera);
// Resulting texture can now be assigned to materials
object.material.map = renderTarget.texture;
// Now we render to canvas as usual
renderer.setRenderTarget(null);
renderer.render(scene, camera);
This texture data can stand in for vec3 or vec4 values if you use the RGB or RGBA channels, respectively.
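If the evenly spaced points from the original question are computed once on the CPU, a THREE.DataTexture is a lighter-weight way to get them into a texture that the vertex shader can sample. Here's a minimal sketch, assuming a hypothetical computeEvenlySpacedPoints() helper that returns an array of THREE.Vector3 and a ShaderMaterial with a uCurvePoints uniform:
// Pack the precomputed curve samples into a 1 x N float texture (one RGBA texel per point)
const points = computeEvenlySpacedPoints(curve, 512); // hypothetical helper, 512 is illustrative
const data = new Float32Array(points.length * 4);
points.forEach((p, i) => data.set([p.x, p.y, p.z, 1.0], i * 4));
const lookup = new THREE.DataTexture(data, points.length, 1, THREE.RGBAFormat, THREE.FloatType);
lookup.needsUpdate = true;
// Expose it to the shader; float textures and vertex texture fetch must be supported by the device
material.uniforms.uCurvePoints = { value: lookup };
// In the vertex shader: vec3 p = texture2D(uCurvePoints, vec2((index + 0.5) / pointCount, 0.5)).xyz;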
Related
To improve performance/fps in a SceneKit scene, I would like to minimise the number of draw calls. The scene contains a procedurally generated city, for which I generate houses of random heights (each an SCNBox) and tile them with a single, identical repeating facade texture.
The proper way to apply the textures appears to be as follows:
let material = SCNMaterial()
material.diffuse.contents = image
material.diffuse.wrapS = SCNWrapMode.repeat
material.diffuse.wrapT = SCNWrapMode.repeat
buildingGeometry.firstMaterial = material
This works. But as written, it stretches the material to fit the size of the faces of the box. To resize the textures to maintain aspect ratio, one needs to add the following code:
material.diffuse.contentsTransform = SCNMatrix4MakeScale(sx, sy, sz)
where sx, sy and sz are appropriate scale factors derived from the size of the faces in the geometry. This also works.
But that latter approach implies that every node needs its own material, which means I cannot re-use a single material for all of the houses, so every single node requires an extra draw call.
Is there a way to use a single texture material to tile all of the houses (without stretching the texture)?
Using a surface shader modifier (SCNShaderModifierEntryPointSurface), you can modify _surface.diffuseTexcoord based on scn_node.boundingBox.
Since the bounding box is dynamically fed to the shader, all the objects can use the same shader and will benefit from instancing (reducing the number of draw calls).
The SCNShadable.h header file has more details on that.
[Updated with a JSFiddle here]
If you hover slightly outside the plane, the raycaster still thinks it's hovering over the object, because we modified the z position in the vertex shader.
For my project I have a carousel of planes (PlaneBufferGeometry and ShaderMaterial) that I need hover effects on.
However, I have one state where the planes are shrunk by animating each vertex's z coordinate in the vertex shader. In this state, my hover effects using THREE.Raycaster are broken, because the positions in the BufferGeometry attribute array aren't updated, so the raycaster still uses the same uvs as the original-sized planes.
I already tried calling the following functions for every plane p after the vertex shader runs:
p.frustumCulled = false;
p.geometry.verticesNeedUpdate = true;
p.geometry.normalsNeedUpdate = true;
p.geometry.computeBoundingBox();
p.geometry.computeBoundingSphere();
p.geometry.computeFaceNormals();
p.geometry.computeVertexNormals();
p.geometry.attributes.position.needsUpdate = true;
I also know that if I just scaled each plane using THREE.Mesh's built-in scale, the uvs would be raycast correctly, but I can't do that because there's a specific animation I can only achieve with the vertex shader.
Raycasting happens on the CPU. If you displace vertices on the GPU (via the vertex shader), raycasting can't work correctly, since the CPU-side intersection test knows nothing about the transformed vertices.
You have two options. You can apply the transformation on the CPU instead of the GPU before performing the raycast. The other option is to use a different approach, such as GPU picking, to detect interaction with a 3D object.
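For the first option, here is a minimal sketch of mirroring the displacement on the CPU before raycasting, assuming the shader's z offset can be reproduced in JavaScript by a hypothetical displaceZ(x, y, time) function:
// Recompute each vertex's z the same way the vertex shader does,
// so the intersection test runs against the deformed plane
const position = p.geometry.attributes.position;
for (let i = 0; i < position.count; i++) {
  position.setZ(i, displaceZ(position.getX(i), position.getY(i), time));
}
position.needsUpdate = true;
p.geometry.computeBoundingSphere(); // keep the raycaster's coarse test in sync
const hits = raycaster.intersectObject(p);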
I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off-screen. Each pixel of this texture represents a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per-particle attributes, like its velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attribute buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
It works, but sampling textures is slow as far as I know, and I wonder if there are better ways to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure what is available in WebGL.
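For reference, here is a minimal sketch of the two-render-target ping-pong described above; the simulation scene, camera, and uniform names are illustrative, not part of any particular library beyond three.js:
const opts = { type: THREE.FloatType, minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter };
let read = new THREE.WebGLRenderTarget(size, size, opts);  // previous positions
let write = new THREE.WebGLRenderTarget(size, size, opts); // next positions

function step() {
  simulationMaterial.uniforms.uPositions.value = read.texture; // sample last frame
  renderer.setRenderTarget(write);
  renderer.render(simulationScene, simulationCamera);          // full-screen quad pass
  renderer.setRenderTarget(null);
  [read, write] = [write, read];                               // swap the targets
  particleMaterial.uniforms.uPositions.value = read.texture;   // vertex shader lookup
}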
I'm drawing a fairly simple 2D scene containing only rectangles. I have one FloatBuffer into which I put X, Y, Z, R, G, B, A, U, and V data for each vertex.
I draw using glDrawArrays and GL_TRIANGLE_STRIP, keeping the rectangles separate with degenerate vertices.
To facilitate the use of multiple textures, I keep separate float arrays for each texture's draw calls. The texture is bound, its float array is put into the FloatBuffer, and I draw.
The next texture is then bound, and this continues until I have drawn all of my textures for this render.
I use an Orthographic projection so I can use the Z coordinates and GL_DEPTH_TEST for setting depth independently of the draw order.
To use alpha blending, every piece of advice on the internet seems to say:
GLES20.glEnable(GLES20.GL_BLEND);
GLES20.glBlendFunc(GLES20.GL_SRC_ALPHA, GLES20.GL_ONE_MINUS_SRC_ALPHA);
This works fine within each texture's draw call, because I sort that buffer from back to front before drawing. But I have no way to correctly draw texture2 under a partially transparent texture1, because texture1 is drawn before texture2: the depth test says texture1 is in front, so it chops off the overlapping part of texture2.
The only ways I see around this are 1) using only one texture in the whole program, or 2) not using transparent textures. Neither of these is an acceptable option.
Basically, I need a way to have alpha blending without needing to sort back-to-front. Is this possible?
It sounds like you might need to do Depth Peeling. Here's a PDF that shows how to do it.
I'm trying to do this: call getImageData() on different positions of the same canvas element and make each of the image-data chunks the texture of an individual particle in a particle system. I don't want the whole system to have the same texture; rather, each particle's texture has to correspond to a chunk of the image on the canvas. Once I have my imgData[i] array filled with the information, how can I assign each of its elements to the texture value of each particle? (Remember, I want each particle to have a different texture that corresponds to each element in the imgData[i] array.)
ParticleSystem only supports a single texture per system, so all particles share that texture.