morphing animation in vertex shader - opengl-es

To interpolate vertex positions in the vertex shader for a morphing animation between two morph targets, I send two vertex positions to the shader. Right now I have a mesh with about 600 morph targets, and I have a feeling that it is not a good idea to try to send 600 vertex positions to the vertex shader. Could someone please tell me the correct way to animate an object with so many morph targets?
P.S. I'm very new to 3d programming.

The idea is simple: only send the morph target data you actually intend to use.
Put all of your morph target data in a single buffer. When it comes time to render, use glVertexAttribPointer on each of the two position attributes you are morphing between. Use that function to specify the byte offset for that particular morph target.
As an example, imagine the following buffer memory layout:
|----Morph Target 0----|----Morph Target 1----|----...----|----Morph Target N----|
K                      L                      M           Z
To blend between targets 0 and 1, you call glVertexAttribPointer(0, ..., (void*)K) and glVertexAttribPointer(1, ..., (void*)L), where K and L are the offsets of the morph targets in the buffer.
Obviously, you can only blend between two morph targets at a time. But you can blend between any two.
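As a hedged sketch (written as WebGL/JavaScript, which mirrors the GL ES calls; the buffer, vertex count, and uniform names are placeholders), the binding logic might look like this:

```javascript
// All morph targets packed into one buffer: 3 floats per vertex, vertexCount vertices each.
const bytesPerTarget = vertexCount * 3 * Float32Array.BYTES_PER_ELEMENT;
gl.bindBuffer(gl.ARRAY_BUFFER, morphBuffer);

function bindMorphPair(targetA, targetB) {
  // Attribute 0 is the "from" position, attribute 1 the "to" position.
  gl.enableVertexAttribArray(0);
  gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, targetA * bytesPerTarget);
  gl.enableVertexAttribArray(1);
  gl.vertexAttribPointer(1, 3, gl.FLOAT, false, 0, targetB * bytesPerTarget);
}

// Blend between targets 0 and 1; the vertex shader then does something like
//   vec3 p = mix(positionA, positionB, uWeight);
bindMorphPair(0, 1);
gl.uniform1f(weightLocation, 0.25); // 25% of the way from target 0 to target 1
```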
I have a feeling that it is not a good idea to try to send 600 vertex positions to the vertex shader.
It's more that it's not possible. The number of attributes is limited by the implementation, and that number is usually small (between 16 and 32).

Related

THREE.js adding bullets as sprites and rotating each individually

I have been working on programming a game where everything is rendered in 3D, though the bullets are 2D sprites. This poses a problem: I have to rotate the bullet sprite by rotating the material, which turns every bullet sharing that material rather than just the individual sprite I want to turn. It is also kind of inefficient to create a new sprite clone for every bullet. Is there a better way to do this? Thanks in advance.
Rotate the sprite itself instead of the texture.
edit:
As the OP mentioned, the SpriteMaterial controls the sprite's rotation, so setting it manually on the sprite does nothing.
So instead of using the Sprite type, you could use a regular PlaneGeometry mesh with a MeshBasicMaterial or similar, and update the transforms yourself to both keep the sprite facing the camera and keep it rotated toward its trajectory.
Then at least you can share the material amongst all instances.
Then the performance bottleneck becomes the number of draw calls (one per sprite).
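A minimal sketch of that per-sprite plane approach, assuming a shared texture and that each bullet stores its own heading (the names below are placeholders):

```javascript
// One PlaneGeometry and one material, shared by every bullet.
const bulletGeometry = new THREE.PlaneGeometry(1, 1);
const bulletMaterial = new THREE.MeshBasicMaterial({ map: bulletTexture, transparent: true });

function spawnBullet(position, headingAngle) {
  const mesh = new THREE.Mesh(bulletGeometry, bulletMaterial); // geometry + material shared
  mesh.position.copy(position);
  mesh.userData.heading = headingAngle;
  scene.add(mesh);
  return mesh;
}

// Each frame: face the camera, then roll the individual bullet toward its trajectory.
function updateBullet(mesh, camera) {
  mesh.quaternion.copy(camera.quaternion); // screen-aligned billboard
  mesh.rotateZ(mesh.userData.heading);     // per-bullet rotation, material untouched
}
```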
You can improve on that by using a single BufferGeometry and computing the 4 screen-space vertices for each sprite, each frame. This moves the bottleneck away from draw calls, and will be limited by the speed at which you can transform vertices in JavaScript, which is slow but not the end of the world. This is also how many THREE.js particle systems are implemented.
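A hedged sketch of that single-geometry approach (sprite objects with a position and size are assumed; an index buffer with two triangles per quad, set once with geometry.setIndex, is also needed):

```javascript
// One BufferGeometry holding 4 corners per sprite, repositioned in JavaScript every frame.
const positions = new Float32Array(spriteCount * 4 * 3);
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

const corners = [[-0.5, -0.5], [0.5, -0.5], [0.5, 0.5], [-0.5, 0.5]];
const right = new THREE.Vector3(), up = new THREE.Vector3();

function updateSprites(sprites, camera) {
  camera.matrixWorld.extractBasis(right, up, new THREE.Vector3()); // camera-aligned axes
  sprites.forEach((s, i) => {
    corners.forEach((c, j) => {
      const v = right.clone().multiplyScalar(c[0] * s.size)
        .addScaledVector(up, c[1] * s.size)
        .add(s.position);
      v.toArray(positions, (i * 4 + j) * 3);
    });
  });
  geometry.attributes.position.needsUpdate = true; // re-upload the transformed corners
}
```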
The next step beyond that is to use a custom vertex shader to do the heavy vertex computation. You still update the BufferGeometry each frame, but instead of transforming vertices, you just write the position of the sprite into each of its 4 vertices and let the vertex shader figure out which of the 4 corners it is transforming (based on the UV coordinate, or a value stored in one of the vertex color channels, .r for instance) and which sprite to render from your sprite atlas (a single texture/canvas with all your sprites laid out on a grid), encoded in the .g of the vertex color.
The next step beyond that is to not update the BufferGeometry every frame at all, but to store both the position and the velocity of the sprite in the vertex data and only pass a time offset uniform into the vertex shader. The vertex shader can then handle integrating the sprite position over a longer time period. This only works for sprites that have deterministic behavior, or behavior that can be derived from a texture data source like a noise texture or warping texture. Things like smoke, explosions, etc.
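A hedged sketch of that last step using a three.js ShaderMaterial (attribute and uniform names are placeholders; the geometry is drawn as THREE.Points):

```javascript
// Position + velocity live in the vertex data; only a time uniform changes per frame.
const material = new THREE.ShaderMaterial({
  uniforms: { uTime: { value: 0 } },
  vertexShader: `
    attribute vec3 velocity;   // uploaded once at emit time, never re-uploaded
    uniform float uTime;
    void main() {
      vec3 p = position + velocity * uTime; // deterministic integration in the shader
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
      gl_PointSize = 4.0;
    }`,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }`
});

geometry.setAttribute('velocity', new THREE.BufferAttribute(velocities, 3));
const points = new THREE.Points(geometry, material);

// Render loop: only the uniform changes, the geometry stays on the GPU untouched.
function animate(timeSeconds) {
  material.uniforms.uTime.value = timeSeconds;
}
```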
You can extend these techniques to draw gigantic scrolling tilemaps. I've used them to make multilayer scrolling/zoomable hexmaps that were 2048 hexes square (a pretty huge map, around 4M triangles), with multiple layers of sprites on top of that, at 60 Hz.
Here is the original Stemkoski particle system for reference:
http://stemkoski.github.io/Three.js/Particle-Engine.html
and:
https://stemkoski.github.io/Three.js/ParticleSystem-Dynamic.html

rendering millions of voxels using 3D textures with three.js

I am using three.js to render a voxel representation as a set of triangles. I have got it to render 5 million triangles comfortably, but that seems to be the limit. You can view it online here.
Select the Dublin model at resolution 3 to see a lot of triangles being drawn.
I have used every trick to get it this far (buffer geometry, voxel culling, multiple buffers), but I think it has hit the maximum of what OpenGL triangles can accomplish.
Large amounts of voxels are normally rendered as a set of images in a 3D texture, and while there are several posts on how to hack 2D textures into 3D textures, they seem to have a maximum limit on the texture size.
I have searched for tutorials or examples using this approach but haven't found any. Has anyone used this approach before with three.js?
Your scene is rendered twice, because SSAO needs a depth texture. You could use the WEBGL_depth_texture extension, which has pretty good support, so you only need a single render pass. You can still fall back to the low-performance double pass if the extension is unavailable.
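A hedged sketch of that check, assuming a three.js WebGLRenderer and an existing SSAO render target (ssaoTarget, width and height are placeholders; newer three.js versions expose the depth attachment as THREE.DepthTexture):

```javascript
// Probe for the extension, then pick the render path.
const gl = renderer.getContext();
const hasDepthTexture = !!gl.getExtension('WEBGL_depth_texture');

if (hasDepthTexture) {
  // Single pass: attach a depth texture to the SSAO render target.
  ssaoTarget.depthTexture = new THREE.DepthTexture(width, height);
} else {
  // Fallback: keep the slower two-pass path that renders depth separately.
}
```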
Your voxel material is double-sided. That may be on purpose, but it can create a huge amount of overdraw.
In your demo, you use a MeshPhongMaterial and directional lights. That is a needlessly complex material; your geometries don't have any normals, so you can't get any lighting from it anyway. Try a simpler unlit material.
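For example, something like this hedged sketch (the colour and vertex-colour choice are assumptions):

```javascript
// Unlit replacement for the MeshPhongMaterial in the demo.
const voxelMaterial = new THREE.MeshBasicMaterial({
  color: 0x808080,        // or vertexColors: true if each voxel carries its own colour
  side: THREE.FrontSide   // single-sided, to avoid the overdraw mentioned above
});
```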
Your goal is to render a huge number of vertices, so assuming the framerate is bound by the vertex shader:
Try a tool like https://github.com/GPUOpen-Tools/amd-tootle to preprocess your geometries, focusing on the vertex prefetch cache and the post-transform vertex cache.
Reduce the bandwidth used by your vertex buffers. Since your vertices are aligned on a grid, you could store vertex positions as 3 shorts instead of 3 floats, halving your VBO size (see the sketch after this list). You could use the same trick for normals if you had any, since all the normals of cubes should be axis-aligned.
Generally, reduce the number of varyings needed by the fragment shader.
If you need more attributes than just the vec3 position, use a single interleaved VBO instead of one per attribute.
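A hedged sketch of the short-position idea in three.js (assuming a recent version, which uploads an Int16Array attribute as gl.SHORT):

```javascript
// Grid-aligned positions stored as 16-bit integers instead of 32-bit floats.
const positions = new Int16Array(vertexCount * 3); // fill with integer grid coordinates
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));

// Roughly the raw WebGL call this sets up under the hood:
// gl.vertexAttribPointer(positionLocation, 3, gl.SHORT, false, 0, 0);
```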

How to store and access per fragment attributes in WebGL

I am doing a particle system in WebGL using Three.js, and I want to do all the computation of the particles in the shaders. To achieve that, the positions (for example) of the particles are stored in a texture which is sampled by the vertex shader of each particle (POINT primitive).
The position texture is in fact two render targets which are swapped each frame after being updated off screen. Each pixel of this texture represents a particle.
To update a position, I read one of the render targets (texture2D), do some computation, and write to the other render target (fragment output).
To perform the "do some computation" step, I need some per-particle attributes, like its velocity (and a lot of others). Since this step is done in the fragment shader, I can't use the vertex attribute buffers, so I have to store these properties in separate textures and sample each of them in the fragment shader.
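For reference, a hedged sketch of the ping-pong setup described above (names are placeholders, using the newer renderer.setRenderTarget API):

```javascript
let readTarget  = new THREE.WebGLRenderTarget(size, size, { type: THREE.FloatType });
let writeTarget = new THREE.WebGLRenderTarget(size, size, { type: THREE.FloatType });

function stepSimulation(renderer, simScene, simCamera, simMaterial, particleMaterial) {
  simMaterial.uniforms.uPositions.value = readTarget.texture;  // sample last frame's positions
  renderer.setRenderTarget(writeTarget);                       // write this frame's positions
  renderer.render(simScene, simCamera);
  renderer.setRenderTarget(null);

  particleMaterial.uniforms.uPositions.value = writeTarget.texture; // the points sample the result
  [readTarget, writeTarget] = [writeTarget, readTarget];            // swap for the next frame
}
```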
It works, but sampling textures is slow as far as I know, and I wonder if there is a better way to do this, like having one vertex per particle, each rendering a single fragment of the position texture.
I know that OpenGL 4 has some alternative ways to deal with this, like UBOs or SSBOs, but I'm not sure about WebGL.

Webgl texture atlas

I would like to ask for help with a WebGL engine I am making. I am stuck on texture atlases. There is a texture containing 2x2 pictures, and I map its upper-left quarter onto a surface (the texture coordinates are 0-0.5, 0-0.5).
This works properly, but when I look at the surface from afar, the tiles all blur together and give strange-looking colours. I think this is because I use automatically generated mipmaps, and when viewed from afar the texture unit uses the 1x1 mipmap level, where the 4 tiles are blended together into one pixel.
It was suggested that I generate the mipmaps myself and cap the maximum level (GL_TEXTURE_MAX_LEVEL), but that is not supported by WebGL. It was also suggested that I use the textureLod function in the fragment shader, but WebGL only lets me use it in the vertex shader.
The only solution seems to be the bias value that can be given as the third parameter of the fragment shader's texture2D function, but with this I can only offset the mipmap LOD, not set its actual value.
My idea is to use the depth value (the distance from the camera) to drive the bias (making it more and more negative with distance), so that at greater distances the smallest mipmap levels are never used and the sample always comes from a higher-resolution mipmap level. The issue with this is that I must also account for the angle of the given surface to the camera, because the LOD value depends on that as well.
So the bias would be something like the depth plus some function of the angle. I would like to ask for help calculating this. If anyone has other ideas concerning WebGL texture atlases, I would gladly use them.
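A heavily hedged sketch of the bias mechanism in a WebGL1 fragment shader (the depth-based heuristic and its constants are placeholders, not a derived formula):

```javascript
const fragmentShader = `
  precision mediump float;
  uniform sampler2D uAtlas;
  varying vec2 vUv;
  varying float vViewDepth;   // e.g. -viewPosition.z, passed from the vertex shader
  void main() {
    // A negative bias selects a sharper mip; scale it with distance so distant
    // fragments never fall to the 1x1 level where the atlas tiles bleed together.
    float bias = -clamp(log2(vViewDepth * 0.1), 0.0, 4.0);
    gl_FragColor = texture2D(uAtlas, vUv, bias);
  }
`;
```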

Mapping corners to arbitrary positions using Direct2D

I'm using WIC and Direct2D (via SharpDX) to composite photos into video frames. For each frame I have the exact coordinates where each corner will be found. While the photos themselves are a standard aspect ratio (e.g. 4:3 or 16:9) the insertion points are not -- they may be rotated, scaled, and skewed.
Now I know in Direct2D I can apply matrix transformations to accomplish this... but I'm not exactly sure how. The examples I've seen are more about applying specific transformations (e.g. rotate 30 degrees) than trying to match an exact destination.
Given that I know the exact coordinates (A,B,C,D) above, is there an easy way to map the source image onto the target? Alternately how would I generate the matrix given the source and destination coordinates?
If Direct3D is an option, all you need to do is render the quadrilateral as two triangles (with the frog texture mapped onto it).
To make sure there are no artifacts, render the quad as an indexed mesh that shares vertex 0 and vertex 2 across both triangles. Of course, you can replace the actual vertex coordinates with A, B, C and D.
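A hedged, language-agnostic sketch of that layout (shown here as plain arrays; A, B, C, D are the corner coordinates from the question):

```javascript
const vertices = [A, B, C, D];         // each entry: corner position plus texture coordinate
const indices  = [0, 1, 2,  0, 2, 3];  // triangle A-B-C and triangle A-C-D share vertices 0 and 2
```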
To begin, you can check out these tutorials for SlimDX, an excellent set of .NET bindings to DirectX.
It is not really possible to achieve this with Direct2D. It could be possible with Direct2D 1.1 (from Win8 Metro) with a custom vertex shader, but in the end, as ananthonline suggests, it will be much easier to do it with Direct3D 11.
You can also use triangle-strip primitives, which are easier to set up (you don't need to create an index buffer). For the coordinates, you can send them directly to a vertex shader without any transforms (the vertex shader copies the input SV_POSITION straight through to the pixel shader). You just have to map your coordinates into x [-1,1] and y [-1,1]. I suggest you start with the SharpDX MiniCubeTexture sample and change the matrix to perform an orthographic projection (instead of the sample's perspective projection).
