In a scenario where vertices are displaced in the vertex shader, how can their transformed positions be retrieved in WebGL / Three.js?
Other questions here suggest writing the positions to a texture and then reading the pixels, but the resulting values don't seem to be correct.
In the example below the position is passed to the fragment shader without any transformations:
// vertex shader
varying vec4 vOut;
void main() {
    gl_Position = vec4(position, 1.0);
    vOut = vec4(position, 1.0);
}
// fragment shader
varying vec4 vOut;
void main() {
    gl_FragColor = vOut;
}
Then reading the output texture, I would expect pixel[0].r to be identical to positions[0].x, but that is not the case.
Here is a jsfiddle showing the problem:
https://jsfiddle.net/brunoimbrizi/m0z8v25d/2/
What am I missing?
Solved. Quite a few things were wrong with the jsfiddle mentioned in the question.
width * height should be equal to the vertex count. A PlaneBufferGeometry with 4 by 4 segments results in 25 vertices. 3 by 3 results in 16. Always (w + 1) * (h + 1).
The positions in the vertex shader need a nudge of 1.0 / width.
The vertex shader needs to know about width and height, they can be passed in as uniforms.
Each vertex needs an attribute with its index so it can be correctly mapped.
Each position should be one pixel in the resulting texture.
The geometry should be drawn as gl.POINTS with gl_PointSize = 1.0, so that each vertex writes exactly one pixel of the resulting texture.
Working jsfiddle: https://jsfiddle.net/brunoimbrizi/m0z8v25d/13/
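For reference, here is a minimal vertex-shader sketch of that mapping (not the fiddle's exact code: uWidth, uHeight and aIndex are assumed names for the uniforms and the per-vertex index attribute, and the half-texel offset plays the role of the nudge mentioned above):
// vertex shader sketch; pairs with the pass-through fragment shader from the question
attribute float aIndex;   // per-vertex index, 0 .. vertexCount - 1 (assumed attribute)
uniform float uWidth;     // render target width  = segmentsX + 1 (assumed uniform)
uniform float uHeight;    // render target height = segmentsY + 1 (assumed uniform)
varying vec4 vOut;
void main() {
    // map the vertex index to the centre of its target pixel, then to clip space
    float col = mod(aIndex, uWidth);
    float row = floor(aIndex / uWidth);
    vec2 pixel = (vec2(col, row) + 0.5) / vec2(uWidth, uHeight); // 0..1
    gl_Position = vec4(pixel * 2.0 - 1.0, 0.0, 1.0);
    gl_PointSize = 1.0;
    vOut = vec4(position, 1.0); // the (displaced) position becomes the pixel colour
}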
You're not writing the vertices out correctly.
https://jsfiddle.net/ogawzpxL/
First off, you're clipping the geometry, so your vertices actually end up outside the view and you see the middle of the quad without any vertices.
You can use the uv attribute to render the entire quad in the view.
gl_Position = vec4( uv * 2. - 1. , 0. ,1.);
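In context, that line would sit in a pass-through vertex shader along these lines (a sketch using the attributes Three.js provides, not the fiddle's exact code):
varying vec4 vOut;
void main() {
    // spread the quad across the whole viewport so no vertex gets clipped
    gl_Position = vec4(uv * 2.0 - 1.0, 0.0, 1.0);
    vOut = vec4(position, 1.0); // still write out the untransformed position
}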
Everything in the buffer represents some point on the quad. What seems to be tricky is that when you render, the pixel will sample right next to your vertex. In the fiddle I've applied an offset in world space by the amount it would be in pixel space, and it didn't really work.
The reason why it seems to work with points is that this is all probably wrong :) If you want to transform only the vertices, then you need to store them properly in the texture. You can use points for this, but ideally they wouldn't be spaced out so much. In your scenario, they would fill the first couple of rows of the texture (since it's much larger than it could be).
You might start running into problems as soon as you try to apply this to something other than PlaneGeometry, in which case the problem has to be broken down further.
I draw the particles in my game as capsules (two GL_POINTS and two GL_TRIANGLES). Everything is nicely batched so that I draw the triangles first, then the points second (two draw calls total).
My problem is that in OpenGL ES you have to round GL_POINTS yourself, which I have been doing like this in the fragment shader:
precision highp float;
varying float outColor;

vec3 hsv2rgb(vec3 c)
{
    vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
    vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
    return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}

void main()
{
    vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
    gl_FragColor = vec4(hsv2rgb(vec3(outColor / 360.0, 1.0, 1.0)), step(dot(circCoord, circCoord), 1.0));
}
The problem is that I also need depth testing: because each particle is drawn across two separate draw calls, its z ordering is sometimes not correct without a depth buffer.
Now that I have the depth buffer going, it is not mixing well with the rounded points: instead of rounded particles I get a black area around each one. Any ideas?
Some extra notes:
I am on iOS OpenGL ES (tile-based deferred rendering, I believe).
Each particle is initially defined as two points: the current location and the location it was in last frame. These two points are drawn with GL_POINTS later. The rectangle part is then made from two triangles, oriented by finding a vector perpendicular to the vector between the two points.
Also the particles are already sorted in front to back order. Technically their z position is arbitrary, I just need them to be intact.
I suspect that's your problem.
Points are square. You can fiddle the blending to make them appear round, but the geometry (and hence the depth value) is still square. Things behind the point are being Z-failed by the corners, which are outside of the coloured round region.
The only fix for this without changing your algorithm completely is either to use a triangle mesh rather than a point (so the actual geometry is round), or to discard fragments in the fragment shader for point pixels which are outside of the round region you want to keep.
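For the discard route, here is a sketch of how the fragment shader from the question could change (only the alpha test becomes a discard, so anything outside the circle writes neither colour nor depth):
void main()
{
    vec2 circCoord = 2.0 * gl_PointCoord - 1.0;
    // kill the square corners of the point so they never write depth
    if (dot(circCoord, circCoord) > 1.0) {
        discard;
    }
    gl_FragColor = vec4(hsv2rgb(vec3(outColor / 360.0, 1.0, 1.0)), 1.0);
}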
Note that using discard in shaders can be relatively expensive, so check the performance of that approach ...
I'm trying to create a shader that converts FFT data (passed as a texture) to a bar graph and then maps it onto a circle in the center of the screen. Here is an image of what I'm trying to achieve: link to image
I experimented a bit with Shadertoy and came up with this shader: link to shadertoy
With all the complex shaders I saw on Shadertoy, I thought this should be doable with maths somehow.
Can anybody here give me a hint on how to do it?
It’s very doable — you just have to think about the ranges you’re sampling in. In your Shadertoy example, you have the following:
float r = length(uv);
float t = atan(uv.y, uv.x);
fragColor = vec4(texture2D(iChannel0, vec2(r, 0.1)));
So r is going to vary roughly from 0…1 (extending past 1 in the corners), and t—the angle of the uv vector—is going to vary from 0…2π.
Currently, you’re sampling your texture at (r, 0.1)—in other words, every pixel of your output will come from the V position 10% down your source texture and varying across it. The angle you’re calculating for t isn’t being used at all. What you want is for changes in the angle (t) to move across your texture in the U direction, and for changes in the distance-from-center (r) to move across the texture in the V direction. In other words, this:
float r = length(uv);
float t = atan(uv.y, uv.x) / 6.283; // normalize it to a [0,1] range - 6.283 = 2*pi
fragColor = vec4(texture2D(iChannel0, vec2(t, r)));
For the source texture you provided above, you may find your image appearing “inside out”, in which case you can subtract r from 1.0 to flip it.
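That flip is just a change to the sample coordinate, for example:
fragColor = vec4(texture2D(iChannel0, vec2(t, 1.0 - r)));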
Using DirectX 11, I'm working on a graphics effect system that uses a geometry shader to build quads in world space. These quads then use a fragment shader in which the main texture is the rendered scene texture, effectively producing post-process effects on world space quads. The simplest of these is a tint effect.
The vertex shader only passes the data through to the geometry shader.
The geometry shader calculates extra vertices based on a normal. Using cross products, I find the x and z axes and append the tri-stream with 4 new verts in each diagonal direction from the original position (generating a quad from the given position and size).
The pixel shader (tint effect) simply multiplies the scene texture colour with the colour variable set.
The quad is generated and displays correctly on screen. However:
The problem that I am facing is that the mapping of the uv coordinates fails to align with the image in the back buffer. That is, when using the tint shader with half alpha as the given colour, you can see that the image displayed on the quad does not overlay the image in the back buffer perfectly, unless the quad is facing towards the camera. The closer the quad's normal gets to the camera's y axis, the more the image is skewed.
I am currently using the formula below to calculate the uv coordinates:
float2 uv = vert0.position.xy / vert0.position.w;
vert0.uv.x = uv.x * 0.5f + 0.5f;
vert0.uv.y = -uv.y * 0.5f + 0.5f;
I have also used the formula below, which resulted (IMO) in the UVs not taking perspective into consideration.
float2 uv = vert0.position.xy / SourceTextureResolution;
vert0.uv.x = uv.x * ASPECT_RATIO + 0.5f;
vert0.uv.y = -uv.y + 0.5f;
Question:
How can I obtain screen space uv coordinates based on a vertex position generated in the geometry shader?
If you would like me to elaborate on any points please ask and I will try my best :)
Thanks in advance.
I am frustratingly close to working skeletal animation in WebGL.
Background
I have a model with a zombie walk animation that I got for free. I downloaded the entire thing in a single Collada file. I wrote a parser to get all the vertices, normals, joint influence indices/weights, and joint matrices. I am able to render the character in its bind pose by doing
joints[i].skinning_matrix = MatrixMultiply(joints[i].inverse_bind_pose_matrix, joints[i].world_matrix);
where a joint's world matrix is the joint's bind_pose_matrix multiplied by its parent's world matrix. I got the inverse_bind_pose_matrix by doing:
joints[i].inverse_bind_pose_matrix = MatrixInvert(joints[i].world_matrix);
So really, rendering the character in its bind pose is just passing Identity matrices to the shader, so maybe I'm not even doing that part right at all. However, the inverse bind pose matrices that I calculate are nearly identical to the ones supplied by the Collada file, so I'm pretty sure those are good.
Here's my model in its bind pose:
Problem
Once I go ahead and try to calculate the skinning matrix using a single frame of the animation (I chose frame 10 at random), it still resembles a man but something is definitely wrong.
I'm using the same inverse_bind_pose_matrix that I calculated in the first place. I'm using a new world matrix, calculated instead by multiplying each joint's keyframe/animation matrix by its parent's new world matrix.
I'm not doing any transposing anywhere in my entire codebase, though I think I've tried transposing pretty much any combination of matrices to no avail.
Here is my model in his animation frame 10 pose:
Some Relevant Code
vertex shader:
attribute vec4 aPosition;
attribute float aBoneIndex1;  // up to aBoneIndex5
attribute float aBoneWeight1; // up to aBoneWeight5
uniform mat4 uBoneMatrices[52];
uniform mat4 uPMatrix;
uniform mat4 uMVMatrix;

void main(void) {
    vec4 vertex = vec4(0.0, 0.0, 0.0, 0.0);
    vertex += aBoneWeight1 * vec4(uBoneMatrices[int(aBoneIndex1)] * aPosition);
    vertex += aBoneWeight2 * vec4(uBoneMatrices[int(aBoneIndex2)] * aPosition);
    vertex += aBoneWeight3 * vec4(uBoneMatrices[int(aBoneIndex3)] * aPosition);
    vertex += aBoneWeight4 * vec4(uBoneMatrices[int(aBoneIndex4)] * aPosition);
    vertex += aBoneWeight5 * vec4(uBoneMatrices[int(aBoneIndex5)] * aPosition);
    // normal/lighting part
    // the "/ 90.0" simply scales the model down; the problem persists without it.
    gl_Position = uPMatrix * uMVMatrix * vec4(vertex.xyz / 90.0, 1.0);
}
You can see my parser in its entirety (if you really really want to...) on GitHub and you can see the model live here.
I am working on my own deferred rendering engine. I am rendering the scene to a g-buffer containing diffuse colour, view-space normals and depth (for now). I have implemented a directional light for the second rendering stage and it works great. Now I want to render a point light, which is a bit harder.
I need the point light position in view space for the shader, because I only have depth in the g-buffer and I can't afford a matrix multiplication for every pixel. I took the light position and transformed it by the same matrix by which I transform every vertex in the shader, so it should align with the vertices in the scene (using D3DXVec3Transform). But that isn't the case: the transformed position doesn't come close to representing the view-space position. Its x,y coordinates are off the charts, often way outside the (-1,1) range. The transformed position respects the camera orientation somewhat, but the light moves too quickly and the y axis is inverted. Only when the camera is at (0,0,0) does the light stand at (0,0) in the center of the screen. Here is my relevant rendering code, executed every frame:
D3DXMATRIX matView; // the view transform matrix
D3DXMATRIX matProjection; // the projection transform matrix
D3DXMatrixLookAtLH(&matView,
&D3DXVECTOR3 (x,y,z), // the camera position
&D3DXVECTOR3 (xt,yt,zt), // the look-at position
&D3DXVECTOR3 (0.0f, 0.0f, 1.0f)); // the up direction
D3DXMatrixPerspectiveFovLH(&matProjection,
fov, // the horizontal field of view
asp, // aspect ratio
znear, // the near view-plane
zfar); // the far view-plane
D3DXMATRIX vysl=matView*matProjection;
eff->SetMatrix("worldViewProj",&vysl); //vertices are transformed ok in shader
//render g-buffer
D3DXVECTOR4 lpos; D3DXVECTOR3 lpos2(0,0,0);
D3DXVec3Transform(&lpos,&lpos2,&vysl); //transforming lpos2 into lpos using vysl, still the same matrix
eff->SetVector("poslight",&lpos); //but there is already a mess in lpos at this time
//render the fullscreen quad with wrong lighting
The shader code is not that relevant, but still, this is how I use the light position (passing IN.texture0 is just me being lazy):
float dist=length(float2(IN.texture0*2-1)-float2(poslight.xy));
OUT.col=tex2D(Sdiff,IN.texture0)/dist;
I have tried transforming the light by matView only, without the projection, but the problem is still the same. If I transform the light in the shader, the result is the same, so the problem is the matrix itself. But it is the same matrix that transforms the vertices! How are the vertices treated differently?
Can you please take a look at the code and tell me where the mistake is? It seems to me it should work ok, but it doesn't. Thanks in advance.
You don't need a matrix multiplication to reconstruct the view-space position; here is a code snippet (from Andrew Lauritzen's deferred lighting example).
tP is the projection transform, positionScreen is the -1..1 screen coordinate, and viewSpaceZ is the linear depth that you sample from your texture.
float3 ViewPosFromDepth(float2 positionScreen, float viewSpaceZ)
{
    float2 screenSpaceRay = float2(positionScreen.x / tP._11,
                                   positionScreen.y / tP._22);
    float3 positionView;
    positionView.z = viewSpaceZ;
    positionView.xy = screenSpaceRay.xy * positionView.z;
    return positionView;
}
The result of this transform, D3DXVec3Transform(&lpos,&lpos2,&vysl);, is a vector in homogeneous space (i.e. a projected vector that has not been divided by w). But in your shader you use its xy components without respecting this (w). This is (quite probably) the problem. You could divide the vector by its w yourself, or use D3DXVec3Project instead of D3DXVec3Transform.
It works fine for the vertices because (I suppose) you multiply them by the same view-projection matrix in the vertex shader and pass the transformed values to the interpolator, where the hardware eventually divides xyz by the interpolated w.