Unprojecting screen coords to world in OpenGL ES 2.0

Long time listener, first time caller.
So I have been playing around with the Android NDK, and I'm at a point where I want to unproject a tap to world coordinates, but I can't make it work.
The problem is that the x and y values of both the near and far points are the same, which doesn't seem right for a perspective projection. Everything in the scene draws OK, so I'm a bit confused why it wouldn't unproject properly. Anyway, here is my code; please help, thanks!
//x and y are the normalized screen coords
ndk_helper::Vec4 nearPoint = ndk_helper::Vec4(x, y, 1.f, 1.f);
ndk_helper::Vec4 farPoint = ndk_helper::Vec4(x, y, 1000.f, 1.f);
ndk_helper::Mat4 inverseProjView = this->matProjection * this->matView;
inverseProjView = inverseProjView.Inverse();
nearPoint = inverseProjView * nearPoint;
farPoint = inverseProjView * farPoint;
nearPoint = nearPoint *(1 / nearPoint.w_);
farPoint = farPoint *(1 / farPoint.w_);

Well, after looking at the vector/matrix math code in ndk_helper, this isn't a surprise. In short: don't use it. After scanning through it for a couple of minutes, it has some obvious mistakes that look like simple typos. And the Vec4 class in particular is mostly useless for the kind of vector operations you need for graphics. Most of the operations assume that a Vec4 is a vector in 4D space, not a vector containing homogeneous coordinates in 3D space.
If you want, you can check it out here, but be prepared for a few face palms:
https://android.googlesource.com/platform/development/+/master/ndk/sources/android/ndk_helper/vecmath.h
For example, this is the implementation of the multiplication used in the last two lines of your code:
Vec4 operator*( const float& rhs ) const
{
    Vec4 ret;
    ret.x_ = x_ * rhs;
    ret.y_ = y_ * rhs;
    ret.z_ = z_ * rhs;
    ret.w_ = w_ * rhs;
    return ret;
}
This multiplies a vector in 4D space by a scalar, but it is completely wrong if you're operating with homogeneous coordinates, which explains the results you are seeing.
I would suggest that you either write your own vector/matrix library suitable for graphics-type operations, or use one of the freely available libraries that are tested and used by others.
BTW, the specific values you are using for your test look somewhat odd. You definitely should not be getting the same results for the two vectors, but even correct results would probably not be what you had in mind. For the z coordinates of your input vectors, you are using the distances of the near and far planes in eye space. But you then apply the inverse view-projection matrix to those vectors, which transforms them from clip/NDC space back into world space. So your input vectors for this calculation should be in clip/NDC space, which means the z values corresponding to the near and far planes should be -1 and 1, respectively.
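For reference, here is a minimal sketch of the unprojection done with GLM (one of those freely available, tested libraries). The function name and signature are just illustrative, not part of your code, and x and y are assumed to already be in NDC, i.e. in the [-1, 1] range:
#include <glm/glm.hpp>

// Computes the world-space points on the near and far planes under the tap.
// proj and view must be the same matrices used for rendering.
void unprojectTap( float x, float y,
                   const glm::mat4& proj, const glm::mat4& view,
                   glm::vec3& nearWorld, glm::vec3& farWorld )
{
    glm::mat4 invPV = glm::inverse( proj * view );

    // Near and far planes sit at z = -1 and z = +1 in NDC.
    glm::vec4 nearPoint = invPV * glm::vec4( x, y, -1.0f, 1.0f );
    glm::vec4 farPoint  = invPV * glm::vec4( x, y,  1.0f, 1.0f );

    // Homogeneous divide to get back to 3D world coordinates.
    nearWorld = glm::vec3( nearPoint ) / nearPoint.w;
    farWorld  = glm::vec3( farPoint )  / farPoint.w;
}
The picking ray direction is then simply farWorld - nearWorld.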

Related

depth peeling invariance in webgl (and threejs)

I'm looking at what I think is the first paper on depth peeling (the simplest such algorithm?) and I want to implement it with WebGL, using three.js.
I think I understand the concept and was able to make several peels, with some logic that looks like this:
render(scene, camera) {
  const oldAutoClear = this._renderer.autoClear
  this._renderer.autoClear = false
  setDepthPeelActive(true) // sets a global injected uniform in a singleton elsewhere, every material in the scene has onBeforeRender injected with additional logic and uniforms
  let ping
  let pong
  for (let i = 0; i < this._numPasses; i++) {
    const pingPong = i % 2 === 0
    ping = pingPong ? 1 : 0
    pong = pingPong ? 0 : 1
    const writeRGBA = this._screenRGBA[i]
    const writeDepth = this._screenDepth[ping]
    setDepthPeelPassNumber(i) // was going to try increasing the polygonOffsetUnits here globally
    if (i > 0) {
      // all but first pass write to depth
      const readDepth = this._screenDepth[pong]
      setDepthPeelFirstPass(false)
      setDepthPeelPrevDepthTexture(readDepth)
      this._depthMaterial.uniforms.uFirstPass.value = 0
      this._depthMaterial.uniforms.uPrevDepthTex.value = readDepth
    } else {
      // first pass just renders to depth
      setDepthPeelFirstPass(true)
      setDepthPeelPrevDepthTexture(null)
      this._depthMaterial.uniforms.uFirstPass.value = 1
      this._depthMaterial.uniforms.uPrevDepthTex.value = null
    }
    scene.overrideMaterial = this._depthMaterial
    this._renderer.render(scene, camera, writeDepth, true)
    scene.overrideMaterial = null
    this._renderer.render(scene, camera, writeRGBA, true)
  }
  this._quad.material = this._blitMaterial
  // this._blitMaterial.uniforms.uTexture.value = this._screenDepth[ping]
  this._blitMaterial.uniforms.uTexture.value = this._screenRGBA[this._currentBlitTex]
  console.log(this._currentBlitTex)
  this._renderer.render(this._scene, this._camera)
  this._renderer.autoClear = oldAutoClear
}
I'm using gl_FragCoord.z to do the test, and packing the depth into an 8-bit RGBA texture, with a shader that looks like this:
float depth = gl_FragCoord.z;
vec4 pp = packDepthToRGBA( depth );
if( uFirstPass == 0 ){
    float prevDepth = unpackRGBAToDepth( texture2D( uPrevDepthTex, vSS ) );
    if( depth <= prevDepth + 0.0001 ) {
        discard;
    }
}
gl_FragColor = pp;
Varying vSS is computed in the vertex shader, after the projection:
vSS.xy = gl_Position.xy * .5 + .5;
The basic idea seems to work and I get peels, but only if I use the fudge factor. It looks like it fails, though, as the angle gets more obtuse (which is why polygonOffset needs both the factor and the units, to account for the slope?).
I didn't understand at all how the invariance is solved. I don't understand how the mentioned extension is being used other than it seems to be overriding the fragment depth, but with what?
I must admit that I'm not sure even which interpolation is being referred to here, since every pixel is aligned; I'm just using nearest filtering.
I did see some hints about depth buffer precision, but not really understanding the issue, I wanted to try packing the depth into only three channels and see what happens.
The fact that such a small fudge factor makes it sort of work tells me that all these sampled and computed depths likely do exist in the same space. But this seems to be the same issue as when using gl.EQUAL for depth testing? For shits and giggles I tried to override the depth with the unpacked depth immediately after packing it, but it didn't seem to do anything.
Edit:
Increasing the polygon offset with each peel seems to have done the trick. I got some fighting with the lines, though, but I think that's because I was already using an offset to draw them, and I need to include that in the peel offset. I'd still love to understand more about the problem.
The depth buffer stores depths :) Depending on the 'near' and 'far' planes, a perspective projection tends to 'stack' the depths of most points into a short part of the buffer; it is not linear in z. You can see this for yourself by outputting a different color depending on depth and rendering a triangle that spans most of the near-far distance.
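To see why: for a standard OpenGL perspective projection with near distance $n$, far distance $f$ and eye-space depth $z_e$ (negative in front of the camera), the stored depth is

$z_{ndc} = \frac{f+n}{f-n} + \frac{2fn}{(f-n)\,z_e}, \qquad depth = \frac{z_{ndc}+1}{2}.$

With illustrative values such as $n = 0.1$ and $f = 500$, everything farther than about five units from the camera already maps to depths above 0.98.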
A shadow map stores depths (distances to the light), calculated after projection. Later, in the second or a following pass, you compare those depths; because they are 'stacked' into very similar values, some comparisons fail: hazardous variances.
You can use a finer-grained depth buffer, 24 bits instead of 16 or 8. This may solve part of the problem.
There's another issue: the perspective division, z/w, needed to get normalized device coordinates (NDC). It occurs after the vertex shader, so gl_FragDepth = gl_FragCoord.z is affected by it.
The other approach is to store depths calculated in some space that suffers neither 'stacking' nor perspective division. Camera (eye) space is one. In other words, you can calculate the depth by undoing the projection in the vertex shader.
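For example (a sketch; $n$ and $f$ are again the near/far distances and $z_e$ the eye-space z), a linear depth suitable for packing into a texture is

$d = \frac{-z_e - n}{f - n},$

which runs evenly from 0 at the near plane to 1 at the far plane.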
The article you link to is for the old fixed-function pipeline, without shaders. It shows an NVIDIA extension for dealing with these variances.

Matrix multiplication does not work in my vertex shader

Currently, I am calculating the World View Projection Matrix in my application instead of the GPU. I want to move this calculation to the GPU, but I am currently unable to do so.
Case 1 (see below) works very well, but case 2 doesn't and I have no idea what I've done wrong.
In my camera class, I calculate the View and the Projection matrices like this:
ViewMatrix = SharpDX.Matrix.LookAtLH(_cameraPosition, _lookAtPosition, SharpDX.Vector3.UnitY);
ProjectionMatrix = SharpDX.Matrix.PerspectiveFovLH((float)Math.PI / 4.0f, renderForm.ClientSize.Width / (float)renderForm.ClientSize.Height, 0.1f, 500.0f);
Then, I calculate the World matrix for each of my models during the render process:
SharpDX.Matrix worldMatrix = SharpDX.Matrix.Translation(_position);
Case 1: Calculation of matrix in my application
When rendering a model, I calculate the World View Projection Matrix like this:
SharpDX.Matrix matrix = SharpDX.Matrix.Multiply(worldMatrix, camera.ViewMatrix);
matrix = SharpDX.Matrix.Multiply(matrix, camera.ProjectionMatrix);
matrix.Transpose();
And in my vertex shader, I calculate the final position of my vertices by calling:
output.pos = mul(input.pos, WVP);
And everything works fine!
Case 2: Calculation of matrix in HLSL
Instead of calculating anything in my application, I just write the three matrices World, View and Projection into my vertex shader's constant buffer and calculate everything in HLSL:
matrix mat = mul(World, View);
mat = mul(mat, Projection);
mat = transpose(mat);
output.pos = mul(input.pos, mat);
It does not work: I don't see anything in my scene, so I assume some calculations were wrong. I checked my code several times.
Either I am blind or stupid. What did I do wrong?
In HLSL you don't need to calculate the transpose. You should also use float4x4, so it's easy to see which dimensions you're using.
Your matrices should just look like this:
float4x4 worldViewProj = mul(world, mul(view, proj));
float4 pos = mul(input.pos, worldViewProj);
Keep in mind that points are of the form float4(x, y, z, 1) and vectors are float4(x, y, z, 0).
In linear algebra, multiplication of a vector is
$p' = M \cdot p$
so you need the transpose to change the side of $M$:
$p'^T = p^T \cdot M^T$ (where $^T$ denotes the transpose)
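The same holds for a whole chain of matrices, since transposition reverses the order of a product:

$(W \cdot V \cdot P)^T = P^T \cdot V^T \cdot W^T$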
HLSL is a bit different.
The easiest way is to just multiply all the matrices from left to right and then use the mul function with your vector on the left, as in the example above.
For more information read this: HLSL mul()
I spent several days experimenting with the HLSL shaders and with my render functions. It turned out that I transposed one matrix which I shouldn't have. As a result, my whole scene was messed up. It was not much fun to debug, but it works now! :-)

GLSL Shader: FFT-Data as Circle Radius

I'm trying to create a shader that converts FFT data (passed as a texture) into a bar graph and then maps it onto a circle in the center of the screen. Here is an image of what I'm trying to achieve: link to image
I experimented a bit with Shadertoy and came up with this shader: link to shadertoy
With all the complex shaders I saw on Shadertoy, I thought this should be doable with maths somehow.
Can anybody here give me a hint on how to do it?
It’s very doable — you just have to think about the ranges you’re sampling in. In your Shadertoy example, you have the following:
float r = length(uv);
float t = atan(uv.y, uv.x);
fragColor = vec4(texture2D(iChannel0, vec2(r, 0.1)));
So r is going to vary roughly from 0…1 (extending past 1 in the corners), and t—the angle of the uv vector—is going to vary from 0…2π.
Currently, you’re sampling your texture at (r, 0.1)—in other words, every pixel of your output will come from the V position 10% down your source texture and varying across it. The angle you’re calculating for t isn’t being used at all. What you want is for changes in the angle (t) to move across your texture in the U direction, and for changes in the distance-from-center (r) to move across the texture in the V direction. In other words, this:
float r = length(uv);
float t = atan(uv.y, uv.x) / 6.283; // normalize it to a [0,1] range - 6.283 = 2*pi
fragColor = vec4(texture2D(iChannel0, vec2(t, r)));
For the source texture you provided above, you may find your image appearing “inside out”, in which case you can subtract r from 1.0 to flip it.
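Written out as the polar mapping described above (with $uv$ the centered screen coordinate), this is

$\theta = \operatorname{atan2}(uv_y,\, uv_x), \qquad r = \lVert uv \rVert, \qquad (u, v) = \left(\frac{\theta}{2\pi},\ r\right),$

so the angle selects the FFT bin (U) and the distance from the center selects the position along the bar (V).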

Applying a "Spread" value to an XMFLOAT4X4

I'm attempting to add a small value to a World Matrix in order to replicate the accuracy of a fired weapon [pistol, assault rifle].
Currently, my World Matrix resides at a Parent Object's position, with the ability to rotate about the Y axis exclusively.
I've done this in Unity3D, running whenever the object needs to be created [once per]:
var coneRotation = Quaternion.Euler(Random.Range(-spread, spread), Random.Range(-spread, spread), 0);
var go = Instantiate(obj, parent.transform.position, transform.rotation * coneRotation) as GameObject;
and am attempting to replicate the results using Direct3D11.
This lambda returns a random value between [-1.5, 1.5] currently:
auto randF = [&](float lower_bound, float upper_bound) -> float
{
    return lower_bound + static_cast<float>(rand()) / (static_cast<float>(RAND_MAX / (upper_bound - lower_bound)));
};
My first thought was to simply multiply a random x and y into the forward vector of an object upon initialization, and move it in this fashion: position = position + forward * speed * dt; [speed being 1800]. However, the rotation is incorrect (not to mention the bullets fire upward).
I've also attempted to make a Quaternion [as in Unity3D]: XMVECTOR quaternion = XMVectorSet(random_x, random_y, 0) and creating a Rotation Matrix using XMMatrixRotationQuaternion.
Afterwards I call XMStoreFloat4x4(&world_matrix, XMLoadFloat4x4(&world_matrix) * rotation);, and restore the position portion of the matrix [accessing world_matrix._41/._42/._43] (world_matrix being the matrix of the "bullet" itself, not the parent).
[I've also tried to reverse the order of the multiplication]
I've read that XMMatrixRotationQuaternion doesn't return an Euler quaternion, and that XMQuaternionToAxisAngle does, though I'm not entirely certain how to use it.
What would be the proper way to accomplish something like this?
Many thanks!
Your code XMVECTOR quaternion = XMVectorSet(random_x, random_y, 0); is not creating a valid quaternion. First, if you did not set the w component to 1, then the 4-vector quaternion doesn't actually represent a 3D rotation. Second, a quaternion's vector components are not Euler angles.
You want to use XMQuaternionRotationRollPitchYaw which constructs a quaternion rotation from Euler angle input, or XMQuaternionRotationRollPitchYawFromVector which takes the three Euler angles as a vector. These functions are doing what Unity's Quaternion.Euler method is doing.
Of course, if you want a rotation matrix and not a quaternion, then you can use XMMatrixRotationRollPitchYaw or XMMatrixRotationRollPitchYawFromVector to directly construct a 4x4 rotation matrix from Euler angles--which actually uses quaternions internally anyhow. Based on your code snippet, it looks like you already have a base rotation as a quaternion that you want to concatenate with your spread quaternion, so you probably don't want this option for this case.
Note: You should look at using the C++11 standard <random> rather than your home-rolled lambda wrapper around the terrible C rand function.
Something like:
std::random_device rd;
std::mt19937 gen(rd());
// spread should be in radians here (not degrees which is what Unity uses)
std::uniform_real_distribution<float> dis(-spread, spread);
XMVECTOR coneRotation = XMQuaternionRotationRollPitchYaw( dis(gen), dis(gen), 0 );
XMVECTOR rot = XMQuaternionMultiply( parentRot, coneRotation );
XMMATRIX transform = XMMatrixAffineTransformation( g_XMOne, g_XMZero, rot, parentPos );
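If, as in the question, the result needs to end up back in the bullet's XMFLOAT4X4 (a sketch; world_matrix is just the variable name from the question):
XMStoreFloat4x4( &world_matrix, transform );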
BTW, if you are used to Unity or XNA Game Studio C# math libraries, you might want to check out the SimpleMath wrapper for DirectXMath in DirectX Tool Kit.

What should my shader look like when doing diffuse/directional lighting with skeletal matrices?

I've found examples online of how to perform diffuse lighting, but I can't seem to find any regarding how things change when using skeletal matrices.
Does anyone have an example I could look at?
I specifically used this page as an example to learn diffuse lighting:
http://learningwebgl.com/blog/?p=684
No matter what kind of transformation you apply to your vertices, the most important thing is to be consistent: know in which space you are performing your transformations. Assuming object_matrix is the transformation of your object and camera_matrix is the view transformation:
vec4 pos = VertexPosition;
// pos is in object space
pos = object_matrix * pos;
// pos is now in world space
pos = camera_matrix * pos;
// view space
Light coordinates are usually in world space, in which case:
pos = object_matrix * pos;
// perform diffuse lighting computations here
pos = camera_matrix * pos;
If by "skeletal matrices" you refer to skeletal animation, they're done in object space.
Hope this helps.
