Frustum Culling With View Matrix

In a GLSL shader I need to omit a few tessellation patches to drastically increase performance. These patches are triangles with given world coordinates for each vertex. However, when I transform these coordinates into clip space for frustum culling, there is a margin of error: patches near the edges of the screen are sometimes culled even though they are still visible.
This is the original terrain.
This is how the error affects it on the top.
This is a closeup of a section with dirt.
These errors happen mainly around the top of the screen, but also along the sides and the bottom.
Here is the code I use to determine if I should exclude the triangle (in GLSL).
bool inFrustum(vec3 p, vec3 q, vec3 r) {
    // Transform the three vertices into clip space
    vec4 Pclip = camera * vec4(p, 1.0f);
    vec4 Qclip = camera * vec4(q, 1.0f);
    vec4 Rclip = camera * vec4(r, 1.0f);
    // Cull the triangle only if all three vertices lie outside
    // the same clip plane (left/right, bottom/top, near/far)
    if ((-Pclip.w > Pclip.x && -Qclip.w > Qclip.x && -Rclip.w > Rclip.x) ||
        ( Pclip.x > Pclip.w &&  Qclip.x > Qclip.w &&  Rclip.x > Rclip.w) ||
        (-Pclip.w > Pclip.y && -Qclip.w > Qclip.y && -Rclip.w > Rclip.y) ||
        ( Pclip.y > Pclip.w &&  Qclip.y > Qclip.w &&  Rclip.y > Rclip.w) ||
        (-Pclip.w > Pclip.z && -Qclip.w > Qclip.z && -Rclip.w > Rclip.z) ||
        ( Pclip.z > Pclip.w &&  Qclip.z > Qclip.w &&  Rclip.z > Rclip.w)) {
        return false;
    }
    return true;
}
I would greatly appreciate any help given!
Behemyth

In my shader I use the following to cull patches:
bool visible(vec3 vert)
{
    float clipoffset = 5.0; // a bit of margin because the vertices get displaced after culling
    vec4 p = MVP * vec4(vert, 1.0);
    return !((p.x < -(p.w + clipoffset)) ||
             (p.x >  (p.w + clipoffset)) ||
             (p.y < -(p.w + clipoffset)) ||
             (p.y >  (p.w + clipoffset)) ||
             (p.z < -(p.w + clipoffset)) ||
             (p.z >  (p.w + clipoffset)));
}
and it looks like this from above:
PS: I use quad tessellation, so I check whether at least one of the vertices is inside the frustum:
if (visible(inPos[0]) ||
    visible(inPos[1]) ||
    visible(inPos[2]) ||
    visible(inPos[3]))
{
    outt[0] = calcTessellationLevel(inPos[3], inPos[0]);
    outt[1] = calcTessellationLevel(inPos[0], inPos[1]);
    outt[2] = calcTessellationLevel(inPos[1], inPos[2]);
    outt[3] = calcTessellationLevel(inPos[2], inPos[3]);
    inn[1] = (outt[0] + outt[2]) / 2;
    inn[0] = (outt[1] + outt[3]) / 2;
}
EDIT: In your code, maybe the grouped || operators caused the problem; try it without brackets after every second statement:
if(S1||S2||S3||S4)
instead of
if((S1||S2)||(S3||S4))
EDIT: Hmm... I hadn't looked at the date this was asked; I don't know how I found it... O.o

Related

Why are my specular highlights elliptical?

I think these should be circular. I assume something is wrong with my normals, but I haven't found anything wrong with them. Then again, finding a good test for the normals is difficult.
Here is the image:
Here is my shading code for each light, leaving out the recursive part for reflections:
lighting = hit.obj.ambient + hit.obj.emission;
const glm::vec3 view_direction = glm::normalize(eye - hit.pos);
const glm::vec3 reflection = glm::normalize(2.0f * glm::dot(view_direction, hit.normal) * hit.normal - view_direction);
for (int i = 0; i < numused; ++i)
{
    glm::vec3 hit_to_light = lights[i].pos - hit.pos;
    float dist = glm::length(hit_to_light);
    glm::vec3 light_direction = glm::normalize(hit_to_light);
    Ray lightray(hit.pos, light_direction);
    Intersection blocked = Intersect(lightray, scene, verbose);
    if (blocked.dist >= dist) // the light is not occluded
    {
        glm::vec3 halfangle = glm::normalize(view_direction + light_direction);
        float specular_multiplier = pow(std::max(glm::dot(halfangle, hit.normal), 0.f), shininess);
        glm::vec3 attenuation_term = lights[i].rgb * (1.0f / (attenuation + dist * linear + dist * dist * quad));
        glm::vec3 diffuse_term = hit.obj.diffuse * std::max(glm::dot(light_direction, hit.normal), 0.f);
        glm::vec3 specular_term = hit.obj.specular * specular_multiplier;
        lighting += attenuation_term * (diffuse_term + specular_term); // accumulation assumed; not shown in the original excerpt
    }
}
And here is the line where I transform the object space normal to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
Using the full Phong model, instead of Blinn-Phong, I get teardrop highlights:
If I color pixels according to the (absolute value of the) normal at the intersection point I get the following image (r = x, g = y, b = z):
I've solved this issue. It turns out that the normals were all just slightly off, but not enough that the image colored by normals could depict it.
I found this out by computing the normals on spheres with a uniform scale and a translation.
The problem occurred in the line where I transformed the normals to world space:
*norm = glm::normalize(transinv * glm::vec4(glm::normalize(p - sphere_center), 0));
I assumed that the homogeneous coordinate would be 0 after the transformation because it was 0 beforehand (rotations and scales do not affect it, and since it is 0, neither can translations). However, it is not 0, because the matrix is transposed: the bottom row holds the inverse translation, which makes the homogeneous coordinate nonzero.
The 4-vector is then normalized, and the result is assigned to a 3-vector. The 3-vector constructor simply drops the last component, so the normal was left unnormalized.
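In code, the fix is to drop the (now nonzero) w component before normalizing. A sketch of the corrected line, keeping the names from the snippet above; the glm::mat3 variant avoids the translation row entirely:
// Truncate to 3 components first, then normalize in 3D
*norm = glm::normalize(glm::vec3(transinv * glm::vec4(glm::normalize(p - sphere_center), 0)));
// Equivalently, use only the upper-left 3x3 block of the matrix
*norm = glm::normalize(glm::mat3(transinv) * glm::normalize(p - sphere_center));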
Here's the final picture:

Drawing a perfect horizontal line at a specific position with a fragment shader

Is it possible to draw a perfect horizontal line of a single pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen-aligned quad?
I have found many solutions using smoothstep or more complex functions, but I am looking for an elegant and fast way of doing it.
One solution I made uses a power function made very steep, but it has shortcomings I don't want (the line is not truly one pixel high because of the soft falloff, and it is tricky to tune). Here is the GLSL code:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    // a centered horizontal line
    float v = pow(uv.y - 0.5, 2.);
    // make it steeper
    v *= 100000.;
    // make it white on a black background
    v = clamp(1. - v, 0., 1.);
    fragColor = vec4(v);
}
Here is the Shadertoy that runs it: https://www.shadertoy.com/view/Ms2cWh
What I would like:
a perfect horizontal line drawn at a specific Y position, in pixel units or normalized
its intensity limited to the [0, 1] range without clamping
a fast way of doing it
If you just want to "draw a perfect horizontal line of a single pixel height at any chosen position on the vertical axis with a fragment shader applied to a screen aligned quad", then maybe:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    int iPosition = 250;  // the y coord in pixels
    int iThickness = 10;  // the thickness in pixels
    vec2 uv = fragCoord.xy / iResolution.xy;
    float v = float( iPosition ) / iResolution.y;
    float vHalfHeight = ( float( iThickness ) / iResolution.y ) / 2.;
    if ( uv.y > v - vHalfHeight && uv.y < v + vHalfHeight )
        fragColor = vec4(1.,1.,1.,1.); // or whatever color
    else
        fragColor = vec4(0.); // always write the output; leaving fragColor unset is undefined
}
Here is a neat solution without branching. I don't know whether it is actually faster than the branching version, though.
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;
    float py = iMouse.y / iResolution.y; // line position, normalized
    float hh = 1. / iResolution.y;       // one pixel in uv units
    // can also be replaced with step(0., hh - abs(uv.y - py))
    float v = sign(hh - abs(uv.y - py));
    fragColor = vec4(v);
}
I know the question was properly answered before me, but in case someone is looking for a way to render a textured line in a pixel-perfect way, I wrote an article with some examples.
It is about pixel-perfect UI in general, but using it for a line is just a matter of clamping/repeating the texture sampling. Also, I am using Unity, but there is no reason the method would be exclusive to it.

Assimp animation bone transformation

Recently I have been working on bone-animation import, so I made a 3D Minecraft-like model with some IK technique to test Assimp's animation import. The output format is COLLADA (*.dae), and the tool I used is Blender. On the programming side, my environment is OpenGL/glm/Assimp. I think this information is enough for my problem. One more thing: for the model's animation I just recorded 7 identical (no-movement) keyframes to test the Assimp import.
First, I guessed that my transformation was correct except for the local-transform part, so I let the function return only glm::mat4(1.0f), and the result shows the model in (what I believe is) its bind pose. (see image below)
Second, when I turn glm::mat4(1.0f) back into bone->localTransform = transform * scaling * glm::mat4(1.0f);, the model deforms. (see image below)
Test image and model in Blender:
(bone->localTransform = glm::mat4(1.0f) * scaling * rotate; : with this, the model ends up under the ground :( )
The code is here:
void MeshModel::UpdateAnimations(float time, std::vector<Bone*>& bones)
{
    for (Bone* bone : bones) // standard range-for instead of MSVC's "for each"
    {
        glm::mat4 rotate = GetInterpolateRotation(time, bone->rotationKeys);
        glm::mat4 transform = GetInterpolateTransform(time, bone->transformKeys);
        glm::mat4 scaling = GetInterpolateScaling(time, bone->scalingKeys);
        //bone->localTransform = transform * scaling * glm::mat4(1.0f);
        //bone->localTransform = glm::mat4(1.0f) * scaling * rotate;
        //bone->localTransform = glm::translate(glm::mat4(1.0f), glm::vec3(0.5f));
        bone->localTransform = glm::mat4(1.0f);
    }
}
void MeshModel::UpdateBone(Bone * bone)
{
    glm::mat4 parentTransform = bone->getParentTransform();
    bone->nodeTransform = parentTransform
        * bone->transform       // assimp_node->mTransformation
        * bone->localTransform; // T * S * R matrix
    bone->finalTransform = globalInverse
        * bone->nodeTransform
        * bone->inverseBindPoseMatrix; // ai_mesh->mBones[i]->mOffsetMatrix
    for (int i = 0; i < (int)bone->children.size(); i++) {
        UpdateBone(bone->children[i]);
    }
}
glm::mat4 Bone::getParentTransform()
{
    if (this->parent != nullptr)
        return parent->nodeTransform;
    return glm::mat4(1.0f);
}
glm::mat4 MeshModel::GetInterpolateRotation(float time, std::vector<BoneKey>& keys)
{
    // we need at least two values to interpolate...
    if (keys.empty()) {
        return glm::mat4(1.0f);
    }
    if (keys.size() == 1) {
        return glm::mat4_cast(keys[0].rotation);
    }
    int rotationIndex = FindBestTimeIndex(time, keys);
    int nextRotationIndex = rotationIndex + 1;
    assert(nextRotationIndex < (int)keys.size());
    float DeltaTime = (float)(keys[nextRotationIndex].time - keys[rotationIndex].time);
    float Factor = (time - (float)keys[rotationIndex].time) / DeltaTime;
    Factor = glm::clamp(Factor, 0.0f, 1.0f);
    const glm::quat& startRotationQ = keys[rotationIndex].rotation;
    const glm::quat& endRotationQ = keys[nextRotationIndex].rotation;
    // nlerp from the start key toward the end key
    glm::quat interpolateQ = glm::lerp(startRotationQ, endRotationQ, Factor);
    interpolateQ = glm::normalize(interpolateQ);
    return glm::mat4_cast(interpolateQ);
}
glm::mat4 MeshModel::GetInterpolateTransform(float time, std::vector<BoneKey>& keys)
{
    // we need at least two values to interpolate...
    if (keys.empty()) {
        return glm::mat4(1.0f);
    }
    if (keys.size() == 1) {
        return glm::translate(glm::mat4(1.0f), keys[0].vector);
    }
    int translateIndex = FindBestTimeIndex(time, keys);
    int nextTranslateIndex = translateIndex + 1;
    assert(nextTranslateIndex < (int)keys.size());
    float DeltaTime = (float)(keys[nextTranslateIndex].time - keys[translateIndex].time);
    float Factor = (time - (float)keys[translateIndex].time) / DeltaTime;
    Factor = glm::clamp(Factor, 0.0f, 1.0f);
    const glm::vec3& startTranslate = keys[translateIndex].vector;
    const glm::vec3& endTranslate = keys[nextTranslateIndex].vector;
    glm::vec3 resultVec = startTranslate + (endTranslate - startTranslate) * Factor;
    return glm::translate(glm::mat4(1.0f), resultVec);
}
The idea for the code comes from Matrix calculations for gpu skinning and Skeletal Animation With Assimp.
Overall, I fetch all the information from Assimp into MeshModel and save it into the bone structure, so I think the information is alright?
The last thing, my vertex shader code:
#version 330 core
#define MAX_BONES_PER_VERTEX 4
in vec3 position;
in vec2 texCoord;
in vec3 normal;
in ivec4 boneID;
in vec4 boneWeight;
const int MAX_BONES = 100;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 boneTransform[MAX_BONES];
out vec3 FragPos;
out vec3 Normal;
out vec2 TexCoords;
out float Visibility;
const float density = 0.007f;
const float gradient = 1.5f;
void main()
{
    // Blend the four bone matrices by their weights
    mat4 boneTransformation = boneTransform[boneID[0]] * boneWeight[0];
    boneTransformation += boneTransform[boneID[1]] * boneWeight[1];
    boneTransformation += boneTransform[boneID[2]] * boneWeight[2];
    boneTransformation += boneTransform[boneID[3]] * boneWeight[3];
    vec3 usingPosition = (boneTransformation * vec4(position, 1.0)).xyz;
    // w = 0.0 for directions, so the bones' translation does not skew the normal
    vec3 usingNormal = (boneTransformation * vec4(normal, 0.0)).xyz;
    vec4 viewPos = view * model * vec4(usingPosition, 1.0);
    gl_Position = projection * viewPos;
    FragPos = vec3(model * vec4(usingPosition, 1.0f));
    Normal = mat3(transpose(inverse(model))) * usingNormal;
    TexCoords = texCoord;
    // Simple exponential fog factor
    float distance = length(viewPos.xyz);
    Visibility = exp(-pow(distance * density, gradient));
    Visibility = clamp(Visibility, 0.0f, 1.0f);
}
If my question above lacks code or describes anything vaguely, please let me know. Thanks!
Edit (1):
In addition, my bone information looks like this (the fetching code):
for (int i = 0; i < (int)nodeAnim->mNumPositionKeys; i++)
{
    BoneKey key;
    key.time = nodeAnim->mPositionKeys[i].mTime;
    aiVector3D vec = nodeAnim->mPositionKeys[i].mValue;
    key.vector = glm::vec3(vec.x, vec.y, vec.z);
    currentBone->transformKeys.push_back(key);
}
Every key holds a translation vector, so my call above, glm::mat4 transform = GetInterpolateTransform(time, bone->transformKeys);, naturally returns the same value every frame. I am not sure whether a no-movement keyframe animation is really supposed to provide these translation values at all (it does have 7 keyframes).
A keyframe's contents look like this (debugging the head bone):
7 different keyframes, all with the same vector value.
Edit (2):
If you want to test my dae file, I put it on jsfiddle, come and take it :). Another thing: in Unity my file works correctly, so I think the problem may not be my local transform; it seems it could be something else, like parentTransform or bone->transform, etc. I also logged the local transform matrix for every bone, but I cannot figure out why the COLLADA file contains these values for my no-movement animation...
After lots of testing, I finally found that the problem is in the UpdateBone() part.
Before I point out my problem, I have to say that the series of matrix multiplications confused me, but finding the solution made me (maybe just 90%) understand what all the matrices are doing.
The problem comes from the article Matrix calculations for gpu skinning. I assumed the answer code there was absolutely right and didn't think any further about which matrices should be used. Thus, the misused matrices steered my attention toward the local transform matrix. Recall from the question section above that the result image shows the bind pose when I change the local transform to return glm::mat4(1.0f).
So the question is: why does that change produce the bind pose? I assumed the problem had to be the local transform in bone space, but I was wrong. Before I give the answer, look at the code below:
void MeshModel::UpdateBone(Bone * bone)
{
    glm::mat4 parentTransform = bone->getParentTransform();
    bone->nodeTransform = parentTransform
        * bone->transform       // assimp_node->mTransformation
        * bone->localTransform; // T * S * R matrix
    bone->finalTransform = globalInverse
        * bone->nodeTransform
        * bone->inverseBindPoseMatrix; // ai_mesh->mBones[i]->mOffsetMatrix
    for (int i = 0; i < (int)bone->children.size(); i++) {
        UpdateBone(bone->children[i]);
    }
}
And I made the change as below:
void MeshModel::UpdateBone(Bone * bone)
{
    glm::mat4 parentTransform = bone->getParentTransform();
    const std::string& boneName = bone->name; // assuming the Bone stores its node name
    if (boneName == "Scene" || boneName == "Armature")
    {
        // Not a real bone node: keep using assimp_node->mTransformation
        bone->nodeTransform = parentTransform
            * bone->transform
            * bone->localTransform; // this is your T * R matrix
    }
    else
    {
        // A real bone node: the keyframes already contain the full local
        // transform, so mTransformation must not be applied again
        bone->nodeTransform = parentTransform // the transformation one level above in the tree
            * bone->localTransform;           // this is your T * R matrix
    }
    bone->finalTransform = globalInverse // scene->mRootNode->mTransformation
        * bone->nodeTransform            // defined above
        * bone->inverseBindPoseMatrix;   // ai_mesh->mBones[i]->mOffsetMatrix
    for (int i = 0; i < (int)bone->children.size(); i++) {
        UpdateBone(bone->children[i]);
    }
}
I didn't know what assimp_node->mTransformation gave me before; the Assimp documentation only says "the transformation relative to the node's parent". From testing I found that, on a bone node, mTransformation is the bind-pose matrix of that node relative to its parent. Here is a capture of the matrices on the head bone.
The left part is the transform fetched from assimp_node->mTransformation. The right part is my no-movement animation's localTransform, calculated from the keys in nodeAnim->mPositionKeys, nodeAnim->mRotationKeys and nodeAnim->mScalingKeys.
Looking back at what I did: for a no-movement animation the localTransform equals the bind pose, so nodeTransform = parentTransform * bindPose * bindPose applied the bind-pose transformation twice. That is why the image in my question section looks merely pulled apart rather than spaghetti :)
Finally, here is what I did before the no-movement animation test, and the correct animation result.
(If my concept is wrong anywhere, please point it out! Thanks.)

Why is this basic "rotate around the origin" failing to work?

I've done this a hundred times, but this is my first time with a manually constructed cube made of "sticks", which are 3D lines. It is built around the origin, extending 5 units from the origin along each of the X, Y, and Z axes.
When I rotate it, I'm still "inside it" and it rotates around me (the camera). I'm applying a translation and rotation, so I'm stymied as to what I'm doing wrong.
Here's the basic code to rotate the box, by which I mean generate its world matrix:
float rotateX = 0.0f, rotateY = 0.0f, rotateZ = 0.0f;
XMFLOAT4 positionBox = XMFLOAT4(0, 0, -50, 1); // Camera at origin looking at this
XMMATRIX matrixCubeWorld;
void CALLBACK OnFrameMove( double fTime, float fElapsedTime, void* pUserContext )
{
    auto pCamera = g_GameServices.GetService<CWorldCamera>();
    XMMATRIX translation = XMMatrixTranslationFromVector(XMLoadFloat4(&positionBox));
    XMMATRIX rotation = XMMatrixRotationRollPitchYaw(rotateX, rotateY, rotateZ);
    matrixCubeWorld = rotation * translation; // row-vector convention: rotate first, then translate
    if (GetKeyState('X') < 0)
        rotateX = RotateAround(rotateX, fElapsedTime);
    if (GetKeyState('Y') < 0)
        rotateY = RotateAround(rotateY, fElapsedTime);
}
And when I set up to draw, I use that matrix:
D3D11_MAPPED_SUBRESOURCE MappedResource;
V(pd3dImmediateContext->Map(_pVertexShaderVariables, 0, D3D11_MAP_WRITE_DISCARD, 0, &MappedResource));
auto pCB = reinterpret_cast<VSCB3DLineChangesEveryFrame *>(MappedResource.pData);
pCB->_gWorldViewProj = matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix();
pd3dImmediateContext->Unmap(_pVertexShaderVariables, 0);
return hr;
...and the shader is as simple as can be:
VertexShaderOutput Line3DVertexShaderFunction(float3 position : POSITION, float4 color : COLOR, float2 tex : TEXCOORD0)
{
    VertexShaderOutput output;
    output.position = mul(float4(position, 1), _gWorldViewProj);
    output.color = color;
    output.tex = tex;
    return output;
}
So do I have a bug or a misunderstanding? I've tried using the inverse of the translation, thinking that would 'bring it back to the origin before rotating', but it didn't improve anything.
Transformations look good, imho.
Maybe it's due to the fact that XMMatrixTranslationFromVector takes only a 3D vector, as the documentation (MSDN) says.
Also make sure that the RotateAround function and the camera's view/projection matrices give correct results.
Best regards.
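One more thing worth checking, though it is only a guess since the constant-buffer declaration isn't shown: DirectXMath builds row-major matrices, while HLSL constant buffers default to column_major packing, so a matrix copied in raw is read back transposed. Either declare the matrix row_major in the shader or transpose it on the CPU, for example:
// Guess, not confirmed by the snippets above: transpose on upload so the
// row-major DirectXMath matrix matches HLSL's default column_major packing
pCB->_gWorldViewProj = XMMatrixTranspose(
    matrixCubeWorld * pCamera->GetViewMatrix() * pCamera->GetProjMatrix());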

Volume ray casting doesn't work correctly (WebGL + GLSL + Three.js)

I have tried to improve the quality of my volume ray casting algorithm. I set a smaller raycast step (the quality is better), but it causes a problem, shown in the pictures below (black areas where there shouldn't be any).
I am using an RGB cube to get the direction of the ray in the volume.
I think I have the same algorithm as here: volume rendering (using glsl) with ray casting algorithm
Does anybody have an idea where the problem could be? I need to resolve this because the deadline of my diploma thesis is getting close :( I really don't know why it doesn't work :(
EDIT:
I can't show all my code here (it could be a problem if I share it before handing it in at school). But here is the key code for stepping through the volume:
// All variables needed for the rays
vec3 rayDirection = texture2D(backFaceCube, texCoo).xyz - varcolor.xyz;
float lenRay = length(rayDirection);
vec3 normDir = normalize(rayDirection);
float d = qualitySteps; // step size chosen by the user, e.g. 0.01, 0.001, 0.0001, ...
vec3 step = normDir * d;
float lenStep = length(step);
float accumulatedLength = 0.0;
and then in the loop:
posInCube.xyz += step;
accumulatedLength += lenStep;
...
if (accumulatedLength >= lenRay || accumulatedColor.a > 1.0) {
    break;
}
EDIT 2: (sorry, but it was too long for a comment)
Yes, the texture is noisy... I have tried removing the alpha condition, if(accumulatedColor.a > 1.0), but the result is the same.
I think there is some direct correlation between the length of the ray and the size of the step. I tried many combinations and found these things:
If the step is big, I am able to go through the whole volume, but if it is small, then I am really not able to get through the volume (maybe). If the step is extremely big, then I can see a mirrored object (which can be caused by texture repeating if I step outside the texture on the GPU). If the step is too small, then I am able to map only a small part of the texture -> it seems that the ray is too short, but in reality it isn't. The questions are why the mapping of 3D coordinates to the 2D texture is wrong, and why it depends on the size of the step.
Can you please supply the code of your fragment shader?
Are you traversing the whole vector from the front to the end position? Here's an example shader (the code might contain some errors since I just wrote it off the top of my head; unfortunately I can't test it on my computer at the moment):
in vec2 texCoord;
out vec4 outColor;
uniform float stepSize;
uniform int numSteps;
uniform sampler2D frontTexture;
uniform sampler2D backTexture;
uniform sampler3D volumeTexture;
uniform sampler1D transferTexture; // Density to RGBA
void main()
{
    vec4 color = vec4(0.0);
    vec3 startPosition = texture(frontTexture, texCoord).xyz;
    vec3 endPosition = texture(backTexture, texCoord).xyz;
    vec3 delta = normalize(endPosition - startPosition) * stepSize; // march front to back
    vec3 position = startPosition;
    for (int i = 0; i < numSteps; ++i)
    {
        float density = texture(volumeTexture, position).r;
        vec4 voxelColor = texture(transferTexture, density);
        // Sampling distance correction on the sample's opacity
        voxelColor.a = 1.0 - pow(1.0 - voxelColor.a, stepSize * 500.0);
        // Front to back blending (no shading done)
        color.rgb = color.rgb + (1.0 - color.a) * voxelColor.a * voxelColor.rgb;
        color.a = color.a + (1.0 - color.a) * voxelColor.a;
        if (color.a >= 1.0)
        {
            break;
        }
        // Advance
        position += delta;
        if (any(lessThan(position, vec3(0.0))) || any(greaterThan(position, vec3(1.0))))
        {
            break;
        }
    }
    outColor = color;
}
