Strange behavior of tessFactors inside tessellation stage - directx-11

I've noticed some super strange behavior on my NVIDIA 860M. I'm programming a 3D engine and I'm using tessellation for terrain rendering.
I use a simple quad tessellation algorithm.
struct PatchTess
{
float EdgeTess[4] : SV_TessFactor;
float InsideTess[2] : SV_InsideTessFactor;
};
PatchTess ConstantHS(InputPatch<VS_OUT, 4> patch)
{
PatchTess pt;
float3 l = (patch[0].PosW + patch[2].PosW) * 0.5f;
float3 t = (patch[0].PosW + patch[1].PosW) * 0.5f;
float3 r = (patch[1].PosW + patch[3].PosW) * 0.5f;
float3 b = (patch[2].PosW + patch[3].PosW) * 0.5f;
float3 c = (patch[0].PosW + patch[1].PosW + patch[2].PosW + patch[3].PosW) * 0.25f;
pt.EdgeTess[0] = GetTessFactor(l);
pt.EdgeTess[1] = GetTessFactor(t);
pt.EdgeTess[2] = GetTessFactor(r);
pt.EdgeTess[3] = GetTessFactor(b);
pt.InsideTess[0] = GetTessFactor(c);
pt.InsideTess[1] = pt.InsideTess[0];
return pt;
}
[domain("quad")]
[partitioning("fractional_even")]
[outputtopology("triangle_cw")]
[outputcontrolpoints(4)]
[patchconstantfunc("ConstantHS")]
[maxtessfactor(64.0f)]
VS_OUT HS(InputPatch<VS_OUT, 4> p, uint i : SV_OutputControlPointID)
{
VS_OUT vout;
vout.PosW = p[i].PosW;
return vout;
}
[domain("quad")]
DS_OUT DS(PatchTess patchTess, float2 uv : SV_DomainLocation, const OutputPatch<VS_OUT, 4> quad)
{
DS_OUT dout;
float3 p = lerp(lerp(quad[0].PosW, quad[1].PosW, uv.x), lerp(quad[2].PosW, quad[3].PosW, uv.x), uv.y);
p.y = GetHeight(p);
dout.PosH = mul(float4(p, 1.0f), gViewProj);
dout.PosW = p;
return dout;
}
The code above isn't the problem; I just want to give you some context.
The problem occurs in this function:
inline float GetTessFactor(float3 posW)
{
const float factor = saturate((length(gEyePos - posW) - minDistance) / (maxDistance - minDistance));
return pow(2, lerp(6.0f, 0.0f, factor));
}
When I use debug mode in Visual Studio, everything works fine and tessellation behaves as it should. But in release mode, I get flickering of the terrain patches.
And now the super strange thing: when I change the function and switch from pow to just a linear function or something else, everything works as expected.
So this works fine:
inline float GetTessFactor(float3 posW)
{
const float factor = saturate((length(gEyePos - posW) - minDistance) / (maxDistance - minDistance));
return lerp(64.0f, 0.0f, factor);
}
EDIT:
changing the line:
pt.InsideTess[0] = GetTessFactor(c);
to
pt.InsideTess[0] = max(max(pt.EdgeTess[0], pt.EdgeTess[1]), max(pt.EdgeTess[2], pt.EdgeTess[3]));
does the job.
It seems that the pow function sometimes produces values (within the valid range up to 64.0f) that are not consistent with the edge tess factors.
Also keep in mind that this problem only appears when running in release mode, not in debug mode (VS 2013).
Does anyone know of restrictions on the combination of tess factor values? I didn't find any information on MSDN or similar pages.
Thanks

The newest driver update solved the problem.

Related

Assimp animation bone transformation

Recently I've been working on bone animation import, so I made a 3D Minecraft-like model with some IK to test Assimp animation import. The output format is COLLADA (*.dae), and the tool I used is Blender. On the programming side, my environment is OpenGL/GLM/Assimp. I think this information is enough for my problem. One more thing about the model's animation: I just recorded 7 identical (no-movement) keyframes for testing the Assimp animation import.
First, I guessed that my transformations were correct except for the local transform part, so I let the function return only glm::mat4(1.0f), and the result shows the bind pose (I think) of the model. (See the image below.)
Second, when I change glm::mat4(1.0f) back to bone->localTransform = transform * scaling * glm::mat4(1.0f);, the model deforms. (See the image below.)
Test image and model in Blender:
(With bone->localTransform = glm::mat4(1.0f) * scaling * rotate; the model in this image ends up under the ground :( )
The code here:
void MeshModel::UpdateAnimations(float time, std::vector<Bone*>& bones)
{
for (Bone* bone : bones)
{
glm::mat4 rotate = GetInterpolateRotation(time, bone->rotationKeys);
glm::mat4 transform = GetInterpolateTransform(time, bone->transformKeys);
glm::mat4 scaling = GetInterpolateScaling(time, bone->scalingKeys);
//bone->localTransform = transform * scaling * glm::mat4(1.0f);
//bone->localTransform = glm::mat4(1.0f) * scaling * rotate;
//bone->localTransform = glm::translate(glm::mat4(1.0f), glm::vec3(0.5f));
bone->localTransform = glm::mat4(1.0f);
}
}
void MeshModel::UpdateBone(Bone * bone)
{
glm::mat4 parentTransform = bone->getParentTransform();
bone->nodeTransform = parentTransform
* bone->transform // assimp_node->mTransformation
* bone->localTransform; // T S R matrix
bone->finalTransform = globalInverse
* bone->nodeTransform
* bone->inverseBindPoseMatrix; // ai_mesh->mBones[i]->mOffsetMatrix
for (int i = 0; i < (int)bone->children.size(); i++) {
UpdateBone(bone->children[i]);
}
}
glm::mat4 Bone::getParentTransform()
{
if (this->parent != nullptr)
return parent->nodeTransform;
else
return glm::mat4(1.0f);
}
glm::mat4 MeshModel::GetInterpolateRotation(float time, std::vector<BoneKey>& keys)
{
// we need at least two values to interpolate...
if ((int)keys.size() == 0) {
return glm::mat4(1.0f);
}
if ((int)keys.size() == 1) {
return glm::mat4_cast(keys[0].rotation);
}
int rotationIndex = FindBestTimeIndex(time, keys);
int nextRotationIndex = (rotationIndex + 1);
assert(nextRotationIndex < (int)keys.size());
float DeltaTime = (float)(keys[nextRotationIndex].time - keys[rotationIndex].time);
float Factor = (time - (float)keys[rotationIndex].time) / DeltaTime;
if (Factor < 0.0f)
Factor = 0.0f;
if (Factor > 1.0f)
Factor = 1.0f;
assert(Factor >= 0.0f && Factor <= 1.0f);
const glm::quat& startRotationQ = keys[rotationIndex].rotation;
const glm::quat& endRotationQ = keys[nextRotationIndex].rotation;
glm::quat interpolateQ = glm::lerp(endRotationQ, startRotationQ, Factor);
interpolateQ = glm::normalize(interpolateQ);
return glm::mat4_cast(interpolateQ);
}
glm::mat4 MeshModel::GetInterpolateTransform(float time, std::vector<BoneKey>& keys)
{
// we need at least two values to interpolate...
if ((int)keys.size() == 0) {
return glm::mat4(1.0f);
}
if ((int)keys.size() == 1) {
return glm::translate(glm::mat4(1.0f), keys[0].vector);
}
int translateIndex = FindBestTimeIndex(time, keys);
int nextTranslateIndex = (translateIndex + 1);
assert(nextTranslateIndex < (int)keys.size());
float DeltaTime = (float)(keys[nextTranslateIndex].time - keys[translateIndex].time);
float Factor = (time - (float)keys[translateIndex].time) / DeltaTime;
if (Factor < 0.0f)
Factor = 0.0f;
if (Factor > 1.0f)
Factor = 1.0f;
assert(Factor >= 0.0f && Factor <= 1.0f);
const glm::vec3& startTranslate = keys[translateIndex].vector;
const glm::vec3& endTranslate = keys[nextTranslateIndex].vector;
glm::vec3 delta = endTranslate - startTranslate;
glm::vec3 resultVec = startTranslate + delta * Factor;
return glm::translate(glm::mat4(1.0f), resultVec);
}
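(GetInterpolateScaling is referenced above but not shown; it follows the same pattern. A minimal sketch by analogy with GetInterpolateTransform, assuming the scaling keys are also stored in BoneKey::vector like the translation keys:)
glm::mat4 MeshModel::GetInterpolateScaling(float time, std::vector<BoneKey>& keys)
{
// same structure as GetInterpolateTransform above, but building a scale matrix
if ((int)keys.size() == 0) {
return glm::mat4(1.0f);
}
if ((int)keys.size() == 1) {
return glm::scale(glm::mat4(1.0f), keys[0].vector);
}
int scaleIndex = FindBestTimeIndex(time, keys);
int nextScaleIndex = scaleIndex + 1;
assert(nextScaleIndex < (int)keys.size());
float DeltaTime = (float)(keys[nextScaleIndex].time - keys[scaleIndex].time);
float Factor = (time - (float)keys[scaleIndex].time) / DeltaTime;
Factor = glm::clamp(Factor, 0.0f, 1.0f); // equivalent to the manual clamping above
const glm::vec3& startScale = keys[scaleIndex].vector;
const glm::vec3& endScale = keys[nextScaleIndex].vector;
glm::vec3 resultVec = startScale + (endScale - startScale) * Factor;
return glm::scale(glm::mat4(1.0f), resultVec);
}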
The code idea is referenced from Matrix calculations for gpu skinning and Skeletal Animation With Assimp.
Overall, I fetch all the information from Assimp into MeshModel and save it in the bone structure, so I think the information is alright?
Lastly, my vertex shader code:
#version 330 core
#define MAX_BONES_PER_VERTEX 4
in vec3 position;
in vec2 texCoord;
in vec3 normal;
in ivec4 boneID;
in vec4 boneWeight;
const int MAX_BONES = 100;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
uniform mat4 boneTransform[MAX_BONES];
out vec3 FragPos;
out vec3 Normal;
out vec2 TexCoords;
out float Visibility;
const float density = 0.007f;
const float gradient = 1.5f;
void main()
{
mat4 boneTransformation = boneTransform[boneID[0]] * boneWeight[0];
boneTransformation += boneTransform[boneID[1]] * boneWeight[1];
boneTransformation += boneTransform[boneID[2]] * boneWeight[2];
boneTransformation += boneTransform[boneID[3]] * boneWeight[3];
vec3 usingPosition = (boneTransformation * vec4(position, 1.0)).xyz;
vec3 usingNormal = (boneTransformation * vec4(normal, 1.0)).xyz;
vec4 viewPos = view * model * vec4(usingPosition, 1.0);
gl_Position = projection * viewPos;
FragPos = vec3(model * vec4(usingPosition, 1.0f));
Normal = mat3(transpose(inverse(model))) * usingNormal;
TexCoords = texCoord;
float distance = length(viewPos.xyz);
Visibility = exp(-pow(distance * density, gradient));
Visibility = clamp(Visibility, 0.0f, 1.0f);
}
If my question above lacks code or describes anything vaguely, please let me know. Thanks!
Edit:(1)
In addition, my bone information looks like this (the code that fetches it):
for (int i = 0; i < (int)nodeAnim->mNumPositionKeys; i++)
{
BoneKey key;
key.time = nodeAnim->mPositionKeys[i].mTime;
aiVector3D vec = nodeAnim->mPositionKeys[i].mValue;
key.vector = glm::vec3(vec.x, vec.y, vec.z);
currentBone->transformKeys.push_back(key);
}
Each key has a translation vector, so my code above, glm::mat4 transform = GetInterpolateTransform(time, bone->transformKeys);, naturally gets the same value from every key. I'm not sure whether it's even correct that a no-movement keyframe animation provides these translation values (it does have 7 keyframes, of course).
A keyframe's contents look like this (debugging the head bone):
7 different keyframes, same vector value.
Edit:(2)
If you want to test my .dae file, I put it on jsfiddle, so come and take it :). Another thing: in Unity my file works correctly, so maybe it's not my local transform that causes the problem; it could be something else, like parentTransform or bone->transform, etc.? I also added the local transform matrix to every bone, but I can't figure out why COLLADA contains these values for my no-movement animation...
After a lot of testing, I finally found that the problem is in the UpdateBone() part.
Before I point out my problem, I should say that the series of matrix multiplications confused me, but once I found the solution it made me (maybe just 90%) understand what all the matrices are doing.
The problem comes from the article Matrix calculations for gpu skinning. I assumed the answer code there was absolutely right and didn't think any further about which matrices should be used. Thus, misusing a matrix wrongly steered my attention toward the local transform matrix. Recall that the result image in my question section shows the bind pose once I change the local transform to return glm::mat4(1.0f).
So the question is: why does that change produce the bind pose? I assumed the problem had to be the local transform in bone space, but I was wrong. Before I give the answer, look at the code below:
void MeshModel::UpdateBone(Bone * bone)
{
glm::mat4 parentTransform = bone->getParentTransform();
bone->nodeTransform = parentTransform
* bone->transform // assimp_node->mTransformation
* bone->localTransform; // T S R matrix
bone->finalTransform = globalInverse
* bone->nodeTransform
* bone->inverseBindPoseMatrix; // ai_mesh->mBones[i]->mOffsetMatrix
for (int i = 0; i < (int)bone->children.size(); i++) {
UpdateBone(bone->children[i]);
}
}
And I made the change as below:
void MeshModel::UpdateBone(Bone * bone)
{
glm::mat4 parentTransform = bone->getParentTransform();
if (boneName == "Scene" || boneName == "Armature")
{
bone->nodeTransform = parentTransform
* bone->transform // when this isn't a bone node, use assimp_node->mTransformation
* bone->localTransform; //this is your T * R matrix
}
else
{
bone->nodeTransform = parentTransform // this retrieves the transformation one level above in the tree
* bone->localTransform; //this is your T * R matrix
}
bone->finalTransform = globalInverse // scene->mRootNode->mTransformation
* bone->nodeTransform //defined above
* bone->inverseBindPoseMatrix; // ai_mesh->mBones[i]->mOffsetMatrix
for (int i = 0; i < (int)bone->children.size(); i++) {
UpdateBone(bone->children[i]);
}
}
I didn't know what assimp_node->mTransformation gave me before; I only had the description "The transformation relative to the node's parent" from the Assimp documentation. After some testing, I found that mTransformation is the bind pose matrix of the current node relative to its parent when it is used on a bone node. Let me give a picture that captures the matrix for the head bone.
The left part is the transform fetched from assimp_node->mTransformation. The right part is my no-movement animation's localTransform, calculated from the keys in nodeAnim->mPositionKeys, nodeAnim->mRotationKeys and nodeAnim->mScalingKeys.
Looking back at what I did: I applied the bind pose transformation twice, which is why the image in my question section looks merely separated rather than spaghetti :)
Lastly, let me show what I did before the no-movement animation test, and the correct animation result.
(For everyone: if my understanding is wrong, please point it out! Thanks.)

Math differences between GLSL and Metal

I was playing around with GLSL and got this effect. I tried to convert it to Metal, but I got some funky results on the y-axis when it is smaller than 0:
There is a funny curvy crop-off on most of the cubes above the horizon (< 0). This is my Metal code:
static float mod(float x, float y)
{
return x - y * floor(x/y);
}
static float vmax(float3 v) {
return max(max(v.x, v.y), v.z);
}
float fBoxCheap(float3 p, float3 b) { //cheap box
return vmax(abs(p) - b);
}
static float map( float3 p )
{
p.x = mod(p.x + 5,10)-5;
p.y = mod(p.y + 5 ,10)-5;
p.z = mod(p.z + 5 ,10)-5;
float box = fBoxCheap(p-float3(0.0,3.0,0.0),float3(4.0,3.0,1.0));
return box;
}
It is almost the same code in GLSL:
float vmax(vec3 v) {
return max(max(v.x, v.y), v.z);
}
float box(vec3 p, vec3 b) { //cheap box
return vmax(abs(p) - b);
}
float map( vec3 p )
{
p.x=mod(p.x+3.0,6.0)-3.0;
p.y=mod(p.y+3.0,6.0)-3.0;
p.z=mod(p.z+3.0,6.0)-3.0;
return box( p, vec3(1.,1.,1.) );
}
How can I resolve this?
I am fairly new to both GLSL and Metal, but I find Metal trickier because of math issues like this.
I don't think there's a difference here. You can create similar artifacts in the GL version by applying all of the same modifications you do in the Metal version. The problem is that offsetting the point after you fold space with mod violates the requirement that the SDF be Lipschitz continuous (i.e., the gradient must be <= 1 everywhere). If you want to translate the box, translate p before applying mod.
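For illustration, a minimal sketch of the Metal map() with the translation applied before the fold, reusing the mod and fBoxCheap helpers above and keeping the same box size and offset (assuming those are the values you actually want):
static float map( float3 p )
{
// translate the box first...
p -= float3(0.0, 3.0, 0.0);
// ...then fold space, so the repeated distance field stays Lipschitz continuous
p.x = mod(p.x + 5.0, 10.0) - 5.0;
p.y = mod(p.y + 5.0, 10.0) - 5.0;
p.z = mod(p.z + 5.0, 10.0) - 5.0;
return fBoxCheap(p, float3(4.0, 3.0, 1.0));
}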

Path Tracing Shadowing Error

I really don't know what else to do to fix this problem. I have written a path tracer using explicit light sampling in C++, and I keep getting these weird, really black shadows, which I know is wrong. I have done everything to fix it, but I still keep getting them, even at higher sample counts. What am I doing wrong? Below is an image of the scene.
And the main Radiance code:
RGB Radiance(Ray PixRay,std::vector<Primitive*> sceneObjects,int depth,std::vector<AreaLight> AreaLights,unsigned short *XI,int E)
{
int MaxDepth = 10;
if(depth > MaxDepth) return RGB();
double nearest_t = INFINITY;
Primitive* nearestObject = NULL;
for(int i=0;i<sceneObjects.size();i++)
{
double root = sceneObjects[i]->intersect(PixRay);
if(root > 0)
{
if(root < nearest_t)
{
nearest_t = root;
nearestObject = sceneObjects[i];
}
}
}
RGB EstimatedRadiance;
if(nearestObject)
{
EstimatedRadiance = nearestObject->getEmission() * E;
Point intersectPoint = nearestObject->intersectPoint(PixRay,nearest_t);
Vector intersectNormal = nearestObject->surfacePointNormal(intersectPoint).Normalize();
if(nearestObject->getBRDF().Type == 1)
{
for(int x=0;x<AreaLights.size();x++)
{
Point pointOnTriangle = RandomPointOnTriangle(AreaLights[x].shape,XI);
Vector pointOnTriangleNormal = AreaLights[x].shape.surfacePointNormal(pointOnTriangle).Normalize();
Vector LightDistance = (pointOnTriangle - intersectPoint).Normalize();
//Geometric Term
RGB Geometric_Term = GeometricTerm(intersectPoint,pointOnTriangle,sceneObjects);
//Lambertian BRDF
RGB LambertianBRDF = nearestObject->getColor() * (1. / M_PI);
//Emitted Light Power
RGB Emission = AreaLights[x].emission;
double MagnitudeOfXandY = (pointOnTriangle - intersectPoint).Magnitude() * (pointOnTriangle - intersectPoint).Magnitude();
RGB DirectLight = Emission * LambertianBRDF * Dot(intersectNormal,-LightDistance) *
Dot(pointOnTriangleNormal,LightDistance) * (1./MagnitudeOfXandY) * AreaLights[x].shape.Area() * Geometric_Term;
EstimatedRadiance = EstimatedRadiance + DirectLight;
}
//
Vector diffDir = CosWeightedRandHemiDirection(intersectNormal,XI);
Ray diffRay = Ray(intersectPoint,diffDir);
EstimatedRadiance = EstimatedRadiance + ( Radiance(diffRay,sceneObjects,depth+1,AreaLights,XI,0) * nearestObject->getColor() * (1. / M_PI) * M_PI );
}
//Mirror
else if(nearestObject->getBRDF().Type == 2)
{
Vector reflDir = PixRay.d-intersectNormal*2*Dot(intersectNormal,PixRay.d);
Ray reflRay = Ray(intersectPoint,reflDir);
return nearestObject->getColor() *Radiance(reflRay,sceneObjects,depth+1,AreaLights,XI,0);
}
}
return EstimatedRadiance;
}
I haven't debugged your code, so there may be any number of bugs of course, but I can give you some tips: First, go look at SmallPT, and see what it does that you don't. It's tiny but still quite easy to read.
From the look of it, it seems there are issues with the sampling and/or the gamma correction. The easiest one is gamma: when converting RGB intensity in the range 0..1 to RGB in the range 0..255, remember to always gamma correct. Use a gamma of 2.2:
R = r^(1.0/gamma)
G = g^(1.0/gamma)
B = b^(1.0/gamma)
Having the wrong gamma will make any path traced image look bad.
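For example, converting one linear channel to an 8-bit value could look like this (a minimal C++ sketch; the clamp is just an assumption to keep out-of-range radiance from overflowing):
#include <algorithm>
#include <cmath>
#include <cstdint>

// Gamma-correct a linear 0..1 channel and quantize it to 0..255.
inline std::uint8_t ToByte(double linear, double gamma = 2.2)
{
    const double clamped = std::min(1.0, std::max(0.0, linear));
    return static_cast<std::uint8_t>(std::pow(clamped, 1.0 / gamma) * 255.0 + 0.5);
}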
Second: sampling. It's not obvious from the code how the sampling is weighted. I'm only familiar with path tracing using Russian roulette sampling. With RR, the radiance basically works like so:
if (depth > MaxDepth)
return RGB();
RGB color = mat.Emission;
// Russian roulette:
float survival = 1.0f;
float pContinue = material.Albedo();
survival = 1.0f / pContinue;
if (Rand.Next() > pContinue)
return color;
color += DirectIllumination(sceneIntersection);
color += Radiance(sceneIntersection, depth+1) * survival;
RR is basically a way of terminating rays at random while still maintaining an unbiased estimate of the true radiance. Since it adds a weight to the indirect term, and the shadow and the bottom of the spheres are only indirectly lit, I'd suspect that has something to do with it (if it isn't just the gamma).

Setting up perspective projection on OpenGL ES 2.0 makes objects disappear

I'm working on a project using OpenGL ES 2.0, and I'm having some trouble setting up perspective projection.
If I don't set up the perspective projection and simply multiply the object-to-world matrix (I believe it's also called the model matrix) by the vertex positions, the objects on screen are rendered correctly. They appear stretched, but as far as I know that's something the projection matrix would fix. The problem is that whenever I set up the perspective matrix and use it, the objects on screen disappear, and no matter how much I move them around they never show up on screen.
The calculations to get the Model-View-Projection matrix are done on the CPU, and the final multiplication of the MVP matrix by the actual object-space vertex data is done in the vertex shader, which is why I believe the problem might be in the process of building that MVP matrix. I've run a bunch of unit tests, and according to those tests (and my basic knowledge of linear algebra) the matrices are being calculated correctly, and my internet research throughout the day isn't helping so far. :-/
This is the code I use to calculate the MVP-Matrix:
Matrix4D projection_matrix;
projection_matrix.makePerspective(45.0f, 0.001f, 100.0f, 480.0f/320.0f);
Matrix4D view_matrix;
view_matrix.makeIdentity(); //Should be a real view matrix. TODO.
Matrix4D model_matrix(getWorldMatrix());
Matrix4D mvp_matrix(projection_matrix);
mvp_matrix *= view_matrix;
mvp_matrix *= model_matrix;
mMesh->draw(time, mvp_matrix.getRawData());
I think this code is pretty self-explanatory, but just in case: the Matrix4D objects are 4x4 matrices, and calling makePerspective/makeIdentity on one makes it the perspective or identity matrix. The getRawData() call on a Matrix4D object returns the matrix data as a float array in column-major order, and the mMesh variable is another object which, when draw is called, simply sends all the vertex and material data to the shaders.
The makePerspective function's code is the following:
Matrix4D& Matrix4D::makePerspective(const float field_of_view,
const float near, const float far, const float aspect_ratio) {
float size = near * tanf(DEGREES_TO_RADIANS(field_of_view) / 2.0f);
return this->makeFrustum(-size, size, -size / aspect_ratio,
size / aspect_ratio, near, far);
}
Matrix4D& Matrix4D::makeFrustum(const float left, const float right,
const float bottom, const float top, const float near,
const float far) {
this->mRawData[0] = 2.0f * near / (right - left);
this->mRawData[1] = 0.0f;
this->mRawData[2] = 0.0f;
this->mRawData[3] = 0.0f;
this->mRawData[4] = 0.0f;
this->mRawData[5] = 2.0f * near / (top - bottom);
this->mRawData[6] = 0.0f;
this->mRawData[7] = 0.0f;
this->mRawData[8] = (right + left) / (right - left);
this->mRawData[9] = (top + bottom) / (top - bottom);
this->mRawData[10] = - (far + near) / (far - near);
this->mRawData[11] = -1.0f;
this->mRawData[12] = 0.0f;
this->mRawData[13] = 0.0f;
this->mRawData[14] = -2.0f * far * near / (far - near);
this->mRawData[15] = 0.0f;
return *this;
}
And the getWorldMatrix() call does this (with some related code):
const Matrix4D& getWorldMatrix() {
return mWorldMatrix =
getTranslationMatrix() *
getRotationMatrix() *
getScaleMatrix();
}
const Matrix4D& getRotationMatrix() {
return this->mRotationMatrix.makeRotationFromEuler(this->mPitchAngle,
this->mRollAngle, this->mYawAngle);
}
const Matrix4D& getTranslationMatrix() {
return this->mTranslationMatrix.makeTranslation(this->mPosition.x,
this->mPosition.y, this->mPosition.z);
}
const Matrix4D& getScaleMatrix() {
return this->mScaleMatrix.makeScale(this->mScaleX, this->mScaleY, this->mScaleZ);
}
///This code goes in the Matrix4D class.
Matrix4D& Matrix4D::makeTranslation(const float x, const float y,
const float z) {
this->mRawData[0] = 1.0f;
this->mRawData[1] = 0.0f;
this->mRawData[2] = 0.0f;
this->mRawData[3] = 0.0f;
this->mRawData[4] = 0.0f;
this->mRawData[5] = 1.0f;
this->mRawData[6] = 0.0f;
this->mRawData[7] = 0.0f;
this->mRawData[8] = 0.0f;
this->mRawData[9] = 0.0f;
this->mRawData[10] = 1.0f;
this->mRawData[11] = 0.0f;
this->mRawData[12] = x;
this->mRawData[13] = y;
this->mRawData[14] = z;
this->mRawData[15] = 1.0f;
return *this;
}
Matrix4D& Matrix4D::makeScale(const float x, const float y,
const float z) {
this->mRawData[0] = x;
this->mRawData[1] = 0.0f;
this->mRawData[2] = 0.0f;
this->mRawData[3] = 0.0f;
this->mRawData[4] = 0.0f;
this->mRawData[5] = y;
this->mRawData[6] = 0.0f;
this->mRawData[7] = 0.0f;
this->mRawData[8] = 0.0f;
this->mRawData[9] = 0.0f;
this->mRawData[10] = z;
this->mRawData[11] = 0.0f;
this->mRawData[12] = 0.0f;
this->mRawData[13] = 0.0f;
this->mRawData[14] = 0.0f;
this->mRawData[15] = 1.0f;
return *this;
}
Matrix4D& Matrix4D::makeRotationFromEuler(const float angle_x,
const float angle_y, const float angle_z) {
float a = cosf(angle_x);
float b = sinf(angle_x);
float c = cosf(angle_y);
float d = sinf(angle_y);
float e = cosf(angle_z);
float f = sinf(angle_z);
float ad = a * d;
float bd = b * d;
this->mRawData[0] = c * e;
this->mRawData[1] = -bd * e + a * f;
this->mRawData[2] = ad * e + b * f;
this->mRawData[3] = 0.0f;
this->mRawData[4] = -c * f;
this->mRawData[5] = bd * f + a * e;
this->mRawData[6] = -ad * f + b * e;
this->mRawData[7] = 0.0f;
this->mRawData[8] = -d;
this->mRawData[9] = -b * c;
this->mRawData[10] = a * c;
this->mRawData[11] = 0.0f;
this->mRawData[12] = 0.0f;
this->mRawData[13] = 0.0f;
this->mRawData[14] = 0.0f;
this->mRawData[15] = 1.0f;
return *this;
}
Finally, the vertex shader is pretty much this:
#version 110
const float c_one = 1.0;
const float c_cero = 0.0;
uniform float time;
uniform mat4 mvp_matrix;
attribute vec3 position;
attribute vec3 normal;
attribute vec2 texture_coordinate;
varying vec2 v_texture_coordinate;
void main()
{
gl_Position = mvp_matrix * vec4(position, c_one);
v_texture_coordinate = texture_coordinate;
}
Just in case: the object is rendered at position (0.0f, 0.0f, -3.0f) with a 0.5f scale applied on all three axes.
I don't really know what could be wrong; I'm hoping someone can spot what I may be missing, and any help would be appreciated. Debugging this would be a lot easier if I could get per-vertex results out of the shader. :-/
As a side note, I have doubts about how to calculate the view (camera) matrix. As far as I know it's simply a matrix with the camera's transformations inverted, by which I understand something like: if I want to move the camera 100 units to the right, I instead move the world 100 units to the left. Is that right?
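Just to illustrate what I mean, a sketch using the same Matrix4D helpers (camera_position here is a hypothetical camera position, and a rotated camera would also need its inverse rotation applied):
// Sketch: a view matrix for a translation-only camera is the negated (inverse) translation.
Matrix4D view_matrix;
view_matrix.makeTranslation(-camera_position.x, -camera_position.y, -camera_position.z);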
EDIT: Just trying to give more information; maybe that way someone will be able to help me. I've noticed the model matrix was incorrect with the code above, mostly because of the matrix order. I've changed it to the following, and now the model matrix seems good:
const Matrix4D& getWorldMatrix() {
return mWorldMatrix =
getScaleMatrix() * getRotationMatrix() * getTranslationMatrix();
}
Despite this, still no luck. The matrices resulting from my test data are these:
Projection matrix:
[1.609506,0.000000,0.000000,0.000000]
[0.000000,2.414258,0.000000,0.000000]
[0.000000,0.000000,-1.000020,-0.002000]
[0.000000,0.000000,-1.000000,0.000000]
Model matrix:
[0.500000,0.000000,0.000000,0.000000]
[0.000000,0.500000,0.000000,0.000000]
[0.000000,0.000000,0.500000,-3.000000]
[0.000000,0.000000,0.000000,1.000000]
MVP matrix:
[0.804753,0.000000,0.000000,0.000000]
[0.000000,1.207129,0.000000,0.000000]
[0.000000,0.000000,2.499990,-0.001000]
[0.000000,0.000000,-1.000000,0.000000]
And the mesh I'm using to test all this is a simple cube going from 1.0f to -1.0f on each axis, centered on the origin. As far as I know, this should place the vertex closest to the near plane (0.001f) at -2.0f along the z axis, so the cube is in front of the camera and within the view frustum. Any clues, someone?

DirectX 11 Compute Shader - not writing all values

I am trying some experiments in fractal rendering with DirectX11 Compute Shaders.
The provided example runs on a FeatureLevel_10 device.
My RWStructured output buffer has a data format of R32G32B32A32_FLOAT.
The problem is that when writing to the buffer, it seems that only the ALPHA (w) value gets written, nothing else...
Here is the shader code:
struct BufType
{
float4 value;
};
cbuffer ScreenConstants : register(b0)
{
float2 ScreenDimensions;
float2 Padding;
};
RWStructuredBuffer<BufType> BufferOut : register(u0);
[numthreads(1, 1, 1)]
void Main( uint3 DTid : SV_DispatchThreadID )
{
uint index = DTid.y * ScreenDimensions.x + DTid.x;
float minRe = -2.0f;
float maxRe = 1.0f;
float minIm = -1.2;
float maxIm = minIm + ( maxRe - minRe ) * ScreenDimensions.y / ScreenDimensions.x;
float reFactor = (maxRe - minRe ) / (ScreenDimensions.x - 1.0f);
float imFactor = (maxIm - minIm ) / (ScreenDimensions.y - 1.0f);
float cim = maxIm - DTid.y * imFactor;
uint maxIterations = 30;
float cre = minRe + DTid.x * reFactor;
float zre = cre;
float zim = cim;
bool isInside = true;
uint iterationsRun = 0;
for( uint n = 0; n < maxIterations; ++n )
{
float zre2 = zre * zre;
float zim2 = zim * zim;
if ( zre2 + zim2 > 4.0f )
{
isInside = false;
iterationsRun = n;
}
zim = 2 * zre * zim + cim;
zre = zre2 - zim2 + cre;
}
if ( isInside )
{
BufferOut[index].value = float4(1.0f,0.0f,0.0f,1.0f);
}
}
The code actually produces, in a sense, the correct result (the 2D Mandelbrot set), but it seems somehow only the alpha value is touched and nothing else is written, although the pixels inside the set should be colored red... (the image is black & white).
Does anybody have a clue what's going on here?
After some fiddling around I found the problem.
I have not found any documentation from MS mentioning this, so it could also be an NVIDIA-specific driver issue.
Apparently you are only allowed to write ONCE per compute shader invocation to the same element in a RWStructuredBuffer. And you also HAVE to write once.
I changed the code to accumulate the correct color in a local variable and now write it only once at the end of the shader.
Everything works perfectly that way.
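In code, the change boils down to something like this sketch (the Mandelbrot loop itself is unchanged from the listing above; the "outside" color is an assumption):
// Accumulate into a local and write exactly once per thread.
float4 color = float4(0.0f, 0.0f, 0.0f, 1.0f); // assumed color for points outside the set
// ... run the iteration loop as above, setting isInside ...
if (isInside)
{
    color = float4(1.0f, 0.0f, 0.0f, 1.0f); // inside the set: red
}
BufferOut[index].value = color; // the single, unconditional write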
I'm not sure, but shouldn't the BufferOut declaration be:
RWStructuredBuffer<BufType> BufferOut : register(u0);
instead of:
RWStructuredBuffer BufferOut : register(u0);
If you are only using a float4 write target, why not use just:
RWBuffer<float4> BufferOut : register (u0);
Maybe this could help.
After playing around again today, I ran into the same problem once more.
The following code produced all white output:
[numthreads(1, 1, 1)]
void Main( uint3 dispatchId : SV_DispatchThreadID )
{
float4 color = float4(1.0f,0.0f,0.0f,1.0f);
WriteResult(dispatchId,color);
}
The WriteResult method is a utility method from my HLSL standard library.
Long story short: after I upgraded from driver version 192 to 195 (beta), the problem went away.
It seems the drivers still have some definite problems in their compute shader support, so beware.
From what I've seen, compute shaders are only useful if you need a more general computational model than the traditional pixel shader, or if you can load data and then share it between threads in fast shared memory. I'm fairly sure you would get better performance with a pixel shader for the Mandelbrot shader.
On my setup (Win7, Feb '10 DX SDK, GTX 480) my compute shaders have a punishing setup time of over 0.2-0.3 ms (binding an SRV and a UAV and then calling dispatch()).
If you do a PS implementation, please post your experiences.
I have no direct experience with DX compute shaders but...
Why are you setting alpha = 1.0?
IIRC, that makes the pixel 100% transparent, so your inside pixels are transparent red, and show up as whatever color was drawn behind them.
When alpha = 1.0, the RGB components are never used.
