Problem:
My goal is to write code that rotates the root joint of a BVH by θ degrees around the global Y axis and keeps the values in the range -180 to 180 (just like MotionBuilder does). I have tried rotating a joint using Euler angles, quaternions, and matrices (taking the BVH rotation order into account), but I haven't yet figured out how to get the correct values. MotionBuilder calculates the x, y, z values so that they are valid for the BVH file. I would like to write code that calculates the rotation x, y, z for a joint, just like MotionBuilder does.
Example:
Initial: Root rotation: [x= -169.56, y=15.97, z=39.57]
After manually rotating about 45 degrees: Root rotation: [x=-117.81, y=49.37, z=70.15]
global y axis:
To rotate a node around the world Y axis any number of degrees the following works (https://en.wikipedia.org/wiki/Rotation_matrix):
import math
from pyfbsdk import *
angle = 45.0
radians = math.radians(angle)
root_matrix = FBMatrix()
root.GetMatrix(root_matrix, FBModelTransformationType.kModelRotation, True)
# Rotation matrix for 'angle' degrees about the world Y axis
transformation_matrix = FBMatrix([
    math.cos(radians), 0.0, math.sin(radians), 0.0,
    0.0, 1.0, 0.0, 0.0,
    -math.sin(radians), 0.0, math.cos(radians), 0.0,
    0.0, 0.0, 0.0, 1.0
])
result_matrix = root_matrix * transformation_matrix
root.SetMatrix(result_matrix, FBModelTransformationType.kModelRotation, True)
If there are any Pre-Rotations on the root node the process is more complex; you can try setting the rotation using SetVector together with the LRMToDof method:
result_vector = FBVector3d()
root.LRMToDof(result_vector, result_matrix)
root.SetVector(result_vector, FBModelTransformationType.kModelRotation, True)
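If you want to reproduce the same numbers outside of MotionBuilder, the underlying math is: build the joint's rotation from its Euler angles in the correct order, pre-multiply the world-Y rotation, and convert back to Euler angles in that same order, which lands in the -180 to 180 range automatically. Below is a rough SciPy sketch; the XYZ order and the sign conventions are assumptions on my part, so verify them against values exported from your own scene:
from scipy.spatial.transform import Rotation as R

def rotate_about_world_y(euler_degrees, angle_degrees, order='xyz'):
    # 'xyz' (lowercase) means rotations about fixed axes, x applied first;
    # swap the order string to match your file's rotation channels.
    joint = R.from_euler(order, euler_degrees, degrees=True)
    world_y = R.from_euler('y', angle_degrees, degrees=True)
    rotated = world_y * joint          # world-axis rotation = pre-multiply
    # as_euler keeps the outer angles in [-180, 180] (middle angle in [-90, 90]).
    return rotated.as_euler(order, degrees=True)

print(rotate_about_world_y([-169.56, 15.97, 39.57], 45.0))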
Basically I'm offsetting a texture2D from its original inputImageTexture with this code
highp vec2 toprightcoord = textureCoordinate + 0.25;
highp vec4 tr = texture2D(inputImageTexture, toprightcoord);
It does what it's supposed to do; however, it leaves a stretched pixel color from the edge of the offset texture (like cheese from a pulled pizza slice).
How to replace it to any color or transparent?
I assume that you have set the texture wrap parameters to GL_CLAMP_TO_EDGE. See glTexParameter.
This causes the stretched pixels when the texture is accessed with the lookup function texture2D outside the range [0.0, 1.0].
You can create a "tiled" texture with the wrap parameter GL_REPEAT.
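For example, that would be glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT) and the same for GL_TEXTURE_WRAP_T, set while the texture is bound.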
But if you want to "replace it to any color or transparent", then you have to do a range check.
The following code sets the alpha channel to 0.0 if the limit of 1.0 is exceeded on either the x or the y coordinate. The variable inBounds is set to 1.0 if the texture coordinate is in bounds, and to 0.0 otherwise:
vec2 toprightcoord = textureCoordinate + 0.25;
vec4 tr = texture2D(inputImageTexture, toprightcoord);
vec2 boundsTest = step(toprightcoord, vec2(1.0));
float inBounds = boundsTest.x * boundsTest.y;
tr.a *= inBounds;
You can extend this to a range test against [0.0, 1.0]:
vec2 boundsTest = step(vec2(0.0), toprightcoord) * step(toprightcoord, vec2(1.0));
Note, the GLSL function step
genType step(genType edge, genType x);
returns 0.0 if x[i] < edge[i], and 1.0 otherwise.
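For example, step(vec2(0.5), vec2(0.3, 0.7)) yields vec2(0.0, 1.0): the first component is below the edge, the second is not.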
With the GLSL function mix, the color can be replaced by a different one:
vec4 red = vec4(1.0, 0.0, 0.0, 1.0);
tr = mix(red, tr, inBounds);
Is it possible to write a 5x5 kernel to process the limited color range into the full range?
This is my sample grayscale kernel, and I don't know what values to use, or where to put them, to achieve this color expansion:
Grayscale
{ 0.3, 0.3, 0.3, 0.0, 0.0 }
{ 0.6, 0.6, 0.6, 0.0, 0.0 }
{ 0.1, 0.1, 0.1, 0.0, 0.0 }
{ 0.0, 0.0, 0.0, 1.0, 0.0 }
{ 0.0, 0.0, 0.0, 0.0, 1.0 }
I would like an RGB color expansion: 16-235 => 0-255.
However, I need the kernel matrix because I am not processing the image myself; I'm passing the matrix to a Windows API function (the undocumented SetMagnificationDesktopColorEffect).
I cannot do a simple subtract/divide/multiply on the pixels. I do not have them.
You can basically do it without a kernel by subtracting 16 from your image and then dividing it by 219. You then have an image normalized to 1, which you multiply by 255 to get back to a 0-255 intensity range.
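If it has to go through the 5x5 color matrix instead, the same per-channel mapping out = (in - 16/255) * 255/219 can be written in matrix form. Here is a sketch of what that matrix could look like, assuming SetMagnificationDesktopColorEffect uses the same row-vector convention as the grayscale example above (scale factors on the diagonal, offsets in the last row, channel values normalized to [0, 1]); that convention is an assumption, so verify it before relying on the exact values:
import numpy as np

scale = 255.0 / 219.0      # ~1.164: stretches the 16..235 range over the full range
offset = -16.0 / 219.0     # ~-0.073: shifts the old black level (16) down to 0

# 5x5 color matrix in the same layout as the grayscale example above:
# a color [r, g, b, a, 1] is multiplied as a row vector from the left.
expand = np.array([
    [scale,  0.0,    0.0,    0.0, 0.0],
    [0.0,    scale,  0.0,    0.0, 0.0],
    [0.0,    0.0,    scale,  0.0, 0.0],
    [0.0,    0.0,    0.0,    1.0, 0.0],
    [offset, offset, offset, 0.0, 1.0],
])

# Quick check: 16/255 should map to ~0.0 and 235/255 to ~1.0.
for v in (16.0 / 255.0, 235.0 / 255.0):
    print(np.array([v, v, v, 1.0, 1.0]) @ expand)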
I am currently working on a raytracer and I just "bumped" into an issue.
I implemented texture mapping for planes, cylinders and spheres and it's working pretty well... Except for the normal map part.
Here is what I have: the in-world position and the in-world normal of each pixel (world-space normals),
and a tangent-space normal map (the usual kind of normal map).
I can't seem to figure out how to convert the tangent-space normals to world space. I have tried using a "TBN" matrix, but the normals are off (normal map projected normals).
And here is my code to compute the new normal:
VEC3 t = vec3_cross(worldnormal, new_vec3(0.0, 1.0, 0.0));
VEC3 b;
if (!vec3_length(t))
t = vec3_cross(worldnormal, new_vec3(0.0, 0.0, 1.0));
t = vec3_normalize(t);
b = vec3_normalize((vec3_cross(worldnormal, t)));
VEC3 map_n = vec3_normalize(get_texture_color(normal_map, texcoords));
MAT3 tbn = new_mat3(t, b, worldnormal);
worldnormal = vec3_normalize(mat3_mult_vec3(tbn, map_n));
get_texture_color() returns the normal map's texture color divided by 255.f
So!
I just found what was wrong with my normal mapping!
After trying a constant {0, 0, 1} normal to see if my TBN matrix was right (and it was), I found out that the normal map's tangent-space normals had to be "converted", i.e. remapped from [0, 1] to [-1, 1].
So the right code is:
VEC3 t = vec3_cross(worldnormal, new_vec3(0.0, 1.0, 0.0));
VEC3 b;
if (!vec3_length(t))
t = vec3_cross(worldnormal, new_vec3(0.0, 0.0, 1.0));
t = vec3_normalize(t);
b = vec3_normalize((vec3_cross(worldnormal, t)));
VEC3 map_n = vec3_normalize(get_texture_color(normal_map, texcoords));
// remap the sampled color from [0, 1] to [-1, 1]: map_n * 2 - 1
map_n = vec3_sub(vec3_scale(map_n, 2), new_vec3(1, 1, 1));
MAT3 tbn = new_mat3(t, b, worldnormal);
worldnormal = vec3_normalize(mat3_mult_vec3(tbn, map_n));
So close, yet so far!
Here is how it looks now, looking pretty good IMHO!
New (proper) normal mapping using the TBN matrix!
With a better material for the middle pillar! (not the other "sort of" water)
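If anyone wants to sanity-check the same math outside of a raytracer, here is a small NumPy sketch (names are mine, not the functions above) that builds the tangent basis the same way and confirms that a "flat" normal-map sample of (0.5, 0.5, 1.0) comes back out as the original world normal:
import numpy as np

def tangent_basis(world_normal):
    # Build an arbitrary tangent/bitangent pair around the world normal.
    n = world_normal / np.linalg.norm(world_normal)
    t = np.cross(n, [0.0, 1.0, 0.0])
    if np.linalg.norm(t) == 0.0:          # normal was parallel to +Y
        t = np.cross(n, [0.0, 0.0, 1.0])
    t = t / np.linalg.norm(t)
    b = np.cross(n, t)
    return t, b / np.linalg.norm(b), n

def perturb(world_normal, sample):
    # sample is the raw normal-map color with components in [0, 1].
    t, b, n = tangent_basis(world_normal)
    map_n = 2.0 * np.asarray(sample) - 1.0     # remap [0, 1] -> [-1, 1]
    tbn = np.column_stack((t, b, n))           # columns are T, B, N
    out = tbn @ map_n
    return out / np.linalg.norm(out)

print(perturb(np.array([0.0, 0.0, 1.0]), [0.5, 0.5, 1.0]))   # ~ [0, 0, 1]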
How can I check whether a vertex is visible, in the simplest way?
If my vertex shader looks like:
void main(void) {
    vec4 glPosition = vec4(VTPosition.x * VTAspectRatio, VTPosition.y, VTPosition.z, 1.0);
    gl_Position = VTProjection * VTModelview * glPosition;
}
Can I check visibility on the CPU the same way?
Vector4 vertex = {0.5, 0.5, -1.0, 1.0};
vertex = projectionMatrix * modelViewMatrix * vertex;
If the vertex's x and y values are in the range -1.0 .. 1.0 (viewport coordinates), it is visible.
The output position of the vertex shader (gl_Position) will undergo perspective division to obtain NDC (Normalized Device Coordinates). In the NDC space, clipping is against the [-1.0, 1.0] range for all coordinates (1).
So to test if a given vertex will be clipped, you have to determine if gl_Position.xyz / gl_Position.w is in the range [-1.0, 1.0] for all coordinates. GLSL code to test this condition could look like this:
if (any(lessThan(gl_Position.xyz, vec3(-gl_Position.w))) ||
    any(greaterThan(gl_Position.xyz, vec3(gl_Position.w))))
{
    // vertex will be clipped
}
(1) Strictly speaking, clipping can be performed at various points in the rendering pipeline, as long as geometry outside the viewing volume ends up being clipped. But it's easiest to express in NDC.
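To answer the CPU-side part of the question: yes, you can do the same test on the CPU, as long as you compare against w rather than dividing by it first (the division misbehaves for points behind the camera, where w <= 0). A small NumPy sketch; the projection matrix below is just a made-up example:
import numpy as np

def is_clipped(mvp, vertex):
    # vertex is (x, y, z, 1); mvp = projection @ modelview.
    x, y, z, w = mvp @ vertex                 # clip-space position
    inside = (-w <= x <= w) and (-w <= y <= w) and (-w <= z <= w)
    return not inside

# Example: identity model-view, simple perspective projection (near=1, far=11).
projection = np.array([
    [1.0, 0.0,  0.0,  0.0],
    [0.0, 1.0,  0.0,  0.0],
    [0.0, 0.0, -1.2, -2.2],
    [0.0, 0.0, -1.0,  0.0],
])
modelview = np.eye(4)
vertex = np.array([0.5, 0.5, -1.0, 1.0])
print(is_clipped(projection @ modelview, vertex))   # False: inside the volume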
I'm taking my first steps with OpenGL ES 2.0, trying things out on my iPod touch. I was wondering how to solve this coordinate issue..
To explain better, I was trying to draw a quad and rotate/translate it using a vertex shader (also because, from what I've read, that seems to be the only way to do it).
Since I'm working with an iPod I have a 1.5:1 ratio and a viewport set by
glViewport(0, 0, backingWidth, backingHeight);
So 0,0 is the center, and the bounds for clipping should be at -1.0 and 1.0 on each axis, etc. (right?)
To draw a square I had to use different values for x and y coordinates because of the aspect ratio:
static const GLfloat lineV[] = {
-0.5f, 0.33f, 0.5f, 0.33f,
0.5f, 0.33f, 0.5f,-0.33f,
0.5f,-0.33f, -0.5f,-0.33f,
-0.5f,-0.33f, -0.5f, 0.33f,
-0.5f, 0.33f, 0.5f,-0.33f,
0.5f, 0.33f, -0.5f,-0.33f,
};
It's a square with both diagonals (I know that using indexes would be more efficient but that's not the point)..
Then I tried writing a vertex shader to rotate the object while moving it:
void main()
{
    mat4 m = mat4( cos(rotation), sin(rotation), 0.0, 0.0,
                  -sin(rotation), cos(rotation), 0.0, 0.0,
                   0.0,           0.0,           1.0, 0.0,
                   0.0,           0.0,           0.0, 1.0);
    mat4 m2 = mat4(1.0);
    m2[1][3] = sin(rotation)*0.8;
    gl_Position = position*(m*m2);
}
It works, but since the coordinates are not the same the quad is distorted while it rotates. How should I prevent that? I wondered whether it was possible to change the view frustum to have different bounds (not -1.0 to 1.0 on both axes), so that enlarging on the y axis would fix the problem.
In addition, is there a better way to use matrices? I mean, I was used to glRotatef, without having to specify the whole matrix.. do convenience functions/constructors exist to accomplish this task?
The first two arguments to glViewport() are not the center; they are the coordinates of the bottom-left corner.
You should probably set up a projection that takes your aspect into account, typically using gluPerspective() (if GLU is available in ES).
No GLU or similar support functions are provided, from what I've seen. Basically I solved it by using equal coordinates when building the vertices and scaling on the y axis by the right aspect ratio in the vertex shader.
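For example (the uniform name is hypothetical): build the quad with equal x and y coordinates, apply the rotation, and then multiply the resulting y by a uniform such as u_aspect set to backingWidth / backingHeight (about 1/1.5 here) just before writing gl_Position, so the rotation happens in a square space and only the final output is squeezed to match the screen.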