I am currently working on a raytracer and I just "bumped" into an issue.
I implemented texture mapping for planes, cylinders and spheres and it's working pretty well... Except for the normal map part.
Here is what I have: the in-world position and the in-world normals of each pixel (world-space normals).
And some tangent-space normal map (the usual kind of normal map).
I can't seem to figure out how to convert the tangent-space normals to world space. I have tried using a TBN matrix, but the normals are off (normal map projected normals).
And here is my code to compute the new normal:
VEC3 t = vec3_cross(worldnormal, new_vec3(0.0, 1.0, 0.0));
VEC3 b;

if (!vec3_length(t))
    t = vec3_cross(worldnormal, new_vec3(0.0, 0.0, 1.0));
t = vec3_normalize(t);
b = vec3_normalize(vec3_cross(worldnormal, t));

VEC3 map_n = vec3_normalize(get_texture_color(normal_map, texcoords));
MAT3 tbn = new_mat3(t, b, worldnormal);
worldnormal = vec3_normalize(mat3_mult_vec3(tbn, map_n));
get_texture_color() returns the normal map's texture color divided by 255.f
So!
I just found what was wrong with my normal mapping!
After trying a constant {0, 0, 1} normal to check that my TBN matrix was right (it was), I found out that the normal map's tangent-space normals had to be "converted": the sampled color lies in [0, 1], so it has to be remapped to [-1, 1] (a flat-normal texel like (128, 128, 255) should end up roughly (0, 0, 1), not (0.5, 0.5, 1)).
So the right code is:
VEC3 t = vec3_cross(worldnormal, new_vec3(0.0, 1.0, 0.0));
VEC3 b;

if (!vec3_length(t))
    t = vec3_cross(worldnormal, new_vec3(0.0, 0.0, 1.0));
t = vec3_normalize(t);
b = vec3_normalize(vec3_cross(worldnormal, t));

VEC3 map_n = vec3_normalize(get_texture_color(normal_map, texcoords));
// remap from [0, 1] to [-1, 1]: map_n * 2 - 1
map_n = vec3_sub(vec3_scale(map_n, 2), new_vec3(1, 1, 1));
MAT3 tbn = new_mat3(t, b, worldnormal);
worldnormal = vec3_normalize(mat3_mult_vec3(tbn, map_n));
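For reference, here is a rough GLSL sketch of the same idea (the helper function and names are just illustrative, not taken from my raytracer):

// Hypothetical helper: builds a world-space normal from a tangent-space
// normal map sample, using the same axis-aligned tangent trick as above.
vec3 apply_normal_map(vec3 worldNormal, sampler2D normalMap, vec2 uv)
{
    vec3 t = cross(worldNormal, vec3(0.0, 1.0, 0.0));
    if (length(t) == 0.0)
        t = cross(worldNormal, vec3(0.0, 0.0, 1.0));
    t = normalize(t);
    vec3 b = normalize(cross(worldNormal, t));

    // sample is in [0, 1]; remap to [-1, 1] before using it as a direction
    vec3 mapN = texture2D(normalMap, uv).rgb * 2.0 - 1.0;

    mat3 tbn = mat3(t, b, worldNormal);
    return normalize(tbn * mapN);
}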
So close, yet so far!
Here is how it looks now; looking pretty good IMHO!
New (proper) normal mapping using a TBN matrix!
With a better material for the middle pillar! (not the other "sort of" water)
So I'm trying to add a rotation and a perspective effect to an image in the vertex shader. The rotation works just fine, but I'm unable to get the perspective effect working. I'm working in 2D.
The rotation matrix is generated from code, but the perspective matrix is a bunch of hardcoded values I got from GIMP using the perspective tool.
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
    0.58302f, -0.29001f, 103.0f,
    -0.00753f, 0.01827f, 203.0f,
    -0.00002f, -0.00115f, 1.0f
});
This perspective matrix gave the result I want in GIMP on a 500x500 image. I'm then trying to apply this same matrix to the texture coordinates; that's why I'm multiplying by 500 before and dividing by 500 after.
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_projTrans;
uniform mat3 u_rotation;
uniform mat3 u_perspective;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;

    vec3 vec = vec3(a_texCoord0 * 500.0, 1.0);
    vec = vec * u_perspective;
    vec = vec3((vec.xy / vec.z) / 500.0, 0.0);

    vec -= vec3(0.5, 0.5, 0.0);
    vec = vec * u_rotation;
    v_texCoords = vec.xy + vec2(0.5);

    gl_Position = u_projTrans * a_position;
}
For the rotation, I'm offsetting the origin so that it rotates around the center instead of the top left corner.
Pretty much everything I know about GIMP's perspective tool comes from http://www.math.ubc.ca/~cass/graphics/manual/pdf/ch10.ps. It suggested I would be able to reproduce what GIMP does after reading it, but it turns out I can't: the result shows nothing (no pixels), while removing the perspective part shows the image rotating properly.
As mentioned in the link, I'm dividing by vec.z to convert my homogeneous coordinates back to a 2D point. I'm not shifting the origin for the perspective transformation, since the link mentions that the top left corner is used as the origin (p. 11):
There is one thing to be careful about - the origin of GIMP
coordinates is at the upper left, with y increasing downwards.
EDIT:
Thanks to @Rabbid76's answer, it's now showing something! However, it's not transforming my texture the way the matrix was transforming my image in GIMP.
My transformation matrix in GIMP was supposed to do something a bit like this:
But instead, it looks something like this:
This is what I think is happening, based on the actual result:
https://imgur.com/X56rp8K (Image used)
(As pointed out, its texture parameter is clamp-to-edge instead of clamp-to-border, but that's beside the point.)
It looks like it's doing the exact opposite of what I'm looking for. I tried offsetting the origin to the center of the image and to the bottom left before applying the matrix, without success. This is a new result, but it's still the same problem: how do I apply the GIMP perspective matrix in a GLSL shader?
EDIT2:
With more testing, I can confirm that it's doing the "opposite". Using this simple downscale transformation matrix:
private final Matrix3 perspectiveTransform = new Matrix3(new float[] {
    0.75f, 0f, 50f,
    0f, 0.75f, 50f,
    0f, 0f, 1.0f
});
The result is an upscaled version of the image:
If I invert the matrix programmatically, it works for the simple scaling matrix! But for the perspective matrix, it shows that:
https://imgur.com/v3TLe2d
EDIT3:
Thanks to @Rabbid76 again: it turns out that applying the rotation after the perspective matrix effectively does the rotation before it, and I end up with a result like this: https://imgur.com/n1vWq0M
It is almost it! The only problem is that the image is VERY squished, as if the perspective matrix were applied multiple times. But if you look carefully, you can see it rotating while in perspective, just like I want. The problem now is how to unsquish it to get a result just like the one I had in GIMP. (The root problem is still the same: how to take a GIMP matrix and apply it in a shader.)
This perspective matrix was doing the result I want in GIMP using a 500x500 image. I'm then trying to apply this same matrix on texture coordinates. That's why I'm multiplying by 500 before and dividing by 500 after.
The matrix

 0.58302  -0.29001  103.0
-0.00753   0.01827  203.0
-0.00002  -0.00115    1.0

is a 2D perspective transformation matrix. It operates on 2D homogeneous coordinates.
See 2D affine and perspective transformation matrices
Since the matrix displayed in GIMP is the transformation from the perspective view to the orthogonal view, the inverse matrix has to be used for the transformation.
The inverse matrix can be calculated by calling inv().
The matrix is set up to map a Cartesian coordinate in the range [0, 500] to a homogeneous coordinate in the range [0, 500].
Your assumption is correct: you have to scale the input from the range [0, 1] to [0, 500], and the output from [0, 500] back to [0, 1]. But you have to scale the 2D Cartesian coordinates.
Further, you have to do the rotation after the perspective projection and the perspective divide.
It may be necessary (depending on the bitmap and the texture coordinate attributes) to flip the V coordinate of the texture coordinates.
And most important, the transformation has to be done per fragment, in the fragment shader.
Note, since this transformation is not linear (it is a perspective transformation), it is not sufficient to calculate the texture coordinates at the corner points only.
vec2 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;

    // flip Y if necessary
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;

    // uv_h: homogeneous coordinate (vec3) in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;

    // uv_p: perspective divide and downscale [0, 500] -> [0, 1]
    vec3 uv_p = vec3(uv_h.xy / uv_h.z / scale, 1.0);

    // rotate around the center
    uv_p = vec3(uv_p.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0);

    return uv_p.xy;
}
Of course you can do the transformation in the vertex shader too.
But then you have to pass the 2D homogeneous coordinate from the vertex shader to the fragment shader.
This is similar to setting a clip-space coordinate to gl_Position.
The difference is that you have a 2D homogeneous coordinate, not a 3D one, and you have to do the perspective divide manually in the fragment shader:
Vertex shader:
attribute vec2 a_texCoord0;

varying vec3 v_texCoords_h;

uniform mat3 u_perspective;

vec3 Project2D( in vec2 uv_coord )
{
    const float scale = 500.0;

    // flip Y if necessary
    //vec2 uv = vec2(uv_coord.x, 1.0 - uv_coord.y);
    vec2 uv = uv_coord.xy;

    // uv_h: homogeneous coordinate (vec3) in range [0, 500]
    vec3 uv_h = vec3(uv * scale, 1.0) * u_perspective;

    // downscale
    return vec3(uv_h.xy / scale, uv_h.z);
}

void main()
{
    v_texCoords_h = Project2D( a_texCoord0 );
    .....
}
Fragment shader:
varying vec3 v_texCoords_h;

uniform mat3 u_rotation;

void main()
{
    // perspective divide
    vec2 uv = v_texCoords_h.xy / v_texCoords_h.z;

    // rotation around the center
    uv = (vec3(uv.xy - vec2(0.5), 0.0) * u_rotation + vec3(0.5, 0.5, 0.0)).xy;

    .....
}
See the preview, where I used the following 2D projection matrix, which is the inverse of the one displayed in GIMP:
2.452f, 2.6675f, -388.0f,
0.0f, 7.7721f, -138.0f,
0.00001f, 0.00968f, 1.0f
Further note: compared to u_projTrans, u_perspective is initialized in row-major order.
Because of that, the vector has to be multiplied from the left onto u_perspective:
vec_h = vec3(vec.xy * 500.0, 1.0) * u_perspective;
But the vector has to be multiplied from the right onto u_projTrans:
gl_Position = u_projTrans * a_position;
See GLSL Programming/Vector and Matrix Operations
and Data Type (GLSL)
Of course, this may change if you transpose the matrix when you set it with glUniformMatrix*.
I found this great page, an HSL picker, and I'm wondering if there is a possibility to achieve a similar effect in WebGL. I'm passing some color to my fragment shader, for example #FF7400; what is the easiest way to convert it to HSL and change its luminosity, or to have a smooth transition to black (luminosity equal to 0)? I want to make clouds on my page whose color (luminosity) depends on how far they are from the sun. Thanks in advance for any help.
Thanks for the great links, but I think I found a much simpler way to make an easy color transition. All I need is the GLSL function T mix(T x, T y, float a), a linear blend of x and y.
This is the code I use in the Shadertoy editor:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord.xy / iResolution.xy;

    vec4 orange = vec4(0.533, 0.25, 0.145, 1.0);
    vec4 blue   = vec4(0.18, 0.23, 0.27, 1.0);
    vec4 black  = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 white  = vec4(1.0, 1.0, 1.0, 1.0);

    float ratio = iResolution.x / iResolution.y;
    float PI = 3.14159265359;

    vec4 mixC = mix(orange, blue, sin(ratio * uv.y));
    mixC = mix(mixC, black, cos(2.0 * PI * uv.x) / ratio);
    mixC = mix(mixC, black, cos(2.0 * PI * uv.y) / ratio);
    mixC = mix(mixC, white, 0.1);

    fragColor = mixC;
}
As you can see, I've made a transition between four colors with just a couple of lines of code, and the result looks like this:
I think of a fragment shader as a little Photoshop. Every Photoshop operation should be possible with WebGL.
If we are talking about a 2D image where the sun position is relative and you just want some basic image processing, you can use the rgb2hsv and hsv2rgb functions from this answer. I think they should work with GLSL 1.
Then you can scale the saturation, luminosity or hue and convert the result back to RGB.
If that doesn't work, you might have to reimplement it from the common formula; use the wiki or this link: http://www.rapidtables.com/convert/color/rgb-to-hsl.htm
If you want to do some more image processing where neighbouring pixels are needed, I suggest this great tutorial, where you can easily do edge sharpening, blur, etc.: http://webglfundamentals.org/webgl/lessons/webgl-image-processing.html
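For the luminosity change itself, a minimal sketch (assuming the rgb2hsv and hsv2rgb helpers from the linked answer are pasted into the shader above it) could look like this:

// color: the input RGB color, e.g. #FF7400 as vec3(1.0, 0.455, 0.0)
// darkness: 0.0 keeps the original color, 1.0 gives black
vec3 darken(vec3 color, float darkness)
{
    vec3 hsv = rgb2hsv(color);   // helper from the linked answer
    hsv.z *= (1.0 - darkness);   // scale the value/luminosity channel
    return hsv2rgb(hsv);         // helper from the linked answer
}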
original image
Hi guys, I think I found an unorthodox but easy way to desaturate an RGB image: we just need to find the average color of the pixel,
average_color = (R + G + B) / 3
and keep the alpha:
vec4(average_color, average_color, average_color, Alpha);
void main()
{
    // read the color of the current fragment into a variable
    lowp vec4 color_of_pixel = texture2D(texture_sampler, var_texcoord0.xy);

    // average the red, green and blue channels; this keeps the "force" of the color
    float average_color = (color_of_pixel.r + color_of_pixel.g + color_of_pixel.b) / 3.0;

    lowp vec4 color_of_pixel_final = vec4(average_color, average_color, average_color, color_of_pixel.a);
    gl_FragColor = color_of_pixel_final; // write color_of_pixel_final to the output gl_FragColor
}
Desaturated image
I'm implementing simple ray tracing for spheres in a fragment shader, and I'm currently working on the function that computes the color of a diffusely shaded sphere. Here is the code for the function:
vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
    vec3 color = vec3(1.,2.,3.);
    vec3 N = (point - sphere.xyz) / sphere.w;
    vec3 diffuse = max(dot(Ldir, N), 0.0);
    vec3 ambient = material/5;
    color = ambient + Lrgb * diffuse * max(0.0, N * Ldir);
    return color;
}
I'm getting errors on the two lines where I'm using the max function. I got the code for the line assigning max(dot(Ldir, N), 0.0) from the WebGL cheat sheet, which uses max(dot(ec_light_dir, ec_normal), 0.0);
For some reason, my implementation is not working, as I'm getting the error:
ERROR: 0:38: 'max' : no matching overloaded function found
What could be the problem with either of these max functions?
There are two max statements in your shader. It's the second one that's the problem.
max(0.0, N * LDir) makes no sense: N is a vec3, and there's no version of max that takes max(float, vec3). There is a version of max that's max(vec3, float), so swap it to be
`max(N * LDir, 0.0)`
and it might work. Basically your shader is NOT an ES 2.0 shader. Maybe it's being used on a driver that is not spec compliant (i.e., the driver has a bug). WebGL tries to follow the spec 100%.
The dot product is a scalar value, not a vec3; you need to either store it in a float
float diffuse = max(dot(Ldir, N), 0.0);
or initialize a vec3 with it
vec3 diffuse = vec3(max(dot(Ldir, N), 0.0));
The same goes for the ambient term. Usually both the diffuse and ambient terms are just scalars.
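Putting both answers together, a corrected sketch of the function could look like this (assuming Ldir and Lrgb are vec3 uniforms declared elsewhere, as in the original shader):

vec3 shadeSphere(vec3 point, vec4 sphere, vec3 material) {
    // surface normal of the sphere at the hit point
    vec3 N = (point - sphere.xyz) / sphere.w;

    // dot() returns a float, so keep the diffuse term scalar
    float diffuse = max(dot(Ldir, N), 0.0);

    vec3 ambient = material / 5.0;
    return ambient + Lrgb * diffuse;
}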
I'm trying to reduce the number of post-process textures I have to draw in my scene. The end goal is to support an SSAO shader. The shader requires depth, position and normal data. Currently I am storing the depth and normals in one float texture and the position in another.
I've been doing some reading, and it seems possible to get the position simply from the depth stored in the normal texture: you unproject the x and y and multiply by the depth value. I can't seem to get this right, however, and it's probably due to my lack of understanding...
So currently my positions are drawn to a position texture. This is what it looks like (this is currently working correctly):
Here is my new method. I pass the normal texture that stores the normal x, y and z in the RGB channels and the depth in the W channel. In the SSAO shader I need to get the position, and this is how I'm doing it:
// viewport is a vec2 holding the viewport width and height
// invProj is a mat4 built from camera.projectionMatrixInverse
// (camera.projectionMatrixInverse.getInverse( camera.projectionMatrix );)
vec3 get_eye_normal()
{
    vec2 frag_coord = gl_FragCoord.xy / viewport;
    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    return normalize((invProj * device_normal).xyz);
}
...
float srcDepth = texture2D(tNormalsTex, vUv).w;
vec3 eye_ray = get_eye_normal();
vec3 srcPosition = vec3( eye_ray.x * srcDepth , eye_ray.y * srcDepth , eye_ray.z * srcDepth );
//Previously was doing this:
//vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
However, when I render out the positions, it looks like this:
The SSAO looks very messed up using the new method. Any help would be greatly appreciated.
I was able to find a solution to this. You need to multiply the ray normal by the camera far minus near (I was using the normalized depth value, but you need the world-space depth value).
I created a function to extract the position from the normal/depth texture like so:
First, in the depth capture pass (fragment shader):
float ld = length(vPosition) / linearDepth; //linearDepth is cam.far - cam.near
gl_FragColor = vec4( normalize( vNormal ).xyz, ld );
And now in the shader trying to extract the position...
// Gets the 3D world position from the normal texture,
// which stores depth in its w component.
vec3 get_world_pos( vec2 uv )
{
    vec2 frag_coord = uv;
    float depth = texture2D(tNormals, frag_coord).w;
    float unprojDepth = depth * linearDepth - 1.0;

    frag_coord = (frag_coord - 0.5) * 2.0;
    vec4 device_normal = vec4(frag_coord, 0.0, 1.0);
    vec3 eye_ray = normalize((invProj * device_normal).xyz);

    vec3 pos = vec3( eye_ray.x * unprojDepth, eye_ray.y * unprojDepth, eye_ray.z * unprojDepth );
    return pos;
}
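In the SSAO shader, the old position-texture lookup can then be swapped for a call to this function (a sketch, assuming vUv is the varying texture coordinate as before):

// previously: vec3 srcPosition = texture2D(tPositionTex, vUv).xyz;
vec3 srcPosition = get_world_pos(vUv);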
I am learning to use shaders in OpenGL ES.
As an example, here's my playground fragment shader, which takes the current video frame and makes it grayscale:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;

void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);
    highp float avrg = (theColor[0] + theColor[1] + theColor[2]) / 3.0;
    theColor[0] = avrg; // r
    theColor[1] = avrg; // g
    theColor[2] = avrg; // b
    gl_FragColor = theColor;
}
theColor represents the current pixel. It would be cool to also get access to the previous pixel at this same coordinate.
For the sake of curiosity, I would like to add or multiply the color of the current pixel with the color of the pixel at the same coordinate in the previous render frame.
How could I keep the previous pixels around and pass them into my fragment shader in order to do something with them?
Note: It's OpenGL ES 2.0 on the iPhone.
You need to render the previous frame to a texture, using a Framebuffer Object (FBO), and then you can read that texture in your fragment shader.
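A minimal fragment-shader sketch of that idea, assuming the previous frame was rendered into a texture bound to a uniform called previousFrame (the name is illustrative):

varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;     // current frame
uniform sampler2D previousFrame;  // FBO color attachment from the last render

void main() {
    highp vec4 current  = texture2D(videoFrame, textureCoordinate);
    highp vec4 previous = texture2D(previousFrame, textureCoordinate);
    // blend the current pixel with the same pixel from the previous frame
    gl_FragColor = mix(current, previous, 0.5);
}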
The dot intrinsic function that Damon refers to is a code implementation of the mathematical dot product. I'm not supremely familiar with OpenGL, so I'm not sure what the exact function call is, but mathematically a dot product goes like this:
Given a vector a and a vector b, the 'dot' product a 'dot' b produces a scalar result c:
c = a.x * b.x + a.y * b.y + a.z * b.z
Most modern graphics hardware (and CPUs, for that matter) can perform this kind of operation in one pass. In your particular case, you could compute your average easily with a dot product like so:
highp vec4 weights = vec4(1.0/3.0, 1.0/3.0, 1.0/3.0, 0.0);
(I always get the 4th component in homogeneous vectors and matrices mixed up for some reason.)
highp float avg = dot(theColor, weights);
This will multiply each component of theColor by 1/3 (and the 4th component by 0), and then add them together.
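Put together, the earlier grayscale shader rewritten with a dot product could look like this (a sketch; the weights name is just for illustration):

varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;

void main() {
    highp vec4 theColor = texture2D(videoFrame, textureCoordinate);

    // average r, g and b in a single dot product; the 0.0 ignores alpha
    highp vec4 weights = vec4(1.0 / 3.0, 1.0 / 3.0, 1.0 / 3.0, 0.0);
    highp float avrg = dot(theColor, weights);

    gl_FragColor = vec4(avrg, avrg, avrg, theColor.a);
}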