GLSL Vertex Lighting Shader - OpenGL ES

I am having a problem with a simple vertex point light in a GLSL shader.
I am still confused about which coordinate space to do the lighting in.
Right now I am transforming the position by the modelview matrix and the normal by the upper 3x3 of the modelview (no translation). I am also transforming the light position by the view matrix to get it into the same space.
The problem is that the light position moves when the camera moves.
attribute vec4 position;
attribute vec3 normal;
attribute vec2 texcoord0;
varying vec4 colorVarying;
varying vec2 texOut0;
uniform mat4 Projection;
uniform mat4 Modelview;
uniform mat3 NormalMatrix; // this is the upper 3x3 of the modelview
uniform vec3 LightPosition; // already transformed by the view matrix
uniform vec3 AmbientMaterial; // these two declarations were missing
uniform vec3 DiffuseMaterial;

void main() {
    vec3 N = normalize(NormalMatrix * normal);
    vec4 P = Modelview * position; // no view
    vec3 L = normalize(LightPosition - P.xyz);
    float df = max(0.0, dot(N, L));
    vec3 final_color = AmbientMaterial + df * DiffuseMaterial;
    colorVarying = vec4(final_color, 1.0);
    gl_Position = Projection * Modelview * position;
}
I figured out my error. I am using ES 2.0 and was sending my normal matrix via
glUniformMatrix3fv(gVertexLightingShader->Uniforms[UNIFORM_NORMALMATRIX], 1, 0, m_modelview.data());
But m_modelview was a 4x4 matrix, so the nine floats read were not the upper 3x3 and the normal matrix was not correct.

As @datenwolf said, the way you calculate the normal matrix only works if the upper 3x3 of the modelview is orthonormal, that is, if it contains only rotation and no scaling.
This is the way to solve the issue in general:
var normalMatrix = mat3.create();
mat4.toInverseMat3(mvMatrix, normalMatrix); // inverse of the upper 3x3 of the modelview
normalMatrix = mat3.toMat4(normalMatrix);   // widen back to a 4x4 (legacy glMatrix 0.9 API)
mat4.transpose(normalMatrix);               // transpose of the inverse = the normal matrix
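For what it's worth, on targets whose GLSL has the inverse() and transpose() built-ins (desktop GLSL 1.40+; GLSL ES 2.0 has neither, which is why the matrix is computed on the CPU above), the same normal matrix can be derived directly in the vertex shader. A minimal sketch:

#version 150
uniform mat4 projection;
uniform mat4 modelview;
in vec3 position;
in vec3 normal;
out vec3 vNormal;

void main()
{
    // Truncate to 3x3 first, then invert and transpose; this stays
    // correct even when the modelview contains non-uniform scaling.
    mat3 normalMatrix = transpose(inverse(mat3(modelview)));
    vNormal = normalize(normalMatrix * normal);
    gl_Position = projection * modelview * vec4(position, 1.0);
}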

Related

How to texture non-unwrapped model using a cubemap

I have lots of models that aren't unwrapped (they don't have UV coordinates), and they are too complex to unwrap. So I decided to texture them using a seamless cubemap:
[VERT]
attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
...
void main()
{
    gl_Position = u_projTrans * u_worldTrans * a_position;
    texCoord = vec3(a_position);
}
[FRAG]
varying vec3 texCoord;
uniform samplerCube u_cubemapTex;
void main()
{
    gl_FragColor = textureCube(u_cubemapTex, texCoord);
}
It works, but the result looks weird because the texturing depends on the vertex positions. If my model is more complex than a cube or a sphere, I see visible seams and low texture resolution on some parts of the object.
Reflection maps well onto the model, but it has a mirror effect.
Reflection:
[VERT]
attribute vec3 a_normal;
varying vec3 v_reflection;
uniform mat4 u_matViewInverseTranspose;
uniform vec3 u_cameraPos;
...
void main()
{
    mat3 normalMatrix = mat3(u_matViewInverseTranspose);
    vec3 n = normalize(normalMatrix * a_normal);
    // calculate reflection
    vec3 vView = a_position.xyz - u_cameraPos.xyz;
    v_reflection = reflect(vView, n);
    ...
}
How can I implement something like a reflection, but with a "sticky" effect, as if the texture were attached to certain vertices (not moving)? Each side of the model should display its own side of the cubemap, so the result looks like ordinary 2D texturing. Any advice will be appreciated.
UPDATE 1
I summed up all the comments and decided to calculate the cubemap UV. Since I use LibGDX, some names may differ from the OpenGL ones.
Shader class:
public class CubemapUVShader implements com.badlogic.gdx.graphics.g3d.Shader {
    ShaderProgram program;
    Camera camera;
    RenderContext context;
    Matrix4 viewInvTraMatrix, viewInv;
    Texture texture;
    Cubemap cubemapTex;
    ...

    @Override
    public void begin(Camera camera, RenderContext context) {
        this.camera = camera;
        this.context = context;
        program.begin();
        program.setUniformMatrix("u_matProj", camera.projection);
        program.setUniformMatrix("u_matView", camera.view);
        cubemapTex.bind(1);
        program.setUniformi("u_textureCubemap", 1);
        texture.bind(0);
        program.setUniformi("u_texture", 0);
        context.setDepthTest(GL20.GL_LEQUAL);
        context.setCullFace(GL20.GL_BACK);
    }

    @Override
    public void render(Renderable renderable) {
        program.setUniformMatrix("u_matModel", renderable.worldTransform);
        viewInvTraMatrix.set(camera.view);
        viewInvTraMatrix.mul(renderable.worldTransform);
        program.setUniformMatrix("u_matModelView", viewInvTraMatrix);
        viewInvTraMatrix.inv();
        viewInvTraMatrix.tra();
        program.setUniformMatrix("u_matViewInverseTranspose", viewInvTraMatrix);
        renderable.meshPart.render(program);
    }
    ...
}
Vertex:
attribute vec4 a_position;
attribute vec2 a_texCoord0;
attribute vec3 a_normal;
attribute vec3 a_tangent;
attribute vec3 a_binormal;
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform mat4 u_matProj;
uniform mat4 u_matView;
uniform mat4 u_matModel;
uniform mat4 u_matViewInverseTranspose;
uniform mat4 u_matModelView;
void main()
{
    gl_Position = u_matProj * u_matView * u_matModel * a_position;
    v_texCoord = a_texCoord0;
    // CALCULATE CUBEMAP UV (WRONG!)
    // I decided that tm_l2g mentioned in the comments is u_matView * u_matModel
    v_cubeMapUV = vec3(u_matView * u_matModel * vec4(a_normal, 0.0));
    /*
    mat3 normalMatrix = mat3(u_matViewInverseTranspose);
    vec3 t = normalize(normalMatrix * a_tangent);
    vec3 b = normalize(normalMatrix * a_binormal);
    vec3 n = normalize(normalMatrix * a_normal);
    */
}
Fragment:
varying vec2 v_texCoord;
varying vec3 v_cubeMapUV;
uniform sampler2D u_texture;
uniform samplerCube u_textureCubemap;
void main()
{
    vec3 cubeMapUV = normalize(v_cubeMapUV);
    vec4 diffuse = textureCube(u_textureCubemap, cubeMapUV);
    gl_FragColor = vec4(diffuse.rgb, 1.0); // diffuse is a vec4; assigning it straight to .rgb would not compile
}
The result is completely wrong:
I expect something like that:
UPDATE 2
The texture looks stretched on the sides and distorted in some places if I use the vertex position as the cubemap coordinate in the vertex shader:
v_cubeMapUV = a_position.xyz;
I uploaded euro.blend, euro.obj and the cubemap files for review.
That code works only for meshes that are centered around (0,0,0); if that is not the case, or even if (0,0,0) is not inside the mesh, then artifacts occur...
I would start with computing the bounding box BBOXmin(x0,y0,z0), BBOXmax(x1,y1,z1) of your mesh and translating the position used for the texture coordinate so it is centered around it:
center = 0.5*(BBOXmin+BBOXmax);
texCoord = vec3(a_position-center);
However, non-uniform vertex density would still lead to texture-scaling artifacts, especially if the BBOX side sizes differ too much. Rescaling it to a cube would help:
vec3 center = 0.5*(BBOXmin+BBOXmax); // center of BBOX
vec3 size = BBOXmax-BBOXmin; // size of BBOX
vec3 r = a_position-center; // position centered around center of BBOX
r.x/=size.x; // rescale it to cube BBOX
r.y/=size.y;
r.z/=size.z;
texCoord = r;
Again, if the center of the BBOX is not inside the mesh, then this will not work...
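Putting both steps together, a minimal vertex-shader sketch using the question's own attribute/uniform names; the BBOXmin/BBOXmax uniforms are assumptions that you would fill from the mesh on the CPU:

attribute vec4 a_position;
varying vec3 texCoord;
uniform mat4 u_worldTrans;
uniform mat4 u_projTrans;
uniform vec3 BBOXmin; // assumed uniform, computed from the mesh on the CPU
uniform vec3 BBOXmax; // assumed uniform

void main()
{
    vec3 center = 0.5*(BBOXmin+BBOXmax);        // center of BBOX
    vec3 size   = BBOXmax-BBOXmin;              // size of BBOX
    texCoord    = (a_position.xyz-center)/size; // centered, rescaled to a cube
    gl_Position = u_projTrans*u_worldTrans*a_position;
}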
The reflection part is not clear to me. Do you have some images/screenshots?
[Edit1] simple example
I see it like this (without the center offsetting and aspect ratio corrections mentioned above):
[Vertex]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
    pixel_col=col;
    pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
[Fragment]
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec4 pixel_col;
in smooth vec3 pixel_txr;
uniform samplerCube txr_skybox;
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    frag_col=texture(txr_skybox,pixel_txr);
}
//------------------------------------------------------------------
And here preview:
The white torus in the first few frames uses the fixed-function pipeline; the rest uses shaders. As you can see, the only inputs I use are the vertex position, the color, and the transform matrices tm_l2g, which converts from mesh coordinates to the global world, and tm_g2s, which holds the perspective projection...
As you can see, I render the BBOX with the same cube-map texture I use for rendering the model, so it looks like a cool reflection/transparency effect :) (which was not intentional).
Anyway, when I change the line
pixel_txr=(tm_l2g*vec4(pos,0.0)).xyz;
into:
pixel_txr=pos;
in my vertex shader, the object becomes solid again:
You can combine both by passing two texture-coordinate vectors and fetching two texels in the fragment shader, blending them together with some ratio. Of course you would need to pass two cube-map textures, one for the object and one for the skybox...
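A sketch of that blend on the fragment side; the txr_object sampler, the pixel_txr_obj/pixel_txr_sky inputs and the 0.25 ratio are assumptions (in the vertex shader you would output pos for the object coordinate and (tm_l2g*vec4(pos,0.0)).xyz for the skybox one):

//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
in smooth vec3 pixel_txr_obj;   // assumed: object-space coordinate
in smooth vec3 pixel_txr_sky;   // assumed: global-space coordinate
uniform samplerCube txr_object; // assumed: cube map for the object
uniform samplerCube txr_skybox; // cube map for the skybox, as above
out layout(location=0) vec4 frag_col;
//------------------------------------------------------------------
void main(void)
{
    vec4 c_obj=texture(txr_object,pixel_txr_obj);
    vec4 c_sky=texture(txr_skybox,pixel_txr_sky);
    frag_col=mix(c_obj,c_sky,0.25); // blend ratio is arbitrary
}
//------------------------------------------------------------------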
The red warnings are from my CPU-side code, reminding me that I am trying to set uniforms that are not present in the shaders (I took this from the bump-mapping example without changing the CPU-side code...).
[Edit2] here is a preview of your mesh with the offset
The vertex shader changes a bit (I just added the offsetting described in the answer):
//------------------------------------------------------------------
#version 420 core
//------------------------------------------------------------------
uniform mat4x4 tm_l2g;
uniform mat4x4 tm_g2s;
uniform vec3 center=vec3(0.0,0.0,2.0);
layout(location=0) in vec3 pos;
layout(location=1) in vec4 col;
out smooth vec4 pixel_col;
out smooth vec3 pixel_txr;
//------------------------------------------------------------------
void main(void)
{
    pixel_col=col;
    pixel_txr=pos-center;
    gl_Position=tm_g2s*tm_l2g*vec4(pos,1.0);
}
//------------------------------------------------------------------
So by offsetting the center point you can get rid of the singular-point distortion. However, as I mentioned in the comments, for arbitrary meshes there will always be some distortion with cheap texturing tricks like this, compared to proper texture coordinates.
Beware: my mesh was resized/normalized (sadly I do not remember if it's the <-1,+1> range or a different one, and I'm too lazy to dig through the source code of the GLSL engine I tested this in), so the offset might need a different magnitude in your environment to achieve the same result.

Rendering artifacts when using dot(n,l) as texture lookup coordinate in WebGL

I'm implementing the xToon shader (pdf) in GLSL to use with Three.js.
I'm getting some rendering artifacts, and I think the problem is due to some WebGL behavior I am not knowledgeable about, perhaps relating to a NaN or Inf or something... I'm pulling my hair out.
I'll include the complete fragment and vertex shaders below, but I think this is the offending code, located in the fragment shader:
....
vec3 n = normalize(vNormal);
vec3 l = normalize(lightDir);
float d = dot(n, l) * 0.5 + 0.5;
//vec2 texLookUp = vec2(d, loa);
vec2 texLookUp = vec2(d, 0.055);
vec4 dColor = texture2D(texture, texLookUp);
gl_FragColor = dColor;
....
To the best of my debugging efforts, there seems to be some problem with using the value d as a component of the texture-lookup vector. This code produces these strange artifacts:
There shouldn't be those yellow "lines" on those contours...
As you may have noticed, I'm not actually using the loa value in this code. For a while I thought the problem was in how I calculated loa, but the bug seems to be independent of loa.
Any help would be much appreciated!
The fragment shader:
uniform vec3 lightDir;
uniform sampler2D texture;
varying vec3 vNormal;
varying vec3 vPosition;
varying vec2 vUv;
// loa calculation for texture lookup
varying highp float loa;
void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(lightDir);
    float d = dot(n, l) * 0.5 + 0.5;
    //vec2 texLookUp = vec2(d, loa);
    vec2 texLookUp = vec2(d, 0.055);
    vec4 dColor = texture2D(texture, texLookUp);
    gl_FragColor = dColor;
}
And the vertex shader:
uniform vec3 cameraPos;
uniform vec3 lightDir;
uniform vec3 focalPos;
uniform float inflate;
uniform float zmin;
uniform float r;
varying vec3 vNormal;
varying vec2 vUv;
varying float loa;
void main() {
    vec3 n = normalize(normal);
    // euclidean distance from the camera position to the point
    float depth = length(cameraPos - position);
    // 1. detail mapping, correcting for perspective projection
    float z = depth / zmin;
    loa = 1.0 - (log2(z)/log2(r));
    loa = clamp(loa, 0.055, 0.9);
    vNormal = n;
    vUv = uv;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(normal * inflate + position, 1.0);
}
I solved the problem by setting the texture wrapping to ClampToEdgeWrapping instead of RepeatWrapping. I was led to this answer by this Stack Overflow question:
Using floor() function in GLSL when sampling a texture leaves glitch
The solution is explained very well in this blog post:
http://webglfundamentals.org/webgl/lessons/webgl-3d-textures.html
The functions to deal with this in THREE.js are members of Texture and are explained in the THREE.js docs.
I also needed to set the min filter to NearestFilter to fully get rid of the artifacts.
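If changing the texture parameters is not an option, a shader-side guard can achieve the same thing: clamp the coordinate strictly inside [0, 1] so that linear filtering under RepeatWrapping can never blend across the opposite edge. A sketch based on the question's fragment shader (the 0.001 margin is an assumption; half a texel, 0.5/width, is the exact safe value):

uniform vec3 lightDir;
uniform sampler2D texture;
varying vec3 vNormal;

void main() {
    vec3 n = normalize(vNormal);
    vec3 l = normalize(lightDir);
    float d = dot(n, l) * 0.5 + 0.5;
    // d can land exactly on 1.0 (or drift past it with precision error),
    // which wraps to the far edge of the texture under RepeatWrapping.
    d = clamp(d, 0.001, 0.999);
    gl_FragColor = texture2D(texture, vec2(d, 0.055));
}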

Is this GLSL program correct? My cubes are solid black

My Phong fragment shader is not shading anything, just making everything black.
This is my fragment shader
precision mediump float;
varying vec3 vposition;
varying vec3 vnormal;
varying vec4 vcolor;
varying vec3 veyePos;
void main() {
    vec3 lightPos = vec3(0,0,0);
    vec4 s = normalize(vec4(lightPos,1) - vec4(veyePos,1));
    vec4 r = reflect(-s, vec4(vnormal, 1));
    vec4 v = normalize(-vec4(veyePos, 1));
    float spec = max(dot(v,r), 0.0);
    float diff = max(dot(vec4(vnormal,1), s), 0.0);
    vec3 diffColor = diff * vec3(1,0,0);
    vec3 specColor = pow(spec,3.0) * vec3(1,1,1);
    vec3 ambientColor = vec3(0.1,0.1,0.1);
    gl_FragColor = vec4(diffColor + 0.5 * specColor + ambientColor, 1);
}
This is my vertex shader
uniform mat4 uMVPMatrix;
uniform mat4 uMVMatrix;
uniform vec3 eyePos;
attribute vec4 aPosition;
attribute vec4 aColor;
attribute vec4 aNormal;
varying vec4 vcolor;
varying vec3 vposition;
varying vec3 vnormal;
varying vec3 veyePos;
void main() {
    mat4 normalMat = transpose(inverse(uMVMatrix));
    vec4 vertPos4 = uMVMatrix * vec4(vec3(aPosition), 1.0);
    vposition = vec3(vertPos4) / vertPos4.w;
    vcolor = aColor;
    veyePos = eyePos;
    vnormal = vec3(uMVMatrix * vec4(vec3(aNormal), 0.0));
    gl_Position = uMVPMatrix * aPosition;
}
MVMatrix is the model-view matrix.
MVPMatrix is the model-view-projection matrix.
First of all, your lighting equations are incorrect.
The vector s that you use to calculate the diffuse color should be a unit vector that originates at your vertex (vposition) and points towards your light, so it would be:
s = normalize(lightPos - vposition)
Also, lightPos should be given in camera space and not in world space (so you should multiply it by your MV matrix).
The vector r is the reflection of the light direction around the normal. Note that GLSL's reflect(I, N) expects an incident vector pointing towards the surface, so with s pointing from the vertex to the light, reflect(-s, ...) is the right call; the real problem is that the normal there should be in non-homogeneous coordinates, a plain vec3:
r = reflect(-s, vnormal)
And finally, v is the viewing ray (multiplied by -1), so it should be the unit vector that originates at vposition and goes towards veyePos:
v = normalize(veyePos - vposition)
Also, in your vertex shader, veyePos (assuming it is the position of your camera) should not be a varying (it should be a flat or uniform variable) because you don't want to interpolate it.
In your vertex shader you calculate normalMat, but you forgot to use it when transforming your normals into camera space.
Also, normalMat should be a mat3 because it is the inverse transpose of the upper-left 3x3 block of your MV matrix.
** In order to be efficient you should calculate normalMat on the CPU and pass it as a uniform to your vertex shader.
Hope this helps.
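Edit: putting those corrections together, a sketch of the fixed fragment shader (assuming, as above, that veyePos and lightPos are supplied in camera space, and that the vertex shader now transforms the normal with normalMat):

precision mediump float;
varying vec3 vposition; // camera-space vertex position
varying vec3 vnormal;   // camera-space normal (transformed by normalMat)
varying vec3 veyePos;   // camera position; a uniform would be better than a varying

void main() {
    vec3 lightPos = vec3(0.0, 0.0, 0.0);      // light at the origin, camera space
    vec3 n = normalize(vnormal);
    vec3 s = normalize(lightPos - vposition); // vertex -> light
    vec3 v = normalize(veyePos - vposition);  // vertex -> eye
    vec3 r = reflect(-s, n);                  // reflected light direction
    float diff = max(dot(n, s), 0.0);
    float spec = max(dot(v, r), 0.0);
    vec3 diffColor = diff * vec3(1.0, 0.0, 0.0);
    vec3 specColor = pow(spec, 3.0) * vec3(1.0);
    vec3 ambientColor = vec3(0.1);
    gl_FragColor = vec4(diffColor + 0.5 * specColor + ambientColor, 1.0);
}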

Compute normals in shader issue

I have the following vertex shader to rotate normals. Before I implemented it, I also passed the rotation matrix of the mesh to calculate the normals; back then the lighting was fine.
#version 150
uniform mat4 projection;
uniform mat4 modelview;
in vec3 position;
in vec3 normal;
in vec2 texcoord;
out vec3 fposition;
out vec3 fnormal;
out vec2 ftexcoord;
void main()
{
    mat4 mvp = projection * modelview;
    fposition = vec3(mvp * vec4(position, 1.0));
    fnormal = normalize(mat3(transpose(inverse(modelview))) * normal);
    ftexcoord = texcoord;
    gl_Position = mvp * vec4(position, 1.0);
}
But with this shader, the lighting computed in the fragment shader turns with the camera. I haven't changed the fragment shader, so the issue should be in the code above.
What am I doing wrong in computing the normals?
The steps you use to create the normal matrix might be out of order.
Try:
fnormal = normalize(transpose(inverse(mat3(modelview))) * normal);
Edit:
Since you are inverting the mat4, the translation values (which get truncated when a mat4 is converted to a mat3) are probably affecting the calculation of the inverse matrix.

GLSL Shader - How to calculate the height of a texture?

In this question I asked how to create a "mirrored" texture, and now I want to move this "mirrored" image down the y-axis by the height of the image.
I tried something like this with different values of HEIGHT, but I cannot find a proper solution:
// Vertex Shader
uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;
void main()
{
    highp vec4 pos = a_position;
    pos.y = pos.y - HEIGHT;
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos;
    v_color = a_color;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y);
}
What you are actually changing in your code snippet is the Y position of your vertices... this is most certainly not what you want to do.
a_position is your model-space position, in the coordinate system that is centered around your quad (I'm assuming you're using a quad to display the texture).
If instead you do the modification in clip space, you will be able to move the image up and down etc., so change the gl_Position value:
gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos + vec4(0.0, HEIGHT, 0.0, 0.0);
Note that you will then be in clip space, so check the dimensions of your viewport.
Finally, a better way to achieve the effect you want is to use a rotation matrix to flip and tilt the image.
You would then combine this matrix with the transform of your image (combine it with the model-view matrix).
You can choose to either multiply the model matrices by the view-projection on the CPU:
original_mdl_mat = ...;
rotated_mdl_mat = Matrix.CreateTranslation(0, -image.Height, 0) * Matrix.CreateRotationY(180) * original_mdl_mat;
mvm_original_mat = Projection * View * original_mdl_mat;
mvm_rotated_mat = Projection * View * rotated_mdl_mat;
or on the GPU:
uniform highp mat4 u_model;
uniform highp mat4 u_viewMatrix;
uniform highp mat4 u_projectionMatrix;
gl_Position = (u_projectionMatrix * u_viewMatrix * u_model) * pos;
The coordinates passed to texture2D always sample the source in the range [0, 1) on both axes, regardless of the original texture size and aspect ratio. So a knee-jerk answer is that the height of a texture is always 1.0.
If you want to know the height in pixels of the source image comprising the texture, you'll need to supply that yourself, probably as a uniform, since it isn't otherwise exposed.
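Circling back to the original shader, a hedged sketch: u_imageHeight is an assumed uniform holding the quad's height in the same model-space units as a_position (filled on the CPU from your image data):

uniform highp mat4 u_modelViewMatrix;
uniform highp mat4 u_projectionMatrix;
uniform highp float u_imageHeight; // assumed uniform: image height in model units
attribute highp vec4 a_position;
attribute lowp vec4 a_color;
attribute highp vec2 a_texcoord;
varying lowp vec4 v_color;
varying highp vec2 v_texCoord;

void main()
{
    highp vec4 pos = a_position;
    pos.y -= u_imageHeight; // shift the mirrored quad down by one image height
    gl_Position = (u_projectionMatrix * u_modelViewMatrix) * pos;
    v_color = a_color;
    v_texCoord = vec2(a_texcoord.x, 1.0 - a_texcoord.y); // mirrored V
}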
