I'm trying to get this tutorial to work, but I ran into two issues, one of which can be found here. The other one is the following.
For convenience, this is the code that is supposed to work, and here's a jsfiddle.
Vertex shader:
uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;
uniform vec3 normal;
varying vec3 vNormal;
void main() {
    vNormal = normal;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}
Fragment shader:
varying mediump vec3 vNormal;
void main() {
    mediump vec3 light = vec3(0.5, 0.2, 1.0);

    // ensure it's normalized
    light = normalize(light);

    // calculate the dot product of
    // the light to the vertex normal
    mediump float dProd = max(0.0, dot(vNormal, light));

    // feed into our frag colour
    gl_FragColor = vec4(dProd, // R
                        dProd, // G
                        dProd, // B
                        1.0);  // A
}
The values for normal in the vertex shader, or at least the values for vNormal in the fragment shader, seem to be 0. The sphere that is supposed to show up stays black. As soon as I change the values for gl_FragColor manually, the sphere changes color. Can anybody tell me why this is not working?
In your vertex shader, the vec3 normal should be an attribute (since each vertex has a normal), not a uniform:
attribute vec3 normal;
Here is the working version of your code.
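With that one change, the full vertex shader from the question becomes:

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;
attribute vec3 position;
attribute vec3 normal; // was: uniform vec3 normal;
varying vec3 vNormal;

void main() {
    vNormal = normal;
    gl_Position = projectionMatrix *
                  modelViewMatrix *
                  vec4(position, 1.0);
}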
I am clipping in the fragment shader (setting the transparency to 0 or 1) based on the cut-off vertex (v_cutPos) and the current vertex (v_currPos) that I get from the vertex shader. These two vertices are passed as world coordinates.
Now, the cut-off logic works fine, but the cut itself is not smooth (it has to follow a certain shape). When I pass the same vertices after converting them to clip space, the cut is much smoother (or finer).
Is there any explanation for this?
//fragment shader
precision highp float;

varying mediump vec4 v_color;
varying vec4 v_currPos;
varying vec4 v_cutPos;

/* returns 0 if pt is inside box otherwise 1 */
float insideCutArea(vec2 pt, vec2 cutPos)
{
    return float(pt.y > cutPos.y);
}

void main(void)
{
    float transparency = insideCutArea(v_currPos.xy, v_cutPos.xy);
    gl_FragColor = vec4(v_color.xyz, v_color.w * transparency);
}
//vertex shader
varying mediump vec4 v_color;
uniform vec3 cutPos;
varying vec4 v_currPos;
varying vec4 v_cutPos;

void main(void)
{
    // ... other transformations ...

    v_cutPos = myPMVMatrix * vec4(cutPos, 1.0);       // cut is not fine when not multiplying with the matrix
    gl_Position = myPMVMatrix * vec4(validVertex, 1.0);
    v_currPos = myPMVMatrix * vec4(validVertex, 1.0); // cut is not fine when not multiplying with the matrix
    v_color = color;
}
PS: This question was previously closed for lack of clarity. I have created it again, with code explaining what I have done.
In my code, I'm mixing two textures. I want to position a texture at any place on the plane, but when I add an offset to the texture UV coordinates the image just gets stretched.
offsetText1 = vec2(0.1,0.1);
vec4 displacement = texture2D(utexture1,vUv+offsetText1);
How do I move the texture to any position without stretching it?
VERTEX SHADER:
varying vec2 vUv;
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 offsetText1;
void main() {
    offsetText1 = vec2(0.1, 0.1);
    vUv = uv;

    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    vec4 displacement = texture2D(utexture1, vUv + offsetText1);
    vec4 displacement2 = texture2D(utexture2, vUv);

    modelPosition.z += displacement.r * 1.0;
    modelPosition.z += displacement2.r * 40.0;

    gl_Position = projectionMatrix * viewMatrix * modelPosition;
}
FRAGMENT SHADER:
#ifdef GL_ES
precision highp float;
#endif
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 vUv;
varying vec2 offsetText1;
void main() {
    vec3 c;
    vec4 Ca = texture2D(utexture1, vUv + offsetText1);
    vec4 Cb = texture2D(utexture2, vUv);
    c = Ca.rgb * Ca.a + Cb.rgb * Cb.a * (2.0 - Ca.a);
    gl_FragColor = vec4(c, 1.0);
}
[Image with offsetText1 = vec2(0.0, 0.0): no stretching]
[Image with offsetText1 = vec2(0.1, 0.1): the image is stretched from the top right corner]
That's the expected behavior of textures. They are defined over the range [0, 1], so when you sample beyond 1 or below 0, they "wrap". You need to tell the texture what to do when wrapping: should it repeat, stretch, or mirror?
You can establish this with the texture.wrapS and .wrapT properties, which accept one of three values:
THREE.RepeatWrapping
THREE.ClampToEdgeWrapping
THREE.MirroredRepeatWrapping
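If you'd rather handle wrapping in the shader instead, the rough GLSL equivalents of these modes are (a sketch, applied to the uv before sampling):

vec2 uvRepeat = fract(uv);                      // like THREE.RepeatWrapping
vec2 uvClamp  = clamp(uv, 0.0, 1.0);            // like THREE.ClampToEdgeWrapping
vec2 uvMirror = 1.0 - abs(mod(uv, 2.0) - 1.0);  // like THREE.MirroredRepeatWrapping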
If you want to just show white where the texture extends out of bounds, then you'd have to do that programmatically in your shader code, something like:

if (uv.x < 0.0 || uv.x > 1.0 || uv.y < 0.0 || uv.y > 1.0) {
    gl_FragColor = vec4(1.0); // white
}
My Phong fragment shader is not shading anything; it just makes everything black.
This is my fragment shader
precision mediump float;
varying vec3 vposition;
varying vec3 vnormal;
varying vec4 vcolor;
varying vec3 veyePos;
void main() {
    vec3 lightPos = vec3(0, 0, 0);
    vec4 s = normalize(vec4(lightPos, 1) - vec4(veyePos, 1));
    vec4 r = reflect(-s, vec4(vnormal, 1));
    vec4 v = normalize(-vec4(veyePos, 1));

    float spec = max(dot(v, r), 0.0);
    float diff = max(dot(vec4(vnormal, 1), s), 0.0);

    vec3 diffColor = diff * vec3(1, 0, 0);
    vec3 specColor = pow(spec, 3.0) * vec3(1, 1, 1);
    vec3 ambientColor = vec3(0.1, 0.1, 0.1);

    gl_FragColor = vec4(diffColor + 0.5 * specColor + ambientColor, 1);
}
This is my vertex shader
uniform mat4 uMVPMatrix;
uniform mat4 uMVMatrix;
uniform vec3 eyePos;
attribute vec4 aPosition;
attribute vec4 aColor;
attribute vec4 aNormal;
varying vec4 vcolor;
varying vec3 vposition;
varying vec3 vnormal;
varying vec3 veyePos;
void main() {
    mat4 normalMat = transpose(inverse(uMVMatrix));
    vec4 vertPos4 = uMVMatrix * vec4(vec3(aPosition), 1.0);

    vposition = vec3(vertPos4) / vertPos4.w;
    vcolor = aColor;
    veyePos = eyePos;
    vnormal = vec3(uMVMatrix * vec4(vec3(aNormal), 0.0));

    gl_Position = uMVPMatrix * aPosition;
}
MVMatrix is model-view matrix
MVPMatrix is model-view-projection matrix
First of all, your lighting equations are incorrect:
The vector s that you use to calculate the diffuse color should be a unit vector that originates at your vertex (vposition) and points toward your light, so it would be:
s = normalize(lightPos - vposition)
Also, lightPos should be given in camera space, not in world space (so you should multiply it by your MV matrix).
The vector r is the reflection of the incoming light about the normal. GLSL's reflect expects the incident vector to point toward the surface, so the -s is fine, but the normal should be in non-homogeneous coordinates, so it would be:
r = reflect(-s, vnormal)
And finally, v is the viewing ray (multiplied by -1), so it should be the vector that originates at vposition and points toward veyePos:
v = normalize(veyePos - vposition)
Also, in your vertex shader, veyePos (assuming it is the position of your camera) should not be a varying, because you don't want to interpolate it; pass it to the fragment shader as a uniform instead.
In your vertex shader you calculate normalMat, but you forgot to use it when calculating your normals in camera space. Also, normalMat should be a mat3, because it is the inverse transpose of the upper-left 3x3 block of your MV matrix.
To be efficient, you should calculate normalMat on the CPU and pass it as a uniform to your vertex shader.
Hope this helps.
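Putting those corrections together, the fragment shader would look something like this (a sketch; lightPosEye is a hypothetical uniform holding the light position already transformed into camera space):

precision mediump float;

uniform vec3 lightPosEye; // hypothetical uniform: light position in camera space

varying vec3 vposition;   // vertex position in camera space
varying vec3 vnormal;     // normal transformed by the mat3 normal matrix

void main() {
    vec3 n = normalize(vnormal);
    vec3 s = normalize(lightPosEye - vposition); // vertex-to-light
    vec3 v = normalize(-vposition);              // vertex-to-eye: the eye is at the origin in camera space
    vec3 r = reflect(-s, n);                     // reflection of the incoming light

    float diff = max(dot(n, s), 0.0);
    float spec = pow(max(dot(v, r), 0.0), 3.0);

    vec3 diffColor = diff * vec3(1.0, 0.0, 0.0);
    vec3 specColor = spec * vec3(1.0, 1.0, 1.0);
    vec3 ambientColor = vec3(0.1, 0.1, 0.1);

    gl_FragColor = vec4(diffColor + 0.5 * specColor + ambientColor, 1.0);
}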
I'm very new to OpenGL, GLSL and WebGL. I'm trying to run this sample code in a tool like http://shdr.bkcore.com/, but I can't get it to work.
Vertex Shader:
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
Fragment Shader:
precision highp float;
uniform float time;
uniform vec2 resolution;
varying vec3 fPosition;
varying vec3 fNormal;
uniform sampler2D tex0;
void main()
{
    float border = 0.01;
    float circle_radius = 0.5;
    vec4 circle_color = vec4(1.0, 1.0, 1.0, 1.0);
    vec2 circle_center = vec2(0.5, 0.5);

    vec2 uv = gl_TexCoord[0].xy;
    vec4 bkg_color = texture2D(tex0, uv * vec2(1.0, -1.0));

    // Offset uv with the center of the circle.
    uv -= circle_center;
    float dist = sqrt(dot(uv, uv));

    if ((dist > (circle_radius + border)) || (dist < (circle_radius - border)))
        gl_FragColor = bkg_color;
    else
        gl_FragColor = circle_color;
}
I figured that this code must be from an outdated version of the language, so I changed the vertex shader to:
precision highp float;
attribute vec2 position;
attribute vec3 normal;
varying vec2 TextCoord;
attribute vec2 textCoord;
uniform mat3 normalMatrix;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec3 fNormal;
varying vec3 fPosition;
void main()
{
    gl_Position = vec4(position, 0.0, 1.0);
    TextCoord = vec2(textCoord);
}
That seemed to fix the error messages about undeclared identifiers and not being able to "convert from 'float' to highp 4-component something-or-other", but I have no idea whether, functionally, this does the same thing as the original intended.
Also, when I convert to this version of the Vertex Shader I have no idea what I'm supposed to do with this line in the Fragment Shader:
vec2 uv = gl_TexCoord[0].xy;
How do I convert this line to fit in with the converted vertex shader and how can I be sure that the vertex shader is even converted correctly?
gl_TexCoord is from desktop OpenGL, and not part of OpenGL ES. You'll need to create a new user-defined vec2 varying to hold the coordinate value.
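A minimal sketch of that change, reusing the TextCoord varying from the converted vertex shader above:

// Vertex shader: forward the texture coordinate through a varying
attribute vec2 position;
attribute vec2 textCoord;
varying vec2 TextCoord;

void main()
{
    TextCoord = textCoord;
    gl_Position = vec4(position, 0.0, 1.0);
}

// Fragment shader: read the varying instead of gl_TexCoord[0]
precision highp float;
uniform sampler2D tex0;
varying vec2 TextCoord;

void main()
{
    vec2 uv = TextCoord; // replaces: vec2 uv = gl_TexCoord[0].xy;
    gl_FragColor = texture2D(tex0, uv * vec2(1.0, -1.0));
}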
I have the following vertex shader to rotate normals. Before I implemented it, I also passed the mesh's rotation matrix to calculate the normals, and the lighting was fine then.
#version 150
uniform mat4 projection;
uniform mat4 modelview;
in vec3 position;
in vec3 normal;
in vec2 texcoord;
out vec3 fposition;
out vec3 fnormal;
out vec2 ftexcoord;
void main()
{
    mat4 mvp = projection * modelview;

    fposition = vec3(mvp * vec4(position, 1.0));
    fnormal = normalize(mat3(transpose(inverse(modelview))) * normal);
    ftexcoord = texcoord;

    gl_Position = mvp * vec4(position, 1.0);
}
But with this shader, the lighting computed in the fragment shader turns with the camera. I haven't changed the fragment shader, so the issue should be in the code above.
What am I doing wrong in computing the normals?
The steps you use to create the normal matrix might be out of order.
Try:
fnormal = normalize(transpose(inverse(mat3(modelview))) * normal);
Edit:
Since you are inverting the mat4, the translation values (which get truncated when a mat4 is converted to a mat3) are probably affecting the calculation of the inverse matrix.
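Applied to the vertex shader above, only the fnormal line changes; a minimal sketch of main() with the fix in place:

void main()
{
    mat4 mvp = projection * modelview;

    fposition = vec3(mvp * vec4(position, 1.0));
    // cast to mat3 first, so the mat4's translation column never enters the inverse
    fnormal = normalize(transpose(inverse(mat3(modelview))) * normal);
    ftexcoord = texcoord;

    gl_Position = mvp * vec4(position, 1.0);
}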