Drawing a circle in fragment shader - filter

I am a complete noob when it comes to creating shaders. Or, better said, I only learned about them yesterday.
I am trying to create a really simple circle. I thought I had finally figured it out, but it turns out to be too large. It should match the size of the DisplayObject the filter is applied to.
The fragment shader:
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);
void main() {
vec2 uv = vTextureCoord.xy / resolution.xy;
uv -= 0.5;
uv.x *= resolution.x / resolution.y;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.0),1.0);
}
Example using Pixi.js:
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
vTextureCoord = aTextureCoord;
}
`;
var fragShader = `
precision mediump float;
varying vec2 vTextureCoord;
vec2 resolution = vec2(1.0, 1.0);
void main() {
vec2 uv = vTextureCoord.xy / resolution.xy;
uv -= 0.5;
uv.x *= resolution.x / resolution.y;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.),1.0);
}
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>

Pixi.js's vTextureCoord does not go from 0 to 1.
From the docs
V4 filters differ from V3. You can't just add in the shader and assume that texture coordinates are in the [0,1] range.
...
Note: vTextureCoord multiplied by filterArea.xy is the real size of bounding box.
If you want to get the pixel coordinates, use uniform filterArea, it will be passed to the filter automatically.
uniform vec4 filterArea;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
They are in pixels. That won't work if we want something like "fill the ellipse into a bounding box". So, let's pass the dimensions too! PIXI doesn't do it automatically; we need a manual fix:
filter.apply = function(filterManager, input, output)
{
this.uniforms.dimensions[0] = input.sourceFrame.width
this.uniforms.dimensions[1] = input.sourceFrame.height
// draw the filter...
filterManager.applyFilter(this, input, output);
}
Let's combine it in the shader!
uniform vec4 filterArea;
uniform vec2 dimensions;
...
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 normalizedCoord = pixelCoord / dimensions;
Here's your snippet updated.
var app = new PIXI.Application();
document.body.appendChild(app.view);
var background = PIXI.Sprite.fromImage("required/assets/bkg-grass.jpg");
background.width = 200;
background.height = 200;
app.stage.addChild(background);
var vertexShader = `
attribute vec2 aVertexPosition;
attribute vec2 aTextureCoord;
uniform mat3 projectionMatrix;
varying vec2 vTextureCoord;
void main(void)
{
gl_Position = vec4((projectionMatrix * vec3(aVertexPosition, 1.0)).xy, 0.0, 1.0);
vTextureCoord = aTextureCoord;
}
`;
var fragShader = `
precision mediump float;
varying vec2 vTextureCoord;
uniform vec2 dimensions;
uniform vec4 filterArea;
void main() {
vec2 pixelCoord = vTextureCoord * filterArea.xy;
vec2 uv = pixelCoord / dimensions;
uv -= 0.5;
float r = 0.5;
float d = length(uv);
float c = smoothstep(d,d+0.003,r);
gl_FragColor = vec4(vec3(c,0.5,0.),1.0);
}
`;
var filter = new PIXI.Filter(vertexShader, fragShader);
filter.apply = function(filterManager, input, output)
{
this.uniforms.dimensions[0] = input.sourceFrame.width
this.uniforms.dimensions[1] = input.sourceFrame.height
// draw the filter...
filterManager.applyFilter(this, input, output);
}
filter.padding = 0;
background.filters = [filter];
body { margin: 0; }
<script src="https://cdnjs.cloudflare.com/ajax/libs/pixi.js/4.5.2/pixi.js"></script>
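A side note on the circle math itself: smoothstep(d, d + 0.003, r) uses the distance as the edges and the radius as the value, so it evaluates to 1.0 inside the circle and 0.0 outside, with a thin antialiased rim in between. A sketch of the more conventional equivalent form, in case that reads more clearly:
float r = 0.5;                              // circle radius in normalized space
float d = length(uv);                       // distance of this fragment from the center
float edge = 0.003;                         // width of the antialiased rim
float c = 1.0 - smoothstep(r - edge, r, d); // 1.0 inside the circle, 0.0 outside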

It seems you've stumbled upon weird floating point precision problems: the texture coordinates (vTextureCoord) in your fragment shader aren't strictly in the (0, 1) range. Here's what I got when I added the line gl_FragColor = vec4(vTextureCoord, 0, 1):
It seems good, but if we inspect it closely, the lower-right pixel should be (1, 1, 0), but it isn't:
The problem goes away if, instead of setting the size to 500 by 500, we use a power-of-two size (say, 512 by 512):
The other possible way to mitigate the problem would be to circumvent Pixi's code that computes the projection matrix and provide your own that transforms a smaller quad into the desired screen position.
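If resizing to a power of two is not an option, another workaround (just a sketch on my part, not something Pixi does for you) is to defensively clamp the coordinates in the fragment shader before using them; this only hides the precision drift rather than fixing its cause:
// clamp slightly out-of-range coordinates back into the [0, 1] range
vec2 uv = clamp(vTextureCoord, 0.0, 1.0);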

Related

Color a set of fragments in fragment shader

My goal is to color a set of fragments determined by the interpolation between two points. Below is the code I've written; it didn't work.
I also added some comments. Probably there are some mistakes I made or something that I misunderstood.
Thank you for your help.
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
void main(){
vec2 position = gl_FragCoord.xy / u_resolution;
vec4 color = vec4(0.97, 0.1, 0.53, 1.0);
// center (hopefully)
vec2 P1 = vec2(0.0,0.0);
// top right
vec2 P2 = vec2(1.0,1.0);
// generate 100 points between P1...P2
for(float i = 0.0; i < 1.0; i+=0.01) {
float lerpX = mix(P1.x, P2.x, i);
float lerpY = mix(P1.y, P2.y, i);
vec2 interpolatedPoint = vec2(lerpX, lerpY);
// check if current fragment is one of the
// interpolated points and color it
if (position.x == interpolatedPoint.x) {
gl_FragColor = color;
} else {
discard;
}
}
}
WORKING SOLUTION
#ifdef GL_ES
precision highp float;
#endif
uniform vec2 u_resolution;
void main(){
vec2 position = gl_FragCoord.xy / u_resolution;
vec4 color = vec4(0.97, 0.1, 0.53, 1.0);
// center (hopefully)
vec2 P1 = vec2(0.0,0.0);
// top right
vec2 P2 = vec2(1.0,1.0);
// generate 100 points between P1...P2
for(float i = 0.0; i < 1.0; i+=0.01) {
float lerpX = mix(P1.x, P2.x, i);
float lerpY = mix(P1.y, P2.y, i);
vec2 interpolatedPoint = vec2(lerpX, lerpY);
// check if current fragment is one of the
// interpolated points and color it
if (distance(position, interpolatedPoint) <= 0.01) {
gl_FragColor = color;
}
}
}
position.x == interpolatedPoint.x is a floating point comparison and is almost never true, so your code discards all fragments. Implement a proper floating point comparison instead: calculate the distance between the two points (or the absolute value of the difference between the two values) and compare it with an epsilon:
if (distance(position, interpolatedPoint) <= 0.01) {
gl_FragColor = color;
}
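The same idea written with an explicit epsilon against the absolute difference, if you only want to compare the x coordinates as in the original code (the EPSILON value here is an arbitrary threshold to tune):
const float EPSILON = 0.005; // tolerance for the floating point comparison
if (abs(position.x - interpolatedPoint.x) <= EPSILON) {
gl_FragColor = color;
}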

Change color in GLSL shaders

I am trying to modify the color of a 3D model (three.js) which uses GLSL shaders (frag and vert files). To be honest, I am not experienced at all with shader languages.
.frag file
precision highp float;
uniform sampler2D uTexture;
varying vec2 vPUv;
varying vec2 vUv;
void main() {
vec4 color = vec4(0.0);
vec2 uv = vUv;
vec2 puv = vPUv;
// pixel color
vec4 colA = texture2D(uTexture, puv);
// greyscale
float grey = colA.r * 0.31 + colA.g * 0.71 + colA.b * 0.07;
vec4 colB = vec4(grey, grey, grey, 1.0);
// circle
float border = 0.3;
float radius = 0.5;
float dist = radius - distance(uv, vec2(0.5));
float t = smoothstep(0.0, border, dist);
// final color
color = colB;
color.a = t;
gl_FragColor = color;
}
.vert file
precision highp float;
attribute float pindex;
attribute vec3 position;
attribute vec3 offset;
attribute vec2 uv;
attribute float angle;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform float uTime;
uniform float uRandom;
uniform float uDepth;
uniform float uSize;
uniform vec2 uTextureSize;
uniform sampler2D uTexture;
uniform sampler2D uTouch;
varying vec2 vPUv;
varying vec2 vUv;
#pragma glslify: snoise2 = require(glsl-noise/simplex/2d)
float random(float n) {
return fract(sin(n) * 43758.5453123);
}
void main() {
vUv = uv;
// particle uv
vec2 puv = offset.xy / uTextureSize;
vPUv = puv;
// pixel color
vec4 colA = texture2D(uTexture, puv);
float grey = colA.r * 0.21 + colA.g * 0.71 + colA.b * 0.07;
// displacement
vec3 displaced = offset;
// randomise
displaced.xy += vec2(random(pindex) - 0.5, random(offset.x + pindex) - 0.5) * uRandom;
float rndz = (random(pindex) + snoise_1_2(vec2(pindex * 0.1, uTime * 0.1)));
displaced.z += rndz * (random(pindex) * 2.0 * uDepth);
// center
displaced.xy -= uTextureSize * 0.5;
// touch
float t = texture2D(uTouch, puv).r;
displaced.z += t * 20.0 * rndz;
displaced.x += cos(angle) * t * 20.0 * rndz;
displaced.y += sin(angle) * t * 20.0 * rndz;
// particle size
float psize = (snoise_1_2(vec2(uTime, pindex) * 0.5) + 2.0);
psize *= max(grey, 0.2);
psize *= uSize;
// final position
vec4 mvPosition = modelViewMatrix * vec4(displaced, 1.0);
mvPosition.xyz += position * psize;
vec4 finalPosition = projectionMatrix * mvPosition;
gl_Position = finalPosition;
}
This creates particles ranging from a very dark grey tone up to white, as you can see in the example. I would like to change only the color of the very dark tones to match the background color. I've played with some color values I found here, but unfortunately the results are not what I expected.
Maybe somebody has a quick hint for me?
I would like to change only the color of the very dark tones to match the background color.
Currently you create a grayscale color. What you actually want is for the dark tones to match the background and the light tones to stay white. Hence you want a gradient from the background color to white.
Either blend a white color with the background, depending on the grayscale:
void main()
{
vec4 color = vec4(0.0);
// [...]
// final color
color.rgb = vec3(1.0);
color.a = t * grey;
gl_FragColor = color;
}
Or mix the background color and white depending on the grayscale; for this you have to know the background color in the fragment shader:
void main()
{
vec4 color = vec4(0.0);
vec3 backgroundColor = vec3(42.0, 67.0, 101.0) / 255.0;
// [...]
// final color
color.rgb = mix(backgroundColor.rgb, vec3(1.0), grey);
color.a = t;
gl_FragColor = color;
}
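If the background color can change at runtime, it is probably cleaner to pass it to the shader as a uniform instead of hard-coding it. A sketch, assuming a hypothetical uBackgroundColor uniform that you set from the JavaScript side:
uniform vec3 uBackgroundColor; // hypothetical uniform, filled in by the application
...
// final color
color.rgb = mix(uBackgroundColor, vec3(1.0), grey);
color.a = t;
gl_FragColor = color;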

Slow memory climb until crash in the GPU

I'm displaying a grid of particle clouds using shaders. Every time a user clicks a cloud, that cloud disappears and a new one takes its place. The curious thing is that the memory usage in the GPU climbs every time a new cloud replaces an old one - regardless of whether that new cloud is larger or smaller (and the buffer sizes always stay the same - the unused points are simply displayed offscreen with no color). After less than 10 clicks the GPU maxes out and crashes.
Here is my physics shader where the new positions are updated - I pass in the new position values for the new cloud by updating certain values in the tOffsets texture. After that are my two visual effects shaders (vert and frag). Can you see my efficiency issue? Or could this be a garbage collection matter? Thanks in advance!
Physics Shader (frag only):
// Physics shader: This shader handles the calculations to move the various points. The position values are rendered out to a texture that is passed to the next pair of shaders that add the sprites and opacity.
// the tPositions sampler is added to this shader by Three.js's GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform float uTime;
uniform float uXOffW;
...noise functions omitted for brevity...
void main() {
vec2 uv = gl_FragCoord.xy / resolution.xy;
vec4 offsets = texture2D( tOffsets, uv ).xyzw;
float alphaMass = offsets.z;
float cellIndex = offsets.w;
if (cellIndex >= 0.0) { // this point will be rendered on screen
float damping = 0.98;
float texelSize = 1.0 / uPerMotifBufferDimension;
vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferDimension)*texelSize, floor(cellIndex / uPerMotifBufferDimension)*texelSize );
perMotifUV += vec2(0.5*texelSize);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float swapState = selectionFactors.x;
vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
vec2 noiseSeed = gridPosition.zw;
vec4 nowPos;
vec2 velocity;
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
if ( swapState == 0.0 ) { // if no new position values are ready to be swapped in for this point
nowPos = texture2D( tPositions, uv ).xyzw;
velocity = vec2(nowPos.z, nowPos.w);
} else { // if swapState == 1, this means new position values are ready to be swapped in for this point
nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
velocity = vec2(0.0, 0.0);
}
...physics calculations omitted for brevity...
vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);
// Write new position out to a texture for processing in the visual effects shader
gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
} else { // this point will not be rendered on screen
// Write new position out off screen (all -1 cellIndexes have off-screen offset values)
gl_FragColor = vec4( offsets.x, offsets.y, 0.0, 0.0);
}
}
From the physics shader the tPositions texture with the points' new movements is rendered out and passed to the visual effects shaders:
Visual Effects Shader (vert):
uniform sampler2D tPositions; // passed in from the Physics Shader
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform sampler2D uTextureSheet;
uniform float uPointSize;
uniform float uTextureCoordSizeX;
uniform float uTextureCoordSizeY;
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
...omitted noise functions for brevity...
void main() {
vec4 tmpPos = texture2D( tPositions, position.xy );
vec2 pos = tmpPos.xy;
vec2 vel = tmpPos.zw;
vCellIndex = aCellIndex;
if (vCellIndex >= 0.0) { // this point will be rendered onscreen
float texelSize = 1.0 / uPerMotifBufferDimension;
vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferDimension)*texelSize, floor(aCellIndex / uPerMotifBufferDimension)*texelSize );
perMotifUV += vec2(0.5*texelSize);
vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
float aSelectedMotif = selectionFactors.x;
float aColor = selectionFactors.y;
float fadeFactor = selectionFactors.z;
vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
vAlpha = aAlpha * fadeFactor;
vColor = vec3( 1.0, aColor, 1.0 );
gl_PointSize = uPointSize;
} else { // this point will not be rendered onscreen
vAlpha = 0.0;
vColor = vec3(0.0, 0.0, 0.0);
gl_PointSize = 0.0;
}
gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 );
}
Visual Effects Shader (frag):
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
void main() {
gl_FragColor = vec4( vColor, vAlpha );
if (vCellIndex >= 0.0) { // this point will be rendered onscreen, so add the texture
vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
}
}
Thanks to @Blindman67's comment above, I sorted out the problem. It had nothing to do with the shaders. In the JavaScript (Three.js) I needed to signal the GPU to delete the old textures before adding the updated ones.
Every time I update a texture (most of mine are DataTextures), I need to call dispose() on the existing texture before creating and updating the new one, like so:
var textureHandle; // holds a reference to the current texture uniform value
textureHandle.dispose(); // ** deallocates GPU memory **
textureHandle = new THREE.DataTexture( textureData, dimension, dimension, THREE.RGBAFormat, THREE.FloatType );
textureHandle.needsUpdate = true;
uniforms.textureHandle.value = textureHandle;

GLSL Check texture alpha between 2 vectors

I'm trying to learn how to make shaders, and a little while ago I posted a question here: GLSL Shader - Shadow between 2 textures on a plane
The answer gave me the right direction to take, but I'm having trouble checking whether there is a fragment that is not transparent between the current fragment and the light position.
So here is the code :
Vertex Shader :
attribute vec3 position;
attribute vec3 normal;
attribute vec2 uv;
varying vec2 uvVarying;
varying vec3 normalVarying;
varying vec3 posVarying;
uniform vec4 uvBounds0;
uniform mat4 agk_World;
uniform mat4 agk_ViewProj;
uniform mat3 agk_WorldNormal;
void main()
{
vec4 pos = agk_World * vec4(position,1);
gl_Position = agk_ViewProj * pos;
vec3 norm = agk_WorldNormal * normal;
posVarying = pos.xyz;
normalVarying = norm;
uvVarying = uv * uvBounds0.xy + uvBounds0.zw;
}
And the fragment shader :
#ifdef GL_ES
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
#endif
uniform sampler2D texture0;
uniform sampler2D texture1;
varying vec2 uvVarying;
varying vec3 normalVarying;
varying vec3 posVarying;
uniform vec4 uvBounds0;
uniform vec2 playerPos;
uniform vec2 agk_resolution;
uniform vec4 agk_PLightPos;
uniform vec4 agk_PLightColor;
uniform vec4 agk_ObjColor;
void main (void)
{
vec4 lightPos = agk_PLightPos;
lightPos.x = playerPos.x;
lightPos.y = -playerPos.y;
vec3 dir = vec3(lightPos.x - posVarying.x, lightPos.y - posVarying.y, lightPos.z - posVarying.z);
vec3 norm = normalize(normalVarying);
float atten = dot(dir,dir);
atten = clamp(lightPos.w/atten,0.0,1.0);
float intensity = dot(normalize(dir),norm);
intensity = clamp(intensity,0.0,1.0);
vec3 lightColor = agk_PLightColor.rgb * intensity * atten;
vec3 shadowColor = agk_PLightColor.rgb * 0;
bool inTheShadow = false;
if (intensity * atten > 0.05) {
float distanceToLight = length(posVarying.xy - lightPos.xy);
for (float i = distanceToLight; i > 0.0; i -= 0.1) {
vec2 uvShadow = ???
if (texture2D(texture0, uvShadow).a > 0) {
inTheShadow = true;
break;
}
}
}
if (texture2D(texture0, uvVarying).a == 0) {
if (inTheShadow == true) {
gl_FragColor = texture2D(texture1, uvVarying) * vec4(shadowColor, 1) * agk_ObjColor;
}
else {
gl_FragColor = texture2D(texture1, uvVarying) * vec4(lightColor, 1) * agk_ObjColor;
}
}
else {
gl_FragColor = texture2D(texture0, uvVarying) * agk_ObjColor;
}
}
So, this is the part where I'm having trouble:
bool inTheShadow = false;
if (intensity * atten > 0.05) {
float distanceToLight = length(posVarying.xy - lightPos.xy);
for (float i = distanceToLight; i > 0.0; i -= 0.1) {
vec2 uvShadow = ???
if (texture2D(texture0, uvShadow).a > 0) {
inTheShadow = true;
break;
}
}
}
I first check if I'm in the light radius with intensity * atten > 0.05
Then I get the distance from the current fragment to the light position.
And then I make a for loop to check each fragment between the current fragment and the light position. I tried some calculations to get the coordinates of each fragment along the way, but with no success.
So, any idea how I can calculate uvShadow inside my loop?
I hope I'm using the right variables too, because in the last part of my code, where I use gl_FragColor, I'm using uvVarying to get the current fragment (if I'm not mistaken). But to get the light distance, I had to calculate the length between posVarying and lightPos, not between uvVarying and lightPos (I made a test where the further I was from the light, the more red it became: with posVarying it drew a gradient circle around my player (lightPos), but with uvVarying the circle was a single color that only became more or less red as I moved the player towards the center of the screen).
Thanks and best regards,
Max
When you access a texture through texture2D() you use normalised coordinates, i.e. numbers that go from (0.0, 0.0) to (1.0, 1.0). So you need to convert your world positions into this normalised space, something like:
vec2 uvShadow = posVarying.xy + ((distanceToLight / 0.1) * i * (posVarying.xy - lightPos.xy));
// Take uvShadow from world space to view space, this is -1.0 to 1.0
uvShadow *= mat2(inverse(agk_View)); // This could be optimized if you are using orthographic projection
// Now take it to texture space
uvShadow *= 0.5;
uvShadow += 0.5;
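Once both endpoints are in that normalised texture space, the loop itself can simply step between them with mix(). A minimal sketch, assuming fragUV and lightUV are placeholders for the current fragment and the light position already converted into the [0, 1] range the same way (they are not variables from the original shader):
const int STEPS = 32; // number of samples taken along the segment
vec2 fragUV = uvVarying; // current fragment in texture space
vec2 lightUV = vec2(0.5); // placeholder: the light position converted to texture space
for (int i = 1; i < STEPS; i++) {
vec2 sampleUV = mix(fragUV, lightUV, float(i) / float(STEPS));
if (texture2D(texture0, sampleUV).a > 0.0) {
inTheShadow = true;
break;
}
}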

GLSL 4.0 mesh rotation messes up normal? help please

Could someone please help me with my OpenGL GLSL 4.0 shader? The problem I am having is that when a 3D model (obj file) is loaded and rendered, everything works well (lighting is good, the mesh vertices display great) except for the normals of the mesh. Specifically, when the obj file is rotated in its local/model space, the normals do not appear to light the mesh in accordance with the light position and its current orientation (I hope that makes some sense).
I believe the problem is with my normal matrix.
Problem: when my 3D mesh rotates, the lighting is messed up (it does not reflect the light position).
Any help would be much appreciated. Thanks in advance.
VertexShader
#version 400
//Handle translation, projection, etc
struct Matrix {
mat4 mvp;
mat4 mv;
mat4 view;
mat4 projection;
};
struct Light {
vec3 position;
vec3 color;
vec3 direction;
float intensity;
vec3 ambient;
};
//---------------------------------------------------
//INPUT
//---------------------------------------------------
//Per-Vertex Data
//---------------------------------------------------
layout (location = 0) in vec3 inputPosition;
layout (location = 1) in vec3 inputNormal;
layout (location = 2) in vec3 inputTexture;
//--------------------------------------------
// UNIFORM:INPUT Supplied Data from C++ application
//--------------------------------------------
uniform Matrix matrix;
uniform Light light;
uniform vec3 cameraPosition;
out vec3 fragmentNormal;
out vec3 cameraVector;
out vec3 lightVector;
out vec2 texCoord;
void main() {
// output the transformed vertex
gl_Position = matrix.mvp * vec4(inputPosition,1.0);
//When using, (vec3,0.0)
mat3 Normal_Matrix = mat3( transpose(inverse(matrix.mv)) );
// set the normal for the fragment shader and
// the vector from the vertex to the camera
vec3 vertex = (matrix.mv * vec4(inputPosition,1.0)).xyz;
//----------------------------------------------------------
//The problem (i think) is here
//----------------------------------------------------------
fragmentNormal = normalize(Normal_Matrix * inputNormal);
cameraVector = (matrix.mv *vec4(cameraPosition,1.0)).xyz - vertex ;
lightVector = vertex - (matrix.mv * vec4(light.position,1.0)).xyz;
//store the texture data
texCoord = inputTexture.xy;
}
Fragment Shader
#version 400
const int NUM_LIGHTS = 3;
const float MAX_DIST = 15.0;
const float MAX_DIST_SQUARED = MAX_DIST * MAX_DIST;
const vec3 AMBIENT = vec3(0.152, 0.152, 0.152); //0.2 for all component is a good dark value
struct Light {
vec3 position;
vec3 color;
vec3 direction;
float intensity;
vec3 ambient;
};
//the image
uniform sampler2D textureSampler;
uniform Light light;
//in: used interpolation, must define both in vertex&fragment shader;
out vec4 finalOutput;
in vec2 texCoord; //Texture Coordinate
//in: used interpolation, must define both in vertex&fragment shader;
in vec3 fragmentNormal;
in vec3 cameraVector;
in vec3 lightVector;
void main() {
vec4 texColor = texture2D(textureSampler, texCoord);
// initialize diffuse/specular lighting
vec3 diffuse = vec3(0.005f, 0.005f, 0.005f);
vec3 specular = vec3(0.00f, 0.00f, 0.00f);
// normalize the fragment normal and camera direction
vec3 normal = normalize(fragmentNormal);
vec3 cameraDir = normalize(cameraVector);
// loop through each light
// calculate distance between 0.0 and 1.0
float dist = min(dot(lightVector, lightVector), MAX_DIST_SQUARED) / MAX_DIST_SQUARED;
float distFactor = 1.0 - dist;
// diffuse
vec3 lightDir = normalize(lightVector);
float diffuseDot = dot(normal, lightDir);
diffuse += light.color * clamp(diffuseDot, 0.0, 1.0) * distFactor;
// specular
vec3 halfAngle = normalize(cameraDir + lightDir);
vec3 specularColor = min(light.color + 0.8, 1.0);
float specularDot = dot(normal, halfAngle);
specular += specularColor * pow(clamp(specularDot, 0.0, 1.0), 16.0) * distFactor;
vec4 sample0 = vec4(1.0, 1.0, 1.0, 1.0);
vec3 ambDifCombo = (diffuse + AMBIENT);
//calculate the final color
vec3 color = clamp(sample0.rgb * ambDifCombo + specular, 0.0, 1.0);
finalOutput = vec4(color * vec3(texColor), sample0.a);
}
You should not transform your light position. Your light should remain stationary while your mesh rotates. Instead of this:
lightVector = vertex - (matrix.mv * vec4(light.position,1.0)).xyz;
Do this:
lightVector = vertex - light.position;
I would also try not transforming your camera position either.
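If the rest of your lighting is computed in view space (your vertex variable comes from matrix.mv), an alternative worth trying, sketched here under the assumption that light.position is given in world coordinates, is to transform the light by the view matrix only, so it stays fixed in the world while the model matrix rotates the mesh:
// transform the light with the view matrix only (not the model-view matrix),
// so that vertex and light end up in the same (view) space
vec3 lightViewPos = (matrix.view * vec4(light.position, 1.0)).xyz;
lightVector = vertex - lightViewPos;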
