I have a displacement map on a 512px × 512px plane (100 × 100 segments). As the image for the displacement map scrolls left, the vertices snap to their height positions instead of blending smoothly. I have been looking at the mix() and smoothstep() functions to morph the normals to their positions over time, but I'm having a hard time implementing it.
uniform sampler2D heightText; // greyscale 512x512 height texture
uniform float displace;
uniform float time;
uniform float speed;

varying vec2 vUV;
varying float scaleDisplace;

void main() {
    vUV = uv;
    vec2 uvOffset = vUV + vec2(0.1, 0.1) * time; // animates the offset
    vec2 uvCo = vUV + vec2(0.0, 0.0);
    vec2 texSize = vec2(-0.8, 0.8); // scales the image larger
    vec4 data = texture2D(heightText, uvOffset + fract(uvCo) * texSize.x);
    scaleDisplace = data.r;
    //vec3 possy = normal * displace * scaleDisplace;
    vec3 morphPossy = mix(position, normal * displace, scaleDisplace) * time;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(morphPossy, 1.0);
}
I'm using Three.js r71 with the vertex and fragment (pixel) shaders above.
Illustration of the intended effect:
Any help appreciated ...
Since you're using a texture as a height map, you should make sure that:
heightText.magFilter = THREE.LinearFilter; // This is the default value.
so that the values you sample are interpolated smoothly from texel to texel.
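For reference, a minimal setup sketch on the JavaScript side (assuming the height texture has already been loaded into a variable, here called heightTexture as a placeholder):

// Hedged sketch: filtering and wrapping for a scrolling height map.
heightTexture.magFilter = THREE.LinearFilter; // the default; interpolates between texels
heightTexture.minFilter = THREE.LinearFilter;
heightTexture.wrapS = THREE.RepeatWrapping;   // lets the animated UV offset scroll past 1.0
heightTexture.wrapT = THREE.RepeatWrapping;
uniforms.heightText.value = heightTexture;    // uniform name from the shader above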
I'm working on a Three.js project, and I'm trying to convert a square with a square texture inside into a trapezoid.
I managed to create the shape, but although the texture fits/covers the shape, it does so with an undesired distortion.
I'm using a PlaneBufferGeometry with a ShaderMaterial, and I'm trying to achieve the distortion in the shader part (although it would be OK if it were done on the Three.js geometry side).
This is my vertex shader:
uniform sampler2D uTexture;
varying vec2 vUv;

void main() {
    float scaleTOP = 0.5;
    float scaleBOTTOM = 1.0;
    float scaleLEFT = 1.0;
    float scaleRIGHT = 1.0;

    float scaleX = mix(scaleBOTTOM, scaleTOP, uv.y);
    float posX = position.x * scaleX;

    float scaleY = mix(scaleLEFT, scaleRIGHT, uv.x);
    float posY = position.y * scaleY;

    // Note: vec3 needs three components; vec3(posX, posY) would not compile.
    vec3 finalPosition = vec3(posX, posY, position.z);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(finalPosition, 1.0);

    // Varyings:
    vUv = uv;
}
And this is my fragment shader:
uniform sampler2D uTexture;
varying vec2 vUv;

void main() {
    vec4 tex = texture2D(uTexture, vUv);
    gl_FragColor = vec4(tex.r, tex.g, tex.b, 1.0);
}
Unfortunately, I managed to distort the square into the trapezoid, but the texture is not distorted in the way I want. See the figure for the intended result:
Figure:
My vertex and fragment shaders were OK.
The problem was that the Three.js geometry I was using had only two polygons. I was using:
this.bg_geometry = new THREE.PlaneBufferGeometry(width, height, 1, 1)
That's it... just one segment, which created only the two triangles that can actually be seen in the figure I posted.
I changed the geometry to:
this.bg_geometry = new THREE.PlaneBufferGeometry(width, height, 100, 100)
...and now the texture is distorted as desired.
Anyway, many thanks to @prisoner849, who put me on the right track by suggesting I pass 4 points as a uniform uPoints, in the order TL, TR, BL, BR, to set the shape of the plane.
My vertex shader now looks like this:
uniform vec3 uPoints[4];
varying vec2 vUv;

void main() {
    vec3 baselineBottom = (uPoints[3] - uPoints[2]) * uv.x + uPoints[2];
    vec3 baselineTop = (uPoints[1] - uPoints[0]) * uv.x + uPoints[0];
    vec3 finalPosition = (baselineTop - baselineBottom) * uv.y + baselineBottom;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(finalPosition, 1.0);
    vUv = uv;
}
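For completeness, a sketch of how the uPoints uniform might be supplied from JavaScript (the corner values are placeholders):

// Hedged sketch: four corners (TL, TR, BL, BR) defining the trapezoid.
var uniforms = {
    uPoints: { value: [
        new THREE.Vector3(-1.0,  1.0, 0.0), // TL
        new THREE.Vector3( 0.5,  1.0, 0.0), // TR (pulled inward to form the trapezoid)
        new THREE.Vector3(-1.0, -1.0, 0.0), // BL
        new THREE.Vector3( 1.0, -1.0, 0.0)  // BR
    ] }
};
var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShaderSource,   // the shader above
    fragmentShader: fragmentShaderSource
});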
In my code, I'm mixing two textures. I want to position a texture at any place on the plane, but when I add an offset to the texture's UV coordinates, the image just gets stretched.
offsetText1 = vec2(0.1,0.1);
vec4 displacement = texture2D(utexture1,vUv+offsetText1);
How do I move the texture to any position without stretching it?
VERTEX SHADER:
varying vec2 vUv;
uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 offsetText1;

void main() {
    offsetText1 = vec2(0.1, 0.1);
    vUv = uv;
    vec4 modelPosition = modelMatrix * vec4(position, 1.0);
    vec4 displacement = texture2D(utexture1, vUv + offsetText1);
    vec4 displacement2 = texture2D(utexture2, vUv);
    modelPosition.z += displacement.r * 1.0;
    modelPosition.z += displacement2.r * 40.0;
    gl_Position = projectionMatrix * viewMatrix * modelPosition;
}
FRAGMENT SHADER:
#ifdef GL_ES
precision highp float;
#endif

uniform sampler2D utexture1;
uniform sampler2D utexture2;
varying vec2 vUv;
varying vec2 offsetText1;

void main() {
    vec3 c;
    vec4 Ca = texture2D(utexture1, vUv + offsetText1);
    vec4 Cb = texture2D(utexture2, vUv);
    c = Ca.rgb * Ca.a + Cb.rgb * Cb.a * (2.0 - Ca.a);
    gl_FragColor = vec4(c, 1.0);
}
Image with offsetText1 = vec2(0.0, 0.0) (no stretching):
Image with offsetText1 = vec2(0.1, 0.1) (the image is stretched from the top-right corner):
That's the expected behavior of textures. Texture coordinates span the range [0, 1], so when you sample beyond 1 or below 0, the texture "wraps". You need to tell it what to do when wrapping: should it repeat, stretch, or mirror?
You can establish this with the texture.wrapS and .wrapT properties, which accept one of three values (a short setup example follows the list):
THREE.RepeatWrapping
THREE.ClampToEdgeWrapping
THREE.MirroredRepeatWrapping
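A minimal setup sketch (assuming texture is the texture bound to the utexture1 uniform):

// Hedged sketch: make out-of-range UVs tile instead of stretching the edge pixels.
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.needsUpdate = true; // needed if the texture was already uploaded to the GPU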
If you want to just show white where the texture samples out of bounds, you'd have to do that programmatically in your shader code. In GLSL that might look like this:

vec2 uvo = vUv + offsetText1;
if (any(lessThan(uvo, vec2(0.0))) || any(greaterThan(uvo, vec2(1.0)))) {
    gl_FragColor = vec4(1.0); // white outside the [0, 1] range
}
I'm struggling with handling coordinates in my fragment shader.
In brief, I just want to draw a circle with the fragment shader using an (x, y, z) position in world space. But because of the camera position and the z of the circle's center, I cannot get the correct projected x and y coordinates.
Let's suppose my camera is placed at (0, 0, 1000), with a perspective projection:
fov: 45 deg
aspect: screen_width / screen_height
nearZ: 1
farZ: 10000
The camera looks at (0, 0, 0). In this case with Three.js, I can get the camera's projectionMatrix and modelViewMatrix (e.g. PerspectiveCamera.projectionMatrix), and by default I can also use viewMatrix in the fragment shader of a ShaderMaterial.
So in the fragment shader, to calculate the projected coordinate of a circle placed at (300, 300, -1000), I wrote my vertex and fragment shaders as below.
My vertex shader only passes projectionMatrix and modelViewMatrix along as P and MV.
// vertexShader
varying mat4 P;
varying mat4 MV;

void main() {
    P = projectionMatrix;
    MV = modelViewMatrix;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
And then, I just calculate x and y using P and MV like below.
// fragmentShader
varying mat4 P;
varying mat4 MV;

uniform float x;
uniform float y;
uniform float z;
uniform float r;
uniform vec2 u_resolution;

float circle(vec2 _st, vec2 _center, float _radius) {
    vec2 dist = _st - _center + u_resolution;
    return 1. - smoothstep(_radius - (_radius * 0.01),
                           _radius + (_radius * 0.01),
                           length(dist));
}

void main() {
    vec2 coord = (P * MV * vec4(x, y, z, 1.0)).xy;
    float point = circle(gl_FragCoord.xy, coord, r); // ignore r scaling
    gl_FragColor = vec4(vec3(point), point); // note: vec4(vec4(point), point) would not compile
}
But the result doesn't match what I expected, and I also found some weird behavior:
No matter what the z uniform is, there is no change at all.
The pixel ratio could be a factor (e.g. a Retina display has a pixel ratio of 2), but from my experiments it has nothing to do with this.
Did I make a mistake, or misunderstand something? (There may be a mistake in the circle function, but I don't think it causes the critical problem.)
Let's assume that x, y and z define the center of a circle in world space, and that you want to draw the circle in a plane parallel to the viewport, in a screen-space pass where you draw a quad over the entire viewport.
You have to transform the center of the circle from world-space coordinates to normalized device coordinates. The best solution would be to do this on the CPU and set a uniform with the result.
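On the JavaScript side, that might look like the sketch below (assuming camera is your PerspectiveCamera and circleMaterial is the ShaderMaterial; in that case cpt would be declared as a uniform vec3 rather than the varying used in the shader-based variant below):

// Hedged sketch: project the circle's world-space center to normalized device coordinates.
var center = new THREE.Vector3(300, 300, -1000);
var ndc = center.clone().project(camera); // model-view, projection and perspective divide in one step
circleMaterial.uniforms.cpt.value.copy(ndc); // x and y now lie in [-1, 1]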
According to the code in your question, this can be done in the vertex shader, too. But you have to do a perspective divide after the transformation by the model-view and projection matrices, to take the point from clip space to normalized device space:
uniform float x;
uniform float y;
uniform float z;

varying vec3 cpt;

void main() {
    vec4 cpt_h = projectionMatrix * modelViewMatrix * vec4(x, y, z, 1.0);
    cpt = cpt_h.xyz / cpt_h.w; // assign the varying; re-declaring it as "vec3 cpt" here would shadow it and leave the varying unset
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
If u_resolution is the width and height of the viewport, then the x and y coordinates of the fragment in normalized device space can be calculated by:
vec2 coord = gl_FragCoord.xy / u_resolution.xy * 2.0 - 1.0;
But I recommend transforming the center point of the circle to window (pixel) coordinates; then the radius can be set in pixels, too:
vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy;
To calculate the length of a vector you can use the GLSL function length.
The final fragment shader may look like this:
varying vec3 cpt;

uniform vec2 u_resolution;
uniform float u_pixel_ratio; // device pixel ratio
uniform float r;             // e.g. 100.0 means a radius of 100 pixels

float circle(vec2 _st, vec2 _center, float _radius)
{
    // thickness of the circle outline in pixels
    const float thickness = 20.0;
    // distance to the center point in pixels
    float dist = length(_st - _center);
    return 1.0 - smoothstep(0.0, thickness / 2.0, abs(_radius - dist));
}

void main() {
    vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy * u_pixel_ratio;
    float point = circle(gl_FragCoord.xy, cpt_p, r);
    gl_FragColor = vec4(point);
}
For example, a circle with a radius of 50.0 and a thickness of 20.0:
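For reference, a sketch of how these uniforms might be fed from JavaScript (assuming material is the ShaderMaterial and the canvas size is given in CSS pixels, since the shader multiplies by u_pixel_ratio itself):

// Hedged sketch: resolution, pixel-ratio and radius uniforms for the circle shader.
material.uniforms.u_resolution.value.set(
    renderer.domElement.clientWidth,   // viewport width in CSS pixels
    renderer.domElement.clientHeight); // viewport height in CSS pixels
material.uniforms.u_pixel_ratio.value = window.devicePixelRatio;
material.uniforms.r.value = 50.0;      // radius in device pixels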
If you want to apply a perspective distortion to the circle, so that its size decreases with distance, then you have to set the radius r in world coordinates.
Calculate a point on the circle, and in the vertex shader compute the distance from that point to the circle's center in normalized device space.
This is the radius you have to pass from the vertex shader to the fragment shader, in addition to the circle's center point.
uniform float x;
uniform float y;
uniform float z;
uniform float r; // radius in world space

varying vec3 cpt;
varying float radius;

void main() {
    vec4 cpt_v = modelViewMatrix * vec4(x, y, z, 1.0);
    vec4 rpt_v = vec4(cpt_v.x, cpt_v.y + r, cpt_v.zw); // a point on the circle, offset by r in view space
    vec4 cpt_h = projectionMatrix * cpt_v;
    vec4 rpt_h = projectionMatrix * rpt_v;
    cpt = cpt_h.xyz / cpt_h.w;
    vec3 rpt = rpt_h.xyz / rpt_h.w; // perspective divide of the projected point (dividing rpt_v would be wrong)
    radius = length(rpt - cpt);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
varying vec3 cpt;
varying float radius; // radius in normalized device space, from the vertex shader

uniform vec2 u_resolution;
uniform float u_pixel_ratio; // device pixel ratio

float circle(vec2 _st, vec2 _center, float _radius)
{
    const float thickness = 20.0;
    float dist = length(_st - _center);
    return 1.0 - smoothstep(0.0, thickness / 2.0, abs(_radius - dist));
}

void main()
{
    vec2 cpt_p = (cpt.xy * 0.5 + 0.5) * u_resolution.xy * u_pixel_ratio;
    float radius_p = radius * 0.5 * u_resolution.y * u_pixel_ratio; // u_pixel_ratio is a float, so no .y swizzle
    float point = circle(gl_FragCoord.xy, cpt_p, radius_p);
    gl_FragColor = vec4(point);
}
I'm displaying a grid of particle clouds using shaders. Every time a user clicks a cloud, that cloud disappears and a new one takes its place. The curious thing is that GPU memory usage climbs every time a new cloud replaces an old one, regardless of whether the new cloud is larger or smaller (and the buffer sizes always stay the same; the unused points are simply displayed offscreen with no color). After fewer than 10 clicks the GPU maxes out and crashes.
Here is my physics shader, where the new positions are updated. I pass in the new position values for the new cloud by updating certain values in the tOffsets texture. After that are my two visual effects shaders (vertex and fragment). Can you see my efficiency issue? Or could this be a garbage-collection matter? Thanks in advance!
Physics Shader (frag only):
// Physics shader: this shader handles the calculations that move the various points. The position values are rendered out to a texture that is passed to the next pair of shaders, which add the sprites and opacity.
// the tPositions sampler is added to this shader by Three.js's GPUCompute script
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform float uTime;
uniform float uXOffW;
...noise functions omitted for brevity...
void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec4 offsets = texture2D( tOffsets, uv ).xyzw;
    float alphaMass = offsets.z;
    float cellIndex = offsets.w;

    if (cellIndex >= 0.0) { // this point will be rendered on screen
        float damping = 0.98;
        float texelSize = 1.0 / uPerMotifBufferDimension;
        vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferDimension) * texelSize,
                                floor(cellIndex / uPerMotifBufferDimension) * texelSize );
        perMotifUV += vec2(0.5 * texelSize);

        vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
        float swapState = selectionFactors.x;
        vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
        vec2 noiseSeed = gridPosition.zw;

        vec4 nowPos;
        vec2 velocity;

        if ( swapState == 0.0 ) { // no new position values are ready to be swapped in for this point
            nowPos = texture2D( tPositions, uv ).xyzw;
            velocity = vec2(nowPos.z, nowPos.w);
        } else { // swapState == 1: new position values are ready to be swapped in for this point
            nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
            velocity = vec2(0.0, 0.0);
        }

        ...physics calculations omitted for brevity...

        vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);

        // Write the new position out to a texture for processing in the visual effects shader
        gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
    } else { // this point will not be rendered on screen
        // Write the new position out off screen (all -1 cellIndexes have off-screen offset values)
        gl_FragColor = vec4( offsets.x, offsets.y, 0.0, 0.0 );
    }
}
From the physics shader, the tPositions texture with the points' new positions is rendered out and passed to the visual effects shaders (a JavaScript sketch of that hand-off follows, then the shaders themselves):
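A minimal sketch of the per-frame hand-off, assuming three.js's GPUComputationRenderer (gpuCompute, positionVariable and visualMaterial are hypothetical names):

// Hedged sketch: run the physics pass, then feed its output to the visual-effects material.
gpuCompute.compute(); // renders the physics fragment shader into the current render target
visualMaterial.uniforms.tPositions.value =
    gpuCompute.getCurrentRenderTarget(positionVariable).texture;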
Visual Effects Shader (vert):
uniform sampler2D tPositions; // passed in from the Physics Shader
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform sampler2D uTextureSheet;
uniform float uPointSize;
uniform float uTextureCoordSizeX;
uniform float uTextureCoordSizeY;
attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
...omitted noise functions for brevity...
void main() {
    vec4 tmpPos = texture2D( tPositions, position.xy );
    vec2 pos = tmpPos.xy;
    vec2 vel = tmpPos.zw;

    vCellIndex = aCellIndex;

    if (vCellIndex >= 0.0) { // this point will be rendered onscreen
        float texelSize = 1.0 / uPerMotifBufferDimension;
        vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferDimension) * texelSize,
                                floor(aCellIndex / uPerMotifBufferDimension) * texelSize );
        perMotifUV += vec2(0.5 * texelSize);

        vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
        float aSelectedMotif = selectionFactors.x;
        float aColor = selectionFactors.y;
        float fadeFactor = selectionFactors.z;

        vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
        vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
        vAlpha = aAlpha * fadeFactor;
        vColor = vec3( 1.0, aColor, 1.0 );
        gl_PointSize = uPointSize;
    } else { // this point will not be rendered onscreen
        vAlpha = 0.0;
        vColor = vec3(0.0, 0.0, 0.0);
        gl_PointSize = 0.0;
    }

    gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 );
}
Visual Effects Shader (frag):
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;
varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;
void main() {
    gl_FragColor = vec4( vColor, vAlpha );

    if (vCellIndex >= 0.0) { // this point will be rendered onscreen, so add the texture
        vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
        gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
    }
}
Thanks to @Blindman67's comment above, I sorted out the problem. It had nothing to do with the shaders. In the JavaScript (Three.js), I needed to signal the GPU to delete old textures before adding the updated ones.
Every time I update a texture (most of mine are DataTextures), I need to call dispose() on the existing texture before creating and uploading the new one, like so:
var textureHandle; // holds a reference to the current texture uniform value
textureHandle.dispose(); // ** deallocates GPU memory **
textureHandle = new THREE.DataTexture( textureData, dimension, dimension, THREE.RGBAFormat, THREE.FloatType );
textureHandle.needsUpdate = true;
uniforms.textureHandle.value = textureHandle;
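Wrapped up as a reusable helper, the same pattern might look like this (a sketch; the function name and arguments are hypothetical):

// Hedged sketch: swap in a new DataTexture without leaking the old one's GPU memory.
function swapDataTexture(uniform, textureData, dimension) {
    if (uniform.value) uniform.value.dispose(); // free the old texture on the GPU first
    var tex = new THREE.DataTexture(textureData, dimension, dimension,
                                    THREE.RGBAFormat, THREE.FloatType);
    tex.needsUpdate = true;
    uniform.value = tex;
}

// Usage: swapDataTexture(uniforms.tOffsets, newOffsetData, bufferDimension);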
Could someone please help me with my OpenGL GLSL 4.0 shader? The problem I am having is that when a 3D model (OBJ file) is loaded and rendered, everything works well (the lighting is good and the mesh vertices display correctly) except for the mesh's normals. Specifically, when the OBJ file is rotated in its local/model space, the lighting on the mesh does not match the light position and the mesh's current orientation (I hope that makes some sense).
I believe the problem is with my normal matrix.
Problem: when my 3D mesh rotates, the lighting is messed up (it does not reflect the light position).
Any help would be much appreciated. Thanks in advance.
Vertex Shader
#version 400

// Handle translation, projection, etc.
struct Matrix {
    mat4 mvp;
    mat4 mv;
    mat4 view;
    mat4 projection;
};

struct Light {
    vec3 position;
    vec3 color;
    vec3 direction;
    float intensity;
    vec3 ambient;
};

//---------------------------------------------------
// INPUT: Per-Vertex Data
//---------------------------------------------------
layout (location = 0) in vec3 inputPosition;
layout (location = 1) in vec3 inputNormal;
layout (location = 2) in vec3 inputTexture;

//--------------------------------------------
// UNIFORM INPUT: Data supplied from the C++ application
//--------------------------------------------
uniform Matrix matrix;
uniform Light light;
uniform vec3 cameraPosition;

out vec3 fragmentNormal;
out vec3 cameraVector;
out vec3 lightVector;
out vec2 texCoord;

void main() {
    // output the transformed vertex
    gl_Position = matrix.mvp * vec4(inputPosition, 1.0);

    mat3 Normal_Matrix = mat3( transpose(inverse(matrix.mv)) );

    // set the normal for the fragment shader and
    // the vector from the vertex to the camera
    vec3 vertex = (matrix.mv * vec4(inputPosition, 1.0)).xyz;

    //----------------------------------------------------------
    // The problem (I think) is here
    //----------------------------------------------------------
    fragmentNormal = normalize(Normal_Matrix * inputNormal);
    cameraVector = (matrix.mv * vec4(cameraPosition, 1.0)).xyz - vertex;
    lightVector = vertex - (matrix.mv * vec4(light.position, 1.0)).xyz;

    // store the texture data
    texCoord = inputTexture.xy;
}
Fragment Shader
#version 400

const int NUM_LIGHTS = 3;
const float MAX_DIST = 15.0;
const float MAX_DIST_SQUARED = MAX_DIST * MAX_DIST;
const vec3 AMBIENT = vec3(0.152, 0.152, 0.152); // 0.2 for every component is a good dark value

struct Light {
    vec3 position;
    vec3 color;
    vec3 direction;
    float intensity;
    vec3 ambient;
};

uniform sampler2D textureSampler; // the image
uniform Light light;

out vec4 finalOutput;

// in: interpolated varyings; must be declared in both the vertex and fragment shader
in vec2 texCoord; // texture coordinate
in vec3 fragmentNormal;
in vec3 cameraVector;
in vec3 lightVector;

void main() {
    vec4 texColor = texture(textureSampler, texCoord); // texture2D is unavailable in core GLSL 4.0

    // initialize diffuse/specular lighting
    vec3 diffuse = vec3(0.005f, 0.005f, 0.005f);
    vec3 specular = vec3(0.00f, 0.00f, 0.00f);

    // normalize the fragment normal and camera direction
    vec3 normal = normalize(fragmentNormal);
    vec3 cameraDir = normalize(cameraVector);

    // distance attenuation factor between 0.0 and 1.0
    float dist = min(dot(lightVector, lightVector), MAX_DIST_SQUARED) / MAX_DIST_SQUARED;
    float distFactor = 1.0 - dist;

    // diffuse
    vec3 lightDir = normalize(lightVector);
    float diffuseDot = dot(normal, lightDir);
    diffuse += light.color * clamp(diffuseDot, 0.0, 1.0) * distFactor;

    // specular
    vec3 halfAngle = normalize(cameraDir + lightDir);
    vec3 specularColor = min(light.color + 0.8, 1.0);
    float specularDot = dot(normal, halfAngle);
    specular += specularColor * pow(clamp(specularDot, 0.0, 1.0), 16.0) * distFactor;

    vec4 sample0 = vec4(1.0, 1.0, 1.0, 1.0);
    vec3 ambDifCombo = diffuse + AMBIENT;

    // calculate the final color
    vec3 color = clamp(sample0.rgb * ambDifCombo + specular, 0.0, 1.0);
    finalOutput = vec4(color * vec3(texColor), sample0.a);
}
You should not transform your light position; the light should remain stationary while your mesh rotates. Multiplying light.position by the model-view matrix moves the light along with the model, which is why the lighting follows the rotation. Instead of this:
lightVector = vertex - (matrix.mv * vec4(light.position,1.0)).xyz;
Do this:
lightVector = vertex - light.position;
I would also try not transforming your camera position, either.