How to convert a square texture into a trapezoid texture with progressive distortion in GLSL - three.js

I'm working on a Three.js project and I'm trying to convert a square with a square texture inside into a trapezoid.
I managed to create the shape, but although the texture fits/covers the shape, it does so with an undesired distortion.
I'm using a PlaneBufferGeometry with a ShaderMaterial, and I'm trying to obtain this distortion in the shader (although it would be fine if it were done on the Three.js geometry side).
This is my vertex shader:
uniform sampler2D uTexture;
varying vec2 vUv;

void main() {
    float scaleTOP = 0.5;
    float scaleBOTTOM = 1.0;
    float scaleLEFT = 1.0;
    float scaleRIGHT = 1.0;

    float scaleX = mix(scaleBOTTOM, scaleTOP, uv.y);
    float posX = position.x * scaleX;
    float scaleY = mix(scaleLEFT, scaleRIGHT, uv.x);
    float posY = position.y * scaleY;

    vec3 finalPosition = vec3(posX, posY, position.z); // vec3 needs all three components
    gl_Position = projectionMatrix * modelViewMatrix * vec4( finalPosition, 1.0 );

    // Varyings:
    vUv = uv;
}
And this is my fragment shader:
uniform sampler2D uTexture;
varying vec2 vUv;

void main() {
    vec4 tex = texture2D( uTexture, vUv );
    gl_FragColor = vec4(tex.r, tex.g, tex.b, 1.0);
}
Unfortunately, while I managed to distort the square into the trapezoid, the texture is not distorted the way I want. See the figure for the intended result:
Figure:

My vertex and fragment shaders were fine.
The problem was that the Three.js geometry I was using had only 2 polygons. I was using:
this.bg_geometry = new THREE.PlaneBufferGeometry(width, height, 1, 1)
That's it... only one division, which created just two triangles; they can actually be seen in the figure I posted.
I changed the geometry to:
this.bg_geometry = new THREE.PlaneBufferGeometry(width, height, 100, 100)
...and now the texture is distorted as desired.
Anyway, many thanks to @prisoner849, who put me on the right track: pass 4 points as a uniform (uPoints), in the order TL, TR, BL, BR, to set the shape of the plane.
My vertex shader looks now like this:
uniform vec3 uPoints[4];
varying vec2 vUv;

void main() {
    // Bilinear interpolation of the four corners (TL, TR, BL, BR)
    vec3 baselineBottom = (uPoints[3] - uPoints[2]) * uv.x + uPoints[2];
    vec3 baselineTop = (uPoints[1] - uPoints[0]) * uv.x + uPoints[0];
    vec3 finalPosition = (baselineTop - baselineBottom) * uv.y + baselineBottom;
    gl_Position = projectionMatrix * modelViewMatrix * vec4( finalPosition, 1.0 );
    vUv = uv;
}
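For reference, here is a minimal sketch of the JavaScript side that feeds this shader. The corner values, the texture variable, and the shader-string names below are illustrative assumptions; only the uPoints name comes from the shader above:
// Minimal sketch: supply the four corners (TL, TR, BL, BR) as the uPoints uniform.
const geometry = new THREE.PlaneBufferGeometry(width, height, 100, 100);
const material = new THREE.ShaderMaterial({
    uniforms: {
        uTexture: { value: texture }, // a THREE.Texture loaded elsewhere
        uPoints: { value: [
            new THREE.Vector3(-0.5,  1.0, 0.0), // TL
            new THREE.Vector3( 0.5,  1.0, 0.0), // TR
            new THREE.Vector3(-1.0, -1.0, 0.0), // BL
            new THREE.Vector3( 1.0, -1.0, 0.0)  // BR
        ] }
    },
    vertexShader: vertexShaderSource,     // the GLSL above
    fragmentShader: fragmentShaderSource
});
scene.add(new THREE.Mesh(geometry, material));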

Related

Why can't I use Three JS to rotate a mesh if I used a vertex shader to set gl_Position?

I have a bunch of points mapped to a disc and I want to make them rotate about the origin of the scene. I have tried to make the points rotate using the standard approach:
<mesh ref={mesh}>
..
useFrame((state) => {
    const { clock } = state;
    gPoints.current.material.uniforms.uTime.value = clock.elapsedTime;
    mesh.current.rotation.y += 0.01;
});
However, it does not rotate anything. I pass predetermined starting coordinates to the vertex shader as a uniform:
const vertexShader = `
    uniform float uTime;
    uniform float uDistance[1000];
    uniform float uPointSizes[1000];

    varying vec3 vColor;
    varying float vDistance;
    varying vec4 vRealPosition;

    void main() {
        float minPointSize = 1.0;
        float maxPointSize = 10.0;

        vDistance = distance(position, vec3(0.0));
        vColor = mix(vec3(100.0/255.0, 15.0/255.0, 0.0), vec3(200.0/255.0, 50.0/255.0, 100.0/255.0), vDistance/20.0);

        float pointSize = uPointSizes[gl_VertexID];

        // Do Not Touch
        gl_Position = projectionMatrix * viewMatrix * vec4( position, 1.0 );
        gl_PointSize = pointSize;
        vRealPosition = gl_Position;
    }
`
export default vertexShader
Alternatively, I have tried getting the points to rotate about the origin inside the vertex shader, but I can't get it to work.
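One thing worth noting about the snippet above: gl_Position is built from viewMatrix rather than modelViewMatrix, so the model matrix (which is what mesh.current.rotation.y updates) never reaches the shader. As a hedged sketch of the in-shader alternative, a rotation about the scene's Y axis can be driven from the existing uTime uniform:
// Sketch: rotate about the origin inside the vertex shader using uTime.
float c = cos(uTime * 0.5); // 0.5 is an arbitrary speed factor
float s = sin(uTime * 0.5);
mat3 rotY = mat3(  c, 0.0,  -s,   // column 0
                 0.0, 1.0, 0.0,   // column 1
                   s, 0.0,   c ); // column 2 (GLSL matrices are column-major)
gl_Position = projectionMatrix * viewMatrix * vec4(rotY * position, 1.0);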

cast shadow from partly transparent plane

Is there a way to cast a shadow from a plane whose texture plays a video with a chromakey shader? My trial seems to answer NO, but I guess my shader is not adapted. The object is a simple PlaneBufferGeometry, and the shader is:
The vertex shader is:
varying vec2 vUv;

void main() {
    vUv = uv;
    vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
    gl_Position = projectionMatrix * mvPosition;
}
The fragment shader is:
uniform sampler2D vidtexture;
uniform vec3 color;
varying vec2 vUv;

void main() {
    vec3 tColor = texture2D( vidtexture, vUv ).rgb;
    float a = (length(tColor - color) - 0.5) * 7.0;
    gl_FragColor = vec4(tColor, a);
}
Have you tried using the discard keyword? If your fragment shader encounters that keyword, it won't render that fragment; it's as if it didn't exist. You could use this to create shadow outlines defined by your chromakey instead of always getting a square shadow.
void main() {
    vec3 tColor = texture2D( vidtexture, vUv ).rgb;
    float a = (length(tColor - color) - 0.5) * 7.0;

    // Do not render pixels that are less than 10% opaque
    if (a < 0.1) discard;

    gl_FragColor = vec4(tColor, a);
}
This is the same approach Three.js uses for Material.alphaTest in all their built-in materials. You can see the GLSL source code for that command here.
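For reference, the built-in alphaTest check boils down to essentially the same conditional discard (a paraphrase of the three.js shader chunk, not the verbatim source):
// Equivalent of Material.alphaTest inside three.js's fragment shaders;
// ALPHATEST is the threshold value set on the material.
if ( diffuseColor.a < ALPHATEST ) discard;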

Slow memory climb until crash in the GPU

I'm displaying a grid of particle clouds using shaders. Every time a user clicks a cloud, that cloud disappears and a new one takes its place. The curious thing is that GPU memory usage climbs every time a new cloud replaces an old one, regardless of whether the new cloud is larger or smaller (and the buffer sizes always stay the same; the unused points are simply displayed offscreen with no color). After fewer than 10 clicks the GPU maxes out and crashes.
Here is my physics shader, where the new positions are updated. I pass in the new position values for the new cloud by updating certain values in the tOffsets texture. After that come my two visual effects shaders (vertex and fragment). Can you see my efficiency issue? Or could this be a garbage collection matter? Thanks in advance!
Physics Shader (frag only):
// Physics shader: handles the calculations to move the various points. The
// position values are rendered out to a texture that is passed to the next
// pair of shaders, which add the sprites and opacity.
// The tPositions sampler is added to this shader by Three.js's GPUCompute script.
uniform sampler2D tOffsets;
uniform sampler2D tGridPositionsAndSeeds;
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform float uTime;
uniform float uXOffW;

...noise functions omitted for brevity...

void main() {
    vec2 uv = gl_FragCoord.xy / resolution.xy;
    vec4 offsets = texture2D( tOffsets, uv ).xyzw;
    float alphaMass = offsets.z;
    float cellIndex = offsets.w;

    if (cellIndex >= 0.0) { // this point will be rendered on screen
        float damping = 0.98;
        float texelSize = 1.0 / uPerMotifBufferDimension;
        vec2 perMotifUV = vec2( mod(cellIndex, uPerMotifBufferDimension)*texelSize, floor(cellIndex / uPerMotifBufferDimension)*texelSize );
        perMotifUV += vec2(0.5*texelSize);

        vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
        float swapState = selectionFactors.x;
        vec4 gridPosition = texture2D( tGridPositionsAndSeeds, perMotifUV ).xyzw;
        vec2 noiseSeed = gridPosition.zw;

        vec4 nowPos;
        vec2 velocity;
        nowPos = texture2D( tPositions, uv ).xyzw;
        velocity = vec2(nowPos.z, nowPos.w);

        if ( swapState == 0.0 ) { // no new position values are ready to be swapped in for this point
            nowPos = texture2D( tPositions, uv ).xyzw;
            velocity = vec2(nowPos.z, nowPos.w);
        } else { // swapState == 1.0: new position values are ready to be swapped in for this point
            nowPos = vec4( -(uTime) + offsets.x, offsets.y, 0.0, 0.0 );
            velocity = vec2(0.0, 0.0);
        }

        ...physics calculations omitted for brevity...

        vec2 newPosition = vec2(nowPos.x - velocity.x, nowPos.y - velocity.y);

        // Write the new position out to a texture for processing in the visual effects shader
        gl_FragColor = vec4(newPosition.x, newPosition.y, velocity.x, velocity.y);
    } else { // this point will not be rendered on screen
        // Write the new position offscreen (all -1 cellIndexes have offscreen offset values)
        gl_FragColor = vec4( offsets.x, offsets.y, 0.0, 0.0);
    }
}
From the physics shader the tPositions texture with the points' new movements is rendered out and passed to the visual effects shaders:
Visual Effects Shader (vert):
uniform sampler2D tPositions; // passed in from the Physics Shader
uniform sampler2D tSelectionFactors;
uniform float uPerMotifBufferDimension;
uniform sampler2D uTextureSheet;
uniform float uPointSize;
uniform float uTextureCoordSizeX;
uniform float uTextureCoordSizeY;

attribute float aTextureIndex;
attribute float aAlpha;
attribute float aCellIndex;

varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;

...noise functions omitted for brevity...

void main() {
    vec4 tmpPos = texture2D( tPositions, position.xy );
    vec2 pos = tmpPos.xy;
    vec2 vel = tmpPos.zw;
    vCellIndex = aCellIndex;

    if (vCellIndex >= 0.0) { // this point will be rendered onscreen
        float texelSize = 1.0 / uPerMotifBufferDimension;
        vec2 perMotifUV = vec2( mod(aCellIndex, uPerMotifBufferDimension)*texelSize, floor(aCellIndex / uPerMotifBufferDimension)*texelSize );
        perMotifUV += vec2(0.5*texelSize);

        vec4 selectionFactors = texture2D( tSelectionFactors, perMotifUV ).xyzw;
        float aSelectedMotif = selectionFactors.x;
        float aColor = selectionFactors.y;
        float fadeFactor = selectionFactors.z;

        vTextureCoords = vec2( aTextureIndex * uTextureCoordSizeX, 0 );
        vTextureSize = vec2( uTextureCoordSizeX, uTextureCoordSizeY );
        vAlpha = aAlpha * fadeFactor;
        vColor = vec3( 1.0, aColor, 1.0 );
        gl_PointSize = uPointSize;
    } else { // this point will not be rendered onscreen
        vAlpha = 0.0;
        vColor = vec3(0.0, 0.0, 0.0);
        gl_PointSize = 0.0;
    }

    gl_Position = projectionMatrix * modelViewMatrix * vec4( pos.x, pos.y, position.z, 1.0 );
}
Visual Effects Shader (frag):
uniform sampler2D tPositions;
uniform sampler2D uTextureSheet;

varying float vCellIndex;
varying vec2 vTextureCoords;
varying vec2 vTextureSize;
varying float vAlpha;
varying vec3 vColor;

void main() {
    gl_FragColor = vec4( vColor, vAlpha );

    if (vCellIndex >= 0.0) { // this point will be rendered onscreen, so add the texture
        vec2 realTexCoord = vTextureCoords + ( gl_PointCoord * vTextureSize );
        gl_FragColor = gl_FragColor * texture2D( uTextureSheet, realTexCoord );
    }
}
Thanks to @Blindman67's comment above, I sorted out the problem. It had nothing to do with the shaders. In the JavaScript (Three.js) I needed to signal the GPU to delete old textures before adding the updated ones.
Every time I update a texture (most of mine are DataTextures) I need to call dispose() on the existing texture before creating and assigning the new one, like so:
var textureHandle = uniforms.textureHandle.value; // the current texture uniform value

textureHandle.dispose(); // ** deallocates the old texture's GPU memory **

textureHandle = new THREE.DataTexture( textureData, dimension, dimension, THREE.RGBAFormat, THREE.FloatType );
textureHandle.needsUpdate = true;
uniforms.textureHandle.value = textureHandle;
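Wrapped up as a small helper, a minimal sketch of the same pattern might look like this (the function name and the uniforms layout are illustrative assumptions):
// Hypothetical helper: swap in a new DataTexture, disposing the old one first.
function swapDataTexture(uniforms, name, textureData, dimension) {
    var old = uniforms[name].value;
    if (old) old.dispose(); // free the old GPU-side allocation

    var tex = new THREE.DataTexture( textureData, dimension, dimension, THREE.RGBAFormat, THREE.FloatType );
    tex.needsUpdate = true; // upload on next render
    uniforms[name].value = tex;
    return tex;
}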

GLSL webgl lerp normals from uv offset

I have a displacement map on a 512px × 512px plane (100x100 segments). As the image for the displacement map scrolls left, the vertices snap to their new height positions instead of blending smoothly. I have been looking at the mix() and smoothstep() functions to morph the normals to their positions over time, but I'm having a hard time implementing it.
uniform sampler2D heightText; // greyscale 512x512 texture
uniform float displace;
uniform float time;
uniform float speed;

varying vec2 vUV;
varying float scaleDisplace;

void main() {
    vUV = uv;
    vec2 uvOffset = vUV + vec2( 0.1, 0.1) * time; // animates the offset
    vec2 uvCo = vUV + vec2( 0.0, 0.0);
    vec2 texSize = vec2(-0.8, 0.8); // scales the image larger

    vec4 data = texture2D( heightText, uvOffset + fract(uvCo)*texSize.x);
    scaleDisplace = data.r;

    //vec3 possy = normal * displace * scaleDisplace;
    vec3 morphPossy = mix( position, normal * displace, scaleDisplace) * time;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(morphPossy, 1.0 );
}
Using Three.js r71 with vertex and fragment shaders.
Illustration of the intended result:
Any help appreciated...
Since you're using a texture as a height map, you should make sure that:
heightText.magFilter = THREE.LinearFilter; // This is the default value.
so that the values you receive are smoothed texel to texel.
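A minimal sketch of the full texture setup, assuming heightText is the 512x512 greyscale THREE.Texture already bound to the uniform (the RepeatWrapping lines are an assumption, relevant because the shader scrolls the UVs past 1.0):
heightText.magFilter = THREE.LinearFilter; // smooth between texels (the default)
heightText.minFilter = THREE.LinearFilter; // avoid stepped values when minified
heightText.wrapS = THREE.RepeatWrapping;   // the shader scrolls uvOffset beyond [0, 1]
heightText.wrapT = THREE.RepeatWrapping;
heightText.needsUpdate = true;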

GLSL 4.0 mesh rotation messes up normal? help please

Could someone please help me with my OpenGL GLSL 4.0 shader? The problem I am having is that when a 3D model (OBJ file) is loaded and rendered, everything works well (the lighting is good, the mesh vertices display great) except for the normals of the mesh. Specifically, when the OBJ file is rotated in its local/model space, the normals do not appear to light the mesh in accordance with the light position and its current orientation (I hope that makes some sense).
I believe the problem is with my normal matrix.
Problem: when my 3D mesh rotates, the lighting gets messed up (it does not reflect the light position).
Any help would be much appreciated. Thanks in advance.
VertexShader
#version 400

// Handle translation, projection, etc.
struct Matrix {
    mat4 mvp;
    mat4 mv;
    mat4 view;
    mat4 projection;
};

struct Light {
    vec3 position;
    vec3 color;
    vec3 direction;
    float intensity;
    vec3 ambient;
};

//---------------------------------------------------
// INPUT: Per-Vertex Data
//---------------------------------------------------
layout (location = 0) in vec3 inputPosition;
layout (location = 1) in vec3 inputNormal;
layout (location = 2) in vec3 inputTexture;

//--------------------------------------------
// UNIFORM: Input supplied by the C++ application
//--------------------------------------------
uniform Matrix matrix;
uniform Light light;
uniform vec3 cameraPosition;

out vec3 fragmentNormal;
out vec3 cameraVector;
out vec3 lightVector;
out vec2 texCoord;

void main() {
    // output the transformed vertex
    gl_Position = matrix.mvp * vec4(inputPosition,1.0);

    // When using (vec3, 0.0)
    mat3 Normal_Matrix = mat3( transpose(inverse(matrix.mv)) );

    // set the normal for the fragment shader and
    // the vector from the vertex to the camera
    vec3 vertex = (matrix.mv * vec4(inputPosition,1.0)).xyz;

    //----------------------------------------------------------
    // The problem (I think) is here
    //----------------------------------------------------------
    fragmentNormal = normalize(Normal_Matrix * inputNormal);
    cameraVector = (matrix.mv * vec4(cameraPosition,1.0)).xyz - vertex;
    lightVector = vertex - (matrix.mv * vec4(light.position,1.0)).xyz;

    // store the texture data
    texCoord = inputTexture.xy;
}
Fragment Shader
#version 400

const int NUM_LIGHTS = 3;
const float MAX_DIST = 15.0;
const float MAX_DIST_SQUARED = MAX_DIST * MAX_DIST;
const vec3 AMBIENT = vec3(0.152, 0.152, 0.152); // 0.2 for all components is a good dark value

struct Light {
    vec3 position;
    vec3 color;
    vec3 direction;
    float intensity;
    vec3 ambient;
};

// the image
uniform sampler2D textureSampler;
uniform Light light;

out vec4 finalOutput;

// in: uses interpolation; must be defined in both the vertex & fragment shaders
in vec2 texCoord; // texture coordinate
in vec3 fragmentNormal;
in vec3 cameraVector;
in vec3 lightVector;

void main() {
    vec4 texColor = texture2D(textureSampler, texCoord);

    // initialize diffuse/specular lighting
    vec3 diffuse = vec3(0.005f, 0.005f, 0.005f);
    vec3 specular = vec3(0.00f, 0.00f, 0.00f);

    // normalize the fragment normal and camera direction
    vec3 normal = normalize(fragmentNormal);
    vec3 cameraDir = normalize(cameraVector);

    // loop through each light
    // calculate distance between 0.0 and 1.0
    float dist = min(dot(lightVector, lightVector), MAX_DIST_SQUARED) / MAX_DIST_SQUARED;
    float distFactor = 1.0 - dist;

    // diffuse
    vec3 lightDir = normalize(lightVector);
    float diffuseDot = dot(normal, lightDir);
    diffuse += light.color * clamp(diffuseDot, 0.0, 1.0) * distFactor;

    // specular
    vec3 halfAngle = normalize(cameraDir + lightDir);
    vec3 specularColor = min(light.color + 0.8, 1.0);
    float specularDot = dot(normal, halfAngle);
    specular += specularColor * pow(clamp(specularDot, 0.0, 1.0), 16.0) * distFactor;

    vec4 sample0 = vec4(1.0, 1.0, 1.0, 1.0);
    vec3 ambDifCombo = (diffuse + AMBIENT);

    // calculate the final color
    vec3 color = clamp(sample0.rgb * ambDifCombo + specular, 0.0, 1.0);
    finalOutput = vec4(color * vec3(texColor), sample0.a);
}
You should not transform your light position. Your light should remain stationary while your mesh rotates. Instead of this:
lightVector = vertex - (matrix.mv * vec4(light.position,1.0)).xyz;
Do this:
lightVector = vertex - light.position;
I would also try not transforming your camera position either.
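If you'd rather keep the lighting math in one space, here is a hedged alternative sketch: with light.position defined in world space, transform it by the view matrix only (the camera transform, already available as matrix.view in the Matrix struct), not the full model-view matrix, so the light stays fixed while the model rotates:
// Sketch: view-space lighting; assumes light.position is given in world space.
vec3 lightPosView = (matrix.view * vec4(light.position, 1.0)).xyz;
lightVector = vertex - lightPosView; // same direction convention as above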
