What I want to do is load a texture containing only alpha values from a PNG image, while using the color of the material for the RGB. For context, I use this for GPU picking to find sprites that are clicked on; this way I can tell whether a sprite was clicked or whether the user clicked on the transparent part of the sprite.
I tried using THREE.AlphaFormat as the format and tried all the types, but what I get is a sprite with correct alpha whose texture color is still combined with the color of the material.
Here is the code I have tried so far:
var type = THREE.UnsignedByteType;
var spriteMap = new THREE.TextureLoader().load( url );
spriteMap.format = THREE.AlphaFormat;
spriteMap.type = type;
var spriteMaterial = new THREE.SpriteMaterial( { map: spriteMap , color: idcolor.getHex() } );
var sprite = new THREE.Sprite( spriteMaterial );
sprite.position.set( this.position.x , this.position.y , this.position.z );
sprite.scale.set( this.scale.x , this.scale.y , this.scale.z );
Selection.GpuPicking.pickingScene.add( sprite );
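For reference, this is roughly how I read the picked pixel back afterwards (simplified; pickingRenderTarget and the mouse coordinates are placeholders here):
// render the picking scene into a render target, then read the pixel under the mouse
renderer.render( Selection.GpuPicking.pickingScene, camera, pickingRenderTarget );
var pixelBuffer = new Uint8Array( 4 );
renderer.readRenderTargetPixels( pickingRenderTarget, mouseX, pickingRenderTarget.height - mouseY, 1, 1, pixelBuffer );
// reconstruct the id that was encoded in idcolor
var id = ( pixelBuffer[ 0 ] << 16 ) | ( pixelBuffer[ 1 ] << 8 ) | pixelBuffer[ 2 ];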
Any ideas on how to achieve this?
three.js r.91
I didn't manage to do what I wanted by combining texture and material. My solution was to create a plane and write my own custom shaders to handle the Sprite functionality. I copied the sprite shaders from the three.js library and removed the code I didn't need, since I only needed the correct alpha and a single visible color.
Here is my code for creating a sprite that takes its color from the material and its alpha values from the texture:
// Create the position and scale you want for your sprite, and set the url of your texture
var spriteMap = new THREE.TextureLoader().load( url , function( texture ){
var geometry = new THREE.PlaneGeometry( 1.0, 1.0 );
var uniforms = {
color: { value: color },
map: { value: texture },
opacity: { value: 1.0 },
alphaTest: { value: 0.0 },
scale: { value: scale }
};
var material = new THREE.ShaderMaterial(
{
uniforms: uniforms,
vertexShader: vertexShader, //input the custom shader here
fragmentShader: fragmentShader, //input the custom shader here
transparent: true,
});
var mesh = new THREE.Mesh( geometry, material );
mesh.position.set( position.x , position.y , position.z );
mesh.scale.set( scale.x , scale.y , scale.z );
scene.add(mesh);
} );
This is my vertex shader:
uniform vec3 scale;
varying vec2 vUV;
void main() {
float rotation = 0.0;
vUV = uv;
// scale the unit plane in object space
vec3 alignedPosition = position * scale;
vec2 rotatedPosition;
rotatedPosition.x = cos( rotation ) * alignedPosition.x - sin( rotation ) * alignedPosition.y;
rotatedPosition.y = sin( rotation ) * alignedPosition.x + cos( rotation ) * alignedPosition.y;
// billboarding: transform only the object origin, then offset it in view space
vec4 mvPosition = modelViewMatrix * vec4( 0.0, 0.0, 0.0, 1.0 );
mvPosition.xy += rotatedPosition;
gl_Position = projectionMatrix * mvPosition;
}
My fragment shader:
varying vec2 vUV;
uniform vec3 color;
uniform sampler2D map;
uniform float opacity;
uniform float alphaTest;
void main() {
vec4 texColor = texture2D( map, vUV );
// use the uniform color for RGB, but take alpha from the texture
gl_FragColor = vec4( color , texColor.a * opacity );
if ( gl_FragColor.a < alphaTest ) discard;
}
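For picking, the color uniform is simply the object's id encoded as an RGB color, e.g. (a sketch, assuming id is a small integer):
// encode an integer id into the vec3 color uniform
var idcolor = new THREE.Color( id );
uniforms.color.value = new THREE.Vector3( idcolor.r, idcolor.g, idcolor.b );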
I've applied a ShaderMaterial to a glb model that has an opacity map (the model is a human body, and the opacity map is used to create the hair and eyelashes); the reference for the model's material was this -
As you can see, the material is some sort of glow effect, and I managed to find This Example, which is pretty much what I need. The problem is that I can't figure out how to apply the model's opacity map. If you look closely at the difference between my result (left picture) and the right picture, you'll see that the hair doesn't look as it should, since the opacity map is not applied... I wonder whether ShaderMaterial is right for this look, or whether I should use another kind of shader.
Here is my material code -
let m = new THREE.MeshStandardMaterial({
roughness: 0.25,
metalness: 0.75,
opacity: 0.3,
map: new THREE.TextureLoader().load(
"/maps/opacity.jpg",
(tex) => {
tex.wrapS = THREE.RepeatWrapping;
tex.wrapT = THREE.RepeatWrapping;
tex.repeat.set(16, 1);
}
),
onBeforeCompile: (shader) => {
shader.uniforms.s = uniforms.s;
shader.uniforms.b = uniforms.b;
shader.uniforms.p = uniforms.p;
shader.uniforms.glowColor = uniforms.glowColor;
shader.vertexShader = document.getElementById("vertexShader").textContent;
shader.fragmentShader = document.getElementById(
"fragmentShader"
).textContent;
shader.side = THREE.FrontSide;
shader.transparent = true;
// shader.uniforms['alphaMap'].value.needsUpdate = true;
console.log(shader.vertexShader);
console.log(shader.fragmentShader);
},
});
Shader setting:
<script id="vertexShader" type="x-shader/x-vertex">
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
vNormal = normalize( normalMatrix * normal ); // transform to view space
vPositionNormal = normalize(( modelViewMatrix * vec4(position, 1.0) ).xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<!-- fragment shader a.k.a. pixel shader -->
<script id="fragmentShader" type="x-shader/x-vertex">
uniform vec3 glowColor;
uniform float b;
uniform float p;
uniform float s;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
gl_FragColor = vec4( mix(vec3(0), glowColor, a), 1. );
}
</script>
You're creating a MeshStandardMaterial, but then you override all of its shader code when you assign new vertex and fragment shaders, which makes the Standard material useless. You should stick to ShaderMaterial, like the demo you linked. It would make your code cleaner:
// Get shader code
let vertShader = document.getElementById("vertexShader").textContent;
let fragShader = document.getElementById("fragmentShader").textContent;
// Build texture
let alphaTex = new THREE.TextureLoader().load("/maps/opacity.jpg");
alphaTex.wrapS = THREE.RepeatWrapping;
alphaTex.wrapT = THREE.RepeatWrapping;
// alphaTex.repeat.set(16, 1); <- repeat won't work in a custom shader
// Build material
let m = new THREE.ShaderMaterial({
transparent: true,
// side: THREE.FrontSide, <- this is already default. Not needed
uniforms: {
s: {value: 1},
b: {value: 2},
p: {value: 3},
alphaMap: {value: alphaTex},
glowColor: {value: new THREE.Color(0x0099ff)},
// we create a Vec2 to manually handle repeat
repeat: {value: new THREE.Vector2(16, 1)}
},
vertexShader: vertShader,
fragmentShader: fragShader
});
This builds your material in a cleaner way, since you're using the material's native build path without having to override anything. Then you can sample the alphaMap texture in your fragment shader:
uniform float s;
uniform float b;
uniform float p;
uniform vec3 glowColor;
uniform vec2 repeat;
// Declare the alphaMap uniform if we're gonna use it
uniform sampler2D alphaMap;
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
// Sample map with UV coordinates. Multiply by uniform to get repeat
float a2 = texture2D(alphaMap, vUv * repeat).r;
// Combine both alphas
float opacity = a * a2;
gl_FragColor = vec4( mix(vec3(0), glowColor, opacity), 1. );
}
Also, don't forget to carry over the UVs from your vertex shader:
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
// convert uv attribute to vUv varying
vUv = uv;
vNormal = normalize( normalMatrix * normal ); // 转换到视图空间
vPositionNormal = normalize(( modelViewMatrix * vec4(position, 1.0) ).xyz);
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Update
The error
'=' : cannot convert from 'lowp 4-component vector of float' to 'highp float'
means I made a mistake when taking the texture2D() sample in the fragment shader. It should have been texture2D().r, so we read only the red channel to get a float instead of cramming all four RGBA channels (a vec4) into a float. See the following snippet for the final result:
var container, scene, camera, renderer, controls, torusKnot;
init()
function init() {
initBase()
initObject()
render()
}
function initBase () {
container = document.getElementById( 'ThreeJS' )
// SCENE
scene = new THREE.Scene();
// CAMERA
var SCREEN_WIDTH = window.innerWidth, SCREEN_HEIGHT = window.innerHeight
var VIEW_ANGLE = 45, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000
camera = new THREE.PerspectiveCamera( VIEW_ANGLE, ASPECT, NEAR, FAR)
camera.position.set(0,0,50)
camera.lookAt(scene.position)
// RENDERER
renderer = new THREE.WebGLRenderer( {antialias:true} )
renderer.setSize(SCREEN_WIDTH, SCREEN_HEIGHT)
renderer.setClearColor(0x333333)
container.appendChild( renderer.domElement )
// CONTROLS
controls = new THREE.OrbitControls( camera, renderer.domElement )
// Resize
window.addEventListener("resize", onWindowResize);
}
function onWindowResize() {
var w = window.innerWidth;
var h = window.innerHeight;
renderer.setSize(w, h);
camera.aspect = w / h;
camera.updateProjectionMatrix();
}
function initObject () {
let vertShader = document.getElementById("vertexShader").textContent;
let fragShader = document.getElementById("fragmentShader").textContent;
// Build texture
let alphaTex = new THREE.TextureLoader().load("https://threejs.org/examples/textures/floors/FloorsCheckerboard_S_Diffuse.jpg");
alphaTex.wrapS = THREE.RepeatWrapping;
alphaTex.wrapT = THREE.RepeatWrapping;
var customMaterial = new THREE.ShaderMaterial({
uniforms: {
s: {value: -1},
b: {value: 1},
p: {value: 2},
alphaMap: {value: alphaTex},
glowColor: {value: new THREE.Color(0x00ffff)},
// we create a Vec2 to manually handle repeat
repeat: {value: new THREE.Vector2(16, 1)}
},
vertexShader: vertShader,
fragmentShader: fragShader
})
var geometry = new THREE.TorusKnotBufferGeometry( 10, 3, 100, 32 )
torusKnot = new THREE.Mesh( geometry, customMaterial )
scene.add( torusKnot )
}
function render() {
torusKnot.rotation.y += 0.01;
renderer.render( scene, camera );
requestAnimationFrame(render);
}
body{
overflow: hidden;
margin: 0;
}
<script src="https://threejs.org/build/three.js"></script>
<script src="https://threejs.org/examples/js/controls/OrbitControls.js"></script>
<!-- vertex shader -->
<script id="vertexShader" type="x-shader/x-vertex">
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
// convert uv attribute to vUv varying
vUv = uv;
vNormal = normalize( normalMatrix * normal ); // transform to view space
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
vPositionNormal = normalize(( mvPosition ).xyz);
gl_Position = projectionMatrix * mvPosition;
}
</script>
<!-- fragment shader a.k.a. pixel shader -->
<script id="fragmentShader" type="x-shader/x-vertex">
uniform float s;
uniform float b;
uniform float p;
uniform vec3 glowColor;
uniform vec2 repeat;
// Declare the alphaMap uniform if we're gonna use it
uniform sampler2D alphaMap;
// Don't forget to declare UV coordinates
varying vec2 vUv;
varying vec3 vNormal;
varying vec3 vPositionNormal;
void main()
{
float a = pow( b + s * abs(dot(vNormal, vPositionNormal)), p );
// Sample map with UV coordinates. Multiply by uniform to get repeat
float a2 = texture2D(alphaMap, vUv * repeat).r;
// Combine both alphas
float opacity = a * a2;
gl_FragColor = vec4( mix(vec3(0), glowColor, opacity), 1. );
}
</script>
<div id="ThreeJS" style="position: absolute; left:0px; top:0px"></div>
EDIT: The Forge Viewer I'm using has a customized version of Three.js release r71 (source), which is why I'm using outdated code. The current release of Three.js is r121.
I've created a THREE.Group() that contains various THREE.PointCloud(geometry, material) objects. One of the point clouds is composed of a THREE.BufferGeometry() and a THREE.ShaderMaterial().
When I add a color attribute to the BufferGeometry, only red (1,0,0), white (1,1,1), and yellow (1,1,0) seem to work. This image is when I set the color to red (1,0,0). This image is when I set the color to blue (0,0,1).
My question is: how do I resolve this? Is the issue in the shaders? Is it how I build the BufferGeometry? Is it a bug? Thanks.
My shaders:
var vShader = `uniform float size;
varying vec3 vColor;
void main() {
vColor = color;
vec4 mvPosition = modelViewMatrix * vec4( position, 1.0 );
gl_PointSize = size * ( size / (length(mvPosition.xyz) + 0.00001) );
gl_Position = projectionMatrix * mvPosition;
}`
var fShader = `varying vec3 vColor;
uniform sampler2D sprite;
void main() {
gl_FragColor = vec4(vColor, 1.0 ) * texture2D( sprite, gl_PointCoord );
if (gl_FragColor.x < 0.2) discard;
}`
My material:
var materialForBuffers = new THREE.ShaderMaterial( {
uniforms: {
size: { type: 'f', value: this.pointSize},
sprite: { type: 't', value: THREE.ImageUtils.loadTexture("../data/white.png") },
},
vertexShader: vShader,
fragmentShader: fShader,
transparent: true,
vertexColors: true,
});
How the color is added:
const colors = new Float32Array( [ 1.0, 0.0, 0.0 ] );
geometryForBuffers.addAttribute('color', new THREE.BufferAttribute( colors, 3 ));
Link to code
It looks like you may already be using parts of that sample code, but if not, please refer to https://github.com/petrbroz/forge-point-clouds/blob/develop/public/scripts/extensions/pointcloud.js (live demo: https://forge-point-clouds.autodesk.io). This sample code already uses the color geometry attribute to specify the colors of individual points.
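As a minimal sketch of that approach, reusing the materialForBuffers from the question (the positions and colors here are purely illustrative):
// one RGB triple per point; here two points, green and blue
var positions = new Float32Array( [ 0, 0, 0, 1, 0, 0 ] );
var colors = new Float32Array( [ 0, 1, 0, 0, 0, 1 ] );
var geometry = new THREE.BufferGeometry();
geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.addAttribute( 'color', new THREE.BufferAttribute( colors, 3 ) );
// r71 still uses THREE.PointCloud (renamed THREE.Points in later releases)
var points = new THREE.PointCloud( geometry, materialForBuffers );
scene.add( points );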
I discovered this solution for creating post-processing effects with three.js:
https://medium.com/@luruke/simple-postprocessing-in-three-js-91936ecadfb7
(made by Luigi De Rosa)
It's a great way to do it. Unfortunately, I can't manage to add transparency to my final render. Should I add a transparency component inside my post-processing fragment shader?
const fragmentShader = `precision highp float;
uniform sampler2D uScene;
uniform vec2 uResolution;
uniform float uTime;
void main() {
vec2 uv = gl_FragCoord.xy / uResolution.xy;
vec3 color = vec3(uv, 1.0);
//simple distortion effect
uv.y += sin(uv.x*30.0+uTime*10.0)/40.0;
uv.x -= sin(uv.y*10.0-uTime)/40.0;
color = texture2D(uScene, uv).rgb;
gl_FragColor = vec4(color, 1.0);
}
`;
Thank you
EDIT 1 :
I added the attribute transparent: true to the RawShaderMaterial.
I changed the format of the new THREE.WebGLRenderTarget to THREE.RGBAFormat instead of THREE.RGBFormat.
I also added these lines at the end of my fragment shader:
gl_FragColor = vec4(color, 1.0);
vec4 tex = texture2D( uScene, uv );
if(tex.a < 0.0) {
gl_FragColor.a = 1.0;
}
But I still can't see through my canvas.
EDIT 2 :
Here's a snippet with the post-processing class:
let renderer, camera, scene, W = window.innerWidth, H = window.innerHeight, geometry, material, mesh;
initWebgl();
function initWebgl(){
renderer = new THREE.WebGLRenderer( { alpha: true, antialias: true } );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( W, H );
document.querySelector('.innerCanvas').appendChild( renderer.domElement );
camera = new THREE.OrthographicCamera(-W/H/2, W/H/2, 1/2, -1/2, -0.1, 0.1);
scene = new THREE.Scene();
geometry = new THREE.PlaneBufferGeometry(0.5, 0.5);
material = new THREE.MeshNormalMaterial();
mesh = new THREE.Mesh( geometry , material );
scene.add(mesh);
}
function rafP(){
requestAnimationFrame(rafP);
// renderer.render(scene, camera);
post.render(scene, camera);
}
const vertexShader = `precision highp float;
attribute vec2 position;
void main() {
// Look ma! no projection matrix multiplication,
// because we pass the values directly in clip space coordinates.
gl_Position = vec4(position, 1.0, 1.0);
}`;
const fragmentShader = `precision highp float;
uniform sampler2D uScene;
uniform vec2 uResolution;
uniform float uTime;
void main() {
vec2 uv = gl_FragCoord.xy / uResolution.xy;
vec3 color = vec3(uv, 1.0);
uv.y += sin(uv.x*20.0)/10.0;
color = texture2D(uScene, uv).rgb;
gl_FragColor = vec4(color, 1.0);
vec4 tex = texture2D( uScene, uv );
// if(tex.a - percent < 0.0) {
if(tex.a < 0.0) {
gl_FragColor.a = 1.0;
//or without transparent = true use
// discard;
}
}`;
//PostProcessing
class PostFX {
constructor(renderer) {
this.renderer = renderer;
this.scene = new THREE.Scene();
// three.js for .render() wants a camera, even if we're not using it :(
this.dummyCamera = new THREE.OrthographicCamera();
this.geometry = new THREE.BufferGeometry();
// Triangle expressed in clip space coordinates
const vertices = new Float32Array([
-1.0, -1.0,
3.0, -1.0,
-1.0, 3.0
]);
this.geometry.addAttribute('position', new THREE.BufferAttribute(vertices, 2));
this.resolution = new THREE.Vector2();
this.renderer.getDrawingBufferSize(this.resolution);
this.target = new THREE.WebGLRenderTarget(this.resolution.x, this.resolution.y, {
format: THREE.RGBAFormat, //THREE.RGBFormat
stencilBuffer: false,
depthBuffer: true
});
this.material = new THREE.RawShaderMaterial({
fragmentShader,
vertexShader,
uniforms: {
uScene: { value: this.target.texture },
uResolution: { value: this.resolution }
},
transparent:true
});
// TODO: handle the resize -> update uResolution uniform and this.target.setSize()
this.triangle = new THREE.Mesh(this.geometry, this.material);
// Our triangle will be always on screen, so avoid frustum culling checking
this.triangle.frustumCulled = false;
this.scene.add(this.triangle);
}
render(scene, camera) {
this.renderer.setRenderTarget(this.target);
this.renderer.render(scene, camera);
this.renderer.setRenderTarget(null);
this.renderer.render(this.scene, this.dummyCamera);
console.log(this.renderer);
}
}
post = new PostFX(renderer);
rafP();
body{
margin:0;
padding:0;
background:#00F;
}
.innerCanvas{
position:fixed;
top:0;
left:0;
width:100%;
height:100%;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/105/three.js"></script>
<div class="innerCanvas"></div>
On the alpha channel, 0 means fully transparent and 1 means fully opaque.
The only thing you need, in this case, is to pass the result of your texture sample straight to gl_FragColor. You don't even need to worry about its alpha value.
gl_FragColor = texture2D(uScene, uv);
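Applied to the distortion shader from the question, the whole fragment shader reduces to this (a sketch):
precision highp float;
uniform sampler2D uScene;
uniform vec2 uResolution;
uniform float uTime;
void main() {
vec2 uv = gl_FragCoord.xy / uResolution.xy;
// simple distortion effect from the question
uv.y += sin( uv.x * 30.0 + uTime * 10.0 ) / 40.0;
uv.x -= sin( uv.y * 10.0 - uTime ) / 40.0;
// pass the sample through unchanged so its alpha survives
gl_FragColor = texture2D( uScene, uv );
}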
JSFiddle
I'm trying to replicate the effect shown in this Three.js example, but instead of showing the wireframe over an opaque box, I'd like to show just the edges without any faces (like what is shown when using THREE.EdgesGeometry). I know that setting the linewidth property doesn't work and that using shaders is necessary, but I'm not really sure where to begin. For reference, these are the shaders being used in the example above:
Vertex Shader:
attribute vec3 center;
varying vec3 vCenter;
void main() {
vCenter = center;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
Fragment Shader:
varying vec3 vCenter;
float edgeFactorTri() {
vec3 d = fwidth( vCenter.xyz );
vec3 a3 = smoothstep( vec3( 0.0 ), d * 1.5, vCenter.xyz );
return min( min( a3.x, a3.y ), a3.z );
}
void main() {
gl_FragColor.rgb = mix( vec3( 1.0 ), vec3( 0.2 ), edgeFactorTri() );
gl_FragColor.a = 1.0;
}
I've gotten as far as figuring out that the factor d is multiplied by (1.5 in the example) determines the thickness of the line, but I'm completely lost as to how the vCenter variable is actually used (it's a vec3 that is either [1, 0, 0], [0, 1, 0], or [0, 0, 1]) or what I could use to make THREE.EdgesGeometry render with thicker lines like in the example.
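For reference, in that example the center attribute cycles those three unit vectors across the vertices of each triangle, so every fragment can measure its barycentric-style distance to the triangle's edges. This is roughly what the example's setupAttributes does (a sketch, r89-era API):
function setupAttributes( geometry ) {
// assign (1,0,0), (0,1,0), (0,0,1) to the three vertices of each triangle
var vectors = [
new THREE.Vector3( 1, 0, 0 ),
new THREE.Vector3( 0, 1, 0 ),
new THREE.Vector3( 0, 0, 1 )
];
var position = geometry.attributes.position;
var centers = new Float32Array( position.count * 3 );
for ( var i = 0, l = position.count; i < l; i ++ ) {
vectors[ i % 3 ].toArray( centers, i * 3 );
}
geometry.addAttribute( 'center', new THREE.BufferAttribute( centers, 3 ) );
}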
Here is what happens when I try rendering the edges geometry with these shaders:
<script type="x-shader/x-vertex" id="vertexShader">
attribute vec3 center;
varying vec3 vCenter;
void main() {
vCenter = center;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
</script>
<script type="x-shader/x-fragment" id="fragmentShader">
varying vec3 vCenter;
uniform float lineWidth;
float edgeFactorTri() {
float newWidth = lineWidth + 0.5;
vec3 d = fwidth( vCenter.xyz );
vec3 a3 = smoothstep( vec3( 0.0 ), d * newWidth, vCenter.xyz );
return min( min( a3.x, a3.y ), a3.z );
}
void main() {
gl_FragColor.rgb = mix( vec3( 1.0 ), vec3( 0.2 ), edgeFactorTri() );
gl_FragColor.a = 1.0;
}
</script>
Javascript:
size = 150
geometry = new THREE.BoxGeometry(size, size, size);
material = new THREE.MeshBasicMaterial({ wireframe: true });
mesh = new THREE.Mesh(geometry, material);
mesh.position.x = -150;
scene.add(mesh);
//
// geometry = new THREE.BufferGeometry().fromGeometry(new THREE.BoxGeometry(size, size, size));
geometry = new THREE.EdgesGeometry(new THREE.BoxGeometry(size, size, size));
setupAttributes(geometry);
material = new THREE.ShaderMaterial({
uniforms: { lineWidth: { value: 10 } },
vertexShader: document.getElementById("vertexShader").textContent,
fragmentShader: document.getElementById("fragmentShader").textContent
});
material.extensions.derivatives = true;
mesh = new THREE.Mesh(geometry, material);
mesh.position.x = 150;
scene.add(mesh);
//
geometry = new THREE.BufferGeometry().fromGeometry(new THREE.SphereGeometry(size / 2, 32, 16));
setupAttributes(geometry);
material = new THREE.ShaderMaterial({
uniforms: { lineWidth: { value: 1 } },
vertexShader: document.getElementById("vertexShader").textContent,
fragmentShader: document.getElementById("fragmentShader").textContent
});
material.extensions.derivatives = true;
mesh = new THREE.Mesh(geometry, material);
mesh.position.x = -150;
scene.add(mesh);
jsFiddle
As you can see in the fiddle, this is not what I'm looking for, but I don't have a good enough grasp of how the shaders work to know where I'm going wrong or whether this approach can do what I want.
I've looked into this answer, but I'm not sure how to use it as a ShaderMaterial, and I can't use it as a shader pass (here are the shaders he uses in his answer).
I've also looked into THREE.MeshLine and this issue doesn't seem to have been resolved.
Any guidance would be greatly appreciated!
You want to modify this three.js example so the mesh is rendered as a thick wireframe.
The solution is to modify the shader and discard fragments in the center portion of each face -- that is, discard fragments not close to an edge.
You can do that like so:
void main() {
float factor = edgeFactorTri();
if ( factor > 0.8 ) discard; // cutoff value is somewhat arbitrary
gl_FragColor.rgb = mix( vec3( 1.0 ), vec3( 0.2 ), factor );
gl_FragColor.a = 1.0;
}
You can also set material.side = THREE.DoubleSide if you want.
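For example, building on the ShaderMaterial from the question (a sketch):
// same ShaderMaterial as before, but rendered double-sided
var material = new THREE.ShaderMaterial({
uniforms: { lineWidth: { value: 10 } },
vertexShader: document.getElementById("vertexShader").textContent,
fragmentShader: document.getElementById("fragmentShader").textContent,
side: THREE.DoubleSide
});
material.extensions.derivatives = true;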
updated fiddle: https://jsfiddle.net/vy0we5wb/4.
three.js r.89
The following link is my pen:
https://codepen.io/johnhckuo/pen/RxrXxX
And here is the code in my fragment shader:
void main() {
vec2 st = gl_FragCoord.xy/u_resolution.xy;
vec3 color = vec3(0.0);
vec3 worldtoEye = eye - worldPosition;
vec3 eyeDirection = normalize(worldtoEye);
vec3 pos = vec3(st.x*5.0, st.y*5.0, u_time*0.5);
color = vec3(fbm(pos) + 1.0);
//color = vec3(noise(pos)*0.5 + 1.0);
vec3 sunLight = vec3(1., 1., 1.);
color *= (diffuseLight(sunLight) + specularLight(eyeDirection));
vec3 oceanBlue = vec3(0.109, 0.419, 0.627);
gl_FragColor = vec4(oceanBlue * color, 1.0);
}
Here is the code for the plane geometry:
var plane_geometry = new THREE.PlaneBufferGeometry( 2000, 2000, 32 );
var customMaterial = new THREE.ShaderMaterial(
{
uniforms: uniforms,
vertexShader: document.getElementById( 'vertexShader' ).textContent,
fragmentShader: document.getElementById( 'fragmentShader' ).textContent
}
);
customMaterial.side = THREE.DoubleSide;
var surface = new THREE.Mesh( plane_geometry, customMaterial );
surface.position.set(0,0,0);
scene.add( surface );
I've created a shader and tried to apply it to my plane.
But whenever I zoom in or out, the shader seems to be fixed to the screen instead of zooming with the geometry.
Any suggestions are appreciated!
Change this line:
vec2 st = gl_FragCoord.xy/u_resolution.xy;
to
vec2 st = vUv.xy * 2.0;
and fiddle with the 2.0 (the tiling factor). gl_FragCoord is in screen space, which is why the pattern stays fixed to the screen; sampling by the interpolated UVs makes the noise stick to the surface instead.
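For that to compile, the vertex shader has to declare the varying and copy the built-in uv attribute into it; a minimal sketch:
varying vec2 vUv;
void main() {
// pass the uv attribute through to the fragment shader
vUv = uv;
gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}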
It's also easy to test out vert + frag shaders on ShaderFrog, as in: https://shaderfrog.com/app/view/1997