ThreeJS - Extend Lambert Shader with Custom VertexShader - three.js

I want to adapt this shader:
https://aerotwist.com/tutorials/an-introduction-to-shaders-part-2/
to a standard Lambert or Phong material so that it works with all the lights in my scene.
My current state is that I extend the Lambert material with this code:
var attributes = {
    displacement: {
        type: 'f', // a float
        value: [] // an empty array
    }
};
var uniforms = {
    amplitude: {
        type: 'f', // a float
        value: 0
    }
};
var shaders = { mylambert: THREE.ShaderLib[ 'lambert' ] };
var materials = {};
materials.mylambert = function( parameters, myUniforms ) {
    var material = new THREE.ShaderMaterial( {
        vertexShader: $('#vertexLambert').text(),
        fragmentShader: shaders.mylambert.fragmentShader,
        uniforms: THREE.UniformsUtils.merge( [ shaders.mylambert.uniforms, myUniforms ] ),
        attributes: attributes,
        lights: true,
        shading: THREE.FlatShading
    } );
    material.setValues( parameters );
    return material;
};
var myProperties = {
    lights: true,
    fog: true,
    transparent: true
};
var myMaterial = new materials.mylambert( myProperties, uniforms );
I got this code from this post:
extending lambert material, opacity not working
The vertexShader is basically the same as shaders.mylambert.vertexShader, but with the additional code from the shader example on top.
It partly works: the vertices move, but the faces are not re-shaded when they change shape, so when I use a plane as the mesh, for example, it always has the same shading.
In short:
I need a Lambert/Phong shader that moves the vertices up and down over time to simulate a low-poly water surface.
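The reason the faces keep the same shading is that the displaced vertices still carry the flat plane's original normals, so the lighting never changes. Below is a minimal vertex-shader sketch of recomputing the normal, assuming the displacement is a known height function of position and time; the names u_time, displace and displacedNormal are illustrative, not from the question:
// GLSL chunk (held in a JS string) that rebuilds the normal of a displaced,
// y-up plane from two neighbouring samples of the height function.
var displacementChunk = `
    uniform float u_time;
    float displace( vec3 p ) {
        return sin( p.x * 2.0 + u_time ) * 0.25; // example wave
    }
    vec3 displacedNormal( vec3 p ) {
        float eps = 0.01;
        float h  = displace( p );
        float hx = displace( p + vec3( eps, 0.0, 0.0 ) );
        float hz = displace( p + vec3( 0.0, 0.0, eps ) );
        // gradient of the height field tilts the up vector
        return normalize( vec3( -( hx - h ) / eps, 1.0, -( hz - h ) / eps ) );
    }
`;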

If this is still relevant, you can solve this much more simply:
Have your model render with a Lambert, Phong, Standard, or whatever lit material you like.
Create another Scene, Camera and a WebGLRenderTarget, create a plane and apply your ShaderMaterial to it. Position your camera so that the plane exactly fills the frame.
Render the other scene to the WebGLRenderTarget and apply it as a map to your original Lambert material, like this:
let mat = new THREE.MeshLambertMaterial({
    map: renderTarget.texture
})
Voilà! You now have a fully lit ShaderMaterial, just as you wanted.
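A minimal sketch of that setup, assuming a recent three.js (r102+, where renderer.setRenderTarget() exists) and that renderer, scene, camera and your ShaderMaterial (here called myShaderMaterial) already exist:
// Offscreen scene whose only job is to render the ShaderMaterial to a texture.
let renderTarget = new THREE.WebGLRenderTarget( 1024, 1024 );
let rtScene = new THREE.Scene();
let rtCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0.1, 10 );
rtCamera.position.z = 1;

// A 2x2 plane exactly fills the [-1, 1] frustum of the orthographic camera.
rtScene.add( new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), myShaderMaterial ) );

// Each frame: render the shader to the target, then the lit scene to screen.
function animate() {
    requestAnimationFrame( animate );
    renderer.setRenderTarget( renderTarget );
    renderer.render( rtScene, rtCamera );
    renderer.setRenderTarget( null );
    renderer.render( scene, camera );
}
animate();
On older three.js versions, renderer.render( rtScene, rtCamera, renderTarget ) renders to the target directly.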

Related

Mangled rendering when transforming scene coordinates instead of camera coordinates

I've been learning how to integrate ThreeJS with Mapbox, using this example. It struck me as weird that the approach is to leave the loaded model in its own coordinate system and transform the camera location on render. So I attempted to rewrite the code so that the GLTF model is transformed when loaded, and the ThreeJS camera is then simply synchronised with the Mapbox camera, with no further modifications.
The code now looks like this:
function newScene() {
    const scene = new THREE.Scene();
    // create two three.js lights to illuminate the model
    const directionalLight = new THREE.DirectionalLight(0xffffff);
    directionalLight.position.set(0, -70, 100).normalize();
    scene.add(directionalLight);
    const directionalLight2 = new THREE.DirectionalLight(0xffffff);
    directionalLight2.position.set(0, 70, 100).normalize();
    scene.add(directionalLight2);
    return scene;
}

function newRenderer(map, gl) {
    // use the Mapbox GL JS map canvas for three.js
    const renderer = new THREE.WebGLRenderer({
        canvas: map.getCanvas(),
        context: gl,
        antialias: true
    });
    renderer.autoClear = false;
    return renderer;
}

// create a custom layer for a 3D model per the CustomLayerInterface
export function addModel(modelPath, origin, altitude = 0, orientation = [Math.PI / 2, 0, 0]) {
    const coords = mapboxgl.MercatorCoordinate.fromLngLat(origin, altitude);
    // transformation parameters to position, rotate and scale the 3D model onto the map
    const modelTransform = {
        translateX: coords.x,
        translateY: coords.y,
        translateZ: coords.z,
        rotateX: orientation[0],
        rotateY: orientation[1],
        rotateZ: orientation[2],
        /* Since our 3D model is in real world meters, a scale transform needs to be
         * applied since the CustomLayerInterface expects units in MercatorCoordinates.
         */
        scale: coords.meterInMercatorCoordinateUnits()
    };
    const scaleVector = new THREE.Vector3(modelTransform.scale, -modelTransform.scale, modelTransform.scale);
    return {
        id: "3d-model",
        type: "custom",
        renderingMode: "3d",
        onAdd: function(map, gl) {
            this.map = map;
            this.camera = new THREE.Camera();
            this.scene = newScene();
            this.renderer = newRenderer(map, gl);
            // use the three.js GLTF loader to add the 3D model to the three.js scene
            new THREE.GLTFLoader()
                .load(modelPath, gltf => {
                    gltf.scene.position.fromArray([coords.x, coords.y, coords.z]);
                    gltf.scene.setRotationFromEuler(new THREE.Euler().fromArray(orientation));
                    gltf.scene.scale.copy(scaleVector);
                    this.scene.add(gltf.scene);
                    const bbox = new THREE.Box3().setFromObject(gltf.scene);
                    console.log(bbox);
                    this.scene.add(new THREE.Box3Helper(bbox, 'blue'));
                });
        },
        render: function(gl, matrix) {
            this.camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);
            this.renderer.state.reset();
            this.renderer.render(this.scene, this.camera);
            // this.map.triggerRepaint();
        }
    };
}
It basically works, in that a model is loaded and drawn in the right location in the Mapbox world. However, instead of looking like this [screenshot of the working render], it now looks like this [screenshot]: a mangled mess that jitters around chaotically as the camera moves.
I'm not yet familiar enough with ThreeJS to have any idea what I did wrong.
Here's a side-by-side comparison of the old, functional code on the right vs. the new, broken code on the left.
Further investigation
I suspect the cause may be to do with shrinking all the coordinates down into the [0..1] range of the projected coordinate system and losing mathematical precision. When I scale the model up by 100 times, it renders like this [screenshot]: messy and glitchy, but at least recognisable as something.
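An illustrative check of that suspicion (the numbers are assumptions, not from the question): GPU vertex buffers are 32-bit floats, one metre is roughly 2.5e-8 Mercator units at the equator, and adjacent float32 values near 0.28 are about 3e-8 apart, so sub-metre geometry collapses onto identical values:
// Math.fround() rounds a number to the nearest representable float32,
// mimicking what happens when vertex positions are uploaded to the GPU.
const coord = 0.2831853; // hypothetical Mercator x of a vertex
const tenCm = 2.5e-9;    // ~10 cm in Mercator units (assumed equator scale)
if ( Math.fround( coord + tenCm ) === Math.fround( coord ) ) {
    console.log( 'a 10 cm offset disappears entirely in float32' );
}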

Three.js Object3d child lookAt camera position

I am struggling with an Object3D whose child meshes should look at the camera position.
It works fine if the camera is far away, but not if the camera moves towards the object.
Then, when the camera position is near the object position, the second added plane rotates until the camera looks at the edge of the plane.
And I have no idea why this behavior appears only on the second added plane, and only when the camera is near the object position.
Here is what I have so far.
Create the object:
var obj = new THREE.Object3D();
obj.position.set( x, y, z );
var Uniforms = {
    texturePrimary: { type: "t", value: Texture },
    textureColorGraph: { type: "t", value: ColorGraph },
    time: { type: "f", value: 0 },
    color: { type: "f", value: 0 }
};
obj.Uniforms = Uniforms;
obj.add( makePlane1( 3.2, Uniforms ) );
obj.add( makePlane2( 25, Uniforms ) );
obj.update = function( pos ){
    this.Uniforms.time.value = shaderTiming;
    $.each( this.children, function( i, mesh ){
        if( mesh.name === "plane1" || mesh.name === "plane2" ){
            var vec = mesh.parent.worldToLocal( pos );
            mesh.lookAt( vec );
        }
    });
};
function makePlane1( radius, uniforms ){
    var Geo = new THREE.PlaneGeometry( radius, radius );
    var Material = new THREE.ShaderMaterial({
        uniforms: uniforms,
        vertexShader: shaders[1].vertex,
        fragmentShader: shaders[1].fragment,
        blending: THREE.AdditiveBlending,
        transparent: true
    });
    var plane = new THREE.Mesh( Geo, Material );
    plane.name = "plane1";
    return plane;
}

function makePlane2( radius, uniforms ){
    var Geo = new THREE.PlaneGeometry( radius, radius );
    var Material = new THREE.ShaderMaterial({
        uniforms: uniforms,
        vertexShader: shaders[2].vertex,
        fragmentShader: shaders[2].fragment,
        blending: THREE.AdditiveBlending,
        transparent: true
    });
    var plane = new THREE.Mesh( Geo, Material );
    plane.name = "plane2";
    return plane;
}
I could call this.lookAt( pos ) in obj.update( pos ) to rotate the whole object, but the other meshes should not rotate that way, so that is sadly not an option.
And a simple vertex shader for both planes:
varying vec2 vUv;
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    vUv = uv;
}
And then, in the animation loop, I call:
$.each( scene.children, function( i, obj ){
    if( obj.update !== undefined ) {
        shaderTiming = ( time - startTime ) / 1000;
        obj.update( camera.getWorldPosition() );
    }
});
EDIT: I just noticed that this behavior only occurs if the object's position is not (0,0,0). If it is, it works just like it should at any camera position.
Also, a simple object-to-camera distance calculation is not working properly:
vec1 = this.position;
vec2 = camera.position;
var dist = Math.sqrt( Math.pow( vec1.x - vec2.x, 2 ) + Math.pow( vec1.y - vec2.y, 2 ) + Math.pow( vec1.z - vec2.z, 2 ) );
Thanks for any hints.
Object3D.lookAt() does not support objects with rotated and/or translated parent(s).
three.js r.85
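For illustration, a workaround sketch that is not part of the answer above: worldToLocal() both compensates for the parent's transform and mutates the vector it receives, so passing the same vector to several children hands the second plane an already-converted vector. Cloning the target per child avoids that:
// Sketch: give worldToLocal(), which mutates its argument, a fresh
// world-space vector for every child instead of reusing the same one.
obj.update = function( pos ){
    this.Uniforms.time.value = shaderTiming;
    $.each( this.children, function( i, mesh ){
        if( mesh.name === "plane1" || mesh.name === "plane2" ){
            mesh.lookAt( mesh.parent.worldToLocal( pos.clone() ) );
        }
    });
};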

Three.js custom Shader with Texture

I want to write a custom shader that manipulates my image with three.js.
For that, I want to create a plane with the image as a texture. Afterwards I want to move vertices around to distort the image.
(If that is absolutely the wrong way to do this, please tell me.)
First I have my shaders:
<script type="x-shader/x-vertex" id="vertexshader">
    attribute vec2 a_texCoord;
    varying vec2 v_texCoord;
    void main() {
        // Pass the texcoord to the fragment shader.
        v_texCoord = a_texCoord;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
    uniform sampler2D u_texture;
    varying vec2 v_texCoord;
    void main() {
        vec4 color = texture2D( u_texture, v_texCoord );
        gl_FragColor = color;
    }
</script>
I don't really understand what texture2D is doing, but I found it in other code fragments.
What I want with this sample: just color each fragment (gl_FragColor) with the color from the «underlying» image (= texture).
In my code I have setup a normal three scene with a plane:
// set some camera attributes
var VIEW_ANGLE = 45,
    ASPECT = window.innerWidth / window.innerHeight,
    NEAR = 0.1,
    FAR = 1000;
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.set(0, 0, 15);
var vertShader = document.getElementById('vertexshader').innerHTML;
var fragShader = document.getElementById('fragmentshader').innerHTML;
var texloader = new THREE.TextureLoader();
var texture = texloader.load("img/color.jpeg");
var uniforms = {
    u_texture: { type: 't', value: 0, texture: texture },
};
var attributes = {
    a_texCoord: { type: 'v2', value: new THREE.Vector2() }
};
// create the final material
var shaderMaterial = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertShader,
    fragmentShader: fragShader
});
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild(renderer.domElement);
var plane = {
    width: 5,
    height: 5,
    widthSegments: 10,
    heightSegments: 15
};
var geometry = new THREE.PlaneBufferGeometry(plane.width, plane.height, plane.widthSegments, plane.heightSegments);
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var plane = new THREE.Mesh( geometry, shaderMaterial );
scene.add(plane);
plane.rotation.y += 0.2;
var render = function () {
    requestAnimationFrame(render);
    // plane.rotation.x += 0.1;
    renderer.render(scene, camera);
};
render();
Unfortunately, after running that code I just see a black window, although I know that if I use material (the MeshBasicMaterial) when creating the mesh, I can see it clearly.
So it must be the shaderMaterial or the shaders.
Questions:
Do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
What value does a_texCoord have? Nothing happens if I write:
var attributes = {
    a_texCoord: { type: 'v2', value: new THREE.Vector2(1, 1) }
};
Or do I have to use some mapping (the built-in map stuff from three)? But how would I then change vertex positions?
Could someone shed some light on this matter?
I got it to work by changing this:
var uniforms = {
    u_texture: { type: 't', value: 0, texture: texture },
};
to this:
var uniforms = {
    u_texture: { type: 't', value: texture },
};
Anyway, all the other questions are still open and answers are highly appreciated.
(btw: why the downvote from someone?)
Do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
Yes and yes. The uniforms are defined as part of the shader material, while the attributes have been moved from ShaderMaterial to the BufferGeometry class in version r72 (I'm assuming you are using an up-to-date version, so here is how you do this today):
var geometry = new THREE.PlaneBufferGeometry(...);
// first, create an array to hold the a_texCoord values per vertex
var numVertices = (plane.widthSegments + 1) * (plane.heightSegments + 1);
var texCoordBuffer = new Float32Array(2 * numVertices);
// now register it as a new attribute (the 2 here indicates that there are
// two values per element (vec2))
geometry.addAttribute('a_texCoord', new THREE.BufferAttribute(texCoordBuffer, 2));
As you can see, the attribute will only work if it has the exact same name as the one specified in your shader code.
I don't know exactly what you are planning to use this for, but it sounds suspiciously like you want the uv coordinates. If that is the case, you can save yourself a lot of work by having a look at the THREE.PlaneBufferGeometry class. It already provides an attribute named uv that is probably exactly what you are looking for. So you just need to change the attribute name in your shader code to:
attribute vec2 uv;
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
The vertices are created according to the widthSegments and heightSegments parameters. So if you set both to 5, there will be (5 + 1) * (5 + 1) = 36 vertices (+1 because a line with only one segment has two vertices, etc.) and 5 * 5 * 2 = 50 triangles (with 150 indices) in total.
Another thing to note is that the PlaneBufferGeometry is an indexed geometry. This means that every vertex (and every other attribute value) is stored only once, although it is used by multiple triangles. There is a special index attribute that contains the information about which vertices are used to create which triangles.
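A quick illustrative check of those numbers (assuming a three.js of the same era, where BufferGeometry exposes index and BufferAttribute exposes count):
// 5x5 segments: 36 vertices stored once, 150 indices describing 50 triangles
var geometry = new THREE.PlaneBufferGeometry( 5, 5, 5, 5 );
console.log( geometry.attributes.position.count ); // 36
console.log( geometry.attributes.uv.count );       // 36
console.log( geometry.index.count );               // 150 (= 50 triangles * 3)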
What value does a_texCoord have? Nothing happens if I write: ...
I hope the above helps to answer that.
Or do I have to use some mapping (the built-in map stuff from three)?
I would suggest you use the uv attribute as described above. But you absolutely don't have to.
But how would I then change vertex positions?
There are at least two ways to do this: in the vertex shader or via JavaScript. The latter can be seen here: http://codepen.io/usefulthink/pen/vKzRKr?editors=1010 (the relevant part for updating the geometry starts at line 84).
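For the first way, a minimal sketch of distorting the plane in the vertex shader; the u_time uniform and the wave function are illustrative assumptions, not from the answer. It relies on the position and uv attributes that three.js declares automatically for a (non-raw) ShaderMaterial:
// GLSL vertex shader, held in a JS string, that distorts the plane over time.
var distortVertexShader = `
    uniform float u_time;
    varying vec2 v_texCoord;
    void main() {
        v_texCoord = uv;
        vec3 p = position;
        p.z += sin( uv.x * 10.0 + u_time ) * 0.25; // simple wave distortion
        gl_Position = projectionMatrix * modelViewMatrix * vec4( p, 1.0 );
    }
`;
// per frame: uniforms.u_time.value = clock.getElapsedTime();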

Applying Perlin noise shader just to some objects within a mesh

I would like to modify one object within a group of meshes using Perlin noise. Here is how I am doing it right now...
I am creating the objects and adding them to an Object3D...
spheres = new THREE.Object3D();
for ( var i = 0; i < 40; i++ ) {
    var ball = new THREE.Mesh( new THREE.SphereGeometry( 20, 20, 20 ), material );
    ball.position.x = Math.random() * 600 - 300;
    ball.position.y = Math.random() * 600 - 300;
    ball.position.z = Math.random() * 600 - 300;
    spheres.add( ball );
}
scene.add( spheres );
I am using a shader material...
material = new THREE.ShaderMaterial({
    uniforms: {
        time: {
            type: "f",
            value: 0.0
        },
        noisemax: {
            type: "f",
            value: 100.0
        }
    },
    // note: transparent is a material option, not a uniform
    transparent: true,
    vertexShader: document.getElementById( 'vertexShader' ).textContent,
    fragmentShader: document.getElementById( 'fragmentShader' ).textContent
});
The shader material works on all 40 balls, no problem. What I would like to do is change the shader for, say, spheres.children[0]. Is it possible to change the Perlin noise values (noisemax) for one object, or does it by nature affect the material for all objects using it?
You have a couple of options.
The easy way is to create and use a separate ShaderMaterial for each item that differs; then you can set its uniforms easily, e.g.:
var firstMaterial = new THREE.ShaderMaterial({
    uniforms: { noisemax: { type: 'f', value: 3 } }
});
var secondMaterial = new THREE.ShaderMaterial({
    uniforms: { noisemax: { type: 'f', value: 10 } }
});
var firstBall = new THREE.Mesh( new THREE.SphereGeometry( 20, 20, 20 ), firstMaterial );
var secondBall = new THREE.Mesh( new THREE.SphereGeometry( 20, 20, 20 ), secondMaterial );
// or in your case
spheres.children[0].material = secondMaterial;
Alternatively, and probably preferably for your situation (at least from a performance standpoint), you could change noisemax to an attribute; that way you can have a separate value per object.
You will have to remember that attributes are per vertex, so you'll need to duplicate the value for all vertices belonging to each object. This will complicate things a little bit; see the sketch below.
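A hedged sketch of that approach, assuming the balls use BufferGeometry (e.g. SphereBufferGeometry), the noisemax uniform is removed, and the shader instead declares attribute float noisemax;:
// Give every ball its own per-vertex copy of its noisemax value. Attributes
// are stored per vertex, so the single per-object value is duplicated across
// all vertices of that ball's geometry.
spheres.children.forEach( function ( ball, i ) {
    var count = ball.geometry.attributes.position.count;
    var values = new Float32Array( count ).fill( i === 0 ? 10.0 : 100.0 );
    ball.geometry.addAttribute( 'noisemax', new THREE.BufferAttribute( values, 1 ) );
});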
Edit: To reduce memory usage, you can use THREE.InstancedBufferGeometry with THREE.InstancedBufferAttribute

How to add reflectionMaterial for an object in threejs

How do I add a reflection material with an environment map? I am using two cameras and two scenes in order to achieve it, based on the webgl_materials_cubemap example, and I am using an object that is loaded using OBJMTLLoader. I can see both the environment map and the object in my scene, but the reflection of the environment is not working on the object.
Find my code below:
var urls = [
    'textures/cube/canary/pos-x.png',
    'textures/cube/canary/neg-x.png',
    'textures/cube/canary/pos-y.png',
    'textures/cube/canary/neg-y.png',
    'textures/cube/canary/pos-z.png',
    'textures/cube/canary/neg-z.png'
];
var cubemap = THREE.ImageUtils.loadTextureCube(urls); // load textures
cubemap.format = THREE.RGBFormat;
var shader = THREE.ShaderLib['cube']; // init cube shader from built-in lib
shader.uniforms['tCube'].value = cubemap; // apply textures to shader
// create shader material
var skyBoxMaterial = new THREE.ShaderMaterial({
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    depthWrite: false,
    side: THREE.BackSide
});
skybox = new THREE.Mesh( new THREE.BoxGeometry( 1000, 1000, 1000 ), skyBoxMaterial );
scene.add( skybox );
var object = scene.getObjectByName( "myname", true );
object.traverse( function ( child ) {
    if ( child instanceof THREE.Mesh ) {
        //child.geometry.computeFaceNormals();
        var geometry = child.geometry;
        var reflectionMaterial = new THREE.MeshBasicMaterial({
            color: 0xcccccc,
            envMap: cubemap
        });
        mesh = new THREE.Mesh( geometry, reflectionMaterial );
        sceneCube.add( mesh );
    }
});
Here I just changed scene.add(mesh); to sceneCube.add(mesh); with that, the reflection worked, but the camera doesn't.
You can see the differences here:
Example 1
Example 2
In the first demo you can see that the scene works fine with the environment but without the object reflection.
In the second you can see that the reflection works fine, but the camera behavior gets weird.
I fixed it myself by adding the following lines inside the object traverse:
child.material.map = reflectionMaterial;
child.material.needsUpdate = true;
However, I don't know whether what I am doing is correct, but it works as expected.
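For comparison, a hedged sketch of the more conventional route (an assumption on my part, not the poster's fix): assign the cubemap as the envMap of each child's existing material instead of building a second mesh in another scene:
object.traverse( function ( child ) {
    if ( child instanceof THREE.Mesh ) {
        // reuse the material the loader created and just add the reflection
        child.material.envMap = cubemap;
        child.material.needsUpdate = true;
    }
});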
