Three.js specific render-time uniforms - three.js

I want to implement per-object motion-blur effect based on calculating previous pixel position inside shaders.
The first step of this technique is to build a velocity map of the moving objects. This step requires having, as uniform variables, the projection and model-view matrices of the current frame and the same matrices of the previous frame.
How could I include those matrices as uniforms for some special shader? I expected the solution to look something like:
uniforms = {
    some_uniform_var : { type: "m4", value: initialMatrix, getter: function(){
        // `this` points to the object
        return this.worldMatrix;
    }}
}
But this is not currently available in THREE.js. We could do some sort of monkey patching, but I cannot find the best way to do it.
Any suggestions?

The current solution to this problem consists of several parts. I'm using EffectComposer to make several passes over the rendered scene, one of them being a VelocityPass. It takes the current and previous model-view matrices plus the projection matrix and produces two positions; both are then used to calculate the speed of a point.
The shader looks like this:
"void main() {",
"vec2 a = (pos.xy / pos.w) * 0.5 + 0.5;",
"vec2 b = (prevPos.xy / prevPos.w) * 0.5 + 0.5;",
"vec2 oVelocity = a - b;",
"gl_FragColor = vec4(oVelocity, 0.0, 1.);",
"}"
There are several issues with this approach.
Three.js has a certain point where it injects matrices into object-related shaders: the very end of the setProgram closure, which lives in WebGLRenderer. That's why I took the whole renderer file, renamed the renderer to THREE.MySuperDuperWebGLRenderer and added a couple of lines of code to it:
A closure that calls back into a hook defined in userspace:
function materialPerObjectSetup( material, object ){
    if( material.customUniformUpdate ){
        material.customUniformUpdate( object, material, _gl ); // Yes, I had to pass it...
    }
}
And a call to it in renderBuffer and renderBufferDirect:
var program = setProgram( camera, lights, fog, material, object );
materialPerObjectSetup(material, object);
Now - the userspace part:
velocityMat = new THREE.ShaderMaterial( THREE.VelocityShader );
velocityMat.customUniformUpdate = function( obj, mat, _gl ){
    // console.log("gotcha");
    var new_m = obj.matrixWorld;
    var p_uniforms = mat.program.uniforms;
    var mvMatrix = new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, obj._oldMatrix );
    _gl.uniformMatrix4fv( p_uniforms.prevModelViewMatrix, false, mvMatrix.elements );
    _gl.uniformMatrix4fv( p_uniforms.prevProjectionMatrix, false, camera.projectionMatrix.elements );
    obj._pass_complete = true; // The old matrix must be kept until this pass has been drawn,
                               // because the matrices are updated on every render of the scene.
}
_pass_complete is needed because we re-render the scene several times, and the matrices are recalculated each time. This trick lets us keep the previous matrix until we have used it.
_gl.uniformMatrix4fv is needed because three.js uploads uniforms only once before rendering. No matter how many objects we have, the other approach would pass the modelViewMatrix of only the last one to the shader. This happens because I want to draw the whole scene using the VelocityShader; there is no other way to tell the renderer to use an alternative material for objects.
And as the final point of this explanation, here is a trick to keep track of an object's previous matrix:
// Keep a reference to the original method and default the pass flag.
THREE.Mesh.prototype._updateMatrixWorld = THREE.Object3D.prototype.updateMatrixWorld;
THREE.Mesh.prototype._pass_complete = true;
Object.defineProperty( THREE.Mesh.prototype, "updateMatrixWorld", { get: function(){
    // Snapshot the current world matrix once per pass, before it gets overwritten.
    if( this._pass_complete ){
        this._oldMatrix = this.matrixWorld.clone();
        this._pass_complete = false;
    }
    this._updateMatrixWorld();
    // The renderer calls updateMatrixWorld(), so hand it a no-op to invoke.
    return ( function(){} );
}})
I believe there could be a nicer solution, but sometimes one has to act in a rush, and this kind of monkey business happens.
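For reference, newer three.js releases expose a per-object hook, Object3D.onBeforeRender, which removes the need to patch the renderer for this. A minimal sketch of the same idea using that hook; it assumes THREE.VelocityShader declares prevModelViewMatrix and prevProjectionMatrix as Matrix4 uniforms, and reuses the _oldMatrix bookkeeping from above:
mesh.onBeforeRender = function ( renderer, scene, camera, geometry, material ) {
    // previous model-view matrix = current view matrix * previous world matrix
    var prevMV = new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, this._oldMatrix || this.matrixWorld );
    material.uniforms.prevModelViewMatrix.value.copy( prevMV );
    material.uniforms.prevProjectionMatrix.value.copy( camera.projectionMatrix );
};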

Related

Three.js - repositioning vertices in a 'particle' mesh

I have a basic three.js game working and I'd like to add particles. I've been searching online, including multiple questions here, and the closest I've come to getting a 'particle system' working is using a THREE.BufferGeometry, a THREE.BufferAttribute and a THREE.Points mesh. I set it up like this:
const particleMaterial = new THREE.PointsMaterial( { size: 10, map: particleTexture, blending: THREE.AdditiveBlending, transparent: true } );
const particlesGeometry = new THREE.BufferGeometry();
const particlesCount = 300;
const posArray = new Float32Array(particlesCount * 3);
for (let i = 0; i < particlesCount * 3; i++) { // fill x, y and z for every particle
    posArray[i] = Math.random() * 10;
}
const particleBufferAttribute = new THREE.BufferAttribute(posArray, 3);
particlesGeometry.setAttribute( 'position', particleBufferAttribute );
const particlesMesh = new THREE.Points(particlesGeometry, particleMaterial);
particlesMesh.counter = 0;
scene.add(particlesMesh);
This part works and displays the particles fine, at their initial positions, but of course I'd like to move them.
I have tried all manner of things, in my 'animate' function, but I am not happening upon the right combination. I'd like to move particles, ideally one vertex per frame.
The current thing I'm doing in the animate function - which does not work! - is this:
particleBufferAttribute.setXYZ( particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z );
particlesGeometry.setAttribute( 'position', particleBufferAttribute );
//posArray[particlesMesh.counter] = objects[0].position;
particlesMesh.counter ++;
if (particlesMesh.counter >= particlesCount) { // wrap before the index runs past the last particle
particlesMesh.counter = 0;
}
If anyone has any pointers about how to move Points mesh vertices, that would be great.
Alternatively, if this is not at all the right approach, please let me know.
I did find Stemkoski's ShaderParticleEngine, but I could not find any information about how to make it work (the docs are very minimal and do not seem to include examples).
You don't need to re-set the attribute, but you do need to tell the renderer that the attribute has changed.
particleBufferAttribute.setXYZ( particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z );
particleBufferAttribute.needsUpdate = true; // This is the kicker!
By setting needsUpdate to true, the renderer knows to re-upload that attribute to the GPU.
This might not be a concern for you, but just know that moving particles this way is expensive, because you re-upload the position attribute every single frame, and that includes the position data for every particle you aren't moving.
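Putting it together, a minimal sketch of the animate loop with the counter-based approach from the question (objects[0] stands in for whatever object the particles should trail):
function animate() {
    requestAnimationFrame(animate);

    // Move one particle per frame to the tracked object's position.
    particleBufferAttribute.setXYZ( particlesMesh.counter, objects[0].position.x, objects[0].position.y, objects[0].position.z );
    particleBufferAttribute.needsUpdate = true; // re-upload the attribute to the GPU

    particlesMesh.counter = (particlesMesh.counter + 1) % particlesCount;

    renderer.render(scene, camera);
}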

How can I render the PMREM environment map when copying MeshStandardMaterial into a ShaderMaterial

I am trying to rebuild MeshStandardMaterial using a ShaderMaterial. I'm keeping most of the #include <...> shader chunks, which makes it slightly difficult to set breakpoints.
I'd like to know if there is a straightforward way to render the PMREM cubemap, in this particular material template and have it show up the way it's supposed to.
I'm roughly using:
material.defines.USE_ENVMAP = ''
material.defines.ENVMAP_MODE_REFLECTION = ''
material.defines.ENVMAP_TYPE_CUBE_UV = ''
material.defines.ENVMAP_BLENDING_MULTIPLY = ''
material.defines.TEXTURE_LOD_EXT = ''
material.defines.USE_UV = ''
material.extensions.derivatives = true
material.extensions.shaderTextureLOD = true
Which, as far as I can tell, are all of the defines that appear when adding a texture to material.envMap. The shader compiles, the PMREM texture is generated, and it can be read in the shader (gl_FragColor = vec4( texture2D( envmap, vUv ).xyz, 1.) works, for example). These are the uniforms I cloned:
{
envmap: UniformsUtils.clone(UniformsLib.envmap),
fog: UniformsUtils.clone(UniformsLib.fog),
lights: UniformsUtils.clone(UniformsLib.lights),
displacementmap: UniformsUtils.clone(UniformsLib.displacementmap)
}
The maxMipLevel uniform seems to have a value of 0 when MeshStandardMaterial is used; I'm not sure what else is being used.
I get absolutely no effect from placing a texture in material.uniforms.envmap.value and using these defines. If I turn off the light in the scene, my object renders as black, with no reflections.
This doesn't seem like it requires that many inputs, but I get 0. out of it:
radiance += getLightProbeIndirectRadiance( /*specularLightProbe,*/ geometry.viewDir, geometry.normal, material.specularRoughness, maxMipLevel );
In my case it was a missing uniform:
https://github.com/mrdoob/three.js/blob/dev/src/renderers/shaders/ShaderLib.js#L99
envMapIntensity: { value: 1 } // temporary
It's not part of UniformsLib.envmap.
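So, besides the cloned UniformsLib groups from the question, that uniform has to be added by hand. A sketch of one way to assemble the uniforms, using UniformsUtils.merge (which clones as it merges):
const uniforms = UniformsUtils.merge( [
    UniformsLib.envmap,
    UniformsLib.fog,
    UniformsLib.lights,
    UniformsLib.displacementmap,
    { envMapIntensity: { value: 1 } } // the piece missing from UniformsLib.envmap
] );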

Is it possible to let Fog interact with the material's opacity?

I am working on a project that displays buildings. The requirement is to let the buildings gradually fade out (become transparent) based on the distance between the camera and the buildings. Also, this effect has to follow the camera's movement.
I considered using THREE.Fog(), but the fog seems to only change the material's color.
Above is a picture of the building with white fog.
The buildings are in tiles; each tile is one single geometry (I merged all the buildings into one) using
var bigGeometry = new THREE.Geometry();
bigGeometry.merge(smallGeometry);
The purple/blue thing is the ground, and ground.material.fog = false; so the ground won't interact with the fog.
My question is:
Is it possible to let the fog affect the building material's opacity instead of its color? (more white translates to more transparent)
Or should I use a shader to control the material's opacity based on the distance to the camera? I have no idea how to do this.
I also considered adding an alphaMap. If so, each building tile would have to map an alphaMap, and all of these alphaMaps would have to react to the camera's movement. That's going to be a ton of work.
So any suggestions?
Best Regards,
Arthur
NOTE: I suspect there are probably easier/prettier ways to solve this than opacity. In particular, note that partially-opaque buildings will show other buildings behind them. To address that, consider using a gradient or some other scene background, and choosing a fog color to match that, rather than using opacity. But for the sake of trying it...
Here's how to alter an object's opacity based on its distance. This doesn't actually require THREE.Fog; I'm not sure how you would use the fog data directly. Instead I'll use THREE.NodeMaterial, which (as of three.js r96) is fairly experimental. The alternative would be to write a custom shader with THREE.ShaderMaterial, which is also fine.
const material = new THREE.StandardNodeMaterial();
material.transparent = true;
material.color = new THREE.ColorNode( 0xeeeeee );

// Calculate alpha of each fragment roughly as:
//   alpha = 1.0 - saturate( distance / cutoff )
//
// Technically this is distance from the origin, for the demo, but
// distance from a custom THREE.Vector3Node would work just as well.
const distance = new THREE.Math2Node(
    new THREE.PositionNode( THREE.PositionNode.WORLD ),
    new THREE.PositionNode( THREE.PositionNode.WORLD ),
    THREE.Math2Node.DOT
);
const normalizedDistance = new THREE.Math1Node(
    new THREE.OperatorNode(
        distance,
        new THREE.FloatNode( 50 * 50 ),
        THREE.OperatorNode.DIV
    ),
    THREE.Math1Node.SAT
);
material.alpha = new THREE.OperatorNode(
    new THREE.FloatNode( 1.0 ),
    normalizedDistance,
    THREE.OperatorNode.SUB
);
Demo: https://jsfiddle.net/donmccurdy/1L4s9e0c/
I am the OP. After spending some time reading how to use three.js's ShaderMaterial, I got some code that works as desired.
Here's the code: https://jsfiddle.net/yingcai/4dxnysvq/
The basic idea is:
Create a uniform that contains controls.target (a Vector3 position).
Pass the vertex position attribute to a varying in the vertex shader, so that the fragment shader can access it.
Get the distance between each vertex position and controls.target, and calculate an alpha value based on that distance (see the shader sketch after the uniforms code below).
Assign the alpha value to the output color.
Another important thing: because the fade-out mask should follow the camera's movement, don't forget to update the control value in the uniforms every frame.
// Create uniforms that contain the control position value.
uniforms = {
    texture: {
        value: new THREE.TextureLoader().load("https://threejs.org/examples/textures/water.jpg")
    },
    control: {
        value: controls.target
    }
};
// In the render() method,
// update the uniforms value every frame.
uniforms.control.value = controls.target;
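A sketch of the corresponding shaders, following the steps above (the cutoff distance of 50.0 is an illustrative value, not taken from the original fiddle):
const vertexShader = `
    varying vec2 vUv;
    varying vec3 vWorldPosition;
    void main() {
        vUv = uv;
        // pass the world-space vertex position to the fragment shader
        vWorldPosition = ( modelMatrix * vec4( position, 1.0 ) ).xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
`;
const fragmentShader = `
    uniform sampler2D texture;
    uniform vec3 control;          // controls.target, updated every frame
    varying vec2 vUv;
    varying vec3 vWorldPosition;
    void main() {
        float dist = distance( vWorldPosition, control );
        float alpha = 1.0 - clamp( dist / 50.0, 0.0, 1.0 ); // fade out with distance
        gl_FragColor = vec4( texture2D( texture, vUv ).rgb, alpha );
    }
`;
const material = new THREE.ShaderMaterial({ uniforms, vertexShader, fragmentShader, transparent: true });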
I had the same issue a few years later and solved it with the .onBeforeCompile function, which is maybe more convenient to use.
There is a great tutorial here
The code itself is simple and could easily be changed for other materials. It just uses the fogFactor as the alpha value in the material.
Here is the material function:
alphaFog() {
    const material = new THREE.MeshPhysicalMaterial();
    material.onBeforeCompile = function (shader) {
        const alphaFog =
            `
            #ifdef USE_FOG
                #ifdef FOG_EXP2
                    float fogFactor = 1.0 - exp( - fogDensity * fogDensity * vFogDepth * vFogDepth );
                #else
                    float fogFactor = smoothstep( fogNear, fogFar, vFogDepth );
                #endif
                gl_FragColor.a = saturate(1.0 - fogFactor);
            #endif
            `
        shader.fragmentShader = shader.fragmentShader.replace(
            '#include <fog_fragment>', alphaFog
        );
        material.userData.shader = shader;
    };
    material.transparent = true;
    return material;
}
and afterwards you can use it like
const cube = new THREE.Mesh(geometry, this.alphaFog());
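Note that the USE_FOG branch above is only compiled in when the scene actually has fog (and the material has fog enabled, which is the default), so something along these lines is assumed alongside it:
// Scene fog is what defines USE_FOG / fogNear / fogFar in the shader.
scene.fog = new THREE.Fog(0xffffff, 10, 100); // color, near, far (illustrative values)
scene.add(cube);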

THREE.js Reusing geometry does not seem to work efficiently

I am loading several models into the scene using the same geometry, like so (pseudo code):
var geoCache = [];

function parseJSONGeometry(json_geo){
  // this code is the three.js model parser from the JSONLoader
  return geometry;
}

function loadCachedGeo(data){
  if( !geoCache[data.id] ){
    geoCache[data.id] = parseJSONGeometry(data);
  }
  return geoCache[data.id];
}

function loadObjects(json){
  var mats = [];
  combined = new THREE.Geometry();
  for(i = 0; i < json.geometries.length; i++){
    data = json.geometries[i];
    geo = loadCachedGeo(data);
    mats.push(new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture(data.src) }));
    mesh = new THREE.Mesh(geo);
    mesh.position.set(data.x, data.y, data.z);
    combined = THREE.GeometryUtils.mergeGeometry(combined, mesh);
  }
  mesh = new THREE.Mesh(combined, new THREE.MeshFaceMaterial(mats));
  scene.add(mesh);
}
I also cache the textures, however I omitted that for the sake of simplicity.
When I call:
renderer.info.render.faces
renderer.info.memory.textures
renderer.info.memory.programs
renderer.info.memory.geometries;
renderer.info.render.calls
I notice when one object is on the screen the poly count is say 1000, textures: 1, calls: 1, shaders: 1 and geometries: 1. When two objects are on the screen 2000 faces are reported, 1 texture, 1 shader, 2 calls, and 2 geometries.
I thought that reusing geometry in this fashion meant the geometry is only loaded onto the GPU once. Am I missing something? Can someone PLEASE explain this behavior?
Three.js r59
You need to inspect
renderer.info.memory.geometries
There is also
renderer.info.memory.textures
renderer.info.memory.programs
three.js r.59
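For comparison, here is a minimal sketch of what geometry reuse without merging looks like: two meshes sharing the same geometry instance should be counted once in renderer.info.memory.geometries, whereas merging into a new THREE.Geometry, as in the question, produces a fresh geometry that is uploaded separately each time:
var sharedGeo = new THREE.SphereGeometry(10, 16, 16);
var matA = new THREE.MeshBasicMaterial({ color: 0xff0000 });
var matB = new THREE.MeshBasicMaterial({ color: 0x00ff00 });

var meshA = new THREE.Mesh(sharedGeo, matA);
var meshB = new THREE.Mesh(sharedGeo, matB); // same geometry object, different material
meshB.position.x = 30;

scene.add(meshA);
scene.add(meshB);

// After rendering: renderer.info.memory.geometries reports 1,
// but there are still 2 draw calls (one per mesh).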

How to make reflective materials change when camera rotates

I was able to make some nice metal and glass looking materials by using Skybox Cube / environment mapping.
I have made my own controls which allow one to both orbit and move/look around like in FirstPersonControls.
The problem is, the reflections look convincing when I move around: I can see the reflections move and change according to my camera movement. However, when I look around (rotate the camera / change its target), there is no change in the reflections; they are just static.
I can see the same behaviour in for example three.js/examples/webgl_materials_cubemap_escher.html - if I modify it to use FirstPersonControls, the material does not look reflective/refractive at all when I look around.
Here's how I set up the cubemaps; to be honest, it's copied from some example and I don't understand all of it. But it works, except for this one issue...
createSkyBox = function(urlPrefix) {
  var sceneCube = new THREE.Scene();
  var path = urlPrefix;
  var format = '.jpg';
  var urls = [
    path + 'px' + format, path + 'nx' + format,
    path + 'py' + format, path + 'ny' + format,
    path + 'pz' + format, path + 'nz' + format
  ];
  var reflectionCube = THREE.ImageUtils.loadTextureCube( urls );
  reflectionCube.format = THREE.RGBFormat;
  var refractionCube = new THREE.Texture( reflectionCube.image, new THREE.CubeRefractionMapping() );
  refractionCube.format = THREE.RGBFormat;

  // Skybox
  var shader = THREE.ShaderUtils.lib[ "cube" ];
  shader.uniforms[ "tCube" ].value = reflectionCube;
  var material = new THREE.ShaderMaterial( {
    fragmentShader: shader.fragmentShader,
    vertexShader: shader.vertexShader,
    uniforms: shader.uniforms,
    depthWrite: false,
    side: THREE.BackSide
  } );
  var size = 8000;
  mesh = new THREE.Mesh( new THREE.CubeGeometry( size, size, size ), material );
  mesh.geometry.computeBoundingBox();
  sceneCube.add( mesh );

  this._threejs_cube_scene = sceneCube;
  this._threejs_cube_mesh = mesh;
  this._threejs_envmap = reflectionCube;
  this._threejs_envmap_refraction = refractionCube;
  this._threejs_scene.add( sceneCube );
}
And here's the way I create the material:
var material = new THREE.MeshLambertMaterial( { color: 0xff00, ambient: 0xaaaaaa, envMap: this._threejs_envmap});
I then use the material in renderer.overrideMaterial (I'm using EffectComposer, if it makes any difference)
EDIT: now that I think about it, I'm not sure... my brain melts... it might just be how real life works :) At least intuitively, when I see the code in action, the static reflections while rotating the camera don't feel right. But maybe that's because in real life it's hard to look around (eye.lookAt()) without also moving ever so slightly (eye.position = xyz).
You should calculate the reflection vector in world space (inside your code for 'fragmentShader', which you don't show here). If it's in object space or view (camera) space, it won't move naturally.
Yes, this may mean some finagling with the surface normals. To convert object-space normals to world-space normals, use the inverse transpose of the world matrix. You'll also need the view vector in world-space coordinates in order to calculate the final world-space reflection vector.
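A minimal sketch of that idea as a ShaderMaterial. This is not the asker's shader; it assumes uniform scaling, so mat3( modelMatrix ) is good enough for transforming normals to world space (with non-uniform scale you would pass a proper world-space normal matrix as a custom uniform):
var worldReflectMaterial = new THREE.ShaderMaterial({
    uniforms: { tCube: { type: 't', value: reflectionCube } }, // the cube texture from createSkyBox
    vertexShader: [
        'varying vec3 vWorldNormal;',
        'varying vec3 vViewDir;',
        'void main() {',
        '    vec4 worldPosition = modelMatrix * vec4( position, 1.0 );',
        '    // world-space normal (assumes uniform scale)',
        '    vWorldNormal = normalize( mat3( modelMatrix ) * normal );',
        '    // world-space view vector, from the camera to the surface point',
        '    vViewDir = worldPosition.xyz - cameraPosition;',
        '    gl_Position = projectionMatrix * viewMatrix * worldPosition;',
        '}'
    ].join('\n'),
    fragmentShader: [
        'uniform samplerCube tCube;',
        'varying vec3 vWorldNormal;',
        'varying vec3 vViewDir;',
        'void main() {',
        '    vec3 r = reflect( normalize( vViewDir ), normalize( vWorldNormal ) );',
        '    gl_FragColor = textureCube( tCube, r );',
        '}'
    ].join('\n')
});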
Another thing to consider, which is simpler than changing the shader, is giving your camera an offset if you want it to rotate like a human head: add it to an Object3D, offset it from the Object3D's position by a small amount (roughly the distance from the center of the head to the eye), then rotate the Object3D instead of the camera (see the sketch below).
It's sort of hard to tell from your description what effect you want, though, because when you simply turn your eyeballs, a reflection doesn't change; it's the slight tilt of your head that changes it.
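A sketch of that pivot setup; the 0.1 offset is an illustrative eye-to-head-center distance, and yawDelta / pitchDelta stand in for whatever your controls compute:
var head = new THREE.Object3D();
camera.position.set( 0, 0, 0.1 ); // offset the camera slightly from the pivot, like an eye in a head
head.add( camera );
scene.add( head );

// In the controls / animate loop, rotate the pivot instead of the camera:
head.rotation.y -= yawDelta;
head.rotation.x -= pitchDelta;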
