Three js texture offset not updating - three.js

I am currently trying to create a mesh that is colored using a DataTexture. My initial coloring shows up just fine, but my next goal is to offset the texture along the y axis, very similar to this example:
http://math.hws.edu/graphicsbook/demos/c5/textures.html
How I create my texture / mesh:
this.colorTexture = new DataTexture(colors, this.frameWidth, frameCount, RGBFormat, FloatType, UVMapping, RepeatWrapping, RepeatWrapping);
const material = new MeshBasicMaterial({
    side: FrontSide,
    vertexColors: true,
    wireframe: false,
    map: this.colorTexture
});
this.mesh = new Mesh(geometry, material);
How I attempt to animate the texture using offset:
this.mesh.material.map.offset.y -= 0.001;
this.mesh.material.map.needsUpdate = true;
this.mesh.material.needsUpdate = true;
this.mesh.needsUpdate = true;
I have confirmed that the function performing the offset is called on every animation frame; however, the visualization does not animate or change beyond the initial positioning of the colors I wrote to the texture.
Any help is greatly appreciated :)

The uv transformation matrix of a texture is updated automatically as long as Texture.matrixAutoUpdate is set to true (which is also the default value). You can simply modulate Texture.offset. There is no need to set any needsUpdate flags (Mesh.needsUpdate does not exist anyway).
It's best if you strictly stick to the code from the webgl_materials_texture_rotation example. If this code does not work, please demonstrate the issue with a live example.
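For completeness, here is a minimal sketch of the intended usage (renderer, scene, camera, and a colorTexture variable holding the DataTexture from the question are assumed to exist); no needsUpdate flags are involved:
function animate() {
    requestAnimationFrame(animate);
    // Shift the texture along v each frame; the uv transform is rebuilt
    // automatically while texture.matrixAutoUpdate is true (the default).
    colorTexture.offset.y -= 0.001;
    renderer.render(scene, camera);
}
animate();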

Related

Is it possible to let Fog interact with the material's opacity?

I am working on a project that displays buildings. The requirement is to let the buildings gradually fade out (become transparent) based on the distance between the camera and the buildings. Also, this effect has to follow the camera's movement.
I considered using THREE.Fog(), but the fog seems to only be able to change the material's color.
Above is a picture of the building with white fog.
The buildings are in tiles; each tile is a single geometry (I merged all the buildings into one) using:
var bigGeometry = new THREE.Geometry();
bigGeometry.merge(smallGeometry);
The purple/blue thing is the ground, which has ground.material.fog = false; so the ground won't interact with the fog.
My question is:
Is it possible to let the fog interact with the building material's opacity instead of its color? (More white translates to more transparent.)
Or should I use a shader to control the material's opacity based on the distance to the camera? I have no idea how to do this.
I also considered adding an alphaMap. If so, each building tile would have to map an alphaMap, and all these alphaMaps would have to react to the camera's movement. That's going to be a ton of work.
So any suggestions?
Best Regards,
Arthur
NOTE: I suspect there are probably easier/prettier ways to solve this than opacity. In particular, note that partially-opaque buildings will show other buildings behind them. To address that, consider using a gradient or some other scene background, and choosing a fog color to match that, rather than using opacity. But for the sake of trying it...
Here's how to alter an object's opacity based on its distance. This doesn't actually require THREE.Fog; I'm not sure how you would use the fog data directly. Instead I'll use THREE.NodeMaterial, which (as of three.js r96) is fairly experimental. The alternative would be to write a custom shader with THREE.ShaderMaterial, which is also fine.
const material = new THREE.StandardNodeMaterial();
material.transparent = true;
material.color = new THREE.ColorNode( 0xeeeeee );

// Calculate alpha of each fragment roughly as:
// alpha = 1.0 - saturate( distance / cutoff )
//
// Technically this is distance from the origin, for the demo, but
// distance from a custom THREE.Vector3Node would work just as well.
const distance = new THREE.Math2Node(
    new THREE.PositionNode( THREE.PositionNode.WORLD ),
    new THREE.PositionNode( THREE.PositionNode.WORLD ),
    THREE.Math2Node.DOT
);
const normalizedDistance = new THREE.Math1Node(
    new THREE.OperatorNode(
        distance,
        new THREE.FloatNode( 50 * 50 ),
        THREE.OperatorNode.DIV
    ),
    THREE.Math1Node.SAT
);
material.alpha = new THREE.OperatorNode(
    new THREE.FloatNode( 1.0 ),
    normalizedDistance,
    THREE.OperatorNode.SUB
);
Demo: https://jsfiddle.net/donmccurdy/1L4s9e0c/
I am the OP. After spending some time reading up on how to use three.js's ShaderMaterial, I got some code that works as desired.
Here's the code: https://jsfiddle.net/yingcai/4dxnysvq/
The basic idea is:
Create a uniform that contains controls.target (a Vector3 position).
Pass the vertex position attribute to a varying in the vertex shader, so that the fragment shader can access it.
Get the distance between each vertex position and controls.target, and calculate an alpha value based on that distance.
Assign the alpha value to the vertex color.
Another important thing: because the fade-out mask should follow the camera's movement, don't forget to update the control uniform every frame.
// Create uniforms that contain the control position value.
uniforms = {
    texture: {
        value: new THREE.TextureLoader().load("https://threejs.org/examples/textures/water.jpg")
    },
    control: {
        value: controls.target
    }
};
// In the render() method.
// Update the uniforms value every frame.
uniforms.control.value = controls.target;
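A condensed sketch of the shaders described above (the uniform name, the fade cutoff of 50 units, and the base color are illustrative choices, not necessarily what the fiddle uses):
const vertexShader = `
    varying vec3 vWorldPosition;
    void main() {
        vWorldPosition = (modelMatrix * vec4(position, 1.0)).xyz;
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
`;
const fragmentShader = `
    uniform vec3 control;        // controls.target, updated every frame
    varying vec3 vWorldPosition;
    void main() {
        float dist = distance(vWorldPosition, control);
        float alpha = 1.0 - clamp(dist / 50.0, 0.0, 1.0);
        gl_FragColor = vec4(vec3(0.93), alpha);
    }
`;
const material = new THREE.ShaderMaterial({
    uniforms: { control: { value: controls.target } },
    vertexShader: vertexShader,
    fragmentShader: fragmentShader,
    transparent: true
});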
I had the same issue - a few years later - and solved it with the .onBeforeCompile function, which is maybe more convenient to use.
There is a great tutorial here
The code itself is simple and could easily be changed for other materials. It just uses the fogFactor as the alpha value in the material.
Here is the material function:
alphaFog() {
    const material = new THREE.MeshPhysicalMaterial();
    material.onBeforeCompile = function (shader) {
        const alphaFog =
            `
            #ifdef USE_FOG
                #ifdef FOG_EXP2
                    float fogFactor = 1.0 - exp( - fogDensity * fogDensity * vFogDepth * vFogDepth );
                #else
                    float fogFactor = smoothstep( fogNear, fogFar, vFogDepth );
                #endif
                gl_FragColor.a = saturate(1.0 - fogFactor);
            #endif
            `;
        shader.fragmentShader = shader.fragmentShader.replace(
            '#include <fog_fragment>', alphaFog
        );
        material.userData.shader = shader;
    };
    material.transparent = true;
    return material;
}
and afterwards you can use it like
const cube = new THREE.Mesh(geometry, this.alphaFog());

Difficulty in sprite texture alignment

I have some code similar to the following...
this.texture = new THREE.ImageUtils.loadTexture( 'spritesheet.png' );
this.material = new THREE.MeshBasicMaterial( { map: this.texture, side:THREE.DoubleSide } );
this.geometry = new THREE.PlaneGeometry(32, 32, 1, 1);
this.sprite = new THREE.Mesh( this.geometry, this.material );
game.scene.add( this.sprite );
I've also tried along the lines of...
this.material = new THREE.SpriteMaterial( {
    map: image,
    useScreenCoordinates: true,
    alignment: THREE.SpriteAlignment.center
} );
this.sprite = new THREE.Sprite( this.material );
These display the full spritesheet (sort of), as I would expect without further settings.
How do I align the sprite so it only displays, say, 32x32px starting at offset 50,60 for example? The three.js documentation doesn't seem to have much information, and the examples I've seen tend to use one image per sprite (which may be preferable, or the only way possible?)
Edit: I've spotted a material uvOffset and uvScale that I suspect are related to alignment in a Sprite object, if anyone knows how these work. Will dig further.
Well, there are "uvOffset" and "uvScale" parameters in SpriteMaterial; I think you could use those, but I cannot present any source code to you.
What you can of course do is use PlaneGeometry and calculate UV coordinates for the two triangles (the plane). The top-left is your offset, and the bottom-right is calculated from the given offset and size (32x32), dividing by the whole image size in pixels to get UV coordinates between 0 and 1.
For example, the top-left is (50/imageSize, 60/imageSize) and the bottom-right is ((50+32)/imageSize, (60+32)/imageSize). I think this should work, although I am not quite sure you would get the result you want, as OpenGL treats images "upside down". But you can try and go on from here. Hope this helps.
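For illustration, the offset/scale idea from the question's edit can also be expressed with the texture's repeat and offset properties. This is a sketch assuming a hypothetical 256x256 spritesheet (adjust the sheet dimensions to your image); the y offset is flipped because UV (0,0) is the bottom-left corner:
var sheetWidth = 256, sheetHeight = 256;   // assumed spritesheet size
var frameWidth = 32, frameHeight = 32;
var frameX = 50, frameY = 60;              // pixel offset from the top-left

// Show only the 32x32 region of the spritesheet on the plane.
this.texture.repeat.set(frameWidth / sheetWidth, frameHeight / sheetHeight);
this.texture.offset.set(
    frameX / sheetWidth,
    1 - (frameY + frameHeight) / sheetHeight
);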

Make environment map scale when moving away from the object

I use CubeCamera to build a simple reflection model. The setup can be seen in the picture below.
If the camera is close enough to the cube, the reflection looks fine. However, if I move away from the objects, the reflection just gets bigger. See the picture below.
This is not the way I want it; I'd like the reflection to get proportionally smaller. I tried playing with different settings, then I thought this could be achieved with a proper shader program (just squish the cube texture, kind of), so I've tried to mess with the existing Phong shader, but no luck there; I'm too new to this.
Also, I've noticed that if I change the width and height of the cubeCamera.renderTarget, i.e.
cubeCamera.renderTarget.width = cubeCamera.renderTarget.height = 150;
I can get the proper dimensions of the reflection, but its position on the surface is wrong. It's visible from the angle presented in the picture below, but not visible if I place the camera straight on. It looks like the texture needs to be centered.
The actual code is pretty straightforward:
var cubeCamera = new THREE.CubeCamera(1, 520, 512);
cubeCamera.position.set(0, 1, 0);
cubeCamera.renderTarget.format = THREE.RGBAFormat;
scene.add(cubeCamera);
var reflectorObj = new THREE.Mesh(
    new THREE.CubeGeometry(20, 20, 20),
    new THREE.MeshPhongMaterial({
        envMap: cubeCamera.renderTarget,
        reflectivity: 0.3
    })
);
reflectorObj.position.set(0, 0, 0);
scene.add(reflectorObj);
var reflectionObj = new THREE.Mesh(
    new THREE.SphereGeometry(5),
    new THREE.MeshBasicMaterial({
        color: 0x00ff00
    })
);
reflectionObj.position.set(0, -5, 20);
scene.add(reflectionObj);
function animate () {
    reflectorObj.visible = false;
    cubeCamera.updateCubeMap(renderer, scene);
    reflectorObj.visible = true;
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
Appreciate any help!
Environment mapping in three.js is based on the assumption that the object being reflected is "infinitely" far away from the reflective surface.
The reflected ray used in the environment map look-up does not emanate from the surface of the reflective material, but from the CubeCamera's center. This approximation is OK, as long as the reflected object is sufficiently far away. In your case it is not.
You can read more about this topic in this tutorial.
three.js r.58

Super sample antialiasing with threejs

I want to render my scene at twice the resolution of my canvas and then downscale it before displaying it. How would I do that using threejs?
For me, this is the best way to get perfect AA without too much work (see the code below).
PS: if you increase the ratio beyond 2, it starts to look too sharp.
renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setPixelRatio( window.devicePixelRatio * 1.5 );
This is my solution. The source comments should explain what's going on. Setup (init):
var renderer;
var composer;
var renderModel;
var effectCopy;
renderer = new THREE.WebGLRenderer({canvas: canvas});
// Disable autoclear, we do this manually in our animation loop.
renderer.autoClear = false;
// Double resolution (twice the size of the canvas)
var sampleRatio = 2;
// This render pass will render the big result.
renderModel = new THREE.RenderPass(scene, camera);
// Shader to copy result from renderModel to the canvas
effectCopy = new THREE.ShaderPass(THREE.CopyShader);
effectCopy.renderToScreen = true;
// The composer will compose a result for the actual drawing canvas.
composer = new THREE.EffectComposer(renderer);
composer.setSize(canvasWidth * sampleRatio, canvasHeight * sampleRatio);
// Add passes to the composer.
composer.addPass(renderModel);
composer.addPass(effectCopy);
Change your animation loop to:
// Manually clear you canvas.
renderer.clear();
// Tell the composer to produce an image for us. It will provide our renderer with the result.
composer.render();
Note: EffectComposer, RenderPass, ShaderPass and CopyShader are not part of the default three.js file. You have to include them in addition to three.js. At the time of writing they can be found in the threejs project under the examples folder:
/examples/js/postprocessing/EffectComposer.js
/examples/js/postprocessing/RenderPass.js
/examples/js/postprocessing/ShaderPass.js
/examples/js/shaders/CopyShader.js
Here's how you might be able to work it out: In your three.js initialization code, when you create your renderer, make it double the dimensions of your primary canvas, and set it to render to a secondary, hidden canvas element that is twice as large as your primary canvas. Perform the necessary image manipulation on the secondary canvas, and then display the result on the primary canvas.
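A rough sketch of that idea (the element id and the 2x factor are assumptions):
// Render at 2x into a hidden canvas, then draw it scaled down onto the
// visible canvas. 'display' is an assumed id for the visible canvas.
var displayCanvas = document.getElementById('display');
var hiddenCanvas = document.createElement('canvas');
hiddenCanvas.width = displayCanvas.width * 2;
hiddenCanvas.height = displayCanvas.height * 2;

var renderer = new THREE.WebGLRenderer({ canvas: hiddenCanvas });
renderer.setSize(hiddenCanvas.width, hiddenCanvas.height, false);

var ctx = displayCanvas.getContext('2d');

function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
    // drawImage filters the 2x image down to the display size.
    ctx.drawImage(hiddenCanvas, 0, 0, displayCanvas.width, displayCanvas.height);
}
animate();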

Three.js custom objLoader geometry lighting

I have this object which I'm loading with THREE.objLoader, and then I create a mesh with it like so:
mesh = new THREE.SceneUtils.createMultiMaterialObject(
    geometry,
    [
        new THREE.MeshBasicMaterial({color: 0xFEC1EA}),
        new THREE.MeshBasicMaterial({
            color: 0x999999,
            wireframe: true,
            transparent: true,
            opacity: 0.85
        })
    ]
);
In my scene I then add a DirectionalLight. It works and I can see my object; however, it's as if the DirectionalLight were an ambient one: no face is getting darker or lighter as it should be.
The object is filled with the color, but no lighting is applied to it.
If someone can help me with that, it would be much appreciated :)
What could I be missing?
Jsfiddle here: http://jsfiddle.net/5hcDs/
OK folks, thanks to Maël Nison and mrdoob I was able to understand the few things I was missing, being the total 3D noob that I am... I believe people starting to get into 3D may find a little recap useful:
Basic 3d concepts
A 3D face is made of some points (vertices) and a vector called a normal, indicating the direction of the face (which side is the front and which is the back).
Not having normals can be really bad, because by default lighting is applied to the front side only. Hence the black model when trying to apply a LambertMaterial or PhongMaterial.
An OBJ file is a way to describe 3D information. Want more info on this? Read the Wikipedia article (en). Also, the French page provides a cube example which can be useful for testing.
Three.js tips and tricks
When normals are not present, the lighting can't be applied, hence the black render of the model. Three.js can actually compute vertex and face normals with geometry.computeVertexNormals() and/or geometry.computeFaceNormals(), depending on what's missing.
When you do so, there's a chance three.js's normal calculation will be wrong and your normals will end up flipped; to fix this, you can simply loop through your geometry's faces array like so:
/* Compute normals */
geometry.computeFaceNormals();
geometry.computeVertexNormals();

/* The next 3 lines seem not to be mandatory */
mesh.geometry.dynamic = true;
mesh.geometry.__dirtyVertices = true;
mesh.geometry.__dirtyNormals = true;

mesh.flipSided = true;
mesh.doubleSided = true;

/* Flip normals */
for (var i = 0; i < mesh.geometry.faces.length; i++) {
    mesh.geometry.faces[i].normal.x = -1 * mesh.geometry.faces[i].normal.x;
    mesh.geometry.faces[i].normal.y = -1 * mesh.geometry.faces[i].normal.y;
    mesh.geometry.faces[i].normal.z = -1 * mesh.geometry.faces[i].normal.z;
}
You have to use a MeshPhongMaterial; MeshBasicMaterial does not take light into account when computing the fragment color.
However, when using a MeshPhongMaterial, your mesh becomes black. I've never used the OBJ loader, but are you sure your model's normals are right?
Btw: you probably want to use a PointLight instead, and its position should probably be set to the camera position (light.position = camera.position should do the trick, as it allows the light to move whenever the camera position is edited by the controls).
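For reference, a minimal sketch of those two changes applied to the question's snippet (the light color is an arbitrary choice):
mesh = new THREE.SceneUtils.createMultiMaterialObject(
    geometry,
    [
        new THREE.MeshPhongMaterial({ color: 0xFEC1EA }),   // reacts to lights
        new THREE.MeshBasicMaterial({
            color: 0x999999,
            wireframe: true,
            transparent: true,
            opacity: 0.85
        })
    ]
);

var light = new THREE.PointLight(0xffffff);
// Sharing the camera's position vector keeps the light at the camera,
// as suggested above (works with the API revision used in this question).
light.position = camera.position;
scene.add(light);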
