How to apply 2-pass postprocessing without using EffectComposer? - three.js

I need to post-process a scene that I previously rendered to textureA (as a render target) with my custom shader and save the result to textureB (input: textureA, output: textureB). Therefore, I don't need a scene and a camera. I think this is too simple to bother with three.js classes like EffectComposer, ShaderPass, CopyShader, TexturePass, etc.
So, how do I set up this compute-like post-processing in a simple way?

I've created a fiddle for you that shows a basic post-processing effect without EffectComposer. The idea behind this code is to work with an instance of WebGLRenderTarget.
First, you draw the scene into this render target. In the next step, you use the render target as a texture for a plane that is rendered with an orthographic camera. The code in the render loop looks like this:
renderer.clear();
renderer.render( scene, camera, renderTarget ); // pass 1: draw the scene into the render target
renderer.render( sceneFX, cameraFX );           // pass 2: draw the post-processing quad to the screen
The corresponding material of this plane uses your custom shader; I've used the luminosity shader from the official repo.
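As a minimal sketch of that setup (not the exact fiddle code; it assumes the R91-era API where the render target is passed as the third argument to renderer.render(), that renderer, scene, and camera already exist, and that examples/js/shaders/LuminosityShader.js is loaded):

// Render target that receives the first pass.
var renderTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );

// Full-screen quad in its own scene, drawn with an orthographic camera.
var cameraFX = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var sceneFX = new THREE.Scene();

var materialFX = new THREE.ShaderMaterial( {
    uniforms: THREE.UniformsUtils.clone( THREE.LuminosityShader.uniforms ),
    vertexShader: THREE.LuminosityShader.vertexShader,
    fragmentShader: THREE.LuminosityShader.fragmentShader
} );
materialFX.uniforms.tDiffuse.value = renderTarget.texture; // pass 1 output becomes pass 2 input

var quad = new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), materialFX );
sceneFX.add( quad );

function render() {
    requestAnimationFrame( render );
    renderer.clear();
    renderer.render( scene, camera, renderTarget ); // pass 1: scene into the render target
    renderer.render( sceneFX, cameraFX );           // pass 2: post-processed quad to the screen
}

render();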
"Therefore, I don't need a scene and a camera."
Please do it like in the example. That's the intended way of the library.
Demo: https://jsfiddle.net/f2Lommf5/5149/
three.js R91

Related

Setting MatrixWorld of camera in threeJS

I'm trying to sync the camera orientation of one three.js scene with another by passing the world matrix of the first scene's camera to the second and then using
camera.matrix.fromArray(cam1array); //cam1array is the flattened worldmatrix of the 1st scene
camera.updateMatrixWorld( true );
However, when I print camera.matrixWorld for the second scene, it appears that there has been no update (matrixWorld is the same as it was before those commands). The second scene does use OrbitControls, so maybe it is overriding my commands, but I wondered if anyone could advise me on how to achieve the same world matrix for the second scene as the first.
I should also clarify that part of the issue is that setting
camera.matrixAutoUpdate = false;
to prevent that override seems to stop OrbitControls from functioning correctly.
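For illustration only (this is not from the thread): one direction would be to decompose the received matrix into position and rotation instead of overwriting camera.matrix, so that matrixAutoUpdate can stay enabled. The sketch below assumes the first camera has no parent (so its world matrix equals its local matrix) and that the OrbitControls instance is named controls.

// Decompose the received world matrix into position/quaternion/scale instead of
// overwriting camera.matrix directly.
var m = new THREE.Matrix4().fromArray( cam1array );
m.decompose( camera.position, camera.quaternion, camera.scale );
camera.updateMatrixWorld( true );
// Note: OrbitControls may still re-orient the camera toward its target on the next
// controls.update() call, which matches the overriding behaviour described above.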

How can I have a camera orbit without using WebGLRenderer, and use canvas only?

I'm getting started with three.js, and I don't want to use WebGLRenderer (renderer = new THREE.WebGLRenderer();) at all. Instead I want to use renderer = new THREE.CanvasRenderer();.
I don't want to use WebGLRenderer due to lack of support.
How can I have a camera orbit without using WebGLRenderer, and use canvas only in three.js?
The three.js site has a small but effective set of Canvas examples as well. Just take a look at the source of https://threejs.org/examples/#canvas_camera_orthographic: it has an orbiting camera. That camera is orthographic, which you might not want, but the source should make it clear how to change it to a perspective camera.
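Reduced to the orbit itself, the pattern in that example looks roughly like this (a sketch that assumes scene, camera, and a CanvasRenderer instance named renderer already exist, with the CanvasRenderer script from examples/js/renderers/ included):

var theta = 0;

function animate() {
    requestAnimationFrame( animate );
    theta += 0.01;
    // Move the camera on a circle around the scene origin and keep it aimed at the center.
    camera.position.x = 200 * Math.cos( theta );
    camera.position.z = 200 * Math.sin( theta );
    camera.position.y = 100;
    camera.lookAt( scene.position );
    renderer.render( scene, camera );
}

animate();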

Render scene onto custom mesh with three.js

After messing around with this demo of Three.js rendering a scene to a texture, I successfully replicated the essence of it in my project: amidst my main scene, there's now a sphere, and a secondary scene is drawn onto it via a THREE.WebGLRenderTarget buffer.
I don't really need a sphere, though, and that's where I've hit a huge brick wall. When trying to map the buffer onto my simple custom mesh, I get an infinite stream of the following errors:
three.js:23444 WebGL: INVALID_VALUE: pixelStorei: invalid parameter for alignment
three.js:23557 Uncaught TypeError: Cannot read property 'width' of undefined
My geometry, approximating an annular shape, is created using this code. I've successfully UV-mapped a canvas onto it by passing {map: new THREE.Texture(canvas)} into the material options, but if I use {map: myWebGLRenderTarget} I get the errors above.
A cursory look through the call stack makes it look like three.js is assuming the presence of the texture.image attribute on myWebGLRenderTarget and attempting to call clampToMaxSize on it.
Is this a bug in three.js, or am I simply doing something wrong? Since I only need flat rendering (with MeshBasicMaterial), one of the first things I did when adapting the render-to-texture demo above was remove all trace of the shaders, and it worked great with just the sphere. Do I need those shaders back in order to use UV mapping and a custom mesh?
For what it's worth, I was needlessly setting needsUpdate = true on my texture. (The handling of needsUpdate apparently assumes the presence of a <canvas> that the texture is based on.)
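For reference, here is a sketch of the same idea against the current three.js API, where the render target's .texture property is used as the map and needsUpdate is left alone. The scene and geometry names below are placeholders, not the question's actual code.

var renderTarget = new THREE.WebGLRenderTarget( 512, 512 );

// Flat-shaded material that samples whatever was rendered into the target.
var material = new THREE.MeshBasicMaterial( { map: renderTarget.texture } );
var annulus = new THREE.Mesh( new THREE.RingGeometry( 5, 10, 64 ), material );
mainScene.add( annulus );

function render() {
    requestAnimationFrame( render );
    renderer.setRenderTarget( renderTarget );
    renderer.render( innerScene, innerCamera ); // secondary scene into the texture
    renderer.setRenderTarget( null );
    renderer.render( mainScene, mainCamera );   // main scene, with the annulus textured by it
}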

How to generate the top and perspective view of an object using ThreeJS?

I want to generate the top and perspective view of an object.
Input: A 3d object, maybe .obj or .dae file.
Output: the image files presenting the top and front view of the loaded object.
Here is some expected output:
The perspective view of a chair
The top view of a chair:
Can anyone give me some suggestions for solving this problem? A demo would be preferred.
You could create a small three.js scene with your OBJ or Collada file loaded using the appropriate loaders (see the examples for the specific loaders). Then create the cameras you want to have in the scene; see the orthographic and perspective camera examples that come with three.js, too.
To produce the images you want, you could use the canvas toDataURL function; see this thread:
Three.js and HTML5 Canvas toDataURL
In essence, after the objects are loaded, you could do something like:
renderer.render(scene, topViewCamera);     // draw the top view
dataurl = canvas.toDataURL();              // canvas is the renderer's DOM element
renderer.render(scene, perspectiveCamera); // draw the perspective view
dataurl2 = canvas.toDataURL();
I think you could also use two render targets and then use those for output, too, but if you are new to three.js, maybe start with the HTML5 toDataURL() method.
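Putting that together, here is a sketch of how the capture could look. The file name, camera placement, and the preserveDrawingBuffer flag (which canvas.toDataURL() generally needs, as the linked thread discusses) are assumptions for illustration, not code from the question.

var renderer = new THREE.WebGLRenderer( { preserveDrawingBuffer: true } );
renderer.setSize( 512, 512 );
document.body.appendChild( renderer.domElement );

var scene = new THREE.Scene();
scene.add( new THREE.AmbientLight( 0xffffff ) );

// Straight-down orthographic camera for the top view.
var topViewCamera = new THREE.OrthographicCamera( -10, 10, 10, -10, 0.1, 1000 );
topViewCamera.position.set( 0, 100, 0 );
topViewCamera.up.set( 0, 0, -1 );            // avoid a degenerate up vector when looking straight down
topViewCamera.lookAt( scene.position );

var perspectiveCamera = new THREE.PerspectiveCamera( 45, 1, 0.1, 1000 );
perspectiveCamera.position.set( 30, 30, 30 );
perspectiveCamera.lookAt( scene.position );

// OBJLoader comes from examples/js/loaders/OBJLoader.js.
new THREE.OBJLoader().load( 'chair.obj', function ( object ) {
    scene.add( object );

    renderer.render( scene, topViewCamera );
    var topView = renderer.domElement.toDataURL( 'image/png' );

    renderer.render( scene, perspectiveCamera );
    var perspectiveView = renderer.domElement.toDataURL( 'image/png' );
} );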

Is there any reason why a camera's matrixWorldInverse won't update when using TrackballControls?

I'm viewing a scene using three.js and TrackballControls. I try to get the view matrix from the camera, but its matrixWorldInverse isn't updating. I do call updateMatrixWorld in my render function. The matrixWorld is updating, just not the inverse. Any ideas why?
You need to do it yourself:
camera.matrixWorldInverse.getInverse( camera.matrixWorld );
Make sure camera.matrixWorld is updated first. Note that by default, it is automatically updated by the renderer.
three.js r.58
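For context, a sketch of where that line could sit in a render loop, assuming r.58-era code with a TrackballControls instance named controls (newer releases replaced Matrix4.getInverse() with .invert()):

function render() {
    requestAnimationFrame( render );
    controls.update();                                          // TrackballControls
    camera.updateMatrixWorld();                                 // refresh matrixWorld first
    camera.matrixWorldInverse.getInverse( camera.matrixWorld ); // view matrix
    renderer.render( scene, camera );
}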
