How to blur an object in WebVR? - three.js

In my scene I have a glowing cube. First I render the cube to a texture, then render that texture with a Gaussian blur post-processing pass. This way I get the right result when I am not in VR mode. Here it is (screenshot omitted):
But when I go to VR mode, I get a distorted result. Please check the following image (screenshot omitted):
Can anyone tell me why this is happening? Do I have to make any adjustments to the render-to-texture step for VR mode?
Update:
My code is like this:
function render() {
    generate_blur_texture();
    effect.render(scene, camera);
}

var rtTexture = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBAFormat } );
var rtTextureFinal = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter, format: THREE.RGBAFormat } );

function generate_blur_texture() {
    // I think the key point is that I am using `renderer` here instead of `effect`
    renderer.render(scene, camera, rtTexture, true);
    // Further rendering on rtTexture and rtTextureFinal (ping pong) generates the blur;
    // for those passes I only draw a quad that samples from the texture.
    // Ultimately, rtTexture contains the blurred texture.
}
Three.js Revision: 77
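For context, here is a minimal sketch of the ping-pong blur pass the comments above describe. It assumes the blur shaders shipped in three.js/examples/js/shaders/ (HorizontalBlurShader, VerticalBlurShader) and uses the r77 render-to-target signature; the quadScene/quadCamera names and the shared full-screen quad are illustrative assumptions, not the asker's actual code:

function makeBlurMaterial( shader ) {
    // Clone the uniforms so the two passes do not share state
    return new THREE.ShaderMaterial( {
        uniforms: THREE.UniformsUtils.clone( shader.uniforms ),
        vertexShader: shader.vertexShader,
        fragmentShader: shader.fragmentShader
    } );
}

var hBlur = makeBlurMaterial( THREE.HorizontalBlurShader );
var vBlur = makeBlurMaterial( THREE.VerticalBlurShader );
hBlur.uniforms[ 'h' ].value = 1 / window.innerWidth;   // blur step in UV units
vBlur.uniforms[ 'v' ].value = 1 / window.innerHeight;

// Full-screen quad used to run the blur shader over a texture
var quadScene = new THREE.Scene();
var quadCamera = new THREE.OrthographicCamera( -1, 1, 1, -1, 0, 1 );
var quad = new THREE.Mesh( new THREE.PlaneBufferGeometry( 2, 2 ), hBlur );
quadScene.add( quad );

function generate_blur_texture() {
    // Render the glowing cube into rtTexture (r77 signature: target, forceClear)
    renderer.render( scene, camera, rtTexture, true );

    // Horizontal pass: rtTexture -> rtTextureFinal
    quad.material = hBlur;
    hBlur.uniforms[ 'tDiffuse' ].value = rtTexture.texture;
    renderer.render( quadScene, quadCamera, rtTextureFinal, true );

    // Vertical pass: rtTextureFinal -> rtTexture
    quad.material = vBlur;
    vBlur.uniforms[ 'tDiffuse' ].value = rtTextureFinal.texture;
    renderer.render( quadScene, quadCamera, rtTexture, true );
}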

Related

Multiple buffers in shaders in Three.js / WebGL with setRenderTarget instead of renderTarget

I'm trying to understand how to write a shader in three.js with multiple buffers by converting a shader from Shadertoy. I found this example:
https://codepen.io/lickedwindows/pen/jGOLJr
But when I run it with the newest version of Three.js, I run into these errors:
"THREE.WebGLRenderer.render(): the renderTarget argument has been
removed. Use .setRenderTarget() instead."
"THREE.WebGLRenderer.render(): the forceClear argument has been
removed. Use .clear() instead."
I'm trying to change this but cannot figure out how. How can I update this code so that it compiles with the most recent version of three.js?
Basically, I want a boilerplate for writing multi-buffer shaders in three.js.
//Create 2 buffer textures
textureA = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter });
textureB = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, { minFilter: THREE.LinearFilter, magFilter: THREE.NearestFilter });
Your demo uses Three.js r85, which is a bit outdated. As of September 2020 we have r120, and you now have to call renderer.setRenderTarget( renderTarget ) each time you want to render into that target. You can read about it in the docs: https://threejs.org/docs/#api/en/renderers/WebGLRenderer.setRenderTarget
This is the new approach:
var textureA = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
var textureB = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
renderer.setRenderTarget(textureA);
renderer.render(scene1, camera1);
renderer.setRenderTarget(textureB);
renderer.render(scene2, camera2);
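To draw to the canvas again afterwards, call renderer.setRenderTarget( null ). Below is a minimal multi-buffer (ping-pong) sketch built on this API; the feedbackMaterial/screenMaterial names, the bufferScene/bufferCamera pair and the bufferTexture uniform are assumptions for illustration, not part of the original answer:

// feedbackMaterial reads last frame's result; screenMaterial shows it on screen
function animate() {
    requestAnimationFrame( animate );

    // Render the buffer scene into textureB, sampling the previous frame from textureA
    feedbackMaterial.uniforms.bufferTexture.value = textureA.texture;
    renderer.setRenderTarget( textureB );
    renderer.render( bufferScene, bufferCamera );

    // Swap targets so this frame's output becomes next frame's input
    var tmp = textureA; textureA = textureB; textureB = tmp;

    // null = default framebuffer, i.e. the visible canvas
    renderer.setRenderTarget( null );
    screenMaterial.uniforms.bufferTexture.value = textureA.texture;
    renderer.render( scene, camera );
}
animate();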

Three.js: Apply SSAO (Screen Space Ambient Occlusion) to Displacement map

I've implemented Screen Space Ambient Occlusion in my Three.js project, and it runs perfectly, like this:
//Setup SSAO pass
depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking;
depthMaterial.blending = THREE.NoBlending;
var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, stencilBuffer: true }; // stencilBuffer: true so transparent objects are not affected
depthRenderTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, pars);
depthRenderTarget.texture.name = "SSAOShader.rt";
ssaoPass = new THREE.ShaderPass(THREE.SSAOShader);
///////ssaoPass.uniforms[ "tDiffuse" ].value will be set by ShaderPass
ssaoPass.uniforms["tDepth"].value = depthRenderTarget.texture;
ssaoPass.uniforms['size'].value.set(window.innerWidth, window.innerHeight);
ssaoPass.uniforms['cameraNear'].value = camera.near;
ssaoPass.uniforms['cameraFar'].value = camera.far;
ssaoPass.uniforms['radius'].value = radius;
ssaoPass.uniforms['aoClamp'].value = aoClamp;
ssaoPass.uniforms['lumInfluence'].value = lumInfluence;
But when I set a material with a displacementMap (which runs correctly without SSAO enabled), this is the result. Notice that the SSAO is applied "correctly" to the original sphere (with a strange transparent artifact), but I need it applied to the displaced vertices of the sphere.
These are my composer passes:
//Main render scene pass
postprocessingComposer.addPass(renderScene);
//Post processing pass
if (ssaoPass) {
    postprocessingComposer.addPass(ssaoPass);
}
And this is the render loop with the composer:
if (postprocessingComposer) {
    if (ssaoPass) {
        //Render depth into depthRenderTarget
        scene.overrideMaterial = depthMaterial;
        renderer.render(scene, camera, depthRenderTarget, true);
        //Render composer
        scene.overrideMaterial = null;
        postprocessingComposer.render();
        renderer.clearDepth();
        renderer.render(sceneOrtho, cameraOrtho);
    }
    else {
        //Render loop with post processing (no SSAO, because it needs more checks, see above)
        renderer.clear();
        postprocessingComposer.render();
        renderer.clearDepth();
        renderer.render(sceneOrtho, cameraOrtho);
    }
}
else {
    //Simple render loop (no post-processing)
    renderer.clear();
    renderer.render(scene, camera);
    renderer.clearDepth();
    renderer.render(sceneOrtho, cameraOrtho);
}
How can I achieve correct Screen Space Ambient Occlusion applied to a mesh with a displacement map? Thanks.
[UPDATE]:
After some work, I tried the following procedure for every child in the scene with a displacement map: define a new overrideMaterial for the scene, equal to a depth material that takes its displacement map parameters from the child's material.
var myDepthMaterial = new THREE.MeshDepthMaterial({
    depthPacking: THREE.RGBADepthPacking,
    displacementMap: child.material.displacementMap,
    displacementScale: child.material.displacementScale,
    displacementBias: child.material.displacementBias
});
child.onBeforeRender = function (renderer, scene, camera, geometry, material, group) {
    scene.overrideMaterial = myDepthMaterial;
};
This solution sounds good, but it doesn't work.
You are using SSAO with a displacement map. You need to specify the displacement map when you instantiate the depth material.
depthMaterial = new THREE.MeshDepthMaterial( {
    depthPacking: THREE.RGBADepthPacking,
    displacementMap: displacementMap,
    displacementScale: displacementScale,
    displacementBias: displacementBias
} );
three.js r.87
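Note that scene.overrideMaterial applies one material to every mesh, so per-mesh displacement maps cannot ride along with it. A hedged sketch of one workaround for the depth pass, swapping in per-mesh depth materials by hand (the userData bookkeeping and the renderDepth name are assumptions, not part of the answer above):

// Once at startup: build one depth material per mesh, copying any displacement parameters
scene.traverse( function ( child ) {
    if ( child instanceof THREE.Mesh ) {
        var params = { depthPacking: THREE.RGBADepthPacking };
        if ( child.material.displacementMap ) {
            params.displacementMap = child.material.displacementMap;
            params.displacementScale = child.material.displacementScale;
            params.displacementBias = child.material.displacementBias;
        }
        child.userData.depthMaterial = new THREE.MeshDepthMaterial( params );
    }
} );

// Depth pass: swap materials instead of setting scene.overrideMaterial
function renderDepth() {
    scene.traverse( function ( child ) {
        if ( child.userData.depthMaterial ) {
            child.userData.sceneMaterial = child.material;
            child.material = child.userData.depthMaterial;
        }
    } );
    renderer.render( scene, camera, depthRenderTarget, true );
    scene.traverse( function ( child ) {
        if ( child.userData.sceneMaterial ) {
            child.material = child.userData.sceneMaterial;
        }
    } );
}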

Why does this ThreeJs plane appear to get a kink in it as the camera moves down the y-axis?

I have an instance of THREE.PlaneBufferGeometry that I apply an image texture to like this:
var camera, scene, renderer;
var geometry, material, mesh, light, floor;

scene = new THREE.Scene();
THREE.ImageUtils.loadTexture( "someImage.png", undefined, handleLoaded, handleError );

function handleLoaded(texture) {
    var geometry = new THREE.PlaneBufferGeometry(
        texture.image.naturalWidth,
        texture.image.naturalHeight,
        1,
        1
    );
    var material = new THREE.MeshBasicMaterial({
        map: texture,
        overdraw: true
    });
    floor = new THREE.Mesh( geometry, material );
    floor.material.side = THREE.DoubleSide;
    scene.add( floor );
    camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 1, texture.image.naturalHeight * A_BUNCH );
    camera.position.z = texture.image.naturalWidth * 0.5;
    camera.position.y = SOME_INT;
    camera.lookAt(floor.position);
    renderer = new THREE.CanvasRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    appendToDom();
    animate();
}

function handleError() {
    console.log(arguments);
}

function appendToDom() {
    document.body.appendChild(renderer.domElement);
}

function animate() {
    requestAnimationFrame(animate);
    renderer.render(scene, camera);
}
Here's the code pen: http://codepen.io/anon/pen/qELxvj?editors=001
( Note: Three.js "pollutes" the global scope, to use a harsh term, and then decorates THREE using a decorator pattern, relying on scripts loading in the correct order without a module loader system. So, for brevity's sake, I simply copy-pasted the source code of a few required decorators into the code pen to ensure they load in the right order. You'll have to scroll down several thousand lines to the bottom of the code pen to play with the code that instantiates the plane, paints it and moves the camera. )
In the code pen, I simply lay the plane flat against the x-y plane, looking straight up the z-axis, as it were. Then I slowly pan the camera down along the y-axis, continuously pointing it at the plane.
As you can see in the code pen, as the camera moves along the y-axis in the negative direction, the texture on the plane appears to develop a kink in it around West Texas.
Why? How can I prevent this from happening?
I've seen similar behaviour, not in three.js or a browser with WebGL, but with DirectX and vvvv; still, I think you just have to set widthSegments/heightSegments of your PlaneBufferGeometry to a higher value (>4) and you're set! (The likely reason: the CanvasRenderer maps textures affinely per triangle, so a 1×1 plane shows the perspective error as a seam along its diagonal; more segments shrink the error per triangle.)
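For instance, in the question's handleLoaded this would be (the 8×8 segment counts are an arbitrary illustration; tune to taste):

var geometry = new THREE.PlaneBufferGeometry(
    texture.image.naturalWidth,
    texture.image.naturalHeight,
    8,   // widthSegments
    8    // heightSegments
);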

Texture applied only on canvas click with Three.js

I am using the WebGLRenderer from Three.js to render an object, reconstructed from an IndexedFaceStructure, that has a texture. My problem is that when the page loads the object shows up with no texture (only a black colored mesh displays); however, when I click on the canvas where I render the object, the texture shows up.
I have been looking around and tried the texture.needsUpdate = true; trick, but that one also removes the black meshed object on page load, so I am at a loss here.
These are the main bits of my code:
function webGLStart() {
    container = document.getElementById("webgl-canvas");
    renderer = new THREE.WebGLRenderer({canvas: container, alpha: true, antialias: true});
    renderer.setClearColor(0x696969, 1);
    renderer.setSize(container.width, container.height);
    scene = new THREE.Scene();
    camera = new THREE.PerspectiveCamera(45, container.width / container.height, 1, 100000);
    camera.position.set(60, 120, 2000);
    //computing the geometry2
    controls = new THREE.OrbitControls( camera );
    controls.addEventListener( 'change', render );
    texture = new THREE.ImageUtils.loadTexture(texFile);
    //texture.needsUpdate = true;
    material = new THREE.MeshBasicMaterial( {wireframe: false, map: texture, vertexColors: THREE.VertexColors} );
    mesh = new THREE.Mesh(geometry2, material);
    scene.add(mesh);
    render();
    animate();
}

function render() {
    renderer.render(scene, camera);
}

function animate() {
    controls.update();
}
And the HTML part: <canvas id="webgl-canvas" style="border: none;" width="900" height="900"></canvas> (I could not add the tag properly above).
Do you happen to have a clue why this is happening?
If you have a static scene, you do not need an animation loop, and you only need to render the scene when OrbitControls modifies the camera position/orientation.
Consequently, you can use this pattern -- without an animation loop:
controls.addEventListener( 'change', render );
However, you also need to force a render when the texture loads. You do that by specifying a callback to render in the ImageUtils.loadTexture() method:
var texture = THREE.ImageUtils.loadTexture( "textureFile", undefined, render );
Alternatively, you could add the mesh to the scene and call render in the callback.
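A minimal sketch of that alternative, assuming the question's texFile, geometry2 and scene variables (the mesh is created only once the texture is ready):

THREE.ImageUtils.loadTexture( texFile, undefined, function ( texture ) {
    // Texture has finished loading: build the mesh now and force a render
    var material = new THREE.MeshBasicMaterial( { map: texture, vertexColors: THREE.VertexColors } );
    mesh = new THREE.Mesh( geometry2, material );
    scene.add( mesh );
    render();
} );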
three.js r.70

THREE.js blur the frame buffer

I need to blur the frame buffer, and I don't know how to get the frame buffer using THREE.js.
I want to blur the whole frame buffer rather than blur each texture in the scene, so I guess I should read the frame buffer and then blur, rather than doing this in shaders.
Here's what I have tried:
Call when init:
var renderTarget = new THREE.WebGLRenderTarget(512, 512, {
    wrapS: THREE.RepeatWrapping,
    wrapT: THREE.RepeatWrapping,
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat,
    type: THREE.FloatType,
    stencilBuffer: false,
    depthBuffer: true
});
renderTarget.generateMipmaps = false;
Call in each frame:
var gl = renderer.getContext();

// render to target
renderer.render(scene, camera, renderTarget, false);
framebuffer = renderTarget.__webglFramebuffer;
console.log(framebuffer);
gl.flush();

if (framebuffer != null)
    gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);

var width = height = 512;
var rdData = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, rdData);
console.log(rdData);

// render to screen
renderer.render(scene, camera);
But framebuffer logs as WebGLFramebuffer {} and rdData is full of zeros. Am I doing this the right way?
Any blur should use shaders to be efficient, but in this case not as materials on the scene's objects.
If you want to blur the entire frame buffer and render that to the screen, use the effect composer. It's located in three.js/examples/js/postprocessing/EffectComposer.js.
Set up the scene, camera, and renderer as normal, but in addition add an instance of the effect composer, with the scene as a render pass.
composer = new THREE.EffectComposer( renderer );
composer.addPass( new THREE.RenderPass( scene, camera ) );
Then blur the whole buffer with two passes using the included blur shaders located in three.js/examples/shaders/
hblur = new THREE.ShaderPass( THREE.HorizontalBlurShader );
composer.addPass( hblur );
vblur = new THREE.ShaderPass( THREE.VerticalBlurShader );
// set this shader pass to render to screen so we can see the effects
vblur.renderToScreen = true;
composer.addPass( vblur );
Finally, in your method called each frame, render using the composer instead of the renderer:
composer.render();
Here is a link to a working example of full screen blur
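Putting the pieces together with the blur amounts set explicitly (the 1/width and 1/height step sizes are a common choice, not from the answer itself; CopyShader.js must also be loaded, since EffectComposer depends on it):

var composer = new THREE.EffectComposer( renderer );
composer.addPass( new THREE.RenderPass( scene, camera ) );

var hblur = new THREE.ShaderPass( THREE.HorizontalBlurShader );
hblur.uniforms[ 'h' ].value = 1 / window.innerWidth;   // horizontal step in UV units
composer.addPass( hblur );

var vblur = new THREE.ShaderPass( THREE.VerticalBlurShader );
vblur.uniforms[ 'v' ].value = 1 / window.innerHeight;  // vertical step in UV units
vblur.renderToScreen = true;                           // last pass draws to the canvas
composer.addPass( vblur );

function animate() {
    requestAnimationFrame( animate );
    composer.render();   // instead of renderer.render( scene, camera )
}
animate();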
Try using the MeshDepthMaterial and render this into your shader.
I suggest rendering the blur pass with a dedicated camera using the same settings as the scene's diffuse camera. Then, by adjusting that camera's frustum, you can do both full-screen and depth-of-field blur effects. For a full-screen setup, move the near plane towards the camera and step the far plane away from the camera in increments.
http://threejs.org/docs/#Reference/Materials/MeshDepthMaterial
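A hedged sketch of that depth pass, assuming a three.js version (roughly r72 to r101) where render( scene, camera, target, forceClear ) is valid; the near/far values and the tDepth uniform name are illustrative assumptions:

var depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
var depthMaterial = new THREE.MeshDepthMaterial();

// Dedicated camera cloned from the main one so its frustum can be tuned independently
var depthCamera = camera.clone();
depthCamera.near = 10;    // pull the near plane toward the camera ...
depthCamera.far = 200;    // ... and step the far plane outward to shape the falloff
depthCamera.updateProjectionMatrix();

// Render depth into the target, then restore normal materials
scene.overrideMaterial = depthMaterial;
renderer.render( scene, depthCamera, depthTarget, true );
scene.overrideMaterial = null;

// Feed the result to a blur shader pass, e.g.:
// blurPass.uniforms[ 'tDepth' ].value = depthTarget.texture;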
