I'm trying to add an SSAO shader pass to an earlier fiddle, which was the result of: EffectComposer second pass "overwrites" first pass
Unfortunately I get a black screen. I created a new fiddle, which I will update with my findings:
http://jsfiddle.net/mqt1ng2r/
Creating the depth shader and the target for it to render onto:
var depthShader = THREE.ShaderLib[ "depthRGBA" ];
var depthUniforms = THREE.UniformsUtils.clone( depthShader.uniforms );
depthMaterial = new THREE.ShaderMaterial( { fragmentShader: depthShader.fragmentShader, vertexShader: depthShader.vertexShader, uniforms: depthUniforms } );
depthMaterial.blending = THREE.NoBlending;
depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter, format: THREE.RGBAFormat } );
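As an aside, the depthRGBA shader packs the scalar depth into the four 8-bit RGBA channels so that precision survives an 8-bit render target. Here is a plain-JavaScript sketch of that pack/unpack round trip, mirroring the pack_depth/unpack_depth GLSL helpers of that three.js era (treat the constants as an approximation of the shader chunk, not a verbatim copy):

```javascript
// Pack a depth value in [0, 1) into four channels, the way the
// depthRGBA shader does: fract(depth * [256^3, 256^2, 256, 1]),
// then subtract the part carried by the higher-precision channel.
function packDepth(depth) {
  var r = (depth * 256 * 256 * 256) % 1;
  var g = (depth * 256 * 256) % 1;
  var b = (depth * 256) % 1;
  var a = depth % 1;
  // res -= res.xxyz * vec4(0, 1/256, 1/256, 1/256)
  return [ r, g - r / 256, b - g / 256, a - b / 256 ];
}

// Recover the depth: dot(rgba, [1/256^3, 1/256^2, 1/256, 1]).
function unpackDepth(rgba) {
  return rgba[0] / (256 * 256 * 256) +
         rgba[1] / (256 * 256) +
         rgba[2] / 256 +
         rgba[3];
}

var packed = packDepth(0.731);
console.log(unpackDepth(packed)); // ~0.731
```

The SSAO shader unpacks tDepth with the matching dot product, which is why the depth pre-pass must use this material rather than a plain color render.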
Creating the SSAO shader pass and adding it to the composer:
var effect = new THREE.ShaderPass( THREE.SSAOShader );
effect.uniforms[ 'tDepth' ].value = depthTarget;
effect.uniforms[ 'size' ].value.set( window.innerWidth, window.innerHeight );
effect.uniforms[ 'cameraNear' ].value = camera.near;
effect.uniforms[ 'cameraFar' ].value = camera.far;
effect.renderToScreen = true;
composer.addPass( effect );
The render loop renders the scene with the override material set first, before the composer is rendered:
requestAnimationFrame( animate );
//renderer.clear(); // changed -------------
scene.overrideMaterial = depthMaterial;
renderer.render( scene, camera, depthTarget );
scene.overrideMaterial = null;
composer.render();
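For debugging it can help to know what the SSAO pass does with the depth sample: it linearizes the [0, 1] depth-buffer value using cameraNear and cameraFar, so those two uniforms must match the camera that produced the depth render. A standalone sketch of that mapping, based on the readDepth() formula in the SSAOShader of this vintage (verify against your copy of the shader):

```javascript
// Linearize a [0, 1] depth-buffer sample into [0, 1] eye-space depth,
// following the SSAO shader's readDepth() mapping: the near plane maps
// to roughly 2 * near / (far + near), the far plane maps to 1.
function readDepth(depthSample, cameraNear, cameraFar) {
  return (2.0 * cameraNear) /
         (cameraFar + cameraNear - depthSample * (cameraFar - cameraNear));
}

console.log(readDepth(1.0, 1, 1000)); // far plane -> 1
console.log(readDepth(0.0, 1, 1000)); // near plane -> ~0.002
```

If cameraNear/cameraFar don't match the camera that filled depthTarget, the linearized depth is wrong everywhere, which can easily come out as a black screen.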
A side question which came up: would it be possible to add the depth render of the scene to the composer too?
The reason for this question is that the render target is different and an override material has to be set. I couldn't find any way to do this. Does it even sound reasonable, or am I misusing the EffectComposer here?
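It should be possible in principle: a composer pass is just an object with a render() method, so the depth pre-pass could be wrapped in a small custom pass. Here is a hypothetical sketch, assuming the classic pass interface render( renderer, writeBuffer, readBuffer, delta ) and writing to its own target instead of the ping-pong buffers; DepthPass is a made-up name, not a three.js class:

```javascript
// Hypothetical composer pass: renders the scene with an override
// material into its own render target, so a depth pre-pass can live
// inside the composer chain.
function DepthPass(scene, camera, overrideMaterial, target) {
  this.scene = scene;
  this.camera = camera;
  this.overrideMaterial = overrideMaterial;
  this.target = target;
  this.enabled = true;
  this.needsSwap = false; // we write to our own target, not the ping-pong buffers
}

DepthPass.prototype.render = function (renderer, writeBuffer, readBuffer, delta) {
  this.scene.overrideMaterial = this.overrideMaterial;
  // old-style signature: render(scene, camera, target, forceClear)
  renderer.render(this.scene, this.camera, this.target, true);
  this.scene.overrideMaterial = null;
};
```

composer.addPass( new DepthPass( scene, camera, depthMaterial, depthTarget ) ); would then replace the manual pre-pass in the render loop, at the cost of hiding the override-material trick inside the pass.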
I found out that the SSAO is partially working: if I set effect.uniforms[ 'onlyAO' ].value = 1; you can see it working, but setting it to 0 results in a black screen. I'm looking at the code of the EffectComposer now to see what is happening.
Hopefully someone can help me out once again.
Like I said, I'll update this post and the fiddle with my attempts.
Interesting links on the subject:
Pull request to add resize support to SSAO demo https://github.com/mrdoob/three.js/pull/6820
Edits:
Added some relevant snippets of the code (posting all here would clutter the question)
Added interesting links
Related
Three.js is up to v66, and the gizmo TransformControls seem to have stopped working.
See a demo here:
This is the basic code
var geometry = new THREE.CubeGeometry( 200, 200, 200 );
var material = new THREE.MeshBasicMaterial( { color: 0x00ff00 } );
var mesh = new THREE.Mesh( geometry, material );
var control = new THREE.TransformControls( camera, renderer.domElement );
control.addEventListener( 'change', render );
control.attach( mesh );
scene.add( control.gizmo );
http://jsfiddle.net/Hq2Dx/5/
There is no apparent error, but the controls do not appear. I've tried debugging but cannot isolate the issue. Has anyone else managed to get it working?
Cheers
Change your code as follows; it will do the trick. Check this fiddle: http://jsfiddle.net/Hq2Dx/7/
var control = new THREE.TransformControls( camera, renderer.domElement );
control.addEventListener( 'change', render );
control.attach( mesh );
scene.add( control );
I'm trying to put multiple passes in an EffectComposer and everything is fine except for BokehPass.
My code looks like this (I already have a scene, camera and renderer):
...
var renderPass = new THREE.RenderPass( scene, camera );
var postRenderer = new THREE.EffectComposer( renderer );
var copyPass = new THREE.ShaderPass( THREE.CopyShader );
var bokehSettings = {
focus : 1.0, aperture : 0.025, maxblur : 1.0,
width: window.innerWidth, height : window.innerHeight
}
var bokehPass = new THREE.BokehPass( scene, camera, bokehSettings );
var bleachPass = new THREE.ShaderPass( THREE.BleachBypassShader ); // I clone the uniforms too, but omit that here for brevity
postRenderer.addPass( renderPass );
postRenderer.addPass( bleachPass );
postRenderer.addPass( bokehPass );
postRenderer.addPass( copyPass );
...
function render(){
postRenderer.render( 0.1 );
}
...
The bleachPass works fine in this order, but the bokehPass does not.
If I try renderPass -> bleachPass -> bokehPass, the bleachPass doesn't work.
Then I tried renderPass -> bleachPass -> copyPass -> bokehPass, but that gives me some weird results.
Does anyone know how to mix multiple passes with bokeh?
Thanks!
Old question, but for future reference here's the answer by @Mugen87:
https://github.com/mrdoob/three.js/issues/18634
BokehPass sets needsSwap to false. That means the result of the pass is not available in the read buffer for the subsequent post-processing pass. This is done for performance reasons, since usually this DOF pass is used on its own or at the end of the pass chain. So adding the following line of code should solve the issue:
bokehPass.needsSwap = true;
Updated fiddle: https://jsfiddle.net/5nxy0tqp/
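The ping-pong behaviour behind this is easy to see in a stripped-down model of the composer loop (a simplification for illustration, not the real EffectComposer code):

```javascript
// Minimal model of EffectComposer's ping-pong buffers: each pass reads
// from readBuffer and writes to writeBuffer, and the composer swaps the
// two afterwards -- unless the pass sets needsSwap = false.
function compose(passes) {
  var readBuffer = 'bufferA', writeBuffer = 'bufferB';
  var log = [];
  passes.forEach(function (pass) {
    log.push(pass.name + ' reads ' + readBuffer + ', writes ' + writeBuffer);
    if (pass.needsSwap) {
      var tmp = readBuffer;
      readBuffer = writeBuffer;
      writeBuffer = tmp;
    }
  });
  return log;
}

var log = compose([
  { name: 'render', needsSwap: true  },
  { name: 'bokeh',  needsSwap: false },
  { name: 'copy',   needsSwap: true  }
]);
// Because bokeh skipped the swap, copy still reads the buffer that
// render wrote -- bokeh's output (in the other buffer) is lost.
console.log(log);
```

Setting bokehPass.needsSwap = true restores the swap, so the copy pass reads the buffer the bokeh pass actually wrote.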
Maybe it can still help people: I think you forgot copyPass.renderToScreen = true (since it is the last pass you add).
I have adapted this post-processing example http://mrdoob.github.com/three.js/examples/webgl_postprocessing_dof.html to apply a Depth of Field / Bokeh effect. How can I specify the focus range (or whatever it should be called)?
If the camera far plane is at 10000 and the model size is 10, it was impossible to focus on individual parts of the model, because the focus adjusts over 1-10000 (camera near to camera far) instead of 1-10 (between the camera and the back of my model), the actual area of interest.
It did work fine once I realised I should set the camera far plane as low as possible (about the same as the scene size), so the focus can be adjusted to where the actual model is.
Now I can't do the far-plane trick anymore, because I added a skybox, so the camera's far plane needs to be quite far relative to the model size. That messes up the Depth of Field: I can focus very close or very far, but the whole model is either completely blurred or not blurred at all, as the adjustable distance is way too big (all the way to the skybox).
If I know the area I want to be able to focus on, how can I specify it in my code?
Here is my setup code:
dof_material_depth = new THREE.MeshDepthMaterial();
dof_scene = new THREE.Scene();
dof_camera = new THREE.OrthographicCamera(SCREEN_WIDTH / - 2, SCREEN_WIDTH / 2, SCREEN_HEIGHT / 2, SCREEN_HEIGHT / - 2, -10000, 10000 );
dof_camera.position.z = 100;
dof_scene.add( dof_camera );
var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat };
dof_rtTextureDepth = new THREE.WebGLRenderTarget(SCREEN_WIDTH, SCREEN_HEIGHT, pars );
dof_rtTextureColor = new THREE.WebGLRenderTarget(SCREEN_WIDTH, SCREEN_HEIGHT, pars );
dof_bokeh_shader = THREE.BokehShader;
dof_bokeh_uniforms = THREE.UniformsUtils.clone(dof_bokeh_shader.uniforms );
dof_bokeh_uniforms[ "tColor" ].value = dof_rtTextureColor;
dof_bokeh_uniforms[ "tDepth" ].value = dof_rtTextureDepth;
dof_bokeh_uniforms[ "focus" ].value = 1.1;
dof_bokeh_uniforms[ "aspect" ].value = SCREEN_WIDTH / SCREEN_HEIGHT;
dof_materialBokeh = new THREE.ShaderMaterial( {
uniforms: dof_bokeh_uniforms,
vertexShader: dof_bokeh_shader.vertexShader,
fragmentShader: dof_bokeh_shader.fragmentShader
});
dof_quad = new THREE.Mesh( new THREE.PlaneGeometry(SCREEN_WIDTH, SCREEN_HEIGHT), dof_materialBokeh );
dof_quad.position.z = -500;
dof_scene.add(dof_quad );
And here the rendering part:
renderer.render(scene, camera, dof_rtTextureColor, true );
scene.overrideMaterial = dof_material_depth;
renderer.render(scene, camera, dof_rtTextureDepth, true );
scene.overrideMaterial = null;
renderer.render( dof_scene, dof_camera );
var delta = 0.01;
composerScene.render( delta);
EDIT:
I did manage to get desired results by setting a low far plane for the camera just before rendering the depth material, then reverting back to normal before rendering the composite:
renderer.render(scene, camera, dof_rtTextureColor, true );
var oldfar = camera.far; // this goes to skybox
camera.far = scenesize; // this goes to just behind the model
scene.overrideMaterial = dof_material_depth;
renderer.render(scene, camera, dof_rtTextureDepth, true );
camera.far = oldfar;
scene.overrideMaterial = null;
renderer.render( dof_scene, dof_camera );
var delta = 0.01;
composerScene.render( delta);
This works perfectly. I will leave the question open, though, as I'm quite new to WebGL / 3D programming in general, want to learn, and would like to know if it's possible to do this in the shader/material setup phase.
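One way to avoid touching camera.far at render time is to convert the desired world-space focus distance into the normalized depth value the shader compares against, assuming the standard [0, 1] perspective depth mapping. Whether the bokeh shader's focus uniform takes exactly this value depends on the shader version, so treat distanceToDepth as a hypothetical helper to adapt:

```javascript
// Map a world-space distance to the normalized [0, 1] depth-buffer
// value it produces under a standard perspective projection with the
// given near and far planes: depth = far / (far - near) * (1 - near / dist).
function distanceToDepth(dist, near, far) {
  return (far / (far - near)) * (1.0 - near / dist);
}

console.log(distanceToDepth(1, 1, 1000));    // at the near plane -> 0
console.log(distanceToDepth(1000, 1, 1000)); // at the far plane  -> ~1
```

With this, the focus uniform could be driven from a model-space distance at setup time (e.g. the distance from the camera to the model's bounding-sphere center) instead of shrinking the far plane before the depth render.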
I can't manage to use SSAO with three.js.
I tried to follow the webgl_postprocessing_dof.html example.
Here is the initPostprocessing function:
function initPostprocessing() {
postprocessing.scene = new THREE.Scene();
postprocessing.camera = new THREE.OrthographicCamera( window.innerWidth / - 2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / - 2, -10000, 10000 );
postprocessing.camera.position.z = 100;
postprocessing.scene.add( postprocessing.camera );
var pars = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBFormat };
postprocessing.rtTextureDepth = new THREE.WebGLRenderTarget( window.innerWidth, height, pars ); //modifier 500
postprocessing.rtTextureColor = new THREE.WebGLRenderTarget( window.innerWidth, height, pars );
var ssao_shader = new THREE.ShaderMaterial(THREE.ShaderExtras[ "ssao" ]); //modification
postprocessing.ssao_uniforms = THREE.UniformsUtils.clone( ssao_shader.uniforms );
postprocessing.ssao_uniforms[ "tDepth" ].value=1;
postprocessing.ssao_uniforms[ "tDiffuse" ].value=1;
postprocessing.ssao_uniforms[ "fogEnabled" ].value=1;
postprocessing.ssao_uniforms[ "fogFar" ].value=100;
postprocessing.ssao_uniforms[ "fogNear" ].value=0;
postprocessing.ssao_uniforms[ "onlyAO" ].value=0;
postprocessing.ssao_uniforms[ "aoClamp" ].value=0.1;
postprocessing.ssao_uniforms[ "lumInfluence" ].value=0.1;
postprocessing.materialSSAO = new THREE.ShaderMaterial( {
uniforms: postprocessing.ssao_uniforms,
vertexShader: ssao_shader.vertexShader,
fragmentShader: ssao_shader.fragmentShader
});
}
and the render function :
function render() {
renderer.clear();
// Render depth into texture
scene.overrideMaterial=material_depth;
renderer.render( scene, camera, postprocessing.rtTextureDepth, true );
// Render color into texture
scene.overrideMaterial = null;
renderer.render( scene, camera, postprocessing.rtTextureColor);
//
postprocessing.materialSSAO.uniforms[ "tDepth" ].texture=postprocessing.rtTextureDepth;
postprocessing.materialSSAO.uniforms[ "tDiffuse" ].texture=postprocessing.rtTextureColor;
postprocessing.scene.overrideMaterial = postprocessing.materialSSAO;
renderer.render( postprocessing.scene, postprocessing.camera );
}
Maybe I misunderstood something.
I don't believe you can use the SSAO shader as a material in the way that you are. Materials are combined with geometry to draw meshes, whereas the SSAO shader is meant to draw its output not on top of multiple geometries but to a screen-aligned quad.
Typically you'd use the EffectComposer class to accomplish this:
composer = new THREE.EffectComposer( renderer );
composer.addPass( new THREE.RenderPass( postprocessing.scene, postprocessing.camera ) );
Then, instead of creating a material, the SSAO is added as a shader pass to the composer and rendered to the screen:
var effect = new THREE.ShaderPass( THREE.SSAOShader );
effect.uniforms[ 'tDepth' ].value = postprocessing.rtTextureDepth;
effect.uniforms[ 'size' ].value.set( window.innerWidth, window.innerHeight );
effect.uniforms[ 'cameraNear' ].value = postprocessing.camera.near;
effect.uniforms[ 'cameraFar' ].value = postprocessing.camera.far;
effect.renderToScreen = true;
composer.addPass( effect );
And finally, in the render function, you use the composer to render instead of the renderer:
function render(){
scene.overrideMaterial = material_depth;
renderer.render( postprocessing.scene, postprocessing.camera, postprocessing.rtTextureDepth );
scene.overrideMaterial = null;
composer.render();
}
This also removes the need for a separate diffuse render target, since the composer takes care of that for you with the render pass.
For a complete example of SSAO without the plugin, see this one by alteredqualia: http://bit.ly/ZIPj2J
I don't understand why the lighting does not work in my code. I downloaded a simple OBJ file to test the OBJLoader, but the model isn't affected by it. Before I edited the lighting further, at least the ambient lighting would work. Maybe the OBJ model needs a texture?
var container, stats;
var camera, scene, renderer, controls;
init();
animate();
function init() {
container = document.createElement( 'div' );
document.body.appendChild( container );
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 2000 );
camera.position.z = 2.5;
scene.add( camera );
controls = new THREE.TrackballControls( camera );
controls.rotateSpeed = 2.0;
controls.zoomSpeed = 1.2;
controls.panSpeed = 0.0;
controls.noZoom = false;
controls.noPan = true;
controls.staticMoving = true;
controls.dynamicDampingFactor = 0.3;
controls.keys = [ 65, 83, 68 ];
controls.addEventListener( 'change', render );
var ambient = new THREE.AmbientLight( 0x020202 );
scene.add( ambient );
directionalLight = new THREE.DirectionalLight( 0xffffff );
directionalLight.position.set( 1, 1, 0.5 ).normalize();
scene.add( directionalLight );
pointLight = new THREE.PointLight( 0xffaa00 );
pointLight.position.set( 0, 0, 0 );
scene.add( pointLight );
sphere = new THREE.SphereGeometry( 100, 16, 8 );
lightMesh = new THREE.Mesh( sphere, new THREE.MeshBasicMaterial( { color: 0xffaa00 } ) );
lightMesh.scale.set( 0.05, 0.05, 0.05 );
lightMesh.position = pointLight.position;
scene.add( lightMesh );
var loader = new THREE.OBJLoader();
loader.load( "originalMeanModel.obj", function ( object ) {
scene.add( object );
} );
renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
container.appendChild( renderer.domElement );
}
function animate() {
requestAnimationFrame( animate );
controls.update();
}
function render() {
camera.lookAt( scene.position );
renderer.render( scene, camera );
}
MeshBasicMaterial in three.js is like a toon shader (good for silhouettes, shadow drawing or wireframe) and is not affected by lights.
Try MeshLambertMaterial or MeshPhongMaterial.
I had similar problems when using the three.js exporter for Blender: everything appeared dark even with diffuse colors set in the original Blender model and an ambient light added to the scene in the three.js code. It turned out the fix was to edit part of the converted model file; there was a line to the effect of:
"colorAmbient" : [0, 0, 0]
which I manually changed to
"colorAmbient" : [0.75, 0.75, 0.75]
everywhere it appeared, and that fixed the problem. I bring this up because my best guess is that you are experiencing a similar issue. Without seeing the *.obj file it is difficult to diagnose exactly, but perhaps in your model settings you could try changing the ambient color value rather than, say, the diffuse color value, which is what we normally think of when assigning color to a model.
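If editing the file by hand gets tedious, the same fix can be scripted. Here is a sketch that patches every material in a parsed model object (the model literal below is a made-up stand-in; in practice you would JSON-parse the exported file):

```javascript
// Patch every material's colorAmbient in a converted three.js JSON
// model so surfaces respond to ambient light. The diffuse color is
// left untouched.
var model = {
  materials: [
    { colorAmbient: [0, 0, 0], colorDiffuse: [1, 0, 0] }
  ]
};

model.materials.forEach(function (mat) {
  if (mat.colorAmbient) {
    mat.colorAmbient = [0.75, 0.75, 0.75];
  }
});

console.log(model.materials[0].colorAmbient); // [ 0.75, 0.75, 0.75 ]
```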
Maybe this will help if you are experiencing the same problem I had a few days ago: if you have no normals in your OBJ, that's definitely somewhere to look.
You can also try starting with a MeshBasicMaterial just to check that the vertices/faces are OK: new THREE.MeshBasicMaterial( { color: 0x999999, wireframe: true, transparent: true, opacity: 0.85 } )
Also, as mrdoob said, please consider sharing the OBJ you're loading.