I'm trying to figure out how to use the EffectComposer in Three.js to apply motion blur to a mesh while keeping the background mesh sharp.
The only way I was sort of able to get it to work was by losing depth, with the background overlapping the front elements:
http://code.michael-iriarte.com/post-process-test/test-1.html
But I'd like to be able to render something more like this (but with the motion blur):
http://code.michael-iriarte.com/post-process-test/test-2.html
See the pass setup for the three composers below:
// back composer: the sharp background render
composerBack.addPass( renderBack );

// front composer: the foreground render, blurred through an inverse mask
composerFront.addPass( renderFront );
composerFront.addPass( renderMaskInverseFront );
composerFront.addPass( effectHBlur );
composerFront.addPass( effectVBlur );
composerFront.addPass( clearMask );

// merge composer: composites the two render-target textures
composerMerge.addPass( rttPassBack );
// composerMerge.addPass( renderMaskInverseBack );
composerMerge.addPass( renderMaskFront );
composerMerge.addPass( rttPassFront );
composerMerge.addPass( clearMask );
composerMerge.addPass( effectCopy );
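For reference, the passes above are created roughly like this (a sketch; the render-target parameters and blur-shader uniforms follow the standard three.js postprocessing examples, and width/height stand for the canvas size):

var params = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, stencilBuffer: true }; // stencil buffer is required for MaskPass
var composerFront = new THREE.EffectComposer( renderer, new THREE.WebGLRenderTarget( width, height, params ) );
var renderFront = new THREE.RenderPass( sceneFront, camera );
var renderMaskInverseFront = new THREE.MaskPass( sceneFront, camera );
renderMaskInverseFront.inverse = true;
var effectHBlur = new THREE.ShaderPass( THREE.HorizontalBlurShader );
effectHBlur.uniforms[ 'h' ].value = 1 / width;
var effectVBlur = new THREE.ShaderPass( THREE.VerticalBlurShader );
effectVBlur.uniforms[ 'v' ].value = 1 / height;
var clearMask = new THREE.ClearMaskPass();
var effectCopy = new THREE.ShaderPass( THREE.CopyShader );
effectCopy.renderToScreen = true;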
Some help on the topic would be very welcome!
Well, this is a bit weird, talking to myself on Stack Overflow :)
All I was missing was:
rttPassFront = new THREE.TexturePass( composerFront.renderTarget2.texture );
rttPassFront.material.transparent = true;
See a demo here:
http://code.michael-iriarte.com/post-process-test/solution-1.html
I hope it helps others. If you have a different approach, please share!
I am developing a sample application in which I am using vtkPlanes to crop the surface-rendered output.
The changes show up clearly in the vtkRenderWindow, but when I convert the result to an STL file the cropping is not saved; instead, the 3D object from before the cropping is written.
Here is my code:
mapper->SetInputConnection( surfaceRenderedOutput->GetOutputPort() );
mapper->AddClippingPlane( plane6 );
mapper->AddClippingPlane( plane1 );
mapper->AddClippingPlane( plane2 );
mapper->AddClippingPlane( plane3 );
mapper->AddClippingPlane( plane5 );
mapper->AddClippingPlane( plane4 );
mapper->Update();
surfaceRenderedOutput->SetInputConnection( mapper->GetOutputPort() );
surfaceRenderedOutput->Update();
To write the STL I used this:
stlWriter->SetInput(surfaceRenderedOutput->GetOutput());
stlWriter->Write();
Can anyone help, please?
EDIT
I did it like this:
// clipper1 .. clipper6 are vtkClipPolyData instances
clipper1->SetClipFunction(plane1);
clipper2->SetClipFunction(plane2);
clipper3->SetClipFunction(plane3);
clipper4->SetClipFunction(plane4);
clipper5->SetClipFunction(plane5);
clipper6->SetClipFunction(plane6);
polyd1=clipper1->GetOutput();
polyd2=clipper2->GetOutput();
polyd3=clipper3->GetOutput();
polyd4=clipper4->GetOutput();
polyd5=clipper5->GetOutput();
polyd6=clipper6->GetOutput();
vtkSmartPointer<vtkAppendPolyData> appendFilter =
vtkSmartPointer<vtkAppendPolyData>::New();
appendFilter->SetNumberOfInputs(6);
appendFilter->AddInput(polyd1);
appendFilter->AddInput(polyd2);
appendFilter->AddInput(polyd3);
appendFilter->AddInput(polyd4);
appendFilter->AddInput(polyd5);
appendFilter->AddInput(polyd6);
appendFilter->Update();
stlWriter->SetInput(appendFilter->GetOutput());
Still, I'm not getting the right output.
The mapper doesn't do anything to the surfaceRenderedOutput. The output of the mapper presumably goes to a vtkActor which then goes to the vtkRenderWindow.
You want the output of the mapper. What is its type?
For the vtkClipPolyData object there is a GetClippedOutput method which should allow you to get the clipped mesh.
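Also, in your edit the clippers never get an input and are never updated, so their outputs stay empty. A sketch of one way to wire it, chaining the clippers in series so all six planes apply to the same mesh (VTK 5 style Set/GetInput to match your code; whether you need InsideOutOn() depends on the orientation of your plane normals):

clipper1->SetInput( surfaceRenderedOutput->GetOutput() );
clipper1->SetClipFunction( plane1 );
clipper1->InsideOutOn(); // keep the other half-space; drop this if your normals already point the right way

clipper2->SetInput( clipper1->GetOutput() ); // chain: clip the already-clipped mesh
clipper2->SetClipFunction( plane2 );
clipper2->InsideOutOn();

// ... repeat for clipper3 through clipper6 ...

clipper6->Update();
// write the actually clipped mesh, not the unclipped source
// (GetClippedOutput() gives the complementary piece if that is the side you want)
stlWriter->SetInput( clipper6->GetOutput() );
stlWriter->Write();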
If I set the renderTarget mapping for my cube camera to THREE.CubeRefractionMapping, it renders upside down.
_myCubeCamera.renderTarget.mapping = THREE.CubeRefractionMapping;
It seems related to the issue discussed in this post, where the default orientation of the CubeCamera's component cameras is upside down. I tried tinkering with these orientations in the THREE.js source code but only made things worse.
So, is there a correct way to use CubeRefractionMapping with CubeCamera, or a workaround?
r73
I found a workaround: When assigning the envMap to the material, use THREE.BackSide.
var sphereMaterial = new THREE.MeshBasicMaterial( {
    envMap: myCubeCamera.renderTarget,
    side: THREE.BackSide,
    refractionRatio: 0.09
} );
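For context, here is roughly how the rest of the setup looks (the near/far/resolution and sphere size are placeholders, not the exact values from my example):

var myCubeCamera = new THREE.CubeCamera( 0.1, 1000, 256 ); // near, far, cube map resolution
myCubeCamera.renderTarget.mapping = THREE.CubeRefractionMapping;
scene.add( myCubeCamera );

var sphere = new THREE.Mesh( new THREE.SphereGeometry( 10, 32, 32 ), sphereMaterial );
scene.add( sphere );

// in the render loop: hide the sphere so it doesn't render into its own env map
sphere.visible = false;
myCubeCamera.updateCubeMap( renderer, scene );
sphere.visible = true;
renderer.render( scene, camera );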
A complete example is here.
Not sure why this works, but it does, and that's what the original poster (me!) wanted.
I've tried finding an answer to this question but I couldn't. I am still very bad at WebGL and I only use Three.js to do my work. Do the Three.js mesh constructors support the use of ANGLE_instanced_arrays to do geometry instancing?
If there is browser support for ANGLE_instanced_arrays, is there a way to create a THREE.Mesh() with geometry instancing rather than relying on "pseudo instancing"?
Thanks in advance.
Yes, it does (at least in r72). There are several examples that use the ANGLE_instanced_arrays extension:
http://threejs.org/examples/#webgl_buffergeometry_instancing
http://threejs.org/examples/#webgl_buffergeometry_instancing_billboards
http://threejs.org/examples/#webgl_buffergeometry_instancing_dynamic
http://threejs.org/examples/#webgl_buffergeometry_instancing_interleaved_dynamic
It doesn't appear that way. I've been looking for an authoritative answer to this question and haven't been able to find confirmation, but given that a search for the constant "ANGLE_instanced_arrays" on GitHub yields no matches, my guess is that it is not implemented.
THREE's WebGLRenderer.supportsInstancedArrays() is now WebGLRenderer.extensions.get( 'ANGLE_instanced_arrays' ); see the DeprecatedList in the three.js wiki.
var geo = new THREE.InstancedBufferGeometry();

// src/core/InstancedBufferGeometry.js
isInstancedBufferGeometry: true,
...

// src/renderers/WebGLRenderer.js
if ( geometry && geometry.isInstancedBufferGeometry ) {
    if ( geometry.maxInstancedCount > 0 ) {
        renderer.renderInstances( geometry, drawStart, drawCount );
    }
    ...
}

// src/renderers/webgl/WebGLIndexedBufferRenderer.js
function renderInstances( geometry, start, count ) {
    var extension = extensions.get( 'ANGLE_instanced_arrays' );
    ...
}
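And a minimal usage sketch to go with it (positions, offsets, and the vs/fs shader sources are placeholders; see the webgl_buffergeometry_instancing example for working shaders):

var geometry = new THREE.InstancedBufferGeometry();
// base geometry, shared by every instance
geometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
// one vec3 per instance; the vertex shader must declare "attribute vec3 offset;"
geometry.addAttribute( 'offset', new THREE.InstancedBufferAttribute( offsets, 3 ) );
var material = new THREE.RawShaderMaterial( { vertexShader: vs, fragmentShader: fs } );
scene.add( new THREE.Mesh( geometry, material ) );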
I have some code similar to the following...
this.texture = THREE.ImageUtils.loadTexture( 'spritesheet.png' );
this.material = new THREE.MeshBasicMaterial( { map: this.texture, side:THREE.DoubleSide } );
this.geometry = new THREE.PlaneGeometry(32, 32, 1, 1);
this.sprite = new THREE.Mesh( this.geometry, this.material );
game.scene.add( this.sprite );
I've also tried something along the lines of...
this.material = new THREE.SpriteMaterial( {
map: image,
useScreenCoordinates: true,
alignment: THREE.SpriteAlignment.center
} );
this.sprite = new THREE.Sprite( this.material );
These display the full spritesheet (sort of), as I would expect without further settings.
How do I align the sprite so it displays only, say, 32x32px starting at offset 50,60? The three.js documentation doesn't seem to have much information, and the examples I've seen tend to use one image per sprite (which may be preferable, or the only way possible?).
Edit: I've spotted a material uvOffset and uvScale that I suspect are related to alignment in a Sprite object, if anyone knows how these work. Will dig further.
Well, there are uvOffset and uvScale parameters in SpriteMaterial; I think you could use those, but I cannot present any source code for them.
What you can of course do is use a PlaneGeometry and calculate UV coordinates for the two triangles (the plane). The top-left corner is your offset, and the bottom-right corner is calculated from the offset and the size (32x32), dividing by the whole image size in pixels so the UV coordinates land between 0 and 1.
For example, the top-left is (50/imageWidth, 60/imageHeight) and the bottom-right is ((50+32)/imageWidth, (60+32)/imageHeight). I think this should work, although I am not quite sure you would get exactly the result you want, as OpenGL treats images as upside down. But you can try and go on from there. Hope this helps.
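Alternatively, the texture's own offset/repeat transform should do the same UV remapping without touching the geometry. A sketch, assuming a 256x256 spritesheet (use your actual image size):

var texture = THREE.ImageUtils.loadTexture( 'spritesheet.png' );
var sheetW = 256, sheetH = 256; // assumed sheet dimensions in pixels
// show a 32x32 region whose top-left pixel is at (50, 60)
texture.repeat.set( 32 / sheetW, 32 / sheetH );
// v runs bottom-up in UV space, so convert the offset measured from the top of the image
texture.offset.set( 50 / sheetW, 1 - ( 60 + 32 ) / sheetH );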
I was very excited when I first saw the webgl_geometry_minecraft_oculusrift example on the mrdoob/three.js GitHub site. Undoubtedly, it's pretty awesome!
But I'm curious how to apply this effect to other examples, so I tried to implement it in webgl_interactive_cubes. However, the result was worse than expected.
My problem is that I can't accurately align the cursor with a particular cube to make it change color; it seems to be a problem with the projection function. So I adjusted the screen width coefficient, like this
window.innerWidth * 2
in the whole program, but that still did not fix the problem.
To summarize my issue:
If I want to apply the Oculus Rift effect to any example, how should I do it? By the way, I only added the following code
effect = new THREE.OculusRiftEffect( renderer );
effect.setSize( window.innerWidth, window.innerHeight );
// Right Oculus Parameters are yet to be determined
effect.separation = 20;
effect.distortion = 0.1;
effect.fov = 110;
in the init() block, and finally added effect.render( scene, camera ); in render().
I am very curious to know how
var vector = new THREE.Vector3( mouse.x, mouse.y, 1 );
projector.unprojectVector( vector, camera );
works. Why do I need to pass 1 as the third parameter? What happens if I change mouse.x to mouse.x * 2?
Do I need dual monitors to fully present this effect?
Note: My English is not very good; if my description is unclear, please ask and I will respond as soon as possible.
This is my DEMO link:
http://goo.gl/VCKyP
http://goo.gl/xuIhr
http://goo.gl/WjqC0
My Folder : https://googledrive.com/host/0B7yrjtQvNRwoYVQtMUc4M1ZZakk/
The third one is your example, right?
This can help you use the OR effect in an easier way:
https://github.com/carstenschwede/RiftThree
And your examples all work; only the third one has a problem with the controls. If I start the drag from the Stats DIV (FPS), it works.