I have two scenes: scene1 and scene2. I have two cameras: camera1 and camera2
scene1 is the background and uses camera1
scene2 simulates a head-up display and uses camera2 (so changes to camera1 FOV do not affect element positions)
I can render (composite) scene2 on top of scene1 just fine when StereoEffect is not used.
However, I cannot render scene2 on top of scene1 when using the StereoEffect.
The StereoEffect will only render a single scene with a single camera; it apparently cannot render (composite) multiple scenes on top of each other. I have also tried creating multiple renderers, such as renderer2, which I passed to the StereoEffect (stereo = new THREE.StereoEffect(renderer2)), but this did not work either.
Thoughts ?
I understand you want to render once across the whole canvas for your main scene, and once at the bottom-center for your head-up display. THREE.StereoEffect indeed takes only one scene and one camera: it was written to produce a stereo effect, not for what you describe.
It uses scissors to render once on the left, then once on the right part of the canvas. Scissors let you restrict the area of the canvas you are drawing into, as you can see in the source of THREE.StereoEffect:
renderer.setScissor( 0, 0, _width, _height );
renderer.setViewport( 0, 0, _width, _height );
renderer.render( scene, _cameraL );
renderer.setScissor( _width, 0, _width, _height );
renderer.setViewport( _width, 0, _width, _height );
renderer.render( scene, _cameraR );
The four parameters of the setScissor and setViewport methods are start abscissa (from left), start ordinate (from bottom), width, height (all parameters are pixel numbers).
Knowing this, you can use scissors for what you need, without THREE.StereoEffect. First, early in your code, enable the scissor test: renderer.setScissorTest( true ) (called renderer.enableScissorTest( true ) in older releases). Then, in your render loop, adapt the code above: set the scissor and viewport to cover the whole canvas for the first render, then to cover only the head-up area for the second.
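A minimal render-loop sketch of that idea (the HUD size and its bottom-center placement are assumptions, not from your code; the rectangle math is pulled into a plain helper so it can be checked without three.js):

```javascript
// Compute the pixel rectangle for a HUD centered at the bottom of the canvas.
// Pure math, no three.js involved.
function hudRect(canvasWidth, canvasHeight, hudWidth, hudHeight) {
  return {
    x: Math.round((canvasWidth - hudWidth) / 2), // centered horizontally
    y: 0,                                        // flush with the bottom
    width: hudWidth,
    height: hudHeight
  };
}

// Hypothetical render loop: `renderer`, `scene1`/`camera1` (background) and
// `scene2`/`camera2` (HUD) are assumed to exist as in the question.
function renderFrame(renderer, scene1, camera1, scene2, camera2) {
  const w = renderer.domElement.width;
  const h = renderer.domElement.height;

  renderer.setScissorTest(true); // enableScissorTest(true) in older builds

  // First pass: background over the whole canvas.
  renderer.setScissor(0, 0, w, h);
  renderer.setViewport(0, 0, w, h);
  renderer.render(scene1, camera1);

  // Second pass: HUD in a bottom-center rectangle, drawn on top.
  // With the scissor test on, clearing only touches this rectangle.
  const r = hudRect(w, h, w / 3, h / 4);
  renderer.setScissor(r.x, r.y, r.width, r.height);
  renderer.setViewport(r.x, r.y, r.width, r.height);
  renderer.render(scene2, camera2);
}
```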
I was reading the "Drawing lines" tutorial part on the three.js documentation as shown in the Picture below...
This is the code used to demonstrate drawing lines. The code itself is fine.
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>My first three.js app</title>
<style>
body { margin: 0; }
</style>
</head>
<body>
<script src="///C:/Users/pc/Desktop/threejs_tutorial/build_threejs.html"></script>
<script>
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
camera.position.set( 0, 0, 100 );
camera.lookAt( 0, 0, 0 );
const renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
//create a blue LineBasicMaterial
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );
const points = [];
points.push( new THREE.Vector3( - 10, 0, 0 ) );
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) );
const geometry = new THREE.BufferGeometry().setFromPoints( points );
const line = new THREE.Line( geometry, material );
scene.add( line );
renderer.render( scene, camera );
</script>
</body>
</html>
Let's go over the commands used in the creation of lines as suggested by the three.js documentation.
One by one
First line: the command
const scene = new THREE.Scene()
It says "create scene" but what it really does is create a 3D space, as shown in Picture 1.
Second line: the command
const camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
It says create a camera, given the field of view, aspect ratio, distance from the camera to the near clipping plane, and distance from the camera to the far clipping plane, as shown in Picture 2.
Third line: the command
camera.position.set( 0, 0, 100 );
It says position the camera on the z-axis, as shown in Picture 4. I assume that the orientation of the camera is always parallel to the x-axis.
Fourth line: the command
camera.lookAt( 0, 0, 0 );
It says orient the camera in the direction of the point (0, 0, 0) as shown in Picture 5.
Fifth line: the command
const renderer = new THREE.WebGLRenderer();
It says the WebGL renderer shall be ready when it is summoned by calling its name, as shown in Picture 6.
Sixth line: the command
renderer.setSize( window.innerWidth, window.innerHeight );
It says the window the user sees will be adjusted by the renderer. This window is your computer screen: whatever its size, the renderer will adjust accordingly, as shown in Picture 7.
Seventh line: the command
document.body.appendChild( renderer.domElement );
It says the renderer now exists in 3D space shown in Picture 8.
Eighth line: the command
const material = new THREE.LineBasicMaterial( { color: 0x0000ff } );
It says we have to set the properties of the future line first before actually drawing it. In this case, this future line will have a color of blue as shown in Picture 9.
Ninth line: the command
const points = [];
I don't know the purpose of the array in this context. I guess whatever is inside the array will become "real", something that can be put inside the viewing frustum of the camera, where it will be rendered in the near future, as shown in Picture 10.
Tenth line: the command
points.push( new THREE.Vector3( - 10, 0, 0 ) );
points.push( new THREE.Vector3( 0, 10, 0 ) );
points.push( new THREE.Vector3( 10, 0, 0 ) );
It says position the points specified by the Vector3 command, and these points will be pushed into the array as three points positioned in 3D space. The points don't appear yet because the renderer hasn't been summoned, as shown in Picture 11.
Eleventh line: the command
const geometry = new THREE.BufferGeometry().setFromPoints( points );
It says the points positioned by the Vector3 will be converted into renderable form by the BufferGeometry, because a BufferGeometry is a representation of mesh, line, or point geometry, as shown in Picture 12.
Twelfth line: the command
const line = new THREE.Line( geometry, material );
It says the line will be created based on the geometry and material set beforehand, as shown in Picture 13.
Thirteenth line: the command
scene.add( line );
It says the line has been added to the 3D space inside the viewing frustum of the camera, as shown in Picture 14.
Fourteenth line: the command
renderer.render( scene, camera );
It says the renderer has been ordered to render the scene and the camera. But I wondered why the command isn't
renderer.render( scene, camera, line );
....
The final output looked like this:
My question is:
Is anything I've said correct?
Thank you! I'm open to learning and dispelling myths surrounding the commands used in this example.
You've made a few wrong assumptions:
camera.position.set( 0, 0, 100 ); puts the camera 100 units along the z-axis, still looking down the z-axis (not the x-axis) because it hasn't been rotated.
document.body.appendChild( renderer.domElement ); adds the <canvas> element to your HTML document. It's one of the few commands that have nothing to do with 3D space.
renderer.render(scene, cam) renders everything that's been added to the scene. You already added the lines with scene.add(line), so there's no reason to specifically target line again.
Regarding "the renderer now exists in 3D space shown in Picture 8":
Some of your screenshots use different axis systems. To get acquainted with the Three.js/WebGL coordinate system, I recommend you visit the Three.js editor and add a camera with Add > PerspectiveCamera (near the bottom of the menu). You can then modify its position attributes to see what each axis does. Also keep an eye on the axes widget in the corner:
x-axis: +right / -left
y-axis: +up / -down
z-axis: +toward user / -away
Bravo! I wish I had this when I was first learning!
To understand the 9th and 10th lines, I think it's best to understand a bit of history...
At one time, you were able to use a Geometry object:
let geometry = new THREE.Geometry()
geometry.vertices.push(new THREE.Vector3(-10, 0, 0))
geometry.vertices.push(new THREE.Vector3(0, 10, 0))
geometry.vertices.push(new THREE.Vector3(10, 0, 0))
So you're adding these points to the vertices attribute of the geometry. Which looks like this:
[{x:-10, y:0, z:0}, {x:0, y:10, z:0}, {x:10, y:0, z:0}]
Eventually they did away with Geometry, so now you're supposed to use a BufferGeometry. But with BufferGeometry, there is no more vertices attribute.
So what do we do...?
Well, now we create a points array, and use the setFromPoints function to basically apply these coordinates to your geometry object:
const points = [];
points.push(new THREE.Vector3(-10, 0, 0));
points.push(new THREE.Vector3(0, 10, 0));
points.push(new THREE.Vector3(10, 0, 0));
let geometry = new THREE.BufferGeometry().setFromPoints( points );
What this does is set a position attribute backed by a Float32Array:
geometry.attributes.position.array
Which looks like this:
[ -10, 0, 0, 0, 10, 0, 10, 0, 0 ]
And if you do:
geometry.attributes.position.count
You get: 3. Three points.
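The flattening that setFromPoints performs can be sketched without three.js at all, using plain {x, y, z} objects (this is an illustration of the idea, not the library's actual code):

```javascript
// Mimic what BufferGeometry.setFromPoints does for the position attribute:
// each point contributes three consecutive floats to one flat typed array.
function flattenPoints(points) {
  const array = new Float32Array(points.length * 3);
  points.forEach((p, i) => {
    array[i * 3 + 0] = p.x;
    array[i * 3 + 1] = p.y;
    array[i * 3 + 2] = p.z;
  });
  return { array, itemSize: 3, count: points.length };
}

const position = flattenPoints([
  { x: -10, y: 0, z: 0 },
  { x: 0, y: 10, z: 0 },
  { x: 10, y: 0, z: 0 }
]);
// position.array -> Float32Array [-10, 0, 0, 0, 10, 0, 10, 0, 0]
// position.count -> 3
```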
Hope it helps :)
I have a three.js animation of a person running. I have embedded this in an iframe on my site; however, the character runs off the screen.
I am very happy with the positioning and the camera angle, I just need to move it right so that the character is centred in the iFrame.
Below is the code I am using.
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(30, window.innerWidth / window.innerHeight, 1, 4000);
camera.position.set(0, 150, 50);
camera.position.z = cz;
camera.zoom = 3.5;
camera.updateProjectionMatrix();
scene.add(camera);
You could use the camera.lookAt() method, which will point the camera towards the desired position.
// You could set a constant vector
var targetPos = new THREE.Vector3(50, 25, 0);
camera.lookAt(targetPos);
// You could also do it in the animation loop
// if the position will change on each frame
function update() {
person.position.x += 0.5;
camera.lookAt(person.position);
renderer.render(scene, camera);
}
I feel like the lookAt() method wouldn't work here: it will just rotate the camera, and you specified you like the camera placement/angle.
If you want to move the camera to the right along with your model, set the camera's position.x equal to model.x every frame (assuming left/right is still the x-axis).
person.position.x += 0.5;
camera.position.x = person.position.x;
Alternatively, you could keep the object and camera static and move the ground plane. Or even have a rotating cylinder with a big enough radius flipped on its side.
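The follow behaviour itself is just copying one coordinate per frame, which can be sketched with plain position objects (a toy model of the idea; no three.js needed to see it work):

```javascript
// Toy model of the per-frame follow: move the person, then copy their x
// so the camera pans sideways while keeping its own height and distance.
function followUpdate(person, camera, speed) {
  person.position.x += speed;
  camera.position.x = person.position.x;
}

const person = { position: { x: 0, y: 0, z: 0 } };
const camera = { position: { x: 0, y: 150, z: 50 } };

for (let frame = 0; frame < 10; frame++) {
  followUpdate(person, camera, 0.5);
}
// After 10 frames at 0.5 units/frame: camera.position.x -> 5,
// while camera.position.y and .z are untouched.
```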
If I have a shader that discards (or makes otherwise transparent) portions of a mesh, this (understandably) does not affect the behavior of the raycasting. It should be possible to sample the Z buffer to obtain raycast positions, though of course we'd have other side-effects such as no longer being able to get any data about which object was "found".
Basically though if we can do a "normal" raycast, and then have the ability to do a z-buffer check, we can then start combing through the complete set of raycast intersections to find out the one that really corresponds to the thing we clicked that we're looking at...
It's unclear if it is possible to sample the Z buffer with three.js. Is it possible at all with WebGL?
No, Raycaster cannot sample the depth buffer.
However, you can use another technique referred to as "GPU-Picking".
By assigning a unique color to each object, you can figure out which object was selected. You can use a pattern like this one:
//render the picking scene off-screen
//(note: in newer three.js this signature was removed; call renderer.setRenderTarget( pickingTexture ),
// then renderer.render( pickingScene, camera ), then renderer.setRenderTarget( null ))
renderer.render( pickingScene, camera, pickingTexture );
//create buffer for reading single pixel
var pixelBuffer = new Uint8Array( 4 );
//read the pixel under the mouse from the texture
renderer.readRenderTargetPixels(pickingTexture, mouse.x, pickingTexture.height - mouse.y, 1, 1, pixelBuffer);
//interpret the pixel as an ID
var id = ( pixelBuffer[0] << 16 ) | ( pixelBuffer[1] << 8 ) | ( pixelBuffer[2] );
var data = pickingData[ id ];
renderer.render( scene, camera );
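The ID round-trip at the heart of this (encode an integer ID as an RGB color for the picking material, then decode the pixel read back) is plain bit math and can be verified on its own; `idToRGB` is a hypothetical helper name, not part of three.js:

```javascript
// Encode an object ID into the 8-bit R, G, B channels of a picking color.
function idToRGB(id) {
  return {
    r: (id >> 16) & 0xff,
    g: (id >> 8) & 0xff,
    b: id & 0xff
  };
}

// Decode the pixel read back from the picking render target
// (the same expression as in the snippet above).
function rgbToId(pixelBuffer) {
  return (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
}

const color = idToRGB(70000); // IDs above 255 spill into the G and R channels
const pixel = new Uint8Array([color.r, color.g, color.b, 255]);
// rgbToId(pixel) -> 70000
```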
See these three.js examples:
http://threejs.org/examples/webgl_interactive_cubes_gpu.html
http://threejs.org/examples/webgl_interactive_instances_gpu.html
three.js r.84
I have a parent container called 'cubeAndLabelContainer' and a cube and a label (sprite) inside it.
The problem is that the pivot location of the label is different from the cube's, even though the mesh and the sprite are both part of the 'cubeAndLabelContainer':
var cubeAndLabelContainer;
function initObjects()
{
cubeAndLabelContainer = new THREE.Object3D();
var cube = _createCube();
var label = _createLabel();
cube.position.set( 0, 0, 0 );
label.position.set(0, 20, 0 );
cubeAndLabelContainer.position.set( 0, 0, 0 );
cubeAndLabelContainer.add( cube );
cubeAndLabelContainer.add( label );
scene.add( cubeAndLabelContainer );
}
The label should rotate around the cube, but instead they appear like two separate objects rotating at the same rate.
http://codepen.io/anon/pen/pRpgPp
If I add more meshes to the 'cubeAndLabelContainer', their rotation and location are perfectly fine and aligned correctly relative to each other. But when I do the same with sprites, the pivot is different for some reason.
Any suggestions please?
Is there a way to setup the Three.js renderer in such a way that the lookat point of the camera is not in the center of the rendered image?
To clarify: imagine a scene with just one 1x1x1 m cube at ( 0, 0, 0 ). The camera is located at ( 0, 0, 10 ) and the lookat point is at the origin, coinciding with the center of the cube. If I render this scene as is, I might end up with something like this:
normal render
However I'd like to be able to render this scene in such a way that the lookat point is in the upper left corner, giving me something like this:
desired render
If the normal image is 800x600, then the result I envision would be as if I rendered a 1600x1200 image with the lookat in the center and then cropped that normal image so that only the lower right part remains.
Of course, I can change the lookat to make the cube go to the upper left corner, but then I view the cube under an angle, giving me an undesired result like this:
test.moobels.com/temp/cube_angle.jpg
I could also actually render the full 1600x1200 image and hide 3/4 of the image, but one would hope there is a more elegant solution. Does anybody know it?
If you want your perspective camera to have an off-center view, the pattern you need to use is:
camera = new THREE.PerspectiveCamera( fov, aspect, near, far );
camera.setViewOffset( fullWidth, fullHeight, viewX, viewY, viewWidth, viewHeight );
See the docs: https://threejs.org/docs/#api/cameras/PerspectiveCamera
You can find examples of this usage in this example and this example.
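For the "crop of a double-size render" described in the question, the offset arithmetic is simple. A sketch (the scale factor of 2 and the lower-right crop match the 1600x1200 example; the helper name and usage are assumptions):

```javascript
// Compute setViewOffset arguments that show only the lower-right portion
// of a virtual render that is `scale` times larger than the canvas,
// which places the lookat point in the upper-left of the visible image.
function lowerRightOffset(width, height, scale) {
  return {
    fullWidth: width * scale,
    fullHeight: height * scale,
    viewX: width * (scale - 1),  // skip everything left of the visible column
    viewY: height * (scale - 1), // skip everything above the visible row
    viewWidth: width,
    viewHeight: height
  };
}

// Hypothetical usage with a three.js PerspectiveCamera:
// const o = lowerRightOffset(800, 600, 2);
// camera.setViewOffset(o.fullWidth, o.fullHeight,
//                      o.viewX, o.viewY, o.viewWidth, o.viewHeight);

const o = lowerRightOffset(800, 600, 2);
// o -> { fullWidth: 1600, fullHeight: 1200, viewX: 800, viewY: 600,
//        viewWidth: 800, viewHeight: 600 }
```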
three.js r.73
Here's a simple solution:
Assuming your cube is 4 x 4 x 4, at position 0, 0, 0:
var geometry = new THREE.BoxGeometry( 4, 4, 4 );
var material = new THREE.MeshBasicMaterial( { color: 0x777777 } );
var cube = new THREE.Mesh( geometry, material );
cube.position.set( 0, 0, 0 );
Get cube's position:
var Vx = cube.position.x,
Vy = cube.position.y,
Vz = cube.position.z;
Then subtract 2 from the x position, add 2 to the y and z positions, and use the values to create a new Vector3:
var newVx = Vx - 2,
    newVy = Vy + 2,
    newVz = Vz + 2;
var xyz = new THREE.Vector3(newVx, newVy, newVz);
Then camera lookAt:
camera.lookAt(xyz);
Using console.log, it would show that the camera is now looking at ( -2, 2, 2 ), the upper-left front corner of your cube.
console.log(xyz);
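The corner arithmetic generalizes: for a cube of side s centered at point c, the upper-left front corner is c offset by half a side along each axis with the right signs. A tiny sketch (the helper name is made up for illustration):

```javascript
// Upper-left front corner of an axis-aligned cube: half a side to the
// left (-x), up (+y), and toward the viewer (+z) from its center.
function upperLeftFrontCorner(center, side) {
  const h = side / 2;
  return { x: center.x - h, y: center.y + h, z: center.z + h };
}

const target = upperLeftFrontCorner({ x: 0, y: 0, z: 0 }, 4);
// target -> { x: -2, y: 2, z: 2 }, the point the answer aims the camera at
```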