Three.js StereoEffect displays meshes across 2 eyes

I have a THREE.js scene using the StereoEffect renderer. However, when I add new meshes to the scene, they are displayed across the two eyes instead of being duplicated for each eye. I believe THREE.js is supposed to handle this automatically, so I don't have to duplicate them myself? (I tried duplicating them, but it is a lot of annoying calculation and I could not manage it.)
My meshes are actually transparent planes, and I add a DOM element on top of them to get a flat display.
Illustrative example

OK, I finally found it! I tried putting a (non-invisible) texture back on my meshes and discovered the problem.
When we use StereoEffect and see our mesh duplicated in both views, it is actually an illusion: THREE.js draws an image in each eye, but the actual object is invisible, positioned exactly in the middle between the two images!
See image here: explanation
If you use a raycaster, for instance, it will tell you there is no intersection where you see the mesh, but that there is one at the midpoint of the line from the left image to the right image! The same goes for mesh.position.
So what I did was keep an invisible texture and create two div tags placed symmetrically around the mesh position:
var middleX = window.innerWidth / 2; // half of the viewport width (the boundary between the two eyes)

// left div
if (this.element.id.indexOf("-2") == -1) {
    var posL = coords2d.x - middleX / 2;
    this.element.style.left = posL + 'px';
    // Hide if it gets onto the right half of the screen
    if (posL > middleX) this.element.style.display = 'none';
}
// right div
else {
    var posR = coords2d.x + middleX / 2;
    this.element.style.left = posR + 'px';
    // Hide if it gets onto the left half of the screen
    if (posR < middleX) this.element.style.display = 'none';
}
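For reference, coords2d above is the mesh's projected 2D screen position; the snippet does not show how it is obtained. Here is a minimal sketch of how it might be computed, assuming the usual camera and renderer globals and a helper name I made up:
var toScreenCoords = function (mesh, camera, renderer) {
    // Hypothetical helper: project the mesh's world position to pixel coordinates.
    var vector = new THREE.Vector3();
    vector.setFromMatrixPosition(mesh.matrixWorld);
    vector.project(camera); // now in normalized device coordinates, range [-1, 1]
    var halfWidth = renderer.domElement.clientWidth / 2;
    var halfHeight = renderer.domElement.clientHeight / 2;
    return {
        x: (vector.x * halfWidth) + halfWidth,
        y: -(vector.y * halfHeight) + halfHeight
    };
};
var coords2d = toScreenCoords(mesh, camera, renderer);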
That gives the illusion that my mesh is there, but it is just empty divs.
Then, to check whether someone clicks on my mesh, I do the opposite: I convert back to the real position of the mesh before sending it to the raycaster!
function hasClicked(e) {
    e.preventDefault();
    var clientX, clientY;

    // GET REAL POSITION OF OBJECT
    var middleX = window.innerWidth / 2; // half of the viewport width (the boundary between the two eyes)
    // Right screen
    if (e.clientX > middleX) {
        clientX = e.clientX - middleX / 2;
    }
    // Left screen
    else {
        clientX = e.clientX + middleX / 2;
    }
    clientY = e.clientY; // Keep the same Y coordinate

    // TRANSFORM THESE SCREEN COORDS INTO THREE.JS COORDS
    var mouse = new THREE.Vector2();
    mouse.x = (clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(clientY / window.innerHeight) * 2 + 1;

    // USE RAYCASTER
    raycaster.setFromCamera(mouse, camera);
    var intersects = raycaster.intersectObjects(arrowManager.storage);
    if (intersects.length > 0) {
        // It works!
    }
}
And it works perfectly fine! Hope this can help someone else; I was getting really desperate here ;)

Related

Does the point coordinate in three.js change if the camera moves?

I'm using the raycaster function to get the coordinates of portions of a texture as a preliminary to creating areas that will link to other portions of my website. The model I'm using is hollow and I'm raycasting to the intersection with the skin of the model from a point on the interior. I've used the standard technique suggested here and elsewhere to determine the coordinates in 3d space from mouse position:
// 1. set the mouse position in a coordinate system where the center
//    of the screen is the origin
mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
console.log("mouse position: (" + mouse.x + ", " + mouse.y + ")");

// 2. set the picking ray from the camera position and mouse coordinates
raycaster.setFromCamera( mouse, camera );

// 3. compute intersections
var intersects = raycaster.intersectObjects( scene.children, true );
var intersect = null;
var point = null;
//console.log(intersects);
for ( var i = 0; i < intersects.length; i++ ) {
    console.log(intersects[i]);
    if (i === intersects.length - 1) { // keep only the last (farthest) intersection
        intersect = intersects[ i ];
        point = intersect[ "point" ];
    }
}
This works, but I'm getting inconsistent results if the camera position changes. My assumption right now is that this is because the mouse coordinates are generated from the center of the screen, and that center has changed since I moved the camera position. I know that getWorldPosition should stay consistent regardless of camera movement, but trying to call point.getWorldPosition returns "undefined". Is my thinking about why my results are inconsistent correct? And if so, and getWorldPosition really is what I'm looking for, how do I go about calling it so I can get the proper xyz coordinates for my intersect?
EDITED TO ADD:
When I target what should be the same point (or close to) on the screen I get very different results.
For example, this is my model (and forgive the janky code under the hood -- I'm still working on it):
http://www.minorworksoflydgate.net/Model/three/examples/clopton_chapel_dev.html
Hitting the upper left corner of the first panel of writing on the opposite wall (so the spot marked with the x in the picture) gets these results (you can capture them within that model by hitting C, escaping out of the pointerlock, and viewing in the console) with the camera at 0,0,0:
x: -0.1947601252025508,
y: 0.15833788110908806,
z: -0.1643094916216681
If I move in the space (so with a camera position of x: -6.140427450769398, y: 1.9021520960972597e-14, z: -0.30737391540643844) I get the following results for that same spot (as shown in the second picture):
x: -6.229400824609087,
y: 0.20157559303778091,
z: -0.5109691487471469
My understanding is that if these are the world coordinates for the intersect point, they should stay relatively similar, but that x coordinate is much different. That makes sense since it's the axis the camera moves on, but shouldn't the camera movement make no difference to the point of intersection?
My comment is not related to the camera, but I also had an issue with the raycaster, and the mouse position is calculated more accurately the following way:
const rect = renderer.domElement.getBoundingClientRect();
mouse.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
mouse.y = - ((event.clientY - rect.top) / rect.height) * 2 + 1;
So the trick, when there's no mouse available because of a pointer lock, is to use the direction of the ray created by the controls object. It's actually pretty simple, but the information isn't really out there.
var ray_direction = new THREE.Vector3();
var ray = new THREE.Raycaster(); // create once and reuse
controls.getDirection( ray_direction );
ray.set( controls.getObject().position, ray_direction );
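A minimal usage sketch of that ray, where clickableObjects is a placeholder name for whatever array of meshes you want to test:
var hits = ray.intersectObjects( clickableObjects, true );
if ( hits.length > 0 ) {
    console.log( 'hit at', hits[0].point ); // world-space intersection point
}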

How to correctly position html elements in three js coordinate system?

I hopefully have a simple problem I can't get an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I assume I have to transform the CSS coordinates into three.js coordinates. I found something I thought did something like this (note the overuse of "something" to signify I am probably mistaken).
I can add an HTML element to the same scene/camera as the WebGL renderer, obviously using a different renderer, but I am unsure how to proceed from there.
Basically I want to know:
How should I change the size of the div, preserving aspect ratio if need be?
In essence I want the div or element to fill the screen at some camera depth.
How do I place the div at the centre of the scene by default?
Mine seems to be shifted 1000 in the z direction, but this might be the size of the div (img) which I have to bring into view.
How do I draw a line between the WebGL sphere and the HTML div/img?
Thanks in advance!
Unfortunately, you have asked three questions; it is tricky to address them all at once.
I will explain how to position a DIV element on top of a 3D object. My example is a tooltip that appears when you hover over the object with the mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started.
First of all, you need to subscribe to mouse events and convert the 2D mouse coordinates to relative coordinates on the viewport. You will find this very well explained here: Get mouse clicked point's 3D coordinate in three.js
Having the 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();

function handleManipulationUpdate() {
    // cleanup previous results, mouse moved and they're obsolete now
    latestMouseIntersection = undefined;
    hoveredObj = undefined;

    raycaster.setFromCamera(mouse, camera);
    {
        var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
        if (intersects.length > 0) {
            // keep point in 3D for next steps
            latestMouseIntersection = intersects[0].point;
            // remember what object was hovered, as we will need to extract tooltip text from it
            hoveredObj = intersects[0].object;
        }
    }

    ... // do anything else
    //with some conditions it may show or hide tooltip
    showTooltip();
}
// Following two functions will convert mouse coordinates
// from screen to three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
    coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
    coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}

function onMouseMove(event) {
    updateMouseCoords(event, mouse);
    handleManipulationUpdate();
}

window.addEventListener('mousemove', onMouseMove, false);
And finally, the most important part: DIV element placement. To understand the code, it is essential to get comfortable with the Vector3.project method.
The sequence of calculations is as follows:
Get the 2D mouse coordinates,
Raycast the object and remember the 3D coordinate of the intersection (if any),
Project the 3D coordinate back into 2D (this step may seem redundant here, but what if you want to trigger an object's tooltip programmatically? You won't have mouse coordinates),
Mess around to place the DIV centered above the 2D point, with a nice margin.
// This will move the tooltip to the current mouse position and show it by timer.
function showTooltip() {
    var divElement = $("#tooltip");

    // element found and mouse hovers some object?
    if (divElement && latestMouseIntersection) {
        // hide until tooltip is ready (prevents some visual artifacts)
        divElement.css({
            display: "block",
            opacity: 0.0
        });

        //!!! === IMPORTANT ===
        // DIV element is positioned here
        var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
        var canvasHalfHeight = renderer.domElement.offsetHeight / 2;

        // project the stored 3D intersection point back to 2D
        var tooltipPosition = latestMouseIntersection.clone().project(camera);
        tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
        tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;

        var tooltipWidth = divElement[0].offsetWidth;
        var tooltipHeight = divElement[0].offsetHeight;

        divElement.css({
            left: `${tooltipPosition.x - tooltipWidth/2}px`,
            top: `${tooltipPosition.y - tooltipHeight - 5}px`
        });

        // get text from the hovered object (we store it in .userData)
        divElement.text(hoveredObj.userData.tooltipText);

        divElement.css({
            opacity: 1.0
        });
    }
}
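For completeness, the code above assumes each hoverable object carries its text in userData and is listed in tooltipEnabledObjects. A minimal sketch of how such an object might be registered (geometry, color and text are just placeholders):
var tooltipEnabledObjects = [];
var box = new THREE.Mesh(
    new THREE.BoxGeometry(1, 1, 1),
    new THREE.MeshBasicMaterial({ color: 0x2194ce })
);
box.userData.tooltipText = "I am a box"; // read back by showTooltip()
scene.add(box);
tooltipEnabledObjects.push(box); // makes it raycastable in handleManipulationUpdate()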

Three.js raycaster intersection with sprites is completely off to the left

I have sprites with text on screen, placed in a spherical pattern, and I want to allow the user to click on individual words and highlight them.
Now the problem is that when I do raycasting and call raycaster.intersectObjects(), it returns sprites that are somewhere completely different from where the click happened (usually it highlights words that are to the left of the words clicked). For debugging purposes I actually drew the rays from the raycaster object, and they go pretty much right through the words I clicked.
In this picture I clicked the words "emma", "universe" and "inspector legrasse", but the words that got highlighted are marked in red; I also rotated the camera so we can see the lines.
here is the relevant code:
Sprite creation:
var canvas = document.createElement('canvas');
var context = canvas.getContext('2d');
canvas.height = 256;
canvas.width = 1024;
...
context.fillStyle = "rgba(0, 0, 0, 1.0)";
context.fillText(message, borderThickness, fontsize + borderThickness);
var texture = new THREE.Texture(canvas);
texture.needsUpdate = true;
var spriteMaterial = new THREE.SpriteMaterial({map: texture});
var sprite = new THREE.Sprite(spriteMaterial);
sprite.scale.set(400, 100, 1.0);
Individual sprites are then added to the "sprites" array and then added to the scene.
Click detection:
function clickOnSprite(event) {
    mouse.x = ( event.clientX / renderer.domElement.clientWidth ) * 2 - 1;
    mouse.y = -( event.clientY / renderer.domElement.clientHeight ) * 2 + 1;
    raycaster.setFromCamera(mouse, camera);
    drawRaycastLine(raycaster);
    var intersects = raycaster.intersectObjects(sprites);
    if (intersects.length > 0) {
        intersects[0].object.material.color.set(0xff0000);
        console.log(intersects[0]);
    }
}
I am using a perspective camera with orbit controls.
I came to the conclusion that this cannot be done with sprites (at least not currently).
As suggested in the comments, I ended up using plane geometries instead of sprites.
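A rough sketch of that replacement, reusing the same canvas texture on a plane instead of a sprite (the plane size and the lookAt call are my assumptions about the original setup):
var planeMaterial = new THREE.MeshBasicMaterial({ map: texture, transparent: true, side: THREE.DoubleSide });
var wordPlane = new THREE.Mesh(new THREE.PlaneGeometry(400, 100), planeMaterial);
wordPlane.lookAt(camera.position); // roughly face the camera, like a sprite would
sprites.push(wordPlane);           // keep the same array used by clickOnSprite()
scene.add(wordPlane);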

Nearby culling in Three.js despite camera not being near face

I've run into an issue after switching to a logarithmic depth buffer in Three.js. Everything runs nicely except for nearby culling of the ground as described in the following photos:
As you can see, the camera is elevated above the ground significantly. The character box that is shown is about 2 units above the ground, and my camera is set up as such:
var WIDTH = window.innerWidth
, HEIGHT = window.innerHeight;
var VIEW_ANGLE = 70
, ASPECT = WIDTH / HEIGHT
, NEAR = 1e-6
, FAR = 9000;
var aspect = WIDTH / HEIGHT;
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.rotation.order = 'YXZ';
So my NEAR parameter is nowhere near 2, the distance between the camera and the ground. You can see in the second image that I even move up the camera with my PointerLockControls and still run into the issue.
Can anyone diagnose my issue?
I also tested my issue by seeing if this bug occurred with a static camera as well. It does.
Additionally, this problem only happens with the logarithmic depth buffer, as it doesn't happen with the default depth buffer.
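For context, the logarithmic depth buffer mentioned above is enabled with a renderer constructor flag; a minimal sketch, assuming the rest of the renderer setup is unchanged:
var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true });
renderer.setSize(WIDTH, HEIGHT);
document.body.appendChild(renderer.domElement);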
I have my camera as a child of a controls object, which is defined as follows:
controls = new THREE.PointerLockControls(camera);
controls.getObject().position.set(strtx, 50, strtz);
scene.add(controls.getObject());
camera.position.z += 2;
camera.position.y += .1;
Here's the relevant code for PointerLockControls:
var pitchObject, yawObject;
var v = new THREE.Vector3(0, 0, -1);

THREE.PointerLockControls = function (camera) {
    var scope = this;
    camera.rotation.set(0, 0, 0);

    pitchObject = new THREE.Object3D();
    pitchObject.rotation.x -= 0.3;
    pitchObject.add(camera);

    yawObject = new THREE.Object3D();
    yawObject.position.y = 10;
    yawObject.add(pitchObject);

    var PI_2 = Math.PI / 2;

    var onMouseMove = function (event) {
        if (scope.enabled === false) return;

        var movementX = event.movementX || event.mozMovementX || event.webkitMovementX || 0;
        var movementY = event.movementY || event.mozMovementY || event.webkitMovementY || 0;

        yawObject.rotation.y -= movementX * 0.002;
        pitchObject.rotation.x -= movementY * 0.002;
        pitchObject.rotation.x = Math.max( - PI_2, Math.min( PI_2, pitchObject.rotation.x ) );
    };

    this.dispose = function () {
        document.removeEventListener( 'mousemove', onMouseMove, false );
    };

    document.addEventListener( 'mousemove', onMouseMove, false );

    this.enabled = false;

    this.getObject = function () {
        return yawObject;
    };

    this.getDirection = function () {
        // assumes the camera itself is not rotated
        var rotation = new THREE.Euler(0, 0, 0, "YXZ");
        var direction = new THREE.Vector3(0, 0, -1);

        return function () {
            rotation.set(pitchObject.rotation.x, yawObject.rotation.y, 0);
            v.copy(direction).applyEuler(rotation);
            return v;
        };
    }();
};
You'll also notice that it's only the ground being culled, not the other objects.
Edit:
I've whipped up an isolated environment that shows the larger issue. In the first image, I have a flat PlaneBufferGeometry that has 400 segments for both width and height, defined by var g = new THREE.PlaneBufferGeometry(380, 380, 400, 400);. Even getting very close to the surface, no clipping is present:
However, if I provide only 1 segment, var g = new THREE.PlaneBufferGeometry(380, 380, 1, 1);, the clipping is present:
I'm not sure if this intended in Three.js/WebGL, but it seems that I'll need to do something to work around it.
I don't think this is a bug; I think this is a feature of how the depth buffer works under the different settings. Look at this example. On the right, the depth buffer can't make up its mind between the letters in "microscopic" and the sphere. This is because it has lower precision at very small scales and starts doing rounding that oscillates between one object and another, favoring draw order over z-depth.
It's always a tradeoff. If you want to avoid this issue, you can try raising the scale of your scene overall, so that the camera's 'near' is never so close to something that it gets rounded off; in other words, work in a number range that won't be rounded in the exponential model of the logarithmic z-buffer.
Another question: how is the blue defined? Maybe what you're seeing is not clipping from being too close, but confusion over whether the blue or the ground is closer. If it's just a blue box encompassing everything, you could try making it bigger and more distant from the ground.
EDIT:
Okay, this looks like it should work, so I would start looking for edge cases. What can you do to change the scene so that it does work? What can you do to make other things start breaking?
Try moving the landscape far down / far up (does the issue persist when looking up at it instead of down, and does it persist even when it's unquestionably far away?).
Try rotating the landscape.
Try changing the camera FOV.
Try changing the camera far plane.
Try changing the camera near plane from 1e-x notation to .000001, .0001, .01, .1, etc., and see what effect it has.
Console.log the camera object in your render function, and make sure that the fov, near, far, etc. are as you set them on setup and are not being overwritten and reset to defaults. Check what it prints out in Chrome's developer tools; you can browse the whole object and check position, parent name, all that stuff.
Basically I don't see a blatant mistake, so I would guess it's either something hard to spot, or it's working exactly as it should. Figure out what you can do to improve the effect or make it worse, and that will clarify a direction to go in.
A good rule of thumb for debugging is to just take things to an extreme, without trying to fix it or keep the code true to its purpose, and see in what way it breaks further or changes. Report back when you find something.
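A minimal sketch of the console.log suggestion above, dumping the live camera values on a key press so the console isn't flooded every frame (the key choice is arbitrary):
window.addEventListener('keydown', function (e) {
    if (e.key === 'p') {
        // verify nothing has overwritten the values set at setup time
        console.log('fov:', camera.fov, 'near:', camera.near, 'far:', camera.far);
        console.log('position:', camera.position, 'parent:', camera.parent);
    }
});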

How to convert world rotation to screen rotation?

I need to convert the position and rotation of a 3D object to screen position and rotation. I can convert the position easily, but not the rotation. I've attempted to convert the rotation of the camera, but it does not match up.
Attached is an example plunkr & conversion code.
The white facebook button should line up with the red plane.
https://plnkr.co/edit/0MOKrc1lc2Bqw1MMZnZV?p=preview
function toScreenPosition(position, camera, width, height) {
    var p = new THREE.Vector3(position.x, position.y, position.z);
    var vector = p.project(camera);

    vector.x = (vector.x + 1) / 2 * width;
    vector.y = -(vector.y - 1) / 2 * height;

    return vector;
}

function updateScreenElements() {
    var btn = document.querySelector('#btn-share');

    var pos = plane.getWorldPosition();
    var vec = toScreenPosition(pos, camera, canvas.width, canvas.height);
    var translate = "translate3d(" + vec.x + "px," + vec.y + "px," + vec.z + "px)";

    var euler = camera.getWorldRotation();
    var rotate = "rotateX(" + euler.x + "rad)" +
                 " rotateY(" + euler.y + "rad)" +
                 " rotateZ(" + euler.z + "rad)";

    btn.style.transform = translate + " " + rotate;
}
... And a screenshot of the issue.
I would highly recommend not trying to match this to the camera space, but instead applying the image as a texture map to the red plane and then using a raycast to see whether a click goes over the plane. You'll save yourself the headache of translating and rotating and then hiding the symbol when it's behind the cube, etc.
Check out the THREE.js examples to see how to use the Raycaster. It's a lot more flexible and easier than trying to do rotations and matching. Then, whatever the 'btn' onclick function is, you just call it when you detect a raycast collision with the plane.
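A rough sketch of the suggested approach, mapping the share icon onto the plane and treating a raycast hit as the click (the texture path and the onShareClicked callback are placeholders):
new THREE.TextureLoader().load('img/facebook-share.png', function (texture) { // placeholder path
    plane.material = new THREE.MeshBasicMaterial({ map: texture, transparent: true });
    plane.material.needsUpdate = true;
});

var raycaster = new THREE.Raycaster();
var mouse = new THREE.Vector2();
window.addEventListener('click', function (event) {
    mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
    mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
    raycaster.setFromCamera(mouse, camera);
    if (raycaster.intersectObject(plane).length > 0) {
        onShareClicked(); // placeholder for whatever the button's onclick did
    }
});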
