How to manage WebGL (THREE.js) with variable screen size? - three.js

Is there any way to manage mesh size and position according to the display size?
What I am doing is a simple animated presentation that has camera zooming and changes in scene/camera position.
If the screen size differs, the mesh positioning and mesh size all go wrong.
I have no idea how to take control of this.
How can I find where a mesh is positioned on screen?

My hunch is that someone smart is going to come along and give you the solution that you really want (something like how to move the camera so objects stay at the right position/size), but here are some direct answers to your questions that might help.
You could manage the mesh size by making its proportions a function of window.innerWidth and window.innerHeight.
To determine whether a mesh is on screen, you can use the following code. It projects a mesh position in 3D space to 2D space in the browser. Also, be sure to call camera.updateMatrixWorld() after you move the camera or change what you are looking at; otherwise you will get wonky results (thanks to WestLangley for that tip). If vector.x > window.innerWidth or vector.y > window.innerHeight, then the object is outside the viewable area of the screen.
function toXYCoord (object) {
    // Project the object's world position into normalized device coordinates,
    // then map NDC ([-1, 1] on both axes) to pixel coordinates.
    var vector = projector.projectVector(object.position.clone(), camera);
    vector.x = (vector.x + 1) / 2 * window.innerWidth;
    vector.y = -(vector.y - 1) / 2 * window.innerHeight;
    return vector;
}
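Note that THREE.Projector was later deprecated and moved out of the three.js core; as a hedged sketch, in newer releases the equivalent projects the position directly with Vector3.project:
function toXYCoord (object) {
    // Vector3.project(camera) replaces projector.projectVector in newer three.js.
    var vector = object.position.clone().project( camera );
    vector.x = (vector.x + 1) / 2 * window.innerWidth;
    vector.y = -(vector.y - 1) / 2 * window.innerHeight;
    return vector;
}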

What you want to do is check for resize on either the window or, as in the example below, a 'bucket' div that I use as a container. When I resize the div (programmatically or otherwise), I just make sure to call onBucketResize().
function onBucketResize() {
    // Keep the camera's aspect ratio and the renderer's size in sync
    // with the container's current dimensions.
    camera.aspect = bucket.clientWidth / bucket.clientHeight;
    camera.updateProjectionMatrix();
    renderer.setSize( bucket.clientWidth, bucket.clientHeight );
}
To set it up, you could use something like:
window.addEventListener( 'resize', onBucketResize, false );
Note that resize events only fire on the window, not on a div or on document, so if the bucket can change size independently of the window you will need another way to detect that, whatever is appropriate :)
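A minimal sketch of one such way, assuming a browser that supports ResizeObserver (an API that did not exist when this answer was written):
var observer = new ResizeObserver( function () {
    // Fires whenever the bucket's layout size changes, window-driven or not.
    onBucketResize();
} );
observer.observe( bucket );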
let me know if this helps!

Related

GLTF inside Canvas Threejs is stretched

The problem
When loading my GLTF inside the canvas element of react-three-fiber in a 100vw x 100vh div, the GLTF model looks fine. However, when I change the size of the containing div and canvas to 50vw x 100vh, the GLTF model appears stretched.
100vw x 100vh screenshot
100vw x 50vh screenshot
What I have tried so far
I have tried to set the aspect ratio of the camera.
<Controls
  enableDamping
  rotateSpeed={0.3}
  dampingFactor={0.1}
  cameraProps={{
    position: [11, 11, 11],
    near: 0.1,
    far: 1000,
    fov: 50,
    aspect: [random number here doesn't change anything]
  }}
  maxDistance={18}
/>
I have tried to add a window event listener on resize, setting the aspect ratio like so:
function onWindowResize() {
    camera.aspect = 2.0; // doesn't have any effect, not even with random numbers
    camera.updateProjectionMatrix();
    renderer.setSize(book.clientWidth, book.clientHeight);
}
Nothing of the above seems to work and I am out of options. I found several related posts on SO and Google and tried them all.
Versions etc
"#react-three/drei": "^7.25.0",
"#react-three/fiber": "^7.0.21",
"#react-three/postprocessing": "^2.0.5",
I am using OrbitControls and a perspective camera. Here is the link to the code sandbox:
https://codesandbox.io/s/36uiq?file=/src/index.js
Hopefully someone is able to help me out.
When updating your camera's aspect ratio, make sure it matches your renderer's aspect ratio:
camera.aspect = book.clientWidth / book.clientHeight;
camera.updateProjectionMatrix();
renderer.setSize( book.clientWidth, book.clientHeight );
Make sure that camera exists, and it's the camera you're using to perform your render.
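In react-three-fiber the canvas and default camera are managed for you, so as a hedged sketch (assuming @react-three/fiber v7's useThree hook and that the default camera is a PerspectiveCamera), the same fix inside a component could look like:
import { useEffect } from 'react';
import { useThree } from '@react-three/fiber';

function AspectSync() {
  const { camera, gl } = useThree();
  useEffect(() => {
    const canvas = gl.domElement;
    function onResize() {
      // Match the camera's aspect ratio to the canvas element's CSS size.
      camera.aspect = canvas.clientWidth / canvas.clientHeight;
      camera.updateProjectionMatrix();
      gl.setSize(canvas.clientWidth, canvas.clientHeight);
    }
    onResize();
    window.addEventListener('resize', onResize);
    return () => window.removeEventListener('resize', onResize);
  }, [camera, gl]);
  return null;
}
Rendered anywhere inside the <Canvas>, this keeps the projection in sync when the containing div resizes with the window.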

threejs raycaster cannot intersect in stereoscopic mode

I am trying to make use of Raycaster in a ThreeJS scene to create a sort of VR interaction.
Everything works fine in normal mode, but not when I enable stereo effect.
I am using the following snippet of code.
// "camera" is a ThreeJS camera, "objectContainer" contains objects (Object3D) that I want to interact with
var raycaster = new THREE.Raycaster(),
    origin = new THREE.Vector2();
origin.x = 0; origin.y = 0;
raycaster.setFromCamera(origin, camera);
var intersects = raycaster.intersectObjects(objectContainer.children, true);
if (intersects.length > 0 && intersects[0].object.visible === true) {
    // trigger some function myFunc()
}
So basically when I try the above snippet of code in normal mode, myFunc gets triggered whenever I am looking at any of the concerned 3d objects.
However as soon as I switch to stereo mode, it stops working; i.e., myFunc never gets triggered.
I tried updating the value of origin.x to -0.5. I did that because in VR mode, the screen gets split into two halves. However that didn't work either.
What should I do to make the raycaster intersect the 3D objects in VR mode (when stereo effect is turned on)?
Could you please provide a jsfiddle with the code?
Basically, if you are using stereo in your app, it means you are using two cameras, and therefore you need to check for intersections from both cameras' views; this can become an expensive process.
var cameras = {
    'camera1': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000),
    'camera2': new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000)
};
for (var cam in cameras) {
    raycaster.setFromCamera(origin, cameras[cam]);
    // continue your logic
}
You could use a vector object that simulates the camera intersection to avoid checking twice, but this depends on what you are trying to achieve.
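One hedged reading of that suggestion: THREE.StereoEffect derives its two eye cameras from the single camera passed to effect.render( scene, camera ), so you can raycast once from that center camera instead of once per eye:
// Raycast from the single center camera that StereoEffect derives its
// eye cameras from; one intersection test stands in for both eyes.
raycaster.setFromCamera( new THREE.Vector2( 0, 0 ), camera );
var intersects = raycaster.intersectObjects( objectContainer.children, true );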
I encountered a similar problem and eventually found the reason. In StereoEffect, THREE.js displays the mesh to the two eyes, but in reality it adds only one mesh to the scene, exactly in the middle of the line between the left-eye mesh and the right-eye mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not the illusion displayed to each eye!
I detailed how to do it here:
Three.js StereoEffect displays meshes across 2 eyes
Hope it solves your problem!
You can use my StereoEffect.js file in your project to resolve the problem. See the example of using it, and see my Raycaster stereo pull request as well.

Resizing Window when using EffectComposer

I found this fiddle a month ago and implemented it successfully. It works like a charm except in one specific scenario: if I resize the window from very small to large, it becomes really obvious that the camera's projection matrix doesn't get updated. This happens both in the jsFiddle example and in my implementation of it. Any possible fix? Thank you!
onWindowResize = function () {
    screenWidth = window.innerWidth;
    screenHeight = window.innerHeight;
    camera1.aspect = screenWidth / screenHeight;
    camera2.aspect = camera1.aspect;
    camera3.aspect = camera1.aspect;
    camera1.updateProjectionMatrix();
    camera2.updateProjectionMatrix();
    camera3.updateProjectionMatrix();
    renderer.setSize( screenWidth, screenHeight );
}
Outline with effect composer demo: http://jsfiddle.net/Eskel/g593q/5/
The renderTarget (or targets) used by EffectComposer is not being resized when the window is resized.
In your onWindowResize callback, be sure to call both of the following methods:
renderer.setSize( width, height );
composer.setSize( width, height );
three.js r.71
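Applied to the handler from the question, a sketch of the combined callback:
onWindowResize = function () {
    screenWidth = window.innerWidth;
    screenHeight = window.innerHeight;
    camera1.aspect = screenWidth / screenHeight;
    camera2.aspect = camera1.aspect;
    camera3.aspect = camera1.aspect;
    camera1.updateProjectionMatrix();
    camera2.updateProjectionMatrix();
    camera3.updateProjectionMatrix();
    // Resize both the renderer and the composer; composer.setSize recreates
    // the internal render targets at the new resolution.
    renderer.setSize( screenWidth, screenHeight );
    composer.setSize( screenWidth, screenHeight );
}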

How to use "OculusRiftEffect.js" on the "webgl_interactive_cubes" examples

I was very excited when I first saw this example (webgl_geometry_minecraft_oculusrift) on the mrdoob/three.js · GitHub website. Undoubtedly, it's pretty awesome!
But I'm curious: how do I apply this effect to other examples? I tried to implement it in "webgl_interactive_cubes", but the result is worse than expected.
My problem is that I can't accurately align the cursor with a particular cube to make it change color; it seems to be a problem with the projection function? I then adjusted the screen width coefficient, like this
window.innerWidth * 2
throughout the whole program, but that still did not fix the problem.
Summary of my issue:
If I want to apply the Oculus Rift effect to any example, how should I do it? By the way, I only added the following code
effect = new THREE.OculusRiftEffect( renderer );
effect.setSize( window.innerWidth, window.innerHeight );
// Right Oculus Parameters are yet to be determined
effect.separation = 20;
effect.distortion = 0.1;
effect.fov = 110;
in the init() block, and finally called effect.render( scene, camera ); in render().
I am very curious to know how
var vector = new THREE.Vector3( mouse.x, mouse.y, 1 );
projector.unprojectVector( vector, camera );
works. Why do I need to pass 1 as the z parameter? What happens if I change mouse.x to mouse.x * 2?
Do I need dual monitors to fully present this effect?
Note: My English is not very good. If anything I have described is unclear, please ask and I will respond as soon as possible.
This is my DEMO link:
http://goo.gl/VCKyP
http://goo.gl/xuIhr
http://goo.gl/WjqC0
My Folder : https://googledrive.com/host/0B7yrjtQvNRwoYVQtMUc4M1ZZakk/
The third one is your example, right?
This can help you use the OR effect in an easier way:
https://github.com/carstenschwede/RiftThree
And all your examples work; only the third one has a problem with the controls. If I start the drag from the Stats DIV (the FPS counter), it works.
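As for the unprojectVector question above, here is a minimal sketch of the classic picking setup it comes from (assuming the old Projector API used in those examples):
// mouse.x and mouse.y must be normalized device coordinates in [-1, 1];
// this is where the window width/height coefficients matter, and why a
// wrong coefficient (e.g. innerWidth * 2) skews picking.
mouse.x = ( event.clientX / window.innerWidth ) * 2 - 1;
mouse.y = -( event.clientY / window.innerHeight ) * 2 + 1;
// z = 1 places the point on the far clipping plane in NDC; unprojecting it
// gives a world-space point to aim a picking ray through.
var vector = new THREE.Vector3( mouse.x, mouse.y, 1 );
projector.unprojectVector( vector, camera );
var raycaster = new THREE.Raycaster( camera.position, vector.sub( camera.position ).normalize() );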

Super sample antialiasing with threejs

I want to render my scene at twice the resolution of my canvas and then downscale it before displaying it. How would I do that using threejs?
For me, the best way to get near-perfect AA without too much work is the code below.
PS: if you increase the ratio beyond 2, the result starts to look oversharpened.
renderer = new THREE.WebGLRenderer({ antialias: true }); // note: the option is "antialias", not "antialiasing"
renderer.setPixelRatio( window.devicePixelRatio * 1.5 );
This is my solution. The source comments should explain what's going on. Setup (init):
var renderer;
var composer;
var renderModel;
var effectCopy;
renderer = new THREE.WebGLRenderer({canvas: canvas});
// Disable autoclear, we do this manually in our animation loop.
renderer.autoClear = false;
// Double resolution (twice the size of the canvas)
var sampleRatio = 2;
// This render pass will render the big result.
renderModel = new THREE.RenderPass(scene, camera);
// Shader to copy result from renderModel to the canvas
effectCopy = new THREE.ShaderPass(THREE.CopyShader);
effectCopy.renderToScreen = true;
// The composer will compose a result for the actual drawing canvas.
composer = new THREE.EffectComposer(renderer);
composer.setSize(canvasWidth * sampleRatio, canvasHeight * sampleRatio);
// Add passes to the composer.
composer.addPass(renderModel);
composer.addPass(effectCopy);
Change your animation loop to:
// Manually clear your canvas.
renderer.clear();
// Tell the composer to produce an image for us. It will provide our renderer with the result.
composer.render();
Note: EffectComposer, RenderPass, ShaderPass and CopyShader are not part of the default three.js file. You have to include them in addition to three.js. At the time of writing they can be found in the threejs project under the examples folder:
/examples/js/postprocessing/EffectComposer.js
/examples/js/postprocessing/RenderPass.js
/examples/js/postprocessing/ShaderPass.js
/examples/js/shaders/CopyShader.js
Here's how you might be able to work it out: in your three.js initialization code, when you create your renderer, make it double the dimensions of your primary canvas and have it render to a secondary, hidden canvas element that is twice as large as the primary one. Perform the necessary image manipulation on the secondary canvas, then display the result on the primary canvas.
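A minimal sketch of that approach, assuming primaryCanvas is the visible canvas and the renderer's own canvas stays off-screen. Note that drawImage must happen right after render (or the renderer needs preserveDrawingBuffer: true), because a WebGL canvas's buffer may be cleared after compositing:
// Render at twice the target size into the renderer's own (hidden) canvas.
var renderer = new THREE.WebGLRenderer();
renderer.setSize( primaryCanvas.width * 2, primaryCanvas.height * 2 );
renderer.render( scene, camera );
// Downscale onto the visible canvas; the browser filters during drawImage.
var ctx = primaryCanvas.getContext( '2d' );
ctx.drawImage( renderer.domElement, 0, 0, primaryCanvas.width, primaryCanvas.height );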
