The problem
When loading my GLTF inside the canvas element of react-three-fiber in a 100vw x 100vh div, the GLTF model looks fine. However, when I change the size of the containing div and canvas to 50vw x 100vh, the GLTF model appears stretched.
100vw x 100vh screenshot
100vw x 50vh screenshot
What I have tried so far
I have tried to set the aspect ratio of the camera.
<Controls
  enableDamping
  rotateSpeed={0.3}
  dampingFactor={0.1}
  cameraProps={{
    position: [11, 11, 11],
    near: 0.1,
    far: 1000,
    fov: 50,
    aspect: 2.0 // any number here doesn't change anything
  }}
  maxDistance={18}
/>
I have also tried adding a window resize event listener and setting the aspect ratio in it, like so:
function onWindowResize() {
    camera.aspect = 2.0; // doesn't have any effect, not even with random numbers
    camera.updateProjectionMatrix();
    renderer.setSize(book.clientWidth, book.clientHeight);
}
None of the above works and I am out of options. I found several related posts on SO and Google, and I tried them all.
Versions etc
"#react-three/drei": "^7.25.0",
"#react-three/fiber": "^7.0.21",
"#react-three/postprocessing": "^2.0.5",
I am using OrbitControls and a PerspectiveCamera. Here is the link to the code sandbox:
https://codesandbox.io/s/36uiq?file=/src/index.js
Hopefully someone is able to help me out.
When updating your camera's aspect ratio, make sure it matches your renderer's aspect ratio:
camera.aspect = book.clientWidth / book.clientHeight;
camera.updateProjectionMatrix();
renderer.setSize( book.clientWidth, book.clientHeight );
Make sure that camera exists, and it's the camera you're using to perform your render.
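react-three-fiber normally keeps the default camera's aspect in sync for you, but if you're managing the camera yourself you can do the same thing with the useThree hook. A minimal sketch (the AspectFixer component name is made up; drop it anywhere inside the <Canvas>):
import { useEffect } from 'react';
import { useThree } from '@react-three/fiber';

// Hypothetical helper: keeps the active camera's aspect in sync with the
// canvas size that react-three-fiber already tracks.
function AspectFixer() {
  const { camera, size } = useThree();
  useEffect(() => {
    camera.aspect = size.width / size.height;
    camera.updateProjectionMatrix();
  }, [camera, size]);
  return null;
}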
Related
I am trying to create a 3D spirograph effect. In this fiddle I have a simple Points plane rotating. When you press the Enter key, a transparent plane is added in front of the camera, creating a trailing effect. This all works as intended, but I was hoping to add OrbitControls so I can rotate around the spirograph as a whole, without the trailing effect created by the transparent plane being affected by the OrbitControls as well.
var fadeMaterial = new THREE.MeshBasicMaterial({
    color: 0x000000,
    transparent: true,
    opacity: 0.01
});
var fadePlane = new THREE.PlaneBufferGeometry(10, 10);
var fadeMesh = new THREE.Mesh(fadePlane, fadeMaterial);
fadeMesh.position.z = -0.08;
fadeMesh.renderOrder = -1;
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
camera.add(fadeMesh);
camera.position.z = 60;
The image below shows the effect from a static viewpoint (before moving the camera with OrbitControls), but as soon as I move the camera the trailing effect is also affected. Essentially I would like to be able to rotate around this without the trailing effect changing.
I think the issue is caused by the transparent plane being in front of the camera, but I'm not sure how else to create the trailing effect. Any advice on how to create this while being able to rotate around it without disturbing the trailing effect?
This is not possible with your approach. You're essentially rendering 24 points onto the canvas at a time, and the reason they stay visible on your canvas is because you're using preserveDrawingBuffer: true on your renderer. Now when you're trying to move the camera, those previous points have already been painted to the canvas but they no longer exist in 3D space. It's similar to painting with ink; once you've painted a dot, you can't re-position it.
If you want to be able to orbit the camera while maintaining the spirograph look, you'd need to re-think your approach and not preserve the drawing buffer. I think you'd need to add 24 new points on each frame, so they always exist in 3D space. Your geometry would need to grow in quantity from 24 to 48, 72, 96, 120, 144, and so on.
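A rough sketch of that idea (names and buffer sizes are illustrative; older three.js builds spell setAttribute as addAttribute): preallocate a big position buffer, append 24 points per frame, and widen the draw range so every point keeps existing in 3D space.
var MAX_POINTS = 24 * 60 * 60; // room for ~60 seconds at 60fps
var positions = new Float32Array(MAX_POINTS * 3);
var geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setDrawRange(0, 0); // nothing drawn yet
scene.add(new THREE.Points(geometry, new THREE.PointsMaterial({ size: 0.1 })));

var drawnCount = 0;
function addFramePoints(framePositions) { // Float32Array of 24 * 3 floats
    positions.set(framePositions, drawnCount * 3);
    drawnCount += 24;
    geometry.setDrawRange(0, drawnCount); // draw everything added so far
    geometry.attributes.position.needsUpdate = true;
}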
I am trying to make use of Raycaster in a ThreeJS scene to create a sort of VR interaction.
Everything works fine in normal mode, but not when I enable stereo effect.
I am using the following snippet of code.
// "camera" is a ThreeJS camera, "objectContainer" contains objects (Object3D) that I want to interact with
var raycaster = new THREE.Raycaster(),
    origin = new THREE.Vector2();
origin.x = 0; origin.y = 0; // (0, 0) is the center of the screen in NDC
raycaster.setFromCamera(origin, camera);
var intersects = raycaster.intersectObjects(objectContainer.children, true);
if (intersects.length > 0 && intersects[0].object.visible === true) {
    // trigger some function myFunc()
}
So basically when I try the above snippet of code in normal mode, myFunc gets triggered whenever I am looking at any of the concerned 3d objects.
However as soon as I switch to stereo mode, it stops working; i.e., myFunc never gets triggered.
I tried updating the value of origin.x to -0.5. I did that because in VR mode, the screen gets split into two halves. However that didn't work either.
What should I do to make the raycaster intersect the 3D objects in VR mode (when stereo effect is turned on)?
Could you please provide a jsfiddle with the code?
Basically, if you are using stereo in your app, it means you are using two cameras, so you need to check your intersections against both cameras' views; this can become an expensive process.
var cameras = {
    camera1: new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000),
    camera2: new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 10000)
};
for (var cam in cameras) {
    raycaster.setFromCamera(origin, cameras[cam]);
    // continue your logic
}
You could use a vector object that simulates the camera intersection to avoid checking twice, but this depends on what you are trying to achieve.
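For instance, a sketch reusing the names from the question, collecting hits from both eye cameras and acting on the nearest one:
var hits = [];
for (var cam in cameras) {
    raycaster.setFromCamera(origin, cameras[cam]);
    hits = hits.concat(raycaster.intersectObjects(objectContainer.children, true));
}
hits.sort(function (a, b) { return a.distance - b.distance; }); // nearest hit first
if (hits.length > 0 && hits[0].object.visible) {
    myFunc();
}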
I encountered a similar problem and eventually found the reason. With StereoEffect, THREE.js displays the meshes for the two eyes, but in reality it adds only one mesh to the scene, exactly in the middle of the line left-eye-mesh <-> right-eye-mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not on the illusion displayed to each eye!
I detailed how to do it here:
Three.js StereoEffect displays meshes across 2 eyes
Hope it solves your problem!
You can use my StereoEffect.js file in your project to resolve the problem. See the example of its usage, and see my Raycaster stereo pull request as well.
I found this fiddle a month ago and implemented it successfully. It works like a charm except in one specific scenario: if I resize the window from very small to large, it becomes really obvious that the camera projection matrix doesn't get updated. This happens both in the jsFiddle example and in my implementation of it. Any possible fix? Thank you!
onWindowResize = function () {
    screenWidth = window.innerWidth;
    screenHeight = window.innerHeight;
    camera1.aspect = screenWidth / screenHeight;
    camera2.aspect = camera1.aspect;
    camera3.aspect = camera1.aspect;
    camera1.updateProjectionMatrix();
    camera2.updateProjectionMatrix();
    camera3.updateProjectionMatrix();
    renderer.setSize( screenWidth, screenHeight );
};
Outline with effect composer demo: http://jsfiddle.net/Eskel/g593q/5/
The renderTarget (or targets) used by EffectComposer is not being resized when the window is resized.
In your onWindowResize callback, be sure to call both of the following methods:
renderer.setSize( width, height );
composer.setSize( width, height );
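Applied to the handler from the question, that would look something like this (a sketch; composer stands for whatever EffectComposer instance drives the outline effect):
onWindowResize = function () {
    screenWidth = window.innerWidth;
    screenHeight = window.innerHeight;
    camera1.aspect = screenWidth / screenHeight;
    camera2.aspect = camera1.aspect;
    camera3.aspect = camera1.aspect;
    camera1.updateProjectionMatrix();
    camera2.updateProjectionMatrix();
    camera3.updateProjectionMatrix();
    renderer.setSize( screenWidth, screenHeight );
    composer.setSize( screenWidth, screenHeight ); // resize the composer's render targets too
};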
three.js r.71
I have the following logic to create a Three.js R69 WebGL renderer that is supposed to handle high DPI displays. It did for quite a while, but about a week ago one (and only one) three.js page started rendering as if the high DPI were correctly set, except that my 3D coordinate origin became the upper left corner of the rendering canvas rather than the expected center. (No changes to my environment that I can tell; maybe the browsers auto-updated. I'm testing with Chrome, Firefox and Safari on OS X 10.10.1.)
// create our renderer:
gCex3.renderer = new THREE.WebGLRenderer({
    antialias: true,
    alpha: true,
    devicePixelRatio: window.devicePixelRatio || 1
});
// Three.js R69: I started needing to explicitly set this so clear alpha is 1:
gCex3.renderer.setClearColor( new THREE.Color( 0x000000 ), 1 );
gCex3.rendererDOM = $('#A3DH_Three_wrapper');
gCex3.rendererDOM.append( gCex3.renderer.domElement );
// fbWidth & fbHeight are w,h of a div located within the page:
gCex3.renderer.setSize( gs.fbWidth, gs.fbHeight, true ); // 'true' means update the canvas style
Checking the latest R69 examples, they don't seem to do anything special for high DPI displays. Checking the WebGLRenderer source, it looks like the devicePixelRatio logic is now embedded within the WebGLRenderer() function.
I've tried that minimal logic in the examples, specifically this:
renderer = new THREE.WebGLRenderer( { antialias: false } );
renderer.setClearColor( new THREE.Color( 0x000000 ), 1 );
renderer.setSize( gs.fbWidth, gs.fbHeight, true );
And I see the same behavior: my coordinate system origin is the upper left of the rendering canvas.
Note that in Chrome, when running the javascript debugger, I see a "webgl context lost" event during the exiting of the previous page, but before this logic is being executed. Could the WebGLRenderer be getting created during the period when there is no WebGL context?
Is there any way to manage mesh size and position according to the display size?
What I am doing is a simple animated presentation with camera zooming and changes in scene/camera position.
If the screen size differs, the mesh positioning and mesh size all go wrong.
I have no idea how to take control of this.
How can I find out whether a mesh is positioned on screen?
My hunch is that someone smart is going to come along and give you the solution that you really want (something like how to move the camera so objects stay at the right position/size), but here are some direct answers to your questions that might help.
You could manage the mesh size by making the proportions some function of window.innerWidth and window.innerHeight.
To determine if a mesh is on screen, you can use the following code. It projects a mesh position in 3D space to 2D space in the browser. Also, be sure to call camera.updateMatrixWorld() after you move the camera or change what you are looking at, otherwise you will get wonky results (thanks to WestLangley for that tip). If vector.x falls outside [0, window.innerWidth] or vector.y falls outside [0, window.innerHeight], then the object is outside the viewable area of the screen.
function toXYCoord (object) {
    // project the object's position into normalized device coordinates
    // (projector is a THREE.Projector instance, from older three.js releases)
    var vector = projector.projectVector(object.position.clone(), camera);
    // map NDC, which runs from -1 to 1 on each axis, to pixel coordinates
    vector.x = (vector.x + 1) / 2 * window.innerWidth;
    vector.y = -(vector.y - 1) / 2 * window.innerHeight;
    return vector;
}
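A quick on-screen test using that helper might look like this (a sketch; mesh is any object in the scene):
var pos = toXYCoord(mesh);
var onScreen = pos.x >= 0 && pos.x <= window.innerWidth &&
               pos.y >= 0 && pos.y <= window.innerHeight;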
What you want to do is check for a resize on either the screen or, as in the example below, a 'bucket' div that I use as a container. When I resize the div (programmatically or otherwise), I just make sure to call onBucketResize().
function onBucketResize() {
    camera.aspect = bucket.clientWidth / bucket.clientHeight;
    camera.updateProjectionMatrix();
    renderer.setSize( bucket.clientWidth, bucket.clientHeight );
}
To set it up, note that 'resize' events only fire on the window (divs and document don't emit them), so hook the listener there:
window.addEventListener( 'resize', onBucketResize, false );
For a div that changes size on its own, see the observer sketch below. :)
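If the bucket div can change size independently of the window (a draggable panel, for example), a ResizeObserver is the usual tool in current browsers. A minimal sketch:
var observer = new ResizeObserver(function () {
    onBucketResize(); // re-fit the camera and renderer to the bucket's new size
});
observer.observe(bucket);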
Let me know if this helps!