My scene works perfectly on a desktop screen: the object size and tween behave as expected. However, when I view it in my mobile browser (Chrome, up to date) the scene looks uneven. I have used a resize function, but the problem appears when the phone is in portrait orientation. Is there any other way to make my scene fit the mobile screen?
Here is the script I used:
https://jsfiddle.net/r49xdzs9/
window.addEventListener( 'resize', onWindowResize, false );

function onWindowResize() {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    webGLRenderer.setSize( window.innerWidth, window.innerHeight );
}
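One possible approach (not part of the original fiddle, so the base FOV of 45 below is an assumption) is to widen the camera's vertical field of view in portrait, so roughly the same horizontal extent stays visible as on desktop. A sketch of a replacement for the handler above:

// Sketch: in portrait, enlarge the vertical FOV so the horizontal FOV stays
// constant. baseFov is an assumed value; use whatever the camera was created with.
const baseFov = 45;

function onWindowResize() {
    const aspect = window.innerWidth / window.innerHeight;
    camera.aspect = aspect;

    if ( aspect < 1 ) {
        const halfTan = Math.tan( THREE.MathUtils.degToRad( baseFov / 2 ) );
        camera.fov = THREE.MathUtils.radToDeg( 2 * Math.atan( halfTan / aspect ) );
    } else {
        camera.fov = baseFov;
    }

    camera.updateProjectionMatrix();
    webGLRenderer.setSize( window.innerWidth, window.innerHeight );
}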
I am currently trying to make my geometric shape, an "IcosahedronGeometry", resize according to the size of the user's window in Nuxt.js (Vue.js).
So far I can make the camera follow the window size, but the shape itself stays huge and doesn't resize.
For the moment I have tried this:
cube.scale.set( window.innerWidth, window.innerHeight, 1 );
But it doesn't work as expected: the object flattens...
function onWindowResize() {
    renderer.setSize(window.innerWidth, window.innerHeight);
    cube.scale.set( window.innerWidth, window.innerHeight, 1 );
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
}
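For what it's worth, non-uniform scaling is exactly what flattens the shape: scale.set(width, height, 1) stretches each axis by a different amount. A minimal sketch using one uniform factor instead (the reference width of 1000 is an arbitrary assumption):

// Sketch: scale all three axes by the same factor so the icosahedron keeps its
// proportions; 1000 is an assumed reference width for the "full size" shape.
function onWindowResize() {
    renderer.setSize( window.innerWidth, window.innerHeight );
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();

    const factor = Math.min( 1, window.innerWidth / 1000 );
    cube.scale.set( factor, factor, factor );
}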
Hello Community,
I am new to three.js. I am creating a 2D game with three.js and I am having trouble making it responsive; also, when the game viewport gets smaller, some functionality stops working properly, such as touch events on objects. I would like to know whether CSS can be applied to three.js objects so that I can build a responsive game.
Please guide me on this.
Initialize your camera and renderer to the window size from the start, then add an event listener that updates the properties that depend on the screen size. Usually something like this:
// keep track of the window size in an object
const size = { width: window.innerWidth, height: window.innerHeight };

// initialize camera based on recorded sizes
const camera = new THREE.PerspectiveCamera(
    75,
    size.width / size.height,
    0.1,
    100
);
camera.position.set(...);
scene.add(camera);

// initialize renderer based on recorded sizes
const renderer = new THREE.WebGLRenderer({ canvas });
renderer.setSize(size.width, size.height);
renderer.setPixelRatio(Math.min(2, window.devicePixelRatio));

// adjust relevant properties on window size change
window.addEventListener('resize', () => {
    size.width = window.innerWidth;
    size.height = window.innerHeight;

    camera.aspect = size.width / size.height;
    camera.updateProjectionMatrix();

    renderer.setSize(size.width, size.height);
    renderer.setPixelRatio(Math.min(2, window.devicePixelRatio));
});
Hopefully this is the answer you're looking for.
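On the touch problem mentioned in the question: three.js objects are drawn into a single canvas, so CSS cannot target them individually; picking is normally done with a raycaster, and it only keeps working after resizes if the pointer coordinates are converted using the canvas's current size. A minimal sketch, assuming a Raycaster-based hit test and a canvas variable holding the renderer's DOM element:

// Sketch: resize-safe picking. Convert pixel coordinates to normalized device
// coordinates (-1..+1) using the canvas's current on-screen rectangle.
const raycaster = new THREE.Raycaster();
const pointer = new THREE.Vector2();

canvas.addEventListener('pointerdown', (event) => {
    const rect = canvas.getBoundingClientRect();
    pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
    pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;

    raycaster.setFromCamera(pointer, camera);
    const hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length > 0) {
        console.log('touched', hits[0].object.name);
    }
});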
I'm trying to learn three.js and I can only manage to see a black screen while trying to replicate a three.js example. I tried adding hemisphere lights and it did not change anything. If anyone can point me in the right direction I'd be very happy.
renderer = new THREE.WebGLRenderer( { antialias: true } );
renderer.setPixelRatio( window.devicePixelRatio );
renderer.setSize( window.innerWidth, window.innerHeight );
renderer.toneMappingExposure = 0.8;
renderer.outputEncoding = THREE.sRGBEncoding;
container.appendChild( renderer.domElement );

//var pmremGenerator = new THREE.PMREMGenerator( renderer );
//pmremGenerator.compileEquirectangularShader();

controls = new OrbitControls( camera, renderer.domElement );
controls.addEventListener( 'change', render ); // use if there is no animation loop
controls.minDistance = 2;
controls.maxDistance = 10;
controls.target.set( 0, 0, - 0.2 );
controls.update();

window.addEventListener( 'resize', onWindowResize, false );
}
function onWindowResize() {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize( window.innerWidth, window.innerHeight );
    render();
}

function render() {
    renderer.render( scene, camera );
}
</script>
</body>
I really don't know what to do as I'm just starting out. I'm running it locally on my home computer, so that shouldn't be the issue. Thanks in advance.
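Since the snippet omits the camera, scene, and model setup, one way to narrow this down is to render something trivially visible first. This is only a sanity check, not from the original post; if the cube below shows up with the same renderer, the black screen is coming from the part that was left out (model loading, environment map, exposure).

// Standalone sanity check: a lit cube on a grey background, reusing the renderer above.
const scene = new THREE.Scene();
scene.background = new THREE.Color( 0x202020 );

const camera = new THREE.PerspectiveCamera( 60, window.innerWidth / window.innerHeight, 0.1, 100 );
camera.position.set( 0, 1, 3 );

scene.add( new THREE.HemisphereLight( 0xffffff, 0x444444, 1 ) );

const mesh = new THREE.Mesh(
    new THREE.BoxGeometry( 1, 1, 1 ),
    new THREE.MeshStandardMaterial( { color: 0x44aa88 } )
);
scene.add( mesh );

renderer.render( scene, camera );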
I am trying to plot a PCD file following the example mentioned here. With the given PCD I am able to plot, but with the PCD I have I am not. I cross-checked the format, and there were no errors in the browser either. Here is my PCD file. I can't figure out what's going wrong.
I have tested your PCD file on my computer and it renders fine. However, just replacing the PCD file in the mentioned example won't work, since the spatial distribution of the data set is totally different. Hence you need different camera and control settings. You should be able to see the point cloud with the following basic code:
var camera, container, scene, renderer;
init();
animate();
function init() {
    scene = new THREE.Scene();

    camera = new THREE.PerspectiveCamera( 15, window.innerWidth / window.innerHeight, 0.1, 1000 );
    camera.position.z = 700;
    scene.add( camera );

    renderer = new THREE.WebGLRenderer( { antialias: true } );
    renderer.setPixelRatio( window.devicePixelRatio );
    renderer.setSize( window.innerWidth, window.innerHeight );

    var loader = new THREE.PCDLoader();
    loader.load( './models/pcd/binary/test.pcd', function ( points ) {
        points.material.size = 5;
        scene.add( points );
    } );

    // attach the canvas once, inside a container div
    container = document.createElement( 'div' );
    document.body.appendChild( container );
    container.appendChild( renderer.domElement );

    window.addEventListener( 'resize', onWindowResize, false );
}
function onWindowResize() {
    camera.aspect = window.innerWidth / window.innerHeight;
    camera.updateProjectionMatrix();
    renderer.setSize( window.innerWidth, window.innerHeight );
}

function animate() {
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
}
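If you want to orbit around the cloud, controls can be added on top of this; the distance limits below are assumptions chosen to match the camera position used above, and OrbitControls has to be included separately.

// Assumes OrbitControls is available on the THREE namespace
// (e.g. via examples/js/controls/OrbitControls.js in non-module builds).
var controls = new THREE.OrbitControls( camera, renderer.domElement );
controls.minDistance = 100;  // assumed limits, picked around camera.position.z = 700
controls.maxDistance = 2000;
controls.update();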
I'm trying to create two scenes on two different canvases. Is it possible in Three.js?
var scene1 = new THREE.Scene()
var scene2 = new THREE.Scene()
scene1.add(camera1)
scene2.add(camera2)
...
renderer.render(scene1, camera1)
renderer.render(scene2, camera2)
Will it work like that?
Yes, it is totally possible, but a renderer instance is always bound to the WebGL context of its canvas, so you need to create one renderer for every canvas you have. So this would be:
renderer1.render(scene1, camera1);
renderer2.render(scene2, camera2);
(the other way around works as well: you can use multiple renderers to render the same scene with different cameras)
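A minimal sketch of that two-canvas setup (the canvas ids here are assumptions):

// One renderer per canvas, each drawing its own scene/camera pair.
const renderer1 = new THREE.WebGLRenderer( { canvas: document.getElementById( 'canvas1' ) } );
const renderer2 = new THREE.WebGLRenderer( { canvas: document.getElementById( 'canvas2' ) } );

function animate() {
    requestAnimationFrame( animate );
    renderer1.render( scene1, camera1 );
    renderer2.render( scene2, camera2 );
}
animate();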
EDIT BASED ON COMMENTS
You can also render multiple scenes into different regions of the same canvas, using just one renderer. For this you need to set up a different viewport and scissor test for every scene, like this (based on https://threejs.org/examples/#webgl_multiple_views):
// first, render scene normally:
camera.aspect = totalWidth / totalHeight;
camera.updateProjectionMatrix();
renderer.setViewport(0, 0, totalWidth, totalHeight);
renderer.setScissorTest(false);
renderer.render( scene1, camera1 );
// then, render the overlay
renderer.setViewport(left, bottom, width, height);
renderer.setScissor(left, bottom, width, height);
renderer.setScissorTest(true);
renderer.setClearColor(view.background);
camera.aspect = width / height;
camera.updateProjectionMatrix();
renderer.render( scene2, camera2 );
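In practice, the left/bottom/width/height values for each view are often taken from the on-screen rectangle of a placeholder DOM element, as in the three.js multiple-elements example. A sketch, where element is an assumed placeholder div and the canvas is assumed to cover the window:

// WebGL's viewport origin is the bottom-left corner, so convert from the
// top-based coordinates returned by getBoundingClientRect().
const rect = element.getBoundingClientRect();
const left = rect.left;
const bottom = renderer.domElement.clientHeight - rect.bottom;
const width = rect.width;
const height = rect.height;

renderer.setViewport( left, bottom, width, height );
renderer.setScissor( left, bottom, width, height );
renderer.setScissorTest( true );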