I'm using my own post-processing effect (a 'black hole'). How can I calculate the hole's new screen position and size as the camera moves?
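No answer to this question appears here, but for reference: a common approach in three.js is to project the hole's world position into normalized device coordinates with Vector3.project( camera ), then map NDC to pixels. The NDC-to-pixel step is plain math; a minimal sketch (the helper name ndcToScreen is my own, not part of three.js):

```javascript
// Map normalized device coordinates (NDC, each axis in [-1, 1]) to
// pixel coordinates on a canvas of the given size. In three.js you
// would first obtain NDC via worldPos.project( camera ).
function ndcToScreen(ndcX, ndcY, width, height) {
  return {
    x: ( ndcX + 1 ) / 2 * width,   // -1 maps to 0, +1 maps to width
    y: ( 1 - ndcY ) / 2 * height   // +1 maps to 0 (screen Y grows downward)
  };
}

console.log( ndcToScreen( 0, 0, 800, 600 ) ); // → { x: 400, y: 300 }
```

The on-screen size can be estimated the same way: project a second point offset from the hole's center by its world radius and measure the pixel distance between the two projected points.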
I tried to rotate an object around the world's Y axis with
myObject.rotateOnWorldAxis(new THREE.Vector3(0,1,0),THREE.Math.degToRad(1));
but the result was that the object was only rotated in object space.
To be sure that I used the correct method I looked into the documentation and found that there are three methods to rotate an object:
.rotateY(rad) // rotate in Local Space
.rotateOnAxis(axis,rad) // rotation in Object Space
.rotateOnWorldAxis(axis,rad) // rotation in World Space
It seems that I used the correct method.
Is this a bug or an understanding problem on my side?
Here is a JSFiddle which illustrates my problem (the blue cube should rotate around the world axis).
Here is a second Fiddle where the cyan cube is a child of another object.
It looks to me like your real question isn't about world-space or object-space rotations, because those are working as expected in your examples.
You probably meant to ask how to change an object's point of rotation. If that is the case, you have two options. You can translate all of your geometry's vertices with respect to the desired pivot point; that way the pivot ends up at (0,0,0) and the vertices rotate around it:
mesh.geometry.translate( x, y, z );
Or you can make your object a child of a different Object3D (the pivot), offset your original mesh as described above, and rotate the pivot object:
var cube = new THREE.Mesh( geometry, material );
var pivot = new THREE.Object3D();
cube.position.set( 0, 12, 30 ); // offset from center
pivot.add( cube );
scene.add( pivot );
//...
pivot.rotation.y += Math.PI/2;
JSFiddle
I am dynamically adding a number of different objects of various sizes into a scene (one per click) and scaling and positioning them. Now I want to be able to rotate the objects around the Y axis at their center. I have added a boundingBox and an axesHelper, but the latter is showing up in the bottom corner of the objects. Reading this answer, which is similar to mine, it seems like this might be because of an offset. I can find the center of the object fine with this:
var box3 = new THREE.Box3();
var boundingBox = new THREE.BoxHelper( mesh, 0xffff00 );
scene.add( boundingBox );
box3.setFromObject( boundingBox );
center = box3.getCenter( boundingBox.position );
console.log( "center: ", center );
But when I try to reset the center to this position, following this answer, my object shoots way off into space.
box3.center(mesh.position);
mesh.position.multiplyScalar( -1 );
And I’m not really clear (even after reading the documentation) what “multiplyScalar” does/means. By playing with that number, I can get the object closer to the desired position, but the object still doesn't rotate around its center, and the AxesHelper is still at the original location.
Your object is not rotating around its center, most likely because the object's geometry is offset from the origin.
If you are dealing with a single Mesh or Line, you can center the object's geometry by calling
geometry.center();
This method will translate the vertices of the geometry so the geometry's bounding box is centered at the origin. (As for multiplyScalar: it simply multiplies each component of a vector by the given scalar, so mesh.position.multiplyScalar( -1 ) mirrors the position through the origin, which is why your object shot off into space.)
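What geometry.center() does can be sketched in plain JavaScript (a hypothetical helper working on [x, y, z] arrays, not the actual three.js source): compute the bounding-box center and subtract it from every vertex.

```javascript
// Sketch of geometry centering: find the axis-aligned bounding-box
// center and translate every vertex by its negative, so the bounding
// box ends up centered at the origin.
function centerVertices(vertices) {
  const min = [ Infinity, Infinity, Infinity ];
  const max = [ -Infinity, -Infinity, -Infinity ];
  for (const v of vertices) {
    for (let i = 0; i < 3; i++) {
      min[i] = Math.min(min[i], v[i]);
      max[i] = Math.max(max[i], v[i]);
    }
  }
  const center = min.map((m, i) => (m + max[i]) / 2);
  return vertices.map(v => v.map((c, i) => c - center[i]));
}

console.log( centerVertices([[0, 0, 0], [2, 4, 6]]) );
// → [ [ -1, -2, -3 ], [ 1, 2, 3 ] ]
```

After this translation, rotating the mesh rotates it around its own center, which is the behavior the question is after.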
three.js r.97
I was trying to clip left half of a planeBuffer using material.clippingPlanes.
When the object is at center with rotation (0,0,0) then the clipping works.
object.material.clippingPlanes =
[ new THREE.Plane( object.getWorldDirection().cross( object.up ).normalize(), object.position.z ) ];
But this code fails when the object is at a non-zero position with non-zero rotation and the cut does not change with orientation of object.
From Material.clippingPlanes:
User-defined clipping planes specified as THREE.Plane objects in world space.
Because the planes are in world space, they won't orient within your object's local space. You would need to apply your object's world transformation matrix to the planes in order to align them with your object.
myMesh.material.clippingPlanes[0].applyMatrix4(myMesh.matrixWorld);
Note that if your mesh is moving around, you'll need to keep the original clipping plane and apply the mesh's updated matrixWorld to a fresh copy after each transformation:
// somewhere else in the code:
var clipPlane1 = new THREE.Plane(); // however you configure it is up to you
// later, probably in your render method:
myMesh.material.clippingPlanes[0].copy(clipPlane1);
myMesh.material.clippingPlanes[0].applyMatrix4(myMesh.matrixWorld);
I have mapped an equirectangular image onto a sphere using three.js, placed the camera in the middle of the sphere, and am using the OrbitControls to handle things like zooming & rotation.
This all works fantastically until I want to programmatically adjust what the camera is looking at (I tween camera.target), which I believe ends up changing the position of the camera. The issue is that afterwards, when you rotate, you rotate out of the sphere. What would be the proper way to achieve this by adjusting only camera.rotation and camera.zoom? I'm okay with stripping down the OrbitControls, but I don't fully understand how the rotation should work, and I'm also open to other options.
If you are using OrbitControls in the center of a panorama, you should leave controls.target at the origin, and set the camera position close to the origin:
camera.position.set( 0, 0, 1 );
By setting
controls.enablePan = false;
controls.enableZoom = false;
the camera will always remain a distance of 1 from the origin, i.e., on the unit sphere.
Then, to look at ( x, y, z ), you programmatically set the camera's position like so:
camera.position.set( x, y, z ).normalize().negate();
That way, when the camera looks at the target (the origin), it will automatically be looking at ( x, y, z ), too.
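The normalize-then-negate step above is plain vector math: scale the vector to unit length, then flip it to the opposite point on the unit sphere. A minimal sketch with a plain object instead of THREE.Vector3:

```javascript
// Scale a vector to unit length, then negate it, mirroring what
// camera.position.set( x, y, z ).normalize().negate() does above.
function normalizeNegate(v) {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: -v.x / len, y: -v.y / len, z: -v.z / len };
}

// Looking at (3, 0, 4): the camera moves to the opposite point on the
// unit sphere, so looking through the origin lines up with the target.
console.log( normalizeNegate({ x: 3, y: 0, z: 4 }) );
```

The result has components (-0.6, 0, -0.8): unit length, pointing away from the target, which keeps the camera on the unit sphere opposite ( x, y, z ).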
three.js r.85
I have a mesh landscape in THREE.js that the camera points down at, I'd like to maintain a certain distance from that mesh (so if there's peaks in the terrain the camera moves further away).
I thought raycasting would be the correct way to go about this (by getting the intersection distance), but all the examples I can find rely on mouse coordinates. When I try to set the origin to the camera position and the direction to the camera position with 0 on the Y axis (so the camera is up in the air facing down), the intersect results come up empty.
For example, on the render event I have:
t.o.ray.vector = new THREE.Vector3(t.o.camera.position.x, 0, t.o.camera.position.z );
t.o.ray.cast = new THREE.Raycaster(t.o.camera.position,t.o.ray.vector );
t.o.ray.intersect = t.o.ray.cast.intersectObject(object, true);
console.log(t.o.ray.intersect);
This results in an empty array, even when I'm moving the camera, the only way I can seem to get this to work is by using the examples that rely on mouse events.
Any ideas what I'm missing?
I realised it was because setting 0 as the Y property was not enough. I had assumed that the vector coordinate simply helped calculate the direction the ray was pointing in, but this doesn't seem to be the case, i.e.:
t.o.ray.vector = new THREE.Vector3(t.o.camera.position.x, -1000, t.o.camera.position.z );
t.o.ray.vector.normalize();
t.o.ray.cast = new THREE.Raycaster(t.o.camera.position,t.o.ray.vector );
t.o.ray.intersect = t.o.ray.cast.intersectObject(t.Terrain.terrain, true);
Produces the expected results.
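A note on why this works (my addition, not part of the original answer): the Raycaster's second argument is a unit direction vector, not a target point. The normalized (x, -1000, z) vector happens to point roughly straight down, which is why it produces results. Computing the direction as target minus origin, normalized, is correct regardless of where the camera is; sketched with plain vectors:

```javascript
// Unit direction from an origin point toward a target point.
// The three.js equivalent is target.clone().sub( origin ).normalize().
function directionTo(origin, target) {
  const d = { x: target.x - origin.x, y: target.y - origin.y, z: target.z - origin.z };
  const len = Math.hypot(d.x, d.y, d.z);
  return { x: d.x / len, y: d.y / len, z: d.z / len };
}

// Camera at (5, 10, 5) casting straight down at the terrain below it:
console.log( directionTo({ x: 5, y: 10, z: 5 }, { x: 5, y: 0, z: 5 }) );
// → { x: 0, y: -1, z: 0 }
```

For a purely vertical ray you could also pass new THREE.Vector3( 0, -1, 0 ) directly as the Raycaster direction.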
What about this approach:
console.log( t.o.camera.position.distanceTo(t.o.vector.position) );