How to set rotation on a certain axis

I am using the FlyControl, and I want the camera to always stay upright. If I look down and then look left, the whole view ends up tilted by however far I looked down. Just setting the Euler order somehow doesn't seem to do it; the result is still wrong. I've been researching this for quite a while and have gotten nowhere.
How do I rotate the camera so it is upright, but still pointed in the right direction?

Main app:
camera.up = new THREE.Vector3( 0, 0, 1 );
camera.rotation.order = "ZYX";
FlyControl.js:
this.update = function ( delta ) {
    var moveMult = delta * this.movementSpeed * this.movementSpeedMultiplier;
    var rotMult = delta * this.rollSpeed;
    var cur = this.object.rotation;

    this.object.translateX( this.moveVector.x * moveMult );
    this.object.translateY( this.moveVector.y * moveMult );
    this.object.translateZ( this.moveVector.z * moveMult );

    //this.tmpQuaternion.set( this.rotationVector.x * rotMult, this.rotationVector.y * rotMult, this.rotationVector.z * rotMult, 1 ).normalize();
    //this.object.quaternion.multiply( this.tmpQuaternion );
    //this.object.lookAt( this.object.getWorldDirection() );

    this.object.rotation.set( cur.x + this.rotationVector.x * rotMult, /*cur.y + this.rotationVector.y * rotMult*/ 0, cur.z + this.rotationVector.y * rotMult, cur.order );

    console.log( this.rotationVector );

    // expose the rotation vector for convenience
    //this.object.rotation.setFromQuaternion( this.object.quaternion, this.object.rotation.order );
};
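One way to get the behavior asked about here is to stop accumulating raw Euler deltas and instead track yaw and pitch as two separate angles, clamp the pitch, and rebuild the rotation each frame. A minimal plain-JS sketch of that bookkeeping (the names `makeUprightRotation`, `yaw`, and `pitch` are my own, not part of any FlyControls API):

```javascript
// Track yaw (turning left/right around the world up axis) and pitch
// (looking up/down) as two separate angles. Because roll is never
// touched, the camera can never end up tilted.
function makeUprightRotation() {
  var yaw = 0;
  var pitch = 0;
  var limit = Math.PI / 2 - 0.01; // stop just short of straight up/down

  return {
    rotate: function (deltaYaw, deltaPitch) {
      yaw += deltaYaw;
      pitch = Math.max(-limit, Math.min(limit, pitch + deltaPitch));
      // With camera.up = (0, 0, 1) and rotation order "ZYX", this maps to:
      // camera.rotation.set(pitch, 0, yaw, "ZYX");
      return { yaw: yaw, pitch: pitch };
    }
  };
}
```

With order "ZYX" the yaw around Z is applied before the pitch, so the pitch axis stays horizontal and no roll ever accumulates.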

Related

three js zooming in where the cursor is using orbit controls

I'm very new to three.js and am currently trying to implement a feature where the user can zoom in where the cursor is. The plan is to use a raycaster to get the point of intersection and then use it to update the target of the orbit controls every time the cursor moves.
The orbit controls are initialized like so:
this.controls = new OrbitControls( this.camera_, this.threejs_.domElement );
this.controls.listenToKeyEvents( window );
this.controls.screenSpacePanning = false;
this.controls.minDistance = 30;
this.controls.maxDistance = 500;
this.controls.maxPolarAngle = Math.PI / 2;
This is the event listener:
document.addEventListener('pointermove', (e) => this.onPointerMove(e), false);
and the onPointerMove function looks like this:
onPointerMove( event ) {
    const pointer = {
        x: ( event.clientX / window.innerWidth ) * 2 - 1,
        y: -( event.clientY / window.innerHeight ) * 2 + 1,
    };
    this.rayCaster.setFromCamera( pointer, this.camera_ );
    const intersects = this.rayCaster.intersectObjects( this.scene_.children, false );
    if ( intersects.length > 0 ) {
        this.controls.target( intersects[ 0 ].point );
        this.controls.update();
    }
}
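The clientX/clientY-to-normalized-device-coordinates conversion inside onPointerMove can be pulled out into a tiny helper; a sketch (the name `toNDC` is my own):

```javascript
// Convert pixel coordinates to normalized device coordinates in -1..1,
// flipping y because screen coordinates grow downward.
function toNDC(clientX, clientY, width, height) {
  return {
    x: (clientX / width) * 2 - 1,
    y: -(clientY / height) * 2 + 1
  };
}
```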
So far, intersects[0].point seems to be getting the intersection coordinates correctly, but the orbit controls are simply not getting updated. I have also tried changing the camera's position using
this.camera_.position.set(intersects[0].point.x+20,intersects[0].point.y+20,intersects[0].point.z+20);
this.controls.update();
however, that just moves my camera wherever I point.
Edit:
This doesn't work either:
const newTarget = new Vector3(intersects[0].point.x,intersects[0].point.y,intersects[0].point.z);
this.controls.target.copy(newTarget);
Found the answer here.
Apparently you need to use either copy() or set() to change the target of the orbit controls, without calling update().
Like so:
this.controls.target.set(intersects[0].point.x,intersects[0].point.y,intersects[0].point.z);

How to calculate the needed velocity vector to fire an arrow to hit a certain point

I'm using the Oimo.js physics library with three.js.
I fire my arrow at a target, but my math doesn't seem to be right, and I'm having trouble remembering exactly how all the kinematic formulas work.
I have an attack function which creates a projectile and fires it with a 3D vector. But it's not behaving how I thought it would, and I ended up needing to hard-code a y value, which doesn't really work either. Can someone point me in the correct direction? I also want the arrow to have a slight arc in its trajectory.
public attack( target: Unit, isPlayer: boolean ): void {
    let collisionGroup = isPlayer ? CollisionGroup.PLAYER_PROJECTILES : CollisionGroup.ENEMY_PROJECTILES;
    let collidesWithGroup = isPlayer ? CollidesWith.PLAYER_PROJECTILES : CollidesWith.ENEMY_PROJECTILES;
    this.model.lookAt( target.position );
    let direction: Vector3 = new Vector3( 0, 0, 0 );
    direction = this.model.getWorldDirection( direction );
    let value = this.calculateVelocity();
    let velocity = new Vector3( direction.x * value, Math.sin( _Math.degToRad( 30 ) ) * value, direction.z * value );
    let arrow = this.gameWorld.addProjectile( 'arrow3d', 'box', false, new Vector3( this.model.position.x, 5, this.model.position.z ), new Vector3( 0, 0, 0 ), false, collisionGroup, collidesWithGroup );
    arrow.scale = new Vector3( 10, 10, 5 );
    arrow.setVelocity( velocity );
    this.playAnimation( 'attack', false );
}

protected calculateVelocity(): number {
    return Math.sqrt( -2 * ( -9.8 / 60 ) * this.distanceToTarget );
}
I'm dividing by 60 because of the Oimo.js timestep.
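For reference, the flat-ground kinematics are: a projectile launched at speed v and elevation angle θ travels a horizontal distance d = v² · sin(2θ) / g, so the speed needed to land at distance d is v = √(g·d / sin 2θ). A plain-JS sketch, not tied to Oimo (if Oimo steps gravity as g/60 per frame, scale g the same way):

```javascript
// Launch speed needed to hit a target at horizontal distance `d`
// at the same height, with no drag, firing at elevation `thetaRad`.
function launchSpeed(d, thetaRad, g) {
  return Math.sqrt((g * d) / Math.sin(2 * thetaRad));
}

// Split that speed into horizontal and vertical components; the horizontal
// part would then be distributed over direction.x / direction.z.
function velocityComponents(speed, thetaRad) {
  return {
    horizontal: speed * Math.cos(thetaRad),
    vertical: speed * Math.sin(thetaRad)
  };
}
```

A 30° elevation, as in the Math.sin( _Math.degToRad( 30 ) ) term above, gives a noticeable arc; 45° maximizes range for a given speed.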

Update amount of particles

Please see the attached fiddle. I would like to update the number of particles with a click event.
So far I have only been able to update the camera settings.
I have gone through the three.js docs on updating things, but would appreciate a push in the right direction.
I was trying something along the lines of:
document.onclick = myClickHandler;

function myClickHandler() {
    particle = new THREE.Sprite( material );
    particle.position.x = Math.random() * 2 - 1;
    particle.position.y = Math.random() * 2 - 1;
    particle.position.z = Math.random() * 2 - 1;
    particle.position.normalize();
    particle.position.multiplyScalar( Math.random() * 10 + 150 );
    particle.scale.x = particle.scale.y = 10;
    scene.add( particle );
    geometry.vertices.push( particle.position );
}
Thanks!
https://jsfiddle.net/007zmukr/8/
I got it working. Basically by:
Making the geometry and material variables global.
Separating the particle generation into a separate function, addParticle.
Calling addParticle within the initial particle-generation loop and from within the click handler.
Pretty much what you did above, except moving it into its own function.
https://jsfiddle.net/2pha/007zmukr/26/
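The position math in the handler above can also be isolated from THREE entirely; a plain-JS sketch mirroring normalize() followed by multiplyScalar( Math.random() * 10 + 150 ) (the function name is my own):

```javascript
// Generate a random point on a spherical shell with radius 150..160,
// the same distribution the normalize() + multiplyScalar() calls produce.
function randomShellPosition() {
  var x = Math.random() * 2 - 1;
  var y = Math.random() * 2 - 1;
  var z = Math.random() * 2 - 1;
  var len = Math.sqrt(x * x + y * y + z * z) || 1; // avoid division by zero
  var radius = Math.random() * 10 + 150;
  return { x: (x / len) * radius, y: (y / len) * radius, z: (z / len) * radius };
}
```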

Compute 3D point from mouse-position and depth-map

I need to compute 3D coordinates from a screen-space position using a rendered depth map. Unfortunately, using regular raycasting is not an option for me because I am dealing with a single geometry containing something on the order of 5M faces.
So I figured I will do the following:
render a depth-map with RGBADepthPacking into a renderTarget
use a regular unproject call to compute a ray from the mouse position (exactly as I would when raycasting)
look up the depth from the depth map at the mouse coordinates and compute a point along the ray at that distance.
This kind of works, but somehow the located point is always slightly behind the object, so there is probably something wrong with my depth-calculations.
Now some details about the steps above.
Rendering the depth map is pretty much straightforward:
const depthTarget = new THREE.WebGLRenderTarget( w, h );
const depthMaterial = new THREE.MeshDepthMaterial({
    depthPacking: THREE.RGBADepthPacking
});

// in render loop
renderer.setClearColor( 0xffffff, 1 );
renderer.clear();
scene.overrideMaterial = depthMaterial;
renderer.render( scene, camera, depthTarget );
Look up the stored color value at the mouse position with:
renderer.readRenderTargetPixels(
    depthTarget, x, h - y, 1, 1, rgbaBuffer
);
And convert back to float using (adapted from the GLSL version in packing.glsl):
const v4 = new THREE.Vector4();
const unpackDownscale = 255 / 256;
const unpackFactors = new THREE.Vector4(
    unpackDownscale / ( 256 * 256 * 256 ),
    unpackDownscale / ( 256 * 256 ),
    unpackDownscale / 256,
    unpackDownscale
);

function unpackRGBAToDepth( rgbaBuffer ) {
    return v4.fromArray( rgbaBuffer )
        .multiplyScalar( 1 / 255 )
        .dot( unpackFactors );
}
and finally computing the depth value (I found corresponding code in readDepth() in examples/js/shaders/SSAOShader.js, which I ported to JS):
function computeDepth() {
    const cameraFarPlusNear = cameraFar + cameraNear;
    const cameraFarMinusNear = cameraFar - cameraNear;
    const cameraCoef = 2.0 * cameraNear;
    let z = unpackRGBAToDepth( rgbaBuffer );
    return cameraCoef / ( cameraFarPlusNear - z * cameraFarMinusNear );
}
Now, as this function returns values in the range 0..1, I think it is the depth in clip-space coordinates, so I convert it into "real" units using:
const depth = camera.near + depth * (camera.far - camera.near);
There is obviously something slightly off with these calculations; I haven't yet figured out the math and the details of how depth is stored.
Can someone please point me to the mistake I made?
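As an aside, the RGBA unpacking step can be sanity-checked in isolation without THREE.Vector4, since it is just a dot product; a plain-JS sketch (the function name is my own):

```javascript
// Unpack an RGBADepthPacking-encoded byte quadruple back to a float in 0..1.
// The red channel is the least significant; alpha is the most significant.
var UNPACK_DOWNSCALE = 255 / 256;
var UNPACK_FACTORS = [
  UNPACK_DOWNSCALE / (256 * 256 * 256),
  UNPACK_DOWNSCALE / (256 * 256),
  UNPACK_DOWNSCALE / 256,
  UNPACK_DOWNSCALE
];

function unpackRGBAToDepthPlain(rgba) {
  var depth = 0;
  for (var i = 0; i < 4; i++) {
    depth += (rgba[i] / 255) * UNPACK_FACTORS[i];
  }
  return depth;
}
```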
Addition: other things I tried
First I thought it should be possible to just use the unpacked depth value as the z value in my unproject call, like this:
const x = mouseX / w * 2 - 1;
const y = -mouseY / h * 2 + 1;
const v = new THREE.Vector3( x, y, depth ).unproject( camera );
However, this doesn't get the coordinates right either.
[EDIT 1 2017-05-23 11:00CEST]
As per @WestLangley's comment, I found the perspectiveDepthToViewZ() function, which sounds like it should help. Written in JS, that function is
function perspectiveDepthToViewZ( invClipZ, near, far ) {
    return ( near * far ) / ( ( far - near ) * invClipZ - far );
}
However, when called with unpacked values from the depth-map, results are several orders of magnitude off. See here.
OK, finally solved it. For everyone having trouble with similar issues, here's the solution:
The last line of the computeDepth function was just wrong. There is a function perspectiveDepthToViewZ in packing.glsl that is pretty easy to convert to JS:
function perspectiveDepthToViewZ( invClipZ, near, far ) {
    return ( near * far ) / ( ( far - near ) * invClipZ - far );
}
(I believe this is somehow part of the inverse projection matrix.)
function computeDepth() {
    let z = unpackRGBAToDepth( rgbaBuffer );
    return perspectiveDepthToViewZ( z, camera.near, camera.far );
}
Now this will return the z-axis value in view space for the point. What's left is converting it back into world-space coordinates:
const setPositionFromViewZ = (function () {
    const viewSpaceCoord = new THREE.Vector3();
    const projInv = new THREE.Matrix4();

    return function ( position, viewZ ) {
        projInv.getInverse( camera.projectionMatrix );
        position
            .set(
                mousePosition.x / windowWidth * 2 - 1,
                -( mousePosition.y / windowHeight ) * 2 + 1,
                0.5
            )
            .applyMatrix4( projInv );
        position.multiplyScalar( viewZ / position.z );
        position.applyMatrix4( camera.matrixWorld );
    };
})();
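A quick sanity check on perspectiveDepthToViewZ: view-space z is negative in front of the camera, so a packed depth of 0 should land on the near plane and a depth of 1 on the far plane. Restating the function standalone to check those endpoints:

```javascript
// Same formula as perspectiveDepthToViewZ in three.js packing.glsl:
// maps an inverse-clip-z depth sample in 0..1 to (negative) view-space z.
function perspectiveDepthToViewZ(invClipZ, near, far) {
  return (near * far) / ((far - near) * invClipZ - far);
}
```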

Ray tracing to a Point Cloud with a custom vertex shader in Three.js

How can you ray trace to a Point Cloud with a custom vertex shader in three.js?
This is my vertex shader:
void main() {
    vUvP = vec2( position.x / (width*2.0), position.y / (height*2.0)+0.5 );
    colorP = vec2( position.x / (width*2.0)+0.5, position.y / (height*2.0) );

    vec4 pos = vec4(0.0, 0.0, 0.0, 0.0);
    depthVariance = 0.0;

    if ( (vUvP.x<0.0) || (vUvP.x>0.5) || (vUvP.y<0.5) || (vUvP.y>0.0) ) {
        vec2 smp = decodeDepth(vec2(position.x, position.y));
        float depth = smp.x;
        depthVariance = smp.y;
        float z = -depth;
        pos = vec4(
            ( position.x / width - 0.5 ) * z * (1000.0/focallength) * -1.0,
            ( position.y / height - 0.5 ) * z * (1000.0/focallength),
            (- z + zOffset / 1000.0) * 2.0,
            1.0
        );
        vec2 maskP = vec2( position.x / (width*2.0), position.y / (height*2.0) );
        vec4 maskColor = texture2D( map, maskP );
        maskVal = ( maskColor.r + maskColor.g + maskColor.b ) / 3.0;
    }

    gl_PointSize = pointSize;
    gl_Position = projectionMatrix * modelViewMatrix * pos;
}
In the Points class, raycasting is implemented as follows:
function testPoint( point, index ) {
    var rayPointDistanceSq = ray.distanceSqToPoint( point );
    if ( rayPointDistanceSq < localThresholdSq ) {
        var intersectPoint = ray.closestPointToPoint( point );
        intersectPoint.applyMatrix4( matrixWorld );
        var distance = raycaster.ray.origin.distanceTo( intersectPoint );
        if ( distance < raycaster.near || distance > raycaster.far ) return;
        intersects.push( {
            distance: distance,
            distanceToRay: Math.sqrt( rayPointDistanceSq ),
            point: intersectPoint.clone(),
            index: index,
            face: null,
            object: object
        } );
    }
}

var vertices = geometry.vertices;
for ( var i = 0, l = vertices.length; i < l; i ++ ) {
    testPoint( vertices[ i ], i );
}
However, since I'm using a vertex shader, geometry.vertices doesn't match the vertices on screen, which prevents the raycast from working.
Can we get the points back from the vertex shader?
I didn't dive into what your vertex shader actually does, and I assume there are good reasons for you to do it in the shader, so it's likely not feasible to redo the calculations in JavaScript when doing the raycasting.
One approach could be to have some sort of estimate for where the points are, use those for a preselection and do some more involved calculation for the points that are closest to the ray.
If that won't work, your best bet would be to render a lookup map of your scene, where the color values encode the id of the point rendered at those coordinates (this is also referred to as GPU picking; there are examples here, here, and even a library here, although that doesn't really do what you need).
To do that, you need to render your scene twice: create a lookup-map in the first pass and render it regularly in the second pass. The lookup-map will store for every pixel which particle was rendered there.
To get that information you need to set up a THREE.WebGLRenderTarget (this might be downscaled to half the width/height for better performance) and a different material. The vertex shader stays as it is, but the fragment shader will just output a single, unique color value for every particle (or anything else you can use to identify them). Then render the scene (or better: only the parts that should be raycast targets) into the render target:
var size = renderer.getSize();
var renderTarget = new THREE.WebGLRenderTarget(size.width / 2, size.height / 2);
renderer.render(pickingScene, camera, renderTarget);
After rendering, you can read the content of this lookup texture back using the renderer.readRenderTargetPixels method:
var pixelData = new Uint8Array(width * height * 4);
renderer.readRenderTargetPixels(renderTarget, 0, 0, width, height, pixelData);
(the layout of pixelData here is the same as for a regular canvas imageData.data)
Once you have that, the raycaster only needs to look up a single coordinate, read and interpret the color value as an object id, and do something with it.
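Encoding a particle index into the picking color and decoding it back out of the pixelData buffer can look like this (a plain-JS sketch; the names are my own, and with three 8-bit channels you get up to 2^24 − 1 usable ids):

```javascript
// Encode an integer id as an RGB triple for the picking material ...
function idToRGB(id) {
  return [(id >> 16) & 255, (id >> 8) & 255, id & 255];
}

// ... and decode it back from a pixel.
function rgbToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

// Look up the id under pixel (x, y) in a readRenderTargetPixels buffer,
// which is tightly packed RGBA, row by row.
function pickId(pixelData, x, y, width) {
  var i = (y * width + x) * 4;
  return rgbToId(pixelData[i], pixelData[i + 1], pixelData[i + 2]);
}
```

In practice you would likely reserve one value (e.g. 0 or pure white, matching the clear color) for the background so that misses can be detected.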
