How to convert world rotation to screen rotation? - three.js

I need to convert the position and rotation on a 3d object to screen position and rotation. I can convert the position easily but not the rotation. I've attempted to convert the rotation of the camera but it does not match up.
Attached is an example plunkr & conversion code.
The white facebook button should line up with the red plane.
https://plnkr.co/edit/0MOKrc1lc2Bqw1MMZnZV?p=preview
function toScreenPosition(position, camera, width, height) {
  var p = new THREE.Vector3(position.x, position.y, position.z);
  var vector = p.project(camera);
  vector.x = (vector.x + 1) / 2 * width;
  vector.y = -(vector.y - 1) / 2 * height;
  return vector;
}
function updateScreenElements() {
  var btn = document.querySelector('#btn-share');
  var pos = plane.getWorldPosition();
  var vec = toScreenPosition(pos, camera, canvas.width, canvas.height);
  var translate = "translate3d(" + vec.x + "px," + vec.y + "px," + vec.z + "px)";
  var euler = camera.getWorldRotation();
  var rotate = "rotateX(" + euler.x + "rad)" +
    " rotateY(" + euler.y + "rad)" +
    " rotateZ(" + euler.z + "rad)";
  btn.style.transform = translate + " " + rotate;
}
... And a screenshot of the issue.

I would highly recommend not trying to match this to camera space. Instead, apply the image as a texture map to the red plane and use a raycast to see whether a click lands on the plane. You'll save yourself the headache of translating and rotating, and of hiding the symbol when it's behind the cube, etc.
Check out the three.js examples to see how to use the Raycaster. It's a lot more flexible and easier than trying to do rotations and matching. Then, whatever the 'btn' onclick function is, you just call it when you detect a raycast collision with the plane.
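For reference, a minimal sketch of that raycasting approach (reusing the `canvas`, `camera`, and `plane` variables from the question; the handler body is whatever #btn-share's click currently does):
var raycaster = new THREE.Raycaster();
var pointer = new THREE.Vector2();
canvas.addEventListener('click', function (event) {
  var rect = canvas.getBoundingClientRect();
  // convert the click position to normalized device coordinates (-1 to +1)
  pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
  pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;
  raycaster.setFromCamera(pointer, camera);
  var hits = raycaster.intersectObject(plane);
  if (hits.length > 0) {
    // the click landed on the red plane: trigger the share action here
  }
});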

Related

How to preserve threejs texture scale while applying texture rotation

I'd like to enable a user to rotate a texture on a rectangle while keeping the aspect ratio of the texture image intact. I'm rotating a 1:1 aspect-ratio image on a rectangular surface (say width: 2 and length: 1).
Steps to reproduce:
In the below texture rotation example
https://threejs.org/examples/?q=rotation#webgl_materials_texture_rotation
If we change one of the faces of the geometry like below:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L57
var geometry = new THREE.BoxBufferGeometry( 20, 10, 10 );
Then you can see that, as you play around with the rotation control, the image's aspect ratio is distorted (from a square to a weird shape). This is visible when comparing the result at 0 degrees with the result at some angle between 0 and 90 (screenshots omitted).
I understand that by changing the repeatX and repeatY factors I can control this. It's also easy to see what the values would be for 0-degree and 90-degree rotations.
But I'm struggling to come up with the formula for repeatX and repeatY that works for any texture rotation given length and width of the rectangular face.
Unfortunately when stretching geometry like that, you'll get a distortion in 3D space, not UV space. In this example, one UV.x unit occupies twice as much 3D space as one UV.y unit:
This is giving you those horizontally-skewed diamonds when in between rotations:
Sadly, there's no way to solve this with texture matrix transforms. The horizontal stretching will be applied after the texture transform, in 3D space, so texture.repeat won't help you avoid this. The only way to solve this is by modifying the UVs so the UV.x units take up as much 3D space as UV.y units:
With complex models, you'd do this kind of "equalizing" in a 3D editor, but since the geometry is simple enough, we can do it via code. See the example below. I'm using a width/height ratio variable in my UV.y remapping; that way the UV transformations will match up regardless of how much wider the face is.
//////// Boilerplate Three setup
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("canvas")});
const camera = new THREE.PerspectiveCamera(50, 1, 1, 100);
camera.position.z = 3;
const scene = new THREE.Scene();
/////////////////// CREATE GEOM & MATERIAL
const width = 2;
const height = 1;
const ratio = width / height; // <- magic number that will help with UV remapping
const geometry = new THREE.BoxBufferGeometry(width, height, width);
let uvY;
const uvArray = geometry.getAttribute("uv").array;
// Re-map UVs to avoid distortion
for (let i2 = 0; i2 < uvArray.length; i2 += 2){
uvY = uvArray[i2 + 1]; // Extract Y value,
uvY -= 0.5; // center around 0
uvY /= ratio; // divide by w/h ratio
uvY += 0.5; // remove center around 0
uvArray[i2 + 1] = uvY;
}
geometry.getAttribute("uv").needsUpdate = true;
const uvMap = new THREE.TextureLoader().load("https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_opengl.jpg");
// Now we can apply texture transformations as expected
uvMap.center.set(0.5, 0.5);
uvMap.repeat.set(0.25, 0.5);
uvMap.anisotropy = 16;
const material = new THREE.MeshBasicMaterial({map: uvMap});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("resize", resize);
// Add rotation on mousemove
function onMouseMove(ev) {
uvMap.rotation = (ev.clientX / window.innerWidth) * Math.PI * 2;
}
function resize() {
const width = window.innerWidth;
const height = window.innerHeight;
renderer.setSize(width, height);
camera.aspect = width / height;
camera.updateProjectionMatrix();
}
function animate(time) {
mesh.rotation.y = Math.cos(time/ 3000) * 2;
renderer.render(scene, camera);
requestAnimationFrame(animate);
}
resize();
requestAnimationFrame(animate);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://threejs.org/build/three.js"></script>
<canvas></canvas>
First of all, I agree with the solution @Marquizzo provided for your problem, and setting the UVs explicitly on the geometry should be the easiest way to solve it.
But @Marquizzo did not answer why changing the matrix of the texture (setting repeatX and repeatY) does not work.
We all know the 2D rotation matrix R:
R(θ) = | cos θ  -sin θ |
       | sin θ   cos θ |
UVs are calculated in the shader with a transform matrix T, which is the texture matrix from your question.
T * UV = new UV
To simplify the question, we only consider rotation, and assume we have an additional matrix X for calculating the new UV. Then we have
X * R * UV = new UV
The question now is whether we can find a solution for X so that, with any rotation, the new UV of any point in your question can be calculated correctly. If there is a solution for X, then we can simply use
var X = new Matrix3();
//X.set(x,y,z,...)
texture.matrix.premultiply(X);
Otherwise, we can't find the approach you expected.
Let's create several equations to figure out X.
In the pic below, ABCD is one face of your geometry, and the transparent green is the texture. The UV of point A is (0,1), point B is (0,0), and (1,0), (1,1) for C and D respectively.
The first equation comes from the requirement that, without any rotation, the original UV should never change (the UV for A is always (0,1)). So we should have
X * I * (0, 1) = (0, 1) // I is the identity matrix
From here we can see X should also be an identity matrix.
Then let's see whether the identity matrix X can satisfy the second equation. What's the second equation? Simplifying again, let B be the rotation centre (origin) and rotate the texture 90 degrees (counterclockwise). We use -90 to calculate the UV even though we rotate by 90 degrees.
The new UV for point A after rotating the texture 90 degrees should be the current value of E, which is (a/b, 0) (a and b being the side lengths of the face in the picture). Then we have
X * R(-90°) * (0, 1) = (a/b, 0)
Since R(-90°) * (0, 1) = (1, 0), this requires X * (1, 0) = (a/b, 0).
From this equation we can see X should not be an identity matrix, which means, WE ARE NOT ABLE TO FIND A SOLUTION OF X TO SOLVE YOUR PROBLEM WITH
X * R * UV = new UV
Certainly, you could change the shader that calculates the new UVs, but that's even harder than the approach @Marquizzo provided.

In A-Frame/THREE.js, is there a method like the Camera.ScreenToWorldPoint() from Unity?

I know a method from Unity whichs is very useful to convert a screen position to a world position : https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I didn't find anything.
Is there an easy way to convert a screen position to a world position in a plane which is positioned a given distance from the camera ?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
  // convert the screen-space coordinates to normalized device coordinates
  // (x and y ranging from -1 to 1); screenWidth and screenHeight are assumed
  // to be the dimensions of the canvas or window
  const ndc = new THREE.Vector2();
  ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
  ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;
  // `Raycaster` can be used to convert this into a ray:
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);
  // finally, apply the distance:
  return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z you could also let three.js compute an intersection with any Object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and various examples linked here: https://threejs.org/docs/#api/core/Raycaster
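For illustration, a minimal sketch of that intersectObject variant (assuming some mesh named `targetObject` in the scene; the name is illustrative):
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(ndc, camera);
const hits = raycaster.intersectObject(targetObject);
if (hits.length > 0) {
  const worldPoint = hits[0].point;  // exact world-space intersection
  const depth = hits[0].distance;    // distance from the camera along the ray
}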

How to accelerate calculations when updating massive numbers of positions from 3D to screen (HUD)

I want to update HUD positions from 3D to 2D while the mouse is moving. Since there may be a large number of 3D objects to project to screen positions, I'm running into a performance problem.
Is there any way to accelerate the calculations? The following is how I calculate a 3D object's position on the 2D screen.
function toScreenPosition(obj) {
var vector = new THREE.Vector3();
//calculate screen half size
var widthHalf = 0.5 * renderer.context.canvas.width;
var heightHalf = 0.5 * renderer.context.canvas.height;
//get 3d object position
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.project(this.camera);
//get 2d position on screen
vector.x = (vector.x * widthHalf) + widthHalf;
vector.y = -(vector.y * heightHalf) + heightHalf;
return {
x: vector.x,
y: vector.y
};
}
Rather than repositioning your HUD in world space every time your camera moves, add your HUD object(s) to your camera object and position them only once. Then, when your camera moves, your HUD moves along with it, because the camera's transformation is cascaded to its children.
yourCamera.add(yourHUD);
yourHUD.position.z = 10;
Note that doing it this way (or even positioning it the way you were) may allow scene objects to clip through your HUD geometry, or even appear between your HUD and the camera, obscuring the HUD. If that's what you want, great! If not, you could move your HUD to a second render pass, allowing it to remain "on top."
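A rough sketch of such a two-pass setup (assuming the HUD meshes live in a separate `hudScene`; this is not the asker's code):
renderer.autoClear = false;           // clear manually so the HUD pass doesn't wipe the main pass
function render() {
  renderer.clear();                   // clear color and depth for the main pass
  renderer.render(scene, camera);     // draw the 3D scene
  renderer.clearDepth();              // discard depth so nothing can occlude the HUD
  renderer.render(hudScene, camera);  // draw the HUD on top
  requestAnimationFrame(render);
}
requestAnimationFrame(render);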
First, here is your function rewritten for (almost) optimal performance, as described in the comments above; the render loop is obviously just an example to illustrate where to make which calls:
var width = renderer.context.canvas.width;
var height = renderer.context.canvas.height;
// has to be called whenever the canvas-size changes
function onCanvasResize() {
  width = renderer.context.canvas.width;
  height = renderer.context.canvas.height;
}
var projMatrix = new THREE.Matrix4();
// renderloop-function, called per animation-frame
function render() {
// just needed once per frame (even better would be
// once per camera-movement)
projMatrix.multiplyMatrices(
camera.projectionMatrix,
projMatrix.getInverse(camera.matrixWorld)
);
hudObjects.forEach(function(obj) {
toScreenPosition(obj, projMatrix);
});
}
// wrapped in IIFE to store the local vector-variable (this pattern
// is used everywhere in three.js)
var toScreenPosition = (function() {
var vector = new THREE.Vector3();
return function __toScreenPosition(obj, projectionMatrix) {
// this could potentially be left away, but isn't too
// expensive as there are 'needsUpdate'-checks in place
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.applyMatrix4(projectionMatrix);
vector.x = (vector.x + 1) * width / 2;
vector.y = (1 - vector.y) * height / 2;
// might want to consider returning a Vector3-instance
// instead, depends on how the result is used
return {x: vector.x, y: vector.y};
}
}) ();
But, considering you want to render a HUD, it would be better to render it independently of the main scene, making all of the above computations obsolete and also allowing you to choose a different coordinate system for sizing and positioning HUD elements.
I have an example of this here: https://codepen.io/usefulthink/pen/ZKPvPB. There I used an orthographic camera and a separate scene to render HUD elements on top of the 3D scene. No extra computations required, and I can specify the size and position of HUD elements conveniently in pixel units. (The same would work with a perspective camera; it just requires a bit more trigonometry to get right.)
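As a rough sketch of that pixel-unit idea (names are illustrative, not taken from the codepen):
const hudScene = new THREE.Scene();
// frustum sized in pixels: (0,0) is the bottom-left corner of the screen
const hudCamera = new THREE.OrthographicCamera(
  0, window.innerWidth,   // left, right
  window.innerHeight, 0,  // top, bottom
  -10, 10                 // near, far
);
// a 100x40 px HUD element, 20px in from the bottom-left corner
const label = new THREE.Mesh(
  new THREE.PlaneBufferGeometry(100, 40),
  new THREE.MeshBasicMaterial({ color: 0xffffff })
);
label.position.set(20 + 50, 20 + 20, 0); // position is the plane's center
hudScene.add(label);
// in the render loop, after rendering the main scene:
// renderer.clearDepth();
// renderer.render(hudScene, hudCamera);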

Drawing lines between the Icosahedron vertices without wireframe material and with some line width using WebGLRenderer

I'm new to three.js. I need to draw a sphere built from connected triangles. I use an Icosahedron to construct the sphere in the following way:
var material = new THREE.MeshPhongMaterial({
emissive : 0xffffff,
transparent: true,
opacity : 0.5,
wireframe : true
});
var icogeo = new THREE.IcosahedronGeometry(80,2);
var mesh = new THREE.Mesh(icogeo, material);
scene.add(mesh);
But I need the lines to be thicker, and line width has no effect on Windows, so I thought of looping through the vertices and drawing a cylinder/tube between them. (I can't draw lines because LineBasicMaterial does not respond to lights.)
for(i=0;i<icogeo.faces.length;i++){
var face = icogeo.faces[i];
//get vertices from face and draw cylinder/tube between the three vertices
}
Can someone please help with drawing a tube/cylinder between two Vector3 vertices?
**The problem I'm facing with wireframe is that it isn't smooth, and I can't increase its width on Windows.
If you really want to create a cylinder between two points, one way to do it is to create it in unit space and then transform it to your line. But that is very mathy.
An intuitive way to create it is to think about how you would do it in unit space: a circle around the z axis (in x,y), and another one a bit down z.
Creating a circle in 2D is easy: for ( angle(0,360,360/numsteps) ) (x,y)=(sin(angle),cos(angle))*radius. (See for example Calculating the position of points in a circle.)
Now the two butt ends of your cylinder are not in x,y! But if you have two vectors dx,dy you can just multiply your x,y with them and get a 3D position.
So how do you get dx and dy? One way is the Gram–Schmidt process: http://en.wikipedia.org/wiki/Gram%E2%80%93Schmidt_process
which reads far scarier than it is. You start with your forward direction, which is your line: forward = normalize(end - start). Then you just pick a direction "up", usually (0,1,0); unless forward is already close to up, then pick another one like (1,0,0). Take their cross product. This gives you "left". Then take the cross product of "left" and "forward" to get "right". Now "left" and "right" are your dx and dy!
That way you can make two circles at the two ends of your line. Add triangles in between and you have a cylinder!
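If you'd rather not build the rings yourself, a common shortcut (a sketch, untested against your scene) is to create a THREE.CylinderGeometry along the Y axis and orient it with a quaternion:
function cylinderBetween(start, end, radius, material) {
  var direction = new THREE.Vector3().subVectors(end, start);
  // CylinderGeometry is built along the Y axis, centered at the origin
  var geometry = new THREE.CylinderGeometry(radius, radius, direction.length(), 16, 1, true);
  var mesh = new THREE.Mesh(geometry, material);
  // rotate +Y onto the start -> end direction
  mesh.quaternion.setFromUnitVectors(new THREE.Vector3(0, 1, 0), direction.clone().normalize());
  // place the cylinder's center at the segment midpoint
  mesh.position.copy(start).add(end).multiplyScalar(0.5);
  return mesh;
}
// usage: scene.add(cylinderBetween(vertexA, vertexB, 1, someMaterial));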
Even though I believe it is overkill for what you are trying to achieve, here is code that draws a capsule (a cylinder with spheres at the ends) between two endpoints.
/**
* Returns a THREE.Object3D cylinder and spheres going from top to bottom positions
* @param radius - the radius of the capsule's cylinder
* @param top, bottom - THREE.Vector3, top and bottom positions of the capsule
* @param radiusSegments - tessellation around the equator
* @param openTop, openBottom - whether the end is given a sphere; true means they are not
* @param material - THREE.Material
*/
function createCapsule (radius, top, bottom, radiusSegments, openTop, openBottom, material)
{
radiusSegments = (radiusSegments === undefined) ? 32 : radiusSegments;
openTop = (openTop === undefined) ? false : openTop;
openBottom = (openBottom === undefined) ? false : openBottom;
var capsule = new THREE.Object3D();
var cylinderAxis = new THREE.Vector3();
cylinderAxis.subVectors (top, bottom); // get cylinder height
var cylinderGeom = new THREE.CylinderGeometry (radius, radius, cylinderAxis.length(), radiusSegments, 1, true); // open-ended
var cylinderMesh = new THREE.Mesh (cylinderGeom, material);
// get cylinder center for translation
var center = new THREE.Vector3();
center.addVectors (top, bottom);
center.divideScalar (2.0);
// pass in the cylinder itself, its desired axis, and the place to move the center.
makeLengthAngleAxisTransform (cylinderMesh, cylinderAxis, center);
capsule.add (cylinderMesh);
if (! openTop || ! openBottom)
{
// instance geometry
var hemisphGeom = new THREE.SphereGeometry (radius, radiusSegments, radiusSegments/2, 0, 2*Math.PI, 0, Math.PI/2);
// make a cap instance of hemisphGeom around 'center', looking into some 'direction'
var makeHemiCapMesh = function (direction, center)
{
var cap = new THREE.Mesh (hemisphGeom, material);
makeLengthAngleAxisTransform (cap, direction, center);
return cap;
};
// ================================================================================
if (! openTop)
capsule.add (makeHemiCapMesh (cylinderAxis, top));
// reverse the axis so that the hemiCaps would look the other way
cylinderAxis.negate();
if (! openBottom)
capsule.add (makeHemiCapMesh (cylinderAxis, bottom));
}
return capsule;
}
// Transform object to align with given axis and then move to center
function makeLengthAngleAxisTransform (obj, align_axis, center)
{
obj.matrixAutoUpdate = false;
// From left to right using frames: translate, then rotate; TR.
// So translate is first.
obj.matrix.makeTranslation (center.x, center.y, center.z);
// take cross product of axis and up vector to get axis of rotation
var yAxis = new THREE.Vector3 (0, 1, 0);
// Needed later for dot product, just do it now;
var axis = new THREE.Vector3();
axis.copy (align_axis);
axis.normalize();
var rotationAxis = new THREE.Vector3();
rotationAxis.crossVectors (axis, yAxis);
if (rotationAxis.length() < 0.000001)
{
// Special case: if rotationAxis is just about zero, set to X axis,
// so that the angle can be given as 0 or PI. This works ONLY
// because we know one of the two axes is +Y.
rotationAxis.set (1, 0, 0);
}
rotationAxis.normalize();
// take dot product of axis and up vector to get cosine of angle of rotation
var theta = -Math.acos (axis.dot (yAxis));
// obj.matrix.makeRotationAxis (rotationAxis, theta);
var rotMatrix = new THREE.Matrix4();
rotMatrix.makeRotationAxis (rotationAxis, theta);
obj.matrix.multiply (rotMatrix);
}

three.js - Set the rotation of an object in relation to its own axes

I'm trying to make a static 3D prism out of point clouds with specific numbers of particles in each. I've got the corner coordinates of each side of the prism based on the angle of turn, and tried spawning the particles in the area bounded by these coordinates. Instead, the resulting point clouds have kept only the bottom-left coordinate.
Screenshot: http://i.stack.imgur.com/uQ7Q8.png
I've tried to set the rotation of each cloud object such that their edges meet, but they will rotate only around the world centre. I gather this has something to do with rotation matrices and Euler angles, but, after trying to work them out for 3 solid days, I've despaired. (I'm a sociologist, not a dev, and haven't touched graphics before this project.)
Please help? How do I set the rotation on each face of the prism? Or maybe there is a more sensible way to get the particles to spawn in the correct area in the first place?
The code:
// draw the particles
var n = 0;
do {
var geom = new THREE.Geometry();
var material = new THREE.PointCloudMaterial({size: 1, vertexColors: true, color: 0xffffff});
for (i = 0; i < group[n]; i++) {
if (geom.vertices.length < group[n]){
var particle = new THREE.Vector3(
Math.random() * screens[n].bottomrightback.x + screens[n].bottomleftfront.x,
Math.random() * screens[n].toprightback.y + screens[n].bottomleftfront.y,
Math.random() * screens[n].bottomrightfront.z + screens[n].bottomleftfront.z);
geom.vertices.push(particle);
geom.colors.push(new THREE.Color(Math.random() * 0x00ffff));
}
}
var system = new THREE.PointCloud(geom, material);
scene.add(system);
**// something something matrix Euler something?**
n++
}
while (n < numGroups);
I've tried to set the rotation of each cloud object such that their edges meet, but they will rotate only around the world centre.
It is true they only rotate around 0,0,0. The simple solution then is to move the object to the center, rotate it, and then move it back to its original position.
For example (Code not tested so might take a bit of tweaking):
var m = new THREE.Matrix4();
var movetocenter = new THREE.Matrix4();
movetocenter.makeTranslation(-x, -y, -z);
var rotate = new THREE.Matrix4();
rotate.makeRotationFromEuler(); //Build your rotation here
var moveback = new THREE.Matrix4();
moveback.makeTranslation(x, y, z);
// Matrix4.multiply post-multiplies, so the rightmost factor is applied to the
// vertices first: move to the origin, rotate, then move back.
m.multiply(moveback);
m.multiply(rotate);
m.multiply(movetocenter);
//Now you can use geometry.applyMatrix(m)
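For instance, applied to one of the point clouds from the question (a sketch; the pivot is taken from the geometry's bounding-box center, and the 60-degree rotation about Y is just an illustrative value):
geom.computeBoundingBox();
var center = new THREE.Vector3()
  .addVectors(geom.boundingBox.min, geom.boundingBox.max)
  .multiplyScalar(0.5);
var m = new THREE.Matrix4();
m.multiply(new THREE.Matrix4().makeTranslation(center.x, center.y, center.z));    // move back (applied last)
m.multiply(new THREE.Matrix4().makeRotationFromEuler(new THREE.Euler(0, Math.PI / 3, 0)));
m.multiply(new THREE.Matrix4().makeTranslation(-center.x, -center.y, -center.z)); // move to origin (applied first)
geom.applyMatrix(m);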
