I'm struggling with the positioning of some A-Frame text geometry and am wondering if I'm going about this the wrong way 😅
I'm finding that when the box renders, the center point is at the minimum point of all the axes (bottom-left-close). This means the text expands more to the top-right-far than I would expect. This is different from A-Frame geometry entities, where the center point is at the very center of all axes.
Sorry if the above phrasing is confusing, I'm still not sure how to best describe things in a 3d space 😆
What I'm thinking I need to do is calculate the bounding box after the element has loaded and change the position to the center. I've based that approach on the answer here: AFRAME text-geometry component rotation from center?
Does that seem like the right direction? If so, I'm currently trying to do this through an A-Frame component:
AFRAME.registerComponent('center-all', {
  update() {
    // Need to wait for the element to be loaded
    setTimeout(() => {
      const mesh = this.el.getObject3D('mesh');
      const bbox = new THREE.Box3().setFromObject(this.el.object3D);
      const offsetX = (bbox.min.x - bbox.max.x) / 2;
      const offsetY = (bbox.min.y - bbox.max.y) / 2;
      const offsetZ = (bbox.min.z - bbox.max.z) / 2;
      mesh.position.set(offsetX, offsetY, offsetZ);
    }, 0);
  }
});
This code illustrates the problem I'm seeing
This code shows my attempted solution
This code (with the translation hard coded) is more like what I would like
TextGeometry and TextBufferGeometry are subclasses of Geometry and BufferGeometry respectively, so both have the boundingBox property. You just need to compute it, then get its center point:
textGeo.computeBoundingBox();
const center = textGeo.boundingBox.getCenter(new THREE.Vector3());
Then center will accurately reflect the center of the geometry, in local space. If you need it in global space, you will need to apply the matrix of the mesh that contains textGeo to the center vector, e.g.
textMesh.updateMatrixWorld();
center.applyMatrix4(textMesh.matrixWorld);
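Applied to the component from the question, a minimal sketch might look like this (an assumption-laden example, not the one true fix: it listens for A-Frame's object3dset event instead of using setTimeout, and it shifts the geometry itself rather than the mesh position):
AFRAME.registerComponent('center-geometry', {
  init() {
    // Wait until the text component has actually created the mesh
    this.el.addEventListener('object3dset', (evt) => {
      if (evt.detail.type !== 'mesh') { return; }
      const geometry = this.el.getObject3D('mesh').geometry;
      geometry.computeBoundingBox();
      const center = geometry.boundingBox.getCenter(new THREE.Vector3());
      // Shift the geometry so its bounding-box center sits at the entity's origin
      geometry.translate(-center.x, -center.y, -center.z);
    });
  }
});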
Why won't a MeshPhongMaterial's envMap property work on polygonal faces when viewed through an orthographic camera?
It works on spheres but not on an IcosahedronGeometry, for example. If I set the detail parameter of the IcosahedronGeometry to 2+ (more faces), the envMap begins to show. But if I switch to the perspective cam, the envMap is fully visible even with a detail of 0.
This is what it looks like with the perspective cam; note the cubemap reflection of some clouds:
This is what it looks like with the orthographic cam and a detail of 0; note the lack of cubemap reflection (please ignore the warping of the image):
Orthographic cam, detail of 1; the cubemap reflection is back:
The only difference between these two versions of the script is the camera.
Here's the code I'm using to create this object:
import uvGridImg from './img/grid.png';
import nxImg from './img/nx_50.png';
import pxImg from './img/px_50.png';
import nyImg from './img/ny_50.png';
import pyImg from './img/py_50.png';
import nzImg from './img/nz_50.png';
import pzImg from './img/pz_50.png';
const envTexture = new THREE.CubeTextureLoader().load([
  pxImg, // right
  nxImg, // left
  pyImg, // top
  nyImg, // bottom
  pzImg, // back
  nzImg, // front
]);
envTexture.mapping = THREE.CubeReflectionMapping;
const texture = new THREE.TextureLoader().load(uvGridImg);
const icosahedronGeometry = new THREE.IcosahedronGeometry(1, 0);
const material = new THREE.MeshPhongMaterial();
material.map = texture;
material.envMap = envTexture;
// An attempt to explicitly set every potentially relevant property...
material.envMapIntensity = 0.0;
material.transparent = false;
material.opacity = 1.0;
material.depthTest = true;
material.depthWrite = true;
material.alphaTest = 0.0;
material.visible = true;
material.side = THREE.FrontSide;
material.flatShading = true;
material.roughness = 0.0;
material.color.setHex(0xffffff);
material.emissive.setHex(0x0);
material.specular.setHex(0xffffff);
material.shininess = 30.0;
material.wireframe = false;
material.flatShading = false;
material.combine = THREE.MultiplyOperation;
material.reflectivity = 1.0;
material.refractionRatio = 1.0;
const icosahedron = new THREE.Mesh(icosahedronGeometry, material);
icosahedron.position.x = 0;
scene.add(icosahedron);
For an MVCE, please see the example from this tutorial (you will have to add your own orthographic cam to compare with the given perspective cam). Here are image files for the textures.
UPDATE: It seems that no non-spherical geometry can render a cubemap reflection correctly through an orthographic cam. The plane, cylinder, and box geometries all fail to render an environment map reflection beyond painting the entire face one uniform reflective color. The sphere, lathe, and *hedron geometries (at high levels of detail) will render cubemap reflections.
Is there any way around this? This seems like a huge limitation while working with orthographic cameras.
This is the expected behavior.
With perspective cameras, the reflective "rays" separate as they get further away from the camera, reflecting a wider angle of the envMap.
With an ortho camera these reflective "rays" do not separate because they're parallel. So the reflection on a flat face is a very narrow angle of the envMap.
See this demo I quickly put together to demonstrate what you're seeing:
It seems to work on spheres because when the parallel orthographic "rays" bounce off a rounded surface, they grow wider apart; they are no longer parallel (just like the rays from a perspective camera).
You can see the reflections still work on your demo because the faces alternate between light and dark as you rotate them. You're just looking at a much narrower segment of the envMap:
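If you want to reproduce the comparison yourself, one way is to build an orthographic camera alongside the tutorial's perspective camera and toggle between them at render time. A rough sketch (assuming the tutorial's scene, renderer and perspectiveCamera exist; the frustum height and the key binding are arbitrary choices):
const aspect = window.innerWidth / window.innerHeight;
const frustumHeight = 2; // world units visible vertically; tune to match the perspective view
const orthoCamera = new THREE.OrthographicCamera(
  -frustumHeight * aspect / 2, frustumHeight * aspect / 2,
  frustumHeight / 2, -frustumHeight / 2,
  0.1, 100
);
orthoCamera.position.copy(perspectiveCamera.position);
orthoCamera.lookAt(0, 0, 0);

// Press "c" to switch cameras and compare the envMap reflections
let useOrtho = false;
window.addEventListener('keydown', (e) => {
  if (e.key === 'c') useOrtho = !useOrtho;
});

function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, useOrtho ? orthoCamera : perspectiveCamera);
}
animate();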
Hopefully this is a simple problem I just can't get an answer to.
I have three.js geometric spheres which move in a box. I place this box at the centre of the scene. The mechanics of how the spheres stay in the box are irrelevant. What is important is that the spheres move about the origin (0,0) and the canvas always fills the page.
I want to draw a line from the moving spheres to a div or img element on the page. To do this I would assume I have to transform the CSS coordinates to three.js coordinates. I found something I thought did something like this (note: the overuse of "something" is to signify I am probably mistaken).
I can add an HTML element to the same scene/camera as the WebGL renderer (obviously using a different renderer), but I am unsure how to proceed from there.
Basically I want to know:
How should I change the size of the div preserving aspect ratio if need be?
In essence I want the div or element to fill the screen at some camera depth.
How to place the div at the centre of the scene by default?
Mine seems to be shifted 1000 in the z direction, but this might be the size of the div (img) which I have to bring into view.
How to draw a line between the webgl sphere and html div/img?
thanks in advance!
Unfortunately you have asked three questions; it is tricky to address them all at once.
I will explain how to position DIV element on top of some 3D object. My example would be a tooltip that appears when you hover the object by mouse: http://jsfiddle.net/mmalex/ycnh0wze/
So let's get started.
First of all, you need to subscribe to mouse events and convert the 2D mouse coordinates to relative coordinates on the viewport. This is explained very well here: Get mouse clicked point's 3D coordinate in three.js
Having the 2D coordinates, raycast the object. These steps are quite trivial, but for completeness I provide the code chunk.
var raycaster = new THREE.Raycaster();

function handleManipulationUpdate() {
  // cleanup previous results, mouse moved and they're obsolete now
  latestMouseIntersection = undefined;
  hoveredObj = undefined;

  raycaster.setFromCamera(mouse, camera);
  {
    var intersects = raycaster.intersectObjects(tooltipEnabledObjects);
    if (intersects.length > 0) {
      // keep point in 3D for next steps
      latestMouseIntersection = intersects[0].point;
      // remember what object was hovered, as we will need to extract tooltip text from it
      hoveredObj = intersects[0].object;
    }
  }

  // ... do anything else
  // with some conditions it may show or hide tooltip
  showTooltip();
}

// Following two functions will convert mouse coordinates
// from screen to three.js system (where [0,0] is in the middle of the screen)
function updateMouseCoords(event, coordsObj) {
  coordsObj.x = ((event.clientX - renderer.domElement.offsetLeft + 0.5) / window.innerWidth) * 2 - 1;
  coordsObj.y = -((event.clientY - renderer.domElement.offsetTop + 0.5) / window.innerHeight) * 2 + 1;
}

function onMouseMove(event) {
  updateMouseCoords(event, mouse);
  handleManipulationUpdate();
}

window.addEventListener('mousemove', onMouseMove, false);
And finally, the most important part: DIV element placement. To understand the code, it is essential to get comfortable with the Vector3.project method.
The sequence of calculations is as follows:
Get the 2D mouse coordinates,
Raycast the object and remember the 3D coordinate of the intersection (if any),
Project the 3D coordinate back into 2D (this step may seem redundant here, but what if you want to trigger the object tooltip programmatically? You won't have mouse coordinates),
Position the DIV centered above the 2D point, with a nice margin.
// This will move tooltip to the current mouse position and show it by timer.
function showTooltip() {
  var divElement = $("#tooltip");

  // element found and mouse hovers some object?
  if (divElement && latestMouseIntersection) {
    // hide until tooltip is ready (prevents some visual artifacts)
    divElement.css({
      display: "block",
      opacity: 0.0
    });

    //!!! === IMPORTANT ===
    // DIV element is positioned here
    var canvasHalfWidth = renderer.domElement.offsetWidth / 2;
    var canvasHalfHeight = renderer.domElement.offsetHeight / 2;

    var tooltipPosition = latestMouseIntersection.clone().project(camera);
    tooltipPosition.x = (tooltipPosition.x * canvasHalfWidth) + canvasHalfWidth + renderer.domElement.offsetLeft;
    tooltipPosition.y = -(tooltipPosition.y * canvasHalfHeight) + canvasHalfHeight + renderer.domElement.offsetTop;

    var tooltipWidth = divElement[0].offsetWidth;
    var tooltipHeight = divElement[0].offsetHeight;

    divElement.css({
      left: `${tooltipPosition.x - tooltipWidth / 2}px`,
      top: `${tooltipPosition.y - tooltipHeight - 5}px`
    });

    // get text from hovered object (we store it in .userData)
    divElement.text(hoveredObj.userData.tooltipText);

    divElement.css({
      opacity: 1.0
    });
  }
}
I know a method from Unity which is very useful for converting a screen position to a world position: https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I didn't find anything.
Is there an easy way to convert a screen position to a world position in a plane positioned a given distance from the camera?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
  // convert the screen-space coordinates to normalized device coordinates
  // (x and y ranging from -1 to 1):
  const ndc = new THREE.Vector2();
  ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
  ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;

  // `Raycaster` can be used to convert this into a ray:
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);

  // finally, apply the distance:
  return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z you could also let three.js compute an intersection with any Object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and various examples linked here: https://threejs.org/docs/#api/core/Raycaster
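For example, a sketch of that intersection-based variant might look like this (assuming the same camera, screenWidth and screenHeight as above, plus an array of meshes to test against; the function name is just for illustration):
function screenToIntersection(screenX, screenY, objects, camera) {
  const ndc = new THREE.Vector2(
    2 * screenX / screenWidth - 1,
    1 - 2 * screenY / screenHeight // y measured from the top/left corner
  );
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(ndc, camera);

  // hits are sorted by distance, so the first entry is the closest surface
  const hits = raycaster.intersectObjects(objects, true);
  return hits.length > 0 ? hits[0].point : null;
}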
Relatively new to THREE.js. I am trying to figure out how to project a DIV with text into an equirectangular panorama.
I have this simple example working with my panorama images.
https://threejs.org/examples/webgl_panorama_equirectangular
The question: I have a latitude and longitude of a feature that's in my panorama, and I'd like to project a DIV labeling that item into 3D space. How do I convert longitude and latitude into X and Y on the canvas so I can change the DIV's left and top style attributes, so the label renders in 3D space and appears fixed to its coordinates?
UPDATE:
For clarity, how does one take planet earth longitude and latitude, and convert it into X Y pixels inside a mesh? I know where the image was taken on earth, and I know where an item in that picture was taken on earth. I want to label that item in 3D space.
Any help would be much appreciated. Thanks.
Code from this question seems to do the trick:
3d coordinates to 2d screen position
function getCoordinates(element, camera) {
  var screenVector = new THREE.Vector3();
  element.localToWorld(screenVector);
  screenVector.project(camera);

  var posx = Math.round((screenVector.x + 1) * renderer.domElement.offsetWidth / 2);
  var posy = Math.round((1 - screenVector.y) * renderer.domElement.offsetHeight / 2);

  console.log(posx, posy);
}
I updated the jsfiddle with the new version of Three.js
http://jsfiddle.net/L0rdzbej/409/
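The remaining piece is getting a 3D point to feed into that projection. If the feature's direction is already expressed as the lon/lat angles the panorama example uses (real planet coordinates would first have to be converted into a bearing and elevation relative to where the photo was taken, which is a separate calculation), a sketch of placing an anchor on the panorama sphere could look like this (the radius of 500 matches the example's sphere; placeLabelAnchor is a hypothetical helper, and scene is assumed to exist):
function placeLabelAnchor(lon, lat, radius = 500) {
  const phi = (90 - lat) * Math.PI / 180; // polar angle measured from the +y axis
  const theta = lon * Math.PI / 180;      // azimuth around the y axis

  const anchor = new THREE.Object3D();
  anchor.position.set(
    radius * Math.sin(phi) * Math.cos(theta),
    radius * Math.cos(phi),
    radius * Math.sin(phi) * Math.sin(theta)
  );
  scene.add(anchor);
  return anchor; // e.g. getCoordinates(placeLabelAnchor(120, 15), camera)
}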
I'm using THREE.js. I have a model of a human that I want to be able to select different portions of. For example, if you click on one of the legs a particular action will be executed. My original idea was to split the model up into separate meshes and then use raycasting to determine which object was selected. But now when I render the scene, the shading along the edges of each mesh doesn't blend with adjoining meshes. This leaves ragged-looking lines across the model between selectable portions. Is there a way to blend the shading between the mesh pieces I've created? Or is there a better way to select part of a mesh other than creating separate meshes? I have some programming experience, but this is the first time I've tried to use three.js. Any insight would be greatly appreciated.
You may create an additional attribute for each triangle that holds the color of the body part it belongs to. So all triangles of the left leg would be red, all triangles of the right leg would be blue, etc.
Render your model normally, and add a second pass where you render the triangles colored in the way described above, with no shading at all. Then you can take the mouse position where the user clicked, look it up in that body-part-colored framebuffer, and simply check the pixel color at the place where the user clicked.
This technique of picking 3D objects by assigning them different colors, rendering those colors to another texture and then checking the color of the clicked pixel is quite common, although it has some flaws. On the other hand, ray testing isn't absolutely accurate either.
I believe that this demo actually runs based on that concept: demo.
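A minimal sketch of that picking pass (with several assumptions: a pickingScene holding a flat-colored MeshBasicMaterial clone of each selectable body part, mouse coordinates already in canvas pixels with a pixel ratio of 1, and partsById as a hypothetical lookup from color value to part name):
const pickingTarget = new THREE.WebGLRenderTarget(1, 1);
const pixelBuffer = new Uint8Array(4);
const partsById = { 0xff0000: 'leftLeg', 0x0000ff: 'rightLeg' }; // hypothetical mapping

function pickBodyPart(mouseX, mouseY) {
  // Restrict the camera to the 1x1 region under the cursor and render
  // the flat-colored scene into the picking target
  camera.setViewOffset(
    renderer.domElement.width, renderer.domElement.height,
    mouseX, mouseY, 1, 1
  );
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);
  camera.clearViewOffset();

  // Read the single pixel back and rebuild the color value
  renderer.readRenderTargetPixels(pickingTarget, 0, 0, 1, 1, pixelBuffer);
  const id = (pixelBuffer[0] << 16) | (pixelBuffer[1] << 8) | pixelBuffer[2];
  return partsById[id]; // undefined means the background was clicked
}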
var aiGeojj = new t.CubeGeometry(30, 30, 30);
var uprighters = Math.floor(Math.random() * 11);
var aiMaterialjj = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprighters + '.gif'), opacity: 0, transparent: true });
var ojj = new t.Mesh(aiGeojj, aiMaterialjj);

// custom bookkeeping arrays attached to the mesh
ojj.limbs = [];
ojj.trunk = [];

var aiGeojjkey2c = new t.CubeGeometry(50, 50, 50);
var uprightersc = Math.floor(Math.random() * 11);
var aiMaterialjjc = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprightersc + '.gif'), opacity: 1, transparent: true });
var ojjkey2c = new t.Mesh(aiGeojjkey2c, aiMaterialjjc);

ojjkey2c.id = "hiworld"; // note: this overwrites Object3D's built-in numeric id with a string
ojj.add(ojjkey2c);
ojj.trunk.push(ojjkey2c);

// Note: ojj is a single Mesh, not an array, so ojj.length is undefined
// and these loops never run; iterating an array such as [ojj] would work.
for (var you = 0; you < ojj.length; you++) {
  for (var youb = 0; youb < ojj[you].trunk.length; youb++) {
    window.alert(ojj[you].trunk[youb].id);
  }
}