Put labels on every node (THREE.Points) in the scene in Three.js - three.js

I am new to three.js and modifying some existing code.
The existing code is rendering a graph using "THREE.BufferGeometry" + "THREE.Points"
var geometryPc = new THREE.BufferGeometry();
var materialPc = new THREE.ShaderMaterial({....});
this.mesh = new THREE.Points(geometryPc, materialPc);
I am trying to put a text label on every node that moves along with the node.
I tried:
I tried creating a "THREE.Sprite" for each node and then assigning it a position relative to that node.
let texture = new THREE.Texture(canvas);
let spriteMaterial = new THREE.SpriteMaterial({map: texture, useScreenCoordinates: false});
let sprite = new THREE.Sprite(spriteMaterial);
That seems to be working, but the UI becomes too heavy when the number of nodes is relatively high.
I would prefer to use "BufferGeometry" to create the text as well, but I could not find a way to do that.
Is there any better way to put text on the nodes?

Your approach with sprites, although by far the most obvious, unfortunately will not be sufficient. Each sprite, if I understand correctly, creates its own mesh with its own texture, so each one causes a separate draw call. This approach does not scale.
The way I did it was to write a shader capable of rendering different parts of an image, and then make an image containing the letters (in a monospace font). Then, for each point in the geometry (each place where a label should be rendered), I pass the following set of parameters (shader attributes) for every letter rendered:
positionX: this.position.x, //position of entire label
positionY: this.position.y,
positionZ: this.position.z,
colorR: this.color.r,
colorG: this.color.g,
colorB: this.color.b,
colorA: this.visible ? (this.finalAlpha) : 0,
scale: this.camera.zoom, //scale must depend on camera zoom
spriteNumber: this.getTextPosition(lines[i][j]), //see below ;p
offset: j + i * 32768, //this is for positioning one particular letter,
//x and y merged together because I ran out of parameters
size: this.size
i and j are the "x" and "y" position of a letter within the label; the shader does the offsetting by itself. The other parameters should be more or less obvious :)
ParticleLabel.prototype.getTextPosition = function(symbol){
switch(symbol){
case '0': return 1;
case '1': return 2;
case '2': return 3;
(...)
case 'A': return 20;
case 'B': return 21;
case 'C': return 22;
(...)
}
};
I can't show the entire code as I made it for a commercial solution, but I'll put an example on CodePen or something later on to show a working solution.
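In the meantime, here is a minimal sketch of that idea with BufferGeometry + Points, assuming a 16x16 glyph atlas image (the file name glyphs.png, the atlas layout, and the attribute/uniform names are my own choices, not taken from the code above):
// One point per letter; the vertex shader offsets each letter within its label
// and the fragment shader samples the right glyph cell from the atlas.
var MAX_LETTERS = 100000;
var positions = new Float32Array(MAX_LETTERS * 3); // label anchor position (copied per letter)
var offsets = new Float32Array(MAX_LETTERS * 2);   // letter column/row inside the label
var glyphs = new Float32Array(MAX_LETTERS);        // index of the glyph in the atlas

var geometry = new THREE.BufferGeometry();
// (use addAttribute instead of setAttribute on older three.js versions)
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('offset', new THREE.BufferAttribute(offsets, 2));
geometry.setAttribute('glyph', new THREE.BufferAttribute(glyphs, 1));
// fill the attributes per letter, then: geometry.setDrawRange(0, letterCount);

var material = new THREE.ShaderMaterial({
  uniforms: {
    atlas: { value: new THREE.TextureLoader().load('glyphs.png') },
    scale: { value: 1.0 } // update from camera zoom every frame
  },
  transparent: true,
  depthTest: false,
  vertexShader: [
    'attribute vec2 offset;',
    'attribute float glyph;',
    'uniform float scale;',
    'varying float vGlyph;',
    'void main() {',
    '  vGlyph = glyph;',
    '  vec4 mvPosition = modelViewMatrix * vec4(position, 1.0);',
    '  mvPosition.xy += offset * 0.5 * scale; // shift this letter within its label',
    '  gl_PointSize = 16.0 * scale;',
    '  gl_Position = projectionMatrix * mvPosition;',
    '}'
  ].join('\n'),
  fragmentShader: [
    'uniform sampler2D atlas;',
    'varying float vGlyph;',
    'void main() {',
    '  // 16x16 glyph grid; adjust the row order to match how your atlas is drawn',
    '  vec2 cell = vec2(mod(vGlyph, 16.0), floor(vGlyph / 16.0));',
    '  vec2 pc = vec2(gl_PointCoord.x, 1.0 - gl_PointCoord.y);',
    '  gl_FragColor = texture2D(atlas, (cell + pc) / 16.0);',
    '}'
  ].join('\n')
});

scene.add(new THREE.Points(geometry, material));
All labels end up in a single Points object, so the whole label layer costs one draw call no matter how many nodes there are.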

Related

How to "move" or "traverse" the hyperbolic tessellation in MagicTile?

Alright, I think I've mostly figured out how MagicTile works, the source code at least (not really the math as much yet). It all begins with the build and render calls in MainForm.cs. It generates a tessellation like this:
First, it "generates" the tessellation. Since MagicTile is a Rubik's-Cube-like game, I guess it just statically computes all of the tiles up front. It does this by starting with a central tile and reflecting its polygon (and the polygon's segments and points) using some math which I've read about several times but couldn't explain myself. Then it appears they allow rotations of the tessellation, where they call code like this in the "renderer":
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Color color = GetStickerColor( sticker );
GLUtils.DrawConcavePolygon( p, color, GrabModelTransform() );
They track the mouse position, as when you are dragging, and somehow that is used to create an "isometry" to augment/transform the overall tessellation. So then we transform the polygon using that isometry. It appears they only do the central tile and 1 or 2 levels after that, but I can't quite tell; I haven't gotten the app to run and debug yet (it's also in C#, which is a new language for me, coming from TypeScript). The Transform function digs down like this (here it is in TypeScript, as I've been converting it):
TransformIsometry(isometry: Isometry) {
for (let s of this.Segments) {
s.TransformIsometry(isometry)
}
this.Center = isometry.Apply(this.Center)
}
That goes into the transform for the segments here:
/// <summary>
/// Apply a transform to us.
/// </summary>
TransformInternal<T extends ITransform>(transform: T) {
// NOTES:
// Arcs can go to lines, and lines to arcs.
// Rotations may reverse arc directions as well.
// Arc centers can't be transformed directly.
// NOTE: We must calc this before altering the endpoints.
let mid: Vector3D = this.Midpoint
if (UtilsInfinity.IsInfiniteVector3D(mid)) {
  mid = UtilsInfinity.IsInfiniteVector3D(this.P1)
    ? this.P2.MultiplyWithNumber(UtilsInfinity.FiniteScale)
    : this.P1.MultiplyWithNumber(UtilsInfinity.FiniteScale)
}
this.P1 = transform.ApplyVector3D(this.P1)
this.P2 = transform.ApplyVector3D(this.P2)
mid = transform.ApplyVector3D(mid)
// Can we make a circle out of the transformed points?
let temp: Circle = new Circle()
if (
!UtilsInfinity.IsInfiniteVector3D(this.P1) &&
!UtilsInfinity.IsInfiniteVector3D(this.P2) &&
!UtilsInfinity.IsInfiniteVector3D(mid) &&
temp.From3Points(this.P1, mid, this.P2)
) {
this.Type = SegmentType.Arc
this.Center = temp.Center
// Work out the orientation of the arc.
let t1: Vector3D = this.P1.Subtract(this.Center)
let t2: Vector3D = mid.Subtract(this.Center)
let t3: Vector3D = this.P2.Subtract(this.Center)
let a1: number = Euclidean2D.AngleToCounterClock(t2, t1)
let a2: number = Euclidean2D.AngleToCounterClock(t3, t1)
this.Clockwise = a2 > a1
} else {
// The circle construction fails if the points
// are colinear (if the arc has been transformed into a line).
this.Type = SegmentType.Line
// XXX - need to do something about this.
// Turn into 2 segments?
// if( UtilsInfinity.IsInfiniteVector3D( mid ) )
// Actually the check should just be whether mid is between p1 and p2.
}
}
So as far as I can tell, this adjusts the segments based on the mouse position, somehow. The mouse-position isometry updating code is here.
So it appears they don't have the functionality to "move" the tiling, like if you were walking on it, like in HyperRogue.
So after having studied this code for a few days, I am not sure how to move or walk along the tiles, moving the outer tiles toward the center, like you're a giant walking on Earth.
First small question, can you do this with MagicTile? Can you somehow update the tessellation to move a different tile to the center? (And have a function which I could plug a tween/animation into so it animates there). Or do I need to write some custom new code? If so, what do I need to do roughly speaking, maybe some pseudocode?
What I imagine is: the user clicks on the outer part of the tessellation. We convert that click data to the tile index in the tessellation, then basically want to do tiling.moveToCenter(tile), but as a frame-by-frame animation, so I'm not quite sure how that would work. And that moveToCenter, what would it do in terms of the MagicTile rendering/tile-generating code?
As I described in the beginning, it first generates the full tessellation, then only updates 1-3 layers of the tiles for their puzzles. So it's like I need to first shift the frame of reference, and recompute all the potential visible tiles, somehow not recreating the ones that were already created. I don't quite see how that would work, do you? Once the tiles are recomputed, then I just re-render and it should show the updated center.
Is it a simple matter of calling some code like this again, for each tile, where the isometry is somehow updated with a border-ish position on the tessellation?
Polygon p = sticker.Poly.Clone();
p.Transform( m_mouseMotion.Isometry );
Or must I do something else? I can't quite see the full picture yet.
Or is that what these 3 functions are doing? (From my TypeScript port of the C# MagicTile:)
// Move from a point p1 -> p2 along a geodesic.
// Also somewhat from Don.
Geodesic(g: Geometry, p1: Complex, p2: Complex) {
let t: Mobius = Mobius.construct()
t.Isometry(g, 0, p1.Negate())
let p2t: Complex = t.ApplyComplex(p2)
let m2: Mobius = Mobius.construct()
let m1: Mobius = Mobius.construct()
m1.Isometry(g, 0, p1.Negate())
m2.Isometry(g, 0, p2t)
let m3: Mobius = m1.Inverse()
this.Merge(m3.Multiply(m2.Multiply(m1)))
}
Hyperbolic(g: Geometry, fixedPlus: Complex, scale: number) {
// To the origin.
let m1: Mobius = Mobius.construct()
m1.Isometry(g, 0, fixedPlus.Negate())
// Scale.
let m2: Mobius = Mobius.construct()
m2.A = new Complex(scale, 0)
m2.C = new Complex(0, 0)
m2.B = new Complex(0, 0)
m2.D = new Complex(1, 0)
// Back.
// Mobius m3 = m1.Inverse(); // Doesn't work well if fixedPlus is on disk boundary.
let m3: Mobius = Mobius.construct()
m3.Isometry(g, 0, fixedPlus)
// Compose them (multiply in reverse order).
this.Merge(m3.Multiply(m2.Multiply(m1)))
}
// Allow a hyperbolic transformation using an absolute offset.
// offset is specified in the respective geometry.
Hyperbolic2(
g: Geometry,
fixedPlus: Complex,
point: Complex,
offset: number,
) {
// To the origin.
let m: Mobius = Mobius.construct()
m.Isometry(g, 0, fixedPlus.Negate())
let eRadius: number = m.ApplyComplex(point).Magnitude
let scale: number = 1
switch (g) {
case Geometry.Spherical:
let sRadius: number = Spherical2D.e2sNorm(eRadius)
sRadius = sRadius + offset
scale = Spherical2D.s2eNorm(sRadius) / eRadius
break
case Geometry.Euclidean:
scale = (eRadius + offset) / eRadius
break
case Geometry.Hyperbolic:
let hRadius: number = DonHatch.e2hNorm(eRadius)
hRadius = hRadius + offset
scale = DonHatch.h2eNorm(hRadius) / eRadius
break
default:
break
}
this.Hyperbolic(g, fixedPlus, scale)
}
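To the last question: those do look like the right building blocks. Below is a rough sketch of what a single moveToCenter step could look like, based only on the signatures quoted above; tiles, toComplex, wrapInIsometry and drawPolygon are placeholder names of mine, not MagicTile API:
// Build a Mobius that carries the clicked tile's center to the disk origin
// along a geodesic, then apply it to every tile before drawing, the same way
// the renderer applies m_mouseMotion.Isometry in the C# snippet above.
let m = Mobius.construct()
m.Geodesic(Geometry.Hyperbolic, toComplex(clickedTile.Poly.Center), new Complex(0, 0))

for (let tile of tiles) {            // 'tiles' = the pre-generated tessellation
  let p = tile.Poly.Clone()
  p.Transform(wrapInIsometry(m))     // placeholder: however your port wraps a Mobius in an Isometry
  drawPolygon(p)
}

// For a frame-by-frame animation, rebuild the Mobius every frame with a target
// that is only part of the way to the origin (or use Hyperbolic2 with a tweened
// offset) and re-render, until the clicked tile reaches the center.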

How to accelerate calculations when updating massive numbers of positions from 3D to screen (HUD)

I want to update HUD positions from 3D positions to 2D as the mouse moves. Since there may be a large number of 3D objects to project to screen positions, I run into a performance problem.
Is there any way to accelerate the calculations? The following is how I calculate a 3D object's position on the 2D screen.
function toScreenPosition(obj) {
var vector = new THREE.Vector3();
//calculate screen half size
var widthHalf = 0.5 * renderer.context.canvas.width;
var heightHalf = 0.5 * renderer.context.canvas.height;
//get 3d object position
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.project(this.camera);
//get 2d position on screen
vector.x = (vector.x * widthHalf) + widthHalf;
vector.y = -(vector.y * heightHalf) + heightHalf;
return {
x: vector.x,
y: vector.y
};
}
Rather than repositioning your HUD in world space every time your camera moves, add your HUD object(s) to your camera object and position them only once. Then, when your camera moves, your HUD moves along with it, because the camera's transformation is cascaded to its children.
yourCamera.add(yourHUD);
yourHUD.position.z = 10;
Note that doing it this way (or even positioning it the way you were) may allow scene objects to clip through your HUD geometry, or even appear between your HUD and the camera, obscuring the HUD. If that's what you want, great! If not, you could move your HUD to a second render pass, allowing it to remain "on top."
First, here is an example of your function rewritten for (almost) optimal performance, as discussed in the comments above; the render loop is obviously just an example to illustrate where to make which calls:
var width = renderer.context.canvas.width;
var height = renderer.context.canvas.height;
// has to be called whenever the canvas-size changes
function onCanvasResize() {
width = renderer.context.canvas.width;
height = renderer.context.canvas.height;
}
var projMatrix = new THREE.Matrix4();
// renderloop-function, called per animation-frame
function render() {
// just needed once per frame (even better would be
// once per camera-movement)
projMatrix.multiplyMatrices(
camera.projectionMatrix,
projMatrix.getInverse(camera.matrixWorld)
);
hudObjects.forEach(function(obj) {
toScreenPosition(obj, projMatrix);
});
}
// wrapped in IIFE to store the local vector-variable (this pattern
// is used everywhere in three.js)
var toScreenPosition = (function() {
var vector = new THREE.Vector3();
return function __toScreenPosition(obj, projectionMatrix) {
// this could potentially be left away, but isn't too
// expensive as there are 'needsUpdate'-checks in place
obj.updateMatrixWorld();
vector.setFromMatrixPosition(obj.matrixWorld);
vector.applyMatrix4(projectionMatrix);
vector.x = (vector.x + 1) * width / 2;
vector.y = (1 - vector.y) * height / 2;
// might want to consider returning a Vector3-instance
// instead, depends on how the result is used
return {x: vector.x, y: vector.y};
}
}) ();
But, considering you want to render a HUD, it would be better to do that independently of the main scene, making all of the above computations obsolete and also allowing you to choose a different coordinate system for sizing and positioning of HUD elements.
I have an example for this here: https://codepen.io/usefulthink/pen/ZKPvPB. There I used an orthographic camera and a separate scene to render HUD elements on top of the 3D scene. No extra computations required. Plus, I can specify the size and position of HUD elements conveniently in pixel units. (The same would work with a perspective camera; it only requires a bit more trigonometry to get right.)
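For reference, a minimal sketch of that overlay setup, assuming width/height are the canvas size in pixels and labelTexture is a texture you already have (those names are mine):
// A separate scene and an orthographic camera sized in pixels for the HUD.
var hudScene = new THREE.Scene();
var hudCamera = new THREE.OrthographicCamera(
  -width / 2, width / 2, height / 2, -height / 2, 0, 10);

// HUD elements are positioned and sized in pixel units,
// e.g. a 128x32 label whose top-left corner sits 10px from the canvas corner:
var label = new THREE.Sprite(new THREE.SpriteMaterial({ map: labelTexture }));
label.scale.set(128, 32, 1);
label.position.set(-width / 2 + 10 + 64, height / 2 - 10 - 16, 0);
hudScene.add(label);

function render() {
  renderer.autoClear = false;      // clear manually so both passes survive
  renderer.clear();
  renderer.render(scene, camera);  // main 3D scene first
  renderer.clearDepth();           // so the HUD always ends up on top
  renderer.render(hudScene, hudCamera);
  requestAnimationFrame(render);
}
render();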

Three.js - Arranging cubes in a grid

I would like to position cubes in a rectangular/square grid. I'm having trouble coming up with a method that, depending on what I pick through an HTML form input (checkboxes), arranges a series of cubes left to right and top to bottom in a prearranged grid, all on the same plane.
What measurement units is three.js in? Right now, I'm setting up my shapes using the built-in geometries, for instance.
var planeGeometry = new THREE.PlaneGeometry(4, 1, 1, 1);
The 4 and the 1: I'm unsure what those measure up to in pixels, although I do see the plane rendered. I'm resorting to eyeballing it (guess and check) every time so that it looks acceptable.
THREE is not measured in pixels; converting between world units and pixels takes a fair bit of extra math.
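That extra math is roughly the following, assuming a perspective camera looking straight at your plane (a sketch, not something three.js gives you directly):
// Visible world height at distance d from a perspective camera is 2 * d * tan(fov / 2),
// from which you can derive how many world units one pixel covers at that distance.
var distance = camera.position.z; // assuming the plane sits at z = 0 and the camera looks down -z
var visibleHeight = 2 * distance * Math.tan((camera.fov * Math.PI) / 360);
var unitsPerPixel = visibleHeight / renderer.domElement.clientHeight;
// A PlaneGeometry(4, 1) then covers roughly 4 / unitsPerPixel pixels horizontally.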
To make a simple grid (I leave optimizations, colors, etc for future refinements) try something like:
var hCount = from_my_web_form('horiz'),
vCount = from_my_web_form('vert'),
size = 1,
spacing = 1.3;
var grid = new THREE.Object3D(); // just to hold them all together
for (var h=0; h<hCount; h+=1) {
for (var v=0; v<vCount; v+=1) {
var box = new THREE.Mesh(new THREE.BoxGeometry(size,size,size),
new THREE.MeshBasicMaterial());
box.position.x = (h-hCount/2) * spacing;
box.position.y = (v-vCount/2) * spacing;
grid.add(box);
}
}
scene.add(grid);

threejs selecting different parts of a mesh

I'm using THREE.js. I have a model of a human that I want to be able to select different portions of. For example, if you click on one of the legs, a particular action will be executed. My original idea was to split the model up into separate meshes and then use raycasting to determine which object was selected. But now when I render the scene, the shading along the edges of each mesh doesn't blend with the adjoining meshes. This leaves ragged-looking lines across the model between selectable portions. Is there a way to blend the shading between the mesh pieces I've created? Or is there a better way to select part of a mesh other than creating separate meshes? I have some programming experience, but this is the first time I've tried to use three.js. Any insight would be greatly appreciated.
You could create an additional attribute for each triangle holding the color of the body part it belongs to. So, all triangles of the left leg would be red, all triangles of the right leg would be blue, etc.
Render your model normally, then add a second pass where you render the triangles colored in the way described above, with no shading at all. Then take the mouse position where the user clicked, look it up in that body-part-colored framebuffer, and just check the pixel color at the place the user clicked.
This technique of picking 3D objects by assigning them different colors, rendering those colors to another texture and then checking the color of the clicked pixel is quite common, although it has some flaws. On the other hand, ray testing isn't absolutely accurate either.
I believe this demo actually runs based on that concept - demo.
// Tag selectable parts as child meshes of a parent mesh, grouped into named arrays:
var aiGeojj = new t.CubeGeometry(30, 30, 30);
var uprighters = Math.floor(Math.random() * 11);
var aiMaterialjj = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprighters + '.gif'), opacity: 0, transparent: true });
var ojj = new t.Mesh(aiGeojj, aiMaterialjj);
ojj.limbs = [];
ojj.trunk = [];

var aiGeojjkey2c = new t.CubeGeometry(50, 50, 50);
var uprightersc = Math.floor(Math.random() * 11);
var aiMaterialjjc = new t.MeshBasicMaterial({ map: t.ImageUtils.loadTexture('images/images_bots/greenbot/upright/' + uprightersc + '.gif'), opacity: 1, transparent: true });
var ojjkey2c = new t.Mesh(aiGeojjkey2c, aiMaterialjjc);
ojjkey2c.name = 'hiworld'; // use .name here; .id is a numeric field managed by three.js
ojj.add(ojjkey2c);
ojj.trunk.push(ojjkey2c);

// Later, walk all such parent meshes and their trunk parts:
var bots = [ojj];
for (var you = 0; you < bots.length; you++) {
  for (var youb = 0; youb < bots[you].trunk.length; youb++) {
    window.alert(bots[you].trunk[youb].name);
  }
}
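A minimal sketch of the color-picking pass described above, assuming partMeshes is an array of the selectable meshes and width/height is the canvas size (those names, and how the colors map back to parts, are my own choices):
// Render the model once with a flat, unique color per body part into an
// offscreen render target, then read back the pixel under the mouse.
var pickingScene = new THREE.Scene();
var pickingTarget = new THREE.WebGLRenderTarget(width, height);

partMeshes.forEach(function (mesh, i) {
  // encode the part index as a color: part 0 -> 0x000001, part 1 -> 0x000002, ...
  var pickingMesh = new THREE.Mesh(
    mesh.geometry,
    new THREE.MeshBasicMaterial({ color: new THREE.Color(i + 1) }));
  pickingMesh.matrixAutoUpdate = false;
  pickingMesh.matrix.copy(mesh.matrixWorld); // copy the transform once; re-copy if the model moves
  pickingScene.add(pickingMesh);
});

function pick(mouseX, mouseY) {
  // (newer three.js API; older versions pass the target to renderer.render)
  renderer.setRenderTarget(pickingTarget);
  renderer.render(pickingScene, camera);
  renderer.setRenderTarget(null);

  var pixel = new Uint8Array(4);
  renderer.readRenderTargetPixels(
    pickingTarget, mouseX, pickingTarget.height - mouseY, 1, 1, pixel);
  var id = (pixel[0] << 16) | (pixel[1] << 8) | pixel[2];
  return id - 1; // index into partMeshes, or -1 for the background
}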

updating a box2d fixture position frame-by-frame for an animation

The sprites I am using in my game have complex shapes and animations. Also I'm only interested in setting contact listeners for certain parts of the sprite. I would like to set fixtures for the specific contact areas of interest. How can I keep moving body fixtures in the right positions as I change the sprite animations frame by frame?
It's not possible to change a fixture's position. You can only destroy it and create it again (but that will decrease performance).
Instead of that, you can create 2 separate bodies and fasten them together using joints. It will behave the same as 2 fixtures on one body.
I don't know if this is the right approach or not, but for me it doesn't cause any performance issues, so you can try it.
First you have to destroy the current fixture of the body after saving its last position.
float body_x=Body.getPosition().x;
float body_y=Body.getPosition().y;
Body.destroyFixture(Body.getFixtureList().get(0));
And then you have to create a new fixture for that body like this
Body.createFixture(createFixturePart(
body_x,
body_y,
Width,
Height,
Angle, 1, 1, 0, -1));
Here createFixturePart is my customized function for creating a fixture for a body; you can use it when you create the body. For the new fixture you can change the width, height and angle according to your requirements. But don't re-create the fixture on every render cycle; instead, change it only when the animation frame (or the whole animation) changes.
createFixturePart Method
public FixtureDef createFixturePart(float x, float y, float width,
        float height, float angle, int mass, int density, int type,
        int groupIndex) {
    // bodyDef and worldbox are fields of the enclosing class
    PolygonShape shape = new PolygonShape();
    shape.setAsBox(width / 2, height / 2, new Vector2(0, 0),
            (float) Math.toRadians(angle));

    MassData massData = new MassData();
    massData.mass = mass;

    bodyDef.position.x = x;
    bodyDef.position.y = y;
    Body body = worldbox.createBody(bodyDef);
    body.setMassData(massData);

    FixtureDef fixtureDef = new FixtureDef();
    fixtureDef.shape = shape;
    fixtureDef.density = density;
    fixtureDef.filter.groupIndex = (short) groupIndex;
    fixtureDef.restitution = 10;
    return fixtureDef;
}
To change fixture positions, destroy them and create new fixtures at the needed positions. But I think it is not a good solution to change a body's fixtures, because it may corrupt the simulation and decrease performance.
