Programmatically generate simple UV Mapping for models - three.js

Coming from this question, I'm trying to generate UV mappings programmatically with Three.js for some models. I need this because my models are generated programmatically too, and I need to apply a simple texture to them. I have read here and successfully generated a UV mapping for some simple 3D text, but when I apply the same mapping to more complex models it just doesn't work.
The texture I'm trying to apply is something like this:
The black background is just transparent in the PNG image. I need to apply this to my models; it's just a glitter effect, so I don't care about the exact position on the model. Is there any way to create a simple UV map programmatically for these cases?
I'm using this code from the linked question which works great for planar models but doesn't work for non-planar models:
assignUVs = function (geometry) {
    geometry.computeBoundingBox();
    var max = geometry.boundingBox.max;
    var min = geometry.boundingBox.min;
    var offset = new THREE.Vector2(0 - min.x, 0 - min.y);
    var range = new THREE.Vector2(max.x - min.x, max.y - min.y);
    geometry.faceVertexUvs[0] = [];
    var faces = geometry.faces;
    for (var i = 0; i < faces.length; i++) {
        var v1 = geometry.vertices[faces[i].a];
        var v2 = geometry.vertices[faces[i].b];
        var v3 = geometry.vertices[faces[i].c];
        geometry.faceVertexUvs[0].push([
            new THREE.Vector2((v1.x + offset.x) / range.x, (v1.y + offset.y) / range.y),
            new THREE.Vector2((v2.x + offset.x) / range.x, (v2.y + offset.y) / range.y),
            new THREE.Vector2((v3.x + offset.x) / range.x, (v3.y + offset.y) / range.y)
        ]);
    }
    geometry.uvsNeedUpdate = true;
};
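For reference, here's a minimal usage sketch (hypothetical geometry and texture names, legacy pre-r125 THREE.Geometry API):
var geometry = new THREE.SphereGeometry(10, 32, 16); // any legacy-Geometry model
assignUVs(geometry);
var material = new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load('glitter.png'), // hypothetical texture file
    transparent: true // the PNG background is transparent
});
scene.add(new THREE.Mesh(geometry, material));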

You need to be more specific. Here, I'll apply UV mapping programmatically:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0)
    ]);
}
Happy?
There are an infinite number of ways of applying UV coordinates. How about this:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random())
    ]);
}
There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.
Sorry to be so snarky, just pointing out the question is in one sense nonsensical.
Anyway, there are a few common methods for applying a texture.
Spherical mapping
Imagine your model is translucent: there's a sphere inside made of film, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. You then do the math to compute the correct UVs for that situation.
To get a point on the sphere, multiply your points by the inverse of the sphere's world matrix, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which again there are an infinite number of ways.
The simplest way is probably an equirectangular projection (loosely, a Mercator-style projection), which is how most 2D maps of the world work. They have the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then:
U = Math.atan2(z, x) / Math.PI * 0.5 + 0.5;
V = 0.5 - Math.asin(y) / Math.PI;
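As a concrete sketch of the above, here's how spherical mapping could look for a legacy THREE.Geometry (an assumption: the model is already centred at the origin; triangles that straddle the atan2 seam will smear and would need extra handling):
function assignSphericalUVs(geometry) {
    geometry.faceVertexUvs[0] = [];
    geometry.faces.forEach(function (face) {
        var uvs = [face.a, face.b, face.c].map(function (index) {
            // project the vertex onto the unit sphere
            var p = geometry.vertices[index].clone().normalize();
            return new THREE.Vector2(
                Math.atan2(p.z, p.x) / Math.PI * 0.5 + 0.5,
                0.5 - Math.asin(p.y) / Math.PI
            );
        });
        geometry.faceVertexUvs[0].push(uvs);
    });
    geometry.uvsNeedUpdate = true;
}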
Projection Mapping
This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair, then computed UVs from where each point lands in the projected image.
Computing these points is exactly like computing the 2D image from 3D data, which nearly all WebGL apps do. Usually they have a line in their vertex shader like this:
gl_Position = matrix * position;
where matrix = worldViewProjection. You can then do:
clipSpace = gl_Position.xy / gl_Position.w
You now have x, y values that go from -1 to +1. You then convert them to 0..1 for UV coordinates:
uv = clipSpace * 0.5 + 0.5;
Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.
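A hedged sketch of doing that at init time (assuming the geometry's vertices are in world space and projector is a THREE.PerspectiveCamera posed like the imaginary movie projector):
function assignProjectedUVs(geometry, projector) {
    projector.updateMatrixWorld();
    geometry.faceVertexUvs[0] = [];
    geometry.faces.forEach(function (face) {
        var uvs = [face.a, face.b, face.c].map(function (index) {
            // project() applies the view and projection matrices plus the
            // perspective divide, yielding clip-space coordinates in -1..+1
            var p = geometry.vertices[index].clone().project(projector);
            return new THREE.Vector2(p.x * 0.5 + 0.5, p.y * 0.5 + 0.5);
        });
        geometry.faceVertexUvs[0].push(uvs);
    });
    geometry.uvsNeedUpdate = true;
}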
Planar Mapping
This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as the area you want to project onto. In other words, with projection mapping, as you move your model closer to the projector, the picture being projected will get smaller, but with planar mapping it won't.
Following the projection mapping example, the only difference here is using an orthographic projection instead of a perspective projection. (The assignUVs function in the question is effectively a planar mapping onto the XY plane.)
Cube Mapping?
This is effectively planar mapping from 6 directions. It's up to you to decide which UV coordinates get which of the 6 planes. I'd guess most of the time you'd take the normal of the triangle to see which plane it most faces, then do planar mapping from that plane; a sketch of that follows below.
Actually, I might be getting my terms mixed up. You can also do real cube mapping, where you have a cube texture, but that requires U,V,W instead of just U,V. For that it's the same as the sphere example, except you use the normalized coordinates directly as U,V,W.
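Here's a hedged sketch of the box-mapping variant described above (legacy THREE.Geometry API; normalizing by the bounding box is an assumption):
function assignBoxUVs(geometry) {
    geometry.computeBoundingBox();
    geometry.computeFaceNormals();
    var min = geometry.boundingBox.min;
    var size = new THREE.Vector3().subVectors(geometry.boundingBox.max, min);
    geometry.faceVertexUvs[0] = [];
    geometry.faces.forEach(function (face) {
        var n = face.normal;
        var uvs = [face.a, face.b, face.c].map(function (index) {
            var v = geometry.vertices[index];
            // planar-map from the plane the face normal points at most
            if (Math.abs(n.x) >= Math.abs(n.y) && Math.abs(n.x) >= Math.abs(n.z)) {
                return new THREE.Vector2((v.z - min.z) / size.z, (v.y - min.y) / size.y);
            } else if (Math.abs(n.y) >= Math.abs(n.z)) {
                return new THREE.Vector2((v.x - min.x) / size.x, (v.z - min.z) / size.z);
            }
            return new THREE.Vector2((v.x - min.x) / size.x, (v.y - min.y) / size.y);
        });
        geometry.faceVertexUvs[0].push(uvs);
    });
    geometry.uvsNeedUpdate = true;
}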
Cylindrical mapping
This is like sphere mapping, except assume there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation, but basically you move the points of the model into the cylinder's frame. Assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the cylinder's orientation), then:
U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5
V = y
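Per vertex, that could look like this (a sketch; v is assumed to already be transformed into the cylinder's local space):
var uv = new THREE.Vector2(
    Math.atan2(v.x, v.z) / Math.PI * 0.5 + 0.5, // angle around the axis -> U
    v.y // height along the axis -> V; normalize into 0..1 if your texture doesn't wrap
);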
2 more solutions
Maybe you want Environment Mapping?
Here's one example, and here's another.
Maybe you should consider using a modeling package like Maya or Blender, which have UV editors and UV projectors built in.

Related

Separate mesh by loose parts in threejs

I've created a basic model in Blender. It's a 4-times-subdivided cube (I need the faces to look like squares), whose faces were then split by edges (in Blender too). Then I need to separate the final mesh into loose parts in three.js (if I do that in Blender, the exported file is too big, a few MB). So each face becomes a separate mesh.
How should I do that?
Step 1 (blender)
Step 2 (blender)
After step 2 each face is a separate mesh. I need to replicate step 2 in ThreeJS.
As a result, I need to explode the faces of a sphere.
Here's what I have so far
I'll need many more faces to achieve the desired result. One possible solution would be to place 2 spheres one inside the other and then "explode" them simultaneously. But I need the faces to be much smaller too.
My "explosion" code is heavily based on this: https://github.com/akella/ExplodingObjects/blob/0ed8d2668e3fe9913133382bb139c73b9d554494/src/egg.js#L178
And here's demo:
https://tympanus.net/Development/ExplodingObjects/index-heart.html
In your case I would use BufferGeometry.
According to this showcase: https://threejs.org/examples/#webgl_buffergeometry
16000 triangles are generated with normal orientations.
I think you should use BufferGeometry.
Building on top of your CodePen, here you'll find a solution with quad faces (instead of your triangles) oriented along a sphere's surface.
The core of laying the quad faces along the surface of a sphere:
for (let down = 0; down < segmentsDown; ++down) {
    const v0 = down / segmentsDown;
    const v1 = (down + 1) / segmentsDown;
    const lat0 = (v0 - 0.5) * Math.PI;
    const lat1 = (v1 - 0.5) * Math.PI;
    for (let across = 0; across < segmentsAround; ++across) {
        // for each quad we randomize the radius
        const radius = radiusOfSphere + Math.random() * 1.5 * radiusOfSphere;
        const u0 = across / segmentsAround;
        const u1 = (across + 1) / segmentsAround;
        const long0 = u0 * Math.PI * 2;
        const long1 = u1 * Math.PI * 2;
        // each quad is made of 2 triangles
        // first triangle of the quad
        // getPoint() returns an xyz coordinate triple for a (latitude, longitude, radius) input
        positions.push(...getPoint(lat0, long0, radius));
        positions.push(...getPoint(lat1, long0, radius));
        positions.push(...getPoint(lat0, long1, radius));
        // second triangle of the quad; order matters for UV mapping
        positions.push(...getPoint(lat1, long0, radius));
        positions.push(...getPoint(lat1, long1, radius));
        positions.push(...getPoint(lat0, long1, radius));
    }
}
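The getPoint() helper isn't shown in the snippet; here's a minimal sketch of what it could look like, inferred from how it's called (latitude, longitude, radius in, an [x, y, z] array out):
function getPoint(lat, long, radius) {
    return [
        radius * Math.cos(lat) * Math.cos(long), // x
        radius * Math.sin(lat),                  // y
        radius * Math.cos(lat) * Math.sin(long)  // z
    ];
}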
https://codepen.io/mquantin/pen/mdqmwMa
I hope this will do the job for you.

How to preserve threejs texture scale while applying texture rotation

I'd like to enable a user to rotate a texture on a rectangle while keeping the aspect ratio of the texture image intact. I'm rotating a 1:1 aspect-ratio image on a rectangular surface (say width 2 and length 1).
Steps to reproduce:
In the below texture rotation example
https://threejs.org/examples/?q=rotation#webgl_materials_texture_rotation
If we change one of the faces of the geometry like below:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L57
var geometry = new THREE.BoxBufferGeometry( 20, 10, 10 );
Then you can see that as you play around with the rotation control, the image's aspect ratio is distorted (from a square to a weird shape).
At 0 degrees:
At some angle between 0 and 90:
I understand that by changing the repeatX and repeatY factors I can control this. It's also easy to see what the values would be at 0-degree and 90-degree rotations.
But I'm struggling to come up with a formula for repeatX and repeatY that works for any texture rotation, given the length and width of the rectangular face.
Unfortunately when stretching geometry like that, you'll get a distortion in 3D space, not UV space. In this example, one UV.x unit occupies twice as much 3D space as one UV.y unit:
This is giving you those horizontally-skewed diamonds when in between rotations:
Sadly, there's no way to solve this with texture matrix transforms. The horizontal stretching will be applied after the texture transform, in 3D space, so texture.repeat won't help you avoid this. The only way to solve this is by modifying the UVs so the UV.x units take up as much 3D space as UV.y units:
With complex models, you'd do this kind of "equalizing" in a 3D editor, but since this geometry is simple enough, we can do it in code. See the example below. I'm using a width/height ratio variable in my UV.y remapping; that way the UV transformations match up regardless of how much wider the face is.
//////// Boilerplate Three setup
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("canvas")});
const camera = new THREE.PerspectiveCamera(50, 1, 1, 100);
camera.position.z = 3;
const scene = new THREE.Scene();
/////////////////// CREATE GEOM & MATERIAL
const width = 2;
const height = 1;
const ratio = width / height; // <- magic number that will help with UV remapping
const geometry = new THREE.BoxBufferGeometry(width, height, width);
let uvY;
const uvArray = geometry.getAttribute("uv").array;
// Re-map UVs to avoid distortion
for (let i2 = 0; i2 < uvArray.length; i2 += 2) {
    uvY = uvArray[i2 + 1]; // extract Y value,
    uvY -= 0.5;            // center it around 0,
    uvY /= ratio;          // divide by the w/h ratio,
    uvY += 0.5;            // then undo the centering
    uvArray[i2 + 1] = uvY;
}
geometry.getAttribute("uv").needsUpdate = true;
const uvMap = new THREE.TextureLoader().load("https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_opengl.jpg");
// Now we can apply texture transformations as expected
uvMap.center.set(0.5, 0.5);
uvMap.repeat.set(0.25, 0.5);
uvMap.anisotropy = 16;
const material = new THREE.MeshBasicMaterial({map: uvMap});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("resize", resize);
// Add rotation on mousemove
function onMouseMove(ev) {
    uvMap.rotation = (ev.clientX / window.innerWidth) * Math.PI * 2;
}
function resize() {
    const width = window.innerWidth;
    const height = window.innerHeight;
    renderer.setSize(width, height);
    camera.aspect = width / height;
    camera.updateProjectionMatrix();
}
function animate(time) {
    mesh.rotation.y = Math.cos(time / 3000) * 2;
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
resize();
requestAnimationFrame(animate);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://threejs.org/build/three.js"></script>
<canvas></canvas>
First of all, I agree with the solution @Marquizzo provided, and setting the UVs explicitly on the geometry should be the easiest way to solve your problem.
But @Marquizzo did not answer why changing the matrix of the texture (setting repeatX and repeatY) does not work.
We all know the 2D rotation matrix R:
R = | cos θ  -sin θ |
    | sin θ   cos θ |
UVs are calculated in the shader with a transform matrix T, which is the texture matrix from your question.
T * UV = new UV
To simplify the question, we only consider rotation, and assume we have an additional matrix X for calculating the new UV. Then we have
X * R * UV = new UV
The question now is whether we can find a solution for X such that, with any rotation, the new UV of any point in your question can be calculated correctly. If there is a solution for X, then we can simply use
var X = new Matrix3();
//X.set(x,y,z,...)
texture.matrix.premultiply(X);
Otherwise, the approach you expected doesn't exist.
Let's create several equations to figure out X.
In the pic below, ABCD is one face of your geometry, and the transparent green is the texture. The UV of point A is (0,1), point B is (0,0), and (1,0), (1,1) for C and D respectively.
The first equation comes from the requirement that, without any rotation, the original UV should never change (the UV of A is always (0,1)). So we should have
X * I * (0, 1) = (0, 1) // I is the identity matrix
From here we can see that X should be the identity matrix.
Then let's see whether the identity matrix X can satisfy a second equation. What's the second equation? Simplifying again: let B be the rotation centre (the origin) and rotate the texture 90 degrees counterclockwise. (We use -90 degrees to calculate the UVs even though we rotate by 90 degrees.)
The new UV for point A after rotating the texture 90 degrees should be the current value of E, which is (a/b, 0). Then we have
X * R(-90°) * (0, 1) = (a/b, 0), i.e. X * (1, 0) = (a/b, 0)
From this equation we can see that X cannot be the identity matrix, which means WE ARE NOT ABLE TO FIND A SOLUTION FOR X THAT SOLVES YOUR PROBLEM WITH
X * R * UV = new UV
Certainly, you can change how the shader calculates the new UVs, but that's even harder than the approach @Marquizzo provided.

In A-Frame/THREE.js, is there a method like the Camera.ScreenToWorldPoint() from Unity?

I know a method from Unity which is very useful for converting a screen position to a world position: https://docs.unity3d.com/ScriptReference/Camera.ScreenToWorldPoint.html
I've been looking for something similar in A-Frame/THREE.js, but I didn't find anything.
Is there an easy way to convert a screen position to a world position on a plane positioned a given distance from the camera?
This is typically done using Raycaster. An equivalent function using three.js would be written like this:
function screenToWorldPoint(screenSpaceCoord, target = new THREE.Vector3()) {
    // convert the screen-space coordinates to normalized device coordinates
    // (x and y ranging from -1 to 1); screenWidth, screenHeight and camera
    // are assumed to be available in the surrounding scope
    const ndc = new THREE.Vector2();
    ndc.x = 2 * screenSpaceCoord.x / screenWidth - 1;
    ndc.y = 2 * screenSpaceCoord.y / screenHeight - 1;
    // `Raycaster` can be used to convert this into a ray:
    const raycaster = new THREE.Raycaster();
    raycaster.setFromCamera(ndc, camera);
    // finally, apply the distance stored in screenSpaceCoord.z:
    return raycaster.ray.at(screenSpaceCoord.z, target);
}
Note that coordinates in browsers are usually measured from the top/left corner with y pointing downwards. In that case, the NDC calculation should be:
ndc.y = 1 - 2 * screenSpaceCoord.y / screenHeight;
Another note: instead of using a set distance in screenSpaceCoord.z, you could also let three.js compute an intersection with any object in your scene. For that you can use raycaster.intersectObject() and get a precise depth for the point of intersection with that object. See the documentation and the various examples linked here: https://threejs.org/docs/#api/core/Raycaster
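For example (a hedged sketch, assuming targetMesh is a THREE.Mesh already in the scene and ndc is computed as above):
const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(ndc, camera);
const hits = raycaster.intersectObject(targetMesh);
if (hits.length > 0) {
    const worldPoint = hits[0].point; // exact world-space intersection
    const depth = hits[0].distance;   // distance from the camera
}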

Three.js - Drawing a torus but unable to understand the equation defining it

I'm trying to make an animation of a sphere around which the camera rotates, with a circle drawn on it (using a THREE.TorusGeometry).
Then I project a plane at the current point defined by the direction from the camera position to the origin (0,0,0).
For a circle defined by y=0 and x²+z²=1 (i.e. a circle in the Oxz plane, the equatorial plane of the sphere), you can see the result at:
link 1: circle defined by y=0 and x²+z²=1
As you can see, the coordinates of the plane are drawn correctly, but I can't understand why the yellow circle is not drawn in the Oxz plane (in this link, you can see that it is in the Oxy plane).
Before the matrix multiplication, I defined the torus vector by:
var coordTorus = new THREE.Vector3(radius*Math.cos(timer), 0, radius*Math.sin(timer));
i.e. by x'²+z'²=1 and y'=0. In this case, I don't get a valid result for the yellow circle: it is drawn in the Oxy plane and not in the Oxz plane as expected.
To get a good result, I have to define x'²+y'²=1 and z'=0 in the local plane, but I can't understand why.
Could someone explain this?
It was hard to extract from all the code where exactly your problem was, so I cleaned things up and solved it differently; I think this Fiddle shows what you wanted.
Instead of rotating all the objects, I rotated only the camera, which seems much simpler than your solution:
/**
 * Rotate camera
 */
function rotateCamera() {
    // for camera rotation
    stepSize += 0.002;
    alpha = 2 * Math.PI * stepSize;
    if (alpha > 2 * Math.PI) {
        stepSize = 0;
    }
    // rotate the camera around a circle
    camera.position.x = center.x + distance * Math.cos(alpha);
    camera.position.z = center.y + distance * Math.sin(alpha);
    // the camera should look at the center
    camera.lookAt(new THREE.Vector3(0, 0, 0));
}
Then I added your tangent plane to the camera instead of the scene, so that it rotates with the camera:
camera.add(plane);
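Because the plane is a child of the camera, its position is interpreted in camera-local space, so it can be parked at a fixed offset in front of the lens. A hedged sketch (the exact offset is an assumption and should match how far the tangent point sits from the camera):
// camera-local space: -z points out of the lens, so this keeps the plane
// a fixed distance in front of the camera as it orbits
plane.position.set(0, 0, -distance);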

CSS3DRenderer ignores the projectionMatrix property?

I'm doing augmented reality with Three.js, and recently I tried to combine WebGL and CSS3 rendering to render both 3D content and DOM objects (mostly for video playback) at the same time. I started with the "Closing the gap between html and webgl" tutorial, but I cannot get a correct visualization using CSS (although WebGL works fine).
Basically, when doing AR, we have two matrices we have to apply to our scene: projection matrix and camera matrix. The projection matrix (row-major) usually looks like this:
var projectionMatrix = [ 1.820090055466, 0, -0.000550820783, 0,
0, 3.227676868439, -0.036605358124, 0,
0, 0, -1.000199913979,-0.200020000339,
0, 0, -1, 0
];
And the camera matrix (row-major) is a rigid 3D transform (an R|t composition) representing the camera's pose in the virtual world:
var cameraMatrix = [  0.790828585625,  0.296402275562, -0.535477280617, -0.309822082520,
                     -0.612037420273,  0.382129371166, -0.692378044128, -0.447699964046,
                     -0.000600785017,  0.875284433365,  0.483608126640, -0.637073278427,
                      0.000000000000,  0.000000000000,  0.000000000000,  1.000000000000 ];
With WebGL it's pretty easy to apply these matrices to a pipeline:
self.wglCamera.matrixAutoUpdate = false;
self.wglCamera.projectionMatrix.set(
    pm[0],  pm[1],  pm[2],  pm[3],
    pm[4],  pm[5],  pm[6],  pm[7],
    pm[8],  pm[9],  pm[10], pm[11],
    pm[12], pm[13], pm[14], pm[15]);
self.wglCamera.matrix.set(
    cm[0],  cm[1],  cm[2],  cm[3],
    cm[4],  cm[5],  cm[6],  cm[7],
    cm[8],  cm[9],  cm[10], cm[11],
    cm[12], cm[13], cm[14], cm[15]);
When I do the same for the CSS3 camera, I get an incorrect rendering result (VIDEO):
There are two issues:
The red texture (CSS3DObject) is non-uniformly scaled (it's actually square).
It always sits in the screen center, although it should be located where the blue grid is.
After analyzing the CSS3DRenderer implementation, I found that only the camera's fov property is used to set the perspective effect; the projectionMatrix property is ignored entirely when rendering with CSS3DRenderer. Is this intended?
// https://github.com/mrdoob/three.js/blob/master/examples/js/renderers/CSS3DRenderer.js#L225
this.render = function ( scene, camera ) {
    var fov = 0.5 / Math.tan( THREE.Math.degToRad( camera.fov * 0.5 ) ) * _height;
    ...
    camera.matrixWorldInverse.getInverse( camera.matrixWorld );
    // Why don't we use camera.projectionMatrix here?
    var style = "translate3d(0,0," + fov + "px)" + getCameraCSSMatrix( camera.matrixWorldInverse ) +
                " translate3d(" + _widthHalf + "px," + _heightHalf + "px, 0)";
    ...
};
And if so, how can I achieve the desired result?
I've tried passing PM * CM as the camera matrix, but both problems still exist. Mainly I'm worried about the ignored translation, since the rotation looks good.
I'd appreciate any ideas/suggestions! Thanks.
