I've created a basic model in Blender. It's a cube subdivided 4 times (I need the faces to look like squares), whose faces were then split by their edges (in Blender too). Then I need to separate the final mesh by loose parts in three.js (if I do that in Blender, the exported file gets too big, a few MB). That way each face becomes a separate mesh.
How should I do that?
Step 1 (Blender)
Step 2 (Blender)
After step 2, each face is a separate mesh. I need to replicate step 2 in three.js.
As a result, I need to explode the faces of a sphere.
Here's what I have so far
I'll need many more faces to achieve the desired result. One possible solution would be to place two spheres one inside the other and then "explode" them simultaneously. But I need the faces to be much smaller too.
My "explosion" code is heavily based on this: https://github.com/akella/ExplodingObjects/blob/0ed8d2668e3fe9913133382bb139c73b9d554494/src/egg.js#L178
And here's demo:
https://tympanus.net/Development/ExplodingObjects/index-heart.html
In your case I would use BufferGeometry.
As this showcase demonstrates: https://threejs.org/examples/#webgl_buffergeometry
16,000 triangles are generated with per-triangle normal orientations, so it will comfortably handle your face count.
Building on top of your CodePen, here you'll find a solution with quad faces (instead of your triangles) oriented along the surface of a sphere.
The core loop that lays the quad faces along the sphere's surface:
for (let down = 0; down < segmentsDown; ++down) {
  const v0 = down / segmentsDown;
  const v1 = (down + 1) / segmentsDown;
  const lat0 = (v0 - 0.5) * Math.PI;
  const lat1 = (v1 - 0.5) * Math.PI;

  for (let across = 0; across < segmentsAround; ++across) {
    // for each quad we randomize the radius
    const radius = radiusOfSphere + Math.random() * 1.5 * radiusOfSphere;
    const u0 = across / segmentsAround;
    const u1 = (across + 1) / segmentsAround;
    const long0 = u0 * Math.PI * 2;
    const long1 = u1 * Math.PI * 2;

    // for each quad you have 2 triangles
    // first triangle of the quad
    // getPoint() returns the xyz coords as an array, given (latitude, longitude, radius)
    positions.push(...getPoint(lat0, long0, radius));
    positions.push(...getPoint(lat1, long0, radius));
    positions.push(...getPoint(lat0, long1, radius));
    // second triangle of the quad. Order matters for UV mapping.
    positions.push(...getPoint(lat1, long0, radius));
    positions.push(...getPoint(lat1, long1, radius));
    positions.push(...getPoint(lat0, long1, radius));
  }
}
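The loop assumes a positions array and a getPoint() helper that aren't shown above. Here's a minimal sketch of both, plus the final BufferGeometry assembly (the helper is just the standard latitude/longitude-to-Cartesian conversion; the attribute setup follows the usual three.js pattern):

// Hypothetical helper matching the comment above: converts
// (latitude, longitude, radius) into an [x, y, z] array.
function getPoint(lat, long, radius) {
  return [
    Math.cos(lat) * Math.cos(long) * radius,
    Math.sin(lat) * radius,
    Math.cos(lat) * Math.sin(long) * radius,
  ];
}

const positions = []; // filled by the double loop above

// After the loop, build a non-indexed geometry: since no vertices are
// shared, every quad remains a loose part that can move independently.
const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(positions, 3));
geometry.computeVertexNormals();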
https://codepen.io/mquantin/pen/mdqmwMa
I hope this will do the job for you.
I have a problem, and although I searched everywhere I couldn't find a solution.
I have a stacked sprite and I'm rotating this sprite around the center of the screen. So I iterate over a list of sprites (the stack) and increase the y-coordinate by 2 on every iteration (the rotation is increased step by step by 0.01f outside of the loop):
foreach (var s in stacked)
{
    Vector2 origin = new Vector2(Basic.width / 2, Basic.height / 2);
    Rectangle newPosition = new Rectangle(position.X, position.Y - y, position.Width, position.Height);
    float angle = 0f;
    Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
                       Matrix.CreateRotationZ(rotation) *
                       Matrix.CreateTranslation(origin.X, origin.Y, 0f);

    Vector2 pos = new Vector2(newPosition.X, newPosition.Y);
    pos = Vector2.Transform(pos, transform);
    newPosition.X = (int)pos.X;
    newPosition.Y = (int)pos.Y;
    angle += rotation;

    s.Draw(newPosition, origin, angle, Color.White);
    y += 2;
}
This works fine. But now to my problem: I want to rotate the sprite not only around the center of the screen but also around itself. How can I achieve this? I can only set one origin and one rotation per Draw. I would like to rotate the sprite around the origin (Basic.width / 2, Basic.height / 2) and, while it rotates, also around (position.Width / 2, position.Height / 2), with a different rotation speed for each. How is this possible?
Thank you in advance!
Just to be clear: when using SpriteBatch.Draw() with origin and angle, there is only one rotation, the final angle of the sprite. The other rotations are positional offsets.
The origin in the Draw() call amounts to a translate, rotate, translate-back sequence. Your transform matrix shows this quite well:
Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
                   Matrix.CreateRotationZ(rotation) *
                   Matrix.CreateTranslation(origin.X, origin.Y, 0f);
// Class level variables:
float ScreenRotation, ScreenRotationSpeed;
float ObjectRotation, ObjectRotationSpeed;
Vector2 ScreenOrigin, SpriteOrigin;

// ...

// In constructor and resize events:
ScreenOrigin = new Vector2(Basic.width >> 1, Basic.height >> 1);
// shifts are faster for `int` types. If "Basic.width" is `float`:
//ScreenOrigin = new Vector2(Basic.width, Basic.height) * 0.5f;

// In Update():
ScreenRotation += ScreenRotationSpeed; // * (float)gameTime.ElapsedGameTime.TotalSeconds; // for FPS-invariant speed where speed = 60 * single frame speed
ObjectRotation += ObjectRotationSpeed;

// Calculate the screen center rotation once per step
Matrix baseTransform = Matrix.CreateTranslation(-ScreenOrigin.X, -ScreenOrigin.Y, 0f) *
                       Matrix.CreateRotationZ(ScreenRotation) *
                       Matrix.CreateTranslation(ScreenOrigin.X, ScreenOrigin.Y, 0f);

// In Draw(), at the start of the code snippet you posted:
// moved outside of the loop for a translationally invariant vertical y interpretation
// (or move it inside the loop and apply -y to position.Y for an elliptical effect)
Vector2 ObjectOrigin = new Vector2(position.X, position.Y);
Matrix transform = baseTransform *
                   Matrix.CreateTranslation(-ObjectOrigin.X, -ObjectOrigin.Y, 0f) *
                   Matrix.CreateRotationZ(ObjectRotation) *
                   Matrix.CreateTranslation(ObjectOrigin.X, ObjectOrigin.Y, 0f);

foreach (var s in stacked)
{
    Vector2 pos = new Vector2(ObjectOrigin.X, ObjectOrigin.Y - y);
    pos = Vector2.Transform(pos, transform);

    float DrawAngle = ObjectRotation;
    // or float DrawAngle = ScreenRotation;
    // or float DrawAngle = ScreenRotation + ObjectRotation;
    // or float DrawAngle = 0;
    s.Draw(pos, SpriteOrigin, DrawAngle, Color.White);
    y += 2; // stack offset, as in the question
}
I suggest moving away from the destinationRectangle parameter of Draw() and using the Vector2 position directly with scaling. Rotations drawn through destination rectangles can stretch/squash the sprite by up to sqrt(2) in aspect ratio. The cost of using Vector2 is higher collision complexity.
I am sorry for all the "or" options, but without complete knowledge of the problem... YMMV.
In my 2D projects, I use the vector form of polar coordinates.
The Matrix class requires more calculations than the polar equivalents in 2D: Matrix operates in 3D, wasting cycles calculating Z components.
With normalized direction vectors (cos t, sin t) and a radius (vector length), in many cases I can use Vector2.LengthSquared() to avoid the square root entirely.
The only times I have used matrices in 2D are for the display projection matrix (the entire SpriteBatch) and for mouse and touch-screen input deprojection (multiplying by the inverse of the projection matrix).
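To illustrate the idea (the technique is language-agnostic; this sketch is in JavaScript to match the rest of this page, and the function names are my own):

// Rotate point p about center c by angle t (radians) using the
// 2D vector form directly: one sin/cos pair, no Z component.
function rotateAbout(p, c, t) {
    var cos = Math.cos(t), sin = Math.sin(t);
    var dx = p.x - c.x, dy = p.y - c.y;
    return { x: c.x + dx * cos - dy * sin,
             y: c.y + dx * sin + dy * cos };
}

// Compare squared lengths to skip the square root
// (the Vector2.LengthSquared() idea mentioned above):
function withinRadius(p, c, r) {
    var dx = p.x - c.x, dy = p.y - c.y;
    return dx * dx + dy * dy <= r * r;
}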
I'd like to enable a user to rotate a texture on a rectangle while keeping the aspect ratio of the texture image intact. I'm rotating a 1:1 aspect-ratio image on a rectangular surface (say width 2 and length 1).
Steps to reproduce:
In the below texture rotation example
https://threejs.org/examples/?q=rotation#webgl_materials_texture_rotation
If we change one of the faces of the geometry like below:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L57
var geometry = new THREE.BoxBufferGeometry( 20, 10, 10 );
Then you can see that, as you play around with the rotation control, the image's aspect ratio is distorted (from a square to a weird shape).
At 0 degrees:
At some angle between 0 and 90:
I understand that by changing the repeatX and repeatY factors I can control this. It's also easy to see what the values would be at 0-degree and 90-degree rotations.
But I'm struggling to come up with a formula for repeatX and repeatY that works for any texture rotation, given the length and width of the rectangular face.
Unfortunately when stretching geometry like that, you'll get a distortion in 3D space, not UV space. In this example, one UV.x unit occupies twice as much 3D space as one UV.y unit:
This is giving you those horizontally-skewed diamonds when in between rotations:
Sadly, there's no way to solve this with texture matrix transforms. The horizontal stretching will be applied after the texture transform, in 3D space, so texture.repeat won't help you avoid this. The only way to solve this is by modifying the UVs so the UV.x units take up as much 3D space as UV.y units:
With complex models, you'd do this kind of "equalizing" in a 3D editor, but since this geometry is simple enough, we can do it via code. See the example below. I'm using a width/height ratio variable in my UV.y remapping, so the UV transformations will match up regardless of how much wider the face is.
//////// Boilerplate Three setup
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("canvas")});
const camera = new THREE.PerspectiveCamera(50, 1, 1, 100);
camera.position.z = 3;
const scene = new THREE.Scene();
/////////////////// CREATE GEOM & MATERIAL
const width = 2;
const height = 1;
const ratio = width / height; // <- magic number that will help with UV remapping
const geometry = new THREE.BoxBufferGeometry(width, height, width);

let uvY;
const uvArray = geometry.getAttribute("uv").array;

// Re-map UVs to avoid distortion
for (let i2 = 0; i2 < uvArray.length; i2 += 2) {
    uvY = uvArray[i2 + 1]; // extract Y value,
    uvY -= 0.5;            // center around 0,
    uvY /= ratio;          // divide by w/h ratio,
    uvY += 0.5;            // undo the centering around 0
    uvArray[i2 + 1] = uvY;
}
geometry.getAttribute("uv").needsUpdate = true;
const uvMap = new THREE.TextureLoader().load("https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_opengl.jpg");
// Now we can apply texture transformations as expected
uvMap.center.set(0.5, 0.5);
uvMap.repeat.set(0.25, 0.5);
uvMap.anisotropy = 16;
const material = new THREE.MeshBasicMaterial({map: uvMap});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("resize", resize);
// Add rotation on mousemove
function onMouseMove(ev) {
    uvMap.rotation = (ev.clientX / window.innerWidth) * Math.PI * 2;
}

function resize() {
    const width = window.innerWidth;
    const height = window.innerHeight;
    renderer.setSize(width, height);
    camera.aspect = width / height;
    camera.updateProjectionMatrix();
}

function animate(time) {
    mesh.rotation.y = Math.cos(time / 3000) * 2;
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
resize();
requestAnimationFrame(animate);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://threejs.org/build/three.js"></script>
<canvas></canvas>
First of all, I agree with the solution #Marquizzo provided, and setting the UVs explicitly on the geometry should be the easiest way to solve your problem.
But that answer does not explain why changing the matrix of the texture (setting repeatX and repeatY) does not work.
We all know the 2D rotation matrix R:

R = | cos θ  -sin θ |
    | sin θ   cos θ |
UVs are calculated in the shader with a transform matrix T, which is the texture matrix from your question.
T * UV = new UV
To simplify the question, we only consider rotation, and assume we have an additional matrix X for calculating the new UV. Then we have
X * R * UV = new UV
The question now is whether we can find a solution of X such that, with any rotation, the new UV of any point in your question is calculated correctly. If there is a solution of X, then we can simply use
var X = new Matrix3();
//X.set(x,y,z,...)
texture.matrix.premultiply(X);
Otherwise, we can't find the approach you expected.
Let's create several equations to figure out X.
In the pic below, ABCD is one face of your geometry, and the transparent green is the texture. The UV of point A is (0,1), point B is (0,0), and (1,0), (1,1) for C and D respectively.
The first equation comes from the consideration that, without any rotation, the original UVs should never be changed (the UV for A is always (0,1), and similarly for B, C and D). So we should have
X * I * (0, 1) = (0, 1) // I is the identity matrix
Since this must hold for the UVs of all four corners at once, X must itself be the identity matrix.
Then let's see whether the identity matrix X can satisfy the second equation. What's the second equation? To simplify again, let B be the rotation centre (origin) and rotate the texture 90 degrees (counterclockwise). We use -90 to calculate the UV even though we rotate by 90 degrees.
The new UV for point A after rotating the texture 90 degrees should be the current value of E, which is (a/b, 0). Then we have
X * R(-90°) * (0, 1) = (a/b, 0)
Since R(-90°) * (0, 1) = (1, 0), this requires X * (1, 0) = (a/b, 0), whereas the identity matrix gives (1, 0). From this equation we can see X should not be an identity matrix, which means WE ARE NOT ABLE TO FIND A SOLUTION OF X TO SOLVE YOUR PROBLEM WITH
X * R * UV = new UV
Certainly, you could change the shader that calculates the new UVs, but that's even harder than the approach #Marquizzo provided.
We have the same rectangle position relative to 3 identical, statically installed web cameras that are not on the same line, say on a flat basketball field. Thus we have them all inside one 3D space, with (x, y, z) positions and (ax, ay, az) orientations set for all of them.
We know the ball's color and we have found its position in all 3 images im1, im2, im3. Now, having its positions on the 2D frames, (p1x, p1y), (p2x, p2y), (p3x, p3y), and the cameras' positions/orientations, how do we get the ball's position in 3D space?
1. You need to unproject the 2D screen coordinates into rays in 3D space.
2. You need to solve a system of equations to find the real point in 3D from the 3 rays you got in the first step.

You can find the source code for gluUnProject here. I also provide my code for it:
public Vector4 Unproject(float x, float y, Matrix4 View)
{
    var ndcX = x / Viewport.Width * 2 - 1.0f;
    var ndcY = y / Viewport.Height * 2 - 1.0f;
    var invVP = Matrix4.Invert(View * ProjectionMatrix);
    // We don't know the z-coordinate of the point, so we choose 0.0f for it.
    // We are going to find it out later.
    var screenPos = new Vector4(ndcX, -ndcY, 0.0f, 1.0f);
    var res = Vector4.Transform(screenPos, invVP);
    return res / res.W;
}

Ray ComputeRay(Camera camera, Vector2 p)
{
    var worldPos = Unproject(p.X, p.Y, camera.View);
    var dir = new Vector3(worldPos) - camera.Eye;
    return new Ray(camera.Eye, Vector3.Normalize(dir));
}
Now you need to find the intersection of three such rays. Theoretically, two rays would be enough; it depends on the positions of your cameras.
If we had infinite-precision floating-point arithmetic and noise-free input, that would be trivial. But in reality you might need to apply a simple numerical scheme to find the point with appropriate precision.
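One simple scheme is a linear least-squares solve for the point that minimizes the summed squared distance to all rays. A sketch in plain JavaScript (my own formulation, not part of gluUnProject; ray directions must be normalized and not all parallel):

// Find the 3D point closest, in the least-squares sense, to a set
// of rays given as { origin: [x,y,z], dir: [x,y,z] }.
function closestPointToRays(rays) {
    // Accumulate A = sum(I - d*d^T) and b = sum((I - d*d^T) * origin).
    var A = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
    var b = [0, 0, 0];
    rays.forEach(function(ray) {
        var o = ray.origin, d = ray.dir;
        for (var i = 0; i < 3; i++) {
            for (var j = 0; j < 3; j++) {
                var m = (i === j ? 1 : 0) - d[i] * d[j];
                A[i][j] += m;
                b[i] += m * o[j];
            }
        }
    });
    return solve3x3(A, b);
}

// Solve the 3x3 linear system A * x = b with Cramer's rule.
function solve3x3(A, b) {
    function det(m) {
        return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
             - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
             + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
    }
    function replaceCol(m, k, v) {
        return m.map(function(row, i) {
            return row.map(function(x, j) { return j === k ? v[i] : x; });
        });
    }
    var D = det(A);
    return [det(replaceCol(A, 0, b)) / D,
            det(replaceCol(A, 1, b)) / D,
            det(replaceCol(A, 2, b)) / D];
}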
Coming from this question, I'm trying to generate UV mappings programmatically with Three.js for some models. I need this because my models are also generated programmatically and I need to apply a simple texture to them. I have read here and successfully generated a UV mapping for some simple 3D text, but when applying the same mapping to more complex models it just doesn't work.
The texture I'm trying to apply is something like this:
The black background is just transparent in the PNG image. I need to apply this to my models; it's just a glitter effect, so I don't care about the exact position on the model. Is there any way to create a simple UV map programmatically for cases like this?
I'm using this code from the linked question, which works great for planar models but doesn't work for non-planar ones:
assignUVs = function(geometry) {
    geometry.computeBoundingBox();

    var max = geometry.boundingBox.max;
    var min = geometry.boundingBox.min;
    var offset = new THREE.Vector2(0 - min.x, 0 - min.y);
    var range = new THREE.Vector2(max.x - min.x, max.y - min.y);

    geometry.faceVertexUvs[0] = [];
    var faces = geometry.faces;

    for (var i = 0; i < geometry.faces.length; i++) {
        var v1 = geometry.vertices[faces[i].a];
        var v2 = geometry.vertices[faces[i].b];
        var v3 = geometry.vertices[faces[i].c];

        geometry.faceVertexUvs[0].push([
            new THREE.Vector2((v1.x + offset.x) / range.x, (v1.y + offset.y) / range.y),
            new THREE.Vector2((v2.x + offset.x) / range.x, (v2.y + offset.y) / range.y),
            new THREE.Vector2((v3.x + offset.x) / range.x, (v3.y + offset.y) / range.y)
        ]);
    }
    geometry.uvsNeedUpdate = true;
}
You need to be more specific. Here, I'll apply UV mapping programmatically:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0),
        new THREE.Vector2(0, 0),
    ]);
}
Happy?
There are infinitely many ways of applying UV coordinates. How about this:
for (var i = 0; i < geometry.faces.length; i++) {
    geometry.faceVertexUvs[0].push([
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random()),
        new THREE.Vector2(Math.random(), Math.random()),
    ]);
}
There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.
Sorry to be so snarky, I'm just pointing out that the question is, in one sense, nonsensical.
Anyway, there are a few common methods for applying a texture.
Spherical mapping
Imagine your model is translucent: there's a sphere inside made of film, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. You then do the math to compute the correct UVs for that situation.
To get a point on the sphere, multiply your points by the inverse of the sphere's world matrix, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which there are again an infinite number of ways.
The simplest way is, I guess, called mercator projection, which is how most 2D maps of the world work. It has the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then:
U = Math.atan2(z, x) / Math.PI * 0.5 - 0.5;
V = 0.5 - Math.asin(y) / Math.PI;
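As a concrete sketch, here is that mapping applied with the same (pre-BufferGeometry) geometry API the question uses. It assumes the model is already centered on the imaginary sphere (otherwise apply the inverse world matrix first, as described above); I shift U by +0.5 so it lands in the 0..1 range:

// Spherical / mercator-style UVs: normalize each vertex to get its
// point on the imaginary sphere, then convert to lat/long.
function assignSphericalUVs(geometry) {
    geometry.faceVertexUvs[0] = [];
    geometry.faces.forEach(function(face) {
        geometry.faceVertexUvs[0].push([face.a, face.b, face.c].map(function(index) {
            var p = geometry.vertices[index].clone().normalize();
            var u = Math.atan2(p.z, p.x) / (Math.PI * 2) + 0.5;
            var v = 0.5 - Math.asin(p.y) / Math.PI;
            return new THREE.Vector2(u, v);
        }));
    });
    geometry.uvsNeedUpdate = true;
}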
Projection Mapping
This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair; the UVs come from where the projected image lands on each point.
Computing these points is exactly like computing the 2D image from 3D data that nearly all WebGL apps do. Usually they have a line in their vertex shader like this
gl_Position = matrix * position;
Where matrix = worldViewProjection. You can then do
clipSpace = gl_Position.xy / gl_Position.w
You now have x, y values that go from -1 to +1. You then convert them to 0 to 1 for UV coords:
uv = clipSpace * 0.5 + 0.5;
Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.
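A rough sketch of that init-time computation, using a THREE.PerspectiveCamera as the projector (function and parameter names are my own):

// Project each vertex with a "projector" camera and use the
// clip-space result as UVs, mirroring the vertex shader math above.
function assignProjectedUVs(geometry, mesh, projector) {
    mesh.updateMatrixWorld(true);
    projector.updateMatrixWorld(true);
    geometry.faceVertexUvs[0] = [];
    geometry.faces.forEach(function(face) {
        geometry.faceVertexUvs[0].push([face.a, face.b, face.c].map(function(index) {
            // local -> world -> clip space (project() handles the /w)
            var p = geometry.vertices[index].clone()
                .applyMatrix4(mesh.matrixWorld)
                .project(projector);
            // clip space runs -1..+1; remap to 0..1 for UVs
            return new THREE.Vector2(p.x * 0.5 + 0.5, p.y * 0.5 + 0.5);
        }));
    });
    geometry.uvsNeedUpdate = true;
}

For the planar mapping described next, the same sketch works with a THREE.OrthographicCamera as the projector.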
Planar Mapping
This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as you want to project. In other words, with projection mapping, as you move your model closer to the projector the picture being projected will get smaller, but with planar mapping it won't.
Following the projection mapping example the only difference here is using an orthographic projection instead of a perspective projection.
Cube Mapping?
This is effectively planar mapping from 6 directions. It's up to you to decide which UV coordinates get which of the 6 planes. I'd guess most of the time you'd take the normal of the triangle to see which plane it most faces, then do planar mapping from that plane.
Actually, I might be getting my terms mixed up. You can also do real cube mapping where you have a cube texture, but that requires U,V,W instead of just U,V. For that it's the same as the sphere example, except you use the normalized coordinates directly as U,V,W.
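A sketch of that normal test, as one of many possible conventions (it expects per-face normals, so call geometry.computeFaceNormals() first):

// Planar-map a vertex from whichever axis the face normal points
// along most, using the two coordinates lying in that plane.
function boxUV(vertex, normal) {
    var ax = Math.abs(normal.x), ay = Math.abs(normal.y), az = Math.abs(normal.z);
    if (ax >= ay && ax >= az) return new THREE.Vector2(vertex.z, vertex.y); // +/-X faces
    if (ay >= az)             return new THREE.Vector2(vertex.x, vertex.z); // +/-Y faces
    return new THREE.Vector2(vertex.x, vertex.y);                           // +/-Z faces
}

You would call this for each of a face's three vertices with face.normal, then scale and offset the result to taste.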
Cylindrical mapping
This is like sphere mapping, except imagine there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation, but basically you move the points of the model into the orientation of the cylinder; then, assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the orientation of the cylinder):
U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5
V = y
2 more solutions
Maybe you want Environment Mapping?
Here's 1 example and Here's another.
Maybe you should consider using a modeling package like Maya or Blender, which have UV editors and UV projectors built in.
I'm using jsc3d to load and display some 3D objects on a canvas. The viewer already has a built-in feature that allows rotating the "view coordinates" (correct me if I'm wrong) about the Y axis by dragging the mouse.
The rotation is performed through a classic rotation matrix, and finally the transformation matrix is multiplied by this rotation matrix.
The rotation about the Y axis is calculated in a way that resembles a circular movement around the whole scene of loaded objects:
JSC3D.Matrix3x4.prototype.rotateAboutYAxis = function(angle) {
    if (angle != 0) {
        angle *= Math.PI / 180;
        var c = Math.cos(angle);
        var s = Math.sin(angle);

        var m00 = c * this.m00 + s * this.m20;
        var m01 = c * this.m01 + s * this.m21;
        var m02 = c * this.m02 + s * this.m22;
        var m03 = c * this.m03 + s * this.m23;
        var m20 = c * this.m20 - s * this.m00;
        var m21 = c * this.m21 - s * this.m01;
        var m22 = c * this.m22 - s * this.m02;
        var m23 = c * this.m23 - s * this.m03;

        this.m00 = m00; this.m01 = m01; this.m02 = m02; this.m03 = m03;
        this.m20 = m20; this.m21 = m21; this.m22 = m22; this.m23 = m23;
    }
};
Now, dragging the mouse will apply this rotation about the Y axis to the whole world, like on the left side of the picture below. Is there a way to apply the rotation about the Up vector so that it stays in its initial position, like it appears on the right side?
I tried something like this:
var rotY = (x - viewer.mouseX) * 360 / viewer.canvas.height;
var rotMat = new JSC3D.Matrix3x4; // identity
rotMat.rotateAboutYAxis(rotY);
viewer.rotMatrix.multiply(rotMat);
but it has no effect.
What operations shall be applied to my rotation matrix to achieve a rotation about the Up vector?
Sample: https://jsfiddle.net/4xzjnnar/1/
This 3D library already has built-in functions to rotate the scene about the X, Y and Z axes, so there is no need to implement new matrix operations for that: we can use the existing functions rotateAboutXAxis, rotateAboutYAxis and rotateAboutZAxis, which apply an in-place matrix multiplication of the desired rotation angle in degrees.
The scene in JSC3D is transformed by a 3x4 matrix, where the rotation is stored in the first 3 values of each row.
After applying a scene rotation and/or translation, applying a subsequent rotation about the Up vector becomes a problem of calculating a rotation about an arbitrary axis.
A very clean and didactic explanation of how to solve this problem is described here: http://ami.ektf.hu/uploads/papers/finalpdf/AMI_40_from175to186.pdf
1. Translate the axis point P0(x0, y0, z0) to the origin of the coordinate system.
2. Perform the appropriate rotations to make the axis of rotation coincident with the z-axis.
3. Rotate about the z-axis by the angle θ.
4. Perform the inverse of the combined rotation transformation.
5. Perform the inverse of the translation.
Now it's easy to write a function for that, because we can use the functions already available in JSC3D (the translation part is omitted here):
JSC3D.Viewer.prototype.rotateAboutUpVector = function(angle) {
    angle %= 360;
    /* pitch, counter-clockwise rotation about the Y axis */
    var degX = this.rpy[0], degZ = this.rpy[2];
    this.rotMatrix.rotateAboutXAxis(-degX);
    this.rotMatrix.rotateAboutZAxis(-degZ);
    this.rotMatrix.rotateAboutYAxis(angle);
    this.rotMatrix.rotateAboutZAxis(degZ);
    this.rotMatrix.rotateAboutXAxis(degX);
}
Because all the above-mentioned functions work in degrees, we need to get back the actual Euler angles from the rotation matrix (simplified):
JSC3D.Viewer.prototype.calcRollPitchYaw = function() {
    var m = this.rotMatrix;
    var radians = 180 / Math.PI;

    var angleX = Math.atan2(-m.m12, m.m22) * radians;
    var angleY = Math.asin(m.m02) * radians;
    var angleZ = Math.atan2(-m.m01, m.m00) * radians;

    this.rpy[0] = angleX;
    this.rpy[1] = angleY;
    this.rpy[2] = angleZ;
}
The tricky part here is that we always need the current rotation angles as they result from the rotations applied so far, so a separate function must be used to store the current Euler angles every time a rotation is applied to the scene.
For that, we can use a very simple structure:
JSC3D.Viewer.prototype.rpy = [0, 0, 0];
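A mouse-drag handler based on the question's snippet could then look like this (a sketch; update() is jsc3d's usual redraw call):

var rotY = (x - viewer.mouseX) * 360 / viewer.canvas.height;
viewer.rotateAboutUpVector(rotY);
viewer.calcRollPitchYaw(); // store the new pose for the next drag
viewer.update();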
This will be the final result: