I'd like to enable a user to rotate a texture on a rectangle while keeping the aspect ratio of the texture image intact. I'm rotating a 1:1 aspect ratio image on a rectangular surface (say width 2 and length 1).
Steps to reproduce:
In the below texture rotation example
https://threejs.org/examples/?q=rotation#webgl_materials_texture_rotation
If we change one of the faces of the geometry like below:
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_texture_rotation.html#L57
var geometry = new THREE.BoxBufferGeometry( 20, 10, 10 );
Then you can see that, as you play around with the rotation control, the image's aspect ratio gets distorted (from a square to a weird shape).
At 0 degrees:
At some angle between 0 and 90:
I understand that by changing the repeatX and repeatY factors I can control this. It's also easy to see what the values should be for 0-degree and 90-degree rotations.
But I'm struggling to come up with the formula for repeatX and repeatY that works for any texture rotation given length and width of the rectangular face.
Unfortunately when stretching geometry like that, you'll get a distortion in 3D space, not UV space. In this example, one UV.x unit occupies twice as much 3D space as one UV.y unit:
This is giving you those horizontally-skewed diamonds when in between rotations:
Sadly, there's no way to solve this with texture matrix transforms. The horizontal stretching will be applied after the texture transform, in 3D space, so texture.repeat won't help you avoid this. The only way to solve this is by modifying the UVs so the UV.x units take up as much 3D space as UV.y units:
With complex models, you'd do this kind of "equalizing" in a 3D editor, but since this geometry is simple enough, we can do it via code. See the example below; I use a width/height ratio variable in the UV.y remapping so the texture transformations match up regardless of how much wider the face is.
//////// Boilerplate Three setup
const renderer = new THREE.WebGLRenderer({canvas: document.querySelector("canvas")});
const camera = new THREE.PerspectiveCamera(50, 1, 1, 100);
camera.position.z = 3;
const scene = new THREE.Scene();
/////////////////// CREATE GEOM & MATERIAL
const width = 2;
const height = 1;
const ratio = width / height; // <- magic number that will help with UV remapping
const geometry = new THREE.BoxBufferGeometry(width, height, width);
let uvY;
const uvArray = geometry.getAttribute("uv").array;
// Re-map UVs to avoid distortion
for (let i2 = 0; i2 < uvArray.length; i2 += 2){
uvY = uvArray[i2 + 1]; // Extract Y value,
uvY -= 0.5; // center around 0
uvY /= ratio; // divide by w/h ratio
uvY += 0.5; // remove center around 0
uvArray[i2 + 1] = uvY;
}
geometry.getAttribute("uv").needsUpdate = true;
const uvMap = new THREE.TextureLoader().load("https://raw.githubusercontent.com/mrdoob/three.js/dev/examples/textures/uv_grid_opengl.jpg");
// Now we can apply texture transformations as expected
uvMap.center.set(0.5, 0.5);
uvMap.repeat.set(0.25, 0.5);
uvMap.anisotropy = 16;
const material = new THREE.MeshBasicMaterial({map: uvMap});
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
window.addEventListener("mousemove", onMouseMove);
window.addEventListener("resize", resize);
// Add rotation on mousemove
function onMouseMove(ev) {
uvMap.rotation = (ev.clientX / window.innerWidth) * Math.PI * 2;
}
function resize() {
const width = window.innerWidth;
const height = window.innerHeight;
renderer.setSize(width, height);
camera.aspect = width / height;
camera.updateProjectionMatrix();
}
function animate(time) {
mesh.rotation.y = Math.cos(time / 3000) * 2;
renderer.render(scene, camera);
requestAnimationFrame(animate);
}
resize();
requestAnimationFrame(animate);
body { margin: 0; }
canvas { width: 100vw; height: 100vh; display: block; }
<script src="https://threejs.org/build/three.js"></script>
<canvas></canvas>
First of all, I agree with the solution @Marquizzo provided: setting the UVs explicitly on the geometry is the easiest way to solve your problem.
But that answer did not explain why changing the texture matrix (setting repeatX and repeatY) does not work.
We all know the 2D rotation matrix R:

R = | cos θ  -sin θ |
    | sin θ   cos θ |
UVs are calculated in the shader with a transform matrix T, which is the texture matrix from your question.
T * UV = new UV
To simplify the question, we only consider rotation, and assume we have an additional matrix X for calculating the new UV. Then we have
X * R * UV = new UV
The question now is whether we can find a solution of X such that, for any rotation, the new UV of any point in your question is calculated correctly. If there is a solution of X, then we can simply use
var X = new Matrix3();
//X.set(x,y,z,...)
texture.matrix.premultiply(X);
Otherwise, we can't find the approach you expected.
Let's create several equations to figure out X.
In the pic below, ABCD is one face of your geometry, and the transparent green is the texture. The UV of point A is (0,1), point B is (0,0), and (1,0), (1,1) for C and D respectively.
The first equation comes from the observation that, without any rotation, the original UVs should never change (the UV for A is always (0,1)). So we should have
X * I * (0, 1) = (0, 1) // I is the identity matrix
From here we can see X should also be an identity matrix.
Then let's see whether the identity matrix X can satisfy the second equation. What's the second equation? To simplify again, let B be the rotation centre (origin) and rotate the texture 90 degrees (counterclockwise). Note that we use -90 degrees to calculate the UVs even though we rotate by 90 degrees.
The new UV for point A after rotating the texture 90 degrees should be the current value of E, which is (a/b, 0) (a and b being the two side lengths of the face in the picture). Then we have

X * R(-90°) * (0, 1) = (a/b, 0)

and since R(-90°) * (0, 1) = (1, 0), this reduces to X * (1, 0) = (a/b, 0).
From this equation we can see X should not be an identity matrix (unless a = b), which contradicts the first equation. That means WE ARE NOT ABLE TO FIND A SOLUTION OF X TO SOLVE YOUR PROBLEM WITH
X * R * UV = new UV
Certainly, you can change the shader that calculates the new UVs, but that's even more work than the approach @Marquizzo provided.
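For the record, here's a minimal sketch of what that shader route could look like (my own sketch, not part of either answer): a ShaderMaterial standing in for the MeshBasicMaterial above, with hypothetical rotation and aspect uniforms. The idea is to undo the face's stretch before rotating the sample coordinates; you'd also want uvMap.wrapS = uvMap.wrapT = THREE.RepeatWrapping so rotated corners don't clamp at the texture edge.

const aspectMaterial = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: uvMap },
    rotation: { value: 0.0 }, // texture rotation in radians
    aspect: { value: 2.0 }    // face width / height
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D map;
    uniform float rotation;
    uniform float aspect;
    varying vec2 vUv;
    void main() {
      vec2 p = vUv - 0.5;        // center the UVs
      p.x *= aspect;             // undo the stretch: equal units on both axes
      float c = cos(rotation), s = sin(rotation);
      p = mat2(c, s, -s, c) * p; // rotate the sample point around the center
      gl_FragColor = texture2D(map, p * 0.5 + 0.5); // uniform scale back into 0..1
    }`
});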
I have a problem, and although I searched everywhere I couldn't find a solution.
I have a stacked sprite and I'm rotating this sprite around the center of the screen. So I iterate over a list of sprites (stacked) and increase the y-coordinate by 2 every loop (rotation is increased step by step by 0.01f outside of the loop):
foreach(var s in stacked)
{
Vector2 origin = new Vector2(Basic.width / 2, Basic.height / 2);
Rectangle newPosition = new Rectangle(position.X, position.Y - y, position.Width, position.Height);
float angle = 0f;
Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
Matrix.CreateRotationZ(rotation) *
Matrix.CreateTranslation(origin.X, origin.Y, 0f);
Vector2 pos = new Vector2(newPosition.X, newPosition.Y);
pos = Vector2.Transform(pos, transform);
newPosition.X = (int)pos.X;
newPosition.Y = (int)pos.Y;
angle += rotation;
s.Draw(newPosition, origin, angle, Color.White);
y += 2;
}
This works fine. But now to my problem: I want to rotate the sprite not only around the center of the screen but also around itself. How can I achieve this? I can only set one origin and one rotation per Draw call. I would like to rotate the sprite around the origin 'Basic.width / 2, Basic.height / 2' and, while it rotates, also around 'position.Width / 2, position.Height / 2', each with a different rotation speed. How is this possible?
Thank you in advance!
Just to be clear:
When using SpriteBatch.Draw() with origin and angle, there is only one rotation: the final angle of the sprite.
The other rotations are positional offsets.
The origin in the Draw() call amounts to a translate, rotate, translate-back sequence. Your transform matrix shows this quite well:
Matrix transform = Matrix.CreateTranslation(-origin.X, -origin.Y, 0f) *
Matrix.CreateRotationZ(rotation) *
Matrix.CreateTranslation(origin.X, origin.Y, 0f);
//Class level variables:
float ScreenRotation, ScreenRotationSpeed;
float ObjectRotation, ObjectRotationSpeed;
Vector2 ScreenOrigin, SpriteOrigin;
// ...
// In constructor and resize events:
ScreenOrigin = new Vector2(Basic.width >> 1, Basic.height >> 1);
// shifts are faster for `int` type. If "Basic.width" is `float`:
//ScreenOrigin = new Vector2(Basic.width, Basic.height) * 0.5f;
// In Update():
ScreenRotation += ScreenRotationSpeed; // * (float)gameTime.ElapsedGameTime.TotalSeconds; // for FPS-invariant speed, where speed = 60 * single-frame speed
ObjectRotation += ObjectRotationSpeed;
//Calculate the screen center rotation once per step
Matrix baseTransform = Matrix.CreateTranslation(-ScreenOrigin.X, -ScreenOrigin.Y, 0f) *
Matrix.CreateRotationZ(ScreenRotation) *
Matrix.CreateTranslation(ScreenOrigin.X, ScreenOrigin.Y, 0f);
// In Draw() at the start of your code snippet posted:
// moved outside of the loop for a translationally invariant vertical y interpretation
// or move it inside the loop and apply -y to position.Y for an elliptical effect
Vector2 ObjectOrigin = new Vector2(position.X, position.Y);
Matrix transform = baseTransform *
Matrix.CreateTranslation(-ObjectOrigin.X, -ObjectOrigin.Y, 0f) *
Matrix.CreateRotationZ(ObjectRotation) *
Matrix.CreateTranslation(ObjectOrigin.X, ObjectOrigin.Y, 0f);
foreach(var s in stacked)
{
Vector2 pos = new Vector2(ObjectOrigin.X, ObjectOrigin.Y - y);
pos = Vector2.Transform(pos, transform);
float DrawAngle = ObjectRotation;
// or float DrawAngle = ScreenRotation;
// or float DrawAngle = ScreenRotation + ObjectRotation;
// or float DrawAngle = 0;
s.Draw(pos, SpriteOrigin, DrawAngle, Color.White);
y += 2; // keep the per-sprite stacking offset from the question
}
I suggest moving the Draw() call away from the destinationRectangle parameter and using the Vector2 position directly with scaling. Rotations within square rectangles can stretch/squash the aspect ratio by up to sqrt(2). Using Vector2 does incur the cost of higher collision complexity.
I am sorry for all the "or" options, but without complete knowledge of the problem... YMMV.
In my 2D projects, I use the vector form of polar coordinates.
The Matrix class requires more calculations than the polar equivalents in 2D. Matrix operates in 3D, wasting cycles calculating Z components.
With normalized direction vectors (cos t, sin t) and a radius (vector length), in many cases I use Vector2.LengthSquared() to avoid the square root when possible.
The only times I have used matrices in 2D are for the display projection matrix (entire SpriteBatch) and for mouse and touchscreen input deprojection (multiplying by the inverse of the projection matrix).
I display a "curved tube" and color its vertices based on their distance to the plane the curve lays on.
It works mostly fine; however, when I reduce the resolution of the tube, artifacts start to appear in the tube colors.
Those artifacts seem to depend on the camera position. If I move the camera around, sometimes the artifacts disappear. Not sure it makes sense.
Live demo: http://jsfiddle.net/gz1wu369/15/
I do not know if there is actually a problem in the interpolation or if it is just a "screen" artifact.
Afterwards I render the scene to a texture, looking at it from the "top". It then looks like a "deformation" field that I use in another shader, hence the need for continuous color.
I do not know if it is the expected behavior or if there is a problem in my code while setting the vertices color.
Would using the THREEJS Extrusion tools instead of the tube geometry solve my issue?
const tubeGeo = new THREE.TubeBufferGeometry(closedSpline, steps, radius, curveSegments, false);
const count = tubeGeo.attributes.position.count;
tubeGeo.addAttribute('color', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
const colors = tubeGeo.attributes.color;
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
const pp = new THREE.Vector3(
tubeGeo.attributes.position.array[3 * i],
tubeGeo.attributes.position.array[3 * i + 1],
tubeGeo.attributes.position.array[3 * i + 2]);
const distance = plane.distanceToPoint(pp);
const normalizedDist = Math.abs(distance) / radius;
const t2 = Math.floor(i / (curveSegments + 1));
color.setHSL(0.5 * t2 / steps, .8, .5);
const green = 1 - Math.cos(Math.asin(Math.abs(normalizedDist)));
colors.setXYZ(i, color.r, green, 0);
}
Low-resolution tubes with the "Normals" material show a different artifact:
A high-resolution tube hides the artifacts:
I need to convert the position and rotation of a 3D object to a screen position and rotation. I can convert the position easily, but not the rotation. I've attempted to convert the rotation of the camera, but it does not match up.
Attached is an example plunkr & conversion code.
The white facebook button should line up with the red plane.
https://plnkr.co/edit/0MOKrc1lc2Bqw1MMZnZV?p=preview
function toScreenPosition(position, camera, width, height) {
var p = new THREE.Vector3(position.x, position.y, position.z);
var vector = p.project(camera);
vector.x = (vector.x + 1) / 2 * width;
vector.y = -(vector.y - 1) / 2 * height;
return vector;
}
function updateScreenElements() {
var btn = document.querySelector('#btn-share')
var pos = plane.getWorldPosition();
var vec = toScreenPosition(pos, camera, canvas.width, canvas.height);
var translate = "translate3d("+vec.x+"px,"+vec.y+"px,"+vec.z+"px)";
var euler = camera.getWorldRotation();
var rotate = "rotateX("+euler.x+"rad)"+
" rotateY("+(euler.y)+"rad)"+
" rotateY("+(euler.z)+"rad)";
btn.style.transform= translate+ " "+rotate;
}
... And a screenshot of the issue.
I would highly recommend not trying to match this to the camera space, but instead applying the image as a texture map to the red plane, and then using a raycast to see whether a click goes over the plane. You'll save yourself a headache in translating and rotating, and in hiding the symbol when it's behind the cube, etc.
Check out the three.js examples to see how to use the Raycaster. It's a lot more flexible and easier than trying to do rotations and matching. Then, whatever the 'btn' onclick function is, you just call it when you detect a raycast collision with the plane.
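A minimal sketch of that approach, assuming the plane and camera variables from the question, with a hypothetical onShareClick() standing in for whatever #btn-share's onclick did:

const raycaster = new THREE.Raycaster();
const mouse = new THREE.Vector2();
window.addEventListener("click", function (event) {
  // Convert the click position to normalized device coordinates (-1 to +1)
  mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  raycaster.setFromCamera(mouse, camera);
  // If the ray hits the red plane, treat it as a button click
  if (raycaster.intersectObject(plane).length > 0) {
    onShareClick(); // hypothetical handler
  }
});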
Coming from this question, I'm trying to generate UV mappings programmatically with Three.js for some models. I need this because my models are being generated programmatically too, and I need to apply a simple texture to them. I have read here and successfully generated UV mapping for some simple 3D text, but when applying the same mapping to more complex models it just doesn't work.
The texture I'm trying to apply is something like this:
The black background is just transparency in the PNG image. I need to apply this to my models; it's just a glitter effect, so I don't care about the exact position on the model. Is there any way to create a simple UV map programmatically for these cases?
I'm using this code from the linked question which works great for planar models but doesn't work for non-planar models:
assignUVs = function( geometry ){
geometry.computeBoundingBox();
var max = geometry.boundingBox.max;
var min = geometry.boundingBox.min;
var offset = new THREE.Vector2(0 - min.x, 0 - min.y);
var range = new THREE.Vector2(max.x - min.x, max.y - min.y);
geometry.faceVertexUvs[0] = [];
var faces = geometry.faces;
for (var i = 0; i < geometry.faces.length; i++) {
var v1 = geometry.vertices[faces[i].a];
var v2 = geometry.vertices[faces[i].b];
var v3 = geometry.vertices[faces[i].c];
geometry.faceVertexUvs[0].push([
new THREE.Vector2( ( v1.x + offset.x ) / range.x , ( v1.y + offset.y ) / range.y ),
new THREE.Vector2( ( v2.x + offset.x ) / range.x , ( v2.y + offset.y ) / range.y ),
new THREE.Vector2( ( v3.x + offset.x ) / range.x , ( v3.y + offset.y ) / range.y )
]);
}
geometry.uvsNeedUpdate = true;
}
You need to be more specific. Here, I'll apply UV mapping programmatically:
for (var i = 0; i < geometry.faces.length; i++) {
geometry.faceVertexUvs[0].push([
new THREE.Vector2( 0, 0 ),
new THREE.Vector2( 0, 0 ),
new THREE.Vector2( 0, 0 ),
]);
}
Happy?
There are infinitely many ways of applying UV coordinates. How about this:
for (var i = 0; i < geometry.faces.length; i++) {
geometry.faceVertexUvs[0].push([
new THREE.Vector2( Math.random(), Math.random() ),
new THREE.Vector2( Math.random(), Math.random() ),
new THREE.Vector2( Math.random(), Math.random() ),
]);
}
There's no RIGHT answer. Whatever you want to do is up to you. It's kind of like asking how to apply pencil to paper.
Sorry to be so snarky, just pointing out the question is in one sense nonsensical.
Anyway, there are a few common methods for applying a texture.
Spherical mapping
Imagine your model is translucent, there's a sphere inside made of film, and inside the sphere is a point light, so that it projects (like a movie projector) from the sphere in all directions. So you do the math to compute the correct UVs for that situation.
To get a point on the sphere, multiply your points by the inverse of the sphere's world matrix, then normalize the result. After that, though, there's still the problem of how the texture itself is mapped to the imaginary sphere, for which again there are an infinite number of ways.
The simplest way is, I guess, called Mercator projection, which is how most 2D maps of the world work. They have the problem that lots of space is wasted at the north and south poles. Assuming x, y, z are the normalized coordinates mentioned in the previous paragraph, then
U = Math.atan2(z, x) / Math.PI * 0.5 - 0.5;
V = 0.5 - Math.asin(y) / Math.PI;
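As a rough sketch of those formulas applied to a BufferGeometry (mine, untested, under assumptions: the model is centred at the origin, so normalizing the position stands in for the inverse-world-matrix step; set the texture to THREE.RepeatWrapping since U comes out in a shifted range):

function sphericalMapUVs(geometry) { // hypothetical helper name
  const pos = geometry.getAttribute("position");
  const uvs = new Float32Array(pos.count * 2);
  const p = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    p.fromBufferAttribute(pos, i).normalize(); // point on the imaginary sphere
    uvs[2 * i]     = Math.atan2(p.z, p.x) / Math.PI * 0.5 - 0.5; // U
    uvs[2 * i + 1] = 0.5 - Math.asin(p.y) / Math.PI;             // V
  }
  geometry.addAttribute("uv", new THREE.BufferAttribute(uvs, 2));
}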
Projection Mapping
This is just like a movie: you have a 2D image being projected from a point. Imagine you pointed a movie projector (or a projection TV) at a chair, then compute those points.
Computing these points is exactly like computing the 2D image from 3D data that nearly all WebGL apps do. Usually they have a line in their vertex shader like this
gl_Position = matrix * position;
Where matrix = worldViewProjection. You can then do
clipSpace = gl_Position.xy / gl_Position.w
You now have x, y values that go from -1 to +1. You then convert them to 0 to 1 for UV coords:
uv = clipSpace * 0.5 + 0.5;
Of course normally you'd compute UV coordinates at init time in JavaScript but the concept is the same.
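Here's a sketch of that init-time JavaScript version (my own, untested): a THREE.PerspectiveCamera acts as the projector, and Vector3.project() applies the same worldViewProjection math, including the divide by w:

function projectionMapUVs(geometry, mesh, projector) { // hypothetical helper
  mesh.updateMatrixWorld(true);
  projector.updateMatrixWorld(true); // also refreshes the camera's inverse matrix
  const pos = geometry.getAttribute("position");
  const uvs = new Float32Array(pos.count * 2);
  const p = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    p.fromBufferAttribute(pos, i)
      .applyMatrix4(mesh.matrixWorld) // vertex into world space
      .project(projector);            // world -> clip space (-1 to +1)
    uvs[2 * i]     = p.x * 0.5 + 0.5; // clip space -> UV
    uvs[2 * i + 1] = p.y * 0.5 + 0.5;
  }
  geometry.addAttribute("uv", new THREE.BufferAttribute(uvs, 2));
}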
Planar Mapping
This is almost the same as projection mapping, except imagine the projector, instead of being a point, is the same size as the area you want to project onto. In other words, with projection mapping, as you move your model closer to the projector the projected picture gets smaller, but with planar mapping it won't.
Following the projection mapping example the only difference here is using an orthographic projection instead of a perspective projection.
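Under the same assumptions, the projectionMapUVs sketch above doubles as planar mapping if you hand it an orthographic projector instead (the frustum bounds here are arbitrary example values):

projectionMapUVs(geometry, mesh, new THREE.OrthographicCamera(-1, 1, 1, -1, 0.1, 100));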
Cube Mapping?
This is effectively planar mapping from 6 directions. It's up to you to decide which UV coordinates get which of the 6 planes. I'd guess most of the time you'd take the normal of the triangle to see which plane it most faces, then do planar mapping from that plane.
Actually, I might be getting my terms mixed up. You can also do real cube mapping where you have a cube texture, but that requires U,V,W instead of just U,V. For that it's the same as the sphere example, except you just use the normalized coordinates directly as U,V,W.
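A sketch of that normal-based plane selection (mine, written against the classic THREE.Geometry faces used in the question; the resulting UVs are in object-space units, so you may still want to offset/scale them into 0..1 like the bounding-box code above):

function boxMapUVs(geometry) { // hypothetical helper
  geometry.computeFaceNormals();
  geometry.faceVertexUvs[0] = [];
  geometry.faces.forEach(function (face) {
    var n = face.normal;
    var ax = Math.abs(n.x), ay = Math.abs(n.y), az = Math.abs(n.z);
    var uvs = [face.a, face.b, face.c].map(function (idx) {
      var v = geometry.vertices[idx];
      // Planar-map from whichever axis the face most directly faces
      if (ax >= ay && ax >= az) return new THREE.Vector2(v.z, v.y);
      if (ay >= ax && ay >= az) return new THREE.Vector2(v.x, v.z);
      return new THREE.Vector2(v.x, v.y);
    });
    geometry.faceVertexUvs[0].push(uvs);
  });
  geometry.uvsNeedUpdate = true;
}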
Cylindrical mapping
This is like sphere mapping, except assume there's a tiny cylinder projecting onto your model. Unlike a sphere, a cylinder has an orientation; basically, you move the points of the model into the orientation of the cylinder. Then, assuming x, y, z are now relative to the cylinder (in other words, you multiplied them by the inverse of the matrix that represents the orientation of the cylinder):
U = Math.atan2(x, z) / Math.PI * 0.5 + 0.5
V = y
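And a sketch of that (mine, untested), assuming cylinder is an Object3D whose orientation defines the mapping axis and the model spans roughly 0..1 along the cylinder's local y, so V needs no extra remapping:

function cylindricalMapUVs(geometry, cylinder) { // hypothetical helper
  cylinder.updateMatrixWorld(true);
  const toLocal = new THREE.Matrix4().getInverse(cylinder.matrixWorld);
  const pos = geometry.getAttribute("position");
  const uvs = new Float32Array(pos.count * 2);
  const p = new THREE.Vector3();
  for (let i = 0; i < pos.count; i++) {
    p.fromBufferAttribute(pos, i).applyMatrix4(toLocal); // into cylinder space
    uvs[2 * i]     = Math.atan2(p.x, p.z) / Math.PI * 0.5 + 0.5; // U around the axis
    uvs[2 * i + 1] = p.y;                                        // V along the axis
  }
  geometry.addAttribute("uv", new THREE.BufferAttribute(uvs, 2));
}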
Two more solutions:
Maybe you want Environment Mapping?
Here's 1 example and Here's another.
Maybe you should consider using a modeling package like Maya or Blender that have UV editors and UV projectors built in.
I'm trying to make a static 3D prism out of point clouds with specific numbers of particles in each. I've got the corner coordinates of each side of the prism based on the angle of turn, and tried spawning the particles in the area bound by these coordinates. Instead, the resulting point clouds have kept only the bottom-left coordinate.
Screenshot: http://i.stack.imgur.com/uQ7Q8.png
I've tried to set the rotation of each cloud object such that their edges meet, but they will rotate only around the world centre. I gather this is something to do with rotation matrices and Euler angles, but, having been trying to work them out for 3 solid days, I've despaired. (I'm a sociologist, not a dev, and haven't touched graphics before this project.)
Please help? How do I set the rotation on each face of the prism? Or maybe there is a more sensible way to get the particles to spawn in the correct area in the first place?
The code:
// draw the particles
var n = 0;
do {
var geom = new THREE.Geometry();
var material = new THREE.PointCloudMaterial({size: 1, vertexColors: true, color: 0xffffff});
for (i = 0; i < group[n]; i++) {
if (geom.vertices.length < group[n]){
var particle = new THREE.Vector3(
Math.random() * screens[n].bottomrightback.x + screens[n].bottomleftfront.x,
Math.random() * screens[n].toprightback.y + screens[n].bottomleftfront.y,
Math.random() * screens[n].bottomrightfront.z + screens[n].bottomleftfront.z);
geom.vertices.push(particle);
geom.colors.push(new THREE.Color(Math.random() * 0x00ffff));
}
}
var system = new THREE.PointCloud(geom, material);
scene.add(system);
// something something matrix Euler something?
n++;
}
while (n < numGroups);
I've tried to set the rotation of each cloud object such that their edges meet, but they will rotate only around the world centre.
It is true they only rotate around 0,0,0. The simple solution then is to move the object to the center, rotate it, and then move it back to its original position.
For example (code not tested, so it might take a bit of tweaking):
var m = new THREE.Matrix4();
var movetocenter = new THREE.Matrix4();
movetocenter.makeTranslation(-x, -y, -z);
var rotate = new THREE.Matrix4();
rotate.makeRotationFromEuler(new THREE.Euler(rx, ry, rz)); // build your rotation here (rx, ry, rz = your angles)
var moveback = new THREE.Matrix4();
moveback.makeTranslation(x, y, z);
// Order matters: with m.multiply(), the matrix multiplied in last is the one
// applied to the geometry first, so multiply in reverse order of application.
m.multiply(moveback);     // 3. move back to the original position
m.multiply(rotate);       // 2. rotate around the origin
m.multiply(movetocenter); // 1. move the object's centre to the origin
//Now you can use geometry.applyMatrix(m)
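Wrapped into a small helper (my own sketch; the centre and angle below are example values), usage might look like:

function rotateGeometryAroundPoint(geometry, center, euler) {
  var m = new THREE.Matrix4();
  // Multiplied-in last is applied first: to centre -> rotate -> move back
  m.multiply(new THREE.Matrix4().makeTranslation(center.x, center.y, center.z));
  m.multiply(new THREE.Matrix4().makeRotationFromEuler(euler));
  m.multiply(new THREE.Matrix4().makeTranslation(-center.x, -center.y, -center.z));
  geometry.applyMatrix(m);
}
// e.g. rotate one face's cloud around its own centre before adding it to the scene:
rotateGeometryAroundPoint(geom, new THREE.Vector3(10, 0, 0), new THREE.Euler(0, Math.PI / 3, 0));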