Splitting sprites (texture atlases) into tiles with three.js

The goal is to create many thumbnails from a given set of pictures, assuming all thumbnails have the same dimensions.
https://threejs.org/examples/#misc_ubiquity_test2 is a nice example showing the texture.wrapS/texture.wrapT + texture.offset approach, but it means cloning a texture for each thumbnail, which has performance implications. The question is: how can a single texture be reused?
With the 16384x16384px texture-size limit in mind, another aspect to cover is how to prepare multiple sprites, load the corresponding textures, and distribute them among the tiles (thumbnails).

Preparing the sprites
Assumptions:
original assets have the same aspect ratio, e.g. ~1.485 (2894x1949px)
we're about to render 128 thumbnails in the end
ImageMagick is installed
./assets/images/thumbnails/ is a working directory for manipulations
./assets/images/sprite-0.jpg, ..., ./assets/images/sprite-<n>.jpg will be actual sprites - horizontal atlases (single rows of tiles)
First off, let's define the desired thumbnail dimensions. Since three.js expects each texture dimension to be a power of 2 (it resizes non-power-of-two textures on the fly), the height can be set to 256px, making the width 380px. This means 43 tiles per sprite (43*380=16340, where 16384 is the limit for the total width).
Building a single sprite
Clean up ./assets/images/thumbnails/original-selected/ and copy a portion of 43 original assets there.
Execute a set of steps listed below.
Rename the resulting sprite.jpg to sprite-<iteration>.jpg.
Steps
Generate small assets:
$ mogrify -path ./assets/images/thumbnails/small/ -resize 380x256 ./assets/images/thumbnails/original-selected/*.png
Build a sprite out of small assets:
$ convert +append ./assets/images/thumbnails/small/*.png ./assets/images/sprite.png
Note that the sprite is now 16340x256, so it has to be resized to 16384x256 for
both dimensions to be powers of 2 (otherwise three.js will do that on the fly):
$ convert -resize 16384x256\! ./assets/images/sprite.png ./assets/images/sprite.png
Finally, convert the sprite to JPEG, reducing the size:
$ convert -quality 85 ./assets/images/sprite.png ./assets/images/sprite.jpg
Loading textures and creating thumbnails
Tiling itself (setting the geometry.faceVertexUvs value) is inspired by https://solutiondesign.com/blog/-/sdg/webgl-and-three-js-texture-mappi-1/19147
import {Scene, Texture, TextureLoader, Vector2, PlaneGeometry, BufferGeometry, MeshBasicMaterial, Mesh} from 'three';
const thumbnailWidth = 380;
const thumbnailHeight = 256;
const thumbnailsCount = 128;
const spriteLength = 43;
const spriteUrlPattern = 'assets/images/sprite-<index>.jpg';
const scene = new Scene();
const loader = new TextureLoader();
loadAllTextures()
.then(initializeAllThumbnails);
function loadAllTextures(): Promise<Texture[]> {
const spritesCount = Math.ceil(thumbnailsCount / spriteLength);
const singlePromises = [];
for (let i = 0; i < spritesCount; i += 1) {
singlePromises.push(loadSingleTexture(i));
}
return Promise.all(singlePromises);
}
function loadSingleTexture(index: number): Promise<Texture> {
const url = spriteUrlPattern.replace('<index>', String(index));
return new Promise((resolve, reject) => {
loader.load(url, resolve, undefined, reject);
});
}
// Tiles are taken from different sprites,
// so thumbnail meshes are built using corresponding textures.
// E.g. given 128 tiles packed into 3 sprites,
// thumbnails 0..42 take the 1st texture, 43..85 the 2nd one, and so on.
function initializeAllThumbnails(allTextures: Texture[]) {
const baseGeometry = new PlaneGeometry(thumbnailWidth, thumbnailHeight);
const materials = allTextures.map((texture) => new MeshBasicMaterial({
map: texture,
}));
for (let thumbnailIndex = 0; thumbnailIndex < thumbnailsCount; thumbnailIndex += 1) {
const geometry = getThumbnailGeometry(thumbnailIndex, baseGeometry);
const materialIndex = Math.floor(thumbnailIndex / spriteLength);
const material = materials[materialIndex]; // could be cloned here if each material needs individual adjustments, e.g. opacity
const mesh = new Mesh(geometry, material);
scene.add(mesh);
}
}
function getThumbnailGeometry(thumbnailIndex: number, baseGeometry: PlaneGeometry): BufferGeometry {
const tileWidth = 1 / spriteLength;
const tileIndex = thumbnailIndex % spriteLength;
const offset = tileIndex * tileWidth;
// +---+---+---+
// | 3 | . | 2 |
// +---+---/---+
// | . | / | . |
// +---/---+---+
// | 0 | . | 1 |
// +---+---+---+
const tile = [
new Vector2(offset, 0),
new Vector2(offset + tileWidth, 0),
new Vector2(offset + tileWidth, 1),
new Vector2(offset, 1),
];
const plainGeometry = baseGeometry.clone();
const bufferGeometry = new BufferGeometry();
// a face consists of 2 triangles, coords defined counterclockwise (legacy Geometry API)
plainGeometry.faceVertexUvs[0] = [
[tile[3], tile[0], tile[2]],
[tile[0], tile[1], tile[2]],
];
bufferGeometry.fromGeometry(plainGeometry);
return bufferGeometry;
}

Related

Is it possible to use a texture material on objects with various sizes?

Working with Three.js r113, I'm creating walls from coordinates of a blueprint dynamically as custom geometries. I've set up the vertices, faces and faceVertexUvs already successfully. Now I'd like to wrap these geometries with a textured material, that repeats the texture and keeps the original aspect ratio.
Since the walls have different lengths, I was wondering which is the best approach to do this?
What I've tried so far is loading the texture once and then using different texture.repeat values, depending on the wall length:
let textures = function() {
let wall_brick = new THREE.TextureLoader().load('../textures/light_brick.jpg');
return {wall_brick};
}();
function makeTextureMaterial(texture, length, height) {
const scale = 2;
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.repeat.set( length * scale, height * scale );
return new THREE.MeshStandardMaterial({map: texture});
}
I then call the above function after creating the geometry and assign the returned materials to the material array, applying them to the front and back faces of each wall. Note: material.wall is an untextured MeshStandardMaterial for the other faces.
let scaledMaterial = [
makeTextureMaterial(textures.wall_brick, this.length.back, this.height),
makeTextureMaterial(textures.wall_brick, this.length.front, this.height),
material.wall
];
this.geometry.faces[0].materialIndex = 0; // back
this.geometry.faces[1].materialIndex = 0; // back
this.geometry.faces[2].materialIndex = 1; // front
this.geometry.faces[3].materialIndex = 1; // front
this.geometry.faces[4].materialIndex = 2;
this.geometry.faces[5].materialIndex = 2;
this.geometry.faces[6].materialIndex = 2;
this.geometry.faces[7].materialIndex = 2;
this.geometry.faces[8].materialIndex = 2;
this.geometry.faces[9].materialIndex = 2;
this.geometry.faces[10].materialIndex = 2;
this.geometry.faces[11].materialIndex = 2; // will do those with a loop later on :)
this.mesh = new THREE.Mesh(this.geometry, scaledMaterial);
What happens is that the texture is displayed on the desired faces, but it is not scaled individually by this.length.back and this.length.front.
Any ideas how to do this? Thank you.
I have just found the proper approach to this. The individual scaling is done via faceVertexUvs, as West Langley answered here: https://stackoverflow.com/a/27098476/4355114
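The linked answer's idea, giving each wall its own UVs instead of sharing one texture.repeat, can be sketched like this (hypothetical `wallFaceUvs` helper and `density` parameter; with RepeatWrapping enabled, UV values above 1 repeat the texture):

```javascript
// Per-wall UVs: a 4m wall gets twice as many texture repeats as a 2m
// wall, so the brick texture keeps its aspect ratio on both.
function wallFaceUvs(length, height, density) {
  const u = length * density; // repeats along the wall
  const v = height * density; // repeats up the wall
  return [
    [0, 0], [u, 0], [u, v], [0, v], // quad corners, counterclockwise
  ];
}

console.log(wallFaceUvs(4, 2.5, 2));
// [[0, 0], [8, 0], [8, 5], [0, 5]]
```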

Vertex color interpolation artifacts

I display a "curved tube" and color its vertices based on their distance to the plane the curve lays on.
It works mostly fine; however, when I reduce the resolution of the tube, artifacts start to appear in the tube colors.
Those artifacts seem to depend on the camera position: if I move the camera around, they sometimes disappear. Not sure that makes sense.
Live demo: http://jsfiddle.net/gz1wu369/15/
I do not know if there is actually a problem in the interpolation or if it is just a "screen" artifact.
Afterwards I render the scene to a texture, looking at it from the "top". It then looks like a "deformation" field that I use in another shader, hence the need for continuous color.
I do not know if it is the expected behavior or if there is a problem in my code while setting the vertices color.
Would using the THREEJS Extrusion tools instead of the tube geometry solve my issue?
const tubeGeo = new THREE.TubeBufferGeometry(closedSpline, steps, radius, curveSegments, false);
const count = tubeGeo.attributes.position.count;
tubeGeo.addAttribute('color', new THREE.BufferAttribute(new Float32Array(count * 3), 3));
const colors = tubeGeo.attributes.color;
const color = new THREE.Color();
for (let i = 0; i < count; i++) {
const pp = new THREE.Vector3(
tubeGeo.attributes.position.array[3 * i],
tubeGeo.attributes.position.array[3 * i + 1],
tubeGeo.attributes.position.array[3 * i + 2]);
const distance = plane.distanceToPoint(pp);
const normalizedDist = Math.abs(distance) / radius;
const t2 = Math.floor(i / (curveSegments + 1));
color.setHSL(0.5 * t2 / steps, .8, .5);
const green = 1 - Math.cos(Math.asin(normalizedDist));
colors.setXYZ(i, color.r, green, 0);
}
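For reference, the green-channel mapping in the loop above simplifies nicely, since cos(asin(x)) = sqrt(1 - x²); a standalone sketch (hypothetical `greenFromDistance` name, with clamping added as an assumption so asin never receives a value above 1):

```javascript
// Map a vertex's distance to the plane into [0, 1]: flat near the
// plane, rising steeply toward the tube's far side.
// 1 - cos(asin(d)) === 1 - sqrt(1 - d*d) for d in [0, 1].
function greenFromDistance(distance, radius) {
  const d = Math.min(1, Math.abs(distance) / radius);
  return 1 - Math.sqrt(1 - d * d);
}

console.log(greenFromDistance(0, 5)); // 0
console.log(greenFromDistance(5, 5)); // 1
console.log(greenFromDistance(3, 5)); // 1 - sqrt(1 - 0.36) ≈ 0.2
```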
Low-resolution tubes with the "Normals" material show a different artifact, while a high-resolution tube hides the artifacts.

Three.js merging mesh/geometry objects

I'm creating a three.js app which consists of a floor (composed of different tiles) and shelving units (more than 5000...). I'm having some performance issues and low FPS (lower than 20), and I think it is because I'm creating a separate mesh for every tile and shelving unit. I know that I can leverage geometry/mesh merging to improve performance. This is the code for rendering the floor and shelving units (cells):
// add ground tiles
const tileGeometry = new THREE.PlaneBufferGeometry(
1,
1,
1
);
const edgeGeometry = new THREE.EdgesGeometry(tileGeometry);
const edges = new THREE.LineSegments(edgeGeometry, edgeMaterial);
let initialMesh = new THREE.Mesh(tileGeometry, floorMat);
Object.keys(groundTiles).forEach((key, index) => {
let tile = groundTiles[key];
let tileMesh = initialMesh.clone();
tileMesh.position.set(
tile.leftPoint[0] + tile.size[0] / 2,
tile.leftPoint[1] + tile.size[1] / 2,
0
);
tileMesh.scale.x = tile.size[0];
tileMesh.scale.y = tile.size[1];
tileMesh.name = `${tile.leftPoint[0]}-${tile.leftPoint[1]}`;
// Add tile edges (adds tile border lines)
tileMesh.add(edges.clone());
scene.add(tileMesh);
});
// add shelving units
const cellGeometry = new THREE.BoxBufferGeometry( 790, 790, 250 );
const wireframe = new THREE.WireframeGeometry( cellGeometry );
const cellLine = new THREE.LineSegments(wireframe, shelves_material);
Object.keys(cells).forEach((key, index) => {
let cell = cells[key];
const cellMesh = cellLine.clone();
cellMesh.position.set(
cell["x"] + 790 / 2,
// cell["x"],
cell["y"] + 490 / 2,
cell["z"] - 250
);
scene.add(cellMesh);
});
Also, here is a link to a screenshot from the final result.
I saw this article regarding merging of geometries, but I don't know how to implement it in my case because of the edges, line segments and wireframe objects I'm using.
Any help would be appreciated.
Taking into account @Mugen87's comment, here's a possible approach:
Pretty straightforward merging of planes
Using a shader material to draw "borders"
Note: comment out the discard; line to fill the cards with red or whatever material you might want.
JsFiddle demo

Three.js - How to use the frames option in ExtrudeGeometry

I can't find anywhere an explaination about how to use the frames option for ExtrudeGeometry in Three.js. Its documentation says:
extrudePath — THREE.CurvePath. 3d spline path to extrude shape along (creates frames if frames aren't defined).
frames — THREE.TubeGeometry.FrenetFrames. containing arrays of tangents, normals, binormals
but I don't understand how frames must be defined. I think I should use the frames option, passing three arrays for tangents, normals and binormals (calculated in some way), but how should they be passed in frames? Probably (like here for morphNormals):
frames = { tangents: [ new THREE.Vector3(), ... ], normals: [ new THREE.Vector3(), ... ], binormals: [ new THREE.Vector3(), ... ] };
with the three arrays of the same length (perhaps corresponding to the steps or curveSegments option in ExtrudeGeometry)?
Many thanks for an explanation.
Edit 1:
String.prototype.format = function () {
var str = this;
for (var i = 0; i < arguments.length; i++) {
str = str.replace('{' + i + '}', arguments[i]);
}
return str;
}
var numSegments = 6;
var frames = new THREE.TubeGeometry.FrenetFrames( new THREE.SplineCurve3(spline), numSegments );
var tangents = frames.tangents,
normals = frames.normals,
binormals = frames.binormals;
var tangents_list = [],
normals_list = [],
binormals_list = [];
for ( i = 0; i < numSegments; i++ ) {
var tangent = tangents[ i ];
var normal = normals[ i ];
var binormal = binormals[ i ];
tangents_list.push("({0}, {1}, {2})".format(tangent.x, tangent.y, tangent.z));
normals_list.push("({0}, {1}, {2})".format(normal.x, normal.y, normal.z));
binormals_list.push("({0}, {1}, {2})".format(binormal.x, binormal.y, binormal.z));
}
alert(tangents_list);
alert(normals_list);
alert(binormals_list);
Edit 2
Some time ago, I opened this topic, for which I used this solution:
var spline = new THREE.SplineCurve3([
new THREE.Vector3(20.343, 19.827, 90.612), // t=0
new THREE.Vector3(22.768, 22.735, 90.716), // t=1/12
new THREE.Vector3(26.472, 23.183, 91.087), // t=2/12
new THREE.Vector3(27.770, 26.724, 91.458), // t=3/12
new THREE.Vector3(31.224, 26.976, 89.861), // t=4/12
new THREE.Vector3(32.317, 30.565, 89.396), // t=5/12
new THREE.Vector3(31.066, 33.784, 90.949), // t=6/12
new THREE.Vector3(30.787, 36.310, 88.136), // t=7/12
new THREE.Vector3(29.354, 39.154, 90.152), // t=8/12
new THREE.Vector3(28.414, 40.213, 93.636), // t=9/12
new THREE.Vector3(26.569, 43.190, 95.082), // t=10/12
new THREE.Vector3(24.237, 44.399, 97.808), // t=11/12
new THREE.Vector3(21.332, 42.137, 96.826) // t=12/12=1
]);
var spline_1 = [], spline_2 = [], t;
for( t = 0; t <= (7/12); t+=0.0001) {
spline_1.push(spline.getPoint(t));
}
for( t = (7/12); t <= 1; t+=0.0001) {
spline_2.push(spline.getPoint(t));
}
But I was thinking about setting the tangent, normal and binormal of the first point (t=0) of spline_2 to be the same as those of the last point (t=1) of spline_1; so I wondered whether the frames option could be useful for this purpose. Would it be possible to overwrite the values for a tangent, normal and binormal in the respective lists, so that the last point (t=1) of spline_1 and the first point (t=0) of spline_2 share the same values and thereby guide the extrusion? For example, for the tangent at t=0 of spline_2:
tangents[0].x = 0.301;
tangents[0].y = 0.543;
tangents[0].z = 0.138;
doing the same also for normals[0] and binormals[0], to ensure the same orientation for the last point (t=1) of spline_1 and the first one (t=0) of spline_2.
Edit 3
I'm trying to visualize the tangent, normal and binormal for each control point of "mypath" (spline) using ArrowHelper, but, as you can see in the demo (on scene load, slowly zoom out until the ArrowHelpers appear; the relevant code is at lines 122 to 152 in the fiddle), the ArrowHelpers do not start at the origin but away from it. How can I obtain the same result as this reference demo (when the "Debug normals" checkbox is checked)?
Edit 4
I plotted two splines that respectively end (blue spline) and start (red spline) at point A (= origin), displaying tangent, normal and binormal vectors at point A for each spline (using cyan color for the blue spline's labels, and yellow color for the red spline's labels).
As mentioned above, to align the two splines and make them continuous, I thought to exploit the three vectors (tangent, normal and binormal). Which mathematical operation, in theory, should I use to turn the end face of the blue spline so that it faces the initial face (yellow face) of the red spline, with the respective tangents (D, D', hidden in the picture), normals (B, B') and binormals (C, C') aligned? Should I use the .setFromUnitVectors(vFrom, vTo) method of Quaternion? Its documentation reads: "Sets this quaternion to the rotation required to rotate direction vector vFrom to direction vector vTo ... vFrom and vTo are assumed to be normalized." So, probably, I need to define three quaternions:
quaternion for the rotation of the normalized tangent D vector in the direction of the normalized tangent D' vector
quaternion for the rotation of the normalized normal B vector in the direction of the normalized normal B' vector
quaternion for the rotation of the normalized binormal C vector in the direction of the normalized binormal C' vector
with:
vFrom = normalized D, B and C vectors
vTo = normalized D', B' and C' vectors
and apply each of the three quaternions respectively to D, B and C (not normalized)?
Thanks a lot again
Edit 5
I tried this code (following the image for how to align the vectors), but nothing changed:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
normals_1 = frames_1.normals,
binormals_1 = frames_1.binormals;
var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
normals_2 = frames_2.normals,
binormals_2 = frames_2.binormals;
var b1_b2_angle = binormals_1[ binormals_1.length - 1 ].angleTo( binormals_2[ 0 ] ); // angle between binormals_1 (at point A of spline 1) and binormals_2 (at point A of spline 2)
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromAxisAngle( normals_1[ normals_1.length - 1 ], b1_b2_angle ); // quaternion equal to a rotation on normal_1 as axis
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis ); // apply quaternion to binormals_1
var n1_n2_angle = normals_1[ normals_1.length - 1 ].angleTo( normals_2[ 0 ] ); // angle between normals_1 (at point A of spline 1) and normals_2 (at point A of spline 2)
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromAxisAngle( binormals_1[ binormals_1.length - 1 ], -n1_n2_angle ); // quaternion equal to a rotation on binormal_1 as axis
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis ); // apply quaternion to normals_1
and nothing changed with this other approach either:
var numSegments_1 = points_1.length; // points_1 = list of points
var frames_1 = new THREE.TubeGeometry.FrenetFrames( points_1_spline, numSegments_1, false ); // path, segments, closed
var tangents_1 = frames_1.tangents,
normals_1 = frames_1.normals,
binormals_1 = frames_1.binormals;
var numSegments_2 = points_2.length;
var frames_2 = new THREE.TubeGeometry.FrenetFrames( points_2_spline, numSegments_2, false );
var tangents_2 = frames_2.tangents,
normals_2 = frames_2.normals,
binormals_2 = frames_2.binormals;
var quaternion_n1_axis = new THREE.Quaternion();
quaternion_n1_axis.setFromUnitVectors( binormals_1[ binormals_1.length - 1 ].normalize(), binormals_2[ 0 ].normalize() );
var vector_b1 = binormals_1[ binormals_1.length - 1 ];
vector_b1.applyQuaternion( quaternion_n1_axis );
var quaternion_b1_axis = new THREE.Quaternion();
quaternion_b1_axis.setFromUnitVectors( normals_1[ normals_1.length - 1 ].normalize(), normals_2[ 0 ].normalize() );
var vector_n1 = normals_1[ normals_1.length - 1 ];
vector_n1.applyQuaternion( quaternion_b1_axis );

Three.js: How to Repeat a Texture

With the following code, I want to set the rectangle's texture, but the problem is that the texture image does not repeat over the whole rectangle:
var penGeometry = new THREE.CubeGeometry(length, 15, 120);
var wallTexture = new THREE.ImageUtils.loadTexture('../../3D/brick2.jpg');
wallTexture.wrapS = wallTexture.wrapT = THREE.MirroredRepeatWrapping;
wallTexture.repeat.set(50, 1);
var wallMaterial = new THREE.MeshBasicMaterial({ map: wallTexture });
var line = new THREE.Mesh(penGeometry, wallMaterial);
line.position.x = PenArray.lastPosition.x + (PenArray.currentPosition.x - PenArray.lastPosition.x) / 2;
line.position.y = PenArray.lastPosition.y + (PenArray.currentPosition.y - PenArray.lastPosition.y) / 2;
line.position.z = PenArray.lastPosition.z + 60;
line.rotation.z = angle;
The texture image is http://wysnan.com/NightClubBooth/brick1.jpg
The result is http://wysnan.com/NightClubBooth/brick2.jpg
Only a piece of the texture is rendered correctly, not the whole rectangle. Why?
And how can the whole rectangle be covered with this texture image?
For repeat wrapping, your texture's dimensions must be a power of two (POT).
For example ( 512 x 512 ) or ( 512 x 256 ).
three.js r.58
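A quick way to check and fix texture dimensions before loading (hypothetical helper names; the bitwise trick assumes integer sizes):

```javascript
// A positive integer is a power of two iff it has a single set bit.
function isPowerOfTwo(n) {
  return n > 0 && (n & (n - 1)) === 0;
}

// Nearest power of two to resize a non-POT texture dimension to.
function nearestPowerOfTwo(n) {
  return Math.pow(2, Math.round(Math.log2(n)));
}

console.log(isPowerOfTwo(512));      // true
console.log(isPowerOfTwo(500));      // false
console.log(nearestPowerOfTwo(500)); // 512
console.log(nearestPowerOfTwo(300)); // 256
```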
