When I search for some skybox images (e.g. google images) I am getting hits showing single images in a sideways cross pattern. But all the three.js examples (for example) I've managed to find show loading 6 images.
It feels strange that I have to cut up a single image, and then have the extra load of 6 images instead of one image.
The documentation is a bit vague as to whether 6 images are an option or the only way to do it.
Here is a question that seems to be using a single image, but it is one row, and the answer uses a 2x3 grid; neither of them is the cross shape!
(BTW, bonus question: I tried working this out from the source code, but where is it? The THREE.CubeTextureLoader().load() code is a loop to load however many URLs it is given (NB. no checking that it is 6), then calls THREE.Texture, which seems very generic.)
Answer: yes, it can definitely be a single image. You have the proof in the Stack Overflow question you provided.
Why the cross images exist: sometimes (I have done this in OpenGL) you can specify coordinates in your image for each of the 6 faces of your cube. It doesn't look like the three.js library offers this functionality.
What it does offer is the .repeat and .offset texture attributes. This is what is being used in the single-image jsfiddle:
for ( var i = 0; i < 6; i ++ ) {
    t[i] = THREE.ImageUtils.loadTexture( imgData ); // single 2048x256 strip
    t[i].repeat.x = 1 / 8;
    t[i].offset.x = i / 8;
    // Could also use repeat.y and offset.y, which are probably used in the 2x3 image
    ...
You can experiment with the fiddle to see what happens if you modify those values, e.g.
t[i].repeat.x = 1 / 4;
t[i].offset.x = i / 4;
Good luck, hope this helped.
Bonus question edit: also, the THREE.CubeTextureLoader().load() code does in fact trigger an automatic update once all 6 images have loaded:
...
if ( loaded === 6 ) {
    texture.needsUpdate = true;
    if ( onLoad ) onLoad( texture );
}
FYI, the correct mapping after trying it out:
x_offset = [3, 1, 2, 2, 2, 0];
y_offset = [1, 1, 2, 0, 1, 1];
for ( var i = 0; i < 6; i ++ ) {
    t[i] = loader.load( imgData, render ); // 2330x1740 cross cubemap image
    t[i].repeat.x = 1 / 4;
    t[i].offset.x = x_offset[i] / 4;
    t[i].repeat.y = 1 / 3;
    t[i].offset.y = y_offset[i] / 3;
    //t[i].magFilter = THREE.NearestFilter;
    t[i].minFilter = THREE.NearestFilter;
    t[i].generateMipmaps = false;
    materials.push( new THREE.MeshBasicMaterial( { map: t[i] } ) );
}
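For reference, the trial-and-error mapping above can be written as a small lookup. This is only a sketch in plain JavaScript, assuming the loop index follows the usual three.js box material order (+x, -x, +y, -y, +z, -z) and a 4x3 horizontal-cross image; verify the face order against your own cubemap.

```javascript
// Tile coordinates (column, row) of each cube face inside a 4x3
// horizontal-cross image, in assumed material order: px, nx, py, ny, pz, nz.
// These pairs match the x_offset / y_offset arrays above.
const FACE_TILES = {
  px: [3, 1], nx: [1, 1],
  py: [2, 2], ny: [2, 0],
  pz: [2, 1], nz: [0, 1],
};
const COLS = 4, ROWS = 3;

// Derive the .repeat / .offset values for one face of the cross image.
function crossUvTransform(face) {
  const [col, row] = FACE_TILES[face];
  return {
    repeat: { x: 1 / COLS, y: 1 / ROWS },
    offset: { x: col / COLS, y: row / ROWS },
  };
}
```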
Related
Let’s say I want to make 100 objects - for example cars, like the one you see here:
This car is currently composed of 5 meshes: one yellow Cube and four blue Spheres.
What I’d like to know is what would be the most efficient/correct way to make 100 of these cars - or maybe 500 - in terms of memory management/ CPU performance, etc.
The way I’m currently going about doing this is as follows:
Make an empty THREE.Group called “newCarGroup” -
Create the yellow rectangular Mesh for the body of the car - called “carBodyMesh”
Create four blue Sphere Meshes for the Tires called “tire1Mesh”, “tire2Mesh”, “tire3Mesh”, and “tire4Mesh”
Add the Body and the four Tires to the “newCarGroup”
And finally, in a FOR loop, create/instantiate 100 “newCarGroup” objects, adding each one to the SCENE at a random position
The code is below.
It's working perfectly well right now, but I’d like to know if this is the “proper”/best way to do this?
Consider it’s possible I might end up needing 1,000 cars - or 5,000 cars. So will this scale properly?
Also, I need to add more objects to the car: 4 windows - actually make that 6, to include the front and back windshields - then four headlights, etc.
So the final Car Object alone may end up being composed of 20 meshes - or more.
Being that I’m kinda new to THREE.JS I wanna make sure I develop good habits and go about this sort of thing the right way.
Here’s my code:
function makeOneCar() {
var newCarGroup = new THREE.Group();
// 1. CAR-Body:
const bodyGeometry = new THREE.BoxGeometry(30, 10, 10);
const bodyMaterial = new THREE.MeshPhongMaterial({ color: "yellow" } );
const carBodyMesh = new THREE.Mesh(bodyGeometry, bodyMaterial);
// 2. TIRES:
const tireGeometry = new THREE.SphereGeometry(2, 16, 16);
const tireMaterial = new THREE.MeshPhongMaterial( { color: "blue" } );
const tire1Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire2Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire3Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
const tire4Mesh = new THREE.Mesh(tireGeometry, tireMaterial);
// TIRE 1 Position:
tire1Mesh.position.x = carBodyMesh.position.x - 11;
tire1Mesh.position.y = carBodyMesh.position.y - 4.15;
tire1Mesh.position.z = carBodyMesh.position.z + 4.5;
// TIRE 2 Position:
tire2Mesh.position.x = carBodyMesh.position.x + 11;
tire2Mesh.position.y = carBodyMesh.position.y - 4.15;
tire2Mesh.position.z = carBodyMesh.position.z + 4.5;
// TIRE 3 Position:
tire3Mesh.position.x = carBodyMesh.position.x - 11;
tire3Mesh.position.y = carBodyMesh.position.y - 4.15;
tire3Mesh.position.z = carBodyMesh.position.z - 4.5;
// TIRE 4 Position:
tire4Mesh.position.x = carBodyMesh.position.x + 11;
tire4Mesh.position.y = carBodyMesh.position.y - 4.15;
tire4Mesh.position.z = carBodyMesh.position.z - 4.5;
// Putting it all together:
newCarGroup.add(carBodyMesh);
newCarGroup.add(tire1Mesh);
newCarGroup.add(tire2Mesh);
newCarGroup.add(tire3Mesh);
newCarGroup.add(tire4Mesh);
// Setting (x, y, z) Coordinates - RANDOMLY
let randy = Math.floor(Math.random() * 10);
let newCarGroupX = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
let newCarGroupY = 0.0;
let newCarGroupZ = randy % 2 == 0 ? Math.random() * 250 : Math.random() * -250;
newCarGroup.position.set(newCarGroupX, newCarGroupY, newCarGroupZ)
scene.add(newCarGroup);
}
function makeCars() {
for(var carCount = 0; carCount < 100; carCount ++) {
makeOneCar();
}
}
I’d like to know if this is the “proper”/best way to do this?
This is subjective. You say the method works great for your current use-case, so for that use-case, it is fine.
So will this scale properly?
The simple answer is: No. The more complex answer is: ...not really.
You're re-using the geometry and materials, which is good. But every Mesh you create has meta information surrounding it, which adds to your overall memory footprint.
Also, every standard Mesh you add incurs what is known as a "draw call": a separate command telling the GPU to draw that particular shape. Instead, take a look at InstancedMesh. This allows the GPU to be given the instructions for drawing the shape just once for the whole scene. Yes, rather than drawing each cube individually, the GPU can draw all the cubes at the same time, and they can even have different colors and transformations. There are limitations to this class, but it's a good starting point for understanding how instancing works.
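To make the memory argument concrete, here is a plain-JavaScript sketch of what instancing stores: one 4x4 transform per car packed into a single typed array, rather than one Mesh object (with all its metadata) per car. The helper names are mine, not three.js API; in three.js itself you would create `new THREE.InstancedMesh(geometry, material, count)` and call `setMatrixAt(i, matrix)`, which writes into exactly this kind of buffer (`instanceMatrix`).

```javascript
// Pack one column-major 4x4 translation matrix per instance into a single
// Float32Array -- the layout an instanced mesh keeps on the GPU.
function buildInstanceMatrices(count, positionFor) {
  const array = new Float32Array(count * 16);
  for (let i = 0; i < count; i++) {
    const { x, y, z } = positionFor(i);
    // Identity rotation/scale; the translation lives in elements 12..14.
    array.set([1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, x, y, z, 1], i * 16);
  }
  return array;
}

// 500 cars on a 25 x 20 grid: one buffer of 500 * 16 floats, one draw call,
// instead of 500 separate Mesh objects each costing their own draw call.
const matrices = buildInstanceMatrices(500, i => ({
  x: (i % 25) * 40,
  y: 0,
  z: Math.floor(i / 25) * 40,
}));
```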
Working with Three.js r113, I'm creating walls from coordinates of a blueprint dynamically as custom geometries. I've set up the vertices, faces and faceVertexUvs already successfully. Now I'd like to wrap these geometries with a textured material, that repeats the texture and keeps the original aspect ratio.
Since the walls have different lengths, I was wondering which is the best approach to do this?
What I've tried so far is loading the texture once and then using different texture.repeat values, depending on the wall length:
let textures = function() {
let wall_brick = new THREE.TextureLoader().load('../textures/light_brick.jpg');
return {wall_brick};
}();
function makeTextureMaterial(texture, length, height) {
const scale = 2;
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
texture.repeat.set( length * scale, height * scale );
return new THREE.MeshStandardMaterial({map: texture});
}
I then call the above function after creating the geometry, and assign the returned materials to the material array to apply them to the front and back faces of each wall. Note: material.wall is an untextured MeshStandardMaterial for the other faces.
let scaledMaterial = [
makeTextureMaterial(textures.wall_brick, this.length.back, this.height),
makeTextureMaterial(textures.wall_brick, this.length.front, this.height),
material.wall
];
this.geometry.faces[0].materialIndex = 0; // back
this.geometry.faces[1].materialIndex = 0; // back
this.geometry.faces[2].materialIndex = 1; // front
this.geometry.faces[3].materialIndex = 1; // front
this.geometry.faces[4].materialIndex = 2;
this.geometry.faces[5].materialIndex = 2;
this.geometry.faces[6].materialIndex = 2;
this.geometry.faces[7].materialIndex = 2;
this.geometry.faces[8].materialIndex = 2;
this.geometry.faces[9].materialIndex = 2;
this.geometry.faces[10].materialIndex = 2;
this.geometry.faces[11].materialIndex = 2; // will do those with a loop later on :)
this.mesh = new THREE.Mesh(this.geometry, scaledMaterial);
What happens is that the texture is displayed on the desired faces, but it's not scaled individually by this.length.back and this.length.front.
Any ideas how to do this? Thank you.
I have just found the proper approach to this. The individual scaling is done via faceVertexUvs, as West Langley answered here: https://stackoverflow.com/a/27098476/4355114
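A minimal sketch of that idea in plain JavaScript (the helper name and the unitsPerTile parameter are mine, for illustration): instead of touching texture.repeat on a shared texture, give each wall face UV coordinates scaled by the wall's own dimensions, so one texture with RepeatWrapping tiles at the same world size on every wall.

```javascript
// Build faceVertexUvs-style coordinates for one rectangular wall face.
// unitsPerTile = how many world units one copy of the texture should span.
function wallFaceUvs(length, height, unitsPerTile) {
  const u = length / unitsPerTile; // horizontal repeats for this wall
  const v = height / unitsPerTile; // vertical repeats
  // Two triangles covering the quad; UV values above 1 rely on RepeatWrapping.
  return [
    [{ x: 0, y: 0 }, { x: u, y: 0 }, { x: u, y: v }],
    [{ x: u, y: v }, { x: 0, y: v }, { x: 0, y: 0 }],
  ];
}
```

With unitsPerTile = 2, a 6-unit wall gets 3 horizontal repeats and a 3-unit wall gets 1.5, so the brick size stays identical across walls of different lengths.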
I want to create ONE single buffer geometry that can hold many materials.
I have read that in order to achieve this in BufferGeometry, I need to use groups. So I created the following "floor" mesh:
var gg=new THREE.BufferGeometry(),vtx=[],fc=[[],[]],mm=[
new THREE.MeshLambertMaterial({ color:0xff0000 }),
new THREE.MeshLambertMaterial({ color:0x0000ff })
];
for(var y=0 ; y<11 ; y++)
for(var x=0 ; x<11 ; x++) {
vtx.push(x-5,0,y-5);
if(x&&y) {
var p=(vtx.length/3)-1;
fc[(x%2)^(y%2)].push(
p,p-11,p-1,
p-1,p-11,p-12
);
}
}
gg.addAttribute('position',new THREE.Float32BufferAttribute(vtx,3));
Array.prototype.push.apply(fc[0],fc[1]); gg.setIndex(fc[0]);
gg.computeVertexNormals();
gg.addGroup(0,100,0);
gg.addGroup(100,100,1);
scene.add(new THREE.Mesh(gg,mm));
THE ISSUE:
Looking at the example at https://www.crazygao.com/vc/tst2.htm you can see that the BLUE material looks weird.
A single material shows up OK.
With 2 materials grouped as above, the BLUE one always renders really strangely.
Changing the 1st group to start=0, count=200 (for all triangles) and removing the 2nd group shows MORE squares of RED (obviously), but still not in the way I would like.
Changing the 1st group count to any value greater than 200 causes a crash (obviously) from attempting to access a vertex out of range...
Does anyone know what I should do?
I am using THREE.js v.101, and I would prefer not to create a special custom shader for this, not to add another vertex buffer duplicating the ones I already have, and not to create 2 meshes, as that may get much more complicated with advanced models.
Check out this: https://jsfiddle.net/mmalex/zebos3va/
fix #1 - don't define group 0
fix #2 - the 2nd parameter of .addGroup is the number of indices to render; it must be a multiple of 3 (100 was wrong)
var gg = new THREE.BufferGeometry(),
vtx = [],
fc = [[],[]],
mm = [
new THREE.MeshLambertMaterial({
color: 0xff0000
}),
new THREE.MeshLambertMaterial({
color: 0x0000ff
})
];
for (var y = 0; y < 11; y++)
for (var x = 0; x < 11; x++) {
vtx.push(x - 5, 0, y - 5);
if (x && y) {
var p = (vtx.length / 3) - 1;
fc[(x % 2) ^ (y % 2)].push(
p, p - 11, p - 1,
p - 1, p - 11, p - 12
);
}
}
gg.addAttribute('position', new THREE.Float32BufferAttribute(vtx, 3));
Array.prototype.push.apply(fc[0], fc[1]);
gg.setIndex(fc[0]);
gg.computeVertexNormals();
// group 0 is everything, unless you define group 1
// fix #1 - don't define group 0
// fix #2 - the 2nd parameter is the number of indices; it must be a multiple of 3 (100 was wrong)
gg.addGroup(0, 102, 1);
scene.add(new THREE.Mesh(gg, mm));
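The group arithmetic for the checkerboard above can be checked independently. A sketch in plain numbers, assuming fc[0] and fc[1] are concatenated as in the question: the 10x10 cells produce 600 indices in total, and since each color owns exactly 50 cells, two groups of 300 indices each would cover the whole buffer with both materials (the commented addGroup calls are illustrative, not the fiddle's exact numbers).

```javascript
// Index-count arithmetic for the 11x11-vertex checkerboard grid above.
const cells = 10 * 10;                // squares between the 11x11 vertices
const indicesPerCell = 6;             // 2 triangles x 3 indices each
const total = cells * indicesPerCell; // indices in the whole buffer
const half = total / 2;               // per-color share -- a multiple of 3

// With the red indices first and the blue indices second, two groups
// would cover everything:
// gg.addGroup(0,    half, 0); // red material
// gg.addGroup(half, half, 1); // blue material
```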
I recently stretched a gradient across the canvas using the ImageData data array, i.e. the ctx.getImageData() and ctx.putImageData() methods, and thought to myself, "this could be a really efficient way to animate a canvas full of moving objects". So I wrapped it into my main function along with a requestAnimationFrame(callback) call, but that's when things got weird. The best way I can describe it: it's as if the left-most column of pixels on the canvas were duplicated in the right-most column, and depending on what coordinates you specify for the get and put ctx methods, this can have bizarre consequences.
I started out with the get and put methods targeting the canvas at 0, 0 like so:
imageData = ctx.getImageData( 0, 0, cvs.width, cvs.height );
// here I set the pixel colors according to their location
// on the canvas using nested for loops to target the
// the correct imageData array indexes.
ctx.putImageData( imageData, 0, 0 );
But I immediately noticed the right side of the canvas was wrong. Like the gradient has started over, and the last pixel just didn't get touched for some reason:
So I scaled back my draw region, changed the putImageData coordinates to get some space between the drawn image and the edge of the canvas, and changed the get coordinates to eliminate that line on the right edge of the canvas:
imageData = ctx.getImageData( 1, 1, cvs.width, cvs.height );
for ( var x = 0; x < cvs.width - 92; x++ ) {
for ( var y = 0; y < cvs.height - 92; y++ ) {
// array[ x + y * width ] = value / x; // or similar
}
}
ctx.putImageData( imageData, 2, 2 );
Pretty! But wrong... So I reproduced it in codepen. Can someone help me understand and overcome this behavior?
Note: The codepen has the scaled back draw area. If you change the get coordinates to 0 you'll see it basically behaves the same way as the first example but with white-space in between the expected square and the unexpected line. That said, I left the get at 1 and the put at zero for the most interesting behavior yet.
I've changed your code a little. In your double loop I am declaring a variable var i = (x + y*cvs.width)*4; This is only reducing the verbosity of your code so that I can see it better. The i variable represents the index of your pixel in the imageData.data array. Since you are doing
imageData.data[i - 4 ] ...
imageData.data[i - 3 ] ...
imageData.data[i - 2 ] ...
imageData.data[i - 1 ] ...
you are going one pixel backwards, and the first pixel of every row appears as the last pixel of the previous row. So I've changed it from var i = (x + y*cvs.width)*4; to var i = 4 + (x + y*cvs.width)*4;.
When you animate it, since the getImageData call is inside the test() function, you recalculate the values of the imageData.data array based on the last frame. So in the second frame you have that 1px line from the first frame copied again and moved 1px upward and 1px to the left.
I hope this is what you were asking.
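The index arithmetic behind the wrap-around can be isolated in a couple of lines (helper name mine):

```javascript
// Start index of pixel (x, y) in the flat RGBA array of an ImageData.
function pixelIndex(x, y, width) {
  return (x + y * width) * 4; // 4 array slots per pixel: R, G, B, A
}
// Writing to pixelIndex(x, y, w) - 4 .. - 1 actually fills pixel (x - 1, y);
// for x === 0 that lands on the LAST pixel of the PREVIOUS row, which is
// exactly the duplicated edge column described in the question.
```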
var ctx, cvs, imageData;
cvs = document.getElementById('canv');
ctx = cvs.getContext('2d');
function test() {
// imageData = ctx.getImageData( 0, 0, cvs.width, cvs.height );
// produces a line on the right side of the screen
imageData = ctx.getImageData( 1, 1, cvs.width, cvs.height );
// bizarre reverse cascading
for (var x=0;x<cvs.width-92;x++) {
for (var y=0;y<cvs.height-92;y++) {
var i = 4+(x + y*cvs.width)*4;
imageData.data[i - 4 ] = Math.floor((255-y)-Math.floor(x/55)*55);
imageData.data[i - 3 ] = Math.floor(255/(cvs.height-92)*y);
imageData.data[i - 2 ] = Math.floor(255/(cvs.width-92)*x);
imageData.data[i - 1 ] = 255;
}
}
ctx.putImageData( imageData, 0, 0 );
requestAnimationFrame( test );
}
test();
canvas {
box-shadow: 0 0 2.5px 0 black;
}
<canvas id="canv" height="256" width="256"></canvas>
I'm trying to morph the vertices of a loaded .obj file like in this example: https://threejs.org/docs/#api/materials/MeshDepthMaterial - when 'wireframe' and 'morphTargets' are activated in THREE.MeshDepthMaterial.
But I can't reach the desired effect. From the above example the geometry can be morphed via geometry.morphTargets.push( { name: 'target1', vertices: vertices } ); however it seems that morphTargets is not available for my loaded 3D object as it is a BufferGeometry.
Instead I tried to change independently each vertices point from myMesh.child.child.geometry.attributes.position.array[i], it kind of works (the vertices of my mesh are moving) but not as good as the above example.
Here is a Codepen of what I could do.
How can I reach the desired effect on my loaded .obj file?
Adding morph targets to THREE.BufferGeometry is a bit different from THREE.Geometry. Example:
// after loading the mesh:
var morphAttributes = mesh.geometry.morphAttributes;
morphAttributes.position = [];
mesh.material.morphTargets = true;
var position = mesh.geometry.attributes.position.clone();
for ( var j = 0, jl = position.count; j < jl; j ++ ) {
position.setXYZ(
j,
position.getX( j ) * 2 * Math.random(),
position.getY( j ) * 2 * Math.random(),
position.getZ( j ) * 2 * Math.random()
);
}
morphAttributes.position.push( position ); // register the modified copy as morph target 0 (I forgot this earlier)
mesh.updateMorphTargets();
mesh.morphTargetInfluences[ 0 ] = 0;
// later, in your render() loop:
mesh.morphTargetInfluences[ 0 ] += 0.001;
three.js r90
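For intuition, the per-vertex blend a morph target performs can be written out in plain JavaScript (helper name mine; this mirrors the base + influence * (target - base) mixing the shader applies for absolute morph targets in this era of three.js):

```javascript
// Blend one vertex component between its base value and a morph target.
function morphComponent(base, target, influence) {
  return base + influence * (target - base);
}
// influence 0 -> unmodified base position; influence 1 -> fully morphed.
```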