I'm trying to build some geometry myself rather than using a three.js primitive. I've added vertices and faces, and I've checked that none of the face indices exceed the vertex count.
So geo.vertices is an array of THREE.Vector3.
Then I tried adding UVs like this:
geo.faceVertexUVs = [];
for ( i = 0; i < numVertex; i++ ) {
    // (calc u, v here)
    geo.faceVertexUVs.push( new THREE.Vector2( u, v ) );
}
geo.verticesNeedUpdate = true;
geo.uvsNeedUpdate = true;
Then I'm building each face & normal like this:
geo.faces.push( new THREE.Face3(
    i0, i1, i2,
    new THREE.Vector3( nx, ny, nz ),
    clr, 0
));
// then create a mesh
mesh = new THREE.Mesh( geo, new THREE.MeshLambertMaterial({
    shading: THREE.FlatShading, color: 0xFFFFFF, map: tex
}));
geo.buffersNeedUpdate = true;
geo.uvsNeedUpdate = true;
scene.add(mesh);
Then when I try to render I get the error "attempt to access out of range vertices in attribute 2". Which attribute is #2?
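For reference (general three.js knowledge, not stated in the question): the attribute numbering comes from the compiled shader, but with a textured Lambert material attribute 2 is usually the uv attribute, so the error most likely points at the UV data. In the legacy Geometry class the property is geo.faceVertexUvs (note the casing), and it is indexed per face rather than per vertex: faceVertexUvs[0] holds one entry per Face3, each an array of three THREE.Vector2. A minimal sketch of that layout, where uvFor() is a hypothetical helper returning the UV for a vertex index:

geo.faceVertexUvs[0] = [];
for ( i = 0; i < geo.faces.length; i++ ) {
    var f = geo.faces[i];
    // one array of three UVs per face, matching vertices a, b, c
    geo.faceVertexUvs[0].push([ uvFor( f.a ), uvFor( f.b ), uvFor( f.c ) ]);
}
geo.uvsNeedUpdate = true;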
I am working on an arcade style Everest Flight Simulator.
In the build I'm debugging, I have Terrain and Helicopter classes which generate the BufferGeometry terrain mesh, the Groups for the helipad geometries, and the Group holding the helicopter camera and geometry.
My issue is that currently I can't get any collision to register. I suspect my approach may not support BufferGeometry, which is a problem because the terrain has to be a BufferGeometry: it's far too expansive, and as a standard Geometry it causes a memory crash in the browser.
However, even testing the helipad geometries alone, nothing triggers. They are in a Group, so I add the Groups to a global array and make the collision check recursive, but to no avail.
Ultimately, I am open to other forms of collision detection, and may need two kinds since I have to use BufferGeometries. Any ideas on how to fix this, or a better solution?
The Helicopter Object Itself
// Rect to Simulate Helicopter
const geometry = new THREE.BoxGeometry( 2, 1, 4 ),
      material = new THREE.MeshBasicMaterial(),
      rect = new THREE.Mesh( geometry, material );
rect.position.x = 0;
rect.position.y = terrain.returnCameraStartPosY();
rect.position.z = 0;
rect.rotation.order = "YXZ";
rect.name = "heli";

// Link Camera and Helicopter
const heliCam = new THREE.Group(),
      player = new Helicopter(heliCam, "OH-58 Kiowa", 14000);
heliCam.add(camera);
heliCam.add(rect);
heliCam.position.set( 0, 2040, -2000 );
heliCam.name = "heliCam";
scene.add(heliCam);
Adding Objects to Global Collision Array
// Add Terrain
const terrain = new Terrain.ProceduralTerrain(),
      terrainObj = terrain.returnTerrainObj(),
      helipadEnd = new Terrain.Helipad( 0, 1200, -3600, "Finish", true ),
      helipadStart = new Terrain.Helipad( 0, 2000, -2000, "Start", false ),
      helipadObjStart = helipadStart.returnHelipadObj(),
      helipadObjEnd = helipadEnd.returnHelipadObj();
window.collidableMeshList.push(terrainObj);
window.collidableMeshList.push(helipadObjStart);
window.collidableMeshList.push(helipadObjEnd);
Collision Detection Function Run Every Frame
collisionDetection(){
    const playerOrigin = this.heli.children[1].clone(); // Get Box Mesh from Player Group
    for (let i = playerOrigin.geometry.vertices.length - 1; i >= 0; i--) {
        const localVertex = playerOrigin.geometry.vertices[i].clone(),
              globalVertex = localVertex.applyMatrix4( playerOrigin.matrix ),
              directionVector = globalVertex.sub( playerOrigin.position ),
              ray = new THREE.Raycaster( playerOrigin, directionVector.clone().normalize() ),
              collisionResults = ray.intersectObjects( window.collidableMeshList, true ); // Recursive flag to check children
        if ( collisionResults.length > 0 ){
            this.landed = true;
            console.log("Collision");
        }
        // if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() ){
        //     this.landed = true;
        //     console.log("Collision with vectorLength")
        // }
    }
}
It's hard to tell what's going on inside your custom classes, but it looks like you're passing an Object3D (this.heli.children[1].clone()) as the first argument of the Raycaster, where it expects a Vector3. Why don't you try something like:
var raycaster = new THREE.Raycaster();
var origin = this.heli.children[1].position;
raycaster.set(origin, direction);
Also, are you sure you're using a BufferGeometry? Accessing a vertex like playerOrigin.geometry.vertices[i] should throw an error in that case: a BufferGeometry has no vertices property, so I don't know how you're determining the direction vector.
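If it really is a BufferGeometry, the per-vertex positions live in geometry.attributes.position instead. A minimal sketch of the same loop against that attribute (untested; it assumes a recent three.js with Vector3.fromBufferAttribute, and that the mesh's world matrix is up to date via updateMatrixWorld()):

const pos = playerOrigin.geometry.attributes.position,
      raycaster = new THREE.Raycaster(),
      origin = playerOrigin.position;
for (let i = 0; i < pos.count; i++) {
    // Read vertex i, move it to world space, and cast a ray through it
    const localVertex = new THREE.Vector3().fromBufferAttribute(pos, i),
          globalVertex = localVertex.applyMatrix4(playerOrigin.matrixWorld),
          direction = globalVertex.sub(origin);
    raycaster.set(origin, direction.clone().normalize());
    const hits = raycaster.intersectObjects(window.collidableMeshList, true);
    if (hits.length > 0 && hits[0].distance < direction.length()) {
        this.landed = true; // the surface is closer than the box corner
    }
}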
I'm trying to create a grid using the following code. The reason I want the grid built this way is so each face can be colored, and the color changed when I pass in x,y coordinates. But I get a "draw array attempt to get access out of bound arrays" error.
for (var i = 0, j = 0, k = -halfSize; i <= divisions; i++, k += step) {
    vertices.push(-halfSize, 0, k, halfSize, 0, k);
    vertices.push(k, 0, -halfSize, k, 0, halfSize);
    var colorg = new THREE.Color("rgb(255, 0, 0)");
    colorg.toArray( colors, j ); j += 3;
    colorg.toArray( colors, j ); j += 3;
    colorg.toArray( colors, j ); j += 3;
    colorg.toArray( colors, j ); j += 3;
}
var vertices32 = new Float32Array(vertices);
var colors32 = new Float32Array(colors);
geometry.addAttribute('position', new THREE.BufferAttribute(vertices32, 3 ));
geometry.addAttribute('normal', new THREE.BufferAttribute(normals, 3));
geometry.addAttribute('color', new THREE.BufferAttribute(colors32, 3));
//fgeometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
// optional
geometry.computeBoundingBox();
geometry.computeBoundingSphere();
// set the normals
geometry.computeVertexNormals(); // computed vertex normals are orthogonal to the face for non-indexed BufferGeometry
// material
var material = new THREE.MeshPhongMaterial({
    color: 0xffffff,
    shading: THREE.FlatShading,
    vertexColors: THREE.VertexColors,
    side: THREE.DoubleSide
});
// mesh
mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);
Have you tried BufferGeometry.toNonIndexed? That way you can create a normal PlaneBufferGeometry and turn it into a non-indexed geometry, where each face has its own unique vertices.
let geometry = new THREE.PlaneBufferGeometry();
geometry = geometry.toNonIndexed();
After that, you just add the color attribute to the geometry. Full example:
https://jsfiddle.net/f2Lommf5/4230/
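To illustrate the idea, a minimal sketch (the size, segment counts, and colors are placeholders, not taken from the fiddle): after toNonIndexed(), every triangle owns three consecutive vertices, so writing the same color to all three colors that face flat.

let geometry = new THREE.PlaneBufferGeometry(10, 10, 4, 4).toNonIndexed();
const position = geometry.attributes.position,
      faceColors = new Float32Array(position.count * 3),
      color = new THREE.Color();
for (let i = 0; i < position.count; i += 3) {
    color.setHSL(Math.random(), 1.0, 0.5); // placeholder per-face color
    color.toArray(faceColors, i * 3);
    color.toArray(faceColors, (i + 1) * 3);
    color.toArray(faceColors, (i + 2) * 3);
}
geometry.addAttribute('color', new THREE.BufferAttribute(faceColors, 3));
const material = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors, side: THREE.DoubleSide });
scene.add(new THREE.Mesh(geometry, material));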
I'm having a strange problem with raycasting. My scene consists of a room with a couple of components that you can move around inside that room. When a component is moving, I measure the distances to the walls, an invisible roof, and the floor. The problem is that the roof, a ShapeGeometry, is visible where it should be (at the top of the walls) but is not hit when raycasting.
Here's where I create the mesh for the invisible roof:
const roofShape = new THREE.Shape();
roofShape.moveTo(roofPoints[0].x, roofPoints[0].y);
for (let i = 1; i < roofPoints.length; i++) {
    roofShape.lineTo(roofPoints[i].x, roofPoints[i].y);
}
roofShape.lineTo(roofPoints[0].x, roofPoints[0].y);
const geometry = new THREE.ShapeGeometry(roofShape);
const material = new THREE.MeshBasicMaterial({color: 0x000000, side: THREE.DoubleSide});
material.opacity = 0;
material.transparent = true;
const mesh = new THREE.Mesh(geometry, material);
mesh.position.x = 0;
mesh.position.y = 0;
mesh.position.z = room._height;
mesh.name = "ROOF";
mesh.userData = <Object3DUserData> {
    id: IntersectType.INVISIBLE_ROOF,
    intersectType: IntersectType.INVISIBLE_ROOF,
};
Here's the function that invokes the raycasting. The direction vector is (0, 0, 1) in this case, and the surfaces parameter is an array containing only the mesh created above:
function getDistanceToSurface(componentPosition: THREE.Vector3, surfaces: THREE.Object3D[], direction: THREE.Vector3): number {
    const rayCaster = new THREE.Raycaster(componentPosition, direction.normalize());
    const intersections = rayCaster.intersectObjects(surfaces);
    if (!intersections || !intersections.length) {
        return 0;
    }
    return intersections[0].distance;
}
By changing the z direction to -1, I found that the raycaster hits the roof at z = 0; the geometry seems to still be at z = 0. I then tried translating the geometry itself:
geometry.translate(0, 0, room._height);
And now the raycaster finds it where I expect it to be, but visually it sits at double the z position (with mesh opacity set to 1). Setting the mesh's z position back to 0 makes it visually correct, and the raycasting still works.
I've been looking at the raycasting examples but can't find anywhere that a ShapeGeometry needs this treatment.
Am I doing something wrong? Have I missed something? Do I have to set the z position on the geometry; is positioning the mesh not enough?
As hinted in the comment by @radio, the solution was the one described in How to update vertices geometry after rotate or move object:
mesh.position.z = room._height;
mesh.updateMatrix();
mesh.geometry.applyMatrix(mesh.matrix);
mesh.matrix.identity();
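One caveat worth adding (general three.js behavior, not part of the quoted answer): matrixAutoUpdate is true by default, so the matrix is recomposed from position/rotation/scale on every render and the baked-in translation would otherwise be applied twice. Resetting the position after baking avoids that:

mesh.position.set(0, 0, 0); // the translation now lives in the geometry itself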
I'm generating a random plane that animates movement in the vertices to give a crystalline effect. When I use regular PlaneGeometry, shading is not a problem: http://codepen.io/shshaw/pen/GJppEX
However, I tried to switch to PlaneBufferGeometry to see if I could get better performance, but the shading disappeared.
http://codepen.io/shshaw/pen/oXjyJL?editors=001
var planeGeometry = new THREE.PlaneBufferGeometry(opts.planeSize, opts.planeSize, opts.planeDefinition, opts.planeDefinition),
    planeMaterial = new THREE.MeshLambertMaterial({
        color: 0x555555,
        emissive: 0xdddddd,
        shading: THREE.NoShading
    }),
    plane = new THREE.Mesh(planeGeometry, planeMaterial),
    defaultVertices = planeGeometry.attributes.position.clone().array;

function randomVertices() {
    var vertices = planeGeometry.attributes.position.clone().array;
    for (var i = 0; i < vertices.length; i += 3) {
        // x
        vertices[i] = defaultVertices[i] + rand(-opts.variance.x, opts.variance.x);
        // y
        vertices[i + 1] = defaultVertices[i + 1] + rand(-opts.variance.y, opts.variance.y);
        // z
        vertices[i + 2] = rand(-opts.variance.z, opts.variance.z);
    }
    return vertices;
}
plane.geometry.attributes.position.array = randomVertices();
As I saw suggested in this answer to 'Shading on a plane', I tried:
plane.geometry.computeVertexNormals();
On render, I've tried setting all of the following on the geometry to make sure the normals and vertices update, as I did in the working PlaneGeometry example:
plane.geometry.verticesNeedUpdate = true;
plane.geometry.normalsNeedUpdate = true;
plane.geometry.computeVertexNormals();
plane.geometry.computeFaceNormals();
plane.geometry.normalizeNormals();
What has happened to the shading? Can I bring it back on a PlaneBufferGeometry mesh, or do I need to stick with PlaneGeometry?
Thanks!
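For what it's worth (general three.js behavior, not something from the pens above): verticesNeedUpdate and normalsNeedUpdate are flags on the legacy Geometry class, and a BufferGeometry ignores them. The equivalent is setting needsUpdate on the attribute itself after mutating its array, something like:

var position = plane.geometry.attributes.position;
position.array = randomVertices();
position.needsUpdate = true; // re-upload the position buffer to the GPU
plane.geometry.computeVertexNormals(); // rebuild the normal attribute for lighting
plane.geometry.attributes.normal.needsUpdate = true;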
I'm attempting to create a large terrain in three.js, using physi.js as the physics engine.
Generating the geometry from the heightmap is no problem so far. When I add it as a THREE.Mesh it works beautifully, but when I try adding it as a Physijs.HeightfieldMesh I get the following error:
TypeError: geometry.vertices[(a + (((this._physijs.ypts - b) - 1) * this._physijs.ypts))] is undefined
The geometry is generated as a plane, then the Z position of each vertex gets modified according to the heightmap.
var geometry = new THREE.PlaneGeometry( img.naturalWidth, img.naturalHeight, img.naturalWidth - 1, img.naturalHeight - 1 );
var material = new THREE.MeshLambertMaterial( { color: 0x0F0F0F } );

// set height of vertices
for ( var i = 0; i < plane.geometry.vertices.length; i++ ) {
    plane.geometry.vertices[i].z = data[i]; // let's just assume the data is correct since it works as a THREE.Mesh
}

var terrain = new THREE.Mesh(geometry, material); // works

// does not work
var terrain = new Physijs.HeightfieldMesh(
    geometry,
    material,
    0
);
I think your problem is that you are using "plane.geometry" instead of just "geometry" in the loop that sets the vertex heights.
Maybe it should be:
// set height of vertices
for ( var i = 0; i < geometry.vertices.length; i++ ) {
    geometry.vertices[i].z = data[i]; // let's just assume the data is correct since it works as a THREE.Mesh
}
This fiddle that I created seems to work ok.
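Putting it together, a sketch of the corrected flow, reusing the constructor call from the question (the 0 mass makes the heightfield static; this assumes data holds one height per vertex and the usual Physijs scene setup):

var geometry = new THREE.PlaneGeometry( img.naturalWidth, img.naturalHeight, img.naturalWidth - 1, img.naturalHeight - 1 );
var material = new THREE.MeshLambertMaterial( { color: 0x0F0F0F } );
// displace the freshly created geometry, not a stale plane reference
for ( var i = 0; i < geometry.vertices.length; i++ ) {
    geometry.vertices[i].z = data[i];
}
var terrain = new Physijs.HeightfieldMesh( geometry, material, 0 );
scene.add( terrain );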