I want to shade a THREE.BoxBufferGeometry using a simple THREE.MeshLambertMaterial. The material is supposed to use a Lambert illumination model to pick the colors for each vertex (and it does), and then use Gouraud shading to produce smooth gradients on each face.
The Gouraud part is not happening. Instead, the cube's faces are each shaded with one single, solid color.
I have tried various other BufferGeometries and gotten inconsistent results.
For example, if instead I make an IcosahedronBufferGeometry, I get the same problem: each face is one single, solid color.
geometry = new THREE.IcosahedronBufferGeometry(2, 0); // no Gouraud shading.
geometry = new THREE.IcosahedronBufferGeometry(2, 2); // no Gouraud shading.
On the other hand, if I make a SphereBufferGeometry, the Gouraud is present.
geometry = new THREE.SphereBufferGeometry(2, 3, 2); // yes Gouraud shading.
geometry = new THREE.SphereBufferGeometry(2, 16, 16); // yes Gouraud shading.
But then if I make a cube using a PolyhedronBufferGeometry, the Gouraud shading doesn't appear unless I set the detail to something other than 0.
const verticesOfCube = [
-1,-1,-1, 1,-1,-1, 1, 1,-1, -1, 1,-1,
-1,-1, 1, 1,-1, 1, 1, 1, 1, -1, 1, 1,
];
const indicesOfFaces = [
2,1,0, 0,3,2,
0,4,7, 7,3,0,
0,1,5, 5,4,0,
1,2,6, 6,5,1,
2,3,7, 7,6,2,
4,5,6, 6,7,4
];
geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 0); // no Gouraud shading
geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 1); // yes Gouraud shading
I am aware of the existence of the BufferGeometry methods computeFaceNormals() and computeVertexNormals(). Normals are emphatically important here, as they are used to determine the colors for each face and vertex, respectively. But while they help with the Icosahedron, they have no effect on the Box, regardless of whether I call neither of them, only one, or both in either order.
Here is the code I expect to work:
const geometry = new THREE.BoxBufferGeometry(2, 2, 2);
geometry.computeFaceNormals();
geometry.computeVertexNormals();
const material = new THREE.MeshLambertMaterial({
    color: 0xBE6E37
});
const mesh = new THREE.Mesh(geometry, material);
I should be getting a cube whose faces (the real, triangular ones) are shaded with a gradient. First, the face normals should be computed, and then the vertex normals by averaging the normals of the faces each vertex belongs to. Here is a triangular bipyramid on which correct Gouraud shading is being applied:
But the code above produces this instead:
At no point does three.js log any errors or warnings to the console.
So what is it that's going on here? The only explanation I can think of is that the Box actually consists of 24 vertices, three at each corner of the cube, and that they form faces such that each vertex's computed normal is an average of at most two faces pointing in the same direction. But I can't find that written down anywhere, and that explanation doesn't fly for the Polyhedron, where the vertices and faces were explicitly specified in code.
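A quick console probe (my own check, not something from the docs) at least seems consistent with the 24-vertex part of that theory:
const probe = new THREE.BoxBufferGeometry(2, 2, 2);
console.log(probe.attributes.position.count); // 24: four vertices per face, none shared between faces
console.log(probe.index.count); // 36: 12 triangles * 3 corners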
Related
I want to check whether a point is inside a mesh or not. To do so, I use a raycaster, set its origin to the point, and count the intersections: if the ray intersects the mesh an odd number of times (for a convex box: exactly once), the point must be inside. Unfortunately, intersectObject always returns no intersections, even in cases where I know the point is located inside the mesh.
The point's origin is given in world coordinates and the mesh's matrixWorld is up to date too. Also, I set the mesh.material.side to THREE.DoubleSide, so that the intersection from inside should be detected. I tried setting the recursive attribute to true as well, but as expected, this didn't have any effect (since the mesh is a box geometry). The mesh is coming from the Autodesk Forge viewer interface.
Here is my code:
mesh.material.side = THREE.DoubleSide; // back faces must be hittable for a ray cast from inside

// read the test point from the flat positions array and move it into world space
const raycaster = new THREE.Raycaster();
let vertex = new THREE.Vector3();
vertex.fromArray(positions, positionIndex);
vertex.applyMatrix4(matrixWorld);

// cast in an arbitrary fixed direction and collect the crossings
const rayDirection = new THREE.Vector3(1, 1, 1).normalize();
raycaster.set(vertex, rayDirection);
const intersects = raycaster.intersectObject(mesh);

// an odd number of intersections means the point lies inside the mesh
if (intersects.length % 2 === 1) {
    isPointInside = true;
}
The vertex looks like this (and it obviously lies inside the bounding box):
The mesh is a box shaped room with the following bounding box:
The mesh looks like this:
The geometry of the mesh holds the vertices in its vb (vertex buffer) array. After applying the world matrix, the mesh vertices are correct in world space. Here is a part of the vb list:
Why does the raycaster not return any intersection? Is the matrixWorld of the mesh taken into account when computing the intersections?
Thanks for any kind of help!
Note that Forge Viewer is based on three.js version R71, and it had to modify/reimplement some parts of the library to handle large and complex models (especially architecture and infrastructure designs), so THREE.Mesh objects might have a slightly different structure. In that case I'd suggest raycasting using Forge Viewer's own mechanisms, e.g., viewer.impl.rayIntersect(ray, ignoreTransparent, dbIds, modelIds, intersections);.
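For illustration, a minimal sketch of that approach; the parameter list follows the call above, but the exact signature can vary between viewer versions, so treat this as an assumption rather than a drop-in snippet:
// build a world-space ray from the point being tested
const origin = new THREE.Vector3().fromArray(positions, positionIndex).applyMatrix4(matrixWorld);
const direction = new THREE.Vector3(1, 1, 1).normalize();
const ray = new THREE.Ray(origin, direction);

// let the viewer run the intersection test against its own scene structures
const intersections = [];
viewer.impl.rayIntersect(ray, false /* ignoreTransparent */, null /* dbIds */, null /* modelIds */, intersections);
console.log('hits:', intersections.length);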
Is there a way in three.js to create a poly from multiple individual elements, rectangles for example?
I have attached an example.
I am using:
var geometry = new THREE.PlaneGeometry(2, 4); // one rectangular panel, reused by every mesh (size assumed)
for (var i = 0; i < 5; i++) {
    var material = new THREE.MeshBasicMaterial({
        color: Math.random() * 0xffffff, // the original "#ff" + i + rand string was not a valid hex color
        side: THREE.DoubleSide,
        transparent: true,
        opacity: 1
    });
    var mesh = new THREE.Mesh(geometry, material);
    if (angle) mesh.rotation.y = angle; // angle computed elsewhere
    mesh.position.set(i + 1, 4, 4); // "loop" was undefined; the loop counter is what was meant
    scene.add(mesh);
}
When I apply the rotation mesh.rotation.y = angle; it doesn't come out as my design below; I get a cross (+) instead, because each panel rotates around its y axis at its center, not at its corner...
Thank you
There are 3 ways to achieve what you're trying to do. The problem you are facing stems from the transform origin; as you noted, the origin defaults to position [0,0,0]. So, your options are:
build a transform matrix using a different transform offset for rotation; this is probably overkill for simple use-cases.
translate the geometry so it is not centered on [0,0,0]; for example, you can move the whole quad (your geometry) right so that its left edge aligns with [0,0,0], and then, when you rotate, the left edge will stay put (see the sketch after this list).
embed the Mesh inside a Group, rotate the Mesh, and translate (position.set(...)) the Group.
No matter which route you take, you will still have to deal with some trigonometry, as you will need to compute the position of the next segment so that it aligns with the edge of the previous one.
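Here is a minimal sketch of option 2; the 2x4 quad size and the material are assumptions, and geometry.applyMatrix is the API of this three.js era:
var geometry = new THREE.PlaneGeometry(2, 4);
// shift all vertices right by half the width, so the left edge sits at x = 0
geometry.applyMatrix(new THREE.Matrix4().makeTranslation(1, 0, 0));
var mesh = new THREE.Mesh(geometry, material);
mesh.rotation.y = Math.PI / 4; // now pivots around the left edge instead of the center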
One more way around that is to build the following type of structure:
Group[
Mesh 1,
Mesh 2,
Mesh 3,
Group [
Mesh 4,
Mesh 5,
Mesh 6,
Group [
Mesh 7
]
]
]
The last group is unnecessary; it's there purely for consistency.
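A sketch of how that nesting could be built in code; segmentSize, angles, and the shared segmentGeometry/segmentMaterial are assumptions, not from the post. With this structure each segment only needs a local rotation, since every pivot inherits the previous segment's transform:
var root = new THREE.Group();
var parent = root;
for (var i = 0; i < angles.length; i++) {
    var pivot = new THREE.Group();
    pivot.position.x = (i === 0) ? 0 : segmentSize; // sit on the far edge of the previous segment
    pivot.rotation.y = angles[i]; // bend at that edge
    var segment = new THREE.Mesh(segmentGeometry, segmentMaterial);
    segment.position.x = segmentSize / 2; // the quad extends to the right of its pivot
    pivot.add(segment);
    parent.add(pivot);
    parent = pivot; // the next pivot inherits this segment's accumulated transform
}
scene.add(root);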
As far as the trigonometry that I mentioned goes, it's basic sin and cos stuff, so it should be quite manageable. Here is some pseudo-code that you'll need:
prevPosition, prevAngle // position and angle of the previous segment

// compute the next segment's transform
nextPosition.x = Math.cos(prevAngle) * segmentSize + prevPosition.x;
nextPosition.z = Math.sin(prevAngle) * segmentSize + prevPosition.z;
In the documentation for THREE.BufferGeometry, the following is written:
normal (itemSize: 3)
Stores the x, y, and z components of the face or vertex normal vector of each vertex in this geometry. Set by .fromGeometry().
When does this attribute hold vertex normals, and when face normals?
Is it as simple as: when a THREE.MeshMaterial is used the normals are interpreted as face normals, and when a THREE.LineMaterial is used they are treated as vertex normals? Or is it more complicated than that?
I also understood that THREE.FlatShading can be used for rendering a mesh with flat shading (face normals point straight outward).
geometry = new THREE.BoxGeometry(1000, 1000, 1000);
material = new THREE.MeshPhongMaterial({
    color: 0xff0000,
    shading: THREE.FlatShading
});
mesh = new THREE.Mesh(geometry, material);
In that case, I would say normals are not necessary any more. Why do my buffer geometries made from, for example, a THREE.BoxGeometry still hold a normal attribute? Is this information still used for rendering, or would removing it from the buffer geometry be a possible optimization?
BufferGeometry normals are vertex normals; the shader interpolates the normal value for each fragment from the vertices belonging to that face (in most cases a triangle).
When you convert a THREE.BoxGeometry, which has normals computed by default, they stay set up in the BufferGeometry conversion output, as the geometry does not have any way to "know" whether you need normals or any other attribute (the material's shader program decides which attributes are actually used).
You can remove the normals with geometry.removeAttribute("normal").
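A minimal sketch of that optimization, assuming a three.js build where THREE.FlatShading makes MeshPhongMaterial derive face normals per fragment, so the attribute goes unused:
var geometry = new THREE.BufferGeometry().fromGeometry(new THREE.BoxGeometry(1000, 1000, 1000));
geometry.removeAttribute('normal'); // drop the unused vertex normals

var material = new THREE.MeshPhongMaterial({
    color: 0xff0000,
    shading: THREE.FlatShading // face normals are computed in the shader instead
});
var mesh = new THREE.Mesh(geometry, material);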
According to various posts on the three.js GitHub, MeshFaceMaterial will be deprecated eventually.
I currently use it for my terrain. Granted, it's not the best way to do it; actually, it's pretty crappy. For one, I cannot use BufferGeometry, which is not good considering I generally have 2 layers of 128x128 (segmented) planes for terrain. Very high memory usage.
I've adapted all my code to allow the terrain to be a BufferGeometry, except two things don't work: MeshFaceMaterial and BufferGeometry.merge(). The merge doesn't work on indexed geometry, which to me is weird considering THREE creates this geometry itself, yet it can merge non-indexed geometry from Blender. It cannot merge geometry it creates itself but can merge geometry from external sources... oh well, that's another post; back to MeshFaceMaterial.
I currently use a 128x128 "MaterialMap", where each pixel represents a materialIndex for a face of the plane. This has two serious drawbacks: squared-up sections of terrain (no curves) and harsh transitions at the borders of textures.
My question is: how can I generate this terrain with multiple textures without using MeshFaceMaterial? The highest-res texture I have is 2048x2048, and a zone can easily be 10000x10000, making repeats necessary (right?).
Ultimately my goal is to use BufferGeometry and get rid of MeshFaceMaterial.
MaterialMap example:
Terrain Example (terribly cropped sorry {work pc}):
You helped me out a while back via email with advice on culling meshes, so I would like to return the favor (with my humble strategy) :)
If you want to use THREE.PlaneBufferGeometry (which, as you know, is where all geometry in THREE.js is soon headed), then my advice would be to layer different PlaneBufferGeometries right on top of each other. For instance, in the example picture above, you could have:
var stoneFloorGeometry = new THREE.PlaneBufferGeometry(arenaWidth, arenaHeight, 1, 1);
var stoneFloorMaterial = new THREE.MeshBasicMaterial({
    depthWrite: false, // this is always underneath every other object
    map: stoneFloorTexture
});
var stoneFloor = new THREE.Mesh(stoneFloorGeometry, stoneFloorMaterial);
stoneFloor.rotation.x = Math.PI / -2; // rotate to be flat in the X-Z plane
stoneFloor.position.set(0, 0, 0);
scene.add(stoneFloor);

// now add the grass plane right on top of that with its own texture and shape
var grassGeometry = new THREE.PlaneBufferGeometry(lawnWidth, lawnHeight, 1, 1);
var grassMaterial = new THREE.MeshBasicMaterial({
    depthWrite: false, // this is rendered right on top of the stone floor
    map: grassTexture
});
var grass = new THREE.Mesh(grassGeometry, grassMaterial);
grass.rotation.x = Math.PI / -2;
grass.position.set(0, 0, 0);
scene.add(grass);

// finally add the stone path walkway on top of the grass, leading to the castle
var walkwayGeometry = new THREE.PlaneBufferGeometry(walkwayWidth, walkwayHeight, 1, 1);
var walkwayMaterial = new THREE.MeshBasicMaterial({
    depthWrite: false, // this is rendered right on top of the grass
    map: stoneFloorTexture // uses same texture as large stoneFloor before
});
var walkway = new THREE.Mesh(walkwayGeometry, walkwayMaterial);
walkway.rotation.x = Math.PI / -2;
walkway.position.set(0, 0, 0);
scene.add(walkway);
As long as you layer the level from bottom to top and disable depthWrite, all the various textures will correctly show up on top of each other, and none will Z-fight. So, stoneFloor is added to the scene first, followed by grass, followed by walkway.
And since depthTest is still active, your moving game characters will render on top of all these various textures. Initially it looked like it also worked by just disabling depthTest, but the textures ended up rendering over ('above') the characters/models, which is incorrect.
Eventually when THREE.js moves ShapeGeometry over to BufferGeometry, it would be nice to define an arbitrary polygonal shape (like an octagon or something) and then texture map that and lay down shapes on top of each other for the game level in a similar manner, thus avoiding the 'square' problem you mentioned.
As for this current solution, on the modern CPU/GPU I don't think you will see much performance cost in creating 3 PlaneBufferGeometries instead of 1 large one with multiple faces/indexes. This way you have the advantages of using THREE's BufferGeometry while still having everything 'looking' like it is all texture-mapped to one large plane.
Hope this helps!
-Erich (erichlof on GitHub)
I have an animation in which some of the vertices move. When they do, the lighting stops working correctly. For the lighting to be correct, I need to change face.vertexNormals. At first I thought it would be enough to call
geometry.computeVertexNormals();
but it turned out that this doesn't quite do it.
How can I list the vertex normals belonging to the specified vertices?
OR
How can I list the faces containing the specified vertices?
example here
Here is an example. But I need not only to see them; I need to list and change them in code.
if (d < 50) {
    var dist = 15 * Math.cos(d / 20 - t);
    geometry.vertices[i].z = dist;
}
How can I list the vertex normals for these vertices?
Your plane is upside-down, which causes your vertex normals to point downward.
Set plane.rotation.x = -Math.PI/2;.
To see the normals, add
vnh = new THREE.VertexNormalsHelper( plane, 20, 0xff0000, 2 );
scene.add( vnh );
to your init() function, and in your animation loop call:
vnh.update();
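And as for listing the faces and vertex normals of a given vertex, here is a minimal sketch, assuming the classic THREE.Geometry of this era, where each face stores its corner indices in a, b, c and its per-corner normals in vertexNormals:
for (var f = 0; f < geometry.faces.length; f++) {
    var face = geometry.faces[f];
    [face.a, face.b, face.c].forEach(function (vertexIndex, corner) {
        if (vertexIndex === i) { // i is the index of the vertex you moved
            console.log('face', f, 'corner', corner, face.vertexNormals[corner]);
        }
    });
}
geometry.normalsNeedUpdate = true; // flag this after editing normals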
three.js r.68