Drawing an equilateral triangle in the middle - three.js

I want to draw a triangle in the middle of the scene, so I pushed the vertices like this, and it worked:
geom.vertices.push(new THREE.Vector3(-1, 0, 0));
geom.vertices.push(new THREE.Vector3(1, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
My question is: if I reorder the push calls, the triangle is no longer rendered, and I don't understand why. For instance:
geom.vertices.push(new THREE.Vector3(1, 0, 0));
geom.vertices.push(new THREE.Vector3(-1, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
Why is that happening? I can't find anything in the documentation about this.

In 3D, each face has a normal attribute, which three.js computes when you call geometry.computeFaceNormals(). If a face's normal points toward the camera, the face is drawn; otherwise it is not. How do you decide which side of a face is the front side? It depends on the order of the vertices: if you look at a face and the indices of its 3 vertices grow counter-clockwise, you are looking at the front side; otherwise you are looking at the back side. This is called the winding order.
I guess you did not change the indices in the face declaration, new THREE.Face3(0, 1, 2), but if you change the order of the vertices in the array, the effect is of course the same.
So you are now looking at your face from the back side. You can see this by changing the camera's point of view (change the order and move the camera to the back in this fiddle).
If you don't want to worry about the winding order and want to be able to see the faces from any point of view, three.js offers that in the material parameters: just add
side:THREE.DoubleSide //default is THREE.FrontSide, there also is THREE.BackSide
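For illustration, here is a minimal sketch of the reordered triangle made visible with DoubleSide. It uses the legacy Geometry/Face3 API from the question (removed in current three.js releases), and it assumes a scene, camera, and renderer already exist elsewhere:
const geom = new THREE.Geometry();
geom.vertices.push(new THREE.Vector3(1, 0, 0)); // reordered, so the face is now clockwise as seen from +z
geom.vertices.push(new THREE.Vector3(-1, 0, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
geom.faces.push(new THREE.Face3(0, 1, 2));
geom.computeFaceNormals();
const material = new THREE.MeshBasicMaterial({
  color: 0xff0000,
  side: THREE.DoubleSide // draw both sides, so the winding order no longer hides the face
});
scene.add(new THREE.Mesh(geom, material));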

THREEjs create an intersection plane for a raycast with negative origin

I have a THREEJS scene with an object that 'looks at my mouse'. This works fine and I am using a raycast to get the mouse position like so:
this.intersectionPlane = new THREE.Plane(new THREE.Vector3(0, 0, 1), 10);
this.raycaster = new THREE.Raycaster();
this.mouse = new THREE.Vector2();
this.pointOfIntersection = new THREE.Vector3();
On the mouse-move event I lookAt the pointOfIntersection vector and the object rotates. This works really well.
onDocumentMouseMove = (event) => {
  event.preventDefault();
  this.mouse.x = (event.clientX / window.innerWidth) * 2 - 1;
  this.mouse.y = -(event.clientY / window.innerHeight) * 2 + 1;
  this.raycaster.setFromCamera(this.mouse, this.camera);
  this.raycaster.ray.intersectPlane(this.intersectionPlane, this.pointOfIntersection);
  let v3 = new THREE.Vector3(this.pointOfIntersection.x * 0.05, this.pointOfIntersection.y * 0.05, this.pointOfIntersection.z);
  if (this.pebbleLogo) {
    this.pebbleLogo.lookAt(v3);
    // console.log(v3);
  }
  if (this.videoWall) {
    this.videoWall.lookAt(v3);
  }
}
BUT, I want to do the same thing with another object that lives at a z-depth of -20, and the camera flies through to that position. At that point it also flies through the intersectionPlane, and the raycast no longer works.
The intersectionPlane is not added to the scene, so it doesn't have a position I can move. How do I make sure it stays with the camera?
I can see that the plane has two properties:
normal - (optional) a unit length Vector3 defining the normal of the plane. Default is (1, 0, 0).
constant - (optional) the signed distance from the origin to the plane. Default is 0.
I have been able to move the Plane using a translate, but this is not ideal, as I need the plane to keep a constant position relative to the camera (just in front of it). I tried making the plane a child of the camera, but it didn't seem to make any difference to its position.
Any help appreciated.
When you call renderer.render(scene, cam), the engine updates the transformation matrices of all objects that need to be rendered. However, since your camera and plane are not descendants of the scene, you'll have to update these matrices manually. The plane doesn't know that its parent camera has moved, so you might need to call plane.updateMatrix(). You can read about manually updating transformation matrices in the docs.
I think that since only the parent moves, you might need updateMatrixWorld() or updateWorldMatrix() instead, but one of these three options should work.
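For reference, a minimal sketch of the case where the plane is a Mesh parented to the camera (the planeMesh name and sizes are illustrative, not from the question; you would then raycast against it with raycaster.intersectObject rather than ray.intersectPlane):
const planeMesh = new THREE.Mesh(
  new THREE.PlaneGeometry(100, 100),
  new THREE.MeshBasicMaterial({ visible: false }) // invisible hit surface
);
planeMesh.position.set(0, 0, -10); // fixed offset in front of the camera
camera.add(planeMesh);
camera.updateMatrixWorld(true); // refresh world matrices by hand, since the camera is not part of the scene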
Edit
Upon re-reading your code, it looks like you're using a purely mathematical THREE.Plane object. This is not an Object3D, which means it cannot be added as a child of anything, so it doesn't behave like a regular object.
My answer assumed you were using a Mesh with PlaneGeometry, which is an Object3D, and it can be added as a child of the camera.
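If you keep the purely mathematical THREE.Plane, another option is to rebuild it from the camera's pose whenever the camera moves. A sketch, assuming a fixed offset of 10 units in front of the camera (planeDistance and updateIntersectionPlane are illustrative names):
const planeDistance = 10; // assumed offset in front of the camera
function updateIntersectionPlane(camera, plane) {
  const normal = new THREE.Vector3();
  camera.getWorldDirection(normal); // unit vector along the viewing direction
  const point = camera.position.clone().addScaledVector(normal, planeDistance); // a point on the plane
  plane.setFromNormalAndCoplanarPoint(normal, point); // reposition and reorient the plane
}
Calling this each frame (or at the top of the mouse-move handler) keeps the plane a constant distance in front of the camera, so the raycast keeps hitting it even as the camera flies forward.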

Using Physijs to add physics effects to a three.js scene

I created a ground, then dug a gap in it, and finally added physics through Physijs.
let Mesh = new THREE.Mesh(new THREE.BoxGeometry(800, 10, 800), material);
Mesh = new ThreeBSP(Mesh);
let Gap = new THREE.Mesh(new THREE.BoxGeometry(230, 10, 170), material);
Gap = new ThreeBSP(Gap);
Mesh = Mesh.subtract(Gap).toMesh(material);
Mesh = new Physijs.BoxMesh(Mesh.geometry, Mesh.material, 0);
scene.add(Mesh);
Then I create a body with physics applied. The plan is for it to fall through the gap in the ground, but instead it ends up suspended in the gap. Why?
let geometry = new Physijs.BoxMesh(new THREE.CylinderGeometry(10, 15, 50, 25), material, 1);
geometry.position.set(0, 500, 0);
scene.add(geometry);
I'm building a house. I dig holes in the floor and walls to represent stairways and doors, then add physics to the floor and walls. The idea is that objects representing people can pass through these holes, but they are blocked: people hang suspended directly above the stairway holes, and they cannot pass through the doors, as if blocked by an invisible wall.
Physijs.BoxMesh will just create a box with eight corners and flat planes in between, so the gap you subtracted with ThreeBSP is not part of the physics shape. Have you looked into using Physijs.ConcaveMesh? I couldn't find any documentation for it, but you can see it in the source code.
Mesh = Mesh.subtract(Gap).toMesh(material);
Mesh = new Physijs.ConcaveMesh(Mesh.geometry, Mesh.material, 0);
scene.add(Mesh);
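The falling body's shape can also be matched to its geometry. A sketch, assuming Physijs.CylinderMesh, which appears in the Physijs source alongside ConcaveMesh:
let cylinder = new Physijs.CylinderMesh(
  new THREE.CylinderGeometry(10, 15, 50, 25),
  material, // same material as in the question
  1 // mass
);
cylinder.position.set(0, 500, 0);
scene.add(cylinder);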

Why is three.js inconsistent about Gouraud interpolation?

I want to shade a THREE.BoxBufferGeometry using a simple THREE.MeshLambertMaterial. The material is supposed to use a Lambert illumination model to pick the colors for each vertex (and it does), and then use Gouraud shading to produce smooth gradients on each face.
The Gouraud part is not happening. Instead, the cube's faces are each shaded with one single, solid color.
I have tried various other BufferGeometries and gotten inconsistent results.
For example, if instead I make an IcosahedronBufferGeometry, I get the same problem: each face is one single, solid color.
geometry = new THREE.IcosahedronBufferGeometry(2, 0); // no Gouraud shading.
geometry = new THREE.IcosahedronBufferGeometry(2, 2); // no Gouraud shading.
On the other hand, if I make a SphereBufferGeometry, the Gouraud shading is present.
geometry = new THREE.SphereBufferGeometry(2, 3, 2); // yes Gouraud shading.
geometry = new THREE.SphereBufferGeometry(2, 16, 16); // yes Gouraud shading.
But then if I make a cube using a PolyhedronBufferGeometry, the Gouraud shading doesn't appear unless I set the detail to something other than 0.
const verticesOfCube = [
-1,-1,-1, 1,-1,-1, 1, 1,-1, -1, 1,-1,
-1,-1, 1, 1,-1, 1, 1, 1, 1, -1, 1, 1,
];
const indicesOfFaces = [
2,1,0, 0,3,2,
0,4,7, 7,3,0,
0,1,5, 5,4,0,
1,2,6, 6,5,1,
2,3,7, 7,6,2,
4,5,6, 6,7,4
];
geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 0); // no Gouraud shading
geometry = new THREE.PolyhedronBufferGeometry(verticesOfCube, indicesOfFaces, 1, 1); // yes Gouraud shading
I am aware of the BufferGeometry methods computeFaceNormals() and computeVertexNormals(). Normals are emphatically important here, as they are used to determine the colors for each face and vertex, respectively. But while they help with the Icosahedron, they have no effect on the Box, no matter whether neither, one, or both are present, in either order.
Here is the code I expect to work:
const geometry = new THREE.BoxBufferGeometry(2, 2, 2);
geometry.computeFaceNormals();
geometry.computeVertexNormals();
const material = new THREE.MeshLambertMaterial({
  color: 0xBE6E37
});
const mesh = new THREE.Mesh(geometry, material);
I should be getting a cube whose faces (the real, triangular ones) are shaded with a gradient. First, the face normals should be computed, and then the vertex normals by averaging the normals of the faces formed by them. Here is a triangular bipyramid on which correct Gouraud shading is applied (screenshot omitted). But the code above instead produces a cube whose faces are each one flat, solid color (screenshot omitted).
At no point does three.js log any errors or warnings to the console.
So what is going on here? The only explanation I can think of is that the Box is actually made up of 24 vertices, three at each corner of the cube, forming faces such that each vertex's computed normal is an average of at most two faces pointing in the same direction. But I can't find that written down anywhere, and that explanation doesn't hold for the Polyhedron, where the vertices and faces were explicitly specified in code.
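That vertex count is easy to check. A quick sketch that logs the size of the box's position attribute:
const box = new THREE.BoxBufferGeometry(2, 2, 2);
// 24 positions (4 per face x 6 faces), not 8: the corner vertices are
// duplicated per face, so each face keeps its own flat normal and
// computeVertexNormals() never averages across an edge.
console.log(box.attributes.position.count); // 24
console.log(box.attributes.normal.count); // 24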

Translation and rotation around center

I'm trying to achieve something simple: set a translation on the X axis, and rotate the object around its center by a fixed angle.
To achieve this, as far as I know, it's necessary to move the object to the center, rotate it, and move it back to the original position. Okay. The problem, though, is that the object appears to rotate its local axes and perform the last translation along them, so it ends up in the wrong position.
This is my code:
public void draw(GL10 gl) {
    gl.glLoadIdentity();
    GLU.gluLookAt(gl, 0, 0, 5, 0, 0, 0, 0, 1, 0);
    gl.glTranslatef(x, 0, 0);
    gl.glTranslatef(-x, 0, 0);
    gl.glRotatef(-80, 0, 1, 0);
    gl.glTranslatef(x, 0, 0);
    gl.glBindTexture(GL10.GL_TEXTURE_2D, textureId);
    gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
    gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
    gl.glFrontFace(GL10.GL_CW);
    gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
    gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
    gl.glDrawElements(GLES20.GL_TRIANGLES, indices.length, GLES10.GL_UNSIGNED_SHORT, indicesBuffer);
}
Before the rotation the object should be at (0, 0, 0). It rotates correctly, but then it comes closer to the screen, as if the x axis were pointing toward me (80°).
Note: I let only "opengl" as tag, since this is a general OpenGL question, the answer should not be Android related.
This is the deprecated way of doing this, but I guess that is no excuse for not answering the question.
OpenGL performs matrix multiplications in reverse order when multiple transforms are applied to a vertex. For example, if a vertex v is transformed by MA first and by MB second, the product is (MB × MA) × v, so MA is applied to the vertex before MB. In code, this means the last transform you write takes effect first, and the first transform you write takes effect last.
gl.glPushMatrix();
gl.glTranslatef(globalX, 0, 0);
gl.glTranslatef(localX, 0, 0);
gl.glRotatef(-80, 0, 1, 0);
gl.glTranslatef(-globalX, 0, 0);
gl.glPopMatrix();
First move from where you are in a hierarchy of transforms to the origin.
Then rotate around that origin.
Apply some local movement along any axis.
Move the object back to its global positioning.
Use glPushMatrix() and glPopMatrix() to undo changes for elements at the same level of relative positioning, that is, elements sharing the parent to which they are relatively positioned.
The push preserves the translations from previous (parent) objects, which OpenGL applies after the operations in the local code above, following the LIFO order of the matrix stack.

Translating with GLKMatrix4Translate Seems to Move About the Camera, Not the Origin

I'm trying to enable a user to pan an object up/down and left/right in OpenGL ES. I'm using GLKit for all of the drawing and movement, and I've enabled touch events to track how the user wants to move the object. I'm using GLKMatrix4Translate to pan the object, but for some reason the movement has a rotational component as well.
I gather the translation points from the user's touch and store them in a CGPoint:
CGPoint center;
I use center.x and center.y for the X and Y positions I want to translate to. I perform the translation with this line:
GLKMatrix4 modelViewMatrix = GLKMatrix4Translate(GLKMatrix4Identity, center.x, center.y, 0.0f);
Any ideas?
I figured out what the problem was here. I stopped using GLKMatrix4Translate and replaced it with GLKMatrix4MakeLookAt, which lets you move the camera around and gives the effect I was looking for.
Simply using this code results in the same problem I was already seeing. The model rotates as it pans.
GLKMatrix4MakeLookAt(0, 0, 7,
center.x, center.y, 0,
0, 1, 0);
What this says is that the eye sits at (0, 0, 7) and always looks at the point (center.x, center.y, 0), with the y-axis pointing up. The placement of the eye is the problem: if the model is rotating (which it is), the eye needs to move with the newly rotated point.
Replacing the above with the code below seems to do the trick.
GLKMatrix4MakeLookAt(rotation.x, rotation.y, 7,
center.x, center.y, 0,
0, 1, 0);
