GL_PROJECTION and GL_MODELVIEW in ThreeJS

I'm trying to port some legacy OpenGL 1.x code to WebGL / Three.JS:
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(...)
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(...)
// rest of rendering
I am setting my Three.JS camera's projection like so (note that I do not want to use a PerspectiveCamera, my projection matrix is pre-calculated):
var camera = new THREE.Camera()
camera.projectionMatrix.fromArray(...)
And I am setting my Three.JS camera's pose like so:
var mat = new THREE.Matrix4();
mat.fromArray(...);
mat.decompose(camera.position, camera.quaternion, camera.scale);
camera.updateMatrix();
scene.updateMatrixWorld(true);
I am testing this with the following:
var geometry = new THREE.SphereGeometry(10, 35, 35);
var material = new THREE.MeshLambertMaterial({color: 0xffff00});
mesh = new THREE.Mesh(geometry, material);
camera.add(mesh);
mesh.position.set(0, 0, -40); // fix in front of the camera
scene.add(mesh);
I can see that my camera's pose is being set correctly (by logging it), but nothing is being rendered to the screen. Am I setting the projection matrix incorrectly?

Are you sure your projection matrix is correct? (And, as Sepehr pointed out, are you adding the camera to the scene?)
There are a few places where updateProjectionMatrix is called in the camera code, which will overwrite your matrix, so I'd put a breakpoint in there to see if that is what's happening.
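If it turns out that is the case, one defensive option (just a sketch, not necessarily the cleanest fix) is to stub the method out so nothing can overwrite your hand-loaded matrix:
camera.updateProjectionMatrix = function () {
  // intentionally a no-op: projectionMatrix is loaded manually
  // from the legacy GL code and must not be recomputed
};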

The issue turned out to be my modelView matrix. This is how I ported my legacy code to ThreeJS:
// OpenGL 1.x code
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix)
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix)
// ThreeJS code
/* GL_PROJECTION matrix can be directly applied here */
camera = new THREE.Camera()
camera.projectionMatrix.copy(projectionMatrix)
/* GL_MODELVIEW should be inverted first */
modelViewMatrix.getInverse(modelViewMatrix)
modelViewMatrix.decompose(camera.position, camera.quaternion, camera.scale)
Looking at the ThreeJS WebGL renderer source, ThreeJS' modelViewMatrix is calculated by multiplying the camera's matrixWorldInverse into the object's matrixWorld.
Matrix4's decompose updates the camera's matrixWorld, so the matrix actually used in the model-view calculation ends up being the inverse of whatever you decomposed, which is why the GL_MODELVIEW matrix must be inverted first.
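In other words, paraphrasing the relevant line of the renderer:
// per object, three.js computes roughly:
object.modelViewMatrix.multiplyMatrices(camera.matrixWorldInverse, object.matrixWorld);
// GL_MODELVIEW corresponds to camera.matrixWorldInverse (for an object at
// identity), so the loaded matrix must be inverted before it is decomposed
// into the camera's world transform.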
EDIT: here's a plug and play ThreeJS camera to use in this scenario:
/**
 * @author Sepehr Laal
 * @file OpenGLCamera.js
 */
function OpenGLCamera () {
  THREE.Camera.call(this)
  this.type = 'OpenGLCamera'
}

OpenGLCamera.prototype = Object.assign(Object.create(THREE.Camera.prototype), {
  constructor: OpenGLCamera,
  isOpenGLCamera: true,
  /*
   * Equivalent to OpenGL 1.x:
   *   glMatrixMode(GL_PROJECTION);
   *   glLoadMatrixf(...)
   */
  setProjectionFromArray: function (arr) {
    this.projectionMatrix.fromArray(arr)
  },
  /*
   * Equivalent to OpenGL 1.x:
   *   glMatrixMode(GL_MODELVIEW);
   *   glLoadMatrixf(...)
   */
  setModelViewFromArray: function () {
    var m = new THREE.Matrix4();
    return function (arr) {
      m.fromArray(arr)
      m.getInverse(m)
      m.decompose(this.position, this.quaternion, this.scale)
    };
  }()
})

Related

GLTF model deformed upon being rotated in three.js

I've been trying to import a gltf model from Blender into three.js, but I'm having trouble rotating it: whenever I set its quaternion to anything but (0,0,0,1), the model is deformed. I've included screenshots of how the model looks with the (0,0,0,1) quaternion and then what it looks like when I change the quaternion to (0,1,0,1).
Here's my code for importing the model
loader.load('./resources/Machina_Colored.gltf',
  function(gltf) {
    // pos and quat are a THREE.Vector3 and THREE.Quaternion respectively
    const mMass = 0;
    const size = new THREE.Vector3(4, 8, 4); // this is the size of the model in blender
    const mShape = new Ammo.btBoxShape(new Ammo.btVector3(size.x * 0.5, size.y * 0.5, size.z * 0.5));
    mShape.setMargin(0.05);
    const mObj = gltf.scene.children[0];
    // this seems to work with any quaternion; it's just the model itself that is having problems
    createRigidBody(mObj, mShape, mMass, pos, quat);
  },
  function(xhr) {
    console.log(((xhr.loaded / xhr.total) * 100) + "% loaded");
  },
  function(error) {
    console.log('An error occurred');
  });
Here is the model (working normally) with the (0,0,0,1) Quaternion
And then here's what happens when I use the (0,1,0,1) Quaternion
I've also included a screenshot of what my file looks like in blender just in case
Thanks to WestLangley's comment, I added these lines of code to the top of my program:
const eul = new THREE.Euler(0,0,0); //this now controls the rotation of the model
quat.setFromEuler(eul);
and the rotation is reflected without deformation
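For anyone wondering why this works: (0, 1, 0, 1) is not a unit quaternion (its length is sqrt(2)), and applying a non-normalized quaternion skews the geometry on top of rotating it. A minimal sketch of two equivalent fixes:
// normalizing turns (0, 1, 0, 1) into a clean 90-degree rotation about Y:
quat.set(0, 1, 0, 1).normalize(); // -> (0, 0.7071, 0, 0.7071)
// which is exactly what setFromEuler produces for:
quat.setFromEuler(new THREE.Euler(0, Math.PI / 2, 0));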

Mangled rendering when transforming scene coordinates instead of camera coordinates

I've been learning how to integrate ThreeJS with Mapbox, using this example. It struck me as weird that the approach is to leave the loaded model in its own coordinate system, and transform the camera location on render. So I attempted to rewrite the code, so that the GLTF model is transformed when loaded, then the ThreeJS camera is just synchronised with the Mapbox camera, with no further modifications.
The code now looks like this:
function newScene() {
  const scene = new THREE.Scene();
  // create two three.js lights to illuminate the model
  const directionalLight = new THREE.DirectionalLight(0xffffff);
  directionalLight.position.set(0, -70, 100).normalize();
  scene.add(directionalLight);
  const directionalLight2 = new THREE.DirectionalLight(0xffffff);
  directionalLight2.position.set(0, 70, 100).normalize();
  scene.add(directionalLight2);
  return scene;
}

function newRenderer(map, gl) {
  // use the Mapbox GL JS map canvas for three.js
  const renderer = new THREE.WebGLRenderer({
    canvas: map.getCanvas(),
    context: gl,
    antialias: true
  });
  renderer.autoClear = false;
  return renderer;
}
// create a custom layer for a 3D model per the CustomLayerInterface
export function addModel(modelPath, origin, altitude = 0, orientation = [Math.PI / 2, 0, 0]) {
const coords = mapboxgl.MercatorCoordinate.fromLngLat(origin, altitude);
// transformation parameters to position, rotate and scale the 3D model onto the map
const modelTransform = {
translateX: coords.x,
translateY: coords.y,
translateZ: coords.z,
rotateX: orientation[0],
rotateY: orientation[1],
rotateZ: orientation[2],
/* Since our 3D model is in real world meters, a scale transform needs to be
* applied since the CustomLayerInterface expects units in MercatorCoordinates.
*/
scale: coords.meterInMercatorCoordinateUnits()
};
const scaleVector = new THREE.Vector3(modelTransform.scale, -modelTransform.scale, modelTransform.scale)
return {
id: "3d-model",
type: "custom",
renderingMode: "3d",
onAdd: function(map, gl) {
this.map = map;
this.camera = new THREE.Camera();
this.scene = newScene();
this.renderer = newRenderer(map, gl);
// use the three.js GLTF loader to add the 3D model to the three.js scene
new THREE.GLTFLoader()
.load(modelPath, gltf => {
gltf.scene.position.fromArray([coords.x, coords.y, coords.z]);
gltf.scene.setRotationFromEuler(new THREE.Euler().fromArray(orientation));
gltf.scene.scale.copy(scaleVector);
this.scene.add(gltf.scene);
const bbox = new THREE.Box3().setFromObject(gltf.scene);
console.log(bbox);
this.scene.add(new THREE.Box3Helper(bbox, 'blue'));
});
},
render: function(gl, matrix) {
this.camera.projectionMatrix = new THREE.Matrix4().fromArray(matrix);
this.renderer.state.reset();
this.renderer.render(this.scene, this.camera);
// this.map.triggerRepaint();
}
}
}
It basically works, in that a model is loaded and drawn in the right location in the Mapbox world. However, instead of looking like this:
It now looks like this, a mangled mess that jitters around chaotically as the camera moves:
I'm not yet familiar enough with ThreeJS to have any idea what I did wrong.
Here's a side-by-side comparison of the old, functional code on the right, vs the new broken code on the left.
Further investigation
I suspect the cause may be related to shrinking all the coordinates down to the [0..1] range of the projected coordinate system and thereby losing mathematical precision. When I scale the model up by 100 times, it renders like this: messy and glitchy, but at least recognisable as something.
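To put rough numbers on that theory (magnitudes assumed, not measured from my actual model):
// Vertex positions reach the GPU as 32-bit floats (~7 significant digits).
// One metre is on the order of 2.5e-8 mercator units, while the model sits
// at coordinates of magnitude ~0.3, where the float32 spacing is ~3e-8:
const meterInMercator = 2.5e-8; // assumed; varies with latitude
const a = 0.28041;
const b = 0.28041 + meterInMercator; // a vertex one metre away
console.log(Math.fround(a) === Math.fround(b)); // may well print true
// i.e. vertices less than a metre apart can collapse onto the same float32
// value, which would explain the mangled, jittering mesh.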

Three.js custom Shader with Texture

I want to write a custom shader which manipulates my image with three.js.
For that I want to create a plane with the image as a texture. Afterwards I want to move vertices around to distort the image.
(If that is an absolutely wrong way to do this, please tell me.)
First I have my shaders:
<script type="x-shader/x-vertex" id="vertexshader">
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main() {
  // Pass the texcoord to the fragment shader.
  v_texCoord = a_texCoord;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
  vec4 color = texture2D(u_texture, v_texCoord);
  gl_FragColor = color;
}
</script>
I don't really understand exactly what texture2D does, but I found it in other code fragments.
What I want with this sample: just color each fragment (gl_FragColor) with the color from the «underlying» image (= texture).
In my code I have set up a normal three.js scene with a plane:
// set some camera attributes
var VIEW_ANGLE = 45,
    ASPECT = window.innerWidth / window.innerHeight,
    NEAR = 0.1,
    FAR = 1000;
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.set(0, 0, 15);

var vertShader = document.getElementById('vertexshader').innerHTML;
var fragShader = document.getElementById('fragmentshader').innerHTML;
var texloader = new THREE.TextureLoader();
var texture = texloader.load("img/color.jpeg");
var uniforms = {
  u_texture: {type: 't', value: 0, texture: texture}
};
var attributes = {
  a_texCoord: {type: 'v2', value: new THREE.Vector2()}
};
// create the final material
var shaderMaterial = new THREE.ShaderMaterial({
  uniforms: uniforms,
  vertexShader: vertShader,
  fragmentShader: fragShader
});

var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

var plane = {
  width: 5,
  height: 5,
  widthSegments: 10,
  heightSegments: 15
};
var geometry = new THREE.PlaneBufferGeometry(plane.width, plane.height, plane.widthSegments, plane.heightSegments);
var material = new THREE.MeshBasicMaterial({color: 0x00ff00});
var plane = new THREE.Mesh(geometry, shaderMaterial);
scene.add(plane);
plane.rotation.y += 0.2;

var render = function () {
  requestAnimationFrame(render);
  // plane.rotation.x += 0.1;
  renderer.render(scene, camera);
};
render();
Unfortunately, after running that code I just see a black window, although I know the mesh renders fine if I create it with the plain material instead.
So it must be the shaderMaterial or the shaders.
Questions:
do I have to define the uniform u_texture and the attribute a_texCoord in my ShaderMaterial's uniforms and attributes? And do they have to have the exact same name?
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
What value does a_texCoord have? Nothing happens if I write:
var attributes = {
  a_texCoord: {type: 'v2', value: new THREE.Vector2(1, 1)}
};
Or do I have to use some mapping (the built-in map stuff from three)? But how would I then change vertex positions?
Could someone shed some light on that matter?
I got it to work by changing this:
var uniforms = {
u_texture: {type: 't', value: 0, texture: texture},
};
To this:
var uniforms = {
u_texture: {type: 't', value: texture},
};
Anyway, all the other questions are still open and answers are highly appreciated.
(btw: why the downvote?)
do I have to define the uniform u_texture and the attribute a_texCoord in my shader material uniforms and attributes? And do they have to have the exact same name?
Yes and yes. The uniforms are defined as part of the shader-material, while the attributes have been moved from the shader-material to the BufferGeometry class in version 72 (I'm assuming you are using an up-to-date version, so here is how you do this today):
var geometry = new THREE.PlaneBufferGeometry(...);
// first, create an array to hold the a_texCoord-values per vertex
var numVertices = (plane.widthSegments + 1) * (plane.heightSegments + 1);
var texCoordBuffer = new Float32Array(2 * numVertices);
// now register it as a new attribute (the 2 here indicates that there are
// two values per element (vec2))
geometry.addAttribute('a_texCoord', new THREE.BufferAttribute(texCoordBuffer, 2));
As you can see, the attribute will only work if it has the exact same name as specified in your shader-code.
I don't know exactly what you are planning to use this for, but it sounds suspiciously like you want to have the uv-coordinates. If that is the case, you can save yourself a lot of work if you have a look at the THREE.PlaneBufferGeometry-class. It already provides an attribute named uv that is probably exactly what you are looking for. So you just need to change the attribute-name in your shader-code to
attribute vec2 uv;
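(One caveat, as far as I know: a plain ShaderMaterial prepends the declarations for built-in attributes and matrices itself, so you can use uv without re-declaring it; an explicit declaration is only needed with RawShaderMaterial.) For illustration, here is a minimal sketch of your two shaders rewritten against the built-in uv attribute:
<script type="x-shader/x-vertex" id="vertexshader">
// `uv`, `position`, `projectionMatrix` and `modelViewMatrix` are declared
// by three.js for ShaderMaterial shaders
varying vec2 v_texCoord;
void main() {
  v_texCoord = uv;
  gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}
</script>
<script type="x-shader/x-fragment" id="fragmentshader">
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
  // texture2D looks up the texel of u_texture at coordinate v_texCoord
  gl_FragColor = texture2D(u_texture, v_texCoord);
}
</script>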
How many vertices are there anyway? Will I get a vertex for every pixel in the image? Or is it just 4, one for each corner of the plane?
The vertices are created according to the heightSegments and widthSegments parameters. So if you set both to 5, there will be (5 + 1) * (5 + 1) = 36 vertices (+1 because a line with only 1 segment has two vertices etc.) and 5 * 5 * 2 = 50 triangles (with 150 indices) in total.
Another thing to note is that the PlaneBufferGeometry is an indexed geometry. This means that every vertex (and every other attribute-value) is stored only once, although it is used by multiple triangles. There is a special index attribute that contains the information about which vertices are used to create which triangles.
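A quick way to see this in action (using geometry.index, available in current versions; the counts follow from the formulas above):
var geometry = new THREE.PlaneBufferGeometry(5, 5, 1, 1); // one segment each way
console.log(geometry.attributes.position.count); // 4 vertices, stored once each
console.log(geometry.index.count); // 6 index entries = 2 triangles sharing an edge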
What value does a_texCoord have? Nothing happens if I write: ...
I hope the above helps to answer that.
Or do I have to use some mapping (built in map stuff from three)?
I would suggest you use the uv attribute as described above. But you absolutely don't have to.
But how would I then change vertex positions?
There are at least two ways to do this: in the vertex-shader or via javascript. The latter can be seen here: http://codepen.io/usefulthink/pen/vKzRKr?editors=1010
(the relevant part for updating the geometry starts in line 84).
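For completeness, a minimal sketch of the javascript variant (not the codepen code itself): displace the existing position attribute and flag it for re-upload.
var pos = plane.geometry.attributes.position;
for (var i = 0; i < pos.count; i++) {
  pos.setZ(i, Math.sin(i * 0.5) * 0.2); // move each vertex along z
}
pos.needsUpdate = true; // tell three.js to re-upload the buffer to the GPU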

Weld edge vertices of BoxBufferGeometry

I am trying to create terrain in the shape of a cube, where the vertices on the top plane can be displaced along the y-axis. All vertices adjacent to those of the top plane need to stay connected.
User input from either desktop or mobile should move them up or down in a performant manner.
From what I have read it is better to offload expensive operations to the GPU. I thought achieving the vertex displacement in a ShaderMaterial with a displacement attribute seemed like a perfect fit until I read the following:
As of THREE r72, directly assigning attributes in a ShaderMaterial is no longer supported. A BufferGeometry instance (instead of a Geometry instance) must be used instead.
So it seems that using an attribute with my Geometry is out of the question?
My attempt at displacing the vertices along the top plane using BufferGeometry in the ShaderMaterial however results in the following:
With BufferGeometry, the top plane's vertices are not connected to those of the other faces, unlike with Geometry, where they are welded together by its mergeVertices method. To my knowledge that method is not available for BufferGeometry objects?
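As far as I understand, for a BufferGeometry "welding" would mean sharing vertices between triangles through the index attribute, something like this minimal sketch (not my actual terrain code):
// two triangles sharing an edge: 4 vertices instead of 6
var positions = new Float32Array([
  -1, 0, -1,  // 0
   1, 0, -1,  // 1
   1, 0,  1,  // 2
  -1, 0,  1   // 3
]);
var welded = new THREE.BufferGeometry();
welded.addAttribute('position', new THREE.BufferAttribute(positions, 3));
// triangles (0, 1, 2) and (0, 2, 3) both reference vertices 0 and 2
welded.setIndex(new THREE.BufferAttribute(new Uint16Array([0, 1, 2, 0, 2, 3]), 1));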
Basically what started my fear, uncertainty and doubt concerning Geometry was a post I read by mrdoob.
Summary
I already have this working with Geometry, but I would like to make use of the GPU via ShaderMaterial attributes (seemingly only supported by BufferGeometry) if it offers performance benefits on mobile, and because Geometry might be deprecated in the future.
Here is a small snippet illustrating the issue:
let winX = window.innerWidth;
let winY = window.innerHeight;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(50, winX / winY, 0.1, 100);
camera.position.set(2, 1, 2);
camera.lookAt(scene.position);

const renderer = new THREE.WebGLRenderer();
renderer.setSize(winX, winY);
document.body.appendChild(renderer.domElement);

const terrainGeo = new THREE.BoxBufferGeometry(1, 1, 1);
const terrainMat = new THREE.ShaderMaterial({
  vertexShader: `
    attribute float displacement;
    varying vec3 dPosition;
    void main() {
      dPosition = position;
      dPosition.y += displacement;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(dPosition, 1.0);
    }
  `,
  fragmentShader: `
    void main() {
      gl_FragColor = vec4(1.0, 0.0, 1.0, 1.0);
    }
  `
});
const terrainObj = new THREE.Mesh(terrainGeo, terrainMat);

let displacement = new Float32Array(terrainObj.geometry.attributes.position.count);
displacement.forEach((elem, index) => {
  // Select vertex 8 - 11, the top of the cube
  if (index >= 8 && index <= 11) {
    displacement[index] = Math.random() * 0.1 + 0.25;
  }
});
terrainObj.geometry.addAttribute('displacement',
  new THREE.BufferAttribute(displacement, 1)
);

scene.add(camera);
scene.add(terrainObj);

const render = () => {
  requestAnimationFrame(render);
  renderer.render(scene, camera);
};
render();

const gui = new dat.GUI();
const updateBufferAttribute = () => {
  terrainObj.geometry.attributes.displacement.needsUpdate = true;
};
gui.add(displacement, 8).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 9).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 10).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
gui.add(displacement, 11).min(0).max(2).step(0.05).onChange(updateBufferAttribute);
<script src="https://cdnjs.cloudflare.com/ajax/libs/dat-gui/0.5.1/dat.gui.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r76/three.min.js"></script>
<style type="text/css">body { margin: 0 } canvas { display: block }</style>

Object Overflow Clipping Three JS

Using three.js, is there any way to define a clipping region for an object? I have, for example, a parent which contains child objects; I would like to clip the child objects based on the viewport.
Something like...
// Create container and children
var container = new THREE.Object3D();
for(var i = 0; i < 100; i++) {
var geometry = new THREE.PlaneGeometry(i, 0, 0);
var material = new THREE.MeshBasicMaterial({color: 0x00ff00});
var child = new THREE.Mesh(geometry, material);
container.add(child);
}
// Create bounding box which is my viewport
var geom = new THREE.Geometry();
geom.vertices.push(new THREE.Vector3(0, 0, 0));
geom.vertices.push(new THREE.Vector3(10, 0, 0));
geom.vertices.push(new THREE.Vector3(10, 1, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
geom.computeBoundingBox();
// Magic property (THIS DOESNT EXIST)
container.clipRegion = geom.boundingBox;
That final property doesn't exist, but is there any way to achieve this with three.js? I potentially want to animate the children within the parent and only show the visible region defined by the bounding box.
Update: I've added the following image to describe my problem.
The red area is the region I want to make visible; anything of the children that lies outside it should be masked, while all other content remains visible.
I have been able to clip an object with another.
See the result here: fiddle.
In this fiddle you will see a cube being clipped by a sphere. Since this is a demo, some things are not final code.
On the right-hand side of the screen you have another camera view, where you see the scene from a high, static viewpoint.
Also, the part of the cube that should be clipped is shown in green instead of being removed. In the fragment shader, you have to uncomment the discard statement to achieve real clipping.
if (shadowColor.r < 0.9) {
  gl_FragColor = vec4(0.3, 0.9, 0.0, 1.0);
} else {
  gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
  // discard;
}
It works by creating a spot light that can cast shadows
clippingLight = new THREE.SpotLight ( 0xafafaf, 0.97 );
clippingLight.position.set (100, 200, 1400);
clippingLight.castShadow = true;
scene.add (clippingLight);
The object that has to do the clipping casts shadows, and the object to be clipped receives shadows.
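In other words, a sketch of the relevant flags (object names assumed):
renderer.shadowMap.enabled = true; // shadow maps drive the clipping trick
clippingSphere.castShadow = true;  // the object doing the clipping
clippedCube.receiveShadow = true;  // the object being clipped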
Then, in the animate loop, we set this light to the camera location:
function animate() {
  cameraControls.update();
  clippingLight.position.x = camera.position.x;
  clippingLight.position.y = camera.position.y;
  clippingLight.position.z = camera.position.z;
  requestAnimationFrame(animate);
}
Now, the parts that have to be visible in the clipped object are the ones in shadow. We need a shader that handles that. The fragment shader code is taken from the standard one in the three.js library, just slightly modified.
I am very new to working with three.js, so there are probably a lot of things in the code that could be done better. Just take the idea :-)
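For completeness: newer three.js releases (r77 and later, if I recall correctly) support clipping planes natively, which covers box-like clip regions without any shader work. A minimal sketch:
// clip everything above y = 1 and beyond x = 10 for this material only
material.clippingPlanes = [
  new THREE.Plane(new THREE.Vector3(0, -1, 0), 1),  // keeps y <= 1
  new THREE.Plane(new THREE.Vector3(-1, 0, 0), 10)  // keeps x <= 10
];
renderer.localClippingEnabled = true;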
