Object Overflow Clipping Three JS - three.js

Using three.js, is there any way to define a clipping region for an object? I have, for example, a parent which contains child objects, and I would like to clip the child objects based on the viewport.
Something like...
// Create container and children
var container = new THREE.Object3D();
for (var i = 0; i < 100; i++) {
    var geometry = new THREE.PlaneGeometry(i, 0, 0);
    var material = new THREE.MeshBasicMaterial({color: 0x00ff00});
    var child = new THREE.Mesh(geometry, material);
    container.add(child);
}
// Create bounding box which is my viewport
var geom = new THREE.Geometry();
geom.vertices.push(new THREE.Vector3(0, 0, 0));
geom.vertices.push(new THREE.Vector3(10, 0, 0));
geom.vertices.push(new THREE.Vector3(10, 1, 0));
geom.vertices.push(new THREE.Vector3(0, 1, 0));
geom.computeBoundingBox();
// Magic property (THIS DOESNT EXIST)
container.clipRegion = geom.boundingBox;
The final part doesn't exist, but is there any way to achieve this with three.js? I potentially want to animate the children within the parent and only show the visible region defined by the bounding box.
Update: I've added the following image to describe my problem.
The red area in the image is the region I want to keep visible, whilst masking any child content that lies outside of it. All other content in the scene should remain visible.

I have been able to clip one object with another.
See the result in this fiddle.
In the fiddle you will see a cube being clipped by a sphere. Since this is a demo, some things are not final code.
On the right-hand side of the screen there is a second camera view, showing the scene from a high, static viewpoint.
Also, the part of the cube that should be clipped is shown in green instead of being discarded. In the fragment shader, you have to uncomment the discard statement to achieve real clipping:
if (shadowColor.r < 0.9) {
    gl_FragColor = vec4(0.3, 0.9, 0.0, 1.0);
} else {
    gl_FragColor = vec4(0.8, 0.8, 0.8, 1.0);
    // discard;
}
It works by creating a spot light that can cast shadows:
clippingLight = new THREE.SpotLight(0xafafaf, 0.97);
clippingLight.position.set(100, 200, 1400);
clippingLight.castShadow = true;
scene.add(clippingLight);
The object that has to do the clipping casts shadows, and the object to be clipped receives shadows.
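As a minimal sketch of that setup (a sketch only: clipperMesh, clippedMesh and renderer are assumed names, not from the original fiddle, and shadow maps are assumed to be toggled via renderer.shadowMap.enabled as in r71+):
renderer.shadowMap.enabled = true;   // shadows must be enabled on the renderer
clipperMesh.castShadow = true;       // the object that defines the clip region
clippedMesh.receiveShadow = true;    // the object whose lit fragments get discarded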
Then, in the animate loop, we set this light to the camera location:
function animate() {
    cameraControls.update();
    clippingLight.position.x = camera.position.x;
    clippingLight.position.y = camera.position.y;
    clippingLight.position.z = camera.position.z;
    requestAnimationFrame(animate);
}
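A shorter equivalent of the three per-axis assignments (same behaviour, just using Vector3.copy) would be:
clippingLight.position.copy(camera.position);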
Now, the parts that have to be visible in the clipped object are the ones in shadow. We need a shader that handles that. The fragment shader code is taken from the standard one in the three.js library, just slightly modified.
I am very new to working with three.js, so there are probably a lot of things in the code that can be done better. Just take the idea :-)
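As an aside, and not part of the answer above: newer three.js releases (r76 and later) have built-in clipping planes, which can be a simpler fit for an axis-aligned region like the viewport box in the question. A minimal sketch under that assumption:
renderer.localClippingEnabled = true;
// Fragments on the negative side of any plane are discarded; the constants below
// roughly match the 0..10 x 0..1 bounding box from the question
var clipPlanes = [
    new THREE.Plane(new THREE.Vector3(1, 0, 0), 0),    // keep x >= 0
    new THREE.Plane(new THREE.Vector3(-1, 0, 0), 10),  // keep x <= 10
    new THREE.Plane(new THREE.Vector3(0, 1, 0), 0),    // keep y >= 0
    new THREE.Plane(new THREE.Vector3(0, -1, 0), 1)    // keep y <= 1
];
var clippedMaterial = new THREE.MeshBasicMaterial({
    color: 0x00ff00,
    clippingPlanes: clipPlanes
});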

Related

Same image texture for merged shape

I have made a closed hemisphere by merging the geometries of a hemisphere and a circle. I have a 360-degree image for the texture. I want the image to be applied as one texture across the combined geometry. Currently it is applying the texture twice: to the hemisphere and the circle separately.
I have seen some answers on editing the UV mapping, but I am not sure how to go about it.
Here is the code.
var loader = new THREE.TextureLoader();
loader.setPath(srcPath);
loader.load("./texture.jpg", function(texture) {
    var hemiSphereGeom = new THREE.SphereGeometry(radius, radialSegments, Math.round(radialSegments / 4), 0, Math.PI * 2, 0, Math.PI * 0.5);
    var objMaterial = new THREE.MeshPhongMaterial({
        map: texture,
        shading: THREE.FlatShading
    });
    objMaterial.side = THREE.BackSide;
    var capGeom = new THREE.CircleGeometry(radius, radialSegments);
    capGeom.rotateX(Math.PI * 0.5);
    var singleGeometry = new THREE.Geometry();
    var cap = new THREE.Mesh(capGeom);
    var hemiSphere = new THREE.Mesh(hemiSphereGeom);
    hemiSphere.updateMatrix();
    singleGeometry.merge(hemiSphere.geometry, hemiSphere.matrix);
    cap.updateMatrix();
    singleGeometry.merge(cap.geometry, cap.matrix);
    el.setObject3D('hemisphere', new THREE.Mesh(singleGeometry, objMaterial));
});
It appears that the code is still seeing the closed hemisphere as two separate entities. I would try making the shape in a 3D modeling program, loading it into the A-Frame code, and then loading the texture on the back side of the geometry.

Three.js light position visibly changes but position attribute stays the same

I have a light that is a child to a pivot object:
var pivotpoint = new THREE.Object3D();
pivotpoint.name = "pivot";
scene.add(pivotpoint);
var light = new THREE.PointLight(0xffffff, 1, 100);
light.name = "light";
light.castShadow = true;
pivotpoint.add(light);
light.position.set(10, 25, 0);
Now, in my update() method I rotate the pivot object:
var o = scene.getObjectByName("pivot");
if (GLOBAL_KEYS['a'])
{
    o.rotation.y += 0.05;
}
if (GLOBAL_KEYS['d'])
{
    o.rotation.y -= 0.05;
}
This works perfectly well. I can see my light rotating around the pivot point, casting shadows and all.
However, if I do...
console.log(light.position);
...the position attribute always stays at (10,25,0).
What in god's name do I need to do in order to get the actual light position??
Thanks in advance!
object.position is a local position, relative to the object's parent in the scene graph. To compute position in global space, use getWorldPosition:
const worldPos = new THREE.Vector3();
light.getWorldPosition(worldPos);
console.log(worldPos);
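Alternatively, assuming the world matrices are up to date (for example right after a render), the same value can be read straight from the light's matrixWorld:
const worldPos = new THREE.Vector3().setFromMatrixPosition(light.matrixWorld);
console.log(worldPos);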

Json 3D model only using 1 pixel of a texture

I'm having this weird issue: my 3D object is only taking 1 pixel (bottom left) of my texture. This is how I'm loading the object:
loaderFrame.load('./models/filterFrame/filterFrame.json', (geometry) => {
    const mat = new THREE.MeshBasicMaterial({
        map: new THREE.TextureLoader().load('./models/filterFrame/textura_polar.jpeg'),
        transparent: true,
        morphTargets: true
    });
    mat.transparent = true;
    // mat.morphTargets = true;
    frameMesh = new THREE.Mesh(geometry, mat);
    frameMesh.scale.multiplyScalar(frameOptions.frameScale);
    frameMesh.frustumCulled = false;
    frameMesh.transparent = true;
    frameMesh.renderOrder = 0;
});
This is because your loaded object doesn't have proper UV mapping. If the UVs are nonexistent, or if they're all (0, 0), then the material will only ever sample the bottom-left corner of your texture.
To fix this, open your model in a 3D editor and make sure the UVs are laid out properly across the texture plane. I don't know what your model looks like, but here's a basic example:
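To illustrate the symptom in code (a minimal sketch, assuming a recent three.js build where geometries are BufferGeometry-based and setAttribute is available; this is not the example from the original answer): a plane created by three.js already has UVs spanning the whole texture, and zeroing them reproduces the single-texel look.
// Built-in geometry ships with a 'uv' attribute covering [0, 1] x [0, 1]
const good = new THREE.PlaneGeometry(2, 2);
// Reproducing the bug: overwrite the UVs with zeros so every vertex samples texel (0, 0)
const bad = new THREE.PlaneGeometry(2, 2);
const zeroUvs = new Float32Array(bad.attributes.uv.count * 2); // all zeros
bad.setAttribute('uv', new THREE.BufferAttribute(zeroUvs, 2));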

GL_PROJECTION and GL_MODELVIEW in ThreeJS

I'm trying to port some legacy OpenGL 1.x code to WebGL / Three.JS:
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(...)
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(...)
// rest of rendering
I am setting my Three.JS camera's projection like so (note that I do not want to use a PerspectiveCamera, my projection matrix is pre-calculated):
var camera = new THREE.Camera()
camera.projectionMatrix.fromArray(...)
And I am setting my Three.JS camera's pose like so:
var mat = new THREE.Matrix4();
mat.fromArray(...);
mat.decompose(camera.position, camera.quaternion, camera.scale);
camera.updateMatrix();
scene.updateMatrixWorld(true);
I am testing this with the following:
var geometry = new THREE.SphereGeometry(10, 35, 35);
var material = new THREE.MeshLambertMaterial({color: 0xffff00});
mesh = new THREE.Mesh(geometry, material);
camera.add(mesh);
mesh.position.set(0, 0, -40); // fix in front of the camera
scene.add(mesh);
I can see that my camera's pose is being set correctly (by logging it), but nothing is being rendered to the screen. Am I setting the projection matrix incorrectly?
Are you sure your projection matrix is correct? (And, as Sepehr pointed out, are you adding the camera to the scene?)
There are a few places where updateProjectionMatrix is called in the camera code, which will overwrite your matrix, so I'd put a breakpoint there to see if that is happening.
The issue turned out to be my modelView matrix. This is how I ported my legacy code to ThreeJS:
// OpenGL 1.x code
glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix)
glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix)
// ThreeJS code
/* GL_PROJECTION matrix can be directly applied here */
camera = new THREE.Camera()
camera.projectionMatrix.copy(projectionMatrix)
/* GL_MODELVIEW should be inverted first */
modelViewMatrix.getInverse(modelViewMatrix)
modelViewMatrix.decompose(camera.position, camera.quaternion, camera.scale)
Looking at the ThreeJS WebGL renderer source, the modelViewMatrix is calculated by multiplying the camera's matrixWorldInverse with the object's matrixWorld.
Matrix4's decompose feeds the camera's matrixWorld, so the matrix actually used in the model-view calculation is its inverse; that is why the legacy GL_MODELVIEW matrix has to be inverted before being applied to the camera.
EDIT: here's a plug-and-play ThreeJS camera to use in this scenario:
/**
 * @author Sepehr Laal
 * @file OpenGLCamera.js
 */
function OpenGLCamera () {
    THREE.Camera.call(this)
    this.type = 'OpenGLCamera'
}

OpenGLCamera.prototype = Object.assign(Object.create(THREE.Camera.prototype), {
    constructor: OpenGLCamera,
    isOpenGLCamera: true,
    /*
     * Equivalent to OpenGL 1.x:
     *   glMatrixMode(GL_PROJECTION);
     *   glLoadMatrixf(...)
     */
    setProjectionFromArray: function (arr) {
        this.projectionMatrix.fromArray(arr)
    },
    /*
     * Equivalent to OpenGL 1.x:
     *   glMatrixMode(GL_MODELVIEW);
     *   glLoadMatrixf(...)
     */
    setModelViewFromArray: function () {
        var m = new THREE.Matrix4();
        return function (arr) {
            m.fromArray(arr)
            m.getInverse(m)
            m.decompose(this.position, this.quaternion, this.scale)
        };
    }()
})
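A hypothetical usage of the camera above (projectionArray and modelViewArray stand in for your pre-calculated, column-major 16-element arrays):
var camera = new OpenGLCamera()
camera.setProjectionFromArray(projectionArray)   // GL_PROJECTION equivalent
camera.setModelViewFromArray(modelViewArray)     // GL_MODELVIEW equivalent
scene.add(camera)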

Threejs Raycasting a Sprite-Canvas Object is inaccurate [duplicate]

I am trying to use THREE.Raycaster to show an HTML label when the user hovers over an object. It works fine if I use THREE.Mesh, but with THREE.Sprite it looks like there is a gap that increases with the scale of the object.
The creation process is the same for both scenarios; I only change the type based on the USE_SPRITE variable.
if ( USE_SPRITE ) {
    // using SpriteMaterial / Sprite
    m = new THREE.SpriteMaterial( { color: 0xff0000 } );
    o = new THREE.Sprite( m );
} else {
    // using MeshBasicMaterial / Mesh
    m = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
    o = new THREE.Mesh( new THREE.PlaneGeometry( 1, 1, 1 ), m );
}
https://plnkr.co/edit/J0HHFMpDB5INYLSCTWHG?p=preview
I am not sure if it is a bug with THREE.Sprite or if I am doing something wrong.
Thanks in advance.
three.js r73
I would consider this a bug in three.js r.75.
Raycasting with meshes in three.js is exact. However, with sprites, it is an approximation.
Sprites always face the camera, can have different x-scale and y-scale applied (be non-square), and can be rotated (sprite.material.rotation = Math.random()).
In THREE.Sprite.prototype.raycast(), make this change:
var guessSizeSq = this.scale.x * this.scale.y / 4;
That should work much better for square sprites. The corners of the sprite will be missed, as the sprite is treated as a disk.
three.js r.75
