background occlusion - three.js

I'm trying to render two scenes. The first scene is just a 2D background plane. In the second scene I have set up two objects. The first object (a head) has its material opacity set to 0. I thought this would be an easy and fast way to calculate the occlusion for the second object (sunglasses) in the scene. In fact that works exactly like I wanted, but now the head is also occluding the background even though it should be transparent. (I cleared the depth buffer before rendering the second scene and set renderer.autoClear = false.)
renderer.autoClear = false;
var headMaterial = new THREE.MeshBasicMaterial({ color: 0x000000, opacity: 0 });
...
// render loop
renderer.clear();
renderer.render( background, camera );  // draw the 2D background plane
renderer.clear( false, true, false );   // clear the depth buffer only
renderer.render( scene, camera );       // draw head (occluder) + sunglasses

Is there a compelling reason for rendering two scenes, rather than just one scene that contains all objects? They share the same camera, no?
Try rendering the head and sunglasses first. Mask the sunglasses with a stencil. Then render the background with a stencil test (but not depth test). You'll render fewer background pixels, resulting in a bit faster overall render.
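A rough sketch of that ordering, using the raw WebGL context (exposed as renderer.context in three.js of this era). This assumes the context has a stencil buffer (the three.js default), and three.js may reset some of this raw GL state between renders, so treat it as a starting point rather than a drop-in solution:
var gl = renderer.context;
renderer.autoClear = false;
renderer.clear(); // clear color, depth and stencil

// 1. Render the head and sunglasses, writing 1 into the stencil
//    for every pixel they cover.
gl.enable( gl.STENCIL_TEST );
gl.stencilFunc( gl.ALWAYS, 1, 0xff );
gl.stencilOp( gl.KEEP, gl.KEEP, gl.REPLACE );
renderer.render( scene, camera );

// 2. Render the background only where the stencil is still 0; the depth
//    test is irrelevant there because those pixels still hold the clear depth.
gl.stencilFunc( gl.EQUAL, 0, 0xff );
gl.stencilOp( gl.KEEP, gl.KEEP, gl.KEEP );
renderer.render( background, camera );

gl.disable( gl.STENCIL_TEST );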

If by saying "background" you mean a scene with only a plane that has some texture on it - why don't you set your background as an HTML background? That way you won't have to calculate the right position for your plane to fill the whole screen area.
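A minimal sketch of that idea, assuming you create the renderer yourself (the image URL is a placeholder):
// Make the WebGL canvas transparent so the page background shows through.
var renderer = new THREE.WebGLRenderer({ alpha: true });
renderer.setClearColor( 0x000000, 0 ); // alpha 0 = fully transparent clear color

// Style the page itself as the backdrop.
document.body.style.backgroundImage = "url('background.jpg')";
document.body.style.backgroundSize = "cover";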
Now, for transparency issue - here is some example for you: http://jsfiddle.net/ajJmx/1/
See how the front side of the cube is set to semi-transparent and you can see the other sides of the cube. And if you turn the cube 45 degrees - you'll see that even though there is some other object in the background - your solid cube sides stay solid.
What you basically want to do is set transparent: true, opacity: 0.6 on your sunglasses material. That's it! You could also play around with material blending and try setting blending: THREE.AdditiveBlending, depending on your sunglasses type.
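For example, a minimal material along those lines (the tint color is just illustrative):
var sunglassesMaterial = new THREE.MeshBasicMaterial({
    color: 0x222222,   // dark lens tint (illustrative)
    transparent: true, // enable alpha blending for this material
    opacity: 0.6       // partially see-through
});
// Optional, depending on the sunglasses type:
// sunglassesMaterial.blending = THREE.AdditiveBlending;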

Related

Black sphere within skybox sphere forms when I try using a giant sphere to create a skybox in threejs

I'm trying to make a spherical skybox, since I think that would best fit the space scene I'm making. I'm running across two issues:
I can get the skybox to work, but for some reason a sphere appears within the giant sphere I've made. I don't think it's a real Object3D sphere; I have the feeling it's a graphical glitch of some sort.
If I make the sphere too big (like 1800 for the radius), it just disappears and stops working. If I set its side to THREE.DoubleSide, I can zoom out of the scene and still see the outer side of the sphere, but the inside, regardless of DoubleSide or BackSide, is invisible when the radius is 1800 or bigger.
let skyboxGeom = new THREE.SphereGeometry(900, 128, 128);
let skyboxMat = new THREE.MeshPhongMaterial({ map: spaceTexture });
skyboxMat.side = THREE.BackSide; // render the inside of the sphere
let skybox = new THREE.Mesh(skyboxGeom, skyboxMat);
skybox.position.set(0, 0, 0);
mainScene.add(skybox);
Here is a picture of what it looks like:
The colorful ball is correct; the inner black sphere is what appears incorrectly. If I zoom far enough into the scene, the black ball shrinks and gets smaller and smaller until it disappears entirely, but to make the black ball go away I would have to zoom in so far that none of my objects would show.
If it helps, I am using a PerspectiveCamera and am exploring the scene using OrbitControls.

Reference existing WebGL depth buffer when rendering a new ThreeJS scene

I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need my own separate ThreeJS canvas that displays both the black-box canvas data and elements that I draw myself, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black-box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
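For reference, approach (1) boils down to pointing a second three.js renderer at the existing canvas and context. A minimal sketch, assuming the black-box library lets you reach both (blackBoxCanvas and blackBoxGL are hypothetical names):
var overlayRenderer = new THREE.WebGLRenderer({
    canvas: blackBoxCanvas,     // reuse the existing DOM canvas
    context: blackBoxGL,        // and its WebGL context, if obtainable
    preserveDrawingBuffer: true // keep the black box's pixels between frames
});
overlayRenderer.autoClear = false; // don't wipe the black box's color/depth output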
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the above attempts. In the following animate loop, if I comment out the overlayRenderer lines, the sphere is red and offset from the center (as it should be), but if I don't comment them out, I get the image below. I also get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
    requestAnimationFrame(() => this.animate());
    // keep the black-box camera in sync with the overlay camera
    this.blackBoxCamera.copy(this.overlayCamera);
    this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);
    // reset three.js's cached GL state before the second renderer touches the shared context
    this.overlayRenderer.state.reset();
    this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}

Is it possible to Outline a view port in threejs?

I've created a scene with two cameras and one renderer. Each camera looks at the scene from a different angle: the first camera renders to the entire screen, and the second renders to a small viewport laid on top of the first render. I was wondering if there is a way to have that second viewport outlined so that the two views look separate.
Yes, you can outline an inset viewport by rendering a solid-colored rectangle slightly larger than the inset prior to rendering the inset.
// border
renderer.setScissorTest( true );
renderer.setScissor( x, y, width, height );
renderer.setClearColor( 0xffffff, 1 ); // border color
renderer.clearColor(); // clear color buffer
Then, render the inset. Just make sure the inset background is opaque.
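A sketch of that second step, shrinking the scissor and viewport back inside the border rectangle by some border thickness (x, y, width, height are the same illustrative values as above; border and insetCamera are hypothetical):
// inset
renderer.setScissor( x + border, y + border, width - 2 * border, height - 2 * border );
renderer.setViewport( x + border, y + border, width - 2 * border, height - 2 * border );
renderer.render( scene, insetCamera );
renderer.setScissorTest( false ); // restore normal full-canvas rendering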
three.js r.86
I guess you are using the three.js viewport feature? As far as I know, by itself it does not have such a feature.
But since it is rendered onto a canvas... maybe you could draw an outline yourself on the canvas at the desired coordinates, after each three.js render frame?
A basic example:
var c = document.getElementById("myCanvas");
var ctx = c.getContext("2d");
ctx.rect(20, 20, 150, 100);
ctx.stroke();
(reference: https://www.w3schools.com/tags/canvas_rect.asp)

three.js create texture from cubecamera

When using a cube camera one normally sets the envMap of the material to the cubeCamera.renderTarget, e.g.:
var myMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    envMap: myCubeCamera.renderTarget,
    side: THREE.DoubleSide
});
This works great for meshes that are meant to reflect or refract what the cube camera sees. However, I'd like to simply create a texture and apply that to my mesh. In other words, I don't want my object to reflect or refract. I want the face normals to be ignored.
I tried using a THREE.WebGLRenderTarget, but it won't handle a cube camera. And using a single perspective camera with a WebGLRenderTarget does not give me a 360° texture, obviously.
Finally, simply assigning the cubeCamera.renderTarget to the 'map' property of the material doesn't work either.
Is it possible to do what I want?
r73.
Edit: this is not what the author of the question is looking for; I'll keep my answer below for other people.
Your envMap is already a texture, so there's no need to apply it as a map. Also, cubemaps and plain textures are structurally different, so it won't be possible to swap them, and even if you managed it, the result is probably not what you would expect.
From what you're asking, I understand you want a static envmap instead of one that is updated each frame. If that's the case, simply don't call myCubeCamera.updateCubeMap() in your render function. Instead, place it at the end of your scene initialization, with the cube camera at your desired position; your envmap will then show only that frame.
See examples below:
Dynamic Cubemap Example
Static Cubemap Example
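A minimal sketch of that suggestion against the r73-era API (desiredPosition is an illustrative value):
// Capture the environment once, after the scene has been built.
myCubeCamera.position.copy( desiredPosition );
myCubeCamera.updateCubeMap( renderer, scene );
// Do NOT call updateCubeMap() again in the render loop, so the envMap
// keeps showing this single captured frame.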
The answer is: Set the refractionRatio on the material to 1.0. Then face normals are ignored since no refraction is occurring.
In a normal situation where the Cube Camera is in the same scene as the mesh, this would be pointless because the mesh would be invisible. But in cases where the Cube Camera is looking at a different scene, then this is a useful feature.
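Expressed against the question's original material, the fix is one extra property (the rest is unchanged from the question):
var myMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    envMap: myCubeCamera.renderTarget,
    refractionRatio: 1.0, // lookup direction is unbent, so face normals are effectively ignored
    side: THREE.DoubleSide
});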

THREE.ShaderMaterial is seen inverted by the shadow Camera

I have a mesh that I am loading from 3ds Max into three.js. I modified three.js to hold another typed array for the binormal data. It all seems to be working fine and dandy until shadows are involved. For some reason the shadow map is wrong, and it seems as if it's rendering the mesh with the faces flipped.
In this example, the shadows are showing up correctly on the floor, because the renderer has
.shadowMapCullFace = THREE.CullFaceBack
http://dusanbosnjak.com/test/webGL/new/StojadinCeo/stojadinCeo.html
I can get other shadows to show up on my shader, but self-shadowing leads to horrible artifacts, and the shadow that my mesh casts on other meshes is always inverted.
I've tried reversing the order in which the face indices come in (acb instead of abc), which flips the faces. This produces the proper shadow cast, but the mesh itself shows up flipped.
What I'm thinking of doing at the moment is exporting a flipped mesh and reversing the cull order in the ShaderMaterial, but it would be wonderful to find out why this is happening.
I basically connected the phong and shadow-mapping shader chunks with what I had.
Edit
Here is an updated scene with some shadow casters and imported meshes that receive shadows:
http://dusanbosnjak.com/test/webGL/new/StojadinCeo/stojadinCeo2.html
light = new THREE.SpotLight(0xaaaaaa);
light.position.set(10,10,10);
light.shadowCameraVisible = true;
light.shadowDarkness = .5;
light.castShadow = true;
light.shadowCameraNear = 1;
light.shadowCameraFar = 250;
light.shadowCameraFov = 57;
light.shadowMapWidth = 2048;
light.shadowMapHeight = 2048;
scene.add(light);
the rest of the meshes just have receiveShadow and castShadow set to true
The shadow shows up on the ShaderMaterial (I copied the shadow fragment shader chunk).
A THREE.Mesh() with THREE.CubeGeometry() both casts and receives shadows properly, but the shadow cast by the ShaderMaterial mesh is inverted.
I can't really isolate this to 50 lines of code, as it's a whole import/export process from Max.
I don't understand why the shadow camera would render this one particular mesh inverted while the normal camera renders it correctly, if that is indeed what is happening.
You can zoom out and move the car using WASD.
Unless you changed the default settings in three.js, only back-faces cast shadows. A work-around is to set:
renderer.shadowMapCullFace = THREE.CullFaceBack;
or
renderer.shadowMapCullFace = THREE.CullFaceNone;
But these options can lead to other issues.
The best approach is to make sure every mesh has depth. Avoid planes, like the car roof.
For example, you can add an interior liner to the car roof to give it depth.
Shadow mapping in WebGL can be tricky, so read all you can about it so you will be familiar with the issues involved.
three.js r.66
