How do I reduce opacity of geometry that is hidden from view? - three.js

I have a 3D model of a house with some lines drawn on it in teal. I'd like to reduce the opacity of lines that should not be visible to the camera because they are hidden behind parts of the roof.
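One possible approach: raycast from the camera toward each line and fade the ones the roof occludes. A minimal sketch, assuming houseMesh is the house model, line is one of the teal THREE.Line objects, and camera is the active camera (all names are placeholders):

const raycaster = new THREE.Raycaster();
const midpoint = new THREE.Vector3();

function updateLineOpacity(line, houseMesh, camera) {
  // Use the line's bounding-sphere center as a cheap occlusion probe.
  line.geometry.computeBoundingSphere();
  midpoint.copy(line.geometry.boundingSphere.center).applyMatrix4(line.matrixWorld);

  // Cast a ray from the camera toward that point and see what it hits first.
  raycaster.set(camera.position, midpoint.clone().sub(camera.position).normalize());
  const hits = raycaster.intersectObject(houseMesh, true);
  const occluded = hits.length > 0 &&
    hits[0].distance < camera.position.distanceTo(midpoint) - 0.001;

  line.material.transparent = true;
  line.material.opacity = occluded ? 0.2 : 1.0;
}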

Related

Shader with alpha masking other objects

I'm trying to make a very simple shadow shader, which consists of a plane with a shader showing a radial gradient in color and alpha.
Beneath this shadow lies another plane with the same kind of shader but linear.
And as a background of all this, a linear gradient from dark blue to light blue.
The problem is that when my camera approaches the ground, the plane of the shadow masks the floor.
Why does it happen and what can I do to prevent that?
https://codesandbox.io/s/epic-sun-po9j3
https://po9j3.csb.app/
You'd need to post code to check for sure, but it likely happens because three.js sorts the order it draws things based on the center of each object and its distance from the camera.
You can force a different order by setting Object3D.renderOrder.
three.js also generally draws opaque things before transparent things, so my guess is that your ground plane and your shadow plane are both set to transparent: true. The ground can be set to transparent: false, in which case it will be drawn first.
You might find this article useful. It shows a similar example.
As for why there is a hole: it's because of the depth buffer. If something in front gets drawn first, the pixels behind it are not drawn. So if the shadow happens to be drawn first, it ends up looking like a hole, because the pixels of the plane behind it are never drawn.
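A minimal sketch of those fixes, assuming ground and shadow are the two meshes from the sandbox (the names are placeholders):

ground.material.transparent = false; // opaque pass: drawn before transparent objects
shadow.material.transparent = true;  // stays in the transparent pass
shadow.material.depthWrite = false;  // don't write depth, so it can't punch holes
shadow.renderOrder = 1;              // and force it to draw after other transparent objects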

fading a MeshBasicMaterial threejs

I have a MeshBasicMaterial applied to a PlaneGeometry. The material runs around the plane like an outline and is green initially. I have a clock in the app, and I need to reduce the length of the outline every second to alert the user that their time is running out. The outline should flow clockwise from the top of the plane, fading and disappearing when the time runs out. I hope you get the idea. Can anyone help me achieve this?
One way to solve this: create a ring geometry and animate its thetaLength parameter using animejs or tweenjs.
Alternatively, create a ring in a 3D modeling app with UV mapping coordinates that run along the length of the mesh. Then create a gradient texture that is half black, half white (1x256). Apply the texture to the material on the ring mesh and animate the texture offset using animejs.
If you really want the best performance, create a procedural texture using a GLSL shader and map that onto a plane.
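A minimal sketch of the first suggestion without a tween library, shrinking the ring's thetaLength from a THREE.Clock each frame (the ten-second countdown and the names are assumptions):

const outline = new THREE.Mesh(
  new THREE.RingGeometry(0.9, 1.0, 64),
  new THREE.MeshBasicMaterial({ color: 0x00ff00, side: THREE.DoubleSide })
);
scene.add(outline);

const clock = new THREE.Clock();
const duration = 10; // seconds until the outline disappears

function update() {
  const remaining = Math.max(0, 1 - clock.getElapsedTime() / duration);
  // thetaLength is a constructor argument, so rebuild the geometry each frame.
  outline.geometry.dispose();
  outline.geometry = new THREE.RingGeometry(
    0.9, 1.0, 64, 1,
    Math.PI / 2,            // start at the top of the ring
    remaining * Math.PI * 2 // shrink the arc as time runs out
  );
}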

Reference existing WebGL depth buffer when rendering a new ThreeJS scene

I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need to have my own separate ThreeJS canvas that displays both the black box canvas data, and then elements that I draw, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the above attempts. In the following animate loop, if I comment out the overlayRenderer lines, the sphere is red and offset from the center (as it should be), but if I don't comment them out, I get the image below. I also get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
  requestAnimationFrame(() => this.animate());
  this.blackBoxCamera.copy(this.overlayCamera);
  this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);
  this.overlayRenderer.state.reset();
  this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}
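In outline, approach (1) would point a second renderer at the same canvas and suppress clearing, so the black box's depth buffer still masks my objects (a sketch, assuming blackBoxCanvas is the existing DOM element):

const overlayRenderer = new THREE.WebGLRenderer({
  canvas: blackBoxCanvas,
  context: blackBoxCanvas.getContext('webgl'), // reuse the existing GL context
});
overlayRenderer.autoClear = false; // keep the black box's color and depth output

function animate() {
  requestAnimationFrame(animate);
  // three.js caches GL state, and the black box changes it behind three.js's back.
  overlayRenderer.state.reset();
  overlayRenderer.render(overlayScene, overlayCamera);
}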

Creating a magnifying-glass effect in three.js WebGL

I'm working with an orthographic view in the three.js WebGL renderer, and I want a magnifying glass that tracks the user's mouse. I'm looking for an efficient way to do this.
When working with raw HTML5 canvas commands, this was easy: I simply defined a circular clip region, zoomed my coordinates, and re-drew the whole scene. With 3D objects, it's less obvious how to do it.
The method I've found so far is to do the following:
Define a second camera that looks into the zoomed region. Set the orthographic clip coordinates to be small so that it doesn't need to do much work
Create a THREE.WebGLRenderTarget
Tell the renderer and all my line textures that the resolution is about to change
Render the scene into the RenderTarget
Add a CircleGeometry as a Mesh at the mouse position (in world coordinates, but above the rest of the scene, close to the camera). Call this the lens.
Give the lens the WebGLRenderTarget as a texture.
Go back to my default camera, reset all my resolution parameters, and redraw the scene with the 'lens' object added.
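Roughly, the two passes look like this (zoomCamera is the second camera from the first step; the sizes are placeholders):

const lensTarget = new THREE.WebGLRenderTarget(512, 512);
const lens = new THREE.Mesh(
  new THREE.CircleGeometry(0.1, 64),
  new THREE.MeshBasicMaterial({ map: lensTarget.texture })
);
scene.add(lens);

function renderFrame() {
  lens.visible = false; // don't capture the lens in its own texture
  renderer.setRenderTarget(lensTarget);
  renderer.render(scene, zoomCamera); // pass 1: zoomed region into the target
  renderer.setRenderTarget(null);
  lens.visible = true;
  renderer.render(scene, camera); // pass 2: full scene with the lens on top
}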
This works (see image below) but I'm worried about parts of it:
I have to render twice per frame
Lines don't draw well because of the resolution problems. I have to keep track of all materials that need to know the screen resolution and update all of them twice per frame.
Related problems:
I want to overlay some plot axes on top of this, and possibly gridlines. These would change as the view pans. I'm not sure if I should make these 3D objects, or draw them in a 2D canvas context laid over the top.
I want to overlay some plot lines, and have them show up sensibly in the zoomed view. "Sensible" here is hard to figure out: I don't want them too fat in the zoomed view, but I also don't want to scale them up as much as the image detail (which is being rendered as a texture onto Plane objects behind).
This is a long post, but I'm still new to three.js and looking for good ideas.

getting sprites to work with three.js and different camera types

I've got a question about getting sprites to work with three.js using perspective and orthographic cameras.
I have a building being rendered in one scene. At one location in the scene all of the levels are stacked on top of each other to give a 3D view of the building and an orthogonal camera is being used to view it. In another part of the scene, I have just the selected level of the building being shown and a perspective camera is being used. The screen is divided between the two views. The idea being the user selects a level from the building view and a more detailed map of that selected level is shown on the other part of the screen.
I played around with sprites for a little bit, and as far as I understand it: if the sprite is viewed with a perspective camera, then the sprite's scale property is effectively its size property, and if the sprite is viewed with an orthographic camera, the scale property scales the sprite relative to the viewport.
I placed the sprite where both cameras can see it, and this seems to be the case. If I scale the sprite by 0.5, the sprite takes up half the orthographic camera's viewport and I can't see it with the perspective camera (presumably because, for it, the sprite is 0.5px x 0.5px and is either rounded to 0px, not rendered, or to 1px, effectively invisible). If I scale the sprite by, say, 50, then the perspective camera can see it (presumably because it's a 50px x 50px square) and the orthographic camera is overtaken by the sprite (presumably because it's being scaled to 50 times the viewport).
Is my understanding correct?
I ask because in the scene I'm rendering, the building and detailed areas are ~1000 units apart on the x-axis. If I place a sprite somewhere on the detail map, I need it to be ~35x35 pixels; when I do this it works fine for the detail view, but the building view is overtaken. I played with the numbers, and it seems that if I scale the sprite by 4, it starts to show up in my building view, even though there's a 1000-unit distance between the views and the sprite isn't visible with the perspective camera.
So, if my understanding is correct, then I need to either use separate scenes, have a much bigger gap between the views, use the same camera type for both views, or not use sprites.
There are basically two different ways you can use sprites, either with 2D screen coordinates or 3D scene coordinates. Perhaps scene coordinates are what you need? For examples of both, check out the example at:
http://stemkoski.github.io/Three.js/Sprites.html
and in particular, when you zoom in and zoom out in that demo, notice that the sprites in-scene will change size, while the others do not.
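In current three.js terms, a minimal sketch of the two approaches might look like this (assuming texture, scene, camera, and renderer already exist):

// In-scene sprite: positioned and scaled in world units, so it changes
// apparent size as the camera moves or zooms.
const worldSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
worldSprite.scale.set(5, 5, 1); // world units
scene.add(worldSprite);

// Screen-space sprite: a separate orthographic HUD scene where 1 unit = 1 pixel,
// so the sprite keeps a fixed pixel size regardless of the main camera.
const hudScene = new THREE.Scene();
const hudCamera = new THREE.OrthographicCamera(
  -window.innerWidth / 2, window.innerWidth / 2,
  window.innerHeight / 2, -window.innerHeight / 2, 1, 20
);
hudCamera.position.z = 10;
const hudSprite = new THREE.Sprite(new THREE.SpriteMaterial({ map: texture }));
hudSprite.scale.set(35, 35, 1); // pixels
hudScene.add(hudSprite);

// Render the world, then the HUD on top without clearing the color buffer.
renderer.autoClear = false;
function animate() {
  requestAnimationFrame(animate);
  renderer.clear();
  renderer.render(scene, camera);
  renderer.clearDepth(); // so HUD sprites always draw over the scene
  renderer.render(hudScene, hudCamera);
}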
Hope this helps!
