Combined perspective and orthographic rendering - three.js

I have a question regarding drawing objects in screen space. My issue is that I want to draw both in perspective and in screen space (orthographic). I have tried using two cameras and two scenes, but when I render the second scene, the renderer clears the first.
Is there a way to do this without using two separate renderers?
The reason I want to do this is that I want to overlay selection graphics and other objects in screen space on top of the perspective render.

Solved using one renderer, two scenes, two cameras, and renderer.autoClear = false.
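A minimal sketch of that setup; the names (perspScene, hudScene, and so on) are illustrative, not from the original post:

    const width = window.innerWidth, height = window.innerHeight;

    const renderer = new THREE.WebGLRenderer({ antialias: true });
    renderer.setSize(width, height);
    renderer.autoClear = false; // clear manually, once per frame
    document.body.appendChild(renderer.domElement);

    const perspScene = new THREE.Scene(); // the 3D content
    const perspCamera = new THREE.PerspectiveCamera(60, width / height, 0.1, 1000);
    perspCamera.position.z = 5;

    const hudScene = new THREE.Scene(); // screen-space overlay (selection graphics, etc.)
    const hudCamera = new THREE.OrthographicCamera(0, width, height, 0, -1, 1);

    function render() {
        renderer.clear();                         // clear color and depth once
        renderer.render(perspScene, perspCamera); // perspective pass
        renderer.clearDepth();                    // so the overlay always passes the depth test
        renderer.render(hudScene, hudCamera);     // orthographic overlay pass
    }

Clearing only the depth buffer between the two passes keeps the perspective image intact while letting the overlay draw on top of everything.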

Related

three.js: Render 2d projection

I want to render a cube together with flat projections of its faces onto nearby planes.
My problem is how to render the face projections.
I tried using Reflector, but it is tricky to size and position it so that it captures just the face I want, and it also shows the sides.
I also saw that I can render to a separate canvas (I imagine using an orthographic camera), but I want everything in the same canvas. I saw an example with multiple views, but it seems they can't be positioned behind one another.
So, is there a way to achieve this?
One possible approach to solve the issue:
Set up an orthographic camera such that its frustum encloses the cube. You can then position the camera in front of each side of the cube, use lookAt( cube.position ) to orient it properly, and render the scene into a render target; you need one render target per side. Each target can then be used as a texture on the respective plane mesh.
There is an official live example that demonstrates how RTT (render-to-texture) is done with three.js. Try to use it as a code template for your own app.
https://threejs.org/examples/#webgl_rtt
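A rough sketch of that recipe for a single face, assuming cube, scene, renderer, and a projectionPlane mesh already exist in your app, and a cube edge length of 1:

    const rt = new THREE.WebGLRenderTarget(256, 256);

    // Frustum just encloses one 1 x 1 face of the cube.
    const faceCamera = new THREE.OrthographicCamera(-0.5, 0.5, 0.5, -0.5, 0.1, 10);
    faceCamera.position.copy(cube.position).add(new THREE.Vector3(0, 0, 2)); // front face
    faceCamera.lookAt(cube.position);

    projectionPlane.visible = false; // don't capture the projection itself
    renderer.setRenderTarget(rt);    // render this face into the target...
    renderer.render(scene, faceCamera);
    renderer.setRenderTarget(null);  // ...then back to the canvas
    projectionPlane.visible = true;

    projectionPlane.material = new THREE.MeshBasicMaterial({ map: rt.texture });

Repeat with one render target and one camera position per side; hiding the projection planes while capturing avoids feedback loops.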

Reference existing WebGL depth buffer when rendering a new ThreeJS scene

I have an existing WebGL canvas that is being rendered without using ThreeJS, and is for all intents and purposes a black box to me, apart from two facts: (1) I have access to the underlying webgl canvas DOM element and can position and resize it on the screen, and (2) I know the properties of the camera for the scene, and get updates on every render cycle for that camera.
The problem I need to solve can be simplified to the following: I need to have my own separate ThreeJS canvas that displays both the black box canvas data, and then elements that I draw, like a cube for a simple example. I can already easily overlay the two canvases, set the transparency on my canvas for everything but the cube, and then align the two with the camera events from the black box library. This works quite well.
The issue with this is that when I draw my objects, like a cube, they don't respect the depth buffer of the black box canvas. So I might have a cube that is properly aligned with the backing scene and movements of the scene, but then it isn't properly masked when something in the black box canvas is closer to the camera than the cube. My thought is that I need to solve this in one of two ways: (1) I can have my renderer write to the other canvas with autoClear = false and preserveDrawingBuffer = true, or (2) I can somehow copy the depth buffer from the black box canvas into my canvas, and then set up my renderer so that it respects the new depth buffer.
I haven't been successful with either approach yet, so I'm wondering if this is possible, and if so which of the above approaches, or what other approach, can solve this problem?
--Edit--
See https://jsfiddle.net/zdxyoajb/ for an Angular/TypeScript implementation of the above attempts. In the following animate loop, if I comment out the overlayRenderer lines, the sphere renders red and offset from the center (as it should be); if I leave them in, the render breaks and I get the following error:
WebGL: INVALID_OPERATION: uniformMatrix4fv: location is not from current program
animate() {
    requestAnimationFrame(() => this.animate());

    // Keep the two cameras in sync so the scenes stay aligned.
    this.blackBoxCamera.copy(this.overlayCamera);

    // First pass: the black box scene through its own renderer.
    this.blackBoxRenderer.render(this.blackBoxScene, this.blackBoxCamera);

    // Reset the overlay renderer's cached GL state, then draw the overlay.
    this.overlayRenderer.state.reset();
    this.overlayRenderer.render(this.overlayScene, this.overlayCamera);
}
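Since the error suggests three.js is acting on stale GL state, one direction worth trying is approach (1) with a single shared canvas and context, so both libraries write into the same drawing and depth buffer. A sketch, where blackBoxCanvas, blackBoxGl, and blackBox.render() are hypothetical handles exposed by the black box library:

    // Hand three.js the existing canvas and context instead of creating its own.
    const overlayRenderer = new THREE.WebGLRenderer({
        canvas: blackBoxCanvas, // assumed: the black box's canvas DOM element
        context: blackBoxGl     // assumed: its WebGLRenderingContext
    });
    overlayRenderer.autoClear = false; // keep the black box's color and depth output

    function animate() {
        requestAnimationFrame(animate);
        blackBox.render();             // the non-three.js pass (assumed API)
        overlayRenderer.state.reset(); // discard three.js's cached GL state
        overlayRenderer.render(overlayScene, overlayCamera);
    }
    animate();

Because both passes share one depth buffer, the overlay objects are depth-tested against the black box geometry. Newer three.js releases also expose renderer.resetState() as the documented way to do the same reset.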

Creating a magnifying-glass effect in three.js WebGL

I'm working with an orthographic view in the three.js WebGL renderer, and I want a magnifying glass that tracks the user's mouse. I'm looking for the most efficient way of doing this.
When working with raw HTML5 canvas commands, this was easy: I simply defined a circular clip region, zoomed my coordinates, and redrew the whole scene. With 3D objects, it's less obvious how to do it.
The method I've found so far is to do the following (a rough code sketch appears after this list):
Define a second camera that looks into the zoomed region. Set the orthographic clip coordinates to be small so that it doesn't need to do much work
Create a THREE.WebGLRenderTarget
Tell the renderer and all my line materials that the resolution is about to change
Render the scene into the RenderTarget
Add a CircleGeometry as a THREE.Mesh at the mouse position (in world coordinates, but above the rest of the scene, close to the camera). Call this the lens.
Give the lens the WebGLRenderTarget as a texture.
Go back to my default camera, reset all my resolution parameters, and redraw the scene with the 'lens' object added.
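A minimal sketch of those steps, assuming an orthographic mainCamera and an existing scene and renderer, and treating zoomFactor, lensRadius, and mouseWorld (the cursor unprojected into world space) as helpers from your app:

    const lensTarget = new THREE.WebGLRenderTarget(512, 512);

    const zoomCamera = mainCamera.clone(); // a second orthographic camera
    const lens = new THREE.Mesh(
        new THREE.CircleGeometry(lensRadius, 64),
        new THREE.MeshBasicMaterial({ map: lensTarget.texture })
    );
    scene.add(lens);

    function renderFrame() {
        // Shrink the zoom camera's frustum to the region under the cursor.
        zoomCamera.left   = mouseWorld.x - lensRadius / zoomFactor;
        zoomCamera.right  = mouseWorld.x + lensRadius / zoomFactor;
        zoomCamera.top    = mouseWorld.y + lensRadius / zoomFactor;
        zoomCamera.bottom = mouseWorld.y - lensRadius / zoomFactor;
        zoomCamera.updateProjectionMatrix();

        lens.visible = false;                // don't capture the lens itself
        renderer.setRenderTarget(lensTarget);
        renderer.render(scene, zoomCamera);  // zoomed pass into the target
        renderer.setRenderTarget(null);

        lens.position.set(mouseWorld.x, mouseWorld.y, 1); // z chosen to sit above the scene (assumed)
        lens.visible = true;
        renderer.render(scene, mainCamera);  // normal pass with the lens on top
    }

This still renders the scene twice per frame, which appears to be inherent to the approach.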
This works, but I'm worried about parts of it:
I have to render twice per frame
Lines don't draw well because of the resolution problems; I have to keep track of all materials that need to know the screen resolution and update all of them twice per screen render.
Related problems:
I want to overlay some plot axes on top of this, and possibly gridlines. These would change as the view pans. I'm not sure whether to make these 3D objects or draw them in a 2D canvas context laid over top.
I want to overlay some plot lines and have them show up sensibly in the zoomed view. "Sensible" here is hard to pin down: I don't want them too fat in the zoomed view, but I also don't want to scale them up as much as the image detail (which is rendered as a texture onto plane objects behind).
This is a long post, but I'm still new to three.js and looking for good ideas.

Animated Sprites blocked by Canvas in Unity

I have managed to animate a series of sprites by dragging a group of sprites into the Hierarchy. However, when I enabled the Canvas, the Canvas was blocking all the sprites. What should I do to display the animation on the Canvas? I have tried adjusting the layers and camera modes, but to no avail.
I'm not sure if I understood your question correctly, but I take it you want your sprites displayed on top of the UI. You can use two cameras and adjust their depths so that one displays the animation layer (make this one the top camera) and the other displays the UI layer.
Use Culling Masks in Camera
How to use Multiple Cameras

Three.js Canvas rendering

I have a question about Three.js with Canvas rendering:
I use Canvas rendering to be fully compatible; speed is not important to me. But I have two viewports, each showing the same scene, and a textured object renders in only one of the views, depending on the rendering order :( I've been stuck on this for a week, so is this a normal "feature"?
You need to set up two separate renderers, attach them to separate HTML elements, and use CSS z-index to layer them on top of each other. As Mr. Doob commented, it won't save you any computation or memory.
It is convenient because you can use the same scene (meshes, materials, lights) with different cameras, as sketched below.
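A minimal sketch of that layout, assuming the legacy THREE.CanvasRenderer from the three.js examples build (it has been removed from recent releases; with a current release, substitute THREE.WebGLRenderer):

    // One scene, two cameras, two renderers attached to two stacked elements.
    const scene = new THREE.Scene();
    scene.add(new THREE.Mesh(
        new THREE.BoxGeometry(1, 1, 1),
        new THREE.MeshNormalMaterial()
    ));

    const cameraA = new THREE.PerspectiveCamera(60, 1, 0.1, 100);
    cameraA.position.set(0, 0, 5);
    const cameraB = new THREE.PerspectiveCamera(60, 1, 0.1, 100);
    cameraB.position.set(5, 0, 0);
    cameraB.lookAt(scene.position);

    const rendererA = new THREE.CanvasRenderer();
    const rendererB = new THREE.CanvasRenderer();
    rendererA.setSize(400, 400);
    rendererB.setSize(400, 400);

    // 'viewA' and 'viewB' are assumed container divs, stacked with CSS z-index.
    document.getElementById('viewA').appendChild(rendererA.domElement);
    document.getElementById('viewB').appendChild(rendererB.domElement);

    function animate() {
        requestAnimationFrame(animate);
        rendererA.render(scene, cameraA);
        rendererB.render(scene, cameraB);
    }
    animate();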
