Is there a way to render a scene with a lot of objects in smaller chunks? E.g., render just the large objects first, then render the smaller objects and overlay them on the same render target. By breaking it up, I'm hoping the scene will keep a responsive framerate. It should look like this: https://forge-rcdb.autodesk.io/configurator?id=58c7ae474c6d400bfa5aaf37&_ga=2.17878013.536468240.1515526269-1844418132.1512684792
I have tried to set renderer.autoClear = false and renderer.preserveDrawingBuffer = true. It seems to work when I render synchronously, but if the renders are separated by a small time interval, the renderer clears and only shows what was rendered last.
Okay I figured out what I was doing wrong. Turns out the "preserveDrawingBuffer" field needs to be set when the renderer is instantiated like this:
renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
I was assigning it after the renderer was already instantiated. Here's a demo I made if anybody's interested: https://jsfiddle.net/9tcoyhcc/2/
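For reference, a minimal sketch of the chunked rendering described above (largeScene and smallScene are illustrative names for a split of the objects; the camera is shared):
renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
renderer.autoClear = false; // keep what earlier render() calls drew
renderer.clear(); // clear once at the start of a "frame"
renderer.render(largeScene, camera); // draw the large objects first
setTimeout(function () {
    renderer.render(smallScene, camera); // overlay the small objects a bit later
}, 16);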
NOTE: It appears I oversimplified my initial question. See below for the edit.
I'm trying to combine the technique shown in the clipping/stencil example of Three.js, which uses the stencil buffer to render 'caps' when clipping geometry, with an EffectComposer-based rendering pipeline, but I am running into some difficulties. A fiddle demonstrating the problem can be found at https://jsfiddle.net/2vc76ajd/1/.
The EffectComposer has two passes: a RenderPass and a ShaderPass using CopyShader (see code below).
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
var shaderPass = new ShaderPass(CopyShader);
shaderPass.enabled = false;
composer.addPass(shaderPass);
The former renders the scene as usual; the latter merely copies the render target onto a fullscreen quad. If I disable the ShaderPass, everything works as intended: the geometry is clipped, and the cutting planes are drawn in a different color:
When the ShaderPass is enabled by clicking the 'copy pass' checkbox in the upper right, however, the entire cutting plane gets rendered, rather than just the 'caps':
Presumably there is some interaction here between offscreen render targets and stencil buffers. However, I have so far been unable to find a way to have subsequent render passes look the same as the initial render. Can anyone tell me what I am missing?
EDIT: While WestLangley's answer solved my initial problem, it unfortunately doesn't work when using an SSAOPass, which is what I was doing before simplifying the problem for this question. I have posted an updated fiddle at https://jsfiddle.net/bavL98hf/1/, which includes the proposed fix and now toggles between a RenderPass and an SSAOPass. With SSAO turned on, the result is this:
I have tried setting stencilBuffer to true on all the render targets used in SSAOPass in addition to the ones in EffectComposer, but sadly that doesn't work this time. Can anyone tell me what else I am overlooking?
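(For reference, the fix mentioned above presumably amounts to giving the EffectComposer a render target that keeps a stencil buffer, roughly like this; the parameters are the standard WebGLRenderTarget options:)
var target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight, {
    minFilter: THREE.LinearFilter,
    magFilter: THREE.LinearFilter,
    format: THREE.RGBAFormat,
    stencilBuffer: true // the composer's default target has no stencil buffer
});
composer = new EffectComposer(renderer, target);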
I need to render a single specific mesh from a scene into a texture using a THREE.WebGLRenderTarget. I have already achieved that, during the rendering of the scene, all other meshes except this one are ignored, so I basically reached my goal. The thing I hate is that there is still a lot of unnecessary work going on for my whole scene graph during the render process. I need to render this texture every frame, so with my current method I get extreme fps drops (there are lots of meshes in the whole scene graph).
What I found is the "renderBufferImmediate" function of THREE.WebGLRenderer (link to the renderer source code here). My pseudocode to achieve my goal would look like this:
var mesh = some_Mesh;
var renderer = some_WebGLRenderer;
var renderTarget = some_WebGLRenderTarget;
renderer.setRenderTarget(renderTarget); // draw into the offscreen target
var materialProperties = renderer.properties.get(mesh.material);
var program = materialProperties.program; // compiled shader program of the material
renderer.renderBufferImmediate(mesh, program, mesh.material);
var texture = renderTarget.texture; // the resulting texture
The renderBufferImmediate function takes an instance of a THREE.Object3D, a WebGL shader program and a THREE.Material. The problem I see here: the implementation of this function tries to look up properties on the Object3D which, afaik, don't exist (like "hasPositions" or "hasNormals"). In short: my approach doesn't work.
I would be grateful if someone could tell me whether I can use this function for my purpose (meaning I am currently using it wrong) or whether there is another solution to my problem.
Thanks in advance.
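One alternative that avoids renderBufferImmediate entirely (a sketch of my own, not something from the thread): keep a small dedicated scene containing only that mesh and render it to the target each frame, so the large scene graph is never traversed.
// Assumes mesh, camera, renderer and renderTarget from the pseudocode above.
// Note: Object3D.add() reparents the mesh, so it is removed from the main scene;
// clone it or move it back if it also has to stay visible there.
var textureScene = new THREE.Scene();
textureScene.add(mesh);

function renderMeshToTexture() {
    renderer.setRenderTarget(renderTarget);
    renderer.render(textureScene, camera);
    renderer.setRenderTarget(null); // restore rendering to the canvas
}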
As an example, I tried to set up an AR scene in three.js.
I use "aruco.js" to do that. When I load an OBJ or any other model everything works great. However, when the marker is placed in front of the camera it gets detected, but the scene flickers/bumps violently. Is there any obvious reason why this is happening?
As live demo would be hard to set up, I just uploaded a video on YouTube to illustrate my point: https://youtu.be/9jMso7vmw1M
So my exact question is: what is the best way to make AR scene stick to the marker without any flicker?
Code in jsfiddle: https://jsfiddle.net/6cw3ta57/
var markerObject3D = new THREE.Object3D();
scene.add(markerObject3D);
// hide the markerObject3D until the marker is detected
markerObject3D.visible = false;
// (the rest of the setup code is in the jsfiddle linked above)
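The thread doesn't include an answer, but a common mitigation for this kind of marker jitter (an assumption on my part, not from the original post) is to low-pass filter the detected pose instead of applying it to markerObject3D directly:
// Illustrative sketch: updateMarkerPose and its arguments are hypothetical names.
var smoothing = 0.2; // 0 = frozen, 1 = raw detection
function updateMarkerPose(detectedPosition, detectedQuaternion) {
    markerObject3D.position.lerp(detectedPosition, smoothing);
    markerObject3D.quaternion.slerp(detectedQuaternion, smoothing);
}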
Hey guys,
I'm trying to combine THREE.js and Kinetic.js in my web application. I'm having problems doing this with the THREE.WebGLRenderer. How can I set up my view so that I have a 3D layer rendered by the THREE.WebGLRenderer and a separate layer on top of it for 2D elements, e.g. labels etc., using Kinetic.js?
I've tried to give the WebGLRenderer the canvas element of a Kinetic.Layer instance, but it does not work.
this.renderer = new THREE.WebGLRenderer({
antialias: true,
preserveDrawingBuffer: true,
canvas: this.layer3D.getCanvas()._canvas
});
So far I have only found examples that do this with the THREE.CanvasRenderer.
Any ideas? Thanks a lot.
A canvas can have either a 2D context or a WebGL context, not both, as they are considered incompatible. When you pass the canvas from the Kinetic layer, it already has a 2D context bound to it.
However, you can have another HTML element (e.g. a DIV) on top of the GL-rendered canvas.
Hello, I just want to say this may not be possible. As far as I know, KineticJS is based on Canvas, so what you want to do is only possible using the CanvasRenderer.
The workaround I can think of is: if the browser supports WebGL, you might be able to place the WebGL element on top of your KineticJS element.
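For what it's worth, the overlay approach from these answers might look roughly like this (a sketch; the element ids, sizes and label are made up, and the KineticJS stage canvas is transparent by default so the WebGL canvas shows through):
var container = document.getElementById('container'); // position: relative
// WebGL canvas underneath
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(800, 600);
container.appendChild(renderer.domElement);
// absolutely positioned div stacked over the WebGL canvas
var overlay = document.getElementById('overlay');
overlay.style.position = 'absolute';
overlay.style.top = '0';
overlay.style.left = '0';
// transparent KineticJS stage for the 2D labels
var stage = new Kinetic.Stage({ container: 'overlay', width: 800, height: 600 });
var labelLayer = new Kinetic.Layer();
labelLayer.add(new Kinetic.Text({ x: 20, y: 20, text: 'Label', fontSize: 18, fill: 'white' }));
stage.add(labelLayer);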
I'm trying to render two different scenes and cameras on top of each other, like a HUD. Both render correctly on their own. This also works as intended, so you can see the mainscene under the helpscene:
renderer.render(mainscene, maincamera);
renderer.render(helpscene, helpcamera);
When I'm using EffectComposer to render the main scene, I can not see helpscene at all, I only see the results of composer rendering:
renderTargetParameters = { minFilter: THREE.LinearFilter, magFilter: THREE.LinearFilter, format: THREE.RGBAFormat, stencilBuffer: false };
renderTarget = new THREE.WebGLRenderTarget( width, height, renderTargetParameters );
composer = new THREE.EffectComposer(renderer, renderTarget);
---- cut out for brevity ---
composer.render(delta);
renderer.render(helpscene, helpcamera); // has no effect whatsoever on the screen, why?
What is happening here? Again, if I comment out either render call, each works correctly, but with both enabled I only see the composer scene/rendering. I would expect the helpscene to overlay (or at least overwrite) whatever is rendered before it.
I have quite complex code before renderer.render(helpscene, helpcamera); it might take various render paths and use the EffectComposer or not, based on different settings. But I want the helpscene to always take the simple route with no effects or anything, which is why I'm using a separate render call rather than incorporating it as an EffectComposer pass.
EDIT: Turns out it is caused by some funny business with the depth buffer. If I set material.depthTest = false on everything in the helper scene, it shows up more or less correctly. It looks like the depth is set to zero or very low by some composer pass or by the composer itself, and, rather unexpectedly, that has the effect of hiding anything rendered by subsequent render calls.
Because I'm only using LineMaterial in the helper scene it will do for now, but I expect problems further down the road with the depthTest = false workaround (I might have some properly shaded objects there later, which would need depth testing against other objects inside the same helper scene).
So I guess the REAL QUESTION IS: how do I reset the depth buffer (or whatever it is) after the EffectComposer, so that further render calls are not affected by it? I can also do the helper scene rendering as the last composer pass; it does not make much difference.
I should maybe mention that in one of my composer setups the main RenderPass renders the scene as a texture onto a distorted plane geometry placed near a perspective camera created for that purpose (like the orthographic camera & quad setup found in many postprocessing examples, but with distortion). The other setup has a "normal" RenderPass with the actual scene camera, where I would expect the depth information to be such that I should see the helper scene anyway. I am having the same problem with both alternatives.
...and answering myself. After finding the real cause, the fix is quite simple:
renderer.clear(false, true, false); will clear the depth buffer, so the overlay render works as expected :)
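Put together, the overlay rendering would look something like this (a sketch based on the code above; renderer.clearDepth() is equivalent to renderer.clear(false, true, false)):
renderer.autoClear = false; // we handle clearing ourselves
function render(delta) {
    renderer.clear(); // clear color/depth/stencil for the main pass
    composer.render(delta); // main scene with postprocessing
    renderer.clearDepth(); // drop the depth left behind by the composer
    renderer.render(helpscene, helpcamera); // HUD overlay on top
}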