Hey guys,
I'm trying to combine THREE.js and Kinetic.js in my web application, and I'm having problems doing this with the THREE.WebGLRenderer. How can I set up my view so that I have a 3D layer rendered by the THREE.WebGLRenderer and a separate layer on top of it for 2D elements, such as labels, using Kinetic.js?
I've tried to give the WebGLRenderer the canvas element of a Kinetic.Layer instance, but it does not work:
this.renderer = new THREE.WebGLRenderer({
    antialias: true,
    preserveDrawingBuffer: true,
    canvas: this.layer3D.getCanvas()._canvas
});
So far I have only found examples that do this with THREE.CanvasRenderer.
Any ideas? Thanks a lot.
A canvas can have either a 2D context or a 3D (WebGL) context, not both, as they are considered incompatible. When you pass the canvas from a Kinetic layer, it already has a 2D context bound to it.
However, you can place another HTML element (e.g. a DIV) on top of the WebGL-rendered canvas.
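For example, a rough sketch of that kind of layering (the element ids, sizes, and label content are assumptions for illustration):

var width = 800, height = 600;

// WebGL canvas underneath (three.js)
var renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(width, height);
renderer.domElement.style.position = 'absolute';
renderer.domElement.style.top = '0px';
renderer.domElement.style.left = '0px';
document.getElementById('container').appendChild(renderer.domElement);

// Empty DIV positioned on top of the WebGL canvas, used as the Kinetic container
var overlay = document.getElementById('overlay');
overlay.style.position = 'absolute';
overlay.style.top = '0px';
overlay.style.left = '0px';
overlay.style.zIndex = '10';   // make sure it stacks above the WebGL canvas

// 2D overlay (KineticJS) for labels etc.
var stage = new Kinetic.Stage({ container: 'overlay', width: width, height: height });
var labelLayer = new Kinetic.Layer();
labelLayer.add(new Kinetic.Text({ x: 20, y: 20, text: 'Label', fontSize: 18, fill: 'white' }));
stage.add(labelLayer);
stage.draw();

Note that the overlay sits above the WebGL canvas, so it will receive mouse events first; you can set pointer-events: none on the overlay div via CSS if the 3D view needs to stay interactive.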
Hello, I just want to say this may not be possible. As far as I know, KineticJS is based on the 2D canvas, so what you want to do is only possible using the CanvasRenderer.
The workaround I can think of: if the browser supports WebGL, you might be able to place the WebGL canvas on top of your KineticJS element.
Related
I have two different canvases: one for the background and one for the game scene
Principal canvas and background canvas setups: (screenshots omitted)
I'm having this problem: if I put an object in the principal canvas, everything seems to work, but if I add a light component to this object, I don't see the light (it is as if the background image is in front of the light):
Without the background canvas vs. with the background canvas: (screenshots omitted)
Any idea why?
(The problem is not the BG canvas itself but its image component; if I disable it, I can see the light.)
Lights are 3D scene objects.
UI objects are not affected by lights or scene objects because they exist in a completely different rendering path.
Is there a way to render a scene with a lot of objects in smaller chunks? E.g., render just the large objects first, then render the smaller objects and overlay them on the same render target. By breaking it up, I'm hoping the scene will keep a responsive framerate. It should look like this: https://forge-rcdb.autodesk.io/configurator?id=58c7ae474c6d400bfa5aaf37&_ga=2.17878013.536468240.1515526269-1844418132.1512684792
I have tried to set renderer.autoClear = false and renderer.preserveDrawingBuffer = true. It seems to work when I render synchronously, but if the renders are separated by a small time interval, the renderer clears and just shows what was last rendered.
Okay, I figured out what I was doing wrong. It turns out the "preserveDrawingBuffer" option needs to be set when the renderer is instantiated, like this:
renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
I was assigning it after it was already instantiated. Here's a demo I made if anybody's interested: https://jsfiddle.net/9tcoyhcc/2/
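For reference, a rough sketch of the chunked rendering described in the question (the group names largeGroup and smallGroup are assumptions for illustration):

var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
renderer.autoClear = false;              // keep earlier passes on the canvas
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

function renderInChunks(scene, camera, largeGroup, smallGroup) {
    renderer.clear();                    // clear once at the start of the frame
    smallGroup.visible = false;
    renderer.render(scene, camera);      // pass 1: large objects only
    largeGroup.visible = false;
    smallGroup.visible = true;
    renderer.render(scene, camera);      // pass 2: small objects overlaid on the same target
    largeGroup.visible = true;           // restore visibility for the next frame
}

The two passes can also be spread over separate animation frames, as long as nothing clears the buffer in between.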
This and this mention renderOrder, but it is undocumented. I set up a jsfiddle and it does not work; what is wrong?
http://jsfiddle.net/q4w56/y655cwqt/5/
// now mesh1 should always be on top of mesh2
mesh1.renderOrder = 1;
mesh2.renderOrder = 0;
Setting renderOrder in three.js does not cause a renderable object to be "on top". It just controls the rendering order. It can be a useful tool if some objects are transparent. If all objects in the scene are opaque, changing the rendering order will (in typical use cases) have no effect on the rendered output.
See this answer if you want some objects to render "on top".
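For completeness, the usual way to make an object render "on top" (a sketch, not taken from the linked answer) is to disable depth testing on its material in addition to raising its renderOrder:

// sketch: draw mesh1 over everything else, assuming its material is not shared
mesh1.renderOrder = 999;             // render it last
mesh1.material.depthTest = false;    // ignore the depth buffer so nothing hides it
mesh1.material.depthWrite = false;   // don't write depth either, so it can't hide later objects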
three.js r.79
I’m using three.js inside an experimental augmented-reality web browser. (The browser is called Argon. Essentially, Argon uses Qualcomm’s Vuforia AR SDK to track images and objects in the phone camera. Argon sends the tracking information into JavaScript, where it uses transparent web pages with three.js to create 3D graphics on top of the phone video feed.) My question is about three.js, however.
The data Argon sends into the web page allows me to align the 3D camera with the physical phone camera and draw 3D graphics such that they appear to align with the real world as expected. I would also like to have some of the things in the physical world occlude the 3D graphics (I have 3D models of the physical objects, because I’ve set the scene up or because they are prepared objects like boxes that are being tracked by Vuforia).
I’m wondering if folks have suggestions on the best way to accomplish this occlusion with three.js. Thanks.
EDIT: it appears that the next version of three.js (R71) will have a simpler way to do this, so if you can use the dev branch (or just wait), you can do this much more easily. See this post: three.js transparent object occlusion
MY ORIGINAL ANSWER (without using the new features in R71):
I think the best way to do this (avoiding extra work such as creating new rendering passes) is to modify the WebGL renderer (src/renderers/WebGLRenderer.js) and add support for a new kind of object; perhaps call them “occlusionObjects”.
If you look in the renderer, you will see two current object lists, opaqueObjects and transparentObjects. The renderer sorts the renderable objects into these two lists, so that it can render the opaque objects first, and then the transparent objects after them. What you need to do is store all of your new objects into the occlusionObjects list rather than those two. You will see that the opaque and transparent objects are sorted based on their material properties. I think here, you may want to add a property to an object you want to be an occluder (“myObject.occluder = true”, perhaps), and just pull those objects out.
Once you have the three lists, look what the render() function does with these object lists. You’ll see a couple of places with rendering calls like this:
renderObjects( opaqueObjects, camera, lights, fog, true, material );
Add something like this before that line, to turn off writing into the color buffers, render the occlusion objects into the depth buffer only, and then turn color buffer writes back on before you render the remaining objects.
context.colorMask(false, false, false, false);
renderObjects( occlusionObjects, camera, lights, fog, true, material );
context.colorMask(true, true, true, true);
You’ll need to do this in a couple of places, but it should work.
Now you can just mark any objects in your scene as “occluder = true” and they will only render into the depth buffer, allowing the video to show through and occluding any opaque or transparent objects rendered behind them.
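For what it's worth, in more recent three.js releases the same effect can be achieved per material, without modifying the renderer; a rough sketch (realObjectGeometry is a placeholder for your model of the physical object):

// sketch: an occluder mesh that writes only to the depth buffer
var occluderMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });
var occluder = new THREE.Mesh(realObjectGeometry, occluderMaterial);
occluder.renderOrder = -1;   // draw it before the visible objects
scene.add(occluder);

Objects behind the occluder are then hidden, while the occluder itself leaves those pixels untouched, so the camera video behind the page shows through.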
I want to generate the top and perspective view of an object.
Input: a 3D object, maybe an .obj or .dae file.
Output: image files showing the top and perspective views of the loaded object.
Here is some expected output: a perspective view of a chair and a top view of a chair (images omitted).
Can anyone give me some suggestions for solving this problem? A demo would be preferred.
You could create a small three.js scene with your OBJ or Collada file loaded using the appropriate loaders (see the examples for the specific loaders). Then create the cameras you want to have in the scene; see the orthographic and perspective camera examples that come with three.js, too.
To produce the images you want, you could use the toDataURL function; see this thread (and Google for more):
Three.js and HTML5 Canvas toDataURL
In essence, after the objects are loaded, you could do something like:
renderer.render(scene, topViewCamera);
dataurl = canvas.toDataURL();
renderer.render(scene, perspectiveCamera);
dataurl2 = canvas.toDataURL();
I think you could also use two render targets and use those for output, but if you are new to three.js, maybe start with the HTML5 toDataURL() method.
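For reference, a rough end-to-end sketch of that approach (the file name 'chair.obj', the sizes, and the camera positions are assumptions, and THREE.OBJLoader has to be included from the three.js examples):

var renderer = new THREE.WebGLRenderer({ preserveDrawingBuffer: true });
renderer.setSize(512, 512);
document.body.appendChild(renderer.domElement);

var scene = new THREE.Scene();
scene.add(new THREE.AmbientLight(0xffffff));

var perspectiveCamera = new THREE.PerspectiveCamera(45, 1, 0.1, 100);
perspectiveCamera.position.set(3, 3, 3);
perspectiveCamera.lookAt(new THREE.Vector3(0, 0, 0));

var topViewCamera = new THREE.OrthographicCamera(-2, 2, 2, -2, 0.1, 100);
topViewCamera.position.set(0, 10, 0);
topViewCamera.up.set(0, 0, -1);          // "up" in the image points along -z when looking straight down
topViewCamera.lookAt(new THREE.Vector3(0, 0, 0));

new THREE.OBJLoader().load('chair.obj', function (object) {
    scene.add(object);

    renderer.render(scene, perspectiveCamera);
    var perspectiveImage = renderer.domElement.toDataURL();   // PNG data URL of the perspective view

    renderer.render(scene, topViewCamera);
    var topImage = renderer.domElement.toDataURL();           // PNG data URL of the top view
});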