Multiple Render Targets in Three.js - three.js

Are multiple render targets supported in Three.js? I mean, using a single fragment shader to write to several render targets.
I have a fragment shader that calculates several things and I need to output them separately into different textures for further manipulation.

Multiple render targets are not supported in three.js.
If you are interested in GPGPU within the framework of three.js, you can find a nice example here: http://jabtunes.com/labs/3d/gpuflocking/webgl_gpgpu_flocking3.html. Just be aware that it uses an older version of three.js.
three.js r.60

Though the above answer was correct at the time it was written, that is no longer the case.
With the introduction of WebGL2, three.js does indeed support multiple render targets.
https://threejs.org/docs/#api/en/renderers/WebGLMultipleRenderTargets
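For reference, a minimal sketch of how that looks, assuming a WebGL2 context and a three.js release that still ships THREE.WebGLMultipleRenderTargets (newer releases fold the same feature into WebGLRenderTarget):

// Two color attachments written by a single fragment shader.
const mrt = new THREE.WebGLMultipleRenderTargets(window.innerWidth, window.innerHeight, 2);

const material = new THREE.RawShaderMaterial({
  glslVersion: THREE.GLSL3,
  vertexShader: `
    in vec3 position;
    in vec2 uv;
    uniform mat4 modelViewMatrix;
    uniform mat4 projectionMatrix;
    out vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    precision highp float;
    in vec2 vUv;
    layout(location = 0) out vec4 gColor; // ends up in mrt.texture[0]
    layout(location = 1) out vec4 gAux;   // ends up in mrt.texture[1]
    void main() {
      gColor = vec4(vUv, 0.0, 1.0);
      gAux = vec4(1.0 - vUv, 0.0, 1.0);
    }`
});

// One draw call fills both attachments; use mrt.texture[0] / mrt.texture[1] afterwards.
renderer.setRenderTarget(mrt);
renderer.render(scene, camera);
renderer.setRenderTarget(null);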

Related

How to apply multiple textures on one model in aframe

I have a model in Maya/Blender which has multiple UVs.
I thought the .mtl had all the info about materials/textures (as I can see the links in the .mtl), but apparently I have to link every texture to an object with src="texture.jpg".
Is there any other way than combining those textures in Photoshop/GIMP, or breaking my model into separate .obj files, each with its own texture?
Should I look more into the custom shading options in aframe/three.js (registerShader)?
The OBJ/MTL format does not support multiple UV sets. It may not support multiple materials on the same geometry either, I'm not sure. FBX and Collada do support multiple UVs, so you could try one of those.
But, searching for "threejs multiple UVs" shows that it is not easy to do multiple UVs without custom shaders, even once you have a newer model format. I would maybe try to bake your multiple UVs down into a single set in the modeling software, if that's possible.
MTL files can associate different texture maps with different material groups in the OBJ file, but the OBJ file can only describe a single set of UVs per poly face. Whether or not your OBJ writer or THREE's OBJ reader supports it is a different matter.
On a side note: the actual Wavefront OBJ spec is interesting in that it supported all kinds of things no one implemented after 1999 or so, including NURBS patches with trim curves and 1D texture maps (essentially LUTs).
https://en.wikipedia.org/wiki/Wavefront_.obj_file
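For what it's worth, three.js itself can carry a second UV set on a single geometry, but its built-in materials only use it for ambient-occlusion and light maps, which is why anything fancier tends to require a custom shader. A minimal sketch, assuming a reasonably recent three.js where the second set lives in the uv2 attribute (colorTexture and bakedAoTexture are placeholders):

const geometry = new THREE.BoxGeometry(1, 1, 1);

// Reuse the first UV set as the second one purely for illustration;
// a real asset would export its own second set.
geometry.setAttribute('uv2', new THREE.BufferAttribute(geometry.attributes.uv.array, 2));

const material = new THREE.MeshStandardMaterial({
  map: colorTexture,    // sampled with uv
  aoMap: bakedAoTexture // sampled with uv2
});

const mesh = new THREE.Mesh(geometry, material);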

Order independent transparency and mixed opaque and translucent object hierarchies

We use three.js as the foundation of our WebGL engine, and up until now we have only used traditional alpha blending with back-to-front sorting (which we customized a little to match the desired behavior) in our projects.
Goal: Our goal is now to incorporate the order-independent transparency algorithm proposed by McGuire and Bavoil in this paper, to rid ourselves of the usual problems with sorting and conventional alpha blending in complex scenes. I got it working without much hassle in a small three.js-based prototype.
Problem: The problem we have in the WebGL engine is that we're dealing with object hierarchies consisting of both opaque and translucent objects, which are currently added to the same scene so that three.js will handle transform updates. This, however, is a problem: for the above algorithm to work, we need to render to one or more FBOs (more than one due to the lack of MRT support in three.js r79) to calculate accumulation and revealage, and finally blend the result with the front buffer to which the opaque objects were previously rendered. This is in fact what I do in my working prototype.
I am aware that three.js already does separate passes for both types of objects, but I'm not aware of any way to influence which render target three.js renders to (render(.., .., rt, ..) is not applicable) or how to modify the other pipeline state I need. If a mixed hierarchy is added to a single scene, I have no idea how to tell three.js where my fragments are supposed to end up. In addition, I need to reuse the depth buffer from the opaque pass during the transparent pass, with depth testing enabled but depth writes disabled.
Solution A: Now, the first obvious answer would be to simply set up two scenes and render opaque and translucent objects separately, choosing the render targets as we please, and finally do our compositing as needed.
This would be fine, except we would have to perform, or at least trigger, all transformation calculations manually to achieve correct hierarchical behavior. So far, this seems to be the most feasible option.
Solution B: We could render the scene twice, setting the visible flag of all opaque or all transparent materials to false depending on which pass we're currently doing.
This is a variant of Solution A, but with a single scene instead of two. It would spare us the manual transformation calculations, but we would have to alter the materials of all objects per pass - not my favorite, but definitely worth thinking about.
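A rough sketch of what Solution B could look like, using the newer renderer.setRenderTarget API rather than the r79-era render(scene, camera, target) signature; accumTarget and the opaque/translucent classification are illustrative:

const opaque = [];
const translucent = [];
scene.traverse(obj => {
  if (obj.isMesh) (obj.material.transparent ? translucent : opaque).push(obj);
});
const setVisible = (list, v) => list.forEach(o => { o.visible = v; });

// Pass 1: opaque objects to the default framebuffer (or an FBO of your choice).
setVisible(translucent, false);
renderer.setRenderTarget(null);
renderer.render(scene, camera);

// Pass 2: translucent objects into the accumulation/revealage target,
// with depthWrite disabled on their materials and depth testing left on.
setVisible(translucent, true);
setVisible(opaque, false);
renderer.setRenderTarget(accumTarget); // a WebGLRenderTarget created elsewhere
renderer.render(scene, camera);

// Restore visibility and composite accumTarget over the opaque result (not shown).
setVisible(opaque, true);
renderer.setRenderTarget(null);

Note that this alone does not share the opaque pass's depth buffer with the translucent pass; getting that kind of control is exactly what would have needed patching in r79.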
Solution C: Patch three.js as to allow for more control of the rendering process.
The simplest approach here would be to tell the renderer which objects to render when render() is called, or to introduce something like renderOpaque() and renderTransparent().
Another way would be to somehow define the concept of a render pass and then render based on the information for that pass (e.g. which objects, which render target, how the pipeline is configured and so on).
Is anyone aware of other, readily accessible approaches? Am I missing something, or am I overcomplicating this?

Unity 3D combine texture

I have a helm, a sword and a shield which use one texture each, so three draw calls. I want to get them to use a single texture so the draw calls go down to one, but without combining them into one mesh, as I need to be able to disable any of them at random, and the sword's and shield's positions can change when attacking or when dropped to the ground. Is it doable?
If so, how? I'm new to this, thanks.
To save on draw calls, you can use the same material for all three objects without combining their meshes. Then you create a texture file that has the three textures next to each other, and edit the UV maps for the models to use their own parts of the combined texture.
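The UV edit itself is just an offset-and-scale into each model's cell of the combined texture. A rough, engine-agnostic sketch of the arithmetic (written as JavaScript; the names and the 2x2 layout are made up for illustration):

// Remap a UV coordinate into its cell of an atlas with cellsPerRow x cellsPerRow cells.
function remapToAtlas(uv, cellX, cellY, cellsPerRow) {
  const scale = 1 / cellsPerRow;
  return { u: (cellX + uv.u) * scale, v: (cellY + uv.v) * scale };
}

// If the sword occupies cell (1, 0) of a 2x2 atlas, its UV (0.5, 0.5)
// becomes (0.75, 0.25) in the combined texture.
remapToAtlas({ u: 0.5, v: 0.5 }, 1, 0, 2);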
It's possible to do, and requires what are called Texture Atlases. I believe this is often done as an optimization step with the frequently used smaller textures that comprise a scene.
I don't think the free version of Unity has built-in support for this (I might be wrong in assuming that the Pro version supports it natively), but I believe there are also plugins. A quick Google search found "Texture Packer", which appears to do what you want; the paid version is $15, but there's a free version too, so it's worth a closer look: http://forum.unity3d.com/threads/texture-packer-unity-tutorial.184596/
I don't have experience with any of these yet as I'm not at a stage where I'm trying to do this with my project, but when I get there I think Texture Packer is where I'll start.

How to combine shader effects in threejs

Coming from a Flash background, I'm used to creating a fragment shader in the following manner:
filters = [];
filters.push(new BasicFilter(SomeTexture));
filters.push(new NormalMapFilter(SomeOtherTexture));
myShader = new Shader(filters);
As a result, I could combine a number of effects freely and easily without needing to write a large separate shader each time.
In case of threejs, I noticed that for complex visual effects, a single shader is written, like here: http://threejs.org/examples/#webgl_materials_bumpmap_skin
Is it possible to write e.g. a bump map shader and environment map shader separately and then combine them dynamically when needed? What would be the most proper way of doing this?
It is not possible directly in three.js. Three creates shaders by passing the shader code as a string directly to a WebGL shader object to compile. There is no mechanism to automatically build complex shaders; you have to write your own. Three.js conveniently adds a couple of uniforms/attributes to the shader, but you have to write what's done with them.
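To make that concrete, here is a minimal ShaderMaterial sketch: position, uv and the two matrices are injected by three.js, but everything the shader actually does with them is hand-written (someTexture is a placeholder):

const material = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: someTexture }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv; // uv, position and the matrices are supplied by three.js
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D map;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(map, vUv);
    }`
});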
You can now use EffectComposer to combine shaders.
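A minimal sketch of that approach; note that EffectComposer chains full-screen post-processing passes rather than merging material features (such as bump plus environment mapping) inside one mesh's shader. The import paths assume the examples/jsm layout, and the tint pass is just a placeholder effect:

import * as THREE from 'three';
import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';

const composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));

// Each ShaderPass reads the previous pass's output from the tDiffuse uniform.
composer.addPass(new ShaderPass({
  uniforms: {
    tDiffuse: { value: null },
    color: { value: new THREE.Color(0xffaa00) }
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }`,
  fragmentShader: `
    uniform sampler2D tDiffuse;
    uniform vec3 color;
    varying vec2 vUv;
    void main() {
      gl_FragColor = texture2D(tDiffuse, vUv) * vec4(color, 1.0);
    }`
}));

// In the animation loop, call composer.render() instead of renderer.render().
composer.render();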

OpenGL Render to texture

I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, and I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to work. I cannot use #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy though, even if just GPU-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example for rendering to a texture and using this texture for rendering afterwards. Since the question lacks specific information about the particular problems you encountered with your approach, those rather general googlable pointers to the usual render-to-texture resources need to suffice for now.
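For completeness, the FBO flow looks roughly like this; the sketch uses the WebGL flavour of the API (assuming an existing context in gl), but the call names and order closely mirror the desktop GL entry points the answer refers to:

const width = 512, height = 512;

// Colour attachment: the texture the scene will be rendered into.
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

// Depth attachment so the offscreen pass is depth-tested correctly.
const depth = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, depth);
gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, width, height);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, depth);
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) !== gl.FRAMEBUFFER_COMPLETE) {
  throw new Error('framebuffer incomplete');
}

// Pass 1: render the scene into the texture.
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.viewport(0, 0, width, height);
// ... draw the scene ...

// Pass 2: back to the default framebuffer, sample tex on a screen-sized quad
// (this is where the downscale and blur for the bloom would happen).
gl.bindFramebuffer(gl.FRAMEBUFFER, null);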
