ThreeJS: selective bloom

I use EffectComposer and BloomPass to make my scene overbright. However, I need to disable it for the whole scene and apply it to selected objects only. I came up with something like this:
1. Render the scene without the selected objects.
2. Save the color buffer.
3. Clear the color buffer, but not the depth buffer.
4. Render the selected objects only.
5. Do the bloom pass.
6. Blend the saved color buffer over the result with some blend mode.
Is it worth twisting my so-called "engine" this way, or is there an easier way?
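For what it's worth, a minimal sketch of those steps could look roughly like this (assuming an existing renderer, scene, camera, a selected array of meshes and a bloomComposer built from EffectComposer/BloomPass; all names are illustrative):

const savedColor = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

// Render the scene without the selected objects and keep its color buffer.
selected.forEach(obj => { obj.visible = false; });
renderer.setRenderTarget(savedColor); // newer API; older builds pass the target to render()
renderer.render(scene, camera);

// Render only the selected objects through the bloom composer.
scene.traverse(obj => { if (obj.isMesh) obj.visible = selected.includes(obj); });
renderer.setRenderTarget(null);
bloomComposer.render();

// Restore visibility for the next frame.
scene.traverse(obj => { if (obj.isMesh) obj.visible = true; });

// Finally, blend savedColor.texture over the bloomed result, e.g. with a
// full-screen ShaderPass that adds or alpha-blends the two images.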

Related

Separate materials in an instancedMesh react three fiber

I essentially have a bunch of geometries on which I need to display unique, updating text.
The approach I've been using to display the text is a canvas texture which (along with the mesh position) is constantly updated in a useFrame.
However, the only way I've been able to get the texture to work is as follows, and all geometries are obviously sharing it.
<instancedMesh ref={meshRef} args={[null, null, intervalData.length]}>
  <circleBufferGeometry args={[sizes.radius ?? 0.6, sizes.segments ?? 48]}>
    <instancedBufferAttribute attachObject={['attributes', 'color']} args={[colorArray, 3]} />
  </circleBufferGeometry>
  <meshStandardMaterial vertexColors={THREE.VertexColors} map={texture} />
</instancedMesh>
What would be the way to set the textures per instance? Is there somewhere I can store an array of textures and assign them to the mesh?
Probably pretty late for you, but maybe this helps if someone else is stuck with the same question. I solved it like this: https://codesandbox.io/s/instancedmesh-with-different-textures-forked-iy5xh?file=/src/App.js
Though I am passing each texture separately, this has the downside that you can typically only pass 16 textures to a single shader, so you may have to use a texture atlas (basically a single texture composed of multiple textures, plus a couple of extra attributes to crop the particular part out of the whole texture).
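For the atlas route, a rough sketch (assuming the same r3f version as the question; names like uvOffset, uvOffsets, atlasTexture, cols and rows are made up for illustration): give each instance an offset into the atlas and shift the UVs in a small custom shader.

<instancedMesh ref={meshRef} args={[null, null, count]}>
  <circleBufferGeometry args={[0.6, 48]}>
    {/* one atlas cell origin per instance, e.g. [col / cols, row / rows] */}
    <instancedBufferAttribute attachObject={['attributes', 'uvOffset']} args={[uvOffsets, 2]} />
  </circleBufferGeometry>
  <shaderMaterial
    uniforms={{
      atlas: { value: atlasTexture }, // one big texture holding all the text canvases
      cellSize: { value: new THREE.Vector2(1 / cols, 1 / rows) },
    }}
    vertexShader={`
      uniform vec2 cellSize;
      attribute vec2 uvOffset;
      varying vec2 vUv;
      void main() {
        vUv = uvOffset + uv * cellSize; // crop this instance's cell out of the atlas
        gl_Position = projectionMatrix * modelViewMatrix * instanceMatrix * vec4(position, 1.0);
      }
    `}
    fragmentShader={`
      uniform sampler2D atlas;
      varying vec2 vUv;
      void main() { gl_FragColor = texture2D(atlas, vUv); }
    `}
  />
</instancedMesh>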
Any performance boost from InstancedMesh would probably be surpassed by using a Sprite instead, not to mention being more useful.

Three.js - is there a simple way to process a texture in a fragment shader and get it back in javascript code using GPUComputationRenderer?

I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object.
As this texture is not meant to change over time, I want to process it only once.
I think that GPUComputationRenderer could do the trick, but I can't figure out how, or what the minimal code to achieve this would be.
"I need to procedurally generate a texture in a shader and get it back in my JavaScript code in order to apply it to an object."
Sounds like you just want to perform basic RTT (render-to-texture). In this case, I suggest you use THREE.WebGLRenderTarget. The idea is to set up a simple scene with a full-screen quad and a custom instance of THREE.ShaderMaterial containing your shader code that produces the texture. Instead of rendering to the screen (or default framebuffer), you render to the render target. In the next step, you can use this render target as a texture in your actual scene.
Check out the following example that demonstrates this workflow.
https://threejs.org/examples/webgl_rtt
GPUComputationRenderer is actually intended for GPGPU which I don't think is necessary for your use case.
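A minimal sketch of that workflow (assuming an existing renderer; the shader bodies are just placeholders for your procedural pattern):

const target = new THREE.WebGLRenderTarget(512, 512);

// Offscreen scene: a full-screen quad driven by your procedural shader.
const rtScene = new THREE.Scene();
const rtCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 1);
const rtMaterial = new THREE.ShaderMaterial({
  vertexShader: 'varying vec2 vUv; void main() { vUv = uv; gl_Position = vec4(position, 1.0); }',
  fragmentShader: 'varying vec2 vUv; void main() { gl_FragColor = vec4(vUv, 0.5, 1.0); }',
});
rtScene.add(new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), rtMaterial));

// Render once into the target instead of the screen (newer API; older builds
// pass the target to render() directly).
renderer.setRenderTarget(target);
renderer.render(rtScene, rtCamera);
renderer.setRenderTarget(null);

// Use the result as a texture in the actual scene.
const material = new THREE.MeshBasicMaterial({ map: target.texture });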

Depth component readRenderTargetPixels in Three.js?

Can depth pixel values be extracted from THREE.WebGLRenderer, similar to the .readRenderTargetPixels functionality? Basically, is there an update to this question? My starting point is Three.js r80. Normalized values are fine if I can also convert them to distances.
Related methods:
I see that WebGL's gl.readPixels does not support gl.DEPTH_COMPONENT like OpenGL's .glReadPixels does.
THREE.WebGLRenderTarget does support a .depthTexture via THREE.WebGLRenderer's WEBGL_depth_texture extension, although THREE.DepthTexture does not contain .image.data the way THREE.DataTexture does.
I also see that THREE.WebGLShadowMap uses .renderBufferDirect with a THREE.MeshDepthMaterial.
Data types:
A non-rendered canvas can use .getContext('2d') with .getImageData(x, y, w, h).data to get the top-to-bottom pixels as a Uint8ClampedArray.
For a rendered canvas, render() uses getContext('webgl'), and a canvas can only hold one kind of context, so getImageData cannot be used.
Instead, render to a target and use .readRenderTargetPixels(...myArrToCopyInto...) to copy out the bottom-to-top pixels into your Uint8Array.
Any canvas can use .toDataURL("image/png") to return a String in the pattern "data:image/png;base64,theBase64PixelData".
You can't directly get the content of the framebuffer's depth attachment using readPixels, whether it's a RenderBuffer or a (depth) texture.
You have to write the depth data into the color attachment.
You can render your scene using MeshDepthMaterial, like the shadow mapping technique does. You end up with the depth RGBA-encoded in the color attachment, which you can read back using readPixels (still RGBA-encoded). It means you have to render your scene twice: once for the depth and once to display the scene on screen.
If the depth you want matches what you show on screen (same camera/point of view), you can use WEBGL_depth_texture to render the depth and the visible scene in a single render pass. This can be faster if your scene contains lots of objects/materials.
Finally, if your hardware supports OES_texture_float, you should be able to draw the depth data into a LUMINANCE/FLOAT texture instead of RGBA. That way you get floating-point depth data directly and skip a costly decoding step in JS.
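A rough sketch of the first option above (MeshDepthMaterial into a render target, then readRenderTargetPixels), using the r80-era API mentioned in the question; names like depthTarget are illustrative:

const depthTarget = new THREE.WebGLRenderTarget(width, height);
const depthMaterial = new THREE.MeshDepthMaterial();
depthMaterial.depthPacking = THREE.RGBADepthPacking; // pack depth into the RGBA color output

scene.overrideMaterial = depthMaterial;              // render everything with the depth material
renderer.render(scene, camera, depthTarget);         // r80-style; newer builds use renderer.setRenderTarget(depthTarget)
scene.overrideMaterial = null;

const pixels = new Uint8Array(width * height * 4);   // bottom-to-top RGBA bytes
renderer.readRenderTargetPixels(depthTarget, 0, 0, width, height, pixels);

// Decode one texel (raw 0-255 byte values) back to a normalized [0,1] depth,
// the inverse of the packDepthToRGBA shader chunk (alpha carries the most
// significant byte). Converting this to a distance additionally needs the
// camera near/far, since perspective depth is non-linear.
function unpackRGBAToDepth(r, g, b, a) {
  return (a + b / 256 + g / 65536 + r / 16777216) / 256;
}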

Occlusion of real-world objects using three.js

I’m using three.js inside an experimental augmented-reality web browser. (The browser is called Argon. Essentially, Argon uses Qualcomm’s Vuforia AR SDK to track images and objects in the phone camera. Argon sends the tracking information into Javascript, where it uses transparent web pages with three.js to create 3D graphics on top of the phone video feed.) My question is about three.js, however.
The data Argon sends into the web page allows me to align the 3D camera with the physical phone camera and draw 3D graphics such that they appear to align with the real world as expected. I would also like to have some of the things in the physical world occlude the 3D graphics (I have 3D models of the physical objects, because I’ve set the scene up or because they are prepared objects like boxes that are being tracked by Vuforia).
I’m wondering if folks have suggestions on the best way to accomplish this occlusion with three.js. Thanks.
EDIT: it appears that the next version of three.js (R71) will have a simpler way to do this, so if you can use the dev branch (or just wait), you can do this much more easily. See this post: three.js transparent object occlusion
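For reference, in newer three.js releases that simpler route boils down to something like this sketch (assuming you have a mesh matching the physical object; realObjectGeometry is a placeholder name):

// A material that writes depth but not color: the mesh occludes anything drawn
// behind it while the video feed stays visible where it is.
const occluderMaterial = new THREE.MeshBasicMaterial({ colorWrite: false });
const occluder = new THREE.Mesh(realObjectGeometry, occluderMaterial);
occluder.renderOrder = -1; // draw occluders before the visible objects
scene.add(occluder);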
MY ORIGINAL ANSWER (without using the new features in R71):
I think the best way to do this (to avoid extra work, such as creating new rendering passes) is to modify the WebGL renderer (src/renderers/WebGLRenderer.js) and add support for a new kind of object; perhaps call them “occlusionObjects”.
If you look in the renderer, you will see two current object lists, opaqueObjects and transparentObjects. The renderer sorts the renderable objects into these two lists, so that it can render the opaque objects first, and then the transparent objects after them. What you need to do is store all of your new objects into the occlusionObjects list rather than those two. You will see that the opaque and transparent objects are sorted based on their material properties. I think here, you may want to add a property to an object you want to be an occluder (“myObject.occluder = true”, perhaps), and just pull those objects out.
Once you have the three lists, look what the render() function does with these object lists. You’ll see a couple of places with rendering calls like this:
renderObjects( opaqueObjects, camera, lights, fog, true, material );
Add something like this before that line, to turn off writing into the color buffers, render the occlusion objects into the depth buffer only, and then turn color buffer writes back on before you render the remaining objects.
context.colorMask( false, false, false, false);
renderObjects( occluderObjects, camera, lights, fog, true, material );
context.colorMask(true, true, true, true);
You’ll need to do this in a couple of places, but it should work.
Now you can just mark any objects in your scene as “occluder = true” and they will only render into the depth buffer, allowing the video to show through and occluding any opaque or transparent objects rendered behind them.

Multi-window support for opengles2

Recently I have been writing a game editor for my project. I want to implement an editor which has four viewports, like 3ds Max or other 3D software.
So, how can I use OpenGL ES 2 to render into multiple windows?
You can usually have multiple views, each with its own frame buffer. In this case all you need to do is bind the correct frame buffer before drawing to each of the views. You might also need a separate context for each view, and to make it current before drawing (also before binding the frame buffer). If you need multiple contexts, you will have to find a way to share resources between them, though.
Another approach is having a single view and simply using glViewport to draw to different parts of it. In this case you set glViewport for a specific part, set the ortho or frustum projection (if the view segments are of different sizes), and that is it. For instance, if you split a view whose buffer has dimensions bWidth and bHeight into 4 equal rectangles and you want to refresh the top-right one:
glViewport(bWidth*.5f, bHeight*.5f, bWidth*.5f, bHeight*.5f); // top-right quadrant (GL origin is bottom-left)
glOrthof(.0f, bWidth*.5f, bHeight*.5f, .0f, .1f, 1.0f); //same for each in this case
//do all the drawing
and when you are finished with everything you want to update, just present the frame buffer.
