Is it possible to render to an FBO texture once and then use the resulting texture handle when rendering all following frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then just reuse it. However, if I simply add a flag so the depth texture is rendered only once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after a frame has finished rendering? What would be the right way to preserve the rendered texture for the following frames?
Rendering to a texture is no different from uploading those pixels to the texture in the first place. The contents of a texture do not magically disappear; a texture's contents change only when you change them. That could be by uploading data to the texture, or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc.).
Unless you do something to explicitly change the data stored in a texture, it won't change.
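To make that concrete, here is a minimal sketch of the render-once pattern in WebGL-style code; the shadow FBO/texture setup and the two render* helpers are assumptions for illustration, not part of your code:

let shadowMapDone = false;

function drawFrame() {
  if (!shadowMapDone) {
    // Render the static depth map exactly once.
    gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFbo);
    gl.viewport(0, 0, SHADOW_SIZE, SHADOW_SIZE);
    gl.clear(gl.DEPTH_BUFFER_BIT);
    renderDepthFromLight();   // hypothetical depth-only pass
    shadowMapDone = true;
  }

  // Unbind the FBO so subsequent clears and draws cannot touch the
  // depth texture; its contents persist from frame to frame.
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, shadowDepthTexture);
  renderSceneWithShadows();   // hypothetical main pass sampling the map
}

A common cause of the empty-texture symptom is clearing or drawing while the shadow FBO is still bound, or sampling the depth texture while it is still attached to the currently bound framebuffer (a feedback loop with undefined results).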
Related
I have a large number of meshes in three.js. To render them efficiently, I merge them by material. However, I also want to be able to select them with the mouse.
My approach is the following: in one rendering pass I bake the merged meshes into a texture, and in a second pass I render only the highlighted mesh as a transparent overlay. So far it almost works, except that the visibility is wrong. The problem is that the WebGLRenderTarget I use stores only the color buffer into the texture. I would actually need a second texture to fetch the depth buffer from, ideally without a third rendering pass. I did not find anything related in the three.js documentation. Any ideas?
I think you need to think differently. You cannot use a depth texture to write into the depth buffer. The only way to write into the depth buffer is to render primitives.
How about this:
Bake your scene into a texture, but render depth into the on-screen depth buffer.
Keep that depth buffer intact for the second pass.
Render your baked texture with depth tests and depth writes disabled: gl.disable(gl.DEPTH_TEST); gl.depthMask(false);
Render your selected object(s) with the highlight material. A rough sketch of the whole sequence follows.
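In WebGL-style pseudocode, that could look like this (every render/draw helper below is a hypothetical placeholder for your own passes):

// Pass 1: bake the merged scene into a texture, and also write its
// depth into the on-screen depth buffer (e.g. via a depth-only pass).
renderMergedSceneToTexture(bakedTexture);   // hypothetical
renderMergedSceneDepthOnly();               // hypothetical, on-screen

// Pass 2: splat the baked texture without touching the depth buffer.
gl.disable(gl.DEPTH_TEST);
gl.depthMask(false);
drawFullScreenQuad(bakedTexture);           // hypothetical

// Pass 3: draw the highlighted object with depth testing back on, so
// the depth written in pass 1 occludes it correctly.
gl.enable(gl.DEPTH_TEST);
gl.depthMask(true);
drawHighlightOverlay();                     // hypothetical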
I am using sprite sheets to create animated textures with THREE.js. Each sprite instance uses texture offsets to control which of the images to present on a given frame. Multiple animated sprites may be on the screen at once.
Currently I am using Texture.clone() to duplicate the sprite sheet texture. However, unless I set Texture.needsUpdate to true, the texture will not display on the sprites. Setting needsUpdate to true lets me display multiple independent animated sprites at once, but unfortunately it causes the texture memory to be duplicated on the card (/ integrated chip). Using the Chrome WebGL Inspector I can clearly see that the sprite texture has been duplicated as many times as there are animated sprites rendered.
Is there any way to clone / reuse the texture with different offsets for each instance without duplicating the memory? Is this a bug, or am I doing something wrong?
THREE.js r67
Update:
One way that we have gotten around this (not a great way, I admit) is to duplicate the GL texture ID assigned to the original texture and mark the cloned texture as already initialized:
clonedTexture.__webglTexture = origTexture.__webglTexture;
clonedTexture.__webglInit = true;
This requires that the texture has already been sent to the card, which we force with renderer.setTexture(origTexture...).
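Put together, the workaround looks roughly like this (r67-era code; __webglTexture and __webglInit are private renderer internals that may change in any release, and the setTexture slot and offset math are illustrative assumptions):

// Force the original sprite sheet to be uploaded to the GPU once.
renderer.setTexture(origTexture, 0);

// Clone so each sprite gets independent offsets, but alias the GL
// texture so the pixel data exists only once in GPU memory.
var clonedTexture = origTexture.clone();
clonedTexture.__webglTexture = origTexture.__webglTexture;
clonedTexture.__webglInit = true;                 // skip the re-upload
clonedTexture.offset.set(col / cols, row / rows); // per-instance frame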
This is something that's in the works. Keep an eye out for when THREE.Image lands.
If I render a big 1024x1024 texture where most of the texture is transparent (only about 40% of it has opaque data), is that slower than rendering a texture with a smaller transparent area?
I ask because, when rendering an animation, it is easier to set the pivot of each sprite in the image itself, so that at render time I only need to draw each sprite centered on my object's position.
Trimming the transparency would be more performant, because your image would be smaller (less texture memory and bandwidth), but I doubt it will make a noticeable difference. So the way you are doing it right now is good; that's how I do it.
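If you ever do decide to trim the transparent borders, the usual trick is to store a per-frame pivot next to each trimmed image so the sprites still line up; a sketch with an invented data layout:

// Invented metadata, e.g. exported by a texture packer.
var frame = {
  texture: trimmedTexture,  // image with transparent borders cut off
  pivotX: 12,               // pivot inside the trimmed image, in pixels
  pivotY: 30
};

// Draw so the pivot lands exactly on the object's position.
function drawSprite(objX, objY) {
  blit(frame.texture, objX - frame.pivotX, objY - frame.pivotY); // hypothetical draw call
}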
I'm trying to render a set of textured geometry to a single FBO, and then render that FBO to the scene. The problem is that overlapping semi-transparent areas of that geometry are not rendered correctly. They end up too opaque and dark.
If I render the geometry directly to the scene it renders correctly, but I need to render it first to the FBO.
By the way, I'm using the following blending setup (per "opengl - blending with previous contents of framebuffer"):
rendering geometry to the FBO: glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE)
rendering the FBO to the scene: glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
I'm doing this for OpenGL ES 2.0.
Actually, I found that the problem was the blending function used to render the geometry to the FBO.
Instead of:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE)
I have to use:
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
And now it works as well as if I were rendering the geometry directly to the scene.
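For reference, the corrected two-pass setup in WebGL syntax, which mirrors the GL ES 2.0 calls above (the two render helpers are placeholders):

// Pass 1: draw the semi-transparent geometry into the FBO.
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.enable(gl.BLEND);
gl.blendFuncSeparate(
    gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA,  // RGB, as before
    gl.ONE, gl.ONE_MINUS_SRC_ALPHA);       // alpha: the fix
renderGeometry();                          // hypothetical helper

// Pass 2: composite the FBO texture into the scene. The alpha that
// accumulated in pass 1 acts as premultiplied coverage, hence gl.ONE
// as the source factor.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
drawFullScreenQuad(fboTexture);            // hypothetical helper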
I have a working Core Video setup (a frame captured from a USB camera via QTKit), and the current frame is rendered as a texture on an arbitrary plane in 3D space in a subclassed NSOpenGLView. So far so good, but I would like to apply some Core Image filters to this frame.
I now have the basic code set up, and it renders my unprocessed video frame like before, but the final processed output CIImage is rendered as a screen-aligned quad into the view. It feels like an image blitted over my 3D rendering. This is what I do not want!
I am looking for a way to process my video frame (a CVOpenGLTextureRef) with Core Image and render just the resulting image on my plane in 3D.
Do I have to use offscreen rendering (save the viewport, set a new viewport plus modelview and projection matrices, and render into an FBO), or is there an easier way?
Thanks in advance!
Try GPUImage! It's easy to use, and faster than Core Image processing. It uses predefined or custom GLSL shaders.