Copy luminance/alpha texture to texture in GLES - opengl-es

I'm trying to copy one luminance texture into another. More specifically, I'm trying to extend a single-channel texture in memory with another texture.
My current steps:
Create an FBO with a luminance-format attachment at the new width and height.
Bind the FBO.
Render the textures concatenated.
Unbind the FBO.
Create a luminance texture the size of the FBO.
Bind the FBO with the luminance texture attached.
Render the FBO's previous texture.
Unbind the FBO.
However, the GLES 2.0 docs state that FBOs can't render to luminance textures. How do I work with single-byte textures then? Is it possible to copy luminance textures on the GPU anyway?

I ended up using RGBA_4444, which is 2 bytes per pixel instead of 1, so a 100% size increase, but still acceptable.
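A minimal sketch of that workaround, assuming GLES 2.0; srcA/srcB, their dimensions, and the drawTexturedQuad() helper are placeholders, not from the original post:

    /* Destination texture in RGBA4444, since luminance is not
     * color-renderable in core GLES 2.0. */
    GLuint dstTex;
    glGenTextures(1, &dstTex);
    glBindTexture(GL_TEXTURE_2D, dstTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, totalWidth, height, 0,
                 GL_RGBA, GL_UNSIGNED_SHORT_4_4_4_4, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    /* Attach it to an FBO. */
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, dstTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle error */
    }

    /* Render the two source textures side by side into the destination. */
    glViewport(0, 0, widthA, height);
    glBindTexture(GL_TEXTURE_2D, srcA);
    drawTexturedQuad();                  /* hypothetical helper */

    glViewport(widthA, 0, widthB, height);
    glBindTexture(GL_TEXTURE_2D, srcB);
    drawTexturedQuad();

    /* Back to the default framebuffer; dstTex now holds the concatenation. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);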

Related

glTF exported from Blender with a metallic texture and no roughness texture is loaded into three.js with the roughness texture matching the metallic texture

If the metallic texture is being reused (same UUID) as the roughness texture and I don't need the roughness texture, would it be more performant to set the roughnessMap to null?
I don't understand why the metallic texture is being reused so an explanation of that would be appreciated as well.
The .metalnessMap uses the blue channel while the .roughnessMap uses the green channel of the same texture. This is done to save filesize and memory, since using a single RGB texture for multiple purposes is more cost-effective than using separate RGB textures.
Ambient occlusion uses the red channel of the assigned texture, so you could potentially have three separate maps for the price of one!
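For illustration only, here is a small CPU-side sketch of how such a packed occlusion/roughness/metalness texture could be assembled from three 8-bit grayscale buffers; the function and buffer names are made up and are not part of glTF or the three.js API:

    #include <stdint.h>
    #include <stdlib.h>

    /* Pack three 8-bit grayscale maps into one RGBA buffer following the
     * channel convention described above: R = ambient occlusion,
     * G = roughness, B = metalness. Upload the result as one texture. */
    uint8_t *pack_orm(const uint8_t *ao, const uint8_t *rough,
                      const uint8_t *metal, int width, int height)
    {
        uint8_t *rgba = malloc((size_t)width * height * 4);
        for (int i = 0; i < width * height; ++i) {
            rgba[i * 4 + 0] = ao[i];     /* red:   occlusion */
            rgba[i * 4 + 1] = rough[i];  /* green: roughness */
            rgba[i * 4 + 2] = metal[i];  /* blue:  metalness */
            rgba[i * 4 + 3] = 255;       /* alpha: unused    */
        }
        return rgba;
    }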

Improve texture resolution in GLSL fragment shader

Is there a way to increase, or improve, the resolution of a texture using GLSL fragment shader processing? Let's say I have a 512x424 px source image and want a 1024x848 px result, with smooth pixels.
Update: by "improvement" I mean enlarging using some sort of resampling algorithm.
Create an FBO and attach the large destination texture with the desired dimensions.
Render a "full-screen" textured quad with the small source texture bound.

Subrect drawing to texture FBO within OpenGL ES Fragment Shader

I'm trying to draw to a subrect of a texture-based FBO, but am having difficulty. The FBO has dimensions of, say, 500x500, and I am trying to have the fragment shader redraw only a 20x20 pixel subrect. Modifying the full texture works without difficulty.
At first I tried setting glViewport to the needed subrect, but it doesn't look to be that simple. I suspect that the vertex attributes affecting gl_Position and the varying texture coordinates are involved, but I can't figure out how.
It turns out I was trying to modify the texture coordinate attributes, but it was easier to just adjust the viewport using glViewport and use gl_FragCoord within the shader.
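A sketch of that viewport-based approach; fbo, x, y, and drawQuad() are placeholders, and the glScissor call is an extra safeguard that the original answer does not mention:

    /* Restrict rendering to a 20x20 region of the 500x500 FBO texture,
     * with its lower-left corner at (x, y). */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(x, y, 20, 20);

    /* Optional: also clip with the scissor test so clears and stray
     * fragments cannot touch the rest of the texture. */
    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, 20, 20);

    drawQuad();                              /* hypothetical draw call */

    glDisable(GL_SCISSOR_TEST);
    /* Inside the fragment shader, gl_FragCoord still reports coordinates
     * in the full 500x500 FBO, so it tells you where the fragment lands. */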

OpenGL ES : Pre-Rendering to FBO texture

Is it possible to render to an FBO texture once and then use the resulting texture handle when rendering all following frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then just reuse it after that. However, if I simply add a flag so the depth texture is rendered only once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after rendering a frame has completed? What would be the right way to preserve the rendered texture for rendering the following frames?
Rendering to a texture is no different than if you had uploaded those pixels to the texture in the first place. The contents of a texture do not magically disappear. A texture's contents are changed when you change them. This could be by uploading data to the texture, or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc).
Unless you do something to explicitly change the data stored in a texture, it won't change.
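A minimal sketch of the render-once pattern the question describes, assuming shadowFbo/shadowTex were created elsewhere with a suitable depth (or depth-encoding color) attachment; the draw helpers are hypothetical:

    static GLuint shadowFbo = 0, shadowTex = 0;
    static int shadowReady = 0;

    void render_frame(void)
    {
        if (!shadowReady) {
            /* One-time pass: render the static scene's depth into the FBO. */
            glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
            glViewport(0, 0, SHADOW_SIZE, SHADOW_SIZE);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            drawSceneDepthOnly();              /* hypothetical helper */
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
            shadowReady = 1;
        }

        /* Every frame: just sample the already-filled texture. Nothing
         * here should bind shadowFbo again, clear it, or re-upload
         * shadowTex, or its contents will change. */
        glBindTexture(GL_TEXTURE_2D, shadowTex);
        drawSceneWithShadows();                /* hypothetical helper */
    }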

OpenGL ES 1.1: change pixel data on the FBO

Is there a way to get the FBO's pixel data, greyscale it quickly, and then put the image back into that FBO?
If you're using the fixed-function pipeline (ES 1.1), you can use glReadPixels to pull pixel data off the GPU so you can process it directly. Then you'd need to create a texture from that result, and render a quad mapped to the new texture. But this is a fairly inefficient way of accomplishing the result.
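A rough sketch of that readback path; width, height, and resultTex are placeholders:

    /* ES 1.1 path: read the FBO's pixels back to the CPU (slow). */
    GLubyte *pixels = malloc((size_t)width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    /* ...greyscale 'pixels' on the CPU here... */
    glBindTexture(GL_TEXTURE_2D, resultTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    free(pixels);
    /* Then draw a quad mapped to resultTex back into the FBO. */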
If you're using shaders (ES 2.0), you can do this on the GPU directly, which is faster. That means doing the greyscaling in a fragment shader in one of a few ways:
If your rendering is simple to begin with, you can add the greyscale math in your normal fragment shader, and perhaps toggle it with a boolean uniform variable.
If you don't want to mess with greyscale in your normal pipeline, you can render normally to an offscreen FBO (texture), and then render the contents of that texture to the screen's FBO using a special greyscale texturing shader that does the math on sampled texels.
Here's the greyscale math if you need it: https://web.archive.org/web/20141230145627/http://bobpowell.net/grayscale.aspx
Essentially, plug the RGB values into that formula and use the resulting luminance value in all your channels.
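For the ES 2.0 route, a minimal greyscale fragment shader implementing that math might look like the following; the v_texCoord varying and u_texture uniform are assumed to be wired up by your own vertex shader and program setup:

    /* GLES 2.0 fragment shader: sample the offscreen texture and output
     * its luminance in all channels (standard ~0.3R + 0.59G + 0.11B). */
    static const char *greyscale_fs =
        "precision mediump float;                             \n"
        "varying vec2 v_texCoord;                             \n"
        "uniform sampler2D u_texture;                         \n"
        "void main() {                                        \n"
        "    vec4 c = texture2D(u_texture, v_texCoord);       \n"
        "    float y = dot(c.rgb, vec3(0.299, 0.587, 0.114)); \n"
        "    gl_FragColor = vec4(y, y, y, c.a);               \n"
        "}                                                    \n";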
