OpenGL ES 3.0: Cannot render to a texture larger than the screen size

I have made an image below to illustrate my problem. I render my scene to an offscreen framebuffer with a texture the size of the screen, then render that texture to a screen-filling quad. This produces case 1 in the image. I then run the exact same program, but with a texture size, say, 1.5 times greater (enough to contain the entire smiley), and afterwards render it once more to the screen-filling quad. I then get result 3, but I expected result 2.
I remember to change the viewport to match the new texture size before rendering to the texture, and to reset the viewport before drawing the quad. I do NOT understand what I am doing wrong.
[Image: the problem illustrated as cases 1, 2 and 3]
To summarize, this is the general flow (too much code to post it all; a stripped-down sketch follows below):
1. Create the MSAA framebuffer and the resolve framebuffer (the resolve framebuffer contains the texture).
2. Set glViewport(0, 0, Width*1.5, Height*1.5).
3. Bind the MSAA framebuffer and render the scene (the smiley).
4. Blit the MSAA framebuffer into the resolve framebuffer.
5. Set glViewport(0, 0, Width, Height), bind the texture, and render the quad.
Note that the MSAA is working perfectly fine. Also, both buffers have the same dimensions, so the blit is simply: glBlitFramebuffer(0, 0, Width*1.5, Height*1.5, 0, 0, Width*1.5, Height*1.5, ClearBufferMask.ColorBufferBit, BlitFramebufferFilter.Nearest).
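For reference, here is a stripped-down, C-style sketch of the whole flow; all identifiers are placeholders and the real code (which uses OpenTK) is more involved:

// Sketch of the setup described above; identifiers are placeholders.
GLsizei texW = (GLsizei)(Width * 1.5f);
GLsizei texH = (GLsizei)(Height * 1.5f);

GLuint msaaFbo, msaaRbo, resolveFbo, resolveTex;
glGenFramebuffers(1, &msaaFbo);
glGenRenderbuffers(1, &msaaRbo);
glGenFramebuffers(1, &resolveFbo);
glGenTextures(1, &resolveTex);

// MSAA framebuffer: multisampled renderbuffer as color attachment.
glBindRenderbuffer(GL_RENDERBUFFER, msaaRbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_RGBA8, texW, texH);
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, msaaRbo);

// Resolve framebuffer: plain texture as color attachment.
glBindTexture(GL_TEXTURE_2D, resolveTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, texW, texH, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glBindFramebuffer(GL_FRAMEBUFFER, resolveFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, resolveTex, 0);

// Pass 1: render the scene into the MSAA buffer at the texture size.
glBindFramebuffer(GL_FRAMEBUFFER, msaaFbo);
glViewport(0, 0, texW, texH);
// ... draw the smiley ...

// Resolve the MSAA buffer into the texture (same dimensions on both sides).
glBindFramebuffer(GL_READ_FRAMEBUFFER, msaaFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolveFbo);
glBlitFramebuffer(0, 0, texW, texH, 0, 0, texW, texH,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);

// Pass 2: back to the default framebuffer, draw the screen-filling quad.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glViewport(0, 0, Width, Height);
glBindTexture(GL_TEXTURE_2D, resolveTex);
// ... draw the quad ...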
Hope someone has a good idea. I might get fired if not :D

I found that I actually used an AABB somewhere else in the code to determine what to render, and this AABB was computed from the "small viewport's size". Stupid mistake.

Related

Why does canvas re-render the entire canvas rather than only a specific portion?

From my very limited html5 canvas experience, it seems like the way to animate something is to clear the entire canvas, and draw everything from scratch.
This doesn't seem very performant. I'm wondering why this was the chosen approach?
Is there an alternative? For example, with an object-oriented approach, if you wanted to re-render a tree in the foreground, the system could cache the background and re-render only the foreground layer.
Your understanding is correct.
Typical canvas apps will completely erase the canvas and redraw objects.
This works well because HTML Canvas is designed for blazingly fast drawing.
Unlike an object-oriented scene graph, the data drawn on the canvas is completely "flattened".
There is a single data array containing the Red, Green, Blue & Alpha components of all pixels on the canvas.
[
pixel1Red, pixel1Green, pixel1Blue, pixel1Alpha,
pixel2Red, pixel2Green, pixel2Blue, pixel2Alpha,
pixel3Red, pixel3Green, pixel3Blue, pixel3Alpha,
...
]
This means that any color component of any pixel can be accessed with a single array lookup.
This flat structure also means that if an image needs to be drawn on the canvas, the browser only needs to copy sequential data from the source image directly into sequential data in the canvas pixel array.
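For example, the offset of any pixel's components is a single index computation. Sketched here in C++; the same arithmetic applies to the canvas's ImageData array:

#include <cstddef>

// Offset of the red component of the pixel at (x, y) in the flat
// RGBA array shown above (4 components per pixel).
std::size_t pixelOffset(std::size_t x, std::size_t y, std::size_t width) {
    return (y * width + x) * 4;  // +0 red, +1 green, +2 blue, +3 alpha
}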
In addition, Canvas is hardware accelerated when a GPU is available.
That's the basic technique, yes. You could also clear a specific area of the canvas instead, using clearRect():
https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Drawing_shapes
// Clear the rectangular area of the given width and height, starting at (x, y).
context.clearRect(x, y, width, height);
In your case of changing foregrounds and backgrounds, however, consider modifying the globalCompositeOperation, which enables you to draw shapes under existing shapes:
// Draw new shapes behind the existing canvas content.
ctx.globalCompositeOperation = 'destination-over';
https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Compositing
Another useful method is clip(), which allows you to draw shapes as masks:
// Create a circular clipping path.
ctx.beginPath();
ctx.arc(0, 0, 60, 0, Math.PI * 2, true);
ctx.clip();
https://developer.mozilla.org/en-US/docs/Web/Guide/HTML/Canvas_tutorial/Compositing

Rendering to a depth texture: uncertainties about the usage of GL_OES_depth_texture

I'm trying to replace OpenGL's gl_FragDepth feature, which is missing in OpenGL ES 2.0.
I need a way to set the depth in the fragment shader, because setting it in the vertex shader is not accurate enough for my purpose. AFAIK, the only way to do that is to have a render-to-texture framebuffer on which a first rendering pass is done. This depth texture stores the depth values for each pixel on the screen. Then, the depth texture is attached in the final rendering pass, so that the final renderer knows the depth at each pixel.
Since iOS >= 4.1 supports GL_OES_depth_texture, I'm trying to use GL_DEPTH_COMPONENT24 or GL_DEPTH_COMPONENT16 for the depth texture. I'm using the following calls to create the texture:
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, textureId, 0);
The framebuffer creation succeeds, but I don't know how to proceed. I'm lacking some fundamental understanding of depth textures attached to framebuffers.
1. What values should I output in the fragment shader? I mean, gl_FragColor is still an RGBA value, even though the texture is a depth texture. I cannot set the depth in the fragment shader, since gl_FragDepth is missing in OpenGL ES 2.0.
2. How can I read from the depth texture in the final rendering pass, where the depth texture is attached as a sampler2D?
3. Why do I get an incomplete framebuffer if I set the third argument of glTexImage2D to GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT16_OES or GL_DEPTH_COMPONENT24_OES?
4. Is it right to attach the texture to GL_DEPTH_ATTACHMENT? If I change that to GL_COLOR_ATTACHMENT0, I get an incomplete framebuffer.
1. Depth textures do not affect the output of the fragment shader; the value that ends up in the depth texture when you render to it is the fixed-function depth value. So without gl_FragDepth you can't really "set the depth in the fragment shader". You can, however, do what you describe: render depth to a texture in one pass and then read that value back in a later pass.
2. You can read from a depth texture with the texture2D built-in function, just as for regular color textures. The value you get back is (d, d, d, 1.0).
3. According to the depth texture extension specification, GL_DEPTH_COMPONENT16_OES and GL_DEPTH_COMPONENT24_OES are not supported as internal formats for depth textures. I'd expect this to generate an error; the incomplete framebuffer status you get is probably related to it.
4. Yes, it is correct to attach the texture to GL_DEPTH_ATTACHMENT.
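To make points 1 and 2 concrete, here is a minimal sketch of the sampling pass; u_depth, v_uv, depthTextureId and program are hypothetical names:

// The depth texture written in the first pass is bound like any color texture.
static const char* fragSrc =
    "precision highp float;\n"
    "uniform sampler2D u_depth;          // depth texture from pass 1\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    // texture2D returns (d, d, d, 1.0) for a depth texture\n"
    "    float d = texture2D(u_depth, v_uv).r;\n"
    "    gl_FragColor = vec4(vec3(d), 1.0); // e.g. visualize the depth\n"
    "}\n";

// At draw time:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, depthTextureId);
glUniform1i(glGetUniformLocation(program, "u_depth"), 0);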

OpenGL ES: Pre-rendering to an FBO texture

Is it possible to render to an FBO texture once and then use the resulting texture handle to render all of the following frames?
For example, if I'm rendering a hard shadow map and the scene geometry and light position are static, the depth map is always the same, so I want to render it only once using an FBO and then reuse it afterwards. However, if I simply add a flag so the depth texture is rendered only once, the texture remains empty for the rest of the frames.
Does the FBO get reallocated after rendering a frame has completed? What would be the right way to preserve the rendered texture for rendering the following frames?
Rendering to a texture is no different from uploading those pixels to the texture in the first place. The contents of a texture do not magically disappear. A texture's contents change only when you change them, either by uploading data to the texture or by setting one of the texture's images to be used for framebuffer operations (clearing, rendering to it, etc.).
Unless you do something to explicitly change the data stored in a texture, it won't change.
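A minimal sketch of the render-once pattern you describe, where shadowFbo, shadowTex, shadowSize and the two helper functions are hypothetical; one thing worth checking with a render-once flag is that the default framebuffer is bound again before the normal frame is drawn:

// Fill the shadow map on the first frame only, then just bind the
// texture on every later frame. shadowFbo/shadowTex are assumed to be
// a complete FBO with the texture attached.
static bool shadowMapReady = false;

void drawFrame() {
    if (!shadowMapReady) {
        glBindFramebuffer(GL_FRAMEBUFFER, shadowFbo);
        glViewport(0, 0, shadowSize, shadowSize);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        renderShadowCasters();              // fills shadowTex once
        shadowMapReady = true;
    }
    // Back to the default framebuffer; the texture's contents persist.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glViewport(0, 0, screenWidth, screenHeight);
    glBindTexture(GL_TEXTURE_2D, shadowTex);
    drawScene();
}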

Rendering a rectangle with multiple textures

I have a six-vertex rectangle, 100x100 in size, covered with a 100x100 background image. Now I would like to render two "sub-textures" on top of it.
Let's say I have two 20x20 sub-textures; I'd like to position one at x:10, y:10 and the other at x:50, y:50.
(These are actually to be used as masks on the background image.)
How should I go about this? My first thought was to send a uniform vec2 with the positions of the two sub-textures into the fragment shader, but I can't figure out how to convert those into texture2D(subtexture, coordinate) lookups, because texture2D takes coordinates in the 0-1 range. I can't quite wrap my head around this, and I hope to get some pointers on which direction to go.
(This is to be used with OpenGL ES 2.0.)
Then I think what you're really looking for is how to update a texture. In your case, the simplest method may be to use glTexSubImage2D. The following will do what you asked in your original post:
glTexSubImage2D( GL_TEXTURE_2D, 0, 10, 10, 20, 20, <format>, <type>, <subimage1> );
glTexSubImage2D( GL_TEXTURE_2D, 0, 50, 50, 20, 20, <format>, <type>, <subimage2> );
where <format> and <type> describe the pixels in the sub-texture stored at <subimage*>. There are quite a lot of additional answers in other questions; just search for glTexSubImage2D.
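For example, assuming the sub-textures are 20x20 RGBA images stored as raw bytes (the format/type here are just one possibility; substitute whatever matches your data):

// Hypothetical concrete version with RGBA/unsigned-byte pixel data.
unsigned char subimage1[20 * 20 * 4];  // ... filled with the first mask ...
unsigned char subimage2[20 * 20 * 4];  // ... filled with the second mask ...

glBindTexture(GL_TEXTURE_2D, backgroundTex);  // the 100x100 background texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 10, 10, 20, 20,
                GL_RGBA, GL_UNSIGNED_BYTE, subimage1);
glTexSubImage2D(GL_TEXTURE_2D, 0, 50, 50, 20, 20,
                GL_RGBA, GL_UNSIGNED_BYTE, subimage2);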
A more complex method (for when things aren't as simple as your problem) is to use framebuffer objects to render to textures (have a search for that one too, if you need).

How to clear portion of a texture with alpha 0 using OpenGL ES?

I have a texture onto which I render 16 drawings. The texture is 1024x1024 in size and it's divided into 4x4 "slots", each 256x256 pixels.
Before I render a new drawing into a slot, I want to clear it so that the old drawing is erased and the slot is totally transparent (alpha=0).
Is there a way to do this with OpenGL, or do I just need to access the texture pixels directly in memory and clear them with memset or something like that?
I imagine you'd just update the current texture normally:
std::vector<unsigned char> emptyPixels(1024*1024*4, 0); // Assuming RGBA / GL_UNSIGNED_BYTE
glBindTexture(GL_TEXTURE_2D, yourTextureId);
glTexSubImage2D(GL_TEXTURE_2D,
0,
0,
0,
1024,
1024,
GL_RGBA,
GL_UNSIGNED_BYTE,
emptyPixels.data()); // Or &emptyPixels[0] if you're stuck with C++03
Even though you're replacing every pixel, glTexSubImage2D is faster than deleting the texture and creating a new one.
Is your texture actively bound as a frame buffer target? (I'm assuming yes because you say you're rendering to it.)
If so, you can enable a scissor test with glScissor and then issue a glClear to clear just that region of the framebuffer.
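A minimal sketch of that approach, assuming fbo is the framebuffer object the texture is attached to and (col, row) selects one of the 4x4 slots:

// Clear one 256x256 slot of the 1024x1024 texture to transparent black.
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glEnable(GL_SCISSOR_TEST);
glScissor(col * 256, row * 256, 256, 256);  // restrict the clear to the slot
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);       // RGBA all zero: fully transparent
glClear(GL_COLOR_BUFFER_BIT);
glDisable(GL_SCISSOR_TEST);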
