glPushMatrix() / glPopMatrix() doesn't affect blending states. Why is this?

I've been trying to get OpenGL ES to do something roughly like the following, to see whether glPushMatrix() and glPopMatrix() could be used to put things such as blending state back to how it was before glPushMatrix() was called.
It works for rotation/translation state - why doesn't it work for other things, such as blend state?
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); //<-first blend mode
glPushMatrix();
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA); //<-second blend mode
//...drawing and stuff here...
glPopMatrix();
//at this point it appears the second blend mode is still in effect - why?
Am I simply confused, or is there another push/pop pair of functions for the states that glPushMatrix() and glPopMatrix() don't save and restore?
Is there another way to easily set everything back to a previous state? Thanks for any illumination!

A stack for attributes does not exist in OpenGL ES, sorry.
You can write one yourself if you really want to. All of these states can be queried, so any stack data structure would do.
IMHO a better way is to define a handful of useful blending presets and have a little state machine that lets you switch from one blending mode to another using the fewest possible calls into OpenGL ES. After all - how many different blend modes do you really need?
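For illustration, here is a minimal C sketch of that preset idea. The preset table, switchBlendMode() and the cached index are made-up names for this sketch, not an existing API:
typedef struct {
    GLenum src;
    GLenum dst;
} BlendPreset;

static const BlendPreset presets[] = {
    { GL_ONE,       GL_ONE_MINUS_SRC_ALPHA },  /* premultiplied alpha */
    { GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA },  /* classic alpha blend */
    { GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA },  /* the "second" mode from the question */
};

static int currentPreset = -1;  /* remembered so redundant GL calls are skipped */

void switchBlendMode(int preset)
{
    if (preset == currentPreset)
        return;  /* already active: no GL call at all */
    glBlendFunc(presets[preset].src, presets[preset].dst);
    currentPreset = preset;
}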

You can use glGet() to read back all the blending options, then use the saved values later to restore the blending state.
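A minimal save/restore sketch of that, assuming OpenGL ES 1.1 where GL_BLEND_SRC and GL_BLEND_DST are queryable (ES 2.0 instead uses GL_BLEND_SRC_RGB/GL_BLEND_DST_RGB and the matching _ALPHA tokens):
GLint savedSrc, savedDst;
glGetIntegerv(GL_BLEND_SRC, &savedSrc);   /* remember the current blend func */
glGetIntegerv(GL_BLEND_DST, &savedDst);

glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
/* ...drawing and stuff here... */

glBlendFunc((GLenum)savedSrc, (GLenum)savedDst);  /* restore the saved state */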

As you know, OpenGL is a state machine, and the various glPush and glPop functions control stacks. There are multiple stacks: the matrix stack contains only the coordinate transformations, while a separate stack, the attribute stack, does contain your blend function setting. Check out glPushAttrib - though note that glPushAttrib/glPopAttrib exist only in desktop OpenGL; as mentioned above, OpenGL ES dropped the attribute stack.
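In desktop OpenGL the usage looks like this; GL_COLOR_BUFFER_BIT here names the attribute group that covers the blend settings:
glPushAttrib(GL_COLOR_BUFFER_BIT);  /* saves blend enable, blend func, ... */
glBlendFunc(GL_DST_COLOR, GL_ONE_MINUS_SRC_ALPHA);
/* ...drawing and stuff here... */
glPopAttrib();                      /* previous blend state is back */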

Related

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that uses a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic/algorithm is used when setting and getting images into and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the state of a previous rendering step, blend it with something new, and store that result in the accumulator again.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.

Using a FrameBufferObject with several Color Texture attachments

I'm implementing a Gaussian blur effect in my program. To do the job I need to render the first blur pass (the one on the Y axis) into a specific texture (let's call it tex_1), then use the information contained in tex_1 as the input of a second render pass (for the X axis) that fills another texture (let's call it tex_2) with the final Gaussian blur result.
Good practice would be to create 2 framebuffers (FBOs), each with a texture attached to GL_COLOR_ATTACHMENT0 (for example). But I wonder one thing:
Is it possible to fill these 2 textures using the same FBO?
I would then enable GL_COLOR_ATTACHMENT0 and GL_COLOR_ATTACHMENT1 and bind the desired texture for each render pass, as follows:
Pseudo code:
FrameBuffer->Bind()
{
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT0)->Bind(); //tex_1
    {
        //BIND the external texture to blur
        //DRAW code (Y axis blur pass) here...
        //-> Write the result into COLOR_ATTACHMENT0 (tex_1)
    }
    FrameBuffer->GetTexture(GL_COLOR_ATTACHMENT1)->Bind(); //tex_2
    {
        //BIND the first texture (tex_1) filled by the first render pass above
        //DRAW code (X axis blur pass) here...
        //-> Use tex_1 in the FS to compute the final result
        //   within COLOR_ATTACHMENT1 (tex_2) -> the final result
    }
}
FrameBuffer->Unbind()
But in my mind there is a problem: for each render pass I need to bind an external texture as an input to my fragment shader. Consequently, the first binding of the texture (the color attachment) is lost!
So is there a way to solve my problem using one FBO, or do I need to use 2 separate FBOs?
I can think of at least 3 distinct options to do this. The 3rd one will not actually work in OpenGL ES, but I'll explain it anyway because you might be tempted to try it otherwise, and it is supported in desktop OpenGL.
I'm going to use pseudo-code as well to cut down on typing and improve readability.
2 FBOs, 1 attachment each
This is the most straightforward approach. You use a separate FBO for each texture. During setup, you would have:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo2, ATTACHMENT0, tex2)
Then for rendering:
bindFbo(fbo1)
render pass 1
bindFbo(fbo2)
bindTexture(tex1)
render pass 2
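In real GLES2-style code, the setup and per-frame parts could look roughly like this. tex1/tex2 are assumed to be complete 2D textures, and blurShaderY, blurShaderX, srcTex and drawFullscreenQuad() are hypothetical helpers standing in for your own code:
GLuint fbo1, fbo2;
glGenFramebuffers(1, &fbo1);
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
glGenFramebuffers(1, &fbo2);
glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);

/* per frame */
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glUseProgram(blurShaderY);
glBindTexture(GL_TEXTURE_2D, srcTex);   /* external input texture */
drawFullscreenQuad();                   /* Y axis blur pass into tex1 */

glBindFramebuffer(GL_FRAMEBUFFER, fbo2);
glUseProgram(blurShaderX);
glBindTexture(GL_TEXTURE_2D, tex1);     /* result of the first pass */
drawFullscreenQuad();                   /* X axis blur pass into tex2 */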
1 FBO, 1 attachment
In this approach, you use one FBO, and attach the texture you want to render to each time. During setup, you only create the FBO, without attaching anything yet.
Then for rendering:
bindFbo(fbo1)
attach(fbo1, ATTACHMENT0, tex1)
render pass 1
attach(fbo1, ATTACHMENT0, tex2)
bindTexture(tex1)
render pass 2
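The same two passes with a single FBO, re-attaching between passes (same hypothetical helpers as in the sketch above; fbo1 is assumed to have been created with glGenFramebuffers during setup):
glBindFramebuffer(GL_FRAMEBUFFER, fbo1);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex1, 0);
glUseProgram(blurShaderY);
glBindTexture(GL_TEXTURE_2D, srcTex);
drawFullscreenQuad();                   /* Y axis blur pass into tex1 */

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex2, 0);  /* swap the render target */
glUseProgram(blurShaderX);
glBindTexture(GL_TEXTURE_2D, tex1);
drawFullscreenQuad();                   /* X axis blur pass into tex2 */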
1 FBO, 2 attachments
This seems to be what you had in mind. You have one FBO, and attach both textures to different attachment points of this FBO. During setup:
attach(fbo1, ATTACHMENT0, tex1)
attach(fbo1, ATTACHMENT1, tex2)
Then for rendering:
bindFbo(fbo1)
drawBuffer(ATTACHMENT0)
render pass 1
drawBuffer(ATTACHMENT1)
bindTexture(tex1)
render pass 2
This renders to tex2 in pass 2 because it is attached to ATTACHMENT1, and we set the draw buffer to ATTACHMENT1.
The major caveat is that this does not work with OpenGL ES. In ES 2.0 (without using extensions) it's a non-starter because it only supports a single color buffer.
In ES 3.0/3.1, there is a more subtle restriction: they do not have the glDrawBuffer() call from full OpenGL, only glDrawBuffers(). The call you would try is:
GLenum bufs[1] = {GL_COLOR_ATTACHMENT1};
glDrawBuffers(1, bufs);
This is totally valid in full OpenGL, but will produce an error in ES 3.0/3.1 because it violates the following constraint from the spec:
If the GL is bound to a draw framebuffer object, the ith buffer listed in bufs must be COLOR_ATTACHMENTi or NONE.
In other words, the only way to render to GL_COLOR_ATTACHMENT1 is to have at least two draw buffers. The following call is valid:
GLenum bufs[2] = {GL_NONE, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, bufs);
But to make this actually work, you'll need a fragment shader that produces two outputs, where the first one will not be used. By now, you hopefully agree that this approach is not appealing for OpenGL ES.
Conclusion
For OpenGL ES, the first two approaches above will work, and are both absolutely fine to use. I don't think there's a very strong reason to choose one over the other. I would recommend the first approach, though.
You might think that using only one FBO would save resources. But keep in mind that FBOs are objects that contain only state, so they use very little memory. Creating an additional FBO is insignificant.
Most people would probably prefer the first approach. The thinking is that you can configure both FBOs during setup and then switch between them with nothing but glBindFramebuffer() calls. Binding a different object is generally considered cheaper than modifying an existing object, which is what the second approach requires.
Consequently, the first binding of the texture (the color_attachment) is lost!
No, it isn't. Maybe your framebuffer class works that way, but then it would be a very bad abstraction. The GL won't detach a texture from an FBO just because you bind that texture to some texture unit. You might get undefined results if you create a feedback loop (rendering to a texture you are reading from), though.
EDIT
However, as @Reto Koradi pointed out in his excellent answer (and his comment on this one), you can't simply choose which single color attachment to render to in unextended GLES 1/2, and you need some tricks in GLES 3. As a result, the fact I'm pointing out here is still true, but not really helpful for the ultimate goal you are trying to achieve.

OpenGL and (the lack of) glBlendFuncSeparate

I need to blend a few images together into a single one, pretty much as described here: OpenGL - mask with multiple textures.
I used the solution proposed there, but there's an issue with the glBlendFuncSeparate method.
It turns out that this method was introduced in a later OpenGL version; according to my gl.h file, the version I'm using is 1.
After much searching and reading I realized that this is what I have to work with and that I can't just upgrade my OpenGL version.
I went ahead and downloaded GLEW.
I added glew.h and glew.c to my VS10 project and defined GLEW_BUILD. Now it finally compiles without complaining about glBlendFuncSeparate, but when I run the program it crashes with an Access Violation when that method is called. I guess the function pointer is NULL and the crash happens when it is invoked.
I continued reading and searching on this, and from what I understand, I need to use OpenGl Extensions to make it work.
If what's written in Using OpenGL extensions On Windows is correct then I'm missing something.
Let's say I do everything it says, I "download and install the latest drivers and SDKs for your graphics card" and then compile it, even if it runs on my machine, I see no guarantee that it won't crash on someone else's machine, since they might not have done the same.
I have two questions:
Am I missing something here? This whole process seems way too complicated and environment-dependent.
Is there an alternative for using glBlendFuncSeparate in this kind of a scenario?
You don't need glBlendFuncSeparate(GL_ZERO, GL_ONE, GL_SRC_COLOR, GL_ZERO); to use the trick described in OpenGL - mask with multiple textures. Yes, you can't add color directly to the alpha channel as described in that example, but you can be a little tricky.
While writing your mask, disable writing to all color channels except alpha:
glColorMask(false, false, false, true);
and multiply the mask's alpha into the background's alpha channel:
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);
After writing the bit mask, don't forget to set your glColorMask back:
glColorMask(true, true, true, true);
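Putting the pieces together, a minimal sketch of the mask pass (maskTex is a hypothetical name for your mask texture):
glEnable(GL_BLEND);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_TRUE);  /* write alpha only */
glBlendFunc(GL_ZERO, GL_SRC_ALPHA);                  /* dstA = dstA * maskA */
glBindTexture(GL_TEXTURE_2D, maskTex);
/* ...draw the mask geometry here... */
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);     /* restore color writes */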
//-----------------------------------------------------------------------------------------------------------------------
And yes, you need a mask with the information in the alpha channel:
1) It can be done with GIMP (very simple, but requires GIMP knowledge).
2) You can write your own routine that pushes the color information into the alpha channel before creating the mask texture (it's very simple - just a few lines of code).
3) Or just use the GL_ALPHA "format" attribute in glTexImage2D for the mask texture; this simply writes the bitmap's color into the texture's alpha channel (see the sketch below).
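For option 3, the upload is essentially a one-liner; maskPixels is assumed to point at one byte of coverage data per pixel:
glBindTexture(GL_TEXTURE_2D, maskTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, maskPixels);  /* bytes land in the alpha channel */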

OpenGL 3.1+ with Ruby

I followed this post to play with OpenGL (programmable pipeline) in Ruby.
Basically, I'm just trying to create a blue window, and here's the code.
Ray::GL.major_version = 3
Ray::GL.minor_version = 2
Ray::GL.core_profile = true # if you want/need one
window = Ray::Window.new("Test Window", [800, 600])
window.make_current
glClearColor(0, 0, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
Instead, I got a white window. This indicated that I was missing something, but I couldn't figure out what, as the resources for OpenGL in Ruby seemed limited. I have been searching all over the web, but all I found was fixed-pipeline OpenGL stuff for Ruby.
Yes, I could use Ray's built-in functions to set the background color and draw stuff, but I didn't want to do that. I just wanted to use Ray to set up the window and then call the OpenGL APIs directly. However, I couldn't figure out what was missing in the code above.
I would greatly appreciate any hint or pointer on this (maybe I need to swap the buffers? But then I don't know how to do that with Ray). Is anybody familiar enough with Ray to give me some hints on this?
Or, are there any other tools that would let me set up an OpenGL binding (for the non-fixed pipeline)?
It would appear that you set the clear color to blue and then cleared the back buffer to make it blue. But, as you said, you have not swapped the buffers to put the back buffer onto your screen. As far as swapping buffers goes, here's another answer from Stack Overflow:
"Swapping the front and back buffer of a double buffered window is a function provided by the underlying graphics system, i.e. Win32 GDI, or X11 GLX. The functions you're looking for are wglSwapBuffers and/or glXSwapBuffers. On MacOS X NSOpenGLViews are automatically swapped.
However most likely you're using some framework, like GLUT, GLFW or Qt, which provide a portable wrapper around those functions. Read the framework's documentation."
I've never used Ray, so I'd say just keep rooting around in the documentation or look through example projects to see how buffer swapping is done.

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES has it rendering every frame, even if nothing in the image is changing.
I did some tests and found it works quite well to render (using drawArrays etc.) and then present the render buffer (these two actions, together) just once, and then do neither again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant re-rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example, Android's standard OpenGL helper classes have an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
