OpenGL getting background pixels info - windows

I have an OpenGL application with a fully transparent window, and I need to draw a picture into it whose pixel transparency depends on the background. Is there any way of getting the pixel data that lies BELOW my transparent window (the wallpaper, desktop, other windows, etc.) so I can dynamically change pixels in shaders?
For now I have code like this
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_DST_COLOR);
glClearColor(1.0, 0, 0, 1.0);
glClear(GL_COLOR_BUFFER_BIT);
[self.shader useShader];
[self drawTriangle];
useShader just calls glUseProgram and drawTriangle just draws a test triangle.
The shader is:
#version 120
void main()
{
gl_FragColor = gl_SecondaryColor + vec4(0.0, 1.0, 0.0, 0.0);
}
So if I clear the window with (1.0, 0, 0, 1.0) I get a yellow triangle as expected, but when I switch to (0, 0, 0, 0) it turns green. Is there any way of getting the color data underneath?

Is there any way of getting the pixel data that lies BELOW my transparent window
With just OpenGL? No. In fact you can't even read back destination framebuffer pixels in a shader.
You'll have to use operating-system-specific functions to capture the screen contents below the window as an image, load that into a texture, and pass it to your rendering.
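As a minimal sketch of the OpenGL side only (assuming you have already captured the region behind the window into a raw RGBA byte buffer, here called bgPixels, using a platform API such as GDI on Windows or CGWindowListCreateImage on macOS; the names bgPixels, u_background, uploadBackgroundTexture and bindBackground are all assumptions for illustration):
// Hypothetical helper: bgPixels, bgWidth and bgHeight come from an
// OS-specific screen capture of the area behind the window.
GLuint uploadBackgroundTexture(const unsigned char *bgPixels,
                               int bgWidth, int bgHeight)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bgWidth, bgHeight, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, bgPixels);
    return tex;
}

// Before drawing, bind the texture and tell the shader which unit it is on.
void bindBackground(GLuint program, GLuint tex)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glUniform1i(glGetUniformLocation(program, "u_background"), 0);
}
The fragment shader can then sample u_background (for example at coordinates derived from gl_FragCoord) and blend against the captured desktop. Note the capture has to be refreshed whenever the content behind the window changes.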

Related

Ruby, OpenGL: change texture luminosity

I have some problems with OpenGL and luminosity. Let me explain my problem:
I drew this "sprite" (it's only a plane here) with code like this:
sprite.set_active
left, right, top, bottom = 0.0, 1.0, 1.0, 0.0
glPushMatrix
glTranslate(@position.x - 16, @position.y, @position.z)
glRotate(-90 - @window.camera.horizontal_angle, 0, 1, 0)
glScale(chara.width, chara.height, 32.0)
begin
  glEnable(GL_BLEND)
  glBegin(GL_QUADS)
  glColor4f(1.0, 1.0, 1.0, 1.0)
  glTexCoord2d(left, top); glVertex3f(0, 1, 0.5)
  glTexCoord2d(right, top); glVertex3f(1, 1, 0.5)
  glTexCoord2d(right, bottom); glVertex3f(1, 0, 0.5)
  glTexCoord2d(left, bottom); glVertex3f(0, 0, 0.5)
  glEnd
  glDisable(GL_BLEND)
rescue
end
glPopMatrix
My problem is with this line:
glColor4f(1.0, 1.0, 1.0, 1.0)
Well, I can put a number less than 1.0 to get a darker sprite, but I can't do the opposite. How can I do that? How can I make the sprite totally white, for example?
To get full control over your fragment processing, the best approach is using the programmable pipeline, where you can implement exactly what you want with GLSL code.
But there are some options that could work for this case in the fixed pipeline. The simplest one is using a different GL_TEXTURE_ENV_MODE. The default value is GL_MODULATE, which means that the color you specified with glColor4f() is multiplied with the color from the texture. As you found, that allows you to make the texture darker, but not brighter.
You could try using GL_ADD instead. As the name suggests, this will produce the final output as the sum of the texture color and the color from glColor4f(). For example:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);
glColor4f(0.2f, 0.2f, 0.2f, 0.0f);
would add 0.2 to the color components read from the texture.
There is more complex functionality in the fixed pipeline that gives you more control over how texture values are used to generate colors. You can find it by looking for "texture combiners". But in my personal opinion, you're much better off moving to the programmable pipeline if you need something complex enough to require texture combiners.
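For illustration, a small sketch of the texture-combiner path mentioned above (written as C-style GL calls; the Ruby bindings use the same constant names). It keeps the usual modulate behaviour but scales the result by 2, so color values above 0.5 brighten the texture instead of only darkening it. This is an assumption about what you might want, not code from the original answer:
/* Brighten a texture with fixed-pipeline texture combiners:
 * texture * primary color, then the RGB result is scaled by 2. */
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PRIMARY_COLOR);
glTexEnvf(GL_TEXTURE_ENV, GL_RGB_SCALE, 2.0f);

/* 1.0 now roughly doubles the texture's RGB; 0.5 leaves it unchanged. */
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);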

Is it possible to copy data from one framebuffer to another in OpenGL?

I guess it is somehow possible since this:
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, _multisampleFramebuffer);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, _framebuffer);
glResolveMultisampleFramebufferAPPLE();
does exactly that, and on top of that resolves the multisampling. However, it's an Apple extension, and I was wondering if there is something similar in the vanilla implementation that copies all the logical buffers from one framebuffer to another, without the multisampling part. GL_READ_FRAMEBUFFER doesn't seem to be a valid target, so I'm guessing there is no direct way? How about workarounds?
EDIT: Seems it's possible to use glCopyImageSubData in OpenGL 4, unfortunately not in my case since I'm using OpenGL ES 2.0 on iPhone, which seems to be lacking that function. Any other way?
glBlitFramebuffer accomplishes what you are looking for. Additionally, you can blit one texture onto another without requiring two framebuffers. I'm not sure whether using one FBO is possible with OpenGL ES 2.0, but the following code could easily be modified to use two FBOs; you just need to attach different textures to different framebuffer attachments. glBlitFramebuffer will even manage downsampling/upsampling for anti-aliasing applications. Here is an example of its usage:
// bind fbo as read / draw fbo
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,m_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, m_fbo);
// bind source texture to color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle0);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_textureHandle0, 0);
glDrawBuffer(GL_COLOR_ATTACHMENT0);
// bind destination texture to another color attachment
glBindTexture(GL_TEXTURE_2D,m_textureHandle1);
glFramebufferTexture2D(GL_TEXTURE_2D, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_textureHandle1, 0);
glReadBuffer(GL_COLOR_ATTACHMENT1);
// specify source, destination drawing (sub)rectangles.
glBlitFramebuffer(from.left(),from.top(), from.width(), from.height(),
to.left(),to.top(), to.width(), to.height(), GL_COLOR_BUFFER_BIT, GL_NEAREST);
// release state
glBindTexture(GL_TEXTURE_2D,0);
glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER,0);
Tested in OpenGL 4; glBlitFramebuffer is not supported in OpenGL ES 2.0.
I've fixed errors in the previous answer and generalized into a function that can support two framebuffers:
// Assumes the two textures are the same dimensions
void copyFrameBufferTexture(int width, int height, int fboIn, int textureIn, int fboOut, int textureOut)
{
    // Bind input FBO + texture to a color attachment
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboIn);
    glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureIn, 0);
    glReadBuffer(GL_COLOR_ATTACHMENT0);
    // Bind destination FBO + texture to another color attachment
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fboOut);
    glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, textureOut, 0);
    glDrawBuffer(GL_COLOR_ATTACHMENT1);
    // Specify source and destination drawing (sub)rectangles
    glBlitFramebuffer(0, 0, width, height,
                      0, 0, width, height,
                      GL_COLOR_BUFFER_BIT, GL_NEAREST);
    // Unbind the color attachments
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, 0, 0);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
}
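A quick usage sketch (the FBO and texture handles here are hypothetical placeholders for whatever your engine already created; both textures must match the given dimensions):
// Copy the color contents of sceneTexture (attached to sceneFBO)
// into postTexture (attached to postFBO). All names are assumptions.
copyFrameBufferTexture(viewportWidth, viewportHeight,
                       sceneFBO, sceneTexture,
                       postFBO, postTexture);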
You can do it directly with OpenGL ES 2.0; it seems no extension is needed either.
I am not really sure what you are trying to achieve, but in the general case you simply remove the attachments of the FBO in which you have done your off-screen rendering. Then bind the default FBO to be able to draw on screen; there you can simply draw a quad that fills the screen, with an orthographic camera and a shader that takes your off-screen textures as input.
You will be able to do the resolve too if you are using multi-sampled textures.
glBindFramebuffer(GL_FRAMEBUFFER, off_screenFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, 0, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0); // Default FBO, on iOS it is 1 if I am correct
// Set the viewport at the size of screen
// Use your compositing shader (it doesn't have to manage any transform)
// Active and bind your textures
// Sent textures uniforms
// Draw your quad
Here is an example of the shader:
// Vertex
attribute vec2 in_position2D;
attribute vec2 in_texCoord0;
varying lowp vec2 v_texCoord0;
void main()
{
v_texCoord0 = in_texCoord0;
gl_Position = vec4(in_position2D, 0.0, 1.0);
}
// Fragment
uniform sampler2D u_texture0;
varying lowp vec2 v_texCoord0;
void main()
{
gl_FragColor = texture2D(u_texture0, v_texCoord0);
}
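To complete the picture, a minimal sketch of the full-screen quad draw on the default framebuffer (assuming the program above is compiled and linked into compositeProgram, the off-screen result is in offscreenTexture, and screenWidth/screenHeight are known; all of those names are assumptions):
// Full-screen quad in clip space with matching texture coordinates.
static const GLfloat quadPos[] = { -1,-1,  1,-1,  -1, 1,  1, 1 };
static const GLfloat quadUV[]  = {  0, 0,  1, 0,   0, 1,  1, 1 };

glBindFramebuffer(GL_FRAMEBUFFER, 0);            // default framebuffer (may be 1 on iOS)
glViewport(0, 0, screenWidth, screenHeight);

glUseProgram(compositeProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, offscreenTexture);  // texture rendered off-screen
glUniform1i(glGetUniformLocation(compositeProgram, "u_texture0"), 0);

glBindBuffer(GL_ARRAY_BUFFER, 0);                // use client-side arrays below
GLint posLoc = glGetAttribLocation(compositeProgram, "in_position2D");
GLint uvLoc  = glGetAttribLocation(compositeProgram, "in_texCoord0");
glEnableVertexAttribArray(posLoc);
glEnableVertexAttribArray(uvLoc);
glVertexAttribPointer(posLoc, 2, GL_FLOAT, GL_FALSE, 0, quadPos);
glVertexAttribPointer(uvLoc,  2, GL_FLOAT, GL_FALSE, 0, quadUV);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glDisableVertexAttribArray(posLoc);
glDisableVertexAttribArray(uvLoc);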

OpenGL (ES) Model Within Translucent Model

I want "Face In a Crystal Ball" effect where I have a model (the face) doing things inside of a translucent model (the crystal ball). I feel like I'm taking crazy pills because I just can't get this inner face to show up partially occluded by the ball. My goal is to vary the alpha of the ball (and/or face) to make the face appear and disappear.
Below are the relevant bits of code. As you'll see, I'm not using shaders, just good old GL/GLES1. If anyone can tell me what I'm doing wrong, I'll be VERY appreciative.
The setup code...
//-- CONFIGURATION ---------------
// Create The Depth Buffer Object
glGenRenderbuffersOES(1, &depth_renderbuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, depth_renderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES,
GL_DEPTH_COMPONENT16_OES,
width,
height);
// Create The FrameBuffer Object
glGenFramebuffersOES(1, &framebuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, framebuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_COLOR_ATTACHMENT0_OES,
GL_RENDERBUFFER_OES,
color_renderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES,
GL_DEPTH_ATTACHMENT_OES,
GL_RENDERBUFFER_OES,
depth_renderbuffer);
// Bind Color Buffer
glBindRenderbufferOES(GL_RENDERBUFFER_OES, color_renderbuffer);
glViewport(0, 0, width, height);
//-- LIGHTING ----------------------
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
//-- PROJECTION ---------------------
glMatrixMode(GL_PROJECTION);
viewport_size = vec2((float) width,(float) height);
//Orthographic Projection
float max_x,max_y;
if(width>height){
max_y = 1;
max_x = (float)width/(float)height;
}
else{
max_x = 1;
max_y = (float)height/(float) width;
}
const float MAX_X = max_x;
const float MAX_Y = max_y;
const float Z_0 = 0;
const float MAX_Z = 1;
glOrthof(-MAX_X, MAX_X, -MAX_Y, MAX_Y, Z_0-MAX_Z, Z_0+MAX_Z);
world_size = vec3(2*MAX_X,2*MAX_Y,2*MAX_Z);
//Color Depth
glEnable(GL_DEPTH_TEST);
glDepthMask(GL_TRUE); //Disappears if false
glDepthFunc(GL_LEQUAL);
glEnable(GL_BLEND);
//glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); //doesn't do it
glBlendFunc(GL_ONE, GL_ONE); //better
Here is the rendering call
glClearColor(world->background_color.x,
world->background_color.y,
world->background_color.z,
world->background_color.w);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
for(int s=0;s<surfaces.size();s++){
Surface* surface = surfaces[s];
glMaterialfv(GL_FRONT_AND_BACK, GL_AMBIENT, surface->getMatAmbient().Pointer());
glMaterialfv(GL_FRONT_AND_BACK, GL_DIFFUSE, surface->getMatDiffuse().Pointer());
glMatrixMode(GL_MODELVIEW);
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
glPushMatrix();
glLoadIdentity();
vec4 light_position = vec4(world->light->position,1);
glLightfv(GL_LIGHT0,GL_POSITION,light_position.Pointer());
glPopMatrix();
glPushMatrix();
glMultMatrixf(surface->transform.Pointer());
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, surface->index_buffer);
glBindBuffer(GL_ARRAY_BUFFER, surface->vertex_buffer);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glVertexPointer(3, GL_FLOAT, VERTEX_STRIDE, 0);
glNormalPointer(GL_FLOAT, VERTEX_STRIDE, (GLvoid*) VERTEX_NORMAL_OFFSET);
glDrawElements(GL_TRIANGLES, surface->indices.size(), GL_UNSIGNED_SHORT, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glPopMatrix();
}
It sounds like you may be suffering from a simple case of the concept of a depth buffer not really applying to your scene. A depth buffer stores one depth for every pixel on screen, which in a scene with fully opaque objects would be the depth of the nearest object at that pixel.
The problem is that when you want to add partially transparent objects to the scene, you end up in a position where several objects contribute to the colour of an individual pixel. But you can still store the depth of only one of them.
So what's probably happening in your case is that you're drawing the crystal ball first, and that's putting the depths of the various crystal ball pixels into the depth buffer. You're then attempting to draw the face and OpenGL is seeing that it's further away than the values already in the buffer, so skipping those pixels.
So the quick-fix solution is just to re-order your scene geometry by hand so that the face is always drawn before the crystal ball, since it is always on the inside.
In an ideal solution, you'd draw all opaque geometry in one step (traditionally in something close to front-to-back order, though that's not as important on the PowerVR) to establish opaque depth values, then all transparent geometry back to front so that it is composited in the correct order.
In OpenGL you really want the order of certain things to be relatively fixed, so that you can push the relevant values over to the GPU once and not incur communication costs. People still tend to divide geometry into opaque and transparent and draw the opaque geometry first, but often they'll then just disable z-buffer writes when drawing the transparent geometry, making an effort to draw it in roughly back-to-front order without investing too much time in the problem.
If you're happy to use purely additive blending, then any draw order for the transparent geometry is correct once the depth buffer has been set up by the opaque geometry.
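A minimal sketch of that draw order for this scene (drawFace() and drawBall() are placeholders for whatever draws each model; standard alpha blending is assumed here rather than the additive blend in the question's setup code):
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// 1. Opaque geometry first, with depth writes on.
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawFace();                      // the opaque model inside the ball

// 2. Transparent geometry last, back to front, with depth writes off
//    so it is tested against the opaque depth but doesn't occlude itself.
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
drawBall();                      // the translucent crystal ball

// Restore depth writes for the next frame's clear.
glDepthMask(GL_TRUE);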
What order are you rendering the objects? If you draw the ball before the face, then the entire face will get rejected because it is behind the ball in the z-buffer. If you want to do correct transparency, you have to render objects from back to front.
And regarding your inline question:
//If I don't put this code in here (as opposed to above), the light gets all crazy! WHY!?
When you call glLightfv with a position, the position is transformed by whatever is currently on the modelview matrix stack, so you have to set it while the modelview matrix matches the frame of reference in which the coordinates are defined (view space, world space, or object space).
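For example, a sketch of placing the light at fixed world-space coordinates (viewMatrix is an assumed array holding only the camera transform): load the view matrix first, then call glLightfv, so the position is converted from world space to eye space before any object transforms are applied.
// Light position given in world coordinates.
GLfloat lightPosWorld[4] = { 10.0f, 5.0f, 0.0f, 1.0f };

glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMultMatrixf(viewMatrix);       // camera transform only, no object transform
glLightfv(GL_LIGHT0, GL_POSITION, lightPosWorld);

// Push object transforms afterwards so they don't affect the light.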

Switching OpenGL to perspective mode on top of a half rendered orthographic scene?

We have a mostly 2D game that runs in orthographic mode, but one part shows a 3d model that is rendered in between the other 2D objects. How can I switch to perspective mode, render that model, then switch back to render the other objects in orthographic mode?
Kudos if you can show how it's done in OpenGL ES.
I don't think the question is exactly specified. Do you want more views? Or do you want a 2D background, 3D game objects, and a 2D GUI? If that's what you want, then:
render fullscreen background
set viewport to position=obj.pos-obj.size/2, size=obj.size, render object
render 2D gui
Or do you want something else?
EDIT:
Here's some code:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0,w,0,h,near,far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(pos.x,...);
DrawQuads();
//if you want to keep your previus matrix
glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
gluPerspective(90,width/(float)height,0.001,1000);
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();
glTranslatef(pos.x,...);
glRotatef(-45.0f,1,0,0);
glTranslatef(0,0,distance from object);
glRotatef(90.0f,1,0,0);
// or use gluLookAt
// 0,0,1 - if you want have z as up in view
// 0,1,0 - for y
//gluLookAt(pos.x,pos.y,pos.z,cam.x,cam.y,cam.z,0,0,1);
glScalef(object.width/model.width,...);
DrawModel();
// Restore old ortho
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();
glMatrixMode(GL_MODELVIEW);
Well, "just do it"
set your projection matrix as ortho
set your modelview for 2D objects
render your 2D objects
set your projection matrix as perspective
set your modelview for 3D objects
render your 3D objects
... and this can go on again and again
and swap buffers.
If you KNOW the order of your objects as you seem to do, you can also clear the z-buffer between each render.
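A minimal sketch of that frame structure (draw2DBackground(), draw3DModel() and draw2DGui() are placeholder names for your own draw calls; w, h and the clip planes are assumed to be defined elsewhere):
// One frame: 2D background, then the 3D model, then the 2D GUI.
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, 0, h, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
draw2DBackground();              // placeholder: your 2D draw calls

glClear(GL_DEPTH_BUFFER_BIT);    // so the 3D model isn't rejected by 2D depth values
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(45.0, w / (double)h, 0.1, 1000.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
draw3DModel();                   // placeholder: your 3D draw calls

glClear(GL_DEPTH_BUFFER_BIT);    // so the GUI always ends up on top
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, w, 0, h, -1, 1);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
draw2DGui();                     // placeholder: your GUI draw calls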
I agree with the previous posts, and I think the more general case is a 3D object plus a 2D GUI.
Just for re-emphasis. : )
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective( 45.0f, (GLfloat)s_width/(GLfloat)s_height, near, far);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// render 3D object
glUseProgram(modelProgram);
glSetUniformMat(glGetUniformLocation(model.mvp, "mvp"), mvpMat);
glBindVertexArray(model.vao);
glDrawArrays(GL_TRIANGLES, 0, model.size);
glUseProgram(0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, width, 0, height, -1.0, 1.0);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
// draw GUI
renderGUI();

glDrawArrays() slow on iPad?

I was wondering how to speed up my iPad application using OpenGL ES 2.0. At the moment we have every drawable object draw itself with a call to glDrawArrays(). Blending is on; we really need it. Without disabling blending, how would we improve performance for this app?
For instance, if we draw 3 textures (1024x1024, 256x512, 256x512) across the whole screen, the app only gets 15 FPS, which I think is really slow. Are we doing something terribly wrong? Our drawing code (for each drawable) is as follows:
- (void) draw {
GLuint textureAvailable = 0;
if(texture != nil){
textureAvailable = 1;
}
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture.name);
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, vertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_COLOR, 4, GL_FLOAT, 1, 0, colorsWithMultipliedAlpha);
glEnableVertexAttribArray(ATTRIB_COLOR);
glVertexAttribPointer(ATTRIB_TEXTUREMAP, 2, GL_FLOAT, 1, 0, textureMapping);
glEnableVertexAttribArray(ATTRIB_TEXTUREMAP);
//Note that we are NOT using position.z here because that is only used to determine drawing order
int *jnUniforms = JNOpenGLConstants::getInstance().uniforms;
glUniform4f(jnUniforms[UNIFORM_TRANSLATE], position.x, position.y, 0.0, 0.0);
glUniform4f(jnUniforms[UNIFORM_SCALE], scale.x, scale.y, 1.0, 1.0);
glUniform1f(jnUniforms[UNIFORM_ROTATION], rotation);
glUniform1i(jnUniforms[UNIFORM_TEXTURE_SAMPLE], 0);
glUniform2f(jnUniforms[UNIFORM_TEXTURE_REPEAT], textureRepeat.x, textureRepeat.y);
glUniform1i(jnUniforms[UNIFORM_TEXTURE_AVAILABLE], textureAvailable);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
}
Possible optimizations I think won't work:
Drawing geometry in batches
I'm only drawing 3 items and the FPS is 15. I don't think batching the geometry would help here, because with such a small number of draw calls it doesn't matter if we eliminate 2/3 of them.
Texture Atlas
Again, we're only drawing 3 textures. What I do wonder is whether it would matter (a lot) if we converted these to PVR. I haven't looked into it, but I must admit we're loading big PNGs at the moment. Is there any way to see if this is indeed the case, or is it easier to just try it out?
But please tell me if I'm wrong, I'm happy to hear any ideas.
Proposed solutions
Mipmapped textures
Loading mipmapped textures, doing it like this:
- (id) initWithUIImage: (UIImage * const) image {
glGenTextures(1, &name);
//JNLogString(@"Received name(%d), binding texture", name);
glBindTexture(GL_TEXTURE_2D, name);
//Set the needed parameters for the texture
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
//Load the image data into the texture
glGenerateMipmap(GL_TEXTURE_2D);
return self;
}
This doesn't seem to do anything for our FPS; I think this is because our textures are already roughly at the size they are rendered on screen, in most cases even 1:1.
Other solutions are welcome! I will try them out and post the results here
If you are using very large textures, try creating mipmapped textures. The cost is basically an extra 1/3 of the original texture memory. I think they can be created with this call when setting up the textures:
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);
Some calculations: if you have 3 textures of 2048x2048 (max size) at 15 Hz, you will have a texel throughput (if they are fully shown, i.e. downscaled to screen resolution) of 2048x2048x3x15 = 188,743,680/sec, which is around the value we see at glbenchmark.com for single fill rate (173 Mtexel/sec). But if you are using mipmapped textures, the texel throughput should be closer to the screen resolution (1024x768), which should be something like 1/4 of the previous throughput.
I had a branch in my fragment shader. I thought it didn't put a lot of strain on the GPU, but it did! Anyhow, that was the whole problem; I removed the branch and now my FPS has almost doubled.
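For illustration, a hedged sketch of what removing such a branch can look like (the original shader isn't shown, so the uniform names and overall structure here are assumptions): instead of an if on the texture-available flag, blend between the sampled texel and plain white with mix(), driven by the flag passed as a float.
/* Branchless fragment shader variant, kept as a C string as is common
 * for ES 2.0 code. All names are hypothetical. */
static const char *fragmentShaderSrc =
    "uniform sampler2D u_textureSample;                                   \n"
    "uniform float u_textureAvailable;   /* 0.0 or 1.0 */                 \n"
    "varying lowp vec4 v_color;                                           \n"
    "varying mediump vec2 v_texCoord;                                     \n"
    "void main()                                                          \n"
    "{                                                                    \n"
    "    lowp vec4 texel = texture2D(u_textureSample, v_texCoord);        \n"
    "    /* texel when a texture is available, plain white otherwise */   \n"
    "    gl_FragColor = v_color * mix(vec4(1.0), texel, u_textureAvailable);\n"
    "}                                                                    \n";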
