Can OpenGL ES render into CPU memory? - opengl-es

Is it possible to make OpenGL ES render everything into an ordinary allocated buffer instead of the framebuffer:
/* render into this buffer */
GLubyte* buffer = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
I want to be able to convert those rendered images into textures for other uses.
I'm using OpenGL ES 1.3 with the standard C API.

You won't get around a call to glReadPixels, which copies the contents of the framebuffer into system memory. If you only want to get the rendered result into a texture, though, you can copy it directly on the GPU using glCopyTexImage2D / glCopyTexSubImage2D, or render straight into the texture using an FBO and avoid the copy entirely (FBOs are core in ES 2.0 and available in ES 1.x through the OES_framebuffer_object extension). But you cannot render directly into system memory; rendering into a texture works with FBOs because textures are stored in GPU memory.
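A minimal sketch of both paths, assuming a current ES context with a width x height framebuffer (variable names are illustrative):
/* CPU path: copy the framebuffer contents into client memory. */
GLubyte* buffer = (GLubyte*) calloc(width * height * 4, sizeof(GLubyte));
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

/* GPU-only path: copy the framebuffer into a texture instead. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 0, 0, width, height, 0);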

Related

Copying pixel data directly from windows Device Context to an openGL rendering context

Is it possible to copy pixel data directly from a Windows device context into an OpenGL rendering context (an OpenGL texture, to be specific)? I know that I can copy the Windows device context's pixel data into system memory (taking it out of the graphics card) and later upload it back into my OpenGL framebuffer, but what I'm looking for is a direct transfer where the data doesn't have to leave the GPU.
I am trying to write an application that is essentially a virtual magnifier. It consists of one window, which displays the contents of any other windows that are open underneath it. The target for my application is machines running Windows 8 and higher, and I am using the basic Win32 API. The reason I want to use OpenGL to display the contents of my window is that I wish to perform various spatial transformations (distortions) on the GPU, not just magnification. Using OpenGL, I believe I can perform these transformations very fast.
Previously, I thought that all I had to do was "move" my OpenGL rendering context onto each third-party window that I wanted to take pixel data from, use glReadPixels() to copy this data into a PBO, then switch my rendering context back to my magnifier window and proceed with rendering. However, I understand that this isn't possible, because OpenGL doesn't have access to any pixel data that wasn't rendered by OpenGL itself.
Any help would be greatly appreciated.

How to redraw partially in opengl Es 2.0

I want to redraw only the part of the scene that has changed in each frame, instead of redrawing the entire scene when only a portion of it is updated.
Is there a way to do that in OpenGL ES 2.0?
Any input on this would be really helpful.
OpenGL does not really support incremental rendering. You need to draw the entire frame every time you are asked to redraw.
The closest I can think of is that you render your static data to an offscreen framebuffer, using a FBO (Frame Buffer Object). You should be able to find plenty of examples online and in books if you look for keywords like "OpenGL FBO". You will be using calls like glGenFramebuffers(), glBindFramebuffer(), glFramebufferTexture2D(), etc.
Once you've rendered the static content into an FBO, you can copy it to the default framebuffer at the start of each redraw, and then render the dynamic content on top of it. This can be a worthwhile method if rendering the static content is very expensive. Otherwise, the copy from FBO to default framebuffer can be more expensive than simply re-rendering the static content each time.
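A hedged ES 2.0 sketch of that setup, with sizes and names illustrative and error checking omitted:
/* Create a texture to hold the static content. */
GLuint staticTex, fbo;
glGenTextures(1, &staticTex);
glBindTexture(GL_TEXTURE_2D, staticTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

/* Attach it to an FBO and render the static scene into it once. */
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, staticTex, 0);
/* ... draw static content here ... */

/* Each frame: switch back to the default framebuffer, draw a
 * fullscreen quad sampling staticTex, then draw the dynamic content. */
glBindFramebuffer(GL_FRAMEBUFFER, 0);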
The above is pretty easy if the static content is in the background, and the dynamic content is completely in front of it. If static and dynamic content overlap, it gets trickier. You will then have to restore the depth buffer resulting from rendering the static content each time before starting to render the dynamic content. I can't think of a good way to do that in ES 2.0. The features to do this relatively smoothly (depth textures, glBlitFramebuffer) are only in ES 3.0 and later.
There is one other option that I don't think is very appealing, but I wanted to mention it for completeness' sake: EGL defines an EGL_SWAP_BEHAVIOR attribute that can be set to EGL_BUFFER_PRESERVED. One big caveat is that it's optional and not supported on all devices. It also only preserves the color buffer, not auxiliary buffers like the depth buffer. If you want to read up on it anyway, see eglSwapBuffers and eglSurfaceAttrib.
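A minimal sketch of requesting that behavior, assuming an initialized display and window surface; since support is optional, the result should be verified:
/* Ask EGL to preserve the color buffer across eglSwapBuffers(). */
eglSurfaceAttrib(display, surface, EGL_SWAP_BEHAVIOR, EGL_BUFFER_PRESERVED);

EGLint behavior;
eglQuerySurface(display, surface, EGL_SWAP_BEHAVIOR, &behavior);
if (behavior != EGL_BUFFER_PRESERVED) {
    /* Device does not support preserved swaps; fall back to full redraws. */
}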

Sharing an OpenGL framebuffer between processes in Mac OS X

Is there any way in Mac OS X to share an OpenGL framebuffer between processes? That is, I want to render to an off-screen target in one process and display it in another.
You can do this with DirectX (actually DXGI) on Windows by creating a surface (the DXGI equivalent of an OpenGL framebuffer) in shared mode, getting an opaque handle for that surface, and passing that handle to another process via whatever means you like; the other process then creates a surface by passing in the existing handle. You use the surface as a render target in one process and consume it as a texture in the other however you wish. In fact, the whole compositing window system works like this from Vista onwards.
If this isn't possible I can of course get the contents of the framebuffer into system memory and use cross-process shared memory to get it to the target process, then upload it again from there, but that would be unnecessarily slow.
Depending on what you're really trying to do, this sample code project may be what you want:
MultiGPUIOSurface sample code
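The core idea in that sample is an IOSurface, which one process creates and another looks up by ID and binds to an OpenGL texture. A hedged sketch, with error handling omitted and sizes illustrative:
#include <IOSurface/IOSurface.h>
#include <OpenGL/OpenGL.h>
#include <OpenGL/gl.h>
#include <OpenGL/CGLIOSurface.h>

/* Process A: create the surface and publish its ID via any IPC. */
int w = 1024, h = 768, bpe = 4;
CFStringRef keys[3] = { kIOSurfaceWidth, kIOSurfaceHeight,
                        kIOSurfaceBytesPerElement };
CFNumberRef vals[3] = {
    CFNumberCreate(NULL, kCFNumberIntType, &w),
    CFNumberCreate(NULL, kCFNumberIntType, &h),
    CFNumberCreate(NULL, kCFNumberIntType, &bpe)
};
CFDictionaryRef props = CFDictionaryCreate(NULL,
    (const void **)keys, (const void **)vals, 3,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
IOSurfaceRef surf = IOSurfaceCreate(props);
IOSurfaceID sid = IOSurfaceGetID(surf);   /* send this to process B */

/* Process B: look the surface up and bind it to a GL texture. */
IOSurfaceRef shared = IOSurfaceLookup(sid);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
CGLTexImageIOSurface2D(CGLGetCurrentContext(), GL_TEXTURE_RECTANGLE_ARB,
                       GL_RGBA, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                       shared, 0);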
It really depends upon the context of how you're using it.
Objects that may be shared between contexts include buffer objects, program and shader objects, renderbuffer objects, sampler objects, sync objects, and texture objects (except for the texture objects named zero).
Some of these objects may contain views (alternate interpretations) of part or all of the data store of another object. Examples are texture buffer objects, which contain a view of a buffer object's data store, and texture views, which contain a view of another texture object's data store. Views act as references on the object whose data store is viewed.
Objects which contain references to other objects include framebuffer, program pipeline, query, transform feedback, and vertex array objects. Such objects are called container objects and are not shared.
(Chapter 5, OpenGL 4.4 core specification)
The reason you can do this on Windows and not on OS X is that Windows' graphics stack exposes an API that allows DirectX surfaces to be shared between processes. If OS X doesn't offer that capability within its OpenGL API, you're going to have to come up with your own solution. Take a look at the OpenGL Programming Guide for Mac; there's a small section that describes using multiple OpenGL contexts.

how to load an image in opengl

I need to load an image to use as my background...
Does anyone know how to do this?
I know I have to do the following steps:
1) Load image data into system memory
2) Generate a texture name with glGenTextures
3) Bind the texture name with glBindTexture
4) Set wrapping and filtering mode with glTexParameter
5) Call glTexImage2D with the right parameters, depending on the image format, to load the image data into video memory
but I don't know how to put these steps together in OpenGL.
OpenGL supports textures and images, but the user needs to provide the data, so you have to use some library or additional code to load it.
I suggest using the very simple SOIL library - http://www.lonesock.net/soil.html
Or some library provided by your SDK.
In general:
/* pBytes points to width x height RGBA pixels loaded by your image library */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pBytes);
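A fuller sketch of the five steps using SOIL; this is a hedged example, with the filename and variable names illustrative:
#include <GL/gl.h>   /* platform GL header */
#include "SOIL.h"

/* 1) Load image data into system memory. */
int width, height, channels;
unsigned char* pBytes = SOIL_load_image("background.png",
                                        &width, &height, &channels,
                                        SOIL_LOAD_RGBA);

/* 2) + 3) Generate and bind a texture name. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* 4) Set wrapping and filtering modes. */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

/* 5) Upload the pixel data into video memory. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pBytes);

SOIL_free_image_data(pBytes);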

How to convert an OpenGL ES texture into a CIImage

I know how to do it the other way around. But how can I create a CIImage from a texture, without having to copy into CPU memory? [CIImage imageWithData]? CVOpenGLESTextureCache?
Unfortunately, I don't think there's any way to avoid having to read back pixel data using glReadPixels(). All of the inputs for a CIImage (data, CGImageRef, CVPixelBufferRef) are CPU-side, so I don't see a fast path to deliver that to a CIImage. Your best alternative would be to use glReadPixels() to pull in the raw RGBA data from your texture and send it into the CIImage using -initWithData:options: and a kCIFormatRGBA8 pixel format.
Update (3/14/2012): On iOS 5.0, there is now a faster way to grab OpenGL ES frame data, using the new texture caches. I describe this in detail in this answer.
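A hedged ES 2.0 sketch of that readback; a texture cannot be read directly, so it is attached to a temporary FBO first (texture, width, and height are assumed to exist):
/* Attach the texture to a throwaway FBO so glReadPixels can see it. */
GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, texture, 0);

GLubyte* pixels = (GLubyte*) malloc(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

glBindFramebuffer(GL_FRAMEBUFFER, 0);
glDeleteFramebuffers(1, &fbo);
/* pixels now holds RGBA data to hand to -initWithData:options:. */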
However, there might be another way to achieve what you want. If you simply want to apply filters on a texture for output to the screen, you might be able to use my GPUImage framework to do the processing. It already uses OpenGL ES 2.0 as the core of its rendering pipeline, with textures as the way that frames of images or video are passed from one filter to the next. It's also much faster than Core Image, in my benchmarks.
You can supply your texture as an input here, so that it never has to touch the CPU. I don't have a stock class for grabbing raw textures from OpenGL ES yet, but you can modify the code for one of the existing GPUImageOutput subclasses to use this as a source fairly easily. You can then chain filters on to that, and direct the output to the screen or to a still image. At some point, I'll add a class for this kind of data source, but the project's still fairly new.
As of iOS 6, you can use a built-in init method for this situation:
initWithTexture:size:flipped:colorSpace:
See the docs:
http://developer.apple.com/library/ios/#DOCUMENTATION/GraphicsImaging/Reference/QuartzCoreFramework/Classes/CIImage_Class/Reference/Reference.html
You might find these helpful:
https://developer.apple.com/library/ios/#samplecode/RosyWriter/Introduction/Intro.html
https://developer.apple.com/library/ios/#samplecode/GLCameraRipple/Listings/GLCameraRipple_RippleViewController_m.html
In general, I think the image data will need to be copied from the GPU to the CPU. However, the iOS features mentioned above might make this easier and more efficient.
