I am updating an existing app for Mac OS that is based on the older fixed-pipeline OpenGL. (OpenGL 2, I guess it is.) It uses an NSOpenGLView and an ortho projection to create animated kaleidoscopes, with either still images or input from a connected video camera.
It was written before HD cameras were available (or at least readily so). It expects YCbCr 422 video frames from QuickTime (k422YpCbCr8CodecType) and passes the frames to OpenGL as GL_YCBCR_422_APPLE to map them onto a texture.
My company decided it was time to update the app to support the new crop of HD cameras, and I am not sure how to proceed.
I have a decent amount of OpenGL experience, but my knowledge is spotty and has gaps in it.
I'm using the delegate method captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: to receive frames from the camera via a QTCaptureDecompressedVideoOutput.
I have a Logitech C615 for testing, and the buffers I'm getting are reported as being in 32-bit RGBA (GL_RGBA8) format. I believe it's actually in ARGB format.
However, according to the docs on glTexImage2D, the only supported input pixel formats are GL_COLOR_INDEX, GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, GL_RGB, GL_BGR, GL_RGBA, GL_BGRA, GL_LUMINANCE, or GL_LUMINANCE_ALPHA.
I would like to add a fragment shader that would swizzle my texture data into RGBA format when I map my texture into my output mesh.
Since writing this app I've learned the basics of shaders from writing iOS apps for OpenGL ES 2, which does not support fixed pipeline OpenGL at all.
I really don't want to rewrite the app to be fully shader based if I can help it. I'd like to implement an optional fragment shader that I can use to swizzle my pixel data for video sources when I need it, but continue to use the fixed pipeline for managing my projection matrix and model view matrix.
How do you go about adding a shader to the (otherwise fixed) rendering pipeline?
I don't like to ask this, but what exactly is your problem now? It is not that difficult to attach shaders to the fixed-function pipeline; all you need to do is reimplement the tiny bit of functionality that your vertex and fragment shaders replace. You can use built-in GLSL variables like gl_ModelViewMatrix to pick up the values that have been set up by your legacy OpenGL code; you can find a list of them here: http://www.hamed3d.org/Res/GLSL_Reference/0321552628_AppI.pdf
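For illustration, here is a rough sketch of what that looks like in practice (the function names and program handle are placeholders, not from your code): a #version 120 program whose vertex shader reuses the fixed-function matrices via ftransform(), and whose fragment shader does the swizzle, so your existing glOrtho/modelview code keeps working unchanged.

```cpp
#include <OpenGL/gl.h>   // legacy GL on OS X
#include <cstdio>

// Legacy GLSL 1.20, so the built-in matrix/texcoord variables are still available.
static const char *kVertSrc =
    "#version 120\n"
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;\n"
    "    gl_Position = ftransform(); // same as gl_ModelViewProjectionMatrix * gl_Vertex\n"
    "}\n";

static const char *kFragSrc =
    "#version 120\n"
    "uniform sampler2D videoTex;\n"
    "void main() {\n"
    "    vec4 t = texture2D(videoTex, gl_TexCoord[0].st);\n"
    "    // Assuming the frame bytes arrive as A,R,G,B but were uploaded as GL_RGBA;\n"
    "    // adjust the swizzle to match what you actually observe.\n"
    "    gl_FragColor = t.gbar;\n"
    "}\n";

static GLuint CompileShader(GLenum type, const char *src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    GLint ok = 0;
    glGetShaderiv(s, GL_COMPILE_STATUS, &ok);
    if (!ok) {
        char log[1024];
        glGetShaderInfoLog(s, sizeof log, NULL, log);
        fprintf(stderr, "shader compile failed: %s\n", log);
    }
    return s;
}

// Build once (e.g. in prepareOpenGL).
GLuint BuildVideoProgram(void) {
    GLuint p = glCreateProgram();
    glAttachShader(p, CompileShader(GL_VERTEX_SHADER, kVertSrc));
    glAttachShader(p, CompileShader(GL_FRAGMENT_SHADER, kFragSrc));
    glLinkProgram(p);
    glUseProgram(p);
    glUniform1i(glGetUniformLocation(p, "videoTex"), 0); // sampler on texture unit 0
    glUseProgram(0);
    return p;
}
```

At draw time you bracket only the video pass with glUseProgram(program) ... glUseProgram(0); everything else (projection, modelview, the rest of the drawing) stays on the fixed pipeline.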
I'm currently using the FreeType library to render text in my OpenGL ES 2.0 app (both iOS and Android), and I've come to the point where I seriously have to consider performance. I have a store of textures (one for each character), and whenever I render text, each character is drawn from its own texture: OpenGL binds to it and renders it to the screen... for every letter! Very inefficient. So I'm looking into ways to make this faster, and one of the ideas I had was to pre-render each word instead of each letter. So, now for the big question: how do I combine textures side by side? I know I need to create a buffer and that somehow it will come from FT_LOAD_CHAR, but I don't really know much else.
Actually, you have misunderstood how to use FreeType. You can load a font using the FT library and pack all the characters into a single bitmap texture (an atlas). Then you just use texture coordinates to point at the right UVs.
This link can help to some extent: Draw FreeType Font in OpenGL ES
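To make the atlas idea concrete, here is a rough sketch (the Glyph struct, atlas size, and ASCII-only range are made-up simplifications): each glyph is rendered once with FT_Load_Char(..., FT_LOAD_RENDER), copied into one single-channel buffer, its UV rectangle recorded, and the whole buffer uploaded once with glTexImage2D. Afterwards a word is just a run of quads referencing the same texture, so one bind is enough.

```cpp
#include <ft2build.h>
#include FT_FREETYPE_H
#include <GLES2/gl2.h>   // or the equivalent GLES2 header on iOS
#include <cstring>

struct Glyph { float u0, v0, u1, v1; int w, h, advance, bearingX, bearingY; };

GLuint BuildAtlas(const char *fontPath, int pixelSize, Glyph glyphs[128]) {
    FT_Library ft;
    FT_Face face;
    FT_Init_FreeType(&ft);
    FT_New_Face(ft, fontPath, 0, &face);
    FT_Set_Pixel_Sizes(face, 0, pixelSize);

    const int atlasW = 1024, atlasH = pixelSize + 2;        // single row, ASCII only
    unsigned char *pixels = new unsigned char[atlasW * atlasH]();
    int penX = 0;

    for (int c = 32; c < 128; ++c) {
        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) continue;
        FT_Bitmap &bm = face->glyph->bitmap;
        for (unsigned row = 0; row < bm.rows; ++row)        // copy the glyph into the atlas
            std::memcpy(pixels + row * atlasW + penX, bm.buffer + row * bm.pitch, bm.width);
        glyphs[c].u0 = float(penX) / atlasW;
        glyphs[c].u1 = float(penX + bm.width) / atlasW;
        glyphs[c].v0 = 0.0f;
        glyphs[c].v1 = float(bm.rows) / atlasH;
        glyphs[c].w = bm.width;
        glyphs[c].h = bm.rows;
        glyphs[c].advance  = face->glyph->advance.x >> 6;   // advance is in 1/64 pixel units
        glyphs[c].bearingX = face->glyph->bitmap_left;
        glyphs[c].bearingY = face->glyph->bitmap_top;
        penX += bm.width + 1;                               // 1 px padding between glyphs
    }

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);                  // rows are not 4-byte aligned
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, atlasW, atlasH, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    delete [] pixels;
    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return tex;
}
```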
I am trying to write a simple GPGPU benchmark: load the data into a vertex buffer array, do some computation in the vertex shader, and read the data back. Is it possible? I am planning to run this on SGX GPUs.
Is there any way to do that? I don't want it to go through the transformation, clipping, tiling, and pixel-processing phases; that incurs additional overhead and changes my data.
Can I read back the data and examine it on the CPU? Is there any way to do this in OpenGL ES?
I can do the computations in the pixel shader as well, by sending data in through a texture, multiplying by some constants, and writing the result to another framebuffer. But how do I get it back? I don't really want to present it to the screen.
Is there any way to do that? Can somebody point me to some tutorials, if any are available?
Reading vertex output: look for Transform Feedback, but you will need OpenGL ES 3.0 to use it.
For ES 2.0 I suggest using a fragment shader and render-to-texture techniques. Here is a link with a tutorial.
After rendering to the texture you basically have to read back its pixels.
There is a nice tutorial on reading the data here.
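For concreteness, a bare-bones ES 2.0 readback sketch along those lines (the "compute" program and the full-screen quad draw are left as placeholders; only the FBO plumbing and glReadPixels are shown):

```cpp
#include <GLES2/gl2.h>
#include <cstdlib>

// Renders into a texture-backed FBO and reads the result back to the CPU.
unsigned char *ComputeOnGPU(int width, int height) {
    // 1. Texture that will receive the fragment shader's output.
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    // 2. FBO with that texture as the colour attachment.
    GLuint fbo;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return NULL;

    // 3. Draw a full-screen quad with your "compute" fragment shader bound,
    //    feeding the input data in as a texture on another unit.
    glViewport(0, 0, width, height);
    // glUseProgram(computeProgram); drawFullScreenQuad();   // placeholders

    // 4. Read back. GL_RGBA + GL_UNSIGNED_BYTE is the combination every
    //    ES 2.0 implementation must support for glReadPixels.
    unsigned char *out = (unsigned char *)std::malloc(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, out);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    return out;
}
```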
A tutorial about transform feedback: http://open.gl/feedback
You cannot read data back from the vertex shader directly in OpenGL ES 2.0.
So you can send your data to the pixel/fragment shader, attach a texture to a Framebuffer Object (FBO), and then use glReadPixels to get the rendered data back to the CPU.
This link describes the concept and has a code snippet:
here.
Hope this might help.
"...computation in the vertex shader, and read the data back. Is it possible? I am planning to run this on SGX GPUs."
As SGX supports OpenGL ES 2.0, not 3.0, PowerVR SGX doesn't support reading vertex shader output (the OpenGL ES 2.0 spec doesn't include such functionality).
"can I read back the data and examine it in the CPU? is there anyway in opengl es?"
You can use framebuffer objects and read them back using glReadPixels. You can read about framebuffer objects here
Ref: http://en.wikipedia.org/wiki/Framebuffer_Object
Ref: Reading the pixels values from the Frame Buffer Object (FBO) using Pixel Buffer Object (PBO)
If it is GPGPU computation you are after, then I recommend going with OpenCL.
SGX supports OpenCL EP (Embedded Profile) 1.1.
While writing from the vertex shader to an offscreen buffer is not possible in OpenGL ES 2, you can do that from the pixel shader.
First you need to render to the offscreen buffer; then you can either render that as a texture onto another object on screen, or read it back using glReadPixels the usual way. A simple list of steps is given in https://gist.github.com/prabindh/9201855.
I'm writing a video player where my code decodes the video to raw YCbCr frames.
What would be the fastest way to output these through the Qt framework? I want to
avoid copying data around too much as the images are in HD format.
I am afraid that software color conversion into a QImage would be slow and that later the QImage will again be copied when drawing into the GUI.
I have had a look at QAbstractVideoSurface and even have running code, but I cannot grasp how this is faster, since, as in the VideoWidget example (http://idlebox.net/2010/apidocs/qt-everywhere-opensource-4.7.0.zip/multimedia-videowidget.html), rendering is still done by calling QPainter::drawImage with a QImage, which has to be in RGB.
The preferred solution, it seems to me, would be to have direct access to a hardware surface into which I could decode the YCbCr, or at least do the RGB conversion (with libswscale) directly. But I cannot see how I could do this (without using OpenGL, which would also give me free scaling, though).
One common solution is to use a QGLWidget with texture mapping. The application allocates a texture on the first frame, then just updates that texture for the remaining frames. These are pure GL calls, as Qt does not support texture manipulation yet, but a QGLWidget can be used as the container. In my case the decoding was done using SSE2. Hope this helps.
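A rough sketch of that setup, assuming Qt 4's QGLWidget and frames already converted to RGB (the class and member names are just illustrative):

```cpp
#include <QGLWidget>

class VideoGLWidget : public QGLWidget {
public:
    explicit VideoGLWidget(QWidget *parent = 0)
        : QGLWidget(parent), m_tex(0), m_w(0), m_h(0) {}

    // Called with each decoded frame (RGB here; you could instead upload
    // planar YCbCr to separate textures and convert in a fragment shader).
    void setFrame(const unsigned char *rgb, int w, int h) {
        makeCurrent();
        if (!m_tex || w != m_w || h != m_h) {               // allocate once / on size change
            if (m_tex) glDeleteTextures(1, &m_tex);
            glGenTextures(1, &m_tex);
            glBindTexture(GL_TEXTURE_2D, m_tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, w, h, 0,
                         GL_RGB, GL_UNSIGNED_BYTE, NULL);
            m_w = w; m_h = h;
        }
        glBindTexture(GL_TEXTURE_2D, m_tex);
        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);              // RGB rows may not be 4-byte aligned
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,       // update only, no reallocation
                        GL_RGB, GL_UNSIGNED_BYTE, rgb);
        update();                                           // schedule a repaint
    }

protected:
    void initializeGL() { glEnable(GL_TEXTURE_2D); }
    void resizeGL(int w, int h) { glViewport(0, 0, w, h); }
    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT);
        if (!m_tex) return;
        glBindTexture(GL_TEXTURE_2D, m_tex);
        glBegin(GL_QUADS);                                  // fixed pipeline is enough here
        glTexCoord2f(0, 1); glVertex2f(-1, -1);
        glTexCoord2f(1, 1); glVertex2f( 1, -1);
        glTexCoord2f(1, 0); glVertex2f( 1,  1);
        glTexCoord2f(0, 0); glVertex2f(-1,  1);
        glEnd();
    }

private:
    GLuint m_tex;
    int m_w, m_h;
};
```

The scaling to the widget size then happens on the GPU when the quad is drawn, so no extra CPU-side copy or resize of the frame is needed.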
I have a realtime OpenGL application rendering some objects with textures. I have built a function to take an internal screenshot of the rendered scene by rendering it to a DIB via PFD_DRAW_TO_BITMAP and copying it to an image. It works quite well except for one kind of texture: JPEGs with 24 bpp (so 8 bits each for R, G, B). I can load them and they render correctly in realtime, but not when rendered to the DIB. For other textures it works well.
I see the same behaviour when testing my application on a virtual machine (Windows XP, no hardware acceleration!). There, these specific textures are not even shown in realtime rendering. Without hardware acceleration I guess Windows XP uses its own software implementation of OpenGL and falls back to OpenGL 1.1.
So are there any kinds of textures that can't be drawn without 3D hardware acceleration? Or is there a common pitfall?
PFD_DRAW_TO_BITMAP will always drop you into the fallback OpenGL-1.1 software rasterizer. So you should not use it. Create an off-screen FBO, render to that, retrieve the pixel data using glReadPixels and write it to a file using an image file I/O library.
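A minimal sketch of that approach (assuming the FBO entry points are available, e.g. via GLEW; drawScene() stands in for your existing rendering code):

```cpp
#include <GL/glew.h>     // or load the ARB/EXT framebuffer functions yourself
#include <vector>

// Renders the scene into an off-screen FBO and returns the pixels (RGBA, bottom-up rows).
bool RenderSceneToPixels(int w, int h, std::vector<unsigned char> &out) {
    GLuint fbo, color, depth;
    glGenFramebuffers(1, &fbo);
    glGenRenderbuffers(1, &color);
    glGenRenderbuffers(1, &depth);

    glBindRenderbuffer(GL_RENDERBUFFER, color);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, w, h);
    glBindRenderbuffer(GL_RENDERBUFFER, depth);
    glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, w, h);

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);
    glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
        return false;

    glViewport(0, 0, w, h);
    // drawScene();   // same hardware-accelerated path as the realtime rendering

    out.resize(w * h * 4);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, &out[0]);

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glDeleteRenderbuffers(1, &color);
    glDeleteRenderbuffers(1, &depth);
    glDeleteFramebuffers(1, &fbo);
    return true;
}
```

The pixel data can then be handed to whatever image I/O library you use (remember that glReadPixels returns rows bottom-up).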
I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, but I need to add a little bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, reduce the size of the texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to function. I cannot use the #version, layout, or out in my fragment shader, as they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about such things.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here
Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy though, even if just GPU-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example for rendering to a texture and using this texture for rendering afterwards. Since the question lacks specific information about the particular problems you encountered with your approach, those rather general googlable pointers to the usual render-to-texture resources need to suffice for now.
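As a rough starting point, here is what the two passes can look like with the EXT entry points (drawScene() and drawFullScreenQuad() are placeholders for your own code; on OS X the EXT functions are exposed through the system GL headers in a legacy context):

```cpp
#include <OpenGL/gl.h>   // pulls in the EXT_framebuffer_object declarations on OS X

GLuint sceneTex = 0, sceneFBO = 0;

// Create a texture-backed FBO, e.g. at a reduced size for the bloom blur.
void CreateRenderTarget(int w, int h) {
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffersEXT(1, &sceneFBO);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFBO);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, sceneTex, 0);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
}

void DrawFrame(int fboW, int fboH, int windowW, int windowH) {
    // Pass 1: render the scene into the texture.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, sceneFBO);
    glViewport(0, 0, fboW, fboH);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // drawScene();

    // Pass 2: back to the window; draw a quad textured with the result,
    // with the bloom/blur fragment shader bound.
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    glViewport(0, 0, windowW, windowH);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    // drawFullScreenQuad();
}
```

Note that a legacy (non-Core-Profile) context on OS X only accepts GLSL 1.20, so writing to gl_FragColor rather than using out variables in the bloom shader is expected behaviour, which matches what you observed.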