OpenGL render to texture - Cocoa

I know this has been asked before (I did search) but I promise you this one is different.
I am making an app for Mac OS X Mountain Lion, and I need to add a bit of a bloom effect. I need to render the entire scene to a texture the size of the screen, downsample that texture, pass it through a pixel buffer, then use it as a texture for a quad.
I ask this again because a few of the usual techniques do not seem to work. I cannot use #version, layout, or out qualifiers in my fragment shader; they do not compile. If I just use gl_FragColor as normal, I get random pieces of the screen behind my app rather than the scene I am trying to render. The documentation doesn't say anything about this.
So, basically, how can I render to a texture properly with the Mac implementation of OpenGL? Do you need to use extensions to do this?
I use the code from here

Rendering to a texture is best done using FBOs, which let you render directly into the texture. If your hardware/driver doesn't support OpenGL 3+, you will have to use the FBO functionality through the ARB_framebuffer_object core extension or the EXT_framebuffer_object extension.
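For concreteness, a minimal render-to-texture setup with the core/ARB entry points looks roughly like the sketch below (width and height stand in for your screen-sized texture; use the ...EXT-suffixed functions instead if only EXT_framebuffer_object is available, and add proper error checking):

    GLuint fbo, colorTex;

    /* Create the texture the scene will be rendered into. */
    glGenTextures(1, &colorTex);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);   /* allocate, no data */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* Create the FBO and attach the texture as its color buffer. */
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, colorTex, 0);
    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        /* handle an incomplete framebuffer here */
    }

    /* Pass 1: draw the scene into the texture. */
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, width, height);
    /* ... draw the scene ... */

    /* Pass 2: back to the window; use colorTex like any other texture. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, colorTex);
    /* ... draw the bloom/full-screen quad ... */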
If FBOs are not supported at all, you will either have to resort to a simple glCopyTexSubImage2D (which involves a copy, though only GPU-to-GPU) or use the more flexible but rather intricate (and deprecated) PBuffers.
This tutorial on FBOs provides a simple example of rendering to a texture and then using that texture for further rendering. Since the question lacks specifics about the particular problems you ran into with your approach, these rather general pointers to the usual render-to-texture resources will have to suffice for now.
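If you do fall back to glCopyTexSubImage2D, the idea is simply to render to the normal back buffer and then copy the result into an already-allocated texture; a rough sketch, reusing the assumed width, height and colorTex from above:

    /* Render the scene to the back buffer as usual, then copy it. */
    /* ... draw the scene ... */
    glBindTexture(GL_TEXTURE_2D, colorTex);
    glCopyTexSubImage2D(GL_TEXTURE_2D, 0,   /* target, mip level         */
                        0, 0,               /* destination offset in tex */
                        0, 0,               /* lower-left of read buffer */
                        width, height);     /* size of the copied region */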

Related

If Core Graphics uses Metal under the hood, can a Metal implementation run faster than a CG one? Why?

Let's say I want to develop a Paint app and need to implement a brush engine. For a raster brush, you basically need to stamp a texture on touch locations with a given spacing.
-- Task: Composite a small image (brush tip) over a bigger one.
I decided to build a prototype first in CG using a CGContext to render the stamps and found out it performed pretty well even with coalesced touches and a decent size canvas (CGContext output size).
However, since I need to paint onto really big textures (8000x6000 would be great), I decided to give Metal a chance. I know this task might be trivial for someone with a background in Metal, but I'm new to this field. So I tried using CIFilters (Metal backed) to composite the brush over the canvas and display it in a custom MetalImageView: MTKView.
I thought that having the canvas and the brush as CIImages and displaying them in a Metal layer would already be more performant than the naive CG implementation. But it's not: the CIFilter approach re-renders the entire canvas on every single stamp(at:) call, whereas in CG I just refresh a small rect around that point.
Now, I think I could accomplish that with the CIFilter if I could change the extent that gets computed. I don't know if that can be done with Core Image, but I'm sure it would be really easy in Metal for someone with experience.
-- Question: Can a pure Metal implementation stamp images faster than the CG one, given that CG runs on Metal under the hood? If so, how much faster? Is it worth learning how to do it, or should I rather spend that time improving the CG implementation?
Note that I'm asking about a raster brush, not a vector brush with Bezier paths, which is way easier to code and runs faster but can't use textured brushes.
I really appreciate any help.
There is actually a chapter in the Core Image Programming Guide about that. They describe continuous painting into the same texture using the CIImageAccumulator class. You can also download the sample app.
I think performance-wise there shouldn't be a huge difference. You should be able to optimize heavily by telling Core Image the region of interest and domain of definition (extent) of your brush stroke filter. Then it should be able to render only the necessary parts of the image instead of the whole thing in every frame.

OpenGL: Combine textures side by side

I'm currently using the FreeType library to render text in my OpenGL ES 2.0 app (both iOS and Android), and I've come to the point where I seriously have to consider performance. I have a store of textures (one for each character), and whenever I render text, each character looks up its texture, OpenGL binds to it, renders it to the screen... for each letter! Very inefficient. So I'm looking into ways to make this faster, and one of the ideas I had was to pre-render each word instead of each letter. So, now for the big question: how do I combine textures side by side? I know I need to create a buffer and that somehow it will come from FT_LOAD_CHAR, but I don't really know much else.
Actually, you have misunderstood how to use FreeType. You can load the font using the FT library and pack all characters into a single bitmap texture (a texture atlas). Then you just use texture coordinates to point at the right UVs.
This link can help to some extent: Draw FreeType Font in OpenGL ES
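To illustrate the atlas idea (a rough sketch, not code from the linked tutorial): allocate one texture up front, render each glyph with FreeType, and copy its bitmap into a free sub-rectangle with glTexSubImage2D while remembering its UVs. Here face is assumed to be an already loaded FT_Face, and the simple row packing ignores bearings, advances and kerning for brevity:

    /* Pack the printable ASCII glyphs into one 512x512 single-channel atlas. */
    GLuint atlas;
    glGenTextures(1, &atlas);
    glBindTexture(GL_TEXTURE_2D, atlas);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 512, 512, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, NULL);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    /* glyph rows are tightly packed */

    int penX = 0, penY = 0, rowH = 0;
    for (unsigned char c = 32; c < 128; ++c) {
        if (FT_Load_Char(face, c, FT_LOAD_RENDER)) continue;
        FT_Bitmap *bmp = &face->glyph->bitmap;
        if (penX + (int)bmp->width >= 512) {  /* start a new row in the atlas */
            penX = 0; penY += rowH; rowH = 0;
        }
        glTexSubImage2D(GL_TEXTURE_2D, 0, penX, penY,
                        bmp->width, bmp->rows,
                        GL_LUMINANCE, GL_UNSIGNED_BYTE, bmp->buffer);
        /* Store penX/512.0, penY/512.0, the glyph size and bearing somewhere,
           so you can later emit one quad per character with the right UVs. */
        penX += (int)bmp->width + 1;
        if ((int)bmp->rows > rowH) rowH = (int)bmp->rows;
    }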

Mixing OpenGL and a software-rendered GUI

I need to write an application where the main content will be OpenGL rendered (something like a game engine), but there is no good OpenGL-based GUI library similar to what Qt Widgets provides (those are software rendered).
As I browsed the source code of Qt, all painting is done via QPainter, and there is even an OpenGL implementation of QPainter, but support for multiple graphics backends was dropped in Qt 5, so you can't render Qt Widgets with OpenGL anymore (I don't know why).
The problem is that you can't paint to a window surface using both software and hardware rendering: the window is either associated with an OpenGL context or uses software rendering. That means that if I want an app with a complex GUI and OpenGL-based content, I either need to paint everything using OpenGL (which is hard because, as I said, there is no good GUI library for it), or I can render the GUI to an image using software rendering (for example Qt) and then load that image as an OpenGL texture (probably a big performance loss).
Does anyone know a good application that uses a software-rendered GUI loaded as a texture into OpenGL? I need to be sure it will work without a big performance loss, but I can't find a good example showing that it works well even for apps like game engines.
If you take the "render the UI to a texture, then draw a textured quad over my game" route and are worried about performance, try to avoid transferring the whole texture each frame.
If you think about it:
60 fps is not necessary for the UI: 30 fps is enough, so update it only every other frame.
Most of the time the UI doesn't change between frames, and when it does change, only a small portion of it does.
UI frameworks often keep track of which parts of the UI are "dirty" and need to be redrawn. If you can get your hands on that information, you can stream only the parts that need updating to the texture (glTexSubImage2D), as in the sketch below.
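A rough sketch of that last point, assuming uiTex is a texture sized to the full GUI surface, uiPixels is the tightly packed RGBA buffer the GUI toolkit rendered into (surfaceW pixels wide), and (x, y, w, h) is a dirty rectangle reported by the toolkit:

    /* Upload only the dirty rectangle of the software-rendered GUI surface. */
    glBindTexture(GL_TEXTURE_2D, uiTex);
    glPixelStorei(GL_UNPACK_ROW_LENGTH, surfaceW);  /* stride of the full surface */
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, x);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, y);
    glTexSubImage2D(GL_TEXTURE_2D, 0, x, y, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, uiPixels);
    /* Reset the unpack state so later uploads are not affected. */
    glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
    glPixelStorei(GL_UNPACK_SKIP_PIXELS, 0);
    glPixelStorei(GL_UNPACK_SKIP_ROWS, 0);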

How to add a shader to a fixed-pipeline OpenGL app

I am updating an existing app for Mac OS that is based on the older fixed-pipeline OpenGL. (OpenGL 2, I guess it is.) It uses an NSOpenGLView and an ortho projection to create animated kaleidoscopes, with either still images or input from a connected video camera.
It was written before HD cameras were available (or at least readily so). It's written to expect YCBCR_422 video frames from QuickTime (k422YpCbCr8CodecType) and passes the frames to OpenGL as GL_YCBCR_422_APPLE to map them to a texture.
My company decided it was time to update the app to support the new crop of HD cameras, and I am not sure how to proceed.
I have a decent amount of OpenGL experience, but my knowledge is spotty and has gaps in it.
I'm using the delegate method captureOutput:didOutputVideoFrame:withSampleBuffer:fromConnection: to receive frames from the camera via a QTCaptureDecompressedVideoOutput.
I have a Logitech c615 for testing, and the buffers I'm getting are reported as being in 32 bit RGBA (GL_RGBA8) format. I believe it's actually in ARGB format.
However, according to the docs on glTexImage2D, the only supported input pixel formats are GL_COLOR_INDEX, GL_RED, GL_GREEN, GL_BLUE, GL_ALPHA, GL_RGB, GL_BGR, GL_RGBA, GL_BGRA, GL_LUMINANCE, or GL_LUMINANCE_ALPHA.
I would like to add a fragment shader that would swizzle my texture data into RGBA format when I map my texture into my output mesh.
Since writing this app I've learned the basics of shaders from writing iOS apps for OpenGL ES 2, which does not support fixed pipeline OpenGL at all.
I really don't want to rewrite the app to be fully shader based if I can help it. I'd like to implement an optional fragment shader that I can use to swizzle my pixel data for video sources when I need it, but continue to use the fixed pipeline for managing my projection matrix and model view matrix.
How do you go about adding a shader to the (otherwise fixed) rendering pipeline?
I don't like to ask this, but what exactly is your problem now? It is not that difficult to attach shaders to the fixed-function pipeline; all you need to do is reimplement the tiny bit of functionality that your vertex and fragment shaders replace. You can use built-in GLSL variables like gl_ModelViewMatrix to access values that have been set up by your legacy OpenGL code; you can find some of them here: http://www.hamed3d.org/Res/GLSL_Reference/0321552628_AppI.pdf
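As a hedged illustration of what that can look like (names like videoTex are made up for the example): you can link a program that contains only a fragment shader, and the fixed-function vertex stage, including your existing projection and modelview setup, keeps working; the shader only replaces the fragment texturing step. The swizzle below assumes the ARGB bytes land in the texture's r,g,b,a channels in that order, so verify it against your actual camera output:

    /* Fragment-only program: fixed-function vertex processing still runs,
       so the legacy matrix stack and texcoord setup keep working. */
    static const char *fsSrc =
        "uniform sampler2D videoTex;\n"
        "void main() {\n"
        "    vec4 texel = texture2D(videoTex, gl_TexCoord[0].st);\n"
        "    gl_FragColor = texel.gbar; // reorder ARGB bytes to RGBA\n"
        "}\n";

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fsSrc, NULL);
    glCompileShader(fs);                 /* check the compile log in real code */

    GLuint prog = glCreateProgram();
    glAttachShader(prog, fs);
    glLinkProgram(prog);                 /* check the link log as well */

    /* At draw time, only around the video quad: */
    glUseProgram(prog);
    glUniform1i(glGetUniformLocation(prog, "videoTex"), 0);  /* texture unit 0 */
    /* ... draw the textured quad exactly as before ... */
    glUseProgram(0);                     /* everything else stays fixed-function */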

OpenVG and WebGL

Is there a JavaScript implementation of the OpenVG standard based on WebGL?
I'm well aware that we can render vector graphics in the browser; I'm just curious whether anyone has actually managed to render SVG with WebGL, with or without basing it on the OpenVG standard. If it doesn't exist, would it be useful to start a project?
Here's some C code for parsing and rendering SVG in OpenGL 1.x: https://github.com/tnovelli/vsprite. It might not be quite what you're looking for. We used a Stencil Buffer trick to draw 2D polygons. For 3D, I guess you'd have to render into an off-screen buffer and texture-map it onto a 3D object. (Why not pre-render SVG into raster images? Because our objects are breakable and deformable.)
I'm thinking about porting this to JavaScript+WebGL. The browser's XML/SVG/CSS parsing features should eliminate a lot of the work, but the stencil trick could pose a challenge. This is a back-burner project for me, so if anyone else wants to try something, don't hold your breath, just do it! :)
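For readers unfamiliar with it, the stencil trick mentioned above is presumably the classic odd-even polygon fill; a rough OpenGL 1.x sketch (not taken from the vsprite source; verts, numVerts and the bounding box values are assumed, and the context needs a stencil buffer):

    /* Pass 1: toggle stencil bits with a triangle fan over the polygon;
       no color output, so concave/self-overlapping regions cancel out. */
    glClear(GL_STENCIL_BUFFER_BIT);
    glEnable(GL_STENCIL_TEST);
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glStencilFunc(GL_ALWAYS, 0, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_INVERT);
    glBegin(GL_TRIANGLE_FAN);
    for (int i = 0; i < numVerts; ++i)
        glVertex2f(verts[2*i], verts[2*i + 1]);
    glEnd();

    /* Pass 2: draw a bounding rectangle wherever coverage ended up odd. */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glStencilFunc(GL_EQUAL, 1, 1);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glRectf(minX, minY, maxX, maxY);   /* bounding box of the polygon */

    glDisable(GL_STENCIL_TEST);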
Well, none of these answers answers the question explicitly. @tom shows that, yes, we can render vector graphics on the WebGL canvas using a neat stencil trick. This isn't, however, a full implementation of the OpenVG specification, and I am curious how much of the OpenVG spec could be implemented this way. So, to conclude:
There are no implementations currently, but there doesn't appear to be a particular demand for it.
As WebGL mixes just fine with any other HTML(5) technique, you can already mix them. The only thing is that you won't be able to mix SVG with WebGL using its depth buffer - for example, a logotype (SVG) placed inside a 3D world (WebGL). But maybe that's just what you'd like to do?
