Rendering OpenGL just once rather than every frame - opengl-es

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests, and it seems to work fine to render (using drawArrays etc.) and present the render buffer (these two actions together) just once, and then do neither again until something on screen changes.
Is this "normal"? I just don't see it done much. Once drawn, the graphics stay on the screen without constant additional rendering.
Is this acceptable?

Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example: Android's standard OpenGL helper classes offer an option to draw only when needed rather than in a loop (GLSurfaceView's RENDERMODE_WHEN_DIRTY).
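On iOS, the analogous pattern with GLKit is to use a plain GLKView without a GLKViewController, so there is no per-frame animation timer; the view draws once and then only when you ask it to. A minimal sketch in Swift (the class and setup names are illustrative, not from the question):

import GLKit
import OpenGLES

class SceneRenderer: NSObject, GLKViewDelegate {
    // Called only when the view actually needs to redraw.
    func glkView(_ view: GLKView, drawIn rect: CGRect) {
        glClearColor(0.1, 0.1, 0.1, 1.0)
        glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
        // ... glDrawArrays etc.; GLKView presents the render buffer for you.
    }
}

// Setup, e.g. in a view controller:
let context = EAGLContext(api: .openGLES2)!
let glView = GLKView(frame: CGRect(x: 0, y: 0, width: 320, height: 480), context: context)
let renderer = SceneRenderer()   // keep a strong reference; the delegate is weak
glView.delegate = renderer

// Later, whenever something on screen actually changes:
glView.setNeedsDisplay()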

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic / algorithm is used when setting and getting images in and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the result of a previous rendering step, blend it with something new, and store that result again in the accumulator.
Apple's main use case is interactive painting: you retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it in Apple's Core Image documentation.
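A minimal sketch of that loop (the canvas size, the stroke image, and the choice of CISourceOverCompositing are assumptions for illustration):

import CoreImage

// The accumulator keeps its contents alive between rendering steps,
// so each new stroke builds on the previous result.
let canvasRect = CGRect(x: 0, y: 0, width: 1024, height: 768)
let accumulator = CIImageAccumulator(extent: canvasRect, format: kCIFormatRGBA8)!

func add(stroke strokeImage: CIImage) {
    // Composite the new stroke over whatever has accumulated so far...
    let compose = CIFilter(name: "CISourceOverCompositing")!
    compose.setValue(strokeImage, forKey: kCIInputImageKey)
    compose.setValue(accumulator.image(), forKey: kCIInputBackgroundImageKey)
    // ...and store the result back so the next stroke sees it.
    accumulator.setImage(compose.outputImage!, dirtyRect: strokeImage.extent)
    // Display accumulator.image() to show the updated canvas.
}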

Rendering to CAMetalLayer from dedicated render thread / loop

In Windows World, a dedicated render thread would loop something similar to this:
void RenderThread()
{
    while (!quit)
    {
        UpdateStates();
        RenderToDirect3D();
        // Can either present with no synchronisation,
        // or synchronise after 1-4 vertical blanks.
        // See docs for IDXGISwapChain::Present
        PresentToSwapChain();
    }
}
What is the equivalent in Cocoa with CAMetalLayer? All the examples deal with updates being done on the main thread, either using MTKView (with its internal timer) or using CADisplayLink in the iOS examples.
I want to be in control of the whole render loop, rather than just receiving a callback at some non-specified interval (and ideally blocking for V-Sync if it's enabled).
At some level, you're going to be throttled by the availability of drawables. A CAMetalLayer has a fixed pool of drawables available, and calling nextDrawable will block the current thread until a drawable becomes available. This doesn't imply you have to call nextDrawable at the top of your render loop, though.
If you want to draw on your own schedule without getting blocked waiting on a drawable, render to an off-screen texture (an MTLTexture with dimensions matching your drawable size), and then blit from the most recently drawn texture to a drawable's texture and present on whatever cadence you prefer. This can be useful for getting frame timings, but every frame you draw and then don't display is wasted work. It also increases the risk of judder.
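A rough sketch of that blit step, assuming an offscreenTexture you render into elsewhere, with size and pixel format matching the drawable:

import Metal
import QuartzCore

func presentLatest(offscreenTexture: MTLTexture,
                   layer: CAMetalLayer,
                   queue: MTLCommandQueue) {
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    // Copy the most recently drawn frame into the drawable's texture.
    blit.copy(from: offscreenTexture,
              sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: MTLOrigin(x: 0, y: 0, z: 0),
              sourceSize: MTLSize(width: offscreenTexture.width,
                                  height: offscreenTexture.height,
                                  depth: 1),
              to: drawable.texture,
              destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: MTLOrigin(x: 0, y: 0, z: 0))
    blit.endEncoding()

    commandBuffer.present(drawable)
    commandBuffer.commit()
}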
Your options are limited when it comes to getting callbacks that match the v-sync cadence. Your best bet is almost certainly a CVDisplayLink scheduled in the default and tracking run-loop modes, though this has caveats.
You could use something like a counting semaphore in concert with a display link if you want to free-run without getting too far ahead.
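Sketched in Swift on macOS (signaling the semaphore from the display link callback is one way to do it; the initial count of 3 matching the default drawable pool is an assumption):

import CoreVideo
import Dispatch

// Let the render thread free-run, but never get more than
// `inflight` frames ahead of the display.
let inflight = 3
let frameSemaphore = DispatchSemaphore(value: inflight)

var displayLink: CVDisplayLink?
CVDisplayLinkCreateWithActiveCGDisplays(&displayLink)
CVDisplayLinkSetOutputHandler(displayLink!) { _, _, _, _, _ in
    frameSemaphore.signal()   // one vertical refresh has passed
    return kCVReturnSuccess
}
CVDisplayLinkStart(displayLink!)

// In the render loop:
//   frameSemaphore.wait()    // blocks only if we are too far ahead
//   updateStates()
//   renderAndPresent()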
If your application is able to maintain a real-time framerate, you'll normally be rendering a frame or two ahead of what's going on the glass, so you don't want to literally block on v-sync; you just want to inform the window server that you'd like presentation to match v-sync. On macOS, you do this by setting the layer's displaySyncEnabled to true (the default). Turning this off may cause tearing on certain displays.
At the point where you want to render to screen, you obtain the drawable from the layer by calling nextDrawable. You obtain the drawable's texture from its texture property. You use that texture to set up the render target (color attachment) of a MTLRenderPassDescriptor. For example:
id<CAMetalDrawable> drawable = [layer nextDrawable];
id<MTLTexture> texture = drawable.texture;
MTLRenderPassDescriptor *desc = [MTLRenderPassDescriptor renderPassDescriptor];
desc.colorAttachments[0].texture = texture;
From here, it's pretty similar to what you do in an MTKView's drawRect: method. You create a command buffer (if you don't already have one), create a render command encoder using the descriptor, encode drawing commands, end encoding, tell the command buffer to present the drawable (using a -presentDrawable:... method), and commit the command buffer. Whatever was drawn to the drawable's texture is what will end up on-screen when it's presented.
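Put together in Swift, one frame of that sequence looks roughly like this (commandQueue is assumed to have been created once at startup):

import Metal
import QuartzCore

func drawFrame(layer: CAMetalLayer, commandQueue: MTLCommandQueue) {
    // nextDrawable() blocks until the layer has a free drawable.
    guard let drawable = layer.nextDrawable(),
          let commandBuffer = commandQueue.makeCommandBuffer() else { return }

    let desc = MTLRenderPassDescriptor()
    desc.colorAttachments[0].texture = drawable.texture
    desc.colorAttachments[0].loadAction = .clear
    desc.colorAttachments[0].storeAction = .store

    guard let encoder = commandBuffer.makeRenderCommandEncoder(descriptor: desc) else { return }
    // ... encode draw calls here ...
    encoder.endEncoding()

    // Schedule presentation of this drawable, then submit the work.
    commandBuffer.present(drawable)
    commandBuffer.commit()
}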
I agree with Warren that you probably don't really want to sync your loop with the display refresh. You want parallelism. You want the CPU to be working on the next frame while the GPU is rendering the most current frame (and the display is showing the last frame).
The fact that there's a limit on how many drawables may be in flight at once and that nextDrawable will block waiting for one will prevent your render loop from getting too far ahead. (You'll probably use some other synchronization before that, like for managing a small pool of buffers.) If you want only double-buffering and not triple-buffering, you can set the layer's maximumDrawableCount to 2 instead of its default value of 3.
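For example, a sketch of the layer configuration described above:

import QuartzCore

let layer = CAMetalLayer()
layer.displaySyncEnabled = true     // match presentation to v-sync (macOS only)
layer.maximumDrawableCount = 2      // double-buffering instead of the default 3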

Anything wrong using display lists in OpenGL for a single object?

First of all, I know that display lists were deprecated in OpenGL 3.0 and removed in 3.1. But I still have to use them for this university project, for which OpenGL 2.1 is being used.
Every article or tutorial I read on display lists use them on some kind of object that is drawn multiple times, for instance, trees. Is there anything wrong in creating one list for a single object?
For instance, I have an object (in .obj format) for a building. This specific building is only drawn once. For basic performance analysis I have the current frames per second on the window title bar.
Doing it like this:
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
I get around 260 FPS. If you don't know about the GLM library, the glmDraw function basically makes a bunch of glVertex calls.
But doing it like this:
glNewList(gameDisplayLists.building, GL_COMPILE);
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
glEndList();
glCallList(gameDisplayLists.building);
I get around 420 FPS. Of course, the screen doesn't actually refresh that fast, but as I said, it's just a simple and basic way to measure performance.
It looks much better to me.
I'm also using display lists for when I have some type of object that I repeat many times, like defense towers. Again, is there anything wrong doing this for a single object or can I keep doing this?
Using display lists for this stuff is perfectly fine, and before vertex arrays were around they were the de facto standard for placing geometry in fast memory. But display lists have a few interesting pitfalls. For example, in OpenGL-1.0 there were no texture objects; instead, placing the glTexImage call in a display list was the way to emulate them. This behaviour still prevails, so binding a texture and calling glTexImage inside a display list will effectively re-initialize the texture object with whatever was done during display list compilation, every time the display list is called. On the other hand, display lists don't work with vertex arrays.
TL;DR: If your geometry is static, you don't expect to transition to OpenGL-3 core anytime soon, and it gives you a huge performance increase (like it did for you), then just use them!

Smooth mouseover images inside scaled Flex App?

I have a Flex app I am scaling using systemManager.stage.scaleMode = StageScaleMode.NO_BORDER. For the most part it works well, except for my bitmap data (mostly PNGs from the designers).
I can set the mx:Image tags to smoothBitmapContent=true, and that works great for everything except my mouseover objects. On mouseover, the source is changed from one embedded image to another. I have tried several (many) online "smoothimage" classes, tried to write my own, and tried to reset smoothBitmapContent at every chance I get, but still no dice. It seems to me that because I am scaling at the app level, the swapped-in bitmap is not getting smoothed when it renders.
How to keep things smooth? Perhaps there is a flag to tell Flex to smooth stuff when it scales it?
So the easy answer (that works for me) was, instead of changing the image.source from one embedded PNG to another, to use two images and flip-flop image.visible between the two. Granted, that adds two extra objects to the screen, but for some reason everything stays smoothed and scaled that way, whereas switching sources went from smooth to jagged and unreadable.
Josh

Qt Animation: Appearing & Disappearing Objects

I'm writing a video annotation application with Qt4 in which users need to be able to seek to various points in a video, put markers on various objects, and then set keypoints for those markers so that they stay on the objects in the video as they move around. QGraphicsItemAnimation seems like a great place to start for these markers; however, they need to be able to appear and disappear at specific times, which I can't figure out how to do with QGraphicsItemAnimation. I could set the scale to 0 to make the objects disappear, but that seems like a pretty hacky solution, and I'm guessing that the paint engine would still waste CPU cycles trying to draw those invisible objects. Does anyone have a better solution than this? I'm using Qt 4.5.3 right now, but I'm willing to upgrade to 4.6 if it makes things easier. Thanks!!
It seems like the functionality you want (showing and hiding QGraphicsItem objects) is beyond the scope of the simple "tweening" that the animation class performs. It animates only one object at a time, and any appearance or disappearance you have to implement yourself.
You still might get some mileage out of QGraphicsItemAnimation (although the fact that it uses its own timer instead of being locked to the frame clock of your video is a little dodgy).
Neglecting "seeking" for a moment, there is a QTimeLine::finished() signal. If you let the end of an annotation's active animation timeline represent the point where you want it to disappear, you can trigger QGraphicsItem::hide() at that point. When it comes time to turn it back on, you would construct a new QGraphicsItemAnimation (based on the next run of keyframe data for that object) and call QGraphicsItem::show().
Note that one of the headlining features of Qt 4.6 is the QtAnimation framework, which is more sophisticated but also rather complex. I've not used it yet, but looking over the examples it seems like you might be able to "animate" a visibility or opacity property.
