Anything wrong using display lists in OpenGL for a single object? - performance

First of all, I know that display lists were deprecated in OpenGL 3.0 and removed in 3.1. But I still have to use them on this university project, for which OpenGL 2.1 is being used.
Every article or tutorial I read on display lists uses them for some kind of object that is drawn multiple times, for instance trees. Is there anything wrong with creating one list for a single object?
For instance, I have an object (in .obj format) for a building. This specific building is only drawn once. For basic performance analysis I show the current frames per second in the window title bar.
Doing it like this:
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
I get around 260 FPS. If you don't know the GLM library, the glmDraw function basically makes a bunch of glVertex calls.
But doing it like this:
glNewList(gameDisplayLists.building, GL_COMPILE);
glmDraw(objBuilding.model, GLM_SMOOTH | GLM_TEXTURE);
glEndList();
glCallList(gameDisplayLists.building);
I get around 420 FPS. Of course, the screen doesn't actually refresh that fast, but like I said, it's just a simple, basic way to measure performance.
It looks much better to me.
I'm also using display lists when I have some type of object that I repeat many times, like defense towers. Again, is there anything wrong with doing this for a single object, or can I keep doing it?

Using display lists for this stuff is perfectly fine, and before vertex arrays were around they were the de facto standard for placing geometry in fast memory. But display lists have a few interesting pitfalls. For example, OpenGL 1.0 had no texture objects; instead, placing the glTexImage call in a display list was the way to emulate them. This behaviour still prevails, so binding a texture and calling glTexImage inside a display list will effectively re-initialize the texture object with whatever was recorded during display list compilation, every time the list is called. On the other hand, display lists don't work together with vertex arrays.
TL;DR: If your geometry is static, you don't expect to transition to OpenGL 3 core anytime soon, and it gives you a huge performance increase (like it did for you), then just use them!
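For reference, a minimal sketch of the usual pattern: compile the list once at load time, replay it every frame, and keep texture binding (and any glTexImage uploads) outside the list to avoid the pitfall described above. The names drawBuildingGeometry, buildingTexture, initBuildingList and drawBuilding are placeholders, not from the question:
#include <GL/gl.h>

// Placeholder declarations for illustration; substitute your own model drawing
// function (e.g. a glmDraw call) and texture handle.
extern void drawBuildingGeometry(void);
extern GLuint buildingTexture;

static GLuint buildingList = 0;

// Call once, after the GL context exists.
void initBuildingList(void)
{
    buildingList = glGenLists(1);
    glNewList(buildingList, GL_COMPILE);   // record the commands, don't execute them yet
    drawBuildingGeometry();
    glEndList();
}

// Call every frame.
void drawBuilding(void)
{
    // Texture setup stays outside the list.
    glBindTexture(GL_TEXTURE_2D, buildingTexture);
    glCallList(buildingList);              // replay the compiled geometry
}
The important part is that glNewList/glEndList run only once at load time; recompiling the list every frame would throw away most of the gain.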

Related

What does CIImageAccumulator do?

Problem
The Apple documentation states when the CIImageAccumulator can be used, but unfortunately it does not say what it actually does.
The CIImageAccumulator class enables feedback-based image processing for such things as iterative painting operations or fluid dynamics simulations. You use CIImageAccumulator objects in conjunction with other Core Image classes, such as CIFilter, CIImage, CIVector, and CIContext, to take advantage of the built-in Core Image filters when processing images.
I have to fix code that used a CIImageAccumulator. It seems to me that all it is meant to do, despite its name, is return a CIImage with all CIFilters applied to the image. Adding the first image, however, darkens the output. That is not what I would expect from an accumulator, nor from any other operator that enables feedback-based image processing.
Question
Can anyone explain what logic / algorithm is used when setting and getting images into and out of the CIImageAccumulator?
The biggest advantage of the CIImageAccumulator is that it stores its contents between different rendering steps (in contrast to CIFilter or CIImage). This allows you to take the state of a previous rendering step, blend it with something new, and store that result again in the accumulator.
Apple's main use case is interactive painting: You retrieve the current image from the accumulator, blend a new stroke the user just painted with a gesture on top of it, and store the resulting image back into the accumulator. Then you display the content of the accumulator. You can read about it here.

Monogame Extended Tiled

I'm making an isometric city builder using MonoGame.Extended and Tiled. I've got everything set up and now I need to somehow access specific tiles so I can change them at runtime as the user clicks on a tile to build an object. The problem is, I can't seem to find a map.GetLayer("Layername").GetTile(x,y) or .SetTile(x,y) function, or something similar.
What I can do is edit the XML (.tmx) file, which has a matrix in it that represents the map and its drawn tiles. The problem with this is that I need to build the map in the content pipeline again after editing for the changes to be displayed. I can't really build at runtime, or can I?
Thanks in advance!
Something like this will get you part way there.
var tileLayer = map.GetLayer<TiledMapTileLayer>("layername");
TiledMapTile tile;
if(tileLayer.TryGetTile(x, y, out tile))
{
// do something with tile
}
However, there's only a limited amount of things you can actually do with the tile once you've got it from the map.
There's no such thing as a SetTile method because changing tile data at runtime is not currently supported. This is a limitation of the renderer, which has been optimized for rendering very large maps by building static geometry that can't be changed once it's loaded into the graphics card.
There has been some discussion about building another renderer that would handle dynamic map changes but at this stage nothing like that has been implemented in the library. You could always have a go at implementing a simple renderer yourself, a really basic one is not as hard as you might think.
An alternative approach to dealing with this kind of problem might be to pre-process the map data before giving it to the renderer. The idea would be to effectively separate the layers of the map that are static from those that are dynamic and render the dynamic tiles as normal sprites. Just a thought, I'm not sure about the details of how this might work.
I plan to eventually revisit the Tiled API in the next major version of MonoGame.Extended. Don't hold your breath, these things can take a lot of time, but I am paying attention to the feedback and kinds of problems people are experiencing with the existing API.
Since the map data is stored in an XML (or CSV) file which runs through the Content Pipeline, you cannot change it at runtime.
Anyways, in a city builder you usually do not change existing tiles; you place objects on top of existing tiles.

Rendering OpenGL just once rather than every frame

Nearly every example I see of OpenGL ES involves it updating every frame, even if the image itself is not moving in any way.
I did some tests and found it works quite fine to just render (using drawArrays etc.) and then present the render buffer (these two actions together) only once, and then not do either again until something on screen changes.
Is this "normal"? I just don't see this done much. Once drawn, the graphics stay on the screen without constant additional rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You also need to take into account that you must render again when the context is lost. To give you an example, Android's standard OpenGL helper classes have an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).
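The same on-demand idea can be sketched with desktop OpenGL and GLFW, purely as an illustration (GLFW is not part of the question's OpenGL ES setup): block on events and redraw only when a dirty flag is set, e.g. when the window system invalidates the contents.
#include <GLFW/glfw3.h>

static bool needsRedraw = true;   // draw at least once

// The window system reports that the contents were invalidated (expose, resize, ...),
// which is the desktop analogue of "render again when the content is lost".
static void onRefresh(GLFWwindow*) { needsRedraw = true; }

int main()
{
    if (!glfwInit()) return 1;
    GLFWwindow* window = glfwCreateWindow(640, 480, "on-demand rendering", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);
    glfwSetWindowRefreshCallback(window, onRefresh);

    while (!glfwWindowShouldClose(window)) {
        if (needsRedraw) {
            glClear(GL_COLOR_BUFFER_BIT);
            // ... draw the static scene here ...
            glfwSwapBuffers(window);
            needsRedraw = false;
        }
        glfwWaitEvents();   // block until an event arrives instead of spinning every frame
    }
    glfwTerminate();
    return 0;
}
On Android the equivalent is setRenderMode(RENDERMODE_WHEN_DIRTY) on the GLSurfaceView plus a call to requestRender() whenever something changes.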

Binding a collection of data to objects (not using a ListBox!) in WP7/Silverlight

I have an observable collection of objects that I'd like to display on the screen, but not in a listbox format. For the sake of example, let's say that they're an observable collection of planets and I'd like to display them on the screen as they appear in the sky. Is this something I can do neatly in Silverlight binding? At the moment I'm thinking of just looping through my planet collection and creating an ellipse object for each, but it would be great if I could do this via data binding instead.
Hope this makes sense!
This is possible and there is a great example from Bea Stollnitz that demonstrates the power of XAML and styling. It's an old post (5 years! wow) but still well worth the read, as are her other blog posts:
http://bea.stollnitz.com/blog/?p=40

Qt Animation: Appearing & Disappearing Objects

I'm writing a video annotation application with Qt4 in which users need to be able to seek to various points in a video, putting markers on various objects and then setting keypoints for those markers so that they stay on the objects in the video as they move around. QGraphicsItemAnimation seems like a great place to start for these markers, however they need to be able to appear and disappear at specific times, which I can't figure out how to do with the QGraphicsItemAnimation. I could set the scale at 0 to make the objects disappear, but that seems like a pretty hacky solution, and I'm guessing that the paint engine would still waste cpu cycles trying to draw those invisible objects. Does anyone have a better solution than this? I'm using Qt 4.5.3 right now, but I'm willing to upgrade to 4.6 if it makes things easier. Thanks!!
It seems like the functionality you want of showing/hiding QGraphicsItem objects is beyond the scope of the simple "tweening" that the animation class performs. It only handles one object at a time, and any appearance or disappearance you have to write yourself.
You still might get some mileage out of QGraphicsItemAnimation (although the fact that it uses its own timer instead of being locked to the frame clock of your video is a little dodgy).
Neglecting "seeking" for a moment, there is a QTimeLine::finished() signal. If you let the end of an annotation's active animation timeline represent the point where you want it to disappear, you can trigger QGraphicsItem::hide() at that point. When it comes time to turn it back on, you would construct a new QGraphicsItemAnimation (based on the next run of keyframe data for that object) and call QGraphicsItem::show().
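A rough Qt 4 sketch of that wiring (MarkerController, hideMarker and m_marker are made-up names, not part of Qt; since QGraphicsItem is not a QObject in Qt 4.5, the hide() call goes through a small QObject helper):
#include <QObject>
#include <QTimeLine>
#include <QGraphicsItem>

// Hides a marker item when its animation timeline finishes.
class MarkerController : public QObject
{
    Q_OBJECT
public:
    MarkerController(QGraphicsItem *marker, QTimeLine *timeLine, QObject *parent = 0)
        : QObject(parent), m_marker(marker)
    {
        connect(timeLine, SIGNAL(finished()), this, SLOT(hideMarker()));
    }

private slots:
    // When the object should reappear, build a new QGraphicsItemAnimation for the
    // next run of keyframes and call m_marker->show() again.
    void hideMarker() { m_marker->hide(); }

private:
    QGraphicsItem *m_marker;
};
Note that connect() here uses the Qt 4 SIGNAL/SLOT macros, so the class needs to be processed by moc.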
Note that one of the headlining features of Qt 4.6 is the QtAnimation framework, which is more sophisticated but also rather complex. I've not used it yet, but looking over the examples it seems like you might be able to "animate" a visibility or opacity property.

Resources