When should glDeleteBuffers, glDeleteShader, and glDeleteProgram be used in OpenGL ES 2? - opengl-es

While working with VBOs in OpenGL ES 2, I came across glDeleteBuffers, glDeleteShader, and glDeleteProgram. I looked around on the web but I couldn't find any good answers to when these methods are supposed to be called. Are these calls even necessary or does the computer automatically delete the objects on its own? Any answers are appreciated, thanks.

Every glGen* call should be paired with the appropriate glDelete* call which is called when you are finished with the resource.
The computer will not delete the objects on its own while your application is still running because it doesn't know whether you plan to re-use them later. If you are creating new objects throughout the life of your application and failing to delete old ones, then that's a resource leak which will eventually cause a shutdown of your application due to excessive memory usage.
The computer will delete objects for you when the application terminates, so there's no real benefit to deleting objects that are required for the entire lifetime of your application, but it is generally considered good practice to clean up properly and avoid leaks.
You can call the glDelete* functions as soon as you are finished with the object (e.g. as soon as you've made your last draw call that uses it). You do not need to worry about whether the object might still be in the GPU's queues or pipelines, that is the OpenGL driver's problem.
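To make the pairing concrete, here is a minimal sketch of the usual lifecycle for a VBO and a shader program. It is only an illustration: the vertex data is made up, and the shader compilation and draw code are elided.
// Sketch of the create/use/delete pairing (error checking omitted).
GLfloat vertices[] = { 0.0f, 0.5f, -0.5f, -0.5f, 0.5f, -0.5f };

GLuint vbo = 0;
glGenBuffers(1, &vbo);                                   // create
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);

GLuint vs = glCreateShader(GL_VERTEX_SHADER);            // create
GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
// ... glShaderSource / glCompileShader for both shaders ...
GLuint program = glCreateProgram();                      // create
glAttachShader(program, vs);
glAttachShader(program, fs);
glLinkProgram(program);

// Once the program is linked, the shader objects themselves are no longer
// needed; flag them for deletion right away.
glDeleteShader(vs);
glDeleteShader(fs);

// ... per frame: glUseProgram(program), bind vbo, issue draw calls ...

// When you are finished with them for good (e.g. the mesh is unloaded or the
// application is shutting down):
glDeleteBuffers(1, &vbo);
glDeleteProgram(program);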

Related

Deleting WebGL contexts

I have a page that uses many, many WebGL contexts, one per canvas. Canvases can be reloaded, resized, etc., each time creating new contexts. It works for several reloads, but eventually, when I try to create a new context, it returns a null value. I assume that I'm running out of memory.
I would like to be able to delete the contexts that I'm no longer using so I can recover the memory and use it for my new contexts. Is there any way to do this? Or is there a better way to handle many canvases?
Thanks.
This is a long-standing bug in Chrome and WebKit:
http://code.google.com/p/chromium/issues/detail?id=124238
There is no way to "delete" a context in WebGL. Contexts are deleted by garbage collection whenever the system gets around to it. All resources will be freed at that point but it's best if you delete your own resources rather than waiting for the browser to delete them.
As I said this is a bug. It will eventually be fixed but I have no ETA.
I would suggest not deleting canvases. Keep them around and re-use them.
Another suggestion would be to tell us why you need 200 canvases. Maybe the problem you are trying to solve would be better solved a different way.
I would assume that until you release all the resources attached to your context, something will still hold references to it, and therefore it will still exist.
A few things to try:
Here is some debug gl code. There is a function there to reset a context to its initial state. Try that before deleting the canvas it belongs to.
It's possible that some event system holds a reference to your contexts, keeping them in a zombie state.
Are you deleting your canvases from the DOM? I'm sure there is a limit on the resources a page can maintain at once.
SO was complaining that our comment thread was getting a bit long. Try these things first and let me know if they help.

Switch OpenGL contexts or switch context render target instead, what is preferable?

On MacOS X, you can render OpenGL to any NSView object of your choice, simply by creating an NSOpenGLContext and then calling -setView: on it. However, you can only associate one view with a single OpenGL context at any time. My question is, if I want to render OpenGL to two different views within a single window (or possibly within two different windows), I have two options:
Create one context and always change the view, by calling setView as appropriate each time I want to render to the other view. This will even work if the views are within different windows or on different screens.
Create two NSOpenGLContext objects and associate one view with each. These two contexts could be shared, which means most resources (like textures, buffers, etc.) would be available in both views without wasting twice the memory. In that case, though, I have to keep switching the current context each time I want to render to the other view, by calling -makeCurrentContext on the right context before making any OpenGL calls.
I have in fact used both options in the past, and each of them worked okay for my needs. However, I asked myself which way is better in terms of performance, compatibility, and so on. I read that context switching is actually horribly slow, or at least that it used to be very slow in the past; that might have changed meanwhile. It may depend on how much data is associated with a context (e.g. resources), since switching the active context might cause data to be transferred between system memory and GPU memory.
On the other hand, switching the view could be very slow as well, especially if it causes the underlying renderer to change; e.g. if your two views are part of two different windows located on two different screens that are driven by two different graphics adapters. Even if the renderer does not change, I have no idea whether the system performs a lot of expensive OpenGL setup/clean-up when switching a view, like creating/destroying render-/framebuffer objects, for example.
I investigated context switching between 3 windows on Lion, where I tried to resolve some performance issues with a somewhat misused VTK library, which itself is terribly slow already.
Whether you switch render contexts or windows doesn't really matter, because there is always the overhead of making both of them current to the calling thread as a triple. I measured roughly 50 ms per switch, where some OS/window-manager overhead factors in as well. This overhead also depends greatly on the arrangement of the other GL calls, because the driver could be forced to wait for commands to finish, which can be triggered manually by a blocking call to glFinish().
The most efficient setup I got working is similar to your second option, but with two dedicated render threads, each having its (shared) render context and window permanently bound. The aforesaid context switches/bindings are done just once at init.
The threads can be controlled using threading primitives such as a common barrier, which lets both threads render single frames in sync (both get stalled at the barrier before they can be launched again). Data handling must also be interlocked, which can be done in one thread while the render threads are stalled.
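A rough sketch of that arrangement in C++ follows. The Window/GLContext types and the two helper functions are placeholders for the platform-specific binding and drawing code (on macOS they would wrap NSOpenGLContext, -makeCurrentContext and the actual draw calls), and std::barrier requires C++20.
#include <barrier>
#include <functional>
#include <thread>

// Placeholders for the platform-specific parts.
struct Window {};
struct GLContext {};
void makeContextCurrent(Window&, GLContext&) { /* bind context to this thread, once */ }
void renderFrame(Window&)                    { /* issue the GL commands for this window */ }

std::barrier frameBarrier(2);   // one slot per render thread

void renderThread(Window& window, GLContext& context, int frameCount)
{
    makeContextCurrent(window, context);     // bound once at init, never switched again
    for (int frame = 0; frame < frameCount; ++frame)
    {
        frameBarrier.arrive_and_wait();      // both threads start the frame together
        renderFrame(window);
        frameBarrier.arrive_and_wait();      // both frames done; a safe point for data updates
    }
}

int main()
{
    Window windowA, windowB;
    GLContext contextA, contextB;            // in practice, two shared contexts

    std::thread a(renderThread, std::ref(windowA), std::ref(contextA), 100);
    std::thread b(renderThread, std::ref(windowB), std::ref(contextB), 100);
    a.join();
    b.join();
}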

What is the order of destruction of objects in VBScript?

In what order are objects in a .vbs destroyed?
That is, given these globals:
Set x = New Xxx
Set y = New Yyy
I'm interested in answers to any of the following.
For instances of classes implemented in the .VBS, in what order will Class_Terminate be called? Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
EDIT: I understand that Class_Terminate will be called when the last reference to an object is released. What I meant was: in what order will x and y be released, and is it guaranteed? Assume for simplicity that x & y are the only references to their respective objects.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
EDIT: I understand that a COM library may set up its own internal circular references that the script host engine knows nothing about; I'm interested in exploring what could affect the answer to the first question.
Are the answers to the above different if x and y were local to a Sub or Function rather than global?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting, however these may cause an error to be reported).
When is the WScript object destroyed?
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Does JScript's object destruction model differ to VBScript's?
I can find the answers to some of these questions empirically, but I'm interested in whether any of them are guaranteed / documented.
Do post even if you only know some of the answers - or further relevant issues.
I designed and implemented this feature in VBScript.
Most of the answers are in my articles that Mark references, but just to clarify:
in what order will Class_Terminate be called?
Terminators are in general called immediately when the last reference to an object is released. However, due to circular references and other issues, it is generally a very bad idea to rely upon a deterministic order of termination.
Cursory poking suggests in the order (not reverse order!) of creation, but is this guaranteed?
As I noted in my articles, unterminated objects are terminated when the engine is shut down. As an implementation detail, the termination queue is executed in the order that the objects were created in. However, this is an undocumented implementation detail that you should not rely upon.
Does the type of object matter? e.g. if I have classes implemented in the .VBS mixed in with other COM objects such as Scripting.FileSystemObject.
It can. There could be circular references amongst those objects that are torn down at unpredictable times.
I'm thinking of objects at global scope, when the program quits - is it different for objects at e.g. function scope?
I don't understand the question. Can you clarify?
Does it depend on whether the exit is normal, by exception, or via WScript.Quit? (In the latter case, it seems that Class_Terminate is still called on any outstanding objects before exiting, however these may cause an error to be reported).
It can matter, yes. VBScript does not make any guarantee that terminators always run. The host that owns the engine can shut down its process by "failing fast" in a manner that is not guaranteed to cleanly shut down the engine, for example. (In the event of a catastrophic failure, this is sometimes desirable; if you don't know what is wrong then sometimes running termination code makes the problem worse, not better.)
Windows Script Host does attempt to shut down the engine cleanly when Quit is called.
When is the WScript object destroyed?
When the Windows Script Host process termination logic runs.
Does the script host matter? (wscript.exe vs cscript.exe vs. whatever the web host engine is called)
Yes, it can matter.
Does JScript's object destruction model differ to VBScript's?
Yes, very much so.
JScript "Classic" from the period when I worked on it (pre 2001) uses a nondeterministic mark-and-sweep garbage collector which does handle circular references amongst script objects, but does NOT handle circular references between script and browser objects. More recent versions of JScript "Classic" have a modified garbage collector that DOES handle circular references between script and browser objects (though it does not necessarily detect circularities involving JScript objects and third party ActiveX objects.)
The IE 9 version of JScript has a completely rewritten garbage collector that uses very different technology; I have chatted a bit with its designer but I do not have enough technical knowledge to discuss its characteristics in any kind of depth.
JScript .NET of course uses the CLR garbage collector.
Can I ask why you care about all this stuff?
Also, note that I haven't looked at this code in over a decade; take all of this with the appropriate level of skepticism. My memory may be faulty.

Core Data and threading

What are some of the obscure pitfalls of using Core Data and threads? I've read much of the documentation, and so far I've come across the following either in the docs or through painful experience:
Use a new NSManagedObjectContext for each thread, but a single NSPersistentStoreCoordinator is enough for the whole app.
Before sending an NSManagedObject's objectID back to the main thread (or any other thread), be sure the context has been saved (or at a minimum, it wasn't a newly-inserted-but-not-yet-saved object) - otherwise the objectID will actually be a temporary ID and not a persistent one.
Use mergeChangesFromContextDidSaveNotification: to detect when a save happens in another thread and use that to merge those changes with the current thread's context.
Bonus question/observation: I was led to believe by the wording of some of the docs that mergeChangesFromContextDidSaveNotification: is something only needed by the main thread to merge changes into the "main" context from worker threads - but I don't think that's the case.
I set up my importer to create batches of data which are imported using a subclass of NSOperation that owns its own context. The operations are loaded into an NSOperationQueue that's set to allow the default number of concurrent operations, so it's possible for several import batches to be running at the same time. I would occasionally get very strange validation errors and exceptions (like trying to add nil to a relationship) and other failures that I had never seen when I did all the same stuff on the main thread. It occurred to me (and perhaps this should have been obvious) that maybe the context merging needed to be done for all contexts in every thread - not just the "main" one! I don't know why I didn't think of that before, but I think this helped. (It hasn't been tested well enough yet for me to feel sure, though.) In any case, is it true that you need to observe that notification for ALL import threads that may be working with the same datasets and adding/updating the same entities? If so, this is yet another pitfall bullet point, IMO, although I have yet to be certain that it'll work.
Given how many of these I've run into with Core Data in general (and not all of them just about multi-threading), I have to wonder how many more are lurking. Since multi-threading so often ends up with bugs that are difficult if not impossible to reproduce due to the timing issues, I figured I'd ask if anyone had other important things that I may be missing that I need to concern myself with.
There is an entire rather large bit of documentation devoted to the subject of Core Data and Threading.
It isn't clear from your set of issues what isn't covered by that documentation.

Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them?

I am working on a Cocoa application, and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread.
Is it safe to access those objects from the import thread without using locks, even though I created them on the main thread, if I don't explicitly access them from the main thread while the import thread is running?
With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must, at a minimum, lock the managed object context the objects are in, as appropriate. And if those objects are bound to by your views and controls, there are no "hooks" to which you can add that locking of the context.
There's no free lunch.
Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message.
The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use.
I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a CoreData NSManagedObjectContext. In general, CoreData may do many tricky things with the state of managed objects, including firing faults related to those objects on separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObject as of OS X 10.5) does not guarantee that the returned object is safe to pass to another thread.
Using CoreData with multiple threads is well documented on Apple's dev site.
The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it.
Even if it's safe, it's not best practice to share data between threads without synchronizing access to it. It doesn't matter which thread created the object; if more than one line of execution (thread/process) accesses the object at the same time, it can lead to data inconsistency.
If you're absolutely sure that only one thread will ever access this object, then it would be safe not to synchronize the access. Even then, I'd rather put synchronization in my code now than wait until later, when a change in the application puts a second thread on the same data without any concern for synchronizing access.
Yes, it's safe. A pretty common pattern is to create an object, then add it to a queue or some other collection. A second "consumer" thread takes items from the queue and does something with them. Here, you'd need to synchronize the queue but not the objects that are added to the queue.
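A minimal sketch of that pattern in C++ (the Item type and the work done on it are made up for illustration): the mutex and condition variable protect only the queue; once an item has been popped, only the consumer holds a reference to it, so the item itself needs no lock.
#include <condition_variable>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>

struct Item { int payload = 0; };      // placeholder for whatever the producer creates

std::queue<std::unique_ptr<Item>> workQueue;
std::mutex queueMutex;                 // protects the queue only, not the Items
std::condition_variable queueCv;

void producer()
{
    for (int i = 0; i < 10; ++i)
    {
        auto item = std::make_unique<Item>();
        item->payload = i;             // fully initialise before publishing
        {
            std::lock_guard<std::mutex> lock(queueMutex);
            workQueue.push(std::move(item));   // hand ownership to the consumer
        }
        queueCv.notify_one();
    }
}

void consumer()
{
    for (int received = 0; received < 10; ++received)
    {
        std::unique_ptr<Item> item;
        {
            std::unique_lock<std::mutex> lock(queueMutex);
            queueCv.wait(lock, [] { return !workQueue.empty(); });
            item = std::move(workQueue.front());
            workQueue.pop();
        }
        // Only this thread holds a reference now, so no lock is needed on the Item.
        item->payload *= 2;
    }
}

int main()
{
    std::thread p(producer), c(consumer);
    p.join();
    c.join();
}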
It's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects.
Two things to consider are:
You must be able to guarantee that the object is fully created and initialised before it is made available to other threads.
There must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe this will inevitably involve locking of some kind (a small sketch covering both points follows below).
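As a generic C++ illustration of both points (ImportedData and runImport are placeholders for whatever the background load produces), std::async/std::future provide the mechanism by which the main thread detects that the data has been loaded, and the object is only published once it is fully built:
#include <future>
#include <memory>
#include <string>
#include <vector>

// Placeholder for the imported data.
struct ImportedData { std::vector<std::string> records; };

std::unique_ptr<ImportedData> runImport()
{
    auto data = std::make_unique<ImportedData>();
    data->records.push_back("example");   // fully build the object first
    return data;                          // only published once complete
}

int main()
{
    // The future is how the main thread detects that the data is ready;
    // the async machinery does the necessary synchronization for us.
    std::future<std::unique_ptr<ImportedData>> pending =
        std::async(std::launch::async, runImport);

    // ... main thread keeps the GUI responsive; poll with wait_for() from a
    // timer, or block here once the result is actually needed ...
    std::unique_ptr<ImportedData> data = pending.get();

    // From this point, only the main thread holds a reference to the data.
}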
Yes, you can do it; it will be safe
...
until the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in an unsafe way (in the creator thread). The problems caused could be very subtle and difficult to track down. For that reason alone, and because it's so tempting to use this object in multiple threads, I would make the object thread safe.
To clarify, (thanks to those who left comments):
By "thread safe" I mean programatically devising a scheme to avoid threading issues. I don't necessarily mean devise a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting the scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the user thread, making sure that the creator thread no longer has a reference to it.
For example, in C++
void CreateObject()
{
    Object* sharedObj = new Object();
    PassObjectToUsingThread(sharedObj); // this function would be system dependent
}
Then, in your creating thread, you no longer have access to the object after its creation; responsibility has been passed to the using thread.
