Calling the dealloc method in Sprite Kit - memory-management

As a test to ensure that scenes are being dealloc'd, I've been adding the following:
- (void)dealloc {
    NSLog(@"scenename dealloc");
}
I've noticed that sometimes this method isn't called. I had previous issues with retain cycles, which I believe I fixed, but the main issue is: surely if it gets called sometimes, it should be called every time?
I've also heard that using NSLog in this method in the scene causes it to be overridden and therefore not called correctly, resulting in the scene not being dealloc'd. Is this true? Could this be the problem causing the game to crash at present? I do see memory fluctuations (up and down) even with these log messages in place.

If you want to see exactly what objects exist within your game at different points, you can use the Allocations instrument. You can find it under Xcode > Open Developer Tool > Instruments.
Arrange the list by name and look for the name of your project. You should see how many of each of your game objects exist in memory there.
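If the scene still shows up there after it should have been dismissed, one common cause in Sprite Kit is a block-based action that captures self strongly while the scene retains the action. A rough sketch of that cycle and the usual weak-reference fix, assuming an SKScene subclass with a hypothetical -spawnEnemy method:
// Inside an SKScene subclass; -spawnEnemy is a placeholder method.
// The cycle: the scene retains the repeating action, the action's block
// captures self strongly, so the scene can never reach -dealloc.
SKAction *tick = [SKAction runBlock:^{
    [self spawnEnemy];                 // strong capture of self
}];
[self runAction:[SKAction repeatActionForever:tick]];

// Fix: capture a weak reference to the scene instead.
__weak typeof(self) weakSelf = self;
SKAction *safeTick = [SKAction runBlock:^{
    [weakSelf spawnEnemy];             // no strong reference to the scene
}];
[self runAction:[SKAction repeatActionForever:safeTick]];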

As previously suggested by the people above, I had a memory leak, and it was resolved via debugging and Instruments.

Related

How to track/find out which userdata are GC-ed at a certain time?

I've written an app in LuaJIT, using a third-party GUI framework (FFI-based) plus some additional custom FFI calls. The app suddenly loses part of its functionality at some point soon after being run, and I'm quite confident it's because some unpinned objects are being GC-ed. I assume they're only referenced from the C world1, so the Lua GC thinks they're unreferenced and can free them. The problem is, I don't know which of the numerous userdata are unreferenced (unpinned) on the Lua side.
To confirm my theory, I've run the app with GC disabled, via:
collectgarbage 'stop'
and lo, with this line, the app works perfectly well long past the point where it broke before. Obviously, this is an ugly workaround, and I'd much prefer to have the GC enabled and the app still working correctly...
I want to find out which unpinned object (userdata, I assume) gets GC-ed, so I can pin it properly on the Lua side to prevent it from being GC-ed prematurely. Thus, my question is:
(How) can I track which userdata objects got collected when my app loses functionality?
One problem is that, AFAIK, the LuaJIT FFI already assigns custom __gc handlers, so I cannot add my own, as there can be only one per object. And anyway, the framework is too big for me to try adding __gc in each and every imaginable place in it. Also, I've already eliminated the "most obviously suspected" places in the code by removing local from some variables, thus making them part of _G, which I assume makes them not GC-able. (Or is that not enough?)
1 Specifically, WinAPI.
For now, I've added some ffi.gc() handlers to some of my objects (printing easily visible ALL-CAPS messages), then added some eager collectgarbage() calls to try to trigger the issue as soon as possible:
ffi.gc(foo, function()
    print '\n\nGC FOO !!!\n\n'
end)
[...]
collectgarbage()
And indeed, this exposed some GC-ing I didn't expect. Specifically, it led me to discover a note in LuaJIT's FFI docs, which is most certainly relevant in my case:
Please note that [C] pointers [...] are not followed by the garbage collector. So e.g. if you assign a cdata array to a pointer, you must keep the cdata object holding the array alive [in Lua] as long as the pointer is still in use.

Cocoa: Finding the missing reference for deallocating

I'm almost done with an app and I'm using Instruments to analyse it. I'm having a problem with ARC deallocating something, but I don't know what. I run Instruments using the Allocations tool. What I'm doing is starting the app at the main view, then I mark a heap, interact with the app a little, return to the original main view, and mark another heap.
I do this several times, and as I understand it there should not be any significant heap growth, because I am returning to the exact same place; everything I did in between should have been deallocated, leaving no heap growth. However, I have significant growth, so I dive into the heaps and find that almost everything in them has a retain count of 1, which leads me to believe that one object or view, etc., is not being deallocated because of a mistake I've made, and that object is what's holding references to everything else.
What I'm trying to find out is which object is not being deallocated. Instruments is very vague and only offers obscure pointers that do not allow me to trace back the problem.
Please let me know if there is a way for me to trace what is holding a reference that may be keeping the retain count at 1.
Thanks.
My first thoughts are two things:
1) You may have a retain cycle: for example, one object has a strong reference to its delegate, and the delegate also has a strong reference (instead of a weak one) back to the first object. Since both of them "hold" the other, neither of them can ever be released (see the sketch after this list).
2) You may have a multi-threaded app in which one of the threads does not have an autorelease pool assigned (i.e. does not have an @autoreleasepool block) but is creating autoreleased objects. This can happen even in a simple getter method that returns an autoreleased object. If so, the autoreleased object is "put" into a non-existent autorelease pool (which does not give you an error message, since you can send any message to nil), and it is never released.
Maybe one of these cases applies to your problem.
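For illustration, a minimal ARC sketch of both cases; the Widget/WidgetDelegate names and the backgroundWork function are purely illustrative:
#import <Foundation/Foundation.h>

// Case 1: the delegate back-reference is declared weak, so the two
// objects do not retain each other.
@protocol WidgetDelegate <NSObject>
@end

@interface Widget : NSObject
@property (nonatomic, weak) id<WidgetDelegate> delegate;   // weak, not strong
@end

@implementation Widget
@end

// Case 2: work running on a secondary thread wraps itself in
// @autoreleasepool so autoreleased objects actually get drained.
static void backgroundWork(void)
{
    @autoreleasepool {
        NSString *status = [NSString stringWithFormat:@"finished at %@", [NSDate date]];
        NSLog(@"%@", status);
    }   // the pool is drained here
}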

Deleting WebGL contexts

I have a page that is using many, many WebGL contexts, one per canvas. Canvases can be reloaded, resized, etc., each time creating new contexts. It works for several reloads, but eventually, when I try to create a new context, it returns a null value. I assume that I'm running out of memory.
I would like to be able to delete the contexts that I'm no longer using so I can recover the memory and use it for my new contexts. Is there any way to do this? Or is there a better way to handle many canvases?
Thanks.
This is a long-standing bug in Chrome and WebKit:
http://code.google.com/p/chromium/issues/detail?id=124238
There is no way to "delete" a context in WebGL. Contexts are deleted by garbage collection whenever the system gets around to it. All resources will be freed at that point, but it's best if you delete your own resources rather than waiting for the browser to delete them.
As I said, this is a bug. It will eventually be fixed, but I have no ETA.
I would suggest not deleting canvases. Keep them around and re-use them.
Another suggestion would be to tell us why you need 200 canvases. Maybe the problem you are trying to solve would be better solved a different way.
I would assume that until you release all the resources attached to your context, something will still hold references to it, and therefore it will still exist.
A few things to try:
Here is some debug GL code. There is a function there to reset a context to its initial state. Try that before deleting the canvas it belongs to.
It's possible that some event system could hold a reference to your contexts, keeping them in a zombie state.
Are you deleting your canvases from the DOM? I'm sure there is a limit on the resources a page can maintain in one instance.
SO was complaining that our comment thread was getting a bit long. Try these things first and let me know if it helps.

Switch OpenGL contexts or switch the context render target instead: which is preferable?

On Mac OS X, you can render OpenGL to any NSView object of your choice simply by creating an NSOpenGLContext and then calling -setView: on it. However, you can only associate one view with a single OpenGL context at any time. My question is: if I want to render OpenGL to two different views within a single window (or possibly within two different windows), I have two options:
Create one context and always change the view, by calling setView as appropriate each time I want to render to the other view. This will even work if the views are within different windows or on different screens.
Create two NSOpenGLContext objects and associate one view with each. These two contexts could be shared, which means most resources (like textures, buffers, etc.) will be available in both views without using twice the memory. In that case, though, I have to keep switching the current context each time I want to render to the other view, by calling -makeCurrentContext on the right context before making any OpenGL calls (see the sketch at the end of this question).
I have in fact used both options in the past, and each of them worked okay for my needs. However, I asked myself which way is better in terms of performance, compatibility, and so on. I read that context switching is horribly slow, or at least that it used to be very slow in the past; that might have changed in the meantime. It may depend on how much data is associated with a context (e.g. resources), since switching the active context might cause data to be transferred between system memory and GPU memory.
On the other hand, switching the view could be very slow as well, especially if it causes the underlying renderer to change, e.g. if your two views are part of two different windows located on two different screens that are driven by two different graphics adapters. Even if the renderer does not change, I have no idea whether the system performs a lot of expensive OpenGL setup/clean-up when switching a view, like creating/destroying render-/framebuffer objects, for example.
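To make option 2 concrete, here is a rough sketch of what I mean, assuming two existing NSView instances called viewA and viewB (error handling omitted):
// Two contexts that share resources; each stays bound to its own view.
NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFADoubleBuffer, NSOpenGLPFADepthSize, 24, 0 };
NSOpenGLPixelFormat *format = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];

NSOpenGLContext *contextA = [[NSOpenGLContext alloc] initWithFormat:format shareContext:nil];
NSOpenGLContext *contextB = [[NSOpenGLContext alloc] initWithFormat:format shareContext:contextA]; // shares textures, buffers, etc.

[contextA setView:viewA];
[contextB setView:viewB];

// Per frame: switch the current context instead of the view.
[contextA makeCurrentContext];
// ... OpenGL calls for viewA ...
[contextA flushBuffer];

[contextB makeCurrentContext];
// ... OpenGL calls for viewB ...
[contextB flushBuffer];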
I investigated context switching between 3 windows on Lion, where I tried to resolve some performance issues with a somewhat misused VTK library, which itself is terribly slow already.
Whether you switch render contexts or windows doesn't really matter, because there is always the overhead of making both of them current to the calling thread as a triple. I measured roughly 50 ms per switch, where some OS/window-manager overhead comes in as well. This overhead also depends greatly on the arrangement of other GL calls, because the driver could be forced to wait for commands to finish, which can be triggered manually by a blocking call to glFinish().
The most efficient setup I got working is similar to your second option, but it has two dedicated render threads, each with its (shared) render context and window permanently bound. The aforementioned context switches/bindings are done just once at init.
The threads can be controlled using threading primitives such as a common barrier, which lets both threads render single frames in sync (both get stalled at the barrier before they can be launched again). Data handling must also be interlocked, which can be done in one thread while the other render threads are stalled.
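As a rough sketch of one such dedicated render thread (the RenderThread class, its property names, and the semaphore-based barrier here are illustrative, not the exact code from that project): the shared NSOpenGLContext is made current exactly once, and each frame is gated by a pair of semaphores.
#import <Cocoa/Cocoa.h>

@interface RenderThread : NSThread
@property (nonatomic, strong) NSOpenGLContext *context;       // shared context, permanently bound to one view
@property (nonatomic, strong) dispatch_semaphore_t frameStart; // signalled by the controller once per frame
@property (nonatomic, strong) dispatch_semaphore_t frameDone;  // signalled back when this thread's frame is done
@end

@implementation RenderThread
- (void)main {
    [self.context makeCurrentContext];   // one-time bind; no further context switching
    while (!self.cancelled) {
        dispatch_semaphore_wait(self.frameStart, DISPATCH_TIME_FOREVER);
        // ... issue the OpenGL calls for this thread's view ...
        [self.context flushBuffer];
        dispatch_semaphore_signal(self.frameDone);
    }
}
@end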

How to interpret Instruments Leaks results

I could use some help interpreting results from the Instruments Leaks tool. This is for an iPhone app that I am writing, using Xcode 4.2 and Instruments 4.2, if it makes a difference.
After running the tool, I get a list of leaked objects. When I examine the details of one of these objects, it shows what I assume to be a history of what happened to the object. For example, it shows a sequence of malloc, retain, release, retain, release, release. It also shows a resulting reference count of 0. See the picture for a screenshot.
So my question is: why does Leaks think this is a memory leak, and what do I need to do to fix it?
Check the -dealloc method for your Waypoint and make sure it is calling [super dealloc].
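Under manual reference counting, the override should look roughly like this (the _name ivar is just an illustrative example of something the Waypoint might retain); if [super dealloc] is missing, the retain count can reach 0 and your -dealloc will run, yet the object's memory is never actually freed, which matches the pattern Leaks reports:
// Manual reference counting (pre-ARC): release your own ivars, then call super.
- (void)dealloc {
    [_name release];     // illustrative; release whatever the class retains
    [super dealloc];     // without this, the object's storage is never freed
}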
