What type of reference does NSNotificationCenter keep for 'observer' and 'object'?

Can anyone clarify/elucidate the situation with respect to -[NSNotificationCenter addObserver:selector:name:object:]?
What types of references are kept by the notification center of the 'observer' and 'object' arguments?
What are the best practices for removing observers from the notification center?
What are the special concerns for multi-threaded applications, especially with respect to the 'object' argument?
What are the differences in behavior of this method in GC and non-GC environments?
Are there any significant differences (from a client perspective) between mobile and desktop environments in this method's behavior?
Also, any pointers to existing articles which cover this would be greatly appreciated. I Googled, but was surprised to find little in-depth discussion of these issues (although maybe I didn't use the right magic keywords).

What types of references are kept by the notification center of the 'observer' and 'object' arguments?
An unretained (weak) reference to both, I believe; the center doesn't retain either argument, though that's just from memory (no pun intended).
What are the best practices for removing observers from the notification center?
Always remove the registered object from the notification center before they're released. The object's dealloc method is a good place for this if it set up the registration itself, or when you release it if another object is managing the notification subscriptions. Keep this in mind and the above won't matter.
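For example, a minimal sketch (the notification name and the handler selector here are made up):
- (instancetype)init {
    if ((self = [super init])) {
        // Register for a (hypothetical) notification; the center will not retain us.
        [[NSNotificationCenter defaultCenter] addObserver:self
                                                 selector:@selector(settingsDidChange:)
                                                     name:@"MyAppSettingsDidChangeNotification"
                                                   object:nil];
    }
    return self;
}

- (void)dealloc {
    // Unregister before the object disappears, so the center never messages a freed observer.
    [[NSNotificationCenter defaultCenter] removeObserver:self];
    [super dealloc]; // omit this call under ARC
}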
What are the special concerns for multi-threaded applications, especially WRT the 'object' argument?
NSNotificationCenter works fine on threads, but if you send a notification from a background thread, the object will receive it on that same thread. Because of this behavior you should use a different approach if you're updating the UI or doing anything else that's not thread safe (or, dispatch the notification from another method on the main thread).
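Two common ways to handle that, sketched below (the notification and handler names are made up): either ask the center to deliver on the main queue via the block-based API, or hop to the main queue yourself inside the handler.
// Option 1: block-based registration with delivery on the main queue.
[[NSNotificationCenter defaultCenter] addObserverForName:@"MyDataDidLoadNotification"
                                                   object:nil
                                                    queue:[NSOperationQueue mainQueue]
                                               usingBlock:^(NSNotification *note) {
    // Safe to touch UIKit/AppKit here; the block runs on the main thread.
}];

// Option 2: a selector-based handler that bounces its UI work to the main queue.
- (void)dataDidLoad:(NSNotification *)note {
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI here.
    });
}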
What are the differences in behavior of this method in GC and non-GC environments?
I don't remember hearing of anything that you need to worry about, though I haven't used the GC much yet.
Are there any significant differences (from a client perspective) between mobile and desktop environments in this method's behavior?
Not that I've heard of, no. When you register your object you can choose to register for all notifications with a given name, or only for notifications from a certain object. If you're using notifications heavily, the latter may be a little faster, but always test to be sure.
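For instance (a sketch; the notification name and the currentAccount object are made up), passing nil for object: registers for the notification from any sender, while passing a specific object narrows delivery to that one sender:
// Hear about the notification no matter which object posts it:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(accountDidChange:)
                                             name:@"AccountDidChangeNotification"
                                           object:nil];

// Hear about it only when posted by one particular object:
[[NSNotificationCenter defaultCenter] addObserver:self
                                         selector:@selector(accountDidChange:)
                                             name:@"AccountDidChangeNotification"
                                           object:currentAccount];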
Also, any pointers to existing articles which cover this would be greatly appreciated. I Googled, but was surprised to find little in-depth discussion of these issues (although maybe I didn't use the right magic keywords).
I think it's more because NSNotificationCenter is pretty easy to use, in general. If you're worried about certain cases, don't be afraid to write a quick test app!

Current situation in 2016:
iOS 9 changed NSNotificationCenter so that it holds a zeroing weak reference to the observer.
That is to say, you no longer have to call removeObserver: when the observer is deallocated.
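One caveat: that automatic cleanup applies to the selector-based API. Observers added with the block-based addObserverForName:object:queue:usingBlock: method return a token that the center holds strongly, so those still need to be removed. A rough sketch (the observerToken property is made up):
// Keep the returned token and remove it explicitly when you're done.
self.observerToken = [[NSNotificationCenter defaultCenter]
    addObserverForName:@"MyNotification"
                object:nil
                 queue:nil
            usingBlock:^(NSNotification *note) {
        // handle the notification
    }];

// Later (e.g. in dealloc or teardown):
[[NSNotificationCenter defaultCenter] removeObserver:self.observerToken];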

Related

Where is Fyne's thread safety defined?

I was attracted to Fyne (and hence Go) by a promise of thread safety. But now that I'm getting better at reading Go I'm seeing things that make me believe that the API as a whole is not thread safe and perhaps was never intended to be. So I'm trying to determine what "thread safe" means in Fyne.
I'm looking specifically at
func (l *Label) SetText(text string) {
    l.Text = text
    l.textProvider.SetText(text) // calls refresh
}
and noting that l.Text is also a string. Assignments in Go are not thread safe, so it seems obvious to me that if two threads fight over the text of a label and both call label.SetText at the same time, I can expect memory corruption.
"But you wouldn't do that", one might say. No, but I am worried about the case of someone editing the content of an Entry while an app thread decides it needs to replace all the Entry's text - this is entirely possible in my app because it supports simultaneous editing by multiple users over a network, so updates to all sorts of widgets come in asynchronously. (Note I don't care what happens if two people edit the same Entry at the same time; someone's changes will be lost and I don't care who's. But it must not result in memory corruption.) Note that one approach I could take would be to have the background thread create an entirely new Entry widget, which would then replace the one in the current Box. But is that thread safe?
It's not that I don't know how to serialize things with channels. But I was hoping that Fyne would eliminate the need for it (a blog post claims it does); and even using channels I can't convince myself that a user meddling with a widget in various ways while some background thread is altering it, hiding it, etc, isn't going to result in crashes. Maybe all that is serialized under the covers and is perfectly safe, but I don't want to find out the hard way that it isn't, because I'll have no way to fix it.
Fyne is clearly pretty new and seems to have tons of promise, but documentation seems light on details. Is more information available somewhere? Have people tried this successfully?
You have found some race conditions here. There are plans to improve, but the 1.2 release was required to get a new "BaseWidget" first - and that was only released a few weeks ago.
Setting fields directly is primarily for setup purposes and so not expected to be used in the way you illustrate. That said, we do want to support it. The base widget will soon introduce something akin to SetFieldsAndRefresh(func()) which will ensure the safety of the code passed and refresh the widget afterward.
There is indeed a race currently within Refresh(). The internal use of channels was designed to remove this, but there are some corner cases, such as multiple goroutines calling it. This is the area where our new BaseWidget code can help, as widgets can lock internally and automatically. Using this approach will be thread safe with no changes required of the developer in a future release.
The API so far has made it possible for developers to not worry about threading and work from any goroutines - we do need to work internally to make it safer - you are quite right. https://github.com/fyne-io/fyne/issues/506

When to create a custom NSNotificationCenter?

I've been playing about with NSNotificationCenter and I've been wondering: when would you use your own custom notification center rather than the defaultCenter, and what would the advantages be?
Forgive my ignorance, but it seems like I could get along quite happily just using the defaultCenter and nothing else, but I want to make sure I am not missing something vital.
Apple's documentation is vague; it just states that a programmer usually doesn't need to create one:
Each running Cocoa program has a default notification center. You typically don’t create your own. An NSNotificationCenter object can deliver notifications only within a single program.
Full source: NSNotificationCenter documentation.
However every notification center can handle a network of notifications, distinguished by name and object. When you add an observer you typically call the method in some way like this:
[center addObserver:self selector:@selector(observe:) name:@"observe" object:someObject];
And when you post a notification you can specify the object:
[center postNotificationName:@"observe" object:someObject];
This way, if you use N names and M objects, you can handle N*M distinct notifications. I think there is no need to use two notification centers. Theoretically, if you ran out of names you could create another center with alloc+init, but I can hardly see how that would actually turn out to be handy.
Also consider that a notification center is often used when two objects do not hold a direct pointer to each other (otherwise, why not simply call a method on it?), in order to avoid complicated bindings (especially when you use a lot of xib files), so having a single, shared notification center object is very handy.
If instead you use a notification center obtained with alloc+init, then you must ensure that all the communicating objects have a pointer to that notification center, which adds complexity. All the notification center's power would be wasted.
Although it's used heavily in AppKit via the defaultCenter singleton accessor, at its heart NSNotificationCenter is really just a "generic decoupling mechanism." Allowing you to alloc/init your own instances of it is just an expression of that generic-ness. If you wanted to use it for something else, you could.
To illustrate with a somewhat absurd example, think of it this way: NSDocument has a windowControllers accessor that returns a specific, blessed, important instance of NSArray that contains references to all the window controllers specific to that document. That said, NSArray is just a "generic list data structure". Just because there exists this special instance of it with a specified purpose doesn't mean it might not be useful to reuse NSArray for your own purposes. Both NSArray and NSNotificationCenter provide generic data structures/building blocks, specific instances of which are used in blessed "occupations" around AppKit, but both of which could be useful on their own.
The primary use case I've seen for creating standalone instances of NSNotificationCenter is when you want to run multiple instances of some complex subsystem on multiple threads in parallel and not have them potentially get confused by cross-thread notifications. In this case, the general pattern is to allocate one NSNotificationCenter per thread. This compartmentalizes the notifications for each network of objects to a single thread. This will generally be required if observers pass nil for the object parameter intending to listen to all notifications with a given name, regardless of source.
All that said, I concede that, in my experience, making private instances of NSNotificationCenter is pretty rare.
From Apple's documentation:
If your app uses notifications extensively, you may want to create and post to your own notification centers rather than posting only to the default notification center. When a notification is posted to a notification center, the notification center scans through the list of registered observers, which may slow down your app. By organizing notifications functionally around one or more notification centers, less work is done each time a notification is posted, which can improve performance throughout your app.
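A minimal sketch of that idea, assuming a hypothetical SyncEngine subsystem that keeps its notification traffic off the default center:
@interface SyncEngine : NSObject
@property (nonatomic, strong, readonly) NSNotificationCenter *notificationCenter;
@end

@implementation SyncEngine

- (instancetype)init {
    if ((self = [super init])) {
        // A private center: only objects that register with it will see these notifications.
        _notificationCenter = [[NSNotificationCenter alloc] init];
    }
    return self;
}

- (void)finishSync {
    [self.notificationCenter postNotificationName:@"SyncEngineDidFinishNotification"
                                           object:self];
}

@end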

What kind of memory leaks XCode Analyzer may not notice?

I'm afraid that asking this question may result in some downvotes, but after some unsatisfying research I decided to take a risk and ask more experienced people...
There are many questions here referring to specific problems connected with the Xcode Analyzer tool. It seems to be a very helpful tool. But I would like to ask you, as a beginner in the iOS world, what kinds of memory management issues cannot be noticed by this tool.
In other words, are there any common memory management situations where iOS beginners should think, "Be careful with that, because in this case the Xcode Analyzer may not warn you about your mistake"...
For instance, I've found here Why cannot XCode static analyzer detect un-released retained properties? that:
(...)the analyzer can't reliably detect retain/release issues across
method/library boundaries(...)
It sounds like a good hint to consider, but maybe you know about some other common issues...
The analyzer is very good at finding the routine leaks that plague new programmers writing non-ARC code (failures to call release, returning objects of the wrong retain count, etc.).
In my experience, there are a couple of types of memory issues it does not find:
It cannot generally identify strong reference cycles (a.k.a. retain cycles). For example, you add a repeating NSTimer to a view controller, unaware that the timer maintains a strong reference to the view controller; if you don't invalidate the timer (or do so in the wrong place, such as the dealloc method), neither the view controller nor the timer will get released. (A sketch of this follows the list below.)
It cannot find circular logic errors. For example, if you have some circular references where view controller A presents view controller B, which in turn presents a new copy of A (rather than dismissing/popping to get back to A).
It cannot find many non-reference counting memory issues. While it's getting better in dealing with Core Foundation functions, if you have code that is doing manual memory allocations (such as via malloc and free), the static analyzer may be of limited use. The same is true whenever you're using non-reference counting code (e.g. you use SQLite sqlite3_prepare_v2 and fail to call sqlite3_finalize).
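To make the first of those concrete, here's a sketch of the repeating-timer cycle (the pollTimer property and poll: selector are made up): the run loop retains the timer, the timer retains its target, so dealloc never runs and the analyzer stays quiet.
- (void)viewDidLoad {
    [super viewDidLoad];
    // The timer holds a strong reference to self for as long as it repeats.
    self.pollTimer = [NSTimer scheduledTimerWithTimeInterval:1.0
                                                      target:self
                                                    selector:@selector(poll:)
                                                    userInfo:nil
                                                     repeats:YES];
}

- (void)dealloc {
    // Never reached while the timer is live; invalidate in viewWillDisappear: (or similar) instead.
    [self.pollTimer invalidate];
}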
I'm sure that's not a complete list of what it doesn't find, but those are the common issues I see asked about on Stack Overflow for which the static analyzer will be of limited help. But the analyzer is still a wonderful tool (it finds issues other than memory issues, too) and for those individuals not using ARC, it's invaluable.
Having said that, while the static analyzer is an under-appreciated first line of defense, you really should use Instruments to find leaks. See Locating Memory Issues in Your App in the Instruments User Guide. That's the best way to identify leaks.

Is it easy to abuse the observer pattern?

I have a project where I am using the observer pattern extensively for the first time. One thing I've found though, is that if I inspect a typical object in this project, it tends to be astonishingly large with all of the observers and observables, and then the times when an observer has other observers, etc.
That seems to be beside the point since the performance is fine. But I've found that occasionally when I'm in the debugger, if I try to print an instance variable it will lock up my machine until I kill the process. This has me concerned that there is some opportunity for this to happen while the code is in production, or that this is just a warning that I am abusing the pattern.
Any tips, suggestions?
TL;DR: Yep, but that doesn't mean it isn't a perfect fit sometimes.
"Astonishingly large" implies... it's pretty large; what does that actually mean? How many observers/observables are there? Are they deeply nested?
IMO the correlation between doing stuff in a debugger and "real life" isn't particularly strong; has it ever locked up in production or testing? I'd be more likely to think it's an artifact of the debugging process/app.
"Spooky action at a distance" creates non-locality that must be understood in order to reason correctly about code and behavior. This kind of development needs to be groomed aggressively; rather than saying "I'll just create a new observer", architect it in, and keep reasoning as linear as possible.
You could override the inspect method to be less verbose.
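If this is Cocoa, the analogue of overriding inspect would be overriding -description (or -debugDescription); a sketch, assuming a hypothetical observers collection:
// Keep the debugger from walking the entire observer graph when printing the object.
- (NSString *)description {
    return [NSString stringWithFormat:@"<%@: %p, %lu observers>",
            [self class], self, (unsigned long)[self.observers count]];
}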

Should I use Cocoa bindings for my latest project?

I'm starting a project which I think would benefit from bindings (I've got a source list table, several browser views, etc), but I think it would also be quite doable, and perhaps more understandable, without them. From my limited experience I've found bindings to be difficult to troubleshoot and very "magic" (e.g. it's difficult to insert logging anywhere to figure out where stuff is breaking, everything either works or it doesn't).
Is this just my inexperience talking (in which case I could sit down, spend some time working on my understanding of bindings, and expect things to start becoming clearer/easier), or would I be better off just writing all the glue code myself in a manner I'm sure I could understand and troubleshoot?
Use Bindings.
Note that you must follow the MVC pattern to get the most from bindings. This is easier than it seems, as Cocoa does almost everything for you nowadays:
View: NSView and subclasses (of course), NSCell and subclasses, NSWindow and subclasses
Controller: NSController and subclasses (especially NSArrayController)
Model: Core Data
If you're not going to use Core Data, then you get to roll your own model objects, but this is easy. Most of these objects' methods will be simple accessors, which you can just @synthesize if you're targeting Leopard.
You usually can't get away with not writing any code, but Bindings can enable you to write very little code.
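A sketch of such a hand-rolled model object (Leopard-era style; the Person class and its properties are made up). The synthesized accessors emit the key-value change notifications that bindings depend on:
@interface Person : NSObject {
    NSString *name;
    NSNumber *age;
}
@property (copy) NSString *name;
@property (copy) NSNumber *age;
@end

@implementation Person
// Synthesized setters are automatically KVO-compliant, so bound views stay in sync.
@synthesize name;
@synthesize age;
@end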
Recommended reading:
Key-Value Coding (KVC) Programming Guide
Key-Value Observing (KVO) Programming Guide
Model Object Implementation Guide
KVC Accessor Methods (part of the aforementioned KVC Programming Guide) and my complete list of KVC-compliant accessor selector formats
Bindings can seem magical in nature. To understand the magic behind bindings, I think one must understand KVC/KVO thoroughly. I really do mean thoroughly.
However, in my case (new to Obj-C, about 9 months), once I got KVC/KVO, bindings were a thrill. They have significantly reduced my glue code and made my life significantly easier. Debugging bindings became a case of making sure my key-value changes were observable. I find that I am able to spend more time writing what my app is supposed to do rather than making sure the view reflects the data.
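For example, a programmatic binding (a sketch; nameField, peopleController and the key path are made up) makes the KVO relationship explicit, which is usually the first thing I check when a bound view stops updating:
// Bind the text field's value to the controller's selection; the field refreshes
// whenever "selection.name" changes in a KVO-observable way.
[nameField bind:NSValueBinding
       toObject:peopleController
    withKeyPath:@"selection.name"
        options:nil];

// Changes must go through KVC-compliant accessors for observers (and bindings) to notice:
[selectedPerson setValue:@"Alice" forKey:@"name"];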
I do agree though that bindings is highly intimidating at first.
My general approach is to start out as much as possible using bindings and see how things go. However, if a particular interface element starts to become problematic with bindings, or takes more effort than it's worth, then I don't hesitate to fall back to more traditional methods (e.g. data sources, actions) when it makes sense. I've found these things can be pretty hard to predict ahead of time, but I think favoring bindings is better in the long run, as long as you don't get too dogmatic about sticking with them in situations where they don't provide any benefit.
After a while of working with Bindings I've found that it's not magic at all, though it is sufficiently advanced technology. Debugging a bound interface takes different techniques than a glued interface, but once you have those techniques, the advantages in terms of reuse, maintainability and consistency are IMO significant.
It seems like I use bindings, KVO and data source methods all about equally in my applications. It really depends on the context. For example, in one of my projects I use bindings just about everywhere except the main window's outline view, which is complex enough that I wouldn't want to even try to fit it into an NSTreeController. At the same time I also use KVO to reload UI objects and track dependencies in my model objects.
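To illustrate the manual-KVO side of that, a sketch (the document, the items key path, and outlineView are made up):
static void *ModelChangedContext = &ModelChangedContext;

- (void)startObservingDocument {
    // Watch a model key path so the UI can be reloaded when it changes.
    [self.document addObserver:self
                    forKeyPath:@"items"
                       options:NSKeyValueObservingOptionNew
                       context:ModelChangedContext];
}

- (void)observeValueForKeyPath:(NSString *)keyPath ofObject:(id)object
                        change:(NSDictionary *)change context:(void *)context {
    if (context == ModelChangedContext) {
        [self.outlineView reloadData];
    } else {
        [super observeValueForKeyPath:keyPath ofObject:object change:change context:context];
    }
}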
The important thing to keep in mind when learning advanced Cocoa topics like Bindings or Core Data is that you must understand all the technologies behind them: everything from data source protocols to notifications, KVO, and so on. Once you've had enough experience working with them to know how the "magic" works, you'll be able to integrate the higher level stuff into your application with ease.
In your particular case, you'll have to decide if it's worth the extra time to learn bindings on top of developing your application. If possible, it might benefit you to develop a simplified prototype of your application using bindings, so you know how to best fit the pieces together when you start the actual project.
My opinion is that yes, you should adopt bindings; the technology is well-understood and stable now, and it's worth doing for the amount of code you no longer need to write. When I first switched to bindings, I had quite a bit of trouble with getting the lifetime of observing and observed objects to match up, and with UI breakages because it was observing a valid object, but the incorrect one. Once you've seen those problems a couple of times, knowing how to avoid them and how to spot them if they do appear becomes straightforward. Ish. I still wish for "this event here caused this update here" traces in the debugger, but I'm still glad I made the move.
For the curious, I did end up using bindings and after a couple of days they suddenly just started "making sense". So I would definitely recommend just going ahead and taking the time to learn them.
I also found the advice of Brian Webster quite helpful, as I did indeed end up doing a handful of things the old fashioned way either because bindings couldn't do what I wanted or because it would have been prohibitively complicated to do what I needed using bindings.

Resources