So I've got an NSSearchField bound directly to an NSArrayController's filterPredicate via bindings, so that without any code the user can type in the NSSearchField and filter the list of objects the NSArrayController presents in the interface (an NSCollectionView, to be specific).
The NSSearchField is set up for live searching, so the NSCollectionView is filtered instantly as the user types, not after a short pause while the user stops typing.
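For reference, the equivalent setup established in code would look roughly like this (a sketch; searchField and arrayController are assumed outlets, and the name key is a placeholder for the actual model property):
// Bind the search field's predicate binding to the controller.
[searchField bind:@"predicate"
         toObject:arrayController
      withKeyPath:@"filterPredicate"
          options:@{ NSPredicateFormatBindingOption: @"name CONTAINS[cd] $value" }];
// Live search: make the field send its string on every keystroke.
[(NSSearchFieldCell *)searchField.cell setSendsSearchStringImmediately:YES];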
However, the problem is that this makes the interface really laggy. Typing is delayed by 0.5-1 second, and it seems the NSCollectionView tries to animate every rearrangement of items for each fragment of the search string the user enters.
What I'd like is for the searching to be live, but the typing in the search field to be fluid, and the results to filter as fast as possible. Is there a way to do this via bindings, or will I need to put in some custom code that triggers the filterPredicate on a separate thread?
(Note that I've got a custom sorting algorithm set up on the NSArrayController, and removing it seems to help a bit with the lagginess, but not completely.)
I would definitely go with setting the predicate on a separate thread; it seems you already know what you have to do. Blocking the main thread is obviously the source of the lag.
Actually, it looks like you can't call setFilterPredicate: from a separate thread. It causes a crash.
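One workaround for that (a sketch, assuming an arrayController outlet and a searchString captured from the field): do any expensive predicate preparation in the background, but always apply the predicate on the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Build the predicate (and do any other slow work) off the main thread.
    NSPredicate *predicate = [NSPredicate predicateWithFormat:@"name CONTAINS[cd] %@", searchString];
    dispatch_async(dispatch_get_main_queue(), ^{
        // setFilterPredicate: must run on the main thread.
        [arrayController setFilterPredicate:predicate];
    });
});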
It turns out that my problem was actually caused by some slow code being called over and over while the filter predicate was being applied, which severely slowed down the filtering. I found this with the Time Profiler instrument in Instruments: it showed me which method was taking the most time, and optimizing that method fixed the lagginess.
I've got a button in a toolbar that I'd like to use to control the state (expanded/collapsed) of the right-hand-side split of a split-view. In a xib-based project this is trivial, but I'm using storyboards, and can't work out what the best approach is.
The key limitation is the fact that I'm not able to create target-action connections between objects in different scenes. As I see it, this leaves me with three options:
Have the window controller handle the expand/collapse request - this seems inappropriate - what's the window controller got to do with the state of the split view?
Have the window controller 'collect' the action and forward it on to the split-view controller for processing - this is better, but it still seems like bad design to have the window controller involved in this process at all.
Add a custom action to the split-view controller and let the responder chain deliver it, connecting the button's action to First Responder. This is what I've done (sketched below), but it still seems like a distant second-best compared with the xib-style approach - a direct drag-and-drop between objects.
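For concreteness, option 3 comes down to something like this (a sketch; toggleRightPane: is a hypothetical action name, and the toolbar button's action is wired to First Responder so it travels the responder chain to the split-view controller):
// In the NSSplitViewController subclass:
- (IBAction)toggleRightPane:(id)sender {
    NSSplitViewItem *rightItem = self.splitViewItems.lastObject;
    // Animate the collapse/expand of the right-hand split.
    [rightItem.animator setCollapsed:![rightItem isCollapsed]];
}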
I suppose I was hoping that storyboards would just make everything easier. Am I right in thinking that this target-action dilemma is a genuine shortcoming of the storyboard approach, and that I just need to get used to one of the above?
I'd really appreciate an answer to this, but can't afford a bounty (!).
Here we have a very simple GUI: the user just enters a source word and a target word in two text boxes, then presses a button. Then a lot of whirring takes place, and half a second later an answer is shown. The user goes on doing this until bored, then closes the app. Naturally, when the app restarts, the focus should be on the source, and I am hoping there is a neater way of achieving this than the one described. The commenter below has confirmed my feeling that the problem was an artefact of Lion persistence, which is a real nuisance in simple cases like this.
I set an NSTextField as First Responder (using the window's makeFirstResponder) in the awakeFromNib method of a simple 'controller' class, in a simple Cocoa application in Xcode 4.3, running under Lion.
The makeFirstResponder works fine the first time the app is loaded after reboot, but on every rerun the focus is set to the last field accessed. (I had tried connecting the window's initialFirstResponder outlet to the desired NSTextField, but got the same problem).
I fixed it finally by calling an initialisation function from the NSApplication delegate, and putting the makeFirstResponder call there.
The fix is a bit messy - I added a global variable to the controller, and initialised it to self in awakeFromNib.
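In code, the fix amounts to something like this (a sketch; window and sourceField stand in for the actual outlets reached through that global):
// In the NSApplication delegate: this runs after Lion's window
// restoration has already restored the previous first responder.
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    [self.window makeFirstResponder:self.sourceField];
}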
I should add that the Cocoa part of the app is simple; the bulk of it is a mass of STL code in .cpp files, ported from Windows.
Deselect the "Restorable" check box in the attributes inspector for your window in IB. Of course, you then won't have the other behaviors you get with a restorable window like remembering its position and size.
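If you prefer to do it in code, the same switch is exposed as a property on NSWindow (assuming a window outlet):
// Equivalent to unchecking "Restorable" in IB.
self.window.restorable = NO;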
Although I have searched for a lot of information about Cocoa Bindings, I still remain relatively unsatisfied with what I have found. The topic seems troublesome for many people, and many simply avoid the pattern, which I believe they should not.
Of course, it may seem that bindings are sometimes too complicated or perhaps designed with too much overhead...
However, I have one very direct and specific question: Why is NSObjectController needed if I can establish bindings directly?
For example, the code:
[controller bind:@"contentObject" toObject:self withKeyPath:@"numberOfPieSlices" options:nil];
[slicesTextField bind:@"value" toObject:controller withKeyPath:@"content" options:nil];
[stepperControl bind:@"value" toObject:controller withKeyPath:@"content" options:nil];
Does exactly the same as:
[slicesTextField bind:@"value" toObject:self withKeyPath:@"numberOfPieSlices" options:nil];
[stepperControl bind:@"value" toObject:self withKeyPath:@"numberOfPieSlices" options:nil];
In my case we are talking about a property of the class inside which everything is happening, so I am guessing NSObjectController is needed when:
the controller's key path points to an object, and other controls need to bind to that object's properties rather than to its value, as is the case with primitives and the wrappers around them (numberOfPieSlices in my case is an NSInteger),
or when bindings to other, outside objects are needed, not only between objects within one class.
Can anybody confirm or reject this?
One of the benefits/points of bindings is to eliminate code. To that end, NSObjectController and friends have the advantage that they can be used directly in Interface Builder and set up with bindings to various UI elements.
Bindings only represent part of the functionality on offer. The *ObjectController classes can also automatically take care of a lot of the other more repetitive controller (as in Model, View, Controller) code that an application usually needs. For example they can:
connect to your Core Data store and perform the necessary fetches, inserts and deletes
manage the undo/redo stack
pick up edited-but-not-yet-committed changes to your UI and save them (e.g. if a window is closed while focus is still on an edited text field - this was a new one to me; I found it in mmalc's answer in the thread below)
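That last point is the NSEditor machinery in action; a minimal illustration (assuming an objectController outlet):
// Ask the controller to push any in-progress edits (e.g. a text
// field still being edited) into the model before saving.
if (![self.objectController commitEditing]) {
    NSLog(@"A bound view refused to commit its pending edits");
}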
If you're doing none of this, then it probably isn't worth using NSObjectController. Its subclasses (NSArrayController, etc.) are more useful.
Also see here for a discussion of your exact question!
Why is NSObjectController needed if I can establish bindings directly?
I read this question a few days ago while looking for some information about NSObjectController, and today while continuing my search, I found the following passage which seemed relevant to the question:
There are benefits if the object being bound to implements NSEditorRegistration. This is one reason why it’s a good idea to bind to controller objects rather than binding directly to the model. NSEditorRegistration lets the binding tell the controller that its content is in the process of being edited. The controller keeps track of which views are currently editing the controller’s content. If the user closes the window, for example, every controller associated with that window can tell all such views to immediately commit their pending edits, and thus the user will not lose any data.
Apple supply some generic controller objects (NSObjectController, NSArrayController, NSTreeController) that can be used to wrap your model objects, providing the editor registration functionality.
Using a controller also has the advantage that the bindings system isn’t directly observing your model object — so if you replace your model object with a new one (such as in a detail view where the user has changed the record that is being inspected), you can just replace the model object inside the controller; KVO notices and the binding updates.
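That last point is easy to see in code; a sketch of the detail-view pattern (recordController and newRecord are hypothetical):
// Swap the record being inspected. Every view bound through the
// controller updates via KVO; no re-binding is needed.
self.recordController.content = newRecord;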
Although I know of a solution to this problem, I am interested in whether someone can explain it to me. I also wanted to get this out there because I could not find any mention of the problem online, and it took me several hours over several days to track down. I have an NSTableView behaving strangely with regard to redraws and its selection. The problem looks like this:
The table contents fade in, instead of appearing instantly when the table appears on screen. When scrolling through the contents, newly appearing rows also fade in. When you make a selection (single or multiple), scroll it off screen, then make another selection (one that should replace, not add to, the first), the first selection does not get cleared properly. If you scroll back to it, it is still there, in addition to your new selection. This is a display-update problem, not a selection problem - your new selection is valid, it is just displayed wrong.
I traced this through the NSArrayController I was binding to, the underlying array, the sorting, all the connections and settings, etc., but none of that had anything to do with it.
What solved the problem was:
In the View Effects (right-most) Inspector, uncheck "Core Animation Layer" for the Window's main view.
Can anyone explain what is happening here, and perhaps improve upon the solution?
It looks like Core Animation and NSTableView aren't getting along so well. The fading effect is a by-product of the way Core Animation works: when you enable it on one view, it is also enabled in all of that view's subviews.
I don't recommend using Core Animation on the Mac unless absolutely necessary, because some interface elements (NSTextView and NSTableView, for example) aren't compatible with it. iOS has much better support for table views and the like under Core Animation, mainly because it was designed with Core Animation in mind.
I know that some simpler UI elements are compatible (NSTextField and NSButton, for example).
If you absolutely need Core Animation in the rest of the window, put all the other views in a subview of the content view, while leaving the table view directly in the content view. You can then enable Core Animation on that other view.
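In code, that layout comes down to which view gets the layer (animatedContainer is a hypothetical subview holding everything except the table's scroll view):
// Layer-back only the container of the animated views; the table
// view's enclosing scroll view stays layer-free.
[animatedContainer setWantsLayer:YES];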
Commenters, feel free to add to the list of what is and isn't compatible.
In my experience, only two patterns have worked for me in large-scale desktop application development when trying to keep the model and UI in sync.
1. An event bus approach: command objects (e.g. UserDemographicsUpdatedEvent) are fired over a shared event bus, and the various parts of the UI bound to the user object updated by that event refresh themselves.
2. Binding the UI directly to the model, adding listeners to the model itself as needed. I find this approach rather clunky, as it pollutes the domain model.
Does anybody have other suggestions? In a web application with something like JSP, binding to the model is easy, as you usually only care about the state of the model at the time the request comes in; not so in a desktop application.
Any ideas?
I am currently using the event bus approach to synchronize the models and the UI in my application, but I have hit a hurdle: it is difficult to make it very fine-grained, for example at the property level, where you are just interested in knowing whether property x of an object gets updated, and there are hundreds or thousands of such cases.
For such fine-grained control, you might want to check out how KVC (Key-Value Coding) and KVO (Key-Value Observing) work in Cocoa. They basically allow an object to observe any other object's properties, as long as those follow some basic KVC conventions. Interested objects are automatically notified upon changes, and you don't have to explicitly notify the observers on each property change; that is taken care of by the underlying KVO implementation. It is somewhat similar to PropertyChangeListeners on JavaBeans.
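A minimal sketch of what that looks like (person and its name property are hypothetical):
// Somewhere in setup code: start observing one property.
[person addObserver:self
         forKeyPath:@"name"
            options:NSKeyValueObservingOptionNew
            context:NULL];

// The observer is then notified on every KVC-compliant change:
- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    NSLog(@"%@ changed to %@", keyPath, change[NSKeyValueChangeNewKey]);
}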
If there are too many observations going on, and writing the glue code to update models/views on property changes becomes problematic, you might want to take it a step further and use data binding to keep models and views synchronized. Built upon the concepts of KVO, the idea is to bind properties of objects so that a change in one automatically updates the other, and vice versa. For example, you could bind the text in SO's answer field to the answer preview that we see right below it.
.bind('answer.value', 'answerPreview.text')
Both happen to be view elements in this case, but data binding is a generic approach and can be used to bind objects more generally, not just UI to models.
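In Cocoa terms, that pseudocode corresponds roughly to binding both controls to the same model property (answerField, answerPreview, model, and answerText are all hypothetical):
// Typing in the answer field updates the model, which in turn
// updates the preview; the bindings system does the plumbing.
[answerField bind:@"value" toObject:model withKeyPath:@"answerText" options:nil];
[answerPreview bind:@"value" toObject:model withKeyPath:@"answerText" options:nil];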