Should I use Cocoa bindings for my latest project? - cocoa

I'm starting a project which I think would benefit from bindings (I've got a source list table, several browser views, etc), but I think it would also be quite doable, and perhaps more understandable, without them. From my limited experience I've found bindings to be difficult to troubleshoot and very "magic" (e.g. it's difficult to insert logging anywhere to figure out where stuff is breaking, everything either works or it doesn't).
Is this just my inexperience talking (in which case I could sit down and spend some time working on my understanding of bindings and expect things to start becoming clearer/easier), or would I be better off just writing all the glue code myself in a manner I'm sure I can understand and troubleshoot?

Use Bindings.
Note that you must follow the MVC pattern to get the most from bindings. This is easier than it seems, as Cocoa does almost everything for you nowadays:
View: NSView and subclasses (of course), NSCell and subclasses, NSWindow and subclasses
Controller: NSController and subclasses (especially NSArrayController)
Model: Core Data
If you're not going to use Core Data, then you get to roll your own model objects, but this is easy. Most of these objects' methods will be simple accessors, which you can just @synthesize if you're targeting Leopard.
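For example, a minimal hand-rolled model class might look like the sketch below (the class and property names are hypothetical; on Leopard, @synthesize generates the KVC-compliant accessors that Bindings rely on):

    #import <Foundation/Foundation.h>

    @interface Song : NSObject {
        NSString *title;
        NSNumber *duration;
    }
    @property (copy) NSString *title;
    @property (copy) NSNumber *duration;
    @end

    @implementation Song
    @synthesize title;
    @synthesize duration;

    - (void)dealloc {
        [title release];     // manual retain/release, pre-ARC
        [duration release];
        [super dealloc];
    }
    @end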
You usually can't get away with not writing any code, but Bindings can enable you to write very little code.
Recommended reading:
Key-Value Coding (KVC) Programming Guide
Key-Value Observing (KVO) Programming Guide
Model Object Implementation Guide
KVC Accessor Methods (part of the aforementioned KVC Programming Guide) and my complete list of KVC-compliant accessor selector formats

Bindings can seem magical in nature. To understand the magic behind bindings, I think one must understand KVC/KVO thoroughly. I really do mean thoroughly.
However, in my case (new to Obj-C -- 9 months), once I got KVC/KVO, bindings were a thrill. They have significantly reduced my glue code and made my life much easier. Debugging bindings became a case of making sure my key-value changes were observable. I find that I am able to spend more time writing what my app is supposed to do rather than making sure the view reflects the data.
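To illustrate "making key-value changes observable" (a hedged sketch; the "songs" array and "title" property are hypothetical names): changes are only visible to observers and bound controls if they go through KVC-compliant accessors or the KVC mutation proxies.

    // Not observable: the mutable array ivar is changed behind KVO's back,
    // so a bound NSArrayController or table view will not update.
    [songs addObject:newSong];

    // Observable: the proxy returned by -mutableArrayValueForKey: sends the
    // will/did-change notifications that bindings listen for.
    [[self mutableArrayValueForKey:@"songs"] addObject:newSong];

    // For simple properties, use the setter rather than the ivar:
    self.title = @"New Title";   // observable
    title = @"New Title";        // direct ivar assignment -- not observable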
I do agree, though, that bindings are highly intimidating at first.

My general approach is to start out using bindings as much as possible and see how things go. However, if a particular interface element starts to become problematic with bindings, or takes more effort than it's worth, then I don't hesitate to fall back to more traditional methods (e.g. data sources, actions) when it makes sense. I've found these things can be pretty hard to predict ahead of time, but I think favoring bindings is better in the long run, as long as you don't get too dogmatic about sticking with them in situations where they don't provide any benefit.
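For comparison, the traditional fallback looks something like the sketch below (the two methods are the standard NSTableView data source methods; the "items" array and the use of the column identifier as a KVC key are assumptions made for the example):

    - (NSInteger)numberOfRowsInTableView:(NSTableView *)tableView {
        return [items count];
    }

    - (id)tableView:(NSTableView *)tableView
            objectValueForTableColumn:(NSTableColumn *)tableColumn
            row:(NSInteger)row {
        // Look up the value using the column's identifier, e.g. "title".
        return [[items objectAtIndex:row] valueForKey:[tableColumn identifier]];
    }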

After a while of working with Bindings I've found that it's not magic at all, though it is sufficiently advanced technology. Debugging a bound interface takes different techniques than a glued interface, but once you have those techniques, the advantages in terms of reuse, maintainability, and consistency are IMO significant.

It seems like I use bindings, KVO and data source methods about equally in my applications. It really depends on the context. For example, in one of my projects I use bindings just about everywhere except the main window's outline view, which is complex enough that I wouldn't want to even try to fit it into an NSTreeController. At the same time I also use KVO to reload UI objects and track dependencies in my model objects.
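As a hedged illustration of the dependency-tracking part (the "fullName", "firstName" and "lastName" keys are hypothetical): a model object can declare that a derived key depends on other keys, so anything observing or bound to the derived key is notified automatically.

    // Observers of "fullName" are notified whenever firstName or lastName changes.
    + (NSSet *)keyPathsForValuesAffectingFullName {
        return [NSSet setWithObjects:@"firstName", @"lastName", nil];
    }

    - (NSString *)fullName {
        return [NSString stringWithFormat:@"%@ %@", self.firstName, self.lastName];
    }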
The important thing to keep in mind when learning advanced Cocoa topics like Bindings or Core Data is that you must understand all the technologies behind them: everything from data source protocols and notifications to KVO, and so on. Once you've had enough experience working with them to know how the "magic" works, you'll be able to integrate the higher level stuff into your application with ease.
In your particular case, you'll have to decide if it's worth the extra time to learn bindings on top of developing your application. If possible, it might benefit you to develop a simplified prototype of your application using bindings, so you know how to best fit the pieces together when you start the actual project.

My opinion is that yes, you should adopt bindings; the technology is well-understood and stable now, and it's worth doing for the amount of code you no longer need to write. When I first switched to bindings, I had quite a bit of trouble with getting the lifetime of observing and observed objects to match up, and with UI breakages because it was observing a valid object, but the incorrect one. Once you've seen those problems a couple of times, knowing how to avoid them and how to spot them if they do appear becomes straightforward. Ish. I still wish for "this event here caused this update here" traces in the debugger, but I'm still glad I made the move.

For the curious, I did end up using bindings and after a couple of days they suddenly just started "making sense". So I would definitely recommend just going ahead and taking the time to learn them.
I also found the advice of Brian Webster quite helpful, as I did indeed end up doing a handful of things the old fashioned way either because bindings couldn't do what I wanted or because it would have been prohibitively complicated to do what I needed using bindings.

Related

Is it good to develop view before model (in MVC pattern)?

I was wondering about the best practice when using the MVC pattern.
When you develop an app for a client, you want to think business. You want to think as the customer would. That's why I'm wondering:
Isn't it better to develop the view part first, without any data handling, so the customer can validate it?
I see this practice as being as powerful as TDD: if you clearly know what your program will look like, you know what processing it will require, which makes the model part a bit more concrete and business-oriented instead of too abstract and general.
I can't see any downsides to this, so if you can see some, or can explain why it's not a good idea, please do.
Thanks :-)
The main benefit, as I see it, would be the ability to provide the client with a hands-on prototype.
It's not uncommon for clients to change their mind, because often, when they hire a developer or company, they actually have only a vague goal for the end product. This way you would mitigate the risk of large-scale changes late in the project's life cycle.
As for implementing such an approach, I would recommend you look into the concept of "presentation objects" (Fowler has this annoying habit of slapping "model" on every damned term).
With presentation objects you would gain the ability to "shim" the data from the model layer's services. It would also let you figure out exactly which services (and service calls) your UI layer will interact with.
Note: of course I am assuming that with "MVC" you do not mean some Rails-style abomination, where "views" are just dumb templates.

Are these still valid negatives against using Storyboards in developing iOS 6 applications?

Using storyboards in lieu of the traditional .xib strategy is something I'm still wrestling with as there is some hesitancy about adopting something that does so much under-the-covers without really understanding what it is doing, and what control I'm really losing.
The BNR iOS Programming Book highlights several "cons" to using Storyboards. I've listed them below, and my question is: Are these negatives still valid with iOS 6?
1. Storyboards are difficult to work with in a team
2. Storyboards screw up version control
3. Storyboards disrupt the flow of programming
4. Storyboards sacrifice flexibility and control for ease of use
5. Storyboards always create new view controller instances
I'm looking for answers from guys who are actually building, and preferably have deployed, real iOS applications and have struggled with the "storyboards vs .xib" thing themselves.
Thanks
I don't think iOS 6 fixes any of these situations. More to the point, Xcode 4.5 doesn't fix them or even attempt to do so. The issues listed seem to reflect opinions or stylistic preferences, and perhaps some misinformation. These aren't the kind of thing that COULD be fixed in code.
I'm using storyboards for a substantial app and I find them to be a real productivity boon. I encourage you to try them to see if you don't agree.
A couple of comments on the issues list:
1. I'm not aware of any issues with teams and SBs, but if point 2 were true (which it isn't), that would explain this concern. I think this is a misconception based on point 2.
2. Not true. I use Git religiously, and commit frequently. No problemo. During commits, SBs are displayed in their source code form (XML). The diffs work perfectly and actually provide some insight into how SBs are implemented. This reduces the feeling of mysterious "under the covers" behaviors, which becomes a non-issue with familiarity.
3. Disagree. They don't disrupt the flow, they offer a different flow - which is where they get their power. Lots of programmers find value in the separation imposed by the MVC discipline. SBs introduce a separation between UI element placement and the supporting code. It's a natural split, and eliminates a ton of mindless code (which eliminates the opportunity for typos, and "de-clutters" the REAL code that remains).
4. Partly agree - they definitely improve ease of use. But I don't find any sacrifice at all. Even when using SBs you can always revert to hand coding any objects if you need to. There's no sacrifice of flexibility or control.
5. Not sure what this means, or why it might be a problem. Of course we create different VCs for different scenes - that's natural. But it's certainly possible to reuse VC classes in SBs. This item might be a misconception about how to set the class for a SB object. It's easy to forget to do this step, and it sometimes baffles beginners. But it's trivial to correct, and setting the class quickly becomes a habit.
For me the real concerns are:
Using SBs demands a lot of screen real estate for development. Using SBs can be frustrating on a small display.
Highly complex UIs with many, many scenes should be split into multiple SBs. Multiple SBs are fully supported, but it's easy to fail to do it. (It's like refactoring a method that gets too big. Usually I notice that I need to refactor code AFTER something has already gotten messy.)
The convenience of SBs during layout and the elimination of so much of the boilerplate code that clutters up VC objects is a huge benefit. (Every line of code that I eliminate is a line I can't screw up, and a line that can't obscure the real code that remains.)
In short, I can't imagine going back to life without SBs. Yes, it is a change. But I haven't found any serious downside. It's especially important to keep in mind that even when using SBs, all the non-SB coding techniques still work. Give SBs a try, and report your own experience. Good luck!
I generally agree with jbbenni. The only "valid" criticism I see in your points was that about "Storyboards always create a new instance." Basically, this meant that though you could wire up a button to push a view controller on the stack, you could not wire up a button to pop back up the stack without extra code. This has been resolved in Xcode 4.5 with "exit segues", which let you indicate that you want to pop back to a previous controller rather than creating a new instance.
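As a rough sketch of how an exit (unwind) segue is wired up (the method name and the controller it lives in are hypothetical): you declare an action that takes a UIStoryboardSegue in the controller you want to return to, then Control-drag from the triggering button to the scene's Exit icon and pick that action.

    // Declared in the view controller you want to pop back to.
    - (IBAction)unwindToList:(UIStoryboardSegue *)segue {
        // Runs when control returns here; no new controller instance is created.
    }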
The other limitation of storyboards many complained about was that you could not embed child view controllers in the storyboard itself. This has also been addressed in Xcode 4.5.
Storyboards are a significant step forward for iOS development. Complaints like "it makes merging hard" are unfounded; storyboards are no harder to merge than other code; you just have to take the time to actually read the diff instead of glossing over it as "not Obj-C; can't read."
I've used storyboards successfully in a team setting since their introduction. Don't let the uninformed scare you off. They're great.

MVC: why the separation of model, view, and controller?

Other than the “philosophical” aspects of it, is it a bad idea to have my controller also be my model?
It seems to save some programming time. I don’t have to create logic between the controller and the model, since it’s the same thing. And I can directly interact with the view.
What’s the point of separating the M and the C? Is modularity — that is, the ability to swap one model and controller set for another — the only reason to separate them? It seems to me that “swapping” modules out happens a lot less than (for example) having to update both the model and the controller because something in the model is changing.
It seems odd that a simple calculator, according to the MVC concept, should have both a controller and a view for its settings (like default settings, or something). I know this is a simple example, but it seems to apply to all cases (except maybe frameworks).
The primary reason is for reusability of code. If you’re only ever going to write one program in your professional life, then perhaps it doesn’t matter. If you plan to make a career of it, having reusable pieces is valuable. Well-designed model, controller and view classes are very easy to drop into other programs. I do this all the time.
Consider UITableViewController, which is a Controller. Now imagine if it were designed exclusively to handle music tracks (the Model), and you needed to create a completely different table-management class when you wanted to handle something else. Avoiding this nightmare is why MVC is heavily used in Cocoa.
There are other ways to split things up. Some languages subclass heavily rather than delegating. But in Cocoa, the primary means of splitting up programs is MVC, and it works very well.
EDIT: Just some more reasons from the world of developing commercial apps.
Memory handling is much easier in MVC. You can hold on to your model objects and throw away your view objects (and many of your controller objects) when they go offscreen.
It’s easier to serialize model objects that aren’t wrapped up with controllers and views, and it’s much easier to display the same data in multiple ways. Even in a “simple” text editor, you may want to be able to do split-screen, or have multiple windows showing the same document. In MVC that’s very easy.
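For instance (a hedged sketch; the "text" property is a hypothetical model attribute), a model object that isn't entangled with view or controller state can adopt NSCoding in a couple of lines:

    - (void)encodeWithCoder:(NSCoder *)coder {
        [coder encodeObject:self.text forKey:@"text"];
    }

    - (id)initWithCoder:(NSCoder *)coder {
        if ((self = [super init])) {
            self.text = [coder decodeObjectForKey:@"text"];
        }
        return self;
    }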
If you need no flexibility now or in the future, you don’t need much architecture. But most real projects aren’t so simple. MVC grew out of Xerox’s experience with writing large programs and the difficulties encountered when everything was thrown together.
EDIT 2: I was looking at your earlier edit: “It seems odd that a simple calculator, according to the MVC concept, should have both a controller and a view for its settings (like default settings, or something).”
This is exactly the reason for MVC. It would seem crazy to have to re-code all of the things required for saving user settings specially for a Calculator app. You’d want a generic “please save these user settings” that was completely separate from the UI and that you could reuse. On OS X it’s called NSUserDefaults, and the Calculator app stores its configuration in exactly this way.
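For example (the key name is made up for the sketch), saving and reading a setting through NSUserDefaults is a one-liner each way:

    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    [defaults setInteger:10 forKey:@"DecimalPlaces"];              // save a setting
    NSInteger places = [defaults integerForKey:@"DecimalPlaces"];  // read it back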
MVC is a standard pattern that is well understood in the development community, and for good reasons. The separation really makes things easy to read, easy to troubleshoot, easy to find, and easy to test, as individual components, each with its own area of responsibility.
Do you have to use it? Of course not. But keeping the parts separate is generally considered a good idea.
The controller knows how to link a specific view to your model. The separation of model and controller, apart from improving documentation and maintainability, has the immediate benefit of allowing multiple views to display the same information from the model without adding any complexity to either.
That applies not just to multiple views in the same application, but also to the multiple variations in views you'll have across multiple versions of your application. Your model is insulated and logically clean.
Combining model and controller is a classic false economy in my opinion. It may feel like it saves a few minutes, but it costs significantly as an application develops and grows.
If it works for you then it works. Period. The reason for separation of Models, Views, and Controllers revolves around the idea that most development for enterprise applications is done by a team of developers.
Imagine 10 developers trying to work on your controller when all they want to do is add something to the model. Now your controller is broken. What did they do?
The Models are usually separate components which can be re-used between Controllers. If you are absolutely certain you won't be re-using Models in multiple Controllers, I don't really see a problem with blending these concerns.
I guess one could argue why even use MVC design if you are planning on deviating. Maybe there is a more suitable pattern to follow for your situation. Can you give us an example of something you've done where the Controller is the Model? It would help us understand what you are trying to do better.
MVC is all about management (separation of data, representation and business logic). So it's like this: if you run a small company, having MS-sized management would be a real drag. But if you are a giant corporation, not having big middle management is impossible.
Honestly, in most of my college programming assignments, I combined the models and controllers, because I didn't see the need for the separation. But working on big projects? The deficiency would be pretty obvious if you try not to separate. Just do what feels right.
The model depends on neither the view nor the controller. This is one of the key benefits of the separation. This separation allows the model to be built and tested independent of the visual presentation.

What do you call a generalized (non-GUI-related) "Model-View-Controller" architecture?

I am currently refactoring code that coordinates multiple hardware components for data acquisition, and feeling a bit like I'm recreating the wheel. In particular, an MVC-like pattern seems to be emerging. Except, this has nothing to do with a GUI and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario:
Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration + branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller.
So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complement. My problem is that search results and wiki documentation for this family of patterns seem to immediately drop me into GUI-specific discussions.
I understand "M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller?
Whence can I exhume examples exemplary?
IMO a "view" is not necessarily a GUI component. The pattern is easiest to demonstrate with GUIs but that does not limit its usability to GUIs. If it works for you, don't worry about the name :-) And of course, feel free to tailor it according to your needs.
Update: Of the more generic kin of MVC, the only example which surfaced in my mind (after a day's background processing) is PAC.

Should you wrap 3rd party libraries that you adopt into your project? [closed]

A discussion I had with a colleague today.
He claims that whenever you use a 3rd party library, you should always write a wrapper for it, so you can always change things later and accommodate things for your specific use.
I disagree with the word always. The discussion arose regarding log4j, and I claimed that log4j has a well-tested and time-proven API and implementation, that everything thinkable can be configured a posteriori, and that there is nothing you should wrap. Even if you wanted to wrap it, there are proven wrappers like commons-logging and log5j.
Another example that we touched on in our discussion is Hibernate. I claimed that its API is far too big to be wrapped. Furthermore, it has a layered API which lets you tweak its insides if you need to. My friend claimed that he still believes it should be wrapped, but he didn't do it because of the size of the API (this co-worker is much more of a veteran than me on our current project).
I claimed this, and that wrapping should only be done in specific cases:
you are not sure how the library will fit your needs
you will only use a small portion of a library (in which case you may only expose a part of its API).
you are not sure of the quality of the library's API or implementation.
I also maintained that sometimes you can wrap your code instead of the library. For example, putting your database-related code in a DAO layer, instead of preemptively wrapping all of Hibernate.
Well, in the end this is not really a question, but your insights, experiences and opinions are highly appreciated.
It's a perfect example of YAGNI:
it is more work
it inflates your project
it may complicate your design
it has no immediate benefit
the scenario you write it for may never manifest
when it does, your wrapper most likely needs to be re-written completely because it is tied too closely to the concrete library you were using and the new one's API simply doesn't match yours.
Well, the obvious benefit is for switching technologies. If you have a library that becomes deprecated, and you want to switch, you may end up rewriting a lot of code to accommodate the change, whereas if it were wrapped, you'd have an easier time writing a new wrapper for the new lib, than changing all your code.
On the other hand, it would mean that you have to write a wrapper for every trivial library that you include, which is probably an unacceptable amount of overhead.
My industry is all about speed, so the only time I'd be able to justify writing a wrapper is if it was around some critical library that was likely to change dramatically on a regular basis. Or, more commonly, if I need to take a new library and shoehorn it into old code, which is an unfortunate reality.
It's definitely not an "always" situation. It's something that may be desirable. But the time isn't always going to be there, and, in the end, if writing a wrapper takes hours and the long term code library changes are going to be few, and trivial...Why bother?
No. Java architects/wannabes are too busy designing against imaginary changes.
With a modern IDE, it's a piece of cake to change when you actually need to. Until then, keep it simple.
I agree with everything that's been said pretty much.
The only time wrapping third party code is useful (bar violating YAGNI) is for unit testing.
Mocking statics and so forth requires you to wrap the code; this is a valid reason to write wrappers for third party code.
In the case of logging code, it's not needed though.
The problem here is partially the word 'wrapper', partially a false dichotomy, and partially a false distinction between the JDK and everything else.
The word 'wrapper'
Wrapping all of Hibernate, as you say, is a completely impractical enterprise.
Restricting the Hibernate dependencies to an identified, controlled, set of source files, on the other hand, may well be practical and achieve the same results.
The false dichotomy
The false dichotomy is the failure to recognize a third option: standards. If you use, say, JPA annotations, you can swap Hibernate for other things. If you are writing a web service and use JAX-WS annotations and JAX-B, you can swap between the JDK, CXF, Glassfish, or whatever.
The false distinction
Sure, the JDK changes slowly and is unlikely to die. But major open source packages also change slowly and are unlikely to die. Untold thousands of developers and projects use Hibernate. There's really no more risk of Hibernate disappearing or making radical incompatible API changes than there is of Java itself.
If the library you are planning to wrap is unique in its "access principles, metaphors and idioms" from other offerings in the same domain, then your wrapper is pretty much going to be similar to that library and won't do you any good if you one day switch to a different library since you will need a new wrapper.
If the library is accessed in a similar way to other libraries and the same wrapper can apply to these libraries, then they are probably written based on some existing standard and there is some common layer that already exists to access both of them.
I would only go with wrappers if I knew for sure that I would have to support multiple and substantially different libraries in production.
The main factor in deciding whether to wrap a library is the impact a library change will have on the code. When a library is only called from one class, the impact of changing libraries will be minimal. If, on the other hand, a library is called from all classes, a wrapper is much more likely.
Any uncertainty around the choice of 3rd party library should be flushed out at the beginning of the project using prototypes to test the scalability/suitability/whatever of the 3rd party library.
If you decide to go ahead and provide full de-coupling/abstraction support, it should be costed up and ultimately approved by the project sponsor - ultimately it's a commercial decision, as someone has to pay for it and for the work required to do it (unless it's absolutely trivial, in which case the API is probably low risk anyway).
Generally an experienced architect will choose a technology that they can be reasonably confident with, and have experience of, and that they are confident will last the lifetime of the app, OR else they will eliminate any risk in the decision early on in the project, thus removing any need to do this most of the time.
I'd tend to agree with most of your points. Using absolutes often gets you into trouble and saying you should "always" do something limits your flexibility. I'd add some more points to your list.
When you use wrapping code around a very common API, like Hibernate or log4j, you make it more difficult to bring on new developers. New developers now have to learn a whole new API, whereas if you hadn't wrapped the code they would have been familiar with it right away.
On the flip side of that, you also limit your developers' view into the API. Using an advanced feature of the API takes more time because you have to make sure that your wrapper is implemented in a way that can handle it.
Many of the wrapping layers I've seen also are very specific to the underlying implementation. So, if you write a log wrapper around log4j, you are thinking in log4j terms. If some new cool framework comes out, it may change the whole paradigm, so your wrapping code doesn't migrate as well as you had thought.
I'm definitely not saying wrapping code is always bad, but as you stated, there are a lot of factors you have to consider.
The purpose of wrapping even a well-tested and time-proven 3rd-party library is that you might decide to switch libraries at some point in the future. Wrapping it makes it easier to switch without changing any code in your core application. Only the wrapper needs to change.
If you're absolutely sure that you'll never (another absolute) use a different logging framework in your project, go ahead and skip the wrapper. Even having said that, I'd probably hold off on writing the wrapper until I knew I needed it, like the first time I need to switch.
This is kind of a funny question.
I've worked in systems where we've found showstopper bugs in libraries we were using, and which upstream was either no longer maintaining, or not interested in fixing. In a language like Java, you usually can't fix internal bugs from a wrapper. (Fortunately, if they're open-source, you can at least fix them yourself.) So it's no help here.
But I'm often working in a language where you can easily modify libraries at any time, without seeing or even having their source code -- I commonly add new methods to existing classes, for example. So in this case, there's no point in wrapping: just make the change you want.
Also, does your colleague draw the line at things called "libraries"? What about Java itself? Does he wrap built-in classes? Does he wrap the filesystem? The thread scheduler? The kernel? (That is, with his own wrappers -- in a sense, everything is a wrapper around the CPU, but it sounds like he's talking about wrappers in your source repo that are completely under your control.) I've had built-in functionality change or disappear when new versions of it appear. Java is not immune from this.
So the idea to always write a wrapper comes down to a bet. Assuming he's only wrapping third-party libraries, he seems to be implicitly betting that:
"first-party" functionality (like Java itself, the kernel, etc.) will never change
when "third-party" functionality changes, it will always be done in a way that can be fixed in a wrapper
Is that true in your case? I don't know. Of the medium-large Java projects I've done, it's rarely true for me. I wouldn't spend effort wrapping all third-party libraries, because it seems like a poor bet, but your situation is certainly different from mine.
There is one situation where you can wrap with good reason, namely if you need to test stuff and the default third-party object is heavyweight. Then having an interface can really make a difference.
Note, this is not to replace the library, but to make it manageable where it doesn't matter much.
Wrapping a whole library is boilerplate, ineffective, and wrong in most cases. It can be done in a much cleverer way. I'd say that wrapping a library is appropriate mostly in the case of UI component libraries, and again, you have to be adding some additional core functionality of yours to all the components for this to be needed.
if too many modifications and additions are needed, this is most likely not the library you are looking for
if there is a moderate amount of additions and modifications - there are always design patterns that come in handy in those cases. The Decorator pattern (which allows new/additional behaviour to be added to an existing object dynamically), for example, is rather suitable for most cases.
IDE search/replace and refactoring capabilities offer an easy way to change your code in all required places if some important change is needed and a wrapping object appears. (of course, unit-tests would be helpful here ;) )
In my experience the question becomes fairly moot if you're using abstractions sufficiently. Coupling to a library is just like coupling to any other interface. Thus you want to reduce accidental coupling and the scope of rewrite necessary if you need to swap out the implementation. Don't bind your application logic to some construct, but don't just form a bunch of stupid (literally) wrappers around something and expect to gain any benefit.
A wrapper doesn't usually gain you anything unless it's answering a specific purpose (such as polymorphizing a non-polymorphic construct). They often show up in refactoring, but I wouldn't recommend forming an architecture on them. There's a few exceptions of course, but there is with any principle.
This doesn't speak toward adapters. An adapter can be a pretty important component for when you want to actually alter the interface of a library and its use to be in line with architecture, code, or domain concepts in your project.
You should do it always, often, sometimes, rarely, or never. Not even your colleague does it always, but the instructive cases are always and never. Suppose that it is sometimes necessary. If you never wrapped a library, the worst consequence is that one day you discovered that it was necessary for a library that you had used all over the place. It would take you some time to wrap that library and to perform shotgun surgery on the clients. The question is whether that eventuality would take more time than habitually providing wrappers that are rarely necessary, but having never to perform the shotgun surgery.
My instinct is to appeal to the YAGNI (you ain't gonna need it) principle and opt for "rarely".
I would not wrap it as a one-to-one thing, but I would layer the app so that each part is replaceable as much as possible. The ISO OSI model works well for all types of software :-)
