I was wondering about the best practice when using the MVC pattern.
When you develop an app for a client, you want to think business. You want to think the way the customer would. That's why I'm wondering:
Isn't it better to develop the view part first, without any data processing, so the customer can validate it?
I see this practice as being as powerful as TDD: if you clearly know what your program will look like, you know what processing it will require, which makes the model part a bit more concrete and business-oriented instead of too abstract and generic.
I cannot see any downsides to this, so if you can see some, or can explain why it's not a good idea, please do.
Thanks :-)
The main benefit, as I see it, would be the ability to provide the client with a hands-on prototype.
It is not uncommon for clients to change their minds, because often, when they hire a developer or company, they actually have only a vague goal for the end product. This way you would mitigate the risk of large-scale changes late in the project's life cycle.
As for implementing such an approach, I would recommend that you look into the concept of "presentation objects" (Fowler has this annoying habit of slapping "model" on every damned term).
With presentation objects you gain the ability to "shim" the data coming from the model layer's services. It would also let you figure out exactly which services (and service calls) your UI layer will interact with.
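For illustration, here is a minimal sketch of what such a presentation object could look like (all names here are made up, not from Fowler or any framework):

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical model-layer pieces, only here to make the sketch compile.
    record Account(String name, double balance) {}

    interface AccountService {
        List<Account> findOverdueAccounts();
    }

    // The presentation object "shims" data from the model layer's services and
    // exposes display-ready values, so the view stays dumb and can be shown to
    // the client before the real service even exists.
    class OverdueAccountsPresentation {
        private final AccountService service;

        OverdueAccountsPresentation(AccountService service) {
            this.service = service;
        }

        List<String> rows() {
            return service.findOverdueAccounts().stream()
                    .map(a -> a.name() + " owes " + a.balance())
                    .collect(Collectors.toList());
        }
    }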
Note: of course I am assuming that with "MVC" you do not mean some Rails-style abomination, where "views" are just dumb templates.
Related
Other than the “philosophical” aspects of it, is it a bad idea to have my controller also be my model?
It seems to save some programming time. I don't have to create logic between the controller and the model, since they're the same thing, and I can interact with the view directly.
What’s the point of separating the M and the C? Is modularity — that is, the ability to swap one model and controller set for another — the only reason to separate them? It seems to me that “swapping” modules out happens a lot less than (for example) having to update both the model and the controller because something in the model is changing.
It seems odd that a simple calculator, according to the MVC concept, should have both a controller and a view for its settings (like default settings, or something). I know this is a simple example, but it seems to apply to all cases (except maybe frameworks).
The primary reason is for reusability of code. If you’re only ever going to write one program in your professional life, then perhaps it doesn’t matter. If you plan to make a career of it, having reusable pieces is valuable. Well-designed model, controller and view classes are very easy to drop into other programs. I do this all the time.
Consider UITableViewController, which is a Controller. Now imagine if it were designed exclusively to handle music tracks (the Model), and you needed to create a completely different table-management class when you wanted to handle something else. Avoiding this nightmare is why MVC is heavily used in Cocoa.
There are other ways to split things up. Some languages subclass heavily rather than delegating. But in Cocoa, the primary means of splitting up programs is MVC, and it works very well.
EDIT: Just some more reasons from the world of developing commercial apps.
Memory handling is much easier in MVC. You can hold on to your model objects and throw away your view objects (and many of your controller objects) when they go offscreen.
It’s easier to serialize model objects that aren’t wrapped up with controllers and views, and it’s much easier to display the same data in multiple ways. Even in a “simple” text editor, you may want to be able to do split-screen, or have multiple windows showing the same document. In MVC that’s very easy.
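As a rough sketch of that last point (hypothetical names, not any real framework's API): one document model, two views observing it, and the model never needs to know how many views there are.

    import java.util.ArrayList;
    import java.util.List;

    // One model, any number of views observing it (names are made up).
    class Document {
        private final List<Runnable> listeners = new ArrayList<>();
        private String text = "";

        void addListener(Runnable l) { listeners.add(l); }

        void setText(String text) {
            this.text = text;
            listeners.forEach(Runnable::run);   // notify every attached view
        }

        String getText() { return text; }
    }

    class EditorPane {
        EditorPane(Document doc) {
            doc.addListener(() -> System.out.println("editor shows: " + doc.getText()));
        }
    }

    class PreviewPane {
        PreviewPane(Document doc) {
            doc.addListener(() -> System.out.println("preview shows: " + doc.getText()));
        }
    }

    class SplitScreenDemo {
        public static void main(String[] args) {
            Document doc = new Document();
            new EditorPane(doc);    // two views on the same model,
            new PreviewPane(doc);   // with no extra complexity in the model
            doc.setText("hello");
        }
    }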
If you need no flexibility now or in the future, you don’t need much architecture. But most real projects aren’t so simple. MVC grew out of Xerox’s experience with writing large programs and the difficulties encountered when everything was thrown together.
EDIT 2: I was looking at your earlier edit: “It seems odd that a simple calculator, according to the MVC concept, should have both a controller and a view for its settings (like default settings, or something).”
This is exactly the reason for MVC. It would seem crazy to have to re-code all of the things required for saving user settings specially for a Calculator app. You’d want a generic “please save these user settings” that was completely separate from the UI and that you could reuse. On OS X it’s called NSUserDefaults, and the Calculator app stores its configuration in exactly this way.
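Outside of Cocoa, the same idea is just a small abstraction the calculator's UI never has to know the details of; a minimal sketch, assuming a hypothetical SettingsStore backed here by java.util.prefs:

    import java.util.prefs.Preferences;

    // A generic "please save these user settings" abstraction, completely
    // separate from any UI (hypothetical names; java.util.prefs is just one
    // possible concrete store).
    interface SettingsStore {
        void put(String key, String value);
        String get(String key, String fallback);
    }

    class PreferencesSettingsStore implements SettingsStore {
        private final Preferences prefs =
                Preferences.userNodeForPackage(PreferencesSettingsStore.class);

        public void put(String key, String value) { prefs.put(key, value); }

        public String get(String key, String fallback) { return prefs.get(key, fallback); }
    }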
MVC is a standard pattern that is well understood in the development community, and for good reasons. The separation really makes things easy to read, easy to troubleshoot, easy to find, and easy to test, as individual components, each with its own area of responsibility.
Do you have to use it? Of course not. But keeping the parts separate is generally considered a good idea.
The controller knows how to link a specific view to your model. The separation of model and controller, apart from improving documentation and maintainability, has the immediate benefit of allowing multiple views to display the same information from the model without adding any complexity to either.
That applies not just to multiple views in the same application, but also to the multiple variations in views you'll have across multiple versions of your application. Your model is insulated and logically clean.
Combining model and controller is a classic false economy in my opinion. It may feel like it saves a few minutes, but it costs significantly as an application develops and grows.
If it works for you then it works. Period. The reason for separation of Models, Views, and Controllers revolves around the idea that most development for enterprise applications is done by a team of developers.
Imagine 10 developers trying to work on your controller. But all they want to do is add something to the model. Now your controller is broken. What did they do?
The Models are usually separate components which can be re-used between Controllers. If you are absolutely certain you won't be re-using Models in multiple Controllers, I don't really see a problem with blending these concerns.
I guess one could argue: why even use the MVC design if you are planning to deviate from it? Maybe there is a more suitable pattern for your situation. Can you give us an example of something you've done where the Controller is the Model? It would help us better understand what you are trying to do.
MVC is all about management (separation of data, representation, and business logic). So it's like this: if you run a small company, having an MS-sized management structure would be a real drag. But if you are a giant corporation, not having big middle management is impossible.
Honestly, in most of my college programming assignments, I combined the models and controllers, because I didn't see the need for the separation. But working on big projects? The deficiency would be pretty obvious if you try not to separate them. Just do what feels right.
The model depends on neither the view nor the controller. This is one of the key benefits of the separation. It allows the model to be built and tested independently of the visual presentation.
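For instance, a model class can be exercised by a plain unit test with no view or controller in sight; a tiny sketch, assuming a hypothetical CalculatorModel and JUnit 5:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical model class: no view, no controller, nothing visual.
    class CalculatorModel {
        private double value;
        void add(double x) { value += x; }
        double value()     { return value; }
    }

    class CalculatorModelTest {
        @Test
        void addingAccumulates() {
            CalculatorModel model = new CalculatorModel();
            model.add(2);
            model.add(3);
            assertEquals(5.0, model.value());
        }
    }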
I'm in the process of exploring Zend Framework, an MVC (Model View Controller) framework. I noticed that Zend's own site is using it (of course), but I was wondering if some of today's most popular sites:
Amazon
Facebook
Twitter
maybe Google?
do as well. I ask because I am curious if it is robust and manageable enough for huge sites like Amazon and Facebook. Does anyone know if they do?
MVC isn't a framework. In the context of web programming, it is just a very rough division of labor:
Model is - usually, roughly - the code that handles the data store
View is responsible for HTML generation, etc.
Controller is the part that translates an incoming request into model changes (if any) and then hands off to some view. A rough sketch of this division follows below.
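Something like this, in no particular framework (all names are made up):

    // Very rough sketch of the division of labor (hypothetical names).
    class Article { String title; String body; }        // part of the model

    interface ArticleRepository {                        // model: talks to the data store
        Article findBySlug(String slug);
    }

    class ArticleView {                                  // view: HTML generation
        String render(Article a) {
            return "<h1>" + a.title + "</h1><p>" + a.body + "</p>";
        }
    }

    class ArticleController {                            // controller: request -> model -> view
        private final ArticleRepository repo;
        private final ArticleView view;

        ArticleController(ArticleRepository repo, ArticleView view) {
            this.repo = repo;
            this.view = view;
        }

        String handle(String slug) {
            Article article = repo.findBySlug(slug);     // optional model interaction
            return view.render(article);                 // hand off to the view
        }
    }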
I would be very surprised if the sites/companies you mentioned don't structure their code roughly like that, but as you can see it's only a very high-level structure and has more to do with maintainability / sane division of responsibilities than performance.
In any case, if you have to ask this kind of question, performance is NOT the first thing to worry about - relatively sane structure and readability of code is much, much more important, if only because the chances of getting enough users that serious scalability is an issue are low, while unmaintainable code will be an issue tomorrow - even if you have only one important user.
I am currently refactoring code that coordinates multiple hardware components for data acquisition, and I feel a bit like I'm reinventing the wheel. In particular, an MVC-like pattern seems to be emerging. Except this has nothing to do with a GUI, and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario:
Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration + branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller.
So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complement. My problem is, search results and wiki documentation for this family of patterns seem to immediately drop me into GUI-specific discussions.
I understand "M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller?
Whence can I exhume exemplary examples?
IMO a "view" is not necessarily a GUI component. The pattern is easiest to demonstrate with GUIs but that does not limit its usability to GUIs. If it works for you, don't worry about the name :-) And of course, feel free to tailor it according to your needs.
Update: Of the more generic kin of MVC, the only example that surfaced in my mind (after a day of background processing) is PAC (Presentation-Abstraction-Control).
I'm drinking the Kool-Aid and loving it - interfaces, IoC, DI, TDD, etc., etc. It's working out pretty well. But I'm finding I have to fight a tendency to make everything an interface! I have a factory which is an interface. Its methods return objects which could be interfaces (it might make testing easier). Those objects get the services they need injected as interfaces. What I'm finding is that keeping the interfaces in sync with the implementations adds to the work - adding a method to a class means adding it to the class + the interface, mocks, etc.
Am I factoring the interfaces out too early? Are there best practices to know when something should return an interface vs. an object?
Interfaces are useful when you want to mock an interaction between an object and one of its collaborators. However there is less value in an interface for an object which has internal state.
For example, say I have a service which talks to a repository in order to extract some domain object and manipulate it in some way.
There is definite design value in extracting an interface from the repository. My concrete implementation of the repository may well be strongly linked to NHibernate or ActiveRecord. By linking my service to the interface I get a clean separation from this implementation detail. It just so happens that I can also write super fast standalone unit tests for my service now that I can hand it a mock IRepository.
Considering the domain object which came back from the repository and which my service acts upon, there is less value. When I write tests for my service, I will want to use a real domain object and check its state. E.g. after the call to service.AddSomething() I want to check that something was added to the domain object. I can test this by simple inspection of the domain object's state. When I test my domain object in isolation, I don't need interfaces, as I am only going to perform operations on the object and quiz it on its internal state, e.g. is it valid for my sheep to eat grass if it is sleeping?
In the first case, we are interested in interaction based testing. Interfaces help because we want to intercept the calls passing between the object under test and its collaborators with mocks. In the second case we are interested in state based testing. Interfaces don't help here. Try to be conscious of whether you are testing state or interactions and let that influence your interface or no interface decision.
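A sketch of the two styles side by side (made-up names, a hand-rolled fake instead of a mocking framework, JUnit 5 assumed):

    import static org.junit.jupiter.api.Assertions.assertFalse;
    import static org.junit.jupiter.api.Assertions.assertSame;
    import org.junit.jupiter.api.Test;

    // Made-up domain, only here to keep the sketch self-contained.
    class Sheep {
        private boolean sleeping;
        void sleep() { sleeping = true; }
        boolean canEatGrass() { return !sleeping; }
    }

    interface SheepRepository {                  // collaborator: worth an interface
        void save(Sheep sheep);
    }

    class SheepService {                         // talks to the collaborator via the interface
        private final SheepRepository repository;
        SheepService(SheepRepository repository) { this.repository = repository; }
        void addSheep(Sheep sheep) { repository.save(sheep); }
    }

    class InteractionVsStateTest {
        // Interaction-based: fake the collaborator behind the interface and
        // check that the call crossed the boundary.
        @Test
        void addSheepSavesToRepository() {
            final Sheep[] saved = new Sheep[1];
            SheepService service = new SheepService(sheep -> saved[0] = sheep);
            Sheep dolly = new Sheep();
            service.addSheep(dolly);
            assertSame(dolly, saved[0]);
        }

        // State-based: no interface needed; just quiz the object on its state.
        @Test
        void sleepingSheepCannotEatGrass() {
            Sheep sheep = new Sheep();
            sheep.sleep();
            assertFalse(sheep.canEatGrass());
        }
    }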
Remember that (provided you have a copy of Resharper installed) it is extremely cheap to extract an interface later. It is also cheap to delete the interface and revert to a simpler class hierarchy if you decide that you didn't need that interface after all. My advice would be to start without interfaces and extract them on demand, when you find that you want to mock the interaction.
When you bring IoC into the picture, then I would tend to extract more interfaces - but try to keep a lid on how many classes you shove into your IoC container. In general, you want to keep these restricted to largely stateless service objects.
Sounds like you're suffering a little from BDUF.
Take it easy with the Kool-Aid and let it flow naturally.
Remember that while flexibility is a worthy objective, added flexibility with IoC and DI (which to some extent are requirements for TDD) also increases complexity. The only point of flexibility is to make changes downstream quicker, cheaper or better. Each IoC/DI point increases complexity, and thus contributes to making changes elsewhere more difficult.
This is actually where you need a Big Design Up Front to some extent: identify what areas are most likely to change (and/or need extensive unit testing), and plan for flexibility there. Refactor to eliminate flexibility where changes are unlikely.
Now, I'm not saying that you can guess where flexibility will be needed with any kind of accuracy. You'll be wrong. But it's likely that you'll get something right. Where you later find you don't need flexibility, it can be factored out in maintenance. Where you need it, it can be factored in when adding features.
Now, which areas may or may not change depends on your business problem and IT environment. Here are some recurring ones:
- I'd always consider external interfaces where you integrate with other systems to be highly mutable.
- Whatever code provides a back end to the user interface will need to support change in the UI. However, plan primarily for changes in functionality: don't go overboard and plan for different UI technologies (such as supporting both a smart client and a web application; usage patterns will differ too much).
- On the other hand, coding for portability to different databases and platforms is usually a waste of time, at least in corporate environments. Ask around and check what plans may exist to replace or upgrade technologies within the likely lifespan of your software.
- Changes to data content and formats are a tricky business: while data will occasionally change, most designs I've seen handle such changes poorly, and thus you get concrete entity classes used directly.
But only you can make the judgement of what might or might not change.
I usually find that I want interfaces for "services" - whereas types which are primarily about "data" can be concrete classes. For instance, I'd have an Authenticator interface, but a Contact class. Of course, it's not always that clear-cut, but it's an initial rule of thumb.
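Something like this (a sketch; LdapAuthenticator is just a made-up example implementation):

    // "Service" types get an interface; "data" types stay concrete.
    interface Authenticator {
        boolean authenticate(String user, String password);
    }

    class LdapAuthenticator implements Authenticator {   // one of possibly many implementations
        public boolean authenticate(String user, String password) {
            // ... talk to the directory server here ...
            return false;
        }
    }

    class Contact {                                      // primarily data: a concrete class is fine
        private final String name;
        private final String email;

        Contact(String name, String email) {
            this.name = name;
            this.email = email;
        }

        String name()  { return name; }
        String email() { return email; }
    }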
I do feel your pain though - it's a little bit like going back to the dark days of .h and .c files...
I think the most important "agile" principle is YAGNI ("You Ain't Gonna Need It"). In other words, don't write extra code until it's actually needed, because if you write it in advance the requirements and constraints may well have changed by the time (if!) you finally do need it.
Interfaces, dependency injections, etc. - all this stuff adds complexity to your code, making it harder to understand and change. My rule of thumb is to keep things as simple as possible (but no simpler) and to not add complexity unless it gains me more than enough to offset the burden it imposes.
So if you are actually testing and having a mock object would be very useful then by all means define an interface that both your mock and real classes implement. But don't create a bunch of interfaces on the purely hypothetical grounds that it might be useful at some point, or that it is "correct" OO design.
Interfaces have the purpose of establishing a contract and are particularly useful when you want to change the class performing a task on the fly. If there is no need to change the classes an interface just might get in the way.
There are automatic tools to extract an interface, so maybe you'd better delay interface extraction until later in the process.
It depends very much on what you are providing... if you are working on internal things then the advice of "don't do them until needed" is reasonable. If, however, you are making an API that is to be consumed by other developers then changing things around to interfaces at a later date can be annoying.
A good rule of thumb is to make interfaces out of anything that needs to be subclassed. This is not an "always make an interface in that case" sort of thing; you still need to think about it.
So, the short answer is (and this works with both internal things and providing an API) is that if you anticipate more than one implementation is going to be needed then make it an interface.
Some things that generally would not be interfaces are classes that only hold data, like, say, a Location class that deals with x and y. The odds of there being another implementation of that are slim.
Do not create the interfaces first - you ain't gonna need them. You cannot guess for which classes you'll need an interface and for which you don't. Therefore, do not spend any time burdening the code with useless interfaces now.
But extract them when you feel the urge to do so - when you see the need for an interface - at the refactoring step.
Assume you are implementing a user story that requires changes in all layers, from the UI (or service facade) to the DB.
In what direction do you move?
From UI to Business Layer to Repository to DB?
From DB to Repository to Business Layer to UI?
It depends. (On what?)
The best answer I've seen to this sort of question was supplied by the Atomic Object guys and their Presenter First pattern. Basically it is an implementation of the MVP pattern, in which (as the name suggests) you start working from the Presenter.
This provides you with a very light-weight object (since the presenter is basically there to marshal data from the Model to the View, and events from the View to the Model) that can directly model your set of user actions. When working on the Presenter, the View and Model are typically defined as interfaces, and mocked, so your initial focus is on defining how the user is interacting with your objects.
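In outline it looks roughly like this (a simplified sketch with made-up names, not Atomic Object's actual code):

    // Presenter First, sketched: View and Model are interfaces, so the
    // presenter can be written (and unit tested against mocks) before
    // either one really exists.
    interface TaskView {
        void onAddClicked(Runnable handler);          // events flow from the view...
        void showTasks(java.util.List<String> tasks);
    }

    interface TaskModel {
        void add(String task);                        // ...changes flow to the model...
        java.util.List<String> tasks();
    }

    class TaskPresenter {
        TaskPresenter(TaskView view, TaskModel model) {
            // ...and the presenter just marshals between the two.
            view.onAddClicked(() -> {
                model.add("new task");
                view.showTasks(model.tasks());
            });
        }
    }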
I generally like to work in this way, even if I'm not doing a strict MVP pattern. I find that focusing on user interaction helps me create business objects that are easier to interact with. We also use Fitnesse in house for integration testing, and I find that writing the fixtures for Fitnesse while building out my business objects helps keep things focused on the user's perspective of the story.
I have to say, though, that you end up with a pretty interesting TDD cycle when you start with a failing Fitnesse test, then create a failing Unit Test for that functionality, and work your way back up the stack. In some cases I'm also writing Database unit tests, so there is another layer of tests that get to be written, failed, and passed, before the Fitnesse tests pass.
If change is likely, start at the front. You can get immediate feedback from stakeholders. Who knows? Maybe they don't actually know what they want. Watch them use the interface (UI, service, or otherwise). Their actions might inspire you to view the problem in a new light. If you can catch changes before coding the domain objects and database, you save a ton of time.
If requirements are rigid, it's not as important. Start in the layer that's likely to be the most difficult - address risk early. Ultimately, this is one of those "more an art than a science" issues. The best solution probably comes from a delicate interplay between the designs of the layers.
Cheers.
I'd do it bottom-up, since you'll have some working results fast (i.e. you can write unit tests without a user interface, but you can't test the user interface until the model is done).
There are other opinions, though.
I would start by modeling the problem domain. Create relevant classes representing the entities of the system. Once I feel confident with that, I'd try to find a feasible mapping for persisting the entities to the database. If you put too much work into the UI before you have a model of the domain, there is a significant risk that you will need to rework the UI afterwards.
Come to think of it, you'll probably need to make some updates to all of the layers anyway... =)