Interface Insanity - TDD

I'm drinking the Kool-Aid and loving it - interfaces, IoC, DI, TDD, etc. etc. It's working out pretty well. But I'm finding I have to fight a tendency to make everything an interface! I have a factory which is an interface. Its methods return objects which could be interfaces (might make testing easier). Those objects are DI'ed interfaces to the services they need. What I'm finding is that keeping the interfaces in sync with the implementations is adding to the work - adding a method to a class means adding it to the class, the interface, the mocks, etc.
Am I factoring the interfaces out too early? Are there best practices to know when something should return an interface vs. an object?

Interfaces are useful when you want to mock an interaction between an object and one of its collaborators. However, there is less value in an interface for an object which has internal state.
For example, say I have a service which talks to a repository to extract some domain object and manipulate it in some way.
There is definite design value in extracting an interface from the repository. My concrete implementation of the repository may well be strongly linked to NHibernate or ActiveRecord. By linking my service to the interface I get a clean separation from this implementation detail. It just so happens that I can also write super fast standalone unit tests for my service now that I can hand it a mock IRepository.
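As a sketch of that split (the names Order, IRepository and OrderService are hypothetical, and the NHibernate-backed implementation of IRepository is only implied):
using System.Collections.Generic;
// Concrete domain object with internal state; no interface needed for it.
public class Order
{
    private readonly List<string> lines = new List<string>();
    public IReadOnlyList<string> Lines { get { return lines.AsReadOnly(); } }
    public void AddLine(string item) { lines.Add(item); }
}
// Only the repository gets an interface, because it hides the persistence technology.
public interface IRepository
{
    Order GetOrder(int id);
    void Save(Order order);
}
// The service depends on the interface, never on NHibernate or ActiveRecord directly.
public class OrderService
{
    private readonly IRepository repository;
    public OrderService(IRepository repository) { this.repository = repository; }
    public void AddSomething(int orderId, string item)
    {
        var order = repository.GetOrder(orderId);
        order.AddLine(item);
        repository.Save(order);
    }
}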
Considering the domain object which came back from the repository, and which my service acts upon, there is less value. When I write tests for my service, I will want to use a real domain object and check its state. E.g. after the call to service.AddSomething() I want to check that something was added to the domain object. I can test this by simple inspection of the state of the domain object. When I test my domain object in isolation, I don't need interfaces, as I am only going to perform operations on the object and quiz it on its internal state, e.g. is it valid for my sheep to eat grass if it is sleeping?
In the first case, we are interested in interaction based testing. Interfaces help because we want to intercept the calls passing between the object under test and its collaborators with mocks. In the second case we are interested in state based testing. Interfaces don't help here. Try to be conscious of whether you are testing state or interactions and let that influence your interface or no interface decision.
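A sketch of the two styles against the hypothetical Order and OrderService above, assuming an NUnit-style test framework and a Moq-style mocking library:
using Moq;
using NUnit.Framework;
[TestFixture]
public class OrderServiceTests
{
    // State-based: use a real domain object and inspect its state afterwards.
    [Test]
    public void AddLine_AppendsItemToTheOrder()
    {
        var order = new Order();
        order.AddLine("grass");
        Assert.AreEqual(1, order.Lines.Count);
    }
    // Interaction-based: mock the collaborator and verify the call that crossed the boundary.
    [Test]
    public void AddSomething_SavesTheModifiedOrder()
    {
        var order = new Order();
        var repository = new Mock<IRepository>();
        repository.Setup(r => r.GetOrder(42)).Returns(order);
        new OrderService(repository.Object).AddSomething(42, "grass");
        repository.Verify(r => r.Save(order), Times.Once());
    }
}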
Remember that (providing you have a copy of Resharper installed) it is extremely cheap to extract an interface later. It is also cheap to delete the interface and revert to a simpler class hierarchy if you decide that you didn't need that interface after all. My advice would be to start without interfaces and extract them on demand when you find that you want to mock the interaction.
When you bring IoC into the picture, then I would tend to extract more interfaces - but try to keep a lid on how many classes you shove into your IoC container. In general, you want to keep these restricted to largely stateless service objects.
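A registration sketch, assuming a Microsoft.Extensions.DependencyInjection-style container and reusing the hypothetical IRepository/OrderService from above (NHibernateRepository is likewise hypothetical):
using Microsoft.Extensions.DependencyInjection;
var services = new ServiceCollection();
// Largely stateless service objects and the interfaces they hide behind.
services.AddTransient<IRepository, NHibernateRepository>();
services.AddTransient<OrderService>();
var provider = services.BuildServiceProvider();
// Domain objects such as Order are newed up where needed, not resolved from the container.
var orderService = provider.GetRequiredService<OrderService>();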

Sounds like you're suffering a little from BDUF.
Take it easy with the Kool-Aid and let it flow naturally.

Remember that while flexibility is a worthy objective, added flexibility with IoC and DI (which to some extent are requirements for TDD) also increases complexity. The only point of flexibility is to make changes downstream quicker, cheaper or better. Each IoC/DI point increases complexity, and thus contributes to making changes elsewhere more difficult.
This is actually where you need a Big Design Up Front to some extent: identify what areas are most likely to change (and/or need extensive unit testing), and plan for flexibility there. Refactor to eliminate flexibility where changes are unlikely.
Now, I'm not saying that you can guess where flexibility will be needed with any kind of accuracy. You'll be wrong. But it's likely that you'll get something right. Where you later find you don't need flexibility, it can be factored out in maintenance. Where you need it, it can be factored in when adding features.
Now, which areas may or may not change depends on your business problem and IT environment. Here are some recurring ones.
I'd always consider external interfaces where you integrate with other systems to be highly mutable.
Whatever code provides a back end to the user interface will need to support change in the UI. However, plan for changes in functionality primarily: don't go overboard and plan for different UI technologies (such as supporting both a smart client and a web application – usage patterns will differ too much).
On the other hand, coding for portability to different databases and platforms is usually a waste of time, at least in corporate environments. Ask around and check what plans may exist to replace or upgrade technologies within the likely lifespan of your software.
Changes to data content and formats are a tricky business: while data will occasionally change, most designs I've seen handle such changes poorly, and thus you get concrete entity classes used directly.
But only you can make the judgement of what might or should not change.

I usually find that I want interfaces for "services" - whereas types which are primarily about "data" can be concrete classes. For instance, I'd have an Authenticator interface, but a Contact class. Of course, it's not always that clear-cut, but it's an initial rule of thumb.
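A sketch of that rule of thumb (hypothetical types, with the C# I-prefix on the interface):
// "Service": behaviour you may want to mock or swap, so it gets an interface.
public interface IAuthenticator
{
    bool Authenticate(string username, string password);
}
// "Data": mostly state, so a plain concrete class is enough.
public class Contact
{
    public string Name { get; set; }
    public string Email { get; set; }
}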
I do feel your pain though - it's a little bit like going back to the dark days of .h and .c files...

I think the most important "agile" principle is YAGNI ("You Ain't Gonna Need It"). In other words, don't write extra code until it's actually needed, because if you write it in advance the requirements and constraints may well have changed by the time (if!) you finally do need it.
Interfaces, dependency injections, etc. - all this stuff adds complexity to your code, making it harder to understand and change. My rule of thumb is to keep things as simple as possible (but no simpler) and to not add complexity unless it gains me more than enough to offset the burden it imposes.
So if you are actually testing and having a mock object would be very useful then by all means define an interface that both your mock and real classes implement. But don't create a bunch of interfaces on the purely hypothetical grounds that it might be useful at some point, or that it is "correct" OO design.

Interfaces have the purpose of establishing a contract and are particularly useful when you want to change the class performing a task on the fly. If there is no need to change the classes, an interface might just get in the way.
There are automatic tools to extract an interface, so it may be better to delay interface extraction until later in the process.

It depends very much on what you are providing... if you are working on internal things then the advice of "don't do them until needed" is reasonable. If, however, you are making an API that is to be consumed by other developers then changing things around to interfaces at a later date can be annoying.
A good rule of thumb is to make interfaces out of anything that needs to be subclassed. This is not an "always make an interface in that case" sort of thing; you still need to think about it.
So, the short answer is (and this works with both internal things and providing an API) is that if you anticipate more than one implementation is going to be needed then make it an interface.
Some things that generally would not be interfaces are classes that only hold data, like, say, a Location class that deals with x and y. The odds of there being another implementation of that are slim.

Do not create the interfaces first - you ain't gonna need them. You cannot guess which classes you'll need an interface for and which classes you won't, so don't spend any time burdening the code with useless interfaces now.
But extract them when you feel the urge to do so - when you see the need of an interface - at the refactoring step.
Those answers can help too.

Related

Is it good to develop view before model (in MVC pattern)?

I was wondering about the best practice when using the MVC pattern.
When you develop an app for a client, you want to think business. You want to think as the customer would. That's why I'm wondering:
Isn't it better to develop the view part, without any data treatment, so the customer can validate it?
I see this practice as being as powerful as TDD: if you clearly know what your program will look like, you know which treatment it will require, making the model part a bit more concrete and business-oriented, instead of making it too abstract and global.
I cannot see downsides to this, so if you can see some, or can explain why it's not a good idea, please do.
Thanks :-)
The main benefit, as I see it, would be the ability to provide the client with hands-on prototype.
It's not uncommon for clients to change their mind, because often, when they hire a developer or company, they actually have only a vague goal for the end product. This way you would mitigate the risk of large-scale changes late in the project's life cycle.
As for implementation of such approach, I would recommend for you to look into concept of "presentation objects" (Fowler has this annoying habit of slapping "model" on every damned term).
With presentation objects you would gain the ability to "shim" the data from the model layer's services. It would also let you figure out exactly which services (and service calls) your UI layer will interact with.
Note: of course I am assuming that with "MVC" you do not mean some Rails-style abomination, where "views" are just dumb templates.

What do you call a generalized (non-GUI-related) "Model-View-Controller" architecture?

I am currently refactoring code that coordinates multiple hardware components for data acquisition, and feeling a bit like I'm recreating the wheel. In particular, an MVC-like pattern seems to be emerging. Except, this has nothing to do with a GUI and I'm worried that I'm forcing this particular pattern where another might be more appropriate. Here's my scenario:
Individual hardware "component" classes obey interface contracts for each hardware type. Previously, component instances were orchestrated by a single monolithic InstrumentController class, which relied heavily on configuration + branching logic for executing a specific acquisition sequence. After an iteration, I have a separate controller for each component, with these controllers all managed by a small InstrumentControllerBase (or its derivatives). The composite system will receive "input" either programmatically or via inter-hardware component triggering - in either case these interactions are routed to, and handled by, the appropriate controller.
So, I have something that feels MVC-esque, but I don't know if that's because I'm forcing the point. With little direct MVC experience in application development, it's hard to know if I'm just trying to make my scenario fit MVC, where another pattern might be a good alternative or complementary. My problem is, search results and wiki documentation of this family of patterns seem to immediately drop me into GUI-specific discussions.
I understand "M means Model data and the V means View" - but what do you call the superset pattern? Component-Commander-Controller?
Whence can I exhume examples exemplary?
IMO a "view" is not necessarily a GUI component. The pattern is easiest to demonstrate with GUIs but that does not limit its usability to GUIs. If it works for you, don't worry about the name :-) And of course, feel free to tailor it according to your needs.
Update: Of MVC's more generic kin, the only example which surfaced in my mind (after a day's background processing) is PAC (Presentation-Abstraction-Control).

Should you wrap 3rd party libraries that you adopt into your project? [closed]

A discussion I had with a colleague today.
He claims that whenever you use a 3rd-party library, you should always write a wrapper for it, so you can always change things later and accommodate things for your specific use.
I disagree with the word always. The discussion arose regarding log4j: I claimed that log4j has a well-tested and time-proven API and implementation, that everything thinkable can be configured a posteriori, and that there is nothing you should wrap. Even if you wanted to wrap it, there are proven wrappers like commons-logging and log5j.
Another example that we touched on in our discussion is Hibernate. I claimed that it has a very big API to be wrapped. Furthermore, it has a layered API which lets you tweak its insides if you need to. My friend claimed that he still believes it should be wrapped, but he didn't do it because of the size of the API (this co-worker is much more of a veteran than me on our current project).
I claimed that wrapping should be done only in specific cases:
you are not sure how the library will fit your needs
you will only use a small portion of a library (in which case you may only expose a part of its API).
you are not sure of the quality of the library's API or implementation.
I also maintained that sometimes you can wrap your code instead of the library. For example, putting your database-related code in a DAO layer, instead of preemptively wrapping all of Hibernate.
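As a minimal C# sketch of that DAO idea (ICustomerDao and Customer are hypothetical; the same shape applies whether the ORM underneath is Hibernate or anything else):
// The interface is shaped around what the application needs, not around the ORM's API.
public interface ICustomerDao
{
    Customer FindById(int id);
    void Save(Customer customer);
}
public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}
// The ORM-specific code lives only inside the one class that implements ICustomerDao.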
Well, in the end this is not really a question, but your insights, experiences and opinions are highly appreciated.
It's a perfect example for YAGNI:
it is more work
it inflates your project
it may complicate your design
it has no immediate benefit
the scenario you write it for may never manifest
when it does, your wrapper most likely needs to be re-written completely because it is tied too closely to the concrete library you were using and the new one's API simply doesn't match yours.
Well, the obvious benefit is for switching technologies. If you have a library that becomes deprecated, and you want to switch, you may end up rewriting a lot of code to accommodate the change, whereas if it were wrapped, you'd have an easier time writing a new wrapper for the new lib, than changing all your code.
On the other hand, it would mean that you have to write a wrapper for every trivial library that you include, which is probably an unacceptable amount of overhead.
My industry is all about speed, so the only time I'd be able to justify writing a wrapper is if it was around some critical library that was likely to change dramatically on a regular basis. Or, more commonly, if I need to take a new library and shoehorn it into old code, which is an unfortunate reality.
It's definitely not an "always" situation. It's something that may be desirable. But the time isn't always going to be there, and, in the end, if writing a wrapper takes hours and the long term code library changes are going to be few, and trivial...Why bother?
No. Java architects/wannabes are too busy designing against imaginary changes.
With modern IDE, it's a piece of cake when you do need change. Until then, keep it simple.
I agree with everything that's been said pretty much.
The only time wrapping third party code is useful (bar violating YAGNI) is for unit testing.
Mocking statics and so forth requires you to wrap the code; this is a valid reason to write wrappers for third-party code.
In the case of logging code, it's not needed though.
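For example, a minimal sketch of that kind of test-driven wrapper, putting the static DateTime.Now behind a hypothetical IClock so it can be mocked:
using System;
// The wrapper's only job is to make a static dependency mockable.
public interface IClock
{
    DateTime Now { get; }
}
public class SystemClock : IClock
{
    public DateTime Now { get { return DateTime.Now; } }
}
// Production code takes IClock, so a test can hand in a fixed time instead.
public class InvoiceChecker
{
    private readonly IClock clock;
    public InvoiceChecker(IClock clock) { this.clock = clock; }
    public bool IsOverdue(DateTime dueDate) { return clock.Now > dueDate; }
}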
The problem here is partially the word 'wrapper', partially a false dichotomy, and partially a false distinction between the JDK and everything else.
The word 'wrapper'
Wrapping all of Hibernate, as you say, is a completely impractical enterprise.
Restricting the Hibernate dependencies to an identified, controlled, set of source files, on the other hand, may well be practical and achieve the same results.
The false dichotomy
The false dichotomy is the failure to recognize a third option: standards. If you use, say, JPA annotations, you can swap Hibernate for other things. If you are writing a web service and use JAX-WS annotations and JAX-B, you can swap between the JDK, CXF, Glassfish, or whatever.
The false distinction
Sure, the JDK changes slowly and is unlikely to die. But major open source packages also change slowly and are unlikely to die. Untold thousands of developers and projects use Hibernate. There's really no more risk of Hibernate disappearing or making radical incompatible API changes than there is of Java itself.
If the library you are planning to wrap is unique in its "access principles, metaphors and idioms" from other offerings in the same domain, then your wrapper is pretty much going to be similar to that library and won't do you any good if you one day switch to a different library since you will need a new wrapper.
If the library is accessed in a similar way to other libraries and the same wrapper can apply to these libraries, then they are probably written based on some existing standard and there is some common layer that already exists to access both of them.
I would only go with wrappers if I knew for sure that I would have to support multiple and substantially different libraries in production.
The main factor in deciding whether or not to wrap a library is the impact a library change will have on the code. When a library is only called from one class, the impact of changing the library will be minimal. If, on the other hand, a library is called from all classes, a wrapper is much more likely to be worthwhile.
Any uncertainty around the choice of 3rd party library should be flushed out at the beginning of the project using prototypes to test the scalability/suitability/whatever of the 3rd party library.
If you decide to go ahead and provide full de-coupling/abstraction support it should be costed up and ultimately approved by the project sponsor - ultimately it's a commercial decision as someone has to pay for it and the work required to do it (unless it's absolutely trivial, in which case the api is probably low risk anyway).
Generally an experienced architect will choose a technology that they can be reasonably confident with, and have experience of, and that they are confident will last the lifetime of the app. Or else they will eliminate any risk in the decision early on in the project, thus removing any need to do this, most of the time.
I'd tend to agree with most of your points. Using absolutes often gets you into trouble and saying you should "always" do something limits your flexibility. I'd add some more points to your list.
When you use wrapping code around a very common API, like Hibernate or log4j you make it more difficult to bring on new developers. New developers now have to learn a whole new API, where if you hadn't wrapped the code they would have been very familiar right away.
On the flip side of that, you also limit your developers' view into the API. Using an advanced feature of the API takes more time because you have to make sure that your wrapper is implemented in a way that can handle it.
Many of the wrapping layers I've seen also are very specific to the underlying implementation. So, if you write a log wrapper around log4j, you are thinking in log4j terms. If some new cool framework comes out, it may change the whole paradigm, so your wrapping code doesn't migrate as well as you had thought.
I'm definitely not saying wrapping code is always bad, but as you stated, there are a lot of factors you have to consider.
The purpose of wrapping even a well-tested and time-proven 3rd-party library is that you might decide to switch libraries at some point in the future. Wrapping it makes it easier to switch without changing any code in your core application. Only the wrapper needs to change.
If you're absolutely sure that you'll never (another absolute) use a different logging framework in your project, go ahead and skip the wrapper. Even having said that, I'd probably hold off on writing the wrapper until I knew I needed it, like the first time I need to switch.
This is kind of a funny question.
I've worked in systems where we've found showstopper bugs in libraries we were using, and which upstream was either no longer maintaining, or not interested in fixing. In a language like Java, you usually can't fix internal bugs from a wrapper. (Fortunately, if they're open-source, you can at least fix them yourself.) So it's no help here.
But I'm often working in a language where you can easily modify libraries at any time, without seeing or even having their source code -- I commonly add new methods to existing classes, for example. So in this case, there's no point in wrapping: just make the change you want.
Also, does your colleague draw the line at things called "libraries"? What about Java itself? Does he wrap built-in classes? Does he wrap the filesystem? The thread scheduler? The kernel? (That is, with his own wrappers -- in a sense, everything is a wrapper around the CPU, but it sounds like he's talking about wrappers in your source repo that are completely under your control.) I've had built-in functionality change or disappear when new versions of it appear. Java is not immune from this.
So the idea to always write a wrapper comes down to a bet. Assuming he's only wrapping third-party libraries, he seems to be implicitly betting that:
"first-party" functionality (like Java itself, the kernel, etc.) will never change
when "third-party" functionality changes, it will always be done in a way that can be fixed in a wrapper
Is that true in your case? I don't know. Of the medium-large Java projects I've done, it's rarely true for me. I wouldn't spend effort wrapping all third-party libraries, because it seems like a poor bet, but your situation is certainly different from mine.
There is one situation where you can wrap with good reason: if you need to test stuff, and the default third-party object is heavyweight, then having an interface can really make a difference.
Note, this is not to replace the library, but to make it manageable where it doesn't matter much.
Wrapping a whole library is boilerplate, ineffective, and wrong in most cases. It can be done in a much cleverer way. I'd say that wrapping a library is appropriate mostly in the case of UI component libraries, and again, you have to be adding some additional core functionality of yours to all the components for this to be needed.
If too many modifications and additions are needed, this is most likely not the library you are looking for.
If there is a moderate amount of additions and modifications, there are always design patterns that come in handy in those cases. The Decorator pattern (which allows new/additional behaviour to be added to an existing object dynamically), for example, is rather suitable for most cases.
IDE search/replace and refactoring capabilities offer an easy way to change your code in all required places if some important change is needed and a wrapping object appears. (Of course, unit tests would be helpful here ;) )
In my experience the question becomes fairly moot if you're using abstractions sufficiently. Coupling to a library is just like coupling to any other interface. Thus you want to reduce accidental coupling and the scope of rewrite necessary if you need to swap out the implementation. Don't bind your application logic to some construct, but don't just form a bunch of stupid (literally) wrappers around something and expect to gain any benefit.
A wrapper doesn't usually gain you anything unless it's answering a specific purpose (such as polymorphizing a non-polymorphic construct). They often show up in refactoring, but I wouldn't recommend forming an architecture on them. There's a few exceptions of course, but there is with any principle.
This doesn't speak toward adapters. An adapter can be a pretty important component for when you want to actually alter the interface of a library and its use to be in line with architecture, code, or domain concepts in your project.
You should do it always, often, sometimes, rarely, or never. Not even your colleague does it always, but the instructive cases are always and never. Suppose that it is sometimes necessary. If you never wrapped a library, the worst consequence is that one day you discovered that it was necessary for a library that you had used all over the place. It would take you some time to wrap that library and to perform shotgun surgery on the clients. The question is whether that eventuality would take more time than habitually providing wrappers that are rarely necessary, but having never to perform the shotgun surgery.
My instinct is to appeal to the YAGNI (you ain't gonna need it) principle and opt for "rarely".
I would not wrap it as a one-to-one thing, but I would layer the app so that each part is replaceable as much as possible. The ISO OSI model works well for all types of software :-)

Performance of using static methods vs instantiating the class containing the methods

I'm working on a project in C#. The previous programmer didn't know object oriented programming, so most of the code is in huge files (we're talking around 4-5000 lines) spread over tens and sometimes hundreds of methods, but only one class. Refactoring such a project is a huge undertaking, and so I've semi-learned to live with it for now.
Whenever a method is used in one of the code files, the class is instantiated and then the method is called on the object instance.
I'm wondering whether there are any noticeable performance penalties in doing it this way? Should I make all the methods static "for now" and, most importantly, will the application benefit from it in any way?
From here, a static call is 4 to 5 times faster than constructing an instance every time you call an instance method. However, we're still only talking about tens of nanoseconds per call, so you're unlikely to notice any benefit unless you have really tight loops calling a method millions of times, and you could get the same benefit by constructing a single instance outside that loop and reusing it.
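A rough sketch of those three call patterns (Calculator and Demo are hypothetical types; this only shows the shapes, it is not a proper benchmark):
public class Calculator
{
    public int Add(int a, int b) { return a + b; }
    public static int AddStatic(int a, int b) { return a + b; }
}
public static class Demo
{
    public static void Run()
    {
        // Worst case: constructing a new instance on every call inside a tight loop.
        for (int i = 0; i < 1000000; i++) { new Calculator().Add(i, 1); }
        // Roughly the same benefit as the static call: construct once, reuse inside the loop.
        var calc = new Calculator();
        for (int i = 0; i < 1000000; i++) { calc.Add(i, 1); }
        // Static call: no construction at all.
        for (int i = 0; i < 1000000; i++) { Calculator.AddStatic(i, 1); }
    }
}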
Since you'd have to change every call site to use the newly static method, you're probably better spending your time on gradually refactoring.
I have dealt with a similar problem where I work. The programmer before me created one controller class where all the BLL functions were dumped.
We are redesigning the system now and have created many Controller classes depending on what they are supposed to control e.g.
UserController, GeographyController, ShoppingController...
Inside each controller class they have static methods which make calls to the cache or the DAL using the singleton pattern.
This has given us two main advantages. It is slightly faster (around 2-3 times faster, but we're talking nanoseconds here ;P). The other is that the code is much cleaner,
i.e.
ShoppingController.ListPaymentMethods()
instead of
new ShoppingController().ListPaymentMethods()
I think it makes sense to use static methods or classes if the class doesn't maintain any state.
It depends on what else that object contains -- if the "object" is just a bunch of functions then it's probably not the end of the world. But if the object contains a bunch of other objects, then instantiating it is gonna call all their constructors (and destructors, when it's deleted) and you may get memory fragmentation and so on.
That said, it doesn't sound like performance is your biggest problem right now.
You have to determine the goals of the rewrite. If you want to have nice, testable, extendable & maintainable OO code then you could try to use objects and their instance methods. After all, this is Object Oriented programming we're talking about here, not Class Oriented programming.
It is very straightforward to fake and/or mock objects when you define classes that implement interfaces and you execute instance methods. This makes thorough unit testing quick and effective.
Also, if you are to follow good OO principles (see SOLID at http://en.wikipedia.org/wiki/SOLID_%28object-oriented_design%29 ) and/or use design patterns you will certainly be doing a lot of instance based, interface based development, and not using many static methods.
As for this suggestion:
It seems silly to me to create an object JUST so you can call a method which seemingly has no side effects on the object (from your description I assume this).
I see this a lot in .NET shops, and to me it violates encapsulation, a key OO concept. I should not be able to tell whether a method has side effects by whether or not the method is static. As well as breaking encapsulation, this means that you will need to change methods from static to instance if/when you modify them to have side effects. I suggest you read up on the Open/Closed Principle for this one and see how the suggested approach, quoted above, works with that in mind.
Remember that old chestnut, 'premature optimization is the root of all evil'. I think in this case this means don't jump through hoops using inappropriate techniques (i.e. Class Oriented programming) until you know you have a performance issue. Even then, debug the issue and look for the most appropriate fix.
Static methods are a lot faster and use a lot less memory. There is this misconception that they're just a little faster. They're a little faster as long as you don't put them in loops. BTW, some loops look small but really aren't, because the method call containing the loop is also another loop. You can tell the difference in code that performs rendering functions. A lot less memory is unfortunately true in many cases. An instance allows easy sharing of information with sister methods. A static method will ask for the information when it needs it.
But like in driving cars, speed brings responsibility. Static methods usually have more parameters than their instance counterpart. Because an instance would take care of caching shared variables, your instance methods will look prettier.
ShapeUtils.DrawCircle(stroke, pen, origin, radius);
ShapeUtils.DrawSquare(stroke, pen, x, y, width, length);
VS
ShapeUtils utils = new ShapeUtils(stroke,pen);
util.DrawCircle(origin,radius);
util.DrawSquare(x,y,width,length);
In this case, whenever the instance variables are used by all methods most of the time, instance methods are pretty much worth it. Instances are NOT ABOUT STATE, they're about SHARING; although COMMON STATE is a natural form of SHARING, they are NOT THE SAME. The general rule of thumb is this: if the method is tightly coupled with other methods (they love each other so much that when one is called, the other needs to be called too, and they probably share the same cup of water), it should be made an instance method. Translating static methods into instance methods is not that hard: you only need to take the shared parameters and put them in as instance variables. The other way around is harder.
Or you can make a proxy class that will bridge the static methods. While it may seem to be more inefficient in theory, practice tells a different story. This is because whenever you need to call a DrawSquare once (or in a loop), you go straight to the static method. But whenever you are gonna use it over and over along with DrawCircle, you are gonna use the instance proxy. An example is the System.IO classes FileInfo (instance) vs File (static).
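Continuing the hypothetical ShapeUtils example above, such a proxy might look like this (Stroke, Pen and Point are the same assumed types):
// Instance proxy that caches the shared parameters and bridges to the static methods.
public class ShapePainter
{
    private readonly Stroke stroke;
    private readonly Pen pen;
    public ShapePainter(Stroke stroke, Pen pen) { this.stroke = stroke; this.pen = pen; }
    public void DrawCircle(Point origin, double radius) { ShapeUtils.DrawCircle(stroke, pen, origin, radius); }
    public void DrawSquare(double x, double y, double width, double length) { ShapeUtils.DrawSquare(stroke, pen, x, y, width, length); }
}
// One-off callers keep calling ShapeUtils directly; repeated callers construct a
// ShapePainter once and reuse it, much like FileInfo (instance) vs File (static).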
Static methods are testable. In fact, even more testable than instance ones. A method GetSum(x,y) would be very testable, not just for unit tests but for load tests, integration tests and usage tests. Instance methods are good for unit tests but horrible for every other kind of test (which matter more than unit tests, BTW), which is why we get so many bugs these days. The thing that makes ANY method untestable is parameters that don't make sense, like (Sender s, EventArgs e), or global state like DateTime.Now. In fact, static methods are so good at testability that you see fewer bugs in the C code of a new Linux distro than in the code of your average OO programmer (he's full of s*** I know).
I think you've partially answered this question in the way you asked it: are there any noticeable performance penalties in the code that you have?
If the penalties aren't noticeable, you needn't necessarily do anything at all. (Though it goes without saying that the codebase would benefit dramatically from a gradual refactor into a respectable OO model.)
I guess what I'm saying is, a performance problem is only a problem when you notice that it's a problem.
It seems silly to me to create an object JUST so you can call a method which seemingly has no side effects on the object (from your description I assume this). It seems to me that a better compromise would be to have several global objects and just use those. That way you can put the variables that normally would be global into the appropriate classes so that they have slightly smaller scope.
From there you can slowly move the scope of these objects to be smaller and smaller until you have a decent OOP design.
Then again, the approach that I would probably use is different ;).
Personally, I would likely focus on structures and the functions which operate on them and try to convert these into classes with members little by little.
As for the performance aspect of the question, static methods should be slightly faster (but not much) since they don't involve constructing, passing and deconstructing an object.
This doesn't hold in PHP, where an object method is faster:
http://www.vanylla.it/tests/static-method-vs-object.php

Developing N-Tier App. In what direction?

Assuming that you are implementing a user story that requires changes in all layers from the UI (or service facade) to the DB.
In what direction do you move?
From UI to Business Layer to Repository to DB?
From DB to Repository to Business Layer to UI?
It depends. (On what?)
The best answer I've seen to this sort of question was supplied by the Atomic Object guys and their Presenter First pattern. Basically it is an implementation of the MVP pattern, in which (as the name suggests) you start working from the Presenter.
This provides you with a very light-weight object (since the presenter is basically there to marshal data from the Model to the View, and events from the View to the Model) that can directly model your set of user actions. When working on the Presenter, the View and Model are typically defined as interfaces, and mocked, so your initial focus is on defining how the user is interacting with your objects.
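A minimal sketch of that shape (IOrderView, IOrderModel and OrderPresenter are hypothetical):
using System;
// Presenter First: the presenter is written, and tested, against these two
// interfaces, which are mocked until real implementations exist.
public interface IOrderView
{
    event EventHandler SaveClicked;
    string OrderName { get; }
    void ShowConfirmation(string message);
}
public interface IOrderModel
{
    void SaveOrder(string name);
}
public class OrderPresenter
{
    public OrderPresenter(IOrderView view, IOrderModel model)
    {
        // Marshal events from the View to the Model, and results back to the View.
        view.SaveClicked += (sender, e) =>
        {
            model.SaveOrder(view.OrderName);
            view.ShowConfirmation("Saved " + view.OrderName);
        };
    }
}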
I generally like to work in this way, even if I'm not doing a strict MVP pattern. I find that focusing on user interaction helps me create business objects that are easier to interact with. We also use Fitnesse in house for integration testing, and I find that writing the fixtures for Fitnesse while building out my business objects helps keep things focused on the user's perspective of the story.
I have to say, though, that you end up with a pretty interesting TDD cycle when you start with a failing Fitnesse test, then create a failing Unit Test for that functionality, and work your way back up the stack. In some cases I'm also writing Database unit tests, so there is another layer of tests that get to be written, failed, and passed, before the Fitnesse tests pass.
If change is likely, start at the front. You can get immediate feedback from stakeholders. Who knows? Maybe they don't actually know what they want. Watch them use the interface (UI, service, or otherwise). Their actions might inspire you to view the problem in a new light. If you can catch changes before coding domain objects and the database, you save a ton of time.
If requirements are rigid, it's not as important. Start in the layer that's likely to be the most difficult - address risk early. Ultimately, this is one of those "more an art than a science" issues. It's probably a delicate interplay between layer design that creates the best solution.
Cheers.
I'd do it bottom-up, since you'll have some working results fast (i.e. you can write unit tests without a user interface, but can't test the user interface until the model is done).
There are other opinions, though.
I would start modeling the problem domain. Create relevant classes representing the entities of the system. Once I feel confident with that, I'd try to find a feasible mapping for persisting the entities to the database. If you put too much work into the UI before you have a model of the domain, there is a significant risk that you need to re-work the UI afterwards.
Come to think of it, you probably need to do some updates to all of the layers anyway... =)
