I'm not entirely convinced of the benefits of a 3-tier architecture. Why, then, has LINQ emerged, which is a lighter data access approach? Any input would be appreciated.
One of the main benefits of n-tier applications (there are of course many more than what I mention here) is the separation of concerns that it brings. If you structure your application so that the responsibility for, e.g., data access is held in a data access layer (LINQ2SQL is a perfectly good example of one), validation and other business logic in one or more other layers, presentation in yet another one, etc., you can change details in, or even replace, any one layer without having to re-write the rest of your application.
If, on the other hand, you choose not to implement an n-tier approach, you'll quickly notice that, for example, changing the name of one single database table will require you to go through your entire application - every single line of code - in search of SQL statements that need to be updated. In an n-tier application (if you've done things right), you'll only have to change the table name once in your code.
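As a rough sketch of that idea (the entity, table, and repository names here are invented for illustration), a LINQ to SQL-style mapping keeps the physical table name in exactly one place, and the rest of the application only ever talks to the repository:

using System.Data.Linq;
using System.Data.Linq.Mapping;
using System.Linq;

// Hypothetical entity: the physical table name appears only in this attribute.
// If the table is renamed, this is the single line of code that changes.
[Table(Name = "dbo.Customers")]
public class Customer
{
    [Column(IsPrimaryKey = true)] public int Id { get; set; }
    [Column] public string Name { get; set; }
}

// A thin data access layer: callers never see SQL strings or table names.
public class CustomerRepository
{
    private readonly DataContext _context;
    public CustomerRepository(DataContext context) { _context = context; }

    public IQueryable<Customer> CustomersNamed(string name)
    {
        return _context.GetTable<Customer>().Where(c => c.Name == name);
    }
}

If the presentation layer depends only on CustomerRepository, a table rename, or even a swap to a different persistence technology, stays contained in this one layer.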
You need to do it the naive way and fail before you realize the problems those frameworks and patterns solve.
Happened to me with many things. SVN branches looked like a disorganized way to do things, until one day I wished I had branched before my last 5 commits. C++ templates seemed useless and confusing, until I got enlightened; now I use them pretty often. And every single J2EE feature will look like useless bloat to anyone, until you actually build an app big enough and have problems; then they may be exactly what you need. (Which is why it's a flaw to require that you use them.)
As in most fields of engineering, there's never a perfect one-size-fits-all solution for development or architecture. So it is with n-tier architectures.
For example, quite a few applications run perfectly well as a one-tier or two-tier architecture. Microsoft Word, for example, does quite well, thank you, as a single-tier system.
Most business applications have started using layers (as distinct from tiers: layers are virtual, tiers are physical) as it makes life much easier to have presentation logic in one place, business logic in another, and persistence logic somewhere else. It can make sense too depending on the application to have lots more layers: I recently finished up a project with about sixteen layers between the UI client and the SQL database. (We had REST services, co-ordination layers, mixed databases, you name it. Made for quite a deployment challenge.)
The nice things about all these layers are:
testing becomes fairly easy, as each layer does one and only one thing
it's feasible to scale, especially if you design your layers to be stateless: then you can group them together and deploy to separate boxes quite easily
it's feasible to have lots of developers working simultaneously, so long as you keep talkin' to each other
changes are (usually) limited to one layer in the code
LINQ, the Language Integrated Query, really helps too, as it can abstract away many of the harder parts of working with persistence layers. For instance:
the SQL-like syntax maps fairly directly to SQL database tables or views
working with more complex non-relational data like XML files is made straightforward
Without LINQ, developing persistence layers was invariably repetitive, which is never a good thing; the short sketch below shows the sort of query LINQ makes trivial.
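A rough sketch (the data and element names are invented): the same query syntax works over in-memory collections and over XML, exactly the kind of plumbing that used to be written by hand:

using System;
using System.Linq;
using System.Xml.Linq;

class LinqSketch
{
    static void Main()
    {
        // SQL-like query syntax over an ordinary in-memory array.
        int[] numbers = { 5, 12, 8, 21, 3 };
        var bigOnes = from n in numbers
                      where n > 7
                      orderby n
                      select n;
        Console.WriteLine(string.Join(", ", bigOnes));   // 8, 12, 21

        // The same idiom over XML (LINQ to XML), with no manual DOM walking.
        var doc = XDocument.Parse(
            "<orders><order id='1' total='90' /><order id='2' total='250' /></orders>");
        var largeOrders = from o in doc.Descendants("order")
                          where (decimal)o.Attribute("total") > 100
                          select (string)o.Attribute("id");
        Console.WriteLine(string.Join(", ", largeOrders)); // 2
    }
}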
Context:
We built a data-intensive app for the US region for just one client using ASP.NET MVC, and now we are slowly moving to ASP.NET Core. We have a requirement to develop a similar version for Canada; our approach was to maintain two different code bases even though the UI is 70% the same.
Problem:
Two code bases seem maintainable, but we're ending up doing double work whenever a generic component has to change. Now we have multiple clients coming from multiple regions, the UI can differ a little by client and region, and we are a bit confused about how to architect such an app with just one code base.
I am not sure what would be a maintainable and scalable approach.
One approach is having a UI powered by a rules engine that is capable of showing and hiding components. How maintainable is this approach from a deployment perspective?
What other approaches would solve this problem?
The main approaches I can think of are:
Separate code bases and release pipelines. This seems to be your current approach.
Pros:
independent releases - no surprises like releasing a change to Canada that the other team made for the US
potentially simpler code base - less configuration, fewer "if (region == 'CANADA')..."
independent QA - it's much simpler to automate testing if you're just testing one environment
Cons:
effort duplication as you've already noticed
One code base, changes configuration driven.
Pros:
making a change in one place
Cons:
higher chance of many devs working on the same code at the same time
you're likely to end up with horrible 'ifs' (one way to contain them is sketched in the configuration example after this list)
separating release pipelines can be very tricky. If you have a change for Canada, you need to test everything for the US - this can be a significant amount of effort depending on the level of QA automation and the complexity of your test scenarios. Also, do you release the US version just because someone in Canada wanted to change the button color to green? If you do, then you waste time. If you don't, then potentially untested changes pile up for the next US release.
if you have other regions coming, this code quickly becomes complex - many people just throw stuff in to make their region work and you end up with spaghetti code.
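For what it's worth, here is one hypothetical way (all names invented) to keep the configuration-driven variant from degenerating into region string comparisons: each region ships its own settings (in ASP.NET Core these could be bound from appsettings.json via the options pattern), and the code branches on typed configuration instead:

using System;

// Hypothetical per-region settings, loaded once per deployment.
public class RegionOptions
{
    public string Region { get; set; } = "US";
    public bool ShowTaxIdField { get; set; } = false;
    public string DateFormat { get; set; } = "MM/dd/yyyy";
}

// Consumers depend on the typed options, not on "if (region == 'CANADA')" checks.
public class InvoiceViewModelBuilder
{
    private readonly RegionOptions _options;
    public InvoiceViewModelBuilder(RegionOptions options) { _options = options; }

    public object Build(DateTime issuedOn)
    {
        return new
        {
            IssuedOn = issuedOn.ToString(_options.DateFormat),
            // Component visibility is decided by configuration, not hard-coded regions.
            ShowTaxId = _options.ShowTaxIdField
        };
    }
}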
Separate code bases using common, configurable modules.
This could contain anything you decide is unlikely to differ across regions: NuGet packages with core logic, npm packages with JavaScript, front-end styling, etc.
Pros:
if done right you can get the best of both worlds - separate release pipelines and separate (simple) region specific code
you can make a change to the common module and decide when/if to update each region to the newest version separately
Cons:
more infrastructure effort - you need a release pipeline per app and one per package
debugging and understanding packaged code when used in an app is tricky
changing something in common module and testing it in your app is a pain - you have to go to the common repository, make a change, test it, create a PR, merge it, wait for the package to build and get released, upgrade in your app... and then you discover the change was wrong.
I've worked with such projects and there are always problems - if you make it super configurable it becomes unreadable and overengineered. If you make it separated you have to make changes in many places and maintaining things like unit tests in many places is a nightmare.
Since you already started with approach 1 and since you mentioned other regions are coming, I'd suggest going with your current strategy and slowly abstracting common pieces to separate repos (moving towards 3rd approach).
I think the most important piece that will make such changes easier is a decent level of test automation - both for your apps and for your common modules when you create them.
One piece of advice I can give you is to be pragmatic. Some level of duplication is fine, especially if the alternative is a complex rule engine that no one understands and no one wants to touch because it's used everywhere.
I am new to web development and have read some wikis and discussions about MVC. However, the more I read, the more confused I am about its design purpose.
I just want to know why this design pattern was invented, and what problem it is meant to solve.
Thanks in advance.
The goal of the MVC paradigm is, in essence, to ensure a form of separation of code. The problem that often arises when developing code is that the code is written in one long succession, where each part follows another and where each part is directly dependent upon what the other parts are doing.
When working with a large project, maintaining and further developing the code can quickly become an issue. You could therefore argue, in a simplified manner, that what the MVC paradigm tries to do is to ensure that you separate the business logic (the code that does the actual work) from the presentation logic (the code that shows the results). But those two parts need to communicate with each other, which is what the controller is responsible for.
This allows for a clear structure of code where the different parts are more decoupled, meaning less dependent upon each other.
The separation also means that you work in a much more modular way, where each part interacts with the others through an interface (some defined functions and variables that are used to call upon other parts) so that you can change the underlying functionality without having to change other parts of your code, as long as your interface remains the same.
So the problem it tries to solve is to avoid having a code base that is so entangled that you can't change or add anything without breaking the code, meaning you have to modify the code in all sorts of places beyond where you made your original changes.
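As a small sketch of that separation (the to-do example and names are made up), the three roles could look like this: the model holds data and rules and knows nothing about display, the view only renders what it is handed, and the controller mediates between them:

using System;
using System.Collections.Generic;

// Model: business data and rules; it knows nothing about presentation.
public class TodoModel
{
    private readonly List<string> _items = new List<string>();
    public IReadOnlyList<string> Items { get { return _items; } }

    public void Add(string item)
    {
        if (string.IsNullOrWhiteSpace(item))
            throw new ArgumentException("Item text is required.");
        _items.Add(item);
    }
}

// View: only knows how to display what it is given.
public class ConsoleTodoView
{
    public void Render(IReadOnlyList<string> items)
    {
        foreach (var item in items)
            Console.WriteLine("- " + item);
    }
}

// Controller: turns user input into model calls, then asks the view to render.
public class TodoController
{
    private readonly TodoModel _model;
    private readonly ConsoleTodoView _view;

    public TodoController(TodoModel model, ConsoleTodoView view)
    {
        _model = model;
        _view = view;
    }

    public void AddItem(string text)
    {
        _model.Add(text);
        _view.Render(_model.Items);
    }
}

Swapping ConsoleTodoView for, say, a web page only touches the view and the controller wiring; the model and its tests stay untouched.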
To some degree it's a solution in search of a problem.
As a rather ancient programmer I'm well aware of the benefits of "separation of concerns", but (in my not-so-humble opinion) MVC doesn't do this very well, especially when implemented "cook-book" fashion. Very often it just leads to a proliferation of modules, with three separate modules for every function, and no common code or common theme to tie things together and accomplish the real goal: minimize complexity and maximize reliability/maintainability.
"Classical" MVC is especially inappropriate in your typical phone GUI app, where, eg, management of a database table may be intimately connected to management of a corresponding table view. Spreading the logic out among three different modules only makes things more complicated and harder to maintain.
What does often work well is to think about your data and understand what sorts of updates and queries will be required, then build a "wrapper" for the database (or whatever data storage you use), to "abstract" it and minimize the interactions between the DB and the rest of the system. But planning this is hard, and a significant amount of trial and error is often required -- definitely not cook-book.
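A sketch of such a wrapper, with invented types and written in C# purely for concreteness: the interface is shaped around the queries and updates the application actually needs, and only the implementation knows which storage engine sits behind it:

using System.Collections.Generic;
using System.Linq;

// The wrapper: defined by what the app needs, not by the storage API.
public interface IOrderStore
{
    Order FindById(int id);
    IReadOnlyList<Order> OpenOrdersForCustomer(int customerId);
    void Save(Order order);
}

public class Order
{
    public int Id { get; set; }
    public int CustomerId { get; set; }
    public bool IsOpen { get; set; }
}

// One implementation might wrap SQLite, another a remote service, another an
// in-memory fake for tests; callers never change.
public class InMemoryOrderStore : IOrderStore
{
    private readonly Dictionary<int, Order> _orders = new Dictionary<int, Order>();

    public Order FindById(int id)
    {
        Order order;
        return _orders.TryGetValue(id, out order) ? order : null;
    }

    public IReadOnlyList<Order> OpenOrdersForCustomer(int customerId)
    {
        return _orders.Values.Where(o => o.CustomerId == customerId && o.IsOpen).ToList();
    }

    public void Save(Order order)
    {
        _orders[order.Id] = order;
    }
}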
Similarly you can sometimes abstract other areas, but abstracting, say, a GUI interface is often too difficult to be worthwhile -- don't just write "wrappers" to say you did it.
Keep in mind that the authors of databases, GUI systems, app flow control mechanisms, etc, have already put considerable effort (sometimes too much) into abstracting those interfaces, so your further "abstraction" is often little more than an extra layer of calls (especially if you take the cook-book approach).
Model-view-controller was created to separate concerns in code instead of creating a hodgepodge all in a single blob (spaghetti code). The view code is merely presentation logic, the model is your objects representing your domain, and the controller handles negotiating business logic and integrations with services on the backend.
Core Data is pretty amazing, and I've really enjoyed using the visual layout Xcode provides for it to organize things and get a quick sample of what data I've placed where. At times I've started to wonder if I'm making the best use of it, however, as after a while there tends to be such a mass of arrows that it becomes difficult to tell what's going where.
I try to keep this to a minimum by
grouping like objects together,
keeping abstract objects/parents in trees with their children,
etc.
but the clutter seems inevitable.
What are some ways you employ to keep it optimally organized and readable?
This is difficult to answer in a general sense. I think it's important, and you're right to give this some good consideration. I tend to obsess over the visual arrangement of things myself, as I find it has a profound effect on my perception and ongoing understanding of my own schema. Xcode's data modeler is essentially a schema design and design documentation tool.
I strive to compartmentalize my own designs as much as possible. For example, if you consider an iTunes-like case, you might have a controller managing the library source list selection (a playlist, for a simple example), and another managing the members of the selected playlist. In the schema, there may be several "library-related" entities and several "playlist-related" entities, and there are definitely several "song-related" entities (album, artist, and song/track). I'd group the song-related stuff tightly together in a way that nicely arranges the relationship lines, but that keeps these entities visually separated by space from playlist- and library-related items.
In other words, if you keep related items together in clearly-defined logical clusters, separated by nice whitespace, organized in the same way you'd organize your controllers, the concepts are kept fairly clear.
The other problem is Xcode's automatic placement of the relationship lines. Unfortunately, there's little we can do about making those neat. I've been known to spend (actual time redacted out of embarrassment) worrying over balancing clearly-depicted relationships with clearly-depicted clusters of interrelated entities.
Good luck and happy OCD! :-)
Here's a better suggestion:
http://www.sebastianrehnby.com/blog/2013/01/15/structuring-an-ios-project/
Additionally: a Services module and a Helper module (your app's utility classes).
Services - calling external services like your back-end server, DBObject services.
Also, this one
http://www.slideshare.net/MassimoOliviero/architecting-ios-project
Historically I've been completely against using ORMs for all but the most basic applications.
My reasoning always has been that it's a very leaky abstraction... mostly because SQL provides a very powerful way to retrieve data from a relational source, which usually gets messed up by the ORM, so that you lose a lot of performance to gain the appearance of not having a relational backend.
I've always thought the DATA should always be kept in the database, not eat up application memory, which won't scale anyway. In addition, the performance hit of being too generic is harmful. For example, if I need the name and address of all the clients in my database, SQL provides me with an easy way to get them, in one query. With an ORM I need to get all the clients and then each name and address; even if it's lazy loaded, it's gonna take a LOT longer.
That's what I think, but has any of the above changed? I'm seeing a lot of ORMs like Entity Framework, NHibernate, etc., and they seem to have a lot of popularity lately... Are they worth it? Do they solve the problems I describe above?
Please read: All Abstractions Are Failed Abstractions. It should put a lot of your questions in perspective.
Performance is usually not an issue with an ORM - and if you really find yourself in a situation where it is, there is always the option to handcraft the SQL statements the ORM uses.
IMHO ORMs give you an instant and huge development speed increase. That's why they are so popular. And using them right does not make you paint yourself into a corner. There is always the option of hand-tuning the performance.
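For instance (entity and table names invented; this sketch assumes Entity Framework Core), the ORM can compose the SQL for the common case, while FromSqlRaw gives you a hand-written escape hatch when a query needs tuning:

using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Client
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Address { get; set; }
}

public class ClientContact
{
    public string Name { get; set; }
    public string Address { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<Client> Clients { get; set; }
    // Provider configuration (connection string, etc.) is omitted from this sketch.
}

public static class ClientQueries
{
    // Common case: the ORM generates one SELECT with just these two columns.
    public static IQueryable<ClientContact> NamesAndAddresses(ShopContext db)
    {
        return db.Clients.AsNoTracking()
                 .Select(c => new ClientContact { Name = c.Name, Address = c.Address });
    }

    // Hot path: hand-crafted SQL, still materialized into entities by the ORM.
    public static IQueryable<Client> NamesAndAddressesRaw(ShopContext db)
    {
        return db.Clients.FromSqlRaw("SELECT Id, Name, Address FROM Clients");
    }
}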
Edit:
Even though Jeff focuses on LINQ to SQL, everything he says about abstractions and performance is equally true for NHibernate (which I know from years of real-world app development). IMHO one should use an ORM by default, since they are more than fast enough for the notorious 90% of situations. Code written against an ORM is usually more maintainable and readable, especially when your code is picked up by the next developer who inherits it. Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live. Never forget about that guy!
In addition, they give you out-of-the-box caching, lazy loading, unit of work... you name it. And I found that when I was not happy with the performance of the ORM, it was MY fault. ORMs do force you to adhere to good OO design practices and help you shape your domain model.
On the Ruby on Rails side, ActiveRecord -- essentially an ORM -- is the basis of 95% of Rails applications (made-up statistic, but it's around there). Actually, to get to that 95% we would probably need to include other ORMs for Rails, like DataMapper.
The abstraction is leaky, and a developer can always dip down to SQL as necessary. Even when you're not using SQL directly, you have to think about number of database hits, etc. For instance, in ActiveRecord, "eager loading" is used to avoid multiple database hits, so you see stuff like this (includes the related "author" field of each Post in the initial query... it does a join under the hood, I think)
for post in Post.find(:all, :include => :author)
The point is that the abstraction leaks as do all abstractions, but that's not really the point. To decide whether to use the abstraction or not, you have to consider whether it will add to or reduce your general workload. In other words, will you spend more time retrofitting your concepts to make the abstraction work, or is it ready to do what you need without much hacking (saving you time)?
I think that the abstractions that work are those that are mature: ActiveRecord has been around the block a ton (as has Hibernate), so it provides an abstract way to patch most of the leaks you would normally be worried about, without explicitly rolling your own lower-level solution (i.e., without writing SQL).
Beyond the learning curve, I think that ORMs are an amazing time-saver for most of your database access, and that most apps actually do make quite "normal" use of the DB. While it may not be your case whatsoever, eschewing an ORM for direct DB access is often a case of early, and unnecessary, optimization.
Edit: I hadn't seen this, but the Jeff quote is
Does this abstraction make our code at least a little easier to write? To understand? To troubleshoot? Are we better off with this abstraction than we were without it?
saying essentially the same thing.
Some of the more modern ORMs are really powerful tools that solve a lot of real-world problems. The good ORMs don't try to hide the relational model from you, but actually leverage it to make OO programming more powerful. They really aren't abstractions in the sense of letting you ignore the "low-level" details of relational algebra; instead they are toolkits that let you build abstractions on the relational model and make it easier to bring data into the imperative model, track the changes, and push them back to the database. The SQL language really doesn't provide any good way to factor out common predicates into composable, reusable components to achieve business-rule-level abstractions.
Sure there is a performance hit, but it's mostly a constant-factor thing, as you can make the ORM issue whatever SQL you would issue yourself. For your name and address example, in SQLAlchemy you'd just do
for name, address in session.query(Client.name, Client.address):
# process data
and you're done. But where the ORM helps you is when you have reusable relations and predicates. For instance, say you have defined a way to join to a client's favorited items, and a predicate to see if it is on sale. Then you can get the list of clients that have some of their favorite items on sale while also fetching the assigned salesperson with the following query:
potential_sales = (session.query(Client).join(Client.favorite_items)
.filter(Item.is_on_sale)
.options(eagerload(Client.assigned_salesperson)))
At least for me, the intent of the query is a lot faster to write, clearer, and easier to understand when written like this instead of as a dozen lines of SQL.
As with any abstraction, you'll have to pay, either in the form of performance or of leaks. I agree with you in being against ORMs, since SQL is a clean and elegant language. I've sort of written my own little frameworks which do these things for me, but hey, then I sat there with my own ORM (though with a little more control over it than, for example, Hibernate). The people behind Hibernate state that it is fast. It should be able to do about 95% of the boring work against your database (simple queries, updates, etc.) but gives you the freedom to do the last 5% yourself if you want (you could always write your own mappings in special cases).
I think most of the popularity stems from the fact that many programmers are lazy and want established frameworks to do the dirty, boring persistence job for them (I can understand that), but the price of an abstraction will always be there. I would consider my options thoroughly before choosing to use an ORM in a serious project.
I'm trying to build a very light re-usable framework for my games, rather than starting from scratch each time I start a game. I have a component-driven architecture - e.g. an Entity composes a Position component, a Health component, an AI component, etc.
My big question is whether my model should compose view components to allow for more than one view of the model, or whether to use a truer MVC where the model does not know about its views and they are managed externally.
I have tried both methods but if anyone knows the pros and cons of each approach and which is the industry standard, it would be great to know.
It depends on your audience. Game devs, myself included, aren't very used to the MVC model; although most know it, it's not as easy to keep it cleanly cut, because of development casualties (not for any serious technical reasons). So, from experience, I've seen dozens of game frameworks start as MVC, but only a pair were able to maintain it until the end. My theory is that MVC adds too much complexity and few benefits for small throwaway games (with normally a few devs), and it's too hard to cleanly separate most game objects into these layers for large/complex games. And since games have a release date, they many times sacrifice code clarity and reusability for performance and quick ad-hoc solutions (which will get rewritten, if necessary, in the sequel (if there is one)).
However, with the caveat above, it's better to aim high, because if you succeed it's better :) and if you fail, well, too bad. So you should probably try MVC, but don't worry if it fails; professional game devs have all failed at the task many times :)
I’d certainly vote for the model to know nothing about its views. Loose coupling is good: Simpler model code, easier testing, more choices.
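One common way to get that decoupling (a sketch only; the health example and names are invented) is to have the model raise plain events that any number of views subscribe to, so the model never references a view type:

using System;

// Model: owns the state and publishes changes; no reference to any view.
public class HealthModel
{
    public int Current { get; private set; } = 100;
    public event Action<int> Changed;

    public void Damage(int amount)
    {
        Current = Math.Max(0, Current - amount);
        if (Changed != null) Changed(Current);
    }
}

// Views subscribe to the model; attach one, several, or none (e.g. in unit tests).
public class HealthBarView
{
    public HealthBarView(HealthModel model)
    {
        model.Changed += hp => Console.WriteLine("[bar] " + new string('#', hp / 10));
    }
}

public class DebugLogView
{
    public DebugLogView(HealthModel model)
    {
        model.Changed += hp => Console.WriteLine("[log] health is now " + hp);
    }
}

The model stays trivially unit-testable, and the same model can drive a HUD, a debug overlay, or no view at all.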
I know this question might be outdated, but I need to reply on it.
Actually, I started programming a game in Lua (with LÖVE) and I started programming an MVC framework for it.
First of all, whether to use MVC really depends on what you want.
I know my problems with game programming: when the program becomes bigger, the structure mostly becomes too complex to maintain.
Next thing is, I know that I will change all the graphics when I find an artist who is willing to work on it. But until then, I'm gonna use my own dummy graphics.
I want the artist to feel free to do whatever he wants, without being dependent on any resolution or color restriction.
That means I might have to change the whole (!) presentation code, maybe even the way objects interact (collision detection, for example).
The game logic is captured in the models, so I can concentrate on that. And I think game logic is the most important part of making a game. Isn't it?
Hope you see my point.
But if you have everything together (all the graphics, sounds, the whole thing), then you can code straight ahead.
My MVC framework is configuration-over-convention, which slows down prototyping a bit.
BUT(!) iterations of development can be made much more easily. Testing, especially unit testing, gets done much faster.
I would say MVC turns your development-speed curve (which is normally an anti-exponential curve) into an exponential curve: slow at the beginning, but faster and faster at the end.
MVC works really well for games, at least for my games which are designed for cross-platform.
It really depends on how you implement it in order to get the benefit.