We are designing our new product, which will include multi-tenancy. It will be written in ASP.NET and C#, and may be hosted on Windows Azure or some other Cloud hosting solution.
We’ve been looking at MVC and other technologies and, to be honest, we’re getting bogged down in various acronyms (MVC, EF, WCF etc. etc.).
A particular requirement of our application is causing a headache – the users will be able to add fields to the database, or even create a whole new module.
As a result, each tenant would have a database with a different structure to every other tenant using the system. We envisage that every tenant will have their own database, rather than sharing a database.
(Adding fields etc. to the system will be accomplished using a web interface).
All well and good, but the problem comes when creating a data model for MVC. Modifying a data model programmatically to add a field to a table seems to be impossible, according to this link:
Create EDM during runtime?
This is a major headache for us. Even if we don’t use MVC, I think we’d still want to create a data model (perhaps for use with LINQ to SQL).
We’re considering having a table with loads of fields in it, and instead of adding fields to the database we allocate an existing field in the table when the user wants to add a field to his form. Not sure I like that idea, though.
Of course, we don’t have to use MVC or Entity Framework, but it appears to me that these are the kind of technologies that Microsoft would steer us towards for future development.
Any thoughts? I’m assuming that we’re not the first people in the world to consider this idea of a user-customisable application.
I'd make sure that you have fully explored the option of creating 'Name-Value Pair' type tables as described here http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_nvp
before you start looking at a customizable schema. Also don't forget that a customizable schema means you will have to grant much higher permissions to your SQL accounts so that they can create tables on the fly. It wouldn't be advisable to assign these higher permissions to a tenant's account, but rather to a separate provisioning account which can perform these tasks.
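To make the name-value-pair idea concrete, here is a minimal sketch of how it might look as EF Code First entities. All class and property names are illustrative assumptions, not something from the linked article:

```csharp
// Minimal sketch of the name-value-pair (EAV) idea with EF Code First.
// Class and property names are illustrative, not from the original post.
public class CustomFieldDefinition
{
    public int Id { get; set; }
    public string ModuleName { get; set; }   // e.g. "Customer"
    public string FieldName { get; set; }    // label the tenant chose
    public string DataType { get; set; }     // "string", "int", "date", ...
}

public class CustomFieldValue
{
    public int Id { get; set; }
    public int CustomFieldDefinitionId { get; set; }
    public int EntityId { get; set; }        // the row this value belongs to
    public string Value { get; set; }        // stored as text, converted on read
    public virtual CustomFieldDefinition Definition { get; set; }
}
```

Adding a "field" then just means inserting a CustomFieldDefinition row, so no DDL and no elevated permissions are needed at runtime.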
Also, before investing effort into EF, try googling 'EF Vote of No Confidence'. It was raised (I believe) mainly in reaction to earlier versions but it's definitely worth reading up on. NHibernate is an alternative worth investigating.
Just off the top of my head it sounds like a bad idea to allow users to change the database schema. I think you are missing a layer of abstraction. In my mind, it would be more correct to use the database to hold data that describes the format of a customer's data. The actual data would then be saved in a text column as xml, including version information.
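As a rough sketch of that extra layer of abstraction (all names invented for illustration): the row carries a format version plus an XML blob, and the tenant's custom fields are serialized into it rather than into real columns:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Rough sketch of the "data that describes the format of a customer's data"
// idea. All class, column and element names here are illustrative assumptions.
public class CustomerRecord
{
    public int Id { get; set; }
    public int FormatVersion { get; set; }   // which field layout this row uses
    public string CustomData { get; set; }   // XML blob holding the tenant's fields
}

public static class CustomFieldXml
{
    public static string Serialize(IDictionary<string, string> fields, int version)
    {
        var doc = new XElement("fields",
            new XAttribute("version", version),
            fields.Select(f => new XElement("field",
                new XAttribute("name", f.Key), f.Value)));
        return doc.ToString();
    }
}
```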
This solution may not fit your needs, but I don't know the details of your project. So just consider it my 5 cents.
Many modern SQL databases now support JSON storage in a column (PostgreSQL's 'jsonb' type, for example); other types such as Postgres's hstore exist too. Forget about XML - that's yesterday, and no self-respecting application uses XML unless it is for importing/converting old data.
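If you do go the PostgreSQL jsonb route, a parameterised write from C# might look roughly like this. This is only a sketch; the customer_data table is a made-up example:

```csharp
using Npgsql;
using NpgsqlTypes;

// Sketch of writing a tenant's custom fields into a jsonb column.
// The customer_data table and column names are assumptions.
public static class JsonbExample
{
    public static void SaveCustomFields(string connectionString, int id, string json)
    {
        using (var conn = new NpgsqlConnection(connectionString))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(
                "INSERT INTO customer_data (id, fields) VALUES (@id, @fields)", conn))
            {
                cmd.Parameters.AddWithValue("id", id);
                cmd.Parameters.Add(new NpgsqlParameter("fields", NpgsqlDbType.Jsonb)
                {
                    Value = json   // e.g. "{\"colour\": \"red\", \"size\": \"XL\"}"
                });
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```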
We plan to change our multitenant ordering system on the intranet.
All products of the product catalog are retrieved through web services. This back-end architecture cannot be replaced. Today, however, we are facing performance problems that should be eliminated with the new solution.
Therefore, we plan to use one caching database per tenant, and we have run initial tests with RavenDB.
The product catalog is relatively static, and we mainly will read data from the cache.
Data is only written for the intermediate storage of the shopping cart.
We plan to regenerate each database once per hour, and then replace the existing database with the new one. We hope that this simplifies the update of the caching databases with the new product catalog.
We have doubts, however, about whether this conflicts with the architecture of RavenDB (existing indexes, references).
Is our approach at all possible?
Has anyone found a good solution in a similar situation?
Thank you for your help
MS007,
Using RavenDB as persistent view model storage is very common.
But I don't see why you want to actually refresh the RavenDB databases on an hourly basis. It would be much cheaper to simply refresh the changed data, and you don't have to worry about what is going on in the system while you are dropping a database and creating a new one.
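As a sketch of what "refresh only the changed data" can look like with the RavenDB .NET client (Product, the id convention and the method name are illustrative assumptions, not anything from the question):

```csharp
using System.Collections.Generic;
using Raven.Client;

// Illustrative catalogue item; the real document shape is up to you.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Sketch: refresh only the changed catalogue entries rather than dropping and
// re-creating each tenant database every hour. documentStore is your singleton
// IDocumentStore.
public static class CatalogueRefresher
{
    public static void RefreshChangedProducts(IDocumentStore documentStore,
                                              string tenantDatabase,
                                              IEnumerable<Product> changedProducts)
    {
        using (var session = documentStore.OpenSession(tenantDatabase))
        {
            foreach (var product in changedProducts)   // delta pulled from the web services
            {
                // Storing under an existing id overwrites that document (an upsert),
                // so existing indexes and references to the id keep working.
                session.Store(product, "products/" + product.Id);
            }
            session.SaveChanges();
        }
    }
}
```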
I am creating a web application using MVC3. I have one database which contains all of the tables I use in the application (including a Users and Roles table, which are the focus of this question).
I have already set it up (very quickly) to authenticate the user using SetAuthCookie. However, as my application has different roles, I also need to be able to find out if the user is in a specific role before they can access specific controllers.
I have spent the last few days trying and failing to implement various solutions from here and general online searches. Topics of discussion have been FormsAuthentication, IIdentity, IPrincipal, MembershipProvider, RoleProvider. And after several failed attempts I am completely confused by everything.
In the past I have used the default accounts stuff using the ASPNET Membership DB, and since I didn't have to touch it at all, I had no problems. But I would prefer all of my tables to be in the same DB this time around. I would also prefer to be able to decorate the controllers with [Authorize], [Authorize(Roles = "Sysadmin")] etc.
I am aware that this and similar questions are asked a lot and that there are several resources available which cover the topics described above, but the problem is that the solutions are conflicting, or assume knowledge of other related fields, or the presence of non-standard classes, and even in some cases provide a flawed solution "for simplicity's sake".
So to recap:
I have a blank MVC3 project.
I have a Database with my Users Table and Roles Table (each user can have one role).
I want to decorate the controllers with [Authorize] and [Authorize(Roles = "Sysadmin")].
I do not care how complex the solution is. I am willing to spend as much time as necessary to get the most secure and efficient solution.
Thanks in advance
You can write custom membership providers; here is a link that explains more:
MVC 3 Custom Membership Providers
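Since the question is specifically about roles, the role half of that is a custom RoleProvider. Below is a minimal, hedged sketch; MyDbContext and the Users/Roles shapes are assumptions based on the question, and only the members that [Authorize(Roles = "...")] actually relies on are implemented:

```csharp
using System;
using System.Linq;
using System.Web.Security;

// Minimal custom RoleProvider backed by your own Users/Roles tables.
// MyDbContext (an EF context with a Users set, each user having one Role)
// is an assumption based on the question.
public class MyRoleProvider : RoleProvider
{
    public override bool IsUserInRole(string username, string roleName)
    {
        using (var db = new MyDbContext())
        {
            return db.Users.Any(u => u.UserName == username
                                  && u.Role.Name == roleName);
        }
    }

    public override string[] GetRolesForUser(string username)
    {
        using (var db = new MyDbContext())
        {
            return db.Users.Where(u => u.UserName == username)
                           .Select(u => u.Role.Name)
                           .ToArray();
        }
    }

    // The remaining abstract members must still be overridden; for a
    // read-only provider they can simply throw.
    public override string ApplicationName { get; set; }
    public override void CreateRole(string roleName) { throw new NotSupportedException(); }
    public override bool DeleteRole(string roleName, bool throwOnPopulatedRole) { throw new NotSupportedException(); }
    public override void AddUsersToRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override void RemoveUsersFromRoles(string[] usernames, string[] roleNames) { throw new NotSupportedException(); }
    public override string[] GetUsersInRole(string roleName) { throw new NotSupportedException(); }
    public override string[] GetAllRoles() { throw new NotSupportedException(); }
    public override bool RoleExists(string roleName) { throw new NotSupportedException(); }
    public override string[] FindUsersInRole(string roleName, string usernameToMatch) { throw new NotSupportedException(); }
}
```

Register the provider in web.config under system.web/roleManager as the default provider, and [Authorize(Roles = "Sysadmin")] on a controller or action will then call into it.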
I have a website developed with ASP.NET MVC, Entity Framework Code First and SQL Server.
The website has entities that each have a history of statuses that we defined (NEW, PACKED, SHIPPED etc.)
The DB contains a table in which a completely separate system inserts parcel tracking data.
I have to read this tracking data and, following certain business rules, add to the existing status history of my entities.
The best way I can think of is to write an independent Windows service to poll the tracking data every so often and update my entity statuses from that. However, that makes me concerned about DB concurrency issues.
Please could someone advise me on the best strategy for this scenario?
Many thanks
There are different ways to do it. It also depends on the response time you need. If you need to update your system as soon as the tracking system updates the record, then a trigger is the preferred way. An alternative is to schedule a job which runs every 15-30 minutes and syncs the two systems.
As for the concurrency issue, you can use a concurrency token field. Entity Framework has support for this.
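A rough sketch of that concurrency token with EF Code First; the entity shape and MyDbContext are assumptions based on the question, not a prescribed design:

```csharp
using System.ComponentModel.DataAnnotations;
using System.Data.Entity.Infrastructure;

// Sketch of the concurrency-token suggestion with EF Code First.
public class StatusHistoryEntry
{
    public int Id { get; set; }
    public string Status { get; set; }

    [Timestamp]                              // maps to a SQL Server rowversion column;
    public byte[] RowVersion { get; set; }   // EF uses it as the concurrency token
}

public static class StatusUpdater
{
    public static void SaveWithConcurrencyCheck(MyDbContext db)
    {
        try
        {
            db.SaveChanges();
        }
        catch (DbUpdateConcurrencyException)
        {
            // Another writer (the website) changed the row since it was read:
            // reload the entity, re-apply the business rules, and save again.
        }
    }
}
```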
MVC sets up clear distinction between Model, View and Controller.
For the model, nowadays, web frameworks provide the ability to map the model directly to database entities (ORM), which, IMHO, ends up causing performance issues at runtime due to direct database I/O.
The thing is, if that's really the case, why is ORM so popular, and why does every web framework want to support it, natively or otherwise?
For a web site with a huge amount of traffic it definitely won't work. But what's the workaround? Connecting directly to the database is definitely not a wise solution here.
What's your question?
Is it a good idea to use direct db access from webpages?
A: No.
Is it a good idea to use ORM's?
A: Debatable : See How can I design a Java web application without an ORM and without embedded SQL
Is it a good idea to use MVC model?
A: Yes - it has nothing to do with "Direct" database access - it's about separating your application logic from your model and your display. (Put simply).
And the rationale for not putting database logic inside webpages has nothing to do with performance - it's about security/maintainability etc etc. Calling a usp from a webpage is likely to be MORE performant than using an ORM, but it's bad because the performance gain is negligible, and the cons are significant.
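To make the comparison concrete, this is roughly what the two options look like side by side; usp_GetOrders and the parameter names are invented for illustration:

```csharp
using System.Data;
using System.Data.SqlClient;

// What "calling a usp" looks like versus the ORM equivalent.
// usp_GetOrders and @CustomerId are made-up names.
public static class DirectDataAccess
{
    public static DataTable GetOrdersDirect(string connectionString, int customerId)
    {
        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand("usp_GetOrders", conn)
                         { CommandType = CommandType.StoredProcedure })
        {
            cmd.Parameters.AddWithValue("@CustomerId", customerId);
            var table = new DataTable();
            new SqlDataAdapter(cmd).Fill(table);   // opens/closes the connection itself
            return table;
        }
    }

    // The ORM version is a couple of lines, and for most pages the difference
    // in cost is negligible:
    //   var orders = db.Orders.Where(o => o.CustomerId == customerId).ToList();
}
```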
As to the workaround: if you mean how to hook up a database to a web application...?
The simplest way is to use something like Entity Framework or LINQ to SQL with your Model - there are plenty of examples of this in tutorials on the web.
A better method IMO is to have a separate Services layer (which may be WCF-based) and have all the database access inside that, with DTOs transferring the data to your Web Application, which has its own ViewModel.
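A bare-bones sketch of that layering, with made-up names, just to show the shape of it:

```csharp
using System.Collections.Generic;
using System.Runtime.Serialization;
using System.ServiceModel;

// Sketch of the separate services layer: the web app talks to a WCF contract
// and only ever sees DTOs, never the database entities. All names illustrative.
[DataContract]
public class ProductDto
{
    [DataMember] public int Id { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public decimal Price { get; set; }
}

[ServiceContract]
public interface IProductService
{
    [OperationContract]
    IList<ProductDto> GetProductsForCategory(int categoryId);
}

// The MVC controller then maps ProductDto onto its own ViewModel,
// so the display model can evolve independently of the database model.
```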
MVC is not about ORM but about the separation of display logic and business logic. There is no reason your exposed model needs to be identical to your database model, and there are many reasons to ensure that the exposed model closely matches what is to be displayed.
The other part of the solution, to scale well, would be to implement caching in the controller and be able to distribute load across several instances.
I think @BonyT has given a good answer (and I've voted for it :) ). I'd just add that:
"web frameworks provide the ability to map the model directly to database entities (ORM), which, IMHO, ends up causing performance issues at runtime due to direct database I/O"
Even if this is true, using an ORM solves a lot of problems by keeping the model easy to update and to translate back and forth to the database. Solving a performance hit by buying extra web servers or cloud instances is much cheaper than having to buy extra developers or extra development hours to solve things other people have already written ORMs to do for you.
This seems to be an overlooked area that could really use some insight. What are your best practices for:
making an upgrade procedure
backing out in case of errors
syncing code and database changes
testing prior to deployment
mechanics of modifying the table
etc...
Liquibase
liquibase.org:
it understands Hibernate definitions.
it generates better schema update SQL than Hibernate
it logs which upgrades have been made to a database
it handles two-step changes (i.e. delete a column "foo" and then rename a different column to "foo")
it handles the concept of conditional upgrades
the developer actually listens to the community (with Hibernate, if you are not in the "in" crowd or are a newbie, you are basically ignored.)
http://www.liquibase.org
opinion
The application should never handle a schema update. This is a disaster waiting to happen. Data outlasts the applications, and as soon as multiple applications try to work with the same data (the production app + a reporting app, for example), chances are they will both use the same underlying company libraries... and then both programs decide to do their own DB upgrade... have fun with that mess.
I am a big fan of the Red Gate products that help create SQL packages to update database schemas. The database scripts can be added to source control to help with versioning and rollback.
In general my rule is: "The application should manage its own schema."
This means schema upgrade scripts are part of any upgrade package for the application and run automatically when the application starts. In case of errors the application fails to start and the upgrade script transaction is not committed. The downside to this is that the application has to have full modification access to the schema (this annoys DBAs).
I've had great success using Hibernate's SchemaUpdate feature to manage the table structures, leaving the upgrade scripts to handle only actual data initialization and the occasional removal of columns (SchemaUpdate doesn't do that).
Regarding testing, since the upgrades are part of the application, testing them becomes part of the test cycle for the application.
Afterthought: Taking on board some of the criticism in other posts here, note the rule says "it's own". It only really applies where the application owns the schema as is generally the case with software sold as a product. If your software is sharing a database with other software, use other methods.
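For what it's worth, here is a rough sketch of what that startup upgrade step can look like in C# against SQL Server; the schema_version table and the way scripts are bundled with the build are assumptions, not anything prescribed above:

```csharp
using System.Collections.Generic;
using System.Data.SqlClient;

// Sketch of "the application manages its own schema": at startup, compare the
// stored schema version with the scripts shipped in this build and apply any
// missing ones in a single transaction.
public static class SchemaUpgrader
{
    public static void Upgrade(string connectionString,
                               SortedDictionary<int, string> scriptsByVersion)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            using (var tx = conn.BeginTransaction())
            {
                var current = (int)new SqlCommand(
                    "SELECT version FROM schema_version", conn, tx).ExecuteScalar();

                foreach (var script in scriptsByVersion)     // ascending version order
                {
                    if (script.Key <= current) continue;
                    new SqlCommand(script.Value, conn, tx).ExecuteNonQuery();
                    new SqlCommand("UPDATE schema_version SET version = " + script.Key,
                                   conn, tx).ExecuteNonQuery();
                }

                tx.Commit();   // any exception rolls the whole upgrade back,
            }                  // and the application refuses to start
        }
    }
}
```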
That's a great question. (There is a high chance this is going to end up a normalised-versus-denormalised database debate, which I am not going to start... okay, now for some input.)
Some off-the-top-of-my-head things I have done (will add more when I have some more time or need a break):
Client design - this is where the VB method of inline SQL (even with prepared statements) gets you into trouble. You can spend AGES just finding those statements. If you use something like Hibernate and put as much SQL as possible into named queries, you have a single place for most of the SQL (nothing is worse than trying to test SQL that sits inside some IF statement when your testing never hits the "trigger" criteria for that IF). Prior to using Hibernate (or other ORMs), when I did SQL directly in JDBC or ODBC, I would put all the SQL statements either as public fields of an object (with a naming convention) or in a property file (also with a naming convention for the values, say PREP_STMT_xxxx), and use reflection or iterate over the values at startup in a) test cases and b) the startup of the application. Some RDBMSs allow you to pre-compile prepared statements before execution, so on startup, post-login, I would pre-compile the prepared statements to make the application self-testing. Even for hundreds of statements, on a good RDBMS that's only a few seconds, and only once. And it has saved my butt a lot. On one project the DBAs wouldn't communicate (a different team, in a different country) and the schema seemed to change NIGHTLY, for no reason - and each morning we got a list of exactly where it broke the application, on startup.
If you need ad hoc functionality, put it in a well-named class (again, a naming convention helps with automated testing) that acts as some sort of factory for your query (i.e. it builds the query). You are going to have to write the equivalent code anyway, right? Just put it in a place where you can test it. You can even write some basic test methods on the same object or in a separate class.
If you can, also try to use stored procedures. They are a bit harder to test than the above, and some databases don't pre-validate the SQL in stored procs against the schema at compile time, only at run time. Checking usually involves, say, taking a copy of the schema structure (no data) and then creating all stored procs against this copy (in case the DB team making the changes didn't validate correctly) - that way the structure can be checked. As a point of change management, though, stored procs are great: on a change, everyone gets it, especially when the DB changes are a result of business-process changes, and all languages (Java, VB, etc.) get the change.
I usually also set up a table called system_setting or similar, in which we keep a VERSION identifier. This is so that client libraries can connect and validate whether they are valid for this version of the schema. Depending on the changes to your schema, you don't want to allow clients to connect if they can corrupt your schema (i.e. you don't have a lot of referential rules in the DB, but rather on the client). This matters if you are also going to have multiple client versions (which does happen in non-web apps, i.e. people running the wrong binary), and you may also have batch tools etc. Another approach I have used is to define a set of schema-to-operation versions in some sort of property file, or again in a system_info table. This table is loaded on login and then used by each "manager" (I usually have some sort of client-side API to do most DB work) to validate, for each operation, whether it is on the right version. Thus most operations can succeed, but you can also fail (throw some exception) on out-of-date methods, and the failure tells you WHY.
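A minimal sketch of that system_setting version check, assuming a simple name/int_value layout for the table (the actual shape is up to you):

```csharp
using System;
using System.Data.SqlClient;

// Sketch: the client library reads the schema VERSION at login and refuses to
// run if it doesn't match the version it was built against. Table and column
// names follow the post's convention but are otherwise assumptions.
public static class SchemaGuard
{
    public const int ExpectedSchemaVersion = 42;   // baked into this client build

    public static void EnsureCompatible(string connectionString)
    {
        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            var actual = (int)new SqlCommand(
                "SELECT int_value FROM system_setting WHERE name = 'VERSION'",
                conn).ExecuteScalar();

            if (actual != ExpectedSchemaVersion)
                throw new InvalidOperationException(string.Format(
                    "Client built for schema {0} but database is at {1}; refusing to connect.",
                    ExpectedSchemaVersion, actual));
        }
    }
}
```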
Managing the change to the schema -> do you update the table or add 1-1 relationships to new tables? I have seen a lot of shops which always access data via a view for this reason. This allows table names, columns etc. to change. I have played with the idea of actually treating views like interfaces in COM, i.e. you add a new VIEW for new functionality / versions. Often, what gets you here is that you can have a lot of reports (especially end-user custom reports) that assume table formats. The views allow you to deploy a new table format but still support existing client apps (remember all those pesky ad hoc reports).
Also, you need to write update and rollback scripts. And again: TEST, TEST, TEST...
------------ OKAY - THIS IS A BIT RANDOM DISCUSSION TIME --------------
I actually had a large commercial project (i.e. a software shop) where we had the same problem. The architecture was two-tier and they were using a product a bit like PHP, but pre-PHP. Same thing, different name. Anyway, I came in at version 2...
It was costing A LOT OF MONEY to do upgrades. A lot, i.e. giving away weeks of free consulting time on site.
We were getting to the point of wanting to either add new features or optimize the code. Some of the existing code used stored procedures, so we had common points where we could manage code, but other areas were this embedded SQL markup in HTML - great for getting to market quickly, but with each iteration of new features the cost to test and maintain at least doubled. So when we were looking at pulling the PHP-type code out, putting in data layers (this was 2001-2002, pre any ORMs etc.) and adding a lot of new features (customer feedback), we looked at this issue of how to engineer UPGRADES into the system. Which is a big deal, as upgrades cost a lot of money to do correctly. Now, most patterns and all the other stuff people discuss with a degree of energy deal with OO code that is running, but what about the fact that a) your data has to integrate with this logic, and b) the meaning and also the structure of the data can change over time? And often, due to the way data works, you end up with a lot of sub-processes / applications in your client's organisation that need that data: ad hoc reporting or complex custom reporting, as well as batch jobs that have been built for custom data feeds, etc.
With this in mind I started playing with something a bit left of field. It rests on a few assumptions: a) data is read far more heavily than it is written; b) updates do happen, but not at bank levels, i.e. say one or two a second.
The idea was to apply a COM / interface view to how data was accessed by clients over a set of CONCRETE tables (which varied with schema changes). You could create a separate view for each type of operation - update, delete, insert and read. This is important. The views would either map directly to a table, or allow you to trigger off a dummy table that does the real updates or inserts etc. What I actually wanted was some sort of trappable level of indirection that could still be used by Crystal Reports etc. NOTE - for inserts, updates and deletes you could also use stored procs. And you had a version for each version of the product. That way your version 1.0 had its version of the schema, and if the tables changed, you would still have the version 1.0 VIEWS but with NEW backend logic to map to the new tables as needed, while you also had version 2.0 views that supported new fields etc. This was really just to support ad hoc reporting, which, if you're a BUSINESS person and not a coder, is probably the whole point of why you have the product. (Your product can be crap, but if you have the best reporting in the world you can still win; the reverse is true - your product can be the best feature-wise, but if it's the worst at reporting you can very easily lose.)
Okay, hope some of those ideas help.
These are all weighty topics, but here is my recommendation for updating.
You did not specify your platform, but for NAnt build environments I use Tarantino. For every database update you are ready to commit, you make a change script (using Red Gate or another tool). When you build to production, Tarantino checks whether the script has been run on the database (it adds a table to your database to keep track). If not, the script is run. It takes all the manual work (read: human error) out of managing database versions.
I've heard good things about iBATIS 3 Schema Migrations System:
User Guide: http://svn.apache.org/repos/asf/ibatis/java/ibatis-3/trunk/doc/en/iBATIS-3-Migrations.pdf
As Pat said, use Liquibase, especially when you have several developers with their own dev databases making changes that will become part of the production database.
If there's only one dev, as on one project I'm on now (ha), I just commit the schema changes as SQL text files into a CVS repo, which I check out in batches on the production server when the code changes go in.
But liquibase is better organized than that!