How to handle multiple customers with different SQL databases - visual-studio

Summary
I have a project with multiple existing MSSQL databases. I have already created an Azure Analysis Services instance, deployed my first Tabular Cube to it, and tested access to the model, which worked perfectly.
Now I have to repeat this setup for ~90 databases (90 different customers).
I'm unsure how to organize this project and what possibilities I have.
What I did
I have already searched the Internet, but I only found a single source where somebody asked a similar question. The first reply there describes what I was already considering, as described below.
The last reply I don't really understand: what does the author mean by one solution? Is there another hierarchy level above the project?
Question
One possibility would be to import each database as a source into the same project, but I think that means I would have to import every table from every source, which ends up at 5 * 90 = 450 tables. I suspect that would quickly get out of control.
I also thought about duplicating the whole Visual Studio project folder ~90 times, once per customer, but at the moment I can't find all the references that would need renaming. Still, I don't think that part would be too hard.
Is there an easier way to achieve my goal, especially regarding maintainability?
Solution
I will create a completely new database with all the needed tables. Into those tables I copy the data from all customer databases, adding a new customerId column. The data will be transferred by a recurring job (the interval is still to be defined). Updates to rows that already exist in a customer database will be handled with a trigger.

For this, the best approach would be to create a staging database and import the data from the other databases into it, so your Tabular Model can read the data from there.
Handling 90+ databases is going to be a massive admin overhead, and getting the cube to load them efficiently is going to be problematic. Move the data using SSIS/Data Factory, as you'll be able to orchestrate the data movement and incremental loads better that way. Then, if you need to add/remove/update data sources, it is not done in the cube; it's all done at the database/Data Factory level.

Just use one database for all the customers and differentiate each customer with a customer_id column.
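Below is a minimal C# sketch of that consolidation idea: one staging/reporting database whose tables carry a CustomerId column, filled by a periodic job that reads each customer's source database. The connection strings, the Orders table and its columns are assumptions for illustration, and the sketch does a full reload; for real data volumes an incremental load (a MERGE on the key, or SSIS/Data Factory as suggested above) would be the better fit.

using System.Collections.Generic;
using System.Data;
using System.Data.SqlClient;

class StagingLoader
{
    static void Main()
    {
        // customerId -> connection string of that customer's source database (placeholders)
        var customers = new Dictionary<int, string>
        {
            { 1, "Server=srv1;Database=Customer1;Integrated Security=true" },
            { 2, "Server=srv2;Database=Customer2;Integrated Security=true" }
        };
        const string stagingConn = "Server=reporting;Database=Staging;Integrated Security=true";

        foreach (var customer in customers)
        {
            // Pull the source rows for this customer.
            var table = new DataTable();
            using (var source = new SqlConnection(customer.Value))
            using (var adapter = new SqlDataAdapter("SELECT OrderId, Amount, OrderDate FROM dbo.Orders", source))
            {
                adapter.Fill(table);
            }

            // Tag every row with the customer it came from.
            var idColumn = table.Columns.Add("CustomerId", typeof(int));
            foreach (DataRow row in table.Rows) row[idColumn] = customer.Key;

            // Write into the shared staging table that the Tabular Model reads.
            using (var target = new SqlConnection(stagingConn))
            using (var bulk = new SqlBulkCopy(target) { DestinationTableName = "dbo.Orders" })
            {
                target.Open();
                bulk.ColumnMappings.Add("OrderId", "OrderId");
                bulk.ColumnMappings.Add("Amount", "Amount");
                bulk.ColumnMappings.Add("OrderDate", "OrderDate");
                bulk.ColumnMappings.Add("CustomerId", "CustomerId");
                bulk.WriteToServer(table);
            }
        }
    }
}

With this layout, the Tabular Model has a single data source and adding or removing a customer only changes the job's configuration, not the cube.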

Related

Getting into designing dashboards and need some help identifying each technical layer along the way

So I will be embarking on designing a dashboard that will display KPIs and other relevant information for my team. Since I am in the early stages of this project and am not very familiar with the technical process behind designing a dashboard, I need some questions vetted first before I go shopping for solutions, to avoid reinventing the wheel.
Here are some of my questions:
We want a dashboard that can provide live information from our data sources (or as close to live as possible). What mechanism allows a dashboard to update itself from its data sources? From a conceptual standpoint, I can understand creating a dashboard in Microsoft Excel and having the dashboard depend on the values you have set within your pivot table.
How do you make a dashboard request information from multiple data sources on its own? In the Excel example, a user may have to go into the pivot tables to update values, but I want to know how a dashboard would request this by itself, and what the exact method is from a programming standpoint. Does the code execute every time you refresh the web page?
How do you create data sources organically? I know that for some solutions, such as SharePoint BI Center, there are pre-supported data sources like an Excel sheet or SharePoint, and it's as easy as uploading your document and letting the designer handle the rest. However, there are going to be some data sources that I know will need to be fetched. Do I need to understand something else, like an event recorder, in order to navigate this issue?
Introduction
The dashboard (or report) is usually the result of a long chain of steps. Very much simplified, it could look like this:
src 1 ----\
src 2 -----+---[DWH]---[BR]---+---- Dashboards
  ...      |     |            \---- Reports etc.
src n ----/  [Big Data]
Keep in mind, this is only a very, very simple structure of a data backend / frontend.
DWH means Data Warehouse, where data might be stored temporarily (you referred to this as fetching). This could be a database, could be a Big Data engine, could be a combination of both...
Afterwards, there are Business Rules (BR). Those might be specific rules in how different departments calculate and relate to data, but also simple things like algebra.
Questions
So, the main question should not be about the technology:
What software should we choose?
How can we create a dashboard?
but should instead be focused on your business processes (think of it as a top-down view):
What does our core process look like? Where would I like to measure data?
How does department A calculate sales differently from department B? Should they all use the same rule?
Where does everyone store the data? Can we access it? Do we need structured data?
And, very easy to forget but sometimes one of the biggest parts: is the identifier of a business object (say, a sales ID) built and formatted the same way everywhere?
Conclusion
When those questions are at least in the back of your head and you keep working in this direction, more or less automatically data will spill out at certain points of that process.
Then it won't matter whether you use Excel, a small-to-medium tool like Tableau, TIBCO Spotfire, QlikView or Power BI, or whether you want to go full scale with a big Hadoop backend, databases and JasperReports, Apache Drill, Pentaho or SSIS on top of it... the data will come out eventually.
TL;DR
Focus on the processes first. Make sure to understand them. Draft in Excel. Then proceed in getting the data and the tools you need to help your use cases. It will work out much better from a "top-down" approach than trying to solve your requirements with tools only.

Handling passive deletion updates (ie. archiving instead of deleting)

We are developing an application based on DDD principles. We have encountered a couple of problems so far that we can't answer nor can we find the answers on the Internet.
Our application is intended to be a cloud application for multiple companies.
One of the demands is that there are no physical deletions from the database. We only delete passively, by setting the Active property of an entity to false. That takes care of select, insert and delete operations, but we don't know how to handle updates.
An update means changing the values of properties, but it also means that the past values are lost, and there are many reasons we don't want that. One of the primary reasons is accounting.
If we turn every update into "archive the old values" and then "create the new values", we end up with a great number of duplicates. For example, a Company has Branches, and Company is the aggregate root for Branches. If I change the company's phone number, that would mean archiving the old company and all of its branches and creating a completely new company with branches, just for one property. This may look acceptable at first, but over time the duplicates can clog up the database. A phone number is maybe an irrelevant property, but changing the address (say the street name has changed while the company is still in the same physical location) is a far more serious problem.
Currently we are using ASP.NET MVC with EF Code First for the repository, but one of the demands is that we can easily switch to, or add, another technology like WPF or WCF. We use AutoMapper to map DTOs to domain entities and vice versa, and the DTOs are the primary source for the views, i.e. we have no view models. The application is layered according to DDD principles, and the mapping occurs in the service layer.
Another demand is that we mustn't create an initial entity in the database and then fill in the values; an entire aggregate should be stored as a whole.
Any comments or suggestions are appreciated.
We also welcome changes to the demands (as this is an internal project, not for a customer) and to the architecture, but only if absolutely necessary.
Thank you.
Have you ever come across event sourcing? Sounds like it could be of use if you're interested in tracking the complete history of aggregates.
To be honest, I would create another table that acts as a change log, inserting the old records and deleted records into it before updating the live data. Yes, you are creating a lot of records, but you are keeping that history out of the live tables and keeping the live data as lean as possible.
Also, when it comes to clean-up and backup, you have your live data on one side and your changed/deleted data on the other, so you can routinely back up and trim the old changed/deleted data to reduce its size, depending on how long you have agreed to keep it available with the supplier or business you are working with.
I think this is the best way to go, as your core functionality will be working on a leaner dataset, and I'm assuming your users won't want to check revisions and deletions of records all the time. By separating the data you only access it when it is needed, instead of all the time because everything is intermingled.
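As a rough illustration of that change-log approach, here is a minimal EF Code First sketch (the question mentions EF CF): before a property is updated, the old value is copied into a separate ChangeLog table, so the live row stays lean and nothing is physically deleted. The entity names, fields and the simple text snapshot are assumptions for illustration only.

using System;
using System.Data.Entity;
using System.Linq;

public class Company
{
    public int Id { get; set; }
    public string Phone { get; set; }
    public bool Active { get; set; }   // the "passive deletion" flag from the question
}

public class ChangeLog
{
    public int Id { get; set; }
    public string EntityName { get; set; }
    public int EntityId { get; set; }
    public string OldValues { get; set; }      // snapshot of the fields being changed
    public DateTime ChangedAtUtc { get; set; }
}

public class CompanyDbContext : DbContext
{
    public DbSet<Company> Companies { get; set; }
    public DbSet<ChangeLog> ChangeLogs { get; set; }
}

public class CompanyService
{
    public void ChangePhone(int companyId, string newPhone)
    {
        using (var db = new CompanyDbContext())
        {
            var company = db.Companies.Single(c => c.Id == companyId);

            // Archive only the old value of the changed field, not the whole aggregate,
            // so history is kept without duplicating every Branch of the Company.
            db.ChangeLogs.Add(new ChangeLog
            {
                EntityName = "Company",
                EntityId = company.Id,
                OldValues = "Phone=" + company.Phone,
                ChangedAtUtc = DateTime.UtcNow
            });

            company.Phone = newPhone;
            db.SaveChanges();
        }
    }
}

The change-log rows can then be trimmed or archived on their own schedule, exactly as described above, without touching the live tables.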

Best strategy for retrieving large dynamically-specified tables on an ASP.NET page

Looking for a bit of advice on how to optimise one of our projects. We have an ASP.NET/C# system that retrieves data from a SQL Server 2008 database and presents it in a DevExpress ASPxGridView. The data that's retrieved can come from one of a number of databases, all of which are slightly different and are being added and removed regularly. The user is presented with a list of live "companies", and the data is retrieved from the corresponding database.
At the moment, data is being retrieved using a standard SqlDataSource and a dynamically-created SQL SELECT statement. There are a few JOINs in the statement, as well as optional WHERE constraints, again dynamically-created depending on the database and the user's permission level.
All of this works great (honest!), apart from performance. When it comes to some databases, there are several hundreds of thousands of rows, and retrieving and paging through the data is quite slow (the databases are already properly indexed). I've therefore been looking at ways of speeding the system up, and it seems to boil down to two choices: XPO or LINQ.
LINQ seems to be the popular choice, but I'm not sure how easy it will be to implement with a system that is so dynamic in nature - would I need to create "definitions" for each database that LINQ could access? I'm also a bit unsure about creating the LINQ queries dynamically too, although looking at a few examples that part at least seems doable.
XPO, on the other hand, seems to allow me to create an XPO data source on the fly. However, I can't find much information on how to JOIN to other tables.
Can anyone offer any advice on which method - if any - is the best to try and retro-fit into this project? Or is the dynamic SQL model currently used fundamentally different from LINQ and XPO and best left alone?
Before you go and change the whole way that your app talks to the database, have you had a look at the following:
Run your code through a performance profiler (such as Redgate's performance profiler), the results are often surprising.
If you are constructing the SQL string on the fly, are you following .NET best practices, such as using String.Concat("str1", "str2") instead of "str1" + "str2"? Remember, multiple small gains add up to big gains.
Have you thought about having a summary table or database that is periodically updated (say every 15 minutes; you might need to run a service to update this data automatically), so that you are only hitting one database? New connections to databases are quite expensive.
Have you looked at the query plans for the SQL you are running? Today I moved a dynamically created SQL string into a sproc (only one parameter changed) and shaved 5-10 seconds off the running time (it was being called 100-10,000 times depending on some conditions).
Just a warning if you do use LINQ: I have seen developers who decided to use LINQ write much less efficient code because they did not know what they were doing (pulling 36,000 records when they needed to check for one, for example). These things are easily overlooked.
Just something to get you started on; hopefully there is something there that you haven't thought of.
Cheers,
Stu
As far as I understand, you are talking about so-called server mode, where all data manipulation is done on the database server instead of passing the records to the web server and processing them there. In this mode the grid works very fast with data sources that contain hundreds of thousands of records. If you want to use this mode, you should create either the corresponding LINQ classes or XPO classes. If you decide to use the LINQ-based server mode, the LinqServerModeDataSource provides the Selecting event, which can be used to set a custom IQueryable and KeyExpression. I would suggest that you use LINQ in your application. I hope this information is helpful to you.
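For what it's worth, a code-behind sketch of that Selecting handler might look like the following. CompanyDataContext (a LINQ to SQL context generated from a .dbml), GetConnectionStringForSelectedCompany, the Orders table and the OrderId key are all hypothetical, and the exact DevExpress namespace and event-args names should be checked against the installed version.

using System;
using System.Linq;
using DevExpress.Data.Linq;   // namespace of the LINQ server-mode event args in recent versions (verify for yours)

public partial class CompanyGridPage : System.Web.UI.Page
{
    protected void LinqServerModeDataSource1_Selecting(object sender, LinqServerModeDataSourceSelectEventArgs e)
    {
        // Open the LINQ to SQL context for the database of the company the user selected.
        var db = new CompanyDataContext(GetConnectionStringForSelectedCompany());

        // Server mode needs the key so it can page and sort on the SQL side.
        e.KeyExpression = "OrderId";

        // Any extra constraints (permission level, company) are applied here as a queryable,
        // so only the visible page of rows ever leaves the database server.
        e.QueryableSource = db.Orders.Where(o => o.Region == CurrentUserRegion);
    }

    private string GetConnectionStringForSelectedCompany()
    {
        // Placeholder: look up the per-company connection string.
        return "";
    }

    private string CurrentUserRegion
    {
        get { return "EU"; }   // placeholder for the logged-in user's permission scope
    }
}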
I guess there are two points where performance might be tweaked in this case. I'll assume that you're accessing the database directly rather than through some kind of secondary layer.
First, you don't say how you're displaying the data itself. If you're loading thousands of records into a grid, that will take time no matter how fast everything else is. Obviously the trick here is to show a subset of the data and allow the user to page, etc. If you're not doing this then that might be a good place to start.
Second, you say that the tables are properly indexed. If that is the case, and assuming that you're not loading 1,000 records into the page at once and are retrieving only subsets at a time, then you should be OK.
But, if you're only doing an ExecuteQuery() against an SQL connection to get a dataset back I don't see how Linq or anything else will help you. I'd say that the problem is obviously on the DB side.
So to solve the problem with the database, you need to profile the different SELECT statements you're running against it, examine the query plans and identify the places where things slow down. You might want to start with SQL Server Profiler, but if you have a good DBA, just looking at the query plan (which you can get from Management Studio) is often enough.

Managing LINQ to SQL .dbml model complexity

This question is addressed to a degree in this question on LINQ to SQL .dbml best practices, but I am not sure how to add to a question.
One of our applications uses LINQ to SQL, and we currently have one .dbml file for the entire database, which is becoming difficult to manage. We are looking at refactoring it into separate files that are more module/functionality specific, but one problem is that many of the high-level classes would have to be duplicated in several .dbml files, since associations can't be used across .dbml files (as far as I know), along with the additional partial class code.
Has anyone grappled with this problem and what recommendations would you make?
Take advantage of the namespace settings. You can get to them in the Properties window by clicking the white space of the O/R designer surface.
This allows me to have a Users table and a User class for one set of business rules and a second (but the same data store) Users table and a User class for another set of business rules.
Or break up the library, which should also have the effect of changing the namespacing, depending on your company's naming conventions. I've never worked on an enterprise app where I needed access to every single table.
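A hand-written illustration of the namespace idea (normally the designer generates these classes from the .dbml; the table, columns and methods here are hypothetical): the same Users table backs a User class in two different namespaces, each carrying only the columns and business rules its module cares about.

using System.Data.Linq.Mapping;

namespace Billing
{
    [Table(Name = "dbo.Users")]
    public partial class User
    {
        [Column(IsPrimaryKey = true)] public int UserId;
        [Column] public string Email;

        // Billing-specific rule lives with Billing.User only.
        public bool CanBeInvoiced() { return !string.IsNullOrEmpty(Email); }
    }
}

namespace Support
{
    [Table(Name = "dbo.Users")]
    public partial class User
    {
        [Column(IsPrimaryKey = true)] public int UserId;
        [Column] public string DisplayName;

        // Support-specific behaviour, unaware of billing concerns.
        public string TicketGreeting() { return "Hello " + DisplayName; }
    }
}

Each module then uses its own DataContext (and its own .dbml), so the shared tables never have to be stitched together across files.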
Past a certain size it probably becomes easier to work with the xml instead of the dbml designer.
I have written a tool too! Mine is for scripting changes to .dbml files using C#, so you can rerun them and not lose your changes. See my blog http://www.adverseconditionals.com for more details.
The approach that we've used is to keep two .dbml files. One of them holds the stored procs, and all production DB access is done through it. The other lives in a unit test folder, holds the tables and their relationships, and is used for DB data manipulation and querying in unit tests.
I have written a utility to address exactly that problem; I needed a quick app that lets you select only the database objects you need. In my case I often needed a complex view, but no tables.
http://www.codeplex.com/SqlMetalInclude/

One database or many?

I am developing a website that will manage data for multiple entities. No data is shared between entities, but they may be owned by the same customer, and a customer may want to manage all their entities from a single "dashboard". So should I have one database for everything, or keep the data separated into individual databases?
Is there a best practice? What are the positives/negatives of having:
a database for the entire site (each entity has a "customerID", each data row has an "entityID")
a database for each customer (data has an "entityID")
a database for each entity (the relation of database to customer is kept outside the database)
Multiple databases seem like they would give better performance (fewer rows and joins) but may eventually become a maintenance nightmare.
Personally, I prefer separate databases, specifically a database for each entity. I like this approach for the following reasons:
Smaller = faster regarding the queries.
Queries are simpler.
No risk of ever accidentally displaying one customer's data to another.
One database could pose a performance bottleneck as it gets large (as the number of entities increases). You get a sort of built-in horizontal scalability with one database per entity.
Easy data clean up as customers or entities are removed.
Sure it'll take more time to upgrade the schema, but in my experience modifications are fairly uncommon once you deploy and additions are trivial.
I think this is hard to answer without more information.
I lean on the side of one database. Properly coded business objects should prevent you from forgetting clientId in your queries.
The type of database you are using and how it scales might help you make your decision.
For schema changes down the road, it seems one database would be easier from a maintenance perspective - you have one place to make them.
What about backup and restore? Could you experience a customer wanting to restore a backup for one of their entities?
This is a fairly normal scenario in multi-tenant SAAS applications. Both approaches have their pros and cons. Search on best practices for multi-tenant SAAS (software as a service) and you will find tons of stuff to ponder upon.
Check out this article on Microsoft's site. I think it does a nice job of laying out the different costs and benefits associated with multi-tenant designs. Also look at the multitenancy article on Wikipedia. There are many trade-offs, and your best match greatly depends on what type of product you are developing.
One good argument for keeping them in separate databases is that it's easier to scale (you can simply have multiple installations of the server, with the client databases distributed across the servers).
Another argument is that once you are logged in, you don't need to add an extra where check (for client ID) in each of your queries.
So a master DB backed by multiple DBs, one per client, may be a better approach.
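A minimal sketch of that master-DB routing, assuming the master database holds a Clients table that maps each client to its own connection string (table and column names are hypothetical):

using System.Data.SqlClient;

public class ClientConnectionFactory
{
    private readonly string _masterConnectionString;

    public ClientConnectionFactory(string masterConnectionString)
    {
        _masterConnectionString = masterConnectionString;
    }

    // Look up the client's database in the master DB, then open a connection to it.
    public SqlConnection OpenForClient(int clientId)
    {
        string clientConnStr;
        using (var master = new SqlConnection(_masterConnectionString))
        using (var cmd = new SqlCommand(
            "SELECT ConnectionString FROM dbo.Clients WHERE ClientId = @id", master))
        {
            cmd.Parameters.AddWithValue("@id", clientId);
            master.Open();
            clientConnStr = (string)cmd.ExecuteScalar();   // assumes the client row exists
        }

        var clientDb = new SqlConnection(clientConnStr);
        clientDb.Open();
        return clientDb;
    }
}

Everything after the lookup is ordinary single-tenant code, and the per-client databases can be moved between servers by just updating the master table.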
If the client would ever need to restore only a single entity from a backup and leave the others in their current state, then maintenance will be much easier if each entity is in a separate database. If they can be backed up and restored together, then it may be easier to maintain the entities as a single database.
I think you have to go with the most realistic scenario and not necessarily what a customer "may" want to do in the future. If you are going to market that feature (i.e. seeing all your entities in one dashboard), then you have to either find a solution (maybe have the dashboard pull from multiple databases) or use a single database for the whole app.
IMHO, having the data for multiple clients in the same database just seems like a bad idea to me. You'll have to remember to always filter your queries by clientID.
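One common way to make that filter hard to forget is to funnel every read through a small query class that applies the clientID predicate in exactly one place. A minimal sketch, with hypothetical Invoice/ClientId names:

using System.Linq;

public class Invoice
{
    public int Id { get; set; }
    public int ClientId { get; set; }
    public bool IsOpen { get; set; }
}

public class InvoiceQueries
{
    private readonly IQueryable<Invoice> _invoices;  // e.g. the Invoices set from your ORM
    private readonly int _clientId;                  // taken once from the logged-in user

    public InvoiceQueries(IQueryable<Invoice> invoices, int clientId)
    {
        _invoices = invoices;
        _clientId = clientId;
    }

    // Every query goes through methods like this, so the clientID filter can't be forgotten.
    public IQueryable<Invoice> OpenInvoices()
    {
        return _invoices.Where(i => i.ClientId == _clientId && i.IsOpen);
    }
}

Constructing the class once per request with the current customer's ID keeps the multi-tenant rule out of every individual query.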
It also depends on your RDBMS, e.g.:
With SQL Server, databases are cheap.
With Oracle, it is easy to partition tables by a "customerID" column, so a single large database can run as fast as a small database per customer.
However, whichever you choose, try to hide the decision at a low level in your data access code.
Do you plan to have your code deployed to multiple environments?
If so, then try to keep it within one database and have all table references prefixed with a namespace from a configuration file.
The single database option would make the maintenance much easier.
