I have a project to build reports using Microsoft Dynamics CRM 2011 & SSRS. The recommended data source is filtered views, and all of my report queries use them.
I have discovered that the filtered views are very slow; for instance, SELECT TOP 1 * FROM [FilteredContact] takes more than 10 seconds.
What are the best alternatives to this solution?
Filtered views are usually slow because of all the security rules that have to be applied. That leaves a couple of things to look at and potentially tweak.
Abandon filtered views altogether (don't do this if you need to limit viewable records via security). This is usually not the easiest option, because any joins you need have to be written explicitly against the base tables. It is also unsupported, meaning the next rollup could break your queries. If you're willing to accept the risk, this is the fastest method.
Improve your security model. You'll need a SQL DBA to confirm this, but I'm guessing that the main cause of the slowness is the security rules that have to be applied. Check out the Scalable Security Modeling with Microsoft Dynamics CRM 2011 white paper to see whether you can change any of your normal practices to improve performance.
Your options are very limited. CRM will only allow you to use the filtered views; however, you can query information from external databases by creating a linked server and using four-part naming, or by having the other database on the same SQL instance. You could, for example, hold CRM data in a data warehouse and report off that.
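If you do set up a linked server, reporting code on the other box can read the filtered view through its four-part name. A minimal sketch, assuming made-up server, database and column names for your environment:

```csharp
// Minimal sketch: reading a CRM filtered view over a linked server via four-part naming.
// "REPORTSQL", "CRMSQL", "Org_MSCRM" and the column list are placeholders.
using System;
using System.Data.SqlClient;

class LinkedServerExample
{
    static void Main()
    {
        const string connectionString = "Server=REPORTSQL;Database=Warehouse;Integrated Security=true";
        // Four-part name: [linked server].[database].[schema].[object]
        const string sql = @"SELECT TOP 10 fullname, emailaddress1
                             FROM [CRMSQL].[Org_MSCRM].[dbo].[FilteredContact]";

        using (var connection = new SqlConnection(connectionString))
        using (var command = new SqlCommand(sql, connection))
        {
            connection.Open();
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0} <{1}>", reader["fullname"], reader["emailaddress1"]);
                }
            }
        }
    }
}
```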
I would, however, be more worried about the performance of your CRM server. On one of the CRM instances I look after, I tried the same query and it returned in under 3 seconds on a table of 21,971 rows.
I would also say that your query is not a typical one: to select TOP 1 * it has to join a lot of tables. A more typical query, such as showing the contacts for a company, would be a lot quicker.
I currently need to access Dynamics 365 Online and Dynamics NAV to compare data in both systems. Instead of using Power BI, I would prefer more of a code-driven approach, so I tried LINQPad.
My idea was to fetch the data from both systems, run a comparison method and dump whatever needs to be dumped.
Unfortunately I am struggling with this approach, as it looks like I cannot use multiple connections to unrelated systems in one query.
Any ideas how to solve this? Perhaps export to CSV and use that in another query?
Regards
Sven
I've been trying to build a repository using the BI Administrator tool in OBIEE. The repository then loads through Enterprise Manager without any issues, but no new analysis can be performed because of the following error, which appears whenever I try to view the results of my analysis in the analytics presentation service on port 9704:
Message returned from OBIS. [nQSError: 42016] Check database specific features table. Must be able to push at least a single table reference to a remote database (HY000)
which I haven't been able to correct. Then I came across a link:
https://www.youtube.com/watch?v=TK7KYaCEGZU
which creates a report using BI Publisher. It has the features I want:
1. I am able to access my table in the Oracle server
2. I choose the columns and make it into a data model
3. I use this model to create my report
When I use the BI Administrator tool, we're asked to build a physical layer, then the BMM layer, and then the presentation layer. But there is no reference to such terms when creating a data model with the BI Publisher tool, so I'd like to know the difference between creating a report using the BI Publisher tool and using the BI Administrator tool.
From my perspective, the biggest differences are scalability and consumption, though this really only scratches the surface.
Scalability
The OBI business model should be considered more widely than one reporting requirement. A business model can be used by multiple presentation catalogs, and in turn each presentation catalog can be used by many Analyses and Dashboards. So you only have to build the data model once, and it can be reused and added to over time to make a more complete... well, model of the business. This gives you "one version of the truth", with reports all coming from this one data source.
In Publisher, there is the concept of the data model. This is generally built specifically for one report. You then have many data models (perhaps using the same underlying data sources), for your many reports.
Consumption
Although Publisher reports can be accessed through the browser and interacted with, the interactivity is generally limited and the reports are (as the name suggests) published in PDF or Microsoft Office formats.
With OBI dashboards there is a much greater degree of interactivity, with prompts and navigation. Although these can be published through Agents, they are usually consumed through the browser on a self-serve basis, allowing users the flexibility to answer business questions as they arise.
We delivered a successful project a few days back, and now we need to make some performance improvements in our WCF RESTful API.
The project uses the following tools/technologies:
1- LINQ
2- Entity Framework
3- Enterprise library for Logging/Exception handling
4- MS SQL 2008
5- Deployed on IIS 7
A few things to note:
1- 10-20 queries have more than 7 table joins in LINQ
2- The current IIS has more than 10 applications deployed
3- The Entity Framework model has around 60 tables
4- The WCF API uses HTTPS
5- All the API calls return JSON responses
The general flow is:
1- WCF call is received
2- Session is checked
3- Function from BL layer is called
4- Function from DA layer is called
5- Response returned in JSON
Currently, based on my limited knowledge and research, I think the following might improve performance:
1- Implement caching for reference data (a rough sketch follows this list)
2- Move LINQ queries with more than 3 joins to stored procedures (and maybe use hints?)
3- Re-index the database tables
4- Use performance counters to find the problem areas
5- Move functions with more than 3 updates/deletes/inserts to stored procedures
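For point 1, something like the following is roughly what I have in mind (a minimal sketch using System.Runtime.Caching; the class, entity and cache-key names are made up):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime.Caching;

public class Country
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ReferenceDataCache
{
    private static readonly MemoryCache Cache = MemoryCache.Default;

    // Returns the cached list if present, otherwise loads it once and caches it for an hour.
    public IList<Country> GetCountries(Func<IList<Country>> loadFromDatabase)
    {
        var countries = Cache.Get("ref:countries") as IList<Country>;
        if (countries == null)
        {
            countries = loadFromDatabase(); // e.g. the existing DA-layer / EF query
            Cache.Set("ref:countries", countries,
                      new CacheItemPolicy { AbsoluteExpiration = DateTimeOffset.Now.AddHours(1) });
        }
        return countries;
    }
}
```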
Can you point out any issues with the above improvements? And what other improvements can I make?
Your post is missing some background on your improvement suggestions. Are they just guesses or have you actually measured and identified them as problem areas?
There really is no substitute for proper performance monitoring and profiling to determine which area you should focus on for optimizations. Everything else is just guesswork, and although some things might be obvious, it's often the not-so-obvious things that actually improve performance.
Run your code through a performance profiling tool to quickly identify problem areas inside the actual application. If you don't have access to the Visual Studio Performance Analyzer (Visual Studio Premium or Ultimate), take a look at PerfView which is a pretty good memory/CPU profiler that won't cost you anything.
Use a tool such as MiniProfiler to be able to easily set up measuring points, as well as monitoring the Entity Framework execution at runtime. MiniProfiler can also be configured to save the results to a database which is handy when you don't have a UI.
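For example, wrapping the BL and DA calls in MiniProfiler steps is usually enough to see where the time goes. A rough sketch (the service, DTO and step names are placeholders for your own layers):

```csharp
// Rough sketch of MiniProfiler steps around the BL/DA layers.
// ContactService, ContactDto and the step names are placeholders.
using StackExchange.Profiling;

public class ContactService
{
    public ContactDto GetContact(int id)
    {
        using (MiniProfiler.Current.Step("BL: GetContact"))
        {
            ContactDto contact;
            using (MiniProfiler.Current.Step("DA: load contact from database"))
            {
                contact = LoadFromDatabase(id); // the existing Entity Framework query
            }
            return contact;                     // serialized to JSON further up the stack
        }
    }

    private ContactDto LoadFromDatabase(int id)
    {
        /* existing DA-layer call goes here */
        return new ContactDto { Id = id };
    }
}

public class ContactDto { public int Id { get; set; } }
```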
Analyzing the generated T-SQL statements from the Entity Framework, which can be seen in MiniProfiler, should allow you to easily measure the query performance by looking at the SQL execution plans as well as fetching the SQL IO statistics. That should give you a good overview of what can/should be put into stored procedures and if you need any other indexes.
We plan to change our multitenant ordering system on the intranet.
All products of the product catalog are retrieved through web services. This back-end architecture cannot be replaced. Today, however, we are facing performance problems that should be eliminated with the new solution.
Therefore, we plan to use one caching database per tenant, and we have run initial tests with RavenDB.
The product catalog is relatively static, and we will mainly read data from the cache.
Data is only written for the intermediate storage of the shopping cart.
We plan to regenerate each database once per hour and then replace the existing database with the new one. We hope this simplifies updating the caching databases with the new product catalog.
There are, however, doubts about whether this runs contrary to the architecture of RavenDB (existing indexes, references).
Is our approach at all possible?
Has anyone found a good solution in a similar situation?
Thank you for your help
MS007,
Using RavenDB as a persistent view-model store is very common.
But I don't see why you would actually want to refresh the RavenDB databases on an hourly basis. It would be much cheaper to simply refresh only the changed data, and then you don't have to worry about what is going on in the system while you are dropping a database and creating a new one.
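As a rough illustration of what I mean by refreshing only the changed data (the URL, database name, document id and Product class are made up; the point is the pattern of loading and updating in place):

```csharp
using Raven.Client;
using Raven.Client.Document;

class CatalogRefresher
{
    static void Main()
    {
        // Placeholder URL, database and document id.
        using (IDocumentStore store = new DocumentStore
        {
            Url = "http://localhost:8080",
            DefaultDatabase = "Tenant1"
        }.Initialize())
        using (var session = store.OpenSession())
        {
            // Load the existing document (or store a new one) and change it in place;
            // SaveChanges only sends the documents that were actually modified.
            var product = session.Load<Product>("products/42");
            if (product == null)
            {
                product = new Product { Id = "products/42", Name = "Example product" };
                session.Store(product);
            }
            product.Price = 19.90m;
            session.SaveChanges();
        }
    }
}

class Product
{
    public string Id { get; set; }
    public string Name { get; set; }
    public decimal Price { get; set; }
}
```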
We are designing our new product, which will include multi-tenancy. It will be written in ASP.NET and C#, and may be hosted on Windows Azure or some other Cloud hosting solution.
We’ve been looking at MVC and other technologies and, to be honest, we’re getting bogged down in various acronyms (MVC, EF, WCF etc. etc.).
A particular requirement of our application is causing a headache – the users will be able to add fields to the database, or even create a whole new module.
As a result, each tenant would have a database with a different structure to every other tenant using the system. We envisage that every tenant will have their own database, rather than sharing a database.
(Adding fields etc. to the system will be accomplished using a web interface).
All well and good, but the problem comes when creating a data model for MVC. Modifying a data model programmatically to add a field to a table seems to be impossible, according to this link:
Create EDM during runtime?
This is a major headache for us. Even if we don't use MVC, I think we'd still want to create a data model (perhaps for use with LINQ to SQL).
We’re considering having a table with loads of fields in it, and instead of adding fields to the database we allocate an existing field in the table when the user wants to add a field to his form. Not sure I like that idea, though.
Of course, we don’t have to use MVC or Entity Framework, but it appears to me that these are the kind of technologies that Microsoft would steer us towards for future development.
Any thoughts? I’m assuming that we’re not the first people in the world to consider this idea of a user-customisable application.
I'd make sure that you have fully explored the option of creating 'Name-Value Pair' type tables, as described here http://msdn.microsoft.com/en-us/library/aa479086.aspx#mlttntda_nvp, before you start looking at a customizable schema. Also, don't forget that you are going to have to grant much higher permissions to your SQL accounts in order for them to create tables on the fly.
It wouldn't be advisable to assign these higher permissions to a tenant's account; instead, use a separate provisioning account that can perform these tasks.
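To illustrate the name-value-pair approach from that article: the custom fields become rows in a pair of tables rather than real columns, roughly like this (all names are invented for the example):

```csharp
// Definitions live in one table, values in another; both are plain rows,
// so adding a "field" never changes the schema. All names are invented.
public class ExtensionFieldDefinition
{
    public int Id { get; set; }
    public int TenantId { get; set; }
    public string EntityName { get; set; }   // e.g. "Contact"
    public string FieldName { get; set; }    // e.g. "LoyaltyTier"
    public string DataType { get; set; }     // validated in application code
}

public class ExtensionFieldValue
{
    public int Id { get; set; }
    public int FieldDefinitionId { get; set; }
    public int RecordId { get; set; }         // the row being extended
    public string Value { get; set; }         // stored as text, converted per DataType
}
```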
Also, before investing effort into EF, try googling 'EF Vote of No Confidence'. It was raised (I believe) mainly in reaction to earlier versions, but it's definitely worth reading up on. NHibernate is an alternative worth investigating.
Just off the top of my head, it sounds like a bad idea to allow users to change the database schema. I think you are missing a layer of abstraction. In my mind, it would be more correct to use the database to hold data that describes the format of a customer's data. The actual data would then be saved in a text column as XML, including version information.
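As a sketch of that abstraction (all names here are made up): a metadata table describes the customer's format, and each record is serialized into the text column as versioned XML.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// The table behind this: Id, TenantId, FormName and a text column holding the payload.
public class CustomRecord
{
    public int Id { get; set; }
    public int TenantId { get; set; }
    public string FormName { get; set; }
    public string DataXml { get; set; }
}

public static class CustomRecordSerializer
{
    // Serializes the user-defined field values together with a schema version number.
    public static string Serialize(int schemaVersion, IDictionary<string, string> fields)
    {
        var doc = new XElement("record",
            new XAttribute("version", schemaVersion),
            fields.Select(f => new XElement("field",
                new XAttribute("name", f.Key),
                f.Value)));
        return doc.ToString();
    }
}
```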
This solution may not fit your needs, but I don't know the details of your project. So just consider it my 5 cents.
Most modern SQL databases today support a JSON type (such as 'jsonb' in Postgres) for key/value storage in a field. Other types (hstore for Postgres) exist too. Forget about XML; that's yesterday's approach, and no self-respecting application implements XML unless it is for importing/converting old data.
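For illustration, writing such a jsonb field from .NET with Npgsql could look roughly like this (the connection string, table and column names are placeholders):

```csharp
using System;
using Npgsql;
using NpgsqlTypes;

class JsonbExample
{
    static void Main()
    {
        // Connection string, table and column names are placeholders.
        using (var conn = new NpgsqlConnection("Host=localhost;Database=tenant1;Username=app;Password=secret"))
        {
            conn.Open();
            using (var cmd = new NpgsqlCommand(
                "INSERT INTO contact (id, custom_fields) VALUES (@id, @fields)", conn))
            {
                cmd.Parameters.AddWithValue("id", Guid.NewGuid());
                cmd.Parameters.Add(new NpgsqlParameter("fields", NpgsqlDbType.Jsonb)
                {
                    Value = "{\"loyaltyTier\": \"gold\", \"newsletter\": true}"
                });
                cmd.ExecuteNonQuery();
            }
        }
    }
}
```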