Microsoft Dynamics CRM internal clock (DB or IIS)? - dynamics-crm

We have an application that consumes data from Dynamics CRM and runs queries to retrieve data created after a specific date and time.
It is important for us to know which clock Dynamics CRM uses to set the dates on its records. For example, when it creates a new Opportunity, where does the createdon date come from?
Is the date and time based on the clock of the web server where CRM is running, on the SQL Server where the CRM database is stored, or on something else?

I've never seen any actual documentation on this question. I can deduce the following from looking at SQL traces and some decompilation of CRM server DLLs.
I'm not sure there is a single way that date/time values get generated in Dynamics CRM.
For example, there is a stored proc to create a WebResource which uses SQL Server's function to get UTC time. In that case the time will be the clock time of the server hosting SQL Server.
This is not true for creating/updating entity records. In that case the time is passed as part of the INSERT command; I do not believe CRM makes a call to get SQL Server's time first, so the time comes from the CRM web front-end server (or from the async server if the Create/Update operation is taking place asynchronously). .NET provides the UTC time value, and I do not believe there is any explicit conversion of time values when these fields are populated on Create and Update. There is, however, an explicit action that removes the milliseconds part.
If you use OverrideCreatedOn then the value is completely up to you (so long as it's within the CRM acceptable range), but it will cut off the milliseconds.
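For anyone who wants to try this, here is a minimal sketch of overriding the created-on date through the SDK. It is an assumption-laden example, not CRM's own code: it assumes an already-authenticated IOrganizationService, uses the overriddencreatedon attribute, and the entity and values are placeholders.

using System;
using Microsoft.Xrm.Sdk;

// Assumes 'service' is an already-authenticated IOrganizationService.
static Guid CreateBackdatedAccount(IOrganizationService service)
{
    var account = new Entity("account");
    account["name"] = "Backdated example";

    // Setting overriddencreatedon on Create makes CRM use this value as createdon.
    // The platform will still truncate the milliseconds.
    account["overriddencreatedon"] = new DateTime(2016, 5, 1, 12, 30, 45, DateTimeKind.Utc);

    return service.Create(account);
}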

Dynamics CRM is straightforward when it comes to time: it simply uses the System.DateTime.Now property, which reads the built-in clock of the server the code runs on.

Querying CRM data, it looks like createdon is only recorded down to the second. If you create a handful of records in CRM (without overriding createdon) and then query their createdon dates, you'll get values back like the following (printed with the ToString("o") formatter for the ISO 8601 standard; see the sketch after these values):
2017-01-19T21:33:12.0000000Z
2017-01-19T21:27:02.0000000Z
2017-01-19T20:12:54.0000000Z
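For reference, here is a rough sketch of how such values can be pulled and printed; it assumes an already-authenticated IOrganizationService, and the entity name and record count are just examples.

using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Assumes 'service' is an already-authenticated IOrganizationService.
static void PrintCreatedOnValues(IOrganizationService service)
{
    var query = new QueryExpression("opportunity")
    {
        ColumnSet = new ColumnSet("createdon"),
        TopCount = 5
    };

    foreach (Entity record in service.RetrieveMultiple(query).Entities)
    {
        // createdon comes back in UTC; "o" prints it in ISO 8601 format.
        var createdOn = (DateTime)record["createdon"];
        Console.WriteLine(createdOn.ToString("o"));
    }
}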
There also appears to be an open CrmIdea item to start recording milliseconds in these types of timestamps.

Dynamics CRM presents date/time values based on the System Settings. When a value is stored in the CRM database, however, it is stored as UTC+0, based on the Windows server time of the SQL Server.

Related

Automating Validation of PDF Prepared Reports

Our team uses Spotfire to host online analyses and also to prepare monthly reports. One pain point that we have is around validation. The reports are all prepared reports, and the process for creating them each month is as simple as 1) refresh the data (through an Infolink connected to Oracle) and 2) press a button to export each report. The format of the final product is a PDF.
The issue is that there are a lot of small things that can go wrong with the reports (filter accidentally applied, wrong month selected, data didn't refresh, new department not grouped correctly, etc.), meaning that someone on our team has to manually validate each of the reports. We create almost 20 reports each month and some of them are as many as 100 pages.
We've done a great job automating the creation of the reports, but now we have this weird imbalance where it takes like 25 minutes to create all the reports but 4+ hours to validate each one.
Does anyone know of a good way to automate, or even cut down, the time we have to spend each month validating the reports? I did a brief Google search and all I could find was in the realm of validating reports to meet government regulation standards.
It depends on 2 factors:
Do your reports have the same template (format) each time you extract them? You said that you pull them out automatically so I guess the answer is Yes.
What exactly are you trying to check/validate? You need a clear list of what you are validating. You mentioned the month, grouping, and data values (for the refresh). The clearer the picture you have of the validation, the more likely the process can be fully automated.
There are so called RPA (robot process automation) tools that can automate complex workflows.
A "data extract" task, which is part of a workflow, can detect and collect data from documents (PDF for example).
A robot that runs on the validating machine can:
batch read all your PDF reports from specified locations on your computer (or on another computer);
based on predefined templates it can read through the documents for specific fields that you specify (through defined anchors on the templates) and collect the exact data from there;
compare the extracted data with the baseline that you set (compare the month to be correct, compare a data field to confirm proper refresh of the data, another data field to confirm grouping, etc.);
It takes a bit of time to dissect the PDF for each report template and correctly set the anchors, but after that it runs seamlessly each time.
One such tool I used is called Atomatik. It has a studio environment where you design the robot (or robots) and run the process.
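If a full RPA suite is more than you need, the same read-and-compare idea can be scripted directly. Below is a minimal, hedged sketch assuming the iTextSharp 5.x library for text extraction; the file path, the expected month label, and the choice of page are placeholders to adapt to your own report template.

using System;
using System.Text.RegularExpressions;
using iTextSharp.text.pdf;
using iTextSharp.text.pdf.parser;

class ReportCheck
{
    static void Main()
    {
        const string path = @"C:\Reports\Sales_2017-01.pdf";  // placeholder path
        const string expectedMonth = "January 2017";          // placeholder baseline value

        var reader = new PdfReader(path);
        try
        {
            // Pull the text of the first page, where the report header lives in this example.
            string firstPage = PdfTextExtractor.GetTextFromPage(reader, 1);

            bool monthOk = Regex.IsMatch(firstPage, Regex.Escape(expectedMonth));
            Console.WriteLine(monthOk
                ? "Month header matches the baseline."
                : "Month header does NOT match - the report may not have refreshed.");
        }
        finally
        {
            reader.Close();
        }
    }
}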

How to retrieve more than 5000 records from CRM using KingswaySoft for SSIS packages?

I am trying to migrate data between two CRM databases (Dynamics 365), but in KingswaySoft there is a limit of 5000 records per batch. Can anyone please suggest an approach by which I can send any number of records?
We will page through all records in the source entity. The Batch Size setting on the CRM Source component is just used to specify how many records you want to retrieve per service call, not the total number you will get from the source entity. Hope this clarifies things a bit more.
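The KingswaySoft component does this paging for you, but for context, here is roughly what the equivalent loop looks like against the CRM SDK; it's a hedged sketch that assumes an already-authenticated IOrganizationService and uses the account entity as an example.

using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Query;

// Assumes 'service' is an already-authenticated IOrganizationService.
static int CountAllRecords(IOrganizationService service)
{
    var query = new QueryExpression("account")
    {
        ColumnSet = new ColumnSet("accountid"),
        PageInfo = new PagingInfo { Count = 5000, PageNumber = 1 }
    };

    int total = 0;
    while (true)
    {
        EntityCollection page = service.RetrieveMultiple(query);
        total += page.Entities.Count;

        if (!page.MoreRecords)
            break;

        // Move to the next 5000-record page using the cookie CRM returns.
        query.PageInfo.PageNumber++;
        query.PageInfo.PagingCookie = page.PagingCookie;
    }
    return total;
}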

Using views in MS Dynamics CRM

I'm running an SSIS package on the view FilteredAccount. It takes a very long time to cache lookup data from this source, often around 15 minutes for the number of accounts we have. I ran the same package on the Account view and it completed in ~3 minutes.
I'm trying to understand whether using the unfiltered views is supported by MS in this case because the decrease in run time is amazing between the two. Looking in MSDN for an answer has been frustrating, since "views" and "filtered views" are almost noise words when it comes to CRM.
Here are the differences between Views and FilteredViews:
FilteredViews are available to any user. Views are available only to users with System Administrator privileges.
FilteredViews contain logic so that they return only the data available to the user who asks for it (business units, sharing, etc.).
FilteredViews return label fields for every Lookup and OptionSet (which means additional joins on other tables) and convert all datetime fields to the local time of the user requesting the data. Views return data as it is in the database.
So, my suggestion, based on my experience: forget the blanket advice to never read data from the Views. You can read data from the Views without any problems.
Good luck.
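If you want to quantify the difference yourself, a quick timing sketch like the one below makes the comparison concrete; it's only an illustration, and the connection string and view names are placeholders for your own organization database.

using System;
using System.Data.SqlClient;
using System.Diagnostics;

class ViewTiming
{
    static void Main()
    {
        // Placeholder connection string; point it at your organization's _MSCRM database.
        const string connStr = "Server=.;Database=Contoso_MSCRM;Integrated Security=true";

        Console.WriteLine("Account:         {0} ms", TimeQuery(connStr, "SELECT * FROM dbo.Account"));
        Console.WriteLine("FilteredAccount: {0} ms", TimeQuery(connStr, "SELECT * FROM dbo.FilteredAccount"));
    }

    static long TimeQuery(string connStr, string sql)
    {
        var sw = Stopwatch.StartNew();
        using (var conn = new SqlConnection(connStr))
        using (var cmd = new SqlCommand(sql, conn))
        {
            conn.Open();
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read()) { /* drain the result set */ }
            }
        }
        sw.Stop();
        return sw.ElapsedMilliseconds;
    }
}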

Can I (ab)use SQL-Server Analysis Services to create user generated reports?

Question:
We (that is to say I, a single person) should implement "user generated reports" (in 1 month at most, with a presentation at the end of the month/start of the new month).
Problem 1:
By user, I mean users that do not have any technical skills, like SQL or VBA.
Problem 2:
Technology is .NET ONLY, so I cannot use Java (and things based on Java like Jasper)
Problem 3:
Exports to Excel should be possible (and I mean XLS or XLSX, not XML or CSV)
Problem 4:
Grouping of data should be possible (multiple groups)
Problem 5:
Database is Microsoft SQL-Server (presumably 2008 R2, but could end up being 2008 R1 or 2005)
Bonus "Problem":
Web based, with ASP.NET WebForms, but can also be desktop based, if web is not possible
Now apart from the sheer ridiculousness of those requirements and time constraints...
One solution would be the report builder supplied by SSRS (SQL-Server Reporting Service).
However, there are some disadvantages, which I think are pretty severe:
The user creating the report basically still needs to know SQL (left, right, inner and outer joins and their consequences). Since the user probably doesn't understand the difference, they will just blame me if they get no results or wrong results (an inner join on a null column, for example).
The user creating the report knows nothing about the database/data-structure (e.g. soft deletes, duration dates). Also garbage-in garbage-out is probably going to be a problem, complete with wrong data etc. ...
If they build a matrix and sum subtotals from unrounded values, the displayed total will not match the sum of the displayed subtotals: the report rounds each subtotal to something like 2 digits after the comma, but it calculates the grand total from the sum of all the unrounded values, not from the sum of the rounded subtotals (see the small example after this list). Again, they will blame me, or the data, or the report builder for it.
Since the report-builder is not going to display the number of results after adding an additional table with a join, a user will have no way of telling whether they have the right number of records, which will inevitably result in wrong results. Again, they will blame me.
Filters on dates: one needs to apply them, not necessarily in the WHERE clause but sometimes in the JOIN. Report Builder doesn't support that; it's not possible to create a serious report that way.
Status: As said, we use soft-deletes, and a status field with status 99 for deleted records. Filtering status in the where is dangerous, and must sometimes occur in the join. Again, Report Builder doesn't support that, unless you use raw SQL, which is pointless since the users are not going to know SQL.
Installing Report Builder requires admin rights, or the IT department of the customer company to install it. They also need the appropriate .NET Framework for the appropriate Report Builder, and the appropriate Report Builder for the appropriate report server, since SQL Server 2005 Reporting Services will NOT work with reports built for SQL Server 2008 Reporting Services, and SQL 2008 R1 will not work with R2. On top of that, every user who should build reports has to be in a certain Reporting Services report-generator role, which requires the IT department to put users into the appropriate Active Directory group, something that so far has never worked with any of the customers we've had. Plus, I don't trust the IT department to install the appropriate Report Builder, if they agree to install it at all.
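To make the subtotal rounding point concrete, here is a tiny illustration with invented numbers: three subtotals of 1.006, 2.006 and 3.006 display (rounded to 2 digits) as 1.01, 2.01 and 3.01, which visually add up to 6.03, while the grand total computed from the unrounded values is 6.018 and displays as 6.02.

using System;
using System.Linq;

class RoundingMismatch
{
    static void Main()
    {
        // Invented subtotal values; each is displayed rounded to 2 decimal places.
        double[] subtotals = { 1.006, 2.006, 3.006 };

        double sumOfDisplayedSubtotals = subtotals.Select(v => Math.Round(v, 2)).Sum(); // 6.03
        double displayedGrandTotal = Math.Round(subtotals.Sum(), 2);                    // 6.02

        Console.WriteLine("Sum of displayed subtotals: {0:0.00}", sumOfDisplayedSubtotals);
        Console.WriteLine("Displayed grand total:      {0:0.00}", displayedGrandTotal);
    }
}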
Now, a while ago I happened to watch a presentation of SSAS (SQL Server Analysis Services) on YouTube, but I can't find the link anymore.
But anyway, I don't have any experience in SSAS, only SSRS.
I think it would be possible to abuse SSAS in such a way that the users could connect to it via Excel, get the data, and sum it more or less however they want. They would also be able to see the raw data.
And I could pre-prepare a few queries for raw-data from tables (that, I could do with reportbuilder as well, via datasets).
Does anybody know SSAS well enough to tell me whether this is feasible in that amount of time?
And whether the add-in required for Analysis Services and the Excel versions (2007/2010) is compatible with all Analysis Services versions, or whether there are problems accessing 2008 R2 from Excel 2007 or SSAS 2005 from Excel 2010?
Or whether I am bound to run into more problems with SSAS than with Report Builder?
If your question is whether SSAS is a reasonable approach to your problem, my answer is yes. The benefit of SSAS is that generally speaking the data is modeled in a way that is readily understood by business users and easily manipulated in Excel to produce a variety of reports with no knowledge of a query language. With any version of SSAS, you can use Excel versions 2007 or 2010. There is no add-in required for this - the provider is built into both Excel versions already. Furthermore, by putting the model into SSAS, you are actually making your data more readily accessible by a variety of tools - you can use Excel or SSRS or a variety of 3rd party tools if you so desire. In other words, you're not limiting your options with this approach, but expanding your options as compared to Report Builder.
That said, working with SSAS can be simple or hard. It depends on the type of data you're working with and the complexity of any calculations that must be added to the model. Whether you can achieve your goals in the amount of time you have available is not a question I can answer. It really depends on the type of data that you have and the type of reports that your users need.
I can point you to a couple of resources. I wrote an article for TechNet as a gentle introduction: http://technet.microsoft.com/en-us/magazine/ee677579.aspx. It was written for SSAS 2008 but the principles apply to SSAS 2005, SSAS 2008, SSAS 2008 R2, and SSAS 2012.
If you prefer a video introduction, see http://channel9.msdn.com/Blogs/rdoherty/Demo-Developing-a-SQL-Server-2008-R2-Analysis-Services-Database to start. You can find a lot of free video material on SSAS at channel9. Just do a search for SSAS.

Can I capture Performance Counters for an Azure Web/Worker Role remotely...?

I am aware of how Performance Counters and Diagnostics are generated in a web role and a worker role in Azure.
My question is: can I get the Performance Counter data from a remote place or a remote app, given the subscription ID and the relevant certificates (a 3rd-party app that reports the Performance Counters)?
In other words, can I get the Performance Counter data the same way I use the Service Management API for any hosted service?
What pre-configuration is required on the server to get the CPU data?
Following is a description of the attributes in the Performance Counters table:
EventTickCount: Stores the tick count (in UTC) when the log entry was recorded.
DeploymentId: Id of your deployment.
Role: Role name
RoleInstance: Role instance name
CounterName: Name of the counter
CounterValue: Value of the performance counter
One of the key things here is to understand how to query this table (and the other diagnostics tables) effectively. One of the things we usually want from the diagnostics tables is to fetch the data for a certain period of time. Our natural instinct would be to query this table on the Timestamp attribute. However, that's a BAD design choice, because in an Azure table the data is indexed on PartitionKey and RowKey; querying on any other attribute results in a full table scan, which becomes a problem once your table contains a lot of data.
The good thing about these log tables is that the PartitionKey value in a way represents the date/time when the data point was collected. Basically, the PartitionKey is created by using the higher-order bits of DateTime.Ticks (in UTC). So if you want to fetch the data for a certain date/time range, first calculate the Ticks for your range (in UTC), then prepend a "0" in front of each value, and use those values in your query.
If you're querying using REST API, you would use syntax like:
PartitionKey ge '0<from date/time ticks in UTC>' and PartitionKey le '0<to date/time in UTC>'.
You could use this syntax if you're querying table storage in our tool Cloud Storage Studio, in Visual Studio, or in Azure Storage Explorer.
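As a small illustration of that calculation (the date range below is a placeholder), the two PartitionKey values can be built like this:

using System;

class PartitionKeyFilter
{
    static void Main()
    {
        // Placeholder range; both ends must be expressed in UTC.
        DateTime fromUtc = new DateTime(2012, 6, 1, 0, 0, 0, DateTimeKind.Utc);
        DateTime toUtc = new DateTime(2012, 6, 1, 1, 0, 0, DateTimeKind.Utc);

        // The diagnostics tables use "0" followed by DateTime.Ticks (UTC) as the PartitionKey.
        string fromKey = "0" + fromUtc.Ticks;
        string toKey = "0" + toUtc.Ticks;

        string filter = string.Format(
            "PartitionKey ge '{0}' and PartitionKey le '{1}'", fromKey, toKey);
        Console.WriteLine(filter);
    }
}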
Unfortunately I don't have much experience with the Storage Client library, but let me work something out. Maybe I will write a blog post about it; once I do, I will post the link to it here.
Gaurav
Since the performance counter data gets persisted in Windows Azure Table Storage (WADPerformanceCountersTable), you can query that table from a remote app, either by using Microsoft's Storage Client library or by writing your own custom wrapper around the Azure Table Service REST API. All you will need is the storage account name and key.
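As a rough sketch of that approach (not the only way to do it), here is what a query against WADPerformanceCountersTable could look like with the classic Microsoft.WindowsAzure.Storage table client; the account name and key are placeholders, and the PartitionKey range trick from the previous answer is applied so the query doesn't scan the whole table.

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class ReadPerfCounters
{
    static void Main()
    {
        // Placeholders: substitute the diagnostics storage account name and key.
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>");
        CloudTable table = account.CreateCloudTableClient()
                                  .GetTableReference("WADPerformanceCountersTable");

        // Filter on PartitionKey ("0" + UTC ticks), not on Timestamp, to avoid a full table scan.
        string fromKey = "0" + DateTime.UtcNow.AddHours(-1).Ticks;
        var query = new TableQuery<DynamicTableEntity>().Where(
            TableQuery.GenerateFilterCondition(
                "PartitionKey", QueryComparisons.GreaterThanOrEqual, fromKey));

        foreach (DynamicTableEntity row in table.ExecuteQuery(query))
        {
            Console.WriteLine("{0} | {1} = {2}",
                row.Properties["RoleInstance"].StringValue,
                row.Properties["CounterName"].StringValue,
                row.Properties["CounterValue"].DoubleValue);
        }
    }
}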

Resources