Read-level Auditing in Dynamics CRM

We have a (challenging!) requirement to audit read operations in CRM. This audit won't use the OOTB CRM auditing; instead it must go to an external auditing system via web services. Basically, we will classify all entity fields as High/Medium/Low, and whenever any user views a field tagged as High or Medium, we need to audit it.
I understand that read-level auditing isn't supported OOTB by CRM and that this requirement will have a significant performance impact; however, there is no way around it since this is business-critical functionality. Since CRM records can be viewed from multiple places (form, home grid, sub-grid, Advanced Find, lookup views, etc.), I am trying to find a common solution that works in all of these scenarios. One approach I tried is plugins on the Retrieve/RetrieveMultiple messages with the custom audit logic in the plugin, but I am concerned about the performance impact of this approach. Another approach I can think of is handling it in JavaScript, but the JavaScript approach won't cover all the scenarios (Advanced Find, lookup views, etc.).
I am looking for suggestions on any better solution to this.

Try switching your plugins to asynchronous mode. This should not cause as big a performance impact as synchronous plugins do.
I'm afraid plugins are the only solution for you. Good luck implementing it.
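For reference, here is a minimal sketch of the kind of Retrieve/RetrieveMultiple plugin the question describes, registered on the post-operation stage. The AuditClient helper is hypothetical and stands in for the call to the external auditing web service.

using System;
using Microsoft.Xrm.Sdk;

public class ReadAuditPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

        // Post-operation of RetrieveMultiple: the records returned to the caller
        // are in OutputParameters["BusinessEntityCollection"].
        if (context.MessageName == "RetrieveMultiple" &&
            context.OutputParameters.Contains("BusinessEntityCollection"))
        {
            var results = (EntityCollection)context.OutputParameters["BusinessEntityCollection"];
            foreach (Entity record in results.Entities)
            {
                // Hypothetical helper that posts (user, entity, viewed fields) to the external audit service.
                // AuditClient.Send(context.InitiatingUserId, record.LogicalName, record.Attributes.Keys);
            }
        }

        // Post-operation of Retrieve: the single record is in OutputParameters["BusinessEntity"].
        if (context.MessageName == "Retrieve" &&
            context.OutputParameters.Contains("BusinessEntity"))
        {
            var record = (Entity)context.OutputParameters["BusinessEntity"];
            // AuditClient.Send(context.InitiatingUserId, record.LogicalName, record.Attributes.Keys);
        }
    }
}

Filtering the attribute list down to the High/Medium fields before calling the external service, and batching those calls, keeps the per-request overhead down.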

Related

In Microsoft Dynamics 365 CRM, what is the major difference between plugins and workflows when both serve the same purpose?

Can someone please tell me which of the two has more advantages: plugins or workflows?
As the post Custom WorkFlows vs Plug-ins in MS CRM seems to be a little outdated, I can share my experiences with you.
Workflows:
Contain logic you configure just by "clicking" together the actions you want performed (Update, Create, etc.)
Can be run "on demand"
Can often be handled by key users and do not need a dedicated developer
Should not be used for complicated logic, as the interface often does not let you add additional logic afterwards
If used for complicated logic anyway (as stated above), refactoring or changes are often very hard to integrate!
In current cloud organisations you are told that you SHOULD not use these anymore, but switch to MS Flow instead. (VERY IMPORTANT!!)
Plugins:
Custom code, so you can implement very complicated as well as simple server-side logic
You need a(n experienced) developer
Can perform faster than workflows!
Nearly everything you can do with a workflow can be done by a plugin (or job), but not vice versa
You can trigger a plugin and hand in data (parameters!) by creating your own "Messages". By this I mean you are not limited to Update, Delete, Create, etc. as messages for plugins: you can define your own message steps by creating "Actions" in the Processes section of your Dynamics organization, where you can define input AND output parameters. These custom messages can also be triggered on demand, for instance from JavaScript (see the guide on how to use/create custom messages (Actions), and the short sketch after this answer).
In my experience, plugins are usually the better-suited solution whenever the matter is even slightly complicated, as workflows are far less maintainable. Simple "one-liners" can often be covered by workflows.
Nevertheless, each developer/consultant has to find their own way to improve and develop their organization.
@Community: Feel free to correct me if I am wrong anywhere or if you have different experiences.
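To illustrate the custom-message point above, here is a minimal sketch of invoking such an Action through the Organization Service. The Action name new_LogFieldAccess and its parameters are hypothetical placeholders for whatever you define in the Processes section.

using System;
using Microsoft.Xrm.Sdk;

public static class CustomActionExample
{
    public static void LogFieldAccess(IOrganizationService service, Guid accountId)
    {
        // Hypothetical custom Action (custom message) with a Target and a Classification input parameter.
        var request = new OrganizationRequest("new_LogFieldAccess")
        {
            ["Target"] = new EntityReference("account", accountId),
            ["Classification"] = "High"
        };

        OrganizationResponse response = service.Execute(request);

        // Output parameters declared on the Action come back in response.Results, e.g.:
        // var reference = (string)response.Results["AuditReference"];
    }
}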

Need defense against wacky challenge to Event Sourcing architecture w/CosmosDB

In the current plan, incoming commands are handled via Function Apps, resulting in events being sent to an Event Hub, and the views are then materialized from those events.
Someone is arguing that instead of storing events in something like Table Storage and materializing views based on events and snapshots, we should:
Just stream events to a log in Azure Monitor to have auditing
We can make changes to a domain object immediately in response to a command and use the change feed as our source of events for materialized views.
He doesn't see the advantage of even having a materialized view. Why not just use a query? His argument is that we don't expect a lot of traffic.
He wants to satisfy the whole audit requirement by saving events to the Azure Monitor log, i.e. just an application log. Commands would instead directly modify the representation of an entity in Cosmos DB, and we'd use the Cosmos DB change feed as our domain object events, or we would create new events off of it via subscribers to that stream.
Is this actually an advantageous approach? Can y'all think of any reasons why we wouldn't want to do that? It seems like we'd be losing something here.
He's saying we'd no longer need to be concerned with eventual consistency, as we'd have immediate consistency.
Every reference implementation I've evaluated does NOT do it the way he's suggesting. I'm not deeply versed in the advantages/disadvantages of the event sourcing / CQRS paradigm, so I'm at a loss at the moment. Currently researching furiously.
This is a conceptual issue, so there isn't really a code example. However, here are some references that seem to back up the approach I'm taking:
https://medium.com/#thomasweiss_io/planet-scale-event-sourcing-with-azure-cosmos-db-48a557757c8d
https://sajeetharan.com/2019/02/03/event-sourcing-with-azure-eventhub-and-cosmosdb/
https://learn.microsoft.com/en-us/azure/architecture/patterns/event-sourcing
If your goal is only to have the audit log, state-based persistence could be a good choice. Event sourcing adds some complexity to the implementation side and unless you can identify more advantages of using it, you might not convince your team to bring this complexity to the system. There are numerous questions and answers on SO, as well as in some blog posts, about pros and cons of event sourcing, so I won't get into that discussion here.
I can warn you, though, that the second article in your list is very weak and would most probably lead you into many difficulties. The role of Event Hub there is completely unclear, and it doesn't explain anything about projections and read-models (what you call "materialised views"). Only a very limited number of use cases can live with only getting one entity by id and without being able to execute a query across multiple entities. That also probably answers your concern about having read-models at all. You will need them very soon, the first time you start figuring out how to get a list of entities based on some condition (query).
Using Cosmos DB as the event store is completely feasible, as described in the first article, if you can manage the costs involved. Just remember to set the change feed TTL to -1; otherwise you won't be able to replay your projections when you need to.
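As a rough illustration of that append-only approach (the document shape and names below are mine, not from the article), assuming the Microsoft.Azure.Cosmos v3 SDK and a container partitioned on /streamId with no TTL so events are never evicted:

using System;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class DomainEvent
{
    public string id { get; set; }          // Cosmos DB documents need a lowercase "id"
    public string streamId { get; set; }    // partition key: one logical stream per aggregate
    public string type { get; set; }
    public string payload { get; set; }     // serialized event body
    public DateTime occurredAt { get; set; }
}

public static class EventStore
{
    // Appends one event document to the event container.
    public static Task AppendAsync(Container events, DomainEvent e) =>
        events.CreateItemAsync(e, new PartitionKey(e.streamId));
}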
To summarise:
Keeping the audit log can be done without event sourcing, but you need to ensure that events are published reliably, preferably in the same transaction as the entity state update. That is often hard or impossible, but you might accept the risk if your audit requirement is not strict. You can also base your audit log on the Cosmos DB change feed, just collecting document changes and logging them somewhere.
Event sourcing is a powerful technique but it has both pros and cons. The most common prejudice against using event sourcing is its implementation complexity. It might not be a big issue if you have a team that is somewhat experienced in building event-sourced systems. If you don't have such a team, you might want to build a small-scale spike to get some experience.
If you don't get full buy-in from the team to use event sourcing, you will later get all the blame if anything goes wrong. And it will go wrong at some point, especially with little experience in this area.
Spend some time reading books and trying out things yourself, before going wild in production.
Don't use Event Hub for anything it is not designed for. Event Hub is a powerful event ingestion transport with a limited TTL, and it should be used for that purpose.
Don't use Table Storage as the event store, unless you only read entities by id. I used it in production for such a scenario and it worked (to some extent) but you can't project read-models from there.
A simple rule of thumb is to not use products for tasks they weren't designed for.
Azure Monitor was not designed to store application domain data. Azure Monitor is designed to store telemetry data from your applications and services and provides features such as alerts and other types of integration into DevOps tools for managing the operation and health of your apps.
There is a simple reason why you were able to find articles on event sourcing using Cosmos DB and why our own docs talk about it: it was designed to be used this way. It is simple to set up Cosmos DB as an append-only event store for your applications and use the Change Feed to fire off messages to other apps or services or, in your case, to maintain a materialized-view state of domain objects within your app.
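As a sketch of that last point (not the only way to wire it up), here is a change feed processor using the Microsoft.Azure.Cosmos v3 SDK that reads event documents and updates a read model; the container and type names are assumptions:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.Cosmos;

public class EventDocument
{
    public string id { get; set; }
    public string streamId { get; set; }
    public string type { get; set; }
}

public static class ProjectionHost
{
    // "events" is the monitored container; "leases" is a plain container the processor
    // uses to store its checkpoints.
    public static async Task<ChangeFeedProcessor> StartAsync(Container events, Container leases)
    {
        ChangeFeedProcessor processor = events
            .GetChangeFeedProcessorBuilder<EventDocument>("materialized-views",
                async (IReadOnlyCollection<EventDocument> changes, CancellationToken ct) =>
                {
                    foreach (var e in changes)
                    {
                        // Update the materialized view / audit log from each change here.
                        Console.WriteLine($"{e.streamId} {e.type}");
                    }
                    await Task.CompletedTask;
                })
            .WithInstanceName(Environment.MachineName)
            .WithLeaseContainer(leases)
            .Build();

        await processor.StartAsync();
        return processor;
    }
}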

Enforcing relational workflows in TargetProcess

I'm currently evaluating a few different issue management tools and have it narrowed down to TargetProcess, Redmine and YouTrack. For what I need, TargetProcess seems to do everything with a lot less need for customisation; however, as the only person working on QA at a small startup, I'm trying to make sure that as much of the process as possible is automated.
YouTrack has a workflow editor which allows you to write validation rules for your issues, and would therefore allow me to specify that you can't move an issue of a certain type into a certain state without having a related issue of another type, for example you cannot move a feature out of "New" without having a set of related requirements in the form of test cases.
While this isn't as ingrained in Redmine, there is a plugin which allows you to write these types of rules. I haven't, however, been able to find anything of the sort for TargetProcess, and I worry that the inability to perform this sort of deep customisation will add an extra time sink, as I would have to spend more time on this process myself.
Is there any way to achieve this in TargetProcess, be it using a plugin or an external service? I can see that I could hook something up to the REST api, but this would make it difficult to give feedback as to why an issue had not been progressed. TargetProcess is an impressive tool, however it is very expensive, and unless it does everything I want, it is difficult to justify the outlay.
TL/DR
Is there a mechanism for writing business rules into TargetProcess such that the proper QA process is enforced, so I can concentrate on providing value through QA rather than process management?
There are no customizable Business Rules in Targetprocess so far. The only thing that exists is a Mashup that allows some rule customization related to custom fields:
https://github.com/TargetProcess/TP3MashupLibrary/tree/master/Custom%20Field%20Constraints
Custom Business Rules have been requested by many people, and we are going to start development this year.

Modern reporting solutions for distributed data systems

We've built a SaaS solution which has a Frontend in PHP/MySQL. The solution uses our in-house "Backend" API to manage user transactions (financial-ish type of stuff). So basically, some of our data is in the "Frontend" database, while all transactional data is in the "Backend" database.
When it comes to reporting, the Frontend requests transactional reports from the Backend, augments them with Frontend data (user attributes, etc.), and draws the report. It's usually slow and cumbersome to create a new report, and the reports lack robust features like sorting and filtering. This is partially because there is no single data source for all the info. Also, we are constantly being asked to provide "ad hoc" reporting capabilities, the type of thing that is complex and has the potential to bring a server to its knees if you aren't careful.
I think we're at the point where we need to invest in a Reporting system, which would be responsible for combining data dumps from Frontend/Backend, and would allow a non-developer to create new reports. One thing that would be important to us is to provide as seamless of an interface as possible to the reports via our Frontend. That might mean the Reporting system exposes web widgets, or perhaps has a web interface that can be accessed with SSO between our system and the Reporting system. In a nutshell, we aren't looking for a dinosaur, we need something modern. Hosted solutions are preferred, but we'd consider something we need to run ourselves. Looking for advice. Thanks!
EDIT: A hosted solution might not work for us. We are located in Canada, and many customers have policies about having data reside in the US (Patriot Act).
Have a look at the myDBR reporting solution. Reports are built using stored procedures, so anyone familiar with SQL will be able to create reports. There is also a built-in wizard to get you started quickly. It is also very easy to link reports to each other, allowing for easy drill-down-style reports.
The solution is very reasonably priced at 129 EUR (~170 USD) and can be installed in minutes on any standard web server (PHP being the only requirement).
myDBR can be easily integrated into your existing web pages via the built-in SSO and styled via CSS to match your site's overall look and feel.

Multiple programs updating the same database

I have a website developed with ASP.NET MVC, Entity Framework Code First and SQL Server.
The website has entities that each have a history of statuses that we defined (NEW, PACKED, SHIPPED etc.)
The DB contains a table in which a completely separate system inserts parcel tracking data.
I have to read this tracking data and, following certain business rules, add to the existing status history of my entities.
The best way I can think of is to write an independent Windows service to poll the tracking data every so often and update my entity statuses from that. However, that makes me concerned about DB concurrency issues.
Please could someone advise me on the best strategy for this scenario?
Many thanks
There are different ways to do it. It also depends on the response time you need. If you need to update your system as soon as the tracking system updates the record, then a trigger is the preferred way. An alternative is to schedule a job that runs every 15/30 minutes and syncs the two systems.
As for the concurrency issue, you can use a concurrency token field; Entity Framework has support for this.
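A minimal sketch of a rowversion-based concurrency token with EF6 Code First (the entity and context names are made up); both the website and the polling service would get a DbUpdateConcurrencyException instead of silently overwriting each other's changes:

using System.ComponentModel.DataAnnotations;
using System.Data.Entity;                  // EF6 Code First
using System.Data.Entity.Infrastructure;

public class ParcelStatusHistory
{
    public int Id { get; set; }
    public string Status { get; set; }

    // Maps to a SQL Server rowversion column; EF uses it as a concurrency token.
    [Timestamp]
    public byte[] RowVersion { get; set; }
}

public class ShopContext : DbContext
{
    public DbSet<ParcelStatusHistory> StatusHistory { get; set; }
}

public static class StatusUpdater
{
    public static void UpdateStatus(int id, string newStatus)
    {
        using (var db = new ShopContext())
        {
            var entry = db.StatusHistory.Find(id);
            if (entry == null) return;

            entry.Status = newStatus;
            try
            {
                db.SaveChanges();
            }
            catch (DbUpdateConcurrencyException)
            {
                // Another writer changed the row since it was read:
                // reload the entity and decide whether to retry or discard the update.
            }
        }
    }
}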
