How to create user-manageable business rules in .NET Core / .NET 5

I am looking to create an application that can process requests and return responses based on some rules.
The user must have an option to change the rules on the fly, so I wouldn't expect the rules to live in compiled code.
The only solution that comes to my mind is to have the .NET 5 application run a shell or Python script that evaluates the request, and then build the response from the script's result.
This would work in theory, but I don't see it as a reasonable solution.
Is there a cleaner/better way to achieve user-manageable rules in a .NET Core application?
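For reference, this is roughly what the script-based workaround I'm describing would look like; the rules.py script and its stdin/stdout contract are hypothetical:

```csharp
// Minimal sketch of the script-based workaround described above.
// Assumes a hypothetical rules.py that reads the request JSON from stdin
// and writes a verdict to stdout; paths and the I/O contract are illustrative.
using System.Diagnostics;
using System.Threading.Tasks;

public static class ScriptRuleEvaluator
{
    public static async Task<string> EvaluateAsync(string requestJson)
    {
        var psi = new ProcessStartInfo
        {
            FileName = "python",
            Arguments = "rules.py",                // hypothetical rule script
            RedirectStandardInput = true,
            RedirectStandardOutput = true,
            UseShellExecute = false
        };

        using var process = Process.Start(psi);
        await process.StandardInput.WriteAsync(requestJson);   // hand the request to the script
        process.StandardInput.Close();

        string verdict = await process.StandardOutput.ReadToEndAsync();
        process.WaitForExit();
        return verdict;                            // the caller builds the response from this
    }
}
```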

Related

In Microsoft Dynamics 365 CRM, what is the major difference between plugins and workflows when both serve the same purpose?

Can someone please tell me which of the two has more advantages: plugins or workflows?
As the post Custom WorkFlows vs Plug-ins in MS CRM seems to be a little outdated, I can share my experiences with you.
Workflows:
Contain logic you configure simply by "clicking" together the actions you want performed (like Update, Create, etc.)
Can be run "on demand"
Can often be handled by key users and do not need an explicit developer
Should not be used for complicated logic, as the interface often does not provide the possibility to add additional logic afterwards
If used for complicated logic (as stated above), refactoring or changes are often very hard to integrate!
In current cloud organisations you are told that you SHOULD NOT use these anymore, but switch to MS Flow. (VERY IMPORTANT!!)
Plugins:
Custom code, so you can implement very complicated (or very simple) server-side logic; a minimal plugin skeleton is sketched after this list
You need a(n experienced) developer
Can perform faster than workflows!
Nearly everything you can do with a workflow can be done by a plugin (or job), but not vice versa
You can trigger the plugin and hand in data (parameters) by creating your own "Messages". By this I mean you are not limited to Update, Delete, Create, etc. as messages for plugins; you can define your own message steps by creating "Actions" in the Processes section of your Dynamics organization, where you can define input AND output parameters. These custom messages can also be triggered on demand, for instance from JavaScript (see the guide on how to use/create custom Messages (Actions))
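For reference, a minimal plugin skeleton looks roughly like this; the entity attribute and the defaulting logic are made up for illustration:

```csharp
using System;
using Microsoft.Xrm.Sdk;

// Minimal Dynamics 365 plugin skeleton; register it on a message/step
// (e.g. Create of a custom entity) via the Plugin Registration Tool.
public class SetDefaultsPlugin : IPlugin
{
    public void Execute(IServiceProvider serviceProvider)
    {
        var context = (IPluginExecutionContext)serviceProvider
            .GetService(typeof(IPluginExecutionContext));
        var factory = (IOrganizationServiceFactory)serviceProvider
            .GetService(typeof(IOrganizationServiceFactory));
        // service can be used for additional queries/updates if needed
        IOrganizationService service = factory.CreateOrganizationService(context.UserId);

        // "Target" holds the record the message is operating on.
        if (context.InputParameters.Contains("Target") &&
            context.InputParameters["Target"] is Entity target)
        {
            // Hypothetical server-side logic: default a field on Create.
            if (!target.Attributes.Contains("new_priority"))
                target["new_priority"] = new OptionSetValue(1); // illustrative attribute
        }
    }
}
```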
In my experience, plugins are usually the better-suited solution if the matter is even a little complicated, as workflows are far less maintainable. Simple "one-liners" can often be handled by workflows.
Nevertheless, each developer/consultant has to suggest his or her own way for the improvement/development of the organization.
#Community: Feel free to correct me if I am wrong anywhere or if you have different experiences.

How to create a performance testing framework in JMeter?

For functional automation we usually create a framework that is reusable for automating applications. Is there any way to create a performance testing framework in JMeter, so that we can use the same framework for performance testing of different applications?
Please help if anyone knows, and provide more information regarding it.
You can consider JMeter itself as a "framework" which already comes with test elements to build requests via different protocols/transports, apply assertions, generate reports, etc.
It is highly unlikely you will be able to reuse an existing script for another application, as JMeter acts on the protocol level and therefore the requests will differ from application to application.
JMeter does have a mechanism for reusing pieces of a test plan as modules so you don't have to duplicate code (check out Test Fragments and the Module Controller), but it is more applicable within a single application.
The only "framework-like" approach I can think of is adding your JMeter tests to a continuous integration process, so you have a build step that executes the performance tests and publishes reports. You will then be able to reuse the same test setup and reporting routine, and the only thing that changes from application to application will be the .jmx test script(s). See the JMeter Maven Plugin and/or the JMeter Ant Task for more details.
You must first ask yourself: how dynamic is the conversation I am attempting to replicate? If you have a very stable services API where the exposed external interface is static, but the code to handle it on the back end is changing, then you have a good shot at building something which has a long life.
But if you are like the majority of web sites in the universe, then you are dealing with developers who are always changing something: adding a resource, adding or deleting form values (hidden or not), headers, etc. In this case you should consider your scripts perishable, with a limited life, and you will need to rebuild them at some point.
Having noted the limited lifetime of a piece of code that tests a piece of code with a limited lifetime, are there techniques you can use to insulate yourself? Yes. The rule of thumb is: the higher up the stack you go to build your test scripts, the more insulated you are from changes under the covers (assuming the layer you build to is stable). The trade-off is that with more intelligence under the covers of your test interface comes a higher resource cost for each individual virtual user, which dictates more hosts for test execution and more skew from client-side code that can distort the view of what is coming from the server. For example, run a Selenium script instead of a bare JMeter script: a browser is invoked, you get the benefit of all the local JavaScript processing to handle the dynamic changes, and your script has a longer life.

Enforcing relational workflows in TargetProcess

I'm currently evaluating a few different issue management tools, and have it narrowed down to TargetProcess, Redmine and Youtrack. For what I need TargetProcess seems to do everything with a lot less need for customisation, however as the only person working on QA at a small startup, I'm trying to make sure that as much of the process is automated as possible.
YouTrack has a workflow editor which allows you to write validation rules for your issues, and would therefore allow me to specify that you can't move an issue of a certain type into a certain state without having a related issue of another type, for example you cannot move a feature out of "New" without having a set of related requirements in the form of test cases.
While this isn't as ingrained in Redmine, there is a plugin which allows you to write these types of rules. I haven't, however, been able to find anything of the sort for TargetProcess, and I worry that the lack of this sort of deep customisation will become an extra time-sink, as I'll have to spend more time on the process myself.
Is there any way to achieve this in TargetProcess, be it with a plugin or an external service? I can see that I could hook something up to the REST API, but this would make it difficult to give feedback as to why an issue had not been progressed. TargetProcess is an impressive tool, but it is very expensive, and unless it does everything I want, it is difficult to justify the outlay.
TL/DR
Is there a mechanism for writing business rules into TargetProcess such that the proper QA process is enforced, so I can concentrate on providing value through QA rather than process management?
There are no custom Business Rules in Targetprocess so far. The only thing that exists is a mashup that allows some rule customization related to custom fields:
https://github.com/TargetProcess/TP3MashupLibrary/tree/master/Custom%20Field%20Constraints
Custom Business Rules are requested by many people and we are going to start development this year.
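In the meantime, the closest workaround is the external route the question already mentions: a small watcher against the REST API that checks the rule and moves or comments on offending items. A very rough sketch; the account URL, query syntax, and entity/field names below are assumptions, not verified API details:

```csharp
// Very rough sketch of an external rule watcher against the Targetprocess REST API.
// The account URL, query syntax, and entity/field names are assumptions.
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public class TargetProcessRuleWatcher
{
    private readonly HttpClient _client;

    public TargetProcessRuleWatcher(string user, string password)
    {
        _client = new HttpClient { BaseAddress = new Uri("https://yourcompany.tpondemand.com/") };
        _client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue(
            "Basic", Convert.ToBase64String(Encoding.ASCII.GetBytes($"{user}:{password}")));
    }

    // Pull features sitting in a guarded state; a real watcher would parse the JSON,
    // verify each feature has linked test cases, and move/comment on offenders with
    // follow-up API calls so the "why" is visible to the team.
    public Task<string> GetFeaturesInGuardedStateAsync(string stateName) =>
        _client.GetStringAsync(
            $"api/v1/Features?where=(EntityState.Name eq '{stateName}')&format=json");
}
```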

Preferred methods for interacting with a rules engine

I am about to dive into a rules-oriented project (using ILOG's Rules for .NET, now IBM), and I have read a couple of different perspectives on how to set up rules processing and how to interact with the rule engine.
The two main approaches I have seen are: centralize the rule engine (into its own farm of servers) and program against the farm via a web service API (or, in ILOG's case, via WCF); or run an instance of the rule engine on each of your app servers and interact with it locally, with each instance having its own copy of the rules.
The upside to centralization is the ease of deploying rules to a single location. The rules scale as they need to, rather than scaling each time you expand your application server configuration, which reduces waste from a purchased-license perspective. The downside to this setup is the added overhead of making service calls, network latency, etc.
The upside/downside to running the rule engine locally is the exact opposite of the centralized configuration's upside/downside: no slow service calls (fast API calls), no network issues, and each app server relies on itself. But managing the deployment of rules becomes more complex, and each time you add a node to your app cloud you will need more licenses for rule engines.
In reading white papers I see that Amazon runs a rule engine per app server. They appear to do a slow deployment of rules and accept that the lag in rule publishing is "acceptable", even though business logic is out of sync for a given period of time.
Question: From your experience, what is the best way to start integrating rules into a .NET-based web app for a shop that has not yet spent much time working in a rules-driven world?
I never liked the centralization argument. It means that everything is coupled into the rules engine, which becomes a dumping ground for all the rules in the system. Pretty soon you can't change anything for fear of the unknown: "What will we break?"
I much prefer following Amazon's idea of services as isolated, autonomous components. I interpret that to mean that services own their data and their rules.
This has the added benefit of partitioning the rules space. A rule set becomes harder to maintain as it grows; better to keep them to a manageable size.
If parts of the rule set are shared, I'd prefer a data-driven, DI approach where a service has its own instance of a rules engine and loads the common rules from a database on startup. This might not be feasible if your ILOG license makes multiple instances cost-prohibitive. That would be a case where a product that's supposed to be helping actually dictates architectural choices that will bring grief. It would be a good argument for a less expensive alternative (e.g., JBoss Rules in Java-land).
What about a data-driven decision-tree approach? Is a Rete rules engine really necessary, or is the "enterprise tool" decision driving your choice?
I'd try to set up the rules engine so it is as decoupled from the rest of the enterprise as possible. I wouldn't have it calling out to databases or services if I could avoid it. Better to make that the responsibility of the objects asking for a decision: let them call the necessary web services and databases to assemble the required data, pass it to the rules engine, and let it do its thing. Coupling is your enemy: try to design your system to minimize it. Keeping rules engines isolated is a good way to do that.
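To make that concrete, here is an engine-agnostic sketch of the shape I mean; the fact and decision types, and the rule content, are made up for illustration:

```csharp
// Engine-agnostic sketch of the decoupled approach described above:
// callers assemble the facts, the decision component just evaluates them.
// Types and rule content are illustrative, not tied to any specific product.
public record OrderFacts(decimal Total, int CustomerYears, bool HasOpenClaims);
public record Decision(bool Approved, string Reason);

public interface IDecisionService
{
    Decision Decide(OrderFacts facts);
}

public class DiscountDecisionService : IDecisionService
{
    public Decision Decide(OrderFacts facts)
    {
        // Pure function over the facts: no database or web-service calls in here.
        if (facts.HasOpenClaims)
            return new Decision(false, "Open claims block automatic approval.");
        if (facts.Total > 10_000m && facts.CustomerYears < 1)
            return new Decision(false, "Large order from a new customer needs review.");
        return new Decision(true, "Within automatic approval limits.");
    }
}

// Caller: fetch data from your own services/DB, hand the facts to the decision
// service, then act on the returned decision yourself.
```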
We're using ILOG For DotNet and have a deployed pilot project.
Here's a summary of our immature Rules Architecture:
All data-access done outside of rules.
Rules are deployed the same way as code (source control, release process, yada yada).
Projects (services) that use rules have a reference to ILOG.Rules.dll and new-up RuleEngines via a custom pooling class (a stripped-down sketch of the pooling idea is below). RuleEngines are pooled because it is expensive to bind a RuleSet to a RuleEngine.
Almost all rules are written to expect Assert'd objects, rather than RuleFlow parameters.
Since the rules run in the same memory space, instances that are modified by the rules are the same instances in the program - which is immediate propagation of state.
Almost all rules are run via RuleFlow (even if it is a single RuleStep in the RuleFlow).
We're looking at RuleExecutionServer as a hosting platform, as well as RuleTeamServerForSharePoint as the host for the rules source. Eventually, we will have rules deployed to production outside of the code release process.
The primary obstacle in all our Rule endeavors is Modeling and Rule Authoring skillsets.
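As mentioned above, a stripped-down sketch of the pooling idea; IRuleEngine here is a hypothetical wrapper around the vendor engine, not the actual ILOG API:

```csharp
// Stripped-down sketch of the engine-pooling idea: pre-bind a few engine
// instances to the rule set and hand them out, because binding is expensive.
using System;
using System.Collections.Concurrent;

public interface IRuleEngine : IDisposable   // hypothetical wrapper around the vendor engine
{
    void Execute(object[] facts);
}

public class RuleEnginePool
{
    private readonly ConcurrentBag<IRuleEngine> _pool = new();
    private readonly Func<IRuleEngine> _createAndBind;

    public RuleEnginePool(Func<IRuleEngine> createAndBind, int warmCount = 4)
    {
        _createAndBind = createAndBind;
        for (int i = 0; i < warmCount; i++)
            _pool.Add(createAndBind());          // bind the rule set up front
    }

    public IRuleEngine Rent() =>
        _pool.TryTake(out var engine) ? engine : _createAndBind();

    public void Return(IRuleEngine engine) => _pool.Add(engine);
}

// Usage: var engine = pool.Rent(); try { engine.Execute(facts); } finally { pool.Return(engine); }
```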
I don't have much to say on the "which server" question, but I would urge you to develop decision services: callable services that use rules to make decisions but do not change the state of the business. Letting the calling application/service/process decide what data changes to make as a result of calling the decision service, and having the calling component actually initiate the action(s) suggested by the decision service, makes it easier to use the decision service over and over again (across channels, processes, etc.). The cleaner and less tied to the rest of the infrastructure the decision service is, the more reusable and manageable it is going to be.
The discussion here on ebizQ might be worth reading in this regard.
In my experience with rules engines, we've applied a pretty basic set of practices to govern interaction with the rules engine. First of all, these have always been commercial rules engines (ILOG, Corticon) and not open source (Drools), so deploying locally to each of the app servers has never really been a viable option due to licensing costs. Hence, we've always gone with the centralized model, in two primary flavors:
Remote execution via web service - In the same way you specified in your question, we make calls to SOAP-based services provided by the rules engine product. Within the web service realm we have come upon several options: (1) "boxcar" the requests, allowing the application to queue up rules-processing requests and send them over in chunks as opposed to one-off messages (a small batching sketch follows these two options); (2) tune the threading and process options provided by the vendor. This includes separating decision services out by function and allocating each its own W3WP and/or using web gardens. There is an awful lot of tweaking you can do with boxcars, threads, and processes, and getting the right mix is more a process of trial and error (and knowing your rule sets and data) than an exact science.
Remotely call the rules engine in process - A classic batch-style trick to avoid the overhead of serialization and deserialization: remotely trigger a call that fires up an in-process invocation of the rules engine. This can be done either on a schedule (e.g. batch) or on demand (i.e. "boxcars" of requests). Either way, a lot of the overhead of the service calls can be avoided by interacting directly with the process and the database. The downside is that you don't have IIS or your EJB/servlet container managing the threads for you; you have to do it yourself.
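As referenced in option 1, the "boxcar" approach is essentially request batching; a small sketch of the shape, where the batch-sending delegate stands in for the real service/WCF call:

```csharp
// Small sketch of the "boxcar" idea from option 1: queue rule-evaluation
// requests and flush them to the remote decision service in chunks instead
// of one call per request. The sendBatchAsync delegate stands in for the real call.
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public class BoxcarDispatcher<TRequest>
{
    private readonly List<TRequest> _pending = new();
    private readonly int _batchSize;
    private readonly Func<IReadOnlyList<TRequest>, Task> _sendBatchAsync;

    public BoxcarDispatcher(int batchSize, Func<IReadOnlyList<TRequest>, Task> sendBatchAsync)
    {
        _batchSize = batchSize;
        _sendBatchAsync = sendBatchAsync;
    }

    public async Task EnqueueAsync(TRequest request)
    {
        List<TRequest> batch = null;
        lock (_pending)
        {
            _pending.Add(request);
            if (_pending.Count >= _batchSize)
            {
                batch = new List<TRequest>(_pending);
                _pending.Clear();
            }
        }
        if (batch != null)
            await _sendBatchAsync(batch);   // one service call for the whole boxcar
    }
}
```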

MS Validation Block or Workflow Rules engine?

For a large application that will be developed, we are in the process of selecting a validation framework. Although the Workflow Rules engine is not strictly a validation framework, it can be used by itself without using the Workflow Foundation. It appears to offer the flexibility of specifying rules in a database that is read at runtime; however, it appears that you cannot specify rules in code.
If greater flexibility is one of the requirements (not necessarily that the rules need to be editable by business analysts), which of the two would you prefer, and why?
It matters quite a bit what your exact requirements are. 'Being flexible' is by itself not a good requirement, because it isn't measurable; whether something is flexible is very subjective.
I'm not familiar with the Microsoft Business Rules Engine, so I can't comment on that. I am, however, very familiar with the Microsoft Enterprise Library Validation Application Block (VAB), and it has served me well over the last year. It has several features that make it flexible for the situations I'm dealing with (a small attribute-based sketch follows this list):
It allows defining validation both declaratively (using attributes) and in an external configuration file (which is very useful when your entities are generated).
It contains a set of default validators, and custom validators can be written.
It allows validation of single properties and lets you compare multiple properties as a group (using self-validation or custom validators).
It allows validating objects in isolation as well as object graphs.
It allows you to define multiple 'rule sets', which for instance lets you define a set of hard errors and a set of warnings.
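To make the attribute-based side concrete, a small sketch; the property names, bounds, and rule-set names are made up for illustration:

```csharp
// Small sketch of the declarative (attribute-based) side of VAB; property names
// and the "HardErrors"/"Warnings" rule-set names are made up for illustration.
using Microsoft.Practices.EnterpriseLibrary.Validation;
using Microsoft.Practices.EnterpriseLibrary.Validation.Validators;

[HasSelfValidation]
public class Customer
{
    [NotNullValidator(Ruleset = "HardErrors")]
    [StringLengthValidator(1, 100, Ruleset = "HardErrors")]
    public string Name { get; set; }

    public string BillingCountry { get; set; }
    public string ShippingCountry { get; set; }

    // Cross-property check expressed as a self-validation rule.
    [SelfValidation(Ruleset = "Warnings")]
    public void CheckCountries(ValidationResults results)
    {
        if (BillingCountry != ShippingCountry)
            results.AddResult(new ValidationResult(
                "Billing and shipping countries differ.", this, "ShippingCountry", null, null));
    }
}

public static class CustomerValidation
{
    // Validate against a specific rule set; iterate the results for messages.
    public static ValidationResults CheckHardErrors(Customer customer) =>
        ValidationFactory.CreateValidator<Customer>("HardErrors").Validate(customer);
}
```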
VAB (or the Enterprise Library as a whole) allows you to write a custom configuration source (IConfigurationSource), which lets you store your business rules wherever you want. So in theory you could keep them in the database; however, you will have to write such a configuration source yourself, and that will be quite some work. Especially if you want your business analysts to be able to define validations and update the database with some sort of editing tool, I think this will be quite hellish to accomplish with VAB.
If there really is a requirement for the business people to write those rules themselves, hopefully you have a process supporting that requirement. For instance, how are they going to test whether their changes are correct? You wouldn't want your business analysts to make changes directly to the production database.
But please give this a thought: if the rules are going to be tested, I expect those rules not to be changed directly in your production database, otherwise you would be testing after the fact. So the analysts would be changing the rules in their own environment, and you'd probably be publishing the new rules from that environment to a test environment, later to an acceptance environment, and eventually to production. And if you're taking these steps, should you still use a database to store those business rules? Using a configuration file would make it much easier than using the database: deployment would simply be a file copy instead of copying the contents of multiple tables from database to database.
I'm interested in what others have to say about the validation frameworks they know well.
