Drools: can we add/amend rules at runtime?

Can we add/amend rules (.drl or decision table) at runtime in Drools?
For example:
I created a simple decision table rule. The rule worked fine, but any change to the decision table was not reflected in rule evaluation at runtime until the JVM was restarted.
Tried this: re-creating the KieSession

I was able to do this by creating a new KieSession instance as below. This was done with:
Drools version: 8.32.0
Java 11
============================
KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
// Rebuild the knowledge base from the current decision table file.
builder.add(ResourceFactory.newFileResource(<rule xls file directory>), ResourceType.DTABLE);
InternalKnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
knowledgeBase.addPackages(builder.getKnowledgePackages());
KieSession kieSession = knowledgeBase.newKieSession();
============================
A new session was created whenever the rule file was amended.
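As a rough sketch (not from the original post), the rebuild can be wrapped in a helper that checks the spreadsheet's last-modified timestamp and rebuilds only when the file has actually changed:
============================
// Sketch only: rebuild the session whenever the decision table file changes.
public class RuleReloader {
    private final File ruleFile;
    private long lastModified;
    private KieSession session;

    public RuleReloader(File ruleFile) {
        this.ruleFile = ruleFile;
    }

    public KieSession currentSession() {
        long modified = ruleFile.lastModified();
        if (session == null || modified != lastModified) {
            if (session != null) {
                session.dispose(); // release the stale session
            }
            KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
            builder.add(ResourceFactory.newFileResource(ruleFile), ResourceType.DTABLE);
            InternalKnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
            knowledgeBase.addPackages(builder.getKnowledgePackages());
            session = knowledgeBase.newKieSession();
            lastModified = modified;
        }
        return session;
    }
}
============================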

Related

Apache Geode - Creating region on DUnit Based Test Server/Remote Server with same code from client

I am trying to reuse the code from the following documentation: https://geode.apache.org/docs/guide/11/developing/region_options/dynamic_region_creation.html
The first problem I met is that
Cache cache = CacheFactory.getAnyInstance();
Region<String,RegionAttributes<?,?>> regionAttributesMetadataRegion = createRegionAttributesMetadataRegion(cache);
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not server" error. When this was fixed, I received:
[fatal 2021/02/15 16:38:24.915 EET <ServerConnection on port 40527 Thread 1> tid=81] Serialization filter is rejecting class org.restcomm.cache.geode.CreateRegionFunction
java.lang.Exception:
at org.apache.geode.internal.ObjectInputStreamFilterWrapper.lambda$createSerializationFilter$0(ObjectInputStreamFilterWrapper.java:233)
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed.
So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
Region<?, ?> region = instance.getRegion(name);
if (region == null) {
    // Region does not exist yet: ask the servers to create it.
    Execution execution = FunctionService.onServers(instance);
    ArrayList<String> argList = new ArrayList<>();
    argList.add(name);
    Function function = new CreateRegionFunction();
    execution.setArguments(argList).execute(function).getResult();
}
// Always create the client-side (caching proxy) view of the region.
ClientRegionFactory<Object, Object> cf = this.instance
        .createClientRegionFactory(ClientRegionShortcut.CACHING_PROXY)
        .addCacheListener(new ExtendedCacheListener());
this.cache = cf.create(name);
BR
Yulian Oifa
The first problem I met is that
Cache cache = CacheFactory.getAnyInstance();
should not be executed in the constructor. If it is, the code runs in the client instance and fails with a "not server" error. When this was fixed, I received:
Once the function is registered on the server side, you can execute it by ID instead of sending the object across the wire (so you won't need to instantiate the function on the client), which also avoids the serialization filter error. As an example: FunctionService.onServers(instance).execute(CreateRegionFunction.ID).
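A rough sketch of that approach, reusing the names from the question (this assumes CreateRegionFunction exposes a String ID constant and the class is already deployed on the servers):
// On each server, at startup:
FunctionService.registerFunction(new CreateRegionFunction());

// On the client: execute by ID, so the function class never travels over the wire.
ArrayList<String> argList = new ArrayList<>();
argList.add(name);
FunctionService.onServers(instance)
        .setArguments(argList)
        .execute(CreateRegionFunction.ID)
        .getResult();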
The problem is that the code is executed on the dunit MemberVM, while the required class is actually part of the package under which the test is executed. So I guess I should somehow register the classes (or maybe the jar) separately with the dunit MemberVM. How can this be done?
Indeed, for security reasons Geode doesn't allow serializing / deserializing arbitrary classes. Internal Geode distributed tests use the MemberVM and set a special property (serializable-object-filter) to circumvent this problem. Here's an example of how you can achieve that within your own tests.
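For illustration, a rough sketch of what that can look like with the ClusterStartupRule from geode-dunit (the rule variable and the filter pattern are assumptions, not from the original answer):
// 'cluster' is the test's ClusterStartupRule; start the server VM with a
// serialization filter that accepts the test's own classes.
MemberVM server = cluster.startServerVM(1,
        s -> s.withProperty("serializable-object-filter", "org.restcomm.cache.geode.**"));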
Another question: currently the code checks whether the region exists and, if not, calls the function. In both cases it also tries to create the client region. Is this a correct approach?
If the dynamically created region is used by the client application then yes, you should create it; otherwise you won't be able to use it.
As a side note, there's a lot of internal logic implemented by Geode when creating a region, so I wouldn't advise dynamically creating regions on your own. Instead, it would be advisable to use the gfsh create region command directly, or look at how it works internally (see here) and try to re-use that.
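For reference, the gfsh command is a one-liner (region name and type are placeholders):
gfsh> create region --name=myRegion --type=PARTITION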

How to use ReteOO programmatically in Drools 7.5.0.Final

I am trying to use ReteOO in Drools 7.5.0.Final with Java 8; however, the following code does not compile:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kconfig = ks.newKieBaseConfiguration();
kconfig.setOption(RuleEngineOption.RETEOO);
Also, drools-reteoo-(version).jar is not included in the binary folder of the Drools 7.5.0.Final distribution.
Thanks in advance.
ReteOO is no longer available in the Drools 7.x stream. PHREAK, its successor in Drools, has been the default since the 6.x series. If you need immediate or eager evaluation, you can use one of the propagation modes; see the docs [1].
Regards,
Tibor
[1] https://docs.jboss.org/drools/release/7.7.0.Final/drools-docs/html_single/index.html#_propagation_modes_2

How does one configure VSTS-specific load test context parameters for one's own Azure agents intelligently?

We recently moved our load test agents from AWS to Azure, thus making the transition to full use of VSTS.
It was described that, for the moment, to get a load test file working with VSTS using our own VMs for testing, we need to provide two context parameters, UseStaticLoadAgents and StaticAgentsGroupName, in each loadtest file.
Our load test solution is getting very large, and we have multiple loadtest files in which we have to set these two values each time. This means that if we were to change our agent group name, for example, we would have to update each individual load test file with the new information.
I'm looking for a way to centralise this until a nicer way is implemented by Microsoft. The idea was to use a load test plugin to add these context parameters, with the plugin drawing the needed values from a centralised config file.
However, neither the hooks in the load test plugin nor simply using the initialise method to set these values manually seems to work, likely because they are set after full initialisation.
Has anyone got a nice, code-focused solution to manage this and stop us depending on brittle values in the editor? Or has anyone even gotten the above approach to work?
The .loadtest file is an XML file, so you can update it programmatically, for example:
string filePath = @"XXX\LoadTest1.loadtest";
XmlDocument doc = new XmlDocument();
doc.Load(filePath);
XmlNamespaceManager nsmgr = new XmlNamespaceManager(doc.NameTable);
nsmgr.AddNamespace("ns", "http://microsoft.com/schemas/VisualStudio/TeamTest/2010");
XmlNode root = doc.DocumentElement;
XmlNode nodeParameters = root.SelectSingleNode("//ns:RunConfigurations/ns:RunConfiguration[@Name='Run Settings1']/ns:ContextParameters", nsmgr);
if(nodeParameters!=null)
{
//nodeParameters.SelectSingleNode("//ns:ContextParameter[@Name='UseStaticLoadAgents']").Value = "agent1";
foreach (XmlNode n in nodeParameters.ChildNodes)
{
switch (n.Attributes["Name"].Value)
{
case "Parameter1":
n.Attributes["Value"].Value = "testUpdate";
break;
case "UseStaticLoadAgents":
n.Attributes["Value"].Value = "agent1";
break;
case "StaticAgentsGroupName":
n.Attributes["Value"].Value = "group1";
break;
}
}
}
doc.Save(filePath);

F# project is taking a very long time to build

I have created an F# solution and added one class library. There is only one project in the solution, with 5 files and 20 lines of code in each file. Still, it takes more than 2 minutes to build each time.
I have tried cleaning the solution.
I also created a new solution and project and included the same files; it still takes the same time to build.
Note: First I created it as a Console Application, then converted it into a Class Library.
Edit: code sample:
open System
open Configuration
open DBUtil
open Definitions
module DBAccess =
    let GetSeq (sql: string) =
        let db = dbSchema.GetDataContext(connectionString)
        db.DataContext.CommandTimeout <- 0
        (db.DataContext.ExecuteQuery(sql, ""))
    let GetEmployeeByID (id: EMP_PersonalEmpID) =
        GetSeq (String.Format("EXEC [EMP_GetEntityById] {0}", id.EmployeeID)) |> Seq.toList<EMP_PersonalOutput>
    let GetEmployeeListByIDs (id: Emp_PersonalInput) =
        GetSeq (String.Format("EXEC [EMP_GetEntityById] {0}", id.EmployeeID)) |> Seq.toList<EMP_PersonalOutput>
Configuration code snippet:
open Microsoft.FSharp.Data.TypeProviders
module Configuration =
    let connectionString = System.Configuration.ConfigurationManager.ConnectionStrings.["EmpPersonal"].ConnectionString
    // for the database, then the stored procedure, then getting the context, then taking the employee table
    type dbSchema = SqlDataConnection<"", "EmpPersonal">
    //let db = dbSchema.GetDataContext(connectionString)
    type tbEmpPersonal = dbSchema.ServiceTypes.EMP_Personal
Okay, seeing your actual code, I think the main problem is that the type provider connects to the database on every compile to retrieve the schema. The way to fix this is to cache the schema in a dbml file.
type dbSchema = SqlDataConnection<"connection string...",
LocalSchemaFile = "myDb.dbml",
ForceUpdate = false>
The first time, the TP will connect to the database as usual, but it will also write the schema to myDb.dbml. On subsequent compiles, it will load the schema from myDb.dbml instead of connecting to the database.
Of course, this caching means that changes to the database are not reflected in the types. So every time you need to reload the schema from the database, you can set ForceUpdate to true, do a compile (which will connect to the db), and set it back to false to use the updated myDb.dbml.
Edit: you can even commit the dbml file to your source repository if you want. This has the additional benefit of allowing collaborators who don't have access to a development version of the database to compile the solution anyway.
This answer about NGEN helped me once, but the build time of F# is still terrible compared to C#; just no longer minutes.

Unitils and DBMaintainer - how to make them work with multiple users/schemas?

I am working on a new Oracle ADF project that uses an Oracle 10g database, and I am using Unitils and DBMaintainer in our project for:
updating the db structure
unittesting
read in seed data
read in test data
In our project we have 2 schemas, and 2 db users that have privileges to connect to these schemas. I keep the scripts in a folder structure with incremental names, and I am using the @ convention for script naming.
001_@schemaA_name.sql
002_@schemaB_name.sql
003_@schemaA_name.sql
This works fine with Ant and the DBMaintainer update task, and I supply the multiple user names by configuring extra elements for the Ant task.
<target name="create" depends="users-drop, users-create" description="This tasks ... ">
<updateDatabase scriptLocations="${dbscript.maintainer.dir}" autoCreateDbMaintainScriptsTable="true">
<database name="${db.user.dans}" driverClassName="${driver}" userName="${db.user.dans}" password="${db.user.dans.pwd}" url="${db.url.full}" schemaNames="${db.user.dans}" />
<database name="idp" driverClassName="${driver}" userName="${db.user.idp}" password="${db.user.idp.pwd}" url="${db.url.full}" schemaNames="${db.user.idp}" />
</updateDatabase>
</target>
However, I can't figure out how to make the DBMaintainer update task create the xsd schemas from my db schemas.
So, I decided to use Unitils, since its update creates xsd schemas.
I haven't found any description or documentation for the Unitils ant tasks - can anyone give some hints?
For the time being, I have figured out how to run Unitils by creating a JUnit test with the @DataSet annotation. I can make it work with one schema and one db user, but I am out of ideas how to make it work with multiple users.
Here is the unitils-local.properties setup I have:
database.url=jdbc\:oracle\:thin\:@localhost\:1521\:vipu
database.schemaNames=a,b
database.userName=a
database.password=a1
Can any of you give me a tip on how to make Unitils work with the second user/schema?
I will be extremely grateful for your help!
Eventually I found a way to inject any unitils properties of your choice: by instantiating Unitils yourself!
You need a method that is invoked @BeforeClass, in which you do something like the following:
@BeforeClass
public static void initializeUnitils() {
    Properties properties;
    ...
    // load properties file/values depending on various conditions
    ...
    Unitils unitils = new Unitils();
    unitils.init(properties);
    Unitils.setInstance(unitils);
}
I chose the properties file depending on which Hibernate configuration is loaded (via @HibernateSessionFactory), but there should be other options as well.
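For illustration, the conditional loading itself can be as simple as this (the flag, file names, and test class are placeholders, not from the original answer):
Properties properties = new Properties();
// Pick one of several bundled property files; names are made up for illustration.
String fileName = useProductionDb ? "unitils-prod.properties" : "unitils-dev.properties";
try (InputStream in = SomeTest.class.getResourceAsStream("/" + fileName)) {
    properties.load(in);
}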
I have figured out how to make dbmaintain and unitils work together with multi-database-user support, but the solution is a pure Ant hack.
I set up the configuration for dbmaintain, using multi-database-user support.
I made a unitils-local.properties file with token keys for replacement.
The init target of my Ant script generates a new unitils-local.properties file by replacing the tokens for username/password/schema with the values for the target environment, and then copies it to the user's home directory (roughly as sketched below).
I sorted the tests into folders that are prefixed with the schema name.
When Unitils is invoked, it picks up the unitils-local.properties file just created by the Ant script and does its magic.
It's far from pretty, but it works.
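The token replacement step might look roughly like this in the Ant script (the template file and token names are made up for illustration; the ${db.user.dans} properties are from the Ant snippet above):
<copy file="unitils-local.properties.template"
      tofile="${user.home}/unitils-local.properties" overwrite="true">
  <filterset>
    <!-- replaces @DB_USER@, @DB_PASSWORD@ and @DB_SCHEMA@ tokens in the template -->
    <filter token="DB_USER" value="${db.user.dans}"/>
    <filter token="DB_PASSWORD" value="${db.user.dans.pwd}"/>
    <filter token="DB_SCHEMA" value="${db.user.dans}"/>
  </filterset>
</copy>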
Check out this link: http://www.dbmaintain.org/tutorial.html#From_Java_code
Specifically you may need to do something like:
databases.names=admin,user,read
database.driverClassName=oracle.jdbc.driver.OracleDriver
database.url=jdbc:oracle:thin://mydb:1521:MYDB
database.admin.username=admin
database.admin.password=adminpwd
database.admin.schemaNames=admin
database.user.userName=user
database.user.password=userpwd
database.user.schemaNames=user
database.read.userName=read
database.read.password=readpwd
database.read.schemaNames=read
Also this link may be helpful: http://www.dbmaintain.org/tutorial.html#Multi-database__user_support
I followed Ryan's suggestion, and I noticed a couple of changes when I debugged UnitilsDB.
The following is my working unitils-local.properties:
database.names=db1,db2
database.driverClassName.db1=oracle.jdbc.driver.OracleDriver
database.url.db1=jdbc:oracle:thin:@db1d.company.com:123:db1d
database.userName.db1=user
database.password.db1=password
database.dialect.db1=oracle
database.schemaNames.db1=user_admin
database.driverClassName.db2=oracle.jdbc.driver.OracleDriver
database.url.db2=jdbc:oracle:thin:@db2s.company.com:456:db2s
database.userName.db2=user
database.password.db2=password
database.dialect.db2=oracle
Make sure to use @ConfigurationProperties(prefix = "database.db1") to connect to a particular database in your test case:
@RunWith(UnitilsJUnit4TestClassRunner.class)
@ConfigurationProperties(prefix = "database.db1")
@Transactional
@DataSet
public class MyDAOTest {
..
}
