How to use ReteOO programmatically in Drools 7.5.0.Final - java-8

I am trying to use ReteOO with Drools 7.5.0.Final and Java 8; however, the following code does not compile:
KieServices ks = KieServices.Factory.get();
KieBaseConfiguration kconfig = ks.newKieBaseConfiguration();
kconfig.setOption(RuleEngineOption.RETEOO);
Also, drools-reteoo-(version).jar is not included in the binary folder of the Drools 7.5.0.Final distribution.
Thanks in advance.

ReteOO is no longer available in the Drools 7.x stream. PHREAK, ReteOO's successor in Drools, has been the default algorithm since the 6.x series. If you need immediate or eager evaluation, you can use one of the propagation modes; see the documentation [1].
Regards,
Tibor
[1] https://docs.jboss.org/drools/release/7.7.0.Final/drools-docs/html_single/index.html#_propagation_modes_2
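For completeness, a minimal sketch of building a KieBase on 7.x, where PHREAK is the only algorithm and no engine option needs to be (or can be) set; this assumes a kjar with a default kbase on the classpath:
import org.kie.api.KieBase;
import org.kie.api.KieServices;
import org.kie.api.runtime.KieContainer;
import org.kie.api.runtime.KieSession;

KieServices ks = KieServices.Factory.get();
KieContainer kContainer = ks.getKieClasspathContainer();
// No RuleEngineOption to set: PHREAK is always used in 7.x.
KieBase kBase = kContainer.getKieBase();
KieSession kSession = kBase.newKieSession();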

Related

Drools: can we add/amend rules at runtime

Can we add/amend rules (.drl or decision table) at runtime in Drools?
For example:
I created a simple decision table rule. The rule worked fine; however, any change to the decision table was not reflected in rule evaluation at runtime until the JVM was restarted.
Re-creating the KieSession worked
I was able to do this by creating a new KieSession instance as below.
This was done with:
Drools version: 8.32.0
Java 11
============================
KnowledgeBuilder builder = KnowledgeBuilderFactory.newKnowledgeBuilder();
// Re-read the decision table from disk so the latest changes are picked up
builder.add(ResourceFactory.newFileResource(<rule xls file directory>), ResourceType.DTABLE);
InternalKnowledgeBase knowledgeBase = KnowledgeBaseFactory.newKnowledgeBase();
knowledgeBase.addPackages(builder.getKnowledgePackages());
KieSession kieSession = knowledgeBase.newKieSession();
=====================
A new session was created whenever an amendment was made to the rule.
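For illustration, a minimal usage sketch against the freshly created session (the Order fact class and its constructor are hypothetical placeholders, not part of the original answer):
try {
    // "Order" is a hypothetical fact class used only to illustrate the flow.
    kieSession.insert(new Order("ORD-1", 250.0));
    int fired = kieSession.fireAllRules();
    System.out.println("Rules fired: " + fired);
} finally {
    kieSession.dispose(); // release the session when evaluation is done
}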

Unable to resolve lmjoin in the winapi::um [duplicate]

I'm trying to use rand::SmallRng. The documentation says
This PRNG is feature-gated: to use, you must enable the crate feature small_rng.
I've been searching and can't figure out how to enable "crate features". The phrase isn't even used anywhere in the Rust docs. This is the best I could come up with:
[features]
default = ["small_rng"]
But I get:
Feature default includes small_rng which is neither a dependency nor another feature
Are the docs wrong, or is there something I'm missing?
Specify the dependency in Cargo.toml like so:
[dependencies]
rand = { version = "0.7.2", features = ["small_rng"] }
Alternatively:
[dependencies.rand]
version = "0.7.2"
features = ["small_rng"]
Both work.

OptaPlanner's Drools rules don't fire with Spring Boot's devtools on the classpath so the score is zero

I got OptaPlanner working correctly with Drools rules.
"Suddenly", after some change I made, OptaPlanner no longer puts my facts into the Drools kSession.
I added some logging, and I can see that OptaPlanner calls the getProblemFacts() method on my Solution, and that this method returns a list with size > 0.
I wrote a DRL rule that simply counts the facts and logs these counts (this rule is unit tested and works well when I put the objects into the kSession myself). I am also convinced that OptaPlanner does not put the facts into the working memory.
The ConstructionHeuristics phase terminates fine (and does its job, as my PlanningVariables are no longer null after this phase). The issue only appears when LocalSearch begins.
I don't know how or where to investigate further. Any ideas?
One more hint: I use <scanAnnotatedClasses/> and have this problem.
If I reference the two classes "manually" using <solutionClass/> and <entityClass/>, then I get a reflection error:
Exception in thread "Solver" java.lang.IllegalArgumentException: object is not an instance of declaring class
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:483)
at org.optaplanner.core.impl.domain.common.accessor.BeanPropertyMemberAccessor.executeGetter(BeanPropertyMemberAccessor.java:67)
at org.optaplanner.core.impl.domain.solution.descriptor.SolutionDescriptor.extractEntityCollection(SolutionDescriptor.java:626)
at org.optaplanner.core.impl.domain.solution.descriptor.SolutionDescriptor.getEntityCount(SolutionDescriptor.java:489)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner$FieldAccessingSolutionClonerRun.cloneSolution(FieldAccessingSolutionCloner.java:200)
at org.optaplanner.core.impl.domain.solution.cloner.FieldAccessingSolutionCloner.cloneSolution(FieldAccessingSolutionCloner.java:70)
at org.optaplanner.core.impl.score.director.AbstractScoreDirector.cloneSolution(AbstractScoreDirector.java:147)
at org.optaplanner.core.impl.solver.scope.DefaultSolverScope.setWorkingSolutionFromBestSolution(DefaultSolverScope.java:197)
at org.optaplanner.core.impl.solver.DefaultSolver.solvingStarted(DefaultSolver.java:195)
at org.optaplanner.core.impl.solver.DefaultSolver.solve(DefaultSolver.java:175)
at ****.services.impl.SolverServiceImpl.lambda$0(SolverServiceImpl.java:169)
I am using the Spring dev tools to auto-reload my webapp upon changes in the source files.
<dependency>
<groupId>org.springframework.boot</groupId>
<artifactId>spring-boot-devtools</artifactId>
<optional>true</optional>
</dependency>
This is the issue. To perform hot reloading, all resources and classes of the project are loaded and watched by Spring's RestartClassLoader, but the libraries (dependencies, e.g. Drools and OptaPlanner) are loaded by the base classloader (in fact the AppClassLoader). Hence the problem.
To fix it, configure Spring dev tools to load the Drools libraries in the RestartClassLoader, together with the project's classes:
using-boot-devtools-customizing-classload
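As a sketch of that customization (the file location comes from the Spring Boot devtools docs; the regex patterns below are assumptions and must be adjusted to the actual jar names), add a META-INF/spring-devtools.properties to the project's resources:
# src/main/resources/META-INF/spring-devtools.properties
# Pull the Drools and OptaPlanner jars into the RestartClassLoader (patterns are assumptions)
restart.include.drools=/drools-[\\w\\d.-]+\\.jar
restart.include.optaplanner=/optaplanner-[\\w\\d.-]+\\.jar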
So my question was not really well named. The Drools working memory is not empty, but it contains objects that are not instanceof my classes, because they were not loaded by the same ClassLoader.
To understand this I used the following Rule:
rule "countProblemFacts"
when
$nLectures : Long() from accumulate($lectures : Lecture(), count( $lectures ))
$nCourses : Long() from accumulate($courses : Course(), count( $courses ))
$nRooms : Long() from accumulate($rooms : Room(), count( $rooms ))
$nPeriods : Long() from accumulate($periods : Period(), count( $periods ))
$nObjects : Long() from accumulate($objects : Object(), count( $objects ))
then
DroolsUtil.log(drools, "Drools working memory");
DroolsUtil.log("Lectures:", $nLectures);
DroolsUtil.log("Courses:", $nCourses);
DroolsUtil.log("Rooms:", $nRooms);
DroolsUtil.log("Periods:", $nPeriods);
DroolsUtil.log("Objects:", $nObjects);
DroolsUtil.log(drools, "Total", ($nLectures + $nCourses + $nRooms + $nPeriods), "objects");
end
$nObjects counts to 12, while all the others count to 0, because the classes are not "the same" across classloaders.
In my app I see everywhere an
org.springframework.boot.devtools.restart.classloader.RestartClassLoader
That's not the default classloader, so there is classloading magic going on. According to your comment, it's not the same classloader as the one used to load the OptaPlanner classes, so you'll need to provide your classloader:
ClassLoader classloader = TimeTable.class.getClassLoader();
... = SolverFactory.createFromXmlResource(".../solverConfig.xml", classloader);
You might need to upgrade to 6.4.0.Beta2; I've fixed a number of advanced classloading issues last month.
The problem should be resolved in Drools 7.23.0.Final. See https://issues.jboss.org/browse/DROOLS-1540.

which jar contains org.apache.spark.sql.api.java.JavaSQLContext

The following dependency is in the pom:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.3.0</version>
</dependency>
I expect the jar to contain the following class:
org.apache.spark.sql.api.java.JavaSQLContext
but while it contains the package org.apache.spark.sql.api.java, all that package appears to contain are interfaces named UDF1 through UDF22.
Which is the correct dependency to get JavaSQLContext?
Thanks.
The JavaSQLContext class has been removed from version 1.3.0 onwards. You should use the org.apache.spark.sql.SQLContext class instead. The documentation states the following:
Prior to Spark 1.3 there were separate Java compatible classes (JavaSQLContext and JavaSchemaRDD) that mirrored the Scala API. In Spark 1.3 the Java API and Scala API have been unified. Users of either language should use SQLContext and DataFrame. In general these classes try to use types that are usable from both languages (i.e. Array instead of language-specific collections). In some cases where no common type exists (e.g., for passing in closures or Maps) function overloading is used instead.
Additionally the Java specific types API has been removed. Users of both Scala and Java should use the classes present in org.apache.spark.sql.types to describe schema programmatically.
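For illustration, a minimal sketch of the replacement API (assuming Spark 1.3.x, a local master, and a placeholder people.json input file):
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

SparkConf conf = new SparkConf().setAppName("sql-example").setMaster("local[*]");
JavaSparkContext sc = new JavaSparkContext(conf);
// The unified SQLContext replaces the removed JavaSQLContext.
SQLContext sqlContext = new SQLContext(sc);

DataFrame people = sqlContext.jsonFile("people.json"); // placeholder input file
people.registerTempTable("people");
DataFrame teenagers = sqlContext.sql("SELECT name FROM people WHERE age BETWEEN 13 AND 19");
teenagers.show();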
As an aside if you want to search which jars contain a specific class you can use the Advanced Search of Maven Central and search "By Classname". So here is the search for JavaSQLContext:
http://search.maven.org/#search|ga|1|fc%3A%22org.apache.spark.sql.api.java.JavaSQLContext%22
From a cursory search, it appears that the class org.apache.spark.sql.api.java.JavaSQLContext only exists in versions 1.2.x and earlier of the spark-sql JAR file. It is likely that the code you are working with was written against this older dependency. You have two choices at this point: you can either update your code, or you can downgrade the spark-sql JAR. You probably want to go with the former option.
If you insist on keeping your code the same, then including the following dependency in your POM should fix the problem:
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_2.10</artifactId>
<version>1.2.2</version>
</dependency>
If you want to upgrade your code, see the answer given by @DB5.
I had the same problem, and it was because I was looking at the wrong version of the documentation.
My understanding from the latest documentation - https://spark.apache.org/docs/latest/sql-programming-guide.html#loading-data-programmatically - is to use something like this (copied from the doc):
SQLContext sqlContext = null; // obtain from your SparkContext, e.g. new SQLContext(sc)
DataFrame schemaPeople = null; // the DataFrame from the previous example
// DataFrames can be saved as Parquet files, maintaining the schema information.
schemaPeople.write().parquet("people.parquet");
// Read in the Parquet file created above. Parquet files are self-describing so the schema is preserved.
// The result of loading a parquet file is also a DataFrame.
DataFrame parquetFile = sqlContext.read().parquet("people.parquet");
// Parquet files can also be registered as tables and then used in SQL statements.
parquetFile.registerTempTable("parquetFile");
DataFrame teenagers = sqlContext.sql("SELECT name FROM parquetFile WHERE age >= 13 AND age <= 19");
List<String> teenagerNames = teenagers.javaRDD().map(new Function<Row, String>() {
public String call(Row row) {
return "Name: " + row.getString(0);
}
}).collect();

sparkR 1.4.0 : how to include jars

I'm trying to hook SparkR 1.4.0 up to Elasticsearch using the elasticsearch-hadoop-2.1.0.rc1.jar file (found here). It requires a bit of hacking together, calling the SparkR:::callJMethod function. I need to get a jobj R object for a couple of Java classes. For some of the classes, this works:
SparkR:::callJStatic('java.lang.Class',
'forName',
'org.apache.hadoop.io.NullWritable')
But for others, it does not:
SparkR:::callJStatic('java.lang.Class',
'forName',
'org.elasticsearch.hadoop.mr.LinkedMapWritable')
Yielding the error:
java.lang.ClassNotFoundException:org.elasticsearch.hadoop.mr.EsInputFormat
It seems like Java isn't finding the org.elasticsearch.* classes, even though I've tried including them with the command line --jars argument, and the sparkR.init(sparkJars = ...) function.
Any help would be greatly appreciated. Also, if this is a question that more appropriately belongs on the actual SparkR issue tracker, could someone please point me to it? I looked and was not able to find it. Also, if someone knows an alternative way to hook SparkR up to Elasticsearch, I'd be happy to hear that as well.
Thanks!
Ben
Here's how I've achieved it:
# environments, packages, etc ----
Sys.setenv(SPARK_HOME = "/applications/spark-1.4.1")
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
# connecting Elasticsearch to Spark via ES-Hadoop-2.1 ----
spark_context <- sparkR.init(master = "local[2]", sparkPackages = "org.elasticsearch:elasticsearch-spark_2.10:2.1.0")
spark_sql_context <- sparkRSQL.init(spark_context)
spark_es <- read.df(spark_sql_context, path = "index/type", source = "org.elasticsearch.spark.sql")
printSchema(spark_es)
(Spark 1.4.1, Elasticsearch 1.5.1, ES-Hadoop 2.1 on OS X Yosemite)
The key idea is to link to the ES-Hadoop package rather than the jar file, and to use it to create a Spark SQL context directly.
