eye.candy.sixties not found? (chart themes)

When I try to run my report, I'm getting this exception:
Chart theme 'eye.candy.sixties' not found.
net.sf.jasperreports.engine.JRRuntimeException: Chart theme 'eye.candy.sixties' not found.
Sure enough, I couldn't find the theme defined anywhere in jasper-4.0.2.jar. What library do I need to get the default iReport chart themes?

I had this problem with charts using the 'aegean' theme in a web application.
I copied the jasperreports-chart-themes-4.x.x.jar, e.g.
jasperreports-server-cp-4.0.0/ireport/ireport/modules/ext/jasperreports-chart-themes-4.0.0.jar
into my WEB-INF/lib and the charts worked.

With Maven, you can pull in the chart themes (and, if you need them, the fonts) directly:
<dependency>
    <groupId>net.sf.jasperreports</groupId>
    <artifactId>jasperreports-chart-themes</artifactId>
    <version>${jasperReport.version}</version>
</dependency>
<dependency>
    <groupId>net.sf.jasperreports</groupId>
    <artifactId>jasperreports-fonts</artifactId>
    <version>${jasperReport.version}</version>
</dependency>

You would have to build a project and a jar with the themes manually. There doesn't seem to be an easy library you could just include.
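
Whichever route you take (copying the jar, the Maven artifact, or building your own), the theme only resolves if the jar's extension descriptor is visible at runtime. A minimal sketch to sanity-check that from Java; the resource name is the standard JasperReports extension descriptor, the rest is plain classpath lookup:

import java.net.URL;

// The chart-themes jar registers its themes via a
// jasperreports_extension.properties descriptor; if this lookup
// returns null, no JasperReports extension jar is on the classpath.
// Note: several JasperReports jars ship this descriptor, so also
// check that the returned URL points at the chart-themes jar.
URL descriptor = Thread.currentThread().getContextClassLoader()
        .getResource("jasperreports_extension.properties");
System.out.println(descriptor != null
        ? "Extension descriptor found at " + descriptor
        : "No extension descriptor on the classpath");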

Related

How to use Evaluators in Java to score on a PMML using org.apache.spark?

I've implemented code for scoring a provided PMML file against a CSV data file (linear regression) using Spark and Java. For this I've used the jpmml-evaluator-spark and spark-mllib_2.11 Maven artifacts, and it works fine.
Now I'm looking to replace the jpmml-evaluator-spark library, which is AGPL licensed, with something similar, maybe bundled within org.apache.spark (or any other fully open-source option).
I don't see evaluators for scoring against a PMML model in the org.apache.spark group of dependencies. Please confirm whether this is correct, and suggest an alternative.
https://github.com/jpmml/jpmml-evaluator-spark
This is the PMML evaluator library for the Apache Spark cluster computing system (http://spark.apache.org/) and is AGPL.
Also refer to: http://spark.apache.org/docs/latest/ml-guide.html
These suggest that what ships with Apache Spark covers algorithms, model creation, and training, but scoring against the model is not available there; that capability and its dependencies live in jpmml-evaluator-spark only.
import org.apache.spark.ml.Transformer;
import org.apache.spark.sql.Dataset;
import org.jpmml.evaluator.Evaluator;
import org.jpmml.evaluator.EvaluatorBuilder;
import org.jpmml.evaluator.LoadingModelEvaluatorBuilder;
import org.jpmml.evaluator.spark.TransformerBuilder;
// DefaultVisitorBattery was missing from the original imports; verify the
// exact package against your jpmml-evaluator version.
import org.jpmml.evaluator.visitors.DefaultVisitorBattery;
...
// Build and verify the evaluator from the PMML stream
EvaluatorBuilder evaluatorBuilder = new LoadingModelEvaluatorBuilder()
        .setLocatable(false)
        .setVisitors(new DefaultVisitorBattery())
        .load(pmmlInputStream);
Evaluator evaluator = evaluatorBuilder.build();
evaluator.verify();

// Wrap the evaluator as a Spark ML Transformer and score the dataset
TransformerBuilder pmmlTransformerBuilder = new TransformerBuilder(evaluator)
        .withLabelCol("Predicted_SpeciesCategory")
        .exploded(true);
Transformer pmmlTransformer = pmmlTransformerBuilder.build();
Dataset<?> resultDataset = pmmlTransformer.transform(csvDataset);
...
Maven dependencies:
<dependency>
    <groupId>org.jpmml</groupId>
    <artifactId>jpmml-evaluator-spark</artifactId>
    <version>1.2.2</version>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_2.11</artifactId>
    <version>2.4.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-mllib_2.11</artifactId>
    <version>2.4.3</version>
    <scope>provided</scope>
</dependency>
<dependency>
    <groupId>org.jpmml</groupId>
    <artifactId>jpmml-sparkml</artifactId>
    <version>1.5.4</version>
</dependency>
This code still depends on the org.jpmml library, which I wish to remove. I'm looking for an alternative using the org.apache.spark library to achieve similar results.
You could use PMML4S-Spark to evaluate a PMML model against Spark, for example:
import org.pmml4s.spark.ScoreModel
val model = ScoreModel.fromInputStream(pmmlInputStream)
val resultDataset = model.transform(csvDataset)
If you want to use PMML4S-Spark in Java, it's just as easy and similar to the Scala version, for example:
import org.pmml4s.spark.ScoreModel;
import org.apache.spark.sql.Dataset;
ScoreModel model = ScoreModel.fromInputStream(pmmlInputStream);
Dataset<?> resultDataset = model.transform(csvDataset);
BTW, PMML4S-Spark is licensed under the Apache License 2.0.
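
For completeness, here's a minimal, hypothetical sketch of how csvDataset could be built for the Java snippet above; the SparkSession setup and file path are assumptions, not part of the original answer:

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

SparkSession spark = SparkSession.builder()
        .appName("pmml-scoring")
        .master("local[*]")  // assumption: local test run
        .getOrCreate();

// Hypothetical CSV with a header row; column names must match the
// PMML model's input fields.
Dataset<Row> csvDataset = spark.read()
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("data/iris.csv");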
My answer might be tangential to your question, but since I faced an issue and kept coming back to this question, I don't want others to face it. I eventually reached the solution by going through many Stack Overflow posts.
The problem
I was using Spark in Java with the old Maven dependency:
<dependency>
    <groupId>org.jpmml</groupId>
    <artifactId>pmml-evaluator-metro</artifactId>
    <version>1.6.3</version>
</dependency>
I thought that was correct and would work, but it does not include TransformerBuilder.
The dependency below should solve your problem if it is related to TransformerBuilder:
<dependency>
    <groupId>org.jpmml</groupId>
    <artifactId>jpmml-evaluator-spark</artifactId>
    <version>1.3.0</version>
</dependency>
That was it. You're welcome in advance 😉

How to add a feature in Nitrogen OpenDaylight?

I am trying to add some features to my OpenDaylight project (e.g. l2switch, dlux, rest, ...).
I used to edit features.xml and pom.xml to add these features in the Carbon release. I am currently using the Nitrogen release, and after adding these dependencies in my features pom.xml file, I am still unable to see the features when I log in to my Karaf (using feature:install/list).
<dependency>
    <groupId>org.opendaylight.netconf</groupId>
    <artifactId>features-restconf</artifactId>
    <classifier>features</classifier>
    <version>${restconf.version}</version>
    <type>xml</type>
    <scope>runtime</scope>
</dependency>
<dependency>
    <groupId>org.opendaylight.dluxapps</groupId>
    <artifactId>features-dluxapps</artifactId>
    <classifier>features</classifier>
    <version>${dluxapps.version}</version>
    <type>xml</type>
    <scope>runtime</scope>
</dependency>
Am I missing something else? When I try to add repositories, as I previously did in the Carbon release, the features.xml is automatically regenerated and all my edits are removed.
I am using the Nitrogen release, specified with -DarchetypeVersion=1.4.0 when generating my Maven artifact.
See the upstream configuration management tooling for running-code examples that are used constantly in downstreams like OPNFV.
# Configuration of Karaf features to install
file { 'org.apache.karaf.features.cfg':
  ensure => file,
  path   => '/opt/opendaylight/etc/org.apache.karaf.features.cfg',
  # Set user:group owners
  owner  => 'odl',
  group  => 'odl',
}

$features_csv = join($opendaylight::features, ',')
file_line { 'featuresBoot':
  path  => '/opt/opendaylight/etc/org.apache.karaf.features.cfg',
  line  => "featuresBoot=${features_csv}",
  match => '^featuresBoot=.*$',
}
puppet-opendaylight, manifests/config.pp, stable/nitrogen
So basically you shouldn't be editing the XML directly, you should edit the configuration that generates the XML. I'm surprised that worked in Carbon.
I recommend directly using upstream configuration management tooling, like puppet-opendaylight or ansible-opendaylight, rather than trying to figure out the configuration knobs yourself and duplicating effort. If you're doing a more complex deployment, look at the OPNFV installer scenarios (which build on these ODL tools) rather than trying to solve that very hard problem yourself.
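
For reference, the line that Puppet snippet manages ends up looking like this in /opt/opendaylight/etc/org.apache.karaf.features.cfg (the feature names here are only illustrative):

featuresBoot=odl-restconf,odl-l2switch-switch,odl-dlux-core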

Flink JDBCInputFormat cannot find method 'setRowTypeInfo'

I want to use flink-jdbc to get data from MySQL.
I have seen an example on the Apache Flink website:
// Read data from a relational database using the JDBC input format
DataSet<Tuple2<String, Integer>> dbData =
    env.createInput(
        JDBCInputFormat.buildJDBCInputFormat()
            .setDrivername("org.apache.derby.jdbc.EmbeddedDriver")
            .setDBUrl("jdbc:derby:memory:persons")
            .setQuery("select name, age from persons")
            .setRowTypeInfo(new RowTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO))
            .finish()
    );
But when I try to write a demo, I can't find the method setRowTypeInfo. My code looks like this:
import org.apache.flink.api.common.typeinfo.BasicTypeInfo
import org.apache.flink.api.java.ExecutionEnvironment
import org.apache.flink.api.java.io.jdbc.JDBCInputFormat
// RowTypeInfo was missing from the original imports; "new Nothing(...)"
// below was an auto-conversion artifact and is restored here.
import org.apache.flink.api.java.typeutils.RowTypeInfo
import org.apache.flink.api.scala._

/**
 * Created by lulijun on 17/7/7.
 */
object FlinkJDBC {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.createLocalEnvironment()
    val dbData = env.createInput(
      JDBCInputFormat.buildJDBCInputFormat()
        .setDrivername("com.mysql.jdbc.Driver")
        .setDBUrl("XXX")
        .setUsername("xxx")
        .setPassword("XXX")
        .setQuery("select name, age from persons")
        .setRowTypeInfo(new RowTypeInfo(BasicTypeInfo.STRING_TYPE_INFO, BasicTypeInfo.INT_TYPE_INFO))
        .finish())
    dbData.print() // note: print() already triggers execution in the DataSet API
  }
}
The "setRowTypeInfo" method is always red, and the IDEA prompts
"cannot resolve symbol setRowTypeInfo"
The jar version of flink-jdbc i used is 1.0.0.
<dependencies>
    <!-- Use this dependency if you are using the DataSet API -->
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala_2.10</artifactId>
        <version>1.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients_2.10</artifactId>
        <version>1.3.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-jdbc</artifactId>
        <version>1.0.0</version>
    </dependency>
    <dependency>
        <groupId>mysql</groupId>
        <artifactId>mysql-connector-java</artifactId>
        <version>5.1.36</version>
    </dependency>
</dependencies>
I have searched a lot, and most people use the method exactly as in the official documentation, but no one mentioned this problem.
I suspect I used the wrong version of flink-jdbc, but I cannot find any information about the right way to use it.
If you know the problem, please teach me. Thank you.
I changed the flink-jdbc version from 1.0.0 to 1.3.0 and the problem was solved.
But when I searched for flink-jdbc on the Maven website (https://mvnrepository.com/search?q=flink-jdbc), I couldn't find the right information in the first few pages, which made me think the flink-jdbc version did not need to match the other Flink jars.
But the truth is that flink-jdbc 1.1.3 uses the RowTypeInfo class from the api.table package, while flink-jdbc 1.3.0 uses the one from api.java; the versions are closely tied.
We must make sure the versions match.
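
In other words, the fix is just aligning flink-jdbc with the other Flink artifacts in the pom:

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-jdbc</artifactId>
    <version>1.3.0</version>
</dependency>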

source code for kafka_2.9.2, version 0.8.2.1

How do I compile the source code for the following dependency?
Also, where can I get the source code for it? Please provide a link. I want to compile the source code after adding a few log statements.
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.9.2</artifactId>
    <version>0.8.2.1</version>
</dependency>
Also, when I add this dependency to my code, I get many other dependency jars as well, such as:
scala-library-2.9.2.jar
metrics-core-2.2.0.jar
kafka_2.9.2-0.8.2.1.jar
snappy-java.jar
zookeeper.jar
I understand they are required, but I don't see these jars in the lib folder of kafka_2.9.2-0.8.2.1 when I unzip it.
Any idea where I am going wrong?
The easiest way to get the source code should be to fork the git repository: https://github.com/apache/kafka/tree/0.8.2
Have a look at the README file -- it explains how to compile Kafka.
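
As an aside, if you only need to read the code (not rebuild it), Maven can usually fetch the published source jar; a hedged sketch using the sources classifier, assuming the artifact publishes one:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.9.2</artifactId>
    <version>0.8.2.1</version>
    <classifier>sources</classifier>
</dependency>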

Sass variable arguments syntax not working in wro4j's rubySassCss

I'm implementing Sass support in our Java application. As we already have wro4j (in the newest version, 1.7.5), I decided on using its rubySassCss as a pre-processor. I got it all configured and the whole thing processes .scss files alright, until I use some of the newer syntax.
I get errors when using features that were introduced in August 2012 with version 3.2.0 of Sass (http://sass-lang.com/documentation/file.SASS_CHANGELOG.html), namely variable arguments (the "$args..." syntax) and content blocks in mixin declarations.
@mixin mix($arg...) {
  font-size: 12px;
}
.class {
  color: black;
}
For example, the above simple .scss file throws the following when processed by rubySassCss:
2014-06-13 11:13:48,574 DEBUG [ro.isdc.wro.http.WroFilter] Exception occured
ro.isdc.wro.WroRuntimeException: org.jruby.embed.EvalFailedException: (SyntaxError) Invalid CSS after "@mixin mix($arg": expected ")", was "...) {"
at ro.isdc.wro.extensions.processor.support.sass.RubySassEngine.process(RubySassEngine.java:70)
at ro.isdc.wro.extensions.processor.css.RubySassCssProcessor.process(RubySassCssProcessor.java:59)
at ro.isdc.wro.model.resource.processor.decorator.ProcessorDecorator.process(ProcessorDecorator.java:89)
at ro.isdc.wro.model.resource.processor.decorator.LazyProcessorDecorator.process(LazyProcessorDecorator.java:49)
at ro.isdc.wro.model.resource.processor.decorator.ProcessorDecorator.process(ProcessorDecorator.java:89)
at ro.isdc.wro.model.resource.processor.decorator.ProcessorDecorator.process(ProcessorDecorator.java:89)
That wouldn't surprise me that much, if not for the fact that the wro4j release notes (https://code.google.com/p/wro4j/wiki/ReleaseNotes) clearly state that they implemented the Sass 3.2.1 processor a long time ago, in September 2012:
Release 1.5.0
Date: 27 Sep 2012
(...)
Issue523 Upgrade rubySassCss processor to 3.2.1
Can anybody tell me if they have the same problem, or know what might cause it? I'd really like to take advantage of the variable arguments syntax.
Here's my pom.xml as well:
<dependency>
    <groupId>ro.isdc.wro4j</groupId>
    <artifactId>wro4j-core</artifactId>
    <version>1.7.5</version>
</dependency>
<dependency>
    <groupId>ro.isdc.wro4j</groupId>
    <artifactId>wro4j-extensions</artifactId>
    <version>1.7.5</version>
</dependency>
Looks like you just need to force sass-gems to version 3.2.1. wro4j should do that itself, but it seems it didn't (it pulled in 3.1.9 instead). Anyway, the solution is to add this anywhere in your pom.xml:
<dependency>
    <groupId>me.n4u.sass</groupId>
    <artifactId>sass-gems</artifactId>
    <version>3.2.1</version>
</dependency>
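
Once the 3.2.1 gem is forced, the variable-arguments syntax from the question compiles; a small illustrative example (the names are made up):

// Variable arguments: pass any number of shadows through to one property
@mixin shadow($shadows...) {
  box-shadow: $shadows;
}
.card {
  @include shadow(0 1px 2px #999, 0 0 4px #ccc);
}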
