FHIR build fails with NoSuchMethodError: net.sf.saxon.Configuration.newConfiguration() - hl7-fhir

I'm following the instructions at http://wiki.hl7.org/index.php?title=FHIR_Build_Process, but my FHIR build is failing. I modified publish.bat to ensure it uses the correct JDK. I'm running it on a Windows 7 64-bit machine with JDK 1.6 (I also tried JDK 1.7); both fail with the same error.
Looks like some Saxon JAR hell somewhere. Any ideas?
...validate v2-tables 441sec 755MB
...validate v3-codesystems 443sec 889MB
Reference Platform Validation. 447sec 1067MB
...test adversereaction-example 447sec 1067MB
Exception in thread "main" java.lang.NoSuchMethodError: net.sf.saxon.Configuration.newConfiguration()Lnet/sf/saxon/Configuration;
at net.sf.saxon.xpath.XPathFactoryImpl.<init>(XPathFactoryImpl.java:33)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at java.lang.Class.newInstance0(Class.java:355)
at java.lang.Class.newInstance(Class.java:308)
at javax.xml.xpath.XPathFactoryFinder.loadFromService(XPathFactoryFinder.java:401)
at javax.xml.xpath.XPathFactoryFinder._newFactory(XPathFactoryFinder.java:222)
at javax.xml.xpath.XPathFactoryFinder.newFactory(XPathFactoryFinder.java:143)
at javax.xml.xpath.XPathFactory.newInstance(XPathFactory.java:185)
at javax.xml.xpath.XPathFactory.newInstance(XPathFactory.java:99)
at org.hl7.fhir.tools.publisher.Publisher.testSearchParameters(Publisher.java:2796)
at org.hl7.fhir.tools.publisher.Publisher.testSearchParameters(Publisher.java:2785)
at org.hl7.fhir.tools.publisher.Publisher.validateRoundTrip(Publisher.java:2759)
at org.hl7.fhir.tools.publisher.Publisher.validateXml(Publisher.java:2656)
at org.hl7.fhir.tools.publisher.Publisher.execute(Publisher.java:378)
at org.hl7.fhir.tools.publisher.Publisher.main(Publisher.java:281)

A workaround: do a fresh build of the publisher tools jar from source.
Following the instructions in build/buildhowto.txt, I was able to build the tools jar inside Eclipse, run the Publisher successfully from inside Eclipse, and then export it as a fresh tools jar, overwriting the one I pulled from SVN. The freshly built jar then ran to completion from the command line.
It could be that there's simply a problem with the version of the tools jar currently in SVN.
For the record, I am working with version 0.12-1953.

You have two copies of the class net.sf.saxon.Configuration on your classpath: one containing the method newConfiguration() and one without it.
The calling code was probably compiled against Saxon-HE 9.x, but at runtime the class net.sf.saxon.Configuration is being loaded from a Saxon 8.x JAR, which lacks that method. The class also exists inside Saxon-HE 9.x, where it does have newConfiguration().
So check your dependencies to see whether Saxon 8.x is being pulled in, and try replacing it with Saxon-HE 9.x; that should solve your problem.
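If it isn't obvious which JAR wins at runtime, a quick probe like this (a minimal sketch; it only uses the class name from the stack trace) prints the JAR that actually supplies the class:

public class SaxonProbe {
    // Run with the same classpath the failing build uses; the printed
    // URL is the JAR that actually supplies net.sf.saxon.Configuration.
    public static void main(String[] args) throws Exception {
        Class<?> c = Class.forName("net.sf.saxon.Configuration");
        System.out.println(c.getProtectionDomain().getCodeSource().getLocation());
    }
}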

Related

Quarkus xml Parser DocumentBuilderFactory cannot be found, but only when using quarkus-run.jar

When packaging our app with mvn package everything works fine. But when we start the app with java -jar target\quarkus-app\quarkus-run.jar, it silently crashes. While debugging we found that it crashes while parsing an XML InputStream, during initialisation of some classes.
This is the stacktrace that we had to dig out ourselves:
Exception occurred in target VM: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
javax.xml.parsers.FactoryConfigurationError: Provider for javax.xml.parsers.DocumentBuilderFactory cannot be found
at javax.xml.parsers.DocumentBuilderFactory.newInstance(Unknown Source)
at org.optaplanner.core.impl.io.jaxb.GenericJaxbIO.parseXml(GenericJaxbIO.java:209)
at org.optaplanner.core.impl.io.jaxb.SolverConfigIO.read(SolverConfigIO.java:15)
at org.optaplanner.core.config.solver.SolverConfig.createFromXmlReader(SolverConfig.java:199)
at org.optaplanner.core.config.solver.SolverConfig.createFromXmlInputStream(SolverConfig.java:173)
at org.optaplanner.core.config.solver.SolverConfig.createFromXmlInputStream(SolverConfig.java:160)
When packaging the app as an uber-jar this problem does not occur, and the same goes for dev mode.
We use graalvm-ce-java17-22.2.0, together with Quarkus 2.11.2.Final and OptaPlanner 8.29.0.Final.
We verified that there aren't any XML exclusions in the dependencies. We also checked that Quarkus and the Quarkus Maven plugin are of the same version, and we looked into the compiled JAR files to confirm that the XML we want to read is present; if it weren't, the code would crash even earlier. The class javax.xml.parsers.DocumentBuilderFactory is not listed in quarkus-app-dependencies.txt.
Adding the quarkus-optaplanner extension helped to identify the logger issue, so the problem of the silent crash is resolved. Adding quarkus-jaxp to the dependencies gets rid of the FactoryConfigurationError, and everything works as expected.
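For reference, the dependency to add looks roughly like this (a sketch; quarkus-jaxp is managed by the Quarkus BOM, so no explicit version should be needed):

<!-- Supplies a javax.xml.parsers.DocumentBuilderFactory provider at runtime -->
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jaxp</artifactId>
</dependency>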

macOS: how to install JavaFX properly to run mqtt-spy?

I'm trying to run mqtt-spy-1.0.0.jar on my macOS 10.14 system but it won't start, returning the following error message:
According to the developers, this problem is caused if JavaFX is missing on the system.
The latest version of the Oracle JDK is installed on my system, as can be seen below; however, I'm aware that Oracle has removed JavaFX from the JDK in v11.
So I downloaded JavaFX from GluonHQ and followed their instructions on how to get started.
Despite having both required variables set correctly in ~/.bash_profile, mqtt-spy-1.0.0.jar is still returning the error message shown on the first screenshot ...
What else do I need to do or what do I need to do differently to run mqtt-spy?
There is already an issue filed about this, but not a solution.
I haven't really tried to get it fully working, but these are the required steps to run a JAR on Java 11 that requires JavaFX 11 but doesn't bundle it:
Go to the OpenJFX docs and read about how to get started with JavaFX 11.
Download JavaFX 11 for your platform from here and unzip it.
Provided that you have Java 11 installed and set as JAVA_HOME:
With mqtt-spy-1.0.0.jar (the latest release), you can run:
java --module-path /path-to/javafx-sdk-11.0.1/lib \
--add-modules javafx.controls,javafx.fxml -jar mqtt-spy-1.0.0.jar
After you run this, you will get this exception:
Exception in Application start method
java.lang.reflect.InvocationTargetException
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
...
Caused by: java.lang.ClassNotFoundException: javax.xml.bind.JAXBException
at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:583)
Since Java 9, JAXB is not part of the JDK either. So you can try downloading the dependency from here and adding it to the classpath. But this will take a few iterations (there are several other required JARs; see this).
So why not use the latest snapshot available, which bundles its dependencies: mqtt-spy-1.0.1-beta-b18-jar-with-dependencies.jar.
With this:
java --module-path /path-to/javafx-sdk-11.0.1/lib \
--add-modules javafx.controls,javafx.fxml -jar mqtt-spy-1.0.1-beta-b18-jar-with-dependencies.jar
I get:
Warning: this doesn't mean that the app will fully work. Given that it is a Java 8 app, there are things that have changed in JavaFX 11, mainly related to the control skins. If the app was using private API (com.sun.javafx....), that won't work now, because either it has been moved to public packages or it is not accessible from the modules. For the latter you can use --add-opens, but for the former there is no solution other than updating the app's dependencies to Java 9+.
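For the --add-opens case, the invocation would look roughly like this (a sketch: com.sun.javafx.scene.control is only an illustration; the package to open depends on which internals the app actually touches):

java --module-path /path-to/javafx-sdk-11.0.1/lib \
--add-modules javafx.controls,javafx.fxml \
--add-opens javafx.controls/com.sun.javafx.scene.control=ALL-UNNAMED \
-jar mqtt-spy-1.0.1-beta-b18-jar-with-dependencies.jar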

java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics

I'm trying to run a simple Spark-to-S3 app from a server, but I keep getting the error below because the server has Hadoop 2.7.3 installed, and it looks like 2.7.3 doesn't include the StorageStatistics class. I have Hadoop 2.8.x defined in my pom.xml file, but I'm testing by running the app locally.
How can I make it stop searching for that class, or what workaround options are there to include it if I have to stick with Hadoop 2.7.3?
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.datasources.DataSource.hasMetadata(DataSource.scala:301)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:344)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:152)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:441)
at org.apache.spark.sql.DataFrameReader.parquet(DataFrameReader.scala:425)
at com.ibm.cos.jdbc2DF$.main(jdbc2DF.scala:153)
at com.ibm.cos.jdbc2DF.main(jdbc2DF.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:738)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StorageStatistics
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 28 more
You can't mix bits of Hadoop and expect things to work. It's not just the close coupling between internal classes in hadoop-common and hadoop-aws; it's things like the specific version of the Amazon AWS SDK that the hadoop-aws module was built with.
If you get ClassNotFoundException or MethodNotFoundException stack traces when trying to work with s3a:// URLs, a JAR version mismatch is the likely cause.
Using the RFC 2119 MUST/SHOULD/MAY terminology, here are the rules to avoid this situation:
The s3a connector is in the hadoop-aws JAR; it depends on hadoop-common and the shaded AWS SDK JAR.
All of these JARs MUST be on the classpath.
All versions of the hadoop-* JARs on your classpath MUST be exactly the same version, e.g. 3.3.1 everywhere, or 3.2.2. Otherwise: stack trace. Always.
And they MUST be exclusively of that version; there MUST NOT be multiple versions of hadoop-common, hadoop-aws, etc. on the classpath. Otherwise: stack trace. Always. Usually a ClassNotFoundException indicating a mismatch between hadoop-common and hadoop-aws.
The exact missing class varies across Hadoop releases: it's the first class depended on by org.apache.hadoop.fs.s3a.S3AFileSystem which the classloader can't find; the exact class depends on which JARs are mismatched.
The AWS SDK JAR SHOULD be the huge aws-java-sdk-bundle JAR, unless you know exactly which bits of the AWS SDK stack you need and are confident all transitive dependencies (jackson, httpclient, ...) are in your Spark distribution and compatible. Otherwise: missing classes or odd runtime issues.
There MUST NOT be any other AWS SDK JARs on your classpath. Otherwise: duplicate classes and general classpath problems.
The AWS SDK version SHOULD be the one shipped. Otherwise: maybe a stack trace, maybe not. Either way, you are in self-support mode, or have opted to join a QE team for version testing.
The specific version of the AWS SDK you need can be determined from the Maven repository.
Changing the AWS SDK version MAY work. You get to test, and if there are compatibility problems, you get to fix them. See Qualifying an AWS SDK Update for the least you should be doing.
You SHOULD use the most recent version of Hadoop you can, and one that Spark is tested with. Non-critical bug fixes do not get backported to old Hadoop releases, and the S3A and ABFS connectors are rapidly evolving. New releases will generally be better, stronger, faster.
If none of this works, a bug report filed on the ASF JIRA server will get closed as WORKSFORME; configuration issues aren't treated as code bugs.
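A quick way to audit these rules (a sketch, assuming a standard Spark layout where the bundled JARs live under $SPARK_HOME/jars):

# Every version suffix printed here should be identical.
ls $SPARK_HOME/jars/hadoop-*.jar
# Any AWS SDK JAR beyond a single aws-java-sdk-bundle is a red flag.
ls $SPARK_HOME/jars/*aws*.jar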
Finally: the ASF documentation: The S3A Connector.
Note: that link is to the latest release. If you are using an older release it will lack features. Upgrade before complaining that the s3a connector doesn't do what the documentation says it does.
I found stevel's answer above to be extremely helpful. His information inspired my write-up here. I will copy the relevant parts below. My answer is tailored to a Python/Windows context, but I suspect most points are still relevant in a JVM/Linux context.
Dependencies
This answer is intended for Python developers, so it assumes we will install Apache Spark indirectly via pip. When pip installs PySpark, it collects most dependencies automatically, as seen in .venv/Lib/site-packages/pyspark/jars. However, to enable the S3A Connector, we must track down the following dependencies manually:
JAR file: hadoop-aws
JAR file: aws-java-sdk-bundle
Executable: winutils.exe (and hadoop.dll) <-- Only needed in Windows
Constraints
Assuming we're installing Spark via pip, we can't pick the Hadoop version directly. We can only pick the PySpark version, e.g. pip install pyspark==3.1.3, which will indirectly determine the Hadoop version. For example, PySpark 3.1.3 maps to Hadoop 3.2.0.
All Hadoop JARs must have the exact same version, e.g. 3.2.0. Verify this with cd pyspark/jars && ls -l | grep hadoop. Notice that pip install pyspark automatically included some Hadoop JARs. Thus, if these Hadoop JARs are 3.2.0, then we should download hadoop-aws:3.2.0 to match.
winutils.exe must have the exact same version as Hadoop, e.g. 3.2.0. Beware, winutils releases are scarce. Thus, we must carefully pick our PySpark/Hadoop version such that a matching winutils version exists. Some PySpark/Hadoop versions do not have a corresponding winutils release, thus they cannot be used on Windows.
aws-java-sdk-bundle must be compatible with our hadoop-aws choice above. For example, hadoop-aws:3.2.0 depends on aws-java-sdk-bundle:1.11.375, which can be verified here.
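To double-check which Hadoop version your installed PySpark actually bundles, something like this works (a sketch; it reaches through PySpark's internal JVM gateway, which is not public API but works in practice):

import pyspark

spark = pyspark.sql.SparkSession.builder.master('local[*]').getOrCreate()
# VersionInfo reports the version of the bundled Hadoop JARs,
# e.g. '3.2.0' for PySpark 3.1.3.
print(spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion())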
Instructions
With the above constraints in mind, here is a reliable algorithm for installing PySpark with S3A support on Windows:
Find the latest available version of winutils.exe here. At the time of writing, it is 3.2.0. Place it at C:/hadoop/bin. Set the environment variable HADOOP_HOME to C:/hadoop and (important!) add %HADOOP_HOME%/bin to PATH.
Find the latest available version of PySpark that uses a Hadoop version equal to the above, e.g. 3.2.0. This can be determined by browsing PySpark's pom.xml file across each release tag. At the time of writing, it is 3.1.3.
Find the version of aws-java-sdk-bundle that hadoop-aws requires. For example, if we're using hadoop-aws:3.2.0, then we can use this page. At the time of writing, it is 1.11.375.
Create a venv and install the PySpark version from step 2.
python -m venv .venv
source .venv/Scripts/activate
pip install pyspark==3.1.3
Download the AWS JARs into PySpark's JAR directory:
cd .venv/Lib/site-packages/pyspark/jars
ls -l | grep hadoop
curl -O https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.2.0/hadoop-aws-3.2.0.jar
curl -O https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.375/aws-java-sdk-bundle-1.11.375.jar
Download winutils:
cd C:/hadoop/bin
curl -O https://raw.githubusercontent.com/cdarlint/winutils/master/hadoop-3.2.0/bin/winutils.exe
curl -O https://raw.githubusercontent.com/cdarlint/winutils/master/hadoop-3.2.0/bin/hadoop.dll
Testing
To verify your setup, try running the following script.
import pyspark

spark = (pyspark.sql.SparkSession.builder
         .appName('my_app')
         .master('local[*]')
         .config('spark.hadoop.fs.s3a.access.key', 'secret')
         .config('spark.hadoop.fs.s3a.secret.key', 'secret')
         .getOrCreate())

# Test reading from S3.
df = spark.read.csv('s3a://my-bucket/path/to/input/file.csv')
print(df.head(3))

# Test writing to S3.
df.write.csv('s3a://my-bucket/path/to/output')
You'll need to substitute your AWS keys and S3 paths accordingly.
If you recently updated your OS environment variables, e.g. HADOOP_HOME and PATH, you might need to close and re-open VSCode for the change to take effect.

Trouble installing Kyoto Tycoon - Java. Maven failing

I've downloaded Kyoto Tycoon via:
https://bitbucket.org/EP/kyototycoon-java
When running mvn install I get plenty of warnings telling me it couldn't find several files:
http://pastebin.com/znpJ3d5n
When I first started running the install I was getting a lot of failures and no errors. After blindly going around and trying to install things separately, the output now looks like this. I have no experience with Maven, so editing the pom.xml file is out of the question. I've tried skipping the tests and compiling, and I get a few JAR files. This allows me to compile Example.java using:
javac -cp .:target/kyototycoon-0.2-SNAPSHOT.jar Example.java
I then try to run the code using:
java -cp .:target/kyototycoon-0.2-SNAPSHOT.jar Example
but I get a runtime error:
Exception in thread "main" java.lang.NoClassDefFoundError: com/twitter/finagle/Codec
at kyototycoon.SimpleKyotoTycoonClient.<init>(SimpleKyotoTycoonClient.java:16)
at Example.main(Example.java:11)
Caused by: java.lang.ClassNotFoundException: com.twitter.finagle.Codec
at java.net.URLClassLoader$1.run(URLClassLoader.java:217)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:205)
at java.lang.ClassLoader.loadClass(ClassLoader.java:321)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)
at java.lang.ClassLoader.loadClass(ClassLoader.java:266)
... 2 more
I'm assuming this all comes back to the Maven install failing, but I'm not sure how to fix it.
Any direction is appreciated. There doesn't seem to be a large support group for Kyoto.
For anyone experiencing this same issue: I found that the JAR files I needed were being created; they were being stored in ~/.m2/repository/kyototycoon/kytotycoon/. I created a disgusting-looking classpath that included all of these JAR files, and this allowed me to compile AND run the Example.java file.
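Something like this reproduces that approach (a sketch; the find-based classpath is just one way to glue all the repository JARs together):

# Build a classpath from every JAR under the local Maven repository
# plus the project's own snapshot JAR, then compile and run.
CP=".:target/kyototycoon-0.2-SNAPSHOT.jar:$(find ~/.m2/repository -name '*.jar' | tr '\n' ':')"
javac -cp "$CP" Example.java
java -cp "$CP" Example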
If "editing the pom file is out of the question" I'd strongly recommend steering clear of Maven.
In any case, you can install local JARs into your repository if the artifact isn't available from any remote repository.
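The standard command looks like this (a sketch; the file name and coordinates here are hypothetical and should match whatever the pom expects):

mvn install:install-file -Dfile=finagle-core.jar \
    -DgroupId=com.twitter -DartifactId=finagle-core \
    -Dversion=1.0 -Dpackaging=jar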

Is SpringSource Tool Suite 2.6 Grails support broken?

I have recently updated my STS from 2.5.2 to 2.6. Since then, each Grails project shows an error in the conf/spring/resources.groovy file (reported against line 0 as a Java Problem):
Internal compiler error: java.lang.VerifyError: (class: org/codehaus/jdt/groovy/internal/compiler/ast/JDTClassNode, method: initialize signature: ()V) Bad access to protected data at org.codehaus.jdt.groovy.internal.compiler.ast.JDTResolver.createClassNode(JDTResolver.java:461)
The resources.groovy file is as good as empty (in its default state), and if I delete it, the error is shown on DataSource.groovy instead, so the file itself seems not to be the cause. The Groovy compiler version used is 1.7.3. I have done a clean STS 2.6 install, installed the Groovy and Grails plugins, and got the same error. What could be the problem? And is there a solution that doesn't involve downgrading to 2.5.2 again? Thank you.
Take a look at your preferences under Groovy -> Compiler. Are you by any chance accidentally using Groovy 1.6?
EDIT
That didn't solve the problem, but as described in http://forum.springframework.org/showthread.php?p=357361, upgrading to the latest dev build of Groovy-Eclipse as well as Grails Tooling should make it work.
