Currently the way I'm referencing my jars in my Spring Boot application is by using the Maven dependency manager with a system path:
<dependency>
    <groupId>example1.1</groupId>
    <artifactId>example1.1</artifactId>
    <version>1.1</version>
    <scope>system</scope>
    <systemPath>${basedir}/libs/example1.1.jar</systemPath>
</dependency>
... etc (30 more jars)
How would one avoid this way of loading jars? I'm moving over to publishing it as a jar and getting an error of:
should not point at files within the project directory,
${basedir}/libs/example1.1.jar will be unresolvable by dependent
projects
Any easy way to load multiple jars without putting them on the local system?
Cheers
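One way to avoid the system scope entirely is to install each jar into a repository and then reference it by normal coordinates. A sketch using the standard install plugin, reusing the placeholder coordinates from the question (repeat or script this for each of the jars in libs/):
# coordinates below are the placeholders from the question, not real ones
mvn install:install-file \
    -Dfile=libs/example1.1.jar \
    -DgroupId=example1.1 \
    -DartifactId=example1.1 \
    -Dversion=1.1 \
    -Dpackaging=jar
After that, the dependency can be declared without <scope>system</scope> and <systemPath>; for a shared setup, deploy:deploy-file against a repository manager works the same way.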
Related
I have an existing Eclipse Maven project which builds to a jar. The pom has one major dependency, the BiRT 4.8.0-202010080643 runtime.
<dependency>
    <groupId>com.customer.birt.runtime</groupId>
    <artifactId>org.eclipse.birt.runtime</artifactId>
    <version>4.8.0-202010080643</version>
</dependency>
So they pushed the artifact into their own Nexus; that's why the groupId is com.customer.birt.runtime.
I really don't know how the guy did that or which tools he used. Currently I want to update to BiRT 4.9. Replacing the above with the only available:
<dependency>
    <groupId>org.eclipse.birt</groupId>
    <artifactId>birt-runtime</artifactId>
    <version>4.9.0</version>
    <type>pom</type>
</dependency>
does not go well. Both are totally different constellations of the same big project. How can I make use of the above Maven dependency on 4.9 in my simple BiRT project? I'm building only a service for a desktop application that is hosted and run within an RCP application. I started to list the individual Maven dependencies so that the Java code compiles, which I managed, but I still have a few unit tests that execute and render a ReportEngine and fail because of missing dependencies at runtime. This is because the ReportEngine loads APIs at runtime.
I started to post here once I noticed that I would be declaring the separate dependencies in pom.xml blindly, which is (even if the unit tests pass) very unreliable.
Thank you so much!
M.Abdu
My current solution is what I put in the comments, or even simpler: I just uploaded the birt-runtime jar manually into Nexus using my account with the customer, and then put the exact same unique coordinates groupId:artifactId:version in my pom, plus some other dependencies depending on what my unit tests ask for at runtime, e.g. eclipse.platform, emf.core, w3c, batik.css, etc.
I am talking about executing the build using mvn clean verify, resulting in a jar file.
The jar is the one you get from here:
https://search.maven.org/remotecontent?filepath=org/eclipse/birt/birt-runtime/4.9.0/birt-runtime-4.9.0.zip
The pom in my case:
<dependency>
    <groupId>org.eclipse.birt</groupId>
    <artifactId>runtime</artifactId>
    <version>4.9.0-20220502</version>
</dependency>
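For reference, a manual upload like that can also be done from the command line with the deploy plugin; a sketch, where the repositoryId and url are placeholders for the customer's Nexus as configured in settings.xml:
# repositoryId/url are placeholders, adjust to the actual Nexus setup
mvn deploy:deploy-file \
    -Dfile=birt-runtime-4.9.0.jar \
    -DgroupId=org.eclipse.birt \
    -DartifactId=runtime \
    -Dversion=4.9.0-20220502 \
    -Dpackaging=jar \
    -DrepositoryId=customer-nexus \
    -Durl=https://nexus.example.com/repository/releases/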
I have a project where several dependency versions need to be chosen at deployment time, i.e. specified in the classpath.
The provided scope prevents the dependency from being packaged, but the project fails when I try to run it from within IntelliJ IDEA,
e.g.
<dependency>
    <groupId>org.apache.activemq</groupId>
    <artifactId>activemq-core</artifactId>
    <version>5.3.1</version>
    <scope>provided</scope>
</dependency>
produces
{stacktrace ...}
Caused by: java.lang.ClassNotFoundException: javax.jms.ConnectionFactory
If I remove the scope the project runs fine but, of course, includes the jar.
If you mark a jar as <provided>, the classes need to be provided by the container that runs the surrounding war/ear.
When you run your project from within IntelliJ, it is probably deployed to some kind of container. Make sure this container provides your <provided> dependencies.
The final approach was to create two Maven profiles, one for running locally and one for packaging. The local profile used compile scope while the package profile used provided.
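A sketch of that setup, with the scope pulled out into a property (the profile ids and property name are illustrative, not from the original answer):
<profiles>
    <profile>
        <id>local</id>
        <activation>
            <!-- used when no profile is selected, e.g. running from the IDE -->
            <activeByDefault>true</activeByDefault>
        </activation>
        <properties>
            <activemq.scope>compile</activemq.scope>
        </properties>
    </profile>
    <profile>
        <id>package</id>
        <properties>
            <activemq.scope>provided</activemq.scope>
        </properties>
    </profile>
</profiles>

<dependencies>
    <dependency>
        <groupId>org.apache.activemq</groupId>
        <artifactId>activemq-core</artifactId>
        <version>5.3.1</version>
        <!-- resolved from whichever profile is active -->
        <scope>${activemq.scope}</scope>
    </dependency>
</dependencies>
Running locally uses the default local profile, while mvn package -Ppackage builds the artifact with the dependency as provided.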
I got a requirement to customize our projects using Maven. We have 3 projects based on Spring & Hibernate, so the same jars are repeated in each WAR file at build time, which unnecessarily increases the deployment size.
So we have decided to keep all jar dependencies in one project called "CommonJars", which the other existing projects will use. For deployment, we want to generate a TAR/ZIP file of "CommonJars" (to be extracted into a folder at deployment time, most probably Tomcat's lib folder, as it is common to the projects deployed in the webapps folder) and WAR files without the jars, so that we can reduce the size of the WAR files of the 3 projects.
Can somebody suggest how we can achieve this? As I am new to Maven, I am unable to find a proper solution.
I do not agree with your approach, but to solve your Maven issue you need to include a scope in your Maven dependency.
Example:
<dependency>
<groupId>org.hibernate</groupId>
<artifactId>hibernate-core</artifactId>
<version>5.0.1.Final</version>
<scope>provided</scope>
</dependency>
The Maven scope provided will allow you to compile your projects, but Maven will not include those jars in your WAR.
Good luck.
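For the TAR/ZIP part of the question, one possible sketch is to give the CommonJars project an assembly that bundles its dependencies; the descriptor below is an assumption, not something from the question:
<!-- in the pom.xml of CommonJars -->
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <descriptors>
            <descriptor>src/assembly/libs.xml</descriptor>
        </descriptors>
    </configuration>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

<!-- src/assembly/libs.xml (hypothetical descriptor) -->
<assembly>
    <id>libs</id>
    <formats>
        <format>tar.gz</format>
        <format>zip</format>
    </formats>
    <includeBaseDirectory>false</includeBaseDirectory>
    <dependencySets>
        <dependencySet>
            <!-- all declared dependencies end up under lib/ in the archive -->
            <outputDirectory>lib</outputDirectory>
        </dependencySet>
    </dependencySets>
</assembly>
The resulting archive can then be extracted into Tomcat's lib folder, while the three WARs mark the shared jars as provided.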
I'm working on a recommender system using Apache Flink. The implementation runs when I test it in IntelliJ, but now I would like to move to a cluster. I also built a jar file and tested it locally to see if everything was working, but I encountered a problem.
java.lang.NoClassDefFoundError: org/apache/flink/ml/common/FlinkMLTools$
As we can see, the class FlinkMLTools used in my code isn't found when the jar runs.
I built this jar with Maven 3.3.3 with mvn clean install and I'm using the version 0.9.0 of Flink.
First Trial
The fact is that my global project contains other projects (and this recommender is one of the sub-projects). So I have to launch mvn clean install in the folder of the right project, otherwise Maven always builds a jar of another project (and I don't understand why). So I'm wondering if there is a way to tell Maven explicitly to build one specific project of the global project. Indeed, perhaps the path to FlinkMLTools is contained in a link present in the pom.xml file of the global project.
Any other ideas?
The problem is that Flink's binary distribution does not contain the libraries (flink-ml, gelly, etc.). This means that you either have to ship the library jar files with your job jar or that you have to copy them manually to your cluster. I strongly recommend the first option.
Building a fat-jar to include library jars
The easiest way to build a fat jar which does not contain unnecessary jars is to use Flink's quickstart archetype to set up the project's pom.
mvn archetype:generate -DarchetypeGroupId=org.apache.flink \
-DarchetypeArtifactId=flink-quickstart-scala -DarchetypeVersion=0.9.0
will create the structure for a Flink project using the Scala API. The generated pom file will have the following dependencies.
<dependencies>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-scala</artifactId>
        <version>0.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-streaming-scala</artifactId>
        <version>0.9.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.flink</groupId>
        <artifactId>flink-clients</artifactId>
        <version>0.9.0</version>
    </dependency>
</dependencies>
You can remove flink-streaming-scala and instead insert the following dependency tag in order to include Flink's machine learning library.
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-ml</artifactId>
    <version>0.9.0</version>
</dependency>
When you now build the job jar with mvn package, the generated jar should contain the flink-ml jar and all of its transitive dependencies.
Copying the library jars manually to the cluster
Flink includes all jars which are located in the <FLINK_ROOT_DIR>/lib folder in the classpath of the executed jobs. Thus, in order to use Flink's machine learning library you have to put the flink-ml jar and all needed transitive dependencies into the /lib folder. This is rather tricky, since you have to figure out which transitive dependencies are actually needed by your algorithm and, consequently, you will often end up copying all transitive dependencies.
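If you do go that route, the dependency plugin can at least collect the transitive dependencies for you; a sketch, run in the module that declares flink-ml (the output directory is just an example):
mvn dependency:copy-dependencies -DoutputDirectory=target/lib
The jars in target/lib can then be copied into <FLINK_ROOT_DIR>/lib, though you may still want to prune the ones the cluster already provides.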
How to build a specific sub-module with Maven
In order to build a specific sub-module X from your parent project you can use the following command:
mvn clean package -pl X -am
-pl allows you to specify which sub-modules you want to build and -am tells Maven to also build other required sub-modules. It is also described here.
In cluster mode, Flink does not put all library JAR files into the classpath of its workers. When executing the program locally in IntelliJ all required dependencies are in the classpath, but not when executing on a cluster.
You have two options:
copy the FlinkML jar file into the lib folder of all Flink TaskManagers, or
build a fat jar file for your application that includes the FlinkML dependencies (see the sketch below).
See the Cluster Execution Documentation for details.
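If you are not using the quickstart archetype from the answer above, a minimal fat-jar sketch with the shade plugin (the plugin version here is only an example) would be:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <!-- version is illustrative -->
    <version>2.4.1</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>
With this in place, mvn package produces a jar that bundles flink-ml and its transitive dependencies.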
I am using NetBeans to build a Maven project, and have the JTidy Java library as a dependency. It turns out JTidy doesn't exist in any Maven repos, so I can't just add a "normal" dependency entry for it.
What is the best way of handling dependencies on libraries that aren't available in any repo in Maven projects?
I've currently tried adding it to my Maven pom as follows (after copying the jar to my project's /libs folder):
<dependency>
    <groupId>org.w3c</groupId>
    <artifactId>org.w3c.tidy</artifactId>
    <version>9.3.8</version>
    <scope>system</scope>
    <systemPath>${basedir}/libs/jtidy-r938.jar</systemPath>
</dependency>
However it complains that it will be unresolvable by dependent projects.
First of all, it's under another groupId, that's why you didn't find it.
<dependency>
    <groupId>net.sf.jtidy</groupId>
    <artifactId>jtidy</artifactId>
    <version>r938</version>
</dependency>
JTidy
But to answer your question, one way of doing this is to manually install it in your local repo as described here.
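For completeness, a sketch of that manual install, reusing the coordinates from the question (any consistent coordinates work, as long as the pom references the same ones):
# coordinates mirror the system-scoped entry above
mvn install:install-file \
    -Dfile=libs/jtidy-r938.jar \
    -DgroupId=org.w3c \
    -DartifactId=org.w3c.tidy \
    -Dversion=9.3.8 \
    -Dpackaging=jar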
The best way IMHO is to add it to a proxy like Nexus. That way other people can access it from there without having to install it locally. However, this means you have to set up a repository manager, which doesn't make much sense if you are the only developer on the project.