Maven can't find artefact from Nexus

We have an Atlassian Bamboo instance building and deploying our projects (snapshots) to Nexus, around 11:20 pm every day.
Another Bamboo instance runs test plan A at midnight, and it fails because it can't find an artefact in Nexus (although it is looking in the right repositories). That artefact was built by the first Bamboo instance and is actually present in Nexus.
Test plan B starts around 00:30 on the same instance, and it does find the artefact. In the morning, when I launch plan A manually, it works fine.
So I suspect a cache/metadata issue, but I couldn't figure out what the right configuration to set is, either in Nexus or in the Maven settings.
The failing plan runs Maven 2.2.1. Other plans, running Maven 3.0.5 for a different version of our project, don't seem to have the problem. Nexus is 2.7.2-03.
The error is "2 required artifacts are missing." and the list of Nexus groups under "from the specified remote repositories:" is the right one. Those groups are configured like this in the project's POM:
<snapshots>
    <enabled>true</enabled>
    <updatePolicy>always</updatePolicy>
</snapshots>
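For reference, that block sits inside a repository definition in the POM roughly like the following (the id and URL are placeholders, not our real Nexus groups):

<repository>
    <id>nexus-group</id>
    <url>http://nexus.example.com/content/groups/public</url>
    <releases>
        <enabled>true</enabled>
    </releases>
    <snapshots>
        <enabled>true</enabled>
        <updatePolicy>always</updatePolicy>
    </snapshots>
</repository>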
Any idea about how to fix this issue?
Thank you!

Best practices we've identified for moving Maven 2 builds to Maven 3:
1) All of the builders (individual users running builds on personal machines plus Jenkins/CI) should be converted at the same time. In other words, once the POM for an artifact has been adjusted as needed and the built artifact deployed to the remote repository with Maven 3, then Maven 2 should not be used to build that same artifact ever again. This is because Maven 3 uses timestamped snapshots, while Maven 2 does not. Also, Maven 2 and Maven 3 handle dependency resolution and repository metadata differently. Mixing the two doesn't work well.
One symptom: someone commits changes, builds, and deploys to the remote repo with Maven 3, then another developer tries to download the changed jar with Maven 2 and gets the old one. The team can prevent the artifacts from being built with Maven 2 by using the Enforcer plugin (see the sketch after this list).
2) All builders should clean out their local artifact repositories. Maven 2 and Maven 3 handle dependency resolution and repository metadata differently, and a fresh repository insulates the developer from strange, hard to find problems. (Since Maven will then have to download the world on the first build, consider doing the first Maven 3 build before you go to lunch or leave work for the day.)
3) Someone with delete privileges on the remote repository (e.g. Nexus, Artifactory) should log into the remote repo, find the artifact(s) in the Snapshots repository and delete the snapshot version during which the Maven 2 to 3 migration occurred.
For example, say that the team is working on 2.0-SNAPSHOT, and there's a task to move from Maven 2 to 3 for this version. When the POM changes for the task are complete, the developer logs into the remote repo, finds com.company.some-group:myArtifact(s), and removes the 2.0-SNAPSHOT version entirely. This will remove all snapshots built by Maven 2.
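As a sketch of the Enforcer idea from point 1 (the plugin version here is illustrative), a requireMavenVersion rule in the parent POM makes any build attempted with Maven 2 fail fast:

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-enforcer-plugin</artifactId>
    <version>1.4.1</version>
    <executions>
        <execution>
            <id>enforce-maven-3</id>
            <goals>
                <goal>enforce</goal>
            </goals>
            <configuration>
                <rules>
                    <requireMavenVersion>
                        <!-- reject Maven 2.x and anything older -->
                        <version>[3.0,)</version>
                    </requireMavenVersion>
                </rules>
            </configuration>
        </execution>
    </executions>
</plugin>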

Related

How can I use multiple Maven local repositories with precedence

I would like to know if there is any workaround for having multiple local repositories that are consulted in order.
Example:
Default local repository: ~/.m2/repository
Work repository: /var/tmp/m2LocalRepo
When resolving dependencies, I would like Maven to look for the artifact in /var/tmp/m2LocalRepo first, and to fall back to the default repository if it is not found there.
The issue MNG-3655 – Allow multiple local repositories is about the same problem but has not been resolved yet.
UPDATE
My use case is as follows:
An application consists of two Maven projects, projectA and projectB, which share the same version.
projectB is a dependency of projectA.
When developing a new feature, we create a development branch with the same name on each project. However, we do not change the version number, which stays the same as on the main branch.
During a build, on Jenkins for example, I would like to build projectB first and install its artifacts in a separate local repository (for example /var/tmp/m2LocalRepo) rather than in the default repository, so as not to disrupt other builds that also depend on projectB. Then, during the build of projectA, I would like it to get the projectB dependency from /var/tmp/m2LocalRepo and the other dependencies from the default local repository.
Re "During a build, on Jenkins for example":
Jenkins Maven projects have an option Build → Advanced... → ☑ Use private Maven repository with its inline help:
Normally, Jenkins uses the local Maven repository as determined by Maven; the exact process seems to be undocumented, but it's ~/.m2/repository and can be overridden by <localRepository> in ~/.m2/settings.xml (see the reference for more details).
This normally means that all the jobs executed on the same node share a single Maven repository. The upside is that you save disk space; the downside is that those builds can sometimes interfere with each other. For example, you might end up with builds that incorrectly succeed just because you have all the dependencies in your local repository, even though none of the repositories declared in the POM actually have them.
There are also some reported problems regarding having concurrent Maven processes trying to use the same local repository.
When this option is checked, Jenkins will tell Maven to use $WORKSPACE/.repository as the local Maven repository. This means each job will get its own isolated Maven repository just for itself. It fixes the above problems, at the expense of additional disk space consumption.
When using this option, consider setting up a Maven artifact manager so that you don't have to hit remote Maven repositories too often.
If you'd prefer to activate this mode in all the Maven jobs executed on Jenkins, refer to the technique described here.
and the options:
Default ...
Local to the executor
Local to the workspace
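Outside of Jenkins you can get roughly the same isolation by overriding the local repository on the command line; the path below is only an example:

mvn clean install -Dmaven.repo.local=/var/tmp/m2LocalRepo

Note that this replaces the local repository for that invocation rather than layering it on top of ~/.m2/repository, which is exactly the limitation MNG-3655 describes.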
You can copy projectB's artifacts to projectA's local repo on demand using the Maven Resources Plugin (as described in this answer to Maven, how to copy files?), roughly as sketched below.
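A minimal sketch of that copy step, assuming projectB's coordinates are com.example:projectB:1.0-SNAPSHOT and that projectA's build uses /var/tmp/m2LocalRepo (adjust coordinates and paths to your setup):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-resources-plugin</artifactId>
    <version>2.7</version>
    <executions>
        <execution>
            <id>copy-projectB-into-work-repo</id>
            <phase>validate</phase>
            <goals>
                <goal>copy-resources</goal>
            </goals>
            <configuration>
                <!-- target directory follows the local-repository layout groupId/artifactId/version -->
                <outputDirectory>/var/tmp/m2LocalRepo/com/example/projectB/1.0-SNAPSHOT</outputDirectory>
                <resources>
                    <resource>
                        <!-- copy the files projectB installed into the default local repository -->
                        <directory>${user.home}/.m2/repository/com/example/projectB/1.0-SNAPSHOT</directory>
                        <includes>
                            <include>*.jar</include>
                            <include>*.pom</include>
                        </includes>
                    </resource>
                </resources>
            </configuration>
        </execution>
    </executions>
</plugin>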

Jenkins CI server and Nexus Server on the same Box

I am in a situation where I have one Build Server box which is to carry out all continuous integration and manage our maven repository. The box works as follows:
There is one maven repository which is hosted through Apache Server as a URL for developers to use
All Jenkins jobs (including release jobs) run mvn install so that artifacts are kept in this one repository.
I would like to get rid of the Apache server and run Nexus on this same box to manage and host repositories, however I have the following questions/ideas:
With Nexus and Jenkins on the same box, will it mean that I have to manage two repositories: one where Maven installs an artifact into a local repository, and one where Maven deploys an artifact to Nexus?
Would it be possible to have Nexus manage the "mvn install" repository also?
How can I make sure we don't run out of disk space on the server very very quickly all the time?
Thanks
Added in response to comments: Thank you both. I am thinking I will just set the Jenkins jobs and release plugin goals to mvn package deploy:deploy in order to skip the install phase; that way, artifacts go directly from the target directory to Nexus. However, I guess the Jenkins job will still require a local repository from which to use dependencies, which will get copied from Nexus to the local Maven repository during the build; I am not sure if this can be avoided, though.
mvn install installs in the local repository
mvn deploy deploys to the remote repository
These semantics are defined in the lifecycle and map to different plugins; their implementations are different.
You don't have to manage the local repository. In fact, for some, if not most, jobs you might even want to make it local to the job (with the 'Use private Maven repository' option) instead of tied to the user running the job, especially since you plan to use Nexus as the repository manager.
You will have to change your jobs to use mvn deploy instead.
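For mvn deploy to work, the POM needs a distributionManagement section pointing at the Nexus hosted repositories (the ids and URLs below are placeholders for your own instance, and each id must match a server entry with credentials in settings.xml):

<distributionManagement>
    <repository>
        <id>nexus-releases</id>
        <url>http://localhost:8081/nexus/content/repositories/releases</url>
    </repository>
    <snapshotRepository>
        <id>nexus-snapshots</id>
        <url>http://localhost:8081/nexus/content/repositories/snapshots</url>
    </snapshotRepository>
</distributionManagement>

With that in place the job command line can simply be mvn clean deploy.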
How can I make sure we don't run out of disk space on the server very very quickly all the time?
Configure Jenkins and Nexus. Discard old builds and disable automatic artifact archiving; both settings can be found in the Jenkins job configuration. You can also delete old artifacts from Nexus automatically using scheduled tasks.
There is no need to install the artifacts into the local maven repository when using Jenkins/Nexus on a dedicated server.

How to update maven repository manually from the maven build?

We do not have our own repository manager at the moment, so when we build with Maven it creates an .m2 repository in the home directory of the current user.
There are two third-party jars which are not found in Maven Central. Suppose one of them is hasp-srm-api.jar. Today the process is this:
a. The pom.xml of the project depending on hasp-srm-api.jar contains these lines:
<dependency>
    <groupId>com.safenet</groupId>
    <artifactId>hasp</artifactId>
    <version>1</version>
</dependency>
b. Before doing the first build we execute the following command:
mvn install:install-file -Dfile=hasp-srm-api.jar -DgroupId=com.safenet -DartifactId=hasp -Dversion=1 -Dpackaging=jar
My question is this: is it possible to automate this step? I would like to be able to tell Maven to check whether the hasp artifact exists and, if not, install it using the aforementioned command line. How can I do it?
NO. It is not possible to have Maven automatically deploy an artifact into a repository in the fashion you suggest. This goes for both local and remote repositories. If the artifact exists in some repository somewhere, you can add that repository to your build's list of known remote repos, but other than that you have to add it yourself.
You can add it to your local .m2 repository, but that will only be good for that individual environment; other devs will have to repeat the process. This is one of the main attractions of running your own repository server (like Nexus): you can add the artifact to that repository and then everyone in your organization can use it from then on. There is still no way to automate the deployment of the artifact, but it's easy to do and is permanent.
Note that setting up a repository manager is very easy to do and highly recommended. It makes the whole Maven workflow make a lot more sense.
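Once the repository manager is running, uploading the jar is a one-off command along these lines (the repositoryId and url are placeholders for your third-party hosted repository, and the repositoryId must match a server entry with credentials in settings.xml):

mvn deploy:deploy-file -Dfile=hasp-srm-api.jar -DgroupId=com.safenet -DartifactId=hasp -Dversion=1 -Dpackaging=jar -DrepositoryId=thirdparty -Durl=http://nexus.example.com/content/repositories/thirdparty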
The best solution for such problems is to use a repository manager: this kind of dependency is installed into the repository manager only once, and the whole company can then use it as a usual dependency. That's it.
Another option you have is to write your own Maven plugin. The link below may be the right place for you to start:
MOJO FAQ

Deploy artifacts on Nexus with Snapshot policy but without the SNAPSHOT string in the version

Apparently my Nexus is rejecting every deploy I throw at it if the artifact does not have -SNAPSHOT in the version.
Data:
Name of the failing artifact: entando-core-engine-experiment-bundles_with_bootstrap.jar, where experiment-bundles_with_bootstrap is the version, as given in the version element of the pom.xml.
Hosted repository policy on my Nexus: Snapshot, allow redeploy, and so on (the classic configuration for snapshots).
Deployer: Jenkins 1.481.
The same Jenkins job, but with entando-core-engine-SNAPSHOT.jar ---> SUCCESS.
I need this naming convention because I'm building one of the several experiments we run internally, as opposed to the canonical develop branch, which produces a proper entando-core-engine-SNAPSHOT.jar.
Any advice?
I'm totally lost.
The thing is that your Nexus is usually configured not to allow redeployment of a release. A release, from Maven's point of view, is an artifact whose version does NOT end in -SNAPSHOT. By contrast, a SNAPSHOT is intended to be deployed several times into Nexus.
It sounds like you are using neither the Maven Release Plugin nor the Jenkins Release Plugin.
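If you want to keep the experiment name in the coordinates and still deploy to the snapshot-policy repository, one option (a sketch, assuming you are free to change the version element) is to keep -SNAPSHOT as the suffix of the version, since Nexus classifies an artifact as a snapshot by that suffix:

<version>experiment-bundles_with_bootstrap-SNAPSHOT</version>

Nexus would then accept the deploy, and the file would be named entando-core-engine-experiment-bundles_with_bootstrap-SNAPSHOT.jar (or with a timestamp in place of SNAPSHOT, depending on the Maven version used to deploy).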
Nexus is a repository manager that uses different repository formats, with the main format being the Maven repository format. Changing the names of artifacts on the server is not possible since it violates the format. They have to be located in the directory structure established by groupId, artifactId and version and use the artifactId-version-classifier.packaging for the file names.
If you need a different file name on the server, you have to look at a different repository format (a bad idea). If you only need a different file name on the client, just download the artifact under its correct name and rename it.
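For the client-side rename, a hedged sketch with the maven-dependency-plugin (the coordinates are placeholders; use the artifact's real groupId, artifactId and version):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-dependency-plugin</artifactId>
    <version>2.8</version>
    <executions>
        <execution>
            <id>copy-and-rename</id>
            <phase>package</phase>
            <goals>
                <goal>copy</goal>
            </goals>
            <configuration>
                <artifactItems>
                    <artifactItem>
                        <!-- placeholder coordinates -->
                        <groupId>com.example</groupId>
                        <artifactId>entando-core-engine</artifactId>
                        <version>1.0-SNAPSHOT</version>
                        <type>jar</type>
                        <!-- fetch under the canonical repository name, write out under the custom one -->
                        <destFileName>entando-core-engine-experiment-bundles_with_bootstrap.jar</destFileName>
                    </artifactItem>
                </artifactItems>
                <outputDirectory>${project.build.directory}/renamed</outputDirectory>
            </configuration>
        </execution>
    </executions>
</plugin>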

Maven WAR overlay problems, while using Hudson + Artifactory

We have three artifacts:
common.jar : with common classes.
public.war : depending on the common.jar, contains only public site resources.
internal.war : depends on both common.jar and public.war, adding authentication information and security context resource files. Also contains a few administration site classes.
Currently I have structured these in such a way that internal.war overlays itself with public.war.
Building the project locally and installing the artifacts to the local repo works perfectly.
Problems start when trying to get the Hudson builds working with the following sequence:
1) Build all projects in dependency order.
2) Modify common.jar (say, add a new class method).
3) Modify internal.war classes in such a way that they are compile-time dependent on the changes done in step 2.
4) Commit both changes, triggering the Hudson builds.
5) The internal.war build fails because it cannot find the symbols added in step 2.
Somehow the build in step 5 is using an old version of the common.jar, and failing because of it.
The common.jar version number does not change, let's say it's 1.0.0-SNAPSHOT for the purposes of this example.
If I DO change the common.jar version number, the build works. (Presumably because there is then only one release under that release version number.)
Now, what could cause this using of old artifacts in Hudson builds?
We are running Maven builds on Hudson with the command "clean package -e -X -U".
"Deploy artifacts to maven repository" has been checked.
It's hard to definitively answer this without access to the real poms, but here is what I would do:
1) Make sure Hudson is using the exact same version of Maven as you are on your local machine
2) Examine the effective pom.xml of internal.war on the Hudson machine in a terminal via mvn help:effective-pom, making sure you are running the same mvn executable as your Hudson job does (see the commands sketched below). You need to verify the version of the common.jar in the effective pom.xml of internal.war. It could be different from what you expect due to profiles or settings.xml differences.
3) Check the settings.xml file for your Hudson install of Maven. In particular you need to verify all is well in your distributionManagement, servers, and repositories stanzas. Another good way to check this is to go to your internal.war project and run mvn help:effective-settings and see if what is there matches what is on your local machine.
Something is awry and it won't take long to find with the right analysis.
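A quick way to run the checks from points 2 and 3 on the Hudson box (the output file name is arbitrary):

mvn help:effective-pom -Doutput=effective-pom.xml
mvn help:effective-settings

Then search the generated effective-pom.xml for the common.jar dependency and compare its version, and the effective settings, against what you see on your local machine.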
