mvn release:perform creates multiple staging repos

I have a project that uses Maven, and I am attempting to deploy to the Sonatype OSS repository. When I execute mvn release:perform, five different staging repositories are created instead of just one. The files are spread among these different repositories, so I cannot successfully deploy.
Is there a reason that Maven is splitting up my release?
The project along with my pom files are here:
https://github.com/Uncodin/bypass/tree/master/platform/android

It turns out that each staging repository was created because the deploy appeared to come from a different IP address. This can happen in corporate environments where a floating IP address proxies outbound requests, so consecutive uploads look like separate clients to Nexus.
https://issues.sonatype.org/browse/OSSRH-5454?focusedCommentId=180666&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-180666
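For reference, one common way to avoid implicitly created staging repositories is to manage the staging session explicitly with Sonatype's nexus-staging-maven-plugin, which gathers all deploys from one build into a single staging repository. A minimal sketch, assuming an "ossrh" server id that matches a <server> entry in your settings.xml (the plugin version shown is only an example):

    <build>
      <plugins>
        <plugin>
          <groupId>org.sonatype.plugins</groupId>
          <artifactId>nexus-staging-maven-plugin</artifactId>
          <version>1.6.13</version>
          <extensions>true</extensions>
          <configuration>
            <!-- must match a <server> entry carrying your OSSRH credentials -->
            <serverId>ossrh</serverId>
            <nexusUrl>https://oss.sonatype.org/</nexusUrl>
            <!-- keep the staging repo open for inspection before release -->
            <autoReleaseAfterClose>false</autoReleaseAfterClose>
          </configuration>
        </plugin>
      </plugins>
    </build>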

The parent POM you are using (https://oss.sonatype.org/service/local/repositories/central/content/org/sonatype/oss/oss-parent/7/oss-parent-7.pom) does not really give a hint about what's going wrong; it configures only one release repository: https://oss.sonatype.org/service/local/staging/deploy/maven2/
Is it possible this is caused by some configuration inside the Nexus proxy rather than by your Maven structure?
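For context, the distributionManagement inherited from oss-parent-7 looks roughly like this (reconstructed from the POM linked above), so every release deploy should target that single staging URL:

    <distributionManagement>
      <snapshotRepository>
        <id>sonatype-nexus-snapshots</id>
        <url>https://oss.sonatype.org/content/repositories/snapshots/</url>
      </snapshotRepository>
      <repository>
        <id>sonatype-nexus-staging</id>
        <url>https://oss.sonatype.org/service/local/staging/deploy/maven2/</url>
      </repository>
    </distributionManagement>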

Related

Can Sonatype Nexus use maven's local repository

I have a PC working as a Sonatype Nexus server as well as a development environment. I know Nexus stores artifacts for proxy-type repositories under sonatype-work\nexus\storage in its installation root, and Maven uses a local repository to store artifacts (the default directory is C:\Users\USER_NAME\.m2\repository).
The question arises because I'm using Maven with Nexus running on the same machine: I end up with two copies of every artifact, which is a big waste of storage.
In Nexus's configuration tab for proxy-type repositories, there is an option named Override Local Storage Location.
My question is: can I set this to my Maven local repository?
That's a bad idea. One common purpose of Nexus is to publish artifacts internally within your organisation, typically via mvn deploy. Your Maven local repository, on the other hand, serves as a cache to avoid re-downloading artifacts that have already been fetched. If you mix the two together, you might accidentally publish artifacts to your organisation when you only wanted to test locally on your PC.
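To make the separation concrete, here is a minimal sketch of the publishing side: a distributionManagement section pointing at hosted Nexus repositories (the hostname and repository paths are hypothetical, following the Nexus 2.x URL layout). mvn deploy publishes through this section, while the .m2 folder remains a private cache and is never the publish target:

    <distributionManagement>
      <repository>
        <id>nexus-releases</id>
        <url>http://nexus.example.com:8081/nexus/content/repositories/releases/</url>
      </repository>
      <snapshotRepository>
        <id>nexus-snapshots</id>
        <url>http://nexus.example.com:8081/nexus/content/repositories/snapshots/</url>
      </snapshotRepository>
    </distributionManagement>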

Migrating maven artifact repositories - pom <url> value points to old repo

Question:
When importing Maven artifact repositories (from other instances of Artifactory or from Nexus, for example), many artifact POMs (and most parent POMs) contain <url> tags that reference the old repository. These <url> tags sit inside the <distributionManagement> and <repositories> elements.
Do we need to go through a time-consuming process of updating these URLs for every single artifact (and parent POM, where applicable)?
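For illustration, a leftover block of the kind described might look like this (the old-server URL is hypothetical):

    <repositories>
      <repository>
        <id>old-artifactory</id>
        <url>http://old-artifactory.example.com/artifactory/libs-release</url>
      </repository>
    </repositories>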
Further Information:
We are in the process of migrating some artifact repositories to a whole new environment: an old Artifactory instance and a Nexus instance from a separate project need to be merged into a single Artifactory instance in the new environment. We currently don't have access to run Maven builds against the Nexus repo; we have only been given access to its filesystem to pull artifacts across.
The new Artifactory version is newer than the old one, so we used the following process:
1. Perform a system export excluding binaries
2. Copy the filestore directory across to the new Artifactory server
3. Import the system export
For Nexus, we are rsyncing the filesystem for each repository across to the new Artifactory server, and using the 'Import Repository from Path' feature.
These imports have all finished successfully, and we can see all of the required artifacts in the new Artifactory instance.
We have successfully executed a Maven build that pulled down dependencies imported from the old Artifactory instance, and this same build successfully published its artifacts back to the new Artifactory instance as well.
Given our successful tests so far, we're not sure if we really need to update these URLs, or if they will become a problem later for some reason (such as when we decommission the old Artifactory instance).
You're lucky to use Artifactory in your new environment :)
Artifactory will automatically remove any <repositories> references from your POM files, leaving the resolution rules to your settings.xml. All you need to do is generate a new settings.xml file from your new Artifactory, and all resolution will go through it.
In order for this to work, declare the old Artifactory and Nexus as remote repositories for the new Artifactory instance (don't use export/import). Once the new Artifactory fetches an artifact from the old Artifactory or Nexus, it removes the repositories declaration and stores the new, clean POM in its cache.
After a while, when you're sure everything is cached, you can decommission the old servers and declare those repositories offline (optionally moving the artifacts to a local repository).
Neither the repositories nor the distributionManagement elements have an impact on your usage of the components, so nothing needs to be done on import.
distributionManagement specifies where components are released to. Since the components are already released and sitting in your repository server, its content no longer matters.
Having a repositories element in your POM files is a very bad practice and should be avoided. However, if you are using a repository manager together with the appropriate mirrorOf setup in settings.xml, none of the repositories declarations will be taken into account; instead, your repository manager will be contacted as defined in your settings.xml (a sketch follows below).
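A minimal sketch of that mirrorOf setup, with a placeholder URL standing in for your repository manager:

    <settings>
      <mirrors>
        <mirror>
          <id>repo-manager</id>
          <!-- "*" routes requests for every repository to the manager,
               so <repositories> entries in POMs are effectively ignored -->
          <mirrorOf>*</mirrorOf>
          <url>https://repo.example.com/artifactory/remote-repos</url>
        </mirror>
      </mirrors>
    </settings>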
As you can see, you can just migrate the components and leave them alone. Modifying the POMs of already released components is bad practice, since it means that some clients will have one POM while others have a different one for the SAME artifact. This violates the idea of an immutable release artifact and can cause problems.
And in terms of migration, you can easily just migrate the repositories in Nexus and turn off the old servers (at least you could when migrating to Nexus). That way you don't have to run a number of them in parallel and can decommission quickly, while being sure you have all your components in your new repository manager.

How to manage maven settings.xml on a shared jenkins server?

I have a Jenkins cluster that is shared by several teams. I can configure build jobs on it, but I can't easily make changes to the Jenkins configuration itself.
There is a central Nexus Pro Maven repository manager, but each team/group in this very large multinational has its own repo, and publishing to the repos requires a username/password combination.
This means that I have to configure the Jenkins server with a Maven settings.xml that is unique to the team I am working with, without messing up the Maven configuration of the other users of the Jenkins cluster.
Git is the source control repository.
On a shared Jenkins cluster, how do I configure a Maven settings.xml that is unique to a group of build jobs or to a single job? What are the best practices for handling this type of situation?
I would recommend using the configuration file plugin, which provides a UI to edit one or more Maven settings files.
These settings files can then be passed into your Maven build using the -s option.
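As a sketch, a team-specific settings file managed this way would mainly carry that team's server credentials (the ids and names below are made up; the id must match the repository id the job deploys to):

    <settings>
      <servers>
        <server>
          <!-- must match the <id> in the job's distributionManagement -->
          <id>team-a-releases</id>
          <username>team-a-ci</username>
          <!-- prefer an encrypted value, see mvn --encrypt-password -->
          <password>{encrypted-password}</password>
        </server>
      </servers>
    </settings>

Each job then runs with something like mvn -s /path/to/team-a-settings.xml deploy.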
You can also specify a specific settings.xml path per job in the Maven Advanced Options section.
We manage all our build nodes using Puppet. It gives you greater control than just settings.xml. Highly recommended.
Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.
If your company is using Nexus Pro (as you've already mentioned), then your unique Maven settings.xml can be stored there and retrieved at build time using the nexus-maven-plugin, as described here: http://books.sonatype.com/nexus-book/reference/maven-settings.html
Combined with token-based access (again, a Nexus Pro feature), you do not need to store passwords insecurely in the settings.xml (see https://books.sonatype.com/nexus-book/reference/usertoken.html).
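With user tokens enabled, the <server> entry carries the two parts of a generated token instead of real credentials; a sketch with placeholder values:

    <server>
      <id>nexus</id>
      <!-- token name and pass parts generated by Nexus Pro, not the real login -->
      <username>TOKEN_NAME_PART</username>
      <password>TOKEN_PASS_PART</password>
    </server>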
I faced a similar issue when building a project with Jenkins, as the ojdbc JAR is not available in the Maven Central repository.
It worked when I placed the ojdbc JAR in the WEB-INF/lib folder and removed the Maven dependency from pom.xml.
A good way to automate the provisioning of Maven executors with specific configuration is the ElasticBox Jenkins plugin.
You only need to create a box for the Maven slave that defines all the customization variables and files to be used, and choose your preferred cloud provider for deploying it.
ElasticBox also gives you the flexibility to create new slaves only when needed and automatically destroy them after a specified retention time.
Here is how-to connect your Jenkins with ElasticBox:
https://elasticbox.com/documentation/integrate-with-jenkins/jenkins-elasticbox-setup/#jenkins-configure-plugin
Here is how to automate creation of Jenkins slaves with ElasticBox:
https://elasticbox.com/documentation/integrate-with-jenkins/jenkins-elasticbox-slaves/
There is a blog post about how to easily build and deploy from GitHub pull requests with the ElasticBox Jenkins plugin:
https://elasticbox.com/blog/github-pull-requests-jenkinsplugin/

Reusing Artifactory's maven repo

I'm trying to figure out if it's possible to reuse Artifactory's Maven repo on the local machine where the Artifactory server is running. The following details what I am trying to do.
I have a server where Artifactory runs, and I'm planning on setting up Jenkins on the same server. If possible, I would like to have only one Maven repository on the server. Since Artifactory already runs there, I would expect it to maintain some kind of Maven repository (I looked around for it but couldn't find it).
Currently, when Jenkins uses Maven to build a Maven project, it downloads the dependent JARs into a local Maven repo (a .m2 folder) on the server. Instead of this, would it be possible to point the settings.xml that Maven is using to some local folder under Artifactory where Artifactory stores all the JARs? Basically, I would like Maven to think that all the JARs are already available in a local repo (which Artifactory maintains), so it wouldn't have to download them from Artifactory.
If Maven and Artifactory can share the same repo folder, this would be possible. But if Artifactory uses its own structure to maintain the Maven repository (something other than the structure Maven follows with its .m2 folder), this would not be possible.
I should state that I have very minimal knowledge of Artifactory, other than the fact that it is a Maven repository manager.
Answering my own question here, as more research suggests that this is not possible. I found another question here on SO that states:
Artifactory uses the Java Content Repository (JCR) standard to store artifacts. It is an abstraction above various storage implementations, which include the filesystem, relational databases, etc. In any case, JCR manages the store by checksums (to reduce size and bandwidth), so the repository is not directly browsable in the filesystem. The default implementation stores the binaries on the filesystem (inside $ARTIFACTORY_HOME/data/filestore) and the metadata in a Derby DB.
How Artifactory manages repos
A blog post by the Nexus guys also suggests that this is not possible.
Contrasting Nexus and Artifactory -> Contrast #2

Good configuration for Archiva?

We have recently decided to use Maven as our build system. I'm responsible for migrating all the projects from Ant to Maven. We also decided to use Apache Archiva to host an internal repository for the company.
I see that Archiva creates two repositories by default (internal and snapshots). I also see that it configures the internal repository to proxy the central and java.net repositories.
Are there some best practices regarding Archiva configuration?
The Archiva documentation mentions the possibility of configuring Maven to use only the internal repository and accessing remote repositories through it. What do you think about this option?
Thanks for your help
A Maven repository manager is essential to support enterprise Maven development. The Maven installer is merely a bootstrap; running Maven for the first time downloads everything it needs from the Maven Central repository in order to compile your project.
The benefits of using a Maven repository manager are documented elsewhere, but I'll summarize:
Efficiency. The repository acts as a cache for Maven Central artifacts.
Resilience. The repository protects against remote repository failures or a lack of internet connectivity.
Repeatability. Storing common artifacts centrally avoids shared build failures caused by developers maintaining their own local repositories.
Audit. If all 3rd-party libraries used in development come from a single entry point in the build process, one can assess how often they're used (based on download log files) and what licensing conditions apply.
To that end I'd encourage you to use the following Archiva features:
Locking down to only use Archiva. Configure Maven clients to download everything from Archiva (see the settings.xml sketch after this list).
Virtual repositories for each team. Configure all the remote repositories used by teams centrally in Archiva instead of leaving the details to the teams themselves.
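A lockdown sketch for the first point, assuming Archiva's default URL layout for the internal repository (adjust the host and port to your installation):

    <settings>
      <mirrors>
        <mirror>
          <id>archiva-internal</id>
          <mirrorOf>*</mirrorOf>
          <url>http://archiva.example.com:8080/archiva/repository/internal/</url>
        </mirror>
      </mirrors>
    </settings>

Archiva's proxy connectors then fetch anything missing from the remote repositories on demand, so clients never need a second URL.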
PS: I use Nexus for my Maven repository management, but the same concepts apply.
