How to manage Maven settings.xml on a shared Jenkins server?

I have a Jenkins cluster that is shared by several teams. I can configure build jobs on it, but I can't easily make changes to the Jenkins configuration itself.
There is a central Nexus Pro Maven repository manager, but each team/group in this very large multinational has its own repo, and publishing to a repo requires a username/password combination.
This means that I have to configure the Jenkins server with a Maven settings.xml that is unique to the team I am working with, without messing up the Maven configuration of the other users of the Jenkins cluster.
Git is the source control repository.
On a shared Jenkins cluster, how do I configure a Maven settings.xml that is unique to a group of build jobs or to a single job? What are the best practices for handling this type of situation?

I would recommend using the Config File Provider plugin, which provides a UI to edit one or more Maven settings files.
These settings files can be passed into your Maven build using the "-s" option.
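For illustration, a minimal team-scoped settings.xml of the kind you might store in the plugin could look like the sketch below; the server id, username and password are placeholders, and the id has to match whatever <id> your pom.xml's distributionManagement (or your mirror configuration) refers to.

    <!-- Sketch of a team-scoped settings.xml managed by the plugin;
         "team-a-releases" and the credentials are placeholders. -->
    <settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
      <servers>
        <server>
          <!-- must match the <id> used in the project's distributionManagement -->
          <id>team-a-releases</id>
          <username>team-a-ci</username>
          <password>TEAM_A_DEPLOY_PASSWORD</password>
        </server>
      </servers>
    </settings>

The job's build step then points at the managed file, for example mvn -s /path/to/team-a-settings.xml clean deploy, so the other jobs on the shared cluster are unaffected.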

For each job, you can specify a specific settings.xml path in the advanced options of the Maven build step.

We manage all our build nodes using Puppet. It gives you greater control than just settings.xml. Highly recommended.
Puppet is IT automation software that helps system administrators manage infrastructure throughout its lifecycle, from provisioning and configuration to patch management and compliance. Using Puppet, you can easily automate repetitive tasks, quickly deploy critical applications, and proactively manage change, scaling from 10s of servers to 1000s, on-premise or in the cloud.

If your company is using Nexus Pro (as you've already mentioned), then your unique Maven settings.xml can be stored there and retrieved at build time using the nexus-maven-plugin, as described here: http://books.sonatype.com/nexus-book/reference/maven-settings.html
Combined with token-based access (again, a Nexus Pro feature), you do not need to store passwords insecurely in the settings.xml (see https://books.sonatype.com/nexus-book/reference/usertoken.html).
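To sketch what that looks like, the server entry in settings.xml carries the user token that Nexus Pro generates for your account instead of your real credentials; the server id below is a placeholder, and the two values stand in for the generated token name code and pass code.

    <!-- Hedged sketch: a server entry using a Nexus Pro user token instead of a password.
         "team-nexus" and the token values are placeholders. -->
    <servers>
      <server>
        <id>team-nexus</id>
        <username>YOUR_TOKEN_NAME_CODE</username>
        <password>YOUR_TOKEN_PASS_CODE</password>
      </server>
    </servers>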

I faced a similar issue when building a project with Jenkins, as the ojdbc jar is not available in the Maven Central repository.
It worked when I placed the ojdbc jar in the WEB-INF/lib folder and removed the Maven dependency from pom.xml.

A good way to automate the provisioning of Maven executors with a specific configuration is to use the ElasticBox Jenkins plugin.
You only need to create a box for the Maven slave that defines all the customization variables and files to be used by it, and choose your preferred cloud provider for deploying it.
ElasticBox also gives you the flexibility to create new slaves only when needed and to automatically destroy them after a specified retention time.
Here is how-to connect your Jenkins with ElasticBox:
https://elasticbox.com/documentation/integrate-with-jenkins/jenkins-elasticbox-setup/#jenkins-configure-plugin
Here is how to automate creation of Jenkins slaves with ElasticBox:
https://elasticbox.com/documentation/integrate-with-jenkins/jenkins-elasticbox-slaves/
There is a blog post about how to easily build and deploy from GitHub pull requests with the ElasticBox Jenkins plugin:
https://elasticbox.com/blog/github-pull-requests-jenkinsplugin/

Related

Configuring team-managed credentials in settings.xml for Maven builds in Bamboo

Currently I am using Bamboo for Maven builds. Artifactory is used for artifact deployment. During the initial Bamboo setup, the Artifactory admin user and password were configured in Maven's settings.xml. Because of the admin privileges, the Bamboo plan overwrites the artifact on every deployment. I would like to stop this artifact-overwrite behavior.
I would like to:
Replace the admin account in settings.xml with another account that has only artifact upload access in Artifactory.
Have teams use their own generic IDs in their Bamboo plans for uploading artifacts to Artifactory.
Is there any other standard solution to fix this overwrite problem? I am not sure how teams will be able to pass their Artifactory generic ID and password to the Maven build in Bamboo. Is this the approach the industry uses when dealing with Maven builds for multiple teams in Bamboo/Jenkins?
Thanks,
Pushpraj
You can configure different Artifactory users for each of your Bamboo Plans. Here's how you can achieve this:
Install the Bamboo Artifactory Plugin on your Bamboo instance.
The plugin adds an "Artifactory" section in Bamboo's Administration. Configure the details of your Artifactory server there.
The plugin also adds a few new Bamboo tasks. One of them is "Artifactory Maven". This task allows you to run a Maven build while resolving the build dependencies from, and deploying the build artifacts to, Artifactory. For deployment and resolution, the task configuration allows you to override the Artifactory user defined in the Bamboo administration, which lets you deploy artifacts from different plans as different Artifactory users.
Important: the deployment to Artifactory happens during Maven's install goal.
You can read more about the plugin in the Bamboo Artifactory Plugin User Guide.

mvn release:perform creates multiple staging repos

I have a project that uses Maven, and I am attempting to deploy to the Sonatype OSS repository. When I execute mvn release:perform, five different staging repos are created instead of just one. The various files are spread among these different repos, so I cannot successfully deploy.
Is there a reason that maven is splitting up my release?
The project along with my pom files are here:
https://github.com/Uncodin/bypass/tree/master/platform/android
It turned out that the staging repositories were split because each deployment appeared to come from a different IP address. This can happen in corporate environments where a floating IP address proxies outbound requests.
https://issues.sonatype.org/browse/OSSRH-5454?focusedCommentId=180666&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-180666
The parent POM you are using (https://oss.sonatype.org/service/local/repositories/central/content/org/sonatype/oss/oss-parent/7/oss-parent-7.pom) does not really give a hint about what's going wrong; there is only one release repository configured: https://oss.sonatype.org/service/local/staging/deploy/maven2/
Is it possible this is caused by some configuration inside the Nexus proxy rather than by your Maven structure?

Adding artefacts to Archiva other than through its interface

How can I insert an artefact into Archiva other than through its web interface?
It is possible to upload artifacts using Maven.
Please refer to the Archiva Users Guide, Section Deploying to Repository for the details.
The following methods are available:
upload via the user interface (I presume this is the one you refer to as the web interface)
connect via any WebDAV client at http://localhost:8080/archiva/repository/repo-name (adjust according to your configuration)
use HTTP PUT with basic authentication to the same location as the WebDAV URL (this is the method that other tools like Maven, Ivy, etc. would use)
drop the file into the correct place in the file system and wait for Archiva's scanner to pick up the changed artefact
As Torsten's answer indicates, uploading using Maven's deploy phase or deploy:deploy-file goals (or equivalent from another build tool) is likely what you want since it will take care of constructing the correct path for the artefact and pushing any associated metadata, assuming you are using Archiva as a Maven artefact repository.
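As a rough sketch of the Maven route (the repository ids, repository names and localhost URL below are placeholders matching a default Archiva install), the project's pom.xml points its distributionManagement at the Archiva repositories, and the matching credentials live in a <server> entry with the same id in settings.xml:

    <!-- Sketch only: ids, repository names and the URL are placeholders for a default Archiva setup. -->
    <distributionManagement>
      <repository>
        <id>archiva.internal</id>
        <url>http://localhost:8080/archiva/repository/internal/</url>
      </repository>
      <snapshotRepository>
        <id>archiva.snapshots</id>
        <url>http://localhost:8080/archiva/repository/snapshots/</url>
      </snapshotRepository>
    </distributionManagement>

With that in place, mvn deploy (or deploy:deploy-file for a one-off artefact) performs the authenticated HTTP PUT described above.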
You have an upload screen in the web UI.
See http://www.youtube.com/watch?v=LSXe26inf0Y

What is a "resolver" when deploying to maven repositories?

We're trying to set up Gradle to publish artifacts to Artifactory. There are two sets of credentials that can be configured, a "deployer" and a "resolver". The deployer seems fairly obvious: as the target repository is read-only, a set of credentials is necessary to authenticate for deployment.
However, what is this "resolver" in the context of Maven repositories? We are already using Gradle's dependency management, so project dependencies are already being resolved via the repositories we have configured in Gradle.
So what's the point of this second "resolver" configuration, and why would it need credentials?
Thanks.
If you're using the Gradle Artifactory Plugin, it allows you to set an optional user/password for a repository that requires authenticated read access (which can be set up in Artifactory using permission targets).
A Maven (or Ivy, if configured) repository with these credentials will be added to your project by the plugin behind the scenes.
I think this is needed, for example, if you use your own enterprise repository (like Nexus or Artifactory) and you need credentials even to read those repositories (which may be the case in companies).

Good configuration for Archiva?

We have recently decided to use Maven as our build system. I'm responsible for migrating all the projects from Ant to Maven. We also decided to use Apache Archiva to set up an internal repository for the company.
I see that Archiva creates two repositories by default (internal and snapshots). I also see that it configures the internal repository to proxy the central and java.net repositories.
Are there any best practices regarding Archiva configuration?
The Archiva documentation describes the possibility of configuring Maven to use only the internal repository and then accessing remote repositories through it. What do you think about this option?
Thanks for your help
A Maven repository manager is essential to support enterprise Maven development. The Maven installer is merely a bootstrap; running Maven for the first time downloads everything it needs from the Maven Central repository in order to compile your project.
The benefits of using a Maven repository manager are documented elsewhere, but I'll summarize:
Efficiency. The repository acts as a cache for Maven Central artifacts.
Resilience. The repository protects against remote repository failures or a lack of internet connectivity.
Repeatability. Storing common artifacts centrally avoids shared build failures caused by developers maintaining their own local repositories.
Audit. If all third-party libraries used by development come from a single entry point in the build process, one can assess how often they're used (based on download log files) and what kinds of licensing conditions apply.
To that end I'd encourage you to use the following Archiva features:
Locking down to only use Archiva. Configure Maven clients to download everything from Archiva (see the settings.xml sketch after this list).
Virtual repositories for each team. Configure all the remote repositories used by teams centrally in Archiva instead of leaving the details to the teams themselves.
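As a sketch of that lock-down (the repository name and the localhost URL are placeholders for a default Archiva install), a mirror entry in each client's settings.xml redirects every repository request to Archiva:

    <!-- Sketch: route all repository traffic through Archiva; the URL is a placeholder. -->
    <mirrors>
      <mirror>
        <id>archiva</id>
        <mirrorOf>*</mirrorOf>
        <url>http://localhost:8080/archiva/repository/internal/</url>
      </mirror>
    </mirrors>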
PS: I use Nexus for my Maven repository management, but the same concepts apply.
