How to set up a CI/CD build with two Artifactory instances - Maven

I have two Artifactory URLs, one for production and one for non-production. I want to configure a CI/CD pipeline that publishes non-production artifacts to one Artifactory server and final production artifacts to the other.
Can we have two Artifactory repository entries in the same pom.xml file? How can the above scenario be achieved in a single CI job?
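One way this is often handled (a sketch; the profile names, server ids, and URLs below are placeholders, not your actual instances) is to declare both targets in pom.xml behind profiles, so a single CI job can choose the destination per stage with -P:

<!-- sketch: two deployment targets behind profiles; ids and URLs are placeholders -->
<profiles>
  <profile>
    <id>non-prod</id>
    <distributionManagement>
      <repository>
        <id>artifactory-nonprod</id>
        <url>https://nonprod.example.com/artifactory/libs-release-local</url>
      </repository>
    </distributionManagement>
  </profile>
  <profile>
    <id>prod</id>
    <distributionManagement>
      <repository>
        <id>artifactory-prod</id>
        <url>https://prod.example.com/artifactory/libs-release-local</url>
      </repository>
    </distributionManagement>
  </profile>
</profiles>

The CI job then runs mvn deploy -P non-prod for non-production builds and mvn deploy -P prod for release builds, with credentials for both server ids kept in settings.xml.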

Related

How to build dependent projects using Bitbucket Pipelines

I am trying to get my build working with Pipelines using Maven. I have two Bitbucket repositories for two Maven projects: repository1 -> project1, repository2 -> project2. project2 depends on project1. I have no problem building project1, as it does not depend on any other project. But when I try to build project2 with Pipelines, the build fails because Maven cannot find the project1 artifact.
I learned that every pipeline runs within a Docker image, so my guess is that the pipelines for project1 and project2 run in two separate Docker containers. Because of this, when I run the pipeline for project2, Maven does not find the project1 artifact in the local repository. One way to fix this is to host a remote Maven repository for my project artifacts and add that repository to the POM of project2, but I don't want to host a Maven repo. I want Maven to pick the artifact up from the local repo. How can I get this working?
I'm sorry that no one ever answered this question for you. Setting up Bitbucket Pipelines to use private Maven repositories requires you to create a custom settings.xml file in the pipeline and then invoke Maven with that file specified.
You can't just put a settings.xml file in your source code repository, as that would put your credentials at risk. Instead, create a settings.xml file that reads its credentials from Bitbucket Pipelines secure environment variables.
It's pretty straightforward once you see it in action. I actually wrote an extensive guide on how to completely set up Bitbucket Pipelines with Maven repositories that shows specifically how to do this in a secure manner.
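The shape of that settings.xml can be as simple as this (a sketch; the server id and variable names are assumptions to be matched to your repository and your pipeline's secure variables):

<!-- sketch: credentials come from Bitbucket Pipelines secure environment variables, not from source control -->
<settings>
  <servers>
    <server>
      <id>private-repo</id>
      <username>${env.MAVEN_REPO_USERNAME}</username>
      <password>${env.MAVEN_REPO_PASSWORD}</password>
    </server>
  </servers>
</settings>

The pipeline step then invokes Maven with mvn -s settings.xml deploy so the file is actually picked up.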

Relation between Jenkins and Nexus

Can someone please explain the relation between Jenkins and Nexus? I'm new to this area, so pardon me if this is a stupid question.
Jenkins - a continuous integration (CI) platform.
Nexus - a repository manager.
You also tagged Maven, so in short: Jenkins triggers Maven, Maven builds your packages, and while doing so it downloads dependencies from Nexus and uploads the resulting artifacts back to Nexus.
Think of Nexus as the package repository that distributes your artifacts, and Jenkins as just the build machine that produces them.
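To make the flow concrete, the upload half is usually a distributionManagement section in the pom.xml (a sketch; the ids and URLs are placeholders for your own Nexus instance):

<!-- sketch: Maven deploys the build output here when Jenkins runs mvn deploy -->
<distributionManagement>
  <repository>
    <id>nexus-releases</id>
    <url>https://nexus.example.com/repository/maven-releases/</url>
  </repository>
  <snapshotRepository>
    <id>nexus-snapshots</id>
    <url>https://nexus.example.com/repository/maven-snapshots/</url>
  </snapshotRepository>
</distributionManagement>

The download half is typically a mirror entry in settings.xml pointing at the same Nexus, so all dependency traffic goes through it.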

Deploy latest build from repository to Tomcat

What I want to achieve is two-step build automation.
Step A - Build & upload to artifact repo
1. Create a build job in Jenkins which creates a build after every check-in
2. Upload every successful build to the Archiva server
Step B - Get latest artifact & deploy to required servers
1. After every desired interval, get the latest build from Archiva
2. Deploy the build uploaded in Step A.2 to the dev/qa/stg servers by unzipping its contents into the web server directory
I was able to achieve Step A by using Maven goals in my project's pom.xml, but do you have any ideas/suggestions/best practices for Step B?
I understand/agree that I would need two different jobs with different pom.xml files. The question remains: how do we get the latest war from the repo in the pom, and how do we deploy that latest war to the remote server by unzipping it, given that Tomcat there does not have the admin module?
I would not deploy a jar/war artifact in Step B; instead I would create an RPM that contains the needed file structure.
With Maven that is quite easy to do with the rpm-maven-plugin.
For the fetch-and-deploy steps, Nexus has built-in YUM repository support, so you can use yum to install the latest RPM version.
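A minimal plugin configuration could look like this (a sketch; the version, package group, paths, and war name are placeholders, not taken from the question):

<!-- sketch: package the built war plus its directory layout into an RPM -->
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>rpm-maven-plugin</artifactId>
  <version>2.2.0</version>
  <executions>
    <execution>
      <goals>
        <goal>attached-rpm</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <group>Applications/Web</group>
    <mappings>
      <mapping>
        <directory>/opt/myapp/webapps</directory>
        <sources>
          <source>
            <location>target/myapp.war</location>
          </source>
        </sources>
      </mapping>
    </mappings>
  </configuration>
</plugin>

The attached-rpm goal builds the RPM as a secondary artifact during package, so mvn deploy pushes it to the repository alongside the war.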
So I've always found it better to separate the builds and the deploys.
The schedules for those can then be managed independently.
Assuming you are using Linux on the servers, you could use the SSH plugins in Jenkins to download the artifact from Archiva:
wget http://server/repository/internal/group/artifact/version/artifact-version.jar
As for the deploy, you could SFTP the files over to the deploy server, also using the Jenkins SFTP plugin.
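Putting the pieces together, a deploy script run over SSH could look roughly like this (a sketch; the server, group/artifact path, and Tomcat directory are placeholders, and it assumes the standard maven-metadata.xml that Archiva publishes per artifact):

# resolve the latest released version from the repository metadata
BASE=http://server/repository/internal/com/example/myapp
VERSION=$(wget -qO- "$BASE/maven-metadata.xml" | sed -n 's:.*<latest>\(.*\)</latest>.*:\1:p')
# fetch the war and unzip it into the web server directory (no Tomcat admin module needed)
wget "$BASE/$VERSION/myapp-$VERSION.war" -O /tmp/myapp.war
rm -rf /opt/tomcat/webapps/myapp
unzip -q /tmp/myapp.war -d /opt/tomcat/webapps/myapp

This also answers the "latest war" part of the question: the version is read from the metadata at deploy time rather than hard-coded in a pom.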

How to upload my local repository to Archiva

I am setting up an Apache Archiva instance to serve as our development team's local repository. I'd like to initially seed it with the artifacts in my local .m2 repository. However, as far as I can tell, the deploy plugin and the repository plugin work only with individual projects. I have also configured my local settings.xml file to deploy artifacts to Archiva when built with Maven, as shown in the Archiva documentation. Also, I'm aware that it's possible to upload artifacts via Archiva's web UI form, but that would still require me to upload jars individually. Is there a way to automate this or do some sort of mass upload?
The deploy plugin can do it for you (use the deploy-file goal).
See http://maven.apache.org/plugins/maven-deploy-plugin/file-deployment.html
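For a mass upload you could script that goal over the local repository (a sketch; the repository id and URL are placeholders, and it assumes each jar has a sibling .pom, from which deploy-file derives the coordinates):

# walk the local repo and deploy each jar together with its pom
find ~/.m2/repository -name '*.jar' ! -name '*-sources.jar' ! -name '*-javadoc.jar' |
while read -r jar; do
  pom="${jar%.jar}.pom"
  [ -f "$pom" ] || continue
  mvn deploy:deploy-file \
    -Dfile="$jar" -DpomFile="$pom" \
    -DrepositoryId=archiva.internal \
    -Durl=http://archiva.example.com/repository/internal/
done

The repositoryId must match a server entry with credentials in your settings.xml.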

Jenkins CI server and Nexus Server on the same Box

I am in a situation where I have one build server box which is to carry out all continuous integration and manage our Maven repository. The box works as follows:
There is one Maven repository, hosted through an Apache server as a URL for developers to use
All Jenkins jobs (including release jobs) run mvn install, so artifacts are kept in this one repository
I would like to get rid of the Apache server and run Nexus on this same box to manage and host repositories. However, I have the following questions/ideas:
With Nexus and Jenkins on the same box, will I have to manage two repositories: one where Maven installs an artifact to a local repository, and one where Maven deploys an artifact to Nexus?
Would it be possible to have Nexus manage the "mvn install" repository as well?
How can I make sure we don't run out of disk space on the server very quickly?
Thanks
Added as a response to the comments: Thank you both. I am thinking I will just set the Jenkins jobs and release plugin goals to mvn package deploy:deploy in order to skip the install phase; that way, artifacts go directly from the target directory to Nexus. However, I guess the Jenkins job will still require a local repository from which to use dependencies, which get copied from Nexus into the local Maven repository during the build; I am not sure this can be avoided, though.
mvn install installs into the local repository.
mvn deploy deploys to the remote repository.
These semantics are defined in the Maven lifecycle and map to different plugins with different implementations.
You don't have to manage the local repository. In fact, for some if not most jobs you might even want to define it local to the job (with the 'Use private Maven repository' option) instead of to the user running the job, especially since you plan to use Nexus as the repository.
You will have to change your jobs to use mvn deploy instead.
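Concretely, the job's Maven invocation then amounts to something like this (a sketch; the job-scoped local repository path is an assumption about what the 'Use private Maven repository' option sets via maven.repo.local):

# each build resolves dependencies from Nexus into a job-private throwaway repository
mvn -Dmaven.repo.local="$WORKSPACE/.repository" clean deploy

Dependencies are fetched from Nexus per build, and the build's artifacts are published back to Nexus with deploy, so nothing accumulates in a shared user-level .m2.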
How can I make sure we don't run out of disk space on the server very quickly?
Configure Jenkins and Nexus accordingly: discard old builds and disable automatic artifact archiving (both settings can be found in the Jenkins job configuration), and delete old artifacts from Nexus automatically using scheduled tasks.
There is no need to install the artifacts into the local Maven repository when using Jenkins/Nexus on a dedicated server.
