In the infrastructure I've inherited, the team uses Jenkins and Maven to build JARs and config files.
So Jenkins checks out some code, then Maven builds something. A Jenkins post-build task then copies files from Maven's target directory to a "build share" (eek), for example:
Source: **/target/*.jar
Destination: releases/env_name/project/lib
Source: **/target/classes/some-service.xml
Destination: releases/env_name/project/conf
I dare say this is not ideal.
In the new provisioning project, the jars from the build are deployed into Nexus. In turn, I use the Maven RPM plugin to build RPMs, using GAVs to identify which jars are packaged. I haven't yet captured the config files.
Could I publish the config files to Nexus? Yes, they're small, but they're still artifacts. They would also have a Maven snapshot or release coordinate, which could fit in with my releases. I have not tried it, but I imagine I can add a section to my existing RPM build POM so that the desired plain-text files are uploaded to Nexus when I run mvn deploy.
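Something like the following one-off command is what I have in mind; the coordinates and Nexus URL here are hypothetical:
# publish a plain config file to Nexus under its own GAV, with a "config" classifier
mvn deploy:deploy-file \
  -DgroupId=com.example.project \
  -DartifactId=some-service \
  -Dversion=1.0.0-SNAPSHOT \
  -Dclassifier=config \
  -Dpackaging=xml \
  -Dfile=target/classes/some-service.xml \
  -DrepositoryId=nexus-snapshots \
  -Durl=https://nexus.example.com/repository/maven-snapshots/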
Some of the files require modification post-deployment to suit an environment, which adds a further headache.
The poms that define the RPMs will also grow significantly in length. Not a problem per se, if I can ascertain this is the right way.
The question: How have others managed jars and config files deployment in a multi-environment set up?
Since you are asking how others have done it, I will describe how I do it, which may or may not be appropriate in your situation.
The binaries (.jar or whatnot) are built by Jenkins from the source code in SVN.
The SVN source code has configuration files, but most of the time, those are configured for DEV or Local environments. I let them have it there so that developers' local builds can be simple, but really as far as I am concerned they don't exist.
SVN also has a separate location (not accessible to DEV, only to Release Managers and Systems teams), and this location contains configuration files for all environments (including the DEV environment, when deployed from the CI server).
Since we have a lot of environment-specific settings, it makes sense to keep a separate copy of the configuration for each environment.
If a new configuration parameter is added, it is merged from DEV config files all the way to PROD, just like other source code changes are merged, with the exception that only RM or Systems team can do it.
The deployment process (in our case a shell script, but it could be anything) takes care of pulling the correct configuration files from SVN and pushing them, along with the binaries, to the remote server. Once the server is deployed, the "in-jar" configuration is wholly replaced by the configuration files that came from SVN.
I organize the configuration files by ENV, then by TYPE, for example: QA/WEB, QA/API and then PROD/WEB, PROD/API. That way I don't have to maintain configuration files for each remote machine.
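A rough sketch of that pull-and-push step (the repository URL, variables, and target paths here are illustrative, not our exact script):
# export this environment's config from SVN, then ship it with the binaries
ENV=QA; TYPE=WEB
svn export --force "https://svn.example.com/config/${ENV}/${TYPE}" ./conf
rsync -av ./conf/ "deploy@${TARGET_HOST}:/opt/app/conf/"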
However, there are times when configuration is specific to the remote machine, such as the IP address of the machine. In these cases, the SVN configuration file contains a token, like [local_ip]. When the deployment script pushes this file to the remote server, it knows to replace the token with the real IP of the remote machine.
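That replacement step can be as simple as the following (the file name and host lookup are illustrative):
# resolve the target machine's IP, then substitute the token in place
TARGET_IP=$(getent hosts "$TARGET_HOST" | awk '{ print $1 }')
sed -i "s/\[local_ip\]/${TARGET_IP}/g" ./conf/app.properties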
What we use:
We use Mercurial and Bitbucket for repositories, AppVeyor and Kudu for continuous integration and deployment, and Visual Studio 2015 as our IDE.
What we have:
We have different web projects that share some common projects. Each web project has its own solution, and every solution has its own repository.
If there is a change on the develop branch, AppVeyor builds the repository, tests it, and deploys it.
If there is a change on the default branch, Kudu builds the repository and deploys it.
What we want:
We want to merge all of these projects into one solution, but I can't figure out how to achieve continuous integration and deployment that way.
If I change something in webproject1, I just want to build and deploy webproject1. The other web projects in the solution should be neither built nor deployed.
Perhaps a single repository will help you, using relative paths to include the shared libraries from your different applications.
Each application can still have its own solution file, and your CI setup also stays as it is. What changes is that the shared projects you have across all applications will be referenced with relative paths. E.g.:
Repository root\Core\Component1\Component1.csproj
Repository root\Core\Component2\Component2.csproj
Repository root\Applications\App1\App1.sln
Repository root\Applications\App1\Domain\Domain.csproj
Repository root\Applications\App1\Web\Web.csproj
Repository root\Applications\App2\App2.sln
Repository root\Applications\App2\Domain\Domain.csproj
Repository root\Applications\App2\Web\Web.csproj
Now your different applications can include the Core components they need by adding the existing projects to the solution using relative paths.
Your continuous integration system will have VCS triggers watching the app and dependencies so only relevant changes fire a build.
So if an App1 developer makes a change to Component1, and Component1 is also used by App2, the build server will trigger builds of both App1 and App2, signaling any breaking changes. However, if App2 doesn't depend on Component1, then only App1 will build.
This is achieved by configuring the build triggers for your applications.
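If your CI tool only offers generic VCS triggers, a first build step can bail out when nothing relevant changed. A minimal sketch for App1, assuming Mercurial and the layout above (paths illustrative):
# list files touched by the commit being built; skip unless App1 or Core changed
CHANGED=$(hg status --change . --no-status)
echo "$CHANGED" | grep -qE '^(Applications/App1/|Core/)' || { echo "no relevant changes"; exit 0; }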
One benefit of this strategy versus having a single .sln is that you won't have to build everything each time you build the solution (nor configure which projects to build each time you work on a different app).
Also note that you can achieve this with multiple repositories, but that means you'd need to check them out at the correct locations so your relative paths work. It's also quite obscure: if you check out App1 alone and try to build it, it simply won't work, and you'll have to figure out which other repos to check out, etc.
You are using Mercurial, but FYI, one way this would be handled with Git is with submodules.
Wondering how people manage their project artefacts through an environment lifecycle of, say, DEV - AQA - CQA - RELEASE, and whether there are some best practices to follow.
I use a Jenkins build server to build my projects (code checkout then maven build). My artefacts all have version 1.0.0-SNAPSHOT and are published to a local .m2 repo on the build server. There are also Jenkins jobs that rebuild the DEV system (on the same server) using those artefacts. The project build is automated whenever someone checks in code. The DEV build is automated on a nightly basis.
At some point, my lead developer determines that our project is fit to go to AQA (the first level of testing environment on a different server).
For this I need to mark the artefacts as version 1.0.0-1 and publish to a remote AQA repository (it's actually a Nexus repo).
The Maven deploy plugin sounds like the right approach, but how do I change the version number to be effectively 1.0.0-$release (where $release is just an incrementing number starting from 1)? Would Maven/Nexus be able to manage the value of $release, or would I need a simple properties file in my project to store/update the last used $release?
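Something along these lines is what I picture, assuming Jenkins supplies the number via BUILD_NUMBER and the POM's distributionManagement points at the AQA repository:
# rewrite the POM version, then publish to the repository configured in the POM
mvn versions:set -DnewVersion=1.0.0-${BUILD_NUMBER}
mvn clean deploy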
Furthermore, someone tests in AQA and determines it's fit to move on to CQA (the second testing env). This is 'promote to CQA'. So my requirement is to copy the artefact from the AQA Nexus repo and publish it to the CQA Nexus repo.
Likewise, after CQA, there'd be a 'promote to RELEASE' job too.
I think the version value remains unchanged during the 'promote' phases. I'd expect the AQA repo to see all versions 1-50, but CQA only 25 and 50, then RELEASE only 50, for example.
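I imagine the promote job would be roughly this (the repository URLs and coordinates are made up):
# fetch the artefact from the AQA repo and re-publish it, unchanged, to CQA
curl -fO https://nexus.example.com/repository/aqa/com/example/app/1.0.0-50/app-1.0.0-50.jar
mvn deploy:deploy-file -Dfile=app-1.0.0-50.jar \
  -DgroupId=com.example -DartifactId=app -Dversion=1.0.0-50 -Dpackaging=jar \
  -DrepositoryId=cqa -Durl=https://nexus.example.com/repository/cqa/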
I can find loads of info about Maven plugins/goals/phases, but very little prescriptive guidance on how or where to use them outside of the immediate development environment.
Any suggestions gratefully received.
Staging/promoting is out of scope for Maven. Once an artifact is deployed/uploaded to a remote repository, that system is responsible for the rest of the release cycle. If you use Nexus, read this chapter about staging: http://books.sonatype.com/nexus-book/reference/staging.html
Build numbers are just that: build numbers. They are not promotion/staging numbers.
You should come up with another means of tracking your promotions, because otherwise someone might get confused by having to "know" that build 10.1.4-2 is the same as 10.1.4-6. Certainly, all of the Maven-related software will see those two builds as different builds.
In addition, if a person grabs the wrong copy of the build, the way you are managing staging within your build number will increase confusion. If you don't remove all of the 10.1.4-2 builds, someone might get a copy of one without realizing that the build has been promoted to 10.1.4-6. This means that for the "last" staging number to be the most likely one to be grabbed, you must do two things (which are impossible in combination):
Remove all the old staging numbers, updating them to the new ones.
Ensure that no copy of an old staging number escaped the update.
Since people can generally copy files without being tracked, said files might not be reachable at the time of the update, and the update cannot reach all the files simultaneously, such a system is doomed to fail.
Instead, I recommend (if you must track by file), placing the same file in different "staging directories". This defines release gateways by whether the file exists in a certain directory, and makes it clear that it is the same file that is going through the entire process. In addition, it becomes easy to have various stages of verification poll their respective directories (and you can write Jenkins tasks to promote from one directory to another, if you really wish).
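A promotion then reduces to a copy between gateway directories. A minimal sketch with hypothetical paths (a Jenkins job would wrap this):
# the file's name and content never change; only its directory marks the stage
cp /releases/aqa/app-10.1.4.jar /releases/cqa/app-10.1.4.jar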
I face a problem that I guess is common.
I have a project which is stored in GitHub.
I need it to run in production, in testing, and for different developers.
The project uses Maven profiles to set some parameters.
The project also has a Spring profile: currently only DEV, which initializes a DB.
The project uses external software with specific configuration files.
I need to have the project in production while still being able to serve development versions.
The question: can developers use git ignore locally to distribute the code while still storing the files in GitHub?
Or would you have a different solution?
The actual question is:
Can I use git ignore locally for some configuration files and still have the files in the main repository, updated only by the production users?
One way is to store:
one different configuration file per environment
one template file
one script able to detect the current environment the git repo is cloned in, and generate the actual configuration file (which isn't versioned) used for that local environment.
That generation can be automated on checkout, with a smudge script declared as a content filter driver.
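A minimal sketch of that content-filter wiring (the filter name and script are hypothetical; a smudge script receives the versioned template on stdin and writes the environment-specific content to stdout):
# in .gitattributes:  config/app.conf.template  filter=envconfig
git config filter.envconfig.smudge ./generate-config.sh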
The way we have solved this is with a configuration script that pulls the server-appropriate credential file for local, staging, or production from a separate repository.
After each server pulls the appropriate file, we rename it for consistency across the different environments, and then our settings file, which is tracked by git, includes the credentials file.
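Roughly like this, assuming the hosting server allows git archive --remote, and with made-up repo and file names:
# pull only this environment's credentials file from the separate repo, then rename it
ENV=production   # or local / staging
git archive --remote=git@example.com:ops/credentials.git HEAD "${ENV}.credentials" | tar -x
mv "${ENV}.credentials" credentials.inc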
We currently use SilkCentral Test Manager (SCTM) integrated with our source control system via SCTM source control profiles. However, we would like to explore integrating with build artifacts checked into Maven's remote Nexus repository instead.
The idea being that the application-under-test is built and checked into Nexus along with the automated tests only if the build and the tests pass. Therefore, when QA is ready to run tests from SCTM (manual or automated), there is a well-defined combination of application build artifacts and test build artifacts in Nexus that present a more reliable target for SCTM as compared to getting the latest code from the source control system.
All of this is most relevant during active development, when the code and the tests are changing daily and the builds are snapshot builds rather than formal builds with tags in the source control system that SCTM could use.
SCTM apparently has support for both universal naming convention (UNC) and Apache virtual file system (VFS), and either of these could potentially be used to point the SCTM source control profiles at Nexus artifacts rather than raw source code. However, I wanted to check with the community to see if there's a simpler approach. (For example, I noted the existence of a Hudson SCTM plugin.) Also, I welcome alternative thoughts and ideas.
There are probably many ways to solve this; I'd try the following:
Manage the build/first-test/publishing steps in Hudson/Jenkins.
For example, model this with dependent jobs, so that the publish job is only triggered if the tests pass. There are also more advanced gatekeeper plugins available (for example, the Downstream-Ext plugin) which might solve this even more comfortably.
Once the publishing is done, use the Hudson/Jenkins Silk Central plugin to trigger the executions on Silk Central. There, instead of using UNC or VFS, I'd rather use a setup script which pulls the artifacts from the repository and prepares everything for the tests. This would allow you to use something Maven/Nexus-aware to pull the correct artifacts from the repository, instead of somehow trying to make them accessible via UNC or VFS.
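Such a setup script could be as small as this, with hypothetical coordinates (dependency:copy resolves straight from the Maven repository):
# pull the application under test and its test artifact into a local workspace
mvn dependency:copy -Dartifact=com.example:app-under-test:1.2.3:war -DoutputDirectory=./workspace
mvn dependency:copy -Dartifact=com.example:app-tests:1.2.3:jar -DoutputDirectory=./workspace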
As a consultant, I have multiple clients that I'm doing work for. Each client utilizes their own internal Maven repository that is also set up to mirror Central and other external repositories. I need to configure the Maven installation on my laptop so that when I'm doing work for one client, it uses their internal repository for everything.
I had thought I would be able to utilize profiles to handle this, but mirror settings cannot be changed per-profile.
Does anybody have suggestions on how to approach this Maven configuration?
Note: A similar question is here: How do I configure maven to access maven central if nexus server is down?, but that question deals with switching between Central coming from a mirror or not. What I need is for Central (and others) to come from one mirror or a different one based on some property/setting/variable etc.
Create two shell aliases:
alias build_at_home="mvn -s $HOME/.m2/home_settings.xml"
alias build_at_work="mvn -s $HOME/.m2/work_settings.xml"
The "-s" option is handy for explicitly stating which environment settings file to use.
We use this mechanism on our shared build server to ensure each project build is isolated.
Obviously, on Windows you could create a set of batch files instead.
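For instance, a build_at_work.bat along these lines (the path is hypothetical) mirrors the alias:
:: build_at_work.bat - forward all arguments to mvn with the work settings file
mvn -s %USERPROFILE%\.m2\work_settings.xml %*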
I have a very similar requirement in my project too. I created two separate settings files, named them settings_one.xml and settings_two.xml, and saved them in the .m2 directory. Depending on which file I need, I have a small script (a .bat file on Windows) which overwrites the existing settings.xml with one of the two settings files:
del C:\Users\<username>\.m2\settings.xml
copy C:\Users\<username>\.m2\settings_one.xml C:\Users\<username>\.m2\settings.xml
The simplest solution I can suggest is to install Git, commit your .m2/settings.xml (of course, ignore the local repository itself via .gitignore), and make appropriate branches for the customers. Switching the settings is then done by:
git checkout CUSTOMER_BRANCH
and, furthermore, any change is tracked by an SCM.