I have a Jenkins job that builds a simple Maven project. If all I do is build, it works just fine. The problem arises when I try to do a release, dry run or regular. It consistently fails with the "Cannot prepare the release because you have local modifications" error. I have wiped out the workspace, but the problem persists. Is there any way I can get Maven to tell me which file it thinks has been modified? I would assume that by wiping out the local workspace and immediately running the dry-run release there wouldn't be any opportunity for anything to get modified.
Please note, I do not have access to the Jenkins server or the slave that is running the actual release build, so I can't use any tools there (like SVN) to determine what is supposedly modified.
You can use the Maven SCM plugin to do a diff.
https://maven.apache.org/scm/maven-scm-plugin/diff-mojo.html
Basically, run the plugin's diff goal upstream of the failure and see if anything has changed. I imagine you might be able to see the output in the log, but if you cannot, you might be able to move your "real" Maven pom.xml aside and replace it with one that generates a diff file and, with the help of the Maven build helper plugin, attaches that file as an additional artifact (to a pom target).
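For example, the job's Maven goals could be something along these lines (this assumes the pom already carries the <scm> section that the release plugin needs anyway), and the diff should then show up in the build log:

    # print whatever the SCM provider thinks has changed in the workspace,
    # then run the dry-run release right after it
    mvn scm:diff
    mvn release:prepare -DdryRun=true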
It turned out the solution to my problem was to not use the "Local to the workspace" strategy for my private Maven repository in the Jenkins job configuration. By changing that to the "Local to the executor" strategy the problem went away. I'm still not sure why it was having the problem in the workspace, but this solution resolved it for me, and might work for others.
I use the maven release plugin within a CI/CD pipeline based on GitLab.
Everything works fine, but I try to inject some sensitive data directly into a file using the runner during the pipeline run:
simply echo $var > file
This happens just before the maven plugin runs, and on its own it works fine, but one of the steps that maven release:perform runs is a fresh checkout of the branch, which means that all uncommitted files are skipped. Of course, as they are sensitive, I don't want to keep them directly in the repo, and as a result I end up with an artifact without the necessary data.
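To sketch what the job does (the variable and file names here are made up):

    # write the secret from a CI variable into a file the build expects
    echo "$SENSITIVE_VALUE" > src/main/resources/secret.properties
    # but the release checks the project out again from the repository,
    # so this uncommitted file never makes it into the built artifact
    mvn release:prepare release:perform -B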
Maybe someone knows how to inject/add such a file when running the maven release plugin? It's worth adding that I'm fully aware that keeping sensitive data in the artifact is not secure - it's only a temporary solution.
I would like repeatable results when running maven commands locally, even if somebody else is pushing updates to a snapshot dependency.
To achieve this, I would like to use the updatePolicy of never.
This will allow any dependencies that aren't available locally to be downloaded, while any I have installed locally will be used.
The offline flag won't work in this situation, as there may be dependencies that I haven't installed locally which will need to be downloaded from the remote repo.
I don't want to have to modify the pom, as doing this locally with every checkout would be error prone, and I don't want to commit these changes, as that would have adverse effects on other developers.
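For context, the change I'm trying to avoid maintaining by hand in every checkout would be something like this in the pom's repository section (the repository id and URL here are placeholders):

    <!-- placeholders: repository id and URL -->
    <repository>
      <id>company-snapshots</id>
      <url>https://repo.example.com/snapshots</url>
      <snapshots>
        <enabled>true</enabled>
        <updatePolicy>never</updatePolicy>
      </snapshots>
    </repository>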
Ideally I'd like to specify this from the command line: the opposite of the -U flag.
I've searched the docs, and so far have not found out how to do this.
If you want repeatable builds you can create a Docker image that can run Maven, then load all your project files and run the Maven build.
This will provide a clean environment for your build every time.
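A minimal sketch using the official maven image (the tag is just an example, pick one matching your JDK); because nothing is cached between runs, every build starts from a clean environment:

    docker run --rm \
      -v "$PWD":/usr/src/project \
      -w /usr/src/project \
      maven:3.8-openjdk-11 \
      mvn clean package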
About the changing dependencies: if you work with SNAPSHOT dependencies, you must expect these differing results. That is what SNAPSHOT means: "this is under development".
If you (or your team) control the SNAPSHOT dependency and there is an error in the build, that's a "good" sign: the tests found something to be fixed.
If you (or your team) don't control the SNAPSHOT dependency, you should prefer the last stable release.
I have Jenkins version 2.7.1 running on a Windows 7 machine. It is successfully pulling code from a subversion repository and running tests. I have the test jobs set up for the development branch of each project only.
We periodically make stable releases of the projects in jar files with version numbers. I would like to have Jenkins be the repository manager for those stable releases. These are made by hand - There is no Jenkins job making or testing stable releases. The projects do use Maven.
Each stable build is tagged in the subversion repository, so it could be made again on demand if needed.
I downloaded the Maven repository server hoping to make this fit the purpose. I read the documentation that's provided, but it's pretty terse. As I understand it and have it configured now, this appears to have a couple of issues:
If I go to jenkins-ip/plugin/repository/project, it has made directories there that expose the names of all of my projects, which seems undesirable. (Here jenkins-ip is the IP where I access Jenkins on my local network.)
On the other hand, there's nothing but empty directories under these projects, so they're currently useless.
These projects all correspond to the continuous testing of the development branch. There's no apparent way to get the stable builds into the hierarchy. (It doesn't seem efficient to create a job for each stable release...)
Is there any way to get Jenkins (with this plugin or through another method) to be the repository manager just for the stable builds? I know that I can start a different repository manager like Archiva, but it would be ideal to use Jenkins since it's already running, and it seems to claim this capability now.
To use the Maven repository server plugin you have to build the project on Jenkins.
Then the plugin will expose all archived artifacts as a Maven repo.
Note that you need to use the "Maven project" job type for it to work (freestyle is not supported).
There are several plugins that will help you manage building from multiple tags; however, not all of them work with the "Maven project" type.
You could also try Jenkins pipeline (previously "Workflow") or the Job-DSL plugin.
The simplest solution would be to have a build parameter specify the tag name (then check out e.g. ^/tags/projectname/${tagParam}), but you then have to figure out how to trigger the job.
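Whichever route you pick, conceptually the per-tag build boils down to something like this (the repository URL is a placeholder and tagParam is the string build parameter mentioned above):

    # check out the requested release tag and build it;
    # the archived artifacts are what the repository plugin exposes
    svn checkout "https://svn.example.com/repo/tags/projectname/${tagParam}" .
    mvn clean package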
Well, this is kind of embarrassing. I am in the process of mavenizing our build processes and just don't know how to access the result of a build. I build, let's say, a jar file and mvn deploy it. So it ends up as some blah-0.1.2.jar in our company Maven repository, which is just a WebDAV share. Now how would you pass that on to someone else to use? Just prying it from target/blah-0.1.2.jar can't be the answer. I found several suggestions to use variants of mvn dependency:get, but they were all just close and didn't feel right. There must be a way to use all those nice versions of blah-*.jar that end up in the repository for purposes other than a Maven dependency. Preferably from the command line and maybe even without Maven. Hmm, a WebDAV client doesn't look too bad, except for snapshots. What would you suggest?
Creating a script that makes a dependency:get call is probably going to be closest to your desired outcome. You can specify the destination of your downloaded jar this way.
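As a sketch (the coordinates here are made up, and the exact parameters vary a bit between versions of the dependency plugin):

    # resolve the artifact from the remote repository into the local repo...
    mvn dependency:get -Dartifact=com.example:blah:0.1.2
    # ...then copy it out to the current directory for whoever needs the jar
    mvn dependency:copy -Dartifact=com.example:blah:0.1.2 -DoutputDirectory=.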
If you are looking for an easy way to share builds between people inside or outside of your company, then you can look into setting up some automated build software like Bamboo or something similar. A new build gets triggered any time a commit is made to the section where your project resides in whatever version control system you use. An artifact is then made available for each successful build and is available via Bamboo's web interface. Bamboo can be configured to run with your Maven POMs.
While it can be a bit of a pain to set up, going the automated build route will take a lot of the sting out of sharing your builds in the future.
I am a complete noob at this, so if there is a completely obvious answer, by all means make fun, point and laugh, and then give the answer.
We use Visual Studio 2010 to compile our published website. I have a repository that I use for my source code and one which I publish the compiled code to. I then check out the publish repository on the testing server, and once it tests good I check out the repository on my main server. This is fine and all, but I am using TortoiseSVN and automating the commit. Problem is, I really need to wipe the publish SVN repository, then copy the files, then commit. I just can't get that to happen and still have it recognized as an SVN repository. Suggestions?
First of all, don't put compiled code into your source repository. It's bad form.
Look at Jenkins as a build server. Jenkins can use the msbuild.exe command to build .NET projects using the .sln file your project creates.
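The build step Jenkins runs can then be as simple as something like this (the solution name and configuration here are placeholders):

    msbuild.exe MySolution.sln /p:Configuration=Release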
When you do a commit in Subversion, Jenkins will automatically fire off the build. If you have NUnit tests, Jenkins will run those and give you the results. You can have Jenkins store the compiled files for you in its archive. If someone wants to install a particular build, they can directly download it from Jenkins without having to do a checkout in Subversion first.
Jenkins offers all of these advantages:
It shows you all the changes in your repository and what changed in each commit.
It can run all sorts of tests automatically for you.
You can mark builds that are released using the "Simple Promotion" plugin.
You can tag builds in Subversion directly in Jenkins without needing a command line or working directory.
It can alert the developers if a build fails due to bad code, or if testing fails. These alerts can be done via Email, instant messaging, phone text messages, Twitter, and many other ways. All it takes is the right plugin which Jenkins makes easy to install.
Jenkins can act as a release repository which makes it easy to find the release, what's in the release and why.
Jenkins integrates with web-based repository browsers such as ViewVC and Sventon. This way Jenkins not only shows you which files changed, but also what changed in each file.
Jenkins is easy to use and install. Download it and give it a try.
Unless you have a hard and fast requirement which forces you to use two separate repositories, I'd suggest taking a look at SVN's tagging and branching functionality.
http://tortoisesvn.net/docs/release/TortoiseSVN_en/tsvn-dug-branchtag.html
Having a repository for the published code really doesn't buy you anything. IMO, you would be better off with a bunch of zip files (one per release) with the date and SVN branch reflected in the name. DO have a changelog .txt file in the zip, and also check that into the repo.
Problem is, I really need to wipe the publish SVN repository, then copy the files, then commit.
You don't need to wipe anything in the repo. Just make a commit to the production repo with an export of HEAD from the dev repo (a post-commit hook can take care of the commit message).
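Roughly like this (URLs and paths are placeholders, and note it doesn't handle files deleted in the dev repo):

    # export HEAD of the dev repo on top of a working copy of the production repo
    svn export --force https://svn.example.com/dev/trunk /path/to/prod-wc
    # pick up any new files, then commit the lot
    svn add --force /path/to/prod-wc
    svn commit -m "publish HEAD of dev repo" /path/to/prod-wc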
And yes, tags are the more natural and bulletproof way.