Our Jenkins pipeline has a stage that builds with the following command:
mvn clean install -DdestFolder=<destination-folder-path>
The above command takes 1 hour when run through the pipeline. If the same command is run locally on the same machine, it takes only 15 minutes.
When it runs through the Jenkins pipeline, I can see there is enough memory and enough disk space. I also explicitly assigned memory to Maven using MAVEN_OPTS, but the result is the same.
Could you please help me understand what could be wrong here?
I'm using a maven-help-plugin one-liner to grab the project version in a bash script:
currentVersion=`mvn org.apache.maven.plugins:maven-help-plugin:3.2.0:evaluate -Dexpression=project.version -q -DforceStdout`
It runs in a few seconds locally but takes over 2 minutes on our build agents. I ran it with -X and didn't see anything obviously alarming, though it did run a bit more quickly. I also tried running it with -o in case dependency downloading or some network hiccup was slowing it down, but then it just fails to find the plugin and doesn't run at all.
Is this just a known slow command, or could the command run faster if I configured it differently or adjusted my POMs in some way?
But with it taking 2+ minutes, I may need to drop the Maven approach in favor of xmlstarlet or something that treats the POM as just XML.
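If it comes to that, a minimal xmlstarlet sketch could look like the following (assuming the standard Maven POM namespace and that the version is declared directly in this pom.xml rather than inherited from a parent):
# hedged sketch: read <version> straight from pom.xml, bypassing Maven entirely
# note: this fails if the version is only inherited from the parent POM
currentVersion=$(xmlstarlet sel -N pom="http://maven.apache.org/POM/4.0.0" -t -v "/pom:project/pom:version" pom.xml)
echo "$currentVersion"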
I've been trying to fiddle with SonarQube and now I'm learning about incremental mode. In my understanding, it should analyze only the changed files.
So my first test is just to run SonarQube twice on our project without any change. I run SonarQube (5.1.2) installed locally on a Windows 7 64-bit machine with an SSD drive and an i7 CPU. We use Java 1.7 and Maven 3.3.3. Our project is fairly big (~570 Maven modules), most of them Java code. After I run the JaCoCo prepare-agent goal along with my unit tests, I understand it's time to run sonar:sonar and create a report.
So what I try is:
mvn sonar:sonar -Dsonar.analysis.mode=incremental -Dsonar.host.url=http://localhost:9000 -Dsonar.java.coveragePlugin=jacoco
This runs for 20 minutes. OK, now I run the same command again without making any change, and it still runs for the same 20 minutes.
So my question is: can someone explain how to use incremental mode correctly? I have a hard time understanding what I'm doing wrong; in my understanding, the second run should be much faster, otherwise I don't see any advantage over preview mode here.
Thanks, Mark
The incremental mode will analyze only the files changed since the latest "regular" analysis on the server. So in your case you should first run a normal (now called "publish") analysis:
mvn sonar:sonar -Dsonar.java.coveragePlugin=jacoco
Then you can use the incremental mode:
mvn sonar:sonar -Dsonar.analysis.mode=incremental -Dsonar.java.coveragePlugin=jacoco
I have a TeamCity build that sometimes fails too early.
What I mean by that is that the first few steps are for "provisioning" (setting up the testing environment) and the testing of my code itself comes later.
Sometimes (for whatever reason) the build fails during one of the "provisioning" steps. This is not a problem since running the build again usually works fine.
But the "changes" are not passed along to the next run of the build.
I am using this command as part of my build to output the "changes" that came from my codebase:
copy "%system.teamcity.build.changedFiles.file%" changelog.txt
So I need a way to tell TeamCity, "Hey, ignore the last run; that failure doesn't count because it didn't test my code. I want the next run to contain the same 'changes' in system.teamcity.build.changedFiles.file."
How can I do that?
Have you tried build chains with dependencies? They can be set up to only execute if the build (including tests) is successful: http://blog.jetbrains.com/teamcity/2012/04/teamcity-build-dependencies-2/
I'm investigating a Maven/Jenkins/Artifactory set up as a possible solution for a CI/Release process.
I have a job in Jenkins - call it MYJOB - that builds and deploys an artifact to Artifactory. Now, I want another job that I can run manually that will copy the artifact of a particular build of MYJOB from Artifactory and put it somewhere (not too bothered where right now, but eventually it'll be another server).
For example, let's say build #123 went green, and now my QA team want to deploy the built artifact to their environment for further testing (the idea being to keep this artifact intact and unchanged throughout the testing process, before marking it as releasable). I want them to be able to enter '123' into Jenkins as a job parameter and then kick off the build.
So, I figure I need a free-style job that uses a script to do this.
My question is: How can I pass the number of a previous MYJOB build to the job, so that it will get the correct artifact from Artifactory?
Alternative methods of achieving my goal are welcomed :)
So I got it working. I put the following code in the Build Step section of a Jenkins Freestyle Build Configuration after defining a parameter called 'REQ_BUILD_NUMBER':
# query the Artifactory property search API for the artifact tagged with the requested MYJOB build number
SOMETHING=$(curl -s "http://MYARTIFACTORYLOCATION/artifactory/api/search/prop?build.number=$REQ_BUILD_NUMBER" | jq --raw-output '.results[0].uri')
echo "$SOMETHING"
# fetch that result's metadata and pull out the direct download URL
SOMETHING_ELSE=$(curl -s "$SOMETHING" | jq --raw-output '.downloadUri')
echo "$SOMETHING_ELSE"
# download the artifact into the Jenkins artifacts directory
wget -N --directory-prefix=/var/lib/jenkins/artifacts/ "$SOMETHING_ELSE"
(NB: I'm barely competent at shell scripting. I'm sure it can be done in a better way)
EDIT: This requires installing 'jq' for the command line.
Create a parameterized build for the second job. The QA team can enter the build number when they start the build. That build number will be available as an environment variable, which can be accessed by the scripts that retrieve the packages from the repository.
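For example, a minimal sketch of a shell build step, assuming the parameter is named REQ_BUILD_NUMBER as in the script above:
# the job parameter is injected into the build step's environment, so the script can read it directly
echo "Fetching the artifact produced by MYJOB build #$REQ_BUILD_NUMBER"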
I'm trying to improve the startup time of Gradle. The experimental --daemon switch doesn't seem to really speed it up. So I'm thinking of using some server process independent of Gradle and making Gradle connect to it. The options I found so far are:
nailgun to invoke java
GroovyServ to invoke a groovy script
Since Gradle is started by a shell script, it takes some tweaking. My question is: has anyone used the above options to start Gradle? Or if you have successfully used another option, what is it?
My guess is that your build is doing something at configuration time that it should be doing at execution time. With m5, gradle build --profile will give you an HTML report showing where the time goes. Another way to see what's going on is gradle build --info or gradle build --debug.
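If it helps to separate the two phases, a rough sketch (assuming your Gradle version supports --dry-run alongside --profile):
# a dry run configures the whole build but skips task actions; if this alone is slow,
# the time is going to configuration rather than execution
gradle build --dry-run
# --profile writes an HTML timing report, typically under build/reports/profile/
gradle build --profile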