I'm exploring Jenkins' stage functionality and I want to devise a fast and lean setup.
Basically, Jenkins promotes the use of stages to partition the build process and to provide nice visual feedback about the progress of the build.
So the Jenkinsfile goes something like
stage("Build")
bat("mvn compile")
stage("Test")
bat("mvn test")
stage("Deploy")
bat("mvn deploy")
This works well, but it feels wrong, because the Test and Deploy stages each re-run work from earlier Maven lifecycle phases.
As a result, this setup builds three times (although compilation is skipped when nothing has changed) and runs the tests twice (once in the Test stage and again during Deploy).
When I google around I can find various switches; one of them (-DskipTests) works for skipping unit tests, but the compilation and dependency-resolution steps happen regardless of what I do.
Do I need to choose between speed and stages in this case or can I have both?
I mean:
stage("Resolve dependencies, build, test and deploy")
bat("mvn deploy")
is by far the fastest approach, but it doesn't produce a nice progress table in Jenkins.
To get Gradle-style incremental builds across Maven phases, you can use the takari-lifecycle Maven plugin.
Once the plugin is applied you get all the benefits. In your example, the Test stage (which performs mvn test) will avoid recompiling because compilation already happened in the previous stage, and the Deploy stage will avoid compiling both the main and the test sources. The tests will still be executed again in the Deploy stage, though, so I suggest adding -DskipTests there.
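For reference, applying the plugin looks roughly like the sketch below in the POM; the version shown is illustrative, so check the takari-lifecycle documentation for the current release:

<build>
  <plugins>
    <plugin>
      <groupId>io.takari.maven.plugins</groupId>
      <artifactId>takari-lifecycle-plugin</artifactId>
      <!-- illustrative version; check the takari docs for the current release -->
      <version>1.13.9</version>
      <!-- extensions=true lets the plugin replace the default lifecycle with its incremental one -->
      <extensions>true</extensions>
    </plugin>
  </plugins>
</build>

With that in place, the Deploy stage would become bat("mvn deploy -DskipTests") so only the deployment work remains.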
Related
We have a large project with several thousand tests in the test suite, and a full test-suite run takes a very long time.
I am looking for a tool that I can integrate into the Maven build that runs only those tests that might be affected by a change, based on per-test code coverage.
I was googling that and found a few similar things but not a perfect fit:
Ekstazi (http://www.ekstazi.org/) looks like exactly that, but it does not work out of the box with TestNG (which our test suite uses), and it is not open source
Infinitest (https://infinitest.github.io/) seems to focus mainly on IDE integration - is it possible to run the tests only on demand (something like mvn infinitest)?
PIT (http://pitest.org/) is not exactly what I am looking for, but it also needs to analyze per-test coverage
It would also be very useful to remember the test coverage as of the last git commit and run the tests against the latest code changes.
Further suggestions and comments on those above are welcome.
As far as I can see, Infinitest doesn't provide a corresponding Maven plugin, so this isn't possible with it as-is. You may consider creating one, though, making an invaluable contribution to the world.
As far as I can see, it provides a pretty solid API, so writing a plugin shouldn't be a big problem. You may want to take a look at the InfinitestCore interface first. If you're using a CI environment, you can feed Infinitest the file list directly from git diff --name-only HEAD~1, which produces the list of files changed in the latest commit (useful if, for example, you run a build per commit).
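For example, a CI step could capture that list along these lines (a sketch; the '*.java' filter and output file name are just examples to adjust to your layout):

# list source files touched by the latest commit
git diff --name-only HEAD~1 -- '*.java' > changed-files.txt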
UPD: It seems there's a workaround involving the exec-maven-plugin to run Infinitest explicitly in the Maven build: you can run 'mvn exec:exec' from the command line or from m2eclipse's Maven Build launcher to run Infinitest against your project. I'd advise specifying the explicit build phase it should run in, using the executions element in the POM:
executions: It is important to keep in mind that a plugin may have multiple goals. Each goal may have a separate configuration, possibly even binding a plugin's goal to a different phase altogether. executions configure the execution of a plugin's goals.
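A rough sketch of such a binding follows. Only the exec-maven-plugin mechanics here are standard; the org.infinitest.RunnerMain class and the classpath argument are hypothetical placeholders, since Infinitest doesn't document a command-line entry point, so you'd have to find or write one:

<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>run-infinitest</id>
      <!-- bind the exec goal to the test phase so it runs on every mvn test -->
      <phase>test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
      <configuration>
        <executable>java</executable>
        <arguments>
          <argument>-cp</argument>
          <!-- hypothetical classpath; Infinitest would need the compiled classes plus its own jars -->
          <argument>${project.build.testOutputDirectory}</argument>
          <!-- hypothetical entry point; replace with the actual Infinitest runner -->
          <argument>org.infinitest.RunnerMain</argument>
        </arguments>
      </configuration>
    </execution>
  </executions>
</plugin>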
We're using Jenkins and attempting to make our project's CI build as modular as possible, i.e. independent jobs for checkout/build/test/analysis/deploy which can then be chained together as and when needed.
The problem is that I can't figure out how to get Sonar to only run tests or to only run analysis. Regarding the former, I'm completely lost; for the latter, I understand I can set sonar.dynamicAnalysis = reuseReport. But our tests are in MSTest format and we use Gallio to run them, and since Sonar only seems to support Surefire/Cobertura/Clover report files - not Gallio/MSTest - I'm not sure how to get Sonar to read the test output.
So how do I get Sonar to split its test and analysis phases?
IMO, you could let Sonar handle tests and analysis at the same time - this will make your configuration far simpler (using "reuseReport" can lead to lots of trouble if assemblies have been moved between the build/test step and the Sonar analysis).
So basically, what I'm saying is that in your "analysis" job, Sonar would do test + static analysis. That's what we do at SonarSource, we have:
a continuous integration process that does "checkout/build/test/deploy" on every commit
a continuous inspection process that does "checkout/build/sonar" every night
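In practice the nightly inspection job can be as simple as one Maven invocation (a sketch; sonar:sonar picks up its settings from the POM or from -Dsonar.* properties):

# nightly job: build, run the tests and push the analysis to Sonar in one go
mvn clean install sonar:sonar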
We have a huge project with many submodules. A full build currently takes over 30 minutes.
I wonder how this time is distributed across the different plugins/goals, e.g. tests, static analysis (FindBugs, PMD, Checkstyle, etc.)
Would it be possible to time the build to see where (in both dimensions: modules and goals) most time is spent?
The maven-buildtime-extension is a Maven build extension that can be used to see the times of each goal:
https://github.com/timgifford/maven-buildtime-extension
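Registering it should follow the standard Maven build-extension pattern, roughly as sketched below; note that the coordinates and version here are placeholders, so take the real ones from the project's README:

<build>
  <extensions>
    <extension>
      <!-- placeholder coordinates: copy the real groupId/artifactId/version from the README -->
      <groupId>com.github.timgifford</groupId>
      <artifactId>maven-buildtime-extension</artifactId>
      <version>...</version>
    </extension>
  </extensions>
</build>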
If you run the build in a CI server like TeamCity or Jenkins (formerly Hudson), it will give you timestamps for every step in the build process and you should be able to use these values to determine which goals/projects are taking the most time.
I don't think there is any way built in to maven to do this. In fact, in the related question artbristol posted, there is a link to a Maven feature request for this functionality. Unfortunately, this issue is unresolved and I don't know if it will ever be added.
The other potential solution is to write your own plugin which would provide this build metadata for you.
I don't think there is a way to determine the timing of particular goals directly. What you can do is run particular goals separately to see how long they take. So instead of doing a "mvn install", which runs all of your tests, Checkstyle, etc., just do "mvn checkstyle:checkstyle" to see how long that takes for a particular module.
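For example (a sketch using the Unix time command; run it from within the module of interest):

# time a single goal for one module
time mvn checkstyle:checkstyle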
Having everything done every time is nice when it's done by an automated server (Continuum/Jenkins/Hudson), but when you are building locally, sometimes it's better to be able to just compile. One thing you can do is have the static analysis goals run ONLY when you pass in a certain parameter or activate a profile (see the sketch below). Another option is to have them run only when maven.test.skip=false.
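A sketch of the profile approach, using Checkstyle as the example analysis plugin (the profile id is arbitrary):

<profiles>
  <profile>
    <id>static-analysis</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-checkstyle-plugin</artifactId>
          <executions>
            <execution>
              <!-- run checkstyle:check during verify, but only when this profile is active -->
              <phase>verify</phase>
              <goals>
                <goal>check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>
      </plugins>
    </build>
  </profile>
</profiles>

Developers then build with a plain mvn install, while the CI server activates the analysis with mvn install -Pstatic-analysis.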
If you are using a continuous build, try having the static analysis only done every 4 hours, or daily.
We have a lot of tests. I can break these up so that they run on separate agents after an initial compile build happens, but is there a way I can recombine the results? Having 8 build configurations that all need to be green makes it hard to see whether you've got one ubergreen build.
Is there a way in TeamCity to recombine / join builds once we've split them out? TW-9990 might help - allowing ANDs in the dependencies.
We found the answer, which certainly works from TeamCity 5:
One compile build.
N test-only builds that take compile.zip!** and copy it to where the compile output would normally be (via a template).
A consolidated finish build, configured with:
  Finish Build Trigger: Wait for a successful build in: ...
  Snapshot Dependencies: "Do not run new build if there is a suitable one" and "Only use successful builds from suitable ones".
This all seems to work nicely and the whole shebang is easily copied for branches etc. Am very happy - this has worked well for us for many months now.
No idea how to do that natively. Here are my first thoughts on how I would try to tackle such a thing, though:
Saving test results to files
Publishing the test result files as build artifacts
Creating a 'Merge build'
Adding artifact dependency onto the individual test projects
Writing a custom 'build' script using something like (N)Ant. This would parse the individual test results and publish the results as per the TC KB
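The publishing step could use TeamCity's service messages; for example, a sketch that imports a saved report pulled in via the artifact dependency (the path is an assumption, and the type should match whatever format your test runner writes):

echo "##teamcity[importData type='junit' path='test-results/agent1-results.xml']"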
Good luck!
Thinking outside the box, you could have an overall build which doesn't really do anything (or use one of your test build configs as your 'master'), with snapshot dependencies on each of your split test builds. That way, if any of them fails, the 'master' will fail because one of the dependent builds failed.
TW-9990 looks to be concerned with build triggering rather than dependencies.
I think I'm missing a valuable piece of understanding with TeamCity 5.0. Why is there a separate build runner for FxCop? I prefer that my build server run everything, at once (compile, run unit tests, FxCop, etc). The problem is, I don't see how to add more than a single Build Runner for a specific project, so it seems I have to add a second project to TeamCity with a dependency on another project that uses the sln2008 build runner, or I could simply go the long route and build everything out in MSBuild. Am I missing something that should be obvious? Is it possible to configure the sln2008 Build Runner to include FxCop code analysis?
I think most users want their builds with tests to be as fast as possible. Other things like coverage, code analysis and metrics most likely should not run as often; it is enough to run them once per day, because their value is in statistics gathered over time.
As for the multiple-build-runners-per-build-configuration feature - it is one of the most voted in our tracker: http://youtrack.jetbrains.net/issue/TW-3660?query=multiple+build+runners, and it has a very good chance of being implemented in the next versions.