I'm playing with the wonderful FindBugs plugin for Hudson. Ideally, I'd like to have the build fail if FindBugs finds any problems. Is this possible?
Please, don't try and tell me that "0 warnings" is unrealistic with FindBugs. We've been using FindBugs from Ant for a while and we usually do maintain 0 warnings. We achieve this through the use of general exclude filters and specific/targeted annotations.
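For reference, a typical entry in such an exclude filter looks roughly like this (the class name and bug pattern are only illustrative); the filter file is passed to the FindBugs Ant task via its excludeFilter attribute:
<FindBugsFilter>
  <!-- example: suppress an "obligation" warning for a class that closes its resources elsewhere -->
  <Match>
    <Class name="com.example.legacy.LegacyDao"/>
    <Bug pattern="OBL_UNSATISFIED_OBLIGATION"/>
  </Match>
</FindBugsFilter>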
The Hudson way is to mark the build unstable rather than fail it for something like this.
However, if you really do want your build to fail, you should handle this in Ant:
<findbugs ... warningsProperty="findbugsFailure"/>
<fail if="findbugsFailure" message="FindBugs reported warnings."/>
Maybe you've already seen this option, but it can at least set your build to unstable when you have more than X warnings. On your job configuration page, right below the FindBugs results input field where you specify your FindBugs file pattern, there should be an 'Advanced' button. Expanding it gives you an "Unstable Threshold" as well as Health Reporting, which changes Hudson's weather indicator for the job based on the number of warnings.
I wouldn't want my build to fail, but unstable seems reasonable if you are maintaining 0 warnings (and presumably 0 test failures).
As Tom noted, the provided way to do this is with the warningsProperty of the FindBugs Ant task.
However, we didn't like the coarse control that gave us over build failure. So we wrote a custom Ant task that parses the XML output of FindBugs. It will set one Ant property if any high priority warnings are found, a different property if any correctness warnings are found, a third property if any security warnings are found, etc. This lets us fail the build for a targeted subset of FindBugs warnings, while still generating an HTML report that covers a wider range of issues. This is particularly useful when adding FindBugs analysis to an existing code base.
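A rough sketch of how such a task might be wired up; everything here (task name, class, property names) is hypothetical and stands in for our in-house implementation, which parses the FindBugs XML report and sets one property per warning category it finds:
<!-- hypothetical custom task: reads the FindBugs XML report and sets one property per category found -->
<taskdef name="classify-findbugs"
         classname="com.example.build.ClassifyFindbugsTask"
         classpath="build-tools.jar"/>

<classify-findbugs reportFile="build/findbugs.xml"
                   highPriorityProperty="findbugs.high"
                   correctnessProperty="findbugs.correctness"
                   securityProperty="findbugs.security"/>

<!-- fail only on the categories we care about; the HTML report still covers everything -->
<fail if="findbugs.high" message="FindBugs found high priority warnings."/>
<fail if="findbugs.correctness" message="FindBugs found correctness warnings."/>
<fail if="findbugs.security" message="FindBugs found security warnings."/>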
You cannot rely on FindBugs too heavily; it is just an expert system that tells you something may be wrong with your program at runtime. Personally, I have seen a lot of warnings generated by FindBugs simply because it was not able to figure out that the code was in fact correct.
One example: when you open a stream or JDBC connection in one method and close it in another, FindBugs expects to see the close() call in the same method, which is sometimes impossible to do.
Related
We have a large project that has several thousand tests in the test suite, and the full test-suite run takes a very long time.
I am looking for a tool that I can integrate into the Maven build that will run only those tests that might be affected (knowing the code coverage of each), because some covered code has changed.
I was googling that and found a few similar things but not a perfect fit:
Ekstazi http://www.ekstazi.org/ looks like exactly that, but it does not work out of the box with TestNG (which is used in the test suite), and it is not open source
Infinitest https://infinitest.github.io/ seems to focus mainly on IDE integration - is it possible to run the tests only on demand (just like mvn infinitest)?
PIT http://pitest.org/ is not exactly what I am looking for but it also needs to analyze per-test coverage
It would also be very useful to remember test coverage per (last) git commit and run the tests against the latest code changes.
Further suggestions and comments on those above are welcome.
As far as I can see, Infinitest doesn't provide a corresponding Maven plugin, so this isn't possible with it out of the box. You might consider creating one, though, and make an invaluable contribution to the world.
As far as I can see, it provides a pretty solid API, so writing a plugin shouldn't be a big problem. You may want to take a look at the InfinitestCore interface first. If you're using a CI environment, you can feed Infinitest the file list directly from git diff --name-only HEAD~1, which produces the list of files changed in the latest commit (useful if, for example, you run your builds against each commit).
UPD. It seems there is a workaround involving exec-maven-plugin to explicitly run Infinitest in the Maven build: you can run 'mvn exec:exec' from the command line or from m2eclipse's Maven Build launcher to run Infinitest against your project. I'd advise specifying the explicit build phase it should run in, using the executions element in the POM:
executions: It is important to keep in mind that a plugin may have multiple goals. Each goal may have a separate configuration, possibly even binding a plugin's goal to a different phase altogether. executions configure the execution of a plugin's goals.
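To make the executions wiring concrete, here is a minimal sketch of binding exec-maven-plugin to the test phase. The way Infinitest itself is launched is only a placeholder (I have not verified a command-line runner for it), so treat the executable and arguments as assumptions to be replaced with your own invocation:
<plugin>
  <groupId>org.codehaus.mojo</groupId>
  <artifactId>exec-maven-plugin</artifactId>
  <version>1.6.0</version>
  <executions>
    <execution>
      <id>run-infinitest</id>
      <!-- bind the exec goal to a concrete phase so it runs as part of the normal build -->
      <phase>test</phase>
      <goals>
        <goal>exec</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <executable>java</executable>
    <arguments>
      <!-- placeholder: replace with whatever actually launches Infinitest in your setup -->
      <argument>-jar</argument>
      <argument>path/to/infinitest-runner.jar</argument>
    </arguments>
  </configuration>
</plugin>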
Having a build fail because a coding standard is violated is nice. However, when introducing checkstyle to a new project, there might be a lot of violations.
Instead of adding rules one by one, does anyone know of a (preferably simple) way of setting up something that will fail the build only if more warnings are introduced than were there before?
This is not possible. But as a workaround you can specify maxAllowedViolations (and maybe reduce this number manually with every check-in).
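If Checkstyle runs through the maven-checkstyle-plugin, that setting goes on the check goal; a minimal sketch, assuming a reasonably recent plugin version that supports maxAllowedViolations (the number is whatever your current baseline count is):
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>3.1.2</version>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <!-- fail only if the violation count grows beyond today's baseline;
         lower this number manually as violations get fixed -->
    <maxAllowedViolations>250</maxAllowedViolations>
  </configuration>
</plugin>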
I know I can instruct Maven to keep going after a failure with the -fae/--fail-at-end command-line option. Is there a way to set this behaviour as default for a given module?
Some background:
I'm moving from an Ant build to a Maven-based build, and the other developers on the team are used to the build completing even when there are failed unit tests. With a ~200-module product, the build takes a fair amount of time, and it would be ideal if developers could see all failing tests from the beginning, without having to remember to add the -fae option.
From reading the Maven docs, I get the feeling that the answer is a resounding "no", but I just want to cover my bases and make sure there isn't some sort of undocumented way to do this...
Thanks!
After checking out the Maven source code, I can confirm this cannot be configured by default.
An existing issue is open on the Maven tracker: https://issues.apache.org/jira/browse/MNG-5342 (please vote for it if you still want this -- though I just noticed you opened it yourself, so you cannot vote on it ;))...
Regards,
To ignore test failures you can use the property maven.test.failure.ignore. Of course, you can set this property per module if you have parent and child POMs.
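For example, in the POM of the module (or of the parent, to apply it everywhere); the same property can also be passed on the command line as -Dmaven.test.failure.ignore=true:
<properties>
  <!-- report test failures but let the build continue to the end -->
  <maven.test.failure.ignore>true</maven.test.failure.ignore>
</properties>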
Does Sonar offer any way to raise alerts and fail a build when the trend for certain metrics is bad?
Background: In our legacy project using a static threshold for example for code coverage ("red alert when coverage is below 80%") does not make much sense. But we would like to make sure that the coverage does not go down any further.
Please do not give any advice on lowering the bar by using a less restrictive rule set. This is no option in our case.
There is a Build Breaker plug-in that will fail the build if you breach a warning or error threshold set up in the quality profile.
Plug-in details are here:
http://docs.sonarqube.org/display/PLUG/Build+Breaker+Plugin
I'm not aware of any functionality that lets you alert on a metric trend.
We use Sonar as the second last step in our release process. The build breaker ensures that releases do not breach predetermined quality criteria.
We tried exactly the same thing, using the Build Breaker plugin. After a while, it proved to be too inflexible (and configuring Sonar is a mess), so we moved from Sonar to Jenkins/Hudson plugins such as Cobertura (for code coverage) or PMD (for code style):
https://wiki.jenkins-ci.org/display/JENKINS/PMD+Plugin
https://wiki.jenkins-ci.org/display/JENKINS/Cobertura+Plugin
With these plugins, very fine-grained settings are possible, for example setting the build to yellow at <70% code coverage or to red below <50%; even the weather symbol for each build is settable.
In the meantime we have scripted our own build breaker that gets executed within our build. We use Groovy to query the REST API of Sonar to retrieve a certain set of metrics (including their historical values). The metric retrieval is provided by a build plugin that is available to our whole division.
Each team can parameterize its build with a set of rules regarding the metrics that have to be verified for its project. Of course, the rules are also provided as Groovy snippets :-)
Typical rules are:
Number of (major|critical|blocker) violations is less than or equal to that of the previous build
No new duplicates
Coverage not lower than in previous build
Bad findings can then be used for breaking the build or just for reporting.
We are working on a web project from scratch and are looking at the following static code analysis tools.
Conventions (Checkstyle)
Bad practices (PMD)
Potential bugs (FindBugs)
The project is built on Maven. Instead of using multiple tools for the purpose, I was looking at a single flexible solution and came across SonarQube.
Is it true that we can achieve the results from Checkstyle, PMD and Findbugs with SonarQube?
Sonar will run CheckStyle, FindBugs and PMD, as well as a few other "plugins" such as Cobertura (code coverage) by default for Java projects. The main added value, however, is that it stores the history in a database. You can then see the trend. Are you improving the code base or are you doing the opposite? Only a tool with memory can tell you that.
You should run Sonar in your CI system so that even things that take some time to execute (such as CPD – copy paste detector) can run. And you'll have your history. Whereas with an Eclipse plugin, for example, you'll detect violations sooner – which is great – but you will be tempted to run it less often if it starts taking too long, or run less "quality plugins" (such as skipping CPD or skipping code coverage analysis). And you won't have history.
Also, Sonar generates visual reports, "Dashboard" style. Which makes it very easy to grasp. With Sonar in Jenkins, you'll be able to show developers and your management the effects of the work that was performed on the quality of the code base over the last few weeks and months.
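As a rough sketch of the CI wiring with Maven (the server URL is an example; sonar.host.url is the standard property the analysis uses):
<properties>
  <!-- SonarQube/Sonar server that the CI analysis publishes to -->
  <sonar.host.url>http://sonar.example.com:9000</sonar.host.url>
</properties>
<!-- typical CI invocation: mvn clean install sonar:sonar -->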
Sonar uses these 3 tools as plugins and aggregates the data from all three, adding value by showing graphs and such on top of them. So they are complementary to Sonar.
Yes and no. In addition to the other answers.
SonarQube is currently on the way to deprecating PMD, Checkstyle and FindBugs and using its own technology to analyze Java code (called SonarJava). They are doing this because they don't want to spend their time fixing and upgrading those libraries, or waiting on them (e.g. for Java 8 support), and because those tools in turn rely on outdated libraries.
They also offer a new set of plugins for your personal IDE, called SonarLint.
Sonar is great, but if you want to use the mentioned tools separately and still have nice graphs, you can use the Analysis Collector Plugin as part of your Jenkins CI build. A slight advantage of this is that you can check your PMD/FindBugs/Checkstyle configuration into your SCM and integrate it into your Maven build, rather than relying on a separate Sonar server.
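As a rough illustration of that setup (plugin versions are only examples; the rule and filter files would live in your SCM next to the POM), the three tools can be bound into the Maven build like this, and the Jenkins Checkstyle/PMD/FindBugs and Analysis Collector plugins then pick up the XML reports they produce:
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-checkstyle-plugin</artifactId>
      <version>3.1.2</version>
      <configuration>
        <configLocation>checkstyle.xml</configLocation>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-pmd-plugin</artifactId>
      <version>3.14.0</version>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>findbugs-maven-plugin</artifactId>
      <version>3.0.5</version>
      <configuration>
        <excludeFilterFile>findbugs-exclude.xml</excludeFilterFile>
      </configuration>
    </plugin>
  </plugins>
</build>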
... a few years later: no, it is not! SonarQube is supposed to be able to cover all the rules with its own analyzer, but there are still rules from PMD or Checkstyle that are not covered by SonarQube. See for example: PMD ReturnFromFinallyBlock.
Sonar is much more than these tools alone.
The greatest benefit is the GUI, which lets you configure anything easily.
The statistics it offers are very detailed (lines of code etc).
And it even offers great support for test coverage etc :)
Here you can take a good look:
http://nemo.sonarsource.org/
I would still use these tools in addition to Sonar because they can fail the Maven build when someone violates a rule, whereas Sonar is more retrospective.
Well, at least since SonarQube 6.3+ it seems that FindBugs is (at the moment) no longer supported as a plugin. SonarSource is working on replacing the FindBugs rules within its own Java plugin.
They even had a list showing the replacement status of each rule here, but it has since been removed.
See https://community.sonarsource.com/t/where-is-dist-sonarsource-com-content/5353 for more details.