Our team is investigating various options for static analysis in our project, and we have mixed opinions about whether our Continuous Integration build should fail because of warnings from static analysis.
The argument against failing the build is that there are often exceptions to the rules, and attempting to work around them just to make the build succeed reduces productivity. A better approach would be to generate reports with the build, and regularly dedicate developer time to addressing the reported issues.
The counter-argument is that it is easy for the technical debt to build up if the bugs are not addressed immediately. Also, if the build fails when a potential bug is introduced, the amount of time required to fix it is reduced.
What are your thoughts?
Personally I'd rather see the build fail. While some warnings are false positives, those can be suppressed with a SuppressMessageAttribute that carries a Justification. When you do this, you can be sure that every warning has been evaluated by a developer and nothing simply slips through.
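For illustration, here's a minimal sketch of such a suppression in C# (the rule ID and method are examples, not from the original discussion):

    using System.Diagnostics.CodeAnalysis;

    public static class LegacyInterop
    {
        // The Justification records why the warning was evaluated and waived,
        // so the exception is visible in code review rather than slipping through.
        [SuppressMessage("Microsoft.Design", "CA1062:ValidateArgumentsOfPublicMethods",
            Justification = "Arguments are validated by the native caller; re-checking here would duplicate work.")]
        public static void Process(string payload)
        {
            // ... interop call ...
        }
    }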
It's probably a good idea to fail the build, but this doesn't have to be an absolutely black-and-white decision.
Hudson lets you fail a build if a certain threshold of new static analysis faults is exceeded. So you can say "mark the build as unstable if 1 new fault is introduced; mark the build as failed if 5 new faults are introduced".
This is something that's built into the various analysis plugins available for Hudson.
I typically make the build fail on static analysis errors (not only the CI build but also the one that runs on each developer's machine before committing, and I use tools that can be integrated into the IDE).
If you don't do this, my experience is that errors don't get fixed and never actually will: if you treat them as cosmetic (or you wouldn't allow the commit, right?), there will always be something more important to do. If there is a justified exception, most tools let you exclude pieces of code (with a custom comment or an exclusion filter, for example).
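As a sketch of the custom-comment style of exclusion (assuming a C# codebase; CS0618, use of an obsolete member, is just an example warning):

    using System;

    public class Migration
    {
        [Obsolete("Use NewMethod instead.")]
        public static void OldMethod() { }

        public static void Run()
        {
            // Justified exception: migration to NewMethod is scheduled; suppress
            // the obsolete-member warning for this call only.
    #pragma warning disable CS0618
            OldMethod();
    #pragma warning restore CS0618
        }
    }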
If you want to use static analysis, do it right: fix the problem when it occurs, and don't let an error propagate further into the system. A process that lets this happen is wrong:
Let's make toast American style: you burn, I'll scrape. --W. Edwards Deming.
Tough call, without a good global answer. I’d like to agree with the two previous postings and say yes, but my Second Law of Static Analysis says that defects will congregate in parts of the organization where the software engineering process is most badly broken. A corollary is that engineers who are forced to change their code in a hurry to make the warnings go away, are the ones most likely to introduce new problems when they do so; I’ve seen depressingly many such bugs. I think it’s better software engineering to do the false-positive marking outside the code, as in, e.g., Coverity and Klocwork, and do your enforcement based on that.
It goes without saying that your main points, tracking such new defects as loudly as possible and promptly dedicating time to avoiding technical debt, are excellent ideas.
In addition to failing on errors, you need a process to investigate the warnings, and to decide whether some of them should become errors.
I've just installed SonarQube, and it's understandably found a lot of technical debt that we want to fix eventually. At the moment, however, I want to make sure that any new code checked in is evaluated and any issues in it are flagged.
I know I can mark issues as won't fix, but is there a way to flag issues that have arisen after a certain point in time and leave the existing technical debt as "Will fix later"?
I know ideally I'd like to halt development and fix everything right now, but I've only just got buy-in for a CI server, and some of my senior colleagues don't even see the point of unit tests, let alone ensuring code quality.
SonarQube now focuses on the Leak Period, i.e. problems introduced recently. This is handled through project versions, so you just need to update your version string to start a new leak period and immediately differentiate old code from new.
Take a look at SonarQube itself on SonarQube.com. The highlighted "Leak Period" section on the right brings attention to problems that are new in this version.
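In practice that's a one-line change in the analysis configuration. A minimal sketch of a sonar-project.properties file (the key, name, and source path are placeholders):

    sonar.projectKey=org.example:myproject
    sonar.projectName=My Project
    # Bumping the version starts a new leak period; old debt stays outside it.
    sonar.projectVersion=1.1
    sonar.sources=src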
We are relying heavily on the incremental analysis, and I notice that the "duplicated blocks of code" issue (I think in all languages, but for sure in C# and C++) doesn't tell you where the duplication is, or even what is duplicated.
So basically you have to check in (and fail a gated check-in...) before you understand what Sonar meant.
Is there anything we can do to learn this during the incremental analysis?
Even running a third-party utility would be fine... whatever can find the duplication.
Thanks,
Roy.
Unfortunately there is no way in preview mode to display duplication. As a side note, in preview mode (and especially incremental mode) the duplication computation is unreliable. In an upcoming version of SQ we will deeply rework preview mode, so I'll try to keep this use case in mind.
I'm trying to analyze programmer profiles, so I'm looking for people who are duplicating code and trying to understand why they're doing it.
My idea is to identify the cause (laziness, lack of knowledge, etc.) and attack the problem at its root.
Is there any way to see only the duplications added in the last SonarQube analysis?
I just checked on Nemo, and the time machine view only tells you how much code duplication was added since the last analysis; unlike other metrics, it doesn't actually link to the new issues. Most likely it's not supported yet.
Our team is having a heated debate as to whether we allow failing unit tests to be checked in to source control.
On one side, the argument is that yes, you can, as long as it is temporary - to be resolved within the current sprint. Some even say that in the case of bugs that may not be corrected within the current sprint, we can check in a corresponding failing test.
The other side of the argument is that those tests, if they are checked-in must be marked with the Ignore attribute - the reasoning being that the nightly build should not serve as a TODO list for a developer.
The problem with the Ignore attribute, however, is that we tend to forget about those tests.
Does the community have any advice for us?
We are a team of 8 developers with a nightly build. Personally I am trying to practice TDD, but the team tends to write unit tests after the code is written.
I'd say not only should you not check in new failing tests, you should disable your "10 or so long-term failing tests". Better to fix them, of course, but between having a failing build every night and having every included test passing (with some excluded) - you're better off green. As things stand now, when you change something that causes a new failure in your existing suite of tests, you're pretty likely to miss it.
Disable the failing tests and enter a ticket for each of them; that way you'll get to it. You'll feel a lot better about your automated build system, too.
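For example, a minimal sketch with NUnit (assuming a .NET test framework; the ticket ID is a placeholder) of disabling a test while keeping it tied to its ticket:

    using NUnit.Framework;

    [TestFixture]
    public class InvoiceTests
    {
        // NUnit 3 requires a reason on [Ignore], a useful nudge against forgetting.
        [Test]
        [Ignore("Fails until BUG-1234 is fixed; remove this attribute with that fix.")]
        public void Total_includes_late_fees()
        {
            // ... assertions that currently fail ...
        }
    }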
I discussed this with a friend, and our first comment was a recent Geek & Poke :) (attached)
To be more specific: all tests should be written up front (as long as it's supposed to be TDD), but those checking unimplemented functionality should have their asserted value prepended with a negation. If it's not implemented, it shouldn't work. [If it works, the test is bad.] Once you implement the functionality, you remove the ! and the test passes [or fails, but then it's there to do so :) ].
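A minimal sketch of that idea in C# with NUnit (an assumed framework; the calculator class is a placeholder):

    using NUnit.Framework;

    public static class ShippingCalculator
    {
        // Feature not implemented yet; stubbed to report no support.
        public static bool SupportsInternational() => false;
    }

    [TestFixture]
    public class ShippingTests
    {
        [Test]
        public void International_shipping()
        {
            // Negated while the feature is unimplemented; remove the ! once it lands.
            Assert.IsTrue(!ShippingCalculator.SupportsInternational());
        }
    }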
You shouldn't think that tests are something written once and always right. Tests can have bugs too. So editing a test should be normal.
I'd say that checking in (known) failing tests should of course be only temporary, if at all. If the build is always failing, it loses its meaning (we've been there and that's not pretty).
I guess it would be OK to check in failing tests if you found a bug and could reproduce it quickly with a test, but the offending code is not "yours" and you don't have the time or responsibility to get into it enough to fix it. Then assign a ticket to someone who knows their way around and point them to the test.
Otherwise I'd say use your ticket system as a TODO list, not the build.
It depends how you use tests. In my group, running tests is what you do before a commit in order to check that you (likely) have not broken anything.
When you are about to commit, it is painful to find failing tests that seem vaguely, possibly related to your changes but still strange, investigate for a couple of hours, then realize the failures cannot possibly be caused by your changes, do a clean checkout, compile, and find that the failures indeed come from the trunk.
Obviously you do not use tests in the same way, otherwise you wouldn't even be asking.
If you were using a DVCS (e.g., git) this wouldn't be an issue as you'd commit the failing test to your local repository, make it work, and then push the whole lot (test and fix) back to the team's master repository. Job done, everyone happy.
As it seems you can't do that, the best you can do is to first make sure that the test is written and fails in your sandbox, and then fix that before committing. This might or might not be great TDD, but it's a reasonable compromise with the working practices of everyone else; working with your co-workers is more important than following some ivory tower principle in every aspect, since the author of the principle isn't located in the cubicle next door…
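For the curious, a minimal sketch of that DVCS flow with git (branch and file names are placeholders):

    git checkout -b bug-123               # local branch; nothing shared yet
    git add tests/InvoiceTests.cs
    git commit -m "Add failing test reproducing bug 123"
    # ... make the test pass locally ...
    git add src/Invoice.cs
    git commit -m "Fix bug 123"
    git push origin bug-123               # test and fix arrive together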
The purpose of your nightly build is to give you confidence that the code written the day before hasn't broken anything. If tests are often failing then you can't have that confidence.
You should first fix any failing tests you can, and delete or comment out/ignore the other failing ones. Every nightly build should be green. If it's not, there is a problem, and that's immediately obvious now since it should have run green.
Secondly, you should never check in failing tests. You should never knowingly break the build. Ever. It causes unnecessary noise and distractions and lowers confidence. It also creates an atmosphere of laziness around quality and discipline. With respect to ignored tests which are checked in, these should be caught and addressed in your code reviews, which should be covering your test code as well.
If people want to write their code first and tests later, this is OK (though I prefer TDD), but only tested code which runs green should be checked in.
Finally, I would recommend changing the nightly build to a continuous integration build (run on each code check in) which might just change people's habits around code check-in.
I can see that you have a number of problems.
1) You are writing failing tests.
This is great! However, someone is intending to check those in "to be fixed within the current sprint". This tells me that it's taking too long to make those unit tests pass. Either the tests are covering more than one or two aspects of behaviour, or your underlying code is far too complex. Refactor the code to break it up and use mocks to separate the responsibilities of the class under test from its collaborators.
2) You tend to forget about tests with [Ignore] attributes.
If you're delivering code that people care about, either it works, or it has bugs which require the behaviour of the system to be changed. If your unit tests describe that behaviour but the behaviour doesn't work yet, then you won't forget because someone will have registered a bug. However, see point 1).
3) Your team is writing tests after the code is written.
This is fairly common for teams learning TDD. They might find it easier if they thought of the tests not as checks for whether their code is broken, but as examples of how another developer might want to use their code, with a description of the value that their code provides. Perhaps you could pair with them and help them learn from what they already know about writing tests afterwards?
4) You're trying to practice TDD.
Do or do not. There is no try. Write a test first, or don't. Learning never stops even when you think you're doing TDD well.
When I started building a continuous integration server, I ran across the statement "It's bad to break the build [of the code]." After finishing that project I came to the conclusion that
"Breaking the build." was a catchy phrase that was being thrown around a lot because of the alliteration, or
I wasn't understanding a key element of Continuous Integration.
So my question is in the spirit of #2: why is breaking the build a bad thing?
Be very careful in labeling "Breaking the Build" as a bad thing. It is something that needs immediate attention, but it is also a very normal and expected part of the development cycle. This is why Continuous Integration is so useful -- it tells you immediately when the build is broken, and what change set caused it. It helps you get back on track quickly.
If your culture penalizes "Breaking the Build", then you are in danger of cultivating a toxic work environment. Again, consider it to be something that needs immediate attention, but don't label it as "bad".
Because if other people check out your broken changes, they won't be able to work, or if they do they will do so less efficiently.
It also means you're not properly testing your changes before you commit, which is key in CI.
From Martin Fowler, http://martinfowler.com/articles/continuousIntegration.html:

"The whole point of working with CI is that you're always developing on a known stable base. It's not a bad thing for the mainline build to break, although if it's happening all the time it suggests people aren't being careful enough about updating and building locally before a commit. When the mainline build does break, however, it's important that it gets fixed fast."
Because if other people check out the changes, they won't be able to work...
This image is copyrighted to Geek & Poke under a Creative Commons License
If you break the build, as happened to me yesterday, your team-mates will not be able to build when they try to use the source code. Therefore they will struggle to test the work that they are doing. It gets worse the bigger your team is.
Surely the whole point of continuous integration is to identify problems early. Daily or more frequent check-ins are required to reduce conflicts to a manageable size.
You should get an up-to-date copy of the repository and build locally. This will tell you whether your proposed check-in will break the build. You resolve any issues and then check in.
In this way the integration issues are kept local and easy to fix.
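A minimal sketch of that routine, assuming git and a build script (both stand-ins for whatever VCS and build tool you actually use):

    git pull --rebase origin main   # get an up-to-date copy of the repository
    ./build.sh && ./run-tests.sh    # build and run the tests locally
    git push origin main            # check in only if both steps succeeded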
Breaking the build has dire implications for the project schedule (and the blood pressure of team-mates):
=> Other developers who then get the latest version can no longer build their own changes, delaying them.
=> Continuous integration will break, meaning that formal testing can be delayed.
Many version control tools (e.g. TFS) can prevent developers from checking in code which does not compile or pass unit or code analysis tests.
Once builds start breaking, people get reluctant to get the latest changes, and you begin the deadly spiral towards Big Bang integration of changes.
I don't think breaking the build is necessarily a bad thing, as long as there is a well-known, working branch or tag in the repository. That said, make your own branch in the repository if you know your code is going to break the build today but you will fix it next week. Then later you can merge back into trunk.
Because it means someone has done something bad (or at least, some changes have clashed) and you can no longer build and deploy your system.
Breaking the build means that you committed code to a shared repository that either (a) does not compile, or (b) does not work (fails unit tests). Anyone else who's developing from that shared repository is going to have to deal with the broken code you committed until it is fixed. That will cause a loss of productivity for the entire team.