Breaking the Build: Why is it a bad thing? [closed] - continuous-integration

When I started building a continuous integration server, I ran across the statement "It's bad to break the build [of the code]." After finishing that project I came to the conclusion that
"Breaking the build." was a catchy phrase that was being thrown around a lot because of the alliteration, or
I wasn't understanding a key element of Continuous Integration.
So my question is in the spirit of #2: why is breaking the build a bad thing?

Be very careful in labeling "Breaking the Build" as a bad thing. It is something that needs immediate attention, but it is also a very normal and expected part of the development cycle. This is why Continuous Integration is so useful -- it tells you immediately when the build is broken, and what change set caused it. It helps you get back on track quickly.
If your culture penalizes "Breaking the Build", then you are in danger of cultivating a toxic work environment. Again, consider it to be something that needs immediate attention, but don't label it as "bad".

Because if other people check out your broken changes, they won't be able to work, or if they do they will do so less efficiently.
It also means you're not properly testing your changes before you commit, which is key in CI.

From Martin Fowler, http://martinfowler.com/articles/continuousIntegration.html:
The whole point of working with CI is that you're always developing on a known stable base. It's not a bad thing for the mainline build to break, although if it's happening all the time it suggests people aren't being careful enough about updating and building locally before a commit. When the mainline build does break, however, it's important that it gets fixed fast.

Because if other people check out the changes, they won't be able to work...
This image is copyrighted to Geek & Poke under a Creative Commons License

If you break the build, as happened to me yesterday, then when your team-mates try to use the source code it will not build. They will therefore struggle to test the work that they are doing. It gets worse the bigger your team.

Surely the whole point of continuous integration is to identify problems early. Daily or more frequent check-ins are required to reduce conflicts to a manageable size.
You should get an up-to-date copy of the repository and build locally. This will tell you whether your proposed check-in will break the build. You resolve any issues and then check in.
In this way the integration issues are kept local and easy to fix.
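As a rough illustration of that workflow, here is a minimal Python sketch; the choice of git and pytest is an assumption for the example, so substitute whatever update, build, and test commands your project actually uses:

```python
#!/usr/bin/env python3
"""Update, then build/test locally; only check in if everything passes."""
import subprocess
import sys

# Placeholder commands: swap in your own VCS update and build/test steps.
STEPS = [
    ["git", "pull", "--rebase"],             # get an up-to-date copy of the repository
    [sys.executable, "-m", "pytest", "-q"],  # run the test suite locally
]

def main() -> int:
    for cmd in STEPS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("FAILED: resolve this locally before you check in.")
            return 1
    print("All steps passed; your check-in should not break the build.")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

If this script (or its equivalent in your stack) is the last thing you run before every commit, the integration problems it finds stay on your machine instead of on the mainline.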

Breaking the build has dire implications for the project schedule (and the blood pressure of team-mates)
=> Other developers who then get the latest version can no longer build their own changes, delaying them
=> Continuous integration will break, meaning that formal testing can be delayed
Many version control tools (e.g. TFS) can prevent developers from checking in code which does not compile or pass unit or code analysis tests.
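For teams not on TFS, a rough client-side stand-in for a gated check-in is a pre-commit hook. A minimal sketch, assuming a git repository and a Python project (the hook path and commands are illustrative, not prescriptive):

```python
#!/usr/bin/env python3
# Hypothetical .git/hooks/pre-commit script (must be marked executable).
# Rejects the commit if the code does not byte-compile or the unit tests fail.
import subprocess
import sys

CHECKS = [
    [sys.executable, "-m", "compileall", "-q", "."],  # does the code compile?
    [sys.executable, "-m", "pytest", "-q"],           # do the unit tests pass?
]

for check in CHECKS:
    if subprocess.run(check).returncode != 0:
        print("pre-commit: check failed, aborting commit.", file=sys.stderr)
        sys.exit(1)  # a non-zero exit code makes git abort the commit
```

Unlike a true server-side gate this can be bypassed (git commit --no-verify), so it supplements, rather than replaces, the CI server.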

Once builds start breaking, people get reluctant to get the latest changes, and you begin the deadly spiral towards Big Bang integration of changes.

I don't think breaking the build is necessarily a bad thing, as long as there is a well-known, working branch or tag in the repository. That said, make your own branch in the repository if you know your code is going to break the build today, but you will fix it next week. Then later you can merge back into trunk.

Because it means someone has done something bad (or at least, some changes have clashed) and you can no longer build and deploy your system.

Breaking the build means that you committed code to a shared repository that either (a) does not compile, or (b) does not work (fails unit tests). Anyone else who's developing from that shared repository is going to have to deal with the broken code you committed until it is fixed. That will cause a loss of productivity for the entire team.

Related

Using SonarQube to flag "new" issues

I've just installed SonarQube and it's understandably found a lot of technical debt that we want to eventually fix. However, at the moment I want to make sure that any new code checked in is evaluated and any issues in it are flagged.
I know I can mark issues as won't fix, but is there a way to flag issues that have arisen after a certain point in time and leave the existing technical debt as "Will fix later"?
I know ideally I'd like to halt development and fix everything right now, but I've only just got buy in for a CI server and some of my senior colleagues don't even see the point of unit tests, let alone ensuring code quality.
SonarQube now focuses on the Leak Period, i.e. problems introduced recently. This is handled through project versions, so you just need to update your project version string to start a new leak period and immediately differentiate old code from new.
Take a look at SonarQube itself on SonarQube.com. The highlighted "Leak Period" section on the right brings attention to problems that are new in this version.

How to treat future requirements in terms of TDD [closed]

While attempting to adopt more TDD practices lately on a project, I've run into a situation regarding tests that cover future requirements, which has me curious about how others are solving this problem.
Say for example I'm developing an application called SuperUberReporting and the current release is 1.4. As I'm developing features which are to be included in SuperUberReporting 1.5, I write a test for a new file export feature that will allow exporting report results to a CSV file. While writing that test it occurs to me that features to support exports to some other formats are slated for later versions 1.6, 1.7, and 1.9, which are documented in our issue-tracking software. Now the question that I'm faced with is whether I should write up tests for these other formats or wait until I actually implement those features. This question hits at something a bit more fundamental about TDD which I would like to ask more broadly.
Can/should tests be written up front as soon as requirements are known or should the degree of stability of the requirements somehow determine whether a test should be written or not?
More generally, how far in advance should tests be written? Is it OK to write a test that will fail for two years, until the feature it covers is slated to be implemented? If so, then how would one organize tests to separate those that are required to pass from those that are not yet required to pass? I'm currently using NUnit for a .NET project, so I don't mind specifics since they may better demonstrate how to accomplish such organization.
If you're doing TDD properly, you will have a continuous integration server (something like Cruise Control or TeamCity or TFS) that builds your code and runs all your tests every time you check in. If any tests fail, the build fails.
So no, you don't go writing tests in advance. You write tests for what you're working on today, and you check in when they pass.
Failing tests are noise. If you have failing tests that you know fail, it will be much harder for you to notice that another (legitimate) failure has snuck in. If you strive to always have all your tests pass, then even one failing test is a big warning sign -- it tells you it's time to drop everything and fix that bug. But if you always say "oh, it's fine, we always have a few hundred failing tests", then when real bugs slip in, you don't notice. You're negating the primary benefit of having tests.
Besides, it's silly to write tests now for something you won't work on for years. You're delaying the stuff you should be working on now, and you're wasting work if those future features get cut.
I don't have a lot of experience with TDD (just started recently), but I think while practicing TDD, tests and actual code go together. Remember Red-Green-Refactor. So I would write just enough tests to cover my current functionality. Writing tests upfront for future requirements might not be a good idea.
Maybe someone with more experience can provide a better perspective.
Tests for future functionality can exist (I have BDD specs for things I'll implement later), but should either (a) not be run, or (b) run as non-error "pending" tests.
The system isn't expected to make them pass (yet): they're not valid tests, and should not stand as a valid indication of system functionality.
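For example, in pytest (an illustration only; the asker is on NUnit, where the Ignore attribute and test categories play a similar role), a not-yet-implemented feature can be parked as a skipped test or recorded as an expected failure, so it is documented without turning the build red. The module and function names below are hypothetical:

```python
import pytest

from superuberreporting import export_report  # hypothetical module under test

def test_csv_export():
    # Current work for 1.5: must pass before check-in.
    assert export_report(fmt="csv").startswith(b"report_id,")

@pytest.mark.skip(reason="PDF export slated for 1.7; spec parked, not run")
def test_pdf_export():
    # Option (a): written down but not executed at all.
    assert export_report(fmt="pdf").startswith(b"%PDF")

@pytest.mark.xfail(reason="XLSX export slated for 1.6", strict=False)
def test_xlsx_export():
    # Option (b): runs, but reports as 'xfail' instead of failing the suite.
    assert export_report(fmt="xlsx")[:4] == b"PK\x03\x04"  # XLSX files are zip archives
```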

Am I allowed to check in a failing test [closed]

Our team is having a heated debate as to whether we allow failing unit tests to be checked-in to source control.
On one side the argument is that yes you can as long as it is temporary - to be resolved within the current sprint. Some say even that in the case of bugs that may not be corrected within the current sprint we can check-in a corresponding failing test.
The other side of the argument is that those tests, if they are checked-in must be marked with the Ignore attribute - the reasoning being that the nightly build should not serve as a TODO list for a developer.
The problem with Ignore attribute however is that we tend to forget about the tests.
Does the community have any advice for us?
We are a team of 8 developers, with a nightly build. Personally I am trying to practice TDD, but the team tends to write unit tests after the code is written.
I'd say not only should you not check in new failing tests, you should disable your "10 or so long-term failing tests". Better to fix them, of course, but between having a failing build every night and having every included test passing (with some excluded) - you're better off green. As things stand now, when you change something that causes a new failure in your existing suite of tests, you're pretty likely to miss it.
Disable the failing tests and enter a ticket for each of them; that way you'll get to it. You'll feel a lot better about your automated build system, too.
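As a sketch of the "disable it and ticket it" approach (shown in pytest purely for illustration; in NUnit the equivalent is an Ignore attribute with a reason string, and the ticket ID below is made up):

```python
import pytest

@pytest.mark.skip(reason="Known failure, tracked as ticket PROJ-123: "
                         "report totals are wrong across a DST change")
def test_report_totals_across_dst_change():
    # Excluded so the nightly build stays green; the skip reason points
    # straight at the ticket so the test is not simply forgotten.
    ...
```

Because the reason names the ticket, anyone scanning the skipped tests in the build report can see exactly what work is outstanding.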
I discussed this with a friend and our first comment was a recent geek&poke :) (attached)
To be more specific: all tests should be written up front (as long as it's supposed to be TDD), but those checking unimplemented functionality should have their expected value prepended with a negation. If it's not implemented, it shouldn't work. [If it works, the test is bad.] After implementing the functionality you remove the ! and the test passes [or fails, but then it's there to do so :) ].
You shouldn't think that tests are something written once and always right. Tests can have bugs too. So editing a test should be normal.
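A pytest rendering of that idea (illustration only; the module, function, and argument are hypothetical): the assertion is negated while the feature is missing, and removing the negation is part of implementing it:

```python
from reporting import export_xml  # hypothetical module and function under test

def test_xml_export_not_implemented_yet():
    # The feature is not implemented yet; assume the stub returns an empty result.
    # The expectation is negated (the "!"), so the test passes today and documents the gap.
    assert not export_xml({"report_id": 1})
    # Once the feature is implemented, remove the 'not' so this becomes the real check:
    # assert export_xml({"report_id": 1})
```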
I'd say that checking in (known) failing tests should of course be only temporary, if at all. If the build is always failing, it loses its meaning (we've been there and that's not pretty).
I guess it would be ok to check in failing tests if you found a bug and could reproduce it quickly with a test, but the offending code is not "yours" and you don't have the time/responsibility to get into it enough to fix the code. Then give a ticket to someone who knows his way around and point to the test.
Otherwise I'd say use your ticket system as a TODO list, not the build.
It depends how you use tests. In my group, running tests is what you do before a commit in order to check that you (likely) have not broken anything.
When you are about to commit, it is painful to find failed tests that seem vaguely possibly related to your changes but still strange, investigate for a couple of hours, then realize it cannot possibly be because of your changes, do a clean checkout, compile, and find that indeed the test failures come from the trunk.
Obviously you do not use tests in the same way, otherwise you wouldn't even be asking.
If you were using a DVCS (e.g., git) this wouldn't be an issue as you'd commit the failing test to your local repository, make it work, and then push the whole lot (test and fix) back to the team's master repository. Job done, everyone happy.
As it seems you can't do that, the best you can do is to first make sure that the test is written and fails in your sandbox, and then fix that before committing. This might or might not be great TDD, but it's a reasonable compromise with the working practices of everyone else; working with your co-workers is more important than following some ivory tower principle in every aspect, since the author of the principle isn't located in the cubicle next door…
The purpose of your nightly build is to give you confidence that the code written the day before hasn't broken anything. If tests are often failing then you can't have that confidence.
You should first fix any failing tests you can and delete or comment out/ignore the other failing ones. Every nightly build should be green. If it's not, then there is a problem, and that is immediately obvious precisely because it should have run green.
Secondly, you should never check in failing tests. You should never knowingly break the build. Ever. It causes unnecessary noise and distractions and lowers confidence. It also creates an atmosphere of laziness around quality and discipline. With respect to ignored tests which are checked in, these should be caught and addressed in your code reviews, which should cover your test code as well.
If people want to write their code first and tests later, this is OK (though I prefer TDD), but only tested code which runs green should be checked in.
Finally, I would recommend changing the nightly build to a continuous integration build (run on each code check in) which might just change people's habits around code check-in.
I can see that you have a number of problems.
1) You are writing failing tests.
This is great! However, someone is intending to check those in "to be fixed within the current sprint". This tells me that it's taking too long to make those unit tests pass. Either the tests are covering more than one or two aspects of behaviour, or your underlying code is far too complex. Refactor the code to break it up and use mocks to separate the responsibilities of the class under test from its collaborators.
2) You tend to forget about tests with [Ignore] attributes.
If you're delivering code that people care about, either it works, or it has bugs which require the behaviour of the system to be changed. If your unit tests describe that behaviour but the behaviour doesn't work yet, then you won't forget because someone will have registered a bug. However, see point 1).
3) Your team is writing tests after the code is written.
This is fairly common for teams learning TDD. They might find it easier if they thought of the tests not as checks that their code is broken, but as examples of how another developer might want to use their code, with a description of the value that their code provides. Perhaps you could pair with them and help them learn from what they already know about writing tests afterwards?
4) You're trying to practice TDD.
Do or do not. There is no try. Write a test first, or don't. Learning never stops even when you think you're doing TDD well.

How do you know who is fixing the build?

We are working in a CI environment, with Enterprise Cruise running our builds. Developers all have CCTray installed locally to notify us if a build breaks.
CCTray has a menu option Volunteer to fix build that you can use to let your team know that you are fixing the build. However this doesn't work in our environment (reasons: Fix build not currently supported on projects monitored via HTTP).
So the question is - does anyone have a technique that they use in their team that allows someone to indicate that they are fixing a broken build?
For me, Continuous Integration is not only about tools, but also about practices. One of them is responsibility. In other words, the one who breaks the build is also the one who will fix it!
Shooting "I take it guys" is my prefered. ( in addition of the responsability romaintaz describe )
We send an email to the Developer's mailing list to let everyone know you are taking ownership of the build break.
We're co-located, we all run CCTray, and when the build breaks we have an audio alert (the red alert from the Starship Enterprise). When it breaks we all shout "who broke the build?!" Once we figure out who broke the build, we harass them until they tuck their tail between their legs, do that stupid embarrassed laugh, and volunteer to fix the build.
It's worth noting that things that aren't monitored by the build and tests can change on a CI box. For example: maybe someone went onto the box and changed a permission. Then when the next checkin is made it looks like the person that made the checkin broke the build when really it was the person that made the manual change without telling anyone.
On the volunteer thing, tools can help but verbal face to face communication is still king.
The onus is usually on whoever broke the build with their check-in. That's often obvious, even with multiple check-ins from different individuals. After that there's a bit of negotiation if the build remains broken. Not particularly scientific or rigorous, but it seems to work.
If the build breaks, then in CCTray there is an option to "Volunteer to fix the build", and it automatically tells all the developers who is fixing the build.

The best way to start a project [closed]

When you are starting a personal programming project, what is your first step? I'm trying to start a project that's just an idea at the moment. I get lots of these, and I dive right into the code, and after a while just completely lose interest and/or forget about the project.
When you are starting, what is your first step? Do you plan out the project? Make a diagram? Write some code on paper? How do you start a project in a manner that you know you will succeed?
The only thing that works for me: Create the smallest possible implementation of it that's somehow usable and then use it.
From 7 Habits of Highly Effective People, Habit 2: Begin with the End In Mind.
With any project you need a clear goal, a point where you can say "I'm finished". A clear outcome will give you direction. Once you have that, you can start planning how to get there. The size and complexity of the project will determine how much detail your plan needs, but in general you'll want to feel you're making progress against your plan quite regularly.
My next step is to sketch out a design of the modules that will be needed and the APIs between each module. If the APIs are clean then the modules are probably right. Then I start implementing the modules, testing as I go.
I spend a lot of time thinking about the various aspects of the project before I even touch a keyboard.
I go through what I've learnt from previous projects and write it down in various categories ('technical', 'promotion', etc)
Personal project or not, I always set up source code control. Git, Mercurial, and Bazaar are examples of source code control tools that are not intrusive, because you do not need to set up a master server. Just type a simple command to create the project, check your files in, commit. In the future, when you mess up one of your files, you will be able to 'undo'.
I also set up a lightweight ticket system to keep track of (1) issues and (2) ideas.
By "lightweight" I mean that if maintaining two text documents with these lists works for you, that's good enough.
Hope this helps.
I agree with the already given advice of:
Planning a minimal implementation that does something useful as a first complete release.
Having concrete goals about what you want to achieve, so you have something to compare your progress with.
I would also recommend beginning with a lightweight design of your overall architecture so you can have a roadmap of how to build your product.
I find it difficult to start building something when I don't have a clear idea about how it should look, at least at a first level of decomposition. Think about what you need besides functionality: high performance? extensibility scenarios (and which ones)? usability goals? high scalability? ease of deployment and installability? etc. Ask yourself: what components will I have to build in order to achieve those architectural qualities?
And don't get me wrong, I'm a strong proponent of agile software development. You don't need to spend a lot of time designing your architecture (because it surely will have to evolve as you build and get feedback about what works and what doesn't), but having a blueprint of how to build your product based on its architecture should be useful for planning your progress and setting realistic goals.
Define the goal for the project. Sounds like you are looking almost exclusively at the solution rather than the problem.
A program isn't useful to you or anyone else unless it addresses some problem. Writing code to get moving is great, but you appear to lose interest and focus after you start -- because you're looking at the code, not the problem.
Spend some time considering what led you to write this code. Ponder how other people might discover the same need, what path might take them to the same frustration you worked to solve.
Then, find some of those people and offer your (partial) solution, and you'll generate interest and suggestions among them all.
THAT will keep you going on your project. The fellow interest, the sharing, even the disagreements -- it's people who need software! Don't create solutions (software) looking for a problem (people). You started with YOU, with your need or desire, but focused on the code, and lost the impetus for the project.
Programming's a lot more fun when you're problem-solving. But you need to keep the problem in front of you. Sharing the problem builds community. That's what this is really all about, isn't it?
For my own personal projects I just dive right in. Of course, none of these have yet been sufficiently large as to require any sort of pre-planning. If this is going to be a serious project or one of relatively large scale, it is always a good idea to flesh out at least what each part of the program needs to do and a high-level view of how they will do it.
Like the others, my personal projects always have:
A Final Goal
A Task List
Small usable units
Source control
As an additional motivator, I try to use a technology that I have never used before. Learning something new generally becomes the largest motivator for me.
Easy: don't even start projects you're likely to lose interest in. Spend more time making sure you want to commit yourself to an idea before beginning any work.
It depends on the project - how big is it?
If I'm writing the next Notepad clone I might just dive in; if I wanted to roll my own operating system it'd take a lot more non-coding work.
I like to do a lot of diagrams, the tool I use for most development is clean A4 paper and a pencil. Draw out the UI, workflow, basic classes, and how you're going to store any data - then the coding is just a computer readable way of writing what you drew already.
Source control (e.g. SVN) is a couple of keystrokes/clicks, so the overhead is low and the benefit is high; it's handy to try stuff and just revert to an earlier state if it doesn't work.
Then just make the most basic prototype that will work; once something is actually going it is much easier to get enthused and add more. If it is overwhelming, I find that once I think the problem is solved in my head, that's enough.
First plan out the basic outline of the final application. Most important features, basic GUI, program flow, etc. Then refine that so that you don't take on too much at first, remove unnecessary features, and add what else you want in the first version. Then use that outline to start a task list to create the smallest possible working version of your application. Then it's much easier to add extra features and make it fully functioning.
I like Maximillian's answer. To expand a little, my personal projects are developed to solve something I'm working on already. So when I get tired of repeat work I'll prototype a solution and then use it. If it's similar enough to one of my earlier projects I'll borrow as much code as I can and try to improve the level of my work, make it more professional.
Fusion's use of Source Control is important too. Takes 2 minutes to install SVN.
If you want to turn it into a public open source project, Producing Open Source Software is supposed to be a good read (available both online and in print).
If your personal project is similar to an existing open source project, you should consider contributing to that project instead. A couple of small contributions (bug fixes etc.) are more valuable than a half-finished project.
All of the above, but start to cement the plan in place...
Go for some tools
SmartSheet - even if you are working on your own you should set out some stages and dates
yEd - and Graphity from www.yworks.com
