Bug tracking best practices [closed] - project-management

In my company, these rules apply:
Only testers are allowed to create issues.
Developers must send an e-mail to a tester to have them create an issue.
Developers must send an e-mail to the technical lead to have him assign them issues they think they can resolve.
A developer cannot assign an issue to another developer (he must send an e-mail to the technical lead).
If a developer's issue is blocked by another developer's code, she must solve this problem outside of the bug tracking system.
Only testers are allowed to close issues, and only the issues they opened themselves.
All assignments must go through technical lead so he can track issues.
Bugs that are not directly related to user interface are not entered into the system (must be resolved externally).
What bug tracking flow are you using? Does it work well for you?

We use BugZilla for bug tracking and there are rules like:
Anybody can report a bug, and every change, however small, should go through the bug-tracking system. If it is an enhancement to the product, the bug is marked as an enhancement and the same bug-tracking flow is followed.
Anybody can assign a bug to anybody else, which makes it easy to route an issue to someone else if a bug resides in their code. Sometimes a bug needs to be fixed in more than one place, i.e., there is a dependency on somebody else's code being fixed first before the other person can fix his/her own. In those cases, the bug is assigned to the person who needs to do the work first, and he/she then re-routes the bug to the appropriate person by re-assigning it.
If an issue appears in more than one place, and the code behind each is different but the issue is apparently the same, the bug is cloned so that a separate track can be kept of all the changes.
Technical leads are responsible for prioritizing the bugs based on the demand for that particular fix.
Testers/QAEs are responsible for assigning a Severity to the bug, e.g., Critical/Major/Minor.
All bugs go through the bug-tracking system. Bugs coming from customers are classified separately by a custom flag indicating a customer bug (a sketch of filing one follows below). Customer bugs are mostly in older released builds, and patches are created for them, so those are kept separate.
This way we ensure that we keep track of all changes simultaneously in our source control system (which is TFS, by the way) and in Bugzilla, so that any change can be traced back to the original code change/owner if needed in the future.
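As a concrete illustration of that customer flag (not part of the original post; the field name is site-specific and the server details are hypothetical): with a recent Bugzilla, a bug can be filed with a custom field through the REST API. A sketch in Python:

    import os
    import requests  # third-party HTTP client

    BUGZILLA = "https://bugzilla.example.com/rest"  # hypothetical server
    API_KEY = os.environ["BUGZILLA_API_KEY"]        # hypothetical key

    def file_customer_bug(summary, description):
        payload = {
            "product": "ExampleProduct",  # hypothetical product/component
            "component": "General",
            "version": "unspecified",
            "summary": summary,
            "description": description,
            "cf_customer_bug": "yes",     # assumed custom field for customer bugs
        }
        resp = requests.post(f"{BUGZILLA}/bug", json=payload,
                             headers={"X-BUGZILLA-API-KEY": API_KEY},
                             timeout=30)
        resp.raise_for_status()
        return resp.json()["id"]          # id of the newly filed bug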

Sounds pretty complicated. We are using roughly the following process:
Everyone in the company can open an issue ticket and assign it to a department.
Every department has a "dispatcher" who checks incoming tickets for validity and prioritizes them.
Depending on the department's practices, developers are assigned tickets for the current development cycle by the dispatcher, or they assign themselves tickets, highest priority first (see the sketch at the end of this answer).
When a ticket is solved, it goes back to whoever opened it. That person also performs all activities necessary afterwards, like informing customers.
All tickets are held in a software system that makes these tasks easy. If you get a ticket, you also get an e-mail notification.
This is a lightweight process that encourages developers to take responsibility for their issues.
Aside from this, we have several quality assurance measures in place for the process of changing anything in the software, regardless of the source and type of the change requests. This includes especially:
All code must be reviewed before it is checked into the source code management system. This includes GUI and database reviews by specialized reviewers if necessary.
Code must be tested thoroughly by the developer himself before checking it in.
After the monthly build, all changes have to be tested again to prevent problems that occur due to several changes affecting the same code.
The monthly build enters a "first customer phase" where it is only rolled out to a few customer systems. If this phase shows no previously undetected errors, the build is declared safe.
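A toy sketch of that dispatcher idea (all names invented; the answer's actual system is unnamed ticket software): incoming tickets are validated, prioritized, and handed out highest priority first:

    import heapq
    from dataclasses import dataclass, field

    @dataclass(order=True)
    class Ticket:
        priority: int                        # lower number = more urgent
        title: str = field(compare=False)
        department: str = field(compare=False)

    class Dispatcher:
        """One per department: checks validity, prioritizes, hands out work."""
        def __init__(self, department):
            self.department = department
            self._queue = []

        def submit(self, ticket):
            if ticket.department != self.department:
                raise ValueError("ticket routed to the wrong department")
            heapq.heappush(self._queue, ticket)

        def next_ticket(self):
            return heapq.heappop(self._queue)  # highest priority first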

I've used a great number of issue tracking systems, including gnats (ugh!), Bugzilla (slightly less ugh), Trac, Jira, and now FogBugz. I like Trac most of all, but that's probably because I'm not the administrator on FogBugz and it's being sadly and horribly misused in its current incarnation.
Getting the workflow right is pretty crucial, and oddly enough it starts with deciding what to put in your bug tracker and how to label the things you put in there. As soon as you have a customer, all development teams really track three kinds of issues:
Problems noted by real customers (live bugs).
Problems with new software currently in development (dev bugs).
Things we want to do in the future (features).
Each of these three classes of issues has its own priorities, of course. A 'live bug' that's just a spelling error on a button may be a lot less important than a 'dev bug' that's blocking a publicly announced release, or gating other development, testing, etc.
The severity of an issue describes how horrible the side effects are. In my experience, it boils down to the following levels (sketched in code after the list):
The program is ruining something: data, customers being billed incorrectly, wrong medicine being dispensed. I once worked on a system where a software command retracted a hydraulic arm right through the middle of a serviceman. This is as bad as it gets.
The program is crashing and we don't have a work-around, but it's not ruining anything (other than being down) in the meantime. If the downtime results in something getting ruined, use severity #1.
The program is misbehaving, but we have an identified work-around that can actually be used.
The program is misbehaving in ways that are annoying but don't affect the results.
The program needs to be better in some well defined way: easier to use, implement a new feature, run faster, etc.
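Here is that scale as a minimal sketch in Python (the enum names are my own shorthand, not the author's):

    from enum import IntEnum

    class Severity(IntEnum):
        RUINS_SOMETHING = 1      # data loss, wrong billing, physical harm
        DOWN_NO_WORKAROUND = 2   # crashing, nothing ruined, no usable work-around
        HAS_WORKAROUND = 3       # misbehaving, but a usable work-around exists
        ANNOYANCE = 4            # annoying, results unaffected
        IMPROVEMENT = 5          # new feature, usability, speed

    # per the list above: if downtime ruins something, escalate 2 -> 1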
Another problem that arises a lot in these systems is the concept of 'roles.' As applied to issue tracking systems, roles boil down to who is allowed to do things: who gets to create issues, who gets to change their status, who gets to reassign them to another user, who gets to close them, etc.
In the small- to mid-size teams I've worked closely with, this general set of rules has worked well:
Anyone can create an issue. The creator can assign the issue to any (or most) recipients as it's being created. The default recipient is the Issue Triage team. Developers can note bugs they've found working on code this way, and assign the bug to themselves, to track why they are changing code.
The Triage team meets (specify interval here) to evaluate and assign issues. The Triage team specifically looks for duplicate reports, in which case the new issue is 'rolled up' into the existing issue chain; for unreproduced issues from the field, which are assigned to QA for reproduction; and for high-severity issues from customers (these rules are sketched in code at the end of this answer).
The originator of a bug is the ONLY person that can close it. Bug reports initiated by QA or by a CSR cannot be closed by a developer. Yes, this means that bugs that CS and the dev team disagree on remain unresolved. Why have the issue tracker report an issue as resolved when the people aren't in agreement? If you want a digital repository of lies, you have C-SPAN.
Some teams may want to reserve moving an issue from one department to another to managers, other teams may allow any team member to move an issue on to (or BACK to) another team. This may boil down to management suspicion, or simply to who is allowed to allocate work time.
The Triage process is the key. The Triage team is essentially whoever in your organization decides who works on what, and what gets worked on next. Having the team meet on a regular schedule helps to make sure that really important stuff doesn't get missed, and that the mundane stuff doesn't get dropped due to inattention. If there isn't anything in the Triage queue, the meeting (concall, netmeeting, whatever the implementation is) can be cancelled by the meeting leader.
If you're using Scrum, the Triage team is probably the scrum masters, deciding if an issue is going to be pulled into the current sprint and properly assigning the priority if it's going into the backlog.
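For illustration only (the answer describes a human meeting, not software), those triage rules could be encoded roughly like this, with a hypothetical Issue type and routing callback:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Issue:
        summary: str
        severity: int                       # 1 = worst, per the scale above
        reproduced: bool
        from_customer: bool
        duplicate_of: Optional[int] = None  # id of an existing issue chain

    def triage(issue, assign):
        if issue.duplicate_of is not None:
            assign(issue, "rolled-up")      # merged into the existing issue chain
        elif issue.from_customer and not issue.reproduced:
            assign(issue, "QA")             # field report that needs reproduction
        elif issue.from_customer and issue.severity <= 2:
            assign(issue, "escalation")     # high-severity customer issue
        else:
            assign(issue, "backlog")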

Wait, you write:
If a developer's issue is blocked by another developer's code, she must solve this problem outside of the bug tracking system.
so there are bugs that fall outside of the normal bug flow. Do you then have a second system for tracking those bugs, or are these all ad hoc?
Sounds like your bug tracking system is really a user-defect tracking system.
Does it work well for you or are you looking at alternatives?

I think that customers also should be able to create issues, with no separation between bug reports and feature requests.
Assignment of issues should not be performed by the developers themselves: deciding which issues have to be fixed for the next release should be the responsibility of customers and managers.
Other practices can be found in Painless Bug Tracking by Joel Spolsky.

I’ve used several different types of bug tracking systems over the past 10 years including nothing, a word document, FogBugz, Bugzilla, and Remedy. FogBugz is by far the best one. At that job anyone was allowed to enter bugs, and anyone could assign a bug to anyone else. I found that this worked well especially if I found a small bug in my code. Instead of spending an hour writing e-mails and filling out forms and getting several other people involved, I could quickly log that I found and fixed a bug. This encouraged me to enter all the bugs I found and fix them quickly. If a bug required a lot of work then I would assign it to my manager so he could prioritize it with my other work.
At the job where I used Bugzilla, every time a bug was created, assigned, or changed an e-mail was sent to all the developers and managers. This had the opposite effect, it discouraged me from finding and entering bugs in the system.

Logging bugs is about speed: capture just the minimum amount of information needed to investigate/replicate the bug.
For web projects, this comes down to: 1) a descriptive bug title, 2) the page where the error occurred, 3) a description of the problem plus a screenshot OR step-by-step instructions for replicating the problem (if a screenshot isn't provided).
Screenshots are very powerful for two reasons: 1) a picture says a thousand words, 2) it gives credibility to the bug report (ever investigate a bug you couldn't replicate and think "looks like the client is making stuff up again"?).
I have a blog article which goes into the topic further: Logging Bugs Like a Pro

My small shop uses a pretty simple workflow:
Anyone can create an issue (I think it's unnecessarily restrictive not to allow this). This includes customers and users of our open source projects.
A change control board (sounds fancy, but it's just the QA lead and the head of engineering, plus the product manager) reviews new issues and assigns a fix version and priority.
Anyone can reassign a bug, to ask the reporter a question or pass on to another person to fix or test
Anyone can mark a bug resolved
Only QA can close a bug - we do this to enforce verification of each bug fix (sketched in code below).
This way, everything gets logged in the bug tracking system and we keep things efficient by not restricting updates. You can end up with a bit of "bug spam" this way, but it's better than creating bottlenecks in my experience.
We use JIRA as our bug tracker - it's possible to set up all kinds of custom workflows in JIRA to enforce your particular process, but I've never found the need to do that in smaller organizations.
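For what it's worth, the "anyone can resolve, only QA can close" rule can be pictured as a guarded state machine. A minimal sketch (hypothetical states and roles, not JIRA's actual workflow configuration format):

    # (from_state, to_state) -> roles allowed to make that transition
    TRANSITIONS = {
        ("open", "in_progress"):     {"developer", "qa", "reporter"},
        ("in_progress", "resolved"): {"developer", "qa", "reporter"},  # anyone resolves
        ("resolved", "closed"):      {"qa"},              # only QA verifies and closes
        ("resolved", "reopened"):    {"qa", "reporter"},
        ("reopened", "in_progress"): {"developer"},
    }

    def move(state, new_state, role):
        allowed = TRANSITIONS.get((state, new_state), set())
        if role not in allowed:
            raise PermissionError(f"{role} may not move {state} -> {new_state}")
        return new_state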

What bug tracking flow are you using?
The tester posts all bugs with status Open.
Each bug is assigned to a developer.
The developer tries to fix the bug and marks it Fixed.
The bug is closed.
If the problem reappears, the bug is reopened.

Related

Contributing to open source in Firefox(Mozilla): how do I find the relevant portion of the Mozilla code base where a bug occurs in order to fix

I am trying to contribute to open source, particularly Firefox (Mozilla). I have done my installation and setup, but I have a challenge determining where to look in the codebase to find the file where a bug occurs in order to propose a patch. I would greatly appreciate general guidance on how to proceed. This is my first time attempting to contribute to open source with Firefox.
Basically, upon seeing the bug as reported in Bugzilla (the website where Mozilla bugs are reported), I am clueless on how to proceed from there.
welcome to SO!
I know that contributing to such a big codebase can sometimes feel overwhelming, but I can guarantee you that the Firefox devs really appreciate the effort you are already putting (and will put!) into your contribution. So... thanks for the help!
General tips
The Firefox codebase is huge and complex and has many moving parts. Downloading Firefox and getting it correctly built locally is already a big step forward, and will save you time later. If you haven't done that already, consider doing it!
Read the How To Contribute Code To Firefox documentation page. It gives a good overview of what the code contribution process looks like in Firefox.
Don't feel shy about asking questions! The bug on Bugzilla (or the GitHub ticket) is usually a good place to ask specific questions or get general directions on how to fix a bug in Firefox, and folks are generally friendly, inclusive, and happy to help you help them!
a. If you don't receive a direct response within a few business days (usually 2-3) from somebody on the bug, chances are the notification got swallowed in the "immense sea of notifications, emails, messages"(tm) that devs receive. See the next section about reaching out.
How to find who to talk to?
Who knows about a specific part of Firefox or any Mozilla product? This could seem like a hard thing to figure out, but there are a few tips.
If the bug report is on Bugzilla, good people to talk to would be the Reporter (if they are a Mozilla contributor) or the Triage Owner.
Mentored bugs are bugs that were triaged by the dev teams and designated to introduce folks to the codebase. For these bugs, a "Mentor" is usually shown under "Assignee" in the "People" section of the bug. That's a good person to ask questions!
Mozilla publishes the list of folks who are responsible for components in Firefox. You can find who to talk to based on where the code is / where the bug was filed by consulting this page.
You can send direct requests to individuals over Bugzilla; they are called "needinfo requests". After logging into Bugzilla, on the page of the specific bug you need information on, scroll to the bottom. Type your question in the "Add comment" section, tick the "Request information from" checkbox, and either pick the role of the person you want to flag from the dropdown, or select "other" and paste an email address there (one that you have identified using the previous points). If the person is on Bugzilla, the text field will autocomplete and show the relevant person.
If all the above fails, you can rely on synchronous communication and chat with the devs in the #developers channel.
How to find what code to change?
If it's not in the bug, ask the reporter or the person responsible for that section of code. For bugs marked as "mentored", ask the assigned Mentor!
If the Bugzilla bug doesn't mention specific files and you want to find out yourself without reaching out, your best ally is Searchfox. You can type some keywords from the bug at the top of the page and wait for results from the codebase to come in. This is highly effective! If the bug asks for changes to CSS files, for example, you could add a file filter like *.css in the top right.
Another pro-tip is looking at what other bugs in that same bugzilla product/component touched. You would find that by clicking on the arrow next to the component, then picking "Recently Fixed Bugs in This Component": it will show a list of fixed bugs, you can pick one or more, then look at the attachments.
Hope this helps!

"Works on my machine" - How to fix non-reproducible bugs?

Very occasionally, despite all testing efforts, I get hit with a bug report from a customer that I simply can't reproduce in the office.
(Apologies to Jeff for the 'borrowing' of the badge)
I have a few "tools" that I can use to try to locate and fix these, but it always feels a bit like I'm knife-and-forking it:
Asking for more and more context from the customer (systeminfo)
Log files from our application
Ad-hoc tests with the customer to attempt to change the behaviour
Providing customer with a new build with additional diagnostics
Thinking about the problem in the bath...
Site visit (assuming customer is somewhere warm and sunny)
Are there set procedures, or other techniques, that anyone uses to resolve problems like this?
One of the attributes of good debuggers, I think, is that they always have a lot of weapons in their toolkit. They never seem to get "stuck" for too long and there is always something else for them to try. Some of the things I've been known to do:
ask for memory dumps
install a remote debugger on a client machine
add tracing code to builds
add logging code for debugging purposes
add performance counters
add configuration parameters to various bits of suspicious code so I can turn on and off features
rewrite and refactor suspicious code
try to replicate the issue locally on a different OS or machine
use debugging tools such as application verifier
use 3rd party load generation tools
write simulation tools in-house for load generation when the above failed
use tools like Glowcode to analyse memory leaks and performance issues
reinstall the client machine from scratch
get registry dumps and apply them locally
use registry and file watcher tools
Eventually, I find the bug just gives up out of some kind of awe at my persistence. Or the client realises that it's probably a machine or client side install or configuration issue.
Extensive logging usually helps.
The easiest way is always to see the customer in action (assuming it's readily reproducible by the customer). Oftentimes, problems arise due to issues with the customer's computer environment, conflicts with other programs, etc. - details which you will not be able to catch on your dev rig. So a site visit might be useful; but if that's not convenient, tools like RealVNC can help as well in letting you see the customer 'do their thing'.
(watching the customer in action also allows you to catch them out in any WTF moments that they might have)
Now, if the problem is intermittent, then things get somewhat more complicated. The best way to get around this problem would be to log useful information in places where you guess problems could occur and perhaps use a tool like Splunk to index the log files during analysis. A diagnostic build (i.e. with extra logging) might be useful in this case.
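As a minimal sketch of that diagnostic-build idea (the names and format are my own, not from the answer): write log lines as key=value pairs, which indexers like Splunk pick apart with little extra configuration:

    import logging

    DIAGNOSTIC_BUILD = True  # switched on only in the diagnostic build

    def setup_logging(path="app.log"):
        logging.basicConfig(
            filename=path,
            level=logging.DEBUG if DIAGNOSTIC_BUILD else logging.INFO,
            # key=value pairs index cleanly in Splunk and similar tools
            format="time=%(asctime)s level=%(levelname)s module=%(module)s msg=%(message)s",
        )

    # then, at the places where you guess problems could occur:
    # logging.debug("retry state: attempts=%d last_error=%s", attempts, err)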
I'm just in the middle of implementing an automated error reporting system that sends information back to me (currently via email, although you could use a web service) from any exception encountered by the app.
That way I get (nearly) all the information I would have if I were sitting in front of VS2008, and it really helps me work out what the problem is.
The customers are also usually (sorta) impressed that I know about their problem as soon as they encounter it!
Also, if you use the Application.ThreadException error handler you can send back info on unexpected exceptions too!
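That answer is about .NET (Application.ThreadException); the same idea in Python, as a rough sketch with a hypothetical SMTP host and addresses, is an excepthook that mails the full traceback home:

    import smtplib
    import sys
    import traceback
    from email.message import EmailMessage

    def report_exception(exc_type, exc, tb):
        msg = EmailMessage()
        msg["Subject"] = f"Unhandled error: {exc_type.__name__}: {exc}"
        msg["From"] = "app@example.com"        # hypothetical sender
        msg["To"] = "devteam@example.com"      # hypothetical recipient
        msg.set_content("".join(traceback.format_exception(exc_type, exc, tb)))
        with smtplib.SMTP("smtp.example.com") as server:  # hypothetical host
            server.send_message(msg)
        sys.__excepthook__(exc_type, exc, tb)  # still report the crash locally

    sys.excepthook = report_exception  # runs for any uncaught exception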
We use all the methods you mention progressively starting with the easiest and proceeding to the harder.
However, you forget that sometimes hardware is at fault. For example, memory could be malfunctioning and some computation-intensive code will behave strangely, throwing exceptions with weird diagnostics. Of course it works on your machine, since you don't have faulty hardware.
Experience is needed to identify such errors and to insist that the customer tries to install the program on another machine or runs a hardware check. One thing that helps greatly is good error handling - when your code throws an exception it should provide details, not just indicate that something is bad. With good error reporting it's easier to identify suspicious issues related to faulty hardware.
I think one of the most important things is the ability to ask sensible questions about what the customer has reported... More often than not they're leaving out something that they don't see as relevant, but which is actually key.
Telepathy would also be useful...
We've had good success using EurekaLog with it posting directly to FogBugz. This gets us a bug report containing a call stack, along with related system info (other processes running, memory, network details etc) and a screen shot. Occasionally customers enter further info too, which is helpful. It's certainly, in most cases, made it much easier and quicker to fix bugs.
One technique I've found useful is building an application with an integrated "diagnostic" mode (enabled by a command line switch when you launch the app). That certainly avoids the need to create custom builds with additional logging.
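That integrated diagnostic mode can be as small as a startup flag; a minimal sketch (the flag name is made up):

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument("--diagnostic", action="store_true",
                        help="enable verbose diagnostic logging")
    args = parser.parse_args()

    # same binary, extra output only when the customer runs it with the switch
    logging.basicConfig(level=logging.DEBUG if args.diagnostic else logging.WARNING)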
Otherwise, it sounds like what you're doing is as good an approach as any.
Copilot (assuming customer is somewhere cold and rainy :)
The usual procedure for this is to expect something like this will happen and add a ton of logging information. Of course you don't enable it from the beginning, but only when this happens.
Usually customers don't like to have to install a new version or some diagnostic tools. It is not their job to do your debugging. And visiting a client for cases like these is rarely an option. You must involve the client as little as possible. Changing a switch and sending you a log file is OK - anything more than this is too much.
I like the alternative of thinking about the problem in the bath. I would start by trying to find the differences between my machine and the client's configuration.
As a software engineer doing webstuff (booking/shop/member systems etc) the most important thing for us is to get as much information from the customer as possible.
Going from
"it's broke!"
to
"it's broke! & here are screenshots of every option I picked whilst generating this particular report"
reduces the amount of time it takes us to reproduce and fix an issue no end.
It may be obvious, but it takes a fair amount of chasing to get this kind of information from our customers sometimes! But it's worth it just for those moments you find they're not actually doing what they say they are.
I have had these problems too. My solution was to add lots of logging and give the customer a debug build with all the possible debug information. Then I made sure Dr. Watson (this was on Windows NT) created a memory dump with enough information.
After loading the memory dump in the debugger I could find out where and why it crashed.
EDIT: Oh, this obviously only works if the application terminates violently...
I think following the trail of actions the user took can lead us to the reasons for a failure, or for selective failures. But most of the time users are at a loss to describe their interactions with the application precisely, so automatic screenshot taking helps (if it is a desktop app; for a .NET app you can check Jeff's UnhandledExceptionHandler). Logging all the important actions which change the state of objects can also help us understand what happened.
I don't have this problem very often, but if I did, I would use a screen sharing or recording application to watch the user in action without having to go there (unless, as you said, it's warm and sunny and the company pays for the trip).
I have recently been investigating such an issue myself. Over the course of my career I have learnt that, while computer systems may be complex, they are predictable, so have faith that you can find the problem. My approach to these kinds of issues is two-fold:
1) Gather as much detailed information as possible from the customer about the failure, and analyse it meticulously for patterns. Gather multiple sets of data for multiple failure occurrences to build up a clearer picture.
2) Try to reproduce the failure in house. Keep making your system more and more similar to the customer's system until you can reproduce it, the systems are identical, or it becomes impractical to make them more similar.
While doing this, consider:
1) What differences exist between this system and other, working systems.
2) What has recently changed in your product or the customer's configuration that could have caused the problem to start occurring.
Depending on the issue you could get WinDbg dumps; they normally give a pretty good idea of what is going on. We have diagnosed quite a few problems that weren't crashes from minidumps.
For .NET apps we also use Trace.WriteLine; then we can get the user to fire up DbgView and send us the output.
It's a very complicated issue. I had been thinking of writing a procedure for this, and I've put one together for non-reproducible bugs; it might be helpful.
I am sure all bugs are reproducible, so I always keep an eye out for these kinds of issues.
When the bug occurred (several factors might have contributed):
Get the system information.
Find out what else the customer did before it happened.
Note the time period in which it occurs: is it rare or frequent?
Note the next action after the issue (is it always the same, or different?)
Find the factors behind this bug (as the developer):
Find the exact position where the issue happened.
Find all the system factors at that time.
Check for memory leaks, user error, or a wrong condition in the code.
List all the factors that may cause this issue, how each factor affects it, and what data those factors hold.
Check whether memory issues happened.
Check that the customer is running the same current code as yours.
Check all logs from at least the last month and note any abnormal operation.
Just a short anecdote (hence 'community wiki'): Last week I thought it was a clever idea in a Django app to import the module pprint for pretty printing Python data only if DEBUG was True:
    if settings.DEBUG:
        from pprint import pprint
Then I used the pprint command here and there as a debugging statement:
    pprint(somevar)  # show somevar on the console
After finishing the work, I tested the app with DEBUG=False. You can guess what happened: the site broke with HTTP 500 errors all over the place, and I did not know why, because there is no traceback if DEBUG is False. I was puzzled that the errors disappeared magically if I switched back to debug mode.
It took me 1-2 hours of putting print statements all over the code to find that the code crashes at exactly the above pprint() line. Then it took me another half an hour to convince myself to stop banging my head on the table.
Now comes the moral of the story:
Not everything that looks like a clever idea at first glance turns out to be so savvy in the end (see the sketch below).
An important point to look at when debugging these errors is all the configuration options and platform switches your code makes by itself. This can be quite a lot more than just some user preferences. Document it well if you make an assumption about the user's platform (e.g., if you test for Win/Mac/Linux only, will your code crash on BSD or Solaris?)
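One safer pattern for that particular trap, as a minimal sketch (assuming Django's settings object): keep the import unconditional and make the helper itself respect DEBUG, so non-debug runs hit a no-op instead of a NameError:

    from pprint import pprint as _pprint
    from django.conf import settings

    def pprint(obj):
        # prints in debug mode, silently does nothing in production
        if settings.DEBUG:
            _pprint(obj)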
However tough a non-reproducible problem is, we can still take a structured and strategic approach to solving it - and I can say from experience that it requires out-of-the-box thinking in 50% of cases. Generally speaking, one can categorize the problems into types, which helps identify what tool to use. For example, if you have a non-reproducible application crash or a memory issue, you can use profilers and nail the issue down to the particular functionality.
Also, one of the most important approaches is information-rich logging. I also use a lot of enums to describe the state of the process, depending on the scenario in question. For example, I used states like Initiated, Triggered, Running, Waiting, and Repaired to describe schedule states and saved them to the DB at different stages.
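A sketch of that enum idea (the state names come from the answer; the persistence callback is hypothetical):

    from datetime import datetime, timezone
    from enum import Enum

    class ScheduleState(Enum):
        INITIATED = "initiated"
        TRIGGERED = "triggered"
        RUNNING = "running"
        WAITING = "waiting"
        REPAIRED = "repaired"

    def set_state(schedule_id, state, save):
        # record every transition so the trail survives a failure you can't reproduce
        save(schedule_id, state.value, datetime.now(timezone.utc))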
Not mentioned yet, but "directed code review" is one good solution, especially if you didn't do a proper review (at least 1 hour per 100 lines of code) before release.
I have also seen impressive demos of AppSight Suite, which is basically an advanced environment monitoring and logging tool. It allows the customer to record what happens on his machine in an extensive but fairly compact log file which you can then replay.
As many have mentioned, extensive logging, and asking the client for the log files when something goes wrong. In addition, as I worked more with web apps, I'll also provide detailed, but succinct deployment documentation (e.g., deployment steps, environmental resources that need to be set up etc).
Here are common problems I've seen that lead to the types of problem you are describing:
Environment not set up properly (e.g., missing environment variables, data sources etc).
Application not fully deployed (e.g., database schema not deployed).
Difference in operating system configuration (default character encoding being the most common culprit for me).
Most of the time, these issues can be identified through the log content.
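Since most of these turn up in the logs anyway, it can pay to log them proactively at startup. A small sketch using only the Python standard library (the variable names are placeholders for your own deployment):

    import logging
    import os
    import sys

    REQUIRED_ENV = ["DATABASE_URL", "APP_HOME"]  # placeholders

    def log_environment():
        for name in REQUIRED_ENV:
            if name not in os.environ:
                logging.error("missing environment variable: %s", name)
        # the most common culprit named above: default character encoding
        logging.info("default encoding: %s", sys.getdefaultencoding())
        logging.info("filesystem encoding: %s", sys.getfilesystemencoding())
        logging.info("platform: %s", sys.platform)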
You can use tools like Microsoft SharedView or TeamViewer to connect to a remote PC and inspect the problem directly on site. Of course, you'll need the customer's cooperation.

Implementing features vs. bug fixing [closed]

I am interested in how much of your daily work time do you spend on implementing new features compared to fixing bugs.
I don't code any new features as long as there are some unfixed bugs in my software.
The only reason I can think of to leave a bug unfixed in my software is that it's definitely too costly to fix. In this case, we may choose to reclassify it from 'bug' to 'known limitation' or 'known bug', and we adjust the feedback we give to the user accordingly, so that the user knows exactly what's going on and why it's not fixed (see my edit below).
So typically, I spend all of my time bug fixing as long as QA is complaining about something, and all of my time coding when they're not! :)
I do that because:
When software does a lot of things but crashes randomly, the user will get the feeling that he cannot rely on it, and there's NOTHING you can do to fix that. Ever.
When software lacks some features but is good at doing what it does, the user rather thinks: "That may be a great piece of software, too bad it doesn't support X and Y... I'll check the next release in 6 months."
Joel Spolsky has written an interesting post on that question in his 12 steps to better code.
Edit to answer comments: If I'm experiencing random crashes, that's definitely a bug, not a "known limitation". Once I know exactly what is going on, and only then, I can decide whether I can fix it or not.
I was rather thinking of the following situations :
the bug is provoked by code that doesn't belong to me (typically a third-party library). If implementing a workaround is very complicated, it might be OK to wait for the third-party vendor to fix it. Real-world example: ClickOnce doesn't work in some proxy situations... I expect Microsoft to fix it, eventually.
If the bug is that a specific feature doesn't work in all situations, and the feature is too difficult to implement for those specific situations, I think it's OK to warn the user, before he uses the feature, that what he is trying to do is not implemented, rather than just crashing.
I work for a group inside my company that is supposed to both create "featurettes" and respond to customer issues. I tend to spend more time on high-priority customer issues (read: bugs). So I would say my time is nearly 100% spent on fixing bugs.
That said, let's read between the lines a bit. It seems that this question is a way of saying "ugh, I spend so much time on bug fixing... wish I could do more feature development". If that is the case, I think you need to look inward a bit.
As I said, I spend nearly all my time fixing bugs for customer issues, but I have also written a ton of tools to help with that process. I have everything from specialized log analyzers to generic Visual Studio solution-file error checkers. Not to mention some of those sweet WinDbg scripts I have written for esoteric breakpoints!
It is by doing stuff like that where I fulfill that desire to work on "something new". And in a way, it is much more rewarding than implementing some new small cog in a huge enterprise application.
Since I don't get paid to maintain any project, most of the time I'm working on new projects, hence adding new features all the time.
However, each feature needs to be tested and debugged thoroughly, so you could say that 30-40% of the time spent implementing a feature will go into debugging it.
Many projects have a development phase ("code thaw") where active adding of new features occurs concurrently with bug fixes, and a "code freeze" stage where the feature set is frozen and 100% of the work goes towards bringing the critical bug count to 0 (or fixing as many bugs as possible before a fixed deadline), so the answer depends on the stage your project is in.
When I "do bugs," I also make my best effort to claim at least one feature to work on at the same time, or (when encountering a particularly buggy block of code) request a mandate to refactor the entire block. Thus I get to do some new development (and, face it, most of us prefer to write new stuff to fixing the old) while reducing the bug count.
There's a broad spectrum of priorities I have in my head when I'm triaging my work:
Bugs affecting a customer's ability to do their business or access their data. No work is done until any bugs like this are taken care of.
Other high-priority bugs or features. These are usually "known issue" type bugs, or enhancements that have become too painful to deal with anymore and now require a code change. Also, features requested by big customers or prospects generally fall into this category.
Everything else. This includes maintenance, nice-to-have features and just general itch-scratching-type maintenance on our code base.
As you can imagine, #3 category work doesn't get worked on all that often, which is a bit frustrating from an engineering perspective. But, our customers love us since they get an engineer working on their issues almost immediately after they call our support line and generally have a resolution within 24 hours, regardless of their size or importance.
It depends on the project. If there is a show-stopper bug I focus on it, but sometimes when I'm not motivated enough I just add one new cool feature, so I can at least work on something instead of doing nothing.
This applies to personal projects or pre-release / research products.
It all depends on the kind of project I am currently working on.
If the project is new, then we have a phase called bug fixing after the testing phase. Most of the bugs get fixed there. (!)
If the project is a maintenance project, then fixing bugs is a daily routine.
It depends on the bug.
Is it a minor cosmetic issue, such as a misaligned label, or a huge knock-out bug that corrupts data?
Even if it is minor or cosmetic, is it causing user headaches, like a pop-up opening in the wrong place? Is the data-corruption bug only in Firefox 2 during a full moon (and your corporate intranet is IE 6)?
Good question though...

Responsibility without Authority is Meaningless - a technical-based solution?

My dad always says "Responsibility without Authority is meaningless".
However, I find that as developers, we get stuck in situations all the time where we are:
Responsible for ensuring the software is "bug free", but don't have the authority to implement a bug tracking system
Responsible for hitting project deadlines, but can't influence requirements, quality, or team resources (the three parts of project management)
etc.
Of course there are tons of things you could say to get around this - find a new job, fight with boss, etc....
But what about a technical solution to this problem? That is, what kind of coding things can you do on your own without having to convince a team to correct some of these issues - or what kind of tools can you use to demonstrate why untracked bugs are hurting you, that deadlines are being missed because of quality problems, and how can you use these tools to gain more "authority" without having to be the boss?
An example: the boss comes to you and says "Why are there so many bugs!!?!?" - most of us would say "We don't have a good system to track them!", but in my experience this is usually seen as an excuse. So what if you could point to some report (managers love reports) and say "See, this is why"?
All you can do is your best. Don't feel as if the key to successful software is only in your hands; you're part of a team and don't have to be responsible for everything.
Obviously you are in an environment that negatively affects your software, but you can't change everyone else's behavior, so I recommend you start with your own: work as a team of one with your own bugs, deadlines, requirements, quality, and resources. Don't bother with the rest of the mess, but try to be the best at your work.
Work as a self-directed team of one, showing your boss your plans and reports of your progress, asking for more resources when you need them, and showing him how your plans are affected when he gives them to you or not.
You can find more advice about this in the PSP and TSP articles on Wikipedia.
After showing your boss good work and meeting your own deadlines, surely he will trust you more and let some of your ideas flow to the entire team.
You don't need a bug-tracking system, you need automated tests: unit tests or otherwise. You can set up automated tests with a Makefile. You can always find paths that are blocked by management, but that doesn't mean there aren't things you can do within the constraints of your job. Of course, the answer could be "find another job". If you can't find another job now, learn some skills so that you can.
The simple answer is -- you can start using the tools yourself.
Improve your own work. If people want you to fix code, tell them to file a bug. Show them how. Make sure they can do it without installing anything. They want a status update? Tell them to check the bug. They ask about a code change you made? Show them how to run a source control history query, or just show them on your box. Start showing them that this stuff works.
And when you need the same results from them, demand that they do the legwork. When you can't find the changes in your source control, ask them to start diffing their revisions manually from the backup tapes. Don't do their work, or the work of source control and bug tracking, for them.
And most importantly, when applying this peer pressure, be nice about it. Flies and honey and all.
If they don't get it, you can continue to be the only professional developer in your company or group. Or at least it will help pad your resume: 'experience setting up and instructing others in CVS and FogBugz to improve product quality' and the like.
As for specific tools for showing that untracked bugs are hurting the team's ability to produce quality code, you've got a catch-22 here since you need something to track bugs before you can show their effect. You can't measure what you can't track. So what to do?
As an analogous example, we recently had a guy join our team who felt the way we did code reviews via email was preposterous. So, he found an open source tool, installed it on his box, got a few of our open-minded team members to try it out for a while, then demoed it to our team-lead. Within a few weeks he had the opportunity to demo it to all our teams. The new guy was influencing the whole company. I've heard lots of stories of this guerrilla-style tool adoption.
The trick is identifying who has the authority to make the decision, finding out what they value, and gathering enough evidence that what you want to implement will give them what they value.
For a broader look at how to lead from the middle, or bottom, of an organization, check out John Maxwell's The 360 Degree Leader.
If you want a report about quality and its impact on productivity, here's the best:
http://itprojectguide.blogspot.com/2008/11/caper-jones-2008-software-quality.html
Capers Jones has a few books out and is still showing up at conferences. Outside of a good IDE, a developer/IT group needs source code control (VSS, Subversion, etc.) and issue tracking.
If an accountant were asked to produce a set of accounts without using double entry, and the accounts didn't balance, no one would expect the accountant to do so.
Yet double entry has been in standard usage by accountants since about the 13th century.
It will take a long time before we as a profession have standard practices so ingrained that no one will work without them.
So, sorry, I expect we will have to face this type of problem for many years to come.
Sorry for not answering your question directly, but...
I feel strongly that the failure you refer to is one of communication, and it's incumbent on us as professionals to develop our communication skills to the point where we are respected enough and trusted enough to leverage the authority we need to improve our working environments and processes the way you suggest.
In short, I don't think there is a technical solution that can solve all the problems created through poor communication in the workplace.
If anything, technology has caused the attrition of direct face-to-face communication.
Sorry, I'm off on a tangent again - feel free to downmod.
By coding alone you can only keep your own source files tidy and well commented, and keep the bug count low with tests. You are going to need external tools for tracking progress and bugs (Bugzilla, Yoxel, Trac, Gantt diagram tools, Mylyn for Eclipse, a blog, whatever). In these cases the people, the discipline, the good habits, and the leadership are the overwhelming force; no software tool and no effort from a lone individual can win alone.

How well does Bugzilla work for managing Scrum projects? [closed]

We have MS Sharepoint -- which isn't all bad for managing a task list. The data's publicly available, people are notified of changes and assignments.
I think that Bugzilla might be a little easier for management and reporting purposes. While there are some nice Open Source Scrum management tools, I've used up a lot of my political capital and can't ask for too much more than what we've got now. Money isn't the object -- obviously -- it's the idea that my team has too many specialized tools.
Will Bugzilla work out as a more general project management tool -- outside the bug fix use cases?
Will I be bitterly disappointed and wish I'd downloaded something else and made my case for a better project management tool?
Bugzilla is a great bug tracking system. We have tried to use it for other project management tasks and the results were less than stellar. I would recommend finding something designed with your goals in mind.
Try it for yourself.
Get a $15/month account at wush.net and use it yourself for a while (no business relationship besides satisfied customer).
Bugzilla is powerful and has a lot of configuration options, which can be confusing.
I personally used it three years ago on a project I was working on. I had no project manager and I was the only developer, so I needed a very-light-overhead system. Bugzilla gave me that. I put my main goal in as an enhancement, "productionalized system", and then I made dependencies to reach that point. I ended up having 160 nodes, all dependent on each other. This was essentially a work breakdown structure. I didn't bother with time estimates, and I didn't bother creating any other kind of project documentation.
A cool advantage was that as I coded, if I noticed something needed to be done, I would just pop it into bugzilla (20 second process once it's set up), tie it as a dependency, and go back to what I was doing.
Whenever I completed a task, I would look at the dependency diagram and find the outermost leaves (bugs that blocked others but weren't themselves blocked - see the sketch below), and work on those.
The advantage of this method for me is that if a task had looked simple and had one node associated with it, but when doing the thing itself I realized it was more complex, I would just split it into different subtasks. This took only a minute and absolutely didn't involve a meeting with a project manager.
Other people on the team could track my progress by looking at open bugs, closed bugs sorted by date, etc. They saw action, so they left me alone. When I had external dependencies, I would make a bug, detail the work, and send that person a link via email. They could then see why the work was needed by looking at the dependency diagram.
Note that unless previously agreed upon, I did not assign them the bug.
It worked really well and the system was ready one month early.
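The "outermost leaves" trick is easy to automate. A minimal sketch (the data shape is hypothetical, not Bugzilla's): given which open bugs each bug is blocked by, the bugs ready to work on are the ones with no open blockers:

    # depends_on maps bug id -> set of ids of open bugs blocking it
    def ready_to_work(depends_on):
        return [bug for bug, blockers in depends_on.items() if not blockers]

    # example: bug 3 blocks bugs 1 and 2, nothing blocks 3 -> do 3 first
    print(ready_to_work({1: {3}, 2: {3}, 3: set()}))  # -> [3]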
How will it work with SCRUM? Having only had a cursory glance at scrum I can't tell you. But that was my experience.
Using a dedicated host will allow you three things:
support
easy upgrades (unless you got gurus in-house, bugzilla management ain't easy--for me at least)
users across organizational boundaries.
Note that bugzilla has all sorts of security features, so it's easy to lock-down the users to what they need to see.
My stand-alone solution is DokuWiki + MantisBT + Subversion + Review Board, which can be integrated with relative ease. A hosted alternative is Bitbucket.org. The rationale: you write user stories on the wiki and can reference them from specific tasks. Larger bugs can be collaboratively designed, and the "wiki" link is provided on the bug report by Mantis. Review Board lets you do peer code reviews against an svn diff before the change is committed.
We've used Trac and Subversion very successfully for several projects.
The main advantage here is being able to tailor reports, some very Scrum specific, to provide information to management.
