Have you ever had a bug in your code that you could not resolve? I hope I'm not the only one out there who has had this experience ...
Some classes of bugs are very hard to track down:
timing-related bugs (that occur during inter-process communication, for example)
memory-related bugs (most of you know appropriate examples, I guess!)
event-related bugs (hard to debug, because every breakpoint you hit makes your IDE the target for mouse release/focus events ...)
OS-dependent bugs
hardware-dependent bugs (occur on the release machine, but not on the developer machine)
...
To be honest, from time to time I fail to fix such a bug on my own ... After debugging for hours (or sometimes even days) I feel very demoralized.
What do you do in this situation (apart from asking others for help which is not always possible)?
Do you
use pencil and paper instead of a debugger
move on to something else and return to this bug later
...
Please let me know!
Some things that help:
1) Take a break, approach the bug from a different angle.
2) Get more aggressive with tracing and logging.
3) Have another pair of eyes look at it.
4) A usual last resort is to figure out a way to make the bug irrelevant by changing the fundamental conditions in which it occurs.
5) Smash and break things. (Stress relief only!)
I once worked for a company that sold a client-server application that was basically a file transfer and synchronization tool. Both the client and the server were custom applications we had designed.
We had a persistent bug that was very hard to duplicate in the lab. Our server could only handle a certain number of incoming client connections per box, so many of our customers would "cluster" multiple servers together to handle large user populations. The back end data for the cluster was on a file server they all shared. In this cluster configuration there was a bug that would happen under load where we would get a low-level file system error code on a file sharing call involving one of the back end files. Nobody could get this to repeat reliably in the lab, and even when they could they couldn't narrow down what was happening.
(I forget the exact error, it was probably 59 ERROR_UNEXP_NET_ERR or maybe 65 ERROR_NETWORK_ACCESS_DENIED. As I recall it was not even one of the documented error codes you were supposed to be able to get from the API we were calling, which was usually a lock or unlock call on a file section).
Since it involved the communication between the server and the back-end file store, and I was the "network transport" guy, I was tasked with looking at it. Many others had looked at it with no luck.
The one solid thing I had was that I knew where in the code the error was being detected, but not what to do about it. I needed to find the root cause, so I set up an appropriate hardware environment to duplicate it and deployed a custom build of the server software that instrumented the section of code in question.
The instrumentation was as follows: I added a test for the troublesome error code, and had it call a piece of code to send a UDP packet to a predetermined network address when the error occurred. The UDP packet contained a unique string in it to key on.
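(For illustration only: the original instrumentation was C against the Windows APIs, but a minimal sketch of that kind of UDP beacon in Python might look like this. The address, magic string, and error code are made-up placeholders, and the wrapped lock_fn stands in for the real lock/unlock call.)

    import socket

    TRIGGER_ADDR = ("192.0.2.50", 9999)  # hypothetical sniffer-side address
    MAGIC = b"BUG-TRIGGER-7f3a"          # unique string for the sniffer to key on
    TROUBLESOME_ERROR = 59               # placeholder, e.g. ERROR_UNEXP_NET_ERR

    def report_trigger():
        # Fire-and-forget datagram; it must not disturb the code under test.
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.sendto(MAGIC, TRIGGER_ADDR)

    def checked_lock_call(lock_fn, *args):
        # Wrap the suspect API call and send the beacon when the bad code appears.
        rc = lock_fn(*args)
        if rc == TROUBLESOME_ERROR:
            report_trigger()
        return rc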
I then set a packet sniffing tool on the network. (At the time I was using Microsoft Network Monitor). I positioned it where it would be able to "see" this UDP packet when it was sent as well as all the communication between the cluster servers and the file server.
Most good sniffers have a mode where you can have it capture until it sees a particular piece of traffic, then stop. I turned on that mode and set it to look for that UDP packet my code would send. The goal was to end up with a packet capture of all the file server traffic right before the bug occurred. The very last network packets to and from the system where the UDP packet originated would presumably be a big clue as to what was happening.
I set the "stress test" configuration going and went home for the weekend.
When I got back on Monday, lo and behold I had my data. The sniffer had stopped just as expected after many hours of running and contained a capture. After studying the capture, what I found was that the Server Message Block or SMB (aka CIFS aka SAMBA) connection between our server and the file server was actually timing out at the TCP level due to extreme loading on the server. Because all of Microsoft's stuff is heavily layered, it would percolate back up through the file sharing stack as an "unexpected" error instead of returning a more intelligible error code that said "hey, you lost your connection at the TCP level".
I did a little more research on the TCP settings for Windows, and lo and behold the defaults for the version of Windows we were using (probably NT 4 in that era) were none too generous. It was only allowing for a very small number of failures on the TCP connection and boom, you were dead. Once you lost your SMB connection to the file server, all your file locks were toast and there was no way to recover.
So I ended up writing an appendix to the user manual that explained how to alter the TCP settings in Windows to make your cluster server a bit more tolerant of high load situations. And that was it. The fix to the bug was zero change in code, merely some additional documentation on how to properly configure the OS for use by this product.
What have we learned?
Be prepared to run altered versions of your code to investigate the problem
Consider using non-traditional tools to solve the problem (sniffers)
Not all bug fixes require code changes
Sometimes you can diagnose a bug while at home having a beer
I do a number of different things:
throw out all my assumptions and start from scratch. Remember, a bug exists because something which appears correct is actually wrong. Even those lines or functions or classes that you are absolutely certain are correct may be incorrect. Until you can convince yourself of the correctness you can't assume anything is right.
keep putting in print statements and assert statements to eliminate things and allow me to reform new assumptions.
step through code in the debugger if the problem is a control flow problem. Don't step over functions. Step in them and go through all the detail of their execution to confirm they are working right. Confirm the arguments and return values.
If a line or function or class is suspect but I can't prove it in situ, then I write a small test case that does what I think the problem construct does. This may locate the problem or give some insight as to where to look next.
stop for the day. It's amazing what kind of offline processing your brain will do overnight. Often the answer or a key insight will appear the next day while I'm doing something mindless like showering or driving.
Create an automated way to cause the bug. The worst bug to fix is one that takes hours to reproduce.
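To make that last point concrete, here is a sketch of such an automated reproducer, assuming the failure can be detected programmatically (op and check are hypothetical stand-ins for the operation under test and its pass/fail check):

    import logging
    import random

    logging.basicConfig(level=logging.INFO)

    def reproduce(op, check, max_runs=100_000):
        # Hammer op() with recorded random seeds until check() reports the bug.
        for run in range(max_runs):
            seed = random.randrange(2**32)
            random.seed(seed)
            op()
            if not check():
                logging.error("bug reproduced on run %d (seed=%d)", run, seed)
                return seed  # re-seed with this value to replay the failure
        logging.info("no failure in %d runs", max_runs)
        return None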
Quote taken from Cryptonomicon:
"Intuition, like a flash of lightning, lasts only for a second. It generally comes when one is tormented by a difficult decipherment and when one reviews in his mind the fruitless experiments already tried. Suddenly the light breaks through and one finds after a few minutes what previous days of labor were unable to reveal."
I usually ask someone else to take a look at the code. While I'm explaining what the code is supposed to do, I sometimes see the bug just as I talk.
When a bug is a tough one, I sit and work until I figure it out and solve the problem. Interestingly enough, there are times when catching a mysterious bug is more enjoyable than everything running smoothly. And the relief and feeling when a bug is resolved, well, not many other things can beat that (except the obvious ones).
If all else fails, don't tackle it directly. Rewrite the problem area code in a more refactored way.
I have definitely had bugs which I worked on for 4-5 days continuously before finding a solution. Other bugs have sat in the bug tracker for months, as I put in a few hours spread out over a long period of time. I think this sort of bug is inevitable in any complex software project.
Some stuff that works well for me:
binary search through the program flow with logging (see the sketch after this list)
use Trace statements along with DbgView to search for bugs which show up in release mode
find an alternate way to reproduce the bug without changing the code
(sounds counterintuitive, but...) change the code so that the bug is more easily reproducible (i.e., the failure condition is more readily achieved)
sleep on it and try again tomorrow with a fresh pair of eyes :)
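A minimal sketch of that binary-search idea, under the assumption that the flow can be cut into steps and that the bug breaks some checkable invariant (both are placeholders here):

    import logging

    logging.basicConfig(level=logging.DEBUG)

    def locate_bad_step(steps, invariant):
        # Run each step and test the invariant after it; the first step after
        # which the invariant fails brackets the bug. When full sweeps are too
        # slow, place a single log probe at the midpoint instead and halve the
        # suspect region on each run: the same bisection, spread across runs.
        for i, step in enumerate(steps):
            step()
            if not invariant():
                logging.error("invariant broken after step %d (%s)", i, step.__name__)
                return i
        return None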
The worst sort of bug in my opinion is a concurrency bug which disappears when logging is inserted.
Lots of great answers here. One thing that's worked for me in the past is to ask "what can I do to make it totally obvious when this problem has occurred?".
For example, if the problem is a corrupted value in a data structure, try building a consistency-check routine that you can run periodically. Also consider implementing all access to the shared data through a set of functions that log each change.
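As a sketch of such a routine (the non-negative-values invariant below is just an invented example standing in for whatever consistency your real structure requires):

    import logging

    class CheckedTable:
        # Wrap a shared dict: log every change, and verify an invariant on demand.
        def __init__(self):
            self._data = {}
            self._log = logging.getLogger("CheckedTable")

        def set(self, key, value):
            self._log.debug("set %r = %r", key, value)
            self._data[key] = value

        def check(self):
            bad = [k for k, v in self._data.items() if v < 0]  # invented invariant
            if bad:
                self._log.error("consistency check failed for keys %r", bad)
            return not bad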
Or, if the problem is a "random" memory overwrite, use a replacement malloc()/free() implementation that traps writing to "free" memory (like electric fence or dmalloc).
Someone else mentioned automating the process of triggering the bug. This is great if you can do it. Even having a routine that randomly exercises the program might help in these cases.
Seriously? I do things in this order.
Go to bed
Ask a colleague
Rewrite so the area isn't affected.
Ask SO
Raise a support ticket with your 3rd party library vendor.
"What do you do in this situation (apart from asking others for help which is not always possible)?"
When is it not possible to ask for help?
There are always others you can turn to for assistance - your coworkers, your boss, friends here at Stack Overflow, etc.
Understanding when to seek help shouldn't be demoralizing!
There are a lot of good tips here.
One that I absolutely do not agree with is the idea of changing the code and hoping the bug will go away. First off, you are probably going to introduce new bugs. Second, you can easily change things enough to hide the bug, only to have it resurface again with the next patch.
Memory corruption bugs are especially likely to vanish as magically as they turned up. However, the memory corruption isn't fixed; it is only that non-fatal areas of memory are getting trashed.
1) Try a different debugger. For example, I use WinDbg more and more often. When you load a program in a debugger, the memory layout of your application changes slightly. Maybe a different debugger will cause the error to manifest slightly differently.
2) If you resort to changing code without knowing exactly what the problem is, then if the bug goes away, YOU MUST go back and understand why the change fixed the bug. Otherwise, you are probably just hiding the bug.
3) Talk to others about the bug, maybe they have seen different versions of the same problem (i.e. other ways to recreate it)
4) Logging.
I've had bugs that took weeks or months before a solution was found, but eventually all bugs do get fixed. Aside from the classical non-debugger bug-tracking techniques like disabling parts of the system until you get a minimal test case, I've used these techniques:
Looking for better debugging tools. A new perspective goes a long way. Xdebug is something I started using in PHP only because of a performance bug that I wasn't making headway on.
Studying the technology that the bug is located in. This has helped to debug an Outlook add-in. It had random errors that made no sense and that Google searches turned up zilch about. By researching Outlook add-in best practices, COM, and MAPI programming, we got a clearer picture of what could go wrong, and thought of new things to try, which eventually did fix the bugs.
Trying to exacerbate the problem. If there's an issue that only happens occasionally, I'll try to find ways to make it happen constantly. This has helped to track down errors in web apps under IE and also to narrow down a crashing bug in the flash plugin.
When all else failed, I've rewritten the subsystem that caused problems from scratch. This may take a few days, or even weeks, but if you're stuck on a bug, and can't resolve it, and customers won't take no for an answer, what else can you do? This doesn't always fix things, but if it doesn't, you usually get a clearer picture of what's going wrong.
I've noticed a few commonalities in these bugs that I get stuck on for weeks:
Asking 3rd parties for help rarely helps, and it's generally not a good idea to wait for someone else to come save the day.
Almost always the fault is inside some 3rd party closed source technology, especially when using obscure parts. IE had nasty bugs when trying to use client certificates. Flash didn't deal well with randomly generated drawing instructions (some of which were nonsensical). Outlook doesn't like it when you try to change form layout dynamically from code. These days I've learned to respect the "comfort zones" of proprietary tech.
I give it more time. I once had a bug (in a personal project) that I just could not figure out. I tried every debugging method I could think of, including Google, with no success. Six months later, I came back and found the bug within an hour or so. It wasn't something simple (something apparently undocumented was going on deep inside Swing), but I just looked at it in a way I hadn't before.
I've had this problem before; I believe everyone has. I have flat out given up before: the bug was simply impossible to find, yet the program kept crashing. When there's some kind of bug in the code, what I do is just sit down and concentrate on every bit of the code, little by little, until I find it. It's hard and it takes patience, but it's all you can do in such a situation.
Hope this helps.
I honestly cannot recall a bug that I couldn't fix. It may cause a lot of refactoring, or may take a while, but I've never had one that I can't get rid of. If it takes me more than an hour to track it down then it's almost always something really stupid and small like looking right past that : that should've been a ;, etc.
In python, if I'm using an editor that isn't mine, or maybe it's someone else's code, I use retab! in vim, or paste into something like pastie to check indentation (if I don't have vim available).
If it's not a crasher/deal breaker, then I move on and come back with a fresh pair of eyes.
Oh, and you can never, ever have too much logging.
I add as much debug as possible (write to log file, message boxes, etc.), and test.
I don't think this is the worst bug you can find. The worst ones are those you can't reproduce deterministically or in the testing environment.
I get a bit demoralized too when I'm unable to solve a bug. Usually when I hit a wall with a bug, I just take notes on my findings and stop working on it. I jump on another bug that is easier to solve and then come back to the first one. By doing this, I have a fresh mind and attitude when tackling the bug. You sometimes have a tendency to overcomplicate things when spending too much time on a bug. Taking a break helps in breaking the wall.
RWendi
First off, is it reproducible? That's a HUGE plus if it is. I want bugs to always happen or never happen... it's the intermittent ones that are the troublesome ones.
And it is going to depend on the problem, but at my shop we'll generally tag-team such a problem figuring that 2 heads (or 3 or 4) is better than 1.
Occasionally the bug won't even be in MY code, but it generally is. There have been issues where a 3rd party library was the culprit or a particular implementation on a particular platform was the cause - those stink.
I'll use anything and everything to at least track it down: debuggers, trace output, whatever.
Typically, if I can isolate it to a class or module I'll write a test harness to duplicate the real world and try to duplicate it there. I generally write my test code first, but sometimes legacy code (or other developer's code) exists that doesn't have tests already.
I generally will talk the design and problem through, out loud with the team and whiteboard anything that isn't clear. Often the solution will bubble to the surface once we talk about it as a group.
That's what I do.
I usually try hard to solve it. But if that is not possible within a reasonable window of time, I leave it for a while and let the brain cells work on it while I sleep ;) Sometimes it works...
I've considered asking for help on this website called StackOverflow that I've been frequenting lately...
This is what I did today...
I debug HW/SW interaction, and it's often the case that logging (instrumentation) changes or hides the bug. Hence tests are performed "at speed". I call these bugs "roaches", as they run away from any light I can shine on them.
So I have to:
Find the transaction that causes the bug. List the HW interaction via logging (this test passes, but it illustrates the flow).
Instrument before and after the bug to print state changes.
The bug I'm solving now is of course the worst case, as the HW locks up. The HW includes the CPU, so it's like being in a well-lit room when the power fails and it's pitch black.
I have a special backdoor view into memory, but of course this is locked up also. I tried power cycling in the hope that the memory would stay non-volatile long enough to re-enable the backdoor. No such luck. It is possible, though.
I very, very carefully wrote down all the steps I went through to characterize this bug (what works, what fails, etc.). Sent this to developers with similar HW to verify it wasn't just me or my HW.
I took a few hours break to let this info settle and see if any lightbulbs lit elsewhere.
No replies, this bug is mine to solve...
This HW/SW interaction is a loop that does some setup, then enters a polling loop that reads when the transaction is finished. Many transactions should occur. Which transaction fails? Is it the first one (indicating I can debug the transaction and not some noise in the HW)? Is it always the Nth transaction? What makes the Nth different from the first or the (N-1)th? The SW is single-threaded and built to be predictable. No preemption, no interrupts enabled.
This SW has worked before; what's new? All the HW is new. In this case all the silicon is new, as it's an ASIC. Even the embedded CPU is new and customized, so the ISA is new.
So I suspect everything and I'm blind. I'll have to sneak up on this roach.
I enabled just the log that reports how many times the SW polls the HW for completion. In this way the first transaction runs at speed, and I get an idea how often I touch the HW in a tight polling loop. The test passes. I know it's the Nth transaction, and I recorded the peak number of polls across all transactions (perhaps meaningless data).
After modifying anything, I have to put it back the way it was to verify the bug still exists. After all, the earth has rotated and the solar winds are not as strong ;)
Looked at all the checkins, saw a contractor changed some important setup parameters with no explanation. These (outsourced) people are still under evaluation. This will not help.
Found there was no spinwait in the polling loop. Bad for the loop timeout, since without it the timeout depends on CPU speed. Added a spinwait; still no happiness.
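(The real code was embedded firmware; as a rough Python sketch of the point, the timeout should be bound to the wall clock rather than to an iteration count. poll_done and the constants below are placeholders.)

    import time

    TIMEOUT_S = 2.0  # absolute bound, independent of CPU speed
    SPIN_S = 0.001   # the spinwait between polls

    def wait_for_done(poll_done):
        # Poll until poll_done() is True or the wall-clock deadline passes.
        deadline = time.monotonic() + TIMEOUT_S
        while time.monotonic() < deadline:
            if poll_done():
                return True
            time.sleep(SPIN_S)
        return False  # timed out: the HW has hung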
Limited the number of transactions to see where it fails, somewhere before 1000.
Set up the HW to run slower; it still hangs.
Hate to leave anyone reading this hanging too, but this diatribe will have to wait till tomorrow.
There is no bug that can't be fixed, since there is no bug that can't be fixed with a total rewrite.
An unfixable bug is just a bug you aren't willing to replace.
For memory-related bugs I have found that the memory profiling options of ANTS Profiler have helped me quite a bit in finding bugs.
Use more creative methods of tracking the bug down:
use remote debugging on the machine where it's reproducible
use profiling tools
introduce more logging to the app
Going away for a while and then coming back to a problem is one common approach I do and have heard.
How easily the bug can be reproduced is a factor as well: if the error only occurs in one in a zillion runs of a program, the gain from fixing it may be negligible against the risk of breaking something else.
There is also the question of nailing down where the bug is. Is it in some configuration, so that it occurs on a server but not on my local XP Pro machine running IIS 5.0? Other bugs may involve having to change the resolution of my machine, which can be annoying when trying to reproduce a bug that others have reported.
You left out the "occurs under another OS" category of bugs: a web page that is fine in IE and Firefox on a PC may look like crap in Safari on a Mac. Do I get my hands dirty trying to fix a CSS issue, using my machine as a server and the Mac that is a row or two over in the cubicles, or is it so low a priority that it gets swept under the rug? Alternatively, if a bug occurs only on Linux and there aren't any Linux machines near me, what should I do?
I'm sorry to end with some questions, but these are difficult questions for me at times.
In addition to the debugger, I've also used logging and old fashioned paper and pencil. On occasion I've found really hard bugs, like code that runs fine in debug mode, but breaks in release mode. I've even occasionally rewritten perfectly good code that for whatever reason, doesn't work reliably, figuring that it's better to be reliable than elegant.
I sometimes try to redefine what others term a bug as really being a feature, but that seldom works!
I have a bug that shows up every few months on a customer site. It usually happens at 3am and it's not discovered until early the next morning when the customer arrives at their site. And usually when they discover it, they want everything to get working immediately, so our support people generally just reboot the computer. It's been driving me nuts for years. It never happens on my test machine or in the QA lab, only at certain customer sites. Over time, I've
refactored some of the code that I thought was causing it
added more debugging printouts around where it appears to be crashing
redirected stdout so that next time I see it I can "kill -3" the process
given support some new tools to dump out the current state of database locks and the like.
added diagnostics to make it more obvious when it does happen
It hasn't happened in a few months, and I've got my fingers crossed that I might have fixed it this time, but I'm not counting on it.
If it's not critical, don't fix it, you'll just spend too much time!
Keep the bug open. Comment/work on it when you can. It might get fixed by accident (or by someone else) later on!
Sometimes it takes a little lateral thinking, but every bug is fixable. Sometimes you need to leave it and sleep over it, sometimes it's good to ask someone else to have a quick look (they may see something you haven't), but mostly it's about trying different things, calling up on previous experience. It can be frustrating, but the buzz you get when you do fix it, is like no other!
I am experiencing kernel panics when I kill node js under certain circumstances such as when it is stuck in an infinite loop (always) or when it is a stopped job under Bash (sometimes).
EDIT: My code isn't doing anything network related. I'm running a modified CoffeeScript repl.
I don't expect to be able to get a direct answer since it is a rather complicated problem and may be a bug in node, v8, or OS X for all I know at the moment.
However, I am at least somewhat familiar with all the technical aspects required to find it so I think with the right clues I could narrow it down, prevent it, and send a bug report to the appropriate people.
Feel free to have me investigate anything, up to and including using programs such as SIMBL and Application Enhancer if need be.
Here is the error report from the last kernel panic:
http://pastie.org/3043592
Thanks!
I can't tell for sure, but my suspicions would lie first with the following kernel extensions:
at.obdev.nke.LittleSnitch. Little Snitch messes with the network stack in some pretty major ways, so it seems likely that it might have something to do with your crashes (assuming that your node.js app is using sockets).
com.cisco.nke.ipsec. It also has to do with networking, so I'm also suspicious. Less so, though, because it (theoretically...) should just be adding a Cisco VPN interface.
org.pqrs.driver.NoEjectDelay, org.pqrs.driver.PCKeyboardHack, org.pqrs.driver.KeyRemap4MacBook. They're hacks. Need I say more?
com.shapeservices.msm.driver.MSMFramebuffer, com.shapeservices.msm.driver.MSMVideoDevice. iDisplay is unlikely to be related, but it might be!
If all else fails, submit a bug report at https://bugreport.apple.com.
Very occasionally, despite all testing efforts, I get hit with a bug report from a customer that I simply can't reproduce in the office.
(Apologies to Jeff for the 'borrowing' of the badge)
I have a few "tools" that I can use to try to locate and fix these, but it always feels a bit like I'm knife-and-forking it:
Asking for more and more context from the customer: (systeminfo)
Log files from our application
Ad-hoc tests with the customer to attempt to change the behaviour
Providing customer with a new build with additional diagnostics
Thinking about the problem in the bath...
Site visit (assuming customer is somewhere warm and sunny)
Are there set procedures, or other techniques than anyone uses to resolve problems like this?
One of the attributes of good debuggers, I think is that they always have a lot of weapons in their toolkit. They never seem to get "stuck" for too long and there is always something else for them to try. Some of the things I've been known to do:
ask for memory dumps
install a remote debugger on a client machine
add tracing code to builds
add logging code for debugging purposes
add performance counters
add configuration parameters to various bits of suspicious code so I can turn on and off features
rewrite and refactor suspicious code
try to replicate the issue locally on a different OS or machine
use debugging tools such as application verifier
use 3rd party load generation tools
write simulation tools in-house for load generation when the above failed
use tools like Glowcode to analyse memory leaks and performance issues
reinstall the client machine from scratch
get registry dumps and apply them locally
use registry and file watcher tools
Eventually, I find the bug just gives up out of some kind of awe at my persistence. Or the client realises that it's probably a machine or client side install or configuration issue.
Extensive logging usually helps.
The easiest way is always to see the customer in action (assuming that its readily reproducible by the customer). Oftentimes, problems arise due to issues with the customer's computer environment, conflicts with other programs, etc - these are details which you will not be able to catch on your dev rig. So a site visit might be useful; but if that's not convenient, tools like RealVNC might help as well in letting you see the customer 'do their thing'.
(watching the customer in action also allows you to catch them out in any WTF moments that they might have)
Now, if the problem is intermittent, then things get somewhat more complicated. The best way to get around this problem would be to log useful information in places where you guess problems could occur and perhaps use a tool like Splunk to index the log files during analysis. A diagnostic build (i.e. with extra logging) might be useful in this case.
I'm just in the middle of implementing an automated error reporting system that sends back to me information (currently via email although you could use a webservice) from any exception encountered by the app.
That way I get (nearly) all the information that I would if I were sitting in front of VS2008, and it really helps me work out what the problem is.
The customers are also usually (sorta) impressed that I know about their problem as soon as they encounter it!
Also, if you use the Application.ThreadException error handler you can send back info on unexpected exceptions too!
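The above is .NET-specific; for what it's worth, a comparable sketch in Python would hang a reporter on sys.excepthook (the SMTP host and addresses below are placeholders):

    import smtplib
    import sys
    import traceback
    from email.message import EmailMessage

    SMTP_HOST = "mail.example.com"  # placeholder
    DEV_ADDR = "dev@example.com"    # placeholder

    def report_exception(exc_type, exc, tb):
        # Mail the full traceback, then fall through to the default handler.
        msg = EmailMessage()
        msg["Subject"] = "Unhandled exception: %s" % exc_type.__name__
        msg["From"] = DEV_ADDR
        msg["To"] = DEV_ADDR
        msg.set_content("".join(traceback.format_exception(exc_type, exc, tb)))
        try:
            with smtplib.SMTP(SMTP_HOST) as smtp:
                smtp.send_message(msg)
        finally:
            sys.__excepthook__(exc_type, exc, tb)

    sys.excepthook = report_exception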
We use all the methods you mention progressively starting with the easiest and proceeding to the harder.
However, you forget that sometimes hardware is at fault. For example, memory could be malfunctioning and some computation-intensive code will behave strangely, throwing exceptions with weird diagnostics. Of course, it works on your machine, since you don't have faulty hardware.
Experience is needed to identify such errors and insist that customer tries to install the program on another machine or does hardware check. One thing that helps greatly is good error handling - when your code throws an exception it should provide details, not just indicate that something is bad. With good error indication it's easier to identify such suspicious issues related to faulty hardware.
I think one of the most important things is the ability to ask sensible questions around what the customer has reported... More often than not they're leaving out something that they don't see as relevant, but which is actually key.
Telepathy would also be useful...
We've had good success using EurekaLog with it posting directly to FogBugz. This gets us a bug report containing a call stack, along with related system info (other processes running, memory, network details etc) and a screen shot. Occasionally customers enter further info too, which is helpful. It's certainly, in most cases, made it much easier and quicker to fix bugs.
One technique I've found useful is building an application with an integrated "diagnostic" mode (enabled by a command line switch when you launch the app). That certainly avoids the need to create custom builds with additional logging.
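A minimal sketch of such a switch in Python (the flag name and log file here are invented):

    import argparse
    import logging

    parser = argparse.ArgumentParser()
    parser.add_argument("--diagnostic", action="store_true",
                        help="enable verbose diagnostic logging")
    args = parser.parse_args()

    # One binary, two behaviours: no custom build needed for extra logging.
    logging.basicConfig(
        level=logging.DEBUG if args.diagnostic else logging.WARNING,
        filename="app-diagnostic.log" if args.diagnostic else None)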
Otherwise, it sounds like what you're doing is as good an approach as any.
Copilot (assuming customer is somewhere cold and rainy :)
The usual procedure for this is to expect something like this will happen and add a ton of logging information. Of course you don't enable it from the beginning, but only when this happens.
Usually customers don't like to have to install a new version or some diagnostic tools. It is not their job to do your debugging. And visiting a client for cases like these is rarely an option. You must involve the client as little as possible. Changing a switch and sending you a log file is OK - anything more than this is too much.
I like the alternative of thinking about the problem in the bath. I would start by trying to find the differences between my machine and the client's configuration.
As a software engineer doing webstuff (booking/shop/member systems etc) the most important thing for us is to get as much information from the customer as possible.
Going from
it's broke!
to
it's broke! & here are screenshots of every option I picked whilst generating this particular report
reduces the amount of time it takes us to reproduce and fix an issue no end.
It may be obvious, but it takes a fair amount of chasing to get this kind of information from our customers sometimes! But it's worth it just for those moments you find they're not actually doing what they say they are.
I had these problems too. My solution was to add lots of logging and give the customer a debug build with all the possible debug information. Then I made sure Dr. Watson (this was on Windows NT) created a memory dump with enough information.
After loading the memory dump in the debugger I could find out where and why it crashed.
EDIT: Oh, this obviously only works if the application terminates violently...
I think following the trail of actions the user took can lead us to the reasons for failure, or for selective failures. But most of the time users are at a loss to precisely describe their interactions with the application, so automatic screenshot taking helps (if it is a desktop app; for .NET apps you can check Jeff's UnhandledExceptionHandler). Logging all the important actions which change the state of objects can also help us understand it.
I don't have this problem very often, but if I did, I would use a screen sharing or recorded application to watch the user in action without having to go there (unless, as you said, it's warm and sunny and the company pays the trip).
I have recently been investigating such an issue myself. Over the course of my career I have learnt that, while computer systems may be complex, they are predictable, so have faith that you can find the problem. My approach to these kinds of issues is two-fold:
1) Gather as much detailed information as possible from the customer about their failure and analyse it meticulously for patterns. Gather multiple sets of data for multiple failure occurrences to build up a clearer picture.
2) Try to reproduce the failure in house. Continue to make your system more and more similar to the customer's system until you can reproduce it, the systems are identical, or it becomes impractical to make them more similar.
While doing this consider:
1) What differences exist between this system and other, working systems?
2) What has recently changed in your product or the customer's configuration that has caused the problem to start occurring?
Regards
Depending on the issue, you could get WinDbg dumps; they normally give a pretty good idea of what is going on. We have diagnosed quite a few problems that weren't crashes from minidumps.
For .NET apps we also use Trace.WriteLine; then we can get the user to fire up DbgView and send us the output.
It's a very complicated issue. I was thinking of writing a procedure for this, and I have made one for these non-reproducible bugs. It might be helpful.
When a bug occurs, there are several factors that might make it occur.
I am sure all bugs are reproducible. I always keep an eye out for these kinds of issues:
Get the system information.
What else did the customer do before the bug occurred?
The time period in which it occurs: is it rare or frequent?
The next action after the issue (is it always the same or different?)
Find the factors for this bug (as the developer):
Find the exact position where the issue happened.
Find ALL THE SYSTEM factors at that time.
Check for memory leaks, user errors, or wrong conditions in the code.
List all the factors that may cause this issue.
How does each factor affect this, and what data are those factors holding?
Check whether memory issues happened.
Check that the customer has the same current code as yours.
Check all logs from at least the last month and note any abnormal operations.
Just a short anecdote (hence 'community wiki'): Last week I thought it was a clever idea in a Django app to import the module pprint for pretty printing Python data only if DEBUG was True:
if settings.DEBUG:
    from pprint import pprint
Then I used here and there the pprint command as debugging statement:
pprint(somevar) # show somevar on the console
After finishing the work, I tested the app with DEBUG=False. You can guess what happened: the site broke with HTTP 500 errors all over the place, and I did not know why, because there is no traceback if DEBUG is False. I was puzzled that the errors magically disappeared if I switched back to debug mode.
It took me 1-2 hours of putting print statements all over the code to find that the code crashes at exactly the above pprint() line. Then it took me another half an hour to convince myself to stop banging my head on the table.
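In hindsight, one safer pattern (just a sketch) is to make the name exist in both configurations, so stray call sites survive DEBUG=False:

    from django.conf import settings

    if settings.DEBUG:
        from pprint import pprint
    else:
        def pprint(*args, **kwargs):
            pass  # no-op stand-in so leftover debug calls don't crash production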
Now comes the moral of the story:
Not everything that looks like a clever idea at first glance turns out to be that savvy in the end.
An important point to look at when debugging these errors is all the configuration options and platform switches your code makes by itself. This can be quite a lot more than just some user preferences. Document well if you make an assumption about the user's platform (e.g., if you test on Win/Mac/Linux only, will your code crash on BSD or Solaris?)
Cheers,
However tough a non-reproducible problem is, we can still take a structured and strategic approach to solving it - and I can say from experience that it requires out-of-the-box thinking in 50% of the cases. Generally speaking, one can categorize the problems into different types, which helps to identify which tool to use. For example, if you have a non-reproducible application crash or a memory issue, you can use profilers and nail the issue down to the particular functionality.
Also, one of the most important approaches is information-rich logging. I also use a lot of enums to describe the state of the process, depending on the scenario in question. For example, I used states like Initiated, Triggered, Running, Waiting, Repaired, etc. to describe schedule states, and saved them to the DB at different stages.
Not mentioned yet, but "directed code review" is one good solution, especially if you didn't do a proper review (at least 1 hour per 100 lines of code) before release.
I have also seen impressive demos of AppSight Suite, which is basically an advanced environment monitoring and logging tool. It allows the customer to record what happens on his machine in an extensive but fairly compact log file which you can then replay.
As many have mentioned, extensive logging, and asking the client for the log files when something goes wrong. In addition, as I worked more with web apps, I'll also provide detailed, but succinct deployment documentation (e.g., deployment steps, environmental resources that need to be set up etc).
Here are common problems I've seen that lead to the types of problem you are describing:
Environment not set up properly (e.g., missing environment variables, data sources etc).
Application not fully deployed (e.g., database schema not deployed).
Difference in operating system configuration (default character encoding being the most common culprit for me).
Most of the time, these issues can be identified through the log content.
You can use tools like Microsoft SharedView or TeamViewer to connect to remote PC and inspect problem directly on site. Of course, you'll need cooperation with customer.
I am interested in how much of your daily work time do you spend on implementing new features compared to fixing bugs.
I don't code any new features as long as there are some unfixed bugs in my software.
The only reason I can think of to leave a bug unfixed in my software is that it's definitely too costly to fix. In this case, we may choose to reclassify it from 'bug' to 'known limitation' or 'known bug', and we adjust the feedback we give to the user accordingly, so that the user knows exactly what's going on and why it's not fixed (see my edit below).
So typically, I spend all of my time bug fixing as long as the QA is complaining about something, and all of my time coding when it's not ! :)
I do that because :
When a piece of software does a lot of things but crashes randomly, the user will get the feeling that he cannot rely on it, and there's NOTHING you can do to fix that. Ever.
When a piece of software lacks some features but is good at doing what it does, the user rather thinks, "That may be a great piece of software; too bad it doesn't support X and Y... I'll check the next release in 6 months."
Joel Spolsky has written an interesting post on that question in his
12 steps to better code.
Edit to answer comments : If I'm experiencing random crashes, that's definitely a bug, not a "known limitation". Once I know exactly what is going on, and only then, I can decide whether I can fix it or not.
I was rather thinking of the following situations :
the bug is provoked by code that doesn't belong to me (typically a third-party library). If implementing a workaround is very complicated, it might be OK to wait for the third-party vendor to fix it. Real-world example: ClickOnce doesn't work in some proxy situations... I expect Microsoft to fix it, eventually.
If the bug is that a specific feature doesn't work in all situations, and that feature is too difficult to implement for those specific situations, I think it's OK to warn the user, before he uses the feature, that what he is trying to do is not implemented, rather than just crashing.
I work for a group inside my company that is supposed to both create "featurettes" and respond to customer issues. I tend to spend more time on high-priority customer issues (read: bugs), so I would say my time is nearly 100% spent on fixing bugs.
That said, let's read between the lines a bit. It seems that this question is a way of saying "ugh, I spend so much time on bug fixing... wish I could do more feature development". If that is the case, I think you need to look inward a bit.
As I said, I spend nearly all my time on fixing bugs for customer issues, but I have also written a ton of tools to help with that process. I have everything from specialized log analyzers to generic Visual Studio solution-file error checkers. Not to mention some of the sweet WinDbg scripts I have written for esoteric breakpoints!
It is by doing stuff like that where I fulfill that desire to work on "something new". And in a way, it is much more rewarding than implementing some new small cog in a huge enterprise application.
Since I don't get paid to maintain any project, most of the time I'm working on new projects, hence adding new features all the time.
However, each feature needs to be tested and debugged thoroughly, so you can say that 30-40% of the time spent implementing a feature will go into debugging it.
Many projects have a development phase ("code thaw") where active adding of new features occurs concurrently with bug fixes, and a "code freeze" stage where feature set is frozen and 100% of the work goes towards bringing the critical bugs count to 0 (or fixing as many bugs until a fixed deadline as possible), so the answer would depend on the stage your project is in.
When I "do bugs," I also make my best effort to claim at least one feature to work on at the same time, or (when encountering a particularly buggy block of code) request a mandate to refactor the entire block. Thus I get to do some new development (and, face it, most of us prefer to write new stuff to fixing the old) while reducing the bug count.
There's a broad spectrum of priorities I have in my head when I'm triaging my work:
Bugs affecting a customer's ability to do their business or access their data. No work is done until any bugs like this are taken care of.
Other high-priority bugs or features. These are usually "known issue" type bugs, or enhancements that have become too painful to deal with anymore and now require a code change. Features requested by big customers or prospects also generally fall into this category.
Everything else. This includes maintenance, nice-to-have features and just general itch-scratching-type maintenance on our code base.
As you can imagine, #3 category work doesn't get worked on all that often, which is a bit frustrating from an engineering perspective. But, our customers love us since they get an engineer working on their issues almost immediately after they call our support line and generally have a resolution within 24 hours, regardless of their size or importance.
It depends on the project. If there is a show-stopper bug, I focus on it, but sometimes when I'm not motivated enough I just add one new cool feature, so I can at least work on something instead of doing nothing.
This is for personal projects, or pre-release/research products.
It all depends on the kind of project I am currently working on.
If the project is new, then we have a phase called bug fixing after the testing phase. Most of the bugs get fixed there. (!)
If the project is a maintenance project, then fixing bugs is a daily routine.
It depends on the bug.
Is it a minor cosmetic issue, such as a misaligned label, or a huge knockout bug that corrupts data?
Even if it is minor or cosmetic, is it causing user headaches, like a pop-up opening up in the wrong place? Is the data corruption bug only in Firefox 2 with a full moon (and your corporate intranet is IE 6)?
Good question though...
The question says it all. If you have a bug that multiple users report, but there is no record of the bug occurring in the log, nor can the bug be repeated no matter how hard you try, how do you fix it? Or can you even?
I am sure this has happened to many of you out there. What did you do in this situation, and what was the final outcome?
Edit:
I am more interested in what was done about an unfindable bug, not an unresolvable bug. Unresolvable bugs are such that you at least know that there is a problem and have a starting point, in most cases, for searching for it. In the case of an unfindable one, what do you do? Can you even do anything at all?
Language
Different programming languages will have their own flavour of bugs.
C
Adding debug statements can make the problem impossible to duplicate, because the debug statement itself shifts pointers far enough to avoid a SEGFAULT (such elusive bugs are known as Heisenbugs). Pointer issues are arduous to track and replicate, but debuggers can help (such as GDB and DDD).
Java
An application that has multiple threads might only show its bugs with a very specific timing or sequence of events. Improper concurrency implementations can cause deadlocks in situations that are difficult to replicate.
JavaScript
Some web browsers are notorious for memory leaks. JavaScript code that runs fine in one browser might cause incorrect behaviour in another browser. Using third-party libraries that have been rigorously tested by thousands of users can be advantageous to avoid certain obscure bugs.
Environment
Depending on the complexity of the environment in which the application (that has the bug) is running, the only recourse might be to simplify the environment. Does the application run:
on a server?
on a desktop?
in a web browser?
In what environment does the application produce the problem?
development?
test?
production?
Exit extraneous applications, kill background tasks, stop all scheduled events (cron jobs), eliminate plug-ins, and uninstall browser add-ons.
Networking
As networking is essential to so many applications:
Ensure stable network connections, including wireless signals.
Does the software reconnect after network failures robustly?
Do all connections get closed properly so as to release file descriptors?
Are people using the machine who shouldn't be?
Are rogue devices interacting with the machine's network?
Are there factories or radio towers nearby that can cause interference?
Do packet sizes and frequency fall within nominal ranges?
Are packets being monitored for loss?
Are all network devices adequate for heavy bandwidth usage?
Consistency
Eliminate as many unknowns as possible:
Isolate architectural components.
Remove non-essential, or possibly problematic (conflicting), elements.
Deactivate different application modules.
Remove all differences between production, test, and development. Use the same hardware. Follow the exact same steps, perfectly, to setup the computers. Consistency is key.
Logging
Use liberal amounts of logging to correlate the time events happened. Examine logs for any obvious errors, timing issues, etc.
Hardware
If the software seems okay, consider hardware faults:
Are the physical network connections solid?
Are there any loose cables?
Are chips seated properly?
Do all cables have clean connections?
Is the working environment clean and free of dust?
Have any hidden devices or cables been damaged by rodents or insects?
Are there bad blocks on drives?
Are the CPU fans working?
Can the motherboard power all components? (CPU, network card, video card, drives, etc.)
Could electromagnetic interference be the culprit?
And mostly for embedded:
Insufficient supply bypassing?
Board contamination?
Bad solder joints / bad reflow?
CPU not reset when supply voltages are out of tolerance?
Bad resets because supply rails are back-powered from I/O ports and don't fully discharge?
Latch-up?
Floating input pins?
Insufficient (sometimes negative) noise margins on logic levels?
Insufficient (sometimes negative) timing margins?
Tin whiskers?
ESD damage?
ESD upsets?
Chip errata?
Interface misuse (e.g. I2C off-board or in the presence of high-power signals)?
Race conditions?
Counterfeit components?
Network vs. Local
What happens when you run the application locally (i.e., not across the network)? Are other servers experiencing the same issues? Is the database remote? Can you use a local database?
Firmware
In between hardware and software is firmware.
Is the computer BIOS up-to-date?
Is the BIOS battery working?
Are the BIOS clock and system clock synchronized?
Time and Statistics
Timing issues are difficult to track:
When does the problem happen?
How frequently?
What other systems are running at that time?
Is the application time-sensitive (e.g., will leap days or leap seconds cause issues)?
Gather hard numerical data on the problem. A problem that might, at first, appear random, might actually have a pattern.
Change Management
Sometimes problems appear after a system upgrade.
When did the problem first start?
What changed in the environment (hardware and software)?
What happens after rolling back to a previous version?
What differences exist between the problematic version and good version?
Library Management
Different operating systems have different ways of distributing conflicting libraries:
Windows has DLL Hell.
Unix can have numerous broken symbolic links.
Java library files can be equally nightmarish to resolve.
Perform a fresh install of the operating system, and include only the supporting software required for your application.
Java
Make sure every library is used only once. Sometimes application containers have a different version of a library than the application itself. This might not be possible to replicate in the development environment.
Use a library management tool such as Maven or Ivy.
Debugging
Code a detection method that triggers a notification (e.g., log, e-mail, pop-up, pager beep) when the bug happens. Use automated testing to submit data into the application. Use random data. Use data that covers known and possible edge cases. Eventually the bug should reappear.
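One sketch of such a detection tripwire (the condition and context are application-specific placeholders):

    import faulthandler
    import logging
    import sys

    log = logging.getLogger("tripwire")

    def tripwire(suspect_condition, context):
        # Call this wherever the bug could manifest; when the condition holds,
        # record enough state to diagnose the failure after the fact.
        if suspect_condition:
            log.critical("bug detected, context=%r", context)
            faulthandler.dump_traceback(file=sys.stderr)  # stacks of all threads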
Sleep
It is worth reiterating what others have mentioned: sleep on it. Spend time away from the problem, finish other tasks (like documentation). Be physically distant from computers and get some exercise.
Code Review
Walk through the code, line-by-line, and describe what every line does to yourself, a co-worker, or a rubber duck. This may lead to insights on how to reproduce the bug.
Cosmic Radiation
Cosmic rays can flip bits. This is not as big a problem as it was in the past, thanks to modern memory error checking. Software for hardware that leaves Earth's protection is subject to issues that simply cannot be replicated, due to the randomness of cosmic radiation.
Tools
Sometimes, albeit infrequently, the compiler will introduce a bug, especially for niche tools (e.g. a C micro-controller compiler suffering from a symbol table overflow). Is it possible to use a different compiler? Could any other tool in the tool-chain be introducing issues?
If it's a GUI app, it's invaluable to watch the customer generate the error (or try to). They'll no doubt be doing something you'd never have guessed they were doing (not wrongly, just differently).
Otherwise, concentrate your logging in that area. Log most everything (you can pull it out later) and get your app to dump its environment as well. e.g. machine type, VM type, encoding used.
Does your app report a version number, a build number, etc.? You need this to determine precisely which version you're debugging (or not!).
If you can instrument your app (e.g. by using JMX if you're in the Java world) then instrument the area in question. Store stats e.g. requests+parameters, time made, etc. Make use of buffers to store the last 'n' requests/responses/object versions/whatever, and dump them out when the user reports an issue.
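JMX is Java-specific, but the buffering idea is general; a sketch of that "last n requests" recorder in Python:

    from collections import deque
    from datetime import datetime, timezone

    class FlightRecorder:
        # Keep the last n request/response pairs in memory; dump them on failure.
        def __init__(self, n=100):
            self._ring = deque(maxlen=n)

        def record(self, request, response):
            self._ring.append((datetime.now(timezone.utc), request, response))

        def dump(self, out):
            for ts, req, resp in self._ring:
                out.write("%s %r -> %r\n" % (ts.isoformat(), req, resp))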
If you can't replicate it, you may fix it, but can't know that you've fixed it.
I've made my best explanation about how the bug was triggered (even if I didn't know how that situation could come about), fixed that, and made sure that if the bug surfaced again, our notification mechanisms would let a future developer know the things that I wish I had known. In practice, this meant adding log events when the paths which could trigger the bug were crossed, and metrics for related resources were recorded. And, of course, making sure that the tests exercised the code well in general.
Deciding what notifications to add is a feasibility and triage question. So is deciding how much developer time to spend on the bug in the first place. It can't be answered without knowing how important the bug is.
I've had good outcomes (didn't show up again, and the code was better for it), and bad (spent too much time not fixing the problem, whether the bug ended up fixed or not). That's what estimates and issue priorities are for.
Sometimes I just have to sit and study the code until I find the bug. Try to prove that the bug is impossible, and in the process you may figure out where you might be mistaken. If you actually succeed in convincing yourself it's impossible, assume you messed up somewhere.
It may help to add a bunch of error checking and assertions to confirm or deny your beliefs/assumptions. Something may fail that you'd never expect to.
It can be difficult, and sometimes nearly impossible. But my experience is that you will sooner or later be able to reproduce and fix the bug if you spend enough time on it (whether that time is worth spending is another matter).
General suggestions that might help in this situation.
Add more logging, if possible, so that you have more data the next time the bug appears.
Ask the users if they can replicate the bug. If yes, have them replicate it while you watch over their shoulder, and hopefully you'll find out what triggers the bug.
Make random changes until something works :-)
Assuming you have already added all the logging that you think would help and it didn't... two things spring to mind:
Work backwards from the reported symptom. Think to yourself: "If I wanted to produce the symptom that was reported, what bit of code would I need to be executing, and how would I get to it, and how would I get to that?" D leads to C leads to B leads to A. Accept that if a bug is not reproducible, then normal methods won't help. I've had to stare at code for many hours with these kinds of thought processes going on to find some bugs. Usually it turns out to be something really stupid.
Remember Bob's first law of debugging: if you can't find something, it's because you're looking in the wrong place :-)
Think. Hard. Lock yourself away, admit no interruptions.
I once had a bug where the evidence was a hex dump of a corrupt database. The chains of pointers were systematically screwed up. All the user's programs, and our database software, worked faultlessly in testing. I stared at it for a week (it was an important customer), and after eliminating dozens of possible ideas, I realised that the data was spread across two physical files and the corruption occurred where the chains crossed file boundaries. I realized that if a backup/restore operation failed at a critical point, the two files could end up "out of sync", restored to different time points. If you then ran one of the customer's programs on the already-corrupt data, it would produce exactly the knotted chains of pointers I was seeing. I then demonstrated a sequence of events that reproduced the corruption exactly.
Modify the code where you think the problem is happening, so that extra debug info is recorded somewhere. When it happens next time, you will have what you need to solve the problem.
There are two types of bugs you can't replicate. The kind you discovered, and the kind someone else discovered.
If you discovered the bug, you should be able to replicate it. If you can't replicate it, then you simply haven't considered all of the contributing factors leading towards the bug. This is why whenever you have a bug, you should document it. Save the log, get a screenshot, etc. If you don't, then how can you even prove the bug really exists? Maybe it's just a false memory?
If someone else discovered a bug, and you can't replicate it, obviously ask them to replicate it. If they can't replicate it, then you try to replicate it. If you can't replicate it quickly, ignore it.
I know that sounds bad, but I think it is justified. The amount of time it will take you to replicate a bug that someone else discovered is very large. If the bug is real, it will happen again naturally. Someone, maybe even you, will stumble across it again. If it is difficult to replicate, then it is also rare, and probably won't cause too much damage if it happens a few more times.
You can be a lot more productive spending your time actually working (fixing other bugs and writing new code) than trying to replicate a mystery bug that you can't even guarantee exists. Just wait for it to appear again naturally; then you can spend all your time fixing it, rather than wasting your time trying to reveal it.
Discuss the problem, read code, often quite a lot of it. Often we do it in pairs, because you can usually eliminate the possibilities analytically quite quickly.
Start by looking at what tools you have available to you. For example, crashes on a Windows platform go to WinQual, so if that applies to you, you now have crash dump information. Do you have static analysis tools that spot potential bugs, runtime analysis tools, or profiling tools?
Then look at the input and output. Anything similar about the inputs in situations when users report the error, or anything out of place in the output? Compile a list of reports and look for patterns.
Finally, as David stated, stare at the code.
Ask the user to give you remote access to their computer so you can see everything yourself, or ask them to make a short video of how they reproduce the bug and send it to you.
Sure, neither is always possible, but when it is, it may clarify some things. The usual ways of finding bugs still apply: isolating the parts that may cause the bug, trying to understand what's happening, and narrowing down the region of code that could be responsible.
There are tools like gotomeeting.com that you can use to share a screen with your user and observe the behaviour. There can be many potential causes, such as the sheer number of programs installed on their machine, or some utility conflicting with your program. GoToMeeting is not the only such solution, and there can be practical issues such as timeouts or a slow internet connection.
Much of the time, software does not report accurate error messages. In Java and C#, for example, track every exception: don't catch them all indiscriminately, but keep a central point where you can catch and log. UI bugs are difficult to solve unless you use remote desktop tools, and sometimes the bug turns out to be in third-party software.
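As a minimal sketch of such a catch-and-log point, here is the Python equivalent (the hook and file name are illustrative; Java and C# have analogous top-level handlers):

    import logging, sys

    logging.basicConfig(filename="uncaught.log", level=logging.ERROR)

    def log_uncaught(exc_type, exc_value, exc_tb):
        # Record the full traceback, then fall back to the default behaviour
        # so the error is still visible to the user.
        logging.error("uncaught exception", exc_info=(exc_type, exc_value, exc_tb))
        sys.__excepthook__(exc_type, exc_value, exc_tb)

    sys.excepthook = log_uncaught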
If you work on a real significant sized application, you probably have a queue of 1,000 bugs, most of which are definitely reproducible.
Therefore, I'm afraid I'd probably close the bug as WORKSFORME (Bugzilla) and then get on fixing some more tangible bugs. Or doing whatever the project manager decides to do.
Certainly making random changes is a bad idea, even if they're localised, because you risk introducing new bugs.
By 'challenging development environment' I don't mean you're on a small boat that's rocking up and down and someone is holding a gun to your head. I mean, are the tools at your disposal making the problem difficult?
Development is typically a cycle of code, run, observe the result, repeat. In some environments this is a very quick and painless process, but in others it's very difficult. We end up using little tricks to help us observe the result and run the code faster.
I was thinking of this because I just started using SSIS (an ETL tool included with SQL Server 2005/8). It's been quite challenging for me to make progress, mainly because there's no guidance on what all the dialogs mean and also because the errors are very cryptic and most of the time don't really tell you what the problem is.
I think the easiest environment I've had to work in was VB6, because there you can edit code while the application is running and it will continue running with your new code! You don't even have to run it again. This can save you a lot of time. Surprisingly, you can do the same in NetBeans with Java code: it steps out of the method and re-runs it with the new code.
In SQL Server 2000 when there is an error in a trigger you get no stack trace, which can make it really tricky to locate where the problem occurred since an insert can have a cascading effect and trigger many triggers. In Oracle you get a very nice little stack trace with line numbers so resolving the problem is very easy.
Some of the things that I see really help in locating problems:
Good error messages when things go wrong.
Providing a stack trace when a problem occurs.
Debug environment where you can pause, then output the value of variables and step to follow the execution path.
A graphical debug environment that shows the code as it's running.
A screen that can show the current values of variables, or at least somewhere you can print them to.
Ability to turn on debug logging on a production system (a sketch of this follows the list).
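On that last point, a minimal sketch in Python, assuming an environment variable as the switch (the variable name APP_DEBUG is an assumption):

    import logging, os

    # One switch, no redeploy: set APP_DEBUG=1 on the production box to get
    # debug-level output; anything else leaves it at INFO.
    level = logging.DEBUG if os.environ.get("APP_DEBUG") == "1" else logging.INFO
    logging.basicConfig(level=level, format="%(asctime)s %(levelname)s %(message)s")

    logging.debug("only visible when APP_DEBUG=1")
    logging.info("always visible")

The cheap part is that the debug calls can stay in the code permanently; they cost almost nothing while the switch is off.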
What is the worst you've seen and what can be done to get around these limitations?
EDIT
I really didn't intend for this to be flame bait. I'm really just looking for ideas to improve systems so that if I'm creating something I'll think about these things and not contribute to people's problems. I'm also looking for creative ways around these limitations that I can use if I find myself in this position.
I was working on making modifications to Magento for a client. There is very little information on how the Magento system is organized. There are hundreds of folders and files, and at least a thousand view files. There was little support available from the Magento forums, and I suspect the main reason for this lack of information is that the creators of Magento want you to pay them to become a certified Magento developer. Also, at that time last year there was no StackOverflow :)
My first task was to figure out how the database schema worked and which table stored some attributes I was looking for. There are over 300 tables in Magento, and I couldn't find out how the SQL queries were being done. So I had just one option...
I exported the entire database (300+ tables, and at least 20,000 lines of SQL code) into a .sql file using phpMyAdmin, and I 'committed' this file into the Subversion repository. Then I made some changes to the database using the Magento administration panel and redownloaded the .sql file. Then I ran a DIFF using TortoiseSVN and scrolled through the 20k+ line file to find which lines had changed, LOL. As crazy as it sounds, it did work, and I was able to figure out which tables I needed to access.
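If you'd rather not eyeball 20k lines by hand, the same dump-and-diff trick can be scripted; a minimal sketch in Python (the dump file names are assumptions):

    import difflib

    # Dumps taken before and after making the change in the admin panel.
    with open("dump_before.sql") as f:
        before = f.read().splitlines()
    with open("dump_after.sql") as f:
        after = f.read().splitlines()

    # Show only added/removed lines; the INSERT/UPDATE statements that appear
    # reveal which tables the admin action actually touched.
    for line in difflib.unified_diff(before, after, "before", "after", lineterm=""):
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---")):
            print(line)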
My second problem was that, because of the crazy directory structure, I had to FTP to about three folders at the same time for trivial changes. So I had to keep three windows of my FTP program open and switch between them, FTPing each time.
The 3rd problem was figuring out how the url mapping worked and where some of the code I wanted was being stored. Here, by sheer luck, I managed to find the Model class I was looking for.
Mostly by sheer luck and other similar crazy adventures, I managed to work my way through and complete the project. Since then, StackOverflow has started, and thanks to a helpful answer to this bounty question I was finally able to get enough information about Magento to do future projects in a less crazy manner (hopefully).
Try keypunching your card deck in Fortran, complete with IBM JCL (Job Control Language), handing it in at the data center window, coming back the next morning and getting an inch-thick stack of printer paper with the hex dump of your crash, and a list of the charges to your account.
Grows hair on your fingernails.
I guess that was an improvement on the prior method of sitting at the console, toggling switches and reading the lights.
Occam on a 400-transputer network. As there was only one transputer that could output to the console, debugging was a nightmare. We had to build a test harness on a Sun network.
I took a class once, that was loosely based on SICP, except it was taught in Dylan rather than Scheme. Actually, it was in the old Dylan syntax, the prefix one that was based on Scheme. But because there were no interpreters for that old version of Dylan, the professor wrote one. In Java. As an applet. Which meant that it had no access to the filesystem; you had to write all of your code in a separate text editor, and then paste it into the Dylan interpreter. Oh, and it had no debugging facilities, of course. And being a Dylan interpreter written in Java, and this was back in 2000, it was ungodly slow.
Print statement debugging, lots of copying and pasting, and an awful lot of cursing at the interpreter were involved.
Back in the '90s, I was developing applications in Clipper, a compilable dBase-like language. I don't remember if it came with a debugger; we often used a 3rd-party debugger called 'Mr Debug' (really!). Although Clipper was fast, some of our more intensive routines were written in C. If you prayed to the correct gods and uttered the necessary incantations, you could use Microsoft's CodeView debugger to debug the C code. But usually not for more than a few minutes, as the program didn't like to spend much time running under CodeView (usually because of memory problems).
I had a series of makefile switches that I used to stub out the sections of code that I didn't need to debug at the time. My debugging environment was very sparse so there was as much free memory as possible for the program. I also think I drank a lot more back then...
Some years ago I reverse engineered game copy protections. Because the protections were written in C or C++, they were fairly easy to disassemble and understand. But in some cases it got hairy, when the copy protection took a detour into the kernel to obfuscate what was happening. A few of them also started to use custom-made virtual machines to make the problem less understandable. I spent hours writing hooks and debuggers to be able to trace into them. The environment really demanded a competitive and innovative mind. I had everything at my disposal save time. Mistakes caused reboots and gave very little feedback about what went wrong. I realized that thinking before acting is often a better solution.
Today I despise debuggers. If the problem is in code visible to me, I find it easiest to use verbose logging. (Sometimes the error is in not understanding the interface/environment; then debuggers are good.) I have also realized that time is of the essence. You need a good working environment with the ability to test your code instantly. If your compiler takes 15 seconds, your environment takes 20 seconds to update, or your caches take 5 minutes to clear, find another way to test your code. Progress keeps me motivated, and without a good working environment I get bored, angry, and frustrated.
At my last job I was a Sitecore developer. Bugfixing can be very painful when the bug only occurs on the client's system, they don't have Visual Studio installed, remote debugging is turned off, and the problem only happens on the production server (not the staging server).
The worst in recent memory was developing SSRS reports using Dundas controls. We were doing quite a bit with the grids which required coding. The pain was the bugginess of the controls, and the lack of debugging support.
I never got around the limitations, but just worked through them.