This is intended to be a general-purpose question to assist new programmers who have a problem with a program, but who do not know how to use a debugger to diagnose the cause of the problem.
This question covers three classes of more specific question:
When I run my program, it does not produce the output I expect for the input I gave it.
When I run my program, it crashes and gives me a stack trace. I have examined the stack trace, but I still do not know the cause of the problem because the stack trace does not provide me with enough information.
When I run my program, it crashes because of a segmentation fault (SEGV).
A debugger is a program that can examine the state of your program while your program is running. The technical means it uses for doing this are not necessary for understanding the basics of using a debugger. You can use a debugger to halt the execution of your program when it reaches a particular place in your code, and then examine the values of the variables in the program. You can use a debugger to run your program very slowly, one line of code at a time (called single stepping), while you examine the values of its variables.
Using a debugger is an expected basic skill
A debugger is a very powerful tool for helping diagnose problems with programs, and debuggers are available for all practical programming languages. Being able to use a debugger is therefore considered a basic skill of any professional or enthusiast programmer, and using one is considered basic work you should do yourself before asking others for help. As this site is for professional and enthusiast programmers, and not a help desk or mentoring site, if you have a question about a problem with a specific program but have not used a debugger, your question is very likely to be closed and downvoted. If you persist with questions like that, you will eventually be blocked from posting more.
How a debugger can help you
By using a debugger you can discover whether a variable has the wrong value, and where in your program its value changed to the wrong value.
Using single stepping, you can also discover whether the control flow is as you expect. For example, whether an if branch executes when you expect it to.
General notes on using a debugger
The specifics of using a debugger depend on the debugger and, to a lesser degree, the programming language you are using.
You can attach a debugger to a process that is already running your program. You might do this if your program is stuck.
In practice it is often easier to run your program under the control of a debugger from the very start.
You indicate where your program should stop executing by indicating the source code file and line number of the line at which execution should stop, or by indicating the name of the method/function at which the program should stop (if you want to stop as soon as execution enters the method). The technical means that the debugger uses to cause your program to stop is called a breakpoint and this process is called setting a breakpoint.
Most modern debuggers are part of an IDE and provide you with a convenient GUI for examining the source code and variables of your program, with a point-and-click interface for setting breakpoints, running your program, and single stepping it.
Using a debugger can be very difficult unless your program executable or bytecode files include debugging symbol information and cross-references to your source code. You might have to compile (or recompile) your program slightly differently to ensure that information is present. If the compiler performs extensive optimizations, those cross-references can become confusing. You might therefore have to recompile your program with optimizations turned off.
I want to add that a debugger isn't always the perfect solution, and shouldn't always be the go-to solution for debugging. Here are a few cases where a debugger might not work for you:
The part of your program which fails is really large (poor modularization, perhaps?) and you're not exactly sure where to start stepping through the code. Stepping through all of it might be too time-consuming.
Your program uses a lot of callbacks and other non-linear flow control methods, which makes the debugger confused when you step through it.
Your program is multi-threaded. Or even worse, your problem is caused by a race condition.
The code that has the bug in it runs many times before it bugs out. This can be particularly problematic in main loops, or worse yet, in physics engines, where the problem could be numerical. Even setting a breakpoint, in this case, would simply have you hitting it many times, with the bug not appearing.
Your program must run in real-time. This is a big issue for programs that connect to the network. If you set up a breakpoint in your network code, the other end isn't going to wait for you to step through, it's simply going to time out. Programs that rely on the system clock, e.g. games with frameskip, aren't much better off either.
Your program performs some form of destructive actions, like writing to files or sending e-mails, and you'd like to limit the number of times you need to run through it.
You can tell that your bug is caused by incorrect values arriving at function X, but you don't know where these values come from. Having to run through the program, again and again, setting breakpoints farther and farther back, can be a huge hassle. Especially if function X is called from many places throughout the program.
In all of these cases, either halting your program abruptly could cause the end results to differ, or stepping through manually in search of the one line where the bug occurs is too much of a hassle. This can happen whether your bug is incorrect behavior or a crash. For instance, if memory corruption causes a crash, then by the time the crash happens, execution is too far from where the memory corruption first occurred, and no useful information is left.
So, what are the alternatives?
The simplest alternative is logging and assertions. Add logs to your program at various points, and compare what you get with what you're expecting. For instance, see if the function where you think there's a bug is even called in the first place. See if the variables at the start of a method are what you think they are. Unlike breakpoints, it's okay for there to be many log lines in which nothing special happens. You can simply search through the log afterward. Once you hit a log line that's different from what you're expecting, add more in the same area. Narrow it down farther and farther, until it's small enough to be able to log every line in the bugged area.
Assertions can be used to trap incorrect values as they occur, rather than once they have an effect visible to the end-user. The quicker you catch an incorrect value, the closer you are to the line that produced it.
Refactor and unit test. If your program is too big, it might be worthwhile to test it one class or one function at a time. Give it inputs, and look at the outputs, and see which are not as you're expecting. Being able to narrow down a bug from an entire program to a single function can make a huge difference in debugging time.
In case of memory leaks or memory stomping, use appropriate tools that are able to analyze and detect these at runtime. Being able to detect where the actual corruption occurs is the first step. After this, you can use logs to work your way back to where incorrect values were introduced.
Remember that debugging is a process that goes backward. You have the end result - a bug - and you work out the cause, which preceded it. It's about working your way backward and, unfortunately, debuggers only step forwards. This is where good logging and postmortem analysis can give you much better results.
Related
I'm running a dimensionality reduction algorithm from the DR toolbox, specifically the Linear Discriminant Analysis code, on the Gisette data set, executing on train_data and train_labels. The code runs, but after some time MATLAB shuts down by itself. I'm not able to figure out why this might be happening.
Tracing MATLAB crashes is notoriously difficult (I used to work there doing exactly that for customers).
Even if there's a Java dump or a seg-fault listing, there's not really a whole lot you can do to figure out what line this is on without going through line-by-line with the MATLAB debugger. And if the problem is random, or memory-based, you might never track it down.
That's the bad news. The good news is that 95% of crashes are due to 3rd-party MEX files and associated memory leaks. I'd guess the dim-reduction toolbox contains a MEX file, and that is what's crashing. And it's deterministic. If that's the case, you can dbstop and dbstep through the code to figure out which line MATLAB disappears on, then report that to the toolbox developers. Or start to edit the MEX file's C code.
Here's information on debugging in case you didn't already know:
http://www.mathworks.com/help/matlab/ref/dbstop.html
If that doesn't work, try another toolbox.
My friends and I wrote one that's free:
http://www.mathworks.com/matlabcentral/linkexchange/links/2947-pattern-recognition-toolbox
I saw a comment on another question (I forget which one) encouraging the asker to avoid testing his/her code in the debug harness unless strictly necessary, citing something to the effect of it acting as a crutch. There's certainly something to be said for developing the skill to deduce the cause of bugs without "direct" evidence. I'm quite a fan of debuggers myself (in fact, I tend to run without one only if strictly necessary), but I got to thinking about the relative merits of each approach.
Debugger Pros
Starting with the obvious, takes less time to zero in on faults, exceptions and crashes
Tracing provides a nice alternative to littering your code with commented-out print statements
Performance overhead can give you extra wiggle room, i.e. if your program is responsive while debugging, it will almost definitely be so in the wild
Debugger Cons
Performance overhead can make iterations slower
(Edit) Tunnel Vision: Debugging the symptom can distract you from deducing the cause when the crash occurs long after or far from the defect
It may "help" you by initializing variables or otherwise masking bugs, leading to surprises later on
Conversely, there's the odd bug that only crops up in a debug configuration; tracking it down may be a waste of effort (though, this is often indicative of a deeper, subtler problem that is worth fixing)
These are general, of course--it varies wildly with language, environment and situation--but what are some other considerations?
I've had this argument many times. The debugger is only a crutch if you use it like one. I've met people who refused to use a debugger even to get a stack trace of where a piece of code crashed, instead using printf bisection to find the crashing line of code (this would take a day or more... seriously, people?)
One problem you might encounter when using a debugger is tunnel vision. The debugger has a way of focusing your attention on the immediate area where the bug became apparent -- whether it's a crash, incorrect data, or otherwise -- at the expense of stealing your attention from other areas that might benefit from some investigation. On the other hand, actually watching code execute in a debugger can sometimes free you from your mental trap of thinking about the code the wrong way. You might swear it does X when it actually does Y -- seeing it do Y before your very eyes is sometimes a profound moment.
That said, I only fire up the debugger in two circumstances:
A bug manifested which, after five minutes or so, I cannot immediately guess as to the cause
I'm trying to understand some code I'm not familiar with, and I want to watch it execute
Honestly, the time in the debugger is usually just a few minutes, then the problem is found. Fixing the problem is usually the hard part, and the debugger is of little use for that.
I think it's a mistake, not so much to always have a debugger at the ready, or to even run code always under the debugger, but to run a DEBUG BUILD. You already pointed out the worst of the problems with this. Memory allocations tend to happen differently, uninitialized data is filled with different values, etc. If the first time you fire up the release build is a few weeks before QA gets their hands on it (or, in a crazy shop, before you start shipping), you may be in for a world of serious pain.
I have only once seen a bug which only manifested in the debug build. A few people argued that it wasn't important because that isn't what we ship, but I looked into it anyway and found a REALLY bad problem.
Like any tool the debugger has appropriate and inappropriate uses. There are no bad tools.
If you're reasonably certain you have some bugs to deal with, running in debug mode tends to make finding them a bit faster. If you're at the point that you think the bugs are gone, you want to simulate the target environment as closely as possible, which usually means turning debug mode off.
Depending on your language, tools, etc., chances are pretty decent that you can also do something that's more or less a hybrid of the two: generate debugging information, but leave everything else as in release mode. This is often extremely helpful as well, since you can debug the code after it's generated the way the customer will see it (but beware that optimization can produce oddities, such as changing the order of code...)
Ultimately, you should run your tests in the same configuration your code will be running in the wild.
Then if a test fails, you can drop back to debug mode, and if it still fails, track it down and fix it. If it "fixes" itself when run in debug mode, then be glad you found it now rather than when you shipped, and get to tracking down the root cause in other ways.
Especially with GCC, the compiler definitely likes to help you along, but honestly issues don't crop up very often.
I see nothing wrong with developing in debug mode; usually the only time you get odd behavior is when you aren't quite following the standards for the language anyway (not initializing variables, etc.).
Once you're ready to release, switch off debug mode and run the tests again; locate bugs, rinse and repeat. I have very rarely encountered "debug mode only" bugs.
I like to run in debug mode during development. One big reason is simply so I'm set up to run the debugger when needed.
When I was doing more C++/MFC programming, I would put ASSERTS all over the place (or Tracing, as you described it) and caught many bugs and found many wrong assumptions in the process.
Who cares if it runs a bit slower? I say develop in debug mode. It's extremely rare that I have errors that turn up when I switch to release builds. But I generally start running release builds when I'm nearing completion. If I have some testers, I'd obviously send them release builds.
I found writing print statements much superior to debugging in Python, simply because it's easier and you don't need to recompile. Watching variables, even with PyDev in Eclipse, is painful. One of the reasons is that I was always "debugging" code that was no more than a few tens of lines long.
On the other hand, with larger projects in C I use a debugger. C is different: there are pointers and arrays, you want to observe structs, and it is not simple to write a printf to check whether your serialization code works correctly.
I always use the debugger. And I will always single-step new code line by line. There's a whole generation of kids that debug with print statements (especially for web development). IMHO that's why the software state of the art is lagging.
Of course, you have to run your unit tests with both debugging on and off, but I find true compiler bugs related to optimizations are rare.
I'm working on a board game algorithm where a large tree is traversed using recursion; however, it's not behaving as expected. How do I handle this, and what are your experiences with these situations?
To make things worse, it uses alpha-beta pruning, which means entire parts of the tree are never visited, and it simply stops recursing when certain conditions are met. I can't change the search depth to a lower number either, because while the algorithm is deterministic, the outcome varies with how deep it searches, and it may behave as expected at a lower search depth (and it does).
Now, I'm not gonna ask you "where is the problem in my code?" but I am looking for general tips, tools, visualizations, anything to debug code like this. Personally, I'm developing in C#, but any and all tools are welcome. Although I think that this may be most applicable to imperative languages.
Logging. Log extensively in your code. In my experience, logging is THE solution for these types of problems. When it's hard to figure out what your code is doing, extensive logging lets you output the internal state from within your code. It's really not a perfect solution, but as far as I've seen, it works better than any other method.
One thing I have done in the past is to format your logs to reflect the recursion depth. So you might add a new indentation for every recursion level, or some other delimiter. Then make a debug DLL that logs everything you need to know about each iteration. Between the two, you should be able to read the execution path and hopefully tell what's wrong.
I would normally unit-test such algorithms with one or more predefined datasets that have well-defined outcomes. I would typically make several such tests in increasing order of complexity.
If you insist on debugging, it is sometimes useful to doctor the code with statements that check for a given value, so you can attach a breakpoint at that time and place in the code:
if (depth == X && item.id == 32) {
    // Breakpoint here
}
Maybe you could convert the recursion into iteration with an explicit stack for the parameters. Testing is easier this way because you can directly log values and access the stack, and you don't have to pass data/variables into each recursive call or keep them from falling out of scope.
I once had a similar problem when I was developing an AI algorithm to play a Tetris game. After trying many things and losing a LOT of hours reading my own logs, debugging, and stepping in and out of functions, what worked for me was to code a fast visualizer and test my code with FIXED input.
So, if time is not a problem and you really want to understand what is going on, get a fixed board state and SEE what your program is doing with the data, using a mix of debug logs/output and some tool of your own that shows information at each step.
Once you find a board state that gives you this problem, try to pinpoint the function(s) where it starts, and then you will be in a position to fix it.
I know what a pain this can be. At my job, we are currently working with a 3rd party application that basically behaves as a black box, so we have to devise some interesting debugging techniques to help us work around issues.
When I was taking a compiler theory course in college, we used a software library to visualize our trees; something similar might help you as well, letting you see what the tree looks like. In fact, you could build yourself a WinForms/WPF application to dump the contents of your tree into a TreeView control--it's messy, but it'll get the job done.
You might want to consider some kind of debug output, too. I know you mentioned that your tree is large, but perhaps debug statements or breakpoints at the key points of execution you're having trouble visualizing would lend you a hand.
Bear in mind, too, that intelligent debugging using Visual Studio can work wonders. It's tough to see how state is changing across multiple breaks, but Visual Studio 2010 should actually help with this.
Unfortunately, it's not particularly easy to help you debug without further information. Have you identified the first depth at which it starts to break? Does it continue to break with higher search depths? You might want to evaluate your working cases and try to determine how it's different.
Since you say that the traversal is not working as expected, I assume you have some idea of where things may go wrong. Then inspect the code to verify that you have not overlooked something basic.
After that I suggest you set up some simple unit tests. If they pass, then keep adding tests until they fail. If they fail, then reduce the tests until they either pass or are as simple as they can be. That should help you pinpoint the problems.
If you want to debug as well, I suggest you employ conditional breakpoints. Visual Studio lets you modify breakpoints, so you can set conditions on when the breakpoint should be triggered. That can reduce the number of iterations you need to look at.
I would start by instrumenting the function(s). At each recursive call log the data structures and any other info that will be useful in helping you identify the problem.
Print out the dump along with the source code then get away from the computer and have a nice paper-based debugging session over a cup of coffee.
Start from the base case, where you've mentioned the if/else statements, and then try to channel your thinking by writing it down with pen and paper, plus printing the values to the console as the first few instances of the recursive function are generated.
The goal is to find the right trend in the values you print and match them against the values you wrote on paper for the initial few steps of your recursive algorithm.
What's your standard way of debugging a problem? This might seem like a pretty broad question with some of you replying 'It depends on the problem' but I think a lot of us debug by instinct and haven't actually tried wording our process. That's why we say 'it depends'.
I was sort of forced to put my process into words recently because a few developers and I were working on the same problem, and we were debugging it in totally different ways. I wanted them to understand what I was trying to do and vice versa.
After some reflection, I realized that my way of debugging is actually quite monotonous. I'll first try to replicate the problem reliably (especially on my local machine). Then, through a process of elimination (and this is where I think it's problem-dependent), I try to identify the cause.
The other guys were trying to do it in a totally different way.
So, just wondering what has been working for you guys out there? And what would you say your process is for debugging if you had to formalize it in words?
BTW, we still haven't found out our problem =)
My approach varies based on my familiarity with the system at hand. Typically I do something like:
Replicate the failure, if at all possible.
Examine the fail state to determine the immediate cause of the failure.
If I'm familiar with the system, I may have a good guess about the root cause. If not, I start to mechanically trace the data back through the software while challenging basic assumptions made by the software.
If the problem seems to have a consistent trigger, I may manually walk forward through the code with a debugger while challenging implicit assumptions that the code makes.
Tracing the root cause is, of course, where things can get hairy. This is where having a dump (or better, a live, broken process) can be truly invaluable.
I think that the key point in my debugging process is challenging preconceptions and assumptions. The number of times I've found a bug in a component that I or a colleague would swear was working fine is massive.
I've been told by my more intuitive friends and colleagues that I'm quite pedantic when they watch me debug or ask me to help them figure something out. :)
Consider getting hold of the book "Debugging" by David J. Agans. The subtitle is "The 9 Indispensable Rules for Finding Even the Most Elusive Software and Hardware Problems". His list of debugging rules, available in poster form on the web site (which also links to the book), is:
Understand the system
Make it fail
Quit thinking and look
Divide and conquer
Change one thing at a time
Keep an audit trail
Check the plug
Get a fresh view
If you didn't fix it, it ain't fixed
The last point is particularly relevant in the software industry.
I picked these up from the web or from some book I can't recall (it may have been Coding Horror...)
Debugging 101:
Reproduce
Progressively Narrow Scope
Avoid Debuggers
Change Only One Thing At a Time
Psychological Methods:
Rubber-duck debugging
Don't Speculate
Don't be too Quick to Blame the Tools
Understand Both Problem and Solution
Take a Break
Consider Multiple Causes
Bug Prevention Methods:
Monitor Your Own Fault Injection Habits
Introduce Debugging Aids Early
Loose Coupling and Information Hiding
Write a Regression Test to Prevent Reoccurrence
Technical Methods:
Inert Trace Statements
Consult the Log Files of Third Party Products
Search the web for the Stack Trace
Introduce Design By Contract
Wipe the Slate Clean
Intermittent Bugs
Exploit Locality
Introduce Dummy Implementations and Subclasses
Recompile / Relink
Probe Boundary Conditions and Special Cases
Check Version Dependencies (third party)
Check Code that Has Changed Recently
Don't Trust the Error Message
Graphics Bugs
When I'm up against a bug that I can't seem to figure out, I like to make a model of the problem. Make a copy of the section of problem code, and start removing features from it, one at a time. Run a unit test against the code after every removal. Through this process, you will either remove the feature with the bug (and hence locate the bug), or you will have isolated the bug down to a core piece of code that contains the essence of the problem. And once you figure out the essence of the problem, it's a lot easier to fix.
I normally start off by forming a hypothesis based on the information I have at hand. Once this is done, I work to prove it correct. If it proves to be wrong, I start over with a different hypothesis.
Most multithreaded synchronization issues get solved very easily with this approach.
You also need a good understanding of the debugger you are using and its features. I work on Windows applications and have found WinDbg to be extremely helpful in finding bugs.
Reducing the bug to its simplest form often leads to greater understanding of the issue as well adding the benefit of being able to involve others if necessary.
Setting up a quick reproduction scenario to allow for efficient use of your time to test any hypothesis you choose.
Creating tools to dump the environment quickly for comparisons.
Creating and reproducing the bug with logging turned up to the maximum level.
Examining the system logs for anything alarming.
Looking at file dates and timestamps to get a feeling if the problem could be a recent introduction.
Looking through the source repository for recent activity in the relevant modules.
Apply deductive reasoning and the principle of Ockham's Razor.
Be willing to step back and take a break from the problem.
I'm also a big fan of using the process of elimination. Ruling out variables tremendously simplifies the debugging task. It's often the very first thing that should be done.
Another really effective technique is to roll back to your last working version if possible and try again. This can be extremely powerful because it gives you solid footing to proceed more carefully. A variation on this is to get the code to a point where it is working with less functionality, rather than not working with more functionality.
Of course, it's very important not to just try things. That only increases your despair, because it never works. I'd rather make 50 runs to gather information about the bug than take a wild swing and hope it works.
I find the best time to "debug" is while you're writing the code. In other words, be defensive. Check return values, liberally use assert, use some kind of reliable logging mechanism and log everything.
To more directly answer the question, the most efficient way for me to debug problems is to read code. Having a log helps you find the relevant code to read quickly. No logging? Spend the time putting it in. It may not seem like you're finding the bug, and you may not be. The logging might help you find another bug though, and eventually, once you've gone through enough code, you'll find it... faster than setting up debuggers and trying to reproduce the problem, single stepping, etc.
While debugging I try to think of what the possible problems could be. I've come up with a fairly arbitrary classification system, but it works for me: all bugs fall into one of four categories. Keep in mind here that I'm talking about runtime problems, not compiler or linker errors. The four categories are:
dynamic memory allocation
stack overflow
uninitialized variable
logic bug
These categories have been most useful to me with C and C++, but I expect they apply pretty well elsewhere. The logic bug category is a big one (e.g. putting a < b when the correct thing was a <= b), and can include things like failing to synchronize access among threads.
Knowing what I'm looking for (one of these four things) helps a lot in finding it. Finding bugs always seems to be much harder than fixing them.
The actual mechanics for debugging are most often:
do I have an automated test that demonstrates the problem?
if not, add a test that fails
change the code so the test passes
make sure all the other tests still pass
check in the change
No automated testing in your environment? No time like the present to set it up. Too hard to organize things so you can test individual pieces of your program? Take the time to make it so. May make it take "too long" to fix this particular bug, but the sooner you start, the faster everything else'll go. Again, you might not fix the particular bug you're looking for but I bet you find and fix others along the way.
My method of debugging is different, probably because I am still a beginner.
When I encounter a logical bug, I seem to end up adding more variables to see which values go where, and then I debug line by line in the piece of code that is causing the problem.
Replicating the problem and generating a repeatable test data set is definitely the first and most important step to debugging.
If I can identify a repeatable bug, I'll typically try and isolate the components involved until I locate the problem. Frequently I'll spend a little time ruling out cases so I can state definitively: The problem is not in component X (or process Y, etc.).
First I try to replicate the error; without being able to replicate it, it is basically impossible in a non-trivial program to guess the problem.
Then, if possible, I break out the code into a separate standalone project. There are several reasons for this: first, if the original project is big, it is quite difficult to debug; second, it eliminates or highlights any assumptions about the code.
I normally always have another copy of VS open which I use for the debugging parts in mini projects and to test routines which I later add to the main project.
Once having reproduced the error in the separate module the battle is almost won.
Sometimes it is not easy to break out a piece of code, so in those cases I use different methods depending on how complex the issue is. In most cases assumptions about data seem to come and bite me, so I try to add lots of asserts in the code to make sure my assumptions are correct. I also disable code using #ifdef until the error disappears, eliminating dependencies on other modules, etc. ... sort of slowly circling in on the bug like a vulture.
I don't think I really have a conscious way of doing it; it varies quite a lot, but the general principle is to eliminate the noise around the issue until it is quite obvious what it is. Hope I didn't sound too confusing :)
What are some of the nastiest, most difficult bugs you have had to track and fix and why?
I am both genuinely curious and knee-deep in the process as we speak. So, as they say, misery loves company.
Heisenbugs:
A heisenbug (named after the Heisenberg Uncertainty Principle) is a computer bug that disappears or alters its characteristics when an attempt is made to study it.
Race conditions and deadlocks. I do a lot of multithreaded processes and that is the hardest thing to deal with.
Bugs that happen when compiled in release mode but not in debug mode.
Any bug based on timing conditions. These often come when working with inter-thread communication, an external system, reading from a network, reading from a file, or communicating with any external server or device.
Bugs that are not in your code per se, but rather in a vendor's module on which you depend. Particularly when the vendor is unresponsive and you are forced to hack a work-around. Very frustrating!
We were developing a database to hold words and definitions in another language. It turns out that this language had only recently been added to the Unicode standard and it didn't make it into SQL Server 2005 (though it was added around 2005). This had a very frustrating effect when it came to collation.
Words and definitions went in just fine, I could see everything in Management Studio. But whenever we tried to find the definition for a given word, our queries returned nothing. After a solid 8 hours of debugging, I was at the point of thinking I had lost the ability to write a simple SELECT query.
That is, until I noticed English letters matched other English letters with any amount of foreign letters thrown in. For example, EnglishWord would match E!n#gl##$ish$&Word. (With !##$%^&* representing foreign letters).
When a collation doesn't know about a certain character, it can't sort it. If it can't sort them, it can't tell whether two strings match or not (a surprise for me). So frustrating, and a whole day down the drain for a stupid collation setting.
Threading bugs, especially race conditions. When you cannot stop the system (because the bug goes away), things quickly get tough.
The hardest ones I usually run into are ones that don't show up in any log trace. You should never silently eat an exception! The problem is that eating an exception often moves your code into an invalid state, where it fails later in another thread and in a completely unrelated manner.
That said, the hardest one I ever really ran into was a C program in a function call where the calling signature didn't exactly match the called signature (one was a long, the other an int). There were no errors at compile time or link time and most tests passed, but the stack was off by sizeof(int), so the variables after it on the stack would randomly have bad values, but most of the time it would work fine (the values following that bad parameter were generally being passed in as zero).
That was a BITCH to track.
Memory corruption under load due to bad hardware.
Bugs that happen on one server and not another, and you don't have access to the offending server to debug it.
Bugs that have to do with threading.
The most frustrating for me have been compiler bugs, where the code is correct but I've hit an undocumented corner case or something where the compiler's wrong. I start with the assumption that I've made a mistake, and then spend days trying to find it.
Edit: The other most frustrating was the time I got the test case set slightly wrong, so my code was correct but the test wasn't. That took days to find.
In general, I guess the worst bugs I've had have been the ones that aren't my fault.
The hardest bugs to track down and fix are those that combine all the difficult cases:
reported by a third party but you can't reproduce it under your own testing conditions;
bug occurs rarely and unpredictably (e.g. because it's caused by a race condition);
bug is on an embedded system and you can't attach a debugger;
when you try to get logging information out the bug goes away;
bug is in third-party code such as a library ...
... to which you don't have the source code so you have to work with disassembly only;
and the bug is at the interface between multiple hardware systems (e.g. networking protocol bugs or bus contention bugs).
I was working on a bug with all these features this week. It was necessary to reverse engineer the library to find out what it was up to; then generate hypotheses about which two devices were racing; then make specially-instrumented versions of the program designed to provoke the hypothesized race condition; then once one of the hypotheses was confirmed it was possible to synchronize the timing of events so that the library won the race 100% of the time.
There was a project building a chemical engineering simulator using a beowulf cluster. It so happened that the network cards would not transmit one particular sequence of bytes. If a packet contained that string, the packet would be lost. They solved the problem by replacing the hardware - finding it in the first place was much harder.
One of the hardest bugs I had to find was a memory corruption error that only occurred after the program had been running for hours. Because of the length of time it took to corrupt the data, we assumed a hardware problem and tried two or three other computers first.
The bug would take hours to appear, and when it did appear it was usually only noticed quite a length of time after when the program got so messed up it started misbehaving. Narrowing down in the code base to where the bug was occurring was very difficult because the crashes due to corrupted memory never occurred in the function that corrupted the memory, and it took so damned long for the bug to manifest itself.
The bug turned out to be an off-by-one error in a rarely called piece of code that handled a data line that had something wrong with it (an invalid character encoding, as I recall).
In the end the debugger proved next to useless because the crashes never occurred in the call tree for the offending function. A well sequenced stream of fprintf(stderr, ...) calls in the code and dumping the output to a file was what eventually allowed us to identify what the problem was.
Concurrency bugs are quite hard to track, because reproducing them can be very hard when you do not yet know what the bug is. That's why, every time you see an unexplained stack trace in the logs, you should search for the reason for that exception until you find it. Even if it happens only one time in a million, that does not make it unimportant.
Since you cannot rely on tests to reproduce the bug, you must use deductive reasoning to find it. That in turn requires a deep understanding of how the system works (for example, how Java's memory model works and what the possible sources of concurrency bugs are).
Here is an example of a concurrency bug in Guice 1.0 which I located just a few days ago. You can test your bug-finding skills by trying to find out what bug is causing the exception below. The bug is not too hard to find - I found its cause in some 15-30 minutes (the answer is here).
java.lang.NullPointerException
at com.google.inject.InjectorImpl.injectMembers(InjectorImpl.java:673)
at com.google.inject.InjectorImpl$8.call(InjectorImpl.java:682)
at com.google.inject.InjectorImpl$8.call(InjectorImpl.java:681)
at com.google.inject.InjectorImpl.callInContext(InjectorImpl.java:747)
at com.google.inject.InjectorImpl.injectMembers(InjectorImpl.java:680)
at ...
P.S. Faulty hardware might cause even nastier bugs than concurrency, because it may take a long time before you can confidently conclude that there is no bug in the code. Luckily hardware bugs are rarer than software bugs.
A friend of mine had this bug. He accidentally put a function argument in a C program in square brackets instead of parentheses, like this: foo[5] instead of foo(5). The compiler was perfectly happy, because the function name is a pointer, and there is nothing illegal about indexing off a pointer.
One of the most frustrating for me was when the algorithm was wrong in the software spec.
Probably not the hardest, but they are extremely common and not trivial:
Bugs concerning mutable state. It is hard to maintain invariants in a data structure if it has many mutable fields. And you have operation-order dependency: swap two lines and something bad occurs. One of my recent hard-to-find bugs was when I found that a previous developer of the system I maintained had used mutable data for hashtable keys; in some rare conditions this led to infinite loops (the idea is sketched below).
Order of initialization bugs. Can be obvious when found, but not so when coding.
The hardest one ever was actually a bug I was helping a friend with. He was writing C in MS Visual Studio 2005 and forgot to include time.h. He further called time without the required argument, usually NULL. This implicitly declared time as int time(); which corrupted the stack, and in a completely unpredictable way. It was a large amount of code, and we didn't think to look at the time() call for quite some time.
Buffer overflows (in native code)
Last year I spent a couple of months tracking a problem that ended up being a bug in a downstream system. The team lead from the offending system kept claiming that it must be something funny in our processing, even though we passed the data exactly as they had requested it from us. If the lead had been a little more cooperative, we might have nailed the bug sooner.
Uninitialized variables. (Or have modern languages done away with this?)
For embedded systems:
Unusual behaviour reported by customers in the field, but which we're unable to reproduce.
After that, bugs which turn out to be due to a freak series or concurrence of events. These are at least reproducible, but obviously they can take a long time - and a lot of experimentation - to make happen.
Difficulty of tracking:
off-by-one errors
boundary condition errors
Machine dependent problems.
I'm currently trying to debug why an application has an unhandled exception in a try{} catch{} block (yes, unhandled inside of a try / catch) that only manifests on certain OS / machine builds, and not on others.
Same version of software, same installation media, same source code, works on some - unhandled exception in what should be a very well handled part of code on others.
Gak.
When objects are cached and their equals and hashCode implementations are written so poorly that the hash code value isn't unique and equals returns true for objects that aren't equal.
Cosmetic web bugs involving styling in various browser/OS configurations, e.g. a page looks fine on Windows and Mac in Firefox and IE, but on the Mac in Safari something gets messed up. These are annoying sometimes because they require so much attention to detail, and making the change to fix Safari may break something in Firefox or IE, so one has to tread carefully and realize that the styling may be a series of hacks to fix page after page. I'd say those are my nastiest ones, and they sometimes just don't get fixed, as they aren't viewed as important.
WAY back in the day: memory leaks. Thankfully, there are a lot of tools to find them these days.
Memory issues, particularly on older systems. We have some legacy 16-bit C software that must remain 16-bit for the time being. The 64K memory blocks are a royal pain to work with, and we constantly add statics or code logic that pushes us past the 64K group limits.
To make matters worse, memory errors usually don't cause the program to crash, but cause certain features to sporadically break (and not always the same features). Debugging is a non-option - the debugger doesn't have the same memory constraints so the programs always run fine in debug mode ... plus, we can't add inline printf statements for testing since that bumps the memory usage even higher.
As a result, we can sometimes spend DAYS trying to find a single block of code to rewrite, or hours moving static chars to files. Luckily the system is slowly being moved offline.
Multithreading, memory leaks, anything requiring extensive mocks, interfacing with third-party software.