Does NetLogo have a way to debug code? - debugging

Does anybody know if there is a way to trace code in NetLogo statement by statement, like using F7 or F8 in C/C++,...

Generic longer comment:
/*
So as the comments above say, there is not a debugger but there are work-arounds.
Replayable random-seeds and many print statements help. I define a v-print routine that prints only if a "verbose" switch is TRUE, so I can put in lots of v-print statements, sometimes one per line of code, but shut them all off until needed with one interface switch.
For many Netlogo users, judging by the questions, this is the first programming language they have ever used. So there is some generic advice like "take small steps and do sanity checks often, expecting that something will go wrong."
But maybe the best and hardest advice is to learn enough Git and GitHub to set up a repository and "SAVE COMMENTED VERSIONS OFTEN", because otherwise it is a losing proposition to try to debug tangled code with multiple errors that you have been trying random changes on to see if they help. Sometimes you need to fall back to the prior last-good version and work forward from there again step by step, looking for where it breaks. Working forward is easy. Debugging backwards is hard.
Debugging code with one error is much easier than debugging code with multiple errors.
Decades ago I used to repair analog televisions. I loved rich people because they called as soon as one thing was wrong. Poor people waited until the set was full of errors then called. I've seen similar behavior in programmers continuing to code hoping that prior errors will somehow magically go away. Slow is fast.
GitHub has been a life-saver a few times when I accidentally deleted a section of code in the editor and didn't notice I'd done it. It's easy to have lines of code highlighted and then type a word while not looking at the screen. Then you go crazy with code that worked yesterday and doesn't work today and you "didn't change anything".
Using GitHub to restore a prior version, or simply doing a differences ( "diff" ) between the current version and prior version shows what changed.
*/

Related

Can I mark some code as optional while debugging in Visual Studio 2012?

I'm not sure how to really put my question into words so let me try to explain it with an example:
Let's say my program runs into some weird behavior during a specific action. I have already found some code which is the cause of this weird behavior. When I disable this sequence I don't run into the behavior. Unfortunately, I need this code because something else doesn't work without it.
So, what I am going to do next is figure out why things go differently when that code excerpt is active.
In order to better understand what's going on I sometimes want to run the whole action including the 'bad code' and sometimes without. Then I can compare the outcome, for example what happens in the UI or what my function returns.
The first approach which comes to my mind is to run my program with the code enabled, do whatever I want, then stop my program, comment out the code, recompile and run again. Um... that sounds dumb. Especially if I then need to turn that code on again to see the other behavior, and then off again, and on, and off, and so on.
It's not an option for me to use breakpoints and influence the statement order or to modify values so that I run or not run into if-statements, for-loops etc. Two examples:
I debug a timing critical behavior and when I halt the program the timing changes significantly. Thus, the first breakpoint I can set must be at the end of the action. 1
I expect a tooltip or other window to appear which is 'suppressed' when focus is given to VS. Thus, I cannot use any breakpoints at all. Neither in the beginning nor at the end of the action.1
Is there any technique in Visual Studio 2012 which allows me to mark this code as optional, so that I can decide whether or not to run this code sequence before I execute the action? I think of something like if(true|false) on a higher level.
I'm not looking for a solution where I need to re-run my program several times. In that case I could still take the simple approach of commenting out the code with #if false.
1 Note that I, of course, may set a breakpoint when I need to look into a specific variable at a certain position (if I haven't written the value into output) but will turn off breakpoints again to run the whole action in one go.
In the Visual Studio debugger you can set a breakpoint right in front of your "code in question". When the code stops at that point, you can elect to let it continue or you can right-click on any other line and select Set Next Statement.
It's kind of a weird option, but I've come to appreciate it.
The only option I can think of is to add something to your UI that only appears when debugging, giving you the option to include/exclude the operations in question.
While you're at it, you might want to enable resetting the application to a "known state" from the UI as well.
I think of something like if(true|false) on a higher level.
Why "on a higher level"? Why not use exactly this?
You want a piece of code sometimes executed, sometimes not, and the switch should be changed at run time, not at compile time - this obviously leads to
if (condition)
{
    // code in stake
}
The catch here is what kind of condition you will use - maybe a variable you set to true in the release version of your code, and to false sometimes in your debug version. Maybe the value is taken from a configuration file, maybe from an environment variable, maybe calculated by some kind of logic in your program, whatever and whenever you like.
EDIT: you could also introduce a boolean variable in your code for condition, initialize it to true by default and change its value using the debugger whenever you like.
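For illustration, a rough C# sketch of that idea, assuming the switch is read from an environment variable (the variable name RUN_CODE_IN_STAKE is made up; an app setting, config file, or anything else mentioned above would work the same way):

using System;

class Program
{
    // Read the switch once at startup; a config file entry could replace
    // the environment variable just as easily.
    static bool runCodeInStake =
        Environment.GetEnvironmentVariable("RUN_CODE_IN_STAKE") == "1";

    static void Main()
    {
        Console.WriteLine("Starting the action...");

        if (runCodeInStake)
        {
            // the "code in stake" goes here
            Console.WriteLine("Running the questionable code path.");
        }

        Console.WriteLine("Action finished.");
    }
}

Set the variable before starting the program (or change the field in the debugger, as the EDIT above suggests) to flip between the two behaviors without recompiling.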
Preprocessor Directives might be what you're after. They're instructions for the compiler, identifiable by starting with a # character (and stylistically, by default they don't follow the indent pattern of your code, instead always residing firmly at the left-hand edge of the editor):
#define INCLUDE_DODGY_CODE

public void MyMethodWithDodgyBits() {
#if INCLUDE_DODGY_CODE
    myDodgyMethod();
#endif
    myOkMethod();
}
In this case, if #define INCLUDE_DODGY_CODE was included, the myDodgyMethod() call will be compiled into your program. Otherwise, the call will be skipped by the compiler and will simply not exist in your binary.
There are a couple of options for debugging as you ask.
Visual Studio has a number of options to directly navigate through code. You can use the Set Next Statement feature to move directly to a particular statement. You can also directly edit values through the Immediate Window, the QuickWatch window, and the tooltip that hovers over variables while debugging.
Visual Studio also has the ability to playback the execution history. Take a look at IntelliTrace to get started. It can be helpful when you have multiple areas of concern that are interacting and generating the error condition.
You can also wrap your sections of code within conditional blocks, and set the conditional variables as appropriate. That could be while you're debugging, or you could pass parameters in through a configuration file. Using conditional checks may be easier than manually stepping through code if there are a number of statements you wish to exclude.
It sometimes depends on the version of VS and the language, but you can happily edit the code (to comment it out, or wrap it in a big #if 0), then press Alt+F10 and the compiler will recompile, relink and continue execution as if you'd never fiddled with it.
But while that works beautifully in VC++ (since VS v6 IIRC), C# can have issues - I find (with VS2010) that I cannot edit and continue in this way with functions containing any lambda (mainly LINQ) statements, and 64-bit code never used to support this either. Still, it's worth experimenting with as it's really useful sometimes.
I have worked on applications that have optional code used for debugging alone that should not appear in the production environment. This segment of optional code was easiest for us to control using a config file since it didn't require a re-compile to change.
Such a fix might not be the end all be all for your end result, but it might help get through it until a fix is found. If you have multiple optional sections that need to be tested in combination this style of fix could require multiple keys in the config file, which could be a downside and a pain to keep track of.
Your question isn't exactly clear, which is possibly why there are so many answers which you think are invalid. You may want to consider rewording it if no one seems able to answer the question.
With the risk of giving another non-valid answer I'll add some input on how I've dealt with the issue in the past.
The easiest way is to place any optional code within
#if DEBUG
//Optional code here
#endif
That way, when you run in debug mode the code is implemented and when you run in release mode it's not. Switching between the two requires clicking one button.
I've also solved the same problem in a similar way with a simple flag:
bool runOptionalCode = false;
then
if (runOptionalCode)
{
    // Place optional code here
}
Again, switching between modes requires changing one word, so is a simple task. You mention this in your question but discount it for reasons that are unclear. As I said, it requires very little effort to switch between the two.
If you need to make changes between the code while it's running the best way is to use a UI item or a keystroke which modifies the flag mentioned in the example above. Depending on your application though this could be more effort than it's worth. In the past I've found that when I have a key listener already implemented as part of the project, having a couple of key strokes decide whether to run my debug (optional) code works best. In an application without key listeners I'd rather stick with one of the previous methods.
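As a rough, self-contained sketch of that last idea (console-based here just to keep it runnable; the key, flag and messages are all made up, and in a real GUI app the toggle would live in your existing key listener):

using System;

class DebugToggleDemo
{
    static bool runOptionalCode = false;   // the same kind of flag as in the example above

    static void Main()
    {
        Console.WriteLine("'d' toggles the optional code, 'a' runs the action, 'q' quits.");
        while (true)
        {
            char key = Console.ReadKey(true).KeyChar;
            if (key == 'q') break;

            if (key == 'd')
            {
                runOptionalCode = !runOptionalCode;
                Console.WriteLine("Optional code is now " + (runOptionalCode ? "ON" : "OFF"));
            }
            else if (key == 'a')
            {
                Console.WriteLine("Running the action...");
                if (runOptionalCode)
                {
                    Console.WriteLine("  ...including the optional code.");
                }
            }
        }
    }
}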

How to localize syntax errors in Mathematica?

Is there a trick to localize syntax errors? I occasionally run into this situation when editing existing code. Here's a latest example, do you see an efficient way to find the error? (I found the error already, but not efficiently)
http://yaroslavvb.com/upload/save/syntax-error.nb
Maybe I am missing something, but in your example it was quite easy: once I put your code into a notebook and tried to run it, the usual orange bracket appeared, and when you expand the messages, it states very clearly that the problem was an empty pair of parentheses in your vertexFormula function. In my experience, most of the time, the orange box gives enough hints.
Another great way which I use on a daily basis is the code highlighting in Workbench. It immediately highlights syntax errors, plus you have a very powerful Eclipse-based navigation both for a single package and multiple packages. It might seem as if you lose some flexibility by going from interactive FrontEnd development to Workbench, but I found the opposite (or maybe this is the revenge of my enterprise Java background): you can still have your notebook(s) in a Workbench project, where you do the initial development, but then they get attached to the project and to a number of packages that you already developed and use. The transition from notebook to package could not be easier, since you can continue to use that notebook after you transfer the code into a package, and you don't even have to worry about loading the package, as long as you do it just once inside a project. Overall, I find Workbench-based development much more fun as soon as your project goes over some critical mass (I'd say perhaps around 1000 loc, but ymmv). But complete, new and independent chunks of functionality I still prefer to prototype entirely in the FrontEnd.
If you adhere to some specific coding standards in your code, or work with code which does, you might be able to develop some simple partial parser which at least breaks the code into complete chunks (function definitions, Module-s etc., CompoundExpressions). Then you could use ToExpression (say, Map it on a list of string code chunks returned by your partial parser) to see which piece of code is problematic (it will return $Failed on it). But if you use Workbench, this is unnecessary altogether.

How do you reproduce bugs that occur sporadically?

We have a bug in our application that does not occur every time, so we don't know its "logic". I couldn't even reproduce it once in 100 attempts today.
Disclaimer: This bug exists and I've seen it. It's not a pebkac or something similar.
What are common hints to reproduce this kind of bug?
Analyze the problem in a pair and pair-read the code. Make notes of the things you KNOW to be true and try to assert which logical preconditions must hold for this to happen. Follow the evidence like a CSI.
Most people instinctively say "add more logging", and this may be a solution. But for a lot of problems this just makes things worse, since logging can change timing-dependencies sufficiently to make the problem more or less frequent. Changing the frequency from 1 in 1000 to 1 in 1,000,000 will not bring you closer to the true source of the problem.
So if your logical reasoning does not solve the problem, it'll probably give you a few specifics you could investigate with logging or assertions in your code.
There is no general good answer to the question, but here is what I have found:
It takes a talent for this kind of thing. Not all developers are best suited for it, even if they are superstars in other areas. So know your team, who has a talent for it, and hope you can give them enough candy to get them excited about helping you out, even if it isn't their area.
Work backwards, and treat it like a scientific investigation. Start with the bug, what you see is wrong. Develop hypotheses about what could cause it (this is the creative/imaginative part, the art that not everyone has the talent for) - and it helps a lot to know how the code works. For each of those hypotheses (preferably sorted by what you think is most likely - again pure gut feel here), develop a test that tries to eliminate it as the cause, and test the hypothesis. Any given failure to meet a prediction doesn't mean the hypothesis is wrong. Test the hypothesis until it is confirmed to be wrong (although as it gets less likely you may want to move on to another hypothesis first, just don't discount this one until you have a definitive failure).
Gather as much data as you can during this process. Extensive logging and whatever else is applicable. Do not discount a hypothesis because you lack the data, rather remedy the lack of data. Quite often the inspiration for the right hypothesis comes from examining the data. Noticing something off in a stack trace, weird issue in a log, something missing that should be there in a database, etc.
Double check every assumption. So many times I have seen an issue not get fixed quickly because some general method call was not further investigated, so the problem was just assumed to be not applicable. "Oh that, that should be simple." (See point 1).
If you run out of hypotheses, that is generally caused by insufficient knowledge of the system (this is true even if you wrote every line of code yourself), and you need to run through and review code and gain additional insight into the system to come up with a new idea.
Of course, none of the above guarantees anything, but that is the approach that I have found gets results consistently.
Add some sort of logging or tracing. For example, log the last X actions the user committed before causing the bug (only if you can set a condition to match the bug).
It's quite common for programmers not to be able to reproduce a user-experienced crash simply because you have developed a certain workflow and habits in using the application that happen to go around the bug.
At this frequency of 1/100, I'd say that the first thing to do is to handle exceptions and log everything you can, or you could be spending another week hunting this bug.
Also make a priority list of potentially sensitive areas and features in your project. For example:
1 - Multithreading
2 - Wild pointers/ loose arrays
3 - Reliance on input devices
etc.
This will help you segment areas that you can brute-force-until-break-again as suggested by other posters.
Since this is language-agnostic, I'll mention a few axioms of debugging.
Nothing a computer ever does is random. A 'random occurrence' indicates an as-yet-undiscovered pattern. Debugging begins with isolating the pattern. Vary individual elements and assess what makes a change in the behaviour of the bug.
Different user, same computer?
Same user, different computer?
Is the occurrence strongly periodic? Does rebooting change the periodicity?
FYI - I once saw a bug that was experienced by a single person. I literally mean person, not a user account. User A would never see the problem on their system; User B would sit down at that workstation, sign on as User A and could immediately reproduce the bug. There should be no conceivable way for the app to know which physical body was in the chair. However -
The users used the app in different ways. User A habitually used a hotkey to invoke an action and User B used an on-screen control. The difference in the user behaviour would cascade into a visible error a few actions later.
ANY difference that affects the behaviour of the bug should be investigated, even if it makes no sense.
There's a good chance your application is MTWIDNTBMT (Multi Threaded When It Doesn't Need To Be Multi Threaded), or maybe just multi-threaded (to be polite). A good way to reproduce sporadic errors in multi-threaded applications is to sprinkle code like this around (C#):
Random rnd = new Random();
System.Threading.Thread.Sleep(rnd.Next(2000));
and/or this:
for (long i = 0; i < 4000000000; i++)
{
    // tight loop (the counter must be a long, since 4000000000 does not fit in an int)
}
to simulate threads completing their tasks at different times than usual or tying up the processor for long stretches.
I've inherited many buggy, multi-threaded apps over the years, and code like the above examples usually makes the sporadic errors occur much more frequently.
Add verbose logging. It will take multiple -- sometimes dozen(s) -- iterations to add enough logging to understand the scenario.
Now the problem is that if the issue is a race condition, which is likely if it doesn't reproduce reliably, logging can change the timing and the problem will stop happening. In this case do not log to a file, but keep a rotating buffer of the log in memory and only dump it to disk when you detect that the problem has occurred.
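A minimal sketch of that rotating in-memory buffer (class and file names invented for illustration; the capacity and detection logic are up to you):

using System;
using System.Collections.Generic;
using System.IO;

// Keeps only the most recent log lines in memory, so logging barely perturbs
// timing; the buffer is written to disk only when the problem is detected.
class RingBufferLog
{
    private readonly Queue<string> _lines = new Queue<string>();
    private readonly int _capacity;

    public RingBufferLog(int capacity) { _capacity = capacity; }

    public void Log(string message)
    {
        lock (_lines)
        {
            _lines.Enqueue(DateTime.UtcNow.ToString("O") + " " + message);
            if (_lines.Count > _capacity) _lines.Dequeue();
        }
    }

    public void DumpTo(string path)
    {
        lock (_lines) File.WriteAllLines(path, _lines);
    }
}

class Demo
{
    static void Main()
    {
        var log = new RingBufferLog(capacity: 1000);
        for (int i = 0; i < 5000; i++) log.Log("step " + i);   // only the last 1000 survive

        bool problemDetected = true;                 // stand-in for your real detection
        if (problemDetected) log.DumpTo("crash-log.txt");
    }
}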
Edit: a little more thought: if this is a GUI application, run tests with a QA automation tool which allows you to replay macros. If this is a service-type app, try to come up with at least a guess as to what is happening and then programmatically create 'freak' usage patterns which would exercise the code that you suspect. Create higher than usual loads, etc.
What development environment?
For C++, your best bet may be VMWare Workstation record/replay, see:
http://stackframe.blogspot.com/2007/04/workstation-60-and-death-of.html
Other suggestions include inspecting the stack trace, and careful code overview... there is really no silver bullet :)
Try to add code in your app to trace the bug automatically once it happens (or even alert you via mail / SMS)
Log whatever you can, so that when it happens you can capture the right system state.
Another thing - try applying automated testing that can cover more territory than human-based testing in a structured manner... it's a long shot, but a good practice in general.
All the above, plus throw some semi-random, brute-force soft-robot at it, and scatter a lot of assert/verify calls (C/C++, probably similar in other languages) through the code.
Tons of logging and careful code review are your only options.
These can be especially painful if the app is deployed and you can't adjust the logging. At that point, your only choice is going through the code with a fine-tooth comb and trying to reason about how the program could enter into the bad state (scientific method to the rescue!)
Often these kinds of bugs are related to corrupted memory, and for that reason they might not appear very often. You should try to run your software with some kind of memory checker, e.g. Valgrind, to see if something goes wrong.
Let’s say I’m starting with a production application.
I typically add debug logging around the areas where I think the bug is occurring. I set up the logging statements to give me insight into the state of the application. Then I turn the debug log level on and ask the user/operator(s) to notify me of the time of the next bug occurrence. I then analyze the log to see what hints it gives about the state of the application and whether that leads to a better understanding of what could be going wrong.
I repeat step 1 until I have a good idea of where I can start debugging the code in the debugger
Sometimes the number of iterations of the code running is key but other times it maybe the interaction of a component with an outside system (database, specific user machine, operating system, etc.). Take some time to setup a debug environment that matches the production environment as closely as possible. VM technology is a good tool for solving this problem.
Next I proceed via the debugger. This could include creating a test harness of some sort that puts the code/components in the state I’ve observed from the logs. Knowing how to setup conditional break points can save a lot of time, so get familiar with that and other features within your debugger.
Debug, debug , debug. If you’re going nowhere after a few hours, take a break and work on something unrelated for awhile. Come back with a fresh mind and perspective.
If you have gotten nowhere by now, go back to step 1 and make another iteration.
For really difficult problems you may have to resort to installing a debugger on the system where the bug is occurring. That combined with your test harness from step 4 can usually crack the really baffling issues.
Unit Tests. Testing a bug in the app is often horrendous because there is so much noise, so many variable factors. In general the bigger the (hay)stack, the harder it is to pinpoint the issue. Creatively extending your unit test framework to embrace edge cases can save hours or even days of sifting.
Having said that there is no silver bullet. I feel your pain.
Add pre- and post-condition checks in methods related to this bug.
You may have a look at Design by Contract.
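A minimal sketch of what such checks can look like in C# with Debug.Assert (the method and its conditions are invented for illustration; in debug builds a failing assert fires right next to the cause):

using System.Diagnostics;

static class Orders
{
    // Applies a percentage discount to a price.
    public static decimal ApplyDiscount(decimal price, decimal percent)
    {
        // Preconditions: what the method relies on being true.
        Debug.Assert(price >= 0, "price must not be negative");
        Debug.Assert(percent >= 0 && percent <= 100, "percent must be between 0 and 100");

        decimal result = price - price * percent / 100m;

        // Postcondition: what the method promises about its result.
        Debug.Assert(result >= 0 && result <= price, "discounted price out of range");
        return result;
    }
}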
Along with a lot of patience, a quiet prayer and some cursing, you will need:
a good mechanism for logging the user actions
a good mechanism for gathering the data state when the user performs some actions (state in application, database etc.)
Check the server environment (e.g. anti-virus software running at a particular time, etc.), record the times of the error and see if you can find any trends
some more prayers & cursing...
HTH.
Assuming you're on Windows, and your "bug" is a crash or some sort of corruption in unmanaged code (C/C++), then take a look at Application Verifier from Microsoft. The tool has a number of stops that can be enabled to verify things during runtime. If you have an idea of the scenario where your bug occurs, then try to run through the scenario (or a stress version of the scenario) with AppVerifer running. Make sure to either turn on pageheap in AppVerifier, or consider compiling your code with the /RTCcsu switch (see http://msdn.microsoft.com/en-us/library/8wtf2dfz.aspx for more information).
"Heisenbugs" require great skills to diagnose, and if you want help from people here you have to describe this in much more detail, and patiently listen to various tests and checks, report result here, and iterate this till you solve it (or decide it is too expensive in terms of resources).
You will probably have to tell us your actual situation, language, DB, operating system, workload estimate, time of day it happened in the past, and a myriad of other things; list the tests you did already and how they went, and be ready to do more and share the results.
And this will not guarantee that we collectively can find it, either...
I'd suggest writing down all the things the user has been doing. If you have, let's say, 10 such bug reports, you can try to find something that connects them.
Read the stack trace carefully and try to guess what could have happened;
then try to trace/log every line of code that can potentially cause trouble.
Keep your focus on disposing resources; many sneaky sporadic bugs I found were related to close/dispose issues :).
For .NET projects you can use Elmah (Error Logging Modules and Handlers) to monitor your application for uncaught exceptions. It's very simple to install and provides a very nice interface to browse unknown errors.
http://code.google.com/p/elmah/
This saved me just today in catching a very random error that was occurring during a registration process.
Other than that I can only recommend trying to get as much information from your users as possible and having a thorough understanding of the project workflow
They mostly come out at night....
mostly
The team that I work with has enlisted the users in recording their time they spend in our app with CamStudio when we've got a pesky bug to track down. It's easy to install and for them to use, and makes reproducing those nagging bugs much easier, since you can watch what the users are doing. It also has no relationship to the language you're working in, since it's just recording the windows desktop.
However, this route seems to be viable only if you're developing corporate apps and have good relationships with your users.
This varies (as you say), but some of the things that are handy with this can be
immediately going into the debugger when the problem occurs and dumping all the threads (or the equivalent, such as dumping the core immediately or whatever.)
running with logging turned on but otherwise entirely in release/production mode. (This is possible in some random environments like c and rails but not many others.)
do stuff to make the edge conditions on the machine worse... force low memory / high load / more threads / serving more requests
Making sure that you're actually listening to what the users encountering the problem are actually saying. Making sure that they're actually explaining the relevant details. This seems to be the one that breaks people in the field a lot. Trying to reproduce the wrong problem is boring.
Get used to reading assembly that was produced by optimizing compilers. This seems to stop people sometimes, and it isn't applicable to all languages/platforms, but it can help
Be prepared to accept that it is your (the developer's) fault. Don't get into the trap of insisting the code is perfect.
sometimes you need to actually track the problem down on the machine it is happening on.
#p.marino - not enough rep to comment =/
tl;dr - build failures due to time of day
You mentioned time of day and that caught my eye. Had a bug once where someone stayed late at work one night, tried to build and commit before they left and kept getting a failure. They eventually gave up and went home. When they came in the next morning it built fine, they committed (probably should have been more suspicious =] ) and the build worked for everyone. A week or two later someone stayed late and had an unexpected build failure. Turns out there was a bug in the code that made any build after 7PM break >.>
We also found a bug in one seldom-used corner of the project this January that caused problems marshalling between different schemas, because we were not accounting for the different calendars being 0- and 1-month based. So if no one had messed with that part of the project we might not have found the bug until Jan 2011.
These were easier to fix than threading issues, but still interesting I think.
hire some testers!
This has worked for really weird heisenbugs.
(I'd also recommend getting a copy of "Debugging" by Dave Agans; these ideas are partly derived from his!)
(0) Check the RAM of the system using something like Memtest86!
The whole system exhibits the problem, so make a test jig that exercises the whole thing.
Say it's a server side thing with a GUI, you run the whole thing with a GUI test framework doing the necessary input to provoke the problem.
It doesn't fail 100% of the time, so you have to make it fail more often.
Start by cutting the system in half ( binary chop)
Worst case, you have to remove sub-systems one at a time.
Stub them out if they can't be commented out.
See if it still fails. Does it fail more often ?
Keep proper test records, and only change one variable at a time!
Worst case you use the jig and you test for weeks to get meaningful statistics. This is HARD; but remember, the jig is doing the work.
I've got No threads and only one process, and I don't talk to hardware
If the system has no threads, no communicating processes and contacts no hardware; it's tricky; heisenbugs are generally synchronization, but in the no-thread no processes case it's more likely to be uninitialized data, or data used after being released, either on the heap or the stack. Try to use a checker like valgrind.
For threaded/multi-process problems:
Try running it on a different number of CPU's. If it's running on 1, try on 4! Try forcing a 4-computer system onto 1.
It'll mostly ensure things happen one at a time.
If there are threads or communicating processes this can shake out bugs.
If this is not helping but you suspect it's synchronization or threading, try changing the OS time-slice size.
Make it as fine as your OS vendor allows!
Sometimes this has made race conditions happen almost every time!
Conversely, try going slower on the timeslices.
Then you set the test jig running with debugger(s) attached all over the place and wait for the test jig to stop on a fault.
If all else fails, put the hardware in the freezer and run it there. The timing of everything will be shifted.
Debugging is hard and time consuming especially if you are unable to deterministically reproduce the problem. My advice to you is to find out the steps to reproduce it deterministically (not just sometimes).
There has been a lot of research in the field of failure reproduction in the past years and is still very active. Record&Replay techniques have been (so far) the research direction of most researchers. This is what you need to do:
1) Analyze the source code and determine what are the sources of non-determinism in the application, that is, what are the aspects that may take your application through different execution paths (e.g. user input, OS signals)
2) Log them the next time you execute the application
3) When your application fails again, you have the steps-to-reproduce the failure in your log.
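As a tiny illustration of steps 1-2, assuming the only sources of non-determinism are the random seed and one piece of user input (all names here are invented), you would log exactly those so a later run can be replayed:

using System;
using System.IO;

class ReplayableRun
{
    static void Main()
    {
        // Pick and log the seed so the same pseudo-random sequence can be replayed later.
        int seed = Environment.TickCount;
        File.AppendAllText("run.log", "SEED " + seed + Environment.NewLine);
        var rng = new Random(seed);

        Console.Write("Enter a value: ");
        string input = Console.ReadLine() ?? "";
        // Log the user input, the other source of non-determinism in this sketch.
        File.AppendAllText("run.log", "INPUT " + input + Environment.NewLine);

        Console.WriteLine("Processed '" + input + "' with random value " + rng.Next(100));
    }
}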
If your log still does not reproduce the failure, then you are dealing with a concurrency bug. In that case, you should take a look at how your application accesses shared variables. Do not attempt to record the accesses to shared variables, because you would be logging too much data, thereby causing severe slowdowns and large logs. Unfortunately, there is not much I can say that would help you to reproduce concurrency bugs, because research still has a long way to go in this subject. The best I can do is to provide a reference to the most recent advance (so far) in the topic of deterministic replay of concurrency bugs:
http://www.gsd.inesc-id.pt/~nmachado/software/Symbiosis_Tutorial.html
Best regards
Use an enhanced crash reporter. In the Delphi environment, we have EurekaLog and MadExcept. Other tools exist in other environments. Or you can diagnose the core dump. You're looking for the stack trace, which will show you where it's blowing up, how it got there, what's in memory, etc.. It's also useful to have a screenshot of the app, if it's a user-interaction thing. And info about the machine that it crashed on (OS version and patch, what else is running at the time, etc..) Both of the tools that I mentioned can do this.
If it's something that happens with a few users but you can't reproduce it, and they can, go sit with them and watch. If it's not apparent, switch seats - you "drive", and they tell you what to do. You'll uncover the subtle usability issues that way. Double-clicks on a single-click button, for example, initiating re-entrancy in the OnClick event. That sort of thing. If the users are remote, use WebEx, Wink, etc., to record them crashing it, so you can analyze the playback.

What helps you improve your ability to find a bug?

I want to know if there are methods to quickly find bugs in a program.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
How do programmers improve their ability to find bugs?
Logging, and unit tests. The more information you have about what happened, the easier it is to reproduce it. The more modular you can make your code, the easier it is to check that it really is misbehaving where you think it is, and then check that your fix solves the problem.
Divide and conquer. Whenever you are debugging, you should be thinking about cutting down the possible locations of the problem. Every time you run the app, you should be trying to eliminate a possible source and zero in on the actual location. This can be done with logging, with a debugger, assertions, etc.
Here's a prophylactic method after you have found a bug: I find it really helpful to take a minute and think about the bug.
What was the bug exactly in essence.
Why did it occur.
Could you have found it earlier, easier.
Anything else you learned from the bug.
I find taking a minute to think about these things will make it far less likely that you will produce the same bug in the future.
I will assume you mean logic bugs. The best way I have found to capture logic bugs is to implement some sort of testing scheme. Check out jUnit as the standard. Pretty much you define a set of accepted outputs of your methods. Every time you compile your system it checks all of your test cases. If you have introduced new logic that breaks your tests, you will know about it instantly and know exactly what you have to fix.
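jUnit is the Java flavour; the same idea sketched in C# with xUnit looks roughly like this (the calculator and the expected value are invented for illustration):

using Xunit;

public static class PriceCalculator
{
    // The logic under test.
    public static decimal Total(decimal unitPrice, int quantity) => unitPrice * quantity;
}

public class PriceCalculatorTests
{
    [Fact]
    public void Total_MultipliesPriceByQuantity()
    {
        // If someone later breaks this logic, the failing test points straight at it.
        Assert.Equal(25m, PriceCalculator.Total(5m, 5));
    }
}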
Test driven design is a pretty big movement in programming right now. You will be hard pressed to find a language that doesn't support some kind of testing. Even JavaScript has a multitude of test suites.
Experience makes you a better debugger. Pay close attention to the bugs that you AND others commonly make. Try to figure out if/how these bugs apply to ALL code that affects you, not the single instance of where the bug was seen.
Raymond Chen is famous for his powers of psychic debugging.
Most of what looks like psychic debugging is really just knowing what people tend to get wrong.
That means that you don't necessarily have to be intimately familiar with the architecture / system. You just need enough knowledge to understand the types of bugs that apply and are easy to make.
I personally take the approach of thinking about where the bug may be in the code before actually opening up the code and taking a look. When you first start with this approach, it may not actually work very well, especially if you are pretty unfamiliar with the code base. However, over time someone will be able to tell you the behavior they are experiencing and you'll have a good idea where the problem is located or you may even know what to fix in the code to remedy the problem before even looking at the code.
I was on a project for several years that was maintained by a vendor. They were not very good debuggers and most of the time it was up to us to point them to the area of the code that had the problem. What made our problem worse was that we didn't have a nice way to view the source code, so a lot of our "debugging" was done by feel.
Error checking and reporting. The #1 newbie coder debugging mistake is to turn off error reporting, avoid checking for whether what's going on makes sense, etc etc. In general, people feel like if they can't see anything going wrong then nothing is going wrong. Which of course could not be further from the case.
Instead, your code should be chock full of error conditions that will make lots of noise, with detailed reporting, someplace you will see it. (This doesn't mean inside a production web page.) Then, instead of having to trace an error all over the place because it got passed through sixteen layers of execution before it finally got someplace that broke, your errors start happening proximately to the actual issue.
It seems that the more you master the architecture of your software, the more quickly you can locate the bugs.
After understanding the architecture, one's ability to find bugs in the application increases with their ability to identify and write extensive tests.
Know your tools.
Make sure that you know how to use conditional breakpoints and watches in your debugger.
Use static analysis tools as well - they can point out the more obvious issues.
Sleep and rest.
Use programming methods that produce fewer bugs in the first place.
If to implement a single stand-alone functional requirement it takes N separate point-edits to source code, the number of bugs put into the code is roughly proportional to N, so find programming methods that minimize N. Ways to do this: DRY (don't repeat yourself), code generation, and DSL (domain-specific-language).
Where bugs are likely, have unit tests.
Obviously. IMHO, the best unit tests are Monte Carlo.
Make intermediate results visible.
For example, compilers have intermediate representations, in the form of 4-tuples. If there is a bug, the intermediate code can be examined. That tells if the bug is in the first or second half of the compiler.
P.S. Most programmers are not aware that they have a choice of how much data structure to use. The less data structure you use, the less are the chances for bugs (and performance issues) caused by it.
I find tracepoints to be an invaluable debugging tool. They are a bit like logging, except you create them during a debugging session to solve a particular issue, like breakpoints.
Printing the stacktrace in a tracepoint can be especially useful. For example, you can print the hash code and stacktrace in the constructor of an object, and then later on when the object is used again you can search for its hashcode to see which client code created it. Same for seeing who disposed it or called a certain method etc.
They are also great for debugging issues related to window focus changes etc, where the debugger would interfere if you drop in break mode.
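The same information a tracepoint prints can also be produced from code; a rough C# sketch (the Connection class is invented) that writes the hash code plus the creating call stack to the debug output:

using System;
using System.Diagnostics;

class Connection
{
    public Connection()
    {
        // Record who created this instance: its hash code plus the creating call stack.
        Debug.WriteLine("Connection " + GetHashCode() + " created at:" +
                        Environment.NewLine + Environment.StackTrace);
    }

    public void Close()
    {
        // Later, search the output for the same hash code to see which client closed it.
        Debug.WriteLine("Connection " + GetHashCode() + " closed at:" +
                        Environment.NewLine + Environment.StackTrace);
    }
}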
Static code tools like FindBugs
Assertions, assertions, and assertions.
Some areas of our code have 4 or 5 assertions for each line of real code. When we get a bug report, the first thing that happens is that the customer data is processed in our debug build; 99 times out of a hundred an assert will fire near the cause of the bug.
Additionally, our debug build performs redundant calculations to ensure that an optimized algorithm is returning the correct result, and debug functions are used to examine the sanity of data structures.
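A minimal sketch of that "redundant calculation" idea (the fast/naive pair is invented; the slow check only exists in debug builds because Debug.Assert is compiled out in release):

using System.Diagnostics;
using System.Linq;

static class Maths
{
    // Optimized closed-form sum of 1..n.
    public static long SumUpTo(int n)
    {
        long fast = (long)n * (n + 1) / 2;

        // Debug builds re-check the optimized result against the naive version.
        Debug.Assert(fast == Enumerable.Range(1, n).Sum(i => (long)i),
                     "optimized sum disagrees with naive sum");
        return fast;
    }
}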
The hardest thing new developers have to contend with is getting their code to survive the assertions of the code they are calling.
Additionally, we do not allow any code to be put back to top level that causes any integration or unit test to fail.
Stepping through the code, examining flow/state where unexpected behavior is occurring. (Then develop a test for it, of course).
Writing Debug.Write(message) in your code and using DebugView is another option. Then run your application to find out what is going on.
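A minimal sketch (Debug.WriteLine goes through the default trace listener, which DebugView can capture on Windows while the app runs outside the debugger; note that Debug calls are only compiled into DEBUG builds):

using System.Diagnostics;

class Startup
{
    static void Main()
    {
        Debug.WriteLine("Application starting");   // visible in DebugView / the Output window

        // ... application work ...

        Debug.WriteLine("Application finished");
    }
}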
"Architecture" in software means something like:
Several components
The components interact across clearly-defined interfaces
Each component has a well-defined responsibility
The responsibility of one component is unlike the responsibilities of other components
So, as you said, the better the architecture the easier it is to find bugs.
First: knowing the bug, you can decide which functionality is broken, and therefore which component implements that functionality. For example, if the bug is that something isn't being logged properly, this bug should be in one of 3 places:
In the component that's responsible for logging (your logging library)
Or, above that in the application code which is using this library
Or, below that in the system code which this library is using
Second: examine the data transferred across the interfaces between components. To continue the previous example above:
Set a debugger breakpoint on the application code which invokes the logger API, to verify whether the logger API is being used correctly (e.g. whether it's being invoked at all, whether parameters are as-expected, etc.).
Doing this tells you whether the bug is in the component above this interface, or in the component that's below this interface.
Repeat (perhaps using binary search if the call stack is very deep) until you've found which component is at fault.
When you come to the point that you think there must be a bug in the OS, check your assumptions -- and put them into the code with "assert" statements.
Conversely, as you are writing the code, think of the range of valid inputs for your algorithms and put in assertions to make sure you have what you think you have. Same goes for output: Check that you produced what you think you produced.
E.g. if you expect a non-empty list:
l = getList(input)
assert l, "List was empty for input: %s" % str(input)
I'm part of the QA team at work, and knowing the product and how it is developed helps a lot in finding bugs. Also, when I make new QA tools I pass them to our dev team to test; finding bugs in your own code is just plain hard!
Some people say programmers are tainted, so they cannot see bugs in their own product; we are not talking about code here, we are beyond that: usability and the functionality itself.
Meanwhile, unit testing seems to be a nice solution for finding bugs in your own code, but it's totally pointless if you're wrong even before writing the unit test. How are you going to find the bugs then? You don't! Let your co-workers find them, or hire a QA guy.
Scientific debugging is what I always used, and it greatly helps.
Basically, if you can replicate a bug, you can track its origin. You should then run some experiments, observe the results, and infer hypotheses on why the bug happens.
Writing about all your hypotheses, attempts, expected results and observed results can help you track down the bugs, particularly if they're nasty.
There are automated tools that can help you with that process, particularly git-bisect (and similar bisection tools on other revision systems) to quickly find which change introduced the bug, unit testing to reproduce a bug and prevent regressions in your code (can be used in combination with bisect), and delta debugging to find the culprit in your code (similar to git-bisect but whereas git-bisect works on the code history, delta debugging works on the code directly).
But whatever the tools you are using, the most important benefit is in the scientific methodology, as this is the formalization of what most experienced debuggers do.

How to keep your own debug lines without checking them in?

When working on some code, I add extra debug logging of some kind to make it easier for me to trace the state and values that I care about for this particular fix.
But if I checked this into the source code repository, my colleagues would get angry with me for polluting the log output and polluting the code.
So how do I locally keep these lines of code that are important to me, without checking them in?
Clarification:
Many answers related to the log output, and that you with log levels can filter that out. And I agree with that.
But I also mentioned the problem of polluting the actual code. If someone puts a log statement between every other line of code to print the value of all variables all the time, it really makes the code hard to read. So I would really like to avoid that as well, basically by not checking in the logging code at all. So the question is: how do you keep your own special-purpose log lines, so you can use them for your debug builds without cluttering up the checked-in code?
If the only objective of the debugging code you are having problems with is to trace the values of some variables, I think that what you really need is a debugger. With a debugger you can watch the state of any variable at any moment.
If you cannot use a debugger, then you can add some code to print the values to some debug output. But this code should be only a few lines whose objective is to make the fix you are working on easier. Once it's committed to trunk the bug is fixed, and you shouldn't need those debug lines any more, so you must delete them. Don't delete all the debug code - good debug code is very useful - delete only your "personal" tracing debug code.
If the fix is so long that you want to save your progress by committing to the repository, then what you need is a branch; in this branch you can add as much debugging code as you want, but you should still remove it when merging into trunk.
But if I checked this into the source code repository, my colleagues would get angry with me for polluting the log output and polluting the code.
I'm hoping that your Log framework has a concept of log levels, so that your debugging could easily be turned off. Personally I can't see why people would get angry at more debug logging - because they can just turn it off!
Why not wrap them in preprocessor directives (assuming the construct exists in the language of your choice)?
#if DEBUG
    logger.debug("stuff I care about");
#endif
Also, you can use a log level like trace, or debug, which should not be turned on in production.
if (logger.isTraceEnabled()) {
    logger.log("My expensive logging operation");
}
This way, if something in that area does crop up one day, you can turn logging at that level back on and actually get some (hopefully) helpful feedback.
Note that both of these solutions would still allow the logging statements to be checked in, but I don't see a good reason not to have them checked in. I am providing solutions to keep them out of production logs.
If this was really an ongoing problem, I think I'd assume that the central repository is the master version, and I'd end up using patch files to contain the differences between the official version (the last one that I worked on) and my version with the debugging code. Then, when I needed to reinstate my debug, I'd check out the official version, apply my patch (with the patch command), fix the problem, and before checking in, remove the patch with patch -R (for a reversed patch).
However, there should be no need for this. You should be able to agree on a methodology that preserves the information in the official code line, with mechanisms to control the amount of debugging that is produced. And it should be possible regardless of whether your language has conditional compilation in the sense that C or C++ does, with the C pre-processor.
I know I'm going to get negative votes for this...
But if I were you, I'd just build my own tool.
It'll take you a weekend, yes, but you'll keep your coding style, and your repository clean, and everyone will be happy.
Not sure what source control you use. With mine, you can easily get a list of the things that are "pending to be checked in". And you can trigger a commit, all through an API.
If I had that same need, I'd make a program to commit with, instead of using the built-in command in the source control GUI. Your program would go through the list of pending things, take all the files you added/changed, make a copy of them, remove all log lines, commit, and then replace them back with your version.
Depending on what your log lines look like, you may have to add a special comment at the end of them for your program to recognize them.
Again, shouldn't take too much work, and it's not much of a pain to use later.
I don't expect you'll find something that does this for you already done (and for your source control), it's pretty specific, I think.
Similar to the #if DEBUG ... #endif approach above, but that will still mean that anyone running with the 'Debug' configuration will hit those lines.
If you really want them skipped then use a log level that no one else uses, or....
Create a different build configuration that defines a symbol such as MYDEBUGCONFIG,
and then put your debug code in between blocks like this:
#if MYDEBUGCONFIG
    ...your debugging code
#endif
What source control system are you using? Git allows you to keep local branches. If worse comes to worst, you could just create your own 'Andreas' branch in the repository, though branch management could become pretty painful.
If you really are doing something like:
puts a log statement between every other line of code, to print the value of all variables all the time. It really makes the code hard to read.
that's the problem. Consider using a test framework, instead, and write the debug code there.
On the other hand, if you are writing just a few debug lines, then you can manage to avoid committing them by hand (e.g. removing the relevant lines with the editor before the commit and undoing the change after it's done) - but of course it has to be very infrequent!
IMHO, you should avoid the #if solution. That is the C/C++ way of doing conditional debugging routines. Instead, attribute all of your logging/debugging functions with the ConditionalAttribute. The constructor of the attribute takes a string. The method will only be called if a pre-processor symbol with the same name as the attribute string is defined. This has the exact same runtime implications as the #if/#endif solution, but it looks a heck of a lot better in code.
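A short sketch of what that looks like (the DebugLog class is invented for illustration):

using System;
using System.Diagnostics;

static class DebugLog
{
    // Calls to this method, including argument evaluation, are removed by the
    // compiler unless the DEBUG symbol is defined - no #if at the call sites.
    [Conditional("DEBUG")]
    public static void Write(string message)
    {
        Console.WriteLine("DEBUG: " + message);
    }
}

class Example
{
    static void Main()
    {
        DebugLog.Write("only present in debug builds");   // compiled away in release
        Console.WriteLine("always present");
    }
}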
This next suggestion is madness, do not do it, but you could...
Surround your personal logging code with comments such as
// ##LOG-START##
logger.print("OOh A log statment");
// ##END-LOG##
And before you commit your code run a shell script that strips out your logs.
I really wouldn't recommend this as it's a rubbish idea, but that never stops anyone.
Alternative you could also not add a comment at the end of every log line and have a script remove them...
logger.print("My Innane log message"); //##LOG
Personally I think that using a proper logging framework with a debug logging level etc should be good enough. And remove any superfluous logs before you submit your code.
Treat it as first-class code: keep it with the code, behind a proper logging API, with a build option to compile it out or completely disable it.
