ExpertSystem error - Prolog

I have clam.pl and car.ckb, and I need to run the expert system taken from Amzi. I want to run my project on SWI-Prolog, so I write:
super. to launch the interpreter
load. to load the data
'car.ckb'. to input the specific file
but the output is :
goal(problem)
rule 1
rule 2
rule 3
rule 4
rule 5
rule 6
output battery
output out_of_gas
output flooded
askable turn_over
askable lights_weak
askable radio_weak
askable smell_gas
askable gas_gauge
The system doesn't ask me any questions and seems to crash after the last answer! Why?
Does anyone have a good clam.pl file or similar? I need one with good CF handling!
clam.pl is Here
car.ckb is Here

Not sure this is going to help with your question (I have little time now), but Anniepoo left this commit comment on clam.pl:
cleaned up many non SWI-Prolog bits. Still needs output/3 defined
So, is output/3 properly defined in your downloaded source ?
Edit:
You can read the book these examples are taken from to get a working knowledge of this topic... beware, there could be bugs in the listings. For instance, I've built an enhanced 'native shell' and had to correct a bug in the inference engine... but I refer to the book's listings; indeed, I was unaware of the availability of these resources. Thanks @Anniepoo for making these available (again) to a wider audience.
As a general remark, I would say that an expert system shell is a difficult theme if you are still learning the basics of Prolog... you really should start from the simpler one, i.e. the native shell (birds.ncb).

Related

Software Foundations - automatic grading

In order to learn Coq, I downloaded Benjamin Pierce's ebook Software Foundations from here, and extracted the contents. I am now starting to work through the exercises in Basics.v, by editing the file directly in Vim.
I would like to automatically grade my answers (e.g. to track my point score against time).
In preparation for this, I ran coqc against each of the .v files in the order given in the Makefile. As such, I am now able to invoke, e.g. coqtop -batch -l BasicsTest.v.
However, although this reports the available number of points for that chapter, it does not report my score. (I am mid-way through the chapter, and am confident my answers so far are correct, as coqtop -batch -l Basics.v executes without errors.)
I suspect I have overlooked an invocation of Make or Coq that will produce a points score for my answers so far. If so, what is it?
The autograder is currently incomplete. We hope to finish it over the next few months and will make it available when we do. But as Rob says it's really not telling you much more than what you get by running BasicsTest.v in the current beta version.
UPDATE December 2018: The autograder is finished. We haven't packaged it (except for the actual tester files like BasicsTest.v) for public distribution, but we're happy to give access to the Git repo to instructors who want to use it.
BasicsTest.v does not generate a grade in the current version of Software Foundations. You can step through it and see what it does: it simply goes through the exercises, performs some basic checks and reports their results. However, scores are not generated based on the results of these checks.
If your definitions and proofs are complete (e.g., not Admitted) and Coq's typechecker accepts them, you can have reasonable confidence that the answers are correct, unless something in your development breaks the consistency of Coq's logic (very unlikely at this early stage) or you stumbled upon a bug (also extremely unlikely).

Is there any Haskell-land equivalent to Ruby-land's Bundler et al. and, if not, how would a project so structured be contrived?

Note to readers: Bear with me. I promise there's a question.
I have a problem to solve and think to myself "Oh, I'll do it in Ruby."
$ bundle gem problemsolver
create problemsolver/Gemfile
create problemsolver/Rakefile
create problemsolver/.gitignore
create problemsolver/problemsolver.gemspec
create problemsolver/lib/problemsolver.rb
create problemsolver/lib/problemsolver/version.rb
Initializating git repo in /tmp/harang/problemsolver
Uncomment s.add_development_dependency "rspec" in problemsolver/problemsolver.gemspec and then
$ bundle exec rspec --init
The --configure option no longer needs any arguments, so true was ignored.
create spec/spec_helper.rb
create .rspec
New tests go into spec/ and must be in files whose names end in _spec.rb, for instance spec/version_spec.rb:
describe 'Problemsolver' do
  it 'should be at version 0.0.1' do
    Problemsolver::VERSION.should == '0.0.1'
  end
end
To run specs--ignoring code-change runners like guard--is trivial:
$ bundle exec rspec
.
Finished in 0.00021 seconds
1 example, 0 failures
You can't see it, but the message is nicely color-coded for quick "Did I screw up?" scanning. The things that are very good about this:
Setup was rapid, almost brainless (though figuring out which commands to invoke is not trivial).
Standardized layout of the source tree reduces the familiarization period with a new code-base, making collaboration more simple and reducing the lull time when picking up a project you've left for a bit.
A heavy reliance on tooling distributes best-practices through the community, roughly at the speed of new project creation.
Adding coverage tools, code watchers, linters, behavior test tools and others is no more difficult.
This stands unfavorably in contrast to the situation if one thinks, "Oh, I'll do it in Haskell."
$ mkdir problemsolver
$ cd problemsolver/
$ cabal init
Package name [default "problemsolver"]?
Package version [default "0.1"]? 0.0.1
Please choose a license:
1) GPL
2) GPL-2
3) GPL-3
4) LGPL
5) LGPL-2.1
6) LGPL-3
* 7) BSD3
8) MIT
9) PublicDomain
10) AllRightsReserved
11) OtherLicense
12) Other (specify)
Your choice [default "BSD3"]?
Author name? Brian L. Troutwine
Maintainer email [default "brian#troutwine.us"]?
Project homepage/repo URL?
Project synopsis? Solves a problem.
Project category:
1) Codec
2) Concurrency
3) Control
4) Data
5) Database
6) Development
7) Distribution
8) Game
9) Graphics
10) Language
11) Math
12) Network
13) Sound
14) System
15) Testing
16) Text
17) Web
18) Other (specify)
Your choice? ProblemSolver
ProblemSolver is not a valid choice.
Your choice? 18
Please specify? ProblemSolver
What does the package build:
1) Library
2) Executable
Your choice? 2
Generating LICENSE...
Generating Setup.hs...
Generating problemsolver.cabal...
You may want to edit the .cabal file and add a Description field.
"Great," you think, "I was so pestered I bet all the latest Haskell best-practices in software development are just waiting on my disk."
$ ls
LICENSE problemsolver.cabal Setup.hs
Allow me to summarize my feelings: :(
The generated cabal file doesn't even have a Main specified, much less instructions for setting up a rudimentary project. Still, okay. If you fart around for a bit trying to find the right search keywords you'll land on How to write a Haskell program, which is okay except:
All of Haq's source code gets thrown into the root directory.
The test code for Haq lives only in Test.hs, uses only QuickCheck, and has no facility for continuing the project with split-file tests.
All of this has to be manually written or copied for each new project.
Checking Real World Haskell's Chapter 11 you'll find it doesn't even mention cabal and skirts the issue of project layout entirely. None of the resources that Don Stewart kindly answers with here are addressed in either of the aforementioned and, I'll note, Mr. Stewart doesn't explain how to use any of the tools referenced.
Note that the accepted answer in Haskell testing workflow references a project that's since moved on sufficiently so as not to be a good answer, but does say
As cabal test doesn't yet exist -- we have a student working on it for this year's summer of code! -- the best mechanism we have is to use cabal's user hook mechanism.
Hey, okay, the cabal documentation! The appropriate section does have examples, but they're awfully contrived and don't fail to give the impression that everyone is on their own, and good luck to you.
Of course, there's always test-framework, which seems nice, but its example code doesn't offer anything beyond what's seen in the wiki and is non-scalable in the sense that as soon as my program grows in complexity I'm on the hook to develop ways of dividing up tests into manageable modules. I'm not even sure what's going on with HTF, and I agree with Mr. Volkov's assessment.
Mr. Jelvis' comment on the linked HTF question was of particular interest to me: the Haskell tool-chain suffers, very badly, from a tyranny of small decisions. I can't actually get down to the task at hand--solving my problem in Haskell--because I'm on the hook for getting my environment just right. Why this is bad:
It's wasted effort. Unless I'm writing a test tool, I will very, very rarely care about how my tests are slurped up, only from where.
It's difficult to learn. There seems to be no singular resource for setting up a project with testing baked in, and the various sources that do exist are sufficiently diverse as to be unhelpful.
It's difficult to reproduce. With so many moving pieces to arrange I'm bound to do it differently each time.
As a corollary, it's idiosyncratic. That means it's difficult to collaborate and to pick up dormant projects.
This just plain stinks.
Maybe I'm wrong, though. Does there exist some poorly advertised tool or closely developed tools to do something similar to Bundler+Rspec in the Haskell space? If not, is there a poorly advertised canonical example of modern Haskell testing with all of Mr. Stewart's referenced goodies baked right in? The project created or demonstrated:
should by convention and tooling keep test code separate from application code in a well-defined manner (in Ruby-land, Rspec tests go in spec/, Cucumber features in features/),
should not require end-users to compile and install testing dependencies
should be easily reproducible, desirably in no more than 10 minutes and
should be standardized or have the hope of standardization.
Am I wrong in believing that there's nothing at all like this in Haskell-land?
Edit0: Please note, the Ruby language's community isn't the only applicable comparison. Paul R. is correct in identifying the strong current of configuration over convention. Other languages solve the problem of getting a scalable project structure off the ground in other ways:
C :: This language is venerable and so well-documented that you'll have trouble figuring out which well-documented approach to take. No tooling as such.
Java :: Configuration over convention: you're bound into it at the compiler level. Many tools and very well documented.
Scala :: Strong tool support.
Erlang :: Venerable and loosely documented if you know what you're looking for. Arguably configuration over convention if you're using rebar or are otherwise targeting the OTP.
Paul R.'s solution of using a custom template works great if, as with C, there's sufficient documentation to compile such a thing. This still runs into the issues I attempted to identify explicitly in the post, but it's workable. Haskell's best offering--that I'm aware of--is "How to write a Haskell program", but it falls short of being more than the equivalent of dumping a lone Cub Scout off in the woods with a flashlight and a flask of water.
Also, yes, Static Types are great and do solve many problems that would otherwise need explicit testing. If they were an end-all solution, or mostly sufficient, even, the snap-framework would not be so thoroughly tested. (Arguably "Copy snap-core." is an answer to my question.)
There's currently no one single way to set up a test suite. Hopefully, people will standardize on cabal test, which works out of the box. In fact, both HUnit and QuickCheck are provided with the Haskell Platform, so setting up tests doesn't require downloading any extra dependencies.
You're correct that an old accepted answer doesn't provide information on cabal test. I edited it, and now it does! You're also probably correct that the linked page on the Haskell wiki (also written before cabal test became available) doesn't provide information on current testing best practices. It's a wiki, and I encourage folks to edit it! Note that the page does, however, provide a link to another page that describes how one might structure a more complex Haskell project.
tl;dr: Use cabal test. I'm fond of test-framework, which you can integrate with cabal test should you so desire (a minimal sketch follows). Sorry that cabal test is sort of new and not all the resources we have (generally community-editable) have been updated to point to it and describe how to use it. Updating lots of resources and creating tutorials is the job of a community. We should probably do a better job promoting the many awesome tools introduced to the Haskell ecosystem in the last few years.
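For illustration, here is a minimal sketch of such a test executable using test-framework with its QuickCheck2 provider (the module layout and property are assumptions for the example; the test-framework, test-framework-quickcheck2 and QuickCheck packages are assumed installed):

    -- test/Main.hs (hypothetical path)
    module Main (main) where

    import Test.Framework (defaultMain, testGroup)
    import Test.Framework.Providers.QuickCheck2 (testProperty)

    -- Stand-in property; a real project would test its own code here.
    prop_reverseInvolutive :: [Int] -> Bool
    prop_reverseInvolutive xs = reverse (reverse xs) == xs

    main :: IO ()
    main = defaultMain
      [ testGroup "list properties"
          [ testProperty "reverse of reverse is identity" prop_reverseInvolutive ]
      ]

defaultMain exits with a non-zero status when any test fails, which is exactly what a test-suite stanza of type exitcode-stdio-1.0 in the .cabal file expects, so cabal test can build and run this module directly.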
There are many points here. First, there is a comparison of convention-over-configuration with explicit configuration. In Ruby-land, the former is often preferred. In my experience, although it works great for a do-a-{blog|social-thing|gem|library}-in-5-minutes screencast and quick experiments, it has much less value in real projects (more than 5 minutes), as init time gets amortized quickly. Also, there is a reason why tools provide configuration facilities: there are many different needs and usages. So my advice for your cabal-init problem is: make your own template file. Put in a stub for everything you need, with great comments, and use it whenever you start a new project.
Regarding tests, the landscape is quite different between Ruby and Haskell. In Ruby, one can write foo do { oh dear I am typing nonsense here } and there is no way to catch this nonsense other than actually running the code, so automated tests are absolutely required. In Haskell-land, however, there is great static analysis of your code coupled with a very sane paradigm (purely functional, non-strict), and after years of using it I'm still surprised how hard it is to write nonsense without being immediately caught by the compiler. I do Ruby at work as well, and really, 90% of my tests are a poor man's manual "static checks".
Still, there is room for wrong design or corner-case errors; that's why QuickCheck exists. It will automatically (yes, really automatically) find corner-case errors and help you a lot in finding design errors. You can still write unit tests with one of the existing packages if you need manual checks.
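To make that concrete, a minimal sketch (myAbs is a deliberately buggy made-up function, not something from the answer above):

    import Test.QuickCheck

    -- Deliberately buggy: forgets to negate negative numbers.
    myAbs :: Int -> Int
    myAbs x = x

    -- The property a correct absolute value should satisfy.
    prop_absNonNegative :: Int -> Bool
    prop_absNonNegative x = myAbs x >= 0

    -- quickCheck generates random inputs and shrinks any failure
    -- to a minimal counterexample (here, something like -1).
    main :: IO ()
    main = quickCheck prop_absNonNegative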
So my conclusion here is: don't be surprised to find shadows everywhere if you shine a Ruby light on the Haskell land. Things are very different over here and need to be experienced to be appreciated. That doesn't mean that everything is perfect; improving the toolchain is a commonly expressed wish. But the points you raised are not really problematic, and they don't deserve some of the vocabulary you picked. Try first, judge after :)

The best or fastest way to understand an uncommented and complex project [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I have a complex project without comments. The project is programmed in Java, but it has more than one main class, uses several .txt files as templates, and uses several .bat files. I don't know where or how to start discovering the project, because I need to make some changes to it.
As with others I say this is a slow process.
However having done this in the past many times, this is my methodology:
Identify as many requirements as you can that the code fulfils. This may give you some reasons as to why certain things are the way they are when you look deeper. A common way of finding these is to look for any tests that may be available. The automated ones are best, but usually they're as missing as the comments.
Find the entry points to the code. These will give you places where you can poke the code to see how different inputs affect the flow. Common entry points are code loading, 'main'-type functions, service interfaces, web page post-backs, etc.
Diagram the code. Look for tools that can build black/white-box pictures of the code. For me this is invaluable. I have on occasion printed out large listings and attacked them with markers and rulers. Your aim is to create your own flow chart (mental or otherwise) of the code flow.
Using the above (iteratively), build a set of outputs from the code that you think should occur, and add to these the outputs you may already know about, such as logs, data files, database writes, etc.
Finally, if you have time, create some manual tests, though preferably in automated test harnesses, to verify the above. This is where I start to involve the debugger to see detail in the code.
This methodology usually gives me confidence to make changes.
Note this is an iterative process and can be done with portions of the code or overall, as you see fit. I usually favour a top-down approach to start with, and then as I gain some insight I drill down until details become overwhelming, and then I repeat. However, this is just because my mind works this way - you may be different. Good luck.
Find the main Main class. The starting point.
Start drawing a picture of the classes and the objects they own and the external entities they reference.
Follow all the branches until you can find a logical ending.
I've used UML reverse-engineering tools in the past, and while a visual picture is good, stepping through the code has always been the hardest and yet best methodology for me.
And, as you step through each piece, you can add in your own comments.
I usually start with doxygen, turning on every extraction option (especially EXTRACT_ALL and EXTRACT_PRIVATE), and enable the SOURCE_BROWSER, HAVE_DOT, CALL_GRAPH and CALLER_GRAPH options (you also need to have dot installed). This gives a good view of the software. For every function the calls are displayed and linked in a graph, and the sources are linked from there too.
While doxygen is intended for C and C++, it also works with Java sources (set the OPTIMIZE_OUTPUT_JAVA option).
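For reference, the corresponding Doxyfile settings might look like this (a sketch using only the options named above):

    # Doxyfile excerpt -- the options mentioned above.
    # HAVE_DOT requires Graphviz's dot to be installed.
    EXTRACT_ALL          = YES
    EXTRACT_PRIVATE      = YES
    SOURCE_BROWSER       = YES
    HAVE_DOT             = YES
    CALL_GRAPH           = YES
    CALLER_GRAPH         = YES
    # Only when running over Java sources:
    OPTIMIZE_OUTPUT_JAVA = YES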
Ouch. I'm afraid there is no speedy way to do this. Comment out a line (or two) -> test -> see what breaks. You could also put break statements here and there and run the debugger. That should give you some indication of how you got there (i.e. what the hierarchy between the classes is).
Hopefully the original developers used some patterns that you can recognize and make notes. Make lots of notes of everything. Start by trying to understand the high level structure and work down from there.
Be prepared to spend endless hours not understanding what the thing is doing.
Speak to the client and try to understand what the project is for, and what are all the things that it does. Someone somewhere had to put in some requirements for the stuff that's in there, if only in an email.
I would try to find the first entry point in the code closest to where you suspect you'll need to start making your changes, set a breakpoint, and start debugging. Check out the contents of local variables and work your way deeper as you become familiar with what's going on. Then, when you have a basic understanding of the area of code you're going to be working with, start fiddling with some small changes. Test your understanding of it. Try diagramming what you see happening. If you can do that confidently, you'll be able to decide whether you need to go back and continue learning more about the code, or whether you know enough to get done what you need to get done.
A start is to use an automated UML modeling tool (if you use Eclipse you can use a plug-in) and start creating UML diagrams of the various classes to see how they are related at a high level and to visualize the code. This has helped me many times.
If there are log files being generated, have a look at them to understand the flow from the starting point (the main class). Otherwise, put in debug statements to understand the flow.
Ya, that sounds like a pretty bad spot to be in.
I would say that the best way is to just walk through the program line by line. Try to grasp the big picture in the code, and write a lot of notes, both on paper and in comments in the code.
I would say a good approach would be to generate documentation using javadoc or doxygen's class-diagram feature, then as you run the code, traverse the generated class diagrams and see who calls what. This works wonderfully for me every time I am working on such a project.
I completely agree to most of the answers posted.
I can add: use a development tool that reverse-engineers the code into a class diagram, to get an overall picture of what is involved.
Then you need patience. But you will be a stronger and smarter developer when you get through it...
Good luck!
One of the best and first things to do is to try to build and run the code. It might sound a bit simplistic, but the problem when you take over undocumented code is often that you can't even build and run it, and you have no clue where to start.

How to break someone into testing?

OK. Our product works. Beta testers are actually getting their stuff done. Time for the next iteration. But how to ensure quality? We need a tester!
How do I get someone fresh off the street started in testing? I have no clue on how to do it myself (I'm a developer, not a tester)!
We are a tiny team:
2 architects (as in buildings, not software, they are the domain experts here) figuring out what to build
me building it
and a new guy to do some testing before we push releases out
None of us has a clue on how to do this professionally. So far we have:
a bunch of virtual machines spanning the configurations we would like to test
various versions of windows
german and english, the two languages likely to be in use by our customers
the host software we are writing for (Autodesk Revit Architecture 2010, we are building a plugin for energy calculations)
a text document describing some tests I did (installed release xyz, did this, did that, etc.)
a bug tracking system where the tester can add all the bugs he finds
I expect we will need a test script. But how? Who? What? When?
Why are you looking for "someone off the street"? To me, it sounds kind of like asking "I want to hire a new programmer; how do I get someone off the street and up to speed programming my software?" Why would you want to do that over hiring someone who is already a programmer?
In your situation, which is that you don't know much about testing, I'd definitely think about hiring someone with experience in the field.
Specifically, I'd probably look for:
Someone with some experience performing tests under his belt (since you're going to want him actually doing tests).
Someone with some experience writing test plans/etc.
Someone with some experience running a QA team.
The last point is optional, but hopefully your team will be growing as your software grows, so it might make sense to get someone who can grow in the role as well (not to mention having the experience to help you decide when and how to grow the QA team).
Well, are you looking to expand your team with a tester? Have you considered just hiring a test specialist from a consultancy firm?
Before you get somebody to test, make sure you meet the requirements for testing. At a minimum you need:
A specification: Some authoritative source on what the application is supposed to do. This could be an expert that can answer any and all questions on exactly what the app is supposed to do, but the more that is written down and the more formally defined it is the better.
Time: Testing takes time. You can't hand off an application to the tester 30 minutes before it's supposed to go live and expect any worthwhile results. If you're doing waterfall development, testing will require a lot of time at the end. Lots of other development models let testing run in parallel with development, which saves a lot of time, but regardless of the model you use, testing will require more time than not testing.
If you don't have these two things, quality assurance is just a pipe dream.
Now if you do have those met, and you're trying to train somebody to test, here's my crash course on testing.
Fundamentally, testing an application means that you are attempting to ensure two things:
The program does what it is supposed to do.
The program does not do what it is not supposed to do.
That's the core mindset that I use. Building from that I approach things in terms of actions and attempt to verify:
An expected action with expected preconditions produces an expected effect.
An expected action with unexpected preconditions produces no effect or is handled appropriately.
An unexpected action produces no effect or is handled appropriately.
No unexpected effects occur.
Item 1 comes directly from the spec: You make sure that the program does what it is supposed to do.
Items 2 and 3 are where the art of testing comes in. What unexpected actions and preconditions can I perform? I could try to enter the wrong password. I could try to directly type in the URL of a supposedly secured page. I could try to paste odd unicode characters into a text field. I could try to put SQL or javascript code into a text field.
Item 4 is the infinite no-man's land of testing, the part that makes complete testing impossible. (2 and 3 are also infinite, but not as depressing to think about.) That doesn't mean you ignore it. You always keep an eye out for anything unusual. Also, sometimes inspiration strikes and you think of a possible way to cause an unexpected effect: "What happens if I log in between 11:59:59PM and 12:00:00AM on the third tuesday of the month? Oh look, it made me an administrator." Technical knowledge and a peek inside the black box help with coming up with scenarios like that.
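Even when the tester works manually, items 1 and 2 map directly onto automated checks. A minimal sketch, in Haskell with HUnit for concreteness (safeDiv is a hypothetical function under test, not part of the product discussed here):

    import Test.HUnit

    -- Hypothetical action under test: a division that handles the
    -- unexpected precondition (zero divisor) instead of crashing.
    safeDiv :: Int -> Int -> Maybe Int
    safeDiv _ 0 = Nothing
    safeDiv x y = Just (x `div` y)

    tests :: Test
    tests = TestList
      [ "expected action, expected preconditions" ~:
          safeDiv 10 2 ~?= Just 5
      , "expected action, unexpected precondition" ~:
          safeDiv 10 0 ~?= Nothing
      ]

    main :: IO ()
    main = do
      _ <- runTestTT tests
      return ()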
There is a whole lot more to say about testing, but that's the bare minimum I can think of: The technical requirements and the approach to the problem.
Ideally, you'll need to give the tester:
training to make sure he knows the product to be tested.
documentation on what the expected results are.
test plans - what needs to be tested and how
a test tracking system to track what is being tested, what passed the tests, what needs to be fixed, etc. That system does not have to be too sophisticated, depending on the size of the project, an Excel spreadsheet may suffice.
In their podcast #64, Jeff and Joel discuss (among other things) what skills a good tester should possess. A transcript is also available (about halfway down the page).

How to debug a program without a debugger? [closed]

As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion.
Closed 9 years ago.
Interview question:
Often it's pretty easy to debug a program once you have trouble with your code. You can put watches, breakpoints, etc. Life is much easier because of the debugger.
But how do you debug a program without a debugger?
One possible approach I know of is simply putting print statements in your code wherever you want to check for problems.
Are there any other approaches other than this?
As it's a general question, it's not restricted to any specific language. So please share your thoughts on how you would do it.
EDIT: While submitting your answer, please mention a useful resource (if you have any) about any concept, e.g. logging.
This will be a lot of help for those who don't know about it at all. (This includes me, in some cases. :)
UPDATE: Michal Sznajder has put up a real "best" answer and also made it a community wiki. It really deserves lots of upvotes.
Actually you have quite a lot of possibilities, either with recompilation of the source code or without.
With recompilation.
Additional logging. Either into the program's logs or using system logging (e.g. OutputDebugString or the Event Log on Windows). Also use the following steps (a small sketch of this checklist appears at the end of this section):
Always include a timestamp, at least up to seconds resolution.
Consider adding a thread id in the case of multithreaded apps.
Add some nice output of your structures.
Do not print out enums with just %d. Use some ToString() or create some EnumToString() function (whatever suits your language).
... and beware: logging changes timings, so in the case of heavy multithreading your problems might disappear.
More details on this here.
Introduce more asserts
Unit tests
"Audio-visual" monitoring: if something happens do one of
use buzzer
play system sound
flash some LED by enabling hardware GPIO line (only in embedded scenarios)
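As promised, a small sketch of the logging checklist above, in Haskell for concreteness (logLine is a hypothetical helper; the time package is assumed):

    import Control.Concurrent (myThreadId)
    import Data.Time (getCurrentTime)

    -- Every line carries a timestamp (seconds resolution and finer)
    -- plus the id of the emitting thread.
    logLine :: String -> IO ()
    logLine msg = do
      now <- getCurrentTime
      tid <- myThreadId
      putStrLn (show now ++ " [" ++ show tid ++ "] " ++ msg)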
Without recompilation
If your application uses the network in any way: a packet sniffer, or I will just choose for you: Wireshark.
If you use a database: monitor the queries sent to the database, and the database itself.
Use virtual machines to test exactly the same OS/hardware setup as your system is running on.
Use some kind of system call monitor. This includes:
On a Unix box, strace or dtrace.
On Windows, tools from the former Sysinternals suite like http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx, Process Explorer and the like.
In the case of Windows GUI stuff: check out Spy++, or Snoop for WPF (although I haven't used the second).
Consider using some profiling tools for your platform. They will give you an overview of things happening in your app.
[Real hardcore] Hardware monitoring: use an oscilloscope (aka O-scope) to monitor signals on hardware lines.
Source code debugging: you sit down with your source code and just pretend, with a piece of paper and pencil, that you are the computer. It's so-called code analysis or "on-my-eyes" debugging.
Source control debugging: compare diffs of your code from the time when "it" worked and now. The bug might be somewhere in there.
And some general tips in the end:
Do not forget about Text to Columns and Pivot Tables in Excel. Together with some text tools (awk, grep or perl) they give you an incredible analysis pack. If you have more than 32K records, consider using Access as a data source.
Basics of data warehousing might help. With a simple cube you can analyse tons of temporal data in just a few minutes.
Dumping your application is worth mentioning, either as a result of a crash or just on a regular basis.
Always generate your debug symbols (even for release builds).
Almost last but not least: most major platforms have some sort of command line debugger always built in (even Windows!). With some tricks like conditional debugging and break-print-continue you can get pretty good results with obscure bugs.
And really last but not least: use your brain and question everything.
In general, debugging is like science: you do not create it, you discover it. Quite often it's like looking for a murderer in a criminal case. So buy yourself a hat and never give up.
First of all, what does debugging actually do? Advanced debuggers give you machine hooks to suspend execution, examine variables and potentially modify the state of a running program. Most programs don't need all that to be debugged. There are many approaches:
Tracing: implement some kind of logging mechanism, or use an existing one such as dtrace(). It's usually worth it to implement some kind of printf-like function that can output generally formatted output into a system log. Then just throw state from key points in your program into this log. Believe it or not, in complex programs, this can be more useful than raw debugging with a real debugger. Logs help you know how you got into trouble, while a debugger that traps on a crash assumes you can reverse-engineer how you got there from whatever state you are already in. For applications that use complex libraries you don't own, which crash in the middle of them, logs are often far more useful. But it requires a certain amount of discipline in writing your log messages.
Program/Library self-awareness: To solve very specific crash events, I have often implemented wrappers on system libraries such as malloc/free/realloc with extensions that can do things like walk memory, detect double frees, detect attempts to free non-allocated pointers, check for obvious buffer over-runs, etc. Often you can do this sort of thing for your important internal data types as well -- typically you can make self-integrity checks for things like linked lists (they can't loop, and they can't point into la-la land). Even for things like OS synchronization objects, often you only need to know which thread, or what file and line number (capturable by __FILE__, __LINE__), the last user of the synch object was, to help you work out a race condition.
If you are insane like me, you could, in fact, implement your own mini-debugger inside of your own program. This is really only an option in a self-reflective programming language, or in languages like C with certain OS-hooks. When compiling C/C++ in Windows/DOS you can implement a "crash-hook" callback which is executed when any program fault is triggered. When you compile your program you can build a .map file to figure out what the relative addresses of all your public functions (so you can work out the loader initial offset by subtracting the address of main() from the address given in your .map file). So when a crash happens (even pressing ^C during a run, for example, so you can find your infinite loops) you can take the stack pointer and scan it for offsets within return addresses. You can usually look at your registers, and implement a simple console to let you examine all this. And voila, you have half of a real debugger implemented. Keep this going and you can reproduce the VxWorks' console debugging mechanism.
Another approach is logical deduction. This is related to #1. Basically, any crash or anomalous behavior in a program occurs when it stops behaving as expected. You need to have some feedback method for knowing when the program is behaving normally and when abnormally. Your goal then is to find the exact conditions upon which your program goes from behaving correctly to incorrectly. With printf()/logs, or other feedback (such as enabling a device in an embedded system -- the PC has a speaker, but some motherboards also have a digital display for BIOS stage reporting; embedded systems will often have a COM port that you can use) you can deduce at least binary states of good and bad behavior with respect to the run state of your program through the instrumentation of your program.
A related method is logical deduction with respect to code versions. Often a program was working perfectly at one point, but some later version is no longer working. If you use good source control, and you enforce a "top of tree must always be working" philosophy amongst your programming team, then you can use a binary search to find the exact version of the code at which the failure occurs. You can then use diffs to deduce what code change exposed the error. If the diff is too large, then you have the task of trying to redo that code change in smaller steps where you can apply binary searching more effectively.
Just a couple suggestions:
1) Asserts. These should help you work out general expectations at different states of the program, as well as familiarize yourself with the code.
2) Unit tests. I have used these at times to dig into new code and test out APIs.
One word: Logging.
Your program should write descriptive debug lines, including a timestamp, to a log file, based on a configurable debug level. Reading the resultant log files gives you information on what happened during the execution of the program. There are logging packages in every common programming language that make this a snap:
Java: log4j
.Net: NLog or log4net
Python: Python Logging
PHP: Pear Logging Framework
Ruby: Ruby Logger
C: log4c
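Haskell is missing from that list; a comparable package there is hslogger. A minimal sketch (assuming the hslogger package is installed), wiring a timestamped format and a configurable level to the root logger:

    import System.Log.Logger
    import System.Log.Handler (setFormatter)
    import System.Log.Handler.Simple (streamHandler)
    import System.Log.Formatter (simpleLogFormatter)
    import System.IO (stderr)

    main :: IO ()
    main = do
      -- Route DEBUG and above to stderr with a timestamped format.
      h <- streamHandler stderr DEBUG
      let h' = setFormatter h
                 (simpleLogFormatter "[$time $prio $loggername] $msg")
      updateGlobalLogger rootLoggerName (setLevel DEBUG . setHandlers [h'])
      infoM "Main" "program started"
      debugM "Main.worker" "a descriptive debug line"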
I guess you just have to write fine-grained unit tests.
I also like to write a pretty-printer for my data structures.
I think the rest of the interview might go something like this...
Candidate: So you don't buy debuggers for your developers?
Interviewer: No, they have debuggers.
Candidate: So you are looking for programmers who, out of masochism or chest thumping hamartia, make things complicated on themselves even if they would be less productive?
Interviewer: No, I'm just trying to see if you know what you would do in a situation that will never happen.
Candidate: I suppose I'd add logging or print statements. Can I ask you a similar question?
Interviewer: Sure.
Candidate: How would you recruit a team of developers if you didn't have any appreciable interviewing skill to distinguish good prospects based on relevant information?
Peer review. You have been looking at the code for 8 hours and your brain is just showing you what you want to see in the code. A fresh pair of eyes can make all the difference.
Version control. Especially for large teams. If somebody changed something you rely on but did not tell you, it is easy to find the specific change set that caused your trouble by rolling the changes back one by one.
On *nix systems, strace and/or dtrace can tell you an awful lot about the execution of your program and the libraries it uses.
Binary search in time is also a method: if you have your source code stored in a version-control repository, and you know that version 100 worked but version 200 doesn't, try to see if version 150 works. If it does, the error must be between version 150 and 200, so find version 175 and see if it works... etc.
use println/log in code
use a DB explorer to look at data in the DB/files
write tests and put asserts in suspicious places
More generally, you can monitor side effects and output of the program, and trigger certain events in the program externally.
A print statement isn't always appropriate. You might use other forms of output, such as writing to the event log or a log file, writing to a TCP socket (I have a nice utility that can listen for that type of trace from my program), etc.
For programs that don't have a UI, you can trigger the behavior you want to debug by using an external flag, such as the existence of a file. You might have the program wait for the file to be created, then run through the behavior you're interested in while logging relevant events.
Another file's existence might trigger the program's internal state to be written to your logging mechanism.
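A sketch of that flag-file technique, in Haskell for concreteness (the file names and the dumpState action are hypothetical):

    import Control.Concurrent (threadDelay)
    import Control.Monad (forever, when)
    import System.Directory (doesFileExist, removeFile)

    -- Poll for a flag file; when it appears, append the program's
    -- internal state to a log and consume the flag.
    watchDebugFlag :: FilePath -> IO String -> IO ()
    watchDebugFlag flag dumpState = forever $ do
      present <- doesFileExist flag
      when present $ do
        state <- dumpState
        appendFile "debug.log" (state ++ "\n")
        removeFile flag
      threadDelay 1000000  -- check once per second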
Like everyone else said:
Logging
Asserts
Extra output
& your favorite task manager or process explorer
links here and here
Another thing I have not seen mentioned here, which I have had to use quite a bit on embedded systems, is serial terminals.
You can connect a serial terminal to just about any type of device on the planet (I have even done it with embedded CPUs for hydraulics, generators, etc.). Then you can write out to the serial port and see everything on the terminal.
You can get real fancy and even set up a thread that listens to the serial terminal and responds to commands. I have done this as well and implemented simple commands to dump a list, see internal variables, etc., all from a simple 9600 baud RS-232 serial port!
Spy++ (and more recently Snoop for WPF) are tremendous for getting an insight into Windows UI bugs.
A nice read would be Delta Debugging from Andreas Zeller. It's like binary search for debugging.
