What rarely used debugging tools have you found useful?
My recent debugging situation in Visual Studio required hitting a breakpoint in a freshly built 32-bit DLL, which was loaded by a GUI-less executable, which was spawned by a COM+ server on a remote x64 machine, which was called through RPC from the actual GUI. As usual, everything worked fine on all-32-bit machines but kept failing on any "machine other than the development one". So remote debugging was inevitable.
After two days of banging my head against the wall, I added a 10-second delay to the DLL's attach entry point and used the Microsoft Remote Debugger, which I had never used before. It saved my day.
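The delay in the story above was just a plain sleep in the native DLL's attach entry point. For managed code, a hedged sketch of the same idea - pause briefly so a (remote) debugger has time to attach, and break as soon as it does; the helper name and timeout are made up:

using System;
using System.Diagnostics;
using System.Threading;

static class DebugAttachHelper
{
    // Give a remote debugger a window to attach before the interesting code runs.
    // Bails out early once a debugger is actually attached.
    public static void WaitForDebugger(int timeoutSeconds = 10)
    {
        var deadline = DateTime.UtcNow.AddSeconds(timeoutSeconds);
        while (!Debugger.IsAttached && DateTime.UtcNow < deadline)
            Thread.Sleep(250);

        if (Debugger.IsAttached)
            Debugger.Break();   // stop right here once the debugger is on board
    }
}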
Another favorite: the Java JMX console as a performance "debugging" tool. You can see all threads and a memory chart, and take a snapshot of any thread's stack any time you click. Clicking several times helps you find what exactly is slow in a J2EE application.
Process Monitor and Mark Russinovich's other Sysinternals tools.
A logic analyzer plugged into the CPU pins and able to disassemble the executed code. I tracked down a bug in the boot sequence of an embedded system with it.
I find printf to be the most useful.
These, in my experience at least, do not seem to be the intuitive first choice for many people debugging apps that access a database (i.e. the majority), though perhaps they should be:
SQL Profiler (SQL Server)
TKPROF (Oracle)
Another interesting combination was running Eclipse in a virtual machine, accessing a remote server and attaching to the Tomcat process there, and doing it from two different machines to debug two different packages simultaneously.
An all-time favorite is depends.exe (Dependency Walker), for finding out why a DLL or EXE is not starting: http://dependencywalker.com/
For performance, at my former job we had really simple-to-use C++ macros that gathered statistics on runtime function calls. This is so much better than a profiler, because you can use it from your regular IDE, and it lets you zoom in on exactly the code you are optimizing.
In my new job, I wrote a C# version of the same idea.
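Neither the macros nor the C# port are shown in the post; here is a rough, hypothetical C# sketch of the idea - a disposable per-call-site timer that accumulates call counts and total time (all names are made up):

using System;
using System.Collections.Concurrent;
using System.Diagnostics;

sealed class ScopedTimer : IDisposable
{
    // Aggregated statistics per named call site.
    static readonly ConcurrentDictionary<string, (long Calls, long Ticks)> Stats =
        new ConcurrentDictionary<string, (long Calls, long Ticks)>();

    readonly string _name;
    readonly Stopwatch _watch = Stopwatch.StartNew();

    public ScopedTimer(string name) { _name = name; }

    public void Dispose()
    {
        _watch.Stop();
        long elapsed = _watch.Elapsed.Ticks;
        Stats.AddOrUpdate(_name,
            _ => (1L, elapsed),
            (_, s) => (s.Calls + 1, s.Ticks + elapsed));
    }

    // Dump accumulated per-call-site statistics, e.g. at program exit.
    public static void Report()
    {
        foreach (var kvp in Stats)
            Console.WriteLine("{0}: {1} calls, {2:F1} ms total",
                kvp.Key, kvp.Value.Calls,
                TimeSpan.FromTicks(kvp.Value.Ticks).TotalMilliseconds);
    }
}

Wrap the region you want to measure in using (new ScopedTimer("LoadCustomers")) { ... } and call ScopedTimer.Report() at shutdown to see which call sites dominate.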
WinDbg and other lower level debuggers are the ultimate weapon if you know the tricks and tips.
For Windows/.NET development I am always using DebugView and ILDASM.
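For context, DebugView simply displays whatever a process sends to OutputDebugString. A minimal sketch of producing such output from .NET, either via P/Invoke or via Trace, whose default listener forwards to OutputDebugString as well:

using System.Diagnostics;
using System.Runtime.InteropServices;

static class DebugOut
{
    // Direct call to the Win32 API that DebugView listens to.
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern void OutputDebugString(string message);

    public static void Main()
    {
        OutputDebugString("Hello from OutputDebugString");

        // The default trace listener also ends up in DebugView
        // (Debug.WriteLine would do the same, but only in DEBUG builds).
        Trace.WriteLine("Hello from Trace.WriteLine");
    }
}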
I am running Windows 7, on which I want to do kernel debugging, and I do not want to mess with the boot loader. So I downloaded LiveKd as suggested here, got it running, and it seems to be working. If I understand correctly, it is some kind of read-only debugging. It is mentioned here that it is very limited and that even breakpoints cannot be used. I would like to ask whether it is possible in this mode to periodically dump all the instructions that are being executed, or basically all events happening on the current OS. I would like to have something like a system-wide strace (Linux users will know it) and do some statistical analysis on it. I suppose this depends on more factors, like having debug symbols installed so that addresses can be resolved, etc.
I'm not sure a debugger is the best tool for tracing live system calls. As you've mentioned, a LiveKd session is quite limited and you are not allowed to place breakpoints in it (otherwise you would hang your own system). However, you can still create memory dumps using the .dump command (check the windbg help: .hh .dump). Keep in mind, though, that getting a full dump (/f) of a running system might take a lot of time.
Moving back to the subject of your question: with the "dump approach" you will miss many system calls, as you will only have snapshots of the system at given points in time. So if you are looking for something similar to Linux strace, I would recommend checking out these tools:
Process Monitor (procmon) - a tool that shows you all I/O requests in the system, as well as registry operations and process activity events.
Windows Performance Toolkit - it contains tools for collecting (WPR) and analyzing (WPA) system and application tracing events. There might be a lot of events, and it's really important to filter them according to your needs. ETW (Event Tracing for Windows) is a huge subject and you will probably need to read some tutorials or books before you can use it effectively (but it's really worth it!). A minimal programmatic ETW consumer sketch follows after this list.
API Monitor - one of many tracing applications (I consider it one of the best) - this tool allows you to trace method calls in any of the running processes. It has a nice interface and even allows you to place breakpoints on the methods you'd like to intercept.
There are many other tools which might be used for tracing on Windows, but I would start with the ones listed above. You may also check out a great book on this subject: Inside Windows Debugging. Good luck! :)
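As promised above, a minimal sketch of consuming ETW events programmatically. It assumes the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package and administrator rights; the session name is arbitrary, and only kernel file-I/O read events are printed:

using System;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Session;

class EtwFileIoTrace
{
    static void Main()
    {
        // Real-time kernel ETW session; stopping the session (or disposing it) ends tracing.
        using (var session = new TraceEventSession("FileIoTraceSession"))
        {
            session.EnableKernelProvider(KernelTraceEventParser.Keywords.FileIOInit);

            // Print every file read observed system-wide.
            session.Source.Kernel.FileIORead += data =>
                Console.WriteLine("{0} read {1} bytes from {2}",
                    data.ProcessName, data.IoSize, data.FileName);

            session.Source.Process();   // blocks until the session is stopped
        }
    }
}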
It seems there are dozens of debuggers and debugging tools that Microsoft produces, which creates a maze of choices and questions about which tool to apply, and when. For example, there is WinDbg, and there is the debugger built into Visual Studio. Both can access minidumps. Why would I choose one over the other?
Dr. Watson was the default post-mortem crash analysis tool of the past. It has now been replaced with "Problem Reports & Solutions", which is in turn replaced with IIS Exception Monitor on servers? And perhaps all of this is built on top of the "Microsoft CDB Debugger", or perhaps that is yet another duplicate tool? ADPlus, yet another one, is built on the CDB debugger. The maze seems to go on endlessly.
Can someone provide a link to a taxonomy or roadmap of all these tools, with comments on which are being deprecated (Dr. Watson?) and what "tool direction" debugging students should absorb? I'm sure there are a number of tools and base libraries I've not mentioned here. It would be nice to know the dependencies between them too (such as ADPlus using the CDB debugger).
I've found this link to be helpful, since it answers some of the questions I'm asking - though the material is dated. Any other resources that give a similarly simple compare / contrast run-down?
There is no difference between CDB and NTSD, other than how they spawn new windows. Choosing Visual Studio over the command-line debuggers is sometimes a matter of personal preference, but sometimes the command line is the better tool for the job. Once you get good at using the command-line debuggers, you can get things done much more quickly. I suspect there are a few scenarios where you can only debug a specific problem with a command-line debugger, but I can't think of any offhand. The third debugger you've missed is KD, the kernel debugger. If you want to debug kernel-mode stuff (i.e. device drivers you've written), it's really your only choice.
CDB, NTSD and KD are all part of the Debugging Tools for Windows, itself part of the DDK. Visual Studio does not depend on that debugging package, and vice versa.
Dr. Watson and the like are not debuggers; they merely observe and report. I suspect the best advice is to use whichever one is appropriate to your problem. There are lots of tools for all sorts of different MS technologies, e.g. Orca for MSI databases. All of these products are unrelated, often released and maintained by different divisions, etc. As a result, I doubt you'll find a chart showing their relationships, since they are so diverse.
With the dump debugging support in .NET 4.0, we are looking into automatically (after asking the user, of course :) creating minidumps of C# program crashes and uploading them to our issue tracking system (so that the minidumps can assist in resolving the cause of the crash).
Everything is working fine when using the WithFullMemory minidump type. We can see both stack and heap variables. Unfortunately the (zipped) dumps are quite large even for small C# programs.
If we use the "Normal" minidump type we get a very small dump, but not even stack variable information is available in the managed debugger. In fact, anything less than WithFullMemory seems quite useless in the managed debugger. We have made a few attempts at using a MINIDUMP_CALLBACK_ROUTINE to limit the included module information to our own modules, but it seems that it has almost no effect on a managed dump but still manages to break the managed debugging?
Does anyone have any tips on how to trim the minidump while keeping it useful for managed debugging?
I use the following flags to save space while generating useful minidumps for C++ applications:
MiniDumpWithPrivateReadWriteMemory |
MiniDumpWithDataSegs |
MiniDumpWithHandleData |
MiniDumpWithFullMemoryInfo |
MiniDumpWithThreadInfo |
MiniDumpWithUnloadedModules
The flag values are specified in DbgHelp.h and would need to be marshaled into C#. The dump is further restricted by specifying a CallbackRoutine.
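A hedged sketch of what that marshaling might look like, using the flag values from DbgHelp.h and the documented MiniDumpWriteDump API. The helper name is illustrative, and the exception-information and callback parameters are left empty here; a real crash handler would pass the exception context (and optionally a MINIDUMP_CALLBACK_INFORMATION pointer to filter modules) instead of IntPtr.Zero:

using System;
using System.Diagnostics;
using System.IO;
using System.Runtime.InteropServices;

static class DumpWriter
{
    // Values copied from DbgHelp.h (only the flags listed above).
    [Flags]
    enum MinidumpType : uint
    {
        WithDataSegs               = 0x00000001,
        WithHandleData             = 0x00000004,
        WithUnloadedModules        = 0x00000020,
        WithPrivateReadWriteMemory = 0x00000200,
        WithFullMemoryInfo         = 0x00000800,
        WithThreadInfo             = 0x00001000,
    }

    [DllImport("dbghelp.dll", SetLastError = true)]
    static extern bool MiniDumpWriteDump(
        IntPtr hProcess, uint processId, SafeHandle hFile, MinidumpType dumpType,
        IntPtr exceptionParam, IntPtr userStreamParam, IntPtr callbackParam);

    // Write a dump of the current process with the flag combination suggested above.
    public static void Write(string path)
    {
        const MinidumpType flags =
            MinidumpType.WithPrivateReadWriteMemory | MinidumpType.WithDataSegs |
            MinidumpType.WithHandleData | MinidumpType.WithFullMemoryInfo |
            MinidumpType.WithThreadInfo | MinidumpType.WithUnloadedModules;

        using (var process = Process.GetCurrentProcess())
        using (var file = File.Create(path))
        {
            if (!MiniDumpWriteDump(process.Handle, (uint)process.Id,
                    file.SafeFileHandle, flags, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero))
                throw new System.ComponentModel.Win32Exception();
        }
    }
}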
Just FYI, as mentioned above, ClrDump looks very cool, but it appears it only works with the 1.1 and 2.0 runtimes.
With all due respect, I STRONGLY encourage you to sign up for a Microsoft WinQual account and register your applications with Microsoft.
http://www.microsoft.com/whdc/winlogo/maintain/StartWER.mspx
This will allow you to not only take advantage of Microsoft's extensive crash collection and analysis services (for free!), but also to publish fixes and patches for your applications through Windows' built-in error reporting facilities.
Further, by participating in the WinQual program, enterprises who deploy your app and who employ an in-house Windows Error Reporting system will be able to collect, report and receive patches for your app too.
Another benefit is that by employing WinQual, you're one step closer to getting your app logo certified!
Every OEM & ISV I've worked with who uses WinQual saves an ENORMOUS amount of effort and expense compared to rolling their own crash collection and reporting system.
I wrote an email to the author of ClrDump asking which MINIDUMP_TYPE parameters his tool uses to create dumps in 'min' mode. I posted his answer here: What is minimum MINIDUMP_TYPE set to dump native C++ process that hosts .net component to be able to use !clrstack in windbg
ClrDump might help you out.
ClrDump is a set of tools that allow to produce small minidumps of managed applications. In the past, it was necessary to use full dumps (very large in size) if you needed to perform post-mortem analysis of a .NET application. ClrDump can produce small minidumps that contain enough information to recover the call stacks of all threads in the application.
I was wondering if there is a less intrusive way to analyze a running managed process in production environments.
Less intrusive meaning:
No delay of execution when attaching the debugger.
No delay of execution when getting basic stats like running threads.
In the Java world there is such a tool as part of the JDK. I was wondering if there are similar tools in the .NET world.
The tool should answer questions like:
What are the thread pool parameters? Same as "!threadpool" in Windbg.
What are the call stacks of my currently running threads? (Yep, you get this from the Java tool :) )
Basic heap analysis, e.g. how many objects of type ABC there are.
Any ideas?
Alex
If I understand you correctly, you don't want to actually debug the program, only get some basic information. In such cases, Process Explorer may be sufficient.
As Oefe says, you can get a lot of info, including the stacks of all threads, from Process Explorer. Also, the .NET runtime exposes a number of useful performance counters that may give you some insight. If you have special needs, your application can publish its own counters.
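As a small illustration of those built-in counters, a sketch that reads two of the CLR's standard counters for a given process instance. The instance-name handling is simplified and "MyService" is a placeholder (the instance name is usually the exe name without extension, but can differ when several instances run):

using System;
using System.Diagnostics;

class ClrCounterPeek
{
    static void Main(string[] args)
    {
        string instance = args.Length > 0 ? args[0] : "MyService"; // placeholder

        // Managed heap size and logical thread count, read-only so we don't create counters.
        using (var heapBytes = new PerformanceCounter(
                   ".NET CLR Memory", "# Bytes in all Heaps", instance, true))
        using (var logicalThreads = new PerformanceCounter(
                   ".NET CLR LocksAndThreads", "# of current logical Threads", instance, true))
        {
            Console.WriteLine("Managed heap: {0:N0} bytes", heapBytes.NextValue());
            Console.WriteLine("Logical threads: {0}", logicalThreads.NextValue());
        }
    }
}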
Here is an article on production debugging in a non-intrusive manner using ETW, and here is another one.
It depends on what you want to debug. WinDbg is the giant hammer of Windows debugging, suitable for debugging anything from kernel extensions on up.
If you just want to debug a program, most people just use Visual Studio, which will attach to a running process.
However, Oefe may have the bull by the horns here. When most people say 'debugger', they want backtraces and breakpoints and such. In Java, you need to make prior arrangements to attach that sort of debugger. Either WinDbg or Visual Studio (-debugexe) is more convenient than that.
I'm using WinDbg occasionally to analyze problems in production environments where VS cannot be installed. There's no doubt it's an extremely powerful tool, but using it is a bit annoying. Even though the product is frequently updated, its GUI goes back to the Win95 days or so, and its usability follows suit. Having to fight the GUI to lay out the windows the way I want, and having to remember all those textual commands, falls well below the standard for a modern desktop application.
AFAIK, WinDbg is pretty much built on top of CDB, which is a command-line debugger. That being so, it shouldn't be that hard to build a modern GUI wrapper to replace the existing dinosaur. Has anyone ever done that? Am I the only one with these mixed feelings toward WinDbg?
(BTW, I know I can create a dump and take it back to where I have VS, but I sometimes have to debug 64-bit processes, and I don't have a 64-bit dev machine. Sad, but true.)
Consider the new WinDbg. (It's still in Preview). It also supports Time Travel Debugging.
You can install it from the Microsoft Store, or use the links here.
Here is what's new with Windbg Preview.
Have a look at this if you fancy trying out a GUI to replace WinDbg.
EDIT:
Since SOS Assist is no longer available, this answer should be deleted. As this answer has been accepted, I personally cannot delete it, so please ignore it.
I guess that's too much to expect. Given the large number of commands it has, it would not be trivial to build a UI that displays everything in fancy controls. It might also make the tool bulkier and slower.
However, it does provide the controls that any user-mode application debugger should have. It displays the most frequently needed information, like the call stack, local variables, threads and so on, in separate windows.
But if you need more advanced debugging features, you always have the command interface.
WinDbg is pretty much it; no one has ever bothered to write their own UI for it. Even with its quirks I'm a fan, because it's mostly command-line driven. So, to each their own :)
Note that the VS 2011 Dev Preview basically integrates WinDBG support, so in the future VS will be the new WinDBG UI.
-scott