I have a program that runs under Windows. I only have the binary, no symbol information, and VS2008. When I run this program, it hangs for around 60 seconds doing something, and I would like to understand what it is doing. Under Linux I would use ltrace, strace and gdb, but on Windows I have no experience whatsoever.
I found Process Monitor, which solved my problem. It's a very nice program with great filtering capabilities.
Related
I noticed that some common commands (ls, cat, touch, etc.) run very slowly on my Mac, and I couldn't find out why. So I used top to monitor CPU usage after running some programs in a terminal. I found that no matter what program I run, a process called automountd pops up immediately and starts using a lot of CPU (60%-70%). I feel this might be the cause. If so, why is this happening? What should I do?
Edit: I've confirmed that automountd/autofsd slows down my command line. After I kill autofsd, ls and the other commands become responsive. But disabling autofsd doesn't seem to be a perfect solution, so I hope someone can shed some light on this.
I have a Python program that is mostly complete, and there is one thing that I'd like to change, which may or may not be possible.
This program uses PyQt to display a GUI, and I have it pretty much done, so I was wondering if I can make Python not open a terminal window when I launch the program.
I am using Windows XP right now, but the machines it will run on will be Windows 7. I generally work with Linux, so I'm not terribly familiar with Windows.
If the terminal has to be there, it's no big deal, but I feel like it's extraneous at this point.
Thanks!
Use the .pyw extension for your script, e.g. program.pyw.
This causes your program to be run with pythonw.exe instead of python.exe, which suppresses the console window.
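For example, a minimal sketch of such a script, assuming PyQt4 and a hypothetical file name gui_app.pyw:

# gui_app.pyw -- hypothetical name; the .pyw extension makes Windows launch it
# with pythonw.exe, so no console window is opened.
import sys
from PyQt4 import QtGui  # assumes PyQt4; adjust the import for your PyQt version

app = QtGui.QApplication(sys.argv)
label = QtGui.QLabel("Running without a console window")
label.show()
sys.exit(app.exec_())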
When a GUI program is not behaving as I expect on Linux, I run it from the terminal so that I can see what errors are happening in the background. This helps in configuring the system and figuring out dependencies when the problem is not with the program but rather with the system. It is especially helpful when the program doesn't give any feedback to the user in the GUI.
Is there a way to do the same thing in Windows?
In general, Windows programs are much less well defined in how they run, and there tends to be much wider variation in application quality, especially with regard to logging and debugging.
Most (all?) Windows applications can be run from the command line. You just have to find where the executable is located and then run it. Looking at the path in an application shortcut can be helpful for this.
Some applications have command line switches that do various things, but many don't. If there is documentation for the particular program, you may want to check there.
Some well-behaved Windows programs will log to the System log. Depending on your access level and the version of Windows you are running, the System log can give you some good information about things that might be going wrong with the program, such as file permissions, etc.
In general, however, no, there is no way to get error output from a program if it doesn't already output it.
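If the program does write something to stdout or stderr, though, you can capture that output when launching it yourself. A minimal sketch, assuming Python is installed and using a hypothetical path C:\SomeApp\app.exe:

# Launch a (hypothetical) Windows program and capture whatever it writes to
# stdout/stderr into log files for later inspection.
import subprocess

exe = r"C:\SomeApp\app.exe"  # hypothetical path; point this at the real executable
with open("app_stdout.log", "w") as out, open("app_stderr.log", "w") as err:
    subprocess.call([exe], stdout=out, stderr=err)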
I have a big system that makes my machine crash hard. When I boot back up, I don't even have a core dump. If I could log every line that gets executed until the system goes down, I would find that evil code.
Can I get GDB to log every executed source line to a file?
UPDATE:
OK, I found the bug. It was nasty. The application I started did not take the system down. After learning about core dump inspection with mdb, and after some gdb stepping, I found out that the system call causing the dump was not implemented. Updating the system to the latest kernel will fix my problem. Thanks to all of you.
MY LESSON:
Make sure you know which process caused the core dump. It's not always the one you started.
Sounds like a tricky little problem.
I often try to eliminate as many possible suspects as I can by commenting out large chunks of code, configuring the system to not run certain pieces (if it allows you to do that) etc. This amounts to doing an ad-hoc binary search on the problem, and is a surprisingly effective way of zooming in on offending code relatively quickly.
A potential problem with logging is that the log might not hit the disk before the system locks up - if you don't get a core dump, you might not get the log.
Speaking of core dumps, make sure you don't have a limit on your core dump size (man ulimit.)
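For example, on a typical Linux shell you would remove the limit with something like ulimit -c unlimited before launching the application.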
You could try to obtain a list of all the functions in your code using objdump, process it a little bit and create a bunch of GDB trace statements on those functions - basically creating a GDB script automatically. If that turns out to be overkill, then a binary search on the code using tracepoints can also help you zoom in on the problem.
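A rough sketch of that approach, using ordinary breakpoints with attached commands rather than real tracepoints, assuming an unstripped binary, GNU nm, and hypothetical names myapp and trace.gdb (expect it to be very slow):

# Build a GDB script that stops at every function in the (hypothetical) binary
# "myapp", prints the innermost frame, and continues.
import subprocess

symbols = subprocess.check_output(["nm", "--defined-only", "myapp"]).decode()

with open("trace.gdb", "w") as script:
    for line in symbols.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[1].lower() == "t":  # text (code) symbols only
            script.write("break %s\n" % parts[2])
            script.write("commands\nsilent\nbacktrace 1\ncontinue\nend\n")

# Then run: gdb -x trace.gdb ./myapp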
And don't panic. You're smarter than the bug - you'll find it.
You cannot reasonably trace every line of your source using GDB (it's far too slow). Besides, a system crash is most likely the result of a system call, and libc is probably doing the system call on your behalf. Even if you find the line of the application that caused the OS crash, you still don't really know anything.
You should start by clarifying which OS is crashing. For Linux, you can try the following approaches:
strace -fo trace.out /path/to/app
After reboot, trace.out will contain the syscalls the application was making just before the crash. If you are lucky, you'll see the last syscall-of-death, but I wouldn't count on it.
Alternatively, try to reproduce the crash under user-mode Linux, or on a kernel with KGDB compiled in.
These will tell you where the problem in the kernel is. Finding the matching system call in your application will likely be trivial.
Please clarify your problem: What part of the system is crashing?
Is it an application?
If so, which application? Is this an application which you have written yourself? Is this an application you have obtained from elsewhere? Can you obtain a clean interrupt if you use a debugger? Can you obtain a backtrace showing which functions are calling the section of code which crashes?
Is it a new hardware driver?
Is it based on an older driver? If so, what has changed? Is it based on a manufacturer's data sheet? Is that data sheet the latest and most correct?
Is it somewhere in the kernel? Which kernel?
What is the OS? I assume it is Linux, seeing that you are using the GNU debugger. But of course, that is not necessarily so.
You say you have no coredump. Have you enabled coredumps on your machine? Most systems these days do not have coredumps enabled by default.
Regarding logging GDB output, you may have some success, but whether the right output gets logged before the system crashes depends on where the problem is. There is plenty of delay in writing to disk, so you may not catch it in time.
I'm not familiar with the gdb way of doing this, but with windbg the way to go is to have a debugger attached to the kernel and to control that debugger remotely over a serial cable (or FireWire) from a second debugger. I'm pretty sure gdb has similar capabilities; I could quickly find some hints here: http://www.digipedia.pl/man/gdb.4.html
I have a Win32 console application which is doing some computations, compiled in Compaq Visual Fortran (which probably doesn't matter).
I need to run a lot of them simultaneously.
In XP, they together take around 90-100% of the CPU and run very fast.
In Vista, no matter how many of them I run, they take no more than 10% of the CPU (together), and they run correspondingly slowly.
There is quite a bit of console output going on, but not VERY much.
I can minimize all the windows, it does not help. CPU is basically doing nothing...
Any ideas?
Update:
1. No, these are different machines, but they have roughly the same hardware.
2. Threads are not used; this is a VERY OLD (20 years) plain DOS app, compiled for Win32. It is supposed to compute iterations until they converge and to consume everything it can get. My impression: Vista just does NOT GIVE IT MORE CPU.
Have you tried redirecting the console output to a file?
If your applications are being held up writing to the console (this sometimes happens, unfortunately), then redirecting the output should help, as it's much quicker to write to a simple file than to the console.
You do this like so:
c:\temp> dir > output.log
If you really don't care about the output at all, you can throw it away by redirecting to nul, e.g.:
c:\temp> dir > nul
There was a known "feature" in Vista that limits certain console applications to 32MB of RAM. I don't know if those compiled by Compaq Visual Fortran are affected by this "feature."
This article appears to have been updated as recently as October 2008, so the problem still exists.
To expound on Daok's post: your XP machine might be CPU-bound for this process, whereas the Vista machine is bound by some other resource.
To clarify:
Output to stdout (or elsewhere) can be slowing down the processing, as can context switching, file access, etc.
As Tim hinted, console output (stdout) is EXTREMELY expensive.
I suggest rerunning your test while redirecting the console output to a separate log file for each process. If possible, tune down the verbosity of the output in another test run.
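A rough sketch of that kind of test run, assuming Python is available and using a hypothetical executable name compute.exe:

# Start several instances of a (hypothetical) console app, sending each
# instance's console output to its own log file instead of the console.
import subprocess

exe = "compute.exe"  # hypothetical name; substitute the real executable
procs = []
for i in range(4):
    log = open("run%d.log" % i, "w")
    procs.append((subprocess.Popen([exe], stdout=log, stderr=subprocess.STDOUT), log))

for proc, log in procs:
    proc.wait()
    log.close()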
Beyond that, there are other obvious possibilities: is the hardware significantly different, are there other major processes running, is there a shared resource that is under contention?
Other than the obvious, look for a nonobvious resource contention such as a shared file.
But the main area where I would look is whether there is a significant difference in how your code is compiled for the two OS environments--I wonder if your Fortran code is incurring some kind of special penalty when running on Vista, such as a compatibility mode. Look to see how well Vista is supported and whether you can target your compile for Vista specifically. Also look for anyone reporting similar issues, such as in bug reports, feature requests, etc.
Your loops are obviously not simple computations. There is a blocking system call in there somewhere. Just because it worked on XP doesn't mean the app is bug free.
Since you can minimize the console windows and see no improvement, I would not consider that an issue. In my experience console output slows a program down only if the console window is drawing text, not when it's minimized.
Is it the same hardware on your Vista and XP machines? It might use just 10% of the CPU on Vista because it doesn't require more. Are you using threads? I think we need more information about your project to have a better idea. Have you tried using a profiler to see what's going on?