I have a crash dump of an application that is supposedly leaking GDI objects. The app is running on XP and I have no problems loading the dump into WinDbg to look at it. Previously we have used the Gdikdx.dll extension to look at GDI information, but this extension is not supported on XP or Vista.
Does anyone have any pointers for finding GDI object usage in WinDbg?
Alternatively, I do have access to the failing program (and its stress testing suite) so I can reproduce on a running system if you know of any 'live' debugging tools for XP and Vista (or Windows 2000 though this is not our target).
I've spent the last week working on a GDI leak finder tool. We also perform regular stress testing, and it never lasted longer than a day's worth of testing without stopping due to USER/GDI object handle overconsumption.
My attempts have been pretty successful as far as I can tell. Of course, I spent some time beforehand looking for an alternative and quicker solution. It is worth mentioning that I had some previous semi-lucky experience with the GDILeaks tool from the MSDN article mentioned above. Even then I had to solve a few problems before putting it to work, and this time it just didn't give me what I wanted, nor how I wanted it. The downside of their approach is the heavyweight debugger interface (it slows down the target under investigation by orders of magnitude, which I found unacceptable). Another downside is that it did not work all the time - on some runs I simply could not get it to report or compute anything! Its complexity (judging by the amount of code) was another scare-away factor. I'm not a big fan of GUIs, as it is my belief that I'm more productive with no windows at all ;o). I also found it hard to make it find and use my symbols.
One more tool I used before setting out to write my own was LeakBrowser.
Anyways, I finally settled on an iterative approach to achieve the following goals:
minor performance penalties
implementation simplicity
non-invasiveness (used for multiple products)
relying as much as possible on what was already available
I used Detours (the non-commercial release) for the core functionality (it is an injectable DLL), JavaScript for automatic code generation (a 15K script generates 100K of source code - no way would I code this manually, and no C preprocessor involved!), plus a WinDbg extension for data analysis and snapshot/diff support.
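To give a feel for the shape of the generated hooks, here is an illustrative sketch of the technique only - not the tool's actual code. The hook names and the pen counter are made up, and a real run detours the whole GDI allocation surface, but each paired allocate/free API ends up looking roughly like this:

#include <windows.h>
#include <detours.h>   // Microsoft Detours; link with detours.lib

static volatile LONG g_penCount = 0;     // one counter per object type in a real tool

static HPEN (WINAPI *Real_CreatePen)(int, int, COLORREF) = CreatePen;
static BOOL (WINAPI *Real_DeleteObject)(HGDIOBJ)         = DeleteObject;

static HPEN WINAPI Hook_CreatePen(int style, int width, COLORREF color)
{
    HPEN pen = Real_CreatePen(style, width, color);
    if (pen != NULL)
        InterlockedIncrement(&g_penCount);   // a real tool also records the call stack here
    return pen;
}

static BOOL WINAPI Hook_DeleteObject(HGDIOBJ obj)
{
    DWORD type = GetObjectType(obj);         // classify before the handle goes away
    BOOL ok = Real_DeleteObject(obj);
    if (ok && type == OBJ_PEN)
        InterlockedDecrement(&g_penCount);
    return ok;
}

BOOL WINAPI DllMain(HINSTANCE, DWORD reason, LPVOID)
{
    if (reason == DLL_PROCESS_ATTACH)
    {
        DetourTransactionBegin();
        DetourUpdateThread(GetCurrentThread());
        DetourAttach(&(PVOID&)Real_CreatePen, (PVOID)Hook_CreatePen);
        DetourAttach(&(PVOID&)Real_DeleteObject, (PVOID)Hook_DeleteObject);
        DetourTransactionCommit();
    }
    return TRUE;
}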
To make a long story short - after I was finished, it was a matter of a few hours to collect information during another stress test and another hour to analyze and fix the leaks.
I'll be more than happy to share my findings.
P.S. I did spend some time trying to improve on the previous work. My intention was to minimize false positives (I saw far too many of those while developing), so the tool also checks for allocation/release consistency and avoids counting allocations that are never leaked.
Edit: Find the tool here
There was an MSDN Magazine article from several years ago that talked about GDI leaks. It points to several different places with good information.
In WinDbg, you may also try the !poolused command for some information.
Finding resource leaks from a crash dump (post-mortem) can be difficult -- if the leak always happens in the same place, via the same variable, and you're lucky, you might see the last place where it was leaked. It would probably be much easier with a live program running under the debugger.
You can also try using Microsoft Detours, but the license doesn't always work out. It's also a bit more invasive and advanced.
I have created a WinDbg script for that. Look at the answer to
Command to get GDI handle count from a crash dump
To track the allocation stack you could set a ba (Break on Access) breakpoint past the last allocated GDICell object to break just at the point when another GDI allocation happens. That could be a bit complex because the address changes but it could be enough to find pretty much any leak.
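For example (the address here is hypothetical; kb dumps the call stack when the write happens and gc resumes execution):

ba w4 05aa0c3c "kb; gc"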
I have been investigating the windows Prefetching system hoping to find a way to speed up the load time of an application I am working on. I found the following link where a developer describes modifications to the prefetcher registry values:
http://dotnet.dzone.com/news/improving-cold-startup
I have made similar modifications locally and found that they do provide faster application loading times. My problem is that I cannot find any documentation on the registry values that were changed and why the new values are better than the old ones.
So my question in short is, does anybody have any further information on the prefetcher registry values given below:
VideoInitTime
EnablePrefetcher
AppLaunchMaxNumPages
AppLaunchMaxNumSections
AppLaunchTimerPeriod
BootMaxNumPages
BootMaxNumSections
BootTimerPeriod
MaxNumActiveTraces
MaxNumSavedTraces
RootDirPath
HostingAppList
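As far as I can tell these live under the PrefetchParameters key. For reference, a minimal sketch for reading one of them programmatically (verify the key path on your own machine first):

#include <windows.h>
#include <stdio.h>

int main()
{
    HKEY key;
    const wchar_t* path =
        L"SYSTEM\\CurrentControlSet\\Control\\Session Manager\\Memory Management\\PrefetchParameters";
    if (RegOpenKeyExW(HKEY_LOCAL_MACHINE, path, 0, KEY_READ, &key) == ERROR_SUCCESS)
    {
        DWORD value = 0, size = sizeof(value), type = 0;
        if (RegQueryValueExW(key, L"EnablePrefetcher", NULL, &type,
                             (LPBYTE)&value, &size) == ERROR_SUCCESS && type == REG_DWORD)
        {
            // Commonly documented meaning: 0 = disabled, 1 = application launch
            // prefetching, 2 = boot prefetching, 3 = both.
            printf("EnablePrefetcher = %lu\n", value);
        }
        RegCloseKey(key);
    }
    return 0;
}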
You don't say what profiling or other changes you've done, and when people dive in with off-the-wall solutions to perf problems but don't describe how they arrived at the need for them, I'm always a bit doubtful.
Where is your app spending its start-up time? How did you measure that? Can you fix an underlying '300 dlls' problem of the type described in that article?
Messing with OS prefetch policy may be improving your application at the expense of everyone else, which may be the right thing to do (on a single-use industrial control system or something like that), but may be completely antisocial.
"Load less code" is often a good general way to improve application startup time - do you have some very expensive config file storage mechanism, for example (XmlSerializer was notorious for this at one point, for example).
I am trying to help a client with a problem, but I am running out of ideas. They have a custom, written in house application that runs on a schedule, but it crashes. I don't know how long it has been like this, so I don't think I can trace the crashes back to any particular software updates. The most unfortunate part is there is no longer any source code for the VB6 DLL which contains the meat of the logic.
This VB6 DLL is kicked off by 2-3 function calls from a VB Script. Obviously, I can modify the VB Script to add error logging, but I'm not having much luck getting quality information to pinpoint the source of the crash. I have put logging messages on either side of all of the function calls and determined which of the calls is causing the crash. However, nothing is ever returned in the err object because the call is crashing wscript.exe.
I'm not sure if there is anything else I can do. Any ideas?
Edit: The main reason I care, even though I don't have the source code is that there may be some external factor causing the crash (insufficient credentials, locked file, etc). I have checked the log file that is created in drwtsn32.log as a result of wscript.exe crashing, and the only information I get is an "Access Violation".
I first tend to think this is something to do with security permissions, but couldn't this also be a memory access violation?
You may consider using one of the Sysinternals tools if you truly think this is a problem with the environment such as file permissions. I once used Filemon to figure out all the files my application was touching and discovered a problem that way.
You may also want to do a quick sanity check with Dependency Walker to make sure you are actually loading the DLL files you think you are. I have seen the wrong version of the C runtime being loaded and causing a mysterious crash.
Depending on the scope of the application, your client might want to consider a rewrite. Without source code, they will eventually be forced to do so anyway when something else changes.
It's always possible to use a debugger - either directly on the PC that's running the crashing app or on a memory dump - to determine what's happening to a greater or lesser extent. In this case, where the code is VB6, that may not be very helpful because you'll only get useful information at the Win32 level.
Ultimately, if you don't have the source code then will finding out where the bug is really help? You won't be able to fix it anyway unless you can avoid that code path for ever in the calling script.
You could use the Debugging Tools for Windows, which might help you pinpoint the error, but without the source to fix it, that won't do you much good.
A lazier way would be to call the dll from code (not a script) so you can at least see what is causing the issue and inspect the err object. You still won't be able to fix it, unless the problem is that it is being called incorrectly.
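If the DLL is a VB6 COM DLL, one way to do that from native code is to drive it through IDispatch under a debugger, so the access violation is caught with a usable native call stack instead of taking down wscript.exe. A rough sketch - the ProgID "MyLib.MyClass" and the method name "DoWork" are hypothetical; substitute whatever the VBScript currently creates and calls:

#include <windows.h>
#include <comdef.h>   // IDispatchPtr smart pointer
#include <stdio.h>

#pragma comment(lib, "ole32.lib")

int main()
{
    CoInitialize(NULL);
    {
        CLSID clsid;
        HRESULT hr = CLSIDFromProgID(L"MyLib.MyClass", &clsid);      // hypothetical ProgID
        if (SUCCEEDED(hr))
        {
            IDispatchPtr disp;
            hr = disp.CreateInstance(clsid);
            if (SUCCEEDED(hr))
            {
                // Look up and invoke the suspect method by name, with no arguments.
                DISPID dispid;
                LPOLESTR name = const_cast<LPOLESTR>(L"DoWork");     // hypothetical method
                hr = disp->GetIDsOfNames(IID_NULL, &name, 1, LOCALE_USER_DEFAULT, &dispid);
                if (SUCCEEDED(hr))
                {
                    DISPPARAMS noArgs = { NULL, NULL, 0, 0 };
                    hr = disp->Invoke(dispid, IID_NULL, LOCALE_USER_DEFAULT,
                                      DISPATCH_METHOD, &noArgs, NULL, NULL, NULL);
                }
            }
        }
        printf("last HRESULT = 0x%08lx\n", (unsigned long)hr);
    }
    CoUninitialize();
    return 0;
}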
The guy behind Coding The Wheel has a pretty interesting series about building an online poker bot. It is full of serious technical info, much of it concerned with how to get into existing applications and mess with them, which is, in some way, what you want to do.
Specifically, he has an article on using WinDbg to get at important info, one on how to bend function calls to your own code and one on injecting DLLs in other processes. These techniques might help to find and maybe work around or fix the crash, although I guess it's still a tough call.
There are a couple of tools that may be helpful. First, you can use Dependency Walker to do a runtime profile of your app:
http://www.dependencywalker.com/
There is a profile menu and you probably want to make sure that the follow child processes option is checked. This will do two things. First, it will allow you to see all of the lib versions that get pulled in. This can be helpful for some problems. Second, the runtime profile uses the debug memory manager when it runs the child processes. So, you will be able to see if buffers are getting overrun and a little bit of information about that.
Another useful tool is process monitor from Mark Russinovich:
http://technet.microsoft.com/en-us/sysinternals/bb896645.aspx
This tool will report all file, registry and thread operations. It will help you determine whether you are bumping into file or registry credential issues.
Process explorer gives you a lot of the same information:
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
This is also a Russinovich tool. I find that it is a bit easier to look at some data through this tool.
Finally, using the Debugging Tools for Windows or Dev Studio can give you some insight into where the errors are occurring.
An access violation is almost always a memory error - all the more likely in this case because it's crashing randomly (a permissions problem would likely be more obviously reproducible). In the case of a DLL it could be either:
An error in the code of the DLL itself - this could be something like a memory allocation error or even a simple loop boundary condition error.
An error when the DLL tries to link out to another DLL on the system. This will generally be caused by a mismatch between DLL versions on the machine.
Your first step should be to try and get a reproducible crash condition. If you don't have a set of circumstances that will crash the system then you cannot know when you have fixed it.
I would then install the system on a clean machine and attempt to reproduce the error on that. Run a monitor and check precisely what other files (dlls etc) are open when the program crashes. I have seen code that crashes on a hyperthreaded Pentium but not on an earlier one - so restoring an old machine as a testbed may be a good option to cover that one. Varying the amount of ram in the machine is also worthwhile.
Hopefully these steps might give you a clue. With luck it will be an environment problem that can be avoided by using the right version of Windows, DLLs, etc. However, if you're still stuck at this point with no good clues, your options are either to rewrite or to hunt the problem down further by debugging the DLL at assembler level or disassembling it. If you are not familiar with assembly code then both of these are long shots, it's difficult to see what you will gain, and either option is likely to be a massive time sink. In the past, when faced with a particularly low-level, high-intensity problem like this, I have advertised on one of the 'coder for hire' websites and looked for someone with specialist knowledge. Again, you will need a reproducible error to be able to do this.
In the long run a DLL without source code will have to be replaced. Paying a specialist with assembly skills to analyse the functions and provide you with flowcharts may well be worth considering. It is good business practice to do this sooner, in a controlled manner, rather than later - like after the machine it is running on has died and that version of Windows is no longer easily available.
You may want to try Resource Hacker; you may have luck decompiling the in-house application. It may not give you the full source code, but it could at least give you more information about what the app is doing, which may also help you determine the culprit.
Add the maximum possible RAM to the machine
This simple and cheap hack has worked for me in the past. Of course, YMMV.
Reverse engineering is one possibility, although a tough one.
In theory you can decompile and even debug/trace a compiled VB6 application - that is the easy part; modifying it without source, in all but the simplest cases, is the hard part.
Free compilers/decompilers:
VB decompilers
VB debuggers
Rewrite would be, in most cases, a more successful and faster way to solve the problem.
By 'challenging development environment' I don't mean you're on a small boat that's rocking up and down and someone is holding a gun to your head. I mean, are the tools at your disposal making the problem difficult?
Development is typically a cycle of code, run, observe the result, repeat. In some environments this is a very quick and painless process, but in others it's very difficult. We end up using little tricks to help us observe the result and run the code faster.
I was thinking of this because I just started using SSIS (an ETL tool included with SQL Server 2005/8). It's been quite challenging for me to make progress, mainly because there's no guidance on what all the dialogs mean and also because the errors are very cryptic and most of the time don't really tell you what the problem is.
I think the easiest environment I've had to work in was VB6 because there you can edit code while the application is running and it will continue running with your new code! You don't even have to run it again. This can save you a lot of time. Surprisingly, in Netbeans with Java code, you can do the same. It steps out of the method and re-runs the method with the new code.
In SQL Server 2000 when there is an error in a trigger you get no stack trace, which can make it really tricky to locate where the problem occurred since an insert can have a cascading effect and trigger many triggers. In Oracle you get a very nice little stack trace with line numbers so resolving the problem is very easy.
Some of the things that I see really help in locating problems:
Good error messages when things go wrong.
Providing a stack trace when a problem occurs.
Debug environment where you can pause, then output the value of variables and step to follow the execution path.
A graphical debug environment that shows the code as it's running.
A screen that can show the current values of variables so you can print to them.
Ability to turn on debug logging on a production system.
What is the worst you've seen and what can be done to get around these limitations?
EDIT
I really didn't intend for this to be flame bait. I'm really just looking for ideas to improve systems so that if I'm creating something I'll think about these things and not contribute to people's problems. I'm also looking for creative ways around these limitations that I can use if I find myself in this position.
I was working on making modifications to Magento for a client. There is very little information on how the Magento system is organized. There are hundreds of folders and files, and there are at least a thousand view files. There was little support available from Magento forums, and I suspect the main reason for this lack of information is because the creators of Magento want you to pay them to become a certified Magento developer. Also, at that time last year there was no StackOverflow :)
My first task was to figure out how the database schema worked and which table stored some attributes I was looking for. There are over 300 tables in Magento, and I couldn't find out how the SQL queries were being done. So I had just one option...
I exported the entire database (300+ tables, and at least 20,000 lines of SQL code) into a .sql file using phpMyAdmin, and I 'committed' this file into the Subversion repository. Then, I made some changes to the database using the Magento administration panel, and re-downloaded the .sql file. Then, I ran a diff using TortoiseSVN, and scrolled through the 20k+ line file to find which lines had changed, LOL. As crazy as it sounds, it did work, and I was able to figure out which tables I needed to access.
My second problem was that, because of the crazy directory structure, I had to FTP to about three folders at the same time for trivial changes. So I had to keep three windows of my FTP program open, switch between them and upload each time.
The third problem was figuring out how the URL mapping worked and where some of the code I wanted was being stored. Here, by sheer luck, I managed to find the Model class I was looking for.
Mostly by sheer luck and other similar crazy adventures I managed to work my way through and complete the project. Since then, StackOverflow was started and by a helpful answer to this bounty question I was able to finally get enough information about Magento that I can do future projects in a less crazy manner (hopefully).
Try keypunching your card deck in Fortran, complete with IBM JCL (Job Control Language), handing it in at the data center window, coming back the next morning and getting an inch-thick stack of printer paper with the hex dump of your crash, and a list of the charges to your account.
Grows hair on your fingernails.
I guess that was an improvement on the prior method of sitting at the console, toggling switches and reading the lights.
Occam on a 400-transputer network. As there was only one transputer that could output to the console, debugging was a nightmare. We had to build a test harness on a Sun network.
I took a class once, that was loosely based on SICP, except it was taught in Dylan rather than Scheme. Actually, it was in the old Dylan syntax, the prefix one that was based on Scheme. But because there were no interpreters for that old version of Dylan, the professor wrote one. In Java. As an applet. Which meant that it had no access to the filesystem; you had to write all of your code in a separate text editor, and then paste it into the Dylan interpreter. Oh, and it had no debugging facilities, of course. And being a Dylan interpreter written in Java, and this was back in 2000, it was ungodly slow.
Print statement debugging, lots of copying and pasting, and an awful lot of cursing at the interpreter were involved.
Back in the 90's, I was developing applications in Clipper, a compilable dBase-like language. I don't remember if it came with a debugger; we often used a third-party debugger called 'Mr Debug' (really!). Although Clipper was fast, some of our more intensive routines were written in C. If you prayed to the correct gods and uttered the necessary incantations, you could use Microsoft's CodeView debugger to debug the C code. But usually not for more than a few minutes, as the program usually didn't like to spend much time running under CodeView (usually memory problems).
I had a series of makefile switches that I used to stub out the sections of code that I didn't need to debug at the time. My debugging environment was very sparse so there was as much free memory as possible for the program. I also think I drank a lot more back then...
Some years ago I reverse engineered game copy protections. Because the protections were written in C or C++ they were fairly easy to disassemble and understand. But in some cases it got hairy when the copy protection took a detour into the kernel to obfuscate what was happening. A few of them also started to use custom-made virtual machines to make the problem less understandable. I spent hours writing hooks and debuggers to be able to trace into them. The environment really demanded a competitive and innovative mind. I had everything at my disposal except time. Mistakes caused reboots and gave very little feedback about what went wrong. I realized thinking before acting is often a better solution.
Today I despise debuggers. If the problem is in code visible to me, I find it easiest to use verbose logging. (Sometimes the error is in not understanding the interface/environment; then debuggers are good.) I have also realized that time is of the essence. You need a good working environment with the ability to test your code instantly. If your compiler takes 15 seconds, your environment takes 20 seconds to update, or your caches take 5 minutes to clear, find another way to test your code. Progress keeps me motivated, and without a good working environment I get bored, angry and frustrated.
At my last job I was a Sitecore developer. Bugfixing can be very painful when the bug only occurs on the client's system, they do not have Visual Studio installed on that system, remote debugging is turned off, and the problem only happens on the production server (not the staging server).
The worst in recent memory was developing SSRS reports using Dundas controls. We were doing quite a bit with the grids which required coding. The pain was the bugginess of the controls, and the lack of debugging support.
I never got around the limitations, but just worked through them.
I'm trying to use BoundsChecker to analyze a rather complex program. Running the program with BoundsChecker is almost too slow for it to be of any use, since it takes me almost a day to run the program to the point in the code where I suspect the issue exists. Can anyone give me some ideas for how to inspect only certain parts of my software using BoundsChecker (DevPartner) in Visual Studio 2005?
Thanks in advance for all your help!
I last used BoundsChecker a few years ago, and had the same problems. With large projects, it makes everything run so slowly that it is useless. We ended up ditching it.
But we still needed some of its functionality - though, like you, not for the whole program. So we had to do it ourselves.
In our case, we mainly used it to try and track down memory leaks. If that's your objective as well, there are other options.
1. Visual Studio does a pretty good job of telling you about memory leaks when your program exits
2. It reports leaks in the order that they were created
3. It will tell you exactly where the leaked memory was created if your source files have this at the top
#ifdef _DEBUG
#undef THIS_FILE
static char THIS_FILE[]=__FILE__;
#define new DEBUG_NEW
#endif
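For non-MFC code, a similar effect can be had straight from the CRT debug heap. A minimal sketch (debug builds only; the deliberate leak is just there to show the exit-time report):

#define _CRTDBG_MAP_ALLOC      // record file/line for malloc and new in debug builds
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // Ask the debug CRT to check the heap and dump any leaked blocks at exit.
    _CrtSetDbgFlag(_CrtSetDbgFlag(_CRTDBG_REPORT_FLAG) | _CRTDBG_LEAK_CHECK_DF);

    char* leaked = (char*)malloc(64);   // deliberately leaked to demonstrate the report
    (void)leaked;
    return 0;
}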
Those help a lot, but it's often not enough. Adding that snippet everywhere isn't always feasible. If you use factory classes, knowing where memory was allocated doesn't help at all. So when all else fails, we take advantage of #2.
Add something like the following:
#define LEAK(str) { char* s = (char*)malloc(strlen(str) + 1); strcpy(s, str); }  // deliberately leaked marker
Then, pepper your code with LEAK("leak1"); and so on. Run the program, and exit it. Your new leaked strings will display in Visual Studio's leak dump surrounding the existing leaks. Keep adding/moving your LEAK statements and re-running the program to narrow your search until you've pinpointed the exact location. Then fix the leak, remove your debugging leaks, and you're all set!
BoundsChecker tracks all memory allocations and releases in extreme detail. It knows, for instance, that such and such a memory allocation was done from the C runtime heap, which in turn was taken from a Win32 heap, which in turn started life as memory allocated by VirtualAlloc. If the application was instrumented (FinalCheck), it also has detailed information as to which pointers reference the memory.
This is one reason (of many) why the thing is slow.
If BC were to connect to an application late, it would have none of this data built up, and would have to either (1) dig it all up at once, or (2) start guessing about things. Neither solution is very practical.
One way to lighten up BoundsChecker is to exclude from instrumentation all but the few modules you are interested in. I know that's not great, because if you knew where the leak was you wouldn't need BoundsChecker. What I usually recommend is that you use BC's Active Check mode first, with only Memory Tracking enabled. You miss the API validations, but you could always rerun those separately. After you run Active Check and get clues about which modules tend to be problematic, only then do you enable instrumentation for the module or modules of interest and their dependencies.

We know Final Check is annoyingly slow, but as Mistiano correctly states, with Final Check BC keeps not only a graph of all allocated blocks but also all pointers and contexts to them. Therein lies the magic of how Final Check can nail leaks and corruptions at the point of occurrence, not just on application shutdown or fault.

Shameless plug: I work on the DevPartner team. We are releasing DPS 10.5 on February 4, 2011 with x64 application support in BC. Unlike the relatively ancient and undersold BC64 for Itanium, which only provided Active Check, DPS 10.5 provides full Final Check support for x64 applications, both for pure C++ and for native modules running in .NET processes. See microfocus.com under MF Developer for details.
Our application takes significantly more time to launch after a reboot (cold start) than if it was already opened once (warm start).
Most (if not all) of the difference seems to come from loading DLLs; when the DLLs are already in cached memory pages they load much faster. We tried using ClearMem to simulate rebooting (since it's much less time-consuming than actually rebooting) and got mixed results: on some machines it seemed to simulate a reboot very consistently, and on others it did not.
To sum up my questions are:
Have you experienced differences in launch time between cold and warm starts?
How have you dealt with such differences?
Do you know of a way to dependably simulate a reboot?
Edit:
Clarifications for comments:
The application is mostly native C++ with some .NET (the first .NET assembly that's loaded pays for the CLR).
We're looking to improve load time; obviously we did our share of profiling and improved the hotspots in our code.
Something I forgot to mention was that we got some improvement by re-basing all our binaries so the loader doesn't have to do it at load time.
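For illustration, the rebasing step looked roughly like this (the DLL names and base addresses here are hypothetical - pick non-overlapping ranges for your own modules; the same effect is available at link time via the /BASE linker option):

editbin /REBASE:BASE=0x62000000 first.dll
editbin /REBASE:BASE=0x63000000 second.dll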
As for simulating reboots, have you considered running your app from a virtual PC? Using virtualization you can conveniently replicate a set of conditions over and over again.
I would also consider some type of profiling app to spot the bit of code causing the time lag, and then making the judgement call about how much of that code is really necessary, or if it could be achieved in a different way.
It would be hard to truly simulate a reboot in software. When you reboot, all devices in your machine get their reset bit asserted, which should cause all memory system-wide to be lost.
In a modern machine you've got memory and caches everywhere: there's the VM subsystem which is storing pages of memory for the program, then you've got the OS caching the contents of files in memory, then you've got the on-disk buffer of sectors on the harddrive itself. You can probably get the OS caches to be reset, but the on-disk buffer on the drive? I don't know of a way.
How did you profile your code? Not all profiling methods are equal and some find hotspots better than others. Are you loading lots of files? If so, disk fragmentation and seek time might come into play.
Maybe even sticking basic timing information into the code, writing out to a log file and examining the files on cold/warm start will help identify where the app is spending time.
Without more information, I would lean towards filesystem/disk cache as the likely difference between the two environments. If that's the case, then you either need to spend less time loading files upfront, or find faster ways to load files.
Example: if you are loading lots of binary data files, speed up loading by combining them into a single file, then slurp the whole file into memory in one read and parse its contents. Fewer disk seeks and less time spent reading from disk. Again, maybe that doesn't apply.
I don't know offhand of any tools to clear the disk/filesystem cache, but you could write a quick application to read a bunch of unrelated files off of disk to cause the filesystem/disk cache to be loaded with different info.
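Something along these lines should do the trick - a rough sketch, where the file paths are placeholders for large files unrelated to your application:

#include <windows.h>
#include <stdio.h>

static void ReadWholeFile(const wchar_t* path)
{
    // We want these files cached *instead of* the application's DLLs, so use normal
    // buffered reads (FILE_FLAG_NO_BUFFERING would bypass the cache entirely).
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                              OPEN_EXISTING, FILE_FLAG_SEQUENTIAL_SCAN, NULL);
    if (file == INVALID_HANDLE_VALUE)
        return;

    static char buffer[1 << 20];   // 1 MB chunks
    DWORD read = 0;
    while (ReadFile(file, buffer, sizeof(buffer), &read, NULL) && read != 0)
        ;                          // discard the data; the point is just to touch it
    CloseHandle(file);
}

int main()
{
    const wchar_t* bigFiles[] = {  // placeholder paths
        L"C:\\temp\\big1.bin",
        L"C:\\temp\\big2.bin",
    };
    for (size_t i = 0; i < sizeof(bigFiles) / sizeof(bigFiles[0]); ++i)
        ReadWholeFile(bigFiles[i]);
    printf("Cache polluted; try the cold-start measurement now.\n");
    return 0;
}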
@Morten Christiansen said:
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
That makes the customer pay for initializing our app at every boot even when it isn't used. I really don't like that option (neither does Raymond).
One successful way to speed up application startup is to switch DLLs to delay-load. This is a low-cost change (some fiddling with project settings) but can make startup significantly faster. Afterwards, run depends.exe in profiling mode to figure out which DLLs load during startup anyway, and revert the delay-load on those. Remember that you may also delay-load most of the Windows DLLs you need.
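The change itself is small. Assuming a DLL called rarelyused.dll (an example name only), the link step becomes something like the following; the first call into rarelyused.dll then pays the load cost instead of process startup:

link app.obj rarelyused.lib delayimp.lib /DELAYLOAD:rarelyused.dll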
A very effective technique for improving application cold launch time is optimizing function link ordering.
The Visual Studio linker lets you pass in a file listing the functions in the module being linked (or just some of them - it doesn't have to be all of them), and the linker will place those functions next to each other in memory.
When your application is starting up, there are typically calls to init functions throughout your application. Many of these calls will be to a page that isn't in memory yet, resulting in a page fault and a disk seek. That's where slow startup comes from.
Optimizing your application so all these functions are together can be a big win.
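A minimal sketch of how that order file gets used: order.txt lists the (decorated) function names, one per line, in the order they should be laid out, and the code must be compiled with /Gy so the linker can move individual functions. The file name here is arbitrary:

cl /c /Gy app.cpp
link app.obj /ORDER:@order.txt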
Check out Profile Guided Optimization in Visual Studio 2005 or later. One of the things that PGO does for you is function link ordering.
It's a bit difficult to work into a build process, because with PGO you need to link, run your application, and then re-link with the output from the profile run. This means your build process needs a runtime environment and has to deal with cleaning up after bad builds and all that, but the payoff is typically a substantially faster cold launch with no code changes.
There's some more info on PGO here:
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
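For reference, a typical instrument-run-optimize cycle with the VS2005-era switches looks roughly like this (a sketch only; output names are the defaults):

cl /c /GL app.cpp
link /LTCG:PGINSTRUMENT app.obj
rem run the instrumented build through the cold-start scenario to collect .pgc profile data
app.exe
link /LTCG:PGOPTIMIZE app.obj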
As an alternative to a function order list, just group the code that will be called together into the same sections:
#pragma code_seg(".startUp")
//...
#pragma code_seg
#pragma data_seg(".startUp")
//...
#pragma data_seg
It should be easy to maintain as your code changes, but has the same benefit as the function order list.
I am not sure whether a function order list can specify global variables as well, but using #pragma data_seg simply works.
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
Another note, is that .NET 3.5SP1 supposedly has much improved cold-start speed, though how much, I cannot say.
It could be the NICs (LAN cards) and that your app depends on certain other services that require the network to come up. So profiling your application alone may not quite tell you this, but you should examine the dependencies of your application.
If your application is not very complicated, you can just copy all the executables to another directory; that should be similar to a reboot. (Cut and paste doesn't seem to work - Windows is smart enough to know that files moved to another folder are still cached in memory.)