How often do you solve your problems by restarting a computer, router, program, browser? Or even by reinstalling the operating system or software component?
This seems to be a common pattern: when you suspect that a software component is not keeping its state correctly, you simply return it to its initial state by restarting it.
I've heard that Amazon/Google run clusters with a great many nodes, and one important property of each node is that it can restart in seconds. So if one of them fails, returning it to its initial state is just a matter of restarting it.
Are there any languages/frameworks/design patterns out there that leverage this technique as a first-class citizen?
EDIT: A link that describes some of the principles behind Amazon, as well as general principles of availability and consistency:
http://www.infoq.com/presentations/availability-consistency
This is actually very rare in the Unix/Linux world. Those OSes were designed (and so was Windows) to protect themselves from badly behaved processes. I am sure Google is not relying on hard restarts to correct misbehaving software. I would say this technique should not be employed, and if someone tells you that restarting is the fastest route to recovery for their software, you should look for something else!
This is common in the embedded systems world, and in telecommunications. It's much less common in the server based world.
There's a research group you might be interested in. They've been working on Recovery-Oriented Computing or "ROC". The key principle in ROC is that the cleanest, best, most reliable state that any program can be in is right after starting up. Therefore, on detecting a fault, they prefer to restart the software rather than attempt to recover from the fault.
Sounds simple enough, right? Well, most of the research has gone into implementing that idea. The reason is exactly what you and other commenters have pointed out: OS restarts are too slow to be a viable recovery method.
ROC relies on three major parts:
A method to detect faults as early as possible.
A means of isolating the faulty component while preserving the rest of the system.
Component-level restarts.
The real key difference between ROC and the typical "nightly restart" approach is that in ROC the restarts are a reaction to a detected fault. What I mean is that most software is written with some degree of error handling and recovery (throw-and-catch, logging, retry loops, etc.); a ROC program would instead detect the fault (exception) and immediately exit. Mixing up the two paradigms just leaves you with the worst of both worlds: low reliability and errors.
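As a minimal sketch of that detect-and-restart idea (in Python, with the component modeled as a child process; the worker script name, exit-code convention, and restart limit are all invented for illustration):

    import subprocess
    import sys
    import time

    def run_component(cmd, max_restarts=5):
        """Run a component; on a fault, restart it rather than repair its state in place."""
        restarts = 0
        while restarts <= max_restarts:
            proc = subprocess.Popen(cmd)
            code = proc.wait()
            if code == 0:
                return  # clean shutdown, nothing to do
            # Fault detected: micro-reboot the component instead of trying to recover.
            restarts += 1
            print(f"component exited with {code}, restart {restarts}/{max_restarts}", file=sys.stderr)
            time.sleep(1)  # small back-off before restarting
        raise RuntimeError("component keeps failing; escalate instead of restarting forever")

    if __name__ == "__main__":
        run_component([sys.executable, "worker.py"])  # "worker.py" is a hypothetical component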
Microcontrollers typically have a watchdog timer, which must be reset (by a line of code) every so often or else the microcontroller will reset. This keeps the firmware from getting stuck in an endless loop, stuck waiting for input, etc.
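A software analogue of that watchdog, sketched in Python rather than firmware (the timeout, the work function, and the exit-on-expiry policy are illustrative, not how a real hardware watchdog is wired up):

    import os
    import sys
    import threading
    import time

    class SoftwareWatchdog:
        """Kill the process if kick() is not called within `timeout` seconds."""
        def __init__(self, timeout=2.0):
            self.timeout = timeout
            self._last_kick = time.monotonic()
            threading.Thread(target=self._watch, daemon=True).start()

        def kick(self):
            self._last_kick = time.monotonic()  # the "reset the watchdog" line of code

        def _watch(self):
            while True:
                time.sleep(self.timeout / 4)
                if time.monotonic() - self._last_kick > self.timeout:
                    print("watchdog expired, resetting", file=sys.stderr)
                    os._exit(1)  # like a hardware reset; a supervisor would restart us

    def do_work_step():
        time.sleep(0.1)  # stand-in for one pass of the main loop

    watchdog = SoftwareWatchdog(timeout=2.0)
    while True:
        do_work_step()
        watchdog.kick()  # if the loop hangs, the watchdog fires and the process resets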
Unused memory is sometimes filled with an instruction that causes a reset, or a jump to the same location the microcontroller starts at when it is reset. This will reset the microcontroller if execution somehow jumps to a location outside the program memory.
Embedded systems may have a checkpoint feature where the current stack is saved every n ms.
The memory is non-volatile across a power restart (i.e. battery-backed), so on power-up a test is made to see whether the code needs to jump back to an old checkpoint or whether it's a fresh system.
I'm going to guess that a similar technique (but more sophisticated) is used for Amazon/Google.
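A very rough sketch of that checkpoint-and-restore idea in Python (a real embedded system would write to battery-backed RAM rather than a file, and the interval and file name here are invented):

    import os
    import pickle
    import time

    CHECKPOINT_FILE = "state.ckpt"   # stand-in for battery-backed memory

    def load_checkpoint():
        """On start-up, decide whether to resume from an old checkpoint or start fresh."""
        if os.path.exists(CHECKPOINT_FILE):
            with open(CHECKPOINT_FILE, "rb") as f:
                return pickle.load(f)
        return {"counter": 0}        # fresh system

    def save_checkpoint(state):
        with open(CHECKPOINT_FILE, "wb") as f:
            pickle.dump(state, f)

    state = load_checkpoint()
    while True:
        state["counter"] += 1        # the "work"
        time.sleep(0.01)
        if state["counter"] % 100 == 0:
            save_checkpoint(state)   # checkpoint every n iterations (every n ms in the embedded case)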
Though I can't think of a design pattern per se, in my experience it's often the result of "select is broken" thinking from developers.
I've seen a 50-user site cripple both SQL Server Enterprise Edition (with a 750 MB database) and a Novell server because of poor connection management coupled with excessive calls and no caching. Novell was always the culprit according to developers until we found a missing "CloseConnection" call in a core library. By then, thousands were spent, unsuccessfully, on upgrades to address that one missing line of code.
(Why they had Enterprise Edition was beyond me so don't ask!!)
If you look at scripting languages like PHP running on Apache, each invocation starts a new process. In the basic case there is no shared state between processes, and once the invocation has finished the process is terminated.
The advantages are less onus on resource management, since resources are released when the process finishes, and less need for error handling, since the process is designed to fail fast and cannot be left in an inconsistent state.
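A toy illustration of that shared-nothing, process-per-invocation model in Python (Apache/PHP obviously don't work exactly like this; the request list and handler are made up):

    from multiprocessing import Process

    def handle_request(request):
        # All state lives inside this process; when it exits, everything is released.
        print(request.upper())       # stand-in for the real work
        # No clean-up code needed: the process terminates and cannot be left inconsistent.

    if __name__ == "__main__":
        for request in ["first", "second", "third"]:   # stand-in for incoming requests
            worker = Process(target=handle_request, args=(request,))
            worker.start()
            worker.join()            # each invocation gets a fresh process, then dies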
I've seen it a few places at the application level (an app restarting itself if it bombs).
I've implemented the pattern at an application level, where a service reading from dBase files starts getting errors after a certain number of reads. It looks for a particular error that gets thrown, and if it sees that error, the service calls a console app that kills the process and restarts the service. It's kludgy, and I hate it, but for this particular situation I could find no better answer.
Also bear in mind that IIS has a built-in feature that restarts the application pool under certain conditions.
For that matter, restarting a service is an option for any service on Windows as one of the actions to take when the service fails.
I understand why cold starts happen (byte code needs to be turned into machine code through JIT compilation). However, with all the generated metadata available for binaries these days, I do not understand why there isn't a simple tool that automatically takes the byte code and turns ALL PATHS THROUGH THE CODE (auto-discovered) into machine code specific to that target platform. That would mean the first request through any path (assume a REST API) would be fast and not require any further just-in-time compilation.
We can create an automation test suite or load test to JIT all the paths before allowing the machine into the load balancer rotation (good best practice anyway). We can also flip the "always on" setting in cloud hosting providers to keep the warmed application from getting evicted from memory (requiring the entire process over again). However, it seems like such an archaic process to still be having in 2020.
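A bare-bones sketch of that warm-up step in Python, assuming a hypothetical base URL and list of routes to exercise before the node joins the rotation:

    import urllib.request

    BASE_URL = "http://localhost:5000"                     # node being warmed up (hypothetical)
    PATHS = ["/health", "/api/orders", "/api/customers"]   # routes to exercise (hypothetical)

    def warm_up():
        for path in PATHS:
            try:
                with urllib.request.urlopen(BASE_URL + path, timeout=30) as resp:
                    print(f"warmed {path}: {resp.status}")
            except Exception as exc:
                print(f"warming {path} failed: {exc}")     # log and continue; a failed hit is still worth knowing about

    if __name__ == "__main__":
        warm_up()   # only after this completes would the node be added to the load balancer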
Why isn't there a tool that does this? What is the limitation that prevents us from using metadata, debug symbols, and/or other means to understand how to generate machine code that is already warm and ready for users from the start?
So I have been asking some sharp minds around my professional network, and no one seems to be able to point out exactly what limitation makes this so hard to do. However, I did get a few tools on my radar that do what I'm looking for to some level.
Crossgen appears to be the most promising but it's far from widely used among the many peers I've spoken to. Will have to take a closer look.
Also, several do some sort of startup task that runs some class initialization and registers those classes as singletons. I wouldn't consider that much different than just running integration or load tests on the application.
Most programming languages have some form of native image compiler tool. It's up to you to use them if that is what you are looking to do.
Providers are supposed to give you a platform for your application and there is a certain amount of isolation and privacy you should expect from your provider. They should not go digging into your application to figure out all its "paths". That would be very invasive.
Plus "warming up" all paths would be an overly resource intensive process for a provider to be obligated to perform for every application they host.
Could someone here shed some light about how NASA goes about designing their spacecraft architecture to ensure that they are able to patch bugs in the deployed code?
I have never built any “real time” type systems and this is a question that has come to mind after reading this article:
http://pluto.jhuapl.edu/overview/piPerspective.php?page=piPerspective_05_21_2010
“One of the first major things we’ll do when we wake the spacecraft up next week will be uploading almost 20 minor bug fixes and other code enhancements to our fault protection (or “autopilot response”) software.”
I've been a developer on public telephone switching system software, which has pretty severe constraints on reliability, availability, survivability, and performance that approach what spacecraft systems need. I haven't worked on spacecraft (although I did work with many former shuttle programmers while at IBM), and I'm not familiar with VxWorks, the operating system used on many spacecraft (including the Mars rovers, which have a phenomenal operating record).
One of the core requirements for patchability is that a system should be designed from the ground up for patching. This includes module structure, so that new variables can be added, and methods replaced, without disrupting current operations. This often means that both old and new code for a changed method will be resident, and the patching operation simply updates the dispatching vector for the class or module.
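A much-simplified illustration of that dispatch-vector style of patching, sketched in Python (real systems do this at the module/linker level; the function names here are invented):

    # Callers go through a dispatch table rather than binding directly to a function,
    # so applying a patch is a single update to the table while the old code stays resident.
    def compute_route_v1(call):
        return f"v1 routing for {call}"

    def compute_route_v2(call):                      # the patched version, loaded alongside v1
        return f"v2 routing for {call}"

    dispatch = {"compute_route": compute_route_v1}   # the "dispatch vector"

    def compute_route(call):                         # what the rest of the system calls
        return dispatch["compute_route"](call)

    print(compute_route("call-1"))                   # old behaviour
    dispatch["compute_route"] = compute_route_v2     # applying the patch: one atomic update
    print(compute_route("call-2"))                   # new behaviour; un-patching points back at v1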
It is just about mandatory that the patching (and un-patching) software is integrated into the operating system.
When I worked on telephone systems, we generally used patching and module-replacement functions in the system to load and test our new features as well as bug fixes, long before these changes were submitted for builds. Every developer needs to be comfortable with patching and replacing modules as part of their daily work. It builds a level of trust in these components, and makes sure that the patching and replacement code is exercised routinely.
Testing is far more stringent on these systems than anything you've ever encountered on any other project. Complete and partial mock-ups of the deployment system will be readily available. There will likely be virtual machine environments as well, where the complete load can be run and tested. Test plans at all levels above unit test will be written and formally reviewed, just like formal code inspections (and those will be routine as well).
Fault tolerant system design, including software design, is essential. I don't know about spacecraft systems specifically, but something like high-availability clusters is probably standard, with the added capability to run both synchronized and unsynchronized, and with the ability to transfer information between sides during a failover. An added benefit of this system structure is that you can split the system (if necessary), reload the inactive side with a new software load, and test it in the production system without being connected to the system network or bus. When you're satisfied that the new software is running properly, you can simply failover to it.
As with patching, every developer should know how to do failovers, and should do them both during development and testing. In addition, developers should know every software update issue that can force a failover, and should know how to write patches and module replacement that avoid required failovers whenever possible.
In general, these systems are designed from the ground up (hardware, operating system, compilers, and possibly programming language) for these environments. I would not consider Windows, Mac OS X, Linux, or any Unix variant to be sufficiently robust. Part of that is the realtime requirements, but the whole issue of reliability and survivability is just as critical.
UPDATE: As another point of interest, here's a blog by one of the Mars rover drivers. This will give you a perspective on the daily life of maintaining an operating spacecraft. Neat stuff!
I've never built a real-time system either, but I suspect such systems would not have a memory protection mechanism. They do not need one, since the team wrote all of the software themselves. Without memory protection, it is trivial for one program to write to the memory of another program, and this can be used to hot-patch a running program (self-modifying code was a popular technique in the past; without memory protection, the same techniques used for self-modifying code can be used to modify another program's code).
Linux has been able to do minor kernel patching without rebooting for some time with Ksplice. This is necessary in situations where any downtime can be catastrophic. I've never used it myself, but I think the technique they use is basically this:
Ksplice can apply patches to the Linux kernel without rebooting the computer. Ksplice takes as input a unified diff and the original kernel source code, and it updates the running kernel in memory. Using Ksplice does not require any preparation before the system is originally booted (the running kernel does not need to have been specially compiled, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. Ksplice performs this analysis at the ELF object code layer, rather than at the C source code layer.

To apply a patch, Ksplice first freezes execution of a computer so it is the only program running. The system verifies that no processors were in the middle of executing functions that will be modified by the patch. Ksplice modifies the beginning of changed functions so that they instead point to new, updated versions of those functions, and modifies data and structures in memory that need to be changed. Finally, Ksplice resumes each processor running where it left off.
(from Wikipedia)
Well I'm sure they have simulators to test with and mechanisms for hot-patching. Take a look at the linked article below - there's a pretty good overview of the spacecraft design. Section 5 discusses the computation machinery.
http://www.boulder.swri.edu/pkb/ssr/ssr-fountain.pdf
Of note:
Redundant processors
Command switching by the uplink card that does not require processor help
Time-lagged rules
I haven't worked on spacecraft, but the machines I've worked on have all been built to have a stable idle state where it's possible to shut down the machine briefly to patch the firmware. The systems that have accommodated 'live' updates are those that were broken into interacting components, where you can bring down one segment of the system long enough to update it and the other components can continue operating as normal, as they can tolerate the temporary downtime of the serviced component.
One way you can do this is to have parallel (redundant) capabilities, such as parallel machines that all perform the same task, so that the process can be routed around the machine under service. The benefit of this approach is that you can bring it down for longer periods for more significant service, such as regular hardware preventative maintenance. Once you have this capability, supporting downtime for a firmware patch is fairly easy.
One of the approaches that's been used in the past is to use LISP.
The question says it all. If you have a bug that multiple users report, but there is no record of the bug occurring in the log, nor can the bug be repeated, no matter how hard you try, how do you fix it? Or even can you?
I am sure this has happened to many of you out there. What did you do in this situation, and what was the final outcome?
Edit:
I am more interested in what was done about an unfindable bug, not an unresolvable bug. Unresolvable bugs are such that you at least know that there is a problem and have a starting point, in most cases, for searching for it. In the case of an unfindable one, what do you do? Can you even do anything at all?
Language
Different programming languages will have their own flavour of bugs.
C
Adding debug statements can make the problem impossible to duplicate, because the debug statement itself shifts pointers far enough to avoid a SEGFAULT (the classic Heisenbug). Pointer issues are arduous to track and replicate, but debuggers can help (such as GDB and DDD).
Java
An application that has multiple threads might only show its bugs with a very specific timing or sequence of events. Improper concurrency implementations can cause deadlocks in situations that are difficult to replicate.
JavaScript
Some web browsers are notorious for memory leaks. JavaScript code that runs fine in one browser might cause incorrect behaviour in another browser. Using third-party libraries that have been rigorously tested by thousands of users can be advantageous to avoid certain obscure bugs.
Environment
Depending on the complexity of the environment in which the application (that has the bug) is running, the only recourse might be to simplify the environment. Does the application run:
on a server?
on a desktop?
in a web browser?
In what environment does the application produce the problem?
development?
test?
production?
Exit extraneous applications, kill background tasks, stop all scheduled events (cron jobs), eliminate plug-ins, and uninstall browser add-ons.
Networking
As networking is essential to so many applications:
Ensure stable network connections, including wireless signals.
Does the software reconnect after network failures robustly?
Do all connections get closed properly so as to release file descriptors?
Are people using the machine who shouldn't be?
Are rogue devices interacting with the machine's network?
Are there factories or radio towers nearby that can cause interference?
Do packet sizes and frequency fall within nominal ranges?
Are packets being monitored for loss?
Are all network devices adequate for heavy bandwidth usage?
Consistency
Eliminate as many unknowns as possible:
Isolate architectural components.
Remove non-essential, or possibly problematic (conflicting), elements.
Deactivate different application modules.
Remove all differences between production, test, and development. Use the same hardware. Follow the exact same steps, perfectly, to setup the computers. Consistency is key.
Logging
Use liberal amounts of logging to correlate the time events happened. Examine logs for any obvious errors, timing issues, etc.
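For instance, a minimal Python logging setup that timestamps every event so entries from different components can be correlated (the format, the file name, and the failing call are only examples):

    import logging

    logging.basicConfig(
        filename="app.log",
        level=logging.DEBUG,
        format="%(asctime)s %(levelname)s %(name)s: %(message)s",
    )

    log = logging.getLogger("payments")
    log.debug("starting batch run")
    try:
        risky_operation()                       # hypothetical call under suspicion
    except Exception:
        log.exception("batch run failed")       # records the full stack trace with a timestamp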
Hardware
If the software seems okay, consider hardware faults:
Are the physical network connections solid?
Are there any loose cables?
Are chips seated properly?
Do all cables have clean connections?
Is the working environment clean and free of dust?
Have any hidden devices or cables been damaged by rodents or insects?
Are there bad blocks on drives?
Are the CPU fans working?
Can the motherboard power all components? (CPU, network card, video card, drives, etc.)
Could electromagnetic interference be the culprit?
And mostly for embedded:
Insufficient supply bypassing?
Board contamination?
Bad solder joints / bad reflow?
CPU not reset when supply voltages are out of tolerance?
Bad resets because supply rails are back-powered from I/O ports and don't fully discharge?
Latch-up?
Floating input pins?
Insufficient (sometimes negative) noise margins on logic levels?
Insufficient (sometimes negative) timing margins?
Tin whiskers?
ESD damage?
ESD upsets?
Chip errata?
Interface misuse (e.g. I2C off-board or in the presence of high-power signals)?
Race conditions?
Counterfeit components?
Network vs. Local
What happens when you run the application locally (i.e., not across the network)? Are other servers experiencing the same issues? Is the database remote? Can you use a local database?
Firmware
In between hardware and software is firmware.
Is the computer BIOS up-to-date?
Is the BIOS battery working?
Are the BIOS clock and system clock synchronized?
Time and Statistics
Timing issues are difficult to track:
When does the problem happen?
How frequently?
What other systems are running at that time?
Is the application time-sensitive (e.g., will leap days or leap seconds cause issues)?
Gather hard numerical data on the problem. A problem that might, at first, appear random, might actually have a pattern.
Change Management
Sometimes problems appear after a system upgrade.
When did the problem first start?
What changed in the environment (hardware and software)?
What happens after rolling back to a previous version?
What differences exist between the problematic version and good version?
Library Management
Different operating systems have different ways of distributing conflicting libraries:
Windows has DLL Hell.
Unix can have numerous broken symbolic links.
Java library files can be equally nightmarish to resolve.
Perform a fresh install of the operating system, and include only the supporting software required for your application.
Java
Make sure only one version of each library is loaded. Sometimes application containers have a different version of a library than the application itself. This might not be possible to replicate in the development environment.
Use a library management tool such as Maven or Ivy.
Debugging
Code a detection method that triggers a notification (e.g., log, e-mail, pop-up, pager beep) when the bug happens. Use automated testing to submit data into the application. Use random data. Use data that covers known and possible edge cases. Eventually the bug should reappear.
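A small sketch of that approach in Python: feed random data to the suspect routine and fire a loud notification the moment the fault is detected (the routine and the notification channel are placeholders):

    import logging
    import random
    import string

    logging.basicConfig(level=logging.INFO)

    def suspect_routine(text):
        return text.encode("ascii")                  # placeholder for the code path hiding the bug

    def notify(message):
        logging.error("BUG DETECTED: %s", message)   # placeholder: e-mail, pager, pop-up, etc.

    for i in range(10_000):
        data = "".join(random.choice(string.printable + "é漢") for _ in range(20))
        try:
            suspect_routine(data)
        except Exception as exc:
            notify(f"iteration {i}: input {data!r} raised {exc!r}")
            break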
Sleep
It is worth reiterating what others have mentioned: sleep on it. Spend time away from the problem, finish other tasks (like documentation). Be physically distant from computers and get some exercise.
Code Review
Walk through the code, line-by-line, and describe what every line does to yourself, a co-worker, or a rubber duck. This may lead to insights on how to reproduce the bug.
Cosmic Radiation
Cosmic rays can flip bits. This is not as big a problem as it was in the past, thanks to modern memory error checking. Software for hardware that leaves Earth's protection is subject to issues that simply cannot be replicated due to the randomness of cosmic radiation.
Tools
Sometimes, albeit infrequently, the compiler will introduce a bug, especially for niche tools (e.g. a C micro-controller compiler suffering from a symbol table overflow). Is it possible to use a different compiler? Could any other tool in the tool-chain be introducing issues?
If it's a GUI app, it's invaluable to watch the customer generate the error (or try to). They'll no doubt be doing something you'd never have guessed they were doing (not wrongly, just differently).
Otherwise, concentrate your logging in that area. Log almost everything (you can pull it out later) and get your app to dump its environment as well, e.g. machine type, VM type, encoding used.
Does your app report a version number, a build number, etc.? You need this to determine precisely which version you're debugging (or not!).
If you can instrument your app (e.g. by using JMX if you're in the Java world) then instrument the area in question. Store stats e.g. requests+parameters, time made, etc. Make use of buffers to store the last 'n' requests/responses/object versions/whatever, and dump them out when the user reports an issue.
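A rough sketch of that "last n requests" buffer in Python (JMX would be the Java route; here a bounded deque stands in, and the request handling is hypothetical):

    import json
    import time
    from collections import deque

    RECENT = deque(maxlen=50)        # keeps only the last 50 request/response records

    def handle(request, do_work):
        """Wrap the real handler (`do_work`, hypothetical) and record each call."""
        started = time.time()
        response = do_work(request)
        RECENT.append({"time": started,
                       "request": request,
                       "response": response,
                       "elapsed": time.time() - started})
        return response

    def dump_recent(path="recent-requests.json"):
        """Call this when a user reports an issue, to capture the recent history."""
        with open(path, "w") as f:
            json.dump(list(RECENT), f, indent=2, default=str)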
If you can't replicate it, you may fix it, but can't know that you've fixed it.
I've made my best explanation about how the bug was triggered (even if I didn't know how that situation could come about), fixed that, and made sure that if the bug surfaced again, our notification mechanisms would let a future developer know the things that I wish I had known. In practice, this meant adding log events when the paths which could trigger the bug were crossed, and metrics for related resources were recorded. And, of course, making sure that the tests exercised the code well in general.
Deciding what notifications to add is a feasibility and triage question. So is deciding how much developer time to spend on the bug in the first place. It can't be answered without knowing how important the bug is.
I've had good outcomes (didn't show up again, and the code was better for it), and bad (spent too much time not fixing the problem, whether the bug ended up fixed or not). That's what estimates and issue priorities are for.
Sometimes I just have to sit and study the code until I find the bug. Try to prove that the bug is impossible, and in the process you may figure out where you might be mistaken. If you actually succeed in convincing yourself it's impossible, assume you messed up somewhere.
It may help to add a bunch of error checking and assertions to confirm or deny your beliefs/assumptions. Something may fail that you'd never expect to.
It can be difficult, and sometimes near impossible. But in my experience, you will sooner or later be able to reproduce and fix the bug if you spend enough time on it (whether that time is worth spending is another matter).
General suggestions that might help in this situation.
Add more logging, if possible, so that you have more data the next time the bug appears.
Ask the users if they can replicate the bug. If they can, have them replicate it while you watch over their shoulder, and hopefully you'll find out what triggers it.
Make random changes until something works :-)
Assuming you have already added all the logging that you think would help and it didn't... two things spring to mind:
Work backwards from the reported symptom. Think to yourself: "If I wanted to produce the symptom that was reported, what bit of code would I need to be executing, and how would I get to it, and how would I get to that?" D leads to C leads to B leads to A. Accept that if a bug is not reproducible, then normal methods won't help. I've had to stare at code for many hours with these kinds of thought processes going on to find some bugs. Usually it turns out to be something really stupid.
Remember Bob's first law of debugging: if you can't find something, it's because you're looking in the wrong place :-)
Think. Hard. Lock yourself away, admit no interruptions.
I once had a bug where the evidence was a hex dump of a corrupt database. The chains of pointers were systematically screwed up. All the user's programs, and our database software, worked faultlessly in testing. I stared at it for a week (it was an important customer), and after eliminating dozens of possible ideas, I realised that the data was spread across two physical files and the corruption occurred where the chains crossed file boundaries. I realized that if a backup/restore operation failed at a critical point, the two files could end up "out of sync", restored to different time points. If you then ran one of the customer's programs on the already-corrupt data, it would produce exactly the knotted chains of pointers I was seeing. I then demonstrated a sequence of events that reproduced the corruption exactly.
Modify the code where you think the problem is happening so that extra debug info is recorded somewhere. When it happens next time, you will have what you need to solve the problem.
There are two types of bugs you can't replicate. The kind you discovered, and the kind someone else discovered.
If you discovered the bug, you should be able to replicate it. If you can't replicate it, then you simply haven't considered all of the contributing factors leading towards the bug. This is why whenever you have a bug, you should document it. Save the log, get a screenshot, etc. If you don't, then how can you even prove the bug really exists? Maybe it's just a false memory?
If someone else discovered a bug, and you can't replicate it, obviously ask them to replicate it. If they can't replicate it, then you try to replicate it. If you can't replicate it quickly, ignore it.
I know that sounds bad, but I think it is justified. The amount of time it will take you to replicate a bug that someone else discovered is very large. If the bug is real, it will happen again naturally. Someone, maybe even you, will stumble across it again. If it is difficult to replicate, then it is also rare, and probably won't cause too much damage if it happens a few more times.
You can be a lot more productive if you spend your time actually working, fixing other bugs and writing new code, than you will be trying to replicate a mystery bug that you can't even guarantee actually exists. Just wait for it to appear again naturally, then you will be able to spend all your time fixing it, rather than wasting your time trying to reveal it.
Discuss the problem, read code, often quite a lot of it. Often we do it in pairs, because you can usually eliminate the possibilities analytically quite quickly.
Start by looking at what tools you have available to you. For example, crashes on a Windows platform go to WinQual, so if this is your case, you now have crash dump information. Do you have static analysis tools that spot potential bugs, runtime analysis tools, or profiling tools?
Then look at the input and output. Anything similar about the inputs in situations when users report the error, or anything out of place in the output? Compile a list of reports and look for patterns.
Finally, as David stated, stare at the code.
Ask the user to give you remote access to their computer so you can see everything yourself. Or ask the user to make a small video of how they reproduce the bug and send it to you.
Sure, neither is always possible, but when they are, they may clarify some things. The usual ways of finding bugs are still the same: separating out the parts that may cause the bug, trying to understand what's happening, and narrowing down the code that could be causing it.
There are tools like gotomeeting.com which you can use to share the screen with your user and observe the behaviour. There could be many potential causes, such as the number of programs installed on their machine or some utility conflicting with your program. GoToMeeting is not the only solution, and there can also be timeout issues or a slow internet connection.
Most of the time, software does not report the right error messages. In Java and C#, for example, track every exception; don't catch everything blindly, but keep a point where you can catch and log. UI bugs are difficult to solve unless you use remote desktop tools. And much of the time it could even be a bug in third-party software.
If you work on a really significant-sized application, you probably have a queue of 1,000 bugs, most of which are definitely reproducible.
Therefore, I'm afraid I'd probably close the bug as WORKSFORME (Bugzilla) and then get on fixing some more tangible bugs. Or doing whatever the project manager decides to do.
Certainly making random changes is a bad idea, even if they're localised, because you risk introducing new bugs.
I am debugging an application which slows down the system very badly. The application loads a large amount of data (some 1,000 files, each about half a MB) from the local hard disk. The files are loaded as memory-mapped files and are mapped only when needed. This means that at any given point in time the virtual memory usage does not exceed 300 MB.
I also checked the handle count using handle.exe from Sysinternals and found that there are at most some 8,000-odd handles open. When the data is unloaded it drops to around 400. There are no handle leaks after each load and unload operation.
After 2-3 load/unload cycles, during one load, the system becomes very slow. I checked the virtual memory usage of the application as well as the handle counts at this point, and they were well within the limits (VM about 460 MB, not much fragmentation either; handle count 3,200).
I want to understand how an application could make the system very slow to respond. What other tools can I use to debug this scenario?
Let me be more specific: when I say system, I mean that all of Windows slows down. Task Manager itself takes 2 minutes to come up, and most often a hard reboot is required.
The fact that the whole system slows down is very annoying; it means you cannot easily attach a profiler, and it would even be difficult to stop the profiling session in order to view the results (since you said it requires a hard reboot).
The tools best suited for the job in this situation are ETW (Event Tracing for Windows); these tools are great and will give you the exact answer you are looking for.
Check them out here
http://msdn.microsoft.com/en-us/library/cc305210.aspx
and
http://msdn.microsoft.com/en-us/library/cc305221.aspx
and
http://msdn.microsoft.com/en-us/performance/default.aspx
Hope this works.
Thanks
Tools you can use at this point:
Perfmon
Event Viewer
In my experience, when things happen to a system that prevent Task Manager from popping up, they're usually of the hardware variety; the system event log in Event Viewer is sometimes full of warnings or errors that some hardware device is timing out.
If Event Viewer doesn't indicate that any kind of loggable hardware error is causing the slowdown, then try Perfmon: add counters for system objects to track file reads, exceptions, context switches, etc. per second, and see if there's something obvious there.
Frankly, the sort of behavior demonstrated is meant to be impossible, by design, for user-mode code to cause. WinNT goes to a lot of effort to insulate applications from each other and prevent rogue applications from making the system unusable. So my suspicion is that some kind of hardware fault is to blame. Is there any chance you can simply run the same test on a different PC?
If you don't have profilers, you may have to do the same work by hand...
Have you tried commenting out all read/write operations, just to check whether the slowdown disappears?
"Divide and conquer" strategies will help you find where the problem lies.
If you run it under an IDE, run it until it gets real slow, then hit the "pause" button. You will catch it in the act of doing whatever takes so much time.
You can use tools like "IBM Rational Quantify" or "Intel VTune" to detect performance issues.
[EDIT]
As Benoît suggested, one good approach is measuring task times to identify which one is eating the CPU.
But remember: since you are working with many files, it is likely that page misses are causing memory to be swapped to disk.
When Task Manager is taking 2 minutes to come up, are you getting a lot of disk activity? Or is it CPU-bound?
I would try process explorer from sysinternals. When your system is in the slowed-down state, and you try running, say, notepad, pay attention to page fault deltas.
Windows is very greedy about caching file data. I would try removing file I/O as someone suggested, and also making sure you close the file mapping as soon as you are done with a file.
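In Python terms, that discipline would look something like this (the file path and record size are placeholders), mapping each file only for as long as it is actually needed:

    import mmap

    def read_record(path, offset, length):
        """Map the file, copy out the bytes needed, and unmap immediately."""
        # Both the mapping and the file handle are released as soon as the `with`
        # blocks exit, so the address space and OS cache aren't holding hundreds
        # of files' worth of mappings at once.
        with open(path, "rb") as f:
            with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mapping:
                return mapping[offset:offset + length]

    # data = read_record("data/file_0001.bin", 0, 4096)   # usage (placeholder path)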
I/O is probably causing your slowdown, especially if your files are on the same disk as the OS. Another way to test that would be to move your files to another disk and see if that alleviates the problem.
I'm a (happy?) user of Windows, but recently I've had problems that I don't know how to track down.
I have a WinXP machine plus home and work Win2k3 systems. Some of them freeze intermittently for a short amount of time (from less than a second to a few seconds). There is no CPU usage spike and not much HDD activity. Neither Process Explorer nor Windows Task Manager shows any suspicious processes. The services also look OK.
On one of the computers, dragging and dropping (within Explorer windows, or between windows and apps) freezes the machine for 10-20 seconds. After this period I can continue to use drag & drop for some (long) time with no delays. I don't think it is a virus; it would probably infect all the machines easily.
How can I know what is going on with my systems?
Update: Thank you for your suggestions. I solved the problem on one of the machines: it was a nasty rootkit. I needed to use third-party tools to detect and remove it. How could I have diagnosed it without this tool?
This is most likely not faulty hardware.
On Windows, there are occasional messages that are broadcast system-wide to all top-level windows. If a window does not respond (or is slow in responding), then the whole system will appear to freeze. There is a built-in timeout, and if it is exceeded, the system assumes the window isn't going to respond and skips it (this could be the 10-20 second delay you're seeing, although I think the timeout is a little higher than that).
I have not seen a solution for tracking these kinds of problems. You might experiment by creating a program that sends individual messages to each top-level window and records the time taken for each to respond. This isn't fail-safe, but it's a starting point, and this is (if I recall correctly) the technique I used to identify such a problem with Adobe's iFilter (for the Microsoft indexing service).
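A rough, Windows-only sketch of that experiment in Python (via ctypes): send a harmless WM_NULL to every visible top-level window and time how long each one takes to answer. The 2-second timeout and the output format are arbitrary choices, not part of the technique itself.

    import ctypes
    import ctypes.wintypes as wt
    import time

    user32 = ctypes.windll.user32

    WM_NULL = 0x0000
    SMTO_ABORTIFHUNG = 0x0002
    EnumWindowsProc = ctypes.WINFUNCTYPE(wt.BOOL, wt.HWND, wt.LPARAM)

    def probe(hwnd):
        """Return the response time in seconds, or None if the window timed out."""
        result = ctypes.c_size_t(0)
        start = time.perf_counter()
        ok = user32.SendMessageTimeoutW(hwnd, WM_NULL, 0, 0,
                                        SMTO_ABORTIFHUNG, 2000,   # 2000 ms timeout
                                        ctypes.byref(result))
        return (time.perf_counter() - start) if ok else None

    def title(hwnd):
        buf = ctypes.create_unicode_buffer(256)
        user32.GetWindowTextW(hwnd, buf, 256)
        return buf.value or "<untitled>"

    @EnumWindowsProc
    def on_window(hwnd, lparam):
        if user32.IsWindowVisible(hwnd):
            elapsed = probe(hwnd)
            if elapsed is None:
                print(f"TIMED OUT   {title(hwnd)}")
            else:
                print(f"{elapsed * 1000:8.1f} ms  {title(hwnd)}")
        return True          # keep enumerating

    user32.EnumWindows(on_window, 0)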
But before you go down this path, you said that these are recent problems. See if you can figure out what you might have installed recently and then uninstall it. This includes Windows patches as well as any new drivers or applications.
Are you able to peg it to a rough time-frame of when the symptoms started? If so, you could match the critical updates/installs in Add/Remove programs to that estimation and start looking there.
More generally, I find that using MSCONFIG to temporarily turn off all startup programs and all non-Microsoft services can help quickly divide and conquer: if the symptoms disappear, you have a shorter list to work through.
Safe mode (with or without network - see next idea) is another way of narrowing the list of suspects.
Since it is multiple machines, if it were hardware it would have to be something common... Especially if it is two different locations. That said, network connectivity (or lack thereof) is the other frequent culprit. Bringing up a system in a standalone config (net cable unplugged/wireless radio disabled) will seem VERY slow at first, then once the timeouts and various retries have been exceeded, should zip along, especially if you are still running in a limited startup environment. I have had recalcitrant switches/routers be a problem, as well as sluggish external services (like an ISP's DNS) cause symptoms like this.
No floppy, optical, or other removable drive access at those times?
I would recommend a tool that can show files, COM objects and network addresses accessed within the application:
http://www.moduleanalyzer.com/
You can see the DLLs that use each resource and how long the accesses are taking.
The problem with a Windows slowdown is in general related to a DLL, running in one or more processes, that is doing something inside the process.
In these situations you won't see anything in tools that monitor from a process-level perspective. You will need to see what is happening inside the process to spot any suspicious DLL or module.
This tool uses call stack information to see which module is accessing resources.
Try the application; it has a full-featured trial.
You probably have a faulty piece of hardware; in my experience, most likely your HD. If you are connected to a network share (SMB) and having connectivity issues, that could also cause hangs. The drag-and-drop slowness generally points to the "explorer" process hanging, the same process used to communicate with network resources (file shares, for example).
To diagnose the activities or infiltration of a rootkit or other malware, you might check out the forums on Bleeping Computer; some of the volunteers there who help people remove such malware may be willing to help you figure out where to look for such infestations.
I recently cleaned up some malware through the help of an expert on that site which I also needed to use a third-party tool (in my case Malwarebytes) to remove, but the malware was relatively new such that this tool couldn't fully clean out the stuff until a more recent update to its definitions got released.
I still don't know how or where exactly to look on a given system for such an infestation, but that site might hook you up with someone who has that expertise. As long as you emphasize that you want this in order to track down such infestations, and not for purposes of writing your own malware, I would hope they'd be receptive to your request.