Stress testing concurrency: how to slow down thread execution in my application? - debugging

I have hit situations where certain race conditions only occur when I run my application under the Valgrind emulator; if I run my executable as a normal program, these issues do not occur.
My theory is that the emulator slows down execution so much that the threads have larger overlapping time frames in which they execute and operate on shared data structures, thereby increasing the chance of hitting race conditions on these shared resources.
I would like to have more fine-grained control over, for example, a virtual clock frequency by means of a specialized emulator.
Does anybody know of an existing tool that does this job? I have tried to search for it online; there should be some academic paper dealing with this, but so far I haven't found one.
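Short of a dedicated emulator, one common stress-testing trick is to inject randomized delays around accesses to shared state, which widens the window in which interleavings can collide. Here is a minimal C++ sketch of that idea; the unguarded counter and the maybe_yield() helper are purely illustrative, not part of any existing tool.

    // A minimal sketch: widen the race window with randomized delays.
    // The unguarded counter and maybe_yield() are purely illustrative.
    #include <chrono>
    #include <iostream>
    #include <random>
    #include <thread>
    #include <vector>

    static int unguarded_counter = 0;   // intentionally unsynchronized

    void maybe_yield() {
        thread_local std::mt19937 rng{std::random_device{}()};
        std::uniform_int_distribution<int> dist(0, 100);
        // A random sub-millisecond pause lets the scheduler interleave the
        // threads in far more orders than they would see at full speed.
        std::this_thread::sleep_for(std::chrono::microseconds(dist(rng)));
    }

    int main() {
        std::vector<std::thread> workers;
        for (int t = 0; t < 4; ++t) {
            workers.emplace_back([] {
                for (int i = 0; i < 1000; ++i) {
                    maybe_yield();                  // widen the window
                    int tmp = unguarded_counter;    // racy read
                    maybe_yield();
                    unguarded_counter = tmp + 1;    // racy write
                }
            });
        }
        for (auto& w : workers) w.join();
        std::cout << unguarded_counter << '\n';     // almost never 4000
    }

Running this repeatedly tends to expose the lost update within a few runs, whereas the same loop without the sleeps may appear correct for much longer.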

Related

Using hundreds of child processes in Windows?

I have a Windows application which uses many third-party modules of questionable reliability. My app has to create many objects from those modules, and one bad object may cause a lot of problems.
I was thinking of a multi-process scheme where the objects are created in a separate process (the interfaces are basically all the same, so creating the cross-process communication shouldn't be so difficult). At the most extreme, I'm considering one object per process, so I might end up with anywhere between 20 and a few hundred processes launched from my main app.
Would that cause Windows any problems? I'm using Windows 7, 64-bit, and memory and CPU power won't be an issue.
As long as you have enough CPU power and memory, there should be no problems. If you follow the general rules for distributed applications, multithreading (yes, multithreading), deadlocks, atomic operations and so on, everything should be fine.
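As a rough illustration of the per-object-process idea, here is a minimal Windows-only sketch that launches a batch of worker processes and waits for them. The worker.exe binary and its --object-id argument are hypothetical placeholders, and the actual cross-process communication (pipes, COM, shared memory) is omitted.

    // Minimal sketch, Windows-only: one worker process per object.
    // "worker.exe" and its --object-id argument are hypothetical; the
    // cross-process communication (pipes, COM, shared memory) is omitted.
    #include <windows.h>
    #include <iostream>
    #include <string>
    #include <vector>

    int main() {
        const int kWorkers = 20;
        std::vector<PROCESS_INFORMATION> children;

        for (int i = 0; i < kWorkers; ++i) {
            std::wstring cmd = L"worker.exe --object-id=" + std::to_wstring(i);
            STARTUPINFOW si{};
            si.cb = sizeof(si);
            PROCESS_INFORMATION pi{};
            // CreateProcessW may modify the command-line buffer, so pass &cmd[0].
            if (CreateProcessW(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                               CREATE_NO_WINDOW, nullptr, nullptr, &si, &pi)) {
                children.push_back(pi);
            } else {
                std::wcerr << L"failed to start worker " << i << L"\n";
            }
        }

        // Wait for every worker; a crash in one child never takes down the parent.
        for (auto& pi : children) {
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
        }
    }

The isolation is the point of the design: a crash in any worker only takes down that process, and the parent simply sees its exit code when it waits on the handle.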

Lazy evaluation/initialization in GUI applications - non-disruptive ways to do it?

Suppose you are making a GUI application, and you need to load/parse/calculate a bunch of things before a user can use a certain tool, and you know what you have to do beforehand.
It then makes sense to start doing these calculations in the background over a period of time (as opposed to doing them all in one go on start-up, or exactly when they are needed). However, doing too much in the background will hurt the responsiveness of the application.
Are there any standard practices in this kind of approach? Perhaps ways to detect low load on the CPU or the user being idle and execute code in those times? Arguments against this type of approach?
Thanks!
Without knowing your app or your audience, I can't give you specific advice.
My main argument against the approach is that unless you have a high-profile application which will see a lot of use by non-programmers, I wouldn't bother. This sounds like a lot of busy work that could be spent developing or refining features that actually allow people to do new things with your app.
That being said, if there is a reason to do it, there is nothing wrong with lazy-loading data.
The problem with waiting until idle time is that some people have programs like SETI@home installed on their computer, in which case their computer has little to no idle time. If loading at full throttle kills the responsiveness of your app, you could try injecting sleeps; this is what a lot of video games do when you specify a target frame rate, to avoid pegging the CPU. This gets the data loaded sooner than waiting for idle time would.
If parts of your app depend on that data and the user invokes one of them, you will have to abandon the lazy-loading approach and resume your full CPU/disk-taxing load. If it takes a long time, or makes the app unresponsive, you could display a loading dialog with a progress bar.
If your target audience will tend to have a multicore CPU, and if your app's startup and background initialization tasks won't contend for the same resources (e.g. disk IO, network, ...) to the point of creating a new bottleneck, you might want to kick off a background thread to perform the initialization tasks (or even a thread per initialization task if several of them can run in parallel). That will make more efficient use of a multicore hardware architecture.
You didn't specify your target platform, but it's exceedingly easy to achieve this in .NET and so I have begun doing it in many of my desktop apps.
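Whatever the platform, the shape of the idea is the same. Below is a minimal, platform-neutral C++ sketch of throttled background initialization; load_one_chunk(), the chunk count and the sleep interval are illustrative assumptions, not part of any particular framework.

    // Minimal, platform-neutral sketch of throttled background initialization.
    // load_one_chunk(), the chunk count and the sleep interval are assumptions.
    #include <atomic>
    #include <chrono>
    #include <future>
    #include <thread>

    std::atomic<bool> data_ready{false};

    void load_one_chunk(int /*index*/) {
        // Placeholder for one slice of the real parsing/calculation work.
    }

    std::future<void> start_background_init() {
        return std::async(std::launch::async, [] {
            for (int chunk = 0; chunk < 100; ++chunk) {
                load_one_chunk(chunk);
                // Sleep between chunks so the loader never pegs a core;
                // this mirrors the "inject sleeps" advice above.
                std::this_thread::sleep_for(std::chrono::milliseconds(5));
            }
            data_ready = true;
        });
    }

    // In the GUI code: keep the returned future alive, and if the user invokes
    // the tool before data_ready is set, fall back to a blocking load behind a
    // progress dialog.

Note that a future returned by std::async blocks in its destructor, so the caller must hold on to it for the lifetime of the load rather than discarding it.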

Fast restart technique instead of keeping the good state (availability and consistency)

How often do you solve your problems by restarting a computer, router, program, browser? Or even by reinstalling the operating system or software component?
This seems to be a common pattern: when there is a suspicion that a software component is not keeping its state in the right way, you just return to the initial state by restarting the component.
I've heard that Amazon/Google have clusters of many, many nodes, and one important property of each node is that it can restart in seconds. So, if one of them fails, returning it to its initial state is just a matter of restarting it.
Are there any languages/frameworks/design patterns out there that treat this technique as a first-class citizen?
EDIT: Here is a link that describes some principles behind Amazon as well as overall principles of availability and consistency:
http://www.infoq.com/presentations/availability-consistency
This is actually very rare in the Unix/Linux world. Those OSes were designed (and so was Windows) to protect themselves from badly behaved processes. I am sure Google is not relying on hard restarts to correct misbehaving software. I would say this technique should not be employed, and if someone tells you that restarting is the fastest route to recovery for their software, you should look for something else!
This is common in the embedded systems world, and in telecommunications. It's much less common in the server based world.
There's a research group you might be interested in. They've been working on Recovery-Oriented Computing or "ROC". The key principle in ROC is that the cleanest, best, most reliable state that any program can be in is right after starting up. Therefore, on detecting a fault, they prefer to restart the software rather than attempt to recover from the fault.
Sounds simple enough, right? Well, most of the research has gone into implementing that idea. The reason is exactly what you and other commenters have pointed out: OS restarts are too slow to be a viable recovery method.
ROC relies on three major parts:
A method to detect faults as early as possible.
A means of isolating the faulty component while preserving the rest of the system.
Component-level restarts.
The real key difference between ROC and the typical "nightly restart" approach is that in ROC the restarts are a reaction to a detected fault. What I mean is that most software is written with some degree of error handling and recovery (throw-and-catch, logging, retry loops, etc.), whereas a ROC program would detect the fault (exception) and immediately exit. Mixing up the two paradigms just leaves you with the worst of both worlds: low reliability and errors.
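To make the component-level restart idea concrete, here is a minimal sketch of a crash-only worker under a tiny supervisor: the worker exits as soon as it detects a fault, and the supervisor relaunches it into a known-good initial state. The worker.exe command line is a hypothetical placeholder; this is not how any particular ROC system is implemented.

    // Minimal sketch of a crash-only worker under a tiny supervisor. The
    // "worker.exe" command line is a hypothetical placeholder; this is not
    // how any particular ROC system is implemented.
    #include <windows.h>
    #include <iostream>
    #include <string>

    int main() {
        for (;;) {
            std::wstring cmd = L"worker.exe";
            STARTUPINFOW si{};
            si.cb = sizeof(si);
            PROCESS_INFORMATION pi{};
            if (!CreateProcessW(nullptr, &cmd[0], nullptr, nullptr, FALSE,
                                0, nullptr, nullptr, &si, &pi)) {
                std::wcerr << L"could not start worker\n";
                return 1;
            }
            WaitForSingleObject(pi.hProcess, INFINITE);
            DWORD exit_code = 0;
            GetExitCodeProcess(pi.hProcess, &exit_code);
            CloseHandle(pi.hThread);
            CloseHandle(pi.hProcess);
            if (exit_code == 0) break;   // clean shutdown: stop supervising
            // Non-zero exit: the worker detected a fault and bailed out;
            // relaunch it into a known-good initial state.
        }
    }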
Microcontrollers typically have a watchdog timer, which must be reset (by a line of code) every so often or else the microcontroller will reset. This keeps the firmware from getting stuck in an endless loop, stuck waiting for input, etc.
Unused memory is sometimes set to an instruction which causes a reset, or a jump to the same location that the microcontroller starts at when it is reset. This will reset the microcontroller if it somehow jumps to a location outside the program memory.
Embedded systems may have a checkpoint feature where every n ms, the current stack is saved.
The memory is non-volatile across a power cycle (i.e. battery-backed), so on power-up a test is made to see whether the code needs to jump to an old checkpoint or whether it is a fresh start.
I'm going to guess that a similar (but more sophisticated) technique is used by Amazon/Google.
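For reference, the watchdog pattern mentioned above usually boils down to something like the following sketch. The register addresses and the reload value are hypothetical; every microcontroller family has its own watchdog peripheral and reset sequence.

    // Minimal sketch of the watchdog pattern. The register addresses and the
    // reload value are hypothetical; every microcontroller family has its own
    // watchdog peripheral and reset sequence.
    #include <cstdint>

    volatile std::uint32_t* const WDT_CTRL = reinterpret_cast<volatile std::uint32_t*>(0x40000000);
    volatile std::uint32_t* const WDT_KICK = reinterpret_cast<volatile std::uint32_t*>(0x40000004);

    void watchdog_enable(std::uint32_t timeout_ms) {
        *WDT_CTRL = timeout_ms;       // hypothetical: program the timeout and arm the timer
    }

    void watchdog_kick() {
        *WDT_KICK = 0xA5A5A5A5u;      // hypothetical "magic" reload value
    }

    int main() {
        watchdog_enable(500);
        for (;;) {
            // do_work();             // one iteration of the main control loop
            watchdog_kick();          // if this line is never reached again,
                                      // the hardware resets the chip after 500 ms
        }
    }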
Though I can't think of a design pattern per se, in my experience it's a result of the "select is broken" mindset among developers.
I've seen a 50-user site cripple both SQL Server Enterprise Edition (with a 750 MB database) and a Novell server because of poor connection management coupled with excessive calls and no caching. Novell was always the culprit according to developers until we found a missing "CloseConnection" call in a core library. By then, thousands were spent, unsuccessfully, on upgrades to address that one missing line of code.
(Why they had Enterprise Edition was beyond me so don't ask!!)
If you look at scripting languages like PHP running on Apache, each invocation starts a new process. In the basic case there is no shared state between processes, and once the invocation has finished the process is terminated.
The advantages are less onus on resource management, since resources are released when the process finishes, and less need for error handling, since the process is designed to fail fast and cannot be left in an inconsistent state.
I've seen it a few places at the application level (an app restarting itself if it bombs).
I've implemented the pattern at an application level, where a service reading from dBase files starts getting errors after a certain number of reads. It looks for a particular error that gets thrown, and if it sees that error, the service calls a console app that kills the process and restarts the service. It's kludgy, and I hate it, but for this particular situation I could find no better answer.
And bear in mind that IIS has a built-in feature that restarts the application pool under certain conditions.
For that matter, restarting a service is an option for any service on Windows as one of the actions to take when the service fails.

What could cause the application as well as the system to slow down?

I am debugging an application which slows down the system very badly. The application loads a large amount of data (some 1000 files, each about half an MB) from the local hard disk. The files are loaded as memory-mapped files and are mapped only when needed. This means that at any given point in time the virtual memory usage does not exceed 300 MB.
I also checked the Handle count using handle.exe from sysinternals and found that there are at the most some 8000 odd handles opened. When the data is unloaded it drops to around 400. There are no handle leaks after each load and unload operation.
After 2-3 Load unload cycles, during one load, the system becomes very slow. I checked the virtual memory usage of the application as well as the handle counts at this point and it was well within the limits (VM about 460MB not much fragmentation also, handle counts 3200).
I want to know how an application could make the whole system very slow to respond. What other tools can I use to debug this scenario?
Let me be more specific: when I say "system" I mean the whole of Windows slows down. Task Manager itself takes 2 minutes to come up, and most often a hard reboot is required.
The fact that the whole system slows down is very annoying: it means you cannot attach a profiler easily, and it would even be difficult to stop the profiling session in order to view the results (since you said it requires a hard reboot).
The tool best suited for the job in this situation is ETW (Event Tracing for Windows). These tools are great and will give you the exact answer you are looking for.
Check them out here
http://msdn.microsoft.com/en-us/library/cc305210.aspx
and
http://msdn.microsoft.com/en-us/library/cc305221.aspx
and
http://msdn.microsoft.com/en-us/performance/default.aspx
Hope this works.
Thanks
Tools you can use at this point:
Perfmon
Event Viewer
In my experience, when things happen to a system that prevent Task Manager from popping up, they're usually of the hardware variety -- the system event log in Event Viewer is sometimes full of warnings or errors that some hardware device is timing out.
If Event Viewer doesn't indicate that any kind of loggable hardware error is causing the slowdown, then try Perfmon -- add counters for system objects to track file reads, exceptions, context switches, etc. per second and see if there's something obvious there.
Frankly the sort of behavior demonstrated is meant to be impossible - by design - for user-mode code to cause. WinNT goes to a lot of effort to insulate applications from each other and prevent rogue applications from making the system unusable. So my suspicion is some kind of hardware fault is to blame. Is there any chance you can simply run the same test on a different PC?
If you don't have profilers, you may have to do the same work by hand...
Have you tried commenting out all read/write operations, just to check whether the slowdown disappears?
"Divide and conquer" strategies will help you find where the problem lies.
If you run it under an IDE, run it until it gets real slow, then hit the "pause" button. You will catch it in the act of doing whatever takes so much time.
You can use tools like IBM Rational Quantify or Intel VTune to detect performance issues.
[EDIT]
As Benoît did, one good approach is to measure how long each task takes, to identify which one is eating CPU.
But remember that, since you are working with many files, the slowdown is likely caused by page misses that force memory-to-disk swapping.
When Task Manager is taking 2 minutes to come up, are you getting a lot of disk activity, or is it CPU-bound?
I would try Process Explorer from Sysinternals. When your system is in the slowed-down state and you try running, say, Notepad, pay attention to the page fault deltas.
Windows is very greedy about caching file data. I would try removing file I/O as someone suggested, and also making sure you close the file mapping as soon as you are done with a file.
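To follow up on the "close the mapping as soon as you are done" advice, here is a minimal Win32 sketch of the map/read/unmap cycle for one file. The error handling is reduced to early returns, and the file path is whatever the caller supplies.

    // Minimal sketch of the map/read/unmap cycle for one file, releasing the
    // view, the mapping object and the file handle as soon as possible.
    #include <windows.h>

    bool process_file(const wchar_t* path) {
        HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                                  OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
        if (file == INVALID_HANDLE_VALUE) return false;

        HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
        if (!mapping) { CloseHandle(file); return false; }

        const void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view) {
            // ... read the half-MB of data here ...
            UnmapViewOfFile(view);    // release the view immediately
        }
        CloseHandle(mapping);         // then the mapping object
        CloseHandle(file);            // then the file handle
        return view != nullptr;
    }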
I/O is probably causing your slowdown, especially if your files are on the same disk as the OS. Another way to test that would be to move your files to another disk and see if that alleviates the problem.

Windows System Idle Processes interfering with performance measurement

I am doing some performance measurement of my code on a Windows box and I am finding that I get dramatically different results between measurements. A quick bit of ad hoc exploration during a slow run shows, in Task Manager, the System Idle Process taking up almost 100% CPU.
Does anyone know what the System Idle Process actually means and what Windows features it may be running?
NB: I am not measuring performance using the task manager, I just used it to take a look at what else was running during a particularly slow measurement.
Please think before saying this is not programming related and closing the question. I would not ask it unless I thought there were grounds to say it is. In this case I believe it clearly is because it is detrimentally affecting my development and test environments and in order to sort it out I need to know a bit more about it. Programming does not start and end with the writing of the code.
The idle process typically doesn't do any useful work except execute the HLT instruction, which puts that CPU core into a lower power state (C1). However, the fact that your benchmark is not consuming 100% CPU time does open the door for speculation about what is going on.
If your application is single-threaded and your test system is multicore/hyperthreaded/multi-CPU, then you should expect to see around 50% idle CPU time for two cores, 75% for four, etc. This is because the CPU time percentages in Task Manager include all cores. (I believe that older versions of Windows had an option to change this, but I don't see it on Vista.)
If the idle process is consuming a lot of CPU, that may indicate that your application is spending a lot of time sleeping. It might be waiting for data from some external source (e.g. a disk or network). It might be spending a lot of time waiting for synchronization objects (e.g. mutexes or events). It also might be spending a lot of time calling the Sleep() function. Profiling your code should identify where it's spending the time.
Getting fully reproducible benchmark results may require you to disable processor/disk/network-intensive background applications and services (e.g. search indexing, SMS software inventory, virus scanning, Windows Update downloads, IncrediBuild/distcc) or to connect the machine to an isolated network (or to no network at all).
I'm assuming that you wrote a benchmark for your application, and that you're just trying to use Task Manager to diagnose why the benchmark results aren't what you expected. Task Manager isn't an accurate way to measure application performance.
System Idle Process is a sort of a default process that Windows runs on the processor when it has nothing else to schedule for running. This process is like a housekeeper that does things like trying to save system power etc.
If you're measuring the performance of your program, don't use the Windows Task manager. Use Performance Monitor instead (which you can start by typing 'perfmon' at the commandline). Or better still, use a profiler.
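If all you need is coarse timing of a section of your own code, measuring it in-process is already far more reliable than watching Task Manager. Below is a minimal sketch using the Win32 high-resolution counter; work_under_test() stands in for whatever you are benchmarking.

    // Minimal sketch: timing a section of code with the Win32 high-resolution
    // counter instead of eyeballing Task Manager. work_under_test() is a placeholder.
    #include <windows.h>
    #include <iostream>

    void work_under_test() {
        // ... the code being benchmarked ...
    }

    int main() {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);
        QueryPerformanceCounter(&start);

        work_under_test();

        QueryPerformanceCounter(&end);
        double ms = 1000.0 * static_cast<double>(end.QuadPart - start.QuadPart)
                           / static_cast<double>(freq.QuadPart);
        std::cout << "elapsed: " << ms << " ms\n";
    }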
If you "system idle process" is taking up 100% then essentially you machine is bored, nothing is going on. If you add up everything going on in task manager, subtract this number from 100%, then you will have the value of "system idle process." Notice it consumes almost no memory at all and cannot be affecting performance.