Why do my processes only consume 5% of the processor's power? - Windows

Not sure if this is the appropriate place for this question, but it seems related to threading and system resources and all that.
Why does my Task Manager show the System Idle Process using 90%+ of the CPU when I have 3 different processes going?
Is it because of I/O bottlenecks?
For example, if I do an SVN checkout, empty my recycle bin, and browse the web at the same time, why is the System Idle Process at 97% and the other processes at around 1% each? None of them seems to actually be going very fast.

Mostly the processes are waiting for disk or network operations to complete, or waiting for user input.
You might think you have a fast disk or network connection, but compared to memory/CPU it's like walking to the nearest library, looking up a book in the catalog and finding it on the shelf, versus already having the book in your hand.
This is why you pay thousands of dollars for 10,000 and 15,000 RPM SCSI drives (or even more for SANs) on high-performance servers.
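To put rough numbers on that gap, here is a minimal C# sketch (file name and sizes are arbitrary) that times an in-memory copy against reading the same amount of data back from a file; on a cold cache the file read is orders of magnitude slower:

    using System;
    using System.Diagnostics;
    using System.IO;

    class LatencyDemo
    {
        static void Main()
        {
            const int size = 64 * 1024 * 1024; // 64 MB, arbitrary
            var src = new byte[size];
            var dst = new byte[size];

            var sw = Stopwatch.StartNew();
            Buffer.BlockCopy(src, 0, dst, 0, size); // pure memory traffic
            Console.WriteLine("Memory copy: " + sw.ElapsedMilliseconds + " ms");

            File.WriteAllBytes("demo.bin", src);

            sw.Restart();
            File.ReadAllBytes("demo.bin"); // disk read (the OS cache may hide
                                           // some of the cost on a warm read)
            Console.WriteLine("File read:   " + sw.ElapsedMilliseconds + " ms");
        }
    }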

I can't say for sure, but I'd guess I/O bottlenecks are a large part of it. In fact, I wouldn't expect any of the tasks you described to be very CPU-intensive.
Now, try to re-encode a raw AVI file into DivX format while simultaneously rendering a 3D animation in Maya, and your CPU will be plenty busy.

It really depends on what your processes are doing. If they are I/O bound, there's a good chance they're sitting and waiting most of the time.
If they're WinForms apps, they sit there doing nothing until the user provides input.

SVN checkout and recycle bin emptying are both very heavy disk activities with little demand on the CPU, and web browsing is very spiky in terms of CPU usage (it spikes when rendering a page, for example, but costs very little once that's done).
If you want to see your CPU sustain high utilization, do something that is almost purely CPU/memory based, like Folding@home.
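For contrast, even a trivial spin loop with nothing to wait on will pin every core. A throwaway C# sketch (purely illustrative, not from the thread):

    using System;
    using System.Threading.Tasks;

    class Burn
    {
        static void Main()
        {
            // One spin loop per core: pure CPU work with nothing to wait for,
            // so utilization stays pegged near 100%. Kill the process to stop.
            Parallel.For(0, Environment.ProcessorCount, i =>
            {
                double x = 1.0;
                while (true) { x = (x * 1.0000001) % 3.0; }
            });
        }
    }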

Related

high load low cpu low iowait, why?

Look at those peaks in the first graph; which factor could cause this? [the accompanying CPU utilization graphs are not shown]
There's a lot of stuff going on in any general-purpose computer. When I performance-profiled apps in a former life, I saw this all the time and factored it out.
It's caused by a whole host of sources: the processor dealing with interrupts, disk maintenance routines, file system cleanup, completely useless background apps that were installed as automatically launched services without your knowledge, etc.
Your plot of idle time is a little disconcerting; it is awfully low. What apps do you have running that take up all that processing? Also, if memory is low, say because you have 20 or 30 browser tabs/windows open, your CPU load will go through the roof due to all the paging and context switching.

Optimal CPU utilization thresholds

I have built software that I deploy on Windows 2003 server. The software runs as a service continuously and it's the only application on the Windows box of importance to me. Part of the time, it's retrieving data from the Internet, and part of the time it's doing some computations on that data. It's multi-threaded -- I use thread pools of roughly 4-20 threads.
I won't bore you with all those details, but suffice it to say that as I enable more threads in the pool, more concurrent work occurs, and CPU use rises. (as does demand for other resources, like bandwidth, although that's of no concern to me -- I have plenty)
My question is this: should I simply try to max out the CPU to get the best bang for my buck? Intuitively, I don't think it makes sense to run at 100% CPU; even 95% seems high, almost like I'm not giving the OS much room to do what it needs to do. I don't know the right way to identify the best balance. I'm guessing I could measure and measure and probably find that the best throughput is achieved at an average CPU utilization of 90% or 91%, etc., but...
I'm just wondering if there's a good rule of thumb about this? I don't want to assume that my testing will account for every variation of workload. I'd rather play it a bit safe, but not too safe (or else I'm underusing my hardware).
What do you recommend? What is a smart, performance-minded rule of utilization for a multi-threaded, mixed-load (some I/O, some CPU) application on Windows?
Yep, I'd say 100% means thrashing, so I wouldn't want to see processes running like that all the time. I've always aimed for 80% to get a balance between utilization and room for spikes / ad-hoc processes.
An approach I've used in the past is to crank up the pool size slowly and measure the impact (both on CPU and on other constraints such as I/O); you never know, you might find that suddenly I/O becomes the bottleneck.
CPU utilization shouldn't matter in this I/O-intensive workload; you care about throughput. Try a hill-climbing approach: programmatically inject and remove worker threads, and track completion progress.
If you add a thread and it helps, add another one. If you try a thread and it hurts, remove it.
Eventually this will stabilize.
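A minimal sketch of that loop, assuming hypothetical addWorker/removeWorker hooks and a completion counter your workers increment (the 10-second window is arbitrary):

    using System;
    using System.Threading;

    class PoolTuner
    {
        static long completedTasks; // workers Interlocked.Increment this as tasks finish

        // Average completion rate over a sampling window.
        static double MeasureThroughput(TimeSpan window)
        {
            long before = Interlocked.Read(ref completedTasks);
            Thread.Sleep(window);
            long after = Interlocked.Read(ref completedTasks);
            return (after - before) / window.TotalSeconds; // tasks per second
        }

        // Hill climbing: keep moving the pool size in the direction that
        // improves throughput; reverse direction when a step hurts.
        static void Tune(Action addWorker, Action removeWorker)
        {
            var window = TimeSpan.FromSeconds(10); // arbitrary sampling period
            double last = MeasureThroughput(window);
            int direction = +1; // start by adding threads

            while (true)
            {
                if (direction > 0) addWorker(); else removeWorker();
                double now = MeasureThroughput(window);
                if (now < last) direction = -direction; // that step hurt: back off
                last = now;
            }
        }
    }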
If this is a .NET-based app, note that hill climbing was added to the thread pool in .NET 4.
UPDATE:
Hill climbing is a control-theory-based approach to maximizing throughput; you can call it trial and error if you want, but it is a sound approach. In general, there isn't a good 'rule of thumb' to follow here, because the overheads and latencies vary so much that it's not really possible to generalize. The focus should be on throughput and task/thread completion, not CPU utilization. For example, it's easy to peg the cores with coarse- or fine-grained synchronization yet make no actual difference in throughput.
Also, regarding .NET 4: if you can reframe your problem as a Parallel.For or Parallel.ForEach, the thread pool will adjust the number of threads to maximize throughput, so you don't have to worry about this.
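For example (DownloadAndCrunch is a made-up stand-in for your per-item work):

    using System.Collections.Generic;
    using System.Threading.Tasks;

    class Example
    {
        static void ProcessAll(IEnumerable<string> urls)
        {
            // The .NET 4 thread pool chooses and adjusts the number of
            // concurrent workers here; no manual pool sizing required.
            Parallel.ForEach(urls, url => DownloadAndCrunch(url));
        }

        static void DownloadAndCrunch(string url)
        {
            /* fetch the data, then compute on it */
        }
    }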
-Rick
Assuming nothing of importance besides the OS runs on the machine, and your load is constant, you should aim for 100% CPU utilization; everything else is a waste of CPU. Remember the OS handles the threads, so it is indeed able to run; it's hard to starve the OS with a well-behaved program.
But if your load is variable and you expect peaks, you should take that into consideration. I'd say 80% CPU is a good threshold to use, unless you know exactly how that load will vary and how much CPU it will demand, in which case you can aim for the exact number.
If you simply give your threads a low priority, the OS will do the rest, taking cycles as it needs to do its own work. Server 2003 (and most server OSes) are very good at this; there's no need to try to manage it yourself.
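Something like this, as a minimal sketch:

    using System.Threading;

    class LowPriorityWorker
    {
        static void Main()
        {
            var worker = new Thread(DoWork);
            worker.Priority = ThreadPriority.BelowNormal; // the OS schedules everything
            worker.Start();                               // else ahead of this thread
        }

        static void DoWork()
        {
            /* the CPU-heavy part of your service */
        }
    }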
I have also used 80% as a general rule-of-thumb for target CPU utilization. As some others have mentioned, this leaves some headroom for sporadic spikes in activity and will help avoid thrashing on the CPU.
Here is a little (older but still relevant) advice from the WebLogic crew on this issue: http://docs.oracle.com/cd/E13222_01/wls/docs92/perform/basics.html#wp1132942
If you feel your load is very even and predictable you could push that target a little higher, but unless your user base is exceptionally tolerant of periodic slow responses and your project budget is incredibly tight, I'd recommend adding more resources to your system (adding a CPU, using a CPU with more cores, etc.) over making a risky move to try to squeeze another 10% of CPU utilization out of your existing platform.

CPU consumption equivalent for hard disk scanning

I would like my software that scans the disk structure to work in the background, but lowering the priority of the thread that does the scanning doesn't work. I mean, you still get the feeling of the computer working hard, even freezing, although your program consumes only 1 percent of the processor time. Is it possible to implement a "hard disk time consumption" equivalent of CPU consumption in Win32?
Since Vista you can lower your I/O priority, which is separate from CPU priority.
http://msdn.microsoft.com/en-us/library/ms686219(VS.85).aspx
SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN);
For XP, 2003 and older, you'd have to find some other way to throttle your disk activity, such as calling Sleep() often.
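From C#, the same call can be made via P/Invoke; a minimal sketch (the constants come from winbase.h):

    using System;
    using System.Runtime.InteropServices;

    class BackgroundMode
    {
        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetPriorityClass(IntPtr hProcess, uint dwPriorityClass);

        [DllImport("kernel32.dll")]
        static extern IntPtr GetCurrentProcess();

        const uint PROCESS_MODE_BACKGROUND_BEGIN = 0x00100000;
        const uint PROCESS_MODE_BACKGROUND_END   = 0x00200000;

        static void Main()
        {
            // Background mode lowers CPU, I/O and memory priority
            // (Vista and later, current process only).
            if (!SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_BEGIN))
                Console.WriteLine("error " + Marshal.GetLastWin32Error());

            /* ... scan the disk structure here ... */

            SetPriorityClass(GetCurrentProcess(), PROCESS_MODE_BACKGROUND_END);
        }
    }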
Disk access is typically measured by a few different metrics: transfers per second (which can be broken down into reads and writes per second) and bytes read or written per second. If you want to limit the impact of your disk-scanning application, one way would be to track one (or both) of these metrics, decide on a reasonable cap, and periodically sleep your thread when you exceed it. Nothing you can do to CPU scheduling will accomplish this except in the most diaphanous, indirect way.
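A sketch of that idea in C#, capping the average read throughput and sleeping whenever the scan gets ahead of its budget (the 5 MB/s cap and buffer size are arbitrary):

    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class ThrottledScanner
    {
        const double MaxBytesPerSecond = 5 * 1024 * 1024; // arbitrary cap

        static void ScanFile(string path)
        {
            var buffer = new byte[64 * 1024];
            long totalBytes = 0;
            var clock = Stopwatch.StartNew();

            using (var stream = File.OpenRead(path))
            {
                int read;
                while ((read = stream.Read(buffer, 0, buffer.Length)) > 0)
                {
                    totalBytes += read;
                    // Earliest time we're allowed to have read this much;
                    // if we're ahead of budget, sleep off the difference.
                    double earliest = totalBytes / MaxBytesPerSecond;
                    double ahead = earliest - clock.Elapsed.TotalSeconds;
                    if (ahead > 0)
                        Thread.Sleep(TimeSpan.FromSeconds(ahead));
                }
            }
        }
    }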

Easiest way to determine compilation performance hardware bottleneck on single PC?

I've now saved a bit of money for a hardware upgrade. What I'd like to know is: what's the easiest way to measure which piece of hardware is the bottleneck for compilation and should be upgraded?
Are there any clever techniques I could use? I've looked into perfmon, but it has too many counters and isn't very helpful without exact knowledge of what to look at.
Conditions: Home development, Windows XP Pro, Visual Studio 2008
Thanks!
The question is really "what is maxed out during compilation?"
If you don't want to use perfmon, you can use something like Task Manager.
Run a compile.
See what's maxed out.
Did you go to 100% CPU for the whole time? Get more CPU -- faster or more cores or something.
Did you go to 100% memory for the whole time? Of the numbers on the display, physical memory is the one that matters: it's the only memory you can buy. The other things you see on the meter aren't things you buy; they're adjustments to the way Windows works.
Did you go to "huge" amounts of I/O? You can't easily tell what's "huge", but you can conclude this. If you're not using memory and not using CPU, then you're using the only resource that's left -- you're I/O bound and you need a faster bus -- which usually means a whole new machine.
A faster HDD is of little or no value -- the bus clock speed is one limiting factor. The bus width is the other limiting factor. No one designs an ass-kicking I/O bus and then saddles it with junk HDD's. Usually, they design the bus that fits a specific cost target based on available HDD's.
Garbage. Modern HDDs are slow compared to the I/O buses they are connected to. Name a single HDD that can max out a SATA 2 interface (and that is even a generation old now) for random IOPS... A hard drive is lucky to hit 10 MB/s of random I/O when the bus is capable of around 280 MB/s.
E.g. http://www.anandtech.com/show/2948/3. Even there the SSDs are only hitting 50 MB/s. Clearly the bus is NOT the bottleneck; otherwise the HDDs would do just as much as the SSDs.
I've never seen a computer bus-bound rather than HDD-bound. It doesn't happen.
Using Task Manager has already been suggested, but Process Explorer from Sysinternals gives you more information than the built-in Windows Task Manager.
You might also want to see what else is running on your PC and using up memory and/or CPU. It may be possible to remove, or run only on demand, things that are affecting performance.
Also bear in mind that Windows XP will only give an application 3GB of address space with the /3GB boot switch turned on, and I seem to remember that applications need to be built specifically to actually take advantage of it.

Are solid-state drives good enough to stop worrying about disk IO bottlenecks?

I've got a proof-of-concept program which is doing some interprocess communication simply by writing to and reading from the hard disk. Yes, I know this is really slow, but it was the easiest way to get things up and running. I had always planned on coming back and swapping out that part of the code with a mechanism that does all the IPC (interprocess communication) in RAM.
With the arrival of solid-state disks, do you think that bottleneck is likely to become negligible?
Notes: It's server software written in C# calling some bare metal number-crunching libraries written in FORTRAN.
The short answer is probably no. A famous researcher named Jim Gray gave a talk about storage and performance which included this great analogy. Assume your brain is the processor. Accessing a register takes 1 clock tick, roughly equivalent to the information already being in your brain. Accessing memory takes about 100 clock ticks, roughly equivalent to getting data from somewhere in the city you live in. Accessing a standard disk takes roughly 10^6 ticks, the equivalent of the data being on Pluto. Where does solid state fit in? Current SSD technology is somewhere between 10^4 and 10^5, depending on who you ask. While they can be an order of magnitude faster than disk, there is still a tremendous gap between reading from memory and reading from an SSD. This is why the answer to your question is likely no: as fast as SSDs become, they will still be significantly slower than memory (at least for the foreseeable future).
I think you will find that the bottlenecks just move. As we expect higher throughput, we write programs with higher demands.
This pushes bottlenecks to buses, caches and parts other than the read/write mechanism (which is last in the chain anyway).
With a process not bound by disk I/O, I think you would find it becomes bound by the scheduler, which limits the rate of read/write instructions (as it does all process instructions).
To take full advantage of limitless I/O speed you would need real-time response and very aggressive management of caches and so on.
When disks get faster, so do RAM, processors, and the demands on devices. The bottleneck is the same; the workload just gets bigger.
I don't believe that it will change the way I/O bound applications are written the tiniest bit. Having faster processors did not make people pick bubblesort as a sorting algorithm either.
The external memory hierarchies are an inherent problem of computing.
Joel on Software has an article about his experience upgrading to solid state. Not exactly the same issue you have, but my takeaway was:
Solid-state drives can significantly speed up I/O-bound operations, but many things (like compiling) are still CPU-bound.
I have a solid-state drive, and no, this won't eliminate I/O as a bottleneck. The SSD is nice, but it's not that nice.
It's actually not hard to master your system's IPC primitives, or to build something on top of TCP. But if you want to stick with your disk-based approach and just make it faster, a ramdisk or tmpfs might do the trick.
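As a minimal illustration of the first option on Windows, a named pipe keeps the same read/write style but never touches the disk (pipe name and message are made up):

    using System;
    using System.IO;
    using System.IO.Pipes;
    using System.Threading;

    class PipeDemo
    {
        static void Main()
        {
            // Server end: created first so the client has something to connect to.
            using (var server = new NamedPipeServerStream("demo-pipe"))
            {
                var client = new Thread(() =>
                {
                    using (var pipe = new NamedPipeClientStream(".", "demo-pipe"))
                    using (var writer = new StreamWriter(pipe))
                    {
                        pipe.Connect();
                        writer.WriteLine("hello"); // goes through the kernel,
                        writer.Flush();            // never touching the disk
                    }
                });
                client.Start();

                server.WaitForConnection();
                using (var reader = new StreamReader(server))
                    Console.WriteLine("received: " + reader.ReadLine());

                client.Join();
            }
        }
    }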
No. Current SSDs are designed as disk replacements. Every layer, from the SATA controller to the filesystem driver, treats them as storage.
This is not a problem of the underlying technology, NAND flash. When NAND flash is mapped directly into memory and managed with a rotating-log storage system instead of a file system based on named files, it can be quite fast. The fundamental problem is that NAND flash only performs well with block updates; file metadata updates cause expensive read-modify-write operations. Also, NAND blocks are much bigger than typical disk blocks, which doesn't help performance either.
For these reasons, the future of SSDs will be better-cached SSDs: DRAM will hide the overhead of the poor mapping, and a small supercap backup will allow the SSD to commit writes faster.
Solid-state drives do make one important improvement to I/O throughput: block locality is less of an issue than on rotating media. This means high-performance I/O-bound applications can shift their focus from structures that arrange data to be accessed in order, to structures that optimize I/O in other ways, such as keeping data in a single block by means of compression. That said, even solid-state drives benefit from linear access patterns, because the drive can prefetch subsequent blocks into a read cache before the application requests them.
A noticeable regression on solid-state disks is that writes take longer than reads, although both are still generally faster than on rotating drives, and the difference is narrowing with newer, high-end solid-state disks.
No, sadly not. They do make things more interesting, though: SSDs have very fast reads and no seek time, but their writes are almost as slow as those of normal hard drives. This means you will want to read most of the time, and when you do write, you should write as much as possible in one place, since SSDs can only write entire blocks at a time.
How about using a RAM drive instead of the disk? You would not have to rewrite anything; just point it at a different file system. Windows and Linux both have them. Make sure you have lots of memory on the machine, and create a virtual disk with enough space for your processing. I did this for a system that listened to multiple protocols on a network tap. I never knew what packet I was going to get, and there was too much data to keep it all in memory. I would write it to the RAM drive, and when something was completed, I would move it and let another process get it off the RAM drive and onto a physical disk. I was able to keep up with really busy server-class network cards this way. Good luck!
Something to keep in mind here:
If the communication involves frequent messages and is on the same system, you'll get very good performance, because Windows won't actually write the data out in the first place.
I had to resort to this once and discovered it firsthand: the drive light did NOT come on at all as long as the data kept getting written.
"but it was the easiest way to get things up and running."
I usually find that it's much cheaper to think well once with your own head than to make the CPU think millions of times in vain.
