What causes the MS Windows 'System' Process to go nuts when compiling? - windows

A couple of times recently I have noticed that 'something' is causing the Windows System process to sit at 50+% CPU, and it will not settle down until the PC is rebooted. This has happened on Win2k and Win XP so far.
This is particularly troublesome because it currently appears to be triggered by MSVC 2005/IncrediBuild, and rebooting the build servers is not a nice thing.
At the same time the 'System Idle Process' is holding the rest of the CPU and the build steps themselves seem to be starved, i.e. a module that normally takes <5 minutes to compile is currently taking 20+.
My first guesses would be the virus checker or TortoiseSVN, but I would desperately like some other suggestions.
Edit:
I've been experiencing this as something that is triggered, and the culprit may not be ongoing. That's not to say that some other ongoing process hasn't done something 'stupid' and is managing an active lock-up of System while appearing to be idle itself.
System (100% of 1 core) and System Idle Process are sharing 98-100% of the total CPU.
Occasionally mt.exe, link.exe and buildservice get a look-in at 1-2%.
I'm running VNC to view the machine, so it's getting a look-in on occasion.
Edit 2:
When I left it the previous evening the build process seemed to be progressing, albeit slowly, but after waiting another 13 hours the 1-hour build process still hasn't completed. System is still hogging the one core.

My understanding is that the "System" process accounts for time spent in the kernel (so performing disk I/O, network I/O (you did mention Incredibuild) and the like) -- I'd check for disk fragmentation and virus checkers, and possibly look at these on the other machines in your Incredibuild cluster.
As the System Idle Process runs at "Low" priority, it's a red herring that it'd be "taking up CPU time" -- if anything it just shows that there is CPU time available. The fact that the processing is stuck on a single processor suggests that the process is doing something that is not multi-core aware, or that someone has set its thread affinity to 1.
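You can check the affinity from code as well as from Task Manager. A minimal sketch using the plain Win32 API (a process mask of 1 means the process and all its threads are pinned to the first core):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        DWORD_PTR processMask = 0, systemMask = 0;

        // Query the affinity mask of the current process; compare it against
        // the system mask to see whether the process has been pinned.
        if (GetProcessAffinityMask(GetCurrentProcess(), &processMask, &systemMask))
            printf("process mask: 0x%Ix, system mask: 0x%Ix\n", processMask, systemMask);

        return 0;
    }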

I've noticed that the virus checking software I use can radically slow down compilation, but the effect does not extend beyond the end of the build. Turning off advanced and heuristic checking improves this to the extent that I do not have to disable the scanner entirely. I have also changed my scanning strategy to use scheduled full scans rather than advanced on-the-fly scanning, as the latter hurts the performance of a number of apps (n.b. I am using the latest cut of Kaspersky). I'm also using an automated backup tool (AJCBackup) that likewise needs to be restrained while compiling.
You may also want to consider disabling the Windows Indexing Service on drives that are used to create a lot of temporary and object files, as it doesn't provide much value in this context for the amount of performance it costs.
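For example, from a command prompt (assuming the XP-era Indexing Service, whose service name is CiSvc; on Vista and later the search service is WSearch instead):

    sc stop CiSvc
    sc config CiSvc start= disabled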
Edit: Have you checked which processes are actually hogging the CPU core and traced them back to a given app?

We've encountered issues with Kaspersky and Incredibuild in our offices - compiles and sometimes links will just hang and never finish.
It only seems to affect some machines though, which is weird, and only Windows XP (Vista seems immune from what I've seen).
The only solution I've found so far is to turn Kaspersky off entirely - so if you find a better one, let me know!

RE: smacl, work from the Windows Search/Indexing Service (WSearch) won't be attributed to the System process's CPU time; it should show up under the SearchIndexer.exe/SearchFilterHost.exe processes (Vista+).
Most of the activity you will see attributed to System is disk activity from the lazy writer and other disk accesses. CPU time charged to System comes from kernel activity such as drivers (ISRs/DPCs) and other kernel-level filters (which can include AV file and process filters).
Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx) can aid in viewing CPU usage across processes, including System. You can use the public Microsoft Symbol Server and this resource to get you started.
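To get symbols resolving in Process Explorer, the usual approach is to point it at the public symbol server via the standard symbol path variable, e.g. (the local cache directory C:\Symbols is just an example):

    set _NT_SYMBOL_PATH=srv*C:\Symbols*http://msdl.microsoft.com/download/symbols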
If you can take a trace with Xperf (http://msdn.microsoft.com/en-us/performance/cc825801.aspx), I can help you analyze where the CPU time is being spent in the System (kernel) context. Xperf isn't officially supported on XP, but you can take a trace on XP and analyze it on other systems.
Xperf and Process Explorer should be able to shine a spotlight on exactly the module(s) that are causing the runaway CPU usage. Symbols may not even be necessary to diagnose the problem; simply the module name can often point to the component in question that is slowing down your system. For example, high CPU usage from ndis.sys can point to network interrupts, or activity from modules such as aavmker4.sys can point to AV software (Avast! in this case).
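A minimal Xperf capture session looks something like this (DiagEasy is one of the built-in kernel provider groups; the exact groups available depend on the xperf version):

    xperf -on DiagEasy
    (reproduce the runaway CPU usage)
    xperf -d trace.etl

The resulting trace.etl can then be opened in the Xperf viewer on another machine to see which kernel modules are burning the CPU time.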
And as always, check if there are any updated drivers and AV software for your system.

In my office, a conflict between Incredibuild and Spyware Doctor's Immunize feature caused similar issues. Turning off Immunize solved it for us.
What anti-virus/malware do you use?

I'm having the same hangs when compiling with IncrediBuild in VS2003, on a clean Windows 7 install without any anti-virus. It worked fine on the same box under XP and Vista.

Related

Windows 7: poor GUI response in my program while downloading data; is there some way to improve this?

I've written a program that (among other things) downloads multiple large files from a server on the LAN, using TCP. This program runs fine under Linux, MacOS/X, and generally under Windows as well (it uses Qt for the GUI and straight sockets calls for networking), but on certain Windows machines the download appears to be too much for the machine to handle, and I'm wondering if anyone has any ideas as to why that is and what can be done about it.
When downloading files, my program spawns a separate I/O thread that basically just sits in a loop, downloading data over TCP and writing it to a file, writing 128KB per call to QFile::write(). Each file is typically several hundred megabytes long, and a typical download session writes out several dozen of these files. Note that the I/O thread runs independently of the GUI thread, so I wouldn't expect it to affect the GUI's performance much if at all -- especially not when running on a multicore PC.
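Conceptually the I/O thread's loop looks something like this (a hypothetical sketch, not the actual code; the function and variable names are illustrative):

    #include <QFile>
    #include <winsock2.h>

    // Sketch of the I/O thread's loop: receive up to 128KB at a time from a
    // connected TCP socket and append it to an already-open QFile.
    void downloadToFile(SOCKET sock, QFile &file)
    {
        const int kBufSize = 128 * 1024;
        QByteArray buf(kBufSize, 0);
        for (;;) {
            const int numBytes = recv(sock, buf.data(), kBufSize, 0);
            if (numBytes <= 0)
                break;                              // connection closed or error
            file.write(buf.constData(), numBytes);  // 128KB (or less) per call
        }
    }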
The PC in question is a Core 2 Quad Q6600 running at 2.40GHz, with 4GB of RAM. It's running Windows 7 Ultimate SP1, 32-bit. It is receiving data over a Gigabit Ethernet connection and writing it to files on the NTFS-formatted boot partition of the 232GB internal Hitachi ATA drive.
The symptom is that sometimes during a download (seemingly at random) the program's GUI will become non-responsive for 10 to 30 seconds at a time, and often the title bar of the window will have "(not responding)" appended to it. The symptom then clears up and the download proceeds normally again. Another symptom is that the desktop is extremely sluggish during the download... for example, if I click on the "Start" button, the Start menu takes ~30 seconds to populate, instead of populating near-instantaneously as I would expect.
Note that Task Manager shows plenty of free memory, but it does show short spikes of CPU usage to 100% on one of the 4 cores, at the same times the problems are seen.
The data is arriving over Gigabit Ethernet, and if I have my program just receive the data and throw it away (without writing it to the hard drive), the machine can maintain a constant download rate of about 96MB/sec without breaking a sweat. If I write the received data to a file, however, the download rate decreases to about 37MB/sec, and the symptoms described above start to appear.
The interesting thing is that just for curiosity's sake I added this call to my I/O thread's entry function, just before the beginning of its event loop:
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
When I did that, the "(not responding)" symptoms cleared, but the download speed was reduced to only ~25MB/sec.
So my questions are:
Does anyone know what might be causing the sporadic hangups of the GUI when the hard drive is under a heavy write-load?
Why does lowering the I/O thread's priority cause the download rate to drop so much, given that there are three idle cores on the machine? I would think that even a lower-priority thread would have plenty of CPU available in this situation.
Is there any way to get a maximum download rate without causing Windows' desktop responsiveness and/or my app's GUI responsiveness to suffer problems?
Without seeing any code it is hard to answer, but this seems to be related to the processors and the fact that your download thread is not leaving any room for other threads to perform their operations.
It seems that it never waits, and perhaps that the network card driver is not well written.
Are you sure your thread enters an idle state when there is no incoming data?
On an OS with a single processor, a for (;;) {} loop will consume 100% CPU, and if it talks to the kernel continuously it may stall other processes or threads while doing so, especially if there is a bug or very bad behaviour in some network card driver, as may be the case here.
By putting the thread priority below normal you are probably asking the OS to schedule your thread less often, and by some combination of effects this allows the rest of the system not to hang as much.
Check the code, maybe you are forgetting something?
Check whether adding a Sleep(0) to force the OS to yield to another thread occasionally makes things better, but this is a temporary fix; you should find out why your thread is consuming 100% CPU, if it is.
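For instance, something like this inside the receive loop (a sketch; note the documented behaviour of Sleep(0) is to relinquish the rest of the time slice only to ready threads of equal priority, while Sleep(1) forces a real wait):

    #include <windows.h>
    // After processing each received chunk:
    Sleep(0); // yield the rest of this time slice to other ready threads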

How to improve GWT hosted mode / compilation times?

At work we are using some pretty powerful machines: HP Z600 with dual Xeon @ 2.5GHz, 8-16GB RAM. Unfortunately, due to ill-fitting company policies we are forced to use 32-bit XP, so I made a PAE ramdrive out of the unused 4GB RAM.
Now, the temporary files are on the ramdrive. I've also tried moving the whole project to the ramdrive, then to an SSD, but there was no noticeable improvement in hosted-mode startup or compilation times.
I've then run SysInternals' Process Monitor to see if there are any bottlenecks that are not visible with Task Manager / the hard-disk activity LED, but I have not seen anything notable - except some 'buffer overflow' results, whose meaning I don't understand.
I assume that performance for OOPHM startup and GWT compilation are tied, so I'm using the compilation times as a benchmark between the various changes.
I've activated and deactivated hyper-threading and turbo-boost in the BIOS, but again saw no differences. Hyper-threading even seems to make everything slower; I assume the context-switching penalty is higher for 16 logical cores than for 8. Turbo-boost does not seem to do anything; I assume it only works under Win7, as I have not succeeded in activating the driver. It should boost the core from 2.5GHz to 2.8GHz.
Deactivated indexing and timestamping on NTFS drives, changed the performance setting from foreground to background and back, used another instance of Eclipse - no changes.
For compilation I've tried specifying a different number of workers, larger memory and some other options. Everything above two workers increases compilation time.
Older HP machines (XW6600) seem to compile a bit faster, perhaps because of the 2.8GHz clock, but their hosted mode seems to start up slower.
To sum up, memory usage is at about 2.6GB, pagefile usage is zero, the hard drive is not signaling much activity, CPU activity is <10% (a single core at about 50-70%), but still the computer seems to do nothing for some time while compiling or launching OOPHM GWT.
Ok, so now that I've tried everything I knew and found on the Internet, is there anything else that I can try? Will switching to 64-bit Win7 improve much (this is due next year anyway)? Are there any hardware/software options that I can tweak?
Later edit: I also ran RATT (a tracer from MS) to see if there are any interrupts taking too long, but everything seems in order. The antivirus does not make a difference. I benchmarked another GWT project against my mobile i7 (2630QM), and the i7 is about 70% faster, though it has about the same clock.
In most cases I've encountered, the reason for slow compilation / hosted mode refresh is the way the application is written.
For compilation there are a few things to watch out for:
Don't keep unused classes and modules in your classpath; remove them if possible, because GWT parses them anyway at the precompile stage.
Watch out for GWT-RPC type explosion; this is usually the cause of most problems.
Be really careful with the ContextWithLookup interface; always try to minimize the number of methods in an interface which extends ContextWithLookup.
Try to check out some other compilation options, like distributed compilation, soft permutations or multi-JVM compilation (run the compiler with the system property -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ExternalPermutationWorkerFactory and -localWorkers to specify the number of JVMs); see the example invocation just below.
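A multi-JVM compile invocation might look something like this (the heap size, classpath and module name are illustrative, not from the original setup):

    java -Xmx1024m -Dgwt.jjs.permutationWorkerFactory=com.google.gwt.dev.ExternalPermutationWorkerFactory -cp <your-classpath> com.google.gwt.dev.Compiler -localWorkers 4 com.example.MyModule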
For hosted mode:
Use lazy loading where possible. The greatest problem with hosted mode I've seen is the application trying to initialize too many classes which aren't even used on the current screen; avoiding this can greatly speed up hosted mode startup. Again, things like RPC type explosion affect hosted mode as well.
That's all I can advise. Things like a RAM disk can speed up options like -compileReport, since it can generate a huge number of files.

What is causing one Vista machine to be 10 times faster than another machine?

We run a Fortran console program we have run for years. Recently we purchased identical new HP server-class machines (4 processors, 8GB RAM, 4 hard drives) for everyone in the office. We configured them as nearly identically as we know how. We can compile the Fortran program on one machine and pass the executable to the different machines, and on two machines it executes painfully slowly, while on the two others it has modest performance (but not as good as before we upgraded from the XP machines).
It produces almost no console output (about 40 lines) but writes about 15MB of files.
We open task manager to see what's going on, and we see that on the slow machines it's loading ONE CPU to about 15%. On the fast machines it's loading ALL CPUs to about 40% (but one of them seems to load more than the others). As I recall, on XP it loaded the CPU to 99%, and ran much faster.
These machines are the employees' general purpose machines, and have lots of company bloatware on them. And there is the possibility they have slightly different directory structures. But what seems totally puzzling to me is why Vista is not giving them more CPU time. If the CPUs were loading up, I might blame the performance variation on different directory structures, but not loading up the CPUs just boggles my mind.
David
If there's a bottleneck in I/O, the CPU won't be loaded as much because it's mostly waiting for the I/O to complete. One could even imagine this causing the one-CPU vs. many-CPUs difference, if there's simply no point in kicking in another CPU because there's plenty of idle time while waiting. What if you take an external HD and test whether the differences still occur when you run the same program from that HD on the different machines?
Please go into Windows Task Manager's Performance tab, select [Show Kernel Times] in the [View] menu, and look at what's displayed on the bars during program execution.
If it's only a 15% load on a quad + hyperthreading box, that basically says that OpenMP, MPI (or whatever it uses) isn't working properly - it is running on 1 of 8 logical CPUs => ~15%. Can you run an MPI test for your specific system to check for multiprocessing errors on each box? The question then becomes: why does the multiprocessing environment not work?
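A quick sanity check of the multiprocessing environment could be a minimal OpenMP program like this (a sketch; compile with /openmp under MSVC or -fopenmp under gcc):

    #include <omp.h>
    #include <stdio.h>

    int main(void)
    {
        // Each thread reports in; on a healthy quad + hyperthreading box
        // you would expect to see 8 threads.
        #pragma omp parallel
        {
            #pragma omp critical
            printf("thread %d of %d\n", omp_get_thread_num(), omp_get_num_threads());
        }
        return 0;
    }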
Regards
rbo
SWAG, but have you checked your virus scanner configuration? If the scanner isn't set to ignore the type of file you're writing, then on the slow machines each write to those files might be getting intercepted and scanned before being written to disk. This could leave the process sitting in I/O wait and not getting scheduled as often.
Vista had problems with some uncontrollable memory leaks; perhaps this is your issue - some conflict in the 'bloatware' is causing a memory leak, and that is why your Fortran program runs so much slower?
I assume you have tested this with all other programs closed. It seems unlikely that your console program itself is the issue. It does sound like there's a memory conflict going on, though.

What to do when Windows goes into dreaded 100% CPU usage zombie mode

This happens to me occasionally:
I start my program in Visual Studio, and due to some bug my program goes into 100% CPU usage and basically freezes Windows completely.
Only with utter patience, summoning the Task Manager (which takes forever to come up and paint itself), can I kill my process.
Do others encounter this too sometimes? Is there a clever trick to get such a process down (other than pulling the plug and possibly ruining files on the HD)? It now takes 5-10 minutes to kill it properly if Task Manager doesn't happen to be open already and I have to launch it first.
R
P.S. It's weird that a 'multitasking OS' can still allow processes to eat up so much time that nothing else can be done any more. My program doesn't even bump up its thread priorities or anything.
Check out Process Lasso
"Process Lasso is a unique new technology that will, amongst other things, improve your PC's responsiveness and stability. Windows, by design, allows programs to monopolize your CPU without restraint -- leading to freezes and hangs. Process Lasso's ProBalance (Process Balance) technology intelligently adjusts the priority of running programs so that badly behaved or overly active processes won't interfere with your ability to use the computer!"
http://www.bitsum.com/prolasso.php
I am not affiliated with Bitsum, just a user of their product, and it helps me solve this type of problems.
For what it's worth, I've never seen this on either XP 64 or Vista 64, developing C++ apps in Visual Studio. Perhaps an OS upgrade is in order?
Edit: I use Process Explorer as a replacement Task Manager - it wouldn't surprise me if it did a better job of appearing in good time even when there's a rogue process running. And you can use it to boost its own priority.
I usually hit Ctrl-Alt-Delete, start the Task Manager, sort by CPU, find the offending process, then right-click it and end the process.
Task Manager usually has enough priority to do this, although it may be slow.
I think a shotgun to the head is the only way to be sure.
I generally don't see anything like this happen strictly as a function of an app that's eating 100% CPU. As part of stability/performance testing, I've gotten apps to make Windows very slow, but this is usually done by writing heavily threaded apps (thus causing the O/S scheduler to thrash) or by writing apps that consume all available system memory or resources (much more impactful to GUI apps than a single thread consuming its full share of processor time during its slices).
You say you get this behavior under Visual Studio? VS has a "Pause" button...

Is there any way of throttling CPU/Memory of a process?

Problem: I have a developer's machine (read: fast, lots of memory), but the user has a user's machine (read: slow, not very much memory).
I can simulate a slow network using Fiddler (http://www.fiddler2.com/fiddler2/)
I can look at how CPU is used over time for a process using Process Explorer (http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx).
Is there any way I can restrict the amount of CPU a process can have, or the amount of memory a process can have in order to simulate a users machine more effectively? (In order to isolate performance problems for instance)
I suppose I could use a VM, but I'm looking for something a bit lighter.
I'm using Windows XP, but a solution for any Windows machine would be welcome. Thanks.
The Platform SDK used to come with stress tools for doing just this back in the good old days (STRESS.EXE and CPUSTRESS.EXE in the SDK), but they might still be there (check your Platform SDK and/or Visual Studio installation for these two files -- unfortunately I have neither the PSDK nor VS installed on the machine I'm typing from).
Other tools:
memory: performance & reliability (e.g. handling failed memory allocation): can use EatMem
CPU: performance & reliability (e.g. race conditions): can use CPU Burn, Prime95, etc
handles (GDI, User): reliability (e.g. handling failed GDI resource allocation): ??? you may have to write your own (see the sketch after this list), but running out of GDI handles (buggy GTK apps would usually eat them all away until all other apps on the system started falling dead like flies) is a real test for any Windows app
disk: performance & reliability (e.g. handling disk full): DiskFiller, etc.
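A hypothetical write-your-own GDI stress tool could be as simple as this sketch (it exhausts the calling process's own GDI quota; run it in a test session, as the desktop can become unusable while it holds the handles):

    #include <windows.h>
    #include <stdio.h>
    #include <vector>

    int main()
    {
        std::vector<HPEN> pens;
        // Allocate pens until creation fails (the default per-process
        // GDI quota is 10,000 objects).
        for (;;) {
            HPEN pen = CreatePen(PS_SOLID, 1, RGB(0, 0, 0));
            if (pen == NULL)
                break; // GDI handle allocation failed
            pens.push_back(pen);
        }
        printf("created %u pens before failure\n", (unsigned)pens.size());
        for (size_t i = 0; i < pens.size(); ++i)
            DeleteObject(pens[i]); // clean up
        return 0;
    }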
AppVerifier has a low-resource simulation feature.
You could also try setting the priority of your process to be very low.
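For example, either launch it with 'start /low yourapp.exe' from a command prompt, or drop the priority from inside the process itself (a sketch using the Win32 API):

    #include <windows.h>
    // Drop the current process to idle priority so it only gets the CPU
    // when nothing else wants it.
    SetPriorityClass(GetCurrentProcess(), IDLE_PRIORITY_CLASS);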
You can run MemAlloc to chew up RAM, possibly a few copies at once.
I found a related question:
Set Windows process (or user) memory limit
The accepted answer for that question links to the Windows API's SetProcessWorkingSetSize, so it's not exactly a tool, but rather an API call, that can limit the amount of memory a process uses.
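A sketch of how it might be called (the 16MB/64MB limits are illustrative; note this caps the resident working set, so the process can still commit more memory, which will simply be paged out):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                      16 * 1024 * 1024,   // minimum working set (bytes)
                                      64 * 1024 * 1024))  // maximum working set (bytes)
            printf("SetProcessWorkingSetSize failed: %lu\n", GetLastError());
        return 0;
    }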
In terms of changing the amount of CPU resources a process can use, if you don't mind the granularity of per-core limiting of resources, Task Manager can change the processor affinity of a process.
In Task Manager, right-click a process and select "Set Affinity...", then select the processor cores that the process can be assigned to.
If the development machine has many cores but the user machine only has one, then, rather than allowing the process to run on all the available cores, set the process' processor affinity to only one core.
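The same thing can be done programmatically, e.g. (a sketch; a mask of 0x1 pins the process to core 0 to approximate a single-core user machine):

    #include <windows.h>

    int main(void)
    {
        // Restrict the current process (and all its threads) to core 0.
        SetProcessAffinityMask(GetCurrentProcess(), 0x1);
        // ... run the workload under test ...
        return 0;
    }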
It has nothing to do with SetProcessWorkingSetSize.
Just use the internal Win32 kernel APIs to restrict CPU usage.
