How can I deliberately slow Windows?

How do I reversibly slow down a PC running Windows XP?
I want to achieve this without consuming visible CPU cycles, so I'm guessing some hardware setting might do the trick.
I don't want my app to run slow, I want the whole OS to be slow. I know some network lookups, especially outside of a trusted environment (think Active Directory), slow a PC way down. This is the effect I want.
Disclaimer: this is not for a bad/evil/illegal cause!

We use a 'crippled' server we call doofus for load testing. It is an old P3/500 box with limited RAM.
Another option is setting up a VM with very limited resources.

Use powercfg.exe to force the computer onto a power scheme you've created that locks the CPU into a lower frequency to conserve power. Which states are available depends on your platform (most desktops only have a couple).
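On XP, the command-line version of this might look like the following; "Crawl" is just a made-up scheme name, and which throttle modes are actually honored depends on your CPU and ACPI support:

    powercfg /create "Crawl"
    powercfg /change "Crawl" /processor-throttle-ac CONSTANT
    powercfg /change "Crawl" /processor-throttle-dc CONSTANT
    powercfg /setactive "Crawl"

CONSTANT pins the processor at its lowest performance state; to undo it, just /setactive your original scheme again.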

If you think your hardware setup can handle it, some motherboards let you manually specify a clock-speed multiplier or other speed settings in the BIOS. Often there'll be an option for a slower speed or a field where you can manually enter a lower multiplier.
If not, you might consider setting up a virtual machine and making sure it isn't hardware-accelerated: a fully emulated guest runs slower because of the instruction translation that takes place in the virtualization layer.

The open source Bochs emulator is pretty easy to slow down by editing its config file. Windows XP will run in it. It is not as powerful as VMware, but there are many more configuration options.
Look at the documentation for the config file, "bochsrc", and particularly the "ips" (instructions per second) entry.
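For example, a bochsrc fragment along these lines throttles the emulated CPU; the exact ips value is something you'd tune by experiment, and depending on your Bochs version the setting is either part of the cpu: line (as here) or a standalone ips: line:

    # bochsrc fragment: emulate roughly 2 MIPS instead of the default
    cpu: count=1, ips=2000000
    megs: 128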

Remove the thermal paste and put some dust on the CPU :-) Also, remove some RAM.

You may want to take a look at a full-system simulator such as Simics. Simics lets you deterministically simulate an entire system (including networks, if you want). Not only can you tweak the CPU frequency, you can also study the system in detail to see how it behaves.
Unfortunately, Simics has quite a price tag.

If you want to see really dramatic effects very easily, set the /MAXMEM switch in boot.ini (or use msconfig). This limits the amount of memory the system will use; dropping to 256 MB or lower makes things very, very slow.
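A boot.ini entry using the switch might look like this (the /MAXMEM value is in megabytes; the multi(...) path is whatever your existing entry already says):

    [operating systems]
    multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Windows XP (256 MB)" /fastdetect /MAXMEM=256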

You have lots of options. Things I can think of:
Change your disks to good old-fashioned IDE. None of that high-speed DMA stuff, just good old-fashioned PIO.
Remove RAM (or disable it in the BIOS).
Switch to generic video drivers (I mean the unaccelerated "Generic SVGA" type).
Disable one or more cores in the BIOS.
Slow the CPU down in the BIOS (if possible).

We keep an old laptop around for this reason. It helped me find a subtle timing issue in some splash screen code that was absolutely unreproducible on decent quad-core dev boxes.

Install Norton 360. It makes the mouse cursor lag during updates and constantly nags for restarts.

Disable the L2 cache in the BIOS

Two Windows applications: Mo'Slo and Cpukiller.
I remember hearing of one that grabs large chunks of RAM to reduce your available memory, but I forget what it is called.
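If you can't track that tool down, a crude RAM ballast is only a few lines of Win32. This is a minimal sketch, not a reconstruction of that program; the size argument and behavior are invented for illustration:

    #include <windows.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv) {
        // Megabytes to hold, e.g. "ballast 1024" grabs 1 GB.
        SIZE_T mb = (argc > 1) ? atoi(argv[1]) : 512;
        char *p = (char *)VirtualAlloc(NULL, mb << 20,
                                       MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
        if (!p) { fprintf(stderr, "VirtualAlloc failed\n"); return 1; }
        // Touch every page so the memory is actually resident, not merely committed.
        for (SIZE_T i = 0; i < (mb << 20); i += 4096) p[i] = 1;
        printf("Holding %lu MB; press Enter to release.\n", (unsigned long)mb);
        getchar();
        return 0;
    }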

Related

Dynamic frequency scaling

I would like to adjust the CPU frequency; in other words, I am looking for an API or C++ code for frequency scaling in Windows.
In Windows, you can call SetPriorityClass to set the priority class of a process, and SetThreadPriority to set the priority of an individual thread.
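A minimal sketch of both calls; note this lowers scheduling priority rather than clock speed, so the process only slows down when something else wants the CPU:

    #include <windows.h>

    int main() {
        // Make the whole process yield to normal-priority work...
        SetPriorityClass(GetCurrentProcess(), BELOW_NORMAL_PRIORITY_CLASS);
        // ...and lower the current thread within the process as well.
        SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);
        // ...the rest of the program runs as usual...
        return 0;
    }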
The CPU clock speed is not something for which there are just some simple instructions to execute. The clock speed is controlled by the motherboard chipset, and that in turn is controlled by a motherboard-specific device driver.
You can get some control over the clock speed by using the Windows settings for power management. The usual way to slow things down and save energy is to choose a setting on this basis. Modern laptop, tablet, and phone computers have extremely sophisticated frequency-scaling algorithms, but you can hint them in the direction of less power.
You may be able to automate the operation of these Windows programs, if that's all you need.
Many motherboards come with the ability to overclock, and a utility to control it. If you have such a motherboard you may be able to find a way to automate its control program, or it may provide an API. It will not be a generic solution, but one highly specific to the motherboard. Check with your motherboard supplier.
Is there a general Windows capability to do this? Not so far as I know, but there could be something hiding in there somewhere. If it exists, it will be a privileged call to a device driver requiring admin rights. My bet is that it doesn't.
You can use PowerWriteACValueIndex() / PowerWriteDCValueIndex() together with PowerSetActiveScheme(NULL, pwrGUID) to change processor power settings programmatically.
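Pieced together, that looks roughly like the sketch below (Vista or later; link against PowrProf.lib; the 50% cap is an arbitrary example value):

    #include <initguid.h>   // so the power-setting GUIDs get defined
    #include <windows.h>
    #include <powrprof.h>
    #pragma comment(lib, "PowrProf.lib")

    int main() {
        GUID *scheme = NULL;
        if (PowerGetActiveScheme(NULL, &scheme) != ERROR_SUCCESS) return 1;
        // Cap the maximum processor state at 50% on AC and battery power.
        PowerWriteACValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                               &GUID_PROCESSOR_THROTTLE_MAXIMUM, 50);
        PowerWriteDCValueIndex(NULL, scheme, &GUID_PROCESSOR_SETTINGS_SUBGROUP,
                               &GUID_PROCESSOR_THROTTLE_MAXIMUM, 50);
        PowerSetActiveScheme(NULL, scheme);  // re-apply so the change takes effect
        LocalFree(scheme);
        return 0;
    }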

Testing perceived performance

I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing.
Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...).
I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
Set up a virtual machine, give it as little RAM as needed, and you can also have it use one, two, or more CPUs. I like VirtualBox myself; install your app and test with different RAM configs.
Personally, I'd get an old used crappy computer that is typical of what the users have and test on that. It should be cheap and you will see pretty fast how bad things are.
I think the only way to deal with this is through proper end-user testing, i.e. get yourself a "typical" system for testing and use that to identify any perceptible performance bottlenecks.
You can try out either Virtual PC or VMware Player/Workstation, load an OS onto it, and then throttle back the resources. I know that with any of those tools you can reduce the memory to whatever you'd like. You can also specify the number of cores you want to use. You might even be able to adjust the clock speed in VMware Workstation, but I'm not sure.
I upvoted SQLMenace's answer, but I also think that profiling needs to be mentioned: no matter how quickly the code is executing, you'll still see what's taking the most time. If you find yourself with some free time, profiling and investigating the results is a good way to spend it.

What performance indicators can I use to convince management that I need my development PC upgraded?

At work, my PC is slow. I feel that I could be way more productive if I just wasn't waiting for Visual Studio and everything else to respond. My PC isn't bad (dual-core, 3 GB of RAM), but there is a lot of corporate software and whatnot slowing everything down and sometimes locking it up.
Now, some developers have begun getting Windows 7 machines with 8 GB of RAM. Of course, I start salivating at this. However, I was told that I "had to justify" why I should get a new machine.
I can think of a lot of different things, but I am curious as to what everyone else on SO would have to say.
NOTE: Ideally, these reasons should be specifically related to .NET development in Visual Studio on a Windows machine. This isn't a "how can I make my machine faster" question.
I would ask myself "What am I waiting on?" And then let the answer to that question drive whether or not I felt like I could justify it.
For example, right now I'm dealing with 90-minute compiles of the project I'm working on. Would a faster machine help that? A little. But more impactful would be sane configuration management, so I'm pushing that way (to no avail) rather than going the hardware route.
Bring in a chess clock. When you are waiting, start the clock; when you aren't, stop it. At the end of the day, total up the time, multiply it by your pay rate, then multiply by 2000 (roughly the number of working hours in a year), and that is a reasonable upper limit on the amount of money the company is squandering per year because of your slow machine.
Most useful metric: How much time do you spend reading The Onion (or, these days, StackOverflow)?
This is item #9 on The Joel Test:
9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.
I agree with the "what is holding me up?" approach.
I start by improving workflow, looking at repetitive things I do that can be automated or that a little helper tool can fix. Helper tools don't take long to write and add a lot of productivity. Purchasing tools is also a good return on your time: a lot of things you could write yourself, you shouldn't bother with. Concentrate on your core activity and let the tool makers concentrate on theirs, whether it is help software, screen grabbing, SEO tools, debugging tools, whatever.
If you can't improve things by changing your workflow (and I'd be surprised if you can't), then look at your hardware.
Increase memory if you can. If you're at 3 GB with a 32-bit OS, there's no point going any further.
Add independent disks. One disk for the OS, another for your build drive. That way there is less contention for disk access between the OS and the compiler. It makes a difference.
Better CPU. Only valid if you are doing the work to justify it.
Example: What do I use?
Dual Xeon Quad Core (8 cores, total)
8 GB RAM
Dual Monitors
VMWare virtual machines
What are the benefits?
Dual monitors are great, much better than a single 1920x1200 screen.
Having lots of memory when using virtual machines is great, because you can give a VM a realistic amount of memory (2 GB) without starving the host machine.
Having 8 cores means I can run a build on the host while doing a build or a debug session in a VM at the same time, no problems.
I've had this machine for some time. It's old hat compared to a Core i7 machine, but it's more than fast enough for any developer. Very rarely have I seen all the cores close to maxing out (with that much CPU power you're pretty much going to be held back by I/O, which is why I commented on multiple disks).
For me (working in a totally different environment, where JBoss, Eclipse and Firefox are the main resource sinks), it was simple enough:
"I have 2GBs of RAM available. I'm spending most of my time with 1GB of swap in use: imagine what task switching and application building looks like there. Another 2GB of RAM costs 50 euro. Ignoring the fact that it's frustrating working like this, you do the productivity math."
I could have shown CPU load figures and application build times as well, but it didn't come to that. It took them a month or two, but boy is development a joy since then! Oh, and for performance, it's likely you'd do best with Windows XP, but you probably already know that. ;]
Use some performance monitor to determine the cause.
For me, it's the antivirus: it has some kind of critical resource leak that slows down I/O after a few days and forces a reboot, so no hardware upgrade will help much.
The justification will need hard data to back it. If it's their corporate software causing the problem, then "this is industry standard" obviously doesn't fly anymore. Maybe they'll realize their business software sucks and fix that instead.

How do I limit RAM to test low memory situations?

I'm trying to reproduce a bug that seems to appear when a user is using up a bunch of RAM. What's the best way to either limit the available RAM the computer can use, or fill most of it up? I'd prefer to do this without physically removing memory and without running a bunch of arbitrary, memory-intensive programs (e.g., Photoshop, Quake, etc.).
Use a virtual machine and set resource limits on it to emulate the conditions that you want.
VMware is one of the leaders in this area, and they have the free VMware Player, which lets you do this.
I'm copying my answer from a similar question:
If you are testing a native/unmanaged/C++ application, you can use AppVerifier and its Low Resource Simulation setting, which uses fault injection to simulate errors in memory allocations (among many other things). It's also really useful for finding a ton of other subtle problems that often lead to application crashes.
You can also use consume.exe, which is part of the Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 Service Pack 1, to easily use up memory, disk space, CPU time, the page file, or kernel pool and see how your application handles the lack of available resources. (Does it crash? How is the performance affected? etc.)
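For example, something like this should chew up physical memory for ten minutes (flag names as I remember them from the SDK copy I've used; check consume.exe -? on yours):

    consume.exe -physical-memory -time 600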
Use either a job object (on Windows) or ulimit(1) (on Unix).
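The job-object route looks roughly like this minimal sketch; target.exe and the 128 MB limit are placeholders:

    #include <windows.h>

    int main() {
        // Create a job whose processes may each commit at most 128 MB.
        HANDLE job = CreateJobObjectA(NULL, NULL);
        JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
        limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        limits.ProcessMemoryLimit = 128 * 1024 * 1024;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                &limits, sizeof(limits));

        // Start the app under test suspended, add it to the job, then let it run.
        STARTUPINFOA si = { sizeof(si) };
        PROCESS_INFORMATION pi = {};
        char cmd[] = "target.exe";  // placeholder for the app under test
        if (CreateProcessA(NULL, cmd, NULL, NULL, FALSE,
                           CREATE_SUSPENDED, NULL, NULL, &si, &pi)) {
            AssignProcessToJobObject(job, pi.hProcess);
            ResumeThread(pi.hThread);
        }
        return 0;
    }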
Create a virtual machine and set the RAM to what you need.
The one I use is VirtualBox from Sun.
http://www.virtualbox.org/
It is easy to set up.
If you are developing in Java, you can set the memory limits for the JVM at startup (for example, the -Xmx64m flag caps the heap at 64 MB).

Determining recommended system requirements

We recently changed some of our system requirements on a lightweight application (it is essentially a thin GUI client that connects to a "mainframe" running IBM UniVerse). We didn't change our minimum requirements at all, but changed our recommended requirements to match those of Windows 7 and Vista (since we run on those machines).
Some system requirements are fairly easy to determine (e.g., network card, hard drive space, etc.). But CPU and RAM are harder to nail down.
Our current minimum requirements for CPU and RAM both state that you have to meet the minimums for your operating system. That seems fairly reasonable to us, since our app uses only 15 MB of active memory and very little CPU (it's a simple GUI, in this case). No one complains about that.
When it comes to recommended requirements, though, we've run into trouble nailing down specifics, especially nowadays, when saying "minimum 1.6 GHz" (or similar) can mean anything once you start talking about multi-core processors, Atom processors, etc. The thin client is starting to do more intensive things (it now contains an embedded web browser to help display more user-friendly HTML pages, for example).
What would be a good way to go about determining recommended values for CPU and RAM?
Do you take the recommended requirements for the OS and add your usage values on top (so do we then say 1 GB for Vista machines)?
Is there a better way to do so?
(Note: this is similar in nature to the server question here, but from an application base instead)
Let's try this from another perspective.
First, test your application on a minimum configuration machine. What bottlenecks if any exist?
Does it cause a lot of disk swapping? If so, you need more RAM.
Is it generally slow when performing regular operations (excluding memory pressure)? Then increase the processor requirement.
Does it require disk space beyond the app's footprint, such as for file handling? List that.
Does your app depend on certain instruction sets being on the chip (SSE, Execute Disable Bit, Intel Virtualization, etc.)? If so, you have to list which processors will actually work with the app.
Typically speaking, if the app works fine when using a minimum configuration for the OS; then your "recommended" configuration should be identical to the OS's recommended.
At the end of the day, you probably need to have a couple of machines on hand to profile. Virtual machines are NOT a good option in this case. By definition, the VM and the host OS will have an impact. Further, just because you can throttle a certain processor down doesn't mean that it is running at a similar level to a processor normally built for that level.
For example, a Dual Core 1.8 GHz processor throttled to only use one core is still a very different beast than a P4 1.8 GHz processor. There are architectural differences as well as L2 and L3 cache changes.
By the same token, a machine with a P4 processor uses a different type of RAM than one with a dual core (DDR vs DDR2). RAM speeds do have an impact.
So, try to stick to the OS recommendations as they've already done the hard part for you.
Come up with some concrete non-functional requirements relating to things like latency of response, throughput, and startup time, and then benchmark them on a few varied machines. Then attempt to extrapolate what hardware will allow a typical user to have an experience that matches your requirements.
For determining the CPU and RAM you could try using Microsoft Virtual PC which allows you to set your CPU and RAM settings. You can then test a few different setups to see what would be sufficient for a regular user.
As for the recommended requirements, adding them on top of the basic OS requirements would probably be the safe bet.
Microsoft introduced the Windows Experience Index in Vista to solve this exact problem.
UPDATE FOR MORE INFO
It takes into consideration the entire system. Bear in mind that they may have a minimum level processor, but if they have a crap video card then a lot of processor time is going to be spent just drawing the windows... If you pick a decent experience index number like 3.0 then you can be reasonably assured that they will have a good experience with your application. If you require more horsepower, bump up the requirements to 4.0.
One example is the Dell I'm using to type this on. It's a two-year-old machine but still registers 4.2 on the experience index. Most business-class machines should be able to register at least a 3, which should be enough horsepower for the app you described.
Incidentally, my 5 year old laptop registers as a 2.0 and it was mid level at the time I purchased it.
