Slowing down computer for debugging intermittent defect

Is there a way to slow down my development computer to try to replicate a defect that only occurs intermittently on a slow machine?
(For what it's worth, Ableton Live has a CPU usage simulation feature, but I've never seen something like this for debuggers.)

Prime95 provides a decent CPU stress capability. I assume that's what you mean by "slow down". :) (A minimal do-it-yourself sketch follows the list below.)

The currently popular stability test programs are:
- Prime95 (this program's torture test)
- 3DMark2001
- CPU Stability Test
- SiSoft Sandra
- Quake and other games
- Folding@home
- SETI@home
- Genome@home
This is from the stress-testing documentation for Prime95.
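If you'd rather not install anything, a tiny busy-loop script is often enough to peg the CPU. A minimal sketch using only Python's standard library (the decision to load every core is an assumption; reduce the worker count to leave headroom for the app under test):

```python
import multiprocessing

def burn():
    # Spin forever; each worker saturates one core.
    while True:
        pass

if __name__ == "__main__":
    # One busy-loop worker per core loads the whole CPU; daemon
    # processes are killed automatically when the script exits.
    for _ in range(multiprocessing.cpu_count()):
        multiprocessing.Process(target=burn, daemon=True).start()
    input("CPU loaded; press Enter to stop.\n")
```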

An old question, but I'd also suggest VMware Workstation. You can allocate more or fewer resources to virtual machines, and you can record and play back the execution of the machine, so you can catch the bug in the act and then step through it at your leisure.

Related

Is there any way to slow down webserver in Visual Studio 9.X

We're developing locally and sometimes it would be nice to test under the slightly slower conditions that are experienced when on QA or production. Is there any way to slightly throttle webserver.exe, under Visual Studio 9.x?
I don't know if there is a feature for that, but when I need something similar, I create virtual PCs with less RAM and just one CPU. You can give it a try; it's free:
http://www.microsoft.com/windows/virtual-pc/default.aspx
Cheers
After some quick thinking, here are the ideas that came to mind:
1. Use a CPU slowdown utility. I used those ages ago and don't know whether they still work on modern OSes.
2. Use Task Manager to set a low priority on the webserver process (this can be scripted too; see the sketch after this list).
3. Write a small program that uses lots of I/O and CPU to slow down your other processes.
If option 2 works, it is probably the best solution, because options 1 and 3 will slow down your whole machine. Otherwise, the VM approach suggested by nWorx is also a good idea.
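For option 2, here's a minimal sketch using the third-party psutil package; the process name webserver.exe is taken from the question, so substitute whatever Task Manager actually shows on your machine:

```python
import psutil  # third-party: pip install psutil

TARGET = "webserver.exe"  # name from the question; verify in Task Manager

# Find the webserver process and drop it to a low priority class.
for proc in psutil.process_iter(["name"]):
    if proc.info["name"] and proc.info["name"].lower() == TARGET:
        # BELOW_NORMAL keeps it responsive; IDLE_PRIORITY_CLASS
        # would give an even stronger slowdown.
        proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS)
        print(f"Lowered priority of PID {proc.pid}")
```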

Testing perceived performance

I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing.
Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...).
I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
Set up a virtual machine, give it as little RAM as needed, and configure it to use one, two, or more CPUs. I like VirtualBox myself. Install your app and test with different RAM configs.
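If you go the VirtualBox route, the limits can also be scripted through its VBoxManage command-line tool. A sketch driving it from Python's standard library (the VM name "TestXP" is a placeholder; --memory, --cpus, and --cpuexecutioncap are documented VBoxManage modifyvm options):

```python
import subprocess

VM_NAME = "TestXP"  # placeholder; use your actual VM's name

# Shrink the VM to 512 MB RAM and one CPU, and cap that CPU at 50%
# of a host core, to approximate a slower end-user machine.
# Note: modifyvm requires the VM to be powered off.
subprocess.run(
    ["VBoxManage", "modifyvm", VM_NAME,
     "--memory", "512",
     "--cpus", "1",
     "--cpuexecutioncap", "50"],
    check=True,
)
```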
Personally, I'd get an old used crappy computer that is typical of what the users have and test on that. It should be cheap and you will see pretty fast how bad things are.
I think the only way to deal with this is through proper end-user testing, i.e. get yourself a "typical" system for testing and use that to identify any perceptible performance bottlenecks.
You can try out either Virtual PC or VMware Player/Workstation, load an OS onto it, and then throttle back the resources. I know that with any of those tools you can reduce the memory to whatever you'd like. You can also specify the number of cores you want to use. You might even be able to adjust the clock speed in VMware Workstation... I'm not sure.
I upvoted SQLMenace's answer, but I also think profiling needs to be mentioned: no matter how quickly the code is executing, you'll still see what's taking the most time. If you find yourself with some free time, profiling and investigating the results is a good way to spend it.
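For instance, in Python the standard-library profiler gives you the "what's taking the most time" view in a few lines (do_work is a stand-in for the operation you'd actually profile):

```python
import cProfile
import pstats

def do_work():
    # Stand-in for the operation you want to profile.
    return sum(i * i for i in range(1_000_000))

# Profile the call, then print the ten most expensive functions.
cProfile.run("do_work()", "profile.out")
pstats.Stats("profile.out").sort_stats("cumulative").print_stats(10)
```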

What performance indicators can I use to convince management that I need my development PC upgraded?

At work, my PC is slow. I feel that I can be way more productive if I just wasn't waiting for Visual Studio and everything else to respond. My PC isn't bad (dual-core, 3GB of RAM), but there is a lot of corporate software and whatnot to slow everything down and sometimes lock it up.
Now, some developers have begun getting Windows 7 machines with 8 GB of RAM. Of course, I start salivating at this. However, I was told that I "had to justify" why I should get a new machine.
I can think of a lot of different things, but I am curious as to what everyone else on SO would have to say.
NOTE: Ideally, these reasons should be specifically related to .NET development in Visual Studio on a Windows machine. This isn't a "how can I make my machine faster" question.
I would ask myself, "What am I waiting on?" and then let the answer to that question drive whether I felt I could justify it.
For example, right now I'm dealing with 90-minute compiles of the project I'm working on. Would a faster machine help that? A little. But sane configuration management would have more impact, so I'm pushing that way (to no avail) rather than the hardware route.
Bring in a chess clock.
If you are waiting, start the clock; when you aren't, stop the clock. At the end of the day, total up the time, multiply it by your pay rate, multiply it by 2000, and that is a reasonable upper limit on the amount of money the company is squandering on you per year due to a slow machine.
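(A purely illustrative reading of that arithmetic: treat the total as the fraction of your day spent waiting. Lose 30 minutes of an 8-hour day and that's 1/16 of your time; at a $50/hour rate, 50 / 16 × 2000 ≈ $6,250 per year.)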
Most useful metric: How much time do you spend reading The Onion (or, these days, StackOverflow)?
This is item #9 on The Joel Test:
9. Do you use the best tools money can buy?
Writing code in a compiled language is one of the last things that still can't be done instantly on a garden variety home computer. If your compilation process takes more than a few seconds, getting the latest and greatest computer is going to save you time. If compiling takes even 15 seconds, programmers will get bored while the compiler runs and switch over to reading The Onion, which will suck them in and kill hours of productivity.
I agree with the "what is holding me up?" approach.
I start by improving workflow: looking at repetitive things I do that can be automated, or that a little helper tool can fix. Helper tools don't take long to write and add a lot of productivity. Purchasing tools is also a good return on your time. A lot of things you could write, you shouldn't bother with; concentrate on your core activity and let the tool makers concentrate on theirs, whether it is help software, screen grabbing, SEO tools, debugging tools, whatever.
If you can't improve things by changing your workflow (and I'd be surprised if you can't), then look at your hardware.
- Increase memory if you can. If you're at 3GB with a 32-bit OS, there's no point going any further.
- Add independent disks: one disk for the OS, another for your build drive. That way there is less contention for disk access between the OS and the compiler. It makes a difference.
- Better CPU. Only valid if you are doing the work to justify it.
Example: what do I use?
- Dual quad-core Xeon (8 cores total)
- 8 GB RAM
- Dual monitors
- VMware virtual machines
What are the benefits?
- Dual monitors are great, much better than a single 1920x1200 screen.
- Having lots of memory when using virtual machines is great, because you can give a VM a realistic amount of memory (2GB) without killing the host machine.
- Having 8 cores means I can run a build on the host and mess about in a VM doing a build or a debug at the same time, no problem.
I've had this machine for some time. It's old hat compared to a Core i7 machine, but it's more than fast enough for any developer. Very rarely have I seen all the cores close to maxing out (with that much CPU power you're pretty much going to be held back by I/O, which is why I commented on multiple disks).
For me (working in a totally different environment, where JBoss, Eclipse and Firefox are the main resource sinks), it was simple enough:
"I have 2GBs of RAM available. I'm spending most of my time with 1GB of swap in use: imagine what task switching and application building looks like there. Another 2GB of RAM costs 50 euro. Ignoring the fact that it's frustrating working like this, you do the productivity math."
I could have shown CPU load figures and application build times as well, but it didn't come to that. It took them a month or two, but boy is development a joy since then! Oh, and for performance, it's likely you'd do best with Windows XP, but you probably already know that. ;]
Use a performance monitor to determine the cause.
In my case, the antivirus has some kind of critical resource leak that slows down I/O after a few days and requires a reboot; no hardware upgrade will help much with that.
The justification will need hard data to back it. If their business software is causing the problem, then "this is industry standard" obviously doesn't fly anymore. Maybe they'll realize their business software sucks and fix that instead.

Measure CPU / RAM usage of a program

Could anyone suggest a way (other than using Task Manager) to track and log a program's usage of CPU and RAM in order to profile its performance?
I'm working under Windows.
Something generic would be useful; a more specific, Visual Studio-based solution would also work. I've tried the Performance Wizard, but it doesn't seem to give me the information I need. Thanks.
Process Explorer can be useful.
You can use the perfmon utility to gather various counters.
Well, there are published APIs for that sort of thing. You might want to take a look at WMI and the Win32_Process class.
If you're looking for command-line programs that get those things for you, there are tasklist and wmic. You can parse their output if you're so inclined.
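If you'd rather script it than parse command output, here's a minimal logging sketch with the third-party psutil package (the PID 1234 is a placeholder for the process you're profiling):

```python
import time
import psutil  # third-party: pip install psutil

def log_usage(pid, interval=1.0):
    # Print CPU % and resident memory of the process once per interval.
    proc = psutil.Process(pid)
    proc.cpu_percent(None)  # prime the counter; the first call returns 0.0
    while proc.is_running():
        cpu = proc.cpu_percent(interval)            # % over the interval
        rss = proc.memory_info().rss / (1024 ** 2)  # resident set size, MB
        print(f"{time.strftime('%H:%M:%S')}  cpu={cpu:5.1f}%  mem={rss:7.1f} MB")

if __name__ == "__main__":
    log_usage(1234)  # placeholder PID of the program being measured
```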
The Microsoft Platform SDK includes the Windows Performance Toolkit, which tracks CPU, disk, and memory usage over time (along with a ton of other features). It's very handy for tracking down spikes of CPU/memory usage, as well as tracking down issues like why your laptop won't sleep.
How about Intel VTune?
I view measuring performance and finding performance problems (so as to make the program faster) as two distinctly different goals.
For measuring, profilers or simple timers get the job done (a timer sketch follows below).
For finding performance problems, I take an entirely different approach.
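As an illustration of the timer route, a small wall-clock helper in Python (the "load document" label and the sleep are stand-ins for a real operation):

```python
import time
from contextlib import contextmanager

@contextmanager
def timed(label):
    # Minimal wall-clock timer for end-to-end measurements.
    start = time.perf_counter()
    try:
        yield
    finally:
        print(f"{label}: {time.perf_counter() - start:.3f}s")

# Wrap the operation whose perceived duration you care about.
with timed("load document"):
    time.sleep(0.25)  # stand-in for the real operation
```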

How to benchmark virtual machines

I am trying to perform a fair comparison of XenServer vs ESX and one comparison I would like to make is performance with multiple VMs. Does anyone know how to go about benchmarking VM performance in a fair way?
On each server I would like to run a fixed number of XP/Vista VMs (for example 8) and have some measure of how quickly each one runs when under load. Ideally I would like some benchmark of the overall system (CPU/Memory/Disk/Network) rather than just one aspect.
It seems to me that doing this in a way that yields meaningful results is actually very tricky, so I would be grateful for any suggestions!
I would also be interested to see any existing reports or comparisons that have been published (preferably independent rather than vendor biased!)
As a general answer, VMware (together with other virtualization vendors in the SPEC Virtualization sub-committee) has put together a hypervisor benchmarking suite called VMmark that is available for download. The VMmark website discusses why this benchmark may be useful for comparing hypervisors, including an FAQ and a whitepaper describing the benchmark.
That said, if you are looking for something very specific (e.g., how will it perform under your workload), you may have to roll your own variants of VMmark, especially if you are not trying to do the sorts of things that VMmark benchmarks (e.g., web servers, database servers, file servers, etc.) Nonetheless, the methodology behind its development should be of interest.
Disclaimer: I work for VMware, though not on VMmark.
I don't see why you can't use common benchmarks inside the VMs: WinSAT, PassMark, Futuremark, SiSoftware, etc. Run the VMs on the different hosts and see how it goes.
As an aside, benchmarks that don't closely match your intended usage may actually hinder your evaluation. Depending on the importance of getting this right, you may have to build-your-own to make it relevant.
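If you do end up building your own, even a crude fixed-workload timer run identically inside each VM gives comparable numbers. A minimal sketch (the workload sizes are arbitrary stand-ins for your real mix; extend it with network and memory phases as needed):

```python
import os
import tempfile
import time

def cpu_work():
    # Fixed CPU-bound workload: a large sum of squares.
    return sum(i * i for i in range(2_000_000))

def disk_work(mb=64):
    # Fixed disk workload: write and read back a scratch file.
    chunk = os.urandom(1024 * 1024)
    with tempfile.TemporaryFile() as f:
        for _ in range(mb):
            f.write(chunk)
        f.flush()
        f.seek(0)
        while f.read(1024 * 1024):
            pass

# Run the same script in every VM and compare the timings.
for name, fn in [("cpu", cpu_work), ("disk", disk_work)]:
    start = time.perf_counter()
    fn()
    print(f"{name}: {time.perf_counter() - start:.2f}s")
```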
Why do you want to bench?
How about some anecdotal evidence?
I'm going to assume this is a test environment, because you're wanting to benchmark on XP/Vista. Please correct me if I'm wrong.
My current test environment is about 20 VMs with varying OSes (2000/XP/Vista/Vista64/Server 2008/Server 2003) in different configurations on a dual quad-core Xeon machine with 8GB RAM (looking to upgrade to 16GB soon), and the slowest machines of all are the Vista ones, primarily due to heavy disk access (even with Windows Defender disabled).
Recommendations
- Hardware RAID. Too painful to run Vista VMs otherwise.
- More RAM.
If you're benchmarking and looking to run Vista VMs, I would suggest putting your focus on benchmarking disk access. If there are going to be performance differences elsewhere I doubt they would be of anything significant.
I recently came across VMware's ESX Performance documentation - http://www.vmware.com/pdf/VI3.5_Performance.pdf. There is quite a bit in there on improving performance and benchmarking.
