I'm trying to reproduce a bug that seems to appear when a user is using up a bunch of RAM. What's the best way to either limit the available RAM the computer can use, or fill most of it up? I'd prefer to do this without physically removing memory and without running a bunch of arbitrary, memory-intensive programs (e.g., Photoshop, Quake, etc.).
Use a virtual machine and set resource limits on it to emulate the conditions that you want.
VMware is one of the leaders in this area, and they have a free VMware Player that lets you do this.
I'm copying my answer from a similar question:
If you are testing a native/unmanaged/C++ application, you can use AppVerifier and its Low Resource Simulation setting, which uses fault injection to simulate errors in memory allocations (among many other things). It's also really useful for finding a ton of other subtle problems that often lead to application crashes.
You can also use consume.exe, which is part of the Microsoft Windows SDK for Windows 7 and .NET Framework 3.5 Service Pack 1, to easily use up a lot of memory, disk space, CPU time, page file, or kernel pool and see how your application handles the lack of available resources. (Does it crash? How is the performance affected? etc.)
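If you'd rather not install the whole SDK, here's a minimal C++ sketch of the same idea as consume.exe's physical-memory mode (my own illustration, not part of the SDK; the chunk size and the amount to leave free are arbitrary). It commits memory in chunks and touches every page so the allocations actually occupy physical RAM rather than just address space:

```cpp
// eat_memory.cpp - grab and hold most of the available RAM (Windows).
// Illustrative sketch only; tune the chunk size and free-memory target.
#include <windows.h>
#include <cstdio>
#include <cstdlib>
#include <cstring>
#include <vector>

int main(int argc, char* argv[]) {
    // How many MB to leave free; default 256, overridable on the command line.
    SIZE_T leaveFreeMb = (argc > 1) ? atoi(argv[1]) : 256;
    const SIZE_T chunk = 64 * 1024 * 1024; // allocate in 64 MB chunks

    std::vector<void*> blocks;
    for (;;) {
        MEMORYSTATUSEX ms = { sizeof(ms) };
        GlobalMemoryStatusEx(&ms);
        if (ms.ullAvailPhys <= leaveFreeMb * 1024 * 1024) break;

        void* p = VirtualAlloc(nullptr, chunk, MEM_COMMIT | MEM_RESERVE,
                               PAGE_READWRITE);
        if (!p) break; // commit charge exhausted
        // Touch every page so the OS actually backs it with physical RAM.
        memset(p, 0xAB, chunk);
        blocks.push_back(p);
        printf("Committed %zu MB, %llu MB physical was left\n",
               blocks.size() * (chunk >> 20), ms.ullAvailPhys >> 20);
    }
    printf("Holding memory; press Enter to release and exit.\n");
    getchar();
    return 0; // process exit releases everything
}
```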
Use either a job object (on Windows) or ulimit(1) (on Unix; e.g., `ulimit -v` caps a shell's virtual memory).
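For the job-object route, a minimal sketch (the target executable name and the 128 MB cap are just placeholders) that launches a process with a per-process memory limit:

```cpp
// job_limit.cpp - launch a process inside a job object with a memory cap.
#include <windows.h>

int main() {
    HANDLE job = CreateJobObjectW(nullptr, nullptr);

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION info = {};
    info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    info.ProcessMemoryLimit = 128 * 1024 * 1024; // 128 MB cap (placeholder)
    SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                            &info, sizeof(info));

    // Start the target suspended so we can put it in the job before it runs.
    STARTUPINFOW si = { sizeof(si) };
    PROCESS_INFORMATION pi = {};
    wchar_t cmd[] = L"target_app.exe"; // hypothetical target program
    if (CreateProcessW(nullptr, cmd, nullptr, nullptr, FALSE,
                       CREATE_SUSPENDED, nullptr, nullptr, &si, &pi)) {
        AssignProcessToJobObject(job, pi.hProcess);
        ResumeThread(pi.hThread);
        WaitForSingleObject(pi.hProcess, INFINITE);
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    CloseHandle(job);
    return 0;
}
```

Once the process exceeds the cap, its allocations start failing, which is exactly the low-memory condition you're trying to reproduce.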
Create a virtual machine and set the RAM to what you need.
The one I use is VirtualBox from Sun.
http://www.virtualbox.org/
It is easy to set up.
If you are developing in Java, you can set the memory limits for the JVM at startup; for example, `java -Xms64m -Xmx64m MyApp` caps the heap at 64 MB.
Related
I am evaluating a system architecture where the application is split on two virtual servers on the same hardware.
The reason is that the overall system will perform better if a certain part of it runs on its own virtual server. Is this correct?
I would think that if the processes run on the same hardware, splitting them across two servers would add communication overhead compared with installing everything on one virtual server.
To make sure I understand: it sounds like you're asking, more or less, how two (or more) virtual machines might "compete" with one another, and how separating the processes that run on them might affect overall performance?
Quick Answer: In short, the good news is that you can control how the VMs "fight" over resources very tightly if you wish. This will keep VMs from competing with each other over things like RAM, CPU, etc., and can improve the overall performance of your application. I think you're interested in two main things: VM reservations/limits/shares, and resource pools. Links are included below.
In-depth Answer: In general, it's a great idea to separate the functionality of your application. A web server and a DB server running on different machines is a perfect example of this. I don't know about your application in particular, but if it's not leveraging multi-threading (to enable the use of multiple processors) already, separating your application onto two servers might really help performance. Here's why:
VMs understand, in a sense, what hardware they're running on. This means they know the CPU, RAM, and disk space available to them. So let's say your ESX server has 4 CPUs and 16 GB of RAM. When you create your VMs, you're free to give 2 CPUs and 8 GB of RAM to each server, or you can alter the settings to meet your needs. In VMware, you can guarantee a certain resource level using limits, shares, and reservations. Documentation on them can be found here: http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.vmadmin.doc_41/vsp_vm_guide/configuring_virtual_machines/t_allocate_cpu_resources.html, among other places. These reservations will help you guarantee that a certain VM always has access to a certain level of resources and will keep VMs from causing contention over RAM, CPU, etc.

VMware also offers an extension of this idea called "resource pools", which are pools of RAM, CPU, etc. that can be set aside for certain machines. You can read about them here: http://pubs.vmware.com/vsphere-4-esx-vcenter/index.jsp?topic=/com.vmware.vsphere.resourcemanagement.doc_41/managing_resource_pools/c_why_use_resource_pools.html.
I recently got a shiny new development workstation. The only disadvantage of this is that the desktop apps I'm developing now run very, very fast, and so I fear that parts of the code that would be annoyingly slow on end users' machines will go unnoticed during my testing.
Is there a good way to slow down an application for testing? I've tried searching around, but all of the results I've been able to find seem pretty fiddly to set up (e.g., manually setting up a high-priority CPU-bound task on the same CPU core as the target app, or running a background process that rapidly interrupts and resumes the target app), and I don't know if the end result is actually a good representation of running on a slower computer (with its slower CPU, slower RAM, slower disk I/O...).
I don't think that this is a job for a profiler; I'm interested in the user's perception of end-to-end performance rather than in where the time goes for particular operations.
Set up a virtual machine, give it as little RAM as needed, and configure it to use one, two, or more CPUs. I like VirtualBox myself. Install your app and test with different RAM configurations.
Personally, I'd get an old used crappy computer that is typical of what the users have and test on that. It should be cheap and you will see pretty fast how bad things are.
I think the only way to deal with this is through proper end-user testing, i.e. get yourself a "typical" system for testing and use that to identify any perceptible performance bottlenecks.
You can try out either Virtual PC or VMware Player/Workstation, load an OS onto it, and then throttle back the resources. I know that with any of those tools you can reduce the memory to whatever you'd like, and you can also specify the number of cores you want to use. You might even be able to adjust the clock speed in VMware Workstation... I'm not sure.
I upvoted SQLMenace's answer, but I also think that profiling needs to be mentioned; no matter how quickly the code is executing, you'll still see what's taking the most time. If you find yourself with some free time, profiling and investigating the results is a good way to spend it.
How do I reversibly slow down a PC running XP?
I want to achieve this without visibly consuming CPU cycles, so I'm guessing some hardware settings might do it.
I don't want my app to run slow, I want the whole OS to be slow. I know some network lookups especially out of a trusted environment (think Active Directory) slow a PC way down. This is the effect I want.
Disclaimer: this is not for a bad/evil/illegal cause!
We use a 'crippled' server we call doofus for load testing. It is an old P3/500 box with limited RAM.
Another option is setting up a VM with very limited resources.
Use powercfg.exe to force the computer onto a power scheme you've created that locks the CPU at a lower frequency to conserve power (something like `powercfg /setactive <scheme>`). Which states are available depends on your platform (most desktops only have a couple).
If you think your hardware setup can handle it, some motherboards let you manually specify a clock-speed multiplier or other speed settings in the BIOS. Often there'll be an option for a slower speed or a field where you can manually enter a lower multiplier.
If not, you might consider setting up a virtual machine and making sure it's not paravirtualized or hardware-assisted; a fully emulated machine runs slower because of the translation work that takes place in the virtualization layer.
The open source Bochs emulator is pretty easy to slow down by editing its config file. Windows XP will run in it. It is not as powerful as VMware, but there are many more configuration options.
Look at the documentation for the config file, "bochsrc", and particularly the "IPS" entry (Instructions Per Second); depending on the Bochs version, it's a line along the lines of `cpu: count=1, ips=4000000`.
Remove the thermal paste and put some dust on the CPU :-) Also, remove some RAM.
You may want to take a look at a full-system simulator such as Simics. Simics allows you to deterministically simulate an entire system (including networks, if you want). Not only can you tweak the CPU frequency, you can study the system in detail to see how it behaves.
Unfortunately, Simics has quite a pricetag.
If you want to see really dramatic effects very easily, set the /MAXMEM switch in boot.ini (e.g., /MAXMEM=256) or use msconfig. This will limit the amount of memory the system uses; dropping to 256 MB or lower will make things very, very slow.
You have lots of options. Things I can think of:
Change your disks to good old-fashioned IDE. None of that high-speed DMA stuff, just good old-fashioned PIO.
Remove RAM (or disable it in the BIOS)
Switch to generic video drivers (I mean the "Generic SVGA" type, which are un-accelerated)
Disable core(s) in the BIOS
Slow the CPU down in the BIOS (if possible)
We keep an old laptop around for this reason. It helped me find a subtle timing issue in some splash screen code that was absolutely unreproducible on decent quad-core dev boxes.
Install Norton 360. It makes the mouse cursor lag during updates and constantly nags for restarts.
Disable the L2 cache in the BIOS
Two Windows applications: Mo'Slo and Cpukiller.
I remember hearing of one that grabs large chunks of RAM, to reduce your available RAM, but I forget what it is called.
We recently changed some of our system requirements on a lightweight application (it is essentially a thin GUI client that connects to a "mainframe" that runs IBM UniVerse). We didn't change our minimum requirements at all, but changed our recommended requirements to match those of Windows 7 and Vista (since we run on those machines).
Some system requirements are fairly easy to determine (e.g., network card, hard drive space, etc.). But CPU and RAM are harder to nail down.
Our current minimum requirements for CPU and RAM both state that you have to meet the minimums for your operating system. That seems fairly reasonable to us, since our app uses only 15 MB of active memory and very little CPU (it's a simple GUI, in this case). This seems fine; no one complains about it.
When it comes to recommended requirements, though, we've run into trouble nailing down specifics, especially nowadays, when saying "minimum 1.6 GHz" (or similar) can mean anything once you start talking about multi-core processors, Atom processors, etc. The thin client is starting to do more intensive work (it now contains an embedded web browser to help display more user-friendly HTML pages, for example).
What would be a good way to go about determining recommended values for CPU and RAM?
Do you take the recommendation for the OS and add your usage values on top (so do we then say 1 GB for Vista machines)?
Is there a better way to do so?
(Note: this is similar in nature to the server question here, but from an application base instead)
Let's try this from another perspective.
First, test your application on a minimum-configuration machine. What bottlenecks, if any, exist?
Does it cause a lot of disk swapping? If so, you need more RAM.
Is it generally slow when performing regular operations (excluding memory pressure)? Then increase the processor requirement.
Does it require disk space beyond the app's footprint, such as for file handling? List that.
Does your app depend on certain instruction sets being on the chip (SSE, Execute Disable Bit, Intel Virtualization, etc.)? If so, you have to list which processors will actually work with the app; a detection sketch follows this list.
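As an aside on that last point, here's a small sketch of how the app (or its installer) might check for required features at startup, using the documented Win32 call IsProcessorFeaturePresent. The particular set of required features below is just an example:

```cpp
// feature_check.cpp - refuse to run if required CPU features are missing.
#include <windows.h>
#include <cstdio>

int main() {
    // Example requirement list; tailor to what your app actually needs.
    struct { DWORD feature; const char* name; } required[] = {
        { PF_XMMI_INSTRUCTIONS_AVAILABLE,   "SSE"  },
        { PF_XMMI64_INSTRUCTIONS_AVAILABLE, "SSE2" },
        { PF_NX_ENABLED,                    "Execute Disable (NX)" },
    };
    bool ok = true;
    for (const auto& r : required) {
        if (!IsProcessorFeaturePresent(r.feature)) {
            fprintf(stderr, "Missing required CPU feature: %s\n", r.name);
            ok = false;
        }
    }
    return ok ? 0 : 1;
}
```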
Typically speaking, if the app works fine when using a minimum configuration for the OS, then your "recommended" configuration should be identical to the OS's recommended one.
At the end of the day, you probably need to have a couple of machines on hand to profile. Virtual machines are NOT a good option in this case. By definition, the VM and the host OS will have an impact. Further, just because you can throttle a certain processor down doesn't mean that it is running at a similar level to a processor normally built for that level.
For example, a Dual Core 1.8 GHz processor throttled to only use one core is still a very different beast than a P4 1.8 GHz processor. There are architectural differences as well as L2 and L3 cache changes.
By the same token, a machine with a P4 processor uses a different type of RAM than one with a dual core (DDR vs DDR2). RAM speeds do have an impact.
So, try to stick to the OS recommendations as they've already done the hard part for you.
Come up with some concrete non-functional requirements relating to things like latency of response, throughput, and startup time, and then benchmark them on a few varied machines. Then attempt to extrapolate to what hardware will allow a typical user to have an experience that matches your requirements.
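For example, a crude way to benchmark one such requirement (latency of a key operation) on each candidate machine might look like the sketch below; the busy-loop stands in for whatever your app actually does (opening a document, running a query, etc.):

```cpp
// latency_bench.cpp - time a representative operation, report percentiles.
#include <algorithm>
#include <chrono>
#include <cstdio>
#include <vector>

// Stand-in for a real operation in your application.
void representative_operation() {
    volatile double x = 0;
    for (int i = 0; i < 1000000; ++i) x += i * 0.5;
}

int main() {
    using clock = std::chrono::steady_clock;
    std::vector<double> samples;
    for (int i = 0; i < 100; ++i) {
        auto t0 = clock::now();
        representative_operation();
        auto t1 = clock::now();
        samples.push_back(
            std::chrono::duration<double, std::milli>(t1 - t0).count());
    }
    std::sort(samples.begin(), samples.end());
    printf("median: %.2f ms   p95: %.2f ms\n",
           samples[samples.size() / 2], samples[samples.size() * 95 / 100]);
    return 0;
}
```

Run it on each machine and compare the numbers against your stated requirement, rather than against a raw GHz figure.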
For determining the CPU and RAM, you could try using Microsoft Virtual PC, which allows you to set your CPU and RAM settings. You can then test a few different setups to see what would be sufficient for a regular user.
As for the recommended requirements, adding them on top of the basic OS requirements would probably be the safe bet.
Microsoft introduced the Windows Experience Index in Vista to solve this exact problem.
UPDATE FOR MORE INFO
It takes into consideration the entire system. Bear in mind that they may have a minimum level processor, but if they have a crap video card then a lot of processor time is going to be spent just drawing the windows... If you pick a decent experience index number like 3.0 then you can be reasonably assured that they will have a good experience with your application. If you require more horsepower, bump up the requirements to 4.0.
One example is the Dell I'm using to type this on. It's a 2 year old machine but still registers 4.2 on the experience index. Most business class machines should be able to register at least a 3; which should be enough horsepower for the app you described.
Incidentally, my 5 year old laptop registers as a 2.0 and it was mid level at the time I purchased it.
I'm sure many have noticed that when you have a large application (i.e. something requiring a few MBs of DLLs) it loads much faster the second time than the first time.
The same happens if you read a large file in your application. It's read much faster after the first time.
What affects this? I suppose it's the hard-drive cache, or perhaps the OS adding some memory caching of its own.
What techniques do you use to speed-up the loading times of large applications and files?
Thanks in advance
Note: the question refers to Windows
Added: What affects the size of the OS's cache? In some apps, files load slowly again after a minute or so. Does the cache fill up within a minute?
Two things can affect this. The first is hard-disk caching (done by the disk, which has little impact, and by the OS, which tends to have more impact). The second is that Windows (and other OSes) have little reason to unload DLLs when they're finished with them, unless the memory is needed for something else. This is because DLLs can easily be shared between processes.
So DLLs have a habit of hanging around even after the applications that were using them disappear. If another application decides the DLL is needed, it's already in memory and just has to be mapped into the process's address space.
I've seen some applications pre-load their required DLLs (the feature is usually called QuickStart; I think both MS Office and Adobe Reader do this) so that the perceived load times are better.
Windows's memory manager is actually pretty slick: it services memory requests and acts as the disk cache. With enough free memory on the system, lots of recently accessed files will reside in memory. Until the physical memory is needed for something else, those DLLs will remain in cache, courtesy of the Cache Manager.
As far as how to help, look into delay-loading your DLLs. You get the advantage of calling LoadLibrary only when you actually need a DLL, but automatically, so you don't have to sprinkle LoadLibrary/GetProcAddress over all of your code (automatic in the sense that you just add a linker switch, /DELAYLOAD:yourdll.dll, and link against delayimp.lib):
http://msdn.microsoft.com/en-us/library/yx9zd12s.aspx
Or you could pre-load like Office and the others do (as mentioned above), but I personally hate that; it slows down the computer at initial boot-up.
I see two possibilities:
Preload your libraries at system startup, as already mentioned. Office, OpenOffice, and others do just that.
I am not a great fan of that solution: it makes your boot time longer and eats lots of memory.
Load your DLLs dynamically (see LoadLibrary) only when needed; there's a sketch after this answer. Unfortunately this is not possible with every DLL.
For example, why load at startup a DLL that exports files in XYZ format when you are not sure it will ever be needed? Load it when the user actually selects that export format.
I have a dream where Adobe Acrobat uses this approach, instead of bogging me down with loads of plugins I never use every time I want to display a PDF file!
Depending on your needs, you might have to use both techniques: preload some big, heavily used libraries and load only specific plugins on demand...
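As a concrete sketch of the load-on-demand option, assuming a hypothetical export DLL and entry point (the names xyz_export.dll and ExportXyz are made up for illustration):

```cpp
// on_demand.cpp - load an export plugin only when the user asks for it.
#include <windows.h>
#include <cstdio>

// Hypothetical signature of the DLL's exported function.
typedef bool (*ExportXyzFn)(const char* path);

bool export_as_xyz(const char* path) {
    // Load the DLL only when this format is actually requested.
    HMODULE mod = LoadLibraryA("xyz_export.dll"); // hypothetical DLL
    if (!mod) {
        fprintf(stderr, "Could not load xyz_export.dll\n");
        return false;
    }
    ExportXyzFn fn = (ExportXyzFn)GetProcAddress(mod, "ExportXyz");
    bool ok = fn ? fn(path) : false;
    FreeLibrary(mod); // or keep it loaded if more exports are likely
    return ok;
}
```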
One item that might be worth looking at is "rebasing". Each DLL has a preset "base" address that it prefers to be loaded at in memory (set via the linker's /BASE option). If an application loads the DLL at a different address (because the preferred one is not available), the DLL is loaded at the new address and "rebased". Roughly speaking, this means that parts of the DLL are updated on the fly. This only applies to native DLLs, as opposed to .NET assemblies.
This really old MSDN article covers rebasing:
http://msdn.microsoft.com/en-us/library/ms810432.aspx
Not sure whether much of it still applies (it's a very old article)... but here's an enticing quote:
Prefer one large DLL over several small ones; make sure that the operating system does not need to search for the DLLs very long; and avoid many fixups if there is a chance that the DLL may be rebased by the operating system (or, alternatively, try to select your base addresses such that rebasing is unlikely).
By the way, if you're dealing with .NET, then "NGen'ing" your app/DLLs should help speed things up (NGen = native image generation; e.g., `ngen install MyApp.exe`).
Yep, anything read in from the hard drive is cached, so it will load faster the second time. The basic assumption is that it's rare to use a large chunk of data from the HD only once and then discard it (this is usually a good assumption in practice). Typically it's the operating system (kernel) that implements the cache, taking up a chunk of RAM to do so, although modern hard drives also have a much smaller built-in cache of their own. (I once wrote a small kernel as an academic project; caching of HD data in memory was one of its features.)
One additional factor that affects program startup time is the Windows prefetcher, introduced with Windows XP (and extended as SuperFetch in Vista). Essentially it monitors disk access during program startup, recognizes file access patterns, and then attempts to "bunch up" the required data for quicker access (e.g., by rearranging the data on disk according to its loading order).
As the others mentioned, generally speaking any read operation is likely to be cached by the Windows disk cache, and reused unless the memory is needed for other operations.
NGen'ing the assemblies might help with the startup time; however, runtime might be affected (sometimes NGen'ed code is not as optimal as JIT-compiled code).
NGENing can be done in the background as well: http://blogs.msdn.com/davidnotario/archive/2005/04/27/412838.aspx
Here's another good article NGen and Performance http://msdn.microsoft.com/en-us/magazine/cc163808.aspx
The system cache is used for anything that comes off disk. That includes file metadata, so if you are using applications that open a large number of files (say, directory scanners), then you can easily flush the cache if you also have applications running that eat up a lot of memory.
For the stuff I use, I prefer a small number of large files (from 64 MB up to 1 GB) and asynchronous unbuffered I/O. And a good ol' defrag every once in a while.
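For what it's worth, here's a minimal sketch of the unbuffered-read idea (synchronous rather than asynchronous for brevity, and "bigfile.dat" is a placeholder). Note that FILE_FLAG_NO_BUFFERING requires sector-aligned buffers, offsets, and read sizes:

```cpp
// unbuffered_read.cpp - read a file while bypassing the system cache.
#include <windows.h>
#include <cstdio>

int main() {
    HANDLE f = CreateFileA("bigfile.dat", GENERIC_READ, FILE_SHARE_READ,
                           nullptr, OPEN_EXISTING,
                           FILE_FLAG_NO_BUFFERING, nullptr);
    if (f == INVALID_HANDLE_VALUE) return 1;

    // VirtualAlloc returns page-aligned memory, which satisfies the
    // sector-alignment requirement of unbuffered I/O.
    const DWORD chunk = 1 << 20; // 1 MB, a multiple of any sector size
    void* buf = VirtualAlloc(nullptr, chunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);

    DWORD got = 0;
    ULONGLONG total = 0;
    while (ReadFile(f, buf, chunk, &got, nullptr) && got > 0)
        total += got;

    printf("Read %llu bytes without polluting the system cache\n", total);
    VirtualFree(buf, 0, MEM_RELEASE);
    CloseHandle(f);
    return 0;
}
```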