Hi, we have a server with 32 cores and 256GB RAM, running SQL Server 2008 Enterprise on Windows Server 2008 R2 Enterprise.
Windows has automatically allocated a 256GB swap file, which seems excessive. Is it advisable to hard-limit the swap file to something smaller, like 32GB, to force Windows to use the physical RAM?
Is it the swap file or is it the hibernate file?
The answer depends upon the work the machine is expected to do. You might find that Windows doesn't touch the swap file much because you have adequate physical memory available. One approach would be to cut the swap file allocation in half, then use the built-in performance monitoring tools to make sure it is still running OK; after a period of stable running, look to halve the swap allocation again.
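If you'd rather log that check than sit watching Performance Monitor, here is a minimal C# sketch that polls the relevant counters (the counter names are as they appear in perfmon on an English install, so treat the strings as assumptions to verify on your box):

    using System;
    using System.Diagnostics;
    using System.Threading;

    class PagingWatch
    {
        static void Main()
        {
            // Counter names as shown in perfmon (English locale assumed).
            var usage = new PerformanceCounter("Paging File", "% Usage", "_Total");
            var pagesPerSec = new PerformanceCounter("Memory", "Pages/sec");

            while (true)
            {
                // Rate counters need two samples, so poll on an interval;
                // the very first NextValue() for Pages/sec will read as 0.
                Thread.Sleep(5000);
                Console.WriteLine("{0:T}  page file {1:F1}% used, {2:F0} pages/sec",
                    DateTime.Now, usage.NextValue(), pagesPerSec.NextValue());
            }
        }
    }

Sustained Pages/sec alongside a climbing % Usage after a shrink is the signal to stop halving.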
But is it really a problem? With a machine like that you probably have a good chunk of hard drive space available, and I doubt they would be slow old 5400rpm drives :)
An ideally set-up OLTP SQL Server should never need to use the swap file, but it depends what you are using this server for.
But unless you are short of disk space, I wouldn't worry too much. 32GB sounds like a better size, though.
Hypervisors and Memory Management
I have been using virtual machines for years and never really had any issues. I have primarily used VMware's free single-host ESXi and had nothing but success. Because I have never had any issues, I have never delved much deeper. I have, however, always been very wary of loading the system up, and I keep a lot of spare resources handy.
I have recently purchased a new server and we have decided to give Hyper-V a try and see how that goes. We have a fairly small team but utilise lots of servers for testing etc.
My question relates to memory and how much I need to leave free or available for the host machine to run appropriately.
Setup: Dell server, 24 cores, 48GB RAM
When I run taskmgr in the Windows host instance, I see the following:
Physical Memory: 49139 MB
Cached: 14933 MB
Available: 17743 MB
Free: 2982 MB
What exactly do these figures mean? What is the difference between free and available?
My server hardly ever uses any CPU resources and has 10 production servers running on it, without a single user complaint about the speed of the services.
Am I able to run up another server with 2GB RAM, effectively leaving 982MB free? Or am I starting to push my requirements a little?
Thanks for the help.
You shouldn’t use the host partition for anything other than Hyper-V (although you can run security and infrastructure software such as management agents, backup agents and firewalls). Therefore, that 2GB recommendation assumes you aren’t going to run any extra applications or server roles in the parent partition.
Hyper-V doesn’t let you allocate memory directly to the host partition. It essentially uses whatever memory is left over. Therefore, you have to remember to leave 2GB of your host server’s memory not allocated so it’s available for the parent partition.
Source
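On the free-versus-available part of the question: "available" counts genuinely free pages plus standby (cache) pages that Windows can repurpose immediately, while "free" counts only pages holding nothing at all. By that yardstick your headroom is the 17743 figure, not the 2982 one, so one more 2GB guest looks comfortable. If you want to read the numbers behind Task Manager yourself, here is a small C# sketch (counter names as exposed on Windows 7 / Server 2008 R2; verify them on your build):

    using System;
    using System.Diagnostics;

    class FreeVsAvailable
    {
        static void Main()
        {
            // "Available" = free + zeroed + standby (cache) pages that can be
            // handed out immediately without a disk write.
            var available = new PerformanceCounter("Memory", "Available MBytes");
            // "Free" = pages on the free and zero lists only.
            var free = new PerformanceCounter("Memory", "Free & Zero Page List Bytes");

            Console.WriteLine("Available: {0:F0} MB", available.NextValue());
            Console.WriteLine("Free:      {0:F0} MB", free.NextValue() / (1024.0 * 1024.0));
        }
    }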
In an application I'm working on, under certain conditions the memory usage shoots through the roof, effectively locking up my computer. I don't think it's a memory leak, and there are no errors; it just needs too much memory. The memory usage jumps to 99% in Task Manager and Windows stops working, forcing me to reboot.
Is it possible to set a maximum amount of memory VS can use while debugging? I'm not looking for a way to make it run out of memory faster, I just want to keep some memory free so Windows can keep working.
Visual Studio 2010
Windows 7 64-bit
8GB RAM
C# .NET
Edit:
I'm not asking how to fix a memory leak. I'm trying to limit the memory available to my application while debugging it in VS. For example, my PC has 8GB RAM, but my application has to run on a PC with 2GB RAM, so I want to configure VS to give it only 2GB. If the application tries to allocate 2.0001GB, I want it to be told there is no more memory (probably causing a crash).
This isn't exactly the answer you were looking for, but it might help others, so I'm posting:
I would try the following:
1) Download Oracle VirtualBox
2) Download Disk2VHD.exe from Microsoft Sysinternals
3) Clone your system using Disk2VHD
4) Configure a VM with the memory restrictions you want.
In this way you can restrict the RAM and CPUs used by your task, and possibly recover more easily from the case you describe.
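If a full VM feels heavy for everyday debugging, another technique worth naming (it is not a Visual Studio feature) is to have the process cap itself with a Windows Job Object, so allocations past the limit fail, surfacing as an OutOfMemoryException, instead of paging the whole machine to death. A minimal C# sketch via P/Invoke; note that on Windows 7 and earlier this fails if the process is already inside another job:

    using System;
    using System.Diagnostics;
    using System.Runtime.InteropServices;

    static class MemoryCap
    {
        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_BASIC_LIMIT_INFORMATION
        {
            public long PerProcessUserTimeLimit, PerJobUserTimeLimit;
            public uint LimitFlags;
            public UIntPtr MinimumWorkingSetSize, MaximumWorkingSetSize;
            public uint ActiveProcessLimit;
            public UIntPtr Affinity;
            public uint PriorityClass, SchedulingClass;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct IO_COUNTERS
        {
            public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount,
                         ReadTransferCount, WriteTransferCount, OtherTransferCount;
        }

        [StructLayout(LayoutKind.Sequential)]
        struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
        {
            public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
            public IO_COUNTERS IoInfo;
            public UIntPtr ProcessMemoryLimit, JobMemoryLimit,
                           PeakProcessMemoryUsed, PeakJobMemoryUsed;
        }

        const int JobObjectExtendedLimitInformation = 9;
        const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x00000100;

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern IntPtr CreateJobObject(IntPtr attrs, string name);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool SetInformationJobObject(IntPtr job, int infoClass,
            ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int size);

        [DllImport("kernel32.dll", SetLastError = true)]
        static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);

        // Call once, early in Main(), e.g. to simulate the 2GB target machine:
        //   MemoryCap.Apply(2L * 1024 * 1024 * 1024);
        public static void Apply(long limitBytes)
        {
            IntPtr job = CreateJobObject(IntPtr.Zero, null);
            var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
            info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
            info.ProcessMemoryLimit = (UIntPtr)(ulong)limitBytes;
            if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                    ref info, Marshal.SizeOf(info)))
                throw new InvalidOperationException("SetInformationJobObject failed");
            if (!AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle))
                throw new InvalidOperationException("AssignProcessToJobObject failed");
        }
    }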
I have a data transformation query which takes a long time to run on my development machine (Core i7 920 running at 3.9GHz, with 12GB of RAM under Windows Server 2003 x86, and two 300GB VelociRaptors in RAID 0).
When I look at Task Manager, the CPU stays at around 26%, with the third core (out of four) being the most active.
As this is not a production environment, is there any way to tell SQL Server 2008 that I am fine with it using more of my CPU, or is it that my query cannot be parallelized for some reason?
If so, shouldn't SQL Server be smart enough to cut the query into smaller chunks and run it across several threads so each core can take a share?
Thanks.
Optimize your query. Chances are that the issue is with it and not SQL Server.
It already knows that it's okay to use every CPU, unless you have specifically limited it, either through the instance-wide 'max degree of parallelism' configuration or through a MAXDOP query hint.
It sounds like you may be constrained by your hard drives or memory more than anything.
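If you want to rule the caps out quickly, the instance-wide setting is visible in sys.configurations; a small C# sketch (the connection string is a placeholder for your dev instance):

    using System;
    using System.Data.SqlClient;

    class CheckMaxDop
    {
        static void Main()
        {
            // Placeholder connection string; point it at your instance.
            using (var conn = new SqlConnection("Server=.;Integrated Security=true"))
            {
                conn.Open();
                var cmd = new SqlCommand(
                    "SELECT value_in_use FROM sys.configurations " +
                    "WHERE name = 'max degree of parallelism'", conn);
                // 0 means SQL Server may use all available CPUs (the default).
                Console.WriteLine("max degree of parallelism = {0}", cmd.ExecuteScalar());
            }
        }
    }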
Note that because you are running an x86 version of Windows (and, by extension, SQL Server), you may be RAM-limited to around 3GB. And even with PAE (Physical Address Extension) turned on, it's going to be a world slower than an x64 OS and SQL Server would be to begin with.
In other words, you might consider reinstalling the machine from the ground up to take advantage of all the x64 goodness you have.
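If you're not sure what you're actually running, a quick sanity check (Environment.Is64BitOperatingSystem requires .NET 4):

    using System;

    class BitnessCheck
    {
        static void Main()
        {
            // IntPtr is 4 bytes in a 32-bit process and 8 bytes in a 64-bit one.
            Console.WriteLine("64-bit OS:      {0}", Environment.Is64BitOperatingSystem);
            Console.WriteLine("64-bit process: {0}", IntPtr.Size == 8);
        }
    }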
I have the following setup for my daily/main/only development environment
Hardware/Tin = 4GB RAM, 2.6GHz dual-core CPU, 2x 250GB HDs, usual array of peripherals
On the tin above I currently have Windows XP installed; in Windows XP I have VMware Workstation installed, and in that I run a Windows Server 2003 development environment. This includes Visual Studio 2003/2005/2008, SQL Server 2005/2008, the full MS Office suite, and some productivity tools (e.g. Redgate SQL/Data Compare, DevExpress CodeRush, TestDriven.NET, etc.).
I have problems with this: it runs slow (15 minutes to boot), the Watch/Autos windows in VS freeze up when debugging, I can't have more than 2-3 copies of VS open, the Errors window freezes up, and WinGrep and COM+ constantly run out of Virtual Desktop Memory, and so forth. (In fact, I would attribute most of the issues to Virtual Desktop Memory.)
Now, I've tried every tweak in the book: I have a second HD for VMware, my paging file is on a different drive, I've adjusted my RAM split between guest and host, I've hacked the registry key for Virtual Desktop Memory, and all of this to no avail.
Now, I could increase my RAM or CPU, but I'm not able to.
My question is: has anybody experienced the above, and if so, how did you solve it? Did you try ESXi, or shift your environment to raw tin?
IMHO, you've tried just about every tweak in the book. I'd suggest that you should just move to native for your main setup, and restrict VM use for testing.
I use a VM as my main dev env, but I don't run as much stuff as you, so I don't hit a big performance wall.
I guess the trick you didn't try was to run fewer things on your VM. 2-3 copies of VS are a recipe for slowness; running SQL Server, same thing. Bumping up memory would be good, but at the very least run services (IIS, SQL Server) on another VM, or better yet, another box. You are taxing your VM way too much; it is not the VM's fault.
The problem you run into most of the time on a VPS is IO wait.
Do you run your virtual machine off a disk image? If so, try defragmenting your drive.
Or did you dedicate a partition to it?
Edit:
I would suggest that you:
either try defragmenting the drive that holds the disk image,
or try dedicating a partition to the virtual machine instead of using a disk image altogether (ideally the first partition on the drive, since this will have the lowest random access time).
Running off a disk image works, but since you're working on top of a filesystem, the disk image might be fragmented throughout the disk.
Good luck, hope it helps...
Which features and services in Vista can you remove with nLite (or tool of choice) to make a Virtual PC-image of Vista as small as possible?
The VPC must work with development in Visual Studio.
A normal install of Vista today is around 12-14 GB, which is silly when I have got it working with Visual Studio support at 4 GB. With Visual Studio installed it totals around 8 GB, which is a bit heavy to move around in multiple copies.
You can try to cut stuff out with vLite, but unless you cut out a whole lot, it's not going to save a ton of drive space. Here are your best bets:
Disable Hibernate and run Disk Cleanup to remove any hibernation file.
Disable System Restore entirely and use Disk Cleanup to remove all restore points; this will save an enormous amount of space.
Disable SuperFetch (since it hammers your VM's hard drive with its crazy usage).
Minimize the size of your pagefile by setting a smaller static size (see the sketch after this list), and make sure to assign lots of memory to your VM to compensate.
Use the disk utilities to shrink your VM drive down as far as possible.
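For the pagefile bullet, here is a hedged C# sketch of doing it through WMI instead of the System Properties dialog (run it elevated inside the VM; Win32_PageFileSetting and the AutomaticManagedPagefile flag are standard WMI on Vista and later, but verify before relying on them, and the sizes below are illustrative):

    using System;
    using System.Management; // add a reference to System.Management.dll

    class StaticPageFile
    {
        static void Main()
        {
            // 1) Turn off "Automatically manage paging file size" (Vista+).
            foreach (ManagementObject cs in new ManagementObjectSearcher(
                "SELECT * FROM Win32_ComputerSystem").Get())
            {
                cs["AutomaticManagedPagefile"] = false;
                cs.Put();
            }

            // 2) Pin the page file to a fixed size; values are in MB, and the
            //    change takes effect after a reboot. If automatic management
            //    was previously on, there may be no instances to edit yet, in
            //    which case create one via Win32_PageFileSetting first.
            foreach (ManagementObject pf in new ManagementObjectSearcher(
                "SELECT * FROM Win32_PageFileSetting").Get())
            {
                pf["InitialSize"] = 1024u;   // illustrative: 1GB static size
                pf["MaximumSize"] = 1024u;
                pf.Put();
            }
        }
    }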
Once you have the base machine configured, I would suggest using VMware Workstation and the awesome Linked Clones feature, which will let you create a completely new VM based on the base machine while using only a portion of the space.
I would not advise running a Vista VM from a USB flash drive, it will be slower than dirt.