Set memory limit for debugging in Visual Studio - visual-studio-2010

In an application I'm working on, under certain conditions the memory usage will shoot through the roof, effectively locking up my computer. I don't think it's a memory leak, and there are no errors, it just needs too much memory. The memory usage jumps to 99% in Task Manager and Windows stops working, forcing me to reboot.
Is it possible to set a maximum amount of memory VS can use while debugging? I'm not looking for a way to make it run out of memory faster, I just want to keep some memory free so Windows can keep working.
Visual Studio 2010
Windows 7 64-bit
8GB RAM
C# .NET
Edit:
I'm not asking how to fix a memory leak. I'm trying to limit the memory used by the VS debugger. For example, my PC has 8GB RAM, but my application has to run on a PC with 2GB RAM. So I want to configure VS to only use 2GB. If the application tries to allocate 2.0001GB I want VS to tell it there is no more memory (probably causing a crash).

This isn't exactly the answer you were looking for, but it might help others, so I'm posting:
I would try the following:
1) Download Oracle VirtualBox.
2) Download Disk2VHD.exe from Microsoft Sysinternals.
3) Clone your system using Disk2VHD.
4) Configure a VM with the memory restrictions you want.
This way you can restrict the RAM and CPUs available to your application, and recover more easily from the situation you describe.
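If a full VM feels too heavy, a lighter-weight alternative is to run the process inside a Windows Job Object with a hard memory cap, so allocations beyond the cap fail just as they would on the 2GB target machine. Below is a minimal, untested C# sketch using kernel32 P/Invoke; the 2GB figure mirrors the question's example. Note that AssignProcessToJobObject can fail if the process is already inside another job (nested jobs only arrived in Windows 8).

```csharp
// Sketch only: cap the current process at a fixed commit limit with a Job Object.
// Allocations past the cap fail (OutOfMemoryException in .NET), approximating
// a machine with less RAM. Error handling is omitted for brevity.
using System;
using System.Diagnostics;
using System.Runtime.InteropServices;

static class MemoryCap
{
    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_BASIC_LIMIT_INFORMATION
    {
        public long PerProcessUserTimeLimit;
        public long PerJobUserTimeLimit;
        public uint LimitFlags;
        public UIntPtr MinimumWorkingSetSize;
        public UIntPtr MaximumWorkingSetSize;
        public uint ActiveProcessLimit;
        public UIntPtr Affinity;
        public uint PriorityClass;
        public uint SchedulingClass;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct IO_COUNTERS
    {
        public ulong ReadOperationCount, WriteOperationCount, OtherOperationCount;
        public ulong ReadTransferCount, WriteTransferCount, OtherTransferCount;
    }

    [StructLayout(LayoutKind.Sequential)]
    struct JOBOBJECT_EXTENDED_LIMIT_INFORMATION
    {
        public JOBOBJECT_BASIC_LIMIT_INFORMATION BasicLimitInformation;
        public IO_COUNTERS IoInfo;
        public UIntPtr ProcessMemoryLimit;
        public UIntPtr JobMemoryLimit;
        public UIntPtr PeakProcessMemoryUsed;
        public UIntPtr PeakJobMemoryUsed;
    }

    const int JobObjectExtendedLimitInformation = 9;
    const uint JOB_OBJECT_LIMIT_PROCESS_MEMORY = 0x100;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode)]
    static extern IntPtr CreateJobObject(IntPtr attributes, string name);

    [DllImport("kernel32.dll")]
    static extern bool SetInformationJobObject(IntPtr job, int infoClass,
        ref JOBOBJECT_EXTENDED_LIMIT_INFORMATION info, int size);

    [DllImport("kernel32.dll")]
    static extern bool AssignProcessToJobObject(IntPtr job, IntPtr process);

    // Call this at the top of Main(), before any large allocations happen.
    public static void Apply(ulong limitInBytes)
    {
        IntPtr job = CreateJobObject(IntPtr.Zero, null);
        var info = new JOBOBJECT_EXTENDED_LIMIT_INFORMATION();
        info.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
        info.ProcessMemoryLimit = (UIntPtr)limitInBytes;
        SetInformationJobObject(job, JobObjectExtendedLimitInformation,
            ref info, Marshal.SizeOf(typeof(JOBOBJECT_EXTENDED_LIMIT_INFORMATION)));
        AssignProcessToJobObject(job, Process.GetCurrentProcess().Handle);
    }
}

// Usage, matching the question's example:
// MemoryCap.Apply(2UL * 1024 * 1024 * 1024); // fail allocations past 2 GB
```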

Related

Visual Studio 100% disk usage

I have VS 2013 and Microsoft Windows 8.1
The issue appeared at the end of last week. Without any update or significant change, disk usage reaches 100% when I do certain things in VS. For example, when I click the "Check In" button in the "Team Explorer" window, disk usage rises to 100%. Sometimes a simple right-click in the text editor triggers the problem.
I googled the 100% disk usage problem and found reports about it on Windows 8.1 in general, but on my computer all other applications run without any problem; only VS 2013 has the "full disk usage" problem.
Some information about my system:
OS Name: Microsoft Windows 8.1 Pro
OS Version: 6.3.9600 N/A Build 9600
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
Intel64 Family 6 Model 60 Stepping 3 GenuineIntel ~3500 MHz
Total Physical Memory: 8,131 MB
Available Physical Memory: 3,836 MB
Virtual Memory: Max Size: 10,947 MB
Virtual Memory: Available: 5,275 MB
Virtual Memory: In Use: 5,672 MB
Page File Location(s): C:\pagefile.sys
(Comment for others landing here, since @Marta explains that the problem no longer persists on their machine.)
In general, any performance issue in Visual Studio should be reported to Microsoft. It's easy to do this directly from VS using the Report a Problem tool. That feature will automatically attach logs/traces which are shared privately with Microsoft. Internally, tooling will analyse those attachments to assign a ticket to the relevant team. With such attachments, there is a high likelihood that the problem can be diagnosed and fixed in a future release of Visual Studio.
Instructions on the Report a Problem tool:
https://learn.microsoft.com/en-us/visualstudio/ide/how-to-report-a-problem-with-visual-studio?view=vs-2019
If you prefer to diagnose high disk IO yourself, FileMon can be a useful tool:
https://learn.microsoft.com/en-us/sysinternals/downloads/filemon
I am using Visual Studio Code 1.71.2 as well as Visual Studio 2022 (Community Edition) on Windows 10, and I am facing the same issue.
After a lot of checking, I found that disabling Superfetch mitigates the issue, but then Windows and application startup take a lot longer.
As a workaround, I found that clearing the %temp% folder after using Visual Studio or VS Code eliminates the issue and disk activity returns to normal.
But I may not always remember this cleanup, and I hate it when I forget :(
Hope this helps someone in a similar situation.
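If remembering the cleanup is the problem, it can be automated. Here is a hedged C# sketch of a best-effort %temp% sweep that could be run from Task Scheduler at logon or after closing the IDE; it simply skips anything still locked by a running process.

```csharp
// Sketch only: best-effort cleanup of the current user's %temp% folder.
using System;
using System.IO;

class TempCleanup
{
    static void Main()
    {
        string temp = Path.GetTempPath();
        foreach (string entry in Directory.GetFileSystemEntries(temp))
        {
            try
            {
                if (Directory.Exists(entry))
                    Directory.Delete(entry, recursive: true);
                else
                    File.Delete(entry);
            }
            catch (IOException) { /* file in use; skip it */ }
            catch (UnauthorizedAccessException) { /* no rights; skip it */ }
        }
    }
}
```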
It could be related to Visual Studio updates - which would show under C:\ProgramData\Package Cache.
A disk space management tool like TreeSize Pro will help you figure it out, though... it will show which directory is using the most space, so you can target whatever aspect of Visual Studio is eating up your drive space.
There is a free trial at https://www.jam-software.com/treesize/
You can also use this tool to export and post a screenshot / export of the usage here and it may help identify what is going on.
I had a similar issue that turned out to be the built-in git provider having issues with large codebases containing a moderate-to-large amount of changes before a commit.
Changing to a third-party one fixed the issue.
The operating system manages the resources (CPU cores, disk drives, GPU) to deliver what you have asked of it.
Ideally (what the OS designers hope for), when you perform an action, all the resources spin up and, on a well-balanced system, run at 100% utilization for a brief period, then go back to idle.
This form of utilization is impossible to achieve in practice, as the PC builders would have to know in advance what your system was going to be used for.
When Task Manager describes the CPU as 100% utilized, it means that all the cores in the box are busy running code and are the bottleneck.
When Task Manager describes the disk as 100% utilized, it means (as far as I can tell) that there is always a queue of items waiting to be read from or written to the disk. Even at 100% utilization, it may be that the metric is the only reason you are concerned and the system is otherwise responsive.
In either case, it shows that for a given workload the CPU or the disk drive has become the rate-determining step.
In practice it should not matter, unless the system stays at 100% for longer than a few minutes, or your machine otherwise feels sluggish.
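If you want to watch the queue described above without opening perfmon, the same metric can be sampled from code. A small C# sketch, assuming the standard English counter names ("PhysicalDisk" / "Avg. Disk Queue Length"):

```csharp
// Sketch only: sample the physical-disk queue length that sits behind Task
// Manager's "100% disk" figure. A sustained average well above the number of
// spindles suggests the disk really is the bottleneck.
using System;
using System.Diagnostics;
using System.Threading;

class DiskQueueMonitor
{
    static void Main()
    {
        using (var queue = new PerformanceCounter(
            "PhysicalDisk", "Avg. Disk Queue Length", "_Total"))
        {
            queue.NextValue(); // prime the counter; the first reading is always 0
            for (int i = 0; i < 10; i++)
            {
                Thread.Sleep(1000);
                Console.WriteLine("Avg. disk queue length: {0:F2}", queue.NextValue());
            }
        }
    }
}
```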
Further diagnosis can be performed using the Sysinternals tool procmon (Process Monitor), or the Microsoft ADK.
Using procmon, I would look at which files are being accessed during the 100% disk usage period, and decide whether:
The behavior is sensible (if not, raise a bug with Microsoft)
The machine is working usably (if not, consider a hybrid or SSD disk)
I've had some exasperating problems with disk usage and source control explorer.
What fixed the issue for me was never opening Source Control Explorer in more than one project at a time, keeping it closed when I could, and limiting the number of VS instances I had open.
An SSD may solve this... Are you sure this is caused by Visual Studio? When I was using Windows 8.1, Windows Defender hit 100% disk usage from time to time. If you're sure it occurs when you use Visual Studio, you can try repairing it using the installer. Hope this helps.
Try moving the source code to an SSD drive.
HDDs have much slower disk I/O performance than SSDs.
In Windows, the C drive generally comes as an SSD.

SQL Server 2008 enterprise setup and virtual memory

Hi, we have a server with 32 cores and 256GB RAM, running SQL Server 2008 Enterprise on Windows Server 2008 R2 Enterprise.
Currently Windows has automatically allocated a 256GB swap file, which seems excessive. Is it advisable to hard-limit the swap file to something smaller, like 32GB, to force it to use the physical RAM?
Is it the swap file or is it the hibernate file?
The answer depends upon the work the machine is expected to do. You might find that Windows doesn't touch the swap file much because you have adequate physical memory available. One approach would be to cut the swap file allocation in half, then use the built-in performance monitoring tools to make sure it is still running OK; after a period of stable running, look to halve the swap allocation again.
But is it really a problem? With a machine like that you probably have a good chunk of hard drive space available, and I doubt they would be slow old 5400rpm drives :)
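For the "halve and monitor" approach, the relevant counter can be logged from a small program instead of being watched interactively. A rough C# sketch, assuming the standard English counter names ("Paging File" / "% Usage"):

```csharp
// Sketch only: log pagefile usage once a minute during a normal workload.
// If "% Usage" stays near zero, the swap file can probably be shrunk further.
using System;
using System.Diagnostics;
using System.Threading;

class PagefileUsage
{
    static void Main()
    {
        using (var usage = new PerformanceCounter("Paging File", "% Usage", "_Total"))
        {
            for (int i = 0; i < 60; i++)
            {
                Console.WriteLine("{0:T}  Pagefile usage: {1:F1}%",
                    DateTime.Now, usage.NextValue());
                Thread.Sleep(60000);
            }
        }
    }
}
```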
An ideally set-up OLTP SQL Server should never need to use the swap file. It depends what you are using this server for.
But unless you are short of disk space, I wouldn't worry too much. 32GB sounds a better size, though.

Significant Performance Decrease when moving from Windows Server 2003 to 2008 (IIS 6 to IIS 7)

Our ASP.Net 2.0 web app was running happily along on Windows Server 2003. We were starting to see some of the limits of the environment approaching, such as memory and CPU usage spikes, and as we're getting ready to scale we decided it was time for a larger server with higher availability.
We decided to move to Windows Server 2008 to take advantage of IIS 7's shared configuration. In our development and integration environments, we reproduced the OS and app on 2008/IIS 7 and everything seemed fine. But truth be told, we don't have a good way of simulating production-like loads yet, nor can we reproduce our prod environment accurately (we're small, with limited resources). So once we rolled out to production, we were surprised to find performance significantly worse on 2008 than it was on 2003.
We've also moved from a 32-bit environment to 64-bit in the process, and we've incorporated ASP.NET 3.5 DLLs into the project.
Memory usage is through the roof, but I'm not as worried about that. We believe this is partly due to Server 2008's memory overhead, so throwing more RAM at it may solve that issue. The troubling thing is that we're seeing processor spikes to 99% CPU utilization, which we never saw in the 2003/IIS 6 environment.
Has anyone encountered these issues before and are there any suggestions for a solution/places to look? Right now we're doing the following:
1) Buying time by adding memory.
2) Buying time by setting app pool limits: shut down w3wp.exe when CPU hits 99% load. Since IIS doesn't give you the option to recycle the app pool in that case, I have a scheduled task running that restarts any stopped app pools (see the sketch below).
3) Profiling the app pools under Classic and Integrated modes to see which may perform better.
Any other ideas are completely welcome.
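For reference, here is a hedged sketch of the scheduled task mentioned in item 2, using the Microsoft.Web.Administration API that ships with IIS 7 (reference %windir%\System32\inetsrv\Microsoft.Web.Administration.dll and run it under an elevated account):

```csharp
// Sketch only: restart any application pools that the 99%-CPU limit shut down.
// Schedule this to run every few minutes.
using System;
using Microsoft.Web.Administration;

class RecycleStoppedPools
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            foreach (ApplicationPool pool in manager.ApplicationPools)
            {
                if (pool.State == ObjectState.Stopped)
                {
                    Console.WriteLine("Restarting stopped pool: " + pool.Name);
                    pool.Start();
                }
            }
        }
    }
}
```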
Our experience is that code runs much faster on 64-bit Windows 2008 than on a 32-bit Windows 2003 server.
I am wondering if something else is also running on the machine. For example, is SQL Server installed with a maintenance plan that could cause the CPU spike?
I would check the following:
Which process is using the CPU?
Is there a change in the code? Try installing the new code on the old machine
Is it something to do with the compile options? Is the CPU usage a recompile?
Are there any errors in the event log?
In our case, since we have 4 processors, we increased the number of worker processes to 4; so far it is working well compared to before.
Here is a snapshot:
http://pic.gd/c3661a
You can use the application pool "Recycle" option in IIS 7+ to configure physical and virtual memory limits for application pools. Once these are reached the process will recycle and the resources will be released. Unfortunately the option to recycle based on CPU usage has been removed from IIS 7+ (someone correct me if I'm wrong). If you have other apps on the server and want to avoid them competing for resources when this condition happens, you can implement Windows System Resource Manager and its IIS policy (here is a good tutorial: http://learn.iis.net/page.aspx/449/using-wsrm-to-manage-iis-70-apppool-cpu-utilization/).
Note WSRM is only available on the Enterprise and Data Center editions.
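For completeness, those memory-based recycle limits can be set from code as well as through IIS Manager. A sketch using Microsoft.Web.Administration; the pool name is made up and the limit values (in kilobytes) are only examples:

```csharp
// Sketch only: configure memory-based recycling thresholds for an app pool.
using Microsoft.Web.Administration;

class ConfigureRecycle
{
    static void Main()
    {
        using (var manager = new ServerManager())
        {
            ApplicationPool pool = manager.ApplicationPools["MyAppPool"]; // assumed name
            pool.Recycling.PeriodicRestart.Memory = 1800 * 1024;          // virtual memory, KB
            pool.Recycling.PeriodicRestart.PrivateMemory = 1200 * 1024;   // private bytes, KB
            manager.CommitChanges();
        }
    }
}
```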

Does Windows Server 2003 SP2 tell the truth about Free System Page Table Entries?

We have some Win32 console applications running on Windows Server 2003 Service Pack 2 that regularly fail with this:
Error 1450 (ERROR_NO_SYSTEM_RESOURCES): "Insufficient system resources exist to complete the requested service."
All the documentation we've found suggests it is linked to the number of Free System Page Table Entries running out. We have 16GB RAM in these machines and use the /3GB Operating System switch to squeeze the Windows kernel into 1GB and allow our processes access to 3GB of address space. This drastically reduces the total number of Free System Page Table Entries, so combined with our heavy use of MapViewOfFile() it is perhaps not surprising that the kernel page table entries are running out.
However, when using Performance Monitor to view the Free System Page Table Entries counter, the value is around 36,000 on reboot and doesn't go down when our application starts. I find it hard to believe that our application, which opens many large memory-mapped files, doesn't have any effect on the kernel page table. If we can't believe the counter, it's much more difficult to test the effect of any system changes we make.
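One way to cross-check Performance Monitor is to log the same counter from a small program while the application opens its memory-mapped files, to see whether the value ever moves. A hedged C# sketch, assuming the standard English counter names:

```csharp
// Sketch only: log the Free System PTE counter every few seconds while the
// application under test maps its files.
using System;
using System.Diagnostics;
using System.Threading;

class PteMonitor
{
    static void Main()
    {
        using (var ptes = new PerformanceCounter(
            "Memory", "Free System Page Table Entries"))
        {
            for (int i = 0; i < 60; i++)
            {
                Console.WriteLine("{0:T}  Free System PTEs: {1:N0}",
                    DateTime.Now, ptes.NextValue());
                Thread.Sleep(5000);
            }
        }
    }
}
```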
There is a promising Knowledge Base article, The Performance tool does not accurately show the available Free System Page Table entries in Windows Server 2003, but it says the problem has been fixed in Service Pack 1, and we are already on Service Pack 2.
Has anyone else struggled with or solved this issue?
Update: I have checked !sysptes in windbg (debugging the kernel) and the value matches the performance counter, around 36,000. I guess this is most likely to mean that there really are that many free page table entries and Windows is telling the truth. It does leave the question of why we're getting 1450 errors though, if the PTEs are not running out.
Further update: We never did get to the bottom of why the 1450 errors were occurring. However, instead we upgraded the OS on these servers to 64-bit Windows. This allows the existing 32-bit applications (without recompilation) to access a full 4GB of virtual address space, and lets the kernel memory area with those pesky Page Table Entries be as big as it likes too. I don't think we've had a 1450 error since.
Can you try the windbg command "!sysptes" to get System PTE information? I'm not sure if you can do this with a live kernel debug; you may have to get a memory dump.
I'm not sure why you assume that ERROR_NO_SYSTEM_RESOURCES is caused only by running out of free System Page Table Entries. As far as I know, such generic error codes are used for more than one resource type. And in fact, the first Google hit suggests that running out of file cache memory may cause it too (a KB article on an XP bug that tripped this error mode).
In your case, I'd be checking the "Handle Count". Another possible problem is address space fragmentation. If you want to create a 1GB file mapping view, you need 1GB of free address space, and it has to be contiguous. If you map a 1GB file, an 800MB file, and a 1GB file, then close the 800MB one and open a 900MB file, the 900MB view may not fit in the hole that's left.
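To test the fragmentation theory, you can walk the user address space and report the largest contiguous free region; if it is smaller than your biggest MapViewOfFile request, fragmentation rather than PTE exhaustion is the likely culprit. A rough sketch, shown in C# for brevity (the original apps are Win32, where the same VirtualQuery loop applies); run it inside the affected 32-bit process, e.g. from a diagnostic command:

```csharp
// Sketch only: find the largest contiguous free block in this process's
// address space by walking it with VirtualQuery.
using System;
using System.Runtime.InteropServices;

class AddressSpaceWalker
{
    [StructLayout(LayoutKind.Sequential)]
    struct MEMORY_BASIC_INFORMATION
    {
        public UIntPtr BaseAddress;
        public UIntPtr AllocationBase;
        public uint AllocationProtect;
        public UIntPtr RegionSize;
        public uint State;
        public uint Protect;
        public uint Type;
    }

    const uint MEM_FREE = 0x10000;

    [DllImport("kernel32.dll")]
    static extern UIntPtr VirtualQuery(UIntPtr lpAddress,
        out MEMORY_BASIC_INFORMATION lpBuffer, UIntPtr dwLength);

    static void Main()
    {
        ulong address = 0, largestFree = 0;
        var mbiSize = (UIntPtr)(uint)Marshal.SizeOf(typeof(MEMORY_BASIC_INFORMATION));
        MEMORY_BASIC_INFORMATION mbi;

        // VirtualQuery returns 0 once we run off the end of the user address space.
        while (VirtualQuery((UIntPtr)address, out mbi, mbiSize) != UIntPtr.Zero)
        {
            if (mbi.State == MEM_FREE)
                largestFree = Math.Max(largestFree, mbi.RegionSize.ToUInt64());
            address = mbi.BaseAddress.ToUInt64() + mbi.RegionSize.ToUInt64();
        }
        Console.WriteLine("Largest free block: {0:N0} KB", largestFree / 1024);
    }
}
```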
MS has two ways to allow their 32-bit OS to "deal" with hardware that has 4 GB or more of RAM.
Option 1 is what you did with the /3GB switch in the Boot.ini.
Option 1 Pros and Cons:
(CONS) This option takes 1 GB from the normal 2 GB kernel area, making the OS struggle to meet the demands of both paged pool allocations and kernel stack allocations. So a person might think that using the /3GB switch will help them, but really this option is slowly strangling the 32-bit Windows OS to death.
(CONS) "But this gives my app 3GB"... WRONG (hence this is a CON). The catch is that ONLY applications that have been rebuilt by the vendor to be "/3GB switch aware" (linked as large-address-aware) can really use the extra 1 GB. Hence the whole use of the /3GB switch is a really bad joke on everyone.
Read this link for a much better write-up:
http://blogs.technet.com/askperf/archive/2007/03/23/memory-management-demystifying-3gb.aspx
Option 2: Use the /PAE switch in the Boot.ini.
Option 2 Pros and Cons:
(PROS) This is really the only option if you have more than 4GB of RAM. It "tricks" an application by placing its complete memory footprint in RAM. Normally, only an application's "working set" is in RAM, and the rest of its memory requirements go into the Windows pagefile. What is an application's total memory requirement? It is called its "virtual size".
In my world, I deal with a big fat Java-based IBM product. The server running the "application" has 16 GB of RAM. I simply add the /PAE switch and watch (thanks to Sysinternals Process Explorer) application paging requests go from 200 KB per second up to 4 MB per second.
Question: "Why"?
Answer: The whole application is in RAM.
Question: "Does the application know that it is completely running in RAM?
Answer: No - It is running that same old way that it was always run, "THINKING" that it's has part of itself as the "Working Set" memory living in RAM and the remaining application memory requirements go into Windows Pagefile.
Yes, it is that flipping GOOD.
Please note: Microsoft has done a poor job telling anyone about this great Windows OS option. Duh.
Try it and report back to Stack Overflow...

Windows Vista Virtual PC-image for Visual Studio-development minimized

Which features and services in Vista can you remove with nLite (or a tool of your choice) to make a Virtual PC image of Vista as small as possible?
The VPC must work with development in Visual Studio.
A normal install of Vista today is around 12-14 GB, which is silly when I have gotten it down to 4 GB and still working with Visual Studio. With Visual Studio installed it totals around 8 GB, which is a bit heavy to move around in multiple copies.
You can try to cut stuff out with vLite, but unless you cut out a real lot it's not going to save a ton of drive space. Here are your best bets:
Disable Hibernate and run Disk Cleanup to remove any hibernation file.
Disable System Restore entirely and use Disk Cleanup to remove all restore points... this will save an enormous amount of space.
Disable SuperFetch (since it kills your VM hard drive with its crazy usage).
Minimize the size of your pagefile by setting a smaller static size, and make sure to assign lots of memory to your VM to compensate.
Use the disk utilities to shrink your VM drive down as far as possible.
Once you have the base machine configured, I would suggest using VMware Workstation and its awesome Linked Clones feature, which will let you create a completely new VM based on the base machine while using only a fraction of the space.
I would not advise running a Vista VM from a USB flash drive; it will be slower than dirt.
