Issue with Visual C++ 2010 (Express) External Tools command

I posted this on SuperUser, but I was hoping the pros here at SO might have a good idea about how to fix this as well.
Normally we develop in VS 2005 Pro, but I wanted to give VS 2010 a spin. We have custom build tools based on GNU make that are called when creating an executable.
This is the error that I see whenever I call my external tool:
...\gnu\make.exe): *** couldn't commit memory for cygwin heap, Win32 error 487
The caveat is that it still works perfectly fine in VS 2005, as well as when called straight from the command line. Also, my external tool is set up exactly the same as in VS 2005.
Is there some setting somewhere that could cause this error to be thrown?

From "problem with heap, win32 error 487":
Each Cygwin app gets a special heap area to hold stuff which is inherited by child processes. E.g. all file descriptor structures are stored in that heap area (called the "cygheap"). The cygheap has room for at least 4000 file descriptor structures. But, and that's the clue, it's fixed size. The cygheap can't grow. Its size is reserved at the application's start and its blocks are committed on demand.
For some reason your server application needs all the cygheap space when running under the described conditions.
A possible solution might be found in Changing Cygwin's Maximum Memory:
Cygwin's heap is extensible. However, it does start out at a fixed size, and attempts to extend it may run into memory which has been previously allocated by Windows. In some cases, this problem can be solved by adding an entry in either the HKEY_LOCAL_MACHINE (to change the limit for all users) or HKEY_CURRENT_USER (for just the current user) section of the registry. Add the DWORD value heap_chunk_in_mb and set it to the desired memory limit in decimal MB. It is preferred to do this in Cygwin using the regtool program included in the Cygwin package. (For more information about regtool or the other Cygwin utilities, see the section called “Cygwin Utilities” or use the --help option of each util.) You should always be careful when using regtool, since damaging your system registry can result in an unusable system. This example sets the memory limit to 1024 MB:
regtool -i set /HKLM/Software/Cygwin/heap_chunk_in_mb 1024
regtool -v list /HKLM/Software/Cygwin
Exit all running Cygwin processes and restart them. Memory can be allocated up to the size of the system swap space minus the size of any running processes. The system swap should be at least as large as the physically installed RAM and can be modified under the System category of the Control Panel.
It wouldn't hurt to ensure that the maximum size of your Windows swap file is large enough.
To summarize: the environment doesn't allocate enough heap space for the Cygwin executables. For some reason the problem is more acute with VS 2010 Express. You need to either fix the environment, use a port other than Cygwin, or use Microsoft utilities.
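If you don't have a working Cygwin shell handy, the same registry value can also be set with Windows' built-in reg.exe instead of regtool. A minimal sketch mirroring the regtool example above (run from an elevated command prompt, then restart all Cygwin processes):
reg add "HKLM\Software\Cygwin" /v heap_chunk_in_mb /t REG_DWORD /d 1024 /f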

From the Cygwin email lists it looks like other people have run into similar situations, even when not running via Visual Studio, and they've found that the solution is often to play with Cygwin's maximum memory settings:
http://www.cygwin.com/cygwin-ug-net/setup-maxmem.html
(Note: it's worth reading the conversation linked above about which values did and didn't work.)
Others have also reported issues with anti-virus software (the recommendation is to unload it from memory, for some reason), and possibly also compatibility settings (try with it set to XP), which can affect Cygwin in certain cases. See: http://www.avrfreaks.net/index.php?name=PNphpBB2&file=viewtopic&p=377066
As for Visual Studio: are you on a 64-bit machine, and if so, do you usually run the tool in a 64-bit environment?
I've found that because Visual Studio 2010 runs as a 32-bit process, tools launched from it are launched as 32-bit processes (for a good illustration of this, add "cmd" as a tool). I'm not sure why this wouldn't affect 2005 as well (unless 2005 lets the system launch the process as 64-bit while 2010 handles it itself as 32-bit).
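One quick way to confirm this (assuming you've added "cmd" as an External Tool as suggested above) is to echo the processor-architecture variables from that cmd window. A 32-bit (WOW64) process reports x86 and additionally has PROCESSOR_ARCHITEW6432 set, while a native 64-bit process reports AMD64 and leaves the second variable undefined:
echo %PROCESSOR_ARCHITECTURE% %PROCESSOR_ARCHITEW6432%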


What are the implications of making Visual Studio 2010 able to use more than 2GB of RAM?

Alright, I found this guide and a few others on the internet which suggest running the following command from the VS 2010 IDE directory using the Visual Studio Command Prompt:
editbin /largeaddressaware devenv.exe
I've run this, and everything so far seems to work fine (I haven't run into any issues yet). But what I can't find information on is what negative implications, if any, there are to making Visual Studio 2010 use more than 2 GB of RAM. Visual Studio was built to use a max of 2 GB of RAM; if VS were meant to use more than that, I wouldn't have to hack the binary. While I love flying by the seat of my pants and trying new things without preparing for the worst (it's all I'm good at, haha), I'd at least like to know what issues I should be prepared to deal with should something go wrong.
TL;DR: What negative implications are there, if any, to using the "editbin" command above to make Visual Studio 2010 aware of memory addresses greater than 2 GB?
The negative implication of enabling largeaddressaware is that the application could crash or corrupt memory in strange ways. The program was written assuming that no pointer value it had to deal with would be > 2GB, and that assumption can be broken in subtle ways. The canonical example is probably calculating the midpoint address between two pointers:
ptrMid = (ptr1 + ptr2) / 2;
That will work great if all of your pointers are < 2GB, but if they aren't you will get an incorrect result due to overflow:
ptrMid = (0x80000000 + 0x80000004) / 2 = 0x00000002, not 0x80000002
(The 32-bit sum wraps around to 0x00000004 before the division; the overflow-safe form is ptr1 + (ptr2 - ptr1) / 2.)
And it's not only Visual Studio itself that has to be able to handle pointers > 2GB; any add-in could be affected by this as well.
For more things that have to be checked before enabling largeaddressaware, see this question: What to do to make application Large Address Aware?
You really should never use editbin to change largeaddressaware on an application you don't control.
After reading this discussion and checking the existing headers, it looks like VS 2010 already has this capability applied, at least for my installation (64-bit Windows 7). If it was already compiled in, I don't think you need to worry about bad side effects.
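One way to check for yourself whether the flag is already set (assuming a Visual Studio Command Prompt, run from the IDE directory containing devenv.exe) is to look at the image headers with dumpbin; a large-address-aware binary reports "Application can handle large (>2GB) addresses":
dumpbin /headers devenv.exe | findstr /i "large"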
This appears to be by design.
Recall that even when the /3GB switch is set, 32-bit programs receive only 2GB of address space unless they indicate their willingness to cope with addresses above 2GB by passing the /LARGEADDRESSAWARE flag.
This flag means the same thing on 64-bit Windows. But since 64-bit Windows has a much larger address space available to it, it can afford to give the 32-bit Windows program the entire 4GB of address space to use. This is mentioned almost incidentally in Knowledge Base article Q889654 in the table "Comparison of memory and CPU limits in the 32-bit and 64-bit versions of Windows".
In other words, certain categories of 32-bit programs (namely, those tight on address space) benefit from running on a 64-bit Windows machine, even though they aren't explicitly taking advantage of any 64-bit features.
http://blogs.msdn.com/b/oldnewthing/archive/2005/06/01/423817.aspx
Editbin is a Microsoft utility, so they're basically claiming that it works.

Does Windows XP have an equivalent to VAX/VMS Installed Shared Images?

Back in the good old/bad old days when I developed on VAX/VMS it had a feature called 'Installed Shared Images' whereby if one expected one's executable program would be run by many users concurrently one could invoke the INSTALL utility thus:
$ INSTALL
INSTALL> ADD ONES_PROGRAM.EXE/SHARE
INSTALL> EXIT
The /SHARE flag had the effect of separating out the code from the data so that concurrent users of ONES_PROGRAM.EXE would all share the code (on a read-only basis of course) but each would have their own copy of the data (on a read-write basis). This technique/feature saved Mbytes of memory (which was necessary in those days) as only ONE copy of the program's code ever needed to be resident in VAX memory irrespective of the number of concurrent users.
Does Windows XP have something similar? I can't figure out if the Control Panel's 'Add Programs/Features' is the equivalent (I think it is, but I'm not sure)
Many thanks for any info
Richard
p.s. INSTALL would also share Libraries as well as Programs in case you were curious
The Windows virtual memory manager will do this automatically for you. So long as the module can be loaded at the same address in each process, the physical memory for the code will be shared between each process that loads that module. That is true for all modules, libraries as well as executables.
This is achieved by the linker marking code segments as shareable and data segments as non-shareable.
The bottom line is that you do not have to do anything explicit to make this happen.
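If you're curious how a given module's sections are flagged, the dumpbin tool that ships with Visual Studio will list each section's characteristics (code sections typically show up as Execute Read, writable data as Read Write). A sketch, where ONES_PROGRAM.EXE is just the hypothetical executable from the question:
dumpbin /headers ONES_PROGRAM.EXE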

How do I get Windows to go as fast as Linux for compiling C++?

I know this is not so much a programming question, but it is relevant.
I work on a fairly large cross-platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?
A single-change incremental build takes 20 seconds on Linux and > 3 minutes on Windows. Why? I can even install the 'gold' linker in Linux and get that time down to 7 seconds.
Similarly, git is 10x to 40x faster on Linux than on Windows.
In the git case it's possible git is not using Windows in the optimal way, but VC++? You'd think Microsoft would want to make their own developers as productive as possible, and faster compilation would go a long way toward that. Maybe they are trying to encourage developers into C#?
As a simple test, find a folder with lots of subfolders and do a simple
dir /s > c:\list.txt
on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.
ls -R > /tmp/list.txt
I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM and 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds and Linux takes < 1 second.
Is there a registry setting I can set to speed up Windows? What gives?
A few slightly relevant links, relevant to compile times, not necessarily I/O:
Apparently there's an issue in Windows 10 (not in Windows 7) where closing a process holds a global lock. When compiling with multiple cores, and therefore multiple processes, this issue hits.
The /analyze option can adversely affect perf because it loads a web browser. (Not relevant here but good to know.)
Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.
Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
A few ideas (a combined command sketch for some of these appears after the list):
Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
Put your files on an SSD -- helps hugely for random I/O.
If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2 Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
Disable last access time: fsutil behavior set disablelastaccess 1
Disable the indexing service
Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
Check the Windows error log to make sure you aren't experiencing occasional disk errors.
Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
The first 30% of disk partitions is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
Disable any services that you don't need
Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
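A minimal sketch combining a few of the command-line tweaks above (run from an elevated prompt; the fsutil changes generally need a reboot to take full effect, and MySolution.sln is just a placeholder name):
fsutil behavior set disable8dot3 1
fsutil behavior set disablelastaccess 1
fsutil behavior set memoryusage 2
msbuild MySolution.sln /m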
NTFS saves the file access time every time. You can try disabling it:
"fsutil behavior set disablelastaccess 1"
(restart)
The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario.
Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
Furthermore, on Windows you typically have virus scanners, as well as system restore and search tools, that can ruin your build times completely if they monitor your build folder for you. The Windows 7 Resource Monitor can help you spot this.
I have a reply here with some further tips for optimizing VC++ build times if you're really interested.
The difficulty in doing that is due to the fact that C++ tends to spread itself, and the compilation process, over many small individual files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.
That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.
The reason for this is due to Unix culture:
Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.
Access to source code.
You can't change what you can't control. Lack of access to Windows NTFS source code means that most efforts to improve performance have been though hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem, not fix it.
Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.
As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.
Unix tends towards many small files; Windows tends towards a few (or a single) big file.
Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with its own purpose. The final stage (linking) does create one big file, but that is a small percentage.
As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.
Windows applications tend to open one big file, hold it open for a long time, and close it when done. Think of MS Word: msword.exe (or whatever) opens the file once and appends for hours, updates internal blocks, and so on. Optimizing the opening of the file would be wasted effort.
The history of Windows benchmarking and optimization has been about how fast one can read or write long files. That's what gets optimized.
Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.
Unix is focused on high performance; Windows is focused on user experience
Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.
Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.
The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?
I don't believe in conspiracy theories, but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means that there is an incentive to solve performance problems by telling people to buy new hardware, not by improving the real problem: slow operating systems. Unix comes from academia, where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.
Also, as Unix is open source (even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to Windows source code, but it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html
You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much rarer lately since the innovation seems to all be at the standards-body level.
That's how I see it.
Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic because much of what they are doing makes it easier to keep the entire toolchain in RAM and repeating less work. Very impressive stuff.
I personally found that running a Windows virtual machine on Linux removed a great deal of the I/O slowness in Windows, likely because the Linux VM was doing lots of caching that Windows itself was not.
Doing that, I was able to speed up compile times of a large (250 kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.
Incremental linking
If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)
Directory traversal performance
It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.
I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.
The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.
Incidentally, if you want to dramatically decrease compilation times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system is spitting out, you can get 1 to 2 orders of magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantages of a unity build).
How do you build your large cross platform project?
If you are using common makefiles for Linux and Windows, you can easily degrade Windows performance by a factor of 10 if the makefiles are not designed to be fast on Windows.
I just fixed some makefiles of a cross-platform project using common (GNU) makefiles for Linux and Windows. make was starting a sh.exe process for each line of a recipe, causing the performance difference between Windows and Linux!
According to the GNU make documentation
.ONESHELL:
should solve the issue, but this feature is (currently) not supported for Windows make. So rewriting the recipes to be on single logical lines (e.g. by adding ;\ or \ at the end of the current editor lines) worked very well!
IMHO this is all about disk I/O performance. The order of magnitude suggests a lot of the operations go to disk under Windows, whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows would be to move your files onto a fast disk, server, or filesystem. Consider buying a solid state drive or moving your files to a ramdisk or fast NFS server.
I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.
Measured times as suggested above traversing the chromium directory tree:
Windows Home Premium 7 (8GB Ram) on NTFS: 32 seconds
Ubuntu 11.04 Linux (2GB Ram) on NTFS: 10 seconds
Ubuntu 11.04 Linux (2GB Ram) on ext4: 0.6 seconds
For the tests I pulled the Chromium sources (both under Windows and Linux):
git clone http://github.com/chromium/chromium.git
cd chromium
git checkout remotes/origin/trunk
To measure the time I ran
ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds #Powershell
I did turn off access timestamps and my virus scanner, and increased the cache manager settings under Windows (>2GB RAM), all without any noticeable improvement. The fact of the matter is, out of the box Linux performed 50x better than Windows with a quarter of the RAM.
For anybody who wants to contend that the numbers are wrong, for whatever reason, please give it a try and post your findings.
Try using jom instead of nmake
Get it here:
https://github.com/qt-labs/jom
The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.
GNU make does that out of the box thanks to the -j option, which might be one reason for its speed versus Microsoft's nmake.
jom works by executing different make commands in parallel on different processors/cores.
Try it yourself and feel the difference!
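If you're building with GNU make rather than nmake (for example MinGW's mingw32-make), the equivalent is the -j option mentioned above. A small sketch using the standard NUMBER_OF_PROCESSORS environment variable to pick the job count:
mingw32-make -j%NUMBER_OF_PROCESSORS%
jom itself is invoked with the same kind of command line you would pass to nmake.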
I want to add just one observation about using GNU make and other tools from the MinGW toolset on Windows: they seem to resolve hostnames even when the tools cannot communicate via IP at all. I would guess this is caused by some initialisation routine of the MinGW runtime. Running a local DNS proxy helped me improve the compilation speed with these tools.
Before that I got a big headache because the build speed dropped by a factor of 10 or so whenever I opened a VPN connection in parallel; in that case all these DNS lookups went through the VPN.
This observation might also apply to other build tools, not only MinGW-based ones, and it could have changed in the latest MinGW version in the meantime.
I recently found another way to speed up compilation by about 10% on Windows with GNU make: replacing the MinGW bash.exe with the version from win-bash.
(win-bash is not very comfortable for interactive editing.)

Not enough storage is available to complete this operation

Environment:
Visual Studio Ultimate 2010
Windows XP
WPF Desktop Application using .NET 4.0
We have a desktop application which plays a video. This video is part of a project and the project is packaged into the installer. Every once in a while building the installer project shows this error message:
Not enough storage is available to complete this operation
If I restart Visual Studio it works.
Is there a way to avoid this? Is there a better way to package videos in an installer?
This usually happens when the build process needs a lot of RAM and cannot get it. Since restarting Visual Studio fixes the problem, this is most likely your case as well.
Try closing some of the running applications. You can also try adding more RAM to your machine or increasing the page file.
I came across this question when trying to compile my C# solution in Visual Studio 2010 on Windows XP. One project had a fair number of embedded resources in it (the size of the resultant assembly was ~140MiB), and I couldn't compile the solution because I was getting the
Not enough storage is available to complete this operation
error in my build output.
None of the answers on this question helped, but I did find an answer to "Not enough storage is available to complete this operation" by ScottBurton42 on social.msdn.microsoft.com. It suggests adding the 3GB switch to the Boot.ini file, and making devenv.exe large-address aware. Adding the 3GB switch to my Boot.ini file was what worked for me (I think devenv.exe for Visual Studio 2010 and above is already large-address aware).
My answer is based on that answer.
Solution 1: Set the /3GB Boot.ini switch
The page Memory Support and Windows Operating Systems on MSDN says:
The virtual address space of processes and applications is still limited to 2 GB unless the /3GB switch is used in the Boot.ini file.
The /3GB switch allocates 3 GB of virtual address space to an application that uses IMAGE_FILE_LARGE_ADDRESS_AWARE in the process header. This switch allows applications to address 1 GB of additional virtual address space above 2 GB.
The virtual address space of processes and applications is still limited to 2 GB, unless the /3GB switch is used in the Boot.ini file. The following example shows how to add the /3GB parameter in the Boot.ini file to enable application memory tuning:
[boot loader]
timeout=30
default=multi(0)disk(0)rdisk(0)partition(2)\WINNT
[operating systems]
multi(0)disk(0)rdisk(0)partition(2)\WINNT="????" /3GB
Note "????" in the previous example is be the programmatic name of the operating system.
In Windows XP, the Boot.ini file can be modified by going to
System Properties → Advanced → Startup and Recovery → Settings → System Startup → Edit
The page on the /3GB switch on MSDN says:
On 32-bit versions of Windows, the /3GB parameter enables 4 GT RAM Tuning, a feature that enlarges the user-mode virtual address space to 3 GB and restricts the kernel-mode components to the remaining 1 GB.
The /3GB parameter is supported on Windows Server 2003, Windows XP, and Windows 2000. On Windows Vista and later versions of Windows, use the IncreaseUserVA element in BCDEdit.
Restarting the machine will then cause the setting to take effect.
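On Vista and later, the IncreaseUserVA setting mentioned above can be applied from an elevated command prompt with BCDEdit (the value is in MB). A sketch giving user mode 3 GB:
bcdedit /set IncreaseUserVa 3072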
Solution 2: Make devenv.exe large address aware:
Open up a Visual Studio Command Prompt (or a Developer Command Prompt, depending on the version of Visual Studio)
Type and execute the following command line:
editbin /LARGEADDRESSAWARE {path}\devenv.exe
where {path} is the path to devenv.exe (you can find this by going to the properties of the Visual Studio shortcut).
This will allow devenv.exe to access 3GB of memory instead of 2GB.
Problem
In my case, the problem was with a test project containing a very large (1.5GB) test file as an embedded resource. I have 16GB RAM in my machine with 8GB free when this occurred, so RAM was not the issue.
It is possible that we are hitting the 2 GB limit that the CLR has on any single object. Without delving into what MSBuild is doing under the hood, I can only speculate that during compile time, the embedded resource is loaded into an object graph that is hitting this limit.
The error message is very unhelpful. My first thought when I saw it was, "Have I run out of disk space?"
Solution
It is a file validation test project. One of the requirements is to be able to handle files of this size, so on face value my team thought it reasonable to embed it for use in test cases.
We fixed the error by moving the file onto the network (in the same way that it would be accessed by the validator in production) and marking the test as an integration test instead of a unit test. After all, aren't unit tests supposed to be fast-running?
Cleaning and rebuilding the solution worked for me.
For Visual Studio, you can try to do the following:
Close all Visual Studio instances.
Open a Visual Studio Developer Command Prompt in Administrator mode.
Navigate to:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Professional\Common7\IDE
Type the following:
editbin /LARGEADDRESSAWARE devenv.exe
It's also worth restarting the PC.
Hope this helps.
In my case, I had very little space left on the C drive. I cleared a few items from the C drive and tried again. It worked.
I might be late to answer but for future reference, you might want to check the Windows dump file settings (and probably set it to none).
In my case the server I was executing the code on couldn't handle my parallelized code.
Normally I'm running a setup like the following
new ParallelOptions { MaxDegreeOfParallelism = Math.Max(1, Environment.ProcessorCount / 2) }
Introducing a variable and allowing the degree of parallelism to be locked down to 1 (resulting in code like the following) resolved this issue for me.
new ParallelOptions { MaxDegreeOfParallelism = 1 }
The key for me:
We had embedded a huge database template (testing had filled it with lots of data) into the application. I have not seen this issue arise since removing the embedded resource and moving the database to a resource folder.
I fixed this problem by deleting or disabling (excluding) the *.rpt files that were very large, and I optimized my reports.
I am late to answer, but it may be useful for others:
In my case, just restarting Visual Studio fixed the problem.

Visual Studio 2005 Memory Usage

I find that quite often Visual Studio memory usage will average ~150-300 MB of RAM.
As a developer who very often needs to run with multiple instances of Visual Studio open, are there any performance tricks to optimize the amount of memory that VS uses?
I am running VS 2005 with one add-in (TFS)
From this blog post:
[...]
These changes are all available from the Options dialog (Tools -> Options):
Environment
General:
Disable “Animate environment tools”
Documents:
Disable “Detect when file is changed outside the environment”
Keyboard:
Remove the F1 key from the Help.F1Help command
Help\Online:
Set “When loading Help content” to “Try local first, then online” or “Try local only, not online”
Startup:
Change the “At startup” option to “Show empty environment”
Projects and Solutions
General:
Disable “Track Active Item in Solution Explorer”
Text Editor
General (for each language you want):
Disable “Navigation bar” (this is the toolbar that shows the objects and procedures drop-down lists, allowing you to choose a particular object in your code)
Disable “Track changes”
Windows Forms Designer
General:
Set “AutoToolboxPopulate” to false.
Set “EnableRefactoringOnRename” to false.
Upgrade to a 64-bit OS. My instances of VS were taking ~700MB each (very large solutions), and you rapidly run out of room with that.
Everyone on my team that has switched to 64-bit (and 8GB RAM) has wondered why they didn't do it sooner.
Minimize and re-maximize the main VS window to get VS to release the memory.
Uninstalling (and re-installing) Visual Assist solved the problem for me.
The number 1 thing you can do is switch to Windows 8.
It uses memory sharing / combining if the same DLL or memory page is loaded into multiple processes. Obviously there's a lot of overlap when running two instances of VS.
With four Visual Studio instances running, the shared memory column (you need to enable this column for it to be visible) shows how much memory is being shared.
So in Windows 7 this would use 2454MB, but here I'm saving 600+MB that is shared with the other devenv processes.
Chrome too has a lot of savings (because each browser tab is a new process). So overall I've still got 2GB free where I'd normally be maxed out.
