VS 2017 Installation failed - visual-studio

I am installing VS 2017 on Windows 7. After some time I receive this error:
MSI: C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\Microsoft.VisualStudio.MinShell.Msi.msi, Properties: REBOOT=ReallySuppress ARPSYSTEMCOMPONENT=1 MSIFASTINSTALL="7" VSEXTUI="1" VS7.3643236F_FC70_11D3_A536_0090278A1BB8="G:\Program Files (x86)\Microsoft Visual Studio\2017\Community"
Return code: 1632
Return code details: The Temp folder is on a drive that is full or is inaccessible. Free up space on the drive or verify that you have write permission on the Temp folder.
Log
G:\TEMP\dd_setup_20180318121545_006_Microsoft.VisualStudio.MinShell.Msi.log
I have checked the G: drive where TEMP is located. It has 200 GB free.
BUT one strange thing: this folder and all other folders are read-only. I uncheck it in Properties, close the Properties dialog, open it again: it is read-only again.
I can modify it, and even the MSI installer could: it created the log file there. But in the middle of the installation the error occurs.
What is it and how can I solve this problem?
I ran with logging:
Machine policy value 'DisableUserInstalls' is 0
SRSetRestorePoint skipped for this transaction.
Note: 1: 1336 2: 3 3: C:\Windows\Installer\
MainEngineThread is returning 1632
No System Restore sequence number for this installation.
User policy value 'DisableRollback' is 0
Machine policy value 'DisableRollback' is 0
Incrementing counter to disable shutdown. Counter after increment: 0
Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
Restoring environment variables
Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MainEngineThread is returning 1632

Disc Space Reclaiming - Quick Wins: Too much to read? These are the essential options (arguably).
Final Summary
This issue turned out to be a redirected TEMP folder and a redirected C:\Windows\Installer cache folder - with the latter being on an unavailable drive. The "Note: 1: 1336 2: 3 3: C:\Windows\Installer\" line in the log points in that direction: internal error 1336 is about failing to create a temporary file in the listed folder.
Please be careful redirecting system folders, in particular C:\Windows\Installer. It is a super-hidden system folder and side-effects are very common.
You must make sure that the relocated folder has the correct ACL permissions that the original folder had. This is crucially important for security reasons. For one thing the whole folder could be deleted by someone who does not understand what it is for - making all packages un-uninstallable and un-maintainable. There are also other security reasons.
Also: putting this folder on the network is not technically sound in my opinion - problems will result. A local drive is also problematic if drive letters change. Which brings me to the next point:
Lacking Space on Your System SSD Drive?
If your real issue is lacking disk space on your system SSD drive, please consider some alternatives listed below. Proceed with care and at your own risk with every option. Most of them should be harmless.
Disc Space Visualizing: I have an ancient tool called SpaceMonger.exe which shows me a visual representation of whatever is taking up my disc space. Very useful. It seems this tool is no longer supported. Maybe check https://en.wikipedia.org/wiki/WinDirStat for a similar tool (untested by me - run it by virustotal.com).
DriverStore: And a word to the resident hacker in all computer guys: no, no - don't try to redirect %SystemRoot%\System32\DriverStore (!). "Seductive The Dark Side Is". "Run Forrest, Run!". "Careful With That Axe Eugene". Etc... You get the picture. Leaving out Monty Python allusions for now. Seriously: I do not know what low-level stuff could be involved in the boot process. One would have to ask Raymond Chen, but don't. He has important things to do. However: pnputil.exe, DriverStore Explorer - your own risk. Don't do it :-).
Overall Suggestions
UPDATE: For laptops I like to use a high-capacity, low-profile USB flash drive and / or a high-capacity SD card permanently sitting in a port to hold my downloads and installers, VS Help files, maybe even source code (riskier). An obvious, but somewhat "clunky" option. One can combine this drive with the Library feature in Windows Explorer to show the flash drive under whatever library you want (Downloads, Videos, Pictures, Source, etc...).
My preferred desktop disc cleanup options below would be: 7, 19, 2, 18, 1, 6, 11, 12 (in that order).
Preferred options for laptops: 7, 19, 2, 18, 6, 10 (reduce max cache sizes), 15, 17, 3 (in that order).
The real-world approach for me is a slightly different order: 2 (purge obsolete Windows Updates - this may also trim WinSxS, but I am not positive), 19 (uninstall unnecessary software - can be relatively quick), then I run SpaceMonger.exe to find space hogs and move them - this often involves zapping the Downloads folder (7) and purging, moving or off-loading media files to cloud storage (Pictures, Videos, Music), then 6 for developer PCs (trimming Visual Studio and uninstalling unneeded SDKs and help files), and 9 (eliminate hibernation - not great for laptops), 18 (enable compression - can take forever), and finally I might zap the recovery partitions (laptops) and create a new partition in their place to allow data files to be stored there (freeing up system partition space). This zapping is a high-risk operation - obviously. Very error-prone (especially if inexperienced users use the diskpart command-line tool or a Linux Live Boot tool - described below). And obviously verify that you have installation media AND a valid license key before wiping out recovery partitions - it has to be mentioned. Data files I move are usually: the source code repository, the Downloads folder, the Outlook PST file, images and videos, etc... This procedure should reclaim many gigabytes of disc space. Don't do it for fun though - though the risk should be acceptable for most of these options (barring the recovery partition zapping - it is relatively simple to do, but error-prone).
Cleanup Options
Apply healthy skepticism to these options. They are not all terribly useful in many cases - just attempting to mention all kinds of tweaks. Potential easy, big wins without much configuration and fiddling could be 2, 6, 7, 9, 18. Options 2 and 18 are almost always time consuming, but very effective. Maybe hours for option 2 (especially on Windows 7 & 8 - do not abort when it is running) and even longer for option 18 on a large computer or a slow disk (but the operation can be cancelled).
0. Cloud Storage: an implied overall option in this day and age. OneDrive, Google Drive, Dropbox, etc... Download data files on demand.
1. My Documents: It is generally much better to move user data folders to a network location or another local drive (best) than to redirect system folders! Fewer system entanglements.
I wouldn't move the desktop or other folders found here: HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders, I would move "My Documents". Just right-click it in Windows Explorer, go to properties and there is a tab there with features to help you move it. Careful whilst doing this - a backup is in order first.
Pictures and Video might also be OK to move, but not the desktop or the other special folders - they may be involved in the boot or logon process (erroneous packages could cause that even for My Documents - nothing is without risk).
Streaming and media files from apps such as iTunes or similar can obviously totally hog a disc with limited capacity. I use SpaceMonger.exe to get an overview and then move the files somewhere else.
For computers with multiple users there will obviously be multiple "My Documents" folders to redirect.
2. Microsoft's Disk Cleanup Tool: Run cleanmgr.exe, select Clean up system files as described here: https://serverfault.com/q/573208/20599 (top).
UPDATE Oct 2018: the "Downloads" folder is now a cleanup option! DO NOT ENABLE IT! It deletes the whole Downloads folder without question. This issue appears to have been corrected as of Oct 2021.
You can now zap the uninstall files for applied Windows Updates - this can give you back several gigabytes on your system drive. In my case I could zap 5.36 GB. For Windows 7 I have seen dozens of gigabytes being purgeable.
This tool might also slim down and shrink the WinSxS directory (the Win32 side-by-side assembly folder). I am not 100% positive.
Obviously you can also remove unnecessary packages in Add / Remove Programs and remove old system restore points (use the tool's second tab, More Options, to access these features).
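If you prefer the command line, a minimal sketch using the standard /sageset and /sagerun switches (the profile number 1 is arbitrary):
rem Pick and store the cleanup categories under profile 1 (run elevated to see the system-file categories)
cleanmgr /sageset:1
rem Run the stored profile later, for example from a scheduled task
cleanmgr /sagerun:1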
3. Third Party Cleanup Applications: Third party tools such as CCleaner may be able to clean out even more space by wiping out cache files and temporary files for all kinds of applications and tools. This particular tool suffered a malware attack recently. Use at your own risk.
Personal opinion / suggestion: use only for test boxes or non-critical machines. The cleanup is quite awesome, but it also involves some risks (lost login passwords, lost system logs, etc...). Self-evident, but it should probably be mentioned.
My 2 cents: not a corporate solution, but may be fine for advanced home users who like to experiment and to keep their machines tuned.
4. Administrative Installations: For large MSI files, performing an administrative install will prevent the caching of the whole MSI file in C:\Windows\Installer. You must install from a proper network share so files are available for repair operations.
An administrative installation essentially extracts embedded CAB files from the MSI and allows the creation of a network installation point where all computers can pull files from instead of caching all files locally.
The generic method for running an administrative installation is: msiexec /a File.msi. More details in the links below, and a minimal example after this item.
Extract MSI from EXE
What is the purpose of administrative installation initiated using msiexec /a?
How can I eliminate the huge, cached MSI files in C:\Windows\Installer?
There is a whole lot of installer caching going on - it is a little out of hand if you ask me.
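A hedged sketch of such an administrative install for the MSI from the question (the share path is just an example; /a, TARGETDIR and /qb are standard msiexec options):
msiexec /a "Microsoft.VisualStudio.MinShell.Msi.msi" TARGETDIR="\\server\share\AdminImages\MinShell" /qb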
5. Mounted Drives: Some guys dabble with mounting external drives as folders on their system drive. In other words, another drive shows up as a regular folder on your system drive and functions as such (sample).
This I have no experience with, and I have doubts about its reliability over time. For all I know it might actually be better than several other options if you do it right (and never take out the physical drive).
I would do data file folders only (not settings folders, or core OS folders such as the desktop). Maybe for source control folders. If the link breaks, the data should still be safe and the system can still boot (and the link re-established).
UPDATE: Windows Explorer's "Include in library" feature is an alternative (do have a peek). I like to create a "Source Code Library" with included folders from here and there.
6. Visual Studio: And the obvious cleanup options for Visual Studio (for completeness):
If you have downloaded MSDN help locally: Help => Add and Remove Help Content, then remove items as appropriate and rely on online help instead, or change the Local store path towards the bottom to use another drive for the content.
Or if you have several versions of SDKs you do not need, or Visual Studio features you do not need, get rid of them (in Visual Studio: Tools => Get Tools and Features... - remove unnecessary features - I often use the Individual Components view).
7. Downloads Folder: I am sure I have forgotten many viable options to get some more workspace without wrecking your box. One would be to clean out your Downloads folder and move all installers to a network location - this might be the biggest save of all for some people.
This also works great for laptops - it is just about the first thing I would do for a laptop with little disc space. If you will not have access to your network share of installers - for example whilst traveling - then just use a thumb drive or external hard drive to hold your installers and ISO files.
For computers with multiple users there will obviously be multiple download folders potentially full of stuff. Use a disk space visualizer to see (see link on top of list).
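A hedged sketch of moving the folder contents with the built-in robocopy (E:\Downloads is just an example target; /E copies subfolders, /MOVE deletes the source after copying):
robocopy "%UserProfile%\Downloads" "E:\Downloads" /E /MOVE /R:1 /W:1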
8. Page File: Some people move the system page file (pagefile.sys) from the system drive to another drive. Back in the day this left me with an unbootable system, but perhaps things are better now. Not the first thing I would do though - this is very core OS stuff.
Obviously impossible for a laptop with only one drive (unless you erase the recovery partition and create a real, visible partition in its place).
I find this option risky; maybe I should have put it in the "dis-honorable mentions" part below.
Be careful. Maybe the "last-known good"-feature or system restore can help you if you get problems?
9. Hibernation File: the hibernation file on Windows systems lives on the system drive, and I am not aware of any way to move it anywhere else, for fundamental technical reasons. However, you can disable hibernation to get rid of the whole file. This will free up a few gigabytes on a modern computer.
You obviously lose the ability to put your machine into hibernation (memory dumped to disk), but sleep mode (low-power use mode / standby) should still be available.
Hibernation may be more desirable to keep enabled on laptops (if hibernation is disabled and the battery runs out whilst traveling, the laptop cannot auto-hibernate and you could lose data).
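A minimal sketch, run from an elevated command prompt (powercfg is the standard tool for this):
powercfg /hibernate off
rem Re-enable later with: powercfg /hibernate on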
10. Application Temp & Cache Folders: The above-mentioned CCleaner can wipe out a lot of temporary files for various applications (though I don't really recommend it for general use - I use cleanmgr.exe instead, and CCleaner only for test boxes).
Web Browsers (Firefox, Opera, Vivaldi, Chrome, IE, Edge, Safari, etc...) can also spam the disk with a lot of cache files and downloaded junk. It is possible to redirect all these folders, though I prefer to reduce them to a certain acceptable maximum size.
Plenty of other applications, of all kinds, leave trash on the system over time. Some of which can be cleaned with CCleaner mentioned above (or another such tool). Again not a tool recommendation. Use the cleanup features inside the application itself if available.
For computers with multiple users there will obviously be multiple cache folders to restrict and clean.
11. Special Data-Heavy Applications' Storage Folders: Some applications can potentially store enormous data files on your system drive (and outside "My Documents") that can be moved to other drives.
The biggest suspect is probably Outlook (in older versions at least) - or other email software (Thunderbird, Lotus Notes, etc...). For Outlook there is a single *.PST file storing all email and attachments, or a similar sync file if connected to Exchange. This file can be moved to a different drive with relative ease. Some even resort to using only the Web interface for their email and eliminate the local PST file (good for laptops).
Without going overboard, MS-SQL databases could be another type of massive data file that could be moved to a different drive with relative ease.
And this list could be made very long, but there are diminishing returns in adding more (web server folders, virtual machine images, media / video files (mentioned above), maybe virtualized applications, etc...).
For computers with multiple users there will obviously be multiple storage locations to redirect.
12. Source Control Working Folder & Repository: for a developer this is 100% self-evident - and almost embarrassing to list, but I just want to have it mentioned. It is also related to the previous point, but I add it as its own bullet point. Move both your working folder and your source code repository (if different, and if local) to a different drive than the system drive. For example Git, Mercurial, Perforce, StarTeam, etc...
13. Build Process Junk: Beyond moving source control folders to other drives, it is also possible that certain processes generate huge log files that spam the system in unexpected locations at times. I hear MSBuild tends to enthusiastically create log files sprinkled across the system, and I am not sure if the normal Microsoft cleanup tools detect them (for example cleanmgr.exe mentioned above). And your source code folders could contain lots of object files you can zap.
14. Visual Studio Code: a silly option, but for ad-hoc developer laptops or traveling tech workers, one could potentially rely on the smaller, multi-platform Visual Studio Code instead of Visual Studio for small development testing / work. Significantly smaller install. Personal note: the whole tool is a bit odd :-). There is also a browser version now?
Visual Studio Code (cross-platform).
What are the differences between Visual Studio Code and Visual Studio?
https://code.visualstudio.com/docs/supporting/faq
Download: https://www.visualstudio.com/
15. Windows Store Apps & Per-User Installations: if there are multiple users on the box, several Store apps could be installed multiple times, once per user. Some cleanup could be done here if need be. I suppose some games could be quite big. And in this day and age of side-by-side installation features, are we now to deploy everything per-user? Odd.
16. Tweak Each Package Installation: almost every package you install can be modified slightly during installation to add fewer files to the system partition.
Redirect Application Installation Folder: this is an option I personally dislike, but it is used a lot. For every installation you redirect the installation folder to a different drive and folder hierarchy than the regular ProgramFilesFolder. This is done on a per-package basis, and not all packages support this. Typically you go to a "Custom" installation dialog where you perform "feature selection" (what setup features to install).
Leave Out Optional Features: most packages you install will have optional components that you can leave out or even run-from-source in the case of some MSI packages. Certain developer tools can often be tweaked quite a bit without too many side-effects. Large games are often installed to a regular non-SSD hard drive which is not the system drive.
17. Uninstall Windows Components: a few components can be added / removed from Windows. Click Turn Windows Features On or Off from the old-style Add / Remove Control Panel applet. You can turn off / remove certain .NET versions, IE, IIS, Windows Media Player, Message Queue Server, Print to PDF, PowerShell and various other components. Maybe not that much to gain from this (some security benefits perhaps by removing some components - for example support for SMB 1.0 / CIFS file sharing or IIS).
18. Enable Compression For the System Drive: you can enable compression on the whole system drive - with some performance penalties - provided the file system is NTFS. Simply right-click the system drive => Properties => Compress drive to save disc space. This can take quite some time (old HDs are slow; SSDs are faster). You can also compress individual folders. I like to enable the "Show compressed or encrypted NTFS files in color" option in Windows Explorer: File menu => Options => View => Show compressed or encrypted NTFS files in color.
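For individual folders, a hedged command-line sketch with the built-in compact.exe (the path is just an example; /c compresses, /s recurses, /i ignores errors, /q is quiet):
compact /c /s:"C:\Projects" /i /q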
19. Uninstall Unnecessary Software: the forgotten, obvious option also mentioned under item 2 above: you should obviously uninstall any software that is not needed anymore. Common disk hogs: games, weird SDKs and development tools installed for testing, expired trial versions of various software, etc... To uninstall: Windows key + R, type appwiz.cpl and hit Enter.
20. User Data Cleanup: for certain uninstalled applications a lot of junk can be left behind in %UserProfile% and %AllUsersProfile%. Cleanup is, as usual, risky; use caution, but there can be lots of junk here - sometimes gigabytes.
Great care must be taken during such cleanup. Zip up the folder first. "Big wins only" - why nitpick with tiny text files? Diminishing returns for real if you get bogged down in these folders. Use disc-space visualization tools to see the hogs.
%AllUsersProfile% - shared data
%UserProfile% and %UserProfile%\AppData - user specific data, remember to clean for all users (if multiple).
21. Stray Package Caches: as mentioned above, a lot of caching goes on for MSI packages (and other installer packages). A lot of these cached packages can be left behind after uninstall (this was the case with InstallShield cached setups back in the day, at least).
The most commonly known caching locations are described here: Cache locations for (MSI) packages. Clean at your own risk, obviously - I repeat it, and I mean it. Some gigabytes are commonly stored here.
Paths inline (just a selection, there can be many others):
WiX: %ProgramData%\Package Cache
Installshield: %SystemRoot%\Downloaded Installations (older IS setups) and %LocalAppData%\Downloaded Installations (newer IS setups)
Advanced Installer: [AppDataFolder][|Manufacturer]\[|ProductName] [|ProductVersion]\install
Visual Studio: %AllUsersProfile%\Microsoft\VisualStudio\Packages. See important tip in comment below (disable cache).
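Separately, a quick, read-only way to gauge how big one of these caches is (using the WiX cache path from the list above as the example; the totals are printed at the end of the output):
dir /s "%ProgramData%\Package Cache"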
22. Package Distribution Cache Folders: SCCM and other package distribution systems have cache folders that can get really big. For example ccmcache. These folders can usually be cleaned or re-configured to take less space.
There are no doubt numerous other little tricks, but please don't redirect system folders!
Alternative Approaches
(Dis)-Honorable Mentions: The below are not recommendations, but some alternative approaches. They are higher risk than the options above (which should be good enough), and best if you are setting up a new laptop fresh or reinstalling it, and want to get rid of pesky vendor recovery-partitions that you can do without.
Let's state the obvious with conviction: A lot of data is lost every year using these tools. So coffee or caffeine first. Glasses on. Look around. Adjust any pony tails and beards (ladies too). Speak to yourself in the third person. Assume a demonstrably insane posture and shout out "I do!" to really commit to the imminent disaster! Good luck! Fire in the hole! "Fire for effect". SNAFU. FUBAR. OK, enough already... I have had bad experiences - but no huge disasters (knock on wood) - with all these tools. Enough said - be careful, your data is important. Wife's baby pictures, your uncommitted code, etc...
diskmgmt.msc or diskpart.exe (Windows): open the partition manager (diskmgmt.msc) and wipe out any recovery partitions or hidden partitions that you can live without, then expand your system partition to fill the whole physical disk or create a new visible partition.
Factory reset no longer possible (could be outdated anyway). You need installation media to reinstall (downloadable?).
Careful what you wipe out! Unrecoverable. Partitions are often protected and untouchable. They are also unmovable and un-expandable in many cases.
Maybe create a new, visible partition replacing the recovery partition and move data files and your downloads folder there to make more room on your system partition?
If the partitions are protected, you can use diskpart to delete them instead, or see next bullet point for gparted. Very easy to mess things up using diskpart though (command line).
gparted (Linux): you may be prevented from deleting a recovery partition from diskmgmt.msc (protected partitions). If you are adamant and insist, you can boot into a Linux Live Disc / System (booted from removable media) and delete using gparted for example.
I have done this to get rid of obsolete and useless recovery partitions and / or malware, and it worked just fine. But frankly I trust this gparted app as far as I can toss it. No offence to gparted, but playing well with Windows is challenging. Backup is crucial and mandatory for such risky endeavors - obviously.
Though risky (a Linux tool is updating the partition tables where your Windows partitions are declared) this may work for laptops where there is nowhere to redirect data folders since there is only one physical disk and you want the full disk for your system partition.
I think gparted even allows you to try to resize existing partitions at this point. I have never tried it. Good luck if you try. "Fire in the hole!".
Cloning: some use imaging tools or disk cloning features (hardware) to clone the old disk onto a bigger one. Backups essential obviously. Far from my comfort zone - just mentioning it. Not really relevant for this list (which was supposed to be about simple and effective measures to gain more disc space).
I believe there are features for this in gparted as well. Never tested.
Various hardware solutions. I gave them up years ago.
Why am I skeptical? Malware. Disk errors. Encryption. NTFS complexity? AD problems (old & new drive in use post-clone)? Etc...
Several hard drive vendors seem to deliver proprietary solutions for this - these may be better tested than generic approaches?
File System Allocation Size: the file system used and its allocation size affects available space. Never bothered to look much at this, but a lot of space can be wasted by allocation size issues: Would SSD drives benefit from a non-default allocation unit size?
Allocation size cannot be easily / safely changed for a disk in use. There may be tools that can do it, but the benefits are uncertain.
Modern Windows versions require NTFS as system partition file system. Other file systems such as FAT32 or exFAT have lower overhead (especially for smaller partitions - there will be more space available), and they are potentially faster but have more limitations. For FAT32 the biggest limitation is probably the 4GB max file size - not viable today.
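A read-only sketch for checking the current cluster size (the format switch shown in the comment applies only to newly formatted, non-system data volumes - it wipes the volume):
fsutil fsinfo ntfsinfo C:
rem Look for "Bytes Per Cluster" in the output
rem When formatting a NEW data volume you can choose the allocation unit size, e.g.: format E: /FS:NTFS /A:64K /Q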
The rest of this answer (below) was written during debugging - I will leave it in. It contains generic and general-purpose debugging options.
VC++ Runtimes
As seen in the link towards the bottom, other people have seen the same deployment error. Before getting into too much debugging, let's try the simplest approach possible. Please try to install the VC++ runtimes for 2017 (and 2015 perhaps) from here:
The latest supported Visual C++ downloads.
Potential General Fixes
This seems to be the better discussion online for this problem. I would first try the suggestion to run this tool: Microsoft Install and Uninstall Troubleshooter.
You can try this list of fixes as well. Crucially I would also try a reboot before trying again to release any potential locked files. Just to wipe the slate clean. The system's event log might have further information on the error seen (sometimes even beyond what is in an msiexec.exe log).
ACLs
What is the ACL (Access Control List) for your TEMP folder on that G: drive?
UPDATE: Also make sure the hidden folder C:\Windows\Installer exists and has the correct permission settings. You need to show protected operating system files in Windows Explorer to see this folder.
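A read-only sketch for inspecting those ACLs with the built-in icacls (adjust the paths to your setup):
icacls "G:\TEMP"
icacls "C:\Windows\Installer"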
Verbose Logging
Try to create a proper, verbose log for the MSI install in question (much more informative than the log you refer to). This gives you something to start with to figure out what is happening. You can find some information on how to do logging here.
I would enable logging for all MSI installations for debugging purposes. See installsite.org on logging (section "Globally for all setups on a machine") for how to do this.
I prefer to have this default logging switched on for dev and test boxes. Typically you suddenly see an MSI error and you wish you had a log - with this policy you always have one ready in %tmp%.
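A sketch of the policy value the installsite.org page describes (voicewarmupx enables verbose logging plus extra debug output; logs then land in %tmp% as randomly named MSI*.log files):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\Installer" /v Logging /t REG_SZ /d voicewarmupx /f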
Quick Testing
In your case, I would go to C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\ to see if the MSI package is present on disk, and then I would launch it with logging enabled:
msiexec.exe /I "Microsoft.VisualStudio.MinShell.Msi.msi" /QN /L*V "C:\msilog.log"
Alternatively I would just double click the MSI file and see if I get a better, interactive error message. You will most likely need the verbose log to get any info.
See link in comment below (concrete error).

Just check C:\Windows\Temp and C:\Windows\Installer:
do they exist and are they writable?
In my case I had deleted C:\Windows\Installer previously and forgotten about it, so I had to recreate it.
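If you have to recreate it, a minimal sketch from an elevated prompt (assumption: the folder normally carries the system and hidden attributes):
md C:\Windows\Installer
attrib +s +h C:\Windows\Installer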

The same error happens if UAC is disabled. The Visual Studio installer can't write anything to TEMP if User Account Control is off. Solution: enable UAC.
It was VS 2019 and Windows Server 2012 R2 in my case.

Related

How do I get Windows to go as fast as Linux for compiling C++?

I know this is not so much a programming question but it is relevant.
I work on a fairly large cross platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?
A single-change incremental build takes 20 seconds on Linux and > 3 minutes on Windows. Why? I can even install the 'gold' linker on Linux and get that time down to 7 seconds.
Similarly git is 10x to 40x faster on Linux than Windows.
In the git case it's possible git is not using Windows in the optimal way but VC++? You'd think Microsoft would want to make their own developers as productive as possible and faster compilation would go a long way toward that. Maybe they are trying to encourage developers into C#?
As a simple test, find a folder with lots of subfolders and do a simple
dir /s > c:\list.txt
on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.
ls -R > /tmp/list.txt
I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM, 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds, Linux takes < 1 second.
Is there a registry setting I can set to speed up Windows? What gives?
A few slightly related links, relevant to compile times, not necessarily I/O.
Apparently there's an issue in Windows 10 (not in Windows 7) that closing a process holds a global lock. When compiling with multiple cores and therefore multiple processes this issue hits.
The /analyze option can adversely affect perf because it loads a web browser. (Not relevant here but good to know.)
Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.
Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
A few ideas:
Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core (a minimal example follows after this list).
Put your files on an SSD -- helps hugely for random I/O.
If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2. Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
Disable last access time: fsutil behavior set disablelastaccess 1
Disable the indexing service
Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
Check the Windows error log to make sure you aren't experiencing occasional disk errors.
Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
The first 30% of disk partitions is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
Disable any services that you don't need
Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
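A minimal sketch of the parallel MSBuild invocation mentioned in the list above (the solution name is hypothetical):
msbuild MySolution.sln /m /p:Configuration=Release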
NTFS saves the file access time every time. You can try disabling it:
"fsutil behavior set disablelastaccess 1"
(restart)
The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario.
Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
Furthermore, on Windows you typically have virus scanners, as well as system restore and search tools, that can ruin your build times completely if they monitor your build folder for you. Windows 7 Resource Monitor can help you spot it.
I have a reply here with some further tips for optimizing VC++ build times if you're really interested.
The difficulty in doing that is due to the fact that C++ tends to spread itself and the compilation process over many small, individual, files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.
That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.
The reason for this is due to Unix culture:
Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.
Access to source code.
You can't change what you can't control. Lack of access to Windows NTFS source code means that most efforts to improve performance have been through hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem rather than fix it.
Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.
As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.
Unix tends towards many small files; Windows tends towards a few (or a single) big file.
Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with their own purpose. The final stage (linking) does create one big file, but that is a small percentage.
As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.
Windows applications tend to open one big file, hold it open for a long time, close it when done. Think of MS-Word. msword.exe (or whatever) opens the file once and appends for hours, updates internal blocks, and so on. The value of optimizing the opening of the file would be wasted time.
The history of Windows benchmarking and optimization has been on how fast one can read or write long files. That's what gets optimized.
Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.
Unix is focused on high performance; Windows is focused on user experience
Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.
Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.
The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?
I don't believe in conspiracy theories but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means that there is an incentive to solve performance problems by telling people to buy new hardware; not by improving the real problem: slow operating systems. Unix comes from academia where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.
Also, as Unix is open source (even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to Windows source code, but it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html
You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much more rare lately since the innovation seems to all be at the standards body level.
That's how I see it.
Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic because much of what they are doing makes it easier to keep the entire toolchain in RAM and repeating less work. Very impressive stuff.
I personally found that running a Windows virtual machine on Linux removed a great deal of the I/O slowness in Windows, likely because Linux was doing lots of disk caching underneath that Windows itself was not.
Doing that I was able to speed up compile times of a large (250 kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.
Incremental linking
If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)
Directory traversal performance
It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.
I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.
The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.
Incidentally, if you want to dramatically decrease compilation times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system is spitting out, you can get 1 to 2 orders of magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantages of a unity build).
How do you build your large cross platform project?
If you are using common makefiles for Linux and Windows you can easily degrade Windows performance by a factor of 10 if the makefiles are not designed to be fast on Windows.
I just fixed some makefiles of a cross-platform project using common (GNU) makefiles for Linux and Windows. make was starting a sh.exe process for each line of a recipe, causing the performance difference between Windows and Linux!
According to the GNU make documentation
.ONESHELL:
should solve the issue, but this feature is (currently) not supported for Windows make. So rewriting the recipes to be on single logical lines (e.g. by adding ;\ or \ at the end of the current editor lines) worked very well!
IMHO this is all about disk I/O performance. The order of magnitude suggests a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows will be to move your files onto a fast disk, server or filesystem. Consider buying a solid state drive or moving your files to a RAM disk or fast NFS server.
I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.
Measured times as suggested above traversing the chromium directory tree:
Windows Home Premium 7 (8GB Ram) on NTFS: 32 seconds
Ubuntu 11.04 Linux (2GB Ram) on NTFS: 10 seconds
Ubuntu 11.04 Linux (2GB Ram) on ext4: 0.6 seconds
For the tests I pulled the chromium sources (both under win/linux)
git clone http://github.com/chromium/chromium.git
cd chromium
git checkout remotes/origin/trunk
To measure the time I ran
ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds #Powershell
I did turn off access timestamps, my virus scanner and increased the cache manager settings under windows (>2Gb RAM) - all without any noticeable improvements. Fact of the matter is, out of the box Linux performed 50x better than Windows with a quarter of the RAM.
For anybody who wants to contend that the numbers are wrong - for whatever reason - please give it a try and post your findings.
Try using jom instead of nmake
Get it here:
https://github.com/qt-labs/jom
The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.
GNU make does that out of the box thanks to the -j option; that might be a reason for its speed vs. the Microsoft nmake.
jom works by executing different make commands in parallel on different processors/cores.
Try it yourself and feel the difference!
I want to add just one observation using GNU make and other tools from MinGW on Windows: they seem to resolve hostnames even when the tools cannot even communicate via IP. I would guess this is caused by some initialisation routine of the MinGW runtime. Running a local DNS proxy helped me improve the compilation speed with these tools.
Before that I got a big headache because the build speed dropped by a factor of 10 or so when I opened a VPN connection in parallel. In this case all these DNS lookups went through the VPN.
This observation might also apply to other build tools, not only MinGW based and it could have changed on the latest MinGW version meanwhile.
I recently managed to achieve another speedup of about 10% in compilation on Windows using GNU make, by replacing the MinGW bash.exe with the version from win-bash.
(The win-bash is not very comfortable regarding interactive editing.)

Rewrite Registry File in Windows

I have been trying to find a way to "defragment" the registry on my Windows machine. Firstly, does this make sense? Any benefits in doing this? (Not much love on superuser.com) Secondly, I am looking for a way to rewrite the registry using C/C++ with Windows API. Is there a way to read the registry and write it to a new file getting rid of unused bytes along the way? (I might have to write the new file and then boot into another OS/disk before I can overwrite the original... but I am willing to take that risk.)
Microsoft's PageDefrag does exactly this, as it states on its page "PageDefrag uses the standard file defragmentation APIs to defragment the files."
(A copy of the linked article is here because in typical MSDN style their link is dead.)
http://www.larshederer.homepage.t-online.de/erunt/ - NTREGOPT NT Registry Optimizer
Similar to Windows 9x/Me, the registry files in an NT-based system can become fragmented over time, occupying more space on your hard disk than necessary and decreasing overall performance. You should use the NTREGOPT utility regularly, but especially after installing or uninstalling a program, to minimize the size of the registry files and optimize registry access.
The program works by recreating each registry hive "from scratch", thus removing any slack space that may be left from previously modified or deleted keys.
http://reboot.pro/index.php?showtopic=11212 - Offreg.dll MS WDK Offline Registry Library
The offline registry library (Offreg.dll) is used to modify a registry hive outside the active system registry. This library is intended for registry update scenarios such as servicing an operating system image. The library supports registry hive formats starting with Windows XP.
http://reboot.pro/topic/11312-offline-registry/ - Offline Registry MS WDK Command-Line Tool
A command line tool that will allow one to read and write to an offline registry hive.
Reading the values should be possible.
But I've never seen any spec for how the registry files are written to disk, and unless you could find one you'd have to reverse engineer those files for your OS (there might be differences between XP and 7, etc.). Then you have to remember that the registry isn't just one file; it's multiple files, and some of them belong to certain users, and I think they use SIDs rather than user names, so even if you move them to a new computer you have to be sure it's the same OS version with the same users and the same SIDs set up on it.
All this for little or no gain so I'd agree with the superuser users that it wouldn't make sense.

Is There Any Reason Not To Use The Windows Registry For Program Settings?

To me it's a no-brainer. The settings for my program go into the Windows Registry. After all, that's what it's for, isn't it?
But some programmers are still hesitant in using the Registry. They state that as it grows it slows down your computer. Or they state that it gets corrupted and causes your computer to malfunction.
So they write their own configuration files, or may use the INI files that Microsoft has deprecated since a few OS versions ago.
From what I hear, the problems with the registry that occurred in early Windows OS's were mostly fixed as of Windows XP. It may be the plethora of companies that make Registry Cleaners that are keeping up the rumors that "registry bloat" and "orphaned entries" are still bad.
So I ask, is there any reason today not to use the Windows Registry to store my program configuration settings?
If the user does not allow registry access, you're screwed.
If the user reinstalls Windows and he wants to migrate his settings, it's much more complicated than with a simple file
Working with a config file means your app is portable
Much simpler for the user to change a setting manually
When you want to port your app to another OS, what are you going to do with your registry settings?
Windows Registry is bloated. Do you really want to contribute to this chaos?
For me, quickly installing, migrating and moving applications is a key point of productivity. I can't do that if I need to take care of hundreds of possible registry keys. If there's a simple .ini or .cfg or .xml file somewhere in my user folder (or even the application directory if it is a portable app), migration is easy.
Often-heard argument pro registry: easy to write and read (assuming you're using plain WinAPI). Really? I consider the RegXXX family of functions pretty verbose... too many function calls and typing work for storing just a few bits of information. So you always end up wrapping the registry away... and now compare this effort with a simple text configuration file, maybe just key=value-like.
It depends. When you have small entries that need to be read by multiple programs, the registry is OK, since databases have locking issues and config files are application-based.
The problem happens when the user does not allow registry access: there is lots of software on the market that will show a pop-up when anything tries to modify the registry, and the user can cancel or allow the change. Such programs are most commonly anti-virus programs.
Putting your settings into the Registry means that if your user wants to move your program and its settings to another computer, he can't. Backup, ditto. Those settings are in a mysterious invisible place. I find this to be a hostile approach to one's users.
I've written numerous small-to-medium programs, and always used a .ini file. A tech-savvy user can edit this file using an editor, he can check the settings in it, he can email it to a tech supporter, he can do a large variety of things that are significantly harder to do with registry entries.
And my programs don't contribute to slowing the computer down.
Personally speaking, I just don't like binary configuration of any type. I much prefer text file format which can be easily copied, edited, diffed & merged, and put under change control complete with history.
The last of these is the biggest reason not to use the registry - I can stick configuration files into SVN (or similar) with the full support given to text files, instead of having to treat it as a blob.
I don't really have much of an opinion for or against using the registry, but I'd like to note something... Many answers here indicate that registry access may be restricted for a certain user. I'd say the exact same thing goes for config files.
With the registry you need to write to the "current user" hive to be fairly certain of having access (and should do so anyway, in many cases). Config files should be put in a user-based area as well (e.g. AppData/Local) if you want "guaranteed" access without questions asked. As far as I know, putting config files in "global" areas is as likely to yield access problems as the registry is.

How do I improve Windows Subversion client update performance?

How do I improve Subversion client update performance? It appears to be disk bound on the client.
Details:
CollabNet Windows client version 1.6.2 (r37639)
Windows XP SP2
3 GB RAM with PF Usage around 1 GB and System Cache of 1.1 GB.
Disk has write caching enabled
Update takes 7-15 minutes (when very little to update).
Checkout has 36,083 directories/files (from svn list)
Repository has 58,750 revisions.
Checkout takes about 2.7 GB
Perf monitor shows % Disk Write time stays near 90% during update.
Max Disk Read Bytes/sec got up to 12.8M and write got up to 5.2M
CPU, paging file usage, and network usage are all low.
Watching the server performance seems to show that it isn't a bottleneck.
I'm especially interested in answers besides getting a faster disk (especially configuration changes).
Updates from some of the suggestions:
I need the whole thing so sparse directories won't work.
Another client (TortoiseSVN) takes 7 minutes also
TortoiseSVN icon overlays have been configured so they don't cause the problem.
Anti-virus is configured to skip that directory, so it isn't causing the problem.
I experience exactly the same thing. We recently replaced Perforce with svn, but if we cannot overcome the performance problems on Windows we must consider another tool.
Using svn 1.6.6, Win XP and Vista clients. RedHat server.
My observations matches yours:
Huge disk-write activity.
Antivirus not a bottleneck.
No matter which svn clients are used.
No server or network bottleneck.
Complementary info
More than 3 times faster operations on:
Linux (Ubuntu).
Linux (Ubuntu) running on VirtualBox at Win Vista host.
Win XP running on VMWare at RedHat host.
Do you need every bit of the repository on your working copy? If you truly only care about particular portions of the tree, look into Subversion's Sparse Directories (a.k.a. "Sparse Checkouts") feature. It allows you to manipulate your working copy so it only contains those directories of interest.
Just as an example, you might use this to prune documentation, installer-related files, etc. Depending on what you truly need on your local machine, embracing this approach could make a serious dent in your wait times.
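A hedged sketch of such a sparse checkout (the repository URL and folder names are placeholders; --depth and --set-depth are standard options in svn 1.5 and later):
svn checkout --depth immediates http://server/svn/repo/trunk wc
cd wc
svn update --set-depth infinity src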
Try svn client version 1.5.x. It helped me on my Vista laptop. The 1.6.x versions are extremely slow.
This is more likely to be your network and the amount of data moved as well as your client. Are you using Tortoise? I find it to be a bit slow myself when moving that much data!
Are you using TortoiseSVN? If so, the Icon Overlays do slow down operations. If you go to TortoiseSVN Settings/Icon Overlays there are several settings you can tweak to control the level to which you want to use the Overlays, including turning them off completely. See if that affects your performance.
Do you run a virus checker that uses on-access scanning? That can really make it crawl. If so, turn it off and see if that helps. Most scanners will have a way to exclude specific directories if that helps.
Nobody seems to be pointing out the one reason that I often consider a design flaw. Subversion creates a second "pristine" copy of the checkout for offline operations. If you're checking out 4G of files, it's actually writing 8G to disk.
Compare a checkout to an export. That will show you the massive difference when writing those second copies.
There's nothing you can do about that.
Upgrade to svn 1.7
From Discussion of Slow Performance of SVN Update:
The update process in svn 1.6 goes something like this:
search the entire working copy, to see what's there at the moment, and locking it so no one changes the answer during the next steps
tell that to the server
receive from the server whatever new stuff you need, applying the changes to the files as you go
recurse over the entire working copy again, unlocking it
If there are many directories and files, steps 1 and 4 can take up a lot of time. This would be consistent with your observation of long delays with no network traffic.
The working copy format was changed in svn 1.7. Now all meta information is stored in a SQLite database in the root folder of the working copy, and there is no need to perform steps 1 and 4 any more, which consumed most of the time during svn update.
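After installing a 1.7 client, each existing working copy has to be converted once to the new format; a minimal sketch, run in the working copy root:
svn upgrade
svn update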

Unmovable Files on Windows XP

When I defragment my XP machine I notice that there is a block of "Unmovable Files". Is there a file attribute I can use to make my own files unmovable?
Just to clarify, I want a way to programmatically tell Windows that a file that I create should be unmovable. Is this possible, and if so, how can I do it?
Thanks,
Terry
A lot of system files cannot be moved after the system boots, such as the page file and registry database files.
This utility runs before Windows boots to defragment those files. I have it set to run at every boot, and it works well for me on several machines.
Note that the very first time you boot up with this utility set to run, it may take several minutes to defrag. After that first run though, it finishes in just 3 or 4 seconds.
Edit0: To respond to your clarification - that link says Windows has marked the page file and registry files as open for exclusive access. So you should be able to do the same thing with the LockFile API call. However, that's not an attribute of the file itself. You'd have to actually run some background program that locks the file for exclusive access.
There are no file attributes that you can place on your files to mark them as immovable. The only way (I think) that a file cannot be moved during defragmentation is to have some other process have the file open (for read or write; I'm not even sure whether you need to have the file open in exclusive mode or not).
Quite frankly, I cannot think of a reason that you'd want your files not to move, unless you have specific requirements about where on the disk platter your files reside. Defragmentation should generally lead to faster disk access, and that seems to be desirable in all cases :-)
This usually means that the file is in use by some process. If you're defragmenting, you'll likely see this with a lot of system files. If the file should legitimately be movable and is stuck (it's being held by a process that runs at startup but shouldn't be, for example), the most useful way of resolving the problem is to remove all permissions on the file, reboot, restore the permissions, and then get rid of the file/run the program that's trying to use it.
I suppose the ugly way is to have an application boot on startup, check every few seconds if defrag is running and if so open the file in exclusive mode.
This is really ugly and I don't recommend it unless there is no cleaner solution.
Terry, the answers all mention ways to prevent files from becoming unmovable during defragmentation. From your question it appears that you are in fact wanting to make your personal files unmovable. Can you please clarify what is appealing about making your files unmovable?
I assume you're using the defragger that comes with Windows. Some commercial ones like DiskKeeper can move some of these files (usually system files). You can try their trial versions.
Contig might serve your purpose http://technet.microsoft.com/en-us/sysinternals/bb897428.aspx
I'm relatively certain I ran across some methods/attributes you could access programmatically to do exactly what you want. This was back in the NT4 days though, and my memory isn't that good.
For a little more complete solution try Raxco's PerfectDisk. While it is a commercial product it does a very good job and supports boot-time defrag of system files. The first defrag takes longer than, say, DiskKeeper, but it's a single-pass defragger and supports defragging with very little free space left on the drive. Overall it's a much smarter defrag program than any other I've seen and supports systems of any size.
http://www.raxco.com/
First try to move (or delete) the files in Safe Mode. If you cannot, try to move (or delete) the files with Linux.
But be careful: if those are Windows system files, then you will fail to boot Windows.
Some reasons why files are unmovable are: the file size is too big, the files are open / in use, insufficient security privileges, the files are being accessed by another computer, and many other things.

Resources