Virtual memory management on Windows 7 - performance

I found this article on technet regarding this topic:
http://technet.microsoft.com/en-us/magazine/ff382717.aspx
Does Microsoft have any benchmarks for various manual page file configurations?
Somebody told me that it is a good idea to create an extra ~4.5 GB partition on the drive that holds the OS, move the page file to that partition, and disable the page file on the OS partition itself. Additionally it was suggested to manually fix the page file size to 2 GB / 2 GB (initial / maximum).
In my eyes this makes no sense. Could someone comment on this?

I don't know if this answer will satisfy you, but I've seen the opinion that if you place your page file on an ext2/3/4 file system (you need third-party drivers to do this on Windows, of course), system performance will increase.
Unfortunately, I can't recall where I saw this (it was somehow connected with writing file system drivers using the Installable File System API).

Related

VS 2017 Installation failed

I am installing VS 2017 on Windows 7. After some time I receive the error:
MSI: C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\Microsoft.VisualStudio.MinShell.Msi.msi, Properties: REBOOT=ReallySuppress ARPSYSTEMCOMPONENT=1 MSIFASTINSTALL="7" VSEXTUI="1" VS7.3643236F_FC70_11D3_A536_0090278A1BB8="G:\Program Files (x86)\Microsoft Visual Studio\2017\Community"
Return code: 1632
Return code details: The Temp folder is on a drive that is full or is inaccessible. Free up space on the drive or verify that you have write permission on the Temp folder.
Log
G:\TEMP\dd_setup_20180318121545_006_Microsoft.VisualStudio.MinShell.Msi.log
I have checked the G: drive where TEMP is located. It has 200 GB free.
But one strange thing: this folder and all the other folders are read-only. I uncheck it in Properties, close the Properties dialog, open it again: it is read-only again.
I can modify the folder, and even the MSI installer could: it created the log file there. But in the middle of the installation the error occurs.
What is this and how can I solve the problem?
I ran with logging:
Machine policy value 'DisableUserInstalls' is 0
SRSetRestorePoint skipped for this transaction.
Note: 1: 1336 2: 3 3: C:\Windows\Installer\
MainEngineThread is returning 1632
No System Restore sequence number for this installation.
User policy value 'DisableRollback' is 0
Machine policy value 'DisableRollback' is 0
Incrementing counter to disable shutdown. Counter after increment: 0
Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\Rollback\Scripts 3: 2
Note: 1: 1402 2: HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Installer\InProgress 3: 2
Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
Restoring environment variables
Decrementing counter to disable shutdown. If counter >= 0, shutdown will be denied. Counter after decrement: -1
MainEngineThread is returning 1632
Disc Space Reclaiming - Quick Wins?: Too much to read? The essential options (arguably).
Final Summary
This issue turned out to be a redirected TEMP and C:\Windows\Installer cache folder - with the latter being on an unavailable drive.
Please be careful redirecting system folders, in particular C:\Windows\Installer. It is a super-hidden system folder and side-effects are very common.
You must make sure that the relocated folder has the same, correct ACL permissions that the original folder had. This is crucially important for security reasons. For one thing the whole folder could be deleted by someone who does not understand what it is for - making all packages un-uninstallable and un-maintainable. There are also other security reasons.
Also: putting this folder on the network is not technically sound in my opinion - problems will result. A local drive is also problematic if drive letters change. Which brings me to the next point:
Lacking Space for your System SSD Drive?
If your real issue is lacking disk space on your system SSD drive, please consider some alternatives listed below. Proceed with care and at your own risk with every option. Most of them should be harmless.
Disc Space Visualizing: I have an ancient tool called SpaceMonger.exe which shows me a visual representation of whatever is taking up my disc space. Very useful. It seems this tool is no longer supported. Maybe check https://en.wikipedia.org/wiki/WinDirStat for a similar tool (untested by me - run it by virustotal.com).
DriverStore: And a word to the resident hacker in all computer guys: no, no - don't try to redirect %SystemRoot%\System32\DriverStore (!). "Seductive The Dark Side Is". "Run Forrest, Run!". "Careful With That Axe Eugene". Etc... You get the picture. Leaving out Monty Python allusions for now. Seriously: I do not know what low-level stuff could be involved in the boot process. One would have to ask Raymond Chen, but don't. He has important things to do. However: pnputil.exe, DriverStore Explorer - your own risk. Don't do it :-).
Overall Suggestions
UPDATE: For laptops I like to use a high-capacity, low-profile USB flash drive and / or a high-capacity SD card permanently sitting in a port to hold my downloads and installers, VS Help files, maybe even source code (riskier). An obvious, but somewhat "clunky" option.
One can combine this drive with the Library feature in Windows Explorer to show the flash drive under whatever library you want (Downloads, Videos, Pictures, Source, etc...).
My preferred desktop disc cleanup options below would be: 7, 19, 2, 18, 1, 6, 11, 12 (in that order).
Preferred options for laptops: 7, 19, 2, 18, 6, 10 (reduce max cache sizes), 15, 17, 3 (in that order).
The real-world approach for me is a slightly different order: 2 (purge obsolete Windows Updates - this may also trim WinSxS, but I am not positive), 19 (uninstall unnecessary software - can be relatively quick), then I run SpaceMonger.exe to find space hogs and move them - this often involves zapping the Downloads folder (7) and purging, moving or off-loading media files to the cloud (Pictures, Videos, Music), then 6 for developer PCs (trimming Visual Studio and uninstalling useless SDKs and help files), then 9 (eliminate hibernation - not great for laptops), 18 (enable compression - can take forever), and finally I might zap the recovery partitions (laptops) and create a new partition in their place to allow data files to be stored there (freeing up system partition space). This zapping is a high-risk operation - obviously - and very error-prone (especially if inexperienced users use the diskpart command-line tool or a Linux Live Boot tool - described below). And obviously verify that you have installation media AND a valid license key before wiping out recovery partitions - it has to be mentioned. Data files I move are usually: the source code repository, the Downloads folder, the Outlook PST file, images and videos, etc... This procedure should reclaim many gigabytes of disc space. Don't do it just for fun, though the risk should be acceptable for most of these options (barring the recovery partition zapping - it is relatively simple to do, but error-prone).
Cleanup Options
Apply healthy skepticism to these options. They are not all terribly useful in many cases - just attempting to mention all kinds of tweaks. Potential easy, big wins without much configuration and fiddling could be 2, 6, 7, 9, 18. Options 2 and 18 are almost always time consuming, but very effective. Maybe hours for option 2 (especially on Windows 7 & 8 - do not abort when it is running) and even longer for option 18 on a large computer or a slow disk (but the operation can be cancelled).
Option 0, Cloud Storage, is an implied overall option in this day and age: OneDrive, Google Drive, Dropbox, etc... Download data files on demand.
My Documents: It is generally much better to move user data folders to a network location or another, local drive (best) than to redirect system folders! Few system-entanglements.
I wouldn't move the desktop or the other folders found here: HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders - I would move only "My Documents". Just right-click it in Windows Explorer, go to Properties and use the Location tab to move it. Be careful whilst doing this - a backup is in order first.
Pictures and Video might also be OK to move, but not the desktop or the other special folders - they may be involved in the boot or logon process (erroneous packages could cause that even for My Documents - nothing is without risk).
Streaming and media files from apps such as iTunes or similar can obviously totally hog a disc with limited capacity. I use SpaceMonger.exe to get an overview and then move the files somewhere else.
For computers with multiple users there will obviously be multiple "My Documents" folders to redirect.
Microsoft's Disk Cleanup Tool: Run cleanmgr.exe, select Clean up system files as described here: https://serverfault.com/q/573208/20599 (top).
UPDATE Oct. 2018: the "Downloads" folder is now a cleanup option! DO NOT ENABLE IT! It deletes the whole Downloads folder without asking. This issue appears to have been corrected as of Oct. 2021.
You can now zap the uninstall files for applied Windows Updates - this can give you back several gigabytes on your system drive. On one of my machines that was 5.36 GB; for Windows 7 I have seen dozens of gigabytes being purgeable.
This tool might also slim down and shrink the WinSxS directory (the Windows side-by-side assembly folder / component store). I am not 100% positive.
Obviously you can also remove unnecessary packages in Add / Remove Programs and remove system restore points (the More Options tab in the tool gives access to these features).
Third Party Cleanup Applications: Third party tools such as CCleaner may be able to clean out even more space by wiping out cache files and temporary files for all kinds of applications and tools. This particular tool suffered a malware attack recently. Use at your own risk.
Personal opinion / suggestion: use only for test boxes or non-critical machines. The cleanup is quite awesome, but it also involves some risks (lost login passwords, lost system logs, etc...). Self-evident, but it should probably be mentioned.
My 2 cents: not a corporate solution, but may be fine for advanced home users who like to experiment and to keep their machines tuned.
Administrative Installations: For large MSI files, performing an administrative install will prevent the caching of the whole MSI file in C:\Windows\Installer. You must install from a proper network share so files are available for repair operations.
An administrative installation essentially extracts embedded CAB files from the MSI and allows the creation of a network installation point where all computers can pull files from instead of caching all files locally.
The generic method for running an administrative installation is: msiexec /a File.msi. More details in the links below.
Extract MSI from EXE
What is the purpose of administrative installation initiated using msiexec /a?
How can I eliminate the huge, cached MSI files in C:\Windows\Installer?
There is a whole lot of installer caching going on - it is a little out of hand if you ask me.
Mounted Drives: Some guys dabble with mounting external drives as folders on their system drive. In other words another drive shows up as a regular folder on your system drive and functions as such (sample).
This I have no experience with, and I have doubts about its reliability over time. For all I know it might actually be better than several other options if you do it right (and never take out the physical drive).
I would do data file folders only (not settings folders, or core OS folders such as the desktop). Maybe for source control folders. If the link breaks, the data should still be safe and the system can still boot (and the link re-established).
UPDATE: Windows Explorer's "Include in library" is an alternative? (do have a peek) I like to create a "Source Code Library" with included folders from here and there.
Visual Studio: And the obvious cleanup options for Visual Studio (for completeness):
If you have downloaded MSDN help locally, trim it: Help => Add and Remove Help Content, remove items as appropriate and rely on online help instead, or change the Local store path towards the bottom to keep the content on another drive.
If you have several versions of SDKs you do not need, or Visual Studio features you do not need, get rid of them (in Visual Studio: Tools => Get Tools and Features... - I often use the Individual Components view to remove unnecessary features).
Downloads Folder: I am sure I have forgotten many viable options to get some more workspace without wrecking your box. One would be to clean out your Downloads folder and move all installers to a network location - this might be the biggest save of all for some people.
This also works great for laptops - it is just about the first thing I would do for a laptop with little disc space. If you will not have access to your network share of installers - for example whilst traveling - then just use a thumb drive or external hard drive to hold your installers and ISO files.
For computers with multiple users there will obviously be multiple download folders potentially full of stuff. Use a disk space visualizer to see (see link on top of list).
Page File: Some people move the system page file (pagefile.sys) from the system drive to another drive. Back in the day this caused me an unbootable system, but perhaps things are better now. Not the first thing I would do though - this is very core OS-stuff.
Obviously impossible for a laptop with only one drive (unless you erase the recovery partition and create a real, visible partition in its place).
I find this option risky; maybe I should have put it in the "dis-honorable mentions" part below.
Be careful. Maybe the "last-known good"-feature or system restore can help you if you get problems?
Hibernation File: the hibernation file on Windows systems lives on the system drive, and I am not aware of any way to move it anywhere else, for very fundamental technical reasons. However, you can disable hibernation to get rid of the whole file (run powercfg /hibernate off from an elevated command prompt). This will free up a few gigabytes on a modern computer.
You obviously lose the ability to put your machine into hibernation (memory dumped to disk), but sleep mode (low-power use mode / standby) should still be available.
Hibernation may be more desirable to keep enabled on laptops (with it disabled, a laptop whose battery runs out whilst traveling cannot auto-hibernate, and you could lose data).
Application Temp & Cache Folders: The above mentioned CCleaner can wipe out a lot of temporary files for various applications (though I don't really recommend this for use - I use cleanmgr.exe instead - and CCleaner for test boxes).
Web Browsers (Firefox, Opera, Vivaldi, Chrome, IE, Edge, Safari, etc...) can also spam the disk with a lot of cache files and downloaded junk. It is possible to redirect all these folders, though I prefer to reduce them to a certain acceptable maximum size.
Plenty of other applications, of all kinds, leave trash on the system over time. Some of which can be cleaned with CCleaner mentioned above (or another such tool). Again not a tool recommendation. Use the cleanup features inside the application itself if available.
For computers with multiple users there will obviously be multiple cache folders to restrict and clean.
Special Data-Heavy Applications' Storage Folders: Some applications can potentially store enormous data files on your system drive (and outside "My Documents") that can be moved to other drives.
The biggest suspect is probably Outlook (in older versions at least) - or other email software (Thunderbird, Lotus Notes, etc...). For Outlook there is a single *.PST file storing all email and attachments, or a similar sync file (OST) if connected to Exchange. This file can be moved to a different drive with relative ease. Some even resort to using only the web interface for their email and eliminate the local PST file (good for laptops).
Without going overboard, MS-SQL databases could be another type of massive data file that could be moved to a different drive with relative ease.
And this list could be made very big, but diminishing returns to add any more (web server folders, virtual machine images, media / video files (mentioned above), virtualized applications maybe, etc...).
For computers with multiple users there will obviously be multiple storage locations to redirect.
Source Control Working Folder & Repository: for a developer this is 100% self-evident - and almost embarrassing to list, but I just want to have it mentioned. It is also related to the previous point, but I add it as its own bullet point. You move both your working folder and your source code repository (if different, and if local) to a different drive than the system drive. For example GIT, Mercurial, Perforce, StarTeam, etc...
Build Process Junk: Beyond moving source control folders to other drives, it is also possible that certain processes generate huge log files that spam the system in unexpected locations at times. I hear MSBuild tends to enthusiastically create log files sprinkled across the system and I am not sure if normal Microsoft cleanup tools detect them (for example cleanmgr.exe mentioned above). And your source code could have lots of object files you can zap.
Visual Studio Code: a silly option, but for ad-hoc developer laptops or traveling tech-workers, one could potentially rely on the smaller, multi-platform Visual Studio Code instead of Visual Studio for small development testing / work. Significantly smaller install. Personal note: the whole tool is a bit odd :-). There is also a browser version now?
Visual Studio Code (cross-platform).
What are the differences between Visual Studio Code and Visual Studio?
https://code.visualstudio.com/docs/supporting/faq
Download: https://www.visualstudio.com/
Windows Store Apps & Per User Installations: if there are multiple users on the box, several Store apps could be installed multiple times, once per user. Some cleanup could be done here if need be. I suppose some games could be quite big. And in the day and age of side-by-side installation features, we are now to deploy everything per-user? Odd.
Tweak Each Package Installation: almost every package you install can be modified slightly during installation to add less files to the system partition.
Redirect Application Installation Folder: this is an option I personally dislike, but it is used a lot. For every installation you redirect the installation folder to a different drive and folder hierarchy than the regular ProgramFilesFolder. This is done on a per-package basis, and not all packages support this. Typically you go to a "Custom" installation dialog where you perform "feature selection" (what setup features to install).
Leave Out Optional Features: most packages you install will have optional components that you can leave out or even run-from-source in the case of some MSI packages. Certain developer tools can often be tweaked quite a bit without too many side-effects. Large games are often installed to a regular non-SSD hard drive which is not the system drive.
Uninstall Windows Components: a few components can be added / removed from Windows. Click Turn Windows Features On or Off from the old-style Add / Remove Control Panel Applet. You can turn off / remove certain .NET versions, IE, IIS, Windows Media Player, Message Queue Server, Print to PDF, PowerShell and various other components. Maybe not that much to gain from this (some security benefits perhaps by removing some components - for example support for SMB 1.0 / CIFS file sharing or IIS).
Enable Compression For System Drive: you can enable compression on the whole system drive - with some performance penalty - provided the file system is NTFS. Simply right-click the system drive => Properties => Compress drive to save disc space. This can take quite some time (old HDs are slow, SSDs are faster). You can also compress individual folders. I like to enable the "Show compressed or encrypted NTFS files in color" option in Windows Explorer: File menu => Options => View => Show compressed or encrypted NTFS files in color.
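If you prefer to script this per folder rather than click through Explorer, here is a minimal sketch of the programmatic route - it sets the same NTFS compression attribute via DeviceIoControl with FSCTL_SET_COMPRESSION. The folder path and error handling are illustrative only; it assumes an NTFS volume and sufficient rights on the folder.

#include <windows.h>
#include <winioctl.h>

// Sketch: enable NTFS compression on one folder (the folder object itself;
// new files created inside it inherit the attribute).
bool CompressFolder(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ | GENERIC_WRITE,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                           OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;
    USHORT format = COMPRESSION_FORMAT_DEFAULT;   // LZNT1, same as the Explorer checkbox
    DWORD bytes = 0;
    BOOL ok = DeviceIoControl(h, FSCTL_SET_COMPRESSION, &format, sizeof(format),
                              nullptr, 0, &bytes, nullptr);
    CloseHandle(h);
    return ok != FALSE;
}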
Uninstall Unnecessary Software: the forgotten obvious option mentioned in item 2 above, you should obviously uninstall any software that is not needed anymore. Common disk hogs: games, weird SDKs and development tools installed for testing, expired trial versions for various software, etc... To uninstall: Windows key + R, type appwiz.cpl and hit Enter.
User Data Cleanup: for certain uninstalled applications a lot of junk could be left in the %UserProfile% and in the %AllUsersProfile%. Cleanup is as usual risky, use caution, but there can be lots of junk here - sometimes gigabytes.
Great care must be taken during such cleanup. Zip up the folder first. "Big wins only" - why nitpick with tiny text files? Diminishing returns for real if you get bogged down in these folders. Use disc-space visualization tools to see the hogs.
%AllUsersProfile% - shared data
%UserProfile% and %UserProfile%\AppData - user specific data, remember to clean for all users (if multiple).
Stray Package Caches: as mentioned above a lot of caching goes on for MSI packages (and other installer packages). It is likely that a lot of these packages can be left behind after uninstall (this was the case with Installshield cached setups back in the day at least).
The most commonly known caching locations are described here: Cache locations for (MSI) packages. Clean at your own risk, obviously - I repeat it, and I mean it. Some gigabytes are commonly stored here.
Paths inline (just a selection, there can be many others):
WiX: %ProgramData%\Package Cache
Installshield: %SystemRoot%\Downloaded Installations (older IS setups) and %LocalAppData%\Downloaded Installations (newer IS setups)
Advanced Installer: [AppDataFolder][|Manufacturer]\[|ProductName] [|ProductVersion]\install
Visual Studio: %AllUsersProfile%\Microsoft\VisualStudio\Packages. See important tip in comment below (disable cache).
Package Distribution Cache Folders: SCCM and other package distribution systems have cache folders that get really big. For example ccmcache. These folders can usually be cleaned or re-configured to take less space.
There are no doubt numerous other little tricks, but please don't redirect system folders!
Alternative Approaches
(Dis)-Honorable Mentions: The below are not recommendations, but some alternative approaches. They are higher risk than the options above (which should be good enough), and best if you are setting up a new laptop fresh or reinstalling it, and want to get rid of pesky vendor recovery-partitions that you can do without.
Let's state the obvious with conviction: A lot of data is lost every year using these tools. So coffee or caffeine first. Glasses on. Look around. Adjust any pony tails and beards (ladies too). Speak to yourself in the third person. Assume a demonstrably insane posture and shout out "I do!" to really commit to the imminent disaster! Good luck! Fire in the hole! "Fire for effect". SNAFU. FUBAR. OK, enough already... I have had bad experiences - but no huge disasters (knock on wood) - with all these tools. Enough said - be careful, your data is important. Wife's baby pictures, your uncommitted code, etc...
diskmgmt.msc or diskpart.exe (Windows): open partition manager (diskmgmt.msc) and wipe out any recovery partitions or hidden partitions that you can live without and then expand your system disk to fill the whole physical disk or create a new visible partition.
Factory reset no longer possible (could be outdated anyway). You need installation media to reinstall (downloadable?).
Careful what you wipe out! Unrecoverable. Partitions are often protected and untouchable. They are also unmovable and un-expandable in many cases.
Maybe create a new, visible partition replacing the recovery partition and move data files and your downloads folder there to make more room on your system partition?
If the partitions are protected, you can use diskpart to delete them instead, or see next bullet point for gparted. Very easy to mess things up using diskpart though (command line).
gparted (Linux): you may be prevented from deleting a recovery partition from diskmgmt.msc (protected partitions). If you are adamant and insist, you can boot into a Linux Live Disc / System (booted from removable media) and delete using gparted for example.
I have done this to get rid of obsolete and useless recovery partitions and / or malware, and it worked just fine. But frankly I trust this gparted app as far as I can toss it. No offence to gparted, but playing well with Windows is challenging. Backup is crucial and mandatory for such risky endeavors - obviously.
Though risky (a Linux tool is updating the partition tables where your Windows partitions are declared) this may work for laptops where there is nowhere to redirect data folders since there is only one physical disk and you want the full disk for your system partition.
I think gparted even allows you to try to resize existing partitions at this point. I have never tried it. Good luck if you try. "Fire in the hole!".
Cloning: some use imaging tools or disk cloning features (hardware) to clone the old disk onto a bigger one. Backups essential obviously. Far from my comfort zone - just mentioning it. Not really relevant for this list (which was supposed to be about simple and effective measures to gain more disc space).
I believe there are features for this in gparted as well. Never tested.
Various hardware solutions. I gave them up years ago.
Why I am skeptical? Malware. Disk errors. Encryption. NTFS complexity? AD-problems (old & new drive in use post-clone)? Etc...
Several hard drive vendors seem to deliver proprietary solutions for this - these may be better tested than generic approaches?
File System Allocation Size: the file system used and its allocation size affects available space. Never bothered to look much at this, but a lot of space can be wasted by allocation size issues: Would SSD drives benefit from a non-default allocation unit size?
Allocation size cannot be easily / safely changed for a disk in use. There may be tools that can do it, but the benefits are uncertain.
Modern Windows versions require NTFS as system partition file system. Other file systems such as FAT32 or exFAT have lower overhead (especially for smaller partitions - there will be more space available), and they are potentially faster but have more limitations. For FAT32 the biggest limitation is probably the 4GB max file size - not viable today.
The rest of this answer (below) was written during debugging - I will leave it in. It contains generic and general-purpose debugging options.
VC+ Runtimes
As seen in the link towards the bottom, other people have seen the same deployment error. Before getting into too much debugging, let's try the simplest approach possible. Please try to install the VC++ runtimes for 2017 (and 2015 perhaps) from here:
The latest supported Visual C++ downloads.
Potential General Fixes
This seems to be the best discussion online of this problem. I would first try the suggestion to run this tool: Microsoft Install and Uninstall Troubleshooter.
You can try this list of fixes as well. Crucially I would also try a reboot before trying again to release any potential locked files. Just to wipe the slate clean. The system's event log might have further information on the error seen (sometimes even beyond what is in an msiexec.exe log).
ACLs
What is the ACL (Access Control List) for your TEMP folder on that G: drive?
UPDATE: Also make sure the hidden folder C:\Windows\Installer exists and has the correct permission settings. You need to show protected operating system files in Windows Explorer to see this folder.
Verbose Logging
Try to create a proper, verbose log for the MSI install in question (much more informative than the log you refer to). This gives you something to start with to figure out what is happening. You can find some information on how to do logging here.
I would enable logging for all MSI installations for debugging purposes. See installsite.org on logging (section "Globally for all setups on a machine") for how to do this.
I prefer this default logging switched on for dev and test boxes. Typically you suddenly see an MSI error and you wish you had a log - now you can, always ready in %tmp%.
Quick Testing
In your case, I would go to C:\ProgramData\Microsoft\VisualStudio\Packages\Microsoft.VisualStudio.MinShell.Msi,version=15.6.27421.1\ to see if the MSI package is present on disk, and then I would launch it with logging enabled:
msiexec.exe /I "Microsoft.VisualStudio.MinShell.Msi.msi" /QN /L*V "C:\msilog.log"
Alternatively I would just double click the MSI file and see if I get a better, interactive error message. You will most likely need the verbose log to get any info.
See link in comment below (concrete error).
Just check C:\Windows\Temp and C:\Windows\Installer:
do they exist and are they writable?
In my case I had deleted C:\Windows\Installer previously and forgotten about it, so I had to recreate it.
The same error happens if UAC is disabled. Visual Studio installer can't write anything to TEMP if User Account Control is off. Solution - enable UAC.
It was VS 2019 and Windows Server 2012 R2 in my case.

How do I get Windows to go as fast as Linux for compiling C++?

I know this is not so much a programming question but it is relevant.
I work on a fairly large cross platform project. On Windows I use VC++ 2008. On Linux I use gcc. There are around 40k files in the project. Windows is 10x to 40x slower than Linux at compiling and linking the same project. How can I fix that?
A single-change incremental build takes 20 seconds on Linux and > 3 minutes on Windows. Why? I can even install the 'gold' linker on Linux and get that time down to 7 seconds.
Similarly git is 10x to 40x faster on Linux than Windows.
In the git case it's possible git is not using Windows in the optimal way but VC++? You'd think Microsoft would want to make their own developers as productive as possible and faster compilation would go a long way toward that. Maybe they are trying to encourage developers into C#?
As simple test, find a folder with lots of subfolders and do a simple
dir /s > c:\list.txt
on Windows. Do it twice and time the second run so it runs from the cache. Copy the files to Linux and do the equivalent 2 runs and time the second run.
ls -R > /tmp/list.txt
I have 2 workstations with the exact same specs: HP Z600s with 12 GB of RAM, 8 cores at 3.0 GHz. On a folder with ~400k files, Windows takes 40 seconds; Linux takes < 1 second.
Is there a registry setting I can set to speed up Windows? What gives?
A few somewhat relevant links, relating to compile times, not necessarily I/O.
Apparently there's an issue in Windows 10 (not in Windows 7) that closing a process holds a global lock. When compiling with multiple cores and therefore multiple processes this issue hits.
The /analyze option can adversely affect perf because it loads a web browser. (Not relevant here but good to know.)
Unless a hardcore Windows systems hacker comes along, you're not going to get more than partisan comments (which I won't do) and speculation (which is what I'm going to try).
File system - You should try the same operations (including the dir) on the same filesystem. I came across this which benchmarks a few filesystems for various parameters.
Caching. I once tried to run a compilation on Linux on a RAM disk and found that it was slower than running it on disk thanks to the way the kernel takes care of caching. This is a solid selling point for Linux and might be the reason why the performance is so different.
Bad dependency specifications on Windows. Maybe the chromium dependency specifications for Windows are not as correct as for Linux. This might result in unnecessary compilations when you make a small change. You might be able to validate this using the same compiler toolchain on Windows.
A few ideas:
Disable 8.3 names. This can be a big factor on drives with a large number of files and a relatively small number of folders: fsutil behavior set disable8dot3 1
Use more folders. In my experience, NTFS starts to slow down with more than about 1000 files per folder.
Enable parallel builds with MSBuild; just add the "/m" switch, and it will automatically start one copy of MSBuild per CPU core.
Put your files on an SSD -- helps hugely for random I/O.
If your average file size is much greater than 4KB, consider rebuilding the filesystem with a larger cluster size that corresponds roughly to your average file size.
Make sure the files have been defragmented. Fragmented files cause lots of disk seeks, which can cost you a factor of 40+ in throughput. Use the "contig" utility from sysinternals, or the built-in Windows defragmenter.
If your average file size is small, and the partition you're on is relatively full, it's possible that you are running with a fragmented MFT, which is bad for performance. Also, files smaller than 1K are stored directly in the MFT. The "contig" utility mentioned above can help, or you may need to increase the MFT size. The following command will double it, to 25% of the volume: fsutil behavior set mftzone 2 Change the last number to 3 or 4 to increase the size by additional 12.5% increments. After running the command, reboot and then create the filesystem.
Disable last access time: fsutil behavior set disablelastaccess 1
Disable the indexing service
Disable your anti-virus and anti-spyware software, or at least set the relevant folders to be ignored.
Put your files on a different physical drive from the OS and the paging file. Using a separate physical drive allows Windows to use parallel I/Os to both drives.
Have a look at your compiler flags. The Windows C++ compiler has a ton of options; make sure you're only using the ones you really need.
Try increasing the amount of memory the OS uses for paged-pool buffers (make sure you have enough RAM first): fsutil behavior set memoryusage 2
Check the Windows error log to make sure you aren't experiencing occasional disk errors.
Have a look at Physical Disk related performance counters to see how busy your disks are. High queue lengths or long times per transfer are bad signs.
The first 30% of disk partitions is much faster than the rest of the disk in terms of raw transfer time. Narrower partitions also help minimize seek times.
Are you using RAID? If so, you may need to optimize your choice of RAID type (RAID-5 is bad for write-heavy operations like compiling)
Disable any services that you don't need
Defragment folders: copy all files to another drive (just the files), delete the original files, copy all folders to another drive (just the empty folders), then delete the original folders, defragment the original drive, copy the folder structure back first, then copy the files. When Windows builds large folders one file at a time, the folders end up being fragmented and slow. ("contig" should help here, too)
If you are I/O bound and have CPU cycles to spare, try turning disk compression ON. It can provide some significant speedups for highly compressible files (like source code), with some cost in CPU.
NTFS saves the file access time every time. You can try disabling it:
"fsutil behavior set disablelastaccess 1"
(restart)
The issue with Visual C++ is, as far as I can tell, that it is not a priority for the compiler team to optimize this scenario.
Their solution is that you use their precompiled header feature. This is what Windows-specific projects have done. It is not portable, but it works.
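For completeness, a minimal sketch of what that precompiled header setup conventionally looks like in MSVC (file names and the /Yc and /Yu switches are the usual arrangement, not anything specific to the project in the question):

// stdafx.h - the precompiled header: put only heavy, rarely-changing includes here.
#include <windows.h>
#include <string>
#include <vector>
#include <map>
// One stdafx.cpp is compiled with /Yc"stdafx.h" to build the .pch file;
// every other .cpp is compiled with /Yu"stdafx.h" and must start with
// #include "stdafx.h" before any other include.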
Furthermore, on Windows you typically have virus scanners, as well as System Restore and search tools, that can ruin your build times completely if they monitor your build folder for you. Windows 7 Resource Monitor can help you spot this.
I have a reply here with some further tips for optimizing VC++ build times if you're really interested.
The difficulty in doing that is due to the fact that C++ tends to spread itself and the compilation process over many small, individual, files. That's something Linux is good at and Windows is not. If you want to make a really fast C++ compiler for Windows, try to keep everything in RAM and touch the filesystem as little as possible.
That's also how you'll make a faster Linux C++ compile chain, but it is less important in Linux because the file system is already doing a lot of that tuning for you.
The reason for this is due to Unix culture:
Historically file system performance has been a much higher priority in the Unix world than in Windows. Not to say that it hasn't been a priority in Windows, just that in Unix it has been a higher priority.
Access to source code.
You can't change what you can't control. Lack of access to the Windows NTFS source code means that most efforts to improve performance have been through hardware improvements. That is, if performance is slow, you work around the problem by improving the hardware: the bus, the storage medium, and so on. You can only do so much if you have to work around the problem rather than fix it.
Access to Unix source code (even before open source) was more widespread. Therefore, if you wanted to improve performance you would address it in software first (cheaper and easier) and hardware second.
As a result, there are many people in the world that got their PhDs by studying the Unix file system and finding novel ways to improve performance.
Unix tends towards many small files; Windows tends towards a few (or a single) big file.
Unix applications tend to deal with many small files. Think of a software development environment: many small source files, each with its own purpose. The final stage (linking) does create one big file, but that is a small percentage.
As a result, Unix has highly optimized system calls for opening and closing files, scanning directories, and so on. The history of Unix research papers spans decades of file system optimizations that put a lot of thought into improving directory access (lookups and full-directory scans), initial file opening, and so on.
Windows applications tend to open one big file, hold it open for a long time, close it when done. Think of MS-Word. msword.exe (or whatever) opens the file once and appends for hours, updates internal blocks, and so on. The value of optimizing the opening of the file would be wasted time.
The history of Windows benchmarking and optimization has been on how fast one can read or write long files. That's what gets optimized.
Sadly software development has trended towards the first situation. Heck, the best word processing system for Unix (TeX/LaTeX) encourages you to put each chapter in a different file and #include them all together.
Unix is focused on high performance; Windows is focused on user experience
Unix started in the server room: no user interface. The only thing users see is speed. Therefore, speed is a priority.
Windows started on the desktop: Users only care about what they see, and they see the UI. Therefore, more energy is spent on improving the UI than performance.
The Windows ecosystem depends on planned obsolescence. Why optimize software when new hardware is just a year or two away?
I don't believe in conspiracy theories, but if I did, I would point out that in the Windows culture there are fewer incentives to improve performance. The Windows business model depends on people buying new machines like clockwork. (That's why the stock price of thousands of companies is affected if MS ships an operating system late or if Intel misses a chip release date.) This means that there is an incentive to solve performance problems by telling people to buy new hardware, not by fixing the real problem: slow operating systems. Unix comes from academia, where the budget is tight and you can get your PhD by inventing a new way to make file systems faster; rarely does someone in academia get points for solving a problem by issuing a purchase order. In Windows there is no conspiracy to keep software slow, but the entire ecosystem depends on planned obsolescence.
Also, as Unix is open source (even when it wasn't, everyone had access to the source), any bored PhD student can read the code and become famous by making it better. That doesn't happen in Windows (MS does have a program that gives academics access to the Windows source code; it is rarely taken advantage of). Look at this selection of Unix-related performance papers: http://www.eecs.harvard.edu/margo/papers/ or look up the history of papers by Ousterhout, Henry Spencer, or others. Heck, one of the biggest (and most enjoyable to watch) debates in Unix history was the back and forth between Ousterhout and Seltzer: http://www.eecs.harvard.edu/margo/papers/usenix95-lfs/supplement/rebuttal.html
You don't see that kind of thing happening in the Windows world. You might see vendors one-upping each other, but that seems to be much rarer lately since the innovation seems to all be at the standards-body level.
That's how I see it.
Update: If you look at the new compiler chains that are coming out of Microsoft, you'll be very optimistic because much of what they are doing makes it easier to keep the entire toolchain in RAM and repeating less work. Very impressive stuff.
I personally found that running a Windows virtual machine on Linux removed a great deal of the I/O slowness in Windows, likely because the Linux host was doing lots of caching that Windows itself was not.
Doing that I was able to speed up compile times of a large (250Kloc) C++ project I was working on from something like 15 minutes to about 6 minutes.
Incremental linking
If the VC 2008 solution is set up as multiple projects with .lib outputs, you need to set "Use Library Dependency Inputs"; this makes the linker link directly against the .obj files rather than the .lib. (And actually makes it incrementally link.)
Directory traversal performance
It's a bit unfair to compare directory crawling on the original machine with crawling a newly created directory with the same files on another machine. If you want an equivalent test, you should probably make another copy of the directory on the source machine. (It may still be slow, but that could be due to any number of things: disk fragmentation, short file names, background services, etc.) Although I think the perf issues for dir /s have more to do with writing the output than measuring actual file traversal performance. Even dir /s /b > nul is slow on my machine with a huge directory.
I'm pretty sure it's related to the filesystem. I work on a cross-platform project for Linux and Windows where all the code is common except for where platform-dependent code is absolutely necessary. We use Mercurial, not git, so the "Linuxness" of git doesn't apply. Pulling in changes from the central repository takes forever on Windows compared to Linux, but I do have to say that our Windows 7 machines do a lot better than the Windows XP ones. Compiling the code after that is even worse on VS 2008. It's not just hg; CMake runs a lot slower on Windows as well, and both of these tools use the file system more than anything else.
The problem is so bad that most of our developers that work in a Windows environment don't even bother doing incremental builds anymore - they find that doing a unity build instead is faster.
Incidentally, if you want to dramatically decrease compilation times on Windows, I'd suggest the aforementioned unity build. It's a pain to implement correctly in the build system (I did it for our team in CMake), but once done it automagically speeds things up for our continuous integration servers. Depending on how many binaries your build system is spitting out, you can get 1 to 2 orders of magnitude improvement. Your mileage may vary. In our case I think it sped up the Linux builds threefold and the Windows one by about a factor of 10, but we have a lot of shared libraries and executables (which decreases the advantages of a unity build).
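To make the "unity build" idea concrete, here is roughly what such a translation unit looks like (file names are placeholders; the real work is teaching the build system to compile this one file instead of the individual sources):

// unity_module.cpp - one translation unit that swallows many sources, so the
// compiler front-end and the linker are invoked once instead of dozens of times.
#include "renderer.cpp"
#include "physics.cpp"
#include "audio.cpp"
#include "network.cpp"
// ...every remaining .cpp of the module; keep them excluded from the normal
// build when the unity configuration is active, or symbols will be duplicated.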
How do you build your large cross platform project?
If you are using common makefiles for Linux and Windows, you can easily degrade Windows performance by a factor of 10 if the makefiles are not designed to be fast on Windows.
I just fixed some makefiles of a cross-platform project using common (GNU) makefiles for Linux and Windows. make was starting a sh.exe process for each line of a recipe, causing the performance difference between Windows and Linux!
According to the GNU make documentation
.ONESHELL:
should solve the issue, but this feature is (currently) not supported for Windows make. So rewriting the recipes to be on single logical lines (e.g. by adding ;\ or \ at the end of the current editor lines) worked very well!
IMHO this is all about disk I/O performance. The order of magnitude suggests a lot of the operations go to disk under Windows whereas they're handled in memory under Linux, i.e. Linux is caching better. Your best option under Windows will be to move your files onto a fast disk, server or filesystem. Consider buying a solid state drive or moving your files to a RAM disk or fast NFS server.
I ran the directory traversal tests and the results are very close to the compilation times reported, suggesting this has nothing to do with CPU processing times or compiler/linker algorithms at all.
Measured times as suggested above traversing the chromium directory tree:
Windows Home Premium 7 (8GB Ram) on NTFS: 32 seconds
Ubuntu 11.04 Linux (2GB Ram) on NTFS: 10 seconds
Ubuntu 11.04 Linux (2GB Ram) on ext4: 0.6 seconds
For the tests I pulled the chromium sources (both under win/linux)
git clone http://github.com/chromium/chromium.git
cd chromium
git checkout remotes/origin/trunk
To measure the time I ran
ls -lR > ../list.txt ; time ls -lR > ../list.txt # bash
dir -Recurse > ../list.txt ; (measure-command { dir -Recurse > ../list.txt }).TotalSeconds #Powershell
I did turn off access timestamps and my virus scanner, and increased the cache manager settings under Windows (>2 GB RAM) - all without any noticeable improvement. The fact of the matter is, out of the box Linux performed 50x better than Windows with a quarter of the RAM.
For anybody who wants to contend that the numbers are wrong - for whatever reason - please give it a try and post your findings.
Try using jom instead of nmake
Get it here:
https://github.com/qt-labs/jom
The fact is that nmake uses only one of your cores; jom is a clone of nmake that makes use of multicore processors.
GNU make does that out of the box thanks to the -j option; that might be a reason for its speed vs. Microsoft's nmake.
jom works by executing different make commands in parallel on different processors/cores.
Try it yourself and feel the difference!
I want to add just one observation about using GNU make and other MinGW tools on Windows: they seem to resolve hostnames even when the tools cannot even communicate via IP. I would guess this is caused by some initialisation routine of the MinGW runtime. Running a local DNS proxy helped me improve the compilation speed with these tools.
Before I got a big headache because the build speed dropped by a factor of 10 or so when I opened a VPN connection in parallel. In this case all these DNS lookups went through the VPN.
This observation might also apply to other build tools, not only MinGW based and it could have changed on the latest MinGW version meanwhile.
I recently achieved another speed-up of compilation by about 10% on Windows with GNU make by replacing the MinGW bash.exe with the version from win-bash.
(The win-bash is not very comfortable regarding interactive editing.)

Rewrite Registry File in Windows

I have been trying to find a way to "defragment" the registry on my Windows machine. Firstly, does this make sense? Any benefits in doing this? (Not much love on superuser.com) Secondly, I am looking for a way to rewrite the registry using C/C++ with Windows API. Is there a way to read the registry and write it to a new file getting rid of unused bytes along the way? (I might have to write the new file and then boot into another OS/disk before I can overwrite the original... but I am willing to take that risk.)
Microsoft's PageDefrag does exactly this, as it states on its page "PageDefrag uses the standard file defragmentation APIs to defragment the files."
(A copy of the linked article is here because in typical MSDN style their link is dead.)
http://www.larshederer.homepage.t-online.de/erunt/ - NTREGOPT NT Registry Optimizer
Similar to Windows 9x/Me, the registry files in an NT-based system
can become fragmented over time, occupying more space on your hard
disk than necessary and decreasing overall performance. You should
use the NTREGOPT utility regularly, but especially after installing
or uninstalling a program, to minimize the size of the registry files
and optimize registry access.
The program works by recreating each registry hive "from scratch",
thus removing any slack space that may be left from previously
modified or deleted keys.
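If you want to experiment with that "recreate the hive" idea from your own C/C++ code (the question asks for exactly that), a hedged sketch follows: RegSaveKeyEx writes a fresh copy of a loaded hive, which drops the slack left by deleted keys. It needs backup privileges (run elevated), and it only produces the compacted copy - swapping that copy in for the live hive is the dangerous part that NTREGOPT handles offline, and it is deliberately left out here. The key path and output file are placeholders.

#include <windows.h>

// Sketch: save a compacted copy of a hive to a new file (the file must not already exist).
LONG SaveCompactedHive(HKEY root, const wchar_t* subKey, const wchar_t* outFile)
{
    HKEY hive = nullptr;
    LONG rc = RegOpenKeyExW(root, subKey, 0, KEY_READ, &hive);
    if (rc != ERROR_SUCCESS)
        return rc;
    // REG_LATEST_FORMAT writes the newer hive format; requires SeBackupPrivilege.
    rc = RegSaveKeyExW(hive, outFile, nullptr, REG_LATEST_FORMAT);
    RegCloseKey(hive);
    return rc;
}

// Example: SaveCompactedHive(HKEY_CURRENT_USER, L"Software", L"C:\\temp\\software.hiv");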
http://reboot.pro/index.php?showtopic=11212 - Offreg.dll MS WDK Offline Registry Library
The offline registry library (Offreg.dll) is used to modify a registry hive outside the active system registry. This library is intended for registry update scenarios such as servicing an operating system image. The library supports registry hive formats starting with Windows XP.
http://reboot.pro/topic/11312-offline-registry/ - Offline Registry MS WDK Command-Line Tool
A command line tool that will allow one to read and write to an offline registry hive.
Reading the values should be possible.
But I've never seen any spec for how the registry files are written to disk, and unless you can find one you'd have to reverse-engineer those files on your OS (there might be differences between XP and 7, etc.). Then you have to remember that the registry isn't just one file: it's multiple files, and some of them belong to certain users. I think they use SIDs rather than user names, so even if you move them to a new computer, you have to be sure it's the same OS version with the same users and the same SIDs set up on it.
All this for little or no gain so I'd agree with the superuser users that it wouldn't make sense.

How to keep windows from paging block of memory

We are working on a Vista/Windows 7 application that will be running in 64-bit mode, built with VS2008/C++. We will need to cache hundreds of 2-3 MB blobs of data in RAM for performance reasons, up to some memory limit. Our usage profile is such that we cannot read the data in fast enough if it is all on the disk. Cached memory usage will be larger than 1 GB. For this to work well, we need to ensure that Windows does not page this memory out, as that would defeat the purpose of why we are doing this.
I've done a fair amount of research and cannot find documentation that states exactly how to do this. I've seen several references that infer memory mapped files work this way. Is there an expert who can clarify this for me?
I'm aware there are other programs that we could adapt to do this, for example, splitting the blobs and loading into memcache or inmemory databases, but they all have too many problems with performance or code complexity.
Suggestions?
You can use VirtualLock. However, you'll surely hit the quota with the amount you're talking about. Given that you should never run any other code on this machine, you'll be better off just disabling the paging file. Control Panel + System + Advanced.
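A minimal sketch of the VirtualLock route, for reference - note the working-set quota the answer mentions: VirtualLock fails for large regions unless you first raise the process working-set limits with SetProcessWorkingSetSize. The sizes here are arbitrary examples.

#include <windows.h>
#include <iostream>

int main()
{
    const SIZE_T cacheBytes = 512ull * 1024 * 1024;   // example: 512 MB of blobs

    // Raise the working-set quota first, otherwise VirtualLock hits the default limit.
    if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                                  cacheBytes + 16 * 1024 * 1024,
                                  cacheBytes + 64 * 1024 * 1024))
        std::cerr << "SetProcessWorkingSetSize failed: " << GetLastError() << "\n";

    void* cache = VirtualAlloc(nullptr, cacheBytes,
                               MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (!cache || !VirtualLock(cache, cacheBytes)) {
        std::cerr << "VirtualLock failed: " << GetLastError() << "\n";
        return 1;
    }

    // ... fill 'cache' with the blobs; the pages stay resident while locked ...

    VirtualUnlock(cache, cacheBytes);
    VirtualFree(cache, 0, MEM_RELEASE);
    return 0;
}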
From user mode, you can't (EDIT: at least for the sizes you're talking about). User-mode allocations all come down to either the VirtualAlloc API (on top of which GlobalAlloc/LocalAlloc/the C runtime's functions are written) or the memory-mapped file API. Neither API supports this, and therefore it's impossible to obtain on Win32. It is possible from within kernel mode, but somehow I suspect this is a user-mode application :)
Note that the memory manager is not going to decide to page your RAM without good reason to do so.
Now, you could of course, if you control the machine completely (this is for internal use or something) disable the pagefile on the machine in question, but that does not seem to solve your problem.
It's possible! You can force pages to be locked in memory from a user-mode app by allocating them using AWE (Address Windowing Extensions): VirtualAlloc + AllocateUserPhysicalPages + MapUserPhysicalPages.
Note: I have read that you can use the AWE APIs from either a 32-bit or 64-bit app also, but I've only tried with 32-bit app. (Of course since it's AWE you can manually remap memory to access > 2GB RAM.)
Note: You have to hold SeLockMemoryPrivilege first. (Which seems to require the app to run as Administrator in my testing so far.)
Note: Using AWE implies some limitations on what you can do with those particular pages of memory, e.g. no VirtualProtect().
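Here is a hedged sketch of the AWE sequence described above (sizes are arbitrary examples). It assumes SeLockMemoryPrivilege is already enabled for the process token - without it AllocateUserPhysicalPages fails - and it omits the AdjustTokenPrivileges boilerplate for brevity.

#include <windows.h>
#include <iostream>

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);

    const SIZE_T wantedBytes = 64ull * 1024 * 1024;          // example: 64 MB
    ULONG_PTR pageCount = wantedBytes / si.dwPageSize;
    ULONG_PTR* pfns = new ULONG_PTR[pageCount];

    // Grab physical pages; fails unless SeLockMemoryPrivilege is held.
    if (!AllocateUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns)) {
        std::cerr << "AllocateUserPhysicalPages failed: " << GetLastError() << "\n";
        return 1;
    }

    // Reserve address space that may be backed by AWE pages.
    void* region = VirtualAlloc(nullptr, pageCount * si.dwPageSize,
                                MEM_RESERVE | MEM_PHYSICAL, PAGE_READWRITE);

    // Map the physical pages into the region; they are never paged out.
    if (!region || !MapUserPhysicalPages(region, pageCount, pfns)) {
        std::cerr << "Mapping failed: " << GetLastError() << "\n";
        return 1;
    }

    // ... use 'region' as ordinary, non-pageable memory for the blobs ...

    MapUserPhysicalPages(region, pageCount, nullptr);        // unmap
    FreeUserPhysicalPages(GetCurrentProcess(), &pageCount, pfns);
    VirtualFree(region, 0, MEM_RELEASE);
    delete[] pfns;
    return 0;
}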
Perhaps the answer? (from a VMware tutorial)
To edit the Registry and disable paging kernel-mode stacks
Click Start > Run and type regedit.
In the left pane of the Registry Editor, navigate to
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager.
In the right pane, right‐click GlobalFlag and select Modify.
With Base Hexadecimal, type value 80000, which corresponds to FLG_DISABLE_PAGE_KERNEL_STACKS.
Click OK and exit the Registry Editor.
Reboot the guest system for this change to take effect.
hope it helps

Clear file cache to repeat performance testing

What tools or techniques can I use to remove cached file contents to prevent my performance results from being skewed? I believe I need to either completely clear, or selectively remove cached information about file and directory contents.
The application that I'm developing is a specialised compression utility, and is expected to do a lot of work reading and writing files that the operating system hasn't touched recently, and whose disk blocks are unlikely to be cached.
I wish to remove the variability I see in IO time when I repeat the task of profiling different strategies for doing the file processing work.
I'm primarily interested in solutions for Windows XP, as that is my main development machine, but I can also test using linux, and so am interested in answers for that environment too.
I tried SysInternals CacheSet, but clicking "Clear" doesn't result in a measurable increase (restoration to timing after a cold-boot) in the time to re-read files I've just read a few times.
Use SysInternal's RAMMap app.
The Empty / Empty Standby List menu option will clear the Windows file cache.
For Windows XP, you should be able to clear the cache for a specific file by opening the file using CreateFile with the FILE_FLAG_NO_BUFFERING options and then closing the handle. This isn't documented, and I don't know if it works on later versions of Windows, but I used this long ago when writing test code to compare file compression libraries. I don't recall if read or write access affected this trick.
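A small sketch of that trick, assuming it still behaves the same way on the Windows version you test on (it is undocumented behaviour, so treat it as such):

#include <windows.h>

// Sketch: open and immediately close a file with FILE_FLAG_NO_BUFFERING,
// which as a side effect tends to evict its cached pages.
bool EvictFileFromCache(const wchar_t* path)
{
    HANDLE h = CreateFileW(path, GENERIC_READ,
                           FILE_SHARE_READ | FILE_SHARE_WRITE, nullptr,
                           OPEN_EXISTING, FILE_FLAG_NO_BUFFERING, nullptr);
    if (h == INVALID_HANDLE_VALUE)
        return false;
    CloseHandle(h);
    return true;
}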
A command line utility can be found here
from source:
EmptyStandbyList.exe is a command line tool for Windows (Vista and
above) that can empty:
process working sets,
the modified page list,
the standby lists (priorities 0 to 7), or
the priority 0 standby list only.
Usage:
EmptyStandbyList.exe workingsets|modifiedpagelist|standbylist|priority0standbylist
A quick googling gives these options for Linux
Unmount and mount the partition holding the files
sync && echo 1 > /proc/sys/vm/drop_caches
#include <fcntl.h>
int posix_fadvise(int fd, off_t offset, off_t len, int advice);
with advice option POSIX_FADV_DONTNEED:
The specified data will not be accessed in the near future.
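Putting the fadvise option together, a small sketch (plain POSIX calls; Linux honours POSIX_FADV_DONTNEED by dropping clean cached pages, and dirty pages need to be written back first, e.g. with sync):

#include <fcntl.h>
#include <unistd.h>

/* Sketch: ask the kernel to drop the cached pages of one file.
   Returns 0 on success, -1 or an errno value on failure. */
int drop_file_cache(const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    /* offset 0, len 0 means "the whole file". */
    int rc = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    close(fd);
    return rc;
}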
I've found one technique (other than rebooting) that seems to work:
Run a few copies of MemAlloc
With each one, allocate large chunks of memory a few times
Use Process Explorer to observe the System Cache size reducing to very low levels
Quit the MemAlloc programs
It isn't selective though. Ideally I'd like to be able to clear the specific portions of memory being used for caching the disk blocks of files that I want to no longer be cached.
For a much better view of the Windows XP filesystem cache, try ATM by Tim Murgent - it allows you to see both the filesystem cache Working Set size and Standby List size in a more detailed and accurate view. For Windows XP you need the old version 1 of ATM, which is available for download here, since V2 and V3 require Server 2003, Vista, or higher.
You will observe that although Sysinternals Cacheset will reduce the "Cache WS Min" - the actual data still continues to exist in the form of Standby lists from where it can be used until it has been replaced with something else. To then replace it with something else use a tool such as MemAlloc or flushmem by Chad Austin or Consume.exe from the Windows Server 2003 Resource Kit Tools.
As the question also asked for Linux, there is a related answer here.
The command line tool vmtouch allows for adding and removing files and directories from the system file cache, amongst other things.
There's a Windows API call, SetSystemFileCacheSize (https://learn.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-setsystemfilecachesize), that can be used to flush the file system cache. It can also be used to limit the cache size to a very small value. Looks perfect for these kinds of tests.
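A hedged sketch of how that API is typically used to flush the cache: a commonly used pattern is to pass (SIZE_T)-1 for both limits, which empties the system file cache working set. The call needs SeIncreaseQuotaPrivilege, so run it elevated, and verify the exact semantics against the documentation for your Windows version.

#include <windows.h>
#include <iostream>

// Enable a named privilege on the current process token.
static bool EnablePrivilege(const wchar_t* name)
{
    HANDLE token = nullptr;
    if (!OpenProcessToken(GetCurrentProcess(),
                          TOKEN_ADJUST_PRIVILEGES | TOKEN_QUERY, &token))
        return false;
    TOKEN_PRIVILEGES tp = {};
    tp.PrivilegeCount = 1;
    tp.Privileges[0].Attributes = SE_PRIVILEGE_ENABLED;
    if (!LookupPrivilegeValueW(nullptr, name, &tp.Privileges[0].Luid)) {
        CloseHandle(token);
        return false;
    }
    bool ok = AdjustTokenPrivileges(token, FALSE, &tp, 0, nullptr, nullptr)
              && GetLastError() == ERROR_SUCCESS;
    CloseHandle(token);
    return ok;
}

int main()
{
    if (!EnablePrivilege(L"SeIncreaseQuotaPrivilege"))
        std::cerr << "Could not enable SeIncreaseQuotaPrivilege (run elevated?)\n";

    // Passing -1 for both limits flushes the file cache working set.
    if (!SetSystemFileCacheSize((SIZE_T)-1, (SIZE_T)-1, 0)) {
        std::cerr << "SetSystemFileCacheSize failed: " << GetLastError() << "\n";
        return 1;
    }
    return 0;
}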
