Speed up the build process of a WiX installer - visual-studio

For my WiX project I am harvesting 4 directories via the pre-build event of Visual Studio, which results in about 160 MB of data and about 220 files, but the build process takes very long.
How can I speed that process up? I have one embedded media.cab file which holds all the files. Is it the size or the number of files that slows the process down? Or is it the harvesting with the heat tool in the pre-build event? Would it be faster with the HeatDirectory element?
Has anyone had experience with speeding this up?

For us, the vast majority of the time was spent invoking light (for the linking phase).
light is very slow at compressing cabinets. Changing the DefaultCompressionLevel in the .wixproj from high to mszip (or low or none) helps a lot. However, our build was still too slow.
It turns out that light handles cabinets independently, and automatically links them on multiple threads by default. To take advantage of this parallelism, you need to generate multiple cabinets. In our .wxs Product element, we had the following that placed everything in a single cabinet:
<MediaTemplate EmbedCab="yes" />
It turns out you can use the MaximumUncompressedMediaSize attribute to declare the threshold (in MB) at which you want files to be automatically sharded into different .cab files:
<MediaTemplate EmbedCab="yes" MaximumUncompressedMediaSize="2" />
Now light was much faster, even with high compression, but still not fast enough for incremental builds (where only a few files change).
Back in the .wixproj, we can use the following to set up a cabinet cache, which is ideal for incremental builds where few files change and most cabinets don't need to be regenerated:
<CabinetCachePath>$(OutputPath)cabcache\</CabinetCachePath>
<ReuseCabinetCache>True</ReuseCabinetCache>
Suppressing validation also gives a nice speedup (light.exe spends about a third of its time validating the .msi by default). We activate this for debug builds:
<SuppressValidation>True</SuppressValidation>
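Pulled together, the relevant .wixproj fragment might look like this (a sketch; the Debug-only condition is my assumption about the configuration names in use):
<PropertyGroup>
  <CabinetCachePath>$(OutputPath)cabcache\</CabinetCachePath>
  <ReuseCabinetCache>True</ReuseCabinetCache>
</PropertyGroup>
<!-- Debug builds only: trade compression and validation for speed -->
<PropertyGroup Condition=" '$(Configuration)' == 'Debug' ">
  <DefaultCompressionLevel>none</DefaultCompressionLevel>
  <SuppressValidation>True</SuppressValidation>
</PropertyGroup>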
With these changes, our final (incremental) build went from over a minute to a few seconds for a 32 MB .msi output, and a full rebuild stayed well under a minute, even with the high compression level.

WiX Help File: How To: Optimize build speed. In other words: 1) Cabinet reuse and 2) multi-threaded cab creation are built-in mechanisms in WiX to speed up builds.
Hardware: The inevitable "throw hardware at it". New SSD and NVMe disks are so much faster than older IDE drives that you might want to try them as another way to improve build speed and installation speed. Obvious yes, but very important. It can really improve the speed of development. See this answer.
Challenges with NVMe drives? 1) They run hot, 2) they usually have limited capacity, 3) they might be more vulnerable than older 2.5" drives when used in laptops (I am not sure - keep in mind that some NVMe drives are soldered directly to the laptop's motherboard), 4) data rescue can be a bit challenging if you don't have good-quality external enclosures (form factor etc.), 5) NVMe drives are said to wear out over time, 6) they are still somewhat pricey - especially the larger-capacity ones - and there are further challenges for sure, but overall these drives are awesome.
Compression: You can try to compile your setup with a different compression level (for example none for debug builds). No compression makes builds faster (a snippet follows below these links). Here are illustrations of doing the opposite - setting higher compression (just use none instead of high for your purpose):
CompressionLevel: Msi two times larger than msm
MediaTemplate: How can I reduce the size of a 1GB MSI file using Orca?
A related answer on compression: What is the compression method used by MSI files?
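For illustration, turning compression off for a debug build is a one-attribute change on the element shown earlier (CompressionLevel is a standard MediaTemplate attribute; a sketch):
<MediaTemplate EmbedCab="yes" CompressionLevel="none" />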
Separate Setup: If you still build compressed, you could put prerequisites and merge modules in a separate setup to avoid compressing them for every build (or use release flags if you are in InstallShield, or check the preprocessor features in WiX).
External Source Files: I suppose you could use external source files if that's acceptable - then there is no lengthy compression operation during the build, just a file copy (which keeps getting faster - especially with flash drives).
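A minimal sketch of the external-files approach in WiX v3 authoring: mark the package uncompressed so the files ship loose next to the .msi instead of inside a cabinet:
<Package InstallerVersion="200" Compressed="no" />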
Shim: Another technique is to shim all the files you install down to 1 KB if what you are testing is the setup itself, its GUI, and its custom actions. The setup is then just a "shell" - which is a great way to test new custom actions. Many have written tools for this, but I don't have a link for you. There is always github.com to search.
Release Flags: Another way to save time is to use special release flags (InstallShield only) to compile smaller versions of the setup you are working on at the moment (leaving out many features). WiX has similar possibilities via its preprocessor; see the sketch just below. More on WiX preprocessor practical use.
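As a sketch of the preprocessor approach (the IncludeExtras variable and all identifiers are hypothetical; define the variable via candle's -d switch or DefineConstants in the .wixproj):
<?ifdef $(var.IncludeExtras) ?>
  <Feature Id="ExtrasFeature" Title="Extras" Level="1">
    <ComponentGroupRef Id="ExtrasComponents" />
  </Feature>
<?endif?>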
Debug Build: I usually use combinations of these techniques to make a debug build.
I normally use external source files when I experiment and add new features and keep rebuilding and installing the setup all the time.
Combining release flags (to compile only part of the setup) with cabinet reuse can save a lot of time, depending on the size of your setup, the number of files, and your hardware configuration.
Perhaps the most effective approach, in my opinion, is a separate setup (provided it is stable and not changing too often). Beware though: Wix to Install multiple Applications (the problems involved when it comes to splitting setups).
My take on it: go for a prerequisites-only separate setup. This is also good for large-scale deployment scenarios where corporate users want to use their own, standardized prerequisites and are annoyed by lots of embedded "junk" in a huge setup. A lot of package-preparation time in large companies is spent taking out outdated runtimes and prerequisites. You can also deliver updates to these prerequisites without rebuilding your entire setup. Good de-coupling.
Links:
How can I speed up MSI package install and uninstall?

Simply put, don't harvest files. Please see my blog article: Dealing with very large number of files
The third downside is that your build will take A LOT longer to perform, since it's not only creating your package but also authoring and validating your component definitions.
I maintain an open-source project on CodePlex called IsWiX. It contains project templates (scaffolding) and graphical designers to assist you in setting up and maintaining your WiX source. That said, it was designed around merge modules, which slows the build down a bit since the .MSM has to be built and then merged into the .MSI. Pure fragments would be faster if you are really concerned about raw speed. Still, I have many installers around 160 MB and they don't take long at all.
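For reference, a minimal hand-authored fragment of the kind meant by "pure fragments" (the identifiers and source path are hypothetical):
<Fragment>
  <ComponentGroup Id="ProductComponents" Directory="INSTALLFOLDER">
    <Component Id="MainExe">
      <File Id="MainExeFile" Source="..\bin\MyApp.exe" KeyPath="yes" />
    </Component>
  </ComponentGroup>
</Fragment>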
And of course don't forget about having a fast build machine. CPU, RAM, and SSD disk I/O all contribute to fast generation of MSIs. For my consulting work, I use Microsoft Visual Studio Online (VSO). I have a Core i7-2600K Hyper-V server with 32 GB of RAM and a Samsung 850 EVO SSD. My build server (a VM) runs a TFS proxy server for local SCC caching.
For fun, on the above machine, I took 220 files from my system32 folder totaling 160 MB. It took 30 seconds to build the MSM and 30 seconds to build the MSI, for a total of 60 seconds. That is 'fast enough' for me. I would expect an MSI authored using only fragments to take 30 seconds.

Related

Speeding up WIX compiles

I have a WiX 3.0 installer that produces 88 slightly different builds (the cross product of 32-bit and 64-bit, 11 locales, and four editions: Beta, Retail, Evaluation, Different Evaluation).
Each build has slightly different contents in addition to localized UI, so I can't just build one configuration with multiple locales.
The resulting MSI is about 120 MB. I'm already using the CabCache.
The installer takes about 3-5 minutes per release to build, resulting in a pretty lengthy overall build time.
The installer build appears to be heavily disk-bound during linking (light.exe).
Clearly, making the disks faster could help. Does anybody have advice on how to set up a machine that could crank through these installers faster? (Or advice on reconfiguring my WiX project to build more efficiently?)
Get an SSD. Like one of those with internal RAID architecture from e.g. OCZ. An SSD is every developer's upgrade of the decade. Plus more RAM if swapping is an issue.
If you have common parts (that are not localized) you can create a merge module with the common parts and then just add the differing content to each build.
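A sketch of what consuming such a merge module looks like in each flavor's .wxs (the file name and identifiers are hypothetical; Merge and MergeRef are the standard WiX elements):
<DirectoryRef Id="TARGETDIR">
  <Merge Id="CommonParts" SourceFile="CommonParts.msm" Language="1033" DiskId="1" />
</DirectoryRef>
<Feature Id="MainFeature" Title="Main" Level="1">
  <MergeRef Id="CommonParts" />
</Feature>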
I am not sure if you have any say or communication with the developers of the application you are installing, but if you have to create that many MSIs mainly because of languages, have you considered offering one language MSI that delivers all the language-specific files to a resources directory, to be installed only when the user needs something other than the default language? The user could then choose which language they would like to use. It might also be worth having the product made in such a way that the user can pick the preferred language from within the application, with all languages installed from the start.
As for your question about speeding up the build, that is a tricky one. Using merge modules I would rule out right away, as I don't see any actual gain coming from that. Of course updating the hardware (as you said) will give some results, but it is hard to tell how big a jump you would be making and what kind of gain it would give.

I think it might be best to go over your WXS with a fine-tooth comb and see what is really going on in there. You can sometimes find things left over from the development of the package, or from a previous tool, that are really slowing you down. One example: my company recently switched to WiX from a more automated setup-creation utility (leaving the name out on purpose because I am listing its problems :P ), and it automatically created every folder under Windows that might possibly be needed by a running Windows application, as well as the Common Files folder, the current user profile, and many many more. I ended up erasing, in all, over 100 empty directories that this old technology was nice enough to add for me. That is just one example of optimization. It is amazing what can be found when you take the time to REALLY review what is going on under the hood.
In your .wixproj file, add this inside a <PropertyGroup> element near the end of the file:
<IncrementalGet>true</IncrementalGet>
This will tell WiX to compile only those files which have changed since the previous build.

VS 2010 very slow

I have just upgraded to VS 2010, and I have performance problems which I did not have before (in VS 2008).
The most annoying thing is that it freezes while I work in the text editor. Sometimes when it freezes I see that it is saving auto recovery information, but not always.
Almost anything I do causes an unacceptably long delay: saving, starting to debug, ending a debug session, switching between design and code view, and doing WinForms designing.
I have some parts of my home directory on a mapped network drive. I suspect that that might be a part of the problem. Is it possible to configure VS 2010 to use exclusively local disk for its "internal" work perhaps?
Any hints would be appreciated! Has anyone else experienced these kinds of problems?
Edit:
I forgot to give my specs:
Win 7 64-bit
4 GB memory
No addins, just standard installation
The project folder is on the network drive
One interesting thing is that I feel that I have better performance in a VM running XP (where the VM runs on the same PC).
VS is great if you do what Microsoft recommends and work on a local copy of your projects.
As soon as you start trying to open projects in remote locations you will get this issue.
Recommendations:
use a source control solution.
create a copy of your project locally and run the solution from that.
Also ...
I think it does its clever stuff in the background; I found the more I use it the faster it gets, especially on long-running projects that I regularly go back to.
If you think it might be the aforementioned WPF framework, you may want to try switching off Aero (as a test). If that helps, the problem is likely that your chosen graphics hardware is not very good at effects or 3D-based output, so it's struggling.
Also try reducing the number of background services and apps you have running.
On Windows 7 these days, 4 GB of RAM is considered standard, so whilst it should perform fine, maybe consider putting in more RAM if you are trying to handle large datasets or similar business applications.
Another thing you could try is running a repair install over the top of your existing one; it may not have installed something cleanly ... unlikely, but it may help.
If you can, buy an SSD disk and move all your projects locally.
I find VS2010 super intensive on disk.
It flies on my home machine with an SSD but it's almost unusable on my work machine (Win7, 4 GB RAM, but a standard disk).
Try setting the number of parallel builds to half the number of cores you have (I think it's in Tools -> Options -> Projects and Solutions -> Build and Run). I had it set to 8, which was too much: it spawned 8 msbuild.exe processes, and rebuilding a solution with 70 projects bottlenecked the disk when they all tried to read/write similar precompiled headers. Those msbuild.exe processes stick around even after you close the IDE.
I also disabled gathering browsing info for implicit files, which made IntelliSense parsing quicker.
An old post, I know, but in case it helps others (as the previous answers focused on source code)...
I found that it wasn't my source code that was the issue - that was held locally along with all the references - but the default locations (projects, project templates and item templates), as these were held on a networked drive. These can be altered in Tools -> Options -> Projects and Solutions.
Alternatively you could change the frequency of the saves or turn them off altogether via Tools -> Options -> AutoRecover

Improving performance of WiX MSI install/uninstall

In Windows 7 (i.e. MSI 5.0), there is a property called MSIFASTINSTALL which will improve the performance of your installer. Alternatively, you can turn off the rollback option by setting the property DISABLEROLLBACK; this property is available in MSI versions earlier than 5.0 too.
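For reference, a sketch of setting the property in WiX authoring (MSIFASTINSTALL is a bit mask; the value 3 combines 'no system restore point' (1) with 'perform only file costing' (2)):
<Property Id="MSIFASTINSTALL" Value="3" />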
Please share your knowledge on how to improve the install experience. Also, I cannot find the right way to improve the performance of uninstall. We use a huge set of files/folders (more than 70,000) and something like 35,000 components. It hangs in the file-costing process and I do not know how to avoid this delay; sometimes it hangs for more than 2 or 3 hours during uninstall on XP or Vista machines.
Edit:
I did a hack in my install by zipping the folders that hold huge file sets and reducing the component count, like Christopher said. It improved the performance drastically. Yes, of course, I lose the MSI installer pattern by doing this and it is not a recommended approach; however, it is a trade-off we accepted, and our users really do not need file version details when we uninstall/upgrade the patches.
I had a similar situation, though the number of files was a bit smaller, about 25k. Most of those files were icons, which never changed from one release to another; only a major release (once every 2 years) might bring some changes to this area. A "quick & dirty" solution was zipping those icons and including this single file in the installation (not as a component, just a file side by side with the MSI). During the installation this ZIP was extracted in a background thread, and the RemoveFile table was used to delete the icons on uninstall. It was faster than installing those 20k icons as separate components, even as components with many files each. A better and more correct solution was to convince the main application developers to put all those 20k icons into 20 zip archives. Now these 20 zip files are installed as regular MSI components, and the application knows how to extract an icon on demand and cache it.
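A sketch of the uninstall cleanup for such an extracted payload (the identifiers, directory, and registry key path are hypothetical; RemoveFile/RemoveFolder are the standard WiX counterparts of the RemoveFile table):
<DirectoryRef Id="IconCacheDir">
  <Component Id="IconCacheCleanup" Guid="*">
    <!-- the registry value acts as the component's key path -->
    <RegistryValue Root="HKCU" Key="Software\MyApp" Name="iconCacheCleanup" Type="integer" Value="1" KeyPath="yes" />
    <RemoveFile Id="RemoveIcons" Name="*.ico" On="uninstall" />
    <RemoveFolder Id="RemoveIconCacheDir" On="uninstall" />
  </Component>
</DirectoryRef>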
I would not recommend disabling rollback. Though you'll save quite some installation time, you lose the standard guaranteed rollback option.
Uninstall takes more time than install because of the rollback feature, again. The way I understand it, when you uninstall, MSI first creates a copy of every single file, then uninstalls every single file, and in case of success drops every backed-up file. Hence, the uninstall time is about three times the install time. I experienced the same problem when I took the default option of one file per component. Though that is the recommended practice, you should make a trade-off if you are dealing with an outstanding case.
Hope this clears things up for you a bit.
The best option for improving the performance of your setup is to reduce the number of files and components. While there may be a couple of tweaks you can do to your MSI to improve performance, the excessive number of files/components is the core issue and will be the gating factor on any performance improvements you make. Why do you need to install 70k files?

Why do some installations take so much time?

Pretty much all of us have built an installer here and there - and all of us have installed some behemoth of a program. Why do some installations take so much time? Case in point: the Adobe CS suite (with newer versions you can take a vacation) or Visual Studio.
I know there are files to copy - most of the time unpack, even. There are some registry keys to set (if under Windows), maybe a service or two to start. Some installations probably even check hardware/software combinations. All of this does not justify the sllloooow installation time of some programs.
How can I speed it up?
It obviously depends on what you're installing. As Colin Pickard pointed out, you'll be shifting huge quantities of data onto the disk (+ optional virus check etc.).
For installations I've built recently, we have to request the shut down of some Windows services, wait for that, and check that they really have shut down before continuing. That takes time.
I confess that in the above, that's not parallelised, whereas it could be. I suspect that installations are not necessarily optimised. They may well be the last thing that the team put together prior to release, and they may well figure that you're only going to do it once (and forget the pain upon completion). Obviously not an ideal state of affairs!
Visual Studio on my machine is 3.03 GB - 16,842 files in 1,979 folders. Passing 3 GB through virus scanning and auditing software and onto the filesystem is too much for my (dual-core, 2 GB, SATA2) system - it's CPU- or IO-bound the whole way through the process. That's why it takes so long.
Most installers not only pack, but also compress their contents, so at installation time all of these files must be decompressed. All of the data that is decompressed must be written to disk after it is decompressed as well.
Look at the time a zip operation takes on several files. It's also slow.
Many installers maintain a log that is flushed to disk after each primitive operation, so that even if the installation encounters a fatal failure the log is preserved and can be sent to the software vendor. Such flushing adds up and contributes significantly to the overall time.

Comparing cold-start to warm start

Our application takes significantly more time to launch after a reboot (cold start) than if it was already opened once (warm start).
Most (if not all) of the difference seems to come from loading DLLs; when the DLLs are in cached memory pages they load much faster. We tried using ClearMem to simulate rebooting (since it's much less time-consuming than actually rebooting) and got mixed results: on some machines it seemed to simulate a reboot very consistently, on others it did not.
To sum up my questions are:
Have you experienced differences in launch time between cold and warm starts?
How have you dealt with such differences?
Do you know of a way to dependably simulate a reboot?
Edit:
Clarifications for comments:
The application is mostly native C++ with some .NET (the first .NET assembly that's loaded pays for the CLR).
We're looking to improve load time, obviously we did our share of profiling and improved the hotspots in our code.
Something I forgot to mention was that we got some improvement by re-basing all our binaries so the loader doesn't have to do it at load time.
As for simulating reboots, have you considered running your app from a virtual PC? Using virtualization you can conveniently replicate a set of conditions over and over again.
I would also consider some type of profiling app to spot the bit of code causing the time lag, and then making the judgement call about how much of that code is really necessary, or if it could be achieved in a different way.
It would be hard to truly simulate a reboot in software. When you reboot, all devices in your machine get their reset bit asserted, which should cause all memory system-wide to be lost.
In a modern machine you've got memory and caches everywhere: there's the VM subsystem which is storing pages of memory for the program, then you've got the OS caching the contents of files in memory, then you've got the on-disk buffer of sectors on the harddrive itself. You can probably get the OS caches to be reset, but the on-disk buffer on the drive? I don't know of a way.
How did you profile your code? Not all profiling methods are equal and some find hotspots better than others. Are you loading lots of files? If so, disk fragmentation and seek time might come into play.
Maybe even sticking basic timing information into the code, writing out to a log file and examining the files on cold/warm start will help identify where the app is spending time.
Without more information, I would lean towards filesystem/disk cache as the likely difference between the two environments. If that's the case, then you either need to spend less time loading files upfront, or find faster ways to load files.
Example: if you are loading lots of binary data files, speed up loading by combining them into a single file, then slurp the whole file into memory in one read and parse its contents. Fewer disk seeks and less time spent reading off of disk. Again, maybe that doesn't apply.
I don't know offhand of any tools to clear the disk/filesystem cache, but you could write a quick application to read a bunch of unrelated files off of disk to cause the filesystem/disk cache to be loaded with different info.
@Morten Christiansen said:
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
That makes the customer pay for initializing our app at every boot even when it isn't used, I really don't like that option (neither does Raymond).
One successful way to speed up application startup is to switch DLLs to delay-load. This is a low-cost change (some fiddling with project settings) but can make startup significantly faster. Afterwards, run depends.exe in profiling mode to figure out which DLLs load during startup anyway, and revert the delay-load on those. Remember that you may also delay-load most Windows DLLs you need.
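A sketch of what that looks like in a .vcxproj (DelayLoadDLLs is the standard MSBuild linker setting; the DLL names are hypothetical, and delayimp.lib provides the delay-load helper):
<ItemDefinitionGroup>
  <Link>
    <!-- DLLs listed here are loaded on first use instead of at startup -->
    <DelayLoadDLLs>rarelyused1.dll;rarelyused2.dll;%(DelayLoadDLLs)</DelayLoadDLLs>
    <AdditionalDependencies>delayimp.lib;%(AdditionalDependencies)</AdditionalDependencies>
  </Link>
</ItemDefinitionGroup>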
A very effective technique for improving application cold launch time is optimizing function link ordering.
The Visual Studio linker lets you pass in a file that lists all the functions in the module being linked (or just some of them - it doesn't have to be all of them), and the linker will place those functions next to each other in memory.
When your application is starting up, there are typically calls to init functions throughout your application. Many of these calls will be to a page that isn't in memory yet, resulting in a page fault and a disk seek. That's where slow startup comes from.
Optimizing your application so all these functions are together can be a big win.
Check out Profile Guided Optimization in Visual Studio 2005 or later. One of the things that PGO does for you is function link ordering.
It's a bit difficult to work into a build process, because with PGO you need to link, run your application, and then re-link with the output from the profile run. This means your build process needs a runtime environment, and it has to deal with cleaning up after bad builds and all that, but the payoff is typically a 10% or better improvement in cold launch time with no code changes.
There's some more info on PGO here:
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
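As a sketch, in a VS2010-or-later MSBuild C++ project the two phases map onto the WholeProgramOptimization setting (an assumption about project style; instrument, run the app to produce profile data, then switch the value to PGOptimize and re-link):
<PropertyGroup Condition="'$(Configuration)'=='Release'">
  <!-- first pass: instrumented build; change to PGOptimize for the final link -->
  <WholeProgramOptimization>PGInstrument</WholeProgramOptimization>
</PropertyGroup>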
As an alternative to a function order list, you can simply group the code that will be called at startup into the same sections:
#pragma code_seg(".startUp")  // functions defined from here on go into the .startUp code section
//...
#pragma code_seg              // revert to the default code section
#pragma data_seg(".startUp")  // the same idea for global data
//...
#pragma data_seg              // revert to the default data section
It should be easy to maintain as your code changes, and it has the same benefit as the function order list.
I am not sure whether a function order list can specify global variables as well, but using this #pragma data_seg simply works.
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
On another note, .NET 3.5 SP1 supposedly has much-improved cold-start speed, though by how much I cannot say.
It could be the NICs (LAN cards): your app may depend on certain other services that require the network to come up. So profiling your application alone may not quite tell you this; you should examine the dependencies of your application.
If your application is not very complicated, you can just copy all the executables to another directory; it should be similar to a reboot. (Cut and paste seems not to work - Windows is smart enough to know that files moved to another folder are still cached in memory.)
