Large application/file load time - Windows

I'm sure many have noticed that when you have a large application (i.e. something requiring a few MBs of DLLs) it loads much faster the second time than the first time.
The same happens if you read a large file in your application. It's read much faster after the first time.
What affects this? I suppose this is the hard-drive cache, or is the OS adding some memory caching of its own?
What techniques do you use to speed-up the loading times of large applications and files?
Thanks in advance
Note: the question refers to Windows
Added: What affects the size of the OS cache? In some apps, files are slow to load again after a minute or so - does the cache fill up within a minute?

Two things can affect this. The first is hard-disk caching (done by the disk itself, which has little impact, and by the OS, which tends to have more). The second is that Windows (and other OSes) have little reason to unload DLLs when they're finished with them unless the memory is needed for something else, because DLLs can easily be shared between processes.
So DLLs have a habit of hanging around even after the applications that were using them disappear. If another application decides the DLL is needed, it's already in memory and just has to be mapped into the process's address space.
I've seen some applications pre-load their required DLLs (usually called QuickStart, I think; both MS Office and Adobe Reader do this) so that the perceived load times are better.

Windows' memory manager is actually pretty slick -- it services memory requests AND acts as the disk cache. With enough free memory on the system, lots of recently accessed files will reside in memory. Until the physical memory is needed for something else, those DLLs will remain in cache -- all courtesy of the Cache Manager.
As for how to help, look into delay-loading your DLLs. You get the advantage of LoadLibrary being called only when a DLL is first needed, but automatically, so you don't have to scatter LoadLibrary/GetProcAddress calls through your code (automatic in the sense that it only takes a linker switch; see the sketch below):
http://msdn.microsoft.com/en-us/library/yx9zd12s.aspx
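A minimal sketch of the delay-load approach, assuming the MSVC toolchain; heavy.dll and HeavyInit are made-up names, and the only real change is the /DELAYLOAD linker option plus linking delayimp.lib:
// Build (names are placeholders): cl app.cpp heavy.lib delayimp.lib /link /DELAYLOAD:heavy.dll
// Imported from heavy.dll through its import library as usual:
extern "C" __declspec(dllimport) void HeavyInit();
int main()
{
    // heavy.dll is not mapped at process startup; the delay-load helper
    // calls LoadLibrary/GetProcAddress the first time HeavyInit is called.
    HeavyInit();
    return 0;
}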
Or you could pre-load like Office and others do (as mentioned above), but I personally hate that -- it slows down the computer at boot.

I see two possibilities:
Preload your libraries at system startup, as already mentioned. Office, OpenOffice and others do just that.
I am not a great fan of that solution: it makes your boot time longer and eats lots of memory.
Load your DLLs dynamically (see LoadLibrary) only when needed. Unfortunately this is not possible with every DLL.
For example, why load at startup a DLL that exports files to the XYZ format when you are not sure it will ever be needed? Load it only when the user selects that export format (see the sketch below).
I have a dream where Adobe Acrobat uses this approach, instead of bogging me down with loads of plugins I never use every time I want to display a PDF file!
Depending on your needs you might have to use both techniques: preload some big, heavily used libraries and load only specific plugins on demand...
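A minimal sketch of the load-on-demand idea in plain Win32 C++; xyzexport.dll and ExportToXyz are hypothetical names standing in for whatever plugin you would defer:
#include <windows.h>

typedef bool (*ExportToXyzFn)(const char* path);

bool ExportSelectedFile(const char* path)
{
    // Loaded only when the user actually picks the XYZ export format.
    HMODULE mod = LoadLibraryA("xyzexport.dll");
    if (!mod) return false;
    ExportToXyzFn fn = (ExportToXyzFn)GetProcAddress(mod, "ExportToXyz");
    bool ok = fn ? fn(path) : false;
    FreeLibrary(mod);   // or keep it loaded if further exports are likely
    return ok;
}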

One item that might be worth looking at is "rebasing". Each DLL has a preset "base" address that it prefers to be loaded at in memory. If an application loads the DLL at a different address (because the preferred one is not available), the DLL is loaded at the new address and "rebased" - roughly speaking, parts of the DLL are patched on the fly. This only applies to native images, as opposed to .NET assemblies.
This really old MSDN article covers rebasing:
http://msdn.microsoft.com/en-us/library/ms810432.aspx
Not sure how much of it still applies (it's a very old article), but here's an enticing quote:
Prefer one large DLL over several small ones; make sure that the operating system does not need to search for the DLLs very long; and avoid many fixups if there is a chance that the DLL may be rebased by the operating system (or, alternatively, try to select your base addresses such that rebasing is unlikely).
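Picking non-overlapping base addresses is just a linker option; a hedged sketch with made-up module names and arbitrary addresses:
rem Give each of your DLLs a distinct preferred base address so the
rem loader never has to rebase them (addresses are examples only):
link /DLL /BASE:0x61000000 /OUT:report.dll report.obj
link /DLL /BASE:0x62000000 /OUT:charting.dll charting.obj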
Btw, if you're dealing with .NET then NGen'ing your app/DLLs should help speed things up (NGen = native image generation).

Yep, anything read in from the hard drive is cached, so it will load faster the second time. The basic assumption is that it's rare to use a large chunk of data from the HD only once and then discard it (this is usually a good assumption in practice). Typically I think it's the operating system (kernel) that implements the cache, taking up a chunk of RAM to do so, although I believe modern hard drives also have some built-in cache of their own. (I once wrote a small kernel as an academic project; caching of HD data in memory was one of its features.)

One additional factor that affects program startup time is SuperFetch, a technology introduced with Windows Vista (Windows XP shipped a simpler prefetcher). Essentially it monitors disk access during program startup, recognizes file access patterns and then attempts to "bunch up" the required data for quicker access (e.g. by rearranging the data sequentially on disk according to its loading order).
As the others mentioned, generally speaking any read operation is likely to be cached by the Windows disk cache, and reused unless the memory is needed for other operations.

NGen'ing the assemblies might help with the startup time; however, runtime performance might be affected (sometimes NGen'ed code is not as optimal as code JIT-compiled on demand).
NGENing can be done in the background as well: http://blogs.msdn.com/davidnotario/archive/2005/04/27/412838.aspx
Here's another good article NGen and Performance http://msdn.microsoft.com/en-us/magazine/cc163808.aspx
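For reference, a minimal sketch of queuing NGen in the background; the framework path and assembly name are placeholders for whatever you ship:
rem Queue native image generation at low priority instead of blocking the install:
%WINDIR%\Microsoft.NET\Framework\v2.0.50727\ngen.exe install MyApp.exe /queue:3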

The system cache is used for anything that comes off disk. That includes file metadata, so if you are using applications that open a large number of files (say, directory scanners), then you can easily flush the cache if you also have applications running that eat up a lot of memory.
For the stuff I use, I prefer a small number of large files (64 MB up to 1 GB) and asynchronous, unbuffered I/O. And a good ol' defrag every once in a while.
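A rough sketch of what asynchronous, unbuffered reads look like with the Win32 API; the file path and chunk size are placeholders, and note that FILE_FLAG_NO_BUFFERING requires sector-aligned buffers, offsets and sizes (VirtualAlloc's page alignment satisfies that):
#include <windows.h>

int main()
{
    HANDLE file = CreateFileW(L"D:\\data\\chunk0.bin", GENERIC_READ, FILE_SHARE_READ,
                              nullptr, OPEN_EXISTING,
                              FILE_FLAG_NO_BUFFERING | FILE_FLAG_OVERLAPPED, nullptr);
    if (file == INVALID_HANDLE_VALUE) return 1;

    const DWORD chunk = 64 * 1024;   // multiple of the sector size
    void* buffer = VirtualAlloc(nullptr, chunk, MEM_COMMIT | MEM_RESERVE, PAGE_READWRITE);

    OVERLAPPED ov = {};
    ov.hEvent = CreateEventW(nullptr, TRUE, FALSE, nullptr);
    if (!ReadFile(file, buffer, chunk, nullptr, &ov) &&
        GetLastError() != ERROR_IO_PENDING) return 1;

    // ... do other work while the read is in flight ...

    DWORD bytesRead = 0;
    GetOverlappedResult(file, &ov, &bytesRead, TRUE);   // wait for completion

    CloseHandle(ov.hEvent);
    VirtualFree(buffer, 0, MEM_RELEASE);
    CloseHandle(file);
    return 0;
}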

Related

Huge memory allocations make Windows completely unresponsive

This question is not really related to any specific code or even language.
If you allocate a huge amount of memory (exceeding physical memory) on Windows, it causes the entire operating system to become completely unresponsive - including the mouse cursor, which typically can still move even when the rest of the system has crashed.
The working set API does not seem to solve the problem - it appears that all applications already have their initial maximum working set size set to a rather low level.
I hoped memory-mapped files (via the Boost API) would help the OS make better decisions about page loading/unloading - but again, even a single pass through the large data freezes the system.
Are there any magic WinAPI calls or other good programming practices (other than manually managing the entire committed memory and manually caching data in files) that would keep the operating system and other applications reasonably stable while using such a huge amount of data?

Is it possible in Windows to use part of memory as a virtual file

I'm using a command-line tool to do some processing on a file. The thing is that this file should not be stored on disk (security reasons). So I was wondering whether it's possible in Windows to use a part of memory as a virtual file that is accessible by the command-line tool as if it were a real physical file.
Yes, it's possible with what are usually referred to as "ramdisks". "What's the best ramdisk for Windows?" over at superuser.com has some links.
Have you written the command-line tool yourself? If so, you can simply allocate a section of memory to your program and use it in your processing; there's little reason to trick the app into thinking it's using a file on a physical disk (a rough sketch follows this answer). The specifics of how to do so depend on what language your app is written in.
If not, you'll need to create a RAM disk and tell the program to use that. Using a RAM disk on Windows requires third-party software; a comprehensive list of options is available here on Super User.
Note, though, that neither using a RAM disk nor storing all of your data in memory will make it more secure. Information stored in RAM is just as accessible to prying eyes and malicious applications as data saved on the hard disk - probably more so than data that has been deleted from the hard disk.
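A rough sketch of the "keep it in memory" option: a pagefile-backed section that never touches an ordinary disk file. The section name and buffer contents are made up, and sharing the section by name only works if you control the tool on the other side:
#include <windows.h>
#include <cstring>

int main()
{
    const DWORD size = 1 << 20;   // 1 MB, arbitrary
    // Backed by the paging file (first argument INVALID_HANDLE_VALUE),
    // so nothing is ever written to a regular on-disk file.
    HANDLE section = CreateFileMappingW(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, size,
                                        L"Local\\MyScratchSection");   // hypothetical name
    if (!section) return 1;

    void* view = MapViewOfFile(section, FILE_MAP_ALL_ACCESS, 0, 0, size);
    if (view)
    {
        memcpy(view, "sensitive data", 15);   // ... fill and process in memory ...
        UnmapViewOfFile(view);
    }
    CloseHandle(section);   // the memory disappears when the last handle closes
    return 0;
}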
If you need a ready-to-use application, there are several ramdisk applications (including free ones) on the market, and then your question here is off-topic. If you need to do this in code, then one of our virtual storage products (SolFS, CallbackDisk, Callback File System) will work, and Callback File System has a sample project that stores files in memory.
If you're using .NET, you might look into MemoryStream.
Note Cody Gray's answer though, which is only too true insofar as having something in memory does not guarantee that it can't be compromised. Though opinions differ on this subject. Most people would argue that writing to disk is even less secure, especially in the age of wear-levelling where controlling what is deleted and what is not is practically impossible.
RAM has its own disadvantages, but on the positive side, what's gone is gone :-)

What could cause the application as well as the system to slowdown?

I am debugging an application which slows down the system very badly. The application loads a large amount of data (some 1000 files, each about half an MB) from the local hard disk. The files are loaded as memory-mapped files and are mapped only when needed. This means that at any given point in time the virtual memory usage does not exceed 300 MB.
I also checked the handle count using handle.exe from Sysinternals and found that there are at most some 8000-odd handles opened. When the data is unloaded it drops to around 400. There are no handle leaks after each load and unload operation.
After 2-3 load/unload cycles, during one load, the system becomes very slow. I checked the virtual memory usage of the application as well as the handle counts at this point and both were well within limits (VM about 460 MB, not much fragmentation either; handle count 3200).
I want to know how an application could make the system so slow to respond. What other tools can I use to debug this scenario?
Let me be more specific: when I say "system" I mean that all of Windows slows down. Task Manager itself takes 2 minutes to come up, and most often a hard reboot is required.
The fact that the whole system slows down is very annoying; it means you cannot attach a profiler easily, and it would even be difficult to stop the profiling session in order to view the results (since you said it requires a hard reboot).
The tool best suited for the job in this situation is ETW (Event Tracing for Windows). These tools are great and will give you the exact answer you are looking for.
Check them out here
http://msdn.microsoft.com/en-us/library/cc305210.aspx
and
http://msdn.microsoft.com/en-us/library/cc305221.aspx
and
http://msdn.microsoft.com/en-us/performance/default.aspx
Hope this works.
Thanks
Tools you can use at this point:
Perfmon
Event Viewer
In my experience, when things happen to a system that prevent Task Manager from popping up, they're usually of the hardware variety -- the system event log in Event Viewer is sometimes full of warnings or errors that some hardware device is timing out.
If Event Viewer doesn't indicate that any kind of loggable hardware error is causing the slowdown, then try Perfmon -- add counters for system objects to track file reads, exceptions, context switches etc. per second and see if there's anything obvious there.
Frankly the sort of behavior demonstrated is meant to be impossible - by design - for user-mode code to cause. WinNT goes to a lot of effort to insulate applications from each other and prevent rogue applications from making the system unusable. So my suspicion is some kind of hardware fault is to blame. Is there any chance you can simply run the same test on a different PC?
If you don't have profilers, you may have to do the same work by hand...
Have you tried commenting out all read/write operations, just to check whether the slowdown disappears?
"Divide and conquer" strategies will help you find where the problem lies.
If you run it under an IDE, run it until it gets real slow, then hit the "pause" button. You will catch it in the act of doing whatever takes so much time.
You can use tools like IBM Rational Quantify or Intel VTune to detect performance issues.
[EDIT]
As Benoît did, one good approach is measuring task times to identify which one is eating CPU.
But remember, since you are working with many files, it is likely that page misses are causing memory to be swapped to disk.
When Task Manager is taking 2 minutes to come up, are you getting a lot of disk activity? Or is it CPU-bound?
I would try Process Explorer from Sysinternals. When your system is in the slowed-down state and you try running, say, Notepad, pay attention to the page fault deltas.
Windows is very greedy about caching file data. I would try removing file I/O as someone suggested, and also making sure you close the file mapping as soon as you are done with a file (see the sketch below).
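A minimal sketch of the full teardown sequence for a memory-mapped file in plain Win32 (the file name is whatever you are loading); every view, section object and file handle has to be released before the pages can be reclaimed:
#include <windows.h>

void ProcessFile(const wchar_t* path)
{
    HANDLE file = CreateFileW(path, GENERIC_READ, FILE_SHARE_READ, nullptr,
                              OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (file == INVALID_HANDLE_VALUE) return;

    HANDLE mapping = CreateFileMappingW(file, nullptr, PAGE_READONLY, 0, 0, nullptr);
    if (mapping)
    {
        void* view = MapViewOfFile(mapping, FILE_MAP_READ, 0, 0, 0);
        if (view)
        {
            // ... read the data ...
            UnmapViewOfFile(view);     // release the view as soon as you are done
        }
        CloseHandle(mapping);          // then the section object
    }
    CloseHandle(file);                 // and finally the file handle itself
}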
I/O is probably causing your slowdown, especially if your files are on the same disk as the OS. Another way to test that would be to move your files to another disk and see if that alleviates the problem.

Why do some installations take so much time?

Pretty much we've all done an installer here and there - and all of us did an installation of some behemoth of a program. Why do some installations take so much time? Case in point: Adobe CS suite (with newer versions you can take a vacation) or Visual Studio.
I know there are files to copy - most of the time unpack, even. There are some registry keys to set (if under Windows), maybe a service or two to start. Some installations probably even check hardware/software combinations. All of this does not justify the sllloooow installation time of some programs.
How can I speed it up?
It obviously depends on what you're installing. As Colin Pickard pointed out, you'll be shifting huge quantities of data onto the disk (plus an optional virus check, etc.).
For installations I've built recently, we have to request the shut down of some Windows services, wait for that, and check that they really have shut down before continuing. That takes time.
I confess that in the above, that's not parallelised, whereas it could be. I suspect that installers are not necessarily optimised. They may well be the last thing the team puts together prior to release, and they may well figure that you're only going to run them once (and forget the pain upon completion). Obviously not an ideal state of affairs!
Visual Studio on my machine is 3.03 GB - 16,842 files in 1,979 folders. Passing 3 GB through virus scanning and auditing software and onto the filesystem is too much for my (dual-core, 2 GB, SATA2) system - it's CPU- or I/O-bound the whole way through the process. That's why it takes so long.
Most installers not only pack, but also compress their contents, so at installation time all of these files must be decompressed. All of the data that is decompressed must be written to disk after it is decompressed as well.
Look at the time a zip operation takes on several files. It's also slow.
Many installers maintain a log that is flushed to disk after each primitive operation, so that even if installation encounters a fatal failure the log is preserved and can be sent to the software vendor. All that flushing adds up and contributes significantly to the overall time.
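A rough illustration of where that time goes, assuming Win32 file APIs; the helper name is made up, and the point is simply that each FlushFileBuffers forces a synchronous trip to the physical disk:
#include <windows.h>

// Hypothetical per-step logging, as many installers do it:
void LogInstallerStep(HANDLE log, const char* line, DWORD len)
{
    DWORD written = 0;
    WriteFile(log, line, len, &written, nullptr);
    FlushFileBuffers(log);   // one synchronous flush to disk per installer step
}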

Comparing cold-start to warm start

Our application takes significantly more time to launch after a reboot (cold start) than if it was already opened once (warm start).
Most (if not all) of the difference seems to come from loading DLLs; when the DLLs are in cached memory pages they load much faster. We tried using ClearMem to simulate rebooting (since it's much less time-consuming than actually rebooting) and got mixed results; on some machines it seemed to simulate a reboot very consistently and on some it did not.
To sum up my questions are:
Have you experienced differences in launch time between cold and warm starts?
How have you dealt with such differences?
Do you know of a way to dependably simulate a reboot?
Edit:
Clarifications for comments:
The application is mostly native C++ with some .NET (the first .NET assembly that's loaded pays for the CLR).
We're looking to improve load time; obviously we did our share of profiling and improved the hotspots in our code.
Something I forgot to mention was that we got some improvement by re-basing all our binaries so the loader doesn't have to do it at load time.
As for simulating reboots, have you considered running your app from a virtual PC? Using virtualization you can conveniently replicate a set of conditions over and over again.
I would also consider some type of profiling app to spot the bit of code causing the time lag, and then making the judgement call about how much of that code is really necessary, or if it could be achieved in a different way.
It would be hard to truly simulate a reboot in software. When you reboot, all devices in your machine get their reset bit asserted, which should cause all memory system-wide to be lost.
In a modern machine you've got memory and caches everywhere: there's the VM subsystem storing pages of memory for the program, then you've got the OS caching the contents of files in memory, and then you've got the on-disk buffer of sectors on the hard drive itself. You can probably get the OS caches to reset, but the on-disk buffer on the drive? I don't know of a way.
How did you profile your code? Not all profiling methods are equal and some find hotspots better than others. Are you loading lots of files? If so, disk fragmentation and seek time might come into play.
Maybe even sticking basic timing information into the code, writing out to a log file and examining the files on cold/warm start will help identify where the app is spending time.
Without more information, I would lean towards filesystem/disk cache as the likely difference between the two environments. If that's the case, then you either need to spend less time loading files upfront, or find faster ways to load files.
Example: if you are loading lots of binary data files, speed up loading by combining them into a single file, then slurp the whole file into memory in one read and parse the contents. Fewer disk seeks and less time spent reading off of disk. Again, maybe that doesn't apply.
I don't know offhand of any tools to clear the disk/filesystem cache, but you could write a quick application to read a bunch of unrelated files off of disk to cause the filesystem/disk cache to be loaded with different info.
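A quick-and-dirty sketch of that idea; the directory is just an example of somewhere with lots of large, unrelated files, and reading them sequentially repopulates the standby cache with different data:
#include <windows.h>
#include <string>

static void Touch(const std::wstring& path)
{
    HANDLE f = CreateFileW(path.c_str(), GENERIC_READ, FILE_SHARE_READ, nullptr,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, nullptr);
    if (f == INVALID_HANDLE_VALUE) return;
    char buf[64 * 1024];
    DWORD read = 0;
    while (ReadFile(f, buf, sizeof(buf), &read, nullptr) && read > 0) { /* discard */ }
    CloseHandle(f);
}

static void Walk(const std::wstring& dir)
{
    WIN32_FIND_DATAW fd;
    HANDLE h = FindFirstFileW((dir + L"\\*").c_str(), &fd);
    if (h == INVALID_HANDLE_VALUE) return;
    do {
        std::wstring name = fd.cFileName;
        if (name == L"." || name == L"..") continue;
        if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY) Walk(dir + L"\\" + name);
        else Touch(dir + L"\\" + name);   // pull this file's data into the cache
    } while (FindNextFileW(h, &fd));
    FindClose(h);
}

int main() { Walk(L"C:\\Windows\\WinSxS"); return 0; }   // example directory only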
@Morten Christiansen said:
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
That makes the customer pay for initializing our app at every boot even when it isn't used; I really don't like that option (neither does Raymond).
One successful way to speed up application startup is to switch DLLs to delay-load. This is a low-cost change (some fiddling with project settings) but can make startup significantly faster. Afterwards, run depends.exe in profiling mode to figure out which DLLs load during startup anyway, and revert the delay-load on those. Remember that you may also delay-load most Windows DLLs you need.
A very effective technique for improving application cold launch time is optimizing function link ordering.
The Visual Studio linker lets you pass in a file that lists all the functions in the module being linked (or just some of them - it doesn't have to be all of them), and the linker will place those functions next to each other in memory.
When your application is starting up, there are typically calls to init functions throughout your application. Many of these calls will be to a page that isn't in memory yet, resulting in a page fault and a disk seek. That's where slow startup comes from.
Optimizing your application so all these functions are together can be a big win.
Check out Profile Guided Optimization in Visual Studio 2005 or later. One of the things that PGO does for you is function link ordering.
It's a bit difficult to work into a build process, because with PGO you need to link, run your application, and then re-link with the output from the profile run. This means your build process needs a runtime environment and has to deal with cleaning up after bad builds and all that, but the payoff is typically a 10+% faster cold launch with no code changes.
There's some more info on PGO here:
http://msdn.microsoft.com/en-us/library/e7k32f4k.aspx
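A rough sketch of the PGO cycle with the VS2005-era switches (file names are placeholders, and the exact switches vary by toolset version):
rem 1. Compile for link-time code generation, 2. link an instrumented build,
rem 3. run representative startup scenarios, 4. re-link using the profile data.
cl /O2 /GL /c app.cpp
link /LTCG:PGINSTRUMENT app.obj /OUT:app.exe
app.exe
link /LTCG:PGOPTIMIZE app.obj /OUT:app.exe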
As an alternative to a function order list, just group the code that will be called at startup into the same sections:
#pragma code_seg(".startUp")
//...
#pragma code_seg
#pragma data_seg(".startUp")
//...
#pragma data_seg
It should be easy to maintain as your code changes, and it has the same benefit as a function order list.
I am not sure whether a function order list can specify global variables as well, but using #pragma data_seg like this simply works.
One way to make apps start cold-start faster (sort of) is used by e.g. Adobe reader, by loading some of the files on startup, thereby hiding the cold start from the users. This is only usable if the program is not supposed to start up immediately.
Another note: .NET 3.5 SP1 supposedly has much improved cold-start speed, though by how much I cannot say.
It could be the NICs (LAN cards): your app may depend on certain other services that require the network to come up. Profiling your application alone may not tell you this, so you should also examine your application's dependencies.
If your application is not very complicated, you can just copy all the executables to another directory; that should be similar to a reboot. (Cut and paste does not seem to work - Windows is smart enough to know that files moved to another folder are still cached in memory.)
