I'm wondering how long it takes (in milliseconds) to read a registry value from the Windows registry through standard C# libraries. In this case, I'm reading in some proxy settings.
What order of magnitude value should I expect?
Are there any good benchmark data available?
I'm running WS2k8 R2 amd64. Bonus points: How impactful is the OS sku/version on this measure?
using (RegistryKey registryKey = Registry.CurrentUser.OpenSubKey(@"Software\Copium"))
{
    return (string)registryKey.GetValue("BinDir");
}
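If an order-of-magnitude number on a specific box is what's needed, a quick Stopwatch micro-benchmark around that snippet will tell more than any published figure. A minimal sketch, reusing the hypothetical Software\Copium / BinDir names from above:

// Rough micro-benchmark: time repeated reads of a single registry value.
// The first read may be slower (cold hive pages); later reads hit the
// in-memory cache, so the average mostly reflects the cached case.
using System;
using System.Diagnostics;
using Microsoft.Win32;

class RegistryReadTiming
{
    static void Main()
    {
        const int iterations = 1000;
        Stopwatch stopwatch = Stopwatch.StartNew();

        for (int i = 0; i < iterations; i++)
        {
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\Copium"))
            {
                string value = key == null ? null : (string)key.GetValue("BinDir");
            }
        }

        stopwatch.Stop();
        Console.WriteLine("Average per open+read: {0:F4} ms",
            stopwatch.Elapsed.TotalMilliseconds / iterations);
    }
}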
I cannot quote numbers, as I don't know them. But having just read the ~30 pages on the registry in Windows Internals, 5th Edition, the following noteworthy things (which I didn't know before) became clear.
The registry is transactional and has failsafes to prevent it from being corrupted, which can affect performance. But since the effective isolation level is read committed, reads shouldn't be blocked by writes, so they should stay fast.
The registry is cached in memory (frequently used values, anyway), so if you access a set of keys often, performance should remain stable after the first hit.
This blog post from Raymond Chen should prove helpful:
The performance cost of reading a registry key
I would add that, in general, storing settings in the registry is not recommended for C# apps; use a config file or some other form of persisted storage instead. There are permission-related problems when you deal with the registry. Reading is often not an issue, but if you are going to persist things it gets hairy, especially with UAC and newer OSs that virtualize and redirect writes.
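To make the config-file suggestion concrete, here is a minimal sketch; the BinDir setting name is just the hypothetical one from the question, and it assumes an App.config deployed next to the executable:

// App.config (assumed to sit next to the executable as YourApp.exe.config):
// <configuration>
//   <appSettings>
//     <add key="BinDir" value="C:\Tools\Copium\bin" />
//   </appSettings>
// </configuration>

using System;
using System.Configuration; // add a reference to System.Configuration.dll

class ConfigFileExample
{
    static void Main()
    {
        // No registry permissions or UAC virtualization to worry about:
        // the value is read from the application's own config file.
        string binDir = ConfigurationManager.AppSettings["BinDir"];
        Console.WriteLine(binDir ?? "BinDir is not configured");
    }
}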
I'm using some basic processor detection in an installer, to determine which version of a software package should be available to the user.
Currently, I'm going through WMI to get some basic information, but I found that doing so very regularly gives unreliable results for the CPU features (CPUID is apparently poorly supported on a good number of mobile processors).
To avoid this kind of issue and to speed things up, I've been looking at getting the processor capabilities from the Windows registry instead -- after all, the information should all be available there, under HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\{n}.
Reading a key from the registry makes the installer code much simpler: it doesn't have to call out to WMI (which is slow and can fail, since I'd have to rely on a language like VBScript with WMI access, whereas registry operations are supported as standard in my development scripting language), and it should avoid the incorrect CPU information I was getting that way.
Sure enough, I found a wealth of information there, but the most important part, the "FeatureSet" value, which I assume is a DWORD containing flags for available processor features such as SIMD instruction sets, is not documented anywhere. I've spent a good while searching the 'net trying to find any sort of documentation about this registry value, to no avail.
Does anyone have a document outlining or describing the bits in that registry value?
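For reference, a minimal C# sketch that dumps the values under that key; FeatureSet is printed as a raw DWORD, since the meaning of its bits is exactly what remains undocumented:

// Dump per-CPU values from HKLM\HARDWARE\DESCRIPTION\System\CentralProcessor\{n}.
// FeatureSet is shown as a raw hex DWORD; its individual bits are undocumented.
using System;
using Microsoft.Win32;

class CpuRegistryDump
{
    static void Main()
    {
        using (RegistryKey cpus = Registry.LocalMachine.OpenSubKey(
            @"HARDWARE\DESCRIPTION\System\CentralProcessor"))
        {
            foreach (string index in cpus.GetSubKeyNames())
            {
                using (RegistryKey cpu = cpus.OpenSubKey(index))
                {
                    int featureSet = (int)cpu.GetValue("FeatureSet", 0);
                    Console.WriteLine("CPU {0}: {1}", index, cpu.GetValue("ProcessorNameString"));
                    Console.WriteLine("  Identifier: {0}", cpu.GetValue("Identifier"));
                    Console.WriteLine("  FeatureSet: 0x{0:X8}", featureSet);
                }
            }
        }
    }
}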
This is out of curiosity, but I have seen several pieces of software (some of them very popular) called registry defragmenters. While I can see the benefit they claim to offer, I am very curious how registry defragmenting is actually done. Note that I'm not asking for software names, just a basic description of how it's done programmatically. I understand Microsoft provides a disk defragmentation API. Is that what they are using? Or is there a "registry defragmenting" API?
While defragmenting the files on disk would be helpful, the more important speed benefit would come from arranging the registry nodes so that a typical depth-first traversal puts sequentially accessed nodes in the same registry page.
I'm not aware of any API for that. The algorithm is a straightforward reordering and rewriting operation, complicated by dealing with Windows' concurrent access.
I suspect they're just defragmenting the files used to store registry information. Since the registry files are open during all normal Windows operation, a "normal" file defragmenting tool won't even touch them.
Answer: most of them parse the hive file format directly and rewrite it manually.
There is another possible way: Using the RegSaveKey and then the RegReplaceKey functions, which are used by the Windows Backup utility.
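For illustration, a rough managed-code outline of that route. The P/Invoke signatures are the standard advapi32 ones, but the calls only succeed if the process has SeBackupPrivilege and SeRestorePrivilege enabled, the file paths are made up, and the actual replacement of the hive file takes effect at the next reboot, so treat this as a sketch of the mechanism rather than a working tool:

// Sketch of the RegSaveKey/RegReplaceKey mechanism. Requires .NET 4+ for
// RegistryKey.Handle, plus SeBackupPrivilege/SeRestorePrivilege in the token;
// without the privileges the calls return an error code instead of 0.
using System;
using System.Runtime.InteropServices;
using Microsoft.Win32;

class HiveRewriteSketch
{
    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern int RegSaveKey(IntPtr hKey, string lpFile, IntPtr lpSecurityAttributes);

    [DllImport("advapi32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern int RegReplaceKey(IntPtr hKey, string lpSubKey, string lpNewFile, string lpOldFile);

    static void Main()
    {
        IntPtr hive = Registry.CurrentUser.Handle.DangerousGetHandle();

        // 1. Write the hive out to a new, compact file (0 == ERROR_SUCCESS).
        int result = RegSaveKey(hive, @"C:\Temp\ntuser.compacted", IntPtr.Zero);
        Console.WriteLine("RegSaveKey returned {0}", result);

        // 2. Queue the new file to replace the live hive at the next boot,
        //    keeping the old contents as a backup file.
        result = RegReplaceKey(hive, null, @"C:\Temp\ntuser.compacted", @"C:\Temp\ntuser.backup");
        Console.WriteLine("RegReplaceKey returned {0}", result);
    }
}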
How do they prevent crashes on a live OS? Simple: they reroute calls to the Windows Reg* API functions and handle them themselves, caching any changes that need to be written out later. It would also be wise to hold an exclusive lock on the hive files.
I trust defragmenters MUCH more than I trust optimizers. Registry optimizers can set untested or broken keys and enable broken features. With the mass commercialization of these tools this is less of a problem than it used to be, but with what I've seen in the past, I don't trust them not to muck up my stable system in ways that are too hard to pin down.
I have been investigating the Windows prefetching system, hoping to find a way to speed up the load time of an application I am working on. I found the following link, where a developer describes modifications to the prefetcher registry values:
http://dotnet.dzone.com/news/improving-cold-startup
I have made similar modifications locally and found that they do provide faster application loading times. My problem is that I cannot find any documentation on the registry values that were changed and why the new values are better than the old ones.
So my question, in short, is: does anybody have any further information on the prefetcher registry values given below? (A small sketch for dumping their current values follows the list.)
VideoInitTime
EnablePrefetcher
AppLaunchMaxNumPages
AppLaunchMaxNumSections
AppLaunchTimerPeriod
BootMaxNumPages
BootMaxNumSections
BootTimerPeriod
MaxNumActiveTraces
MaxNumSavedTraces
RootDirPath
HostingAppList
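For anyone experimenting along the same lines, this minimal sketch dumps whatever is currently set so you can record the defaults first. The key path below is the commonly cited location of EnablePrefetcher on client Windows; whether every value listed above lives under it is an assumption:

// Dump all values under the PrefetchParameters key before experimenting.
// Run elevated if access is denied; the path is the usual location of
// EnablePrefetcher, but not every listed value may be present on every OS.
using System;
using Microsoft.Win32;

class PrefetchParamsDump
{
    static void Main()
    {
        const string path =
            @"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters";

        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
        {
            if (key == null)
            {
                Console.WriteLine("Key not found: HKLM\\" + path);
                return;
            }

            foreach (string name in key.GetValueNames())
            {
                Console.WriteLine("{0} = {1} ({2})", name, key.GetValue(name), key.GetValueKind(name));
            }
        }
    }
}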
You don't say what profiling or other changes you've done, and when people dive in with off-the-wall solutions to perf problems but don't describe how they arrived at the need for them, I'm always a bit doubtful.
Where is your app spending its start-up time? How did you measure that? Can you fix an underlying '300 dlls' problem of the type described in that article?
Messing with OS prefetch policy may be improving your application at the expense of everyone else, which may be the right thing to do (on a single-use industrial control system or something like that), but may be completely antisocial.
"Load less code" is often a good general way to improve application startup time - do you have some very expensive config file storage mechanism, for example (XmlSerializer was notorious for this at one point, for example).
I'm sure many have noticed that when you have a large application (i.e. something requiring a few MBs of DLLs) it loads much faster the second time than the first time.
The same happens if you read a large file in your application. It's read much faster after the first time.
What affects this? I suppose it is the hard drive's cache, or is the OS adding some memory caching of its own?
What techniques do you use to speed-up the loading times of large applications and files?
Thanks in advance
Note: the question refers to Windows
Added: what determines the size of the OS cache? In some apps, files load slowly again after only a minute or so; does the cache really fill up within a minute?
Two things can affect this. The first is hard-disk caching (done by the disk itself, which has little impact, and by the OS, which tends to have more impact). The second is that Windows (and other OSes) have little reason to unload DLLs when they're finished with them unless the memory is needed for something else. This is because DLLs can easily be shared between processes.
So DLLs have a habit of hanging around even after the applications that were using them disappear. If another application decides it needs one of those DLLs, it's already in memory and just has to be mapped into the process's address space.
I've seen some applications pre-load their required DLLs (the feature is usually called something like QuickStart; I think both MS Office and Adobe Reader do this) so that the perceived load times are better.
Windows' memory manager is actually pretty slick: it services memory requests AND acts as the disk cache. With enough free memory on the system, lots of recently accessed files will reside in memory. Until the physical memory is needed for something else, those DLLs remain in the cache, courtesy of the Cache Manager.
As for how to help, look into delay-loading your DLLs. You get the advantage of LoadLibrary only when you need it, but automatically, so you don't have to scatter LoadLibrary/GetProcAddress calls throughout your code (well, automatic in the sense that you only need to add a linker switch):
http://msdn.microsoft.com/en-us/library/yx9zd12s.aspx
Or you could pre-load like Office and others do (as mentioned above), but I personally hate that -- it slows down the computer at initial boot.
I see two possibilities:
preload your libraries at system startup, as already mentioned. Office, OpenOffice and others do just that.
I am not a great fan of that solution: it makes your boot time longer and eats lots of memory.
load your DLLs dynamically (see LoadLibrary) only when needed. Unfortunately this is not possible with every DLL.
For example, why load a DLL at startup to export files in XYZ format when you are not sure it will ever be needed? Load it when the user actually selects that export format (a minimal sketch of this follows below).
I have a dream where Adobe Acrobat uses this approach, instead of bogging me down with loads of plugins I never use every time I want to display a PDF file!
Depending on your needs you might have to use both techniques: preload some big, heavily used libraries and load specific plugins only on demand...
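Here is a minimal sketch of the load-on-demand idea from managed code. The DLL name and export ("xyzexport.dll", "ExportToXyz") are made up for illustration, and in native C++ the /DELAYLOAD linker switch mentioned above gives you the same effect without the boilerplate:

// Load a native plugin DLL only when the user actually picks that export
// format. "xyzexport.dll" and "ExportToXyz" are hypothetical names.
using System;
using System.Runtime.InteropServices;

class OnDemandLoad
{
    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern IntPtr LoadLibrary(string fileName);

    [DllImport("kernel32.dll", CharSet = CharSet.Ansi, ExactSpelling = true, SetLastError = true)]
    static extern IntPtr GetProcAddress(IntPtr module, string procName);

    [UnmanagedFunctionPointer(CallingConvention.Cdecl)]
    delegate int ExportToXyz(string path);

    static void Main()
    {
        // Nothing is loaded until this point, so startup stays fast.
        IntPtr module = LoadLibrary("xyzexport.dll");
        if (module == IntPtr.Zero)
            throw new InvalidOperationException("Could not load xyzexport.dll");

        IntPtr proc = GetProcAddress(module, "ExportToXyz");
        var export = (ExportToXyz)Marshal.GetDelegateForFunctionPointer(proc, typeof(ExportToXyz));
        export(@"C:\output\report.xyz");
    }
}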
One item that might be worth looking at is "rebasing". Each DLL has a preset "base" address that it prefers to be loaded at in memory. If an application loads the DLL at a different address (because the preferred one is not available), the DLL is loaded at the new address and "rebased"; roughly speaking, this means that parts of the DLL are patched on the fly. This only applies to native DLLs, as opposed to .NET assemblies.
This really old MSDN article covers rebasing:
http://msdn.microsoft.com/en-us/library/ms810432.aspx
Not sure whether much of it still applies (it's a very old article)... but here's an enticing quote:
Prefer one large DLL over several small ones; make sure that the operating system does not need to search for the DLLs very long; and avoid many fixups if there is a chance that the DLL may be rebased by the operating system (or, alternatively, try to select your base addresses such that rebasing is unlikely).
Btw, if you're dealing with .NET then NGen'ing your app/DLLs should help speed things up (NGen = native image generation).
Yep, anything read in from the hard drive is cached, so it will load faster the second time. The basic assumption is that it's rare to use a large chunk of data from the HD only once and then discard it (this is usually a good assumption in practice). Typically it's the operating system (kernel) that implements the cache, taking up a chunk of RAM to do so, although modern hard drives also have some built-in cache of their own. (I once wrote a small kernel as an academic project; caching of HD data in memory was one of its features.)
One additional factor which affects program startup time is SuperFetch, a technology introduced with Windows Vista (the simpler prefetcher it builds on dates back to Windows XP). Essentially it monitors disk access during program startup, recognizes file access patterns and then attempts to "bunch up" the required data for quicker access (e.g. by rearranging the data sequentially on disk according to its loading order).
As the others mentioned, generally speaking any read operation is likely to be cached by the Windows disk cache, and reused unless the memory is needed for other operations.
NGen'ing the assemblies might help with startup time; however, run-time performance might be affected (sometimes NGen'ed code is not as optimal as JIT-compiled code).
NGen'ing can be done in the background as well: http://blogs.msdn.com/davidnotario/archive/2005/04/27/412838.aspx
Here's another good article, NGen and Performance: http://msdn.microsoft.com/en-us/magazine/cc163808.aspx
The system cache is used for anything that comes off disk. That includes file metadata, so if you are using applications that open a large number of files (say, directory scanners), then you can easily flush the cache if you also have applications running that eat up a lot of memory.
For the stuff I use, I prefer a small number of large files (roughly 64 MB to 1 GB) and asynchronous, unbuffered I/O. And a good ol' defrag every once in a while.
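As a rough illustration of the asynchronous half of that approach (true unbuffered I/O, i.e. FILE_FLAG_NO_BUFFERING, isn't exposed by FileStream directly and needs P/Invoke plus sector-aligned buffers, so it is left out here; the file path is hypothetical):

// Read a large file asynchronously in big sequential chunks.
// FileOptions.SequentialScan hints the cache manager to read ahead and
// not keep the pages around longer than necessary.
using System;
using System.IO;
using System.Threading.Tasks;

class LargeFileRead
{
    static async Task Main()
    {
        const string path = @"C:\data\bigfile.bin";   // hypothetical path
        byte[] buffer = new byte[4 * 1024 * 1024];    // 4 MB chunks

        using (var stream = new FileStream(path, FileMode.Open, FileAccess.Read,
            FileShare.Read, 1 << 16,
            FileOptions.Asynchronous | FileOptions.SequentialScan))
        {
            long total = 0;
            int read;
            while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                total += read;

            Console.WriteLine("Read {0} bytes", total);
        }
    }
}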
I would like to create events for certain resources that are used across various processes and access these events by name. The problem seems to be that the names of the events must be known to all applications referring to them.
Is there maybe a way to get a list of named events in the system?
I am aware that I could use some standard names, but that seems rather inflexible with regard to future extensibility (all applications would require a recompile).
I'm afraid I can't even consider ZwOpenDirectoryObject, because it is described as requiring Windows XP or higher, so it is out of the question. Thanks for the suggestion though.
I am a little unsure about shared memory, because I haven't tried it so far; I might do some reading in that area, I guess. Configuration files and the registry are a slight problem, because they tend to fail on Vista due to access problems, and I am a bit afraid that shared memory will have the same problem.
The idea with Process Explorer sounds promising. Does anyone know an API that could be used for listing the events of a process? And does it work without administrative rights?
Thank you for the clarification.
There is not really a master process. It is more of a driver DLL that is used from different processes, and the events would be used to "lock" resources shared by these processes.
I am thinking about setting up a central service that has sufficient access rights even under Vista. It will certainly complicate things, but it might be the only option left, given the security problems.
No, there is not any facility to enumerate named events. You could enumerate all objects in the respective object manager directory using ZwOpenDirectoryObject and then filter for events. But this routine is undocumented and therefore should not be used without good reason.
Why not use a separate mechanism to share the event names? You could list them in a configuration file, a registry key or maybe even in shared memory.
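To make that suggestion concrete, here is a minimal .NET sketch of the "share the name out of band" idea: the creating process publishes the event name under a registry key, and other processes read the name and open the event. The key path and names are made up for illustration; a config file or shared memory would work the same way, and prefixing the name with "Global\" would be needed to cross session boundaries.

// Creator process: make a named event and publish its name so that other
// processes can find it without the name being compiled in anywhere.
// "Software\MyDriver\Events" and the value name are hypothetical.
using System;
using System.Threading;
using Microsoft.Win32;

class EventPublisher
{
    static void Main()
    {
        string eventName = "MyDriver.ResourceLock." + Guid.NewGuid().ToString("N");

        // Create the event; other processes in the same session can open it by name.
        EventWaitHandle gate = new EventWaitHandle(false, EventResetMode.ManualReset, eventName);

        // Publish the name out of band.
        using (RegistryKey key = Registry.CurrentUser.CreateSubKey(@"Software\MyDriver\Events"))
        {
            key.SetValue("ResourceLock", eventName);
        }

        Console.WriteLine("Published {0}; press Enter to signal it.", eventName);
        Console.ReadLine();
        gate.Set();
    }
}

// Consumer process (sketch):
//   string name = (string)Registry.CurrentUser
//       .OpenSubKey(@"Software\MyDriver\Events").GetValue("ResourceLock");
//   EventWaitHandle gate = EventWaitHandle.OpenExisting(name);
//   gate.WaitOne();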
Do not mix up the user mode ZwOpenDirectoryObject with the kernel mode ZwOpenDirectoryObject -- the kernel mode API (http://msdn.microsoft.com/en-us/library/ms800966.aspx) indeed seems to be available as of XP only, but the user mode version should be available at least since NT 4. Anyway, I would not recommend using ZwOpenDirectoryObject.
Why should configuration files and registry keys fail on Vista? Of course, you have to get the security settings right -- but you would have to do that for your named events as well -- so there should not be a big difference here. Maybe you should tell us some more details about the nature of your processes -- do they all run within the same logon session or do they run as different users even? And is there some master process or who creates the events in the first place?
Frankly, I tend to find the Process Explorer idea not a very good one. Quite apart from the fact that you probably will not be able to accomplish it without using undocumented APIs and/or a device driver, I do not think a process should be spelunking around in the handle table of another process just to find out the names of some kernel objects. And, of course, the same security issues apply again.
Process Explorer is able to enumerate all the named events held by a specific process. You could go over the entire process list and do something similar, although I have no clue as to what API is used to get the list...