Information about CentralProcessor FeatureSet in the Windows Registry?

I'm using some basic processor detection in an installer, to determine which version of a software package should be available to the user.
Currently I'm going through WMI to get some basic information, but I've found that this regularly gives unreliable results for the CPU features (CPUID is apparently poorly supported on a good number of mobile processors).
To avoid this kind of issue and to speed things up, I've been looking at getting the processor capabilities from the Windows registry instead -- after all, the information should all be available there, under HKEY_LOCAL_MACHINE\HARDWARE\DESCRIPTION\System\CentralProcessor\{n}.
Reading a key from the registry keeps the installer code much simpler: it doesn't have to call out to WMI (which is slow and can fail, since I'd have to shell out to a language like VBScript that has WMI access, whereas registry operations are supported as standard in my development scripting language), and it should avoid the incorrect CPU information I've been getting through WMI.
Sure enough, there is a wealth of information there, but the most important part, the "FeatureSet" value, which I assume is a DWORD containing flags about available processor features such as SIMD instruction sets, is not documented anywhere. I've spent a good while searching the net trying to find any sort of documentation about this registry value, to no avail.
Does anyone have a document outlining or describing the bits in that registry value?
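For reference, reading the raw value itself is the easy part; here is a minimal C# sketch (not the asker's installer scripting language, and the "Identifier" value is just another example entry under the same key). Interpreting the individual bits of FeatureSet is exactly the undocumented part being asked about.

using System;
using Microsoft.Win32;

class FeatureSetDump
{
    static void Main()
    {
        // The bit meanings of FeatureSet are undocumented; this only shows how to read the raw DWORD.
        using (RegistryKey cpu0 = Registry.LocalMachine.OpenSubKey(
            @"HARDWARE\DESCRIPTION\System\CentralProcessor\0"))
        {
            if (cpu0 == null)
                return;
            int featureSet = (int)cpu0.GetValue("FeatureSet", 0);   // REG_DWORD comes back as a boxed int
            Console.WriteLine("FeatureSet = 0x{0:X8}", featureSet);
            Console.WriteLine("Identifier = {0}", cpu0.GetValue("Identifier"));
        }
    }
}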

Related

What is a compressed GUID and why is it used?

I know that in Windows, the GUID is used by the Windows Installer to check for already installed products under the registry key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\ and other product-relevant information is stored using the compressed GUID in HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Installer\UserData\.
Yet a Google search to figure out what the compressed GUID actually is, and why both forms are used for saving product-specific data in the registry, only revealed algorithms for converting from one form to the other.
Apparently the compressed GUID is produced merely by rearranging the characters of the GUID in a specific way, which leaves me even more confused about why this is called the compressed form and why it is used this way.
Additionally, some sources appear to refer to the GUID as the product code of the software product, while others specifically use the term product code as a synonym for the compressed GUID.
I do not have much knowledge of the inner workings of Windows and its installers, but hope that someone can enlighten me with the information I was unable to find. I apologize in advance for my mediocre English.
Not sure what you are doing, or why this is a problem for your scenario. However, the GUIDs you mention - the rearranged GUIDs with the braces and dashes removed - are actually referred to as Packed GUIDs. Then there are also Compressed GUIDs that are just 23 characters long that are primarily used to create Darwin descriptors - which are combinations of the product code GUID, a feature name, and a component code GUID. They are used for MSI's advertisement features. This is according to Bob Baker's book "Getting Started with InstallShield Developer and Windows Installer Setups".
As far as I recall the packed GUIDs are apparently used to make registry searches more efficient. I am unfamiliar with the exact technical details involved. Perhaps Bob or Rob of WiX can elaborate.
I have to run, so this is not really an answer; I will look at it again later. Please elaborate your question with more details on what the problem actually is. As a summary, the collective comments seem to suggest that this GUID concept is about registry space saving, search efficiency, and obfuscation.
My advice (if I understand correctly): do not attempt to read packed or compressed GUIDs from the registry directly - rather go via the MSI API (COM / Win32) - which should have the features you need to do almost anything with MSI.
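For example, enumerating installed products through the Win32 MSI API instead of reading the registry directly could look something like this (a hedged C# P/Invoke sketch; MsiEnumProducts and MsiGetProductInfo are documented msi.dll exports, everything else here is illustrative):

using System;
using System.Runtime.InteropServices;
using System.Text;

class MsiEnumSketch
{
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiEnumProducts(uint iProductIndex, StringBuilder lpProductBuf);

    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiGetProductInfo(string szProduct, string szProperty,
                                         StringBuilder lpValueBuf, ref uint pcchValueBuf);

    static void Main()
    {
        var productCode = new StringBuilder(39);   // room for a "{...}" GUID plus the terminating null
        for (uint i = 0; MsiEnumProducts(i, productCode) == 0; i++)   // 0 == ERROR_SUCCESS
        {
            uint len = 256;
            var name = new StringBuilder((int)len);
            MsiGetProductInfo(productCode.ToString(), "InstalledProductName", name, ref len);
            Console.WriteLine("{0}  {1}", productCode, name);
        }
    }
}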
Some Links:
MSIEXEC -Embedding
MSI API Links (quite similar links):
Is MsiOpenProduct the correct way to read properties from an installed product?
Wix upgrade goes into maintenance mode and never does upgrade
how to find out which products are installed - newer product are already installed MSI windows
Is there an alternative to GUID when using msiexec to uninstall an application?
The packed GUID is reportedly used to save space in the registry. It is used for component IDs, so once you start getting a million components on the system I assume they were worried about the extra space usage. It's also possible it was used to discourage people from trawling the registry for Windows Installer items instead of using the APIs. It's an undocumented implementation detail, and if too many people got into the habit of using the registry instead of the APIs there would be issues if that implementation changed. So treat this as interesting information, not something to use in an application.
Darwin Descriptors are different - they are just an encoding of product code, component id, and feature name. They occur in various places where that combination of data is useful for loading COM servers, starting programs etc. Again, it's undocumented, but if you wanted to untangle them you could LoadLibrary on msi.dll and find and call MsiDecomposeDescriptorW with the right parameters.
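If you did want to untangle one, a rough C# P/Invoke sketch of that MsiDecomposeDescriptorW call could look like the following (buffer sizes assume GUID-length strings and a feature name of up to 38 characters; this is illustrative, not a supported approach):

using System;
using System.Runtime.InteropServices;
using System.Text;

class DarwinDescriptorSketch
{
    [DllImport("msi.dll", CharSet = CharSet.Unicode)]
    static extern uint MsiDecomposeDescriptorW(string szDescriptor,
        StringBuilder szProductCode, StringBuilder szFeatureId,
        StringBuilder szComponentCode, out uint pcchArgsOffset);

    static void Decompose(string descriptor)
    {
        var product = new StringBuilder(39);     // "{...}" GUID plus terminating null
        var feature = new StringBuilder(39);
        var component = new StringBuilder(39);
        uint argsOffset;
        uint rc = MsiDecomposeDescriptorW(descriptor, product, feature, component, out argsOffset);
        if (rc == 0)   // ERROR_SUCCESS
            Console.WriteLine("Product={0} Feature={1} Component={2}", product, feature, component);
        else
            Console.WriteLine("MsiDecomposeDescriptorW failed with error {0}", rc);
    }
}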

Make windbg or kd attached to local kernel behave like system wide strace

I am running Windows 7, on which I want to do kernel debugging, and I do not want to mess with the boot loader. So I downloaded LiveKd as suggested here, ran it, and it seems to be working. If I understand correctly, it is some kind of read-only debugging. Here it is mentioned that it is very limited and even breakpoints cannot be used. I would like to ask whether it is possible in this mode to periodically dump all the instructions that are being executed, or basically all events happening on the current OS. I would like to have something like a system-wide strace (Linux users will know it) and do some statistical analysis on the results. I suppose it depends on several factors, such as having debug symbols installed in order to be able to resolve addresses, etc.
I'm not sure a debugger is the best tool for tracing live system calls. As you've mentioned, a LiveKd session is quite limited and you are not allowed to place breakpoints in it (otherwise you would hang your own system). However, you can still create memory dumps using the .dump command (check the windbg help: .hh .dump). Keep in mind, though, that getting a full dump (/f) of a running system might take a lot of time.
Moving back to the subject of your question: with the "dump approach" you will miss many system calls, as you will only have snapshots of the system at given points in time. So if you are looking for something similar to Linux strace, I would recommend checking out these tools:
Process Monitor (procmon) - a tool which shows you all I/O requests in the system, as well as registry operations and process activity events
Windows Performance Toolkit - contains tools for collecting (WPR) and analysing (WPA) system and application trace events. There may be a lot of events, and it's really important to filter them according to your needs. ETW (Event Tracing for Windows) is a huge subject and you will probably need to read some tutorials or books before you can use it effectively (but it's really worth it!). A minimal code sketch for consuming ETW events follows at the end of this answer.
API Monitor - one of many tracing applications (I consider it one of the best) - this tool allows you to trace API calls in any of the running processes. It has a nice interface and even allows you to place breakpoints on the functions you'd like to intercept.
There are many other tools which might be used for tracing on Windows, but I would start with the ones I listed above. You may also check a great book on this subject: Inside Windows Debugging. Good luck! :)
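To give a flavour of the ETW route mentioned in the list above, here is a minimal C# sketch of a real-time kernel trace session. It assumes the Microsoft.Diagnostics.Tracing.TraceEvent NuGet package (a recent version exposing TraceEventSession.Source) and an elevated prompt; the session name is arbitrary.

using System;
using Microsoft.Diagnostics.Tracing.Parsers;
using Microsoft.Diagnostics.Tracing.Session;

class KernelTraceSketch
{
    static void Main()
    {
        using (var session = new TraceEventSession("MyKernelTraceSession"))
        {
            // Ask the kernel provider for process and image-load events only.
            session.EnableKernelProvider(
                KernelTraceEventParser.Keywords.Process |
                KernelTraceEventParser.Keywords.ImageLoad);

            session.Source.Kernel.ProcessStart += data =>
                Console.WriteLine("Process start: {0} (PID {1})", data.ImageFileName, data.ProcessID);
            session.Source.Kernel.ImageLoad += data =>
                Console.WriteLine("Image load: {0} in PID {1}", data.FileName, data.ProcessID);

            // Ctrl+C stops the session so Process() returns and the program exits cleanly.
            Console.CancelKeyPress += (s, e) => { e.Cancel = true; session.Dispose(); };
            session.Source.Process();   // blocks and dispatches events until the session is closed
        }
    }
}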

Performance of reading a registry key?

I'm wondering how long it takes (in milliseconds) to read a registry value from the Windows registry through standard C# libraries. In this case, I'm reading in some proxy settings.
What order of magnitude value should I expect?
Are there any good benchmark data available?
I'm running WS2k8 R2 amd64. Bonus points: how much does the OS SKU/version affect this measure?
using (RegistryKey registryKey = Registry.CurrentUser.OpenSubKey(@"Software\Copium"))
{
    return (string)registryKey.GetValue("BinDir");
}
I can't quote numbers as I don't know them, but having just read the thirty-odd pages about the registry in the Windows Internals 5 book, the following noteworthy things (which I didn't know before) became clear.
The registry is transactional and has fail-safes to prevent it from being corrupted. This can affect performance, but since the transaction level is read committed, reads shouldn't be blocked by writes, so they should be performant.
The registry is cached in memory (well, frequently used values anyway), so if you access a set of keys often, performance should remain stable after the first hit.
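If you just want a number for your own machine, a quick micro-benchmark is easy to put together (a rough C# sketch reusing the key and value names from the question; warm-cache timings will mostly exercise the in-memory path described above):

using System;
using System.Diagnostics;
using Microsoft.Win32;

class RegistryReadTiming
{
    static void Main()
    {
        const int iterations = 10000;
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            // Open, read, and close on every iteration, as the snippet in the question does.
            using (RegistryKey key = Registry.CurrentUser.OpenSubKey(@"Software\Copium"))
            {
                object value = (key != null) ? key.GetValue("BinDir") : null;
            }
        }
        sw.Stop();
        Console.WriteLine("Average per open+read+close: {0:F4} ms",
                          sw.Elapsed.TotalMilliseconds / iterations);
    }
}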
This blog post from Raymond Chen should prove helpful:
The performance cost of reading a registry key
I would add that in general it is not recommended to store settings in the registry for C# apps; use persisted storage, a config file, or something of that nature instead. There are problems related to permissions when you deal with the registry. Reading is often not an issue, but if you are going to persist things it gets hairy, especially with UAC and newer OSes that shadow-copy things.
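For completeness, reading the same setting from an application config file instead of the registry is a one-liner (a sketch assuming an App.config appSettings entry named BinDir and a project reference to System.Configuration; the names are just carried over from the question):

using System.Configuration;   // requires a reference to System.Configuration.dll

class ConfigSettingsExample
{
    static string GetBinDir()
    {
        // App.config:  <appSettings><add key="BinDir" value="..." /></appSettings>
        return ConfigurationManager.AppSettings["BinDir"];
    }
}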

Windows XP prefetcher registry values

I have been investigating the Windows prefetching system, hoping to find a way to speed up the load time of an application I am working on. I found the following link, where a developer describes modifications to the prefetcher registry values:
http://dotnet.dzone.com/news/improving-cold-startup
I have made similar modifications locally and found that they do provide faster application loading times. My problem is that I cannot find any documentation on the registry values that were changed and why the new values are better than the old ones.
So my question in short is, does anybody have any further information on the prefetcher registry values given below:
VideoInitTime
EnablePrefetcher
AppLaunchMaxNumPages
AppLaunchMaxNumSections
AppLaunchTimerPeriod
BootMaxNumPages
BootMaxNumSections
BootTimerPeriod
MaxNumActiveTraces
MaxNumSavedTraces
RootDirPath
HostingAppList
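For reference, a small C# sketch that dumps whatever is currently set under the prefetcher key (the HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters path is my assumption of where these values live on XP; it is not stated in the question):

using System;
using Microsoft.Win32;

class PrefetchParametersDump
{
    static void Main()
    {
        const string path =
            @"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management\PrefetchParameters";
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
        {
            if (key == null)
                return;
            foreach (string name in key.GetValueNames())
                Console.WriteLine("{0} = {1}", name, key.GetValue(name));
        }
    }
}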
You don't say what profiling or other changes you've done, and when people dive in with off-the-wall solutions to perf problems but don't describe how they arrived at the need for them, I'm always a bit doubtful.
Where is your app spending its start-up time? How did you measure that? Can you fix an underlying '300 dlls' problem of the type described in that article?
Messing with OS prefetch policy may be improving your application at the expense of everyone else, which may be the right thing to do (on a single-use industrial control system or something like that), but may be completely antisocial.
"Load less code" is often a good general way to improve application startup time - do you have some very expensive config-file storage mechanism, for example? (XmlSerializer was notorious for this at one point.)

Virtualization of Legacy API and co-existence with more modern API?

I don't mean for this question to be flame bait, but I'll be using Microsoft and their Win32 API as an example of a legacy API.
What I am wondering here is that Microsoft spends a lot of money and energy maintaining their legacy API, including all of the glitches/bugs/workarounds that are needed to keep the API behaving the same. I'm aware that in Windows 7 they provide a way for the customer to run their application in a "Windows XP" VM, which would be one way for them to start cleaning up the Win32 API, because they could then push all of those applications into the "Windows XP" VM.
So what I am wondering is: is it possible to virtualize a legacy API in such a way that a customer/program can still access and use it, yet at the same time take advantage of the newer APIs? Because as far as I understand it, if an application is run in the "Windows XP" VM, it won't be able to access any of the newer APIs/features of Windows 7.
The thing that puzzles me about this question when it comes up is that Windows has been doing this since NT came out in the mid-nineties. This is how NT runs DOS and Win16 programs, and how it always has. The NTVDM virtualization layer runs 16-bit apps under Win32 with very little special support from the core OS. This is just one example - another is WINE, which as I understand it does a pretty reasonable job of running Windows apps on top of an API set which is very different from that of Windows. So it is definitely possible.
The more pertinent question would be why Microsoft would consider it. In order to think it is necessary, you have to believe two things: 1) there is something better to replace the Win32 API with, and 2) maintaining the Win32 API is a burden.
Both of these are questionable. In the case of kernel duties, such as accessing hardware, synchronization, and managing threads, processes, and memory, the Win32 API does a pretty good job and is ultimately quite close to what the kernel really does. If you think there is a better API, then that must mean there is also a better kernel. I personally don't think that NT needs replacing right now. For graphics and windowing, admittedly gdi32 is a bit long in the tooth. But Microsoft solved that problem by building WPF right alongside it. This then brings in the burden question. Sure, there are two APIs to maintain, but if you virtualized GDI on top of WPF you'd still have to maintain both anyway, so there is no benefit there. The advantage of running both in parallel is that GDI already exists and is already tested. All you have to do is fix the occasional bug, whereas a new virtualization layer would have to be written and tested all over again, which takes time away from making WPF better.
In terms of maintaining back compat, that isn't as much of a burden as it sounds. It is mainly a test question - you have to test that the API behaviour doesn't change, but again - those tests have already been written, so it isn't really any extra work.
So, to answer a question with a question, why would they bother?
This is an interesting question, at least to me, here are some of my thoughts.
Your understanding is correct: an application running in the XP VM only has access to the Win32 APIs provided by XP in the VM. One of the ways I have seen Microsoft approach enhancing specific APIs is to create new functions with the enhanced/fixed functionality and name the new function by appending Ex (or even ExEx) to the original name, for example:
GetVersion
GetVersionEx
For functions that accept pointers to structures, the structures are 'versioned' by using the size of the structure to determine the functionality required: older code passes the previous, smaller structure while newer code passes the newer, larger structure, and the API behaves accordingly.
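As a rough illustration of that size-based versioning (a C# P/Invoke sketch of GetVersionEx, not anything specific to the question's scenario): older callers set dwOSVersionInfoSize to the size of the original OSVERSIONINFO, newer callers set it to the size of OSVERSIONINFOEX, and the same entry point fills in as much as the declared size allows.

using System;
using System.Runtime.InteropServices;

class VersionedStructExample
{
    [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
    struct OSVERSIONINFOEXW
    {
        public uint dwOSVersionInfoSize;   // the API dispatches on this size field
        public uint dwMajorVersion;
        public uint dwMinorVersion;
        public uint dwBuildNumber;
        public uint dwPlatformId;
        [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 128)]
        public string szCSDVersion;
        public ushort wServicePackMajor;
        public ushort wServicePackMinor;
        public ushort wSuiteMask;
        public byte wProductType;
        public byte wReserved;
    }

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    static extern bool GetVersionExW(ref OSVERSIONINFOEXW lpVersionInfo);

    static void Main()
    {
        var info = new OSVERSIONINFOEXW();
        // Declaring the extended size tells the API to fill in the service-pack fields too.
        info.dwOSVersionInfoSize = (uint)Marshal.SizeOf(typeof(OSVERSIONINFOEXW));
        if (GetVersionExW(ref info))
            Console.WriteLine("{0}.{1} SP{2}", info.dwMajorVersion,
                              info.dwMinorVersion, info.wServicePackMajor);
    }
}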
I guess the problem has become that it is no longer just a matter of differences in how an API works; the changes are more integral to the functioning of the operating system, and the internal structures have changed significantly enough that arguably badly written code is effectively broken.
As to your actual question, I guess it would be quite tough. Even if one thought to let the OS adjust how it executes code based on a target OS version in the PE header of the executable, what would happen if a newer DLL that targeted the latest OS were loaded into the process - how should the OS handle that while the code is executing? IMHO this would be very challenging, an approach fraught with pitfalls that would ultimately fail.
Of course, those are just my raw thoughts on the topic, so I might be 100% wrong and there may be some simple approach that just did not come to mind.

Resources