SHGetKnownFolderPath Vs. reading System folder path from Registry - windows

I am very new to Windows system programming. In my project I need to read the registered "ProgramFiles" location for 32/64-bit processes.
I have narrowed it down to two choices: either use SHGetKnownFolderPath or read the values for these folders from the system registry, but I have some security concerns. Can someone please compare these two methods in terms of security and reliability?

You don't read stuff from undocumented random places in the registry, because that's just an implementation detail of where Windows currently stores that data. It may easily happen that:
in some future version of Windows they'll decide that such data needs to be stored elsewhere;
the data you found is there only on some configurations (a particular IE version installed, the machine not joined to Active Directory, no folder redirection in place, ...) - but you can't know that, because there's no documentation that guarantees you anything.
The correct way to go is to use the documented interfaces that the OS provides, on which Microsoft explicitly makes promises of compatibility (they promise that a public function that works today - if used according to the documentation - will continue to work tomorrow).
tl;dr: use SHGetKnownFolderPath - or SHGetFolderPath if you want to remain compatible with Windows versions before Vista, which in general is a good thing, given that Windows XP still seems to have more market share than all OS X versions combined.
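For completeness, a minimal sketch of the SHGetKnownFolderPath route (error handling kept to a minimum). FOLDERID_ProgramFiles resolves according to the bitness of the calling process, while FOLDERID_ProgramFilesX86 always names the 32-bit folder:

    #include <windows.h>
    #include <shlobj.h>    // SHGetKnownFolderPath, KNOWNFOLDERID
    #include <stdio.h>
    #pragma comment(lib, "shell32.lib")
    #pragma comment(lib, "ole32.lib")

    int main()
    {
        PWSTR path = nullptr;
        // FOLDERID_ProgramFiles: "Program Files" as seen by this process's bitness.
        if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_ProgramFiles, 0, nullptr, &path)))
        {
            wprintf(L"Program Files: %ls\n", path);
            CoTaskMemFree(path);   // the shell allocates the string; the caller frees it
        }
        // FOLDERID_ProgramFilesX86: always the 32-bit folder, even in a 64-bit process.
        if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_ProgramFilesX86, 0, nullptr, &path)))
        {
            wprintf(L"Program Files (x86): %ls\n", path);
            CoTaskMemFree(path);
        }
        return 0;
    }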

Related

Alternative for Virtual file system (VFS) kernel extension on macOS M1

We have developed a kernel extension (KEXT) for a virtual file system (VFS) on macOS to integrate our software with external programs like Adobe InDesign or Microsoft Word. Our software and the KEXT are used by many of our customers.
However, KEXTs are deprecated and may be removed completely in future versions of macOS, particularly on Apple Silicon based computers. See e.g. Apple's announcement in its security guide:
"This is why developers are being strongly encouraged to adopt system extensions before kext support is removed from macOS for future Mac computers with Apple silicon"
Therefore we are currently investigating possible alternatives.
Apple suggests migrating to System Extensions instead of KEXTs. However, the only VFS-related API we found is to implement a File Provider based on an NSFileProviderReplicatedExtension.
Unfortunately that NSFileProviderReplicatedExtension has several flaws:
Files can either be in the cloud or downloaded. It is not possible to download/read only a portion of a file. This is a big performance problem for us, since we work with large images (> 1GB). The programs we integrate with typically only read a part of the image, e.g. the embedded preview. The API does not offer a way to access selected blocks of a file (random access file).
The File Provider learns about the file system content via enumerators. So everything that is inside a folder must be enumerated (listed) first. Otherwise it cannot be accessed. However, we cannot enumerate our VFS. Most of the content of our VFS is fully dynamic. It only exists when it is accessed by a client the first time. Such dynamic content also includes dynamic parameters like the client's locale or the size of a box where the image will be placed. Since we do not know those parameters in advance, we cannot enumerate the VFS's content in advance.
This means an NSFileProviderReplicatedExtension in its current state isn't a replacement for a "real" VFS and therefore cannot be used by us as a replacement for our current VFS KEXT.
My questions:
Will Apple allow kernel extensions also in future versions of (Apple Silicon/M1 based) operating systems? Or is there at least a clear deadline?
If not, what is Apple's officially suggested replacement for KEXT based VFS solutions?
Will the API of an NSFileProviderReplicatedExtension be improved to behave like a "real" file system so that above mentioned flaws will no longer be an issue?
Many thanks for any answers or comments!
Best regards,
Michael
Will Apple allow kernel extensions also in future versions of (Apple Silicon/M1 based) operating systems? Or is there at least a clear deadline?
Apple doesn't really give timelines, and they also occasionally break promises of support.
However, this sort of hard API deprecation and removal is usually done as part of a major release. You will typically get a deprecation notice at WWDC one year; users might start seeing deprecation warnings when the .0 of that OS release ships, or sometimes only in the .3 or .4 revision. Then, at the next WWDC, you'll typically be told that the API is blocked in the upcoming release, so by that point you should have implemented a replacement.
If not, what is Apple's officially suggested replacement for KEXT based VFS solutions?
As far as I'm aware, NSFileProviderReplicatedExtension is currently the only one.
Will the API of an NSFileProviderReplicatedExtension be improved to behave like a "real" file system so that above mentioned flaws will no longer be an issue?
Other than via beta SDKs, Apple generally doesn't pre-announce future APIs.
My advice:
Using Feedback Assistant (Radar), file issues for each of the File Provider shortcomings you are hitting.
File an "enhancement request" feedback issue with Apple for a "real" file system API to replace the VFS KPI.
If your VFS KEXT is critical to your business/product, I suggest additionally asking Apple's DTS via a TSI what they recommend for your situation. Reference the feedback IDs of the issues you filed; otherwise they will just recommend that you file issues.

What do we need installations for?

This is a conceptual question and I hope it fits into Stack Overflow's question-and-answer style. I wonder what the concept of installing applications is good for. In my naive understanding of operating systems, we do not need a registry, and to use an application it should be enough to copy the executable and its files onto your drive and launch it.
I am a Windows user but have also worked with Linux a bit, and I noticed that there are package managers there instead of installers. But even those do more than just copy files, I guess.
I do not think that all these installers exist only because the common user expects them out of steady habit. So what is the advantage of installers over developing applications that are designed to run out of a single folder which you simply copy over?
I would really appreciate it if someone could explain that concept.
Installing applications is a way to embed them in the OS. It's a kind of standard: you get procedures like install and uninstall (and even "change" under Windows) that behave the same way for all applications.
Countless times I've "installed" applications with a single shell script that came with them, and then had trouble removing such programs, having to hunt for individual files. If the programmer uses the OS's standard way of making an installable executable, that won't happen.
You can also easily view a list of the installed programs at any time.
Additionally, under Linux, if we're talking about a package manager, it is convenient for the user to be able to download and install a program by just typing its name.
Last but not least, some applications are required to be installed and recognized by the OS (for example services in Windows).

Windows: Getting the version of a running process

I want to get an overview of all the programs that are being used and how many versions of each piece of software are in use. I do not need to know the exact version number (though it would be nice), just to be able to say that two things are distinct versions (or builds).
Because I do not know anything about each program, I need this to be done in a generic way. How could this be done?
This is quite a general question, so I'll give you a general answer. You are going to need to do the following:
Enumerate all the processes by calling EnumProcesses().
For each process ID, OpenProcess() to obtain a process handle.
With each process handle call GetModuleFileNameEx() to obtain the process's main executable file name.
Finally call GetFileVersionInfo() and perhaps some of its friends to retrieve the information.
This will give you binary version information rather than marketing versions. For example, Windows XP is version 5.1, Windows Vista is 6.0, and Windows 7 is version 6.1. If you need marketing versions, that's probably not achievable in a general manner.
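Putting those four steps together, a minimal sketch (most error handling omitted; link against Psapi.lib and Version.lib, which the pragmas below handle on MSVC):

    #include <windows.h>
    #include <psapi.h>     // EnumProcesses, GetModuleFileNameEx
    #include <stdio.h>
    #include <stdlib.h>
    #pragma comment(lib, "psapi.lib")
    #pragma comment(lib, "version.lib")

    int main()
    {
        DWORD pids[1024], bytes;
        if (!EnumProcesses(pids, sizeof(pids), &bytes))
            return 1;

        for (DWORD i = 0; i < bytes / sizeof(DWORD); ++i)
        {
            HANDLE proc = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                                      FALSE, pids[i]);
            if (!proc) continue;  // access denied for system processes, etc.

            WCHAR exe[MAX_PATH];
            if (GetModuleFileNameExW(proc, NULL, exe, MAX_PATH))
            {
                // Read the VS_FIXEDFILEINFO block out of the version resource.
                DWORD handle = 0;
                DWORD size = GetFileVersionInfoSizeW(exe, &handle);
                if (size)
                {
                    BYTE* data = (BYTE*)malloc(size);
                    VS_FIXEDFILEINFO* ffi = NULL;
                    UINT len = 0;
                    if (GetFileVersionInfoW(exe, 0, size, data) &&
                        VerQueryValueW(data, L"\\", (LPVOID*)&ffi, &len) && ffi)
                    {
                        wprintf(L"%ls: %u.%u.%u.%u\n", exe,
                                HIWORD(ffi->dwFileVersionMS), LOWORD(ffi->dwFileVersionMS),
                                HIWORD(ffi->dwFileVersionLS), LOWORD(ffi->dwFileVersionLS));
                    }
                    free(data);
                }
            }
            CloseHandle(proc);
        }
        return 0;
    }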

What is the problem with DLLs and the Registry?

I was watching the WWDC 2009 Keynote and something someone said about Windows 7/Vista got me curious..
The speaker claimed that 7 was still a poor operating system because it still used the same technologies, such as DLLs and the registry. How accurate are his claims, and how differently is OS X doing it? Even OS X has dynamically loaded libraries, right? I guess the Registry point might have some weight...
Can anyone explain to me the differences in each OS's strategy?
I'm not trying to incite fanboys here or anything, I just want to know how both operating systems tackle problems in general.
Thanks,
kreb
Of course both operating systems have facilities for using DLLs (they're called dylibs or Frameworks on OS X, depending on how they're packaged). dylibs are very much like DLLs - they are dynamically linked libraries, and as such there may be multiple versions of them floating around. Frameworks, on the other hand, are really a directory structure. They contain dynamically linked libraries (potentially multiple versions of them), resources, headers, documentation, etc. The dynamic linker on OS X automatically handles choosing the correct library version from the framework for each executable. The system appears to work better than Windows' DLL management, which is, well, still quite a mess (of course, Windows' system is tied down by legacy issues that Apple dropped when it moved to OS X). To be fair, Unix has had a solution to this problem for a long time as well, using symbolic links to point a library name at the correct versioned implementation, allowing multiple versions to be installed side by side.
There is no OS X equivalent of the Windows registry. This is good and bad. The good side is that it's much harder to corrupt an entire OS X system with a registry screw up. OS X instead stores configuration in many separate files, usually one or more per application, user, whatever. These files are generally a plist (an XML schema representing dictionaries, arrays, and primitive types) formatted file. The bad side is that, by retaining this Unix-y heritage, OS X doesn't have the same über-admin tools that can churn through the registry and do all sorts of crazy things.
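For illustration, a minimal plist of the kind an application might keep under ~/Library/Preferences (file name and keys here are hypothetical):

    <?xml version="1.0" encoding="UTF-8"?>
    <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
      "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
    <plist version="1.0">
    <dict>
        <key>AutoSave</key>
        <true/>
        <key>RecentFiles</key>
        <array>
            <string>/Users/me/notes.txt</string>
        </array>
    </dict>
    </plist>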
DLLs
The major difference between OS X and Windows is that Windows historically tried to save space/memory by having everyone share code (i.e. you install one DLL, everyone can use it). Apple statically compiles (well, not really, but it may as well be) all of the non-system libraries into every application. This wastes disk space/memory, but makes app deployment way easier, with no versioning issues.
Registry
OS X does have a registry, they're just flat files called plists, instead of a magic component that's mostly like a filesystem except where it's not. Apple's approach makes it easy to migrate settings from one machine to another, whereas Windows' approach is faster in-memory, and allows apps to easily "watch" a key without taking a big perf hit (i.e. one app changes a key and the other instantly knows about it).
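As a concrete illustration of the "watch a key" point, a small sketch using the documented RegNotifyChangeKeyValue API (the key path here is just a hypothetical example):

    #include <windows.h>
    #include <stdio.h>
    #pragma comment(lib, "advapi32.lib")

    int main()
    {
        HKEY key;
        // Hypothetical example key; any key you can open with KEY_NOTIFY works.
        if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software\\MyApp", 0,
                          KEY_NOTIFY, &key) != ERROR_SUCCESS)
            return 1;

        // Block until a value under the key is set, changed, or deleted.
        if (RegNotifyChangeKeyValue(key, FALSE, REG_NOTIFY_CHANGE_LAST_SET,
                                    NULL, FALSE) == ERROR_SUCCESS)
            printf("A value under the key changed.\n");

        RegCloseKey(key);
        return 0;
    }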
In conclusion
The keynote presenter's full of it: 10.6 is mostly the same code as 10.5, which was mostly the same code as 10.4 et al., just like Win7 is mostly Vista, which is mostly Server '03, etc. There's far too much tested code in an operating system to throw it away every release, especially if you actually want your customers' apps to work.
DLLs are a poor variation on libraries, since they are unable to operate on their own; to use them, a further wrapper executable is called (automatically), which adds unneeded overhead and makes it much harder to tell which libraries are actually in use. Another, less important, flaw is that systems cannot truly share a library.
*nix systems avoid this by having libraries exist at the top level, running on their own or under a larger wrapper (like kde-init). The libraries may be shared by any application, meaning only a single copy of each library is required, and you may at any time kill a single library with ease if required.
The registry is a great idea, except for the fact that it is used for so much: almost anything you install will use the registry, and a corrupt registry can render your operating system almost completely useless until it's fixed.
This is avoided in *nix systems by having multiple different files for different content: drivers are referred to via Xorg's config file, installed applications are written to their own database, and keys or identification are often written into a directory rather than a single all-purpose file. This reduces the likelihood of a serious failure and means that at any time you can probably still repair the system. If Xorg becomes corrupt you just reconfigure it; if the installed-applications database becomes corrupt you can repair or rebuild it; and should an application's individual settings directory become corrupt, you need only reinstall one application (and most good commercial apps should have a way to repair this anyway).

Finding undocumented APIs in Windows

I was curious as to how one goes about finding undocumented APIs in Windows.
I know the risks involved in using them but this question is focused towards finding them and not whether to use them or not.
Use a tool to dump the export table from a shared library (for example, a .dll such as kernel32.dll). You'll see the named entry points and/or the ordinal entry points. Generally for Windows the named entry points are unmangled (extern "C"). You will most likely need to do some peeking at the assembly code and derive the parameters (types, number, order, calling convention, etc.) from the stack frame (if there is one) and register usage. If there is no stack frame it is a bit more difficult, but still doable. See the following links for references:
http://www.sf.org.cn/symbian/Tools/symbian_18245.html
http://msdn.microsoft.com/en-us/library/31d242h4.aspx
Check out tools such as dumpbin (for example, dumpbin /exports kernel32.dll) for investigating export sections.
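If you'd rather do the same thing programmatically, here's a rough sketch that walks a module's export directory and prints its named entry points (it assumes a well-formed DLL and does no validation of the PE headers):

    #include <windows.h>
    #include <stdio.h>

    int main()
    {
        // Map the DLL as an image without running DllMain, so RVAs stay valid.
        HMODULE mod = LoadLibraryExW(L"kernel32.dll", NULL,
                                     DONT_RESOLVE_DLL_REFERENCES);
        if (!mod) return 1;

        BYTE* base = (BYTE*)mod;
        IMAGE_DOS_HEADER* dos = (IMAGE_DOS_HEADER*)base;
        IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)(base + dos->e_lfanew);
        IMAGE_DATA_DIRECTORY dir =
            nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_EXPORT];
        IMAGE_EXPORT_DIRECTORY* exp =
            (IMAGE_EXPORT_DIRECTORY*)(base + dir.VirtualAddress);

        // AddressOfNames is an array of RVAs pointing to the exported names.
        DWORD* names = (DWORD*)(base + exp->AddressOfNames);
        for (DWORD i = 0; i < exp->NumberOfNames; ++i)
            printf("%s\n", (const char*)(base + names[i]));

        FreeLibrary(mod);
        return 0;
    }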
There are also sites and books out there that try to keep an updated list of undocumented windows APIs:
The Undocumented Functions
A Primer of the Windows Architecture
How To Find Undocumented Constants Used by Windows API Functions
Undocumented Windows
Windows API
Edit:
These same principles work on a multitude of operating systems; however, you will need to replace the tool you use to dump the export table. For example, on Linux you could use nm to dump an object file and list its exports section (among other things). You could also use gdb to set breakpoints and step through the assembly code of an entry point to determine what the arguments should be.
IDA Pro is your best bet here, but please please double please don't actually use undocumented APIs for anything, ever.
They're internal because they change; they can (and do) even change as a result of a Hotfix, so you're not even guaranteed your undocumented API will work for the specific OS version and Service Pack level you wrote it for. If you ship a product like that, you're living on borrowed time.
Everybody here so far is missing some substantial functionality that comprises hugely undocumented portions of the Windows OS: RPC. RPC operations (think rpcrt4.dll, lsass.exe, csrss.exe, etc...) occur very frequently across all subsystems, via LPC ports or other interfaces. Their functionality is buried in the arcane incantations of various type/sub-type/struct-typedefs etc., which are substantially more difficult to debug, due to their asynchronous nature or the fact that they are destined for processes which, if you were to debug them via single-stepping or what have you, would lock up the entire system by blocking keyboard or other I/O from being passed along ;)
ReactOS is probably the most expedient way to investigate undocumented APIs. They have a fairly mature kernel and other executive components built up. IDA is fairly time-intensive, and it's unlikely you will find anything the ReactOS people have not already.
Here's a blurb from the linked page:
ReactOS® is a free, modern operating system based on the design of Windows® XP/2003. Written completely from scratch, it aims to follow the Windows® architecture designed by Microsoft from the hardware level right through to the application level. This is not a Linux based system, and shares none of the unix architecture.
The main goal of the ReactOS project is to provide an operating system which is binary compatible with Windows. This will allow your Windows applications and drivers to run as they would on your Windows system. Additionally, the look and feel of the Windows operating system is used, such that people accustomed to the familiar user interface of Windows® would find using ReactOS straightforward. The ultimate goal of ReactOS is to allow you to remove Windows® and install ReactOS without the end user noticing the change.
When I am investigating some rarely seen Windows construct, ReactOS is often the only credible reference.
Look at the system DLLs and what functions they export. Every API function, whether documented or not, is exported by one of them (user, kernel, ...).
For user-mode APIs you can open Kernel32.dll, User32.dll, Gdi32.dll, and especially ntdll.dll in Dependency Walker and find all the exported APIs. But you will not have the documentation, of course.
Just found a good article on Native APIs by Mark Russinovich.
