Why can't running exes and loaded dlls be deleted on Windows? - windows

I mean, what's the point? They're in system memory anyway.
I couldn't find any "official" docs that explain why Windows protects loaded objects (exe, dll and even ocx).
I'm guessing:
Intended measure for security matter or against human error
File system limitation
On Unix we can easily delete any file unless it's locked. In my opinion this only hinders UX. Google "how to delete dll" if you need proof. Many people have suffered from this, and I'm one of them.
Any words that Microsoft mention about this?
Any way to disable this "protection"? (probably isn't and never will be because Windows!)

They're in system memory anyway.
No, they're not. Individual pages are loaded on demand, and discarded from RAM when the system decides that they've been unused for a while and the RAM could be put to better use for another process (or another page in this process).
Which means that, effectively, the EXE file is open for as long as the process is running, and the DLL file is open until/unless the process unloads the DLL, in both cases so pages can be loaded/reloaded as needed.
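You can see this from inside a running process: opening its own EXE for write access fails with a sharing violation, while opening it for read succeeds. A minimal C++ sketch (error handling trimmed, assuming the Win32 SDK):

// Try to open our own EXE for writing; the image mapping makes this fail.
#include <windows.h>
#include <stdio.h>

int main()
{
    char path[MAX_PATH];
    GetModuleFileNameA(NULL, path, MAX_PATH);   // full path of the running EXE

    HANDLE h = CreateFileA(path, GENERIC_WRITE, 0, NULL,
                           OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        printf("write open failed, error %lu (32 = ERROR_SHARING_VIOLATION)\n", GetLastError());
    else
        CloseHandle(h);

    // Reading is fine; the loader's image mapping still allows shared reads.
    h = CreateFileA(path, GENERIC_READ, FILE_SHARE_READ, NULL,
                    OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h != INVALID_HANDLE_VALUE)
    {
        printf("read open succeeded\n");
        CloseHandle(h);
    }
    return 0;
}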

Related

Create file or registry key without calling NTDLL.DLL

I know that ntdll is always present in the running process but is there a way (not necessarily supported/stable/guaranteed to work) to create a file/key without ever invoking ntdll functions?
NTDLL is at the bottom of the user-mode hierarchy; some of its functions switch to kernel mode to perform their tasks. If you want to duplicate its code then I suppose there is nothing stopping you from decompiling NtCreateFile to figure out how it works. Keep in mind that on 32-bit Windows there are 3 different instructions used to enter kernel mode (depending on the CPU type), the exact mechanism and the location of the transition code change between versions, and the system call ids change between versions (and even service packs). You can find a list of system call ids here.
I assume you are doing this to avoid people hooking your calls? Detecting your calls? Either way, I can't recommend that you try to do this. Having to test on a huge set of different Windows versions is unmanageable and your software might break on a simple Windows update at any point.
You could create a custom kernel driver that does the work for you but then you are on the hook for getting all the security correct. At least you would have documented functions to call in the kernel.
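For what it's worth, one common (fragile, completely unsupported) trick is to read the system call id out of ntdll's own stub at run time rather than hard-coding it per version. The sketch below assumes the usual unhooked x64 stub layout (mov r10,rcx / mov eax,imm32 / syscall) and is purely illustrative - it only extracts the id, it doesn't issue the call:

// Peek at the NtCreateFile stub in ntdll to recover this build's syscall id (x64 only).
// Undocumented; can break on any Windows update or if the stub is hooked.
#include <windows.h>
#include <stdio.h>
#include <string.h>

int main()
{
    HMODULE ntdll = GetModuleHandleW(L"ntdll.dll");
    const BYTE* stub = (const BYTE*)GetProcAddress(ntdll, "NtCreateFile");

    // Expected bytes: 4C 8B D1 (mov r10,rcx)  B8 xx xx xx xx (mov eax, imm32)
    if (stub && stub[0] == 0x4C && stub[1] == 0x8B && stub[2] == 0xD1 && stub[3] == 0xB8)
    {
        DWORD id;
        memcpy(&id, stub + 4, sizeof(id));
        printf("NtCreateFile syscall id on this build: 0x%lX\n", (unsigned long)id);
    }
    else
    {
        printf("unexpected stub layout (hooked, or not x64?)\n");
    }
    return 0;
}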
Technically, the registry is stored in %WINDIR%\System32\config / %WINDIR%\SysWOW64\config, except for your own user's registry, which is stored in your own profile, in %USERPROFILE%\NTUSER.DAT.
And now, the problems...
You don't normally even have read access to this folder, and this is true even from an elevated process. You'll need to change the permissions (and mess them up a lot...) simply to read it.
Even for your own registry, you can't open the binary file - "Sharing violation"... The same goes for the system/local machine hives... In fact, you can't open ANY registry file for the current machine/session. You would need to shut down your Windows and mount its system drive in another machine/OS to be able to open - and maybe edit - registry files.
The real registry isn't a simple file like the .reg files. It's a database (you can look here for some elements on its structure). Even with full access to the binary files, it won't be fun to add something inside "from scratch", without any software support.
So, it's technically possible - after all, Windows does it, right? But I doubt that it can be done in a reasonable amount of time, and I simply can't see any benefit from doing that since, as you said, ntdll is ALWAYS present, loaded and available to be used.
If the purpose is to hack the current machine and/or bypass some lack of privileges, it's a hopeless approach, since you'll need even more privileges to do it - like being able to open the case and extract the system drive, or being able to boot another operating system on the same machine... And if that's possible, there are already tools to access the offline Windows installation, found on a well-known "Boot CD", so there is still no need to write to the registry without any Windows support.

DLLs reloaded to their preferred address

On Windows Server 2003, my application has started taking a long time to load on a fresh install. Suspecting that the DLLs are not loading at their preferred addresses and that this is taking time (the application has over 100 DLLs, third parties included), I ran the Sysinternals ListDLLs utility, asking it to flag every DLL that has been relocated. Oddly enough, for most of the DLLs in the list I get something like this:
Base Size Path
### Relocated from base of 0x44e90000:
0x44e90000 0x39000 validation.dll
That is: they are flagged as relocated (and the load time definitely seems to support that theory) but their load address remains the preferred address.
Some third party DLLs seem to be immune from this, but as a whole this happens to ~90% of the DLLs loaded by the application.
On Windows 7, it would seem the only flagged DLLs are ones that actually move, and loading time is (as expected) significantly faster.
What is causing this? How can I stop it?
Edited: Since it sounds (in theory) like the effects of ASLR, I checked and while the OS DLLs are indeed ASLR-enabled, ours are not. And even those are relocated in place, and therefore not taking up the address for any of the other DLLs.
This is very common, setting the linker's /BASE option is often overlooked and maintaining it when DLLs grow is an unpleasant maintenance task. This does not tend to repeat well between operating system versions, they'll load different DLLs ahead of yours and that can force relocation on one but not the other. Also a single relocation can cause a train of forced relocations on all subsequent DLLs.
Having this notably affect loading time is a bit remote on modern machines. The relocation itself is very fast, just a memory operation. You do pay for having the memory used by the relocated DLL getting committed. That's required because the original DLL file is no longer suitable for reloading the code when it gets paged out; the relocated pages are now backed by the paging file. If the paging file needs to grow to accommodate the commit size then that costs time. That's not common.
The far more common issue in loading time is the speed of the disk drive. An issue when you have a lot of DLLs, they need to be located on disk on a cold start. With 100 DLLs, that can easily cost 5 seconds. You should suspect a cold start problem when you don't see the delay when you terminate the program and start it again. That's a warm start, the DLLs are already present in the file system cache so don't have to be found again. Solving a cold start problem requires better hardware, SSDs are nice. Or the machine learning your usage pattern so SuperFetch will pre-fetch the DLLs for you before you start the program.
Anyhoo, if you do suspect a rebasing problem then you'll need to create your own memory map to find good base addresses that don't force relocation. You need a good starting point, knowing the load order and sizes of the DLLs. You get that from, say, the VS debugger. The Output window shows the load order, the Debug + Windows + Modules window shows the DLL sizes. The linker supports specifying a .txt file for the base addresses in the /BASE option, the best way to do this so you don't constantly have to tinker with individual /BASE values while your code keeps growing.
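If I remember the /BASE:@file,key syntax correctly (check the linker documentation to be sure), the text file has one line per DLL with a key, a base address and a maximum size, and each DLL project then points at its own key. The names and sizes below are made up for illustration:

basefile.txt:
validation  0x44e90000  0x00040000
reporting   0x44ed0000  0x00100000

and in the validation.dll project's linker options:
/BASE:@basefile.txt,validation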

Does Windows XP have an equivalent to VAX/VMS Installed Shared Images?

Back in the good old/bad old days when I developed on VAX/VMS it had a feature called 'Installed Shared Images' whereby if one expected one's executable program would be run by many users concurrently one could invoke the INSTALL utility thus:
$ INSTALL
INSTALL> ADD ONES_PROGRAM.EXE/SHARE
INSTALL> EXIT
The /SHARE flag had the effect of separating out the code from the data so that concurrent users of ONES_PROGRAM.EXE would all share the code (on a read-only basis of course) but each would have their own copy of the data (on a read-write basis). This technique/feature saved Mbytes of memory (which was necessary in those days) as only ONE copy of the program's code ever needed to be resident in VAX memory irrespective of the number of concurrent users.
Does Windows XP have something similar? I can't figure out if the Control Panel's 'Add Programs/Features' is the equivalent (I think it is, but I'm not sure)
Many thanks for any info
Richard
p.s. INSTALL would also share Libraries as well as Programs in case you were curious
The Windows virtual memory manager will do this automatically for you. So long as the module can be loaded at the same address in each process, the physical memory for the code will be shared between each process that loads that module. That is true for all modules, libraries as well as executables.
This is achieved by the linker marking code segments as shareable and data segments as not shareable.
The bottom line is that you do not have to do anything explicit to make this happen.
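You can see the flip side of this from the C++ side: code sections are shared by default, and you only need to do something explicit if you want a data section shared across processes. A small sketch using a made-up section name:

// A DLL data section explicitly marked shared across all processes that load the DLL.
// Code (.text) is already shared; this is only needed for data.
#pragma data_seg(".shared")
volatile long g_usage_count = 0;   // must be initialized so it lands in the named section
#pragma data_seg()
#pragma comment(linker, "/SECTION:.shared,RWS")   // Read, Write, Shared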

Is DLL loaded in kernel mode or user mode?

I was asked such a question in an interview:
In windows, suppose there is an exe which depends on some dlls, when you start
the exe, and then the dependent dlls will be loaded, are these dlls
loaded in kernel mode or user mode?
I am not quite sure about the question, not to mention the answer - could you help to explain?
Thanks.
I'm not an expert on how Windows works internally, but from what I know the correct answer is user mode, simply because only code belonging to the operating system itself is admitted into kernel space http://en.wikibooks.org/wiki/Windows_Programming/User_Mode_vs_Kernel_Mode
Basically, if it's not an OS process, it's going to be allocated in user space.
The question is very imprecise/ambiguous. "In Windows" suggests something but isn't clear what. Likely the interviewer was referring to the Win32 subsystem - i.e. the part of Windows that you usually get to see as an end-user. The last part of the question is even more ambiguous.
Now, while process and section objects (loaded PE images such as .exe, .dll and .sys; in MSDN the related concept is referred to as memory-mapped files, MMF) are indeed kernel objects and require some assistance from the underlying executive (and memory manager etc.), the respective code in the DLL (including that in DllMain) will behave exactly the same as any other user mode code when called from a user mode process. That is, each thread that is running code from the DLL will eventually transition to kernel mode to make use of OS services (opening files, loading PE files, creating events etc.) or do some stuff in user mode whenever that is sufficient.
Perhaps the interviewer was even interested in the memory ranges that are sometimes referred to as "kernel space" and "user space", traditionally at the 2 GB boundary for 32-bit. And yes, DLLs usually end up below the 2 GB boundary, i.e. in "user space", while other shared memory (memory mapped files, MMF) usually ends up above that boundary.
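To make that concrete, a process can just print the base addresses of a couple of modules that every process has loaded and see that they sit in the user-mode range (a quick sketch; the exact addresses will of course vary, especially with ASLR):

// Print where ntdll and kernel32 are mapped in this process.
// An HMODULE is simply the base address of the mapped image.
#include <windows.h>
#include <stdio.h>

int main()
{
    const char* names[] = { "ntdll.dll", "kernel32.dll" };
    for (const char* name : names)
        printf("%-14s -> base %p\n", name, (void*)GetModuleHandleA(name));
    return 0;
}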
It is even possible that the interviewer fell victim to a common misunderstanding about DLLs. The DLL itself is merely a dormant piece of memory, it isn't running anything on its own ever (and yes, this is also true for DllMain). Sure, the loader will take care of all kinds of things such as relocations, but in the end nothing will run without being called explicitly or implicitly (in the context of some thread of the process loading the DLL). So for all practical purposes the question would require you to ask back.
Define "in Windows".
Also "dlls loaded in kernel mode or user mode", does this refer to the code doing the loading or to the end result (i.e. where the code runs or in what memory range it gets loaded)? Parts of that code run in user mode, others in kernel mode.
I wonder whether the interviewer has a clear idea of the concepts s/he is asking about.
Let me add some more information. It seems from the comments on the other answer that people have the same misconception that exists about DLLs also about drivers. Drivers are much closer to the idea of DLLs than to that of EXEs (or ultimately "processes"). The thing is that a driver doesn't do anything on its own most of the time (though it can create system threads to change that). Drivers are not processes and they do not create processes.
The answer is quite obviously user mode for anybody who does any kind of significant application development for Windows. Let me explain two things.
DLL
A dynamic link library is closely similar to a regular old link library, or .lib. When your application uses a .lib, the function code is pasted into your binary at link time. You typically use a .lib to store APIs and to modify the functions without having to rebuild the whole project: just paste a new .lib with the same name over the old one, and as long as the interface (function names and parameters) hasn't changed, it still works. Great modularity.
A .dll does essentially the same thing, except it doesn't require re-linking or any recompilation. You can think of a .dll as essentially a .lib that gets compiled into its own module, just like the applications which use it. Simply drop in a new .dll which shares the name and function signatures and it all just works. You can update your application simply by replacing .dlls. This is why most Windows software consists of .dlls and a few .exes.
The usage of a .dll is done in two ways
Implicit linking
To link this way, if you had a .dll called userapplication.dll you would also have a userapplication.lib (an import library) which defines all the entry points in the DLL. You simply link against that import library and then include the .dll in the working directory.
Explicit linking
Alternatively, you can programmatically load the .dll by first calling LoadLibrary("userapplication.dll"), which returns a handle to the .dll. Then GetProcAddress(handle, "FunctionInUserApplicationDll") returns a function pointer you can use. This way your application can check things before attempting to use them. C# is a little different but easier.
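A minimal sketch of the explicit route (the export's signature is assumed here; userapplication.dll and FunctionInUserApplicationDll are the made-up names from above):

// Explicit linking: load the DLL at run time and resolve one export by name.
#include <windows.h>
#include <stdio.h>

typedef int (__cdecl *FnInDll)(int);   // assumed signature of the export

int main()
{
    HMODULE dll = LoadLibraryA("userapplication.dll");
    if (!dll)
    {
        printf("LoadLibrary failed, error %lu\n", GetLastError());
        return 1;
    }

    FnInDll fn = (FnInDll)GetProcAddress(dll, "FunctionInUserApplicationDll");
    if (fn)
        printf("result: %d\n", fn(42));   // call through the resolved pointer
    else
        printf("export not found\n");

    FreeLibrary(dll);
    return 0;
}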
USER/KERNEL MODES
Windows has two major modes of execution: user mode and kernel mode (kernel space is further divided into system space and session space). In user mode, physical memory addresses are opaque. User mode makes use of virtual memory which is mapped to real memory. User mode drivers are, incidentally, also .dlls. A user mode application typically gets around 4 GB of virtual address space to work with. Two different applications cannot meaningfully exchange those addresses because they are only valid within the context of that application or process. There is no way for a user mode application to know its physical memory address without falling back to a kernel mode driver. This is basically everything you're used to programming against (unless you develop drivers).
Kernel mode is protected from user mode applications. Most hardware drivers work in the context of kernel mode, and typically all Windows APIs are broken into two categories: user and kernel. Kernel mode drivers use kernel mode APIs and do not use user mode APIs, and hence don't use .dlls (you can't even print to a console, because that is a user mode API set). Instead they use .sys files, which are drivers and essentially work in kernel mode the same way .dlls do in user mode. A .sys is in PE format, so it's basically an .exe, just like a .dll is an .exe without a main() entry point.
So from the askers perspective you have two groups
[kernel/.sys] and [user/.dll or .exe]
There really aren't .exes in the kernel, because the operating system does everything there, not users. When the system or another kernel component starts something, it does so by calling the DriverEntry() method, so I guess that is like main().
So this question in this sense is quite simple.
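For reference, here is roughly what that kernel-mode "main()" looks like - a minimal sketch of a do-nothing driver (builds with the WDK, not the regular SDK):

// Minimal kernel driver skeleton: DriverEntry is the entry point, DriverUnload the cleanup.
#include <ntddk.h>

void NTAPI DriverUnload(PDRIVER_OBJECT DriverObject)
{
    UNREFERENCED_PARAMETER(DriverObject);
    DbgPrint("sample driver unloading\n");
}

extern "C" NTSTATUS NTAPI DriverEntry(PDRIVER_OBJECT DriverObject, PUNICODE_STRING RegistryPath)
{
    UNREFERENCED_PARAMETER(RegistryPath);
    DriverObject->DriverUnload = DriverUnload;   // called when the driver is stopped
    DbgPrint("DriverEntry: the kernel-mode analogue of main()\n");
    return STATUS_SUCCESS;
}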

Locking Executing Files: Windows does, Linux doesn't. Why?

I noticed when a file is executed on Windows (.exe or .dll), it is locked and cannot be deleted, moved or modified.
Linux, on the other hand, does not lock executing files and you can delete, move, or modify them.
Why does Windows lock when Linux does not? Is there an advantage to locking?
Linux has a reference-count mechanism, so you can delete the file while it is executing, and it will continue to exist as long as some process (which previously opened it) has an open handle to it. The directory entry for the file is removed when you delete it, so it cannot be opened any more, but processes already using this file can still use it. Once all processes using this file terminate, the file is deleted automatically.
Windows does not have this capability, so it is forced to lock the file until all processes executing from it have finished.
I believe that the Linux behavior is preferable. There are probably some deep architectural reasons, but the prime (and simple) reason I find most compelling is that in Windows, you sometimes cannot delete a file, you have no idea why, and all you know is that some process is keeping it in use. In Linux it never happens.
As far as I know, Linux does lock executables when they're running -- however, it locks the inode. This means that you can delete the "file", but the inode is still on the filesystem, untouched, and all you really deleted is a link.
Unix programs use this way of thinking about the filesystem all the time: create a temporary file, open it, delete the name. Your file still exists, but the name is freed up for others to use and no one else can see it.
Linux does lock the files. If you try to overwrite a file that's executing you will get ETXTBSY (Text file busy). You can, however, remove the file, and the kernel will delete the file when the last reference to it is removed. (If the machine wasn't cleanly shut down, these files are the cause of the "Deleted inode had zero d-time" messages when the filesystem is checked; they weren't fully deleted, because a running process had a reference to them, and now they are.)
This has some major advantages: you can upgrade a process that's running by deleting the executable, replacing it, then restarting the process. Even init can be upgraded like this: replace the executable and send it a signal, and it'll re-exec() itself, without requiring a reboot. (This is normally done automatically by your package management system as part of its upgrade.)
Under windows, replacing a file that's in use appears to be a major hassle, generally requiring a reboot to make sure no processes are running.
There can be some problems, such as if you have an extremely large logfile, and you remove it, but forget to tell the process that was logging to that file to reopen the file, it'll hold the reference, and you'll wonder why your disk didn't suddenly get a lot more free space.
You can also use this trick under Linux for temporary files: open the file, delete it, then continue to use the file. When your process exits (for no matter what reason -- even power failure), the file will be deleted.
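A minimal sketch of that trick (compiles as C or C++; in practice you'd use mkstemp() or O_TMPFILE to create the file safely, the fixed path here is just for illustration):

// Classic Unix anonymous-temp-file trick: create, unlink the name, keep using the fd.
// The kernel only releases the space when the last descriptor is closed.
#include <fcntl.h>
#include <unistd.h>
#include <stdio.h>

int main()
{
    int fd = open("/tmp/scratch.tmp", O_RDWR | O_CREAT | O_EXCL, 0600);
    if (fd < 0) { perror("open"); return 1; }

    unlink("/tmp/scratch.tmp");   // the name is gone; the inode lives on

    const char msg[] = "still usable after unlink\n";
    write(fd, msg, sizeof msg - 1);
    lseek(fd, 0, SEEK_SET);

    char buf[64] = {0};
    read(fd, buf, sizeof buf - 1);
    printf("%s", buf);

    close(fd);   // now the blocks are actually freed
    return 0;
}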
Programs like lsof and fuser (or just poking around in /proc/<pid>/fd) can show you which processes have files open that no longer have a name.
I think Linux/Unix doesn't use the same locking mechanics because it is built from the ground up as a multi-user system - which would expect the possibility of multiple users using the same file, maybe even for different purposes.
Is there an advantage to locking? Well, it could possibly reduce the number of pointers that the OS has to manage, but nowadays the savings are pretty negligible. The biggest advantage I can think of to locking is this: you save some user-visible ambiguity. If user A is running a binary file, and user B deletes it, then the actual file has to stick around until user A's process completes. Yet, if user B or any other user looks on the file system for it, they won't be able to find it - but it will continue to take up space. Not really a huge concern to me.
I think largely it's more of a question of backwards compatibility with Windows' file systems.
I think you're too absolute about Windows. Normally, it doesn't allocate swap space for the code part of an executable. Instead, it keeps a lock on the executable & DLLs. If discarded code pages are needed again, they're simply reloaded. But with /SWAPRUN, these pages are kept in swap. This is used for executables on CD or network drives. Hence, Windows doesn't need to lock these files.
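For completeness: if you want that behaviour for your own binary, the switch can be set at link time or patched into an existing image afterwards (the module names here are made up):

link /DLL /SWAPRUN:NET mymodule.obj
editbin /SWAPRUN:CD existing.dll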
For .NET, look at Shadow Copy.
Whether executed code in a file should be locked or not is a design decision, and MS simply decided to lock, because it has clear advantages in practice: that way you don't need to know which code in which version is used by which application. This is a major problem with the Linux default behaviour, which is simply ignored by most people. If system-wide libs are replaced, you can't easily know which apps use code from such libs; most of the time the best you can get is that the package manager knows some users of those libs and restarts them. But that only works for general and well-known things like maybe Postgres and its libs or such.
The more interesting scenarios arise if you develop your own application against some 3rd party libs and those get replaced, because most of the time the package manager simply doesn't know your app. And that's not only a problem of native C code or such; it can happen with almost everything. Just use httpd with mod_perl and some Perl libs installed using a package manager, and let the package manager update those Perl libs for whatever reason: it won't restart your httpd, simply because it doesn't know the dependencies. There are plenty of examples like this one, simply because any file can potentially contain code that is in use in memory by some runtime; think of Java, Python and all such things.
So there's a good reason to hold the opinion that locking files by default may be a good choice. You don't need to agree with those reasons, though.
So what did MS do? They simply created an API which gives the calling application the chance to decide whether files should be locked or not, but they decided that the default of this API is to provide an exclusive lock to the first calling application. Have a look at the API around CreateFile and its dwShareMode argument. That is the reason why you might not be able to delete files in use by some application: it simply didn't care about your use case, used the default values, and therefore got an exclusive lock from Windows for that file.
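To make the difference concrete, here is a small sketch (data.txt is an arbitrary example file): the first opener takes the exclusive default, so a second open fails with a sharing violation; reopening with a cooperative share mode then lets a second open - and even deletion - succeed while the handle is held:

// dwShareMode decides what OTHER openers may do while this handle stays open.
#include <windows.h>
#include <stdio.h>

int main()
{
    // Exclusive default: share nothing.
    HANDLE exclusive = CreateFileA("data.txt", GENERIC_READ, 0, NULL,
                                   OPEN_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    // Any second open now fails with ERROR_SHARING_VIOLATION (32).
    HANDLE second = CreateFileA("data.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    printf("second open while exclusive: %s (error %lu)\n",
           second == INVALID_HANDLE_VALUE ? "failed" : "ok", GetLastError());
    CloseHandle(exclusive);

    // Cooperative share mode: others may read and even delete/rename the file.
    HANDLE shared = CreateFileA("data.txt", GENERIC_READ,
                                FILE_SHARE_READ | FILE_SHARE_DELETE, NULL,
                                OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    second = CreateFileA("data.txt", GENERIC_READ,
                         FILE_SHARE_READ | FILE_SHARE_DELETE, NULL,
                         OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    printf("second open with sharing: %s\n",
           second == INVALID_HANDLE_VALUE ? "failed" : "ok");

    if (second != INVALID_HANDLE_VALUE) CloseHandle(second);
    if (shared != INVALID_HANDLE_VALUE) CloseHandle(shared);
    return 0;
}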
Please don't believe people telling you that Windows doesn't use ref counting on HANDLEs or doesn't support hard links or such; that is completely wrong. Almost every API using HANDLEs documents its behaviour regarding ref counting, and you can easily read in almost any article about NTFS that it does indeed support hard links and always has. Since Windows Vista it has supported symlinks as well, and the support for hard links has been improved by providing APIs to enumerate all the hard links of a given file, etc.
Additionally, you may simply want to have a look at the structures used to describe a file in e.g. ext4 compared to those of NTFS, which have a lot in common. Both work with the concept of extents, which separates data from attributes like the file name, and inodes are pretty much just an older name for a similar concept. Even Wikipedia lists both file systems in its article.
There's really a lot of FUD around file locking in Windows compared to other OSs on the net, just like about defragmentation. Some of this FUD can be ruled out by simply reading a bit on Wikipedia.
NT variants have the openfiles command, which will show which processes have handles on which files. It does, however, require enabling the system global flag 'maintain objects list'.
openfiles /local /?
tells you how to do this, and also that a performance penalty is incurred by doing so.
Executables are progressively mapped to memory when run. What that means is that portions of the executable are loaded as needed. If the file could be deleted or replaced before all sections had been mapped, it could cause major instability.
