Does OpenProcess always write-lock the file?

I want to call the Windows API OpenProcess function on another process running on the machine. Will this always cause the executable file of the process I am opening to be write-locked? Or does it depend on the access rights I request?

Yes, this is a fundamental property of Windows. When an executable file (EXE or DLL) gets loaded, Windows creates a memory-mapped view of the file. Chunks of code or data from the executable are page-faulted into RAM as needed to keep the program running. It works the other way around too: when Windows needs to make RAM available for another program, it throws away mapped pages that haven't been used in a while. Those pages don't take up space in the paging file if they are code; they can simply be reloaded from the executable file.
This is very efficient, code written back when 16 megabytes of RAM was a luxury. The memory-mapped section keeps a write lock on the file. That is still useful in this day and age: it prevents malware from messing with the code of a running process.

The process file is locked while the process is running; it doesn't have anything to do with OpenProcess. The file is unlocked when the process terminates.
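As a rough illustration (a sketch, not from any docs; the PID and path below are hypothetical), the sharing violation shows up whether or not OpenProcess was called, and regardless of the rights requested:

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Hypothetical target PID; any access right works for this demo. */
    DWORD pid = 1234;
    HANDLE hProc = OpenProcess(PROCESS_QUERY_LIMITED_INFORMATION, FALSE, pid);
    if (hProc == NULL)
        printf("OpenProcess failed: %lu\n", GetLastError());

    /* Try to open the running image for writing; this fails (typically
       ERROR_SHARING_VIOLATION) while the process runs, with or without
       the OpenProcess call above. */
    HANDLE hFile = CreateFileA("C:\\path\\to\\target.exe", GENERIC_WRITE,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        printf("CreateFile failed: %lu\n", GetLastError());
    else
        CloseHandle(hFile);

    if (hProc) CloseHandle(hProc);
    return 0;
}
```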

Related

Why can't running exes and loaded dlls be deleted on Windows?

I mean, what's the point? They're in system memory anyway.
I couldn't find any "official" docs explaining why Windows protects loaded objects (EXE, DLL, and even OCX files).
I'm guessing:
An intended security measure, or protection against human error
A file-system limitation
On Unix we can easily delete any file unless it is explicitly locked. Windows' behavior only hinders UX, in my opinion; Google "how to delete dll" if you need proof. Many people have suffered from this, and I'm one of them.
Has Microsoft said anything official about this?
Is there any way to disable this "protection"? (Probably not, and there never will be, because Windows!)
They're in system memory anyway.
No, they're not. Individual pages are loaded on demand, and discarded from RAM when the system decides that they've been unused for a while and the RAM could be put to better use for another process (or another page in this process).
Which means that, effectively, the EXE file is open for as long as the process is running, and the DLL file is open until/unless the process unloads the DLL, in both cases so pages can be loaded/reloaded as needed.
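A short sketch of that lifetime, assuming a hypothetical demo.dll next to the program: the delete fails while the DLL is loaded and succeeds once the last reference is dropped (provided no other process has it loaded):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    HMODULE hMod = LoadLibraryA("demo.dll");   /* hypothetical DLL */
    if (hMod == NULL)
        return 1;

    /* While the DLL is mapped, the file cannot be deleted. */
    if (!DeleteFileA("demo.dll"))
        printf("locked while loaded, error %lu\n", GetLastError());

    FreeLibrary(hMod);  /* release the last reference; the mapping goes away */

    /* Now the delete succeeds. */
    if (DeleteFileA("demo.dll"))
        printf("deleted after FreeLibrary\n");
    return 0;
}
```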

Runtime data structure like /proc in Windows

I have two questions; they may be related, so I am asking them together.
Linux has the /proc directory, a runtime data structure that gives information about running processes. Does Windows have any such directory where I can get runtime info about a process, like its layout and open handles? Please do not suggest tools like Process Explorer; they are good, but they are not part of the core Windows OS.
Secondly, it is said that on Windows not everything is a file; for example, a socket is not a file. Does that mean it is not a file you can see on your hard disk, but that at runtime the system creates an entry for it in some /proc-like data structure?
Thanks.
While Windows has the ability to create virtual files (device drivers use this), there are no such files for process information.
Information about processes is available either through the process functions, the undocumented functions used by Process Explorer, or not at all.
Not every file is stored on some disk.
Virtual files are essentially just some value in memory, or some callback function that generates the file contents dynamically when you're trying to read it.
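For the documented route, here is a minimal sketch using the Tool Help snapshot API (an ANSI build is assumed); the richer details Process Explorer shows come from the undocumented calls mentioned above:

```c
#include <windows.h>
#include <tlhelp32.h>
#include <stdio.h>

int main(void)
{
    /* Take a snapshot of all processes on the system. */
    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    if (snap == INVALID_HANDLE_VALUE)
        return 1;

    PROCESSENTRY32 pe;
    pe.dwSize = sizeof(pe);
    if (Process32First(snap, &pe)) {
        do {
            printf("%5lu  %s\n", pe.th32ProcessID, pe.szExeFile);
        } while (Process32Next(snap, &pe));
    }
    CloseHandle(snap);
    return 0;
}
```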

Using a memory-mapped file while allowing other processes full access

I'm trying to use a memory-mapped file under Windows (using the CreateFile/CreateFileMapping/MapViewOfFile functions), and I'm currently specifying FILE_SHARE_READ and FILE_SHARE_WRITE when calling CreateFile. However, this is locking the file from being used by other processes.
What I want is to memory-map a snapshot of the file as of the time I call CreateFileMapping or MapViewOfFile, so that I don't see any changes (writes or deletes) made to the file by other processes. Sort of like copy-on-write, but with other processes doing the writes. Can I do this using memory-mapped files on Windows?
That's just not how memory-mapped files work. Windows puts a hard lock on the file so that nobody can change its contents and make them different from the pages mapped into RAM. Those pages in RAM are shared between all processes that created a view on the file. There is no 'tear off' option.
You could simply map the file and make a copy of the bytes in the view. Some synchronization with the other processes would typically be required.
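A minimal sketch of that approach (error handling trimmed, file assumed under 4 GB, path hypothetical); synchronization with the writers is still your problem:

```c
#include <windows.h>
#include <stdlib.h>
#include <string.h>

/* Map the file, copy its bytes into a private buffer, release the view.
   The buffer is the "snapshot"; later changes to the file don't touch it. */
unsigned char *snapshot_file(const char *path, DWORD *out_size)
{
    HANDLE hFile = CreateFileA(path, GENERIC_READ,
                               FILE_SHARE_READ | FILE_SHARE_WRITE,
                               NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return NULL;

    DWORD size = GetFileSize(hFile, NULL);  /* assumes < 4 GB */
    HANDLE hMap = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
    void *view = hMap ? MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0) : NULL;

    unsigned char *copy = NULL;
    if (view != NULL) {
        copy = malloc(size);
        if (copy != NULL)
            memcpy(copy, view, size);
        UnmapViewOfFile(view);
    }
    if (hMap) CloseHandle(hMap);
    CloseHandle(hFile);
    *out_size = size;
    return copy;   /* caller frees */
}
```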

Locking sharable memory

Is there a way to page another process's entire image into memory and keep it there? In a couple of weeks, our IT staff will be replacing all of the "core" network switches, which will bring down the network. This will be done after normal business hours, but during this time several users will still be using a program that I have written. It would be a nightmare to install local copies of my program on each user's machine. The program normally runs from a network share, and the only time it accesses the network is when it executes its executable (image) code. How can I get the Windows memory manager to load the entire image into memory and hold it locked there until the network is back online?
You can relink your program with the /SWAPRUN:NET linker option:
http://msdn.microsoft.com/en-us/library/w0628bwh.aspx
You could write it so that it copies itself to a local temp directory and then runs that copy as a separate process, killing the first copy. I've done this little juggling act before, but whether your program tolerates being run from the temp directory depends on how it works.
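A rough sketch of that juggling act (no arguments forwarded, minimal error handling, and the temp file name is made up):

```c
#include <windows.h>
#include <stdio.h>

int main(void)
{
    char self[MAX_PATH], tmp[MAX_PATH], local[MAX_PATH];
    GetModuleFileNameA(NULL, self, MAX_PATH);
    GetTempPathA(MAX_PATH, tmp);
    snprintf(local, sizeof(local), "%smyapp_local.exe", tmp);

    if (lstrcmpiA(self, local) == 0) {
        /* Already the temp copy: do the real work here. */
        return 0;
    }

    /* First copy: clone ourselves to temp, launch the clone, and exit. */
    if (!CopyFileA(self, local, FALSE))
        return 1;

    STARTUPINFOA si = { sizeof(si) };
    PROCESS_INFORMATION pi;
    if (CreateProcessA(local, NULL, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi)) {
        CloseHandle(pi.hThread);
        CloseHandle(pi.hProcess);
    }
    return 0;   /* the first copy dies; the temp copy carries on */
}
```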
This isn't going to work.
Windows doesn't necessarily load a 'static' copy of the executable into memory, it's free to shuffle chunks around and page parts in and out. Often it loads resources (images, strings, etc.) from the executable after the program has started running. It often loads external libraries dynamically as well.
Edited to add:
There is no such thing as "a process's entire image". Every thread, for example, gets its own stack allocation at runtime.
Maybe you should explain why running from a different location (i.e., a local copy of the binary) won't work for you.

How do I make Windows file-locking more like UNIX file-locking?

UNIX file-locking is dead easy: the operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process still keeps its file handles until it terminates, at which point the file system will quietly recycle the disk resources. No fuss, that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on the file. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files, but on a network it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file-locking on Windows to behave like file-locking in UNIX. I want the operating system to just let me do what I want, because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no issues with Windows that any number of people want to be fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was simpler and anyone close enough to a computer to touch it probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want, because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file. I-nodes can be shared between processes. If two processes create the same file (that is, two processes call creat() with the same path), then you end up with two i-nodes. This is by design. It allows for a fancy security feature: you can create files which no one can open but yourself:
1. Open a file
2. Delete it (but keep the file handle)
3. Use the file any way you like
4. Close the file
After step #2, the only process in the universe that can access the file is the one that created it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which time Unix will clean up after you).
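On a POSIX system the trick looks roughly like this (a sketch; the file name is arbitrary):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Step 1: create and open the file. */
    int fd = open("secret.tmp", O_CREAT | O_EXCL | O_RDWR, 0600);
    if (fd < 0)
        return 1;

    /* Step 2: delete the name; the open descriptor keeps the data alive. */
    unlink("secret.tmp");

    /* Step 3: use it normally; no other process can open it now. */
    write(fd, "private", 7);
    lseek(fd, 0, SEEK_SET);
    char buf[8] = {0};
    read(fd, buf, 7);
    printf("%s\n", buf);

    /* Step 4: close; the kernel reclaims the disk blocks. */
    close(fd);
    return 0;
}
```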
This design is the foundation of all Unix filesystems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode, which prevents anyone (even backup programs) from reading the file. This is even true of applications which just display information, like PDF viewers.
That means you'd have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can open files in a shared mode. That would allow other processes to access the file at the same time, but then you would have to check before every read/write whether the file still exists, whether someone has made changes, and so on.
According to MSDN, you can pass the FILE_SHARE_DELETE sharing-mode flag in the third parameter (dwShareMode) of CreateFile(), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note: Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
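A minimal sketch of such an open call (file name hypothetical): with FILE_SHARE_DELETE in the share mode, another process can delete or rename the file while this handle is still open:

```c
#include <windows.h>

HANDLE open_delete_friendly(void)
{
    /* All three share flags, so readers, writers, and deleters can
       all get at the file while we hold this handle. */
    return CreateFileA("shared.log", GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
}
```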
Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Also, deleting on reboot is an option (though that sounds like it isn't what you want).
That doesn't really help if the hung process still has the handle open. It won't release the resources until that hung process releases the handle. But anyway, in Windows it is possible to force close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close handles that a process has open.
