I've been going through the WinAPI documentation for a while, but I can't seem to find an answer. What I'm trying to achieve is to give a program a file name that it can open and work with as if it were a normal file on disk, but I want this object to live in memory.
I tried using named pipes and they work in some situations, but not always. I create a named pipe and pass it to the child process as if it were a regular file. When the process exits, I collect the data from the pipe.
program.exe \\.\pipe\input_pipe
I faced some limitations, though. One is that pipes are not seekable. The second is that they have to be opened with exactly the right permissions. And the third one I found is that you cannot put any data into a duplex pipe before it has been opened on the other end. Is there any way to overcome these limitations of named pipes?
Or maybe there is some other kind of object that could be opened with CreateFile and then accessed with ReadFile and WriteFile. So far the only solution I see is to create a file system driver and implement all the functionality myself.
Just to make it clear I wanted to point out that I cannot change the child program I'm running. The main idea is to give that program something that it would think is a normal file.
UPDATE: I'm not looking for a solution that involves installation of any external software.
Memory-mapped files would allow you to do what you want.
EDIT:
On rereading the question: since the receiving program already uses CreateFile/ReadFile/WriteFile and cannot be modified, this will not work. I cannot think of a way to do what the OP wants outside of a third-party or self-written RAMDisk solution.
The simplest solution might be, as you seem to suggest, using a Ramdisk to make a virtual drive mapped to memory. Then obviously, any files you write to or read from that virtual drive will be completely contained in RAM (assuming it doesn't get paged to disk).
I've done that a few times myself to speed up a process that was entirely disk-bound.
Call CreateFile but with FILE_ATTRIBUTE_TEMPORARY and probably FILE_FLAG_DELETE_ON_CLOSE as well.
The file will then never hit the disk unless the system is low on physical memory.
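Roughly, that combination looks like the sketch below; the path and sharing flags here are just assumptions for illustration. One caveat: once FILE_FLAG_DELETE_ON_CLOSE is used, other processes can only open the file if they also request FILE_SHARE_DELETE.

#include <windows.h>

// Sketch: a temporary file the cache manager will try to keep in memory,
// deleted automatically when the last handle to it is closed.
HANDLE h = CreateFileW(
    L"C:\\temp\\scratch.dat",                                // placeholder path
    GENERIC_READ | GENERIC_WRITE,
    FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,  // others must also ask for FILE_SHARE_DELETE
    NULL,
    CREATE_ALWAYS,
    FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
    NULL);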
Related
I'm interested to know if there's any method/mechanism to roll my own virtual file system that will run on modern Windows. The idea would be that no matter what part of the operating system tried to access files under the directory I "control", all of the operations would be filtered through some kind of callback code. If not, is there a fundamental reason why?
You absolutely can do this; it's called a "reparse point". See MSDN for the details.
Eugene is correct... you want to look at the documentation for File System Filters, not for reparse points.
Take a look at the TrueCrypt source code; it's open source, and it does something very close to what you want: "Creates a virtual encrypted disk within a file and mounts it as a real disk."
I want to prevent an exe file from being copied or cut from the Windows file system and pasted somewhere else.
The exe is written in C# and must exist on only one PC.
I have worked with FileSystemWatcher, NSIS, and the Clipboard, but for all of them I need to detect whether the file is being copied.
I have also seen 'Prevent' (http://www.free-download-blog.com/disable-cut-paste-copy-delete-rename-functions-using-prevent), but I need to prevent only that particular exe from being copied or cut.
Any pointer or idea will help.
As others have suggested you won't be able to disable the copy/cut behaviour so easily.
An alternative would be to disable the execution of the copied versions. In your executable you could check many things, like:
The path of the present executable is explicitly your_path
The name of the machine and user is the one you authorise
You could even prevent the file from being executed more than once using Windows registry entries (if the value is already 1, don't launch). It won't be perfect, since any experienced user could tweak that out, assuming they are looking for it. But depending on your users' profile it might be sufficient.
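As a rough illustration of such checks, here is a minimal Win32 sketch (the question is about a C# exe, so this only shows the underlying API idea; the expected path and machine name are placeholders):

#include <windows.h>
#include <string>

// Sketch: refuse to run if the executable was moved, or copied to another machine.
bool StartupChecksPass()
{
    wchar_t exePath[MAX_PATH] = {};
    GetModuleFileNameW(NULL, exePath, MAX_PATH);

    wchar_t machine[MAX_COMPUTERNAME_LENGTH + 1] = {};
    DWORD len = MAX_COMPUTERNAME_LENGTH + 1;
    GetComputerNameW(machine, &len);

    // Placeholder values: replace with the path and machine name you authorise.
    return std::wstring(exePath) == L"C:\\Program Files\\MyApp\\MyApp.exe"
        && std::wstring(machine) == L"AUTHORISED-PC";
}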
If you need the exe to be executable, you need to permit loading it into memory.
As soon as you do, anyone can read it into memory using ReadFile and then write it to an arbitrary location using WriteFile. No shell-detectable copying involved.
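To make that concrete, here is a minimal sketch of such a "copy" done purely with ReadFile/WriteFile (paths are placeholders, error handling omitted); nothing in it looks like a shell copy operation:

#include <windows.h>

// Sketch: duplicate a file using only low-level file APIs.
void RawCopy(const wchar_t* src, const wchar_t* dst)
{
    HANDLE in  = CreateFileW(src, GENERIC_READ, FILE_SHARE_READ, NULL,
                             OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE out = CreateFileW(dst, GENERIC_WRITE, 0, NULL,
                             CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    BYTE buffer[64 * 1024];
    DWORD read = 0, written = 0;
    while (ReadFile(in, buffer, sizeof(buffer), &read, NULL) && read > 0)
        WriteFile(out, buffer, read, &written, NULL);

    CloseHandle(in);
    CloseHandle(out);
}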
A good reading: Raymond's post and its comments on preventing copying.
Well, this is a hard problem. Even if you get explorer.exe to disable cut & paste, what prevents a user from using the command window? Or writing their own exe to do it? Or booting up in Linux and reading it?
Still, you have a few options (there will be more, most likely) which you could try:
Use the right permissions: set the permissions such that the users who you don't want to cut & paste cannot read the file.
Write a device driver which can hook onto the filesystem calls and do that for you.
Encrypt the file.
And some hacky options like:
Use the AppInit_DLLs registry key to get your own DLL loaded into each process (I am not sure if this will work with console processes, though). Then, when your DLL loads, do IAT hooking to replace the kernel32.dll file calls; a rough sketch of that hooking follows after this list.
Replace kernel32.dll with your own version. You might have to do some messing around with the PE format, etc.
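To give a feel for the IAT-hooking idea mentioned above, here is a heavily simplified sketch. It is only an illustration: CreateFileW is an arbitrary example target, error handling is omitted, and the check for "kernel32.dll" is an assumption (modern binaries often import file APIs through API-set DLLs instead).

#include <windows.h>
#include <string.h>

// Sketch: redirect the host module's imported CreateFileW to our own function.
static decltype(&CreateFileW) g_realCreateFileW = NULL;

static HANDLE WINAPI HookedCreateFileW(LPCWSTR name, DWORD access, DWORD share,
                                       LPSECURITY_ATTRIBUTES sa, DWORD disposition,
                                       DWORD flags, HANDLE templ)
{
    // Filtering or blocking logic would go here.
    return g_realCreateFileW(name, access, share, sa, disposition, flags, templ);
}

static void PatchIat(HMODULE host)
{
    BYTE* base = (BYTE*)host;
    IMAGE_NT_HEADERS* nt = (IMAGE_NT_HEADERS*)(base + ((IMAGE_DOS_HEADER*)base)->e_lfanew);
    IMAGE_DATA_DIRECTORY dir = nt->OptionalHeader.DataDirectory[IMAGE_DIRECTORY_ENTRY_IMPORT];
    IMAGE_IMPORT_DESCRIPTOR* imp = (IMAGE_IMPORT_DESCRIPTOR*)(base + dir.VirtualAddress);

    g_realCreateFileW = (decltype(&CreateFileW))
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "CreateFileW");

    for (; imp->Name != 0; ++imp) {
        if (_stricmp((const char*)(base + imp->Name), "kernel32.dll") != 0)
            continue;                                  // assumption: a direct kernel32 import
        IMAGE_THUNK_DATA* thunk = (IMAGE_THUNK_DATA*)(base + imp->FirstThunk);
        for (; thunk->u1.Function != 0; ++thunk) {
            if ((void*)thunk->u1.Function != (void*)g_realCreateFileW)
                continue;
            DWORD old;
            VirtualProtect(&thunk->u1.Function, sizeof(thunk->u1.Function), PAGE_READWRITE, &old);
            thunk->u1.Function = (ULONG_PTR)HookedCreateFileW;
            VirtualProtect(&thunk->u1.Function, sizeof(thunk->u1.Function), old, &old);
        }
    }
}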
There are no guarantees though. If for instance, you expect them to be able to execute it, but not copy it, you are probably stuck.
Any local admin will be able to undo anything you do to prevent copying. I can pretty much guarantee the program on that page you mention relies on a service or background process to prevent copy-and-paste, and therefore is easily circumventable. If your users are in a closed environment where none of them are admins and they have very limited rights to their PCs, then you have a chance.
If you could completely block Explorer from copying or moving files, then all you need is a third-party tool for copying files (but make sure it can filter file extensions), for example Copy Handler.
Set up an environment variable on your machine.
In your code, add a check:
if (environment variable == expected value)
    // execute code
else
    // suspend execution
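A minimal Win32 sketch of that check, assuming a placeholder variable name and value (on its own this is easy to bypass, as other answers point out):

#include <windows.h>
#include <string>

// Sketch: only run if a machine-specific environment variable has the expected value.
bool EnvironmentCheckPasses()
{
    wchar_t value[256] = {};
    DWORD len = GetEnvironmentVariableW(L"MYAPP_KEY", value, 256);   // placeholder variable name
    if (len == 0 || len >= 256)
        return false;                        // variable missing or unexpectedly long
    return std::wstring(value) == L"expected-token";                 // placeholder value
}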
I have a process with an open filehandle to a file. I need to detect if this file has been deleted by another process (there may be a file with the same name in its place). On UNIX I was comparing the inodes of my filehandle and the file-path via stat, but this doesn't work on Win32. How can I do this in Perl?
Thanks,
-Peter
I may be mistaken (I'm not a Windows programmer), but I thought files can't be deleted or replaced when they are opened in Win32, or at least by default it isn't possible.
This is a hard problem to solve, especially across both Windows and Unix.
Let's back up. Why are you trying to detect if the file has been replaced? My guess would be that you have some sort of race condition, with two programs both trying to write to the same file. Perhaps file locking would help here? Or using a real database? SQLite and Berkeley DB come to mind.
I'd try comparing size, mtime, and atime; it should be very difficult for all of those to be the same (barring nonsense like, say, stat on a filehandle on Win32 giving you the information for the current file in the filesystem regardless of whether your filehandle is on a deleted instance). If it's possible for your file to be deleted and replaced with an identical file multiple times within a given second, and you have to detect this, then you may have to go to an architectural solution, like using numbered versions of your file or something.
Look at the Win32::ChangeNotify package to register for notification of changes to a file or directory. It's also possible to open the file via the Win32API::File package such that it can't be deleted while you have it open (see createFile() and OsFHandleOpen() particularly, as well as the CreateFile() docs on MSDN).
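In raw Win32 terms, that protection comes from the share mode passed to CreateFile. A sketch (the path is a placeholder) that deliberately leaves out FILE_SHARE_DELETE, so the file cannot be deleted or renamed while the handle is held:

#include <windows.h>

HANDLE h = CreateFileW(L"C:\\data\\watched.txt",              // placeholder path
                       GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE,    // no FILE_SHARE_DELETE
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
// While h stays open, attempts by other processes to delete or rename the file will fail.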
I noticed when a file is executed on Windows (.exe or .dll), it is locked and cannot be deleted, moved or modified.
Linux, on the other hand, does not lock executing files and you can delete, move, or modify them.
Why does Windows lock when Linux does not? Is there an advantage to locking?
Linux has a reference-count mechanism, so you can delete the file while it is executing, and it will continue to exist as long as some process (which previously opened it) has an open handle to it. The directory entry for the file is removed when you delete it, so it cannot be opened any more, but processes already using the file can still use it. Once all processes using the file terminate, the file is deleted automatically.
Windows does not have this capability, so it is forced to lock the file until all processes executing from it have finished.
I believe that the Linux behavior is preferable. There are probably some deep architectural reasons, but the prime (and simple) reason I find most compelling is that in Windows, you sometimes cannot delete a file, you have no idea why, and all you know is that some process is keeping it in use. In Linux it never happens.
As far as I know, Linux does lock executables when they're running -- however, it locks the inode. This means that you can delete the "file", but the inode is still on the filesystem, untouched, and all you really deleted is a link.
Unix programs use this way of thinking about the filesystem all the time: create a temporary file, open it, delete the name. Your file still exists, but the name is freed up for others to use and no one else can see it.
Linux does lock the files. If you try to overwrite a file that's executing you will get ETXTBSY ("Text file busy"). You can, however, remove the file, and the kernel will delete the file when the last reference to it is removed. (If the machine wasn't cleanly shut down, these files are the cause of the "Deleted inode had zero d-time" messages when the filesystem is checked: they weren't fully deleted because a running process had a reference to them, and now they are.)
This has some major advantages: you can upgrade a process that's running by deleting the executable, replacing it, and then restarting the process. Even init can be upgraded like this: replace the executable and send it a signal, and it'll re-exec() itself without requiring a reboot. (This is normally done automatically by your package management system as part of its upgrade.)
Under Windows, replacing a file that's in use appears to be a major hassle, generally requiring a reboot to make sure no processes are running.
There can be some problems, such as if you have an extremely large logfile and you remove it but forget to tell the process that was logging to that file to reopen it: the process will hold the reference, and you'll wonder why your disk didn't suddenly get a lot more free space.
You can also use this trick under Linux for temporary files: open the file, delete it, then continue to use the file. When your process exits (for whatever reason, even a power failure), the file will be deleted.
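A small sketch of that trick with plain POSIX calls (the path is a placeholder):

#include <fcntl.h>
#include <unistd.h>

int main()
{
    int fd = open("/tmp/scratch.dat", O_RDWR | O_CREAT | O_EXCL, 0600);  // placeholder path
    unlink("/tmp/scratch.dat");        // the name is gone, but the data stays reachable via fd
    write(fd, "temporary data", 14);   // keep using the file as usual
    close(fd);                         // storage is reclaimed here (or when the process dies)
    return 0;
}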
Programs like lsof and fuser (or just poking around in /proc/<pid>/fd) can show you which processes have files open that no longer have a name.
I think Linux/Unix doesn't use the same locking mechanics because they were built from the ground up as multi-user systems, which means expecting the possibility of multiple users using the same file, maybe even for different purposes.
Is there an advantage to locking? Well, it could possibly reduce the number of pointers that the OS has to manage, but nowadays the savings are pretty negligible. The biggest advantage I can think of to locking is this: you save some user-visible ambiguity. If user A is running a binary file and user B deletes it, the actual file has to stick around until user A's process completes. Yet if user B or any other user looks on the file system for it, they won't be able to find it, but it will continue to take up space. Not really a huge concern to me.
I think largely it's more a question of backwards compatibility with Windows' file systems.
I think you're too absolute about Windows. Normally, it doesn't allocate swap space for the code part of an executable. Instead, it keeps a lock on the executable and its DLLs. If discarded code pages are needed again, they're simply reloaded. But with /SWAPRUN, these pages are kept in swap. This is used for executables on CD or network drives. Hence, Windows doesn't need to lock those files.
For .NET, look at Shadow Copy.
Whether the executed code in a file should be locked or not is a design decision, and MS simply decided to lock, because it has clear advantages in practice: that way you don't need to know which code, in which version, is used by which application. This is a major problem with the Linux default behaviour, which is simply ignored by most people: if system-wide libs are replaced, you can't easily know which apps use the code of those libs; most of the time, the best you can get is that the package manager knows some users of those libs and restarts them. But that only works for general and well-known things like maybe Postgres and its libs.

The more interesting scenarios arise if you develop your own application against some third-party libs and those get replaced, because most of the time the package manager simply doesn't know about your app. And it's not only a problem of native C code or such; it can happen with almost everything. Just use httpd with mod_perl and some Perl libs installed using a package manager, and let the package manager update those Perl libs for whatever reason: it won't restart your httpd, simply because it doesn't know the dependencies. There are plenty of examples like this one, simply because any file can potentially contain code that is in use in memory by some runtime; think of Java, Python and all such things.
So there's a good reason to hold the opinion that locking files by default may be a good choice. You don't need to agree with those reasons, though.
So what did MS do? They simply created an API which gives the calling application the chance to decide whether files should be locked or not, but they decided that the default for this API is to give the first calling application an exclusive lock. Have a look at the API around CreateFile and its dwShareMode argument. That is the reason why you might not be able to delete files in use by some application: it simply doesn't care about your use case, used the default values, and therefore got an exclusive lock from Windows on the file.
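A small sketch of what that looks like in practice (the path is a placeholder): the first open passes a dwShareMode of 0, so a second open fails with ERROR_SHARING_VIOLATION until the first handle is closed.

#include <windows.h>
#include <stdio.h>

int main()
{
    // First open: dwShareMode = 0 means "share with nobody".
    HANDLE h1 = CreateFileW(L"C:\\temp\\demo.txt", GENERIC_READ | GENERIC_WRITE,
                            0, NULL, CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);

    // Second open, even read-only, is rejected while h1 is open.
    HANDLE h2 = CreateFileW(L"C:\\temp\\demo.txt", GENERIC_READ,
                            FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h2 == INVALID_HANDLE_VALUE && GetLastError() == ERROR_SHARING_VIOLATION)
        printf("second open failed: sharing violation\n");

    CloseHandle(h1);
    return 0;
}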
Please don't believe people telling you that Windows doesn't use ref counting on HANDLEs or doesn't support hard links or such; that is completely wrong. Almost every API using HANDLEs documents its behaviour regarding ref counting, and you can easily read in almost any article about NTFS that it does indeed support hard links and always has. Since Windows Vista it supports symlinks as well, and the support for hard links has been improved by providing APIs to enumerate all the hard links for a given file, etc.
Additionally, you may simply want to have a look at the structures used to describe a file in, e.g., Ext4 compared to those of NTFS, which have a lot in common. Both work with the concept of extents, which separates data from attributes like the file name, and an inode is pretty much just another name for an older but similar concept. Even Wikipedia lists both file systems in its article.
There's really a lot of FUD on the net around file locking in Windows compared to other OSes, just like there is about defragmentation. Some of this FUD can be ruled out by simply reading a bit of Wikipedia.
NT variants have the openfiles command, which will show which processes have handles on which files. It does, however, require enabling the system global flag 'maintain objects list'.
openfiles /local /?
tells you how to do this, and also that a performance penalty is incurred by doing so.
Executables are progressively mapped into memory when run. What that means is that portions of the executable are loaded only as they are needed. If the backing file were removed or modified before all sections had been mapped, it could cause major instability.
UNIX file-locking is dead-easy: The operating system assumes that you know what you are doing and lets you do what you want:
For example, if you try to delete a file which another process has opened, the operating system will usually let you do it. The original process still keeps its file handles until it terminates, at which point the file system will quietly recycle the disk resources. No fuss, that's the way I like it.
How different things are on Windows: if I try to delete a file which another process is using, I get an operating-system error. The file is untouchable until the original process releases its lock on the file. That was great back in the single-user days of MS-DOS, when any locking process was likely to be on the same computer that contained the files, but on a network it's a nightmare:
Consider what happens when a process hangs while writing to a shared file on a Windows file-server. Before the file can be deleted we have to locate the computer and ID the process on that computer which originally opened the file. Only then can we kill the process and delete our unwanted file.
What a nuisance!
Is there a way to make this better? What I want is for file-locking on Windows to behave a like file-locking in UNIX. I want the operating system to just let me do what I want because I'm in charge and I know what I'm doing...
...so can it be done?
No. Windows is designed for the "average user", that is, people who don't understand anything about a computer. Therefore, the OS tries to be smart to avoid PEBKACs. To quote Bill Gates: "There are no issues with Windows that any number of people want to be fixed." Of course, he knows that 99.9999% of all Windows users can't tell whether the program just did something odd because of them or because of the guy who wrote it.
Unix was designed when the world was more simple and anyone close enough to a computer to touch it, probably knew how to assemble it from dirty sand. Therefore, the OS usually lets you do what you want because it assumes that you know better (and if you didn't, you will next time).
Technical answer: Unix allocates an i-node when you create a file. I-nodes can be shared between processes. If two processes create the same file (that is, two processes call creat() with the same path), you end up with two i-nodes. This is by design. It allows for a fancy security feature: you can create files which no one but you can open:
Open a file
Delete it (but keep the file handle)
Use the file any way you like
Close the file
After step #2, the only process in the universe that can access the file is the one that created it (unless you want to read the hard disk block by block). The OS will keep the data alive until you either close the file or your process dies (at which time Unix will clean up after you).
This design is the foundation of all Unix filesystems. The Windows file system NTFS works much the same way, but the high-level API is different. Many applications open files in exclusive mode (which prevents anyone, even backup programs, from reading the file). This is even true for applications which just display information, like PDF viewers.
That means you'll have to fix all the Windows applications to achieve the desired effect. If you have access to the source, you can create the file in a shared mode. That would allow other processes to access it at the same time, but then you will have to check before every read/write whether the file still exists, whether someone has made changes, etc.
According to MSDN, you can pass the FILE_SHARE_DELETE sharing flag in the 3rd parameter (dwShareMode) of CreateFile(), which:
Enables subsequent open operations on a file or device to request delete access.
Otherwise, other processes cannot open the file or device if they request delete access.
If this flag is not specified, but the file or device has been opened for delete access, the function fails.
Note: Delete access allows both delete and rename operations.
http://msdn.microsoft.com/en-us/library/aa363858(VS.85).aspx
So if you can control your applications, you can use this flag.
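For example, a rough sketch of an open that tolerates deletion or renaming by others (the path is a placeholder):

#include <windows.h>

HANDLE h = CreateFileW(L"C:\\data\\shared.log",               // placeholder path
                       GENERIC_READ,
                       FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                       NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
// Other processes can now delete or rename the file even while this handle is open.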
Note that Process Explorer allows force-closing of file handles (for processes local to the box on which you are running it) via Handle -> Close Handle.
Unlocker purports to do a lot more, and provides a helpful list of other tools.
Also, deleting on reboot is an option (though this sounds like it's not what you want).
That doesn't really help if the hung process still has the handle open. It won't release the resources until that hung process releases the handle. But anyway, in Windows it is possible to force close a file out from under a process that's using it. Process Explorer from sysinternals.com will let you look at and close handles that a process has open.