Is there a way to fake a file on the file system, or write a file that is visible only to my EXE file - Windows

OK, I wrote an application that uses the Adobe ActiveX control for displaying PDF files.
The Adobe ActiveX control loads files only from the file system, so I need to feed a file path to this control.
The problem is that I don't want to store PDF files on the file system. Not even temporarily! I want to store my PDF files only in memory, and I want to use the Adobe ActiveX control.
So I need:
1) A way to fake a file on the file system, so this control would "think" there is a file but would actually load it from memory
2) A way to create a file on the file system that would be "visible" to only one application, so my PDF control could load it and other users wouldn't even see it
3) Something else
PS: I'm not asking you to "finish my homework", I'm just asking: is there a way to do this?

You can almost do it (meaning: no, you can't, but you can do something that comes close).
Creating a file with FILE_ATTRIBUTE_TEMPORARY does in principle create a file, temporarily. However, as long as there is sufficient buffer cache (which is almost always the case unless your file is tens to hundreds of megabytes), the system will not write to disk. This is not just something that happens accidentally, but the actual specified behaviour of this flag.
Further, specifying 0 as the share mode together with FILE_FLAG_DELETE_ON_CLOSE will prevent any other process from opening your file for as long as you keep it open, even if someone knows it's there, and the file will "disappear" when you close it. Even if your application crashes, the OS will clean up behind you (relevant if DRM is the reason for all this). If you're in super-paranoia mode and worried about the system bluescreening while your file exists, you can additionally schedule a pending delete; in case of a system crash, that will remove the file during boot.
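To make that concrete, a minimal sketch (path is illustrative, error handling omitted):

    // Create a file that lives in the buffer cache and vanishes on close.
    // Share mode 0 means no other process can open it while we hold it.
    HANDLE h = CreateFileW(
        L"C:\\Users\\me\\AppData\\Local\\Temp\\doc.pdf",  // illustrative path
        GENERIC_READ | GENERIC_WRITE,
        0,                                  // share mode 0: exclusive access
        NULL,
        CREATE_NEW,
        FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE,
        NULL);

    // Super-paranoia mode: schedule a boot-time delete in case the system
    // crashes while the file exists (NULL target = delete; needs admin rights).
    MoveFileExW(L"C:\\Users\\me\\AppData\\Local\\Temp\\doc.pdf",
                NULL, MOVEFILE_DELAY_UNTIL_REBOOT);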
Lastly, given NTFS, you can create an alternate stream with a random, preferably unique name (e.g. the SHA1 of the document, or a UUID) on any file or even directory. Alternate streams on directories are ... kind of a nasty hack, but entirely legal; they work just fine and don't appear in Explorer. This will not really make your file invisible, but nearly so (in almost every practical aspect, anyway). If you're a good citizen, you will want to use the system temp folder for such a thing, not the program folder or some other place that you shouldn't write to.
Creating an alternate stream is dead easy too: just use the normal file or directory name and append a colon (:) and the name of the stream you want. No extra API is needed.
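For example (the host path and stream name are made up here; pdfBytes/pdfSize stand for your in-memory document):

    // Write an in-memory PDF into an alternate data stream. The host
    // file keeps its normal contents; the stream is invisible in Explorer.
    HANDLE h = CreateFileW(L"C:\\Temp\\host.txt:9f2c0a6e",  // file:streamname
                           GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    DWORD written = 0;
    WriteFile(h, pdfBytes, pdfSize, &written, NULL);  // your in-memory document
    CloseHandle(h);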
Other than that, it gets kind of hard. You can of course always create something like a ramdisk (would be tough to hide it, though), or try to use one of the stream-from-memory functions to fool an application into reading from a memory buffer on the allegation of a file... but that's not trivial stuff.

If something needs to be on a file system to pass to another application, you cannot hide it or limit it to certain processes. Anything your app can see, anything else at the same privilege level can also see and access. You may be able to lock it, but how depends on what you are trying to protect against.
Remember that the user's PC is theirs, not yours, so they have full access to everything on it.

You can create a virtual disk and limit access to it to only a specific application. To do this, you would have to write a file system driver or a file system filter driver. Both work in kernel mode and are tricky to write and maintain. Our company offers components that let you avoid writing the drivers yourself and implement the business logic in user mode (we provide the drivers in those products).
Your most obvious option is to get rid of the Adobe Reader control and use some third-party component that displays PDFs and can load them from memory.
But in general, a smart hacker will be able to capture your data unless you (a) use a non-standard data format, and/or (b) stream the data from the server dynamically, never keeping the complete data on the computer. These are not bulletproof solutions either, but they make a hacker's work much harder.

Related

Distinction between copy paste and file read on MiniFilter

I'm planning to write a MiniFilter to do some file encryption and add some metadata to files.
I think I understand what I need to do in my MiniFilter to ensure that files are stored in their encrypted form but can be read by the system with no problems.
If an application asks to read the file, I need to fetch the encrypted part, decipher it, and send it back to the system.
If I try to copy the file, I need to copy the whole file, with metadata and encrypted payload.
But I think I may have a problem with the metadata: since I cannot find a way to know whether the IRP_MJ_READ I got is from an app trying to read the file or from a copy-paste request, I will never be able to read the metadata or copy it.
Is there some information, in the IRP_MJ_READ or the IRP_MJ_CREATE, that is specific to a copy-paste action?
Your task will not be easy or trivial by any means. Writing an encryption file system filter for Windows is hard.
First of all, I will give you a few hints and pointers. The best thing you can do is search the OSR NTFSD list for posts and threads about this; it is a gold mine when it comes to these kinds of filters.
Check out the swapbuffers sample from Microsoft. It shows how you can replace the data in the read/write I/O path with your own. In the scenario you described, you would encrypt on the write and decrypt on the read.
For starters, filter only the reads/writes that have the IRP_NOCACHE flag set, and make sure all your reads/writes are multiples of the volume sector size. See more information about this flag here.
Use a block cipher that aligns with the volume sector size; all the popular ones should. See CNG.
Explore from there. Modifying only this should be pretty straightforward.
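To make the swapbuffers idea concrete, here is a heavily simplified post-read sketch; a toy XOR stands in for a real CNG cipher, and the pre-read buffer swapping the sample performs is omitted entirely, so treat it as shape, not substance:

    #include <fltKernel.h>

    // Decrypt non-cached reads in place. Cached reads are left alone so
    // the cache manager keeps seeing plaintext.
    FLT_POSTOP_CALLBACK_STATUS
    PostReadCallback(
        _Inout_ PFLT_CALLBACK_DATA Data,
        _In_ PCFLT_RELATED_OBJECTS FltObjects,
        _In_opt_ PVOID CompletionContext,
        _In_ FLT_POST_OPERATION_FLAGS Flags)
    {
        UNREFERENCED_PARAMETER(FltObjects);
        UNREFERENCED_PARAMETER(CompletionContext);
        UNREFERENCED_PARAMETER(Flags);

        if (!FlagOn(Data->Iopb->IrpFlags, IRP_NOCACHE) ||
            !NT_SUCCESS(Data->IoStatus.Status)) {
            return FLT_POSTOP_FINISHED_PROCESSING;
        }

        PUCHAR buf = NULL;
        if (Data->Iopb->Parameters.Read.MdlAddress != NULL) {
            buf = (PUCHAR)MmGetSystemAddressForMdlSafe(
                Data->Iopb->Parameters.Read.MdlAddress, NormalPagePriority);
        } else {
            buf = (PUCHAR)Data->Iopb->Parameters.Read.ReadBuffer;
        }

        if (buf != NULL) {
            for (ULONG_PTR i = 0; i < Data->IoStatus.Information; i++) {
                buf[i] ^= 0xAA;  // toy cipher; use a sector-aligned block cipher (CNG)
            }
        }
        return FLT_POSTOP_FINISHED_PROCESSING;
    }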
Make sure you use a VM and snapshots, and try to monitor only a particular file, encrypting/decrypting just that one, as it will take you many tries until you succeed.
Is there some information, in the IRP_MJ_READ or the IRP_MJ_CREATE,
that is specific to a copy-paste action?
None whatsoever. The kernel is blind to this. Even copy/paste itself, if you think about it, at the end of the day results in explorer.exe opening a file, reading from the source, and writing to the destination using system calls. The OS is there to make sure the system calls work and do their job; it does not know, nor does it need to know, whether the read of the data or metadata came from you copy-pasting, from right-clicking Properties in explorer.exe, or from somewhere else entirely. You might use Total Commander and copy-paste from there, and it could implement its copy totally differently, or you might use xcopy or robocopy. You need to think in a more abstract way in the kernel.
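Stripped of all UI, any copy ultimately reduces to a loop like this (illustrative user-mode fragment, paths made up):

    // To the kernel, a "copy" is just: open source, open destination,
    // read, write, repeat. Each iteration arrives below as an
    // IRP_MJ_READ on one file and an IRP_MJ_WRITE on the other.
    HANDLE src = CreateFileW(L"C:\\src.bin", GENERIC_READ, FILE_SHARE_READ,
                             NULL, OPEN_EXISTING, 0, NULL);
    HANDLE dst = CreateFileW(L"C:\\dst.bin", GENERIC_WRITE, 0,
                             NULL, CREATE_ALWAYS, 0, NULL);
    BYTE buf[64 * 1024];
    DWORD r = 0, w = 0;
    while (ReadFile(src, buf, sizeof(buf), &r, NULL) && r != 0)
        WriteFile(dst, buf, r, &w, NULL);
    CloseHandle(src);
    CloseHandle(dst);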
Good luck.

What is an alternative to mandatory file locking on macOS?

I'm writing an app for macOS with the primary goal of managing arbitrary user files in a certain manner. 'Management' includes arbitrarily reading/writing/updating these files. Management is not internally a discrete event and may consist of several idle periods; however, it must appear as one to the user.
Note: The term 'user' includes any and all user activity (e.g., via Finder) or user-initiated processes (e.g., other apps opened by the user; though not running as root, similar to the privileges of my own application).
My app does not store these files in an owned container (e.g. sandboxed app container), but rather runs continuously in the background keeping track of these files, monitoring for changes and managing them as necessary.
The duration of this 'management' may vary from a few milliseconds to a few hours.
I'm trying to write a construct (i.e., class/struct) to encapsulate references to these 'hot' files (i.e., files under management). During management, the user must not be capable of reading, writing to, or deleting these files, unless the app is explicitly quit (through a normal quit or a force quit, regardless).
Is there any way I can "lock" a file, as to prevent user reading/writing/updating and/or even modification of permissions?
Here are two possible solutions:
Copy the file to an undisclosed location, manage it, and overwrite the old file. This is undesirable for multiple reasons: copying is expensive and impractical for large files, the user is not explicitly aware of the management, and it does nothing to prevent other processes from seeing the file as "free".
Modify file permissions. I'm not sure if this is even possible (please let me know in detail if it is!), but if my process could modify file permissions so as to prevent user access, it would solve the essence of my problem. However, if anything were to prevent my app from 'unlocking' these files (be it through a crash, a force quit, etc.), it would leave the files inaccessible to the user.
A third option, though not really a solution, would be to simply not attempt to 'lock' any of these files. I could just monitor the files continuously and alert the user of any failure. I really don't want to do this, hence the question.
The second solution seems quite promising. I can't, however, find any high-level APIs that let me interface with file ACLs (access control lists). I'm not even sure whether I'm correct in my understanding of how it would work, so feel free to build upon that thought and turn it into a concrete answer.
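For what it's worth, the lowest-friction BSD-level call for something like the second solution is the user 'immutable' flag rather than an ACL. A hedged sketch (note it blocks writes, renames, and deletes but not reads, and, exactly as feared above, the flag survives a crash of the locking process):

    #include <sys/stat.h>
    #include <unistd.h>

    /* Set the BSD user-immutable flag: even the owner can no longer
       write to, rename, or delete the file until the flag is cleared. */
    int lock_file(const char *path) {
        struct stat st;
        if (stat(path, &st) != 0) return -1;
        return chflags(path, st.st_flags | UF_IMMUTABLE);
    }

    /* Clear only the immutable bit, leaving other flags intact. */
    int unlock_file(const char *path) {
        struct stat st;
        if (stat(path, &st) != 0) return -1;
        return chflags(path, st.st_flags & ~UF_IMMUTABLE);
    }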
I'm also curious as to how Finder seems to know whether files are being used by other processes. Again, I think I know but I'm not entirely sure, so better ask it here with the main question.

Possible to bypass caching and download/open file to RAM?

Preamble:
Recently I came across an interesting story about people who seem to be sending emails with documents that contain child pornography. This is an example (this one is a JPEG, but I'm hearing about it being done with PDFs, which generally can't be previewed):
https://www.youtube.com/watch?v=zislzpkpvZc
This can pose a real threat to people in investigative journalism, because even if you delete the file after it has been opened, the copy in Temp may still be recovered by forensics software. Even just having opened the file already puts you in the realm of committing a felony.
This can also pose a real problem for security consultants. Let's say person A emails criminal files, and person B is suspicious of the email and forwards it to the security manager for their program. In order to analyze the file, the consultant may have to download it to a hard drive, even if they load it in a VM or sandbox. Even if they figure out what it is, they are still in a legal minefield where bad timing could land them in jail for 20 years. Thinking about this: if the data were only ever to enter RAM, then upon a power-down all traces of the opened file would disappear.
Question: I have an OK understanding of how computer architecture works, but the problem presented above made me start wondering. Is there a limitation, at the OS, hardware, or firmware level, that prevents a program from opening a stream of downloading information directly into RAM? If not, let's say you try to open a PDF: is it possible for the file to instead be passed to the program as a stream of downloading bytes, making retention of the final file on the HDD impossible?
Unfortunately I can only give a Linux/Unix-based answer to this, but hopefully it is helpful and extends to Windows too.
There are many ways to pass data between programs without writing to the hard disk; it is usually more a question of whether the software applications support it (the web browser and PDF reader in your example). Streams can be passed via pipes and sockets, but the problem here is that it may be more convenient for the receiving program to seek back in the stream at certain points rather than store all the data in memory. This may be a more efficient use of resources too, hence many programs do not do this. Indeed, a pipe can be made to look like a file, but if the application tries to seek backward, it will cause an error.
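A tiny Linux illustration of that limitation: a pipe can be handed to another program by a file-like name, but any attempt to seek it fails:

    #include <stdio.h>
    #include <unistd.h>
    #include <errno.h>

    int main(void) {
        int fds[2];
        if (pipe(fds) != 0) { perror("pipe"); return 1; }

        /* Another process could open this read end via a pseudo-path
           such as /proc/<our pid>/fd/N, as if it were a file name... */
        printf("file-like name: /proc/%d/fd/%d\n", getpid(), fds[0]);

        /* ...but seeking (backward or otherwise) fails with ESPIPE. */
        if (lseek(fds[0], 0, SEEK_SET) == -1 && errno == ESPIPE)
            printf("lseek failed as expected: pipes are not seekable\n");
        return 0;
    }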
If there were more demand for streaming data to applications, it would probably be seen in more cases, as there are no major barriers. Currently it is more common to just store PDFs in a temporary file if they are viewed in a plugin and not downloaded. Video can be different, though.
An alternative is to use a RAM drive. It is common for a Linux system to have at least one set up by default (tmpfs), although on Windows it seems you have to install additional software. Using one of these removes the above limitations, and it is fairly easy to point a web browser at it for temporary files.
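For example, on a typical Linux box /dev/shm is a tmpfs mount, so perfectly ordinary file calls never have to touch the disk (though tmpfs pages can still be swapped out unless you lock them):

    #include <fcntl.h>
    #include <unistd.h>

    int main(void) {
        /* This "file" lives in RAM; unlinking it leaves no trace on disk. */
        int fd = open("/dev/shm/report.pdf", O_CREAT | O_RDWR, 0600);
        if (fd < 0) return 1;
        write(fd, "%PDF-1.4 ...", 12);
        close(fd);
        unlink("/dev/shm/report.pdf");
        return 0;
    }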

Is there any way to get READ access to a file opened with exclusive access, i.e. FILE_SHARE_NONE?

Without resorting to dirty and nasty ways, I believe this is not possible from user mode, even with SE_BACKUP_NAME.
Things I consider dirty and nasty:
Figuring out what process owns the handle and writing code to run in that process and close the handle.
Reading/parsing the MFT/FAT table
Using a Kernel Driver
Yes, there is a way, although it may not suit your needs; it isn't dirty or nasty, but it is heavy, i.e., not straightforward to code, and it creates a disproportionate amount of system load if you're just trying to read a single file.
However, if you need to do this, this is the only reasonable and safe solution I'm aware of: see the MSDN documentation on the Volume Shadow Copy Service.
Most backup software uses VSS nowadays.
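To give a feel for the weight involved, a condensed C++ sketch of the snapshot sequence (error handling omitted, volume and names illustrative; link against VssApi.lib and run elevated):

    #include <windows.h>
    #include <cguid.h>
    #include <stdio.h>
    #include <vss.h>
    #include <vswriter.h>
    #include <vsbackup.h>

    void SnapshotVolume()
    {
        CoInitializeEx(NULL, COINIT_MULTITHREADED);

        IVssBackupComponents *comp = NULL;
        CreateVssBackupComponents(&comp);
        comp->InitializeForBackup(NULL);
        comp->SetBackupState(false, false, VSS_BT_FULL, false);

        IVssAsync *async = NULL;
        comp->GatherWriterMetadata(&async);
        async->Wait(); async->Release();

        VSS_ID setId, snapId;
        comp->StartSnapshotSet(&setId);
        comp->AddToSnapshotSet((VSS_PWSZ)L"C:\\", GUID_NULL, &snapId);

        comp->PrepareForBackup(&async);
        async->Wait(); async->Release();
        comp->DoSnapshotSet(&async);        // the heavy part
        async->Wait(); async->Release();

        // The snapshot exposes a read-only device; open the locked file
        // through it, e.g. a path under
        // \\?\GLOBALROOT\Device\HarddiskVolumeShadowCopy1\...
        VSS_SNAPSHOT_PROP prop;
        comp->GetSnapshotProperties(snapId, &prop);
        wprintf(L"snapshot device: %s\n", prop.m_pwszSnapshotDeviceObject);
        VssFreeSnapshotProperties(&prop);

        comp->Release();
        CoUninitialize();
    }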

Graceful File Reading without Locking

Whiteboard Overview
(Two whiteboard diagrams, "Internal" and "Global": 1000 x 750 px, ~130 kB JPEGs hosted on ImageShack.)
Additional Information
I should mention that each user (of the client boxes) will be working straight off the /Foo share. Due to the nature of the business, users will never need to see or work on each other's documents concurrently, so conflicts of this nature will never be a problem. Access needs to be as simple as possible for them, which probably means mapping a drive to their respective /Foo/username sub-directory.
Additionally, no one but my applications (in-house and the ones on the server) will be using the FTP directory directly.
Possible Implementations
Unfortunately, it doesn't look like I can use off-the-shelf tools such as WinSCP, because some other logic needs to be intimately tied into the process.
I figure there are two simple ways for me to accomplishing the above on the in-house side.
Method one (slow):
Walk the /Foo directory tree every N minutes.
Diff with the previous tree using a combination of timestamps (these can be faked by file-copying tools, but that is not relevant in this case) and checksums.
Merge changes with off-site FTP server.
Method two:
Register for directory change notifications (e.g., using ReadDirectoryChangesW from the WinAPI, or FileSystemWatcher if using .NET); a sketch follows below.
Log changes.
Merge changes with off-site FTP server every N minutes.
I'll probably end up using something like the second method due to performance considerations.
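A minimal synchronous sketch of that notification loop (directory path illustrative; production code would use the overlapped form so the loop can't miss changes while processing):

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        // FILE_FLAG_BACKUP_SEMANTICS is required to open a directory handle.
        HANDLE dir = CreateFileW(L"C:\\Foo", FILE_LIST_DIRECTORY,
            FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
            NULL, OPEN_EXISTING, FILE_FLAG_BACKUP_SEMANTICS, NULL);
        if (dir == INVALID_HANDLE_VALUE) return 1;

        DWORD buf[4096];    // DWORD array keeps the records aligned
        DWORD bytes = 0;
        while (ReadDirectoryChangesW(dir, buf, sizeof(buf), TRUE,
                FILE_NOTIFY_CHANGE_FILE_NAME | FILE_NOTIFY_CHANGE_LAST_WRITE,
                &bytes, NULL, NULL))
        {
            FILE_NOTIFY_INFORMATION *fni = (FILE_NOTIFY_INFORMATION *)buf;
            for (;;) {
                wprintf(L"action %lu: %.*s\n", fni->Action,
                        (int)(fni->FileNameLength / sizeof(WCHAR)),
                        fni->FileName);
                if (fni->NextEntryOffset == 0) break;
                fni = (FILE_NOTIFY_INFORMATION *)
                      ((BYTE *)fni + fni->NextEntryOffset);
            }
        }
        CloseHandle(dir);
        return 0;
    }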
Problem
Since this synchronization must take place during business hours, the first problem that arises is during the off-site upload stage.
While I'm transferring a file off-site, I effectively need to prevent the users from writing to the file (e.g., use CreateFile with FILE_SHARE_READ or something) while I'm reading from it. The office's upstream internet speed is nowhere near adequate for the file sizes they'll be working with, so it's quite possible that they'll come back to a file and attempt to modify it while I'm still reading from it.
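That part is just a share-mode choice at open time; for example (path illustrative):

    // Open for reading while denying writers: other processes can still
    // read, but any attempt to open the file for writing or deletion
    // fails with a sharing violation until we close this handle.
    HANDLE h = CreateFileW(L"C:\\Foo\\alice\\report.doc", GENERIC_READ,
                           FILE_SHARE_READ, NULL, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, NULL);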
Possible Solution
The easiest solution to the above problem would be to create a copy of the file(s) in question elsewhere on the file-system and transfer those "snapshots" without disturbance.
The files (some will be binary) that these guys will be working with are relatively small, probably ≤20 MB, so copying (and therefore temporarily locking) them will be almost instant. The chances of them attempting to write to the file in the same instant that I'm copying it should be close to nil.
This solution seems kind of ugly, though, and I'm pretty sure there's a better way to handle this type of problem.
One thing that comes to mind is something like a file system filter that takes care of the replication and synchronization at the IRP level, kind of like what some A/Vs do. This is overkill for my project, however.
Questions
This is the first time that I've had to deal with this type of problem, so perhaps I'm thinking too much into it.
I'm interested in clean solutions that don't require going overboard with the complexity of their implementations. Perhaps I've missed something in the WinAPI that handles this problem gracefully?
I haven't decided what I'll be writing this in, but I'm comfortable with: C, C++, C#, D, and Perl.
After the discussions in the comments, my proposal would be as follows:
Create a partition on your data server, about 5 GB for safety.
Create a Windows Service project in C# that monitors your data drive / location.
When a file has been modified, create a local copy of it with the same directory structure and place it on the new partition.
Create another service that would do the following:
Monitor bandwidth usage.
Monitor file creations on the temporary partition.
Transfer several files at a time (use threading) to your FTP server, abiding by the bandwidth usage at the current time, decreasing/increasing the worker threads depending on network traffic.
Remove the files from the partition that have been successfully transferred.
So basically you have your drives:
C: Windows Installation
D: Share Storage
X: Temporary Partition
Then you would have following services:
LocalMirrorService - Watches D: and copies to X: with the dir structure
TransferClientService - Moves files from X: to ftp server, removes from X:
Also uses multiple threads to move several files at once and monitors bandwidth.
I would bet this is the idea you had in mind, but it seems like a reasonable approach as long as you're really good with your application development and you're able to create a solid system that can handle most issues.
When a user edits a document in Microsoft Word, for instance, the file will change on the share, and it may be copied to X: even though the user is still working on it. Within Windows you can check whether the file handle is still held open by the user; if it is, you can simply watch for when the user actually closes the document, so that all their edits are complete, and then migrate it to drive X:.
That being said, if the user is working on the document and their PC crashes for some reason, the document's file handle may not get released until the document is opened at a later date, thus causing issues.
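One common way to perform that check from user mode is simply to attempt an exclusive open; a hypothetical helper:

    // Heuristic: if we can briefly open the file with no sharing allowed,
    // no one else (e.g. Word) is holding it open, so it is safe to copy.
    bool IsFileIdle(const wchar_t *path)
    {
        HANDLE h = CreateFileW(path, GENERIC_READ, 0, NULL,
                               OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
        if (h == INVALID_HANDLE_VALUE)
            return false;   // most likely a sharing violation
        CloseHandle(h);
        return true;
    }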
For anyone in a similar situation (I'm assuming the person who asked the question implemented a solution long ago), I would suggest an implementation of rsync.
rsync.net's Windows Backup Agent does what is described in method 1, and can be run as a service as well (see "Advanced Usage"). Though I'm not entirely sure if it has built-in bandwidth limiting...
Another (probably better) solution that does have bandwidth limiting is Duplicati. It also properly backs up currently-open or locked files. Uses SharpRSync, a managed rsync implementation, for its backend. Open source too, which is always a plus!
