Recursive MoveFile/CopyFile - winapi

I’m working on a console program and need to use MoveFile/CopyFile to let users move and copy files and directories, possibly across volumes. The problem, of course, is that copying or moving a directory to another volume does not work with those functions, because they are not recursive.
SHFileOperation will not do: this is a console app, and I am using the variants that support progress callbacks (MoveFileWithProgress/CopyFileEx), whereas SHFileOperation displays its progress in a GUI dialog rather than on the console.
I considered walking the tree with FindNextFile, but I could not find any example code that recursively moves or copies a directory that way (or otherwise), which is kind of baffling since this issue must have come up before.
Is there an easy way to do this or do I have to resort to reinventing the wheel?
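In case it helps, here is a minimal sketch of the roll-your-own approach: recurse with FindFirstFile/FindNextFile and hand each file to CopyFileEx so a console progress callback still works. Error handling is omitted and the progress routine is a stub; a cross-volume move would be this copy followed by deleting the source tree.

    #include <windows.h>
    #include <string>

    // Stub progress routine; a real one would print percentages to the console.
    static DWORD CALLBACK Progress(LARGE_INTEGER total, LARGE_INTEGER done,
                                   LARGE_INTEGER, LARGE_INTEGER, DWORD, DWORD,
                                   HANDLE, HANDLE, LPVOID)
    {
        return PROGRESS_CONTINUE;
    }

    // Recursively copy srcDir into dstDir, one file at a time.
    static bool CopyTree(const std::wstring& srcDir, const std::wstring& dstDir)
    {
        CreateDirectoryW(dstDir.c_str(), nullptr);

        WIN32_FIND_DATAW fd;
        HANDLE h = FindFirstFileW((srcDir + L"\\*").c_str(), &fd);
        if (h == INVALID_HANDLE_VALUE)
            return false;

        bool ok = true;
        do {
            const std::wstring name = fd.cFileName;
            if (name == L"." || name == L"..")
                continue;
            const std::wstring src = srcDir + L"\\" + name;
            const std::wstring dst = dstDir + L"\\" + name;
            if (fd.dwFileAttributes & FILE_ATTRIBUTE_DIRECTORY)
                ok = CopyTree(src, dst) && ok;   // recurse into subdirectory
            else
                ok = CopyFileExW(src.c_str(), dst.c_str(), Progress,
                                 nullptr, nullptr, 0) && ok;
        } while (FindNextFileW(h, &fd));
        FindClose(h);
        return ok;
    }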

Related

What is an alternative to mandatory file locking on macOS?

I'm writing an app for macOS with the primary goal of managing arbitrary user files in a certain manner. 'Management' includes arbitrarily reading/writing/updating these files. Internally, management is not a discrete event and may include several idle periods; to the user, however, it must appear as a single discrete event.
Note: The term 'user' includes any and all user activity (e.g. via Finder) and user-initiated processes (i.e. other apps opened by the user; not running as root, with privileges similar to those of my own application).
My app does not store these files in an owned container (e.g. sandboxed app container), but rather runs continuously in the background keeping track of these files, monitoring for changes and managing them as necessary.
The duration of this 'management' may vary from a few milliseconds to a few hours.
I'm trying to write a construct (i.e. class / struct) to encapsulate references to these 'hot' files (i.e. files under management). During management, the user must not be able to read, write to, or delete these files, unless the app is explicitly quit (normal quit or force quit, either way).
Is there any way I can "lock" a file, as to prevent user reading/writing/updating and/or even modification of permissions?
Here are two possible solutions:
Copy the file to an undisclosed location, manage it, and overwrite the old file. This is undesirable for multiple reasons: copying is expensive and impractical for large files, the user is not explicitly aware of the management, and it does nothing to prevent other processes from seeing the original file as "free".
Modify file permissions. I'm not sure if this is even possible (please let me know in detail if it is!), but if my process could modify file permissions so as to prevent user access, it would solve the essence of my problem. However, if anything were to prevent my app from 'unlocking' these files (be it through a crash/force-quit etc.), it would leave the files inaccessible to the user.
A third, though not really a solution, would be to simply not attempt to 'lock' any of these files. I could just monitor the files continuously, and alert the user of any failure. I really don't want to do this, hence the question.
The second solution seems quite promising. I can't, however, find any high-level APIs that let me work with file ACLs (access control lists). I'm not even sure whether my understanding of how this would work is correct, so feel free to build on that thought and turn it into a concrete answer.
I'm also curious how Finder seems to know whether files are being used by other processes. Again, I think I know, but I'm not entirely sure, so I'd better ask it here along with the main question.
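On the ACL part specifically: macOS does expose ACLs, though through a C-level rather than a high-level API, via the POSIX.1e-style acl(3) functions in <sys/acl.h> (entries are UUID-qualified, so membership.h is needed to map a uid to a UUID). Purely as a sketch, with error handling omitted, denying a user read/write/delete on a file might look like this:

    #include <sys/acl.h>
    #include <membership.h>   // mbr_uid_to_uuid
    #include <uuid/uuid.h>

    // Set a "deny read/write/delete" ACL entry for `uid` on `path`.
    // Note: acl_set_file replaces the file's ACL; a real implementation
    // would first fetch any existing ACL with acl_get_file and merge.
    static int deny_user_access(const char *path, uid_t uid)
    {
        uuid_t uu;
        if (mbr_uid_to_uuid(uid, uu) != 0)
            return -1;

        acl_t acl = acl_init(1);              // new ACL with room for 1 entry
        acl_entry_t entry;
        acl_create_entry(&acl, &entry);
        acl_set_tag_type(entry, ACL_EXTENDED_DENY);
        acl_set_qualifier(entry, uu);         // who the deny applies to

        acl_permset_t perms;
        acl_get_permset(entry, &perms);
        acl_add_perm(perms, ACL_READ_DATA);
        acl_add_perm(perms, ACL_WRITE_DATA);
        acl_add_perm(perms, ACL_DELETE);

        int rc = acl_set_file(path, ACL_TYPE_EXTENDED, acl);
        acl_free(acl);
        return rc;
    }

This only addresses the mechanics of the second option: the deny entry does persist if the app crashes, but the file's owner or an admin can always clear it again (e.g. with chmod -N), so files are never permanently stranded.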

Alternatives to ShellAPI to get file list and icons

I need to build a file/folder tree with associated file icons and special locations like network computers.
Currently I'm using Shell API to achieve it: SHGetFileInfo, IShellFolder.EnumObjects and other functions.
It works fine most of the time, but occasionally, on customers' machines, it causes various errors, like random access violations deep inside system libraries. Analyzing the bug reports, some of these appear to be caused by third-party shell extensions, which get loaded into my app's address space when the Shell API is used.
I'm considering avoiding the Shell API altogether and doing the job another way. What are other good approaches to building such a folder tree?
If the problem really is due to faulty shell extensions then the only sensible approach, in my view, is to remove those shell extensions. Trying to work with the shell while avoiding the shell API won't lead anywhere useful. In fact, I think the likely outcome is that your alternative code will be less functional. All for the sake of one user who won't fix their broken machine. That's a terrible trade-off.
If explorer is also crashing then that is a clear indication that the problem is indeed due to shell extensions.
Having said all of that, your post makes me suspect that you have had bug reports from multiple clients. That makes your diagnosis much less plausible. The shell API is a complex beast, and it is very plausible that your code is defective in some way. I suspect you may be guilty of diagnosis by wishful thinking: it's very easy, when facing a fault that is hard to reproduce and diagnose, to believe that your code is not to blame. If multiple clients are reporting problems, my bet is that the defect is in your code.
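That said, if per-file icon handlers do turn out to be implicated, one partial mitigation worth testing (not a guaranteed fix) is to request icons with SHGFI_USEFILEATTRIBUTES, so the shell derives the icon from the name and attributes alone and never binds to the actual file; the file name below is just an example:

    #include <windows.h>
    #include <shellapi.h>

    // Fetch a type icon from the file name/attributes alone: with
    // SHGFI_USEFILEATTRIBUTES the shell never opens the actual file,
    // bypassing per-file icon handlers (one class of shell extension).
    HICON GetTypeIcon(const wchar_t* fileName)
    {
        SHFILEINFOW sfi = {};
        if (!SHGetFileInfoW(fileName, FILE_ATTRIBUTE_NORMAL, &sfi, sizeof(sfi),
                            SHGFI_ICON | SHGFI_SMALLICON | SHGFI_USEFILEATTRIBUTES))
            return nullptr;
        return sfi.hIcon;   // caller must DestroyIcon() this when done
    }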

Bash on OSX: How to determine if network file (AFP) is in use?

I've got a bash script that runs on OSX.
It needs to manipulate some files on a network-share (AFP share on a Synology NAS).
Unfortunately those files are sometimes still being written when the script runs.
How do I determine whether the file is in use or not?
The normal method is by using "lsof", but that doesn't seem to work on network files if the other user is coming from another client on the LAN.
I could just attempt to rename the file. I suppose that will fail if the file is in use, but that is far from elegant.
Anybody have a better solution?
This is not a generally solvable problem. The typical solution is to write the file to a temporary location and then move it into the final processing directory, since a move within a filesystem is generally atomic. If you cannot control how or where the file is written, then you are left with heuristics, such as checking whether the file has stopped growing for a while, but none of these are particularly good compared to separating the writing from the enqueuing.
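If you do control the writer, the pattern is simple. A sketch in C++ (assuming both paths are on the same filesystem, e.g. the same share; the reading script then just ignores names ending in ".part"):

    #include <cstdio>
    #include <fstream>
    #include <string>

    // Write the data under a temporary name, then atomically rename it
    // into place; readers never observe a half-written file.
    bool publish(const std::string& finalPath, const char* data, size_t len)
    {
        const std::string tmp = finalPath + ".part";
        {
            std::ofstream out(tmp, std::ios::binary);
            out.write(data, static_cast<std::streamsize>(len));
            if (!out)
                return false;
        } // file flushed and closed here
        return std::rename(tmp.c_str(), finalPath.c_str()) == 0;
    }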
Are the other potential accesses being done by arbitrary programs or can it be assumed that it's being done by other instances of your program running on other clients?
If the file is private to your program, then all instances of your program can participate in a cooperative locking scheme. You might use the lockfile command, for example. Be very sure to clean up your lock files even in the face of signals/exceptions; the trap built-in command can help with that.
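To illustrate the cooperative idea outside of shell, this is the general lock-file pattern (not the lockfile command itself), and note that O_EXCL's atomicity over a network filesystem is protocol-dependent, so treat it as a sketch:

    #include <fcntl.h>
    #include <unistd.h>

    // Try to take a cooperative lock. Creation with O_EXCL is atomic:
    // exactly one instance can create the lock file at a time.
    bool try_lock(const char* lockPath)
    {
        int fd = open(lockPath, O_CREAT | O_EXCL | O_WRONLY, 0644);
        if (fd < 0)
            return false;    // someone else holds the lock (or a real error)
        close(fd);
        return true;         // release with unlink(lockPath), even on signals
    }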

Graceful File Reading without Locking

Whiteboard Overview
[Two whiteboard photos, "Internal" and "Global" (1000 x 750 px, ~130 kB JPEGs hosted on ImageShack), are not included here.]
Additional Information
I should mention that each user (of the client boxes) will be working straight off the /Foo share. Due to the nature of the business, users will never need to see or work on each other's documents concurrently, so conflicts of this nature will never be a problem. Access needs to be as simple as possible for them, which probably means mapping a drive to their respective /Foo/username sub-directory.
Additionally, no one but my applications (in-house and the ones on the server) will be using the FTP directory directly.
Possible Implementations
Unfortunately, it doesn't look like I can use off-the-shelf tools such as WinSCP, because some other logic needs to be intimately tied into the process.
I figure there are two simple ways for me to accomplish the above on the in-house side.
Method one (slow):
Walk the /Foo directory tree every N minutes.
Diff with the previous tree using a combination of timestamps (these can be faked by file-copying tools, but that is not relevant in this case) and checksumming.
Merge changes with off-site FTP server.
Method two:
Register for directory change notifications (e.g., using ReadDirectoryChangesW from the WinAPI, or FileSystemWatcher if using .NET); see the sketch after this list.
Log changes.
Merge changes with off-site FTP server every N minutes.
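For reference, a bare-bones synchronous sketch of that registration step; a production version would use overlapped I/O, and the buffer size and filters here are only illustrative:

    #include <windows.h>
    #include <cstdio>

    // Open the directory and log changes synchronously (blocking).
    void WatchDirectory(const wchar_t* dirPath)
    {
        HANDLE dir = CreateFileW(dirPath, FILE_LIST_DIRECTORY,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                 nullptr, OPEN_EXISTING,
                                 FILE_FLAG_BACKUP_SEMANTICS, nullptr);
        if (dir == INVALID_HANDLE_VALUE)
            return;

        alignas(DWORD) BYTE buf[64 * 1024];   // 64 KB is the max for shares
        DWORD bytes = 0;
        while (ReadDirectoryChangesW(dir, buf, sizeof(buf), TRUE /* recursive */,
                                     FILE_NOTIFY_CHANGE_FILE_NAME |
                                         FILE_NOTIFY_CHANGE_LAST_WRITE,
                                     &bytes, nullptr, nullptr))
        {
            auto* info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(buf);
            for (;;) {
                // FileName is not null-terminated; FileNameLength is in bytes.
                wprintf(L"action %lu: %.*s\n", info->Action,
                        static_cast<int>(info->FileNameLength / sizeof(WCHAR)),
                        info->FileName);
                if (info->NextEntryOffset == 0)
                    break;
                info = reinterpret_cast<FILE_NOTIFY_INFORMATION*>(
                    reinterpret_cast<BYTE*>(info) + info->NextEntryOffset);
            }
        }
        CloseHandle(dir);
    }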
I'll probably end up using something like the second method due to performance considerations.
Problem
Since this synchronization must take place during business hours, the first problem that arises is during the off-site upload stage.
While I'm transferring a file off-site, I effectively need to prevent the users from writing to it (e.g., open it with CreateFile and a share mode of FILE_SHARE_READ, or something similar) while I'm reading from it. The office's internet upstream bandwidth is nowhere near proportionate to the file sizes they'll be working with, so it's quite possible that a user will come back to a file and attempt to modify it while I'm still reading from it.
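Concretely, that open call would deny writers for as long as the handle stays open (writers then get ERROR_SHARING_VIOLATION); the path below is just a placeholder:

    // Read the file while allowing other readers, but denying
    // writes and deletes until this handle is closed.
    HANDLE h = CreateFileW(L"\\\\server\\Foo\\username\\doc.bin",
                           GENERIC_READ,
                           FILE_SHARE_READ,   // no FILE_SHARE_WRITE/DELETE
                           nullptr, OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL, nullptr);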
Possible Solution
The easiest solution to the above problem would be to create a copy of the file(s) in question elsewhere on the file-system and transfer those "snapshots" without disturbance.
The files (some will be binary) that these guys will be working with are relatively small, probably ≤20 MB, so copying (and therefore temporarily locking) them will be almost instant. The chances of them attempting to write to the file in the same instant that I'm copying it should be close to nil.
This solution seems kind of ugly, though, and I'm pretty sure there's a better way to handle this type of problem.
One thing that comes to mind is a file system filter that takes care of the replication and synchronization at the IRP level, kind of like what some antivirus products do. That is overkill for my project, however.
Questions
This is the first time that I've had to deal with this type of problem, so perhaps I'm thinking too much into it.
I'm interested in clean solutions that don't require going overboard with the complexity of their implementations. Perhaps I've missed something in the WinAPI that handles this problem gracefully?
I haven't decided what I'll be writing this in, but I'm comfortable with: C, C++, C#, D, and Perl.
After the discussion in the comments, my proposal would be as follows:
Create a partition on your data server, about 5GB for safety.
Create a Windows Service project in C# that monitors your data drive / location.
When a file has been modified, create a local copy of it on the new partition, preserving the same directory structure.
Create another service that would do the following:
Monitor bandwidth usage.
Monitor file creations on the temporary partition.
Transfer several files at a time (use threading) to your FTP server, respecting current bandwidth usage and decreasing / increasing the number of worker threads depending on network traffic.
Remove the files from the partition that have successfully transferred.
So basically you have your drives:
C: Windows Installation
D: Share Storage
X: Temporary Partition
Then you would have the following services:
LocalMirrorService - Watches D: and copies to X: with the dir structure
TransferClientService - Moves files from X: to ftp server, removes from X:
It also uses multiple threads to move several files at once, and monitors bandwidth.
I would bet that this is the idea you had in mind, but it seems like a reasonable approach as long as you're really good with your application development and are able to create a solid system that handles most issues.
When a user edits a document in Microsoft Word, for instance, the file will change on the share and may be copied to X: even though the user is still working on it. Within Windows there is an API to check whether a file handle is still open; if it is, you can watch for the user actually closing the document, so that all their edits are complete, and only then migrate it to drive X:. A sketch of such a check follows below.
That being said, if the user is working on the document and their PC crashes for some reason, the file handle may not be released until the document is opened again at a later date, thus causing issues.
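The check alluded to above can be approximated without any special API: attempt to open the file with an exclusive share mode and treat a sharing violation as "still in use" (a heuristic sketch, not Word-specific):

    #include <windows.h>

    // Returns true if some process still holds the file open in a way
    // that conflicts with exclusive access (e.g. an open Word document).
    bool IsFileInUse(const wchar_t* path)
    {
        HANDLE h = CreateFileW(path, GENERIC_READ,
                               0 /* no sharing: demand exclusive access */,
                               nullptr, OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL, nullptr);
        if (h == INVALID_HANDLE_VALUE)
            return GetLastError() == ERROR_SHARING_VIOLATION;
        CloseHandle(h);
        return false;
    }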
For anyone in a similar situation (I'm assuming the person who asked the question implemented a solution long ago), I would suggest an implementation of rsync.
rsync.net's Windows Backup Agent does what is described in method 1, and can be run as a service as well (see "Advanced Usage"). Though I'm not entirely sure if it has built-in bandwidth limiting...
Another (probably better) solution that does have bandwidth limiting is Duplicati. It also properly backs up currently-open or locked files. Uses SharpRSync, a managed rsync implementation, for its backend. Open source too, which is always a plus!

Watching a folder using Win32

I'm looking for a straightforward way to watch the contents of a folder using Win32 (minimum target is XP). If possible, it would be nice to use an event-driven approach rather than a polling-type approach. To complicate things, the watched folder may be a network share.
I'm really only interested in capturing "new files". I don't care if I am not informed of renamed or removed files.
Is there an event-driven way, or is polling my only choice when dealing with Win32?
Have you tried FindFirstChangeNotification and FindNextChangeNotification?
FindFirstChangeNotification is the right API here, as Suraj says. I did, however, find when using this (many years ago) that it sometimes failed when watching a network share with an infinite wait on the handle it returns. I simply applied a timeout and re-issued the FFCN every so often, which solved the problem.
I don't know if later OS updates solved this problem, we never went back and checked :-).
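Putting the two answers together, a minimal sketch of the watch loop with that timeout-and-reissue workaround (the 30-second timeout is arbitrary; since the notification doesn't say which file changed, you rescan the directory for new files on each wake-up):

    #include <windows.h>

    // Watch dirPath for file creations/renames/deletions; re-issue the
    // notification periodically to work around stalls on network shares.
    void WatchFolder(const wchar_t* dirPath)
    {
        for (;;) {
            HANDLE h = FindFirstChangeNotificationW(dirPath, FALSE,
                                                    FILE_NOTIFY_CHANGE_FILE_NAME);
            if (h == INVALID_HANDLE_VALUE)
                return;
            while (WaitForSingleObject(h, 30 * 1000) == WAIT_OBJECT_0) {
                // Something changed: rescan the directory and diff against
                // the previous listing to find new files, then keep waiting.
                if (!FindNextChangeNotification(h))
                    break;
            }
            // WAIT_TIMEOUT (or an error): close and re-issue the watch.
            FindCloseChangeNotification(h);
        }
    }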
