Fastest way to move files within a remote computer from a Cocoa application?

I have files stored in a shared directory on one computer and a Cocoa Application running on another computer on the same LAN.
I want the application to move files within the shared directory.
I’m using -[NSFileManager copyItemAtPath:toPath:error:], but it sometimes seems extremely slow, regardless of file size. Why would that operation take so much longer than performing the same copy directly on the computer that hosts the shared directory?

My guess (I don't know this for sure) is that NSFileManager first downloads the file to be copied and then re-uploads it under the new name; the last thing it does is remove the original file. The downloading and uploading are, of course, what take the time.
The reason for this procedure is that most network file protocols have no server-side 'copy' command, so the client has to do all the work itself via the procedure described above.
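If the goal really is a move rather than a copy, a rename on the server side avoids the round trip entirely: within one volume, a move is a single rename operation that the file server performs itself. A minimal Swift sketch, assuming the share is mounted at a hypothetical path /Volumes/Shared:

```swift
import Foundation

// Hypothetical paths on a mounted network share; adjust to your setup.
let source = URL(fileURLWithPath: "/Volumes/Shared/incoming/report.pdf")
let destination = URL(fileURLWithPath: "/Volumes/Shared/archive/report.pdf")

do {
    // moveItem(at:to:) within a single volume is a rename, which the
    // file server executes locally -- no file data crosses the network.
    try FileManager.default.moveItem(at: source, to: destination)
} catch {
    // A cross-volume move (or any copy) falls back to read-and-write,
    // which is where the download/upload round trip shows up.
    print("Move failed: \(error)")
}
```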

Related

Detect incompatible file location (iCloud, Dropbox, shared folders) for custom file format

I’m designing a custom file format. It will be either a monolithic file or a folder of smaller files. The total size is rather large, and there is no need to load everything into memory at once; doing so would also be slower than necessary. One of the files may or may not be a database file, as running SQL queries against it would be useful.
The user can have many such files, and might want to share them with others even if uploading or downloading takes some time.
Conceptually I run into issues with shared network folders, Dropbox, iCloud, etc. Such services can cause sync conflicts if the file is not loaded entirely into memory, and a database file can become corrupted.
One solution is to prohibit storing the file on such services, either by keeping the files in a folder under the user's Library or by forcing the user to pick a local folder.
Using a folder in Library means recreating a file-navigation system like Finder, and it also limits the user's choice of where the files end up. Restricting the location to a local folder seems the better choice.
Is there a way to programmatically detect if a folder is local?
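There is no single API that covers every sync service, but the URL resource values get part of the way there: volumeIsLocalKey tells you whether the containing volume is local, and isUbiquitousItemKey tells you whether the item is managed by iCloud. Note that Dropbox-style folders are ordinary local directories synced by a separate process, so detecting them would require extra heuristics beyond these keys. A minimal sketch:

```swift
import Foundation

/// Returns true when the folder sits on a local volume and is not
/// managed by iCloud. Caveat: this does NOT detect Dropbox-style
/// folders, which are plain local directories synced by another process.
func isPlainLocalFolder(_ url: URL) -> Bool {
    do {
        let values = try url.resourceValues(forKeys: [
            .volumeIsLocalKey,    // false for network mounts
            .isUbiquitousItemKey, // true for iCloud-managed items
        ])
        let isLocalVolume = values.volumeIsLocal ?? false
        let isICloudItem = values.isUbiquitousItem ?? false
        return isLocalVolume && !isICloudItem
    } catch {
        return false
    }
}
```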

Is there a way to have multiple files with the same backing data in a macOS FileProvider extension?

I'm creating a macOS FileProvider extension for a remote document storage system (along the lines of Google Drive) in which a single document can be shared with multiple folders.
For example, Document1.pdf can simultaneously exist in Folder A and Folder B because it's shared with both. In my FileProvider extension, that means the file should be accessible in both folders:
Folder A/Document1.pdf
Folder B/Document1.pdf
But the file provider extension treats those as two completely separate files: if you download one of them and then try to open the other, it re-downloads the other one, effectively doubling the space used on the user's disk and consuming the network connection again.
I'm looking for a way to tell the FileProviderItem what is the backing data for the given file, and thus solve problems such as:
If the user downloads a file in one location, I would ideally tell the FileProvider extension that the same document in all the other locations is now downloaded as well (the cloud icon should disappear from all of them).
Some approaches I considered:
1. I thought of using symbolic links as part of a solution, but I don't think that's actually possible.
2. When the user tries to open a non-downloaded file, the fetchContents(for:) callback is invoked. Once the file is downloaded, I would ideally notify all other copies of the same document that they are now downloaded, e.g. by updating the isDownloaded property of the NSFileProviderItem, but that doesn't seem to work. And even if it did, I still couldn't tell a file what its backing data file should be.
3. By turning off the Sandbox capability, I guess I could, when the user tries to download or open a file that has already been downloaded in another location, immediately report it as downloaded and provide a copy of the already-downloaded file as its data (a sketch of this idea follows the list). But there are two drawbacks:
3.1. I would have to turn off the Sandbox capability, because I want to access the file in the FileProvider path directly.
3.2. The system would still use disk space for each file. So if I have the same document in multiple folders, the extension keeps all those copies, with no way to tell it that all those files share one backing data file somewhere in the extension's container.
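For what it's worth, approach 3 can be approximated without leaving the sandbox by keeping one content file per document in the extension's own container and serving every location of that document from it in fetchContents. In the sketch below, documentID(for:), cacheDirectory, downloadDocument, and makeItem(for:) are hypothetical helpers; only the callback signature comes from NSFileProviderReplicatedExtension:

```swift
import FileProvider

func fetchContents(for itemIdentifier: NSFileProviderItemIdentifier,
                   version requestedVersion: NSFileProviderItemVersion?,
                   request: NSFileProviderRequest,
                   completionHandler: @escaping (URL?, NSFileProviderItem?, Error?) -> Void) -> Progress {
    // Hypothetical: map the item to the shared document it represents,
    // so "Folder A/Document1.pdf" and "Folder B/Document1.pdf" resolve
    // to the same cache entry.
    let docID = documentID(for: itemIdentifier)
    let cached = cacheDirectory.appendingPathComponent(docID)

    if FileManager.default.fileExists(atPath: cached.path) {
        // Already fetched for another location: serve the cached bytes
        // instead of downloading the same document again.
        completionHandler(cached, makeItem(for: itemIdentifier), nil)
        return Progress(totalUnitCount: 1)
    }

    // First request for this document anywhere: download once, keyed by
    // document ID rather than by item identifier.
    return downloadDocument(docID, to: cached) { error in
        completionHandler(error == nil ? cached : nil,
                          makeItem(for: itemIdentifier), error)
    }
}
```

This avoids the second download, but not the duplicated disk usage: the system still keeps its own copy of the returned contents per location, which is exactly drawback 3.2.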

Appcelerator with expansion files

Has anyone successfully used expansion files with Appcelerator? I have found the module that is supposed to let us work with them; however, I am running into the problem of the .obb file being downloaded directly from the Play Store and then being downloaded again by the module. Aside from that, I can't seem to get access to any of the files contained within the .obb using the module.
I have heard all of the woes of having a big app, so please don't just tell me to make a smaller app; my client has a large "library" that they want installed directly in the app. It consists of HTML files that reference JavaScript files and images through relative paths.
Are expansion files even the way to go with this? Should I simply zip up my files, download them afterwards, unpack them, and access them through the file system? I am just looking for a way to get these large files onto the device and access them as if they were in the app's resources directory.
Any help would be appreciated. Thanks!
I have an app that needs over 300 PNG images and text files (to populate a database with), and I could not get the app small enough to put on the Play Store. What I ended up doing was creating a barebones app (enough to get the user started) and then downloading the files on startup. I didn't bother zipping everything (the data is constantly being updated), but if your content is fairly static, you could zip it. Once the download finishes successfully and the data is installed, the app sets an app property (Ti.App.Properties.setInt): 0 means the download never ran, 1 means a partial download, and 2 means the download is installed. (You can do this however you want, but that's what I did.)
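The same first-launch flag pattern, sketched in Swift with UserDefaults purely for illustration (the answer above uses Titanium's Ti.App.Properties; the property name and download routine here are hypothetical):

```swift
import Foundation

// Download states mirroring the answer's 0/1/2 scheme.
enum AssetState: Int {
    case neverRan = 0   // no download attempted yet
    case partial = 1    // download started but not completed
    case installed = 2  // assets downloaded and installed
}

// Hypothetical stand-in for the real download-and-unpack routine.
func downloadAssets(completion: (Bool) -> Void) { completion(true) }

let defaults = UserDefaults.standard
let key = "assetDownloadState" // hypothetical property name

let state = AssetState(rawValue: defaults.integer(forKey: key)) ?? .neverRan
switch state {
case .installed:
    break // assets are already on disk; nothing to do
case .neverRan, .partial:
    defaults.set(AssetState.partial.rawValue, forKey: key)
    downloadAssets { success in
        if success {
            defaults.set(AssetState.installed.rawValue, forKey: key)
        }
    }
}
```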

How Windows differentiates between Copied files and Created files

I am looking for a bit of advice on how the Windows file system differentiates between files that are copied (copied and pasted from another location) and files that are created (a new file created in a folder).
A bit of background so this makes more sense: I have an application that is used to move files. The application monitors a directory, and when a file is placed in the directory it moves it elsewhere. However, I am having issues where the application will not pick up a file that is created within the monitored directory, but will pick up files that were created elsewhere and copied into it.
Any advice on how Windows differentiates, or if it does at all, would be greatly appreciated.
This is running on Microsoft Windows Server 2008 R2 Standard. Unfortunately I can't dig into the code and see what is going on under the hood, so I need to get an idea of what difference, if any, there would be.
Filesystems have no notion of a 'copy' operation. Any copy is a sequence of file open/read/write/close operations, and the same applies to a move between different filesystems. A move within the same filesystem, though, is native to the filesystem and can be done with a single command.
Now, about your problem: most likely you catch the creation of the file (before the data is written), and by the time your application reacts, the file is still open for writing. So you need to wait until the file is closed.
Depending on how you do the monitoring, that waiting is done in different ways. With filesystem filter drivers you wait for the file-close operation. With .NET's FileSystemWatcher there's no way to track the close operation, but I have seen a couple of workarounds here on StackOverflow (I don't have a link, though, sorry).
(Screenshots: the properties of a file as originally created on the D: drive, and of the same file after being copied to the E: drive.)
As you can see, the file copied to the E: drive has a creation time of the moment it was copied, while its modification time is carried over from the file's last modification in its previous location.
So I would say this illustrates how Windows differentiates between copied files and created files.
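Reading those two timestamps programmatically is straightforward; here is the check expressed in Swift for illustration. Note that the "creation later than modification means copied" heuristic comes from the Windows behavior described above and may not hold on other platforms:

```swift
import Foundation

// Compare the two timestamps from the screenshots above: on Windows,
// a file whose creation time is later than its modification time was
// likely copied into place rather than created there.
do {
    let attrs = try FileManager.default.attributesOfItem(atPath: "/path/to/file")
    if let created = attrs[.creationDate] as? Date,
       let modified = attrs[.modificationDate] as? Date {
        let probablyCopied = created > modified
        print("created: \(created), modified: \(modified), copied? \(probablyCopied)")
    }
} catch {
    print("Could not read attributes: \(error)")
}
```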

Renaming A Running Process' File Image On Windows

I have a Windows service application on Vista SP1 and I've found that users are renaming its executable file (while it's running) and then rebooting, thus causing it to fail to start on next bootup because the service manager can no longer find the exe file since it's been renamed.
I seem to recall that with older versions of Windows you couldn't do this because the OS placed a lock on the file. Even with Vista SP1 I still cannot copy over the existing file while it's running - Windows reports that the file is in use - which makes sense. So why should I be allowed to rename it? What happens if Windows needs to page in more code from the exe, but the file has been renamed since the process started? I ran Process Monitor while renaming the exe file, but it didn't report anything strange and just logged the rename like it would for any other file.
Does anyone know what's going on behind the scenes here? It seems counterintuitive that Windows would allow a running process' filename (or its dependent DLLs) to be changed. What am I missing?
Your concept is wrong: the filename is not the center of the file-I/O universe - the handle to the open file is. The file is not moved to a different section of the disk when you rename it; it's still in the same place, and the OS's internal data structure for the open file still points to the same place. The bottom line is that your observations are correct: you can rename a running program without causing problems, and you can create a new file with the same name as the running program once you've renamed it. This is actually useful behavior if you want to update software while it is running.
As long as the file is still there, Windows can still read from it - it's the underlying file that matters, not its name.
I can happily rename running executables on my XP machine.
The OS keeps an open handle to the .exe file. Renaming the file simply changes some filesystem metadata about it, without invalidating open handles. So when the OS goes to page in more code, it just uses the file handle it already has open.
Replacing the file (writing over its contents) is another matter entirely, and I'm guessing the OS opens the file with the FILE_SHARE_WRITE flag unset, so no other process can write to the .exe file.
This might be a stupid question, but why do users have access to rename the file if they are not supposed to rename it? That said, it's allowed because, as the good answers point out, the open handle to the file isn't invalidated until the application exits. There are some uses for this as well, even though I'm not convinced that updating an application by renaming its file is a good practice.
You might consider having your service listen to changes to the directory that your service is installed in. If it detects a rename, then it could rename itself back to what it's supposed to be.
There are two aspects to the notion of a file here:
The data on the disk - that's the actual file.
The name (there could be several, or none) by which you can refer to that data - the directory entries.
What you are renaming is the directory entry, which still references the same data. Windows doesn't care about your doing so, as it still can access the data when it needs to. The running process is mapped to the data, not the name.
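The same principle is easy to demonstrate with POSIX semantics on macOS (Windows behaves analogously for renames, as described above): a handle opened before the rename keeps reading the same data afterwards. A small self-contained Swift sketch:

```swift
import Foundation

let dir = FileManager.default.temporaryDirectory
let oldURL = dir.appendingPathComponent("original.txt")
let newURL = dir.appendingPathComponent("renamed.txt")

try "hello".write(to: oldURL, atomically: true, encoding: .utf8)

// Open a handle to the data via its current name...
let handle = try FileHandle(forReadingFrom: oldURL)

// ...rename the directory entry while the handle stays open...
try FileManager.default.moveItem(at: oldURL, to: newURL)

// ...and the handle still reads the same bytes: it references the
// underlying file, not its name.
let data = handle.readDataToEndOfFile()
print(String(data: data, encoding: .utf8) ?? "") // prints "hello"
```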
