How to design a cloud-based filesystem with osxFUSE on macOS

Currently I'm working on a FUSE-based filesystem that loads files from the cloud and stores them in a local cache.
On the user side I'm using an Objective-C framework that wraps the high-level FUSE C API.
The user can see all of their cloud files in the listing of the mounted drive. These are just 'ghost' files, not actually stored in the local cache.
When the user double-clicks a file, the download starts.
Under the hood, I block every 'readFileAtPath' call from FUSE until the requested chunk of file data has been loaded.
That leads to various freezing issues in Finder and other apps.
For example, while a file is loading and read calls are blocked, 'contentsOfDirectoryAtPath' is also blocked - the user can't browse the drive; it appears empty, or folder contents are incomplete.
I can't just respond with an error when the user clicks a file; I have to download it and provide its content.
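Conceptually, the blocking read looks something like this (a simplified, self-contained Swift sketch; the real code is Objective-C against the osxFUSE framework, and the cache helper here is an illustrative stand-in, not part of any framework):

import Foundation

// Simplified sketch of the blocking approach described above. CloudCache and
// its methods are illustrative stand-ins for the real cache layer.
final class CloudCache {
    private let downloadQueue = DispatchQueue(label: "cloud.download")
    private var files: [String: Data] = [:]
    private let lock = NSLock()

    // Starts (or simulates) a download and calls `completion` once the
    // requested byte range is available in the local cache.
    func ensureRange(path: String, offset: Int64, length: Int,
                     completion: @escaping () -> Void) {
        downloadQueue.async {
            Thread.sleep(forTimeInterval: 1.0)   // pretend to fetch from the cloud
            self.lock.lock()
            self.files[path] = Data(repeating: 0x41, count: 4096)
            self.lock.unlock()
            completion()
        }
    }

    // Copies the cached bytes for the requested range into the FUSE buffer.
    func copyBytes(path: String, offset: Int64, length: Int,
                   into buffer: UnsafeMutableRawPointer) -> Int {
        lock.lock(); defer { lock.unlock() }
        guard let data = files[path], Int(offset) < data.count else { return 0 }
        let start = Int(offset)
        let count = min(length, data.count - start)
        data.withUnsafeBytes { raw in
            buffer.copyMemory(from: raw.baseAddress!.advanced(by: start), byteCount: count)
        }
        return count
    }
}

// The FUSE read callback, conceptually: block the calling thread until the
// requested range has been downloaded, then serve it from the cache.
func readFile(cache: CloudCache, path: String, buffer: UnsafeMutableRawPointer,
              size: Int, offset: Int64) -> Int {
    let done = DispatchSemaphore(value: 0)
    cache.ensureRange(path: path, offset: offset, length: size) { done.signal() }
    done.wait()   // this blocking is what also stalls directory listings in Finder
    return cache.copyBytes(path: path, offset: offset, length: size, into: buffer)
}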
Now I'm wondering: is this the correct way to implement such functionality?
Has anyone with similar requirements resolved these issues?
Thanks for any ideas.

Related

Sharing Core Data between a Network Extension and the containing application (macOS)

I am attempting to share Core Data between a Network Extension and the containing application on macOS. The Network Extension and containing application both have the same App Group entitlement, and I use the following to get the URL for the storage:
let url = FileManager.default.containerURL(forSecurityApplicationGroupIdentifier: "my-group-name")
For the containing application, the resulting URL is of the form:
/Users/***/Library/Group Containers/my-group-name
But for the Network Extension, the resulting URL is different:
/private/var/root/Library/Group Containers/my-group-name
Both the Network Extension and containing application can access their respective Core Data stores, but, of course, they are not shared since the URLs are different.
What am I missing? How is Core Data supposed to be shared between a Network Extension and the containing application?
And, if it matters, I am creating a DNSProxy type extension.
It turns out that a shared container can only work between two processes running as the same user. In this case the Network Extension is running as root while the containing app is running as a normal user.
This is why the containing app wants to use /Users/***/Library/... for the data and the Network Extension wants to use /private/var/root/Library/.... Not going to be able to share.
The suggested approach is to use an IPC mechanism to communicate between the Network Extension and the containing app.
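For example, a minimal sketch of the app side of an XPC connection (the protocol, Mach service name, and payload are assumptions; the extension would need to expose a matching NSXPCListener, and how that listener is registered depends on how the extension is packaged):

import Foundation

// Hypothetical protocol shared between the containing app and the extension;
// the method and payload type are assumptions, not an existing API.
@objc protocol RecordSync {
    func save(_ record: Data, reply: @escaping (Bool) -> Void)
}

// Containing-app side: connect to a listener exposed by the extension.
// The Mach service name is a placeholder.
let connection = NSXPCConnection(machServiceName: "com.example.my-group-name.xpc",
                                 options: [])   // may need .privileged for a root process
connection.remoteObjectInterface = NSXPCInterface(with: RecordSync.self)
connection.resume()

let proxy = connection.remoteObjectProxyWithErrorHandler { error in
    print("XPC error: \(error)")
} as? RecordSync

proxy?.save(Data("hello".utf8)) { stored in
    print("extension stored the record:", stored)
}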

Load Balanced File Access without the Cloud

I have an application made up of several microservices so that independent pieces can be scaled up/down as needed (running as containers in Docker Swarm, and maybe eventually Kubernetes). I'm looking for suggestions for controlling access to a shared file system from multiple containers, as I suspect there is some system or software out there for this that I just don't know how to search for.
Data in the form of physical files will come into a single location. I need to have one microservice read each file that comes in and basically upload it to a database.
What solutions exist, if any, that would prevent 2 or more instances of the same microservice from fighting over each file?
This will need to work in a 100% offline environment so cloud storage is not an option. Ideally it would work similar to how a message queue works in that the next available consumer would pick up the next available file from this shared file system.
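For example, something along the lines of this sketch (claiming each file with an atomic rename; Swift purely for illustration, with placeholder paths) is the kind of behavior I'm after:

import Foundation

// Illustrative sketch of the behavior described above: each consumer "claims" a
// file by atomically renaming it into its own work directory, so only one
// instance ever processes it. Paths and the consumer directory are placeholders,
// and it assumes rename is atomic on the shared filesystem in use.
let fm = FileManager.default
let incomingDir = URL(fileURLWithPath: "/shared/incoming")
let workDir = URL(fileURLWithPath: "/shared/claimed")
    .appendingPathComponent(ProcessInfo.processInfo.globallyUniqueString)

try? fm.createDirectory(at: workDir, withIntermediateDirectories: true)

for file in (try? fm.contentsOfDirectory(at: incomingDir, includingPropertiesForKeys: nil)) ?? [] {
    let claimed = workDir.appendingPathComponent(file.lastPathComponent)
    do {
        // moveItem is typically a rename(2) for same-volume moves; if another
        // consumer already claimed the file, the source is gone and this throws.
        try fm.moveItem(at: file, to: claimed)
        print("claimed \(claimed.lastPathComponent), reading and uploading to the database...")
    } catch {
        continue   // another instance got this file first
    }
}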

Performance of a Java application rendering video files

I have a Java/J2EE application deployed in a Tomcat container on a Windows server. The application is a training portal where the training files, such as PDF/PPT/Flash/MP4 files, are read from a share path. When the user clicks a training link, the associated file is downloaded from the share path to the client machine and opened.
If the user clicks MP4/Flash/PDF files, they take too long to open.
Is there anything at the application level we need to configure? Or is it a load configuration on the server? Or is it something that needs attention in the WAN settings?
I am unable to find a solution for this issue.
Please post your thoughts.
I'm not 100% sure because there aren't many details, but I'm 90% sure that the application code is not the main problem.
For example:
"If the user clicks MP4/Flash/PDF files, they take too long to open."
A PDF is basically just a string. Flash is a client-side technology. And I'm pretty sure that you just send a stream to a video player in order to play an MP4. So the server is supposed to just take the file and send it. The server is probably not the source of your problem, if we assume that it can handle the number of requests.
So it's about your network: it is too slow for some reason. And it's difficult to be more specific without more details.
Regards.

Core Data concurrency with app extensions

I'm developing an app extension that needs to share data with the containing app. I created an app group and moved the Core Data store of the main app to that folder. From the extension I can create the managed object context and save data to the store, and I can also access it from the containing app. Now I have two independent applications accessing the same Core Data store. This sounds like a recipe for disaster to me. Is what I have set up sufficient for sending data from the extension to the containing app, or should I look for another way?
In this situation you'll have two entirely independent Core Data stacks accessing the same persistent store file.
Assuming that you're using SQLite, you're fine, at least as far as data integrity. Core Data uses SQLite transactions to save changes and SQLite is fine with multiple processes using the same file. Neither process will corrupt data for the other or mess up the file.
You will have to deal with keeping data current in the app. For example if someone uses the share extension to create new data while the app is running. You won't get anything like NSManagedObjectContextDidSaveNotification in this case. You'll need to find your own way to ensure you get any new updates.
In many cases you can make this almost trivial-- listen for UIApplicationDidBecomeActiveNotification, which will be posted any time your app comes to the foreground. When you get it, check the persistent store for new data and load it.
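For example, a minimal sketch of that approach (the NSPersistentContainer setup and the "Note" entity are placeholders):

import UIKit
import CoreData

// Sketch of the "check for new data when the app becomes active" idea.
// `container` stands in for however the app builds its Core Data stack.
final class SharedStoreWatcher {
    private let container: NSPersistentContainer
    private var observer: NSObjectProtocol?

    init(container: NSPersistentContainer) {
        self.container = container
        observer = NotificationCenter.default.addObserver(
            forName: UIApplication.didBecomeActiveNotification,
            object: nil, queue: .main) { [weak self] _ in
                self?.checkForNewData()
        }
    }

    deinit {
        if let observer = observer { NotificationCenter.default.removeObserver(observer) }
    }

    private func checkForNewData() {
        let context = container.viewContext
        // Refresh registered objects so stale values aren't reused, then
        // re-run the fetch; records saved by the extension show up here.
        context.refreshAllObjects()
        let request = NSFetchRequest<NSManagedObject>(entityName: "Note")
        if let notes = try? context.fetch(request) {
            print("store now has \(notes.count) notes")
            // ...update the UI / in-memory model as needed...
        }
    }
}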
If you want to get a little more elegant, you could use something like MMWormhole for a sort-of file based IPC between the app and the extension. Then the extension can explicitly inform the app that there's new data, and the app can respond.
Very interesting answer from Tom Harrington. However, I need to mention something about MMWormhole: I've found that MMWormhole uses NSFileCoordinator, and Apple says:
"Using file coordination in an app extension to access a container shared with its containing app may result in a deadlock in iOS versions 8.1.x and earlier."
You can read here what Apple suggests for safe save operations:
"You can use CFPreferences, atomic safe save operations on flat files, or SQLite or Core Data to share data in a group container between multiple processes even if one is suspended mid transaction."
Here is the link to the Apple Technical Note TN2408.
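For example, the "atomic safe save on flat files" option can be as small as this sketch (the group identifier and file name are placeholders):

import Foundation

// Sketch of an atomic safe save of a flat file in the shared group container.
let groupID = "group.com.example.shared"   // placeholder

if let container = FileManager.default
    .containerURL(forSecurityApplicationGroupIdentifier: groupID) {

    let fileURL = container.appendingPathComponent("shared-state.json")
    do {
        // .atomic writes to a temporary file and renames it into place, so the
        // other process never observes a half-written file.
        let payload = try JSONSerialization.data(
            withJSONObject: ["lastSync": Date().timeIntervalSince1970])
        try payload.write(to: fileURL, options: [.atomic])

        // Reading side (either process):
        let restored = try Data(contentsOf: fileURL)
        print("read \(restored.count) bytes from the group container")
    } catch {
        print("shared file error: \(error)")
    }
}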

Downloading large files to PC from OAS Server

We have an Oracle 10g forms application running on a Solaris OAS server, with the forms displaying in IE. Part of the application involves uploading and downloading files (Word docs and PDFs, mainly) from the PC to the OAS server, using Oracle's webutil utility.
The problem is with large files (anything over 25 MB or so): it takes a long time, sometimes many minutes. Uploading seems to work, even with large files. Downloading large files, though, will cause it to error out partway through the download.
I've been testing with a 189 MB file in our development system. Using WEBUTIL_FILE_TRANSFER.Client_To_DB (or Client_To_DB_with_Progress), the download would error out after about 24 MB. I switched to WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress and finally got the entire file to download, but it took 22 minutes. Doing it without the progress bar got it down to 18 minutes, but that's still too long.
I can display files in the browser, and my test file displayed in about 5 seconds, but many files need to be downloaded for editing and then re-uploaded.
Any thoughts on how to accomplish this uploading and downloading faster? At this point, I'm open to almost any idea, whether it uses webutil or not. Solutions that are at least somewhat native to Oracle are preferred, but I'm open to suggestions.
Thanks,
AndyDan
This may be totally out to lunch, but since you're looking for any thoughts that might help, here are mine.
First of all, I'm assuming that the actual editing of the files happens outside the browser, and that you're just looking for a better way to get the files back and forth.
In that case, one option I've used in the past is just to route around the web application using Apache, or any other vanilla web server you like. For downloading, create a unique file session token, remember it in the web application, and place a copy of the file, named with the token (e.g. <unique token>.doc), in a download directory visible to Apache. Then provide a link to the file that will be served via Apache.
For upload, you have a couple of options. One is to use the mechanism you've got, then when a file is uploaded, you just have to match on the token in the name to patch the file back into your archive. Alternately, you could create a very simple file upload form separate from your application that will upload the file to a temp directory via Apache, then route the user back into your application and provide the token in the URL HTTP GET-style or else in a cookie.
Before you go to all that trouble, you'll want to make sure that your vanilla web server will provide better upload and download speed and reliability than your current solution, but it should.
As an aside, I don't know whether the application server you're using provides HTTP compression, but if it does, you should make sure it's enabled and working. This is probably the best single thing you can do to increase transfer speed of large files, assuming they're fairly compressible. If your application server doesn't support it, then most any vanilla web server will.
I hope that helps.
I ended up using CLIENT_HOST to call an FTP command to download the files. My 189MB test file took 20-22 minutes to download using WEBUTIL_FILE_TRANSFER.URL_To_Client_With_Progress, and only about 20 seconds using FTP. It's not the best solution because it leaves the FTP password exposed on the PC temporarily, but only for as long as the download takes, and even then the user would have to know where to find it.
So, we're implementing this for now, and looking for a more secure but still performant long term solution.
