OSXFUSE - what exactly does the "local" mount option mean?

I've implemented an OSXFUSE-based file system. It works fine on 10.8, but on Mavericks MS Word opens existing documents as blank, even though I am, apparently, returning the correct data: I can see the contents in the preview icon, and if I copy a file to a real hard drive and open it there, it opens fine.
This issue is fixed on Mavericks if I mount my filesystem with the "local" flag. However, using this flag introduces other problems; for example, it seems to cause Finder to do more aggressive caching, so some files are not visible in Finder (although I can ls them in a terminal).
Ideally I want to be able to mount the filesystem without this "local" flag (my implementation stores files on the network, so passing this flag looks wrong), but the problem with blank Word documents really puzzles me.

We have been able to track down the problem to - wait for it - Google Chrome. When Google Chrome is running while the volume is mounted, the problem appears. If Google Chrome is not running, Word/Excel/etc. files open just fine.
We've been in contact with Benjamin (OSXFUSE developer). Please also see his answer regarding this issue on the OSXFUSE mailing list:
https://groups.google.com/d/msg/osxfuse-group/URlw-n-Qakg/bLw2fHHDe7sJ
So far I have not found any bugs in osxfuse that might explain this behavior. The odd thing is that the files are not corrupted or empty. After copying the files to another volume they open just fine. Using LibreOffice to open the file on the FUSE volume works, too.
Chrome and Office seem to be based on the Carbon framework (which has been deprecated since Mountain Lion). I believe the issue is somehow related to Carbon, since non-Carbon apps do not seem to be affected. Every time a volume is mounted, Chrome queries the volume’s capabilities and attributes (and maybe more). As far as I can tell, all these file system operations return successfully without any errors. But from this point on, Office will fail to open documents.
In my opinion the two most likely reasons for this are:
osxfuse might break the VFS file system contract on Mavericks. I’ve been looking into this for some time now but I have not found any clues supporting this.
There might be a bug in the Carbon/CarbonCore framework. The odd thing is that there are no issues when using the stock network file systems afp or smb.
The two possible "fixes" (or rather "workarounds") for this issue seem to be (for now):
Use the "local" mount option (which might introduce other problems and is generally not recommend to use)
Do not use the "volname" mount option. The problem seems to occur only when the "volname" mount option is used. If no custom volume name is set, the problem seems not to occur and Excel/Word/etc. files open just fine, regardless of whether Google Chrome was running at mount time.
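For illustration, here is roughly how the variants look when mounting (a sketch; the filesystem binary myfs, the mountpoint, and the volume name are placeholders, using standard osxfuse -o options):
./myfs /Volumes/myfs -o volname=MyVolume   # custom volume name: triggers the Office issue if Chrome is running
./myfs /Volumes/myfs                       # default volume name: documents open fine
./myfs /Volumes/myfs -o local              # workaround, but invites aggressive Finder caching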

I've seen the same, and likewise "local" is not an option for me. I have similar problems with Photoshop.
Some findings from my implementation:
The problem doesn't occur on first run after reboots.
The problem begins occurring after program exit.
I worked around this by manually unmounting (and waiting a few seconds) before exiting my program; see the sketch after this list. If the unmount succeeds, the mount performs fine again on the next run.
If the program terminates without unmounting, or unmounting fails (file in use, etc.), then the volume's read access is borked in Word/Photoshop on the next mount.
Rebooting resolves the issue.
Does this match what you're seeing?
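A minimal sketch of the unmount-before-exit step (assuming the volume is mounted at /Volumes/myfs; diskutil is the stock macOS tool):
diskutil unmount /Volumes/myfs || umount /Volumes/myfs   # ask nicely, then fall back
sleep 3   # give the kernel a few seconds to settle before the process exits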

Related

Why does running a clang compiled executable on a network drive, hang all subsequent executions of compiled executables?

I'm perplexed by this one and not sure what's relevant, so I will include all the context:
MacBook Pro with an M1 Pro running macOS 12.6.
Apple clang version 14.0.0, freshly installed by deleting DeveloperTools folder and running xcode-select --install.
Using zsh in Terminal.
Network share mounted using the no-configuration Finder method (seems to use standard SMB, but authenticates with my Apple ID)
Network share is my home directory on an iMac with a Core i5 running macOS 11.6.8.
Update: I also tried the root directory and the tmp directory, to eliminate one category of doubt. Same result.
The minimum repeatable example of the issue I've managed to find is:
Use gcc from Apple's Developer Tools to compile a “Hello World” C application (originally discovered using ghc to compile Haskell; the effect is the same). See the minimal program after this list.
Run the compiled executable. No surprises.
cd to the mounted network drive.
Do the same thing there - the compiled executable hangs! First surprise, but relatively minor.
Return to the local machine. Original compiled executable still runs fine.
Use the Developer Tools to compile anything, including the original source - the compiled executable on the local machine now hangs!
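For reference, the test program is nothing more exotic than the canonical example (hello.c is a placeholder name):

#include <stdio.h>

int main(void) {
    /* when the bug triggers, the process hangs instead of printing */
    printf("Hello, world!\n");
    return 0;
}

Compiled and run with:
gcc hello.c -o hello && ./hello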
I've created an asciinema recording of the MRE showing the key part of the transcript.
I’ve tried killing processes, checking lsof, unmounting the drive, logging in and out, checking the PATH, etc. Nothing gets me back to a working state short of a reboot.
Some more troubleshooting data:
gcc -v is identical for both executables, except for -fdebug-compilation-dir (set to cwd) and the name of the object file (randomly generated).
Just performing the compilation doesn't trigger the issue - running the networked executable does.
Trawling through the voluminous Console log reveals nothing relevant.
system.log shows no entries around the time of the issue.
lsof and ps -axww show reams and reams of output that is hard to spot patterns in, but I'm pretty sure there are no significant before/after differences.
I left the hung process running on the local machine overnight, and there's no change the next day.
Have I triggered some sandboxing or security fault and am being protected from disastrous consequences? Or is this some clang/LLVM-related quirk I'm not familiar with? Or, given that ghc using its native code generator seems to have the same result, is this a bug in the way stdout is provided to executables? I'm at a loss!
Oh boy, avoiding Apple ID authentication of the network share fixed this for me.
I forced Finder to not use its magic no-configuration Apple ID login method, by opening the Location in Finder, clicking the "Disconnect" button and then clicking the "Connect As..." button that appears in its place. If I choose "Registered User" and use my username and password, I can then execute exactly the same commands (since the mount name ends up being the same) and execution works without an issue. I can continue to compile and execute to my heart's content.
That the Apple ID method is being used in the first place is not obvious (in true minimal design fashion), but subtly indicated at the top of the Finder window as "Connected as ". The only obvious difference this makes is the username shown in mount:
Apple ID:
//com.apple.idms.appleid.prd.<UUID>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
"Registered User":
//<USERNAME>#<HOSTNAME>._smb._tcp.local/<SHARE> on /Volumes/<SHARE> (smbfs, nodev, nosuid, mounted by <USERNAME>)
Obviously something far more significant is different, given the fundamental impact, but it's not at all clear to me what that is. So at this stage, this answer is just a workaround to a nasty bug.
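If you prefer the terminal to Finder clicks, the equivalent registered-user mount can presumably be made with mount_smbfs (a sketch; substitute your own share details, with placeholders matching the mount output above):
mkdir -p /Volumes/<SHARE>
mount_smbfs //<USERNAME>@<HOSTNAME>/<SHARE> /Volumes/<SHARE>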

"File exists, but cannot be read", but third try is a charm (with emacs, os-x Catalina 10.15.6, dropbox smart sync)

I try to open a file using emacs c-x c-f /Users/fred/Dropbox/foo/bar/bam/baz.txt.
In the mini buffer it says...
"File exists, but cannot be read".
...Next I do m-x revert buffer. Now in the mini buffer it says....
"Opening input file: Input/output error, /Users/fred/Dropbox/foo/bar/bam/baz.txt"
...I do m-x revert buffer again and this time the file reads in fine.
The problem is that a file should open on the first try, no questions asked!
This is a more or less repeatable problem: specifically, I have gotten "File exists, but cannot be read" several times in the last 2 weeks. I try various workarounds to open the file (e.g. hitting m-x revert buffer twice as described above), and I am usually (always?) able to open it. And once I finally DO open one of those obstinate files, it easily opens in emacs in other contexts (e.g. new windows, or re-opens after I have closed the buffer).
<<< UPDATE ~2 DAYS AFTER ORIGINAL POST -- START OF SECTION >>
I seem to be able to reproduce a very similar behavior when I start emacs using an init file that opens about 30 different text files (i.e. part of the init is to open these files in emacs). When I change the emacs buffer (c-x b) to point to some files, call them GoodFile1 and GoodFile2, their text is visible, i.e. all is well. For other files, call them BadFile1, BadFile2, BadFile3, when I switch to them the screen is blank, and I know they have LOTS of text in them. I haven't seen any error messages akin to "File exists, but cannot be read", but still this is bad behavior and it seems related to the original problem. Next, similar to the originally reported case, I hit m-x revert buffer between 1 and 4(?) times and, poof!, the text appears and I am begrudgingly happy again (a retry helper that automates this is sketched after this update). Now, here's the interesting bit: when I start a new terminal window and fire up an emacs loading the same init file, the formerly bad files (e.g. BadFile1, BadFile2, BadFile3) are now visible right from the start, as they should be on a normally functioning computer. It is as if a formerly blank-seeming file changes some sort of state so that when a fresh emacs tries to open it, the file shows up as it should. What kind of state change is involved? I think it has to do with Smart Sync. So the question is, assuming it is Smart Sync, how do I avoid this annoying requirement of hitting revert buffer a bunch of times? Does it last between boots? I am pretty sure unix touch did not help. Maybe there is some other operation to perform?
Note: On this machine I always start emacs with 'emacs -nw -l my_special_emacs_init.el' (GUI's are for wimps (-;)
<<< UPDATE ~2 DAYS AFTER ORIGINAL POST -- END OF SECTION >>
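Since the workaround is just "revert until it sticks", here is a hypothetical elisp helper that automates it (my/revert-until-readable is my own name, not a standard command; it assumes the error is transient as described above):

(defun my/revert-until-readable (&optional max-tries)
  "Retry `revert-buffer' up to MAX-TRIES times (default 4).
Meant for files that fail transiently, e.g. under Dropbox Smart Sync."
  (interactive)
  (let ((tries (or max-tries 4)))
    (while (and (> tries 0)
                (not (ignore-errors (revert-buffer t t) t)))
      (sit-for 1)                     ; give the sync a moment to catch up
      (setq tries (1- tries)))))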
All the annoying bad behavior happens on my new setup.
On my old setup, I have never gotten anything like this behavior over years, possibly decades. (And, on my old setup, I tried the specific file mentioned above and it opened fine.)
So, what, you may ask, is different between my new setup and my old setup?
OS / Hardware:
New Setup: Catalina 10.15.6 on a brand new Mac Book Pro.
Old Setup: Mojave 10.14.6 on a MacBook Pro "early 2015" (which has never had this issue in prior OS's either)
Dropbox:
New Setup: Smart Sync On. I am using Dropbox with Smart Sync turned on such that files are by default "online only". "Online only" is a misnomer: some files end up locally on my hard drive. Smart Sync seems to figure out which files to store locally on the Mac. I suspect that knowing how it does this will fix my problem.
Old Setup: Smart Sync Off. I have been using Dropbox for years but have stayed far away from Smart Sync and have never had a problem opening a file.
Emacs Version:
New Setup: GNU Emacs 27.1
Old Setup: GNU Emacs 22.1.1.
Clearly it can't be a permissions issue, because I've never had this problem on my old setup.
Any clues?
Does anyone know of any diagnostics I can run to "dig under the hood" when I find another case of this "File exists, but cannot be read"?
Any thoughts on whether it is OS difference? Hardware difference? Dropbox Smart Sync Yes vs Smart Sync No difference? Emacs version difference?
<<< UPDATE ~2 DAYS AFTER ORIGINAL POST -- START OF SECTION >>
My current hunch is that the 'state change' mentioned in the update above is related to Smart Sync somehow figuring out that the user wants a given file cached locally. The badly behaving files are non-local, so poor emacs can't open them. Whacking them with 1 to 4 revert buffers tells Smart Sync to make the given file local. Alas, Smart Sync is not smart enough to figure out what emacs users want right off the bat! Perhaps emacs can be changed in such a way as to tickle Smart Sync into realizing that the given file should be made local (a shell experiment along these lines is sketched after this update). Or we can petition Dropbox to respect emacs. Or I am not using Smart Sync correctly. Thoughts?
<<< UPDATE ~2 DAYS AFTER ORIGINAL POST -- END OF SECTION >>
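If that hunch is right, one low-tech experiment would be to force a full read of the file from the shell before visiting it in emacs, to see whether that makes Smart Sync hydrate it (pure assumption on my part; touch apparently wasn't enough, but touch doesn't read the contents):
cat /Users/fred/Dropbox/foo/bar/bam/baz.txt > /dev/null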
I had the same problem. This is the root cause, and solution:
https://emacs.stackexchange.com/questions/53026/how-to-restore-file-system-access-in-macos-catalina
In short, give /usr/bin/ruby Full Disk Access in System Preferences -> Security & Privacy -> Privacy
I had this problem on Big Sur; giving Full Disk Access to /usr/bin/ruby solved the problem.
Note it is not so trivial to do this: you have to press cmd-shift-dot in the System Preferences file chooser to make the usr directory visible at the Macintosh HD level.

NFS mount keeps changing inodes

I'm using a MacBook Pro with Catalina for all my development. I also run an Ubuntu 16.04 VM through VirtualBox, where I export an NFS share.
The export looks like this:
/export/dev 192.168.0.0/16(rw,insecure,no_subtree_check,async,all_squash,anonuid=1000,anongid=1000)
and I mount this on my Mac with
mount -o rw,nolocks,locallocks -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev
nfsstat -m is saying
NFS parameters: vers=3,tcp,port=2049,nomntudp,hard,nointr,noresvport,negnamecache,callumnt,locallocks,quota,rsize=32768,wsize=32768,readahead=16,dsize=4096,rdirplus,nodumbtimr,timeo=10,maxgroups=16,acregmin=5,acregmax=60,acdirmin=5,acdirmax=60,nomutejukebox,nonfc,sec=sys
Most of the time everything works, but more and more often I get strange errors, and folders in Sublime Text start to look like folders with a "link" badge on them. Investigating that in the console, I see errors saying that the inode has already been seen, so the folder is considered a symbolic link by Sublime.
I investigated this further and do not think this is a Sublime error, but more likely a macOS problem.
When everything is working and I run ls -i on the Mac in one of my mounted folders, I get the exact same inode results as on the VM. But 5 minutes later, doing the exact same thing, I get totally different inodes, with the exact same inode number on all files in the same folder.
Has anyone experienced this before? Is this a NFS parameter issue?
I have googled this and haven't found anything on the internet about anyone with similar problems.
You are right. I'm seeing the exact same problem on Catalina 10.15.7. The workaround I have for now is to specify actimeo=0. With the default actimeo=60, file/dir attributes are refreshed periodically every 60s. Inode numbers are initially correct immediately following the mount, but each refresh changes the files within the same directory to have an identical and seemingly random inode number. This is bad as hell, as it breaks a fundamental assumption of a lot of programs, including dyld, which uses the inode number to identify loaded images internally (see bool ImageLoader::statMatch(const struct stat& stat_buf) const in https://opensource.apple.com/source/dyld/dyld-750.6/src/ImageLoader.cpp.auto.html). I'm filing a bug report with Apple to see how to move forward.
Update 1: I quote the answer from Apple:
After reviewing your feedback, we have some additional information for you, or some additional information, or action is necessary for this issue:
To work around the issue, mounting with nordirplus should fix this.
Also, the issue is resolved in macOS Big Sur (11+), so you could try that as well.
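For reference, here is what either workaround looks like applied to the mount command from the question (only the option list changes):
mount -o rw,nolocks,locallocks,actimeo=0 -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev
mount -o rw,nolocks,locallocks,nordirplus -t nfs 192.168.56.102:/export/dev /Users/myhome/Documents/dev
The first disables attribute caching entirely (this answer's workaround); the second disables READDIRPLUS, per Apple's suggestion.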

docker on OSX slow volumes

I'm trying to use Docker beta on OS X, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get 6s page load times. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.
Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OSXFS file sharing, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type cd ~
Then type git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do this in a one-liner: git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc subfolder titled d4m-nfs-mounts.txt
3.) Add the following line to it:
/Users/yourusername:/Users/yourusername:0:0
What the above does is let you keep using relative paths with docker-compose; the trailing 0:0 is the uid:gid mapping applied to the export.
EDIT
Do not put /Volumes here!!
4.) Go to your Docker preferences and check the file sharing list: make sure only /tmp is showing and NOTHING ELSE. I mean nothing else; it won't work if there is anything else, since it will create conflicts with the NFS mounts that the script will make for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and type the following command: /bin/bash d4m-nfs.sh
Edit: the correct way to run it, as another user from the GitHub repo (if-kenn) pointed out, is ./d4m-nfs.sh, which uses the shebang to pick the shell that should run it.
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; this will create errors and you will have to delete your exports file to start over. In fact, anytime you make any changes you will have to clear your exports file.
This is what my /etc/exports file looks like (EDIT: IMPORTANT - remove the /private and /Volumes entries! It should contain only the /Users/yourusername line now!). If you see anything other than this, you were not running the script with bash. If you make any errors, you can quickly get to the exports file in Finder to clear it out and start over: just select Go > Go to Folder and type /etc/exports. This is a nice shortcut for opening it in your favorite text editor.
Also make sure no containers are running, or you will get the ........ loop of death. If this loop of death continues, make sure you upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to this (https://github.com/IFSight/d4m-nfs/issues/3).
Note on the .... loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/username.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there, or this won't work either: chmod -R 777 /tmp
6.) If you did it right, the script will run through its mount setup with no errors.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except with NO MORE slowdowns!
You will need to run this anytime you restart your computer or Docker.
Also note, if you get mounting errors, you probably don't have your project stored in your /Users/username directory. Remember, that is where we mounted it. If your project is somewhere other than there, you will need to modify the d4m-nfs-mounts.txt file accordingly.
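For context, the kind of bind mount that benefits looks like this (shown with docker run for brevity; the image and paths are placeholders, and compose volumes entries behave the same way):
docker run -d -p 8080:80 -v /Users/yourusername/myproject:/var/www/html php:7.0-apache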
Other Info:
For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request has already been accepted to improve performance (https://github.com/docker/docker/pull/31047).
This will be released sometime in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credit to Eugen Mayer for setting this up.
EDIT:
First improvement is implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.
There's a long thread with explanation from Docker Team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.
I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It lets you use Docker volumes via NFS.
In my case it improved performance 16 times (1.8s vs ~30s).
d4m-nfs also has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I'll just leave this here for other googlers.
Normally, volumes should be fast.
But you cannot change anything to make them faster unless you want to change the format of your disk.
Maybe the bottleneck is the CPU or RAM, though.
You can check that with the command docker stats. The defaults are 2 cores and 2 GB of RAM; you can change this in the Docker for Mac GUI.
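For a one-shot snapshot instead of the live view (a standard docker flag):
docker stats --no-stream   # prints current CPU/memory usage per container, then exits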
I had exactly the same thing. For me, using docker-bg-sync (see it on GitHub) made a dramatic improvement in speed and CPU usage.
It's not as nice as just mounting the volume, as you have to start a new container for every sync, but it does the job.
In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
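For example (the :cached suffix is a Docker for Mac option that relaxes host/container consistency guarantees for the mount; my-image is a placeholder):
docker run -v "$(pwd)":/app:cached my-image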
I've found that creating a CoreOS VM under Parallels, then using the Docker that is inside CoreOS is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC, and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development. Docker/Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...
We overcame this issue by synchronizing the local and Docker for Mac filesystems using Syncthing. We built an open source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

How to delete an unfinished Darwinbuild build

I got darwinbuild off MacPorts to get a single Unix executable (long story, see Where/how to get the Mac OSX "login" command). I was having trouble figuring out how it worked, so I tried their website's example build, "darwinbuild xnu".
It worked, and when I opened the new volume it mounted in Finder, it appeared to be building a whole new Mac OS X (I know this is probably not the case, but that is what it looked like to me at least). So I grabbed the binary I wanted, hit Control-C in Terminal, and unmounted the volume. Everything seemed to work out, but even after restarting the computer, I could not get back the 2 GB or so that the build/mount/kernel thing took up.
I even tried restoring a Time Machine backup, but even that would not bring the free space back.
So how do I get rid of this thing once and for all?
If you know the location the files were written to, navigate there in Finder and delete them. If you don't, read the documentation that comes with the Darwin stuff you downloaded (it'll be there, believe me) to find out, or download a drive space analyzer app to locate it.
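If the documentation doesn't pan out, a generic way to hunt down the space from the terminal (no darwinbuild-specific paths assumed; -x keeps du on the boot volume):
sudo du -x -k -d 2 / | sort -n | tail -n 20   # the 20 biggest directories, two levels deep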
Really, I don't see how these questions are about programming. They're more "how do I fix something I screwed up ancillary to programming-related efforts", which is of course superuser.com material.
