Docker on OSX slow volumes

I'm trying to use Docker beta on OSX, mainly for Symfony development, but the mounted volumes are incredibly slow. Even for a vanilla Symfony project I get a 6 s page load time. That's unbearable! Has anyone found a solution to this issue? I'm trying to move away from Vagrant, but I just can't find any reasonable way to work with Docker instead.

Okay, the user Spiil gave a solution, but I wanted to elaborate on the exact steps to take, since I spent 12 hours trying to figure it out; once you know how, it's super easy and fixes all the slowdown issues!
The key here is to understand that this solution creates NFS (Network File System) mounts as the means of communication from the Docker containers to your Mac, instead of the standard OSX file sharing, which is currently very slow, either due to bugs or the way it works.
Follow these steps exactly.
1.) Clone this repo (https://github.com/IFSight/d4m-nfs) into your home directory. To do this, open up Terminal and type:
    cd ~
    git clone https://github.com/IFSight/d4m-nfs
Alternatively, you can do it in a one-liner:
    git clone https://github.com/IFSight/d4m-nfs ~/d4m-nfs
2.) Next, go into the d4m-nfs folder and create a new file in its etc folder (i.e. ~/d4m-nfs/etc) named d4m-nfs-mounts.txt
3.) Add the following line to it:
    /Users/yourusername:/Users/yourusername:0:0
This exports your home directory to the Docker VM at the same path, so you can still use relative folders with docker-compose; the trailing 0:0 is, as far as I can tell, the uid:gid mapping for the export (0 = root).
EDIT: Do not put /Volumes here!!
4.) Go to your Docker preferences and, under File Sharing, make sure only /tmp is listed and NOTHING ELSE. I mean nothing else; it won't work if anything else is there, since it will conflict with the NFS exports the script will create for you later. Restart Docker, and docker-compose down any running containers as well.
5.) Finally, navigate to the d4m-nfs directory we cloned in step 1 and run the script:
    ./d4m-nfs.sh
(edit: I originally wrote /bin/bash d4m-nfs.sh, but as another user from the GitHub repo (if-kenn) pointed out, ./d4m-nfs.sh is the correct way to run it, since it uses the shebang to pick the shell.)
If done correctly, there should be no errors and this should work. Please note: DO NOT run it as sh d4m-nfs.sh; that will create errors and you will have to delete your exports file to start over. In fact, any time you make changes you will have to clear your exports file.
When it has worked, your /etc/exports file should contain only a line for /Users/yourusername.
EDIT: IMPORTANT -- remove the /private and /Volumes entries! It should only be /Users/yourusername now!
If you see anything other than that, you were not running the script with bash. If you make any errors, you can quickly get to the exports file on a Mac and clear it out to start over: in Finder, select Go > Go to Folder and type /etc/exports. This is a nice shortcut to quickly get to it and clear it out in your favorite text editor.
Also make sure no containers are running, or you will get the "........" loop of death. If this loop of death continues, upgrade Docker and then restart your computer. Yes, restart... it seemed to be the only way to get it to work on my friend's computer. Refer to https://github.com/IFSight/d4m-nfs/issues/3
Note on the "...." loop: I recently found another solution. Make sure you are NOT logged in as root, and make sure you pulled the git repo into your user's ~ folder, not root's ~ folder. In other words, it should be in /Users/yourusername.
Also, make sure the /tmp folder has full write permissions, since the script needs to write there, or this won't work either:
    chmod -R 777 /tmp
6.) If you did it right, the script will run through its setup and finish without errors.
Then simply run docker-compose up -d as usual in your Symfony project folder (or whatever project you are using with Docker) and everything should work... except NO MORE slowdowns!
You will need to run this anytime you restart your computer or docker.
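Since the script has to be rerun after every restart, a tiny shell alias saves some typing; a minimal sketch (the alias name is my own choice):
    # add to ~/.bash_profile; rerun the NFS setup with one command
    alias d4m='~/d4m-nfs/d4m-nfs.sh'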
Also note: if you get mounting errors showing up, you probably don't have your project stored in your /Users/yourusername directory. Remember, that is what we mounted. If your project lives somewhere other than there, you will need to add a line to the d4m-nfs-mounts.txt file accordingly.
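For example, if your projects lived under a hypothetical /opt/projects (remember: not /Volumes!), you would add:
    /opt/projects:/opt/projects:0:0
and rerun the script after clearing /etc/exports.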
Other Info:

For people reading this now, maybe it's better to wait for Docker to fix this issue. A pull request to improve performance has already been accepted (https://github.com/docker/docker/pull/31047).
It will be released somewhere in April 2017 and should be a big improvement.
I've tried some workarounds for Docker for Mac, but all of them had some pretty big disadvantages, mostly in usability. A good source for alternatives to OSXFS can be found at https://github.com/EugenMayer/docker-sync/wiki/Alternatives-to-docker-sync. Credits to Eugen Mayer for setting this up.
EDIT:
The first improvement has been implemented in the edge release. https://github.com/docker/for-mac/issues/77 has more info on this.

There's a long thread with an explanation from the Docker team and various workarounds.
Currently, the issue is being tracked on GitHub.
While some workarounds may be better than others, I'm afraid the ideal option for now is to switch to Linux.

I spent a lot of time searching for a viable solution, and I found one:
d4m-nfs
It lets you use Docker volumes via NFS.
In my case it improved performance 16x (1.8 s vs ~30 s)!
d4m-nfs also has quite an intricate manual, so here is another link with a detailed example: https://github.com/laradock/laradock/issues/353#issuecomment-262897619
I just leave this here for other googlers.

Normally, volumes should be fast.
But you cannot change anything to make them faster unless you are willing to change the format of your disk.
Maybe the bottleneck is the CPU or RAM, though.
You can check that with the command docker stats. The Docker VM is limited to 2 cores and 2 GB of RAM by default; you can change this in the Docker for Mac GUI.
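For a one-shot snapshot instead of the live view, the --no-stream flag works:
    docker stats --no-stream
If the CPU column sits near the limit while your pages load, raising the core count in the Docker for Mac preferences is the first thing to try.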

I had exactly the same thing. For me, using docker-bg-sync (see it on GitHub) made a dramatic improvement in speed and CPU usage.
It's not as nice as just mounting the volume, since you have to start a new container for every sync, but it does the job.
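For reference, a minimal docker-compose sketch of the pattern, going from my memory of that project's README (the image name and SYNC_* variable are assumptions; check the repo for the current options):
    version: '2'
    services:
      app:
        image: your-app-image        # hypothetical application service
        volumes_from:
          - bg-sync                  # read the code from the synced volume
      bg-sync:
        image: cweagans/bg-sync      # assumption: image name per the README
        volumes:
          - .:/source                # host code, mounted once through osxfs
          - /app                     # fast container-native destination volume
        environment:
          - SYNC_DESTINATION=/app    # assumption: variable name per the README
        privileged: true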

In the latest Docker, 17.06.0-ce-mac18, volumes mounted with :cached seem to run quite decently.
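For example (the container path here is just an illustration), in a docker-compose file:
    volumes:
      - .:/var/www/project:cached
or with docker run:
    docker run -v $(pwd):/var/www/project:cached my-image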

I've found that creating a CoreOS VM under Parallels, then using the Docker inside CoreOS, is far faster than Docker for Mac (currently running Version 17.12.0-ce-mac49 (21995)).
I'm doing Linux code builds using CMake/Ninja/GCC, and it's almost twice as fast as the exact same build in Docker for Mac.
In my case, I have a ton of library sources that are part of the container (e.g. Boost, OpenSSL), and a decent amount of C++ code that I keep local to my Mac.
This seems to be a recent development: Docker for Mac has become much slower than I remember it being a month or two ago. Maybe it's just me...

We overcame this issue by synchronizing the local and the Docker for Mac filesystems using Syncthing. We built an open-source tool that follows this approach, in case it helps:
https://github.com/okteto/cnd

Related

Laravel PHPUnit testing takes long in Windows Docker

I am working on Laravel with Docker.
If I run the PHPUnit tests on macOS, they take a few seconds.
However, on Windows 10, they take a few minutes.
Is there any way to fix this problem?
Thanks.
If you're running on a non-Linux OS, Docker has to virtualise your file system, and this requires a certain amount of time per file. For programs that are compiled into one executable this is less of a problem at runtime (though clearly with compile-time implications of its own), but for scripting languages like PHP it can mean that every request runs super slowly, since every file that is used has to be 'translated' each time it is read. This is also a problem on Docker for Mac (so you're actually experiencing problems there too, just less so, since at least it's a Linux system under the hood). Linux is, I believe, completely virtualised on Windows, which adds even more time.
This Reddit thread discusses the problem to an extent:
https://www.reddit.com/r/docker/comments/7xvlye/docker_for_macwindows_performances_vs_linux/
With this being particularly interesting (I have not tried it myself):
https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly
There is also a good community-created solution which we have used to solve our Docker for Mac problem. I don't see why their Windows options wouldn't work similarly well in your case. You can find it here:
https://github.com/EugenMayer/docker-sync/wiki/docker-sync-on-Windows
It basically sets up an intermediate service that copies all the files over into an intermediate volume (that uses the 'correct' filesystem) only when the file is updated, therefore speeding up run speed immensely.
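As an illustration, here is a minimal docker-sync.yml sketch of the kind the project documents (the sync name and paths are my own; consult the docker-sync docs for the current schema):
    version: "2"
    syncs:
      app-sync:                # hypothetical name of the synced volume
        src: './'              # code on the host to copy into the volume
        sync_excludes: ['.git', 'node_modules']
You then start it with docker-sync start and point your compose file at the app-sync volume instead of the host path.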
I know it looks like quite an intimidating process, but this problem is fundamental, so you're going to have to do a certain amount of work to fix things!
FWIW I had that working on Docker 4 Mac, but it added a layer of complexity to our dev process that I found annoying, so in the end I've got myself a Linux box for work. To be honest, installing Linux as dual boot on my Windows machine (which has been my at-home solution) was probably easier than tweaking Docker 4 Mac to my satisfaction, so you might want to consider that. I have used this page twice:
https://itsfoss.com/install-ubuntu-1404-dual-boot-mode-windows-8-81-uefi/
And it's worked fine each time. One caveat - it suggests a low amount of disk for your root (/) volume, but Docker gets mounted on root so give it around 100G (not the 10-20G that page recommends.)

Not able to install Gentoo Linux

Here is my situation: I downloaded Gentoo, started to run it, downloaded the stage III tarball from links, and then tried to extract it. A stream of white sentences flowed down my screen really fast for about a minute, just like in the YouTube tutorial I was watching. However, after that, instead of going to the correct stage, it says "cannot write: not enough space on device". I tried repartitioning, but I'm not sure what device it is talking about. Please help.
Sorry you're having this issue, though in general I truly believe the Gentoo Handbook is quite well written and even a newbie can follow it... Here is some advice that I hope helps (most important: digest the handbook and follow it carefully; not that I'm saying "RTFM", it's just that for Gentoo the handbook is essential, and without it you can get lost if you're just starting).
From my experience, the "stream of white sentences" would be from verbosely un-tar'ing your stage3. Usually I only want to see the errors, so my suggestion is to remove the "v" (i.e. change "tar xjvpf" to "tar xjpf") so that only errors appear when un-tar'ing. The caveat is that you'll wonder whether it hung or is busy un-tar'ing. Use Alt-F1 and Alt-F2 (to switch back and forth if you're in console/tty mode) to log in on another TTY and run 'ps auxf' to see if tar is still working. If you're using a GUI terminal, just open another tab and run 'ps auxf'; you get the picture...
Also, learn the command 'df'; it'll come in handy. If you're running out of disk space, perhaps you're trying to install/untar stage3 onto your ramdisk (grin) rather than your mounted disk (i.e. "/mnt/gentoo"). Mount your root '/' device at '/mnt/gentoo', cd to that mounted path, and then try it (don't forget to mount your '/boot' as well as proc, dev, sys, etc. before you chroot - again, follow the handbook as carefully as you can - oh, also, distros such as Debian hybrids, including Ubuntu, use a symlink for shm, so read the part about 'rm /dev/shm' and follow it carefully; if you're using the Gentoo LiveCD, you can ignore that part). A sketch of that sequence follows.
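Something like this, as a minimal sketch (the partition name /dev/sda3 is hypothetical; substitute your own):
    mount /dev/sda3 /mnt/gentoo     # mount the future root filesystem
    cd /mnt/gentoo
    df -h .                         # confirm you're on the big partition, not the ramdisk
    tar xjpf stage3-*.tar.bz2       # extract the stage3 you downloaded here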
Other useful commands if you're confused by (or new to) mounting devices: experiment with 'lsblk' and 'mount' (by itself) to inspect the sizes of your partitions (again, 'df' comes in handy here as well) and to see which device is which (i.e. /dev/sda1 versus /dev/sdb1). Hint: when you run 'mkfs', use "-L" (or, for some file systems, "-N") to label/name your devices, so that when you use commands such as 'mount' or 'lsblk' you can spot them more easily. If you're using a GUI/desktop version of some distro, hopefully there are tools such as "gparted" which can give you visual information about your devices, which can be helpful. One thing I'd advise you to stay away from if you're just starting is RAID (i.e. mdadm), until you're comfortable with how grub/lilo works. Get your kernel (gentoo-sources) compiled and your MBR written (i.e. grub-install), try booting, and have fun first (oh, also, if you can avoid installing a GUI like GNOME/KDE from the get-go, avoid it as well - you'll get into questions such as "should I use systemd or OpenRC" and then get hit by the obstacle that some GNOME parts need systemd while you've chosen OpenRC, and so on).
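For example, labeling at mkfs time and then listing by label (device name hypothetical):
    mkfs.ext4 -L gentoo-root /dev/sda3
    lsblk -o NAME,SIZE,LABEL,MOUNTPOINT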
If I may add my opinion: Gentoo (also Arch and FreeBSD) is an excellent place to start if you want to learn the inner workings of Linux applications (library dependencies, why packages are important rather than downloading each lib manually and compiling them one by one, etc.). I hope this won't discourage you from switching to another distro, but if installation frustrates you and all you want to do is test-drive Linux, there are much easier distros where you don't have to understand USE flags and other compilation mechanisms (if you have an old i586, it makes sense to build with hand-picked libraries so that leaner can be faster, but if you have a fast machine, why compile binaries when somebody who is an expert at it has already done it for you?). SUSE and Fedora/RedHat/CentOS used to be the least frustrating, since they could find/detect hardware (legacy and new), but these days I usually tell people "if you know how to install Windows, you can install Ubuntu", so that too may be a good way to get your feet wet. Good luck!
0_o wow, well... how about some 411, like the size of your HDD and exactly how you partitioned it? Linux will look for specific directories, and if they're missing it will instead start to install into the root dir. How you partition is an important first step. Once you've got a generally good partition setup, most Linux installs will go fine. Most basic tables include /, /home, /var, and a swap.

OSXFUSE - what exactly does the "local" mount option mean?

I've implemented an OSXFUSE-based file system. It works fine on 10.8, but on Mavericks MS Word opens existing documents as blank (although I am, apparently, returning the correct data - I see the contents in the preview icon; also, if I copy a file to a real hard drive and open it there, it opens fine).
This issue is fixed on Mavericks if I mount my filesystem with the "local" flag. However, using this flag introduces other problems - e.g., it looks like it causes Finder to do some more aggressive caching, hence some file are not visible in Finder (although I can ls them in terminal).
Ideally I want to be able to mount the filesystem without this local flag (my implementation stores file on the network, so passing this flag looks wrong), but the problem with blank Word documents really puzzles me.
We have been able to track down the problem to - wait for it - Google Chrome. When Google Chrome is running while the volume is mounted, the problem appears. If Google Chrome is not running, Word/Excel/etc. files open just fine.
We've been in contact with Benjamin (OSXFUSE developer). Please also see his answer regarding this issue on the OSXFUSE mailing list:
https://groups.google.com/d/msg/osxfuse-group/URlw-n-Qakg/bLw2fHHDe7sJ
So far I have not found any bugs in osxfuse that might explain this behavior. The odd thing is that the files are not corrupted or empty. After copying the files to another volume they open just fine. Using LibreOffice to open the file on the FUSE volume works, too.
Chrome and Office seem to be based on the Carbon framework (which is deprecated since Mountain Lion). I believe the issue is somehow related to Carbon since non-Carbon apps do not seem to be affected. Every time a volume is mounted Chrome queries the volume’s capabilities and attributes (and maybe more). As far as I can tell all these file system operations return successful without any errors. But from this point on Office will fail to open documents.
In my opinion the two most likely reasons for this are:
osxfuse might break the VFS file system contract on Mavericks. I’ve been looking into this for some time now but I have not found any clues supporting this.
There might be a bug in the Carbon/CarbonCore framework. The odd thing is that there are no issues when using the stock network file systems afp or smb.
The two possible "fixes" (or rather "workarounds") for this issue seem to be (for now):
Use the "local" mount option (which might introduce other problems and is generally not recommend to use)
Do not use the "volname" mount option. The problem seems only to occur when the "volname" mount option is used. If no custom volume name is set, the problem seems not to occur and Excel/Word/etc. files open just fine - regardless whether Google Chrome was running at mount time.
I've seen the same, and likewise local is not an option for me. Similar problems with Photoshop.
Some findings from my implementation:
The problem doesn't occur on the first run after a reboot.
The problem begins occurring after program exit.
I solved this by manually dismounting (and waiting a few seconds) before exiting my program (see the sketch after this list). If the unmount is successful, on the next run the mount performs fine again.
If the program ever terminates abnormally, or dismounting fails (file in use, etc.), then the volume's read access is borked in Word/Photoshop on the next mount.
Rebooting resolves the issue.
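A minimal sketch of that cleanup step (the mount point is hypothetical):
    umount /Volumes/mymount || diskutil unmount /Volumes/mymount
    sleep 3    # give the kernel a moment before the process exits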
Does this match what you're seeing?

How to take a snapshot of the entire system of a MacBook Pro on OS X 10.8 Mountain Lion

I want to be able to take a snapshot of the current system and go back to it whenever I mess up my files. I looked at the Time Machine solution, but realized that it's only a good solution when I know which file I am looking for. But sometimes an installation process creates binary files in multiple system paths, which are very hard to locate and identify. Say I installed a package, but then felt I shouldn't have done that; uninstallation might still leave files around. So what might be some graceful solutions to go back to a state of the machine when everything was nice and clean?
Use Disk Utility (Applications > Utilities).
Click on the HDD and then click New Image. You can choose whether or not to compress the image. If you don't have much stuff on the drive, it shouldn't be more than 30-40 GB. Once you have the .dmg file, stick it somewhere for backup purposes.
Also, create a recovery disk/stick with the recovery tool.
I dunno about "graceful", but Carbon Copy Cloner is definitely an easy solution for rolling back to a previous state. You can make an exact clone of your drive, then restore it if something goes horribly wrong. I use CCC to make periodic backups of my Macs, as a sort of secondary backup to Time Machine, which is easy to use but which I don't have total confidence in.
You can restore an entire system from a Time Machine snapshot, but it requires booting from the Recovery Partition or a Recovery disk. Basically, once you've rebooted in recovery mode, you can choose Restore From Time Machine Backup and then you'll be asked to locate the drive. Once you've done that, a list of Time Machine snapshots will be presented for restoring.
I haven't done this recently, but there are indications that the time of the backups may always be in PST, so be careful when looking at the times.
While OS X comes with Time Machine, it also has the well-known (in the Linux community!) command-line tool called rsync.
With Google, I'm sure you can find many articles on how to use it, though here's an interesting blog post on why its author uses rsync alongside Time Machine.
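A minimal sketch of a one-way clone of your home directory (the destination path is hypothetical; -E on the rsync bundled with OS X copies extended attributes):
    sudo rsync -aE --delete /Users/yourusername/ /Volumes/BackupDrive/yourusername/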

Search & Destroy Rogue Process in OSX

INFO: I installed SymformSync the other day, a distributed cloud storage system, but deleted it again the same day. (I like the idea, but it's not suitable for someone on a laptop like me.) However, there's a process symformsync that keeps popping up and consuming the CPU pretty heavily. I deleted the application, but this process still keeps popping up! Needless to say, I don't appreciate not having control over the processes on my own machine!
Q: How do I find this process that keeps starting up by itself, and how do I delete it?
Answer from Symform:
Thank you for contacting Symform support. I understand that you need instructions to remove Symform from your Mac.
Here is the information:
Access the Terminal program on your Mac by going to the search tool in the upper right-hand corner and entering Terminal.
Once Terminal is open, enter one of the following commands, depending on what you want to do:
A normal uninstall will only stop the services and remove the software. It will leave the service configuration and log files in place.
sudo /Library/Application\ Support/Symform/scripts/uninstall
To completely remove all aspects of the Symform software, configuration, and logs, a purging operation is available as well. This will also remove any synchronization- and contribution-supporting files and directories.
sudo /Library/Application\ Support/Symform/scripts/uninstall --purge
You will need to enter in your Mac password when running either of these commands.
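More generally, when a deleted app's process keeps relaunching itself, it is usually being resurrected by launchd. A quick way to check (the grep pattern is just the vendor name):
    launchctl list | grep -i symform
    ls /Library/LaunchDaemons /Library/LaunchAgents ~/Library/LaunchAgents | grep -i symform
Any matching .plist can be unloaded with launchctl unload <path to plist> and then deleted.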
