In an automated system, I copy files to a mounted network volume with a shell script.
Basically I do "cp file.pdf /Volumes/NetworkShare/".
This works well until the remote system is down.
So before copying I can do a ping to detect whether it's online.
But... when it comes back online, OS X often remounts the share on a different path, "/Volumes/NetworkShare-1/".
The old path "/Volumes/NetworkShare/" still exists, although it's useless.
So, how can I find the actual mount point of this share from the OS X command line?
I found out that diskutil does something like this for local disks, but not for network volumes. Is there an equivalent of diskutil for network volumes?
The mount command (just on its own) will list all mounted filesystems. As for why OS X is creating that extra directory, that is pretty odd. Did you manually mount the filesystem, by any chance? If you created the “NetworkShare” directory yourself, OS X’s auto mounter might do what you’re suggesting.
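For example, assuming the share is an SMB mount named NetworkShare (the grep pattern and the sample output are just illustrations, adjust them to your share), something like this should reveal where it is actually mounted:
mount | grep -i networkshare
# typical output: //user@server/NetworkShare on /Volumes/NetworkShare-1 (smbfs, nodev, nosuid, mounted by user)
You can also run df /Volumes/NetworkShare-1 to check which remote filesystem a given path belongs to.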
I have a container that I start like
docker run -it --mount type=bind,source=/path/to/my/data,target=/staging -v myvol:/myvol buildandoid bash -l
It has two mounts: a bind mount that I use to get data into the container, and a named volume that I use to persist data. The container is used as a reproducible Android (AOSP) build environment, so it's not your typical web service.
I would like to access the files on myvol from the Windows host. If I use an absolute path for the mount, e.g. -v /c/some/path:/myvol, I can do that, but I believe Docker creates copies of all the files and keeps them in sync. I really want to avoid creating these files on the Windows side (for space reasons, as it is several GB, and for performance reasons, since NTFS doesn't seem to handle many small files well).
Can I somehow "mount" a container directory or a named volume on the host? So the exact reverse of a bind mount. Alternatively, I think I could install Samba or sshd in the container and use that, but maybe there is something built into Docker / VirtualBox to achieve this.
Use bind mounts.
https://docs.docker.com/engine/admin/volumes/bind-mounts/
By contrast, when you use a volume, a new directory is created within Docker’s storage directory on the host machine, and Docker manages that directory’s contents.
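A minimal sketch of that approach, assuming the build output should live under /c/aosp-out on the Windows host (both host paths here are placeholders, adjust them to your setup):
docker run -it \
  --mount type=bind,source=/path/to/my/data,target=/staging \
  --mount type=bind,source=/c/aosp-out,target=/myvol \
  buildandoid bash -l
With a bind mount the files live directly in the host directory, so there is no second copy to keep in sync, though you do keep the NTFS small-file performance characteristics mentioned in the question.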
I'm trying to access some files in my home directory on my MacBook, using the Terminal in recovery mode. In a normal boot, I can do:
sudo chflags nohidden /Users
to unhide the Users folder, but this does not work in recovery mode. I've also tried this:
diskutil list
but no encrypted and/or offline volumes appear. Does anyone know how I can access my files?
OK, so I just needed to mount the partition from Disk Utility, using File -> Mount with the correct partition selected. After that, I could access my data.
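If you would rather stay in the Terminal, the same thing can be done from the command line. A sketch, assuming the data volume shows up as disk1s1 in diskutil list (your identifier will differ) and may be FileVault-encrypted:
diskutil list                           # find the volume's identifier
diskutil apfs unlockVolume disk1s1      # only needed if the APFS volume is locked; prompts for the password
diskutil mount disk1s1                  # mount the volume; it then appears under /Volumes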
I'm trying to mount a network folder into a Docker container on Windows 10 with the following syntax. Using UNC paths does not work. I'm running it under Hyper-V and the stable version of Docker.
docker run -v \\some\windows\network\path:/some/local/container
Before I was using Docker Toolbox, and I could map a network share to an internal folder with VirtualBox. I've tried adding the network share as a drive, but it doesn't show up as an available drive under the settings panel.
Currently I'm using mklink to mirror a local folder to the network folder, but I'd like to not depend on this as a solution.
Do this with Windows-based containers
Go to Microsoft documentation https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/persistent-storage#smb-mounts.
There you'll find information about how to mount a network drive as a volume in a Windows container.
Do this with Linux-based containers
This is currently (as of 2019-11-13) not possible. BUT you can use a plugin: https://github.com/ContainX/docker-volume-netshare
I didn't use it, so I have no experience with it. Just found it during my research and wanted to add this as a potential solution.
Recommended solution
While researching this topic, I came to the conclusion that you should probably mount the drive from within the container. You can pass the required credentials either via a file or via parameters.
Example for credentials as file
You would need to install the cifs-utils package in the container, add
COPY ./.smbcredentials /.smbcredentials
to the Dockerfile and then run the following command after the container is started:
sudo mount -t cifs -o file_mode=0600,dir_mode=0755,credentials=/.smbcredentials //192.168.1.XXX/share /mnt
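For reference, the .smbcredentials file used above is a plain-text file in the usual mount.cifs format, for example (the values are placeholders):
username=myuser
password=mypassword
domain=WORKGROUP
Also note that mounting a filesystem from inside a container normally requires extra privileges, e.g. starting the container with --cap-add SYS_ADMIN (or --privileged).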
Potential duplicate
There was another Stack Overflow thread on this topic:
Docker add network drive as volume on windows
The answer provided there (https://stackoverflow.com/a/57510166/12338776) didn't work for me though.
I'm working with Xcode Server and continuous integrations. We're experiencing really slow build times.
My first attempt at speeding things up is using a RAM disk and storing build files there. We are using a Mac mini with a SATA drive, so I'm attempting to see how much time could be saved by eliminating that drive from part of the build process.
I created a RAM disk with:
diskutil erasevolume HFS+ 'XcodeData' `hdiutil attach -nomount ram://8388608`
I started by trying to set the DerivedData location onto the RAM disk, but when running a CI build the data isn't stored there.
I found what looks to be the build data for every CI at /Library/Developer/Integrations/Caches.
I tried symlinking ln -s /XcodeData/IntegrationCaches/ /Library/Developer/Integrations/Caches but I get permission errors when running the CI.
I tried chmod 777 /XcodeData/IntegrationCaches/ and I still get permission issues.
I also tried chown _xcsbuildd IntegrationsCaches on the RAM disk folder.
Haven't had any luck so far.
Has anyone else tried doing something like this?
As @bolnad mentioned in the comments, it turned out that the RAM disk ignores ownership by default. You can "Get Info" in the Finder for that volume, then uncheck "Ignore Ownership"; this will allow you to use chmod and related tools to change owners where required.
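If you need this in an automated setup, the same can be done from the command line. A sketch, assuming the RAM disk from the question is mounted at /Volumes/XcodeData (adjust the path and folder name to yours):
sudo diskutil enableOwnership /Volumes/XcodeData
sudo chown -R _xcsbuildd /Volumes/XcodeData/IntegrationCaches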
Several articles have been extremely helpful in understanding Docker's volume and data management. These two in particular are excellent:
http://container-solutions.com/understanding-volumes-docker/
http://www.alexecollins.com/docker-persistence/
However, I am not sure if what I am looking for is discussed. Here is my understanding:
When running docker run -v /host/something:/container/something the host files will overlay (but not overwrite) the container files at the specified location. The container will no longer have access to the location's previous files, but instead only have access to the host files at that location.
When defining a VOLUME in a Dockerfile, other containers may share the contents created by the image/container.
The host may also view/modify a Dockerfile volume, but only after discovering the true mount point using docker inspect (usually somewhere like /var/lib/docker/vfs/dir/cde167197ccc3e138a14f1a4f7c....). However, this is hairy when Docker has to run inside a VirtualBox VM.
How can I reverse the overlay so that when mounting a volume, the container files take precedence over my host files?
I want to specify a mountpoint where I can easily access the container filesystem. I understand I can use a data container for this, or I can use docker inspect to find the mountpoint, but neither solution is a good solution in this case.
The docker 1.10+ way of sharing files would be through a volume, as in docker volume create.
That means that you can use a data volume directly (you don't need a container dedicated to a data volume).
That way, you can share and mount that volume in a container which will then keep its content in said volume.
That is more in line with how a container works: it isolates memory, CPU, and filesystem from the host. That is why you cannot "mount a volume and have the container's files take precedence over the host files": doing so would break that isolation and expose the container's content to the host.
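A minimal sketch of that approach (the volume and image names are placeholders):
docker volume create mydata
docker run --rm -v mydata:/container/something myimage ls /container/something
# If the named volume is empty the first time it is mounted, Docker copies the image's
# existing files at that path into the volume, so the container's files are what you see.
After that, the same named volume can be mounted into other containers, or inspected with a throwaway container such as docker run --rm -v mydata:/data alpine ls /data.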
Begin your container's script by copying files from a read-only bind mount reflecting the host files to a work location in the container. End the script by copying the necessary results from the container's work location back to the host, using either the same or a different mount point.
As an alternative to the end-of-script copy, run the container without automatically removing it at the end, then run docker cp CONTAINER_NAME:CONTAINER_DIR HOST_DIR, and finally docker rm CONTAINER_NAME.
As an alternative to copying results back to the host, keep them in a separate "named" volume, provided that the container has it mounted (type=volume,src=datavol,dst=CONTAINER_DIR/work). Use the named volume with other docker run commands to retrieve or use the results.
The input files may be modified on the host during development between repeated runs of the container. Avoid shadowing them with stale copies frozen in the named volume; beginning the container script by copying the input files from the host helps with this.
Using a named volume also helps with running the container read-only. (One may still need --tmpfs /tmp for temporary files, or --tmpfs /tmp:exec if some container commands create and run executable code in the temporary location.)
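Putting those pieces together, a sketch of how such a run could look (the image name, paths, and build command are placeholders):
docker run --rm --read-only \
  --tmpfs /tmp \
  --mount type=bind,source=/host/input,target=/input,readonly \
  --mount type=volume,src=datavol,dst=/work \
  mybuildimage sh -c 'cp -r /input/. /work && do_build /work'
The inputs come in through the read-only bind mount, the work happens in the writable named volume, and the results can later be retrieved by mounting datavol into another docker run command as described above.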