Can't mount HOST folder into Amazon Docker Container? - amazon-ec2

I'm using an EC2 instance to run docker. From my local machine (OSX), I'm using docker-machine to create containers and volumes. However, when I try to mount a local folder into a container, it doesn't work.
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I start a container that should have both the data-only container's volumes and the folder I'm trying to mount. However, when I access it, the /backup folder is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC into a container. What would be a different approach in that case?
Shall I try to use Flocker?

A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the docker host inside the container. When you run this command on your OSX system, $(pwd)/data references a directory on your local machine that doesn't exist on the docker host (the EC2 instance). If you log into your EC2 instance, you'll likely find the $(pwd)/data directory created there, empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
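The key detail is that command substitution happens in your local shell before the docker client ever contacts the daemon, so the path baked into the -v flag is a path on your Mac, not on the EC2 host. A minimal local demonstration (no docker daemon needed):

```shell
# Your shell expands $(pwd) BEFORE docker sees the command line, so the
# bind-mount source is resolved on the machine where you type the command:
echo docker run -v "$(pwd)/data:/backup" ubuntu bash
# The printed command embeds your local absolute path; that exact path must
# exist on the docker HOST (the EC2 instance) for the mount to have content.
```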
Edit: To answer the added question of how to move data up to your container in the cloud: there are often provider-specific ways to move your data to the cloud itself, outside of docker, and then include it directly inside the container. For a docker-only approach, you can do something like:
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This uploads your data with tar over a pipe into a named volume on your docker host. You can then mount the named "app-data" volume in any other container. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
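The tar-over-a-pipe trick itself can be sanity-checked locally between two throwaway directories before pointing it at a remote docker host (the explicit -f - just makes the stdout/stdin archive handling visible):

```shell
# Dry run of the same tar-over-a-pipe copy between two local directories:
src=$(mktemp -d); dst=$(mktemp -d)
echo "hello" > "$src/file.txt"
tar -cf - -C "$src" . | tar -xf - -C "$dst"   # same pattern as the docker variant
ls "$dst"                                     # file.txt now exists in the target
```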

Related

Haven't been able to bind mount a network drive on Docker for Windows

I'm trying to bind mount a network drive from my host (Windows 10) to a local folder in a container but, regrettably, I haven't been able to achieve it.
In Linux I've had no issues whatsoever.
Any help would be greatly appreciated :)
docker run -v G:/:/mnt/shared ubuntu /bin/bash
For a local drive, use the following command:
docker run -v c:/your-directory/:/mnt/shared ubuntu /bin/bash
For network drive, you need to create a Docker volume pointing to that network drive.
The command would be as follows:
docker volume create --driver local --opt type=cifs --opt device=//networkdrive-ip/Folder --opt o=user=yourusername,domain=yourdomain,password=yourpassword mydockervolume
docker run -v mydockervolume:/data alpine ls /data
Reference is here: How to Add Network Drive as Volume in Docker on Windows
I managed to do it in a different way.
Here is the answer: Docker add network drive as volume on windows

Case sensitive host volume mount in docker for windows

I am running a linux docker container on windows 10. I need my host to have access to the data that my container generates. I also need the data to persist if I update the container's image.
I created a folder on the host (on an NTFS-formatted drive) and, in the Docker settings, shared that drive with Docker. I then create the container with the host directory mounted (using the -v option on the docker run command).
The problem is that docker creates a CIFS mount to my shared drive on the host, and the CIFS protocol does not appear to be case sensitive. I create two files:
/data/Test
/data/test
But only one file is created. I set up the kernel to support case-sensitive file names; for example, if I mount the same folder inside a Cygwin bash, I can create both files without any problem. So the problem is with the CIFS implementation, I think.
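A quick way to probe whether a given mount is case sensitive is to create the two names and count what survives; on a native Linux filesystem this shows two entries, while on the CIFS mount described above only one remains (the temporary directory here is a stand-in for the mounted /data path):

```shell
# Probe case sensitivity of a filesystem path:
dir=$(mktemp -d)            # substitute the mounted path inside the container
touch "$dir/Test" "$dir/test"
ls "$dir" | wc -l           # 2 on a case-sensitive filesystem, 1 otherwise
```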
My current thoughts of solving this issue:
Use Cygwin to create an NFS server on the host, and mount the NFS volume from within the Linux container. I'm not sure how I can automate this process, though.
Create another Linux container running a Samba server, and create a volume on that container:
docker run -d -v /data --name dbstore a-samba-server-image
Then use that volume in my container:
docker run -d --volumes-from dbstore --name my-container my-container-image
Then I need to share /data from the Samba server and map that share on my host.
Both solutions seem quite cumbersome, and I would like to know if there is any way I can solve this directly with the CIFS share that docker natively creates.

How to access the docker VM (MobyLinux) filesystem from windows shell?

Is there a way to log into the host VM's shell, similar to how we can enter a running container's bash?
docker exec -it <container_name> bash
I accidentally broke one container's crucial file so that it couldn't start. Unfortunately, that container stored its data inside itself, so whenever I tried to run it, it failed to start. The only solutions I found involved navigating to the docker daemon's files on the host. However, I'm running the docker VM on Windows, and I cannot access the files inside the VM (MobyLinuxVM).
I'm using Docker for Windows, version 1.12.3-beta30.1 (8711)
Hack your way in
Run a container with full root access to MobyLinuxVM and no seccomp profile (so you can mount things):
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
https://forums.docker.com/t/how-can-i-ssh-into-the-betas-mobylinuxvm/10991/6

Running a Bash Script (on Docker Container B) from Docker Container A

I have two Docker Containers configured through a Docker Compose file.
Docker Container A - (teamcity-agent)
Docker Container B - (build-tool)
Both start up fine, but as part of the build process in TeamCity, I would like the agent (container A) to run a bash script which lives on container B (only B can run this script).
I tried to set this up using the SSH build step in TeamCity, but I get "connection refused".
Further reading into it shows that SSH isn't enabled in containers and that I shouldn't really be trying to SSH into a container.
So how can I get Container A to run the script on Container B and see the output of the script on A?
What is the best practice for this?
The only way without modifying the application itself is through SSH. It is not true that you cannot SSH into a container: I SSH into a database container to run database exports inside it.
First, be sure openssh-server is installed on B. Then you must set up a passwordless (key-based) connection between A and B.
Then be sure you link your containers in the docker-compose file so you won't need to expose the SSH port.
Snippet to add to the Dockerfile for container B:
# Install the SSH server (update the package index first)
RUN apt-get update && apt-get install -q -y openssh-server
# Authorize container A's public key for the ubuntu user
ADD id_rsa.pub /home/ubuntu/.ssh/authorized_keys
RUN chown -R ubuntu:ubuntu /home/ubuntu/.ssh && \
    chmod 700 /home/ubuntu/.ssh && \
    chmod 600 /home/ubuntu/.ssh/authorized_keys
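On the container-A side, the passwordless key pair that the snippet above expects can be generated like this (the file names, and the idea of baking id_rsa.pub into B's image, follow the answer's setup; treat this as a sketch, not hardened practice):

```shell
# Generate a passwordless RSA key pair for container A; the public half is
# the id_rsa.pub that B's Dockerfile installs as authorized_keys:
ssh-keygen -t rsa -b 2048 -N "" -f ./id_rsa -q
ls id_rsa id_rsa.pub
# A can then run B's script non-interactively, e.g.:
#   ssh -i id_rsa ubuntu@build-tool /path/to/script.sh
```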
You can also run the script from outside the containers, using docker exec from a crontab on the host, but I don't think you are looking for such an extreme solution.

Mounting local volumes to docker container

Edited for clarity
I run the following command:
docker run -d -P -v /users/username/app:/app contname
This results in the following when I inspect the container:
"HostConfig": {
"Binds": [
"/users/username/app:/app"
],
"Volumes": {
"/app": "/users/username/app",
"/app": "/mnt/sda1/var/lib/docker/vfs/dir/214a16c3678f93cbadb7e7b7d56b5f26b66a34c6d9bb89ade23b16e386a12212"
},
But when I SSH into the container, I can see that /app is empty.
Is my assumption correct that the files from my host machine should be there?
Boot2docker automatically mounts your home directory into the VirtualBox boot2docker VM, not into the container. You still need to add -v /users/username/app:/app to your docker run command.
When you add a VOLUME instruction to your Dockerfile, you are declaring a volume to be created by the container. Such a volume lives under /var/lib/docker/volumes on the host VM, and it can only be shared by passing --volumes-from to the docker run command. When you pass the -v switch with a host path, you are instead bind-mounting a specified location on the host file system into the container.
