How to generate a core file in a Docker container? - bash

Using the ulimit command, I set the core file size:
ulimit -c unlimited
Then I compiled my C source code with gcc's -g option, which produced a.out.
When I run
./a.out
there is a runtime error:
(core dumped)
but no core file was generated (e.g. core.294340).
How do I generate the core file?

First make sure the container will write the cores to an existing location in the container filesystem. The core generation settings are set in the host, not in the container. Example:
echo '/cores/core.%e.%p' | sudo tee /proc/sys/kernel/core_pattern
will generate cores in the folder /cores.
In your Dockerfile, create that folder:
RUN mkdir /cores
You need to specify the core size limit; the ulimit shell command won't work here, because it only affects the current shell. Instead, use the docker run option --ulimit, which sets a soft and hard limit. After building the Docker image, run the container with something like:
docker run --ulimit core=-1 --mount source=coredumps_volume,target=/cores ab3ca583c907 ./a.out
where coredumps_volume is a volume you already created where the core will persist after the container is terminated. E.g.
docker volume create coredumps_volume

If you want to generate a core dump of an existing process, say using gcore, you need to start the container with --cap-add=SYS_PTRACE to allow a debugger running as root inside the container to attach to the process. (For core dumps on signals, see the other answer)
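For example, a minimal sketch (my_image and the PID are placeholders; gcore ships with gdb, which must be installed in the image):
docker run --cap-add=SYS_PTRACE -it my_image bash
# inside the container, with the target process running as, say, PID 1234:
gcore -o /cores/core 1234    # writes /cores/core.1234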

I keep forgetting how to do it exactly and keep stumbling upon this question which provides marginal help.
All in all it is very simple:
Run the container with extra params --ulimit core=-1 --privileged to allow coredumps:
docker run -it --rm \
--name something \
--ulimit core=-1 --privileged \
--security-opt seccomp=unconfined \
--entrypoint '/bin/bash' \
$IMAGE
Then, once in the container, set the coredump location and start your failing script:
sysctl -w kernel.core_pattern=/tmp/core-%e.%p.%h.%t
myfailingscript.a
Enjoy your stacktrace
cd /tmp; gdb -c `ls -t /tmp | grep core | tail -1`

Well, let's resurrect an ancient thread.
If you're running Docker on Linux, then all of this is controlled by /proc/sys/kernel/core_pattern on your raw metal. That is, if you cat that file on bare metal and inside the container, they'll be the same. Note also the file is tricky to update. You have to use the tee method from some of the other posts.
echo core | sudo tee /proc/sys/kernel/core_pattern
If you change it in bare metal, it gets changed in your container. So that also means that behavior is going to be based on where you're running your containers.
My containers don't run apport, but my bare metal did, so I wasn't getting cores. I did the above (I had already solved the ulimit -c thing), and suddenly I get core files in the current directory.
The key to this is understanding that it's your environment, your bare metal, that controls the contents of that file.
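A quick way to convince yourself of this (a sketch; any small image works):
cat /proc/sys/kernel/core_pattern                          # on the host
docker run --rm ubuntu cat /proc/sys/kernel/core_pattern   # prints the same thing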


file descriptor redirection in docker

I want to be able to pipe some content into a docker process without clobbering its stdin.
I thought I could do this by opening a new file descriptor in bash before spawning the docker process, then consuming this descriptor within the docker process. However, it doesn't work.
outside docker:
exec 4<>somefile.txt
docker run --rm -i image cmd args > output.txt
inside docker:
exec 4>file.txt # also tried without the exec
do something with file.txt
The docker container stops when it reaches the 4>file.txt line.
It must be an atomic action, so I can't use docker cp or anything like that.
Also, the docker image does not expose any network ports, so netcat cannot be used.
I would prefer to not use any complex docker mounts.
STDIN is required for other purposes, so I can't clobber that
Are there any other options for getting the file content into a transient container for the use of a single command?
The usual approach here is to mount the current directory into the container. You can choose any directory name inside the container, and should try to avoid hiding the script itself with the mount.
docker run --rm -i -v $PWD:/data image \
cmd -i /data/file.txt -o /data/output.txt --other-args
Filesystem permissions can be tricky, on both sides of this: you can name any directory in the first half of the -v option, even system directories like /etc; and if the process inside the container runs as a non-root user it might have trouble reading the files in the directory you mount in.
You can bind-mount either files or directories, but with the one caveat that they must exist on the host first, or else Docker will create a directory for you (even if you wanted a file; and likely owned by root and not your local user).
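For example, a sketch of pre-creating the file first, reusing the names from the question, so Docker binds a file rather than creating a root-owned directory:
touch output.txt
docker run --rm -i -v "$PWD/output.txt:/data/output.txt" image cmd -o /data/output.txt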

convert Dockerfile to Bash script

Is there any easy way to convert a Dockerfile to a Bash script in order to install all the software on a real OS? The reason is that I cannot change the Docker container, and I would like to change a few things afterwards if they don't work out.
In short - no.
By parsing the Dockerfile with a tool such as dockerfile-parse you could run the individual RUN commands, but this would not replicate the Dockerfile's output.
You would have to be running the same version of the same OS.
The ADD and COPY commands affect the filesystem, which is in its own namespace. Running these outside of the container could potentially break your host system. Your host will also have files in places that the container image would not.
VOLUME mounts will also affect the filesystem.
The FROM image (which may in turn be descended from other images) may have other applications installed.
Writing Dockerfiles can be a slow process if there is a large installation or download step. To mitigate that, try adding new packages as a new RUN command (to take advantage of the cache) and add features incrementally, only optimising/compressing the layers when the functionality is complete.
You may also want to use something like ServerSpec to get a TDD approach to your container images and prevent regressions during development.
Best practice docs here, gotchas and the original article.
Basically you can make a copy of a Docker container's file system using “docker export”, which you can then write to a loop device:
docker build -t <YOUR-IMAGE> ...
docker create --name=<YOUR-CONTAINER> <YOUR-IMAGE>
dd if=/dev/zero of=disk.img bs=1 count=0 seek=1G
mkfs.ext2 -F disk.img
sudo mount -o loop disk.img /mnt
docker export <YOUR-CONTAINER> | sudo tar x -C /mnt
sudo umount /mnt
Convert a Docker container to a raw file system image.
More info here:
http://mr.gy/blog/build-vm-image-with-docker.html
You can of course convert a Dockerfile to Bash script commands. It's just a matter of determining what the translation means. All Docker installs apply changes to a "file system layer", which means all changes can be implemented in a real OS.
An example of this process is here:
https://github.com/thatkevin/dockerfile-to-shell-script
It is an example of how you would do the translation.
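A crude version of that idea fits on one line (a sketch only: it ignores backslash continuations, ENV, WORKDIR, ADD/COPY, and everything else a real parser handles):
grep '^RUN ' Dockerfile | sed 's/^RUN //' > install.sh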
You can install an application inside a Dockerfile like this:
FROM <base>
RUN apt-get update -y
RUN apt-get install <some application> -y

How to use multiple terminals in the docker container?

I know it is weird to use multiple terminals in the docker container.
My purpose is to test some commands and build a dockerfile with these commands finally.
So I need to use multiple terminals, say, two: one to run some commands, the other to test those commands.
If I use a real machine, I can ssh it to use multiple terminals, but in docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and in that bash, use screen?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server. Because the server and client programs are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client and you can use volumes to make the files at the server available from the client. If you really want to have two terminals to the same container there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
If I understand the problem correctly, you can use nsenter.
Assuming you have a running docker named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program (by default $SHELL) in the namespaces of the given PID.
You can open more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. its files. For example, tail the nginx log file.
Further information can be found in the nsenter man page.
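If you do this often, a small wrapper is convenient (a sketch; nginx is just the example name from above, and nsenter still needs root):
enter() { nsenter -m -u -i -n -p -t "$(docker inspect --format '{{.State.Pid}}' "$1")"; }
enter nginx    # run from a second host terminal to get a second shell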
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.

How can I inspect the file system of a failed `docker build`?

I'm trying to build a new Docker image for our development process, using cpanm to install a bunch of Perl modules as a base image for various projects.
While developing the Dockerfile, cpanm returns a failure code because some of the modules did not install cleanly.
I'm fairly sure I need to get apt to install some more things.
Where can I find the /.cpanm/work directory quoted in the output, in order to inspect the logs? In the general case, how can I inspect the file system of a failed docker build command?
After running a find I discovered
/var/lib/docker/aufs/diff/3afa404e[...]/.cpanm
Is this reliable, or am I better off building a "bare" container and running stuff manually until I have all the things I need?
Every time docker successfully executes a RUN command from a Dockerfile, a new layer in the image filesystem is committed. Conveniently, you can use those layer ids as images to start a new container.
Take the following Dockerfile:
FROM busybox
RUN echo 'foo' > /tmp/foo.txt
RUN echo 'bar' >> /tmp/foo.txt
and build it:
$ docker build -t so-26220957 .
Sending build context to Docker daemon 47.62 kB
Step 1/3 : FROM busybox
---> 00f017a8c2a6
Step 2/3 : RUN echo 'foo' > /tmp/foo.txt
---> Running in 4dbd01ebf27f
---> 044e1532c690
Removing intermediate container 4dbd01ebf27f
Step 3/3 : RUN echo 'bar' >> /tmp/foo.txt
---> Running in 74d81cb9d2b1
---> 5bd8172529c1
Removing intermediate container 74d81cb9d2b1
Successfully built 5bd8172529c1
You can now start a new container from 00f017a8c2a6, 044e1532c690 and 5bd8172529c1:
$ docker run --rm 00f017a8c2a6 cat /tmp/foo.txt
cat: /tmp/foo.txt: No such file or directory
$ docker run --rm 044e1532c690 cat /tmp/foo.txt
foo
$ docker run --rm 5bd8172529c1 cat /tmp/foo.txt
foo
bar
Of course, you might want to start a shell to explore the filesystem and try out commands:
$ docker run --rm -it 044e1532c690 sh
/ # ls -l /tmp
total 4
-rw-r--r-- 1 root root 4 Mar 9 19:09 foo.txt
/ # cat /tmp/foo.txt
foo
When one of the Dockerfile commands fails, look for the id of the preceding layer and run a shell in a container created from that id:
docker run --rm -it <id_last_working_layer> bash -il
Once in the container:
try the command that failed, and reproduce the issue
then fix the command and test it
finally update your Dockerfile with the fixed command
If you really need to experiment in the actual layer that failed instead of working from the last working layer, see Drew's answer.
The top answer works in the case that you want to examine the state immediately prior to the failed command.
However, the question asks how to examine the state of the failed container itself. In my situation, the failed command is a build that takes several hours, so rewinding prior to the failed command and running it again takes a long time and is not very helpful.
The solution here is to find the container that failed:
$ docker ps -a
CONTAINER ID   IMAGE          COMMAND                  CREATED          STATUS                          PORTS   NAMES
6934ada98de6   42e0228751b3   "/bin/sh -c './utils/"   24 minutes ago   Exited (1) About a minute ago           sleepy_bell
Commit it to an image:
$ docker commit 6934ada98de6
sha256:7015687976a478e0e94b60fa496d319cdf4ec847bcd612aecf869a72336e6b83
And then run the image [if necessary, running bash]:
$ docker run -it 7015687976a4 [bash -il]
Now you are actually looking at the state of the build at the time that it failed, instead of at the time before running the command that caused the failure.
Update for newer docker versions 20.10 onwards
Linux or macOS
DOCKER_BUILDKIT=0 docker build ...
Windows
# Command line
set DOCKER_BUILDKIT=0
docker build ...
# PowerShell
$env:DOCKER_BUILDKIT=0
docker build ...
Use
DOCKER_BUILDKIT=0 docker build ...
to get the intermediate container hashes as known from older versions.
On newer versions, BuildKit is enabled by default; disabling it is recommended only for debugging purposes, since BuildKit can make your builds faster.
For reference:
Buildkit doesn't support intermediate container hashes: https://github.com/moby/buildkit/issues/1053
Thanks to @David Callanan and @MegaCookie for their inputs.
Docker caches the entire filesystem state after each successful RUN line.
Knowing that:
to examine the latest state before your failing RUN command, comment it out in the Dockerfile (as well as any and all subsequent RUN commands), then run docker build and docker run again.
to examine the state after the failing RUN command, simply add || true to it to force it to succeed; then proceed like above (keep any and all subsequent RUN commands commented out, run docker build and docker run)
Tada, no need to mess with Docker internals or layer IDs, and as a bonus Docker automatically minimizes the amount of work that needs to be re-done.
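As a concrete sketch of the || true variant, supposing the failing line were a compile step:
# in the Dockerfile, force the failing layer to commit anyway:
RUN ./configure && make || true
# then rebuild and open a shell in the committed state:
docker build -t debug-image .
docker run --rm -it debug-image bash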
Currently with the latest docker-desktop, there isn't a way to opt out
of the new BuildKit, which doesn't support debugging yet (follow the
latest updates on this in this GitHub thread:
https://github.com/moby/buildkit/issues/1472).
Find out at which line in your Dockerfile it is failing.
Add to the top of your Dockerfile: FROM xxx as debug
Add an additional target: FROM xxx as next just one line before the failing command (as you don't want to build that part). Example:
FROM xxx as debug
RUN echo "working command"
FROM xxx as next
RUN echoo "failing command"
Run docker build -f Dockerfile --target debug --tag debug .
Then you can debug the container with: docker run -it debug /bin/sh
You can quit the shell by pressing CTRL P + CTRL Q
If you want to use docker compose build instead of docker build it's possible by adding target: debug in your docker-compose.yml under build.
Then start the container by docker compose run xxxYourServiceNamexxx and use either:
The second top answer to find out how to run a shell inside the container.
Or add ENTRYPOINT /bin/sh before the FROM xxx as next line in your Dockerfile.
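For the docker compose build variant mentioned above, the docker-compose.yml fragment would look something like this (a sketch; the service name is a placeholder):
services:
  xxxYourServiceNamexxx:
    build:
      context: .
      target: debug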
Debugging build step failures is indeed very annoying.
The best solution I have found is to let each step that does real work succeed unconditionally, and add a check after it that fails if the step didn't actually work. That way you get a committed layer that contains the outputs of the failed step that you can inspect.
A Dockerfile, with an example after the # Run DB2 silent installer line:
#
# DB2 10.5 Client Dockerfile (Part 1)
#
# Requires
# - DB2 10.5 Client for 64bit Linux ibm_data_server_runtime_client_linuxx64_v10.5.tar.gz
# - Response file for DB2 10.5 Client for 64bit Linux db2rtcl_nr.rsp
#
#
# Using Ubuntu 14.04 base image as the starting point.
FROM ubuntu:14.04
MAINTAINER David Carew <carew@us.ibm.com>
# DB2 prereqs (also installing sharutils package as we use the utility uuencode to generate password - all others are required for the DB2 Client)
RUN dpkg --add-architecture i386 && apt-get update && apt-get install -y sharutils binutils libstdc++6:i386 libpam0g:i386 && ln -s /lib/i386-linux-gnu/libpam.so.0 /lib/libpam.so.0
RUN apt-get install -y libxml2
# Create user db2clnt
# Generate strong random password and allow sudo to root w/o password
#
RUN \
adduser --quiet --disabled-password -shell /bin/bash -home /home/db2clnt --gecos "DB2 Client" db2clnt && \
echo db2clnt:`dd if=/dev/urandom bs=16 count=1 2>/dev/null | uuencode -| head -n 2 | grep -v begin | cut -b 2-10` | chgpasswd && \
adduser db2clnt sudo && \
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers
# Install DB2
RUN mkdir /install
# Copy DB2 tarball - ADD command will expand it automatically
ADD v10.5fp9_linuxx64_rtcl.tar.gz /install/
# Copy response file
COPY db2rtcl_nr.rsp /install/
# Run DB2 silent installer
RUN mkdir /logs
RUN (/install/rtcl/db2setup -t /logs/trace -l /logs/log -u /install/db2rtcl_nr.rsp && touch /install/done) || /bin/true
RUN test -f /install/done || (echo ERROR-------; echo install failed, see files in container /logs directory of the last container layer; echo run docker run '<last image id>' /bin/cat /logs/trace; echo ----------)
RUN test -f /install/done
# Clean up unwanted files
RUN rm -fr /install/rtcl
# Login as db2clnt user
CMD su - db2clnt
In my case, I have to have:
DOCKER_BUILDKIT=1 docker build ...
and as mentioned by Jannis Schönleber in his answer, there is currently no debug available in this case (i.e. no intermediate images/containers get created).
What I found I could do is use the following option:
... --progress=plain ...
and then add various RUN ... or additional lines on existing RUN ... to debug specific commands. This gives you what to me feels like full access (at least if your build is relatively fast).
For example, you could check a variable like so:
RUN echo "Variable NAME = [$NAME]"
If you're wondering whether a file is installed properly, you do:
RUN find /
etc.
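Putting that together, the whole invocation is just (a sketch):
DOCKER_BUILDKIT=1 docker build --progress=plain -t myimage .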
In my situation, I had to debug a docker build of a Go application with a private repository and it was quite difficult to do that debugging. I've other details on that here.
If you are using docker-compose to build docker images, add DOCKER_BUILDKIT=0 before the command to see the last successful layer id:
DOCKER_BUILDKIT=0 docker-compose ...
This will temporarily disable DOCKER_BUILDKIT for the command only.
Having the last layer id you can connect to it using the command from the top answer
docker run --rm -it LAST_LAYER_ID sh
My solution is to see which step failed in the Dockerfile (RUN bundle install in my case),
and change it to
RUN bundle install || cat <path to the file containing the error>
This has the double effect of printing out the reason for the failure, AND ensuring this intermediate step is not flagged as failed by docker build. So it's not deleted, and can be inspected via:
docker run --rm -it <id_last_working_layer> bash -il
In there you can even re-run your failed command and test it live.
What I would do is comment out everything in the Dockerfile from the offending line onward (inclusive). Then you can run the container and run the docker commands by hand, and look at the logs in the usual way. E.g. if the Dockerfile is
RUN foo
RUN bar
RUN baz
and it's dying at bar, I would do
RUN foo
# RUN bar
# RUN baz
Then
$ docker build -t foo .
$ docker run -it foo bash
container# bar
...grep logs...
Still using BuildKit, as in Alexis Wilke's answer, you can use ktock/buildg.
See "Interactive debugger for Dockerfile" from Kohei Tokunaga
buildg is a tool to interactively debug Dockerfile based on BuildKit.
Source-level inspection
Breakpoints and step execution
Interactive shell on a step with your own debugging tools
Based on BuildKit (needs unmerged patches)
Supports rootless
Example:
$ buildg.sh debug --image=ubuntu:22.04 /tmp/ctx
WARN[2022-05-09T01:40:21Z] using host network as the default
#1 [internal] load .dockerignore
#1 transferring context: 2B done
#1 DONE 0.1s
#2 [internal] load build definition from Dockerfile
#2 transferring dockerfile: 195B done
#2 DONE 0.1s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 DONE 3.0s
#4 [build1 1/2] FROM docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8
#4 resolve docker.io/library/busybox@sha256:d2b53584f580310186df7a2055ce3ff83cc0df6caacf1e3489bff8cf5d0af5d8 0.0s done
#4 sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 772.81kB / 772.81kB 0.2s done
Filename: "Dockerfile"
2| RUN echo hello > /hello
3|
4| FROM busybox AS build2
=> 5| RUN echo hi > /hi
6|
7| FROM scratch
8| COPY --from=build1 /hello /
>>> break 2
>>> breakpoints
[0]: line 2
>>> continue
#4 extracting sha256:50e8d59317eb665383b2ef4d9434aeaa394dcd6f54b96bb7810fdde583e9c2d1 0.0s done
#4 DONE 0.3s
...

How to mount a host directory in a Docker container

I am trying to mount a host directory into a Docker container so that any updates done on the host are reflected in the Docker container.
Where am I going wrong? Here is what I did:
kishore$ cat Dockerfile
FROM ubuntu:trusty
RUN apt-get update
RUN apt-get -y install git curl vim
CMD ["/bin/bash"]
WORKDIR /test_container
VOLUME ["/test_container"]
kishore$ tree
.
├── Dockerfile
└── main_folder
├── tfile1.txt
├── tfile2.txt
├── tfile3.txt
└── tfile4.txt
1 directory, 5 files
kishore$ pwd
/Users/kishore/tdock
kishore$ docker build --tag=k3_s3:latest .
Uploading context 7.168 kB
Uploading context
Step 0 : FROM ubuntu:trusty
---> 99ec81b80c55
Step 1 : RUN apt-get update
---> Using cache
---> 1c7282005040
Step 2 : RUN apt-get -y install git curl vim
---> Using cache
---> aed48634e300
Step 3 : CMD ["/bin/bash"]
---> Running in d081b576878d
---> 65db8df48595
Step 4 : WORKDIR /test_container
---> Running in 5b8d2ccd719d
---> 250369b30e1f
Step 5 : VOLUME ["/test_container"]
---> Running in 72ca332d9809
---> 163deb2b1bc5
Successfully built 163deb2b1bc5
Removing intermediate container b8bfcb071441
Removing intermediate container d081b576878d
Removing intermediate container 5b8d2ccd719d
Removing intermediate container 72ca332d9809
kishore$ docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
c9f9a7e09c54ee1c2cc966f15c963b4af320b5203b8c46689033c1ab8872a0ea
kishore$ docker run -i -t k3_s3:latest /bin/bash
root@0f17e2313a46:/test_container# ls -al
total 8
drwx------ 2 root root 4096 Apr 29 05:15 .
drwxr-xr-x 66 root root 4096 Apr 29 05:15 ..
root@0f17e2313a46:/test_container# exit
exit
kishore$ docker -v
Docker version 0.9.1, build 867b2a9
I don't know how to check boot2docker version
Questions, issues facing:
How do I need to link the main_folder to the test_container folder present inside the docker container?
I need to make this happen automatically. How do I do that without really using the run -d -v command?
What happens if the boot2docker crashes? Where are the Docker files stored (apart from Dockerfile)?
There are a couple ways you can do this. The simplest way to do so is to use the dockerfile ADD command like so:
ADD . /path/inside/docker/container
However, any changes made to this directory on the host after building the dockerfile will not show up in the container. This is because when building a container, docker compresses the directory into a .tar and uploads that context into the container permanently.
The second way to do this is the way you attempted, which is to mount a volume. Due to trying to be as portable as possible you cannot map a host directory to a docker container directory within a dockerfile, because the host directory can change depending on which machine you are running on. To map a host directory to a docker container directory you need to use the -v flag when using docker run, e.g.,:
# Run a container using the `alpine` image, mount the `/tmp`
# directory from your host into the `/container/directory`
# directory in your container, and run the `ls` command to
# show the contents of that directory.
docker run \
-v /tmp:/container/directory \
alpine \
ls /container/directory
The asker of this question was using Docker version 0.9.1, build 867b2a9; I will give you an answer for docker version >= 17.06.
What you want, keeping a local directory synchronized with a container directory, is accomplished by mounting the volume with type bind. This binds the source (your system) and the target (the docker container) directories. It's almost the same as mounting a directory on linux.
According to Docker documentation, the appropriate option is now --mount instead of -v. Here's its documentation:
--mount: Consists of multiple key-value pairs, separated by commas. Each key/value pair takes the form of a <key>=<value> tuple. The --mount syntax is more verbose than -v or --volume, but the order of the keys is not significant, and the value of the flag is easier to understand.
The type of the mount, which can be bind, volume, or tmpfs. (We are going to use bind)
The source of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
The destination takes as its value the path where the file or directory will be mounted in the container. May be specified as destination, dst, or target.
So, to mount the current directory (source) at /test_container (target) we are going to use:
docker run -it --mount src="$(pwd)",target=/test_container,type=bind k3_s3
If the mount parameters contain spaces you must put quotes around them. When I know they don't, I use `pwd` instead:
docker run -it --mount src=`pwd`,target=/test_container,type=bind k3_s3
You will also have to deal with file permission, see this article.
You can use the -v option from the CLI; this facility is not available via Dockerfile:
docker run -t -i -v <host_dir>:<container_dir> ubuntu /bin/bash
where host_dir is the directory from the host which you want to mount.
You don't need to worry about the container directory: if it doesn't exist, docker will create it.
If you make any changes in host_dir on the host machine (under root privilege), they will be visible in the container, and vice versa.
2 successive mounts:
I guess many posts here might be using boot2docker; the reason you don't see anything is that you are mounting a directory from boot2docker, not from your host.
You basically need 2 successive mounts:
the first one to mount a directory from your host to your system
the second to mount the new directory from boot2docker to your container like this:
1) Mount local system on boot2docker
sudo mount -t vboxsf hostfolder /boot2dockerfolder
2) Mount boot2docker file on linux container
docker run -v /boot2dockerfolder:/root/containerfolder -i -t imagename
Then when you ls inside the containerfolder you will see the content of your hostfolder.
Is it possible that you are using docker on OS X via boot2docker or something similar?
I've had the same experience: the command is correct, but nothing (sensible) is mounted in the container anyway.
As it turns out - it's already explained in the docker documentation. When you type docker run -v /var/logs/on/host:/var/logs/in/container ... then /var/logs/on/host is actually mapped from the boot2docker VM-image, not your Mac.
You'll have to pipe the shared folder through your VM to your actual host (the Mac in my case).
For those who want to mount a folder in the current directory:
docker run -d --name some-container -v ${PWD}/folder:/var/folder ubuntu
I'm just experimenting with getting my SailsJS app running inside a Docker container to keep my physical machine clean.
I'm using the following command to mount my SailsJS/NodeJS application under /app:
cd my_source_code_folder
docker run -it -p 1337:1337 -v $(pwd):/app my_docker/image_with_nodejs_etc
[UPDATE] As of ~June 2017, Docker for Mac takes care of all the annoying parts of this where you have to mess with VirtualBox. It lets you map basically everything on your local host using the /private prefix. More info here. [/UPDATE]
All the current answers talk about Boot2docker. Since that's now deprecated in favor of docker-machine, this works for docker-machine:
First, ssh into the docker-machine vm and create the folder we'll be mapping to:
docker-machine ssh $MACHINE_NAME "sudo mkdir -p \"$VOL_DIR\""
Now share the folder to VirtualBox:
WORKDIR=$(basename "$VOL_DIR")
vboxmanage sharedfolder add "$MACHINE_NAME" --name "$WORKDIR" --hostpath "$VOL_DIR" --transient
Finally, ssh into the docker-machine again and mount the folder we just shared:
docker-machine ssh $MACHINE_NAME "sudo mount -t vboxsf -o uid=\"$U\",gid=\"$G\" \"$WORKDIR\" \"$VOL_DIR\""
Note: for UID and GID you can basically use whatever integers as long as they're not already taken.
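For example, to pick up your current user's IDs before running the mount command above (a sketch; U and G are just shell variables):
U=$(id -u); G=$(id -g)   # your uid/gid, e.g. 501/20 on a default macOS account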
This is tested as of docker-machine 0.4.1 and docker 1.8.3 on OS X El Capitan.
Using the command line:
docker run -it --name <WHATEVER> -p <LOCAL_PORT>:<CONTAINER_PORT> -v <LOCAL_PATH>:<CONTAINER_PATH> -d <IMAGE>:<TAG>
Using docker-compose.yaml:
version: '2'
services:
  cms:
    image: <IMAGE>:<TAG>
    ports:
      - <LOCAL_PORT>:<CONTAINER_PORT>
    volumes:
      - <LOCAL_PATH>:<CONTAINER_PATH>
Assume:
IMAGE: k3_s3
TAG: latest
LOCAL_PORT: 8080
CONTAINER_PORT: 8080
LOCAL_PATH: /volume-to-mount
CONTAINER_PATH: /mnt
Examples:
First create /volume-to-mount. (Skip if it exists.)
$ mkdir -p /volume-to-mount
docker-compose -f docker-compose.yaml up -d
version: '2'
services:
  cms:
    image: ghost-cms:latest
    ports:
      - 8080:8080
    volumes:
      - /volume-to-mount:/mnt
Verify your container:
docker exec -it CONTAINER_ID ls -la /mnt
docker run -v /host/directory:/container/directory -t IMAGE-NAME /bin/bash
docker run -v /root/shareData:/home/shareData -t kylemanna/openvpn /bin/bash
On my system I've corrected the answer from nhjk; it works flawlessly when you add the -t flag.
On Mac OS, to mount a folder /Users/<name>/projects/ on your mac at the root of your container:
docker run -it -v /Users/<name>/projects/:/projects <container_name> bash
ls /projects
If the host is Windows 10 then instead of forward slashes, use backslashes:
docker run -it -p 12001:80 -v c:\Users\C\Desktop\dockerStorage:/root/sketches
Make sure the host drive is shared (C in this case). In my case I got a prompt asking for share permission after running the command above.
For Windows 10 users, it is important to have the mount point inside the C:/Users/ directory. I tried for hours to get this to work. This post helped but it was not obvious at first as the solution for Windows 10 is a comment to an accepted answer. This is how I did it:
docker run -it -p 12001:80 -v //c/Users/C/Desktop/dockerStorage:/root/sketches \
<your-image-here> /bin/bash
Then to test it, you can do echo TEST > hostTest.txt inside your image. You should be able to see this new file in the local host folder at C:/Users/C/Desktop/dockerStorage/.
As of Docker 18-CE, you can use docker run -v /src/path:/container/path to do 2-way binding of a host folder.
There is a major catch here though if you're working with Windows 10/WSL and have Docker-CE for Windows as your host and then docker-ce client tools in WSL. WSL knows about the entire / filesystem while your Windows host only knows about your drives. Inside WSL, you can use /mnt/c/projectpath, but if you try to docker run -v ${PWD}:/projectpath, you will find in the host that /projectpath/ is empty because on the host /mnt means nothing.
If you work from /c/projectpath though and THEN do docker run -v ${PWD}:/projectpath, you WILL find that in the container, /projectpath reflects /c/projectpath in real time. There are no errors or any other ways to detect this issue other than seeing empty mounts inside your guest.
You must also be sure to "share the drive" in the Docker for Windows settings.
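A sketch of a working sequence (this assumes, per the above, that your C: drive is reachable at /c inside WSL rather than only at /mnt/c, e.g. via a bind mount or a wsl.conf root setting):
cd /c/projectpath        # not /mnt/c/projectpath
docker run --rm -v "${PWD}:/projectpath" ubuntu ls /projectpath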
Jul 2015 update - boot2docker now supports direct mounting. You can use -v /var/logs/on/host:/var/logs/in/container directly from your Mac prompt, without double mounting
I've been having the same issue.
My command line looked like this:
docker run --rm -i --name $NAME -v `pwd`:/sources:z $NAME
The problem was with 'pwd'. So I changed that to $(pwd):
docker run --rm -i --name $NAME -v $(pwd):/sources:z $NAME
How do I link the main_folder to the test_container folder present inside the docker container?
Your command below is correct, unless you're on a Mac using boot2docker (depending on future updates), in which case you may find the folder empty. See mattes' answer for a tutorial on correcting this.
docker run -d -v /Users/kishore/main_folder:/test_container k3_s3:latest
I need to make this run automatically, how to do that without really
using the run -d -v command.
You can't really get away from using these commands, they are intrinsic to the way docker works. You would be best off putting them into a shell script to save you writing them out repeatedly.
What happens if boot2docker crashes? Where are the docker files stored?
If you manage to use the -v arg and reference your host machine then the files will be safe on your host.
If you've used 'docker build -t myimage .' with a Dockerfile then your files will be baked into the image.
Your docker images, I believe, are stored in the boot2docker VM. I found this out when my images disappeared after I deleted the VM from VirtualBox. (Note, I don't know how VirtualBox works, so the images might still be hidden somewhere else, just not visible to docker.)
Had the same problem. Found this in the docker documentation:
Note: The host directory is, by its nature, host-dependent. For this reason, you can’t mount a host directory from Dockerfile, the VOLUME instruction does not support passing a host-dir, because built images should be portable. A host directory wouldn’t be available on all potential hosts.
So, mounting a read/write host directory is only possible with the -v parameter in the docker run command, as the other answers point out correctly.
I found that any directory lying under a system directory like /var, /usr, /etc could not be mounted in the container.
The directory should be in user space. The -v switch instructs the docker daemon to mount the local directory into the container, for example:
docker run -t -d -v /{local}/{path}:/{container}/{path} --name {container_name} {imagename}
Here's an example with a Windows path:
docker run -P -it --name organizr --mount src="/c/Users/MyUserName/AppData/Roaming/DockerConfigs/Organizr",dst=/config,type=bind organizrtools/organizr-v2:latest
As a side note, during all of this hair pulling, having to wrestle with figuring out, and retyping paths over and over and over again, I decided to whip up a small AutoHotkey script to convert a Windows path to a "Docker Windows" formatted path. This way all I have to do is copy any Windows path that I want to use as a mount point to the clipboard, press the "Apps Key" on the keyboard, and it'll format it into a path format that Docker appreciates.
For example:
Copy this to your clipboard:
C:\Users\My PC\AppData\Roaming\DockerConfigs\Organizr
press the Apps Key while the cursor is where you want it on the command-line, and it'll paste this there:
"/c/Users/My PC/AppData/Roaming/DockerConfigs/Organizr"
Saves a lot to time for me. Here it is for anyone else who may find it useful.
; --------------------------------------------------------------------------------------------------------------
;
; Docker Utility: Convert a Windows Formatted Path to a Docker Formatted Path
; Useful for (example) when mounting Windows volumes via the command-line.
;
; By: J. Scott Elblein
; Version: 1.0
; Date: 2/5/2019
;
; Usage: Cut or Copy the Windows formatted path to the clipboard, press the AppsKey on your keyboard
; (usually right next to the Windows Key), it'll format it into a 'docker path' and enter it
; into the active window. Easy example usage would be to copy your intended volume path via
; Explorer, place the cursor after the "-v" in your Docker command, press the Apps Key and
; then it'll place the formatted path onto the line for you.
;
; TODO:: I may or may not add anything to this depending on needs. Some ideas are:
;
; - Add a tray menu with the ability to do some things, like just replace the unformatted path
; on the clipboard with the formatted one rather than enter it automatically.
; - Add 'smarter' handling so the it first confirms that the clipboard text is even a path in
; the first place. (would need to be able to handle Win + Mac + Linux)
; - Add command-line handling so the script doesn't need to always be in the tray, you could
; just pass the Windows path to the script, have it format it, then paste and close.
; Also, could have it just check for a path on the clipboard upon script startup, if found
; do it's job, then exit the script.
; - Add an 'all-in-one' action, to copy the selected Windows path, and then output the result.
; - Whatever else comes to mind.
;
; --------------------------------------------------------------------------------------------------------------
#NoEnv
SendMode Input
SetWorkingDir %A_ScriptDir%
AppsKey::
; Create a new var, store the current clipboard contents (should be a Windows path)
NewStr := Clipboard
; Rip out the first 2 chars (should be a drive letter and colon) & convert the letter to lowercase
; NOTE: I could probably replace the following 3 lines with a regexreplace, but atm I'm lazy and in a rush.
tmpVar := SubStr(NewStr, 1, 2)
StringLower, tmpVar, tmpVar
; Replace the uppercase drive letter and colon with the lowercase drive letter and colon
NewStr := StrReplace(NewStr, SubStr(NewStr, 1, 2), tmpVar)
; Replace backslashes with forward slashes
NewStr := StrReplace(NewStr, "\", "/")
; Replace all colons with nothing
NewStr := StrReplace(NewStr, ":", "")
; Remove the last char if it's a trailing forward slash
NewStr := RegExReplace(NewStr, "/$")
; Append a leading forward slash if not already there
if RegExMatch(NewStr, "^/") == 0
NewStr := "/" . NewStr
; If there are any spaces in the path ... wrap in double quotes
if RegExMatch(NewStr, " ") > 0
NewStr := """" . NewStr . """"
; Send the result to the active window
SendInput % NewStr
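If you'd rather stay in a shell, the same transformation can be approximated with GNU sed (a hypothetical helper, not part of the original answer; quote the result when you pass it to -v):
winpath() { printf '%s\n' "$1" | sed -e 's|\\|/|g' -e 's|^\([A-Za-z]\):|/\L\1|' -e 's|/$||'; }
winpath 'C:\Users\My PC\AppData\Roaming\DockerConfigs\Organizr'
# prints: /c/Users/My PC/AppData/Roaming/DockerConfigs/Organizr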
To get this working in Windows 10 I had to open the Docker Settings window from the system tray and go to the Shared Drives section.
I then checked the box next to C. Docker asked for my desktop credentials to gain authorisation to write to my Users folder.
Then I ran the docker container following examples above and also the example on that settings page, attaching to /data in the container.
docker run -v c:/Users/<user.name>/Desktop/dockerStorage:/data -other -options
boot2docker together with VirtualBox Guest Additions
How to mount /Users into boot2docker
https://medium.com/boot2docker-lightweight-linux-for-docker/boot2docker-together-with-virtualbox-guest-additions-da1e3ab2465c
tl;dr Build your own custom boot2docker.iso with VirtualBox Guest
Additions (see link) or download
http://static.dockerfiles.io/boot2docker-v1.0.1-virtualbox-guest-additions-v4.3.12.iso
and save it to ~/.boot2docker/boot2docker.iso.
Note that in Windows you'll have to provide the absolute path.
Host: Windows 10
Container: Tensorflow Notebook
Below worked for me.
docker run -t -i -v D:/projects/:/home/chankeypathak/work -p 8888:8888 jupyter/tensorflow-notebook /bin/bash
I had the same issue; I was trying to mount the C:\Users\ folder on docker.
This is how I did it with the Docker Toolbox command line:
$ docker run -it --name <containername> -v /c/Users:/myVolData <imagename>
You can also do this with Portainer web application for a different visual experience.
First pull the Portainer image:
docker pull portainer/portainer
Then create a volume for Portainer:
docker volume create portainer_data
Also create a Portainer container:
docker run -d -p 8000:8000 -p 9000:9000 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer
You will be able to access the web app with your browser at this URL: "http://localhost:9000". At the first login, you will be prompted to set your Portainer admin credentials.
In the web app, follow these menus and buttons: (Container > Add container > Fill settings > Deploy Container)
I had trouble creating a "mount" volume with Portainer, and I realized I had to click "bind" when creating my container's volume, so that the container volume is bound to a host path.
P.S.: I'm using Docker 19.035 and Portainer 1.23.1
I had the same requirement to mount a host directory from a container, and I used a volume mount command. During testing I noticed that it creates files inside the container too, but after some digging I found that they are just symbolic links and the actual file system used is from the host machine.
Quoting from the Official Website:
Make sure you don’t have any previous getting-started containers running.
Run the following command from the app directory.
x86-64 Mac or Linux device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "yarn install && yarn run dev"
Windows (PowerShell):
docker run -dp 3000:3000 `
-w /app -v "$(pwd):/app" `
node:12-alpine `
sh -c "yarn install && yarn run dev"
Apple silicon Mac or another ARM64 device:
docker run -dp 3000:3000 \
-w /app -v "$(pwd):/app" \
node:12-alpine \
sh -c "apk add --no-cache python2 g++ make && yarn install && yarn run dev"
Explaining:
-dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping.
-w /app - sets the "working directory" or the current directory that the command will run from.
-v "$(pwd):/app" - bind mount the current directory from the host into the /app directory in the container.
node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile.
sh -c "yarn install && yarn run dev" - the command. We're starting a shell using sh (alpine doesn't have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we'll see that the dev script is starting nodemon.
