Docker --rm does not remove my container if the physical machine dies

I have some containers that hold sensitive information in their file systems while they exist. I create these containers using a line such as:
docker run -it --rm someImage bash
Whenever I exit bash, my container disappears along with all the sensitive files, which is precisely what I want: with a Docker container I do not need to remember to delete those files. This also beats putting the files in /tmp, since they are removed the moment I stop using them.
The problem is when my physical machine dies for some reason, for instance if I trip over the power cable. When that happens, my container goes to the stopped state and someone else can restart it using
docker start myContainerId
Is there any way to improve how I create my container to achieve what I want? One option would be to put something in the host machine's SysV init scripts to erase those containers at startup, but I wonder whether Docker has a better way to deal with this.

Just use docker rm
docker rm <containerid>
(it only works on stopped containers)
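If you also want stopped containers erased automatically at boot, as the question suggests, here is a minimal sketch you could call from an init script (it assumes the Docker daemon is already running; container prune needs Docker 1.13 or later):
docker container prune --force                      # remove all stopped containers
docker rm $(docker ps -aq --filter status=exited)   # equivalent for older engines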

Related

How to copy from HOST to CONTAINER while within container

I am sorry for taking up your time.
I have a local docker setup and I want to copy files from my local host to my container.
But the thing is that I need a command that I can use WHILE I am inside the container.
To explain the situation further: I executed "docker exec -it CONTAINERNAME bash" to enter my container,
and I am now in /var/www/html,
and I need to find a way to copy a file/folder from my local environment into that container.
Reason: I am currently writing a Dockerfile which automates the process of setting things up. I need that very specific command because a Dockerfile RUN command can only be executed while inside the container.
What I tried:
"docker cp" is a good command to use when I am outside the container but it doesn't work while in the container.
"DOCKERFILE COPY" might do the trick but I need a general shell command to double check if it really does what it is supposed to do. I must be able to reproduce the same process of my Dockerfile via manually executing the commands one by one.
Once again, I apologize for my inability to solve this problem by myself. My inexperience has caused me nothing but trouble.
Edit: I am using a Windows 10 64-bit OS with a dual-monitor setup and a left-handed mouse. My keyboard, albeit old, should possess all the keys necessary to replicate any essential keyboard shortcuts if required. All my drivers are installed and updated.
When you build an image, you need to put everything your container needs for normal operation into it. You shouldn't be copying files from the host after the image is built. You could use a volume as common storage for both the host and the container, but I don't think that is your case.
Since it is not totally clear what you are doing, I'd suggest preparing all the data you need, putting it inside the Docker build context, and then building the image. You may also find docker-compose useful, as, at the very least, it lets you define the context and the path to your Dockerfile separately if needed.
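As a sketch of that workflow (the php base image, the ./app directory, and the service name below are illustrative, not taken from the question):
# Dockerfile: bake the prepared files from the build context into the image
FROM php:8.2-apache
COPY ./app/ /var/www/html/
# docker-compose.yml: defines the context and the Dockerfile path separately
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile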

How to access shared volumes on Docker for Mac

I've reviewed the documentation here:
https://docs.docker.com/docker-for-mac/install/#install-and-run-docker-for-mac
It doesn't say anything about boot2docker, although some other questions along these lines talk about this:
Mount volume to Docker image on OSX
So the question is – the Docker for Mac application provides File Sharing via Preferences -> File Sharing; how does one make use of these shared folders from the docker image (for example if one ssh's into the docker image)? When I say how, I don't mean "what are the use-cases", I mean "please show me an example of how to access a shared folder from the command line of the running container".
Ideally I'm trying to create a scenario similar to Vagrant's synced folders, whereby I can edit files in my host environment, independently of the Docker image, and have them updated automatically in the Docker image on save.
UPDATE:
To be clear, the reason for asking this question is that I couldn't get the -v docker option to work. E.g.
docker run -v /Users/geoidesic/Documents/projects/arc/mysite/djangocms_demo:/home/djangocms/djangocms/djangocms_demo -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
With the above command the container immediately stops, so if I run docker ps the list of running containers is empty.
However, if I run the container without the -v option, it stays running as expected:
docker run -d -p 8001:8000 --name test_shared_volumes bluszcz/djangocms
Updated:
Well, if you want to share a file or directory between the host and a container, you're going to use Docker's bind mounts.
For example, if I want to share my host's /etc/resolv.conf with my container, I do the following:
docker run -v /etc/resolv.conf:/etc/resolv.conf <IMAGE>
The -v ... part tells the container to reuse the host's /etc/resolv.conf. Whenever I edit this file, the changes are immediately visible in the container.
In Linux, you can use this approach to share almost any of your host's files with containers. Unfortunately, that is not the case on Mac. As I mentioned in my old answer, by default you can only share /Users/, /Volumes/, /private/, and /tmp directly.
On my Mac, say I want to share the /data directory with a container. I run the command below:
docker run -it --rm -v /data:/data busybox sh
Then it pops up an unhappy error:
docker: Error response from daemon: Mounts denied:
The path /data
is not shared from OS X and is not known to Docker.
You can configure shared paths from Docker -> Preferences... -> File Sharing.
See https://docs.docker.com/docker-for-mac/osxfs/#namespaces for more info.
So you see, this is where File Sharing comes up.
Then comes my answers to your questions:
File Sharing does not give you a ready-to-use sync like the one you experienced in Vagrant;
To share a file/folder between host and container, use Docker's bind mounts.
Hope that helps.
Old answer:
File Sharing is used by Docker's bind-mount feature. By default, you can bind-mount files in /Users/, /Volumes/, /private/, and /tmp directly. For other paths, you need to add them to Preferences -> File Sharing first.
Use cases for bind-mount:
Persisting data generated by the running container, so that you can back up or migrate it.
Sharing data among multiple running containers.
Sharing host configuration files with containers.
Sharing source code between host and containers, to make debugging easier.
Note: for cases #1 and #2, consider using volumes instead of bind mounts.
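For case #4, the Vagrant-style workflow asked about above, a minimal sketch (the node image, the /app path, and app.js are illustrative):
docker run -it --rm -v "$PWD":/app -w /app node:18 node app.js
# edits made to app.js on the host are picked up on the next run;
# a file-watching dev server would see them live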

Does docker store all its files as a "memory image", as part of the image, not as disk files?

I was trying to add some files inside a Docker container, e.g. with touch. I found that after I shut down this container and brought it up again, all my files were lost. Also, I'm using the ubuntu image, and after shutting down and restarting from the same image, all the software I had installed with apt-get is gone! It is just like running a new image. So how can I save any file that I created?
My question is: does Docker hold the container's file system in memory, like a tmpfs on /tmp, so that nothing is actually saved to disk?
Thanks.
This is normal behaviour for Docker. You have to define a volume to save your data; volumes persist even after you shut down your container.
For example with a simple apache webserver:
$ docker run -dit --name my-apache-app -v "$PWD":/usr/local/apache2/htdocs/ httpd:2.4
This will mount your current directory to /usr/local/apache2/htdocs in the container, so the files will be available there.
Another approach is to use named volumes, which are not tied to a directory you pick on your disk. Please refer to the docs:
Docker Manage - Data
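A minimal named-volume sketch (the volume name, container name, and image are illustrative):
docker volume create app-data                       # managed by Docker, no host path to choose
docker run -d --name db -v app-data:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=secret mysql:8
# the data in app-data survives "docker rm db"; reattach it to a new container with the same -v flag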
When you start a container using the docker run command, e.g. docker run ubuntu, Docker starts a new container based on the image you specified. Any changes you made in a previous container will not be available, as this is a new instance spawned from the base image.
There are multiple ways to persist your data/changes in your container.
Use Volumes.
Data volumes are designed to persist data, independent of the container’s lifecycle. You could attach a data volume or mount a host directory as a volume.
Use Docker commit to create a new image with your changes and start future containers based on that image.
docker commit <container-id> new_image_name   # snapshot the container's filesystem changes as a new image
docker run new_image_name                     # future containers start from that snapshot
Use docker ps -a to list all containers, including the ones that have exited. Find the ID of the container you were working on and start it using docker start <id>.
docker ps -a #find the id
docker start 1aef34df8ddf #start the container in background
References
Docker Volumes
Docker Commit

Met "/bin/bash: no such file" when building docker image from scratch

I'm trying to create my own Docker image on an Ubuntu 14.04 system.
My Dockerfile looks like the following:
FROM scratch
RUN /bin/bash -c 'echo "hello"'
I get the following error message when I run docker build .:
exec: "/bin/sh": stat /bin/sh: no such file or directory
I guess it is because /bin/sh doesn't exist in the base image "scratch". How should I solve this problem?
Docker is basically a containerisation tool that helps you build systems and bring them up and running in a flash, with far less resource utilisation than virtual machines.
A Docker image is layered: every command in a Dockerfile creates a new layer, and the final image, the one your container actually starts from, is the stack of all those layers after every command has been executed.
The images available on Docker Hub are optimised for this sort of environment and are easy to set up and build on. If you build an image right from scratch, i.e. without any base image, what you basically have is an empty filesystem. An empty filesystem contains no /bin/bash (or /bin/sh), so there is simply no shell to execute, and the RUN instruction fails.
A Docker container does not take any userland specifics from your underlying OS; multiple containers merely share the same underlying kernel in an efficient manner. That's it, nothing else.
(There is, however, the concept of volumes, whereby a container shares a specific directory with the underlying host system.)
So if you want to use /bin/bash, you need a base image that sets up the nitty-gritty of that command for your container; then you can execute it successfully.
However, it is recommended to use an official Docker image, say for Ubuntu, and then install your custom stuff on top of it. The official images come straight from the makers and are highly optimised for this environment.
The scratch base image contains no /bin/bash (and no /bin/sh either). So you should change your Dockerfile to:
FROM ubuntu:14.04
RUN /bin/sh -c 'echo "hello"'
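For completeness: scratch is still useful when you copy in a statically linked binary and run it directly, since an exec-form CMD needs no shell. A sketch, assuming hello is a binary you built statically on the host (e.g. with CGO_ENABLED=0 for Go):
FROM scratch
COPY hello /hello
CMD ["/hello"]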

How to edit files in stopped/not starting docker container

Trying to fix errors and debug problems with my application that is split over several containers, I frequently edit files in containers:
either I am totally lazy, install nano, and edit directly in the container, or
I docker cp the file out of the container, edit it, copy it back, and restart the container.
Those are intermediate steps before arriving at new content for the container build, which takes a lot longer than doing the above (which of course is only intermediate fiddling around).
Now I frequently break the starting program of the container, which in the breaking cases is either a Node script or a Python web server script; both typically fail on syntax errors.
Is there any way to save those containers? Since they do not start, I cannot docker exec into them, and thus they are lost to me. I then go the rm/rmi/build/run route after fixing the offending file in the build input.
How can I either edit files in a stopped container, or cp them in or start a shell in a stopped container - anything that allows me to fix this container?
(It seems a bit like working on a remote computer and breaking the networking configuration - connection is lost "forever" this way and one has to use a fallback, if that exists.)
How to edit Docker container files from the host? looks relevant but is outdated.
I had a problem with a container which wouldn't start due to a bad config change I made.
I was able to copy the file out of the stopped container and edit it. Something like:
docker cp docker_web_1:/etc/apache2/sites-enabled/apache2.conf .
(correct the file)
docker cp apache2.conf docker_web_1:/etc/apache2/sites-enabled/apache2.conf
Answering my own question... still hoping for a better answer from a more knowledgeable person!!
There are two possibilities.
1) Editing the container's file system directly on the host. This is somewhat dangerous and has a chance of completely breaking the container, and possibly other data, depending on what goes wrong.
2) Changing the startup command to something that never fails, like starting bash, doing the fixes/edits, and then changing the startup command back to the desired one (like node or whatever it was before).
More details:
1) Using
docker ps
to find the running containers or
docker ps -a
to find all containers (including stopped ones) and
docker inspect (containername)
look for the "Id", one of the first values.
This is the part that contains implementation detail and might change, be aware that you may lose your container this way.
Go to
/var/lib/docker/aufs/diff/9bc343a9..(long container id)/
and there you will find all files that have changed relative to the image the container is based on. You can overwrite files, or add and edit files.
Again, I would not recommend this.
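For what it's worth, a sketch of the lookup this answer describes, assuming the aufs storage driver and a container named mycontainer (the name is illustrative; other storage drivers use different paths):
id=$(docker inspect --format '{{ .Id }}' mycontainer)   # the long container id
sudo ls /var/lib/docker/aufs/diff/"$id"/                # files changed relative to the image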
2) As described at https://stackoverflow.com/a/32353134/586754, you can find the container's JSON configuration file, config.json, at a path like
/var/lib/docker/containers/9bc343a99..(long container id)/config.json
There you can change the args from e.g. "nodejs app.js" to "/bin/bash". Then restart the docker service and start the container (you should see that it now starts up correctly). You should use
docker start -i (containername)
to make sure it does not quit straight away. You can now work with the container and/or later attach with
docker exec -ti (containername) /bin/bash
Also, docker cp is rather useful for copying files that were edited outside of the container.
Also, one should only fall back to those measures if the container is more or less "lost" anyway, so any change would be an improvement.
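A condensed sketch of option 2, assuming systemd manages the Docker daemon and the original startup command was "nodejs app.js" (the container name is illustrative; on newer engines the file is called config.v2.json):
sudo systemctl stop docker        # the daemon keeps the config in memory, so stop it first
sudo vi /var/lib/docker/containers/9bc343a99..(long container id)/config.json   # set Path to "/bin/bash", clear Args
sudo systemctl start docker
docker start -i mycontainer       # should now drop into bash instead of crashing
# fix the broken file, then revert Path/Args the same way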
You can edit the container's file system directly, but I don't know whether it is a good idea.
First you need to find the path of directory which is used as runtime root for container.
Run docker container inspect id/name.
Look for the key UpperDir in JSON output.
That is your directory.
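On engines that use an overlay-family storage driver, that whole lookup collapses to a one-liner (the container name is illustrative):
docker container inspect --format '{{ .GraphDriver.Data.UpperDir }}' mycontainer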
If you are trying to restart a stopped container that needs altering because of a misconfiguration, but it isn't starting, you can do the following, which works using the docker cp command (similar to the previous suggestion). This procedure lets you remove files and make any other changes needed. With luck you can skip a lot of the steps below.
1) Use docker inspect <container> to find the entrypoint (named Path in some versions).
2) Create a clone of the container's image using docker run <image>.
3) Enter the clone using docker exec -ti <clone> bash (if it is a *nix container).
4) Locate the entrypoint file by looking through the clone.
5) Copy the old entrypoint script out using docker cp <clone>:<path-to-entrypoint> ./
6) Modify, or create a new, entrypoint script, for instance:
#!/bin/bash
tail -f /etc/hosts
7) Ensure the script has execution rights.
8) Replace the old entrypoint using docker cp ./<entrypoint> <container>:<path-to-entrypoint>
9) Start the old container using docker start <container>.
10) Redo steps 6-9 until the container starts.
11) Fix the issues in the container.
12) Restore the entrypoint if needed and redo steps 6-9 as required.
13) Remove the clone if needed.
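As a concrete illustration of steps 5 to 9, assuming the stopped container is named broken-app and its entrypoint script lives at /app/start.sh (both names are hypothetical):
docker cp broken-app:/app/start.sh ./start.sh             # step 5: copy the entrypoint out
printf '#!/bin/bash\ntail -f /etc/hosts\n' > start.sh     # step 6: a placeholder entrypoint that never exits
chmod +x start.sh                                         # step 7: execution rights
docker cp ./start.sh broken-app:/app/start.sh             # step 8: put it back
docker start broken-app                                   # step 9: the container now stays up
docker exec -ti broken-app bash                           # go in and fix the real problem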
