Disk management (cleanup) in a stopped docker container - elasticsearch

I'm running an Elasticsearch Docker container on my virtual machine and recently had an Elasticsearch failure: the container just stopped. The reason: my SSD is out of space.
I could easily clean up my indexes, but the real issue is that I can't start the container to do that. It stops right after starting, so there's no way to get in via the web UI or bash to free up space.
How can I clean up the disk of a stopped container that won't start?

Assuming you're using the official elasticsearch image, the Elasticsearch data directory will be a volume (mind the VOLUME /usr/share/elasticsearch/data statement in that image's Dockerfile).
You can now start another container, mounting your original container's volumes using the --volumes-from option to perform whatever cleanup tasks you deem necessary:
docker run --rm -it \
    --volumes-from=<original-elasticsearch-container> \
    ubuntu:latest \
    /bin/bash
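For example, once you have a shell in that throwaway container you can check what is actually taking up space before deleting anything. The exact layout under the data directory depends on your Elasticsearch version, so treat these paths as illustrative:
du -sh /usr/share/elasticsearch/data/nodes/0/indices/* | sort -h
# then, for an index you no longer need:
rm -rf /usr/share/elasticsearch/data/nodes/0/indices/<index-name-or-uuid>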
If that should fail, you can also run docker inspect on your Elasticsearch container and find the volume's directory in the host filesystem (assuming you're using the default local volume driver). Look for a Mounts section in the JSON output:
"Mounts": [
{
"Name": "<volume-id>",
"Source": "/var/lib/docker/volumes/<volume-id>/_data",
"Destination": "/usr/share/elasticsearch/data",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
The "Source" property will describe the volume's location on your host filesystem. When the container is started, this directory is simply bindmounted into the container's mount namespace; any changes you make in this directory on the host will be reflected in the container when it is started.

Related

Postgres Docker Container data fails to mount to local

I'm trying to set up data persistence in Postgres, but when I mount the data folder to a local directory, I get this error:
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
LOG: could not link file "pg_xlog/xlogtemp.25" to "pg_xlog/000000010000000000000001": Operation not permitted
FATAL: could not open file "pg_xlog/000000010000000000000001": No such file or directory
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
running bootstrap script ...
Here's my YAML file
version: '3.1'
services:
  postgres:
    restart: always
    image: postgres:9.6.4-alpine
    ports:
      - 8100:5432
    volumes:
      - ./pgdata:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: root
I'm using Docker Toolbox on Windows; the Docker machine runs in VirtualBox.
It looks like you're using a shared data directory (a host directory shared into the virtual machine) for database storage.
Only two explanations make sense:
1) You have a trivial issue with directory permissions.
2) You've hit a known problem (google it!) with some VirtualBox and VMware versions on certain Windows versions: you cannot create symlinks in directories shared from the host into the virtual machine.
For (2), the workaround is to NOT keep the data in a shared folder.
Either way, it's a problem that should be solved by the provider of the Docker image or by the provider of the virtualizer (VirtualBox, VMware, etc.).
This is NOT a fault of the Windows OS or of PostgreSQL.
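As a sketch of the workaround for (2): keep the data in a named volume, which lives inside the Docker machine's own filesystem rather than in the shared folder (the volume name pgdata here is just an example):
docker volume create pgdata
docker run -d --restart=always --name postgres \
    -p 8100:5432 \
    -e POSTGRES_PASSWORD=root \
    -v pgdata:/var/lib/postgresql/data \
    postgres:9.6.4-alpine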
It looks like it has to be /mnt/sda1/var/lib/docker/volumes/psql/_data for Windows Docker Toolbox. This worked for me:
docker run -it --name psql -p 5432:5432 -v psql:/var/lib/postgresql/data postgres:9.5-alpine
"Mounts": [
{
"Type": "volume",
"Name": "psql",
"Source": "/mnt/sda1/var/lib/docker/volumes/psql/_data",
"Destination": "/var/lib/postgresql/data",
"Driver": "local",
"Mode": "z",
"RW": true,
"Propagation": ""
}
docker volume ls
DRIVER              VOLUME NAME
local               65f253d220ad390337daaacf39e4d17000c36616acfe1707e41e92ab26a6a23a
local               761f7eceaed5525b70d75208a1708437e0ddfa3de3e39a6a3c069b0011688a07
local               8a42268e965e6360b230d16477ae78035478f75dc7cb3e789f99b15a066d6812
local               a37e0cf69201665b14813218c6d0441797b50001b70ee51b77cdd7e5ef373d6a
local               psql
Please refer to this for more info: bad mount
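If you want to double-check where a named volume ends up on the Toolbox VM, docker volume inspect reports the same path (assuming the volume is called psql, as above):
docker volume inspect psql                           # full JSON, including "Mountpoint"
docker volume inspect -f '{{ .Mountpoint }}' psql    # just the path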

Case sensitive host volume mount in docker for windows

I am running a Linux Docker container on Windows 10. I need my host to have access to the data that my container generates, and I also need the data to persist if I update the container's image.
I created a folder on the host (on an NTFS-formatted drive) and, in the Docker settings, shared that drive with Docker. I then create the container with the host directory mounted (using the -v option on the docker run command).
The problem is that Docker creates a CIFS mount to my shared drive on the host, and it seems the CIFS protocol is not case sensitive. I create two files:
/data/Test
/data/test
But only one file is created. I set up the kernel to support case-sensitive files; for example, if I mount the same folder inside Cygwin bash, I can create those two files without any problem. I think the problem is with the CIFS implementation.
My current thoughts on solving this issue:
Use Cygwin to create an NFS server on the host, and mount the NFS volume from within the Linux container. I am not sure how I can automate this process, though.
Create another Linux container with a Samba server and create a volume on that container:
docker run -d -v /data --name dbstore a-samba-server
Then use that volume in my container:
docker run -d --volumes-from dbstore --name my-container my-container-image
Then I need to share /data in the samba server and create a map to that share on my host.
Both solutions seem quite cumbersome, and I would like to know if there is any way I can solve this directly with the CIFS share that Docker natively creates.

Can't mount HOST folder into Amazon Docker Container?

I'm using an EC2 instance to run Docker. From my local machine running OSX, I use docker-machine to create containers and volumes. However, when I try to mount a local folder into any container, it is not possible.
docker create -v /data --name data-only-container ubuntu /bin/true
docker run -it --volumes-from data-only-container -v $(pwd)/data:/backup ubuntu bash
With the first command I create a data-only container, and with the second I get into a container that should have the data-only container's volumes plus the one I'm trying to mount. However, when I access it, the folder /backup is empty.
What am I doing wrong?
EDIT:
I'm trying to mount a host folder in order to restore backed-up data from my PC to the container. In that case, what would be a different approach?
Shall I try to use Flocker?
A host volume mounted with -v /path/to/dir:/container/mnt mounts a directory from the Docker host inside the container. When you run this command on your OSX system, $(pwd)/data refers to a directory on your local machine that doesn't exist on the Docker host, the EC2 instance. If you log into your EC2 instance, you'll likely find that directory created there, but empty.
If you want to mount folders from your OSX system into a docker container, you'll need to run Docker on the OSX system itself.
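For example, with Docker running locally on the Mac (Docker for Mac, or a local docker-machine VM with the directory shared into it), the same kind of bind mount behaves as expected; the paths here are only illustrative:
docker run -it -v "$(pwd)/data:/backup" ubuntu bash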
Edit: To answer the added question of how to move data up to your container in the cloud: there are often ways to move your data to the cloud provider itself, outside of Docker, and then include it directly inside the container. For a Docker-only approach, you can do something like:
tar -cC /source . | \
docker run --rm -i -v app-data:/target busybox \
/bin/sh -c "tar -xC /target"
This will upload your data with tar over a pipe into a named volume on your docker host. You can then include the named "app-data" volume in any other containers. If you need to do this multiple times with larger data sets, creating an rsync container would be more efficient.
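Any later container can then reuse the populated volume by name; for example, a quick way to verify the upload (image and path are illustrative):
docker run --rm -v app-data:/backup busybox ls -la /backup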

CoreOS-Kubernetes Cloud Config for Vagrant Worker Node

Background
CoreOS-Kubernetes has a project for multi-node on Vagrant:
https://github.com/coreos/coreos-kubernetes
https://coreos.com/kubernetes/docs/latest/kubernetes-on-vagrant.html
They have a custom cloud-config for the etcd node, but none for the worker nodes. For those, the Vagrantfile references shell scripts, which contain some cloud-config but mostly Kubernetes YAML:
https://github.com/coreos/coreos-kubernetes/blob/master/multi-node/generic/worker-install.sh
Objective
I'm trying to mount an NFS directory onto the CoreOS worker nodes, for use in a Kubernetes pod. From what I've read in the Kubernetes docs and tutorials, I want to mount it on the node first as a persistent volume, like this on Docker:
http://www.emergingafrican.com/2015/02/enabling-docker-volumes-and-kubernetes.html
I saw some posts saying that mounting in the pod itself can be buggy, and I want to avoid that by mounting on the CoreOS worker node first:
Kubernetes NFS volume mount fail with exit status 32
If mounting right in the pod is the standard way, just let me know and I'll do that.
Question
Are there options for customizing the cloud config for the worker node? I'm about to start hacking on that shell script, but thought I should check first. I looked through the docs but couldn't find any.
This is the CoreOS cloud-config I'm trying to add to the Vagrantfile:
https://coreos.com/os/docs/latest/mounting-storage.html#mounting-nfs-exports
No NFS mount on CoreOS is needed. Kubernetes will do it for you right in the pod:
http://kubernetes.io/v1.1/examples/nfs/README.html
Check out the nfs-busybox replication controller:
http://kubernetes.io/v1.1/examples/nfs/nfs-busybox-rc.yaml
I ran this and got it to write files to the server, which helped me debug the application. Note that even though the NFS mounts do not show up when you ssh into the Kubernetes node and run docker run -it <image> /bin/bash, they are mounted in the Kubernetes pod. That's where most of my misunderstanding occurred; I guess you have to add the mount parameters to the command when doing it manually.
Additionally, my application, gogs, stored its config files in /data. To get it to work, I first mounted the NFS export to /mnt. Then, like in the Kubernetes nfs-busybox example, I created a command that copies all folders in /data to /mnt. In the replication controller YAML, under the container node, I put a command:
command:
  - sh
  - -c
  - 'sleep 300; cp -a /data /mnt'
This gave me enough time to run the initial config of my app. Then I just waited until the sleep time was up and the files were copied over.
I then changed my mount point to /data, and now the app starts right where it left off when the pod restarts. Coupled with an external MySQL server, it looks stateless so far.

Mounting local volumes to docker container

Edited for clarity
I run the following docker run command:
docker run -d -P -v /users/username/app:/app contname
This results in the following when I inspect the container:
"HostConfig": {
"Binds": [
"/users/username/app:/app"
],
"Volumes": {
"/app": "/users/username/app",
"/app": "/mnt/sda1/var/lib/docker/vfs/dir/214a16c3678f93cbadb7e7b7d56b5f26b66a34c6d9bb89ade23b16e386a12212"
},
But when I ssh into the container, I can see that /app is empty.
Is my assumption correct that the files from my host machine should be there?
Boot2Docker automatically mounts your home directory into the VirtualBox boot2docker VM, not into the container. You still need to add -v /users/username/app:/app to your docker run command.
When you add a VOLUME instruction to your Dockerfile, you are declaring a volume to be created by the container. That volume is stored under /var/lib/docker/volumes on the host VM, and such volumes can only be shared by passing --volumes-from to the docker run command. When you pass the -v switch, you create a volume backed by a specified location on the host filesystem.
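To illustrate the difference, assuming an image contname that declares VOLUME /app (names and paths are placeholders):
docker run -d --name app1 contname                   # anonymous volume created under /var/lib/docker/volumes on the VM
docker run -d --volumes-from app1 contname           # shares app1's /app volume
docker run -d -v /users/username/app:/app contname   # mounts a host directory over /app instead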
