Can't access redis docker volume - macos

I can't access the redis volume:
docker run --name redis --volume=data:/data/:rw -p 6379:6379 -d redis
b60c15a0a3e05d7cdb383415192927b4fc1563be5350924a01fe70f6a29ab113
ls: data/: No such file or directory
Why? Doesn't docker create a host volume by default?

You can use named volumes or bind mounts, but you can't use a dot (.) as a relative path.
docker run
bind mounts
You have to use absolute path:
docker run --name redis --mount type=bind,source="/<absolutepath>/data",target=/app -p 6379:6379 -d redis
named volumes
You cannot use '.' in a Docker volume name.
docker run --name redis --volume=data:/data/:rw -p 6379:6379 -d redis
After this command, a named volume called data is created; you can check it with:
docker volume ls | grep data
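As a side note (not part of the original answer): you can ask Docker where a named volume lives with docker volume inspect. A minimal sketch, assuming a running daemon and the volume name data from the command above:

```shell
# Sketch: print the host-side mountpoint of a named volume.
# Assumes the Docker daemon is running and the volume exists.
volume_mountpoint() {
  docker volume inspect --format '{{ .Mountpoint }}' "$1"
}

# usage (volume name "data" from the command above):
# volume_mountpoint data   # e.g. /var/lib/docker/volumes/data/_data
```

Note that on Docker for Mac this mountpoint is a path inside the Linux VM, not on the macOS filesystem, which is why you can't ls it from the host.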
docker-compose
With docker compose, you can define
services:
your-redis-service:
...
volumes:
- ./data:/data
In this case, docker-compose translates the relative path (relative to the directory containing the compose file) into an absolute path for the bind mount.


Get the name of running docker container inside shell script

I am currently developing an application, in which I want to automate a testing process to speed up my development time. I use a postgres db container, and I then want to check that the preparation of the database is correct.
My process is currently as follows:
docker run -p 5432:5432 --env-file=".db_env" -d postgres # Start the postgres db
# Prep the db, do some other stuff
# ...
docker exec -it CONTAINER_NAME psql -U postgres
Currently, I have to run docker ps to get the container name, and then paste it in place of CONTAINER_NAME. The container is the only one running, so I'm thinking I could easily retrieve the container ID or name automatically instead of looking it up manually with docker ps, but I don't know how. How do I do this using bash?
Thank you!
The container ID is returned by the docker run command:
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
You can choose the name of your container with docker run --name CONTAINER_NAME.
https://docs.docker.com/engine/reference/run/#name---name
You can get its ID using:
docker ps -aqf "name=postgres"
If you're using Bash, you can do something like:
docker exec -it $(docker ps -aqf "name=postgres") psql -U postgres
In the end, I built on @mrcl's answer to develop a complete solution. Thank you for that, @mrcl!
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
# Do some other stuff
# ...
docker exec -it $CONTAINER_ID psql -U postgres
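If you script this often, the lookup can also be wrapped in a small helper. A sketch, not part of the original answer; the function name is an assumption, and the name= filter must be specific enough to match only one container:

```shell
# Sketch: return the ID of the container whose name matches the given pattern.
# Note: "name=..." is a substring match, so make the pattern specific enough.
container_id_by_name() {
  docker ps -aqf "name=$1"
}

# usage:
# docker exec -it "$(container_id_by_name postgres)" psql -U postgres
```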

Problem with mounting a directory into a container from the local host

I'm a newbie to Docker. I used a Dockerfile to build an image, and when I try to run the following command:
docker run --name ai --rm -it -v /C/AI_project/:/AI_project project:latest bash
it creates the container, but the AI_project folder is empty. I have edited this line many times, but it never copies the folder.
How to add a folder from local host to the container?
If all that you want to do is copying a file from the host to the container, you can use docker cp:
docker cp local.file container:/path/local.file
If you want to mount a host file on the container when starting the container, you should do something like:
docker run -v local.file:/path/local.file --name name image
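Bind mounts require an absolute host path, so a small helper that normalizes a relative source can avoid path mistakes. A sketch, not part of the original answer; the function name abs_bind is an assumption, not a Docker feature:

```shell
# Sketch: build a SOURCE:TARGET bind-mount spec, making the host path absolute.
abs_bind() {
  case "$1" in
    /*) printf '%s:%s\n' "$1" "$2" ;;        # already absolute: use as-is
    *)  printf '%s:%s\n' "$(pwd)/$1" "$2" ;; # relative: prefix the working dir
  esac
}

# usage:
# docker run --rm -it -v "$(abs_bind AI_project /AI_project)" project:latest bash
```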

How to set hosts in docker for mac

Previously, I could use docker-machine ssh default to edit /etc/hosts in the Docker machine, but Docker for Mac doesn't include docker-machine, so I can't access its VM that way.
So, the problem is: how do I set hosts in Docker for Mac?
I want my secondary domain to point to a different IP.
I found a solution: use this command to attach to the VM's tty:
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Now, edit the /etc/hosts in the Docker VM.
To exit screen, use Ctrl + a + d.
Here's how I do it with a bash script so the changes persist between Docker for Mac restarts.
cd ~/Library/Containers/com.docker.docker/Data/database
git reset --hard
DFM_HOSTS_FILE="com.docker.driver.amd64-linux/etc/hosts"
if [ ! -f ${DFM_HOSTS_FILE} ]; then
echo "appending host to DFM /etc/hosts"
echo -e "xxx.xxx.xxx.xxx\tmy.special.host" > ${DFM_HOSTS_FILE}
git add ${DFM_HOSTS_FILE}
git commit -m "add host to /etc/hosts for dns lookup"
fi
You can automate it with this script; running it at startup or login time will take care of it:
#!/bin/sh
# host entry -> '10.4.1.4 dockerregistry.senz.local'
# 1. run debian image
# 2. check host entry exists in /etc/hosts file
# 3. if not exists add it to /etc/hosts file
docker run --name debian -it --privileged --pid=host debian nsenter \
-t 1 -m -u -n -i sh \
-c "if ! grep -q dockerregistry.senz.local /etc/hosts; then echo -e '10.4.1.4\tdockerregistry.senz.local' >> /etc/hosts; fi"
# sleep 2 seconds
# remove stopped debian container
sleep 2
docker rm -f debian
I have created a blog post with more information about this topic.
https://medium.com/@itseranga/set-hosts-in-docker-for-mac-2029276fd448
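The grep-then-append step in the script above can also be factored into a standalone helper and tested against an ordinary file. A sketch for illustration; the function name and the HOSTS_FILE override are assumptions:

```shell
# Sketch: append a host entry to a hosts file only if the hostname is not
# already present (idempotent, so it is safe to run at every login).
HOSTS_FILE="${HOSTS_FILE:-/etc/hosts}"

add_host_entry() {
  ip="$1"; host="$2"
  if ! grep -q "$host" "$HOSTS_FILE"; then
    printf '%s\t%s\n' "$ip" "$host" >> "$HOSTS_FILE"
  fi
}

# usage:
# add_host_entry 10.4.1.4 dockerregistry.senz.local
```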
You have to create a docker-compose.yml file, in the same directory as your Dockerfile.
For example, I use this docker-compose.yml file:
version: '2'
services:
app:
hostname: app
build: .
volumes:
- ./:/var/www/html
working_dir: /var/www/html
depends_on:
- db
- cache
ports:
- 80:80
cache:
image: memcached:1.4.27
ports:
- 11211:11211
rabbitmq:
image: rabbitmq:latest
ports:
- 5672:5672
db:
image: postgres:9.5.3
ports:
- 5432:5432
environment:
- TZ=America/Mazatlan
- POSTGRES_DB=restaurantcore
- POSTGRES_USER=rooms
- POSTGRES_PASSWORD=rooms
The ports are bound to the corresponding ports on your Docker host machine.

How to restore a mongo Docker container on the Mac

I removed my mongo container
docker rm myMongoDB
Did I lose all my data, or I can restore it? If so, how?
When I try to run another container from the image
docker run -p 27017:27017 -d mongo --name myMongo2
it won't run and its STATUS says Exited (2) 8 seconds ago.
The official mongo image on Docker Hub (https://hub.docker.com/_/mongo/) defines two volumes to store data in the Dockerfile. If you did not explicitly specify a -v / --volume option when running the container, Docker created anonymous (unnamed) volumes for those, and those volumes may still be around. It may be a bit difficult to find which volumes were last used by the container, because they don't have a name.
To list all volumes that are still present on the Docker host, use:
docker volume ls
Which should give you something like this:
DRIVER VOLUME NAME
local 9142c58ad5ac6d6e40ccd84096605f5393bf44ab7b5fe51edfa23cd1f8e13e4b
local 4ac8e34c11ac7955b9b79af10c113b870edd0869889d1005ee17e98e7c6c05f1
local da0b4a7a00c4b60c492599dabe1dbc501113ae4b2dd1811527384a5dc26cab13
local 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139
To see what's in a volume, you can attach it to a temporary container and check what's in there. For example:
docker run -it -v 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139:/volume-data ubuntu
That will start an interactive shell in a new ubuntu container, with the volume 81a40483ae00d72dcfa2117b3ae40f3fe79038544253e60b85a8d0efc8f3d139 mounted at /volume-data/ inside the container.
You can then go into that directory, and check if it's the volume you're looking for:
root@08c11a34ed44:/# cd /volume-data/
root@08c11a34ed44:/volume-data# ls -la
Once you have identified the volumes (according to the Dockerfile, the mongo image uses two), you can start a new mongo container and mount those volumes:
docker run -d --name mymongo \
-v 4ac8e34c11ac7955b9b79af10c113b870edd0869889d1005ee17e98e7c6c05f1:/data/db/ \
-v da0b4a7a00c4b60c492599dabe1dbc501113ae4b2dd1811527384a5dc26cab13:/data/configdb/ \
mongo
I really suggest you read the Where to Store Data section in the documentation for the mongo image on Docker Hub to prevent losing your data.
NOTE
I also noticed that your last command puts --name myMongo2 after the image name; options must come before the image name (mongo). Everything after the image name is passed as arguments to the container's main process, which is why the container exited immediately.

How does docker run differ from running a command from a shell within the container

I'm having a problem where I get permission denied when attempting to run logstash within the container and accessing configurations provided via a host volume. But if I explicitly run the command within a shell it works fine.
$ docker run -it --rm -v "$PWD/logstash/config":/etc/logstash/conf.d:Z logstash:latest logstash -f /etc/logstash/conf.d
The error reported is:
Permission denied - /etc/logstash/conf.d/logstash.conf
$ docker run -it --rm -v "$PWD/logstash/config":/etc/logstash/conf.d:Z logstash:latest sh -c 'logstash -f /etc/logstash/conf.d'
Settings: Default pipeline workers: 4
Logstash startup completed
$ docker run -it --rm -v "$PWD/logstash/config":/etc/logstash/conf.d:Z logstash:latest ls -lZ /etc/logstash/conf.d
total 4
-rw-------. 1 1000 1000 system_u:object_r:svirt_sandbox_file_t:s0:c78,c159 125 Mar 9 17:57 logstash.conf
This tells me that there's something different about the environment in the shell but I have no clue what would cause these permissions issues.
As a first clue, I see in the logstash Dockerfile that its ENTRYPOINT is docker-entrypoint.sh
# Run as user "logstash" if the command is "logstash"
if [ "$1" = 'logstash' ]; then
set -- gosu logstash "$@"
fi
That would explain the difference between logstash and sh -c 'logstash...': the first parameter is no longer logstash.
So you need to make sure $PWD/logstash/config is, once mounted, accessible to user 'logstash'.
The OP Mark Caudill adds in the comments:
adding :Z modifier to the -v parameter sets the correct SELinux labels on the files and directories
logstash is running as root
chcon -R system_u:object_r:svirt_sandbox_file_t:s0 ./ on each directory being mounted as a host volume
These points allow the logstash process to access the host volume.
I don't fully understand your question, but this should help...
RUN runs the specified command inside a container at DOCKER BUILD time. ENTRYPOINT runs the specified command inside a container at DOCKER RUN time.
When you mount a host volume over a path, the files from the image at that path are hidden by the mount; the container sees the host directory's contents instead. Files created in that path at docker run time land in the mounted volume, and are therefore accessible both inside the container and from the host.
You should either:
Consider using ENTRYPOINT instead of RUN
Use volumes-from if you only need to share files between containers
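To make the build-time vs. run-time distinction concrete, here is a minimal Dockerfile sketch (the base image and commands are arbitrary assumptions):

```dockerfile
FROM ubuntu:20.04

# RUN executes at build time: this layer is baked into the image.
RUN apt-get update && apt-get install -y --no-install-recommends curl

# ENTRYPOINT executes at run time: every `docker run` of this image starts curl.
ENTRYPOINT ["curl"]
```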
