Calling a Docker container by name - running commands (docker exec) against it specifically - bash

I have a docker-compose.yaml file that spins up a couple of containers: nginx, php, mysql.
I'm trying to automate a setup process that imports a database into the mysql container, as part of a make target.
The make-target looks like this:
startLocalEnv:
    docker-compose up -d # Build containers
    # A lot of other stuff happens here, that is omitted to keep it simple

importDb:
    # THIS IS THE COMMAND I'M TRYING TO MAKE
    docker exec -i CONTAINER_ID mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql
How can I run this command: docker exec -i CONTAINER_ID mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql
so that it doesn't take several steps of copying and pasting?
Currently
This is how it's done today:
Step 1: Do a docker ps and copy the Container ID. Let's say: 41e8203b54ea.
Step 2: Insert that into the command written above and run it. Example: docker exec -i 41e8203b54ea mysql -usomeuser -psomepassword local_db_name < ./dumps/existingDbDump.sql
It's not super painful, but it's something quite rudimentary, that I'm assuming (and hoping) can be made into one step fairly easily.
Solution attempt 1: Pipe the shit out of it!
I found this SO question: Get Docker container id from container name, where this command is shown to output the Container ID: docker container ls | grep mysql | awk '{print $1}'.
So I imagine that, by fiddling around with this, I can get a one-liner that runs this import.
But it seems excessive. And if another project also has a container called mysql (fairly possible!) that I have forgotten to stop, then this solution will target that one instead.

There is a docker-compose exec command that will automatically do this lookup for you.
importDb: ./dumps/existingDbDump.sql
    docker-compose exec mysql mysql -usomeuser -psomepassword local_db_name < $<
This would probably be my third-choice way to do the database load, though. If you have the mysql CLI tool on your host and your database container has published ports:, then you can run it directly, without doing anything Docker-specific.
importDb: ./dumps/existingDbDump.sql
    mysql -h127.0.0.1 -usomeuser -psomepassword local_db_name < $<
Or you can docker-compose run a temporary container to do the load:
importDb: ./dumps/existingDbDump.sql
    docker-compose run mysql \
        mysql -hmysql -usomeuser -psomepassword local_db_name < $<
If your application framework has a database migration system, you could similarly docker-compose run the migrations.
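One caveat worth flagging as an assumption rather than something stated above: docker-compose exec and docker-compose run allocate a pseudo-TTY by default, which can interfere with stdin redirection when invoked non-interactively from make. The -T flag disables that. A sketch of the exec variant with the flag, reusing the service name and credentials from above:

importDb: ./dumps/existingDbDump.sql
    docker-compose exec -T mysql \
        mysql -usomeuser -psomepassword local_db_name < $<

And if you prefer to keep using plain docker exec, docker-compose ps -q mysql prints only the container ID belonging to the current compose project, which sidesteps the concern about a same-named container from another project.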

Related

Get the name of running docker container inside shell script

I am currently developing an application, in which I want to automate a testing process to speed up my development time. I use a postgres db container, and I then want to check that the preparation of the database is correct.
My process is currently as follows:
docker run -p 5432:5432 --env-file=".db_env" -d postgres # Start the postgres db
# Prep the db, do some other stuff
# ...
docker exec -it CONTAINER_NAME psql -U postgres
Currently, I have to run docker ps to get the container name and then paste it in to replace CONTAINER_NAME. The container is the only one running, so I am thinking I could easily find the container ID or name automatically instead of retrieving it manually with docker ps, but I don't know how. How do I do this using bash?
Thank you!
The container ID is returned by the docker run command:
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
You can choose the name of your container with docker run --name CONTAINER_NAME.
https://docs.docker.com/engine/reference/run/#name---name
You can get its ID using:
docker ps -aqf "name=postgres"
If you're using Bash, you can do something like:
docker exec -it $(docker ps -aqf "name=postgres") psql -U postgres
In the end, I made use of @mrcl's answer, from which I developed a complete solution. Thank you for that, @mrcl!
CONTAINER_ID=$(docker run -p 5432:5432 --env-file=".db_env" -d postgres)
# Do some other stuff
# ...
docker exec -it $CONTAINER_ID psql -U postgres
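Equivalently, if you prefer naming the container up front rather than capturing the ID (the name pgdb below is just an example, not from the original setup):

CONTAINER_NAME=pgdb
docker run --name $CONTAINER_NAME -p 5432:5432 --env-file=".db_env" -d postgres
# Do some other stuff
# ...
docker exec -it $CONTAINER_NAME psql -U postgres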

What is the difference between running docker exec in terminal and in bash script

Let's assume I run the following command inside a script:
#!/usr/bin/env bash
docker run --name mydb --rm -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli -p "9999:5432" -v $PWD/db:/opt -d postgres
When I then run the following command to create a database it works fine.
docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 -c "CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';"
However, when I add this line to the script above, so that the script not only starts the postgres server but also creates the database, it fails.
I do not really understand why I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I know I can instruct the docker postgres image to create a database on start, but this is actually not what I want to achieve. I'm just using this as an example to understand the problem.
When you're running it in a script, it's most likely just happening too quickly. The docker run … command returns immediately, and then docker exec … is attempting to use PostgreSQL while the database server is still starting up. You need to wait for it to be ready before creating the extra database.
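A minimal sketch of such a wait, assuming the container name mydb from the script above and using pg_isready, which ships inside the official postgres image:

#!/usr/bin/env bash
docker run --name mydb --rm -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli -p "9999:5432" -v $PWD/db:/opt -d postgres

# Poll until the server inside the container accepts connections
until docker exec mydb pg_isready -U kgalli >/dev/null 2>&1; do
  sleep 1
done

docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 -c "CREATE DATABASE kgalli_test WITH OWNER kgalli;"

(The CREATE DATABASE statement is abbreviated here; the full one from the question works the same way.)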
That said, the postgres image has functionality in its entrypoint script to run custom initialization scripts. You can put your CREATE DATABASE … statement into a .sql file or config and mount it into /docker-entrypoint-initdb.d in the container. The postgres container will automatically run it when the database server is ready.
The docs for this seem to have disappeared, but you can see the implementation in docker-entrypoint.sh.
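A sketch of that approach, where the file name create_db.sql is hypothetical and would contain the CREATE DATABASE statement from above:

docker run --name mydb --rm \
  -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli \
  -p "9999:5432" \
  -v "$PWD/create_db.sql:/docker-entrypoint-initdb.d/create_db.sql:ro" \
  -d postgres

Note that the entrypoint only runs these scripts when the data directory is empty, i.e. on the first start of a fresh volume.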
Using docker run, you are starting a new container; using docker exec, you are executing a command in an already running container.
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
The docker exec command runs a new command in a running container.
If the container is paused, then the docker exec command will fail with an error:
$ docker pause test
test
$ docker ps
CONTAINER ID   IMAGE           COMMAND   CREATED          STATUS                   PORTS   NAMES
1ae3b36715d2   ubuntu:latest   "bash"    17 seconds ago   Up 16 seconds (Paused)           test
$ docker exec test ls
FATA[0000] Error response from daemon: Container test is paused, unpause the container before exec
$ echo $?
1
(ref.1)
(ref.2)

docker-compose script with prompt… is there a better solution?

I have a bash script to start various docker-compose.yml files.
One of these compose files is docker-compose.password.yml, which creates a password file for mysql. For that I need to prompt the user to input a user name and then run a service in Docker (one that is not normally running).
Basically the only way I can think of to accomplish this is to run the container in an idle state, exec the command, and shut the container down. Is there a better way?
(It would be easier to do it directly with docker run, but then I would have to check whether the image is already available, and I would have image definitions in the various docker-compose.yml files plus now also in the bash script.)
My solution:
docker-compose.password.yml
version: '2'
services:
  createpw:
    command: top -b -d 3600
then
docker-compose -f docker-compose.password.yml up -d
prompt the user for the credentials from my bash script outside of Docker
read -p "Input user name.echo $'\n> '" username
and send it to the running container
docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
and then run docker-compose down
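Put together, the flow described above looks roughly like this (a sketch; it assumes the running container is actually reachable as createpw, e.g. via container_name:, since Compose otherwise generates a name like project_createpw_1):

docker-compose -f docker-compose.password.yml up -d
read -p "Input user name> " username
docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
docker-compose -f docker-compose.password.yml down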
Tried and not working:
I tried to have just a small sub-script prompting for the input, right under command:
command:
  /bin/bash /somewhere/createpassword.sh
This did produce the file, but the user was an empty string, as the prompt didn’t stop the docker execution. It didn’t matter if I used compose -d or not.
Any suggestions are welcome. Thanks.

Bash / Docker exec: file redirection from inside a container

I can't figure out how to read the content of a file from inside a Docker container. I want to execute the content of a SQL file against my PGSQL container. I tried:
docker exec -it app_pgsql psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
My application is mounted in /usr/src/app. But I got an error:
bash: /usr/src/app/migrations/*.sql: No such file or directory
It seems that Bash interprets this path as a host path, not a guest one. Indeed, executing the command in two steps works perfectly:
docker exec -it app_pgsql
psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql
I think that's more a Bash issue than a Docker one, but I'm still stuck! :)
Try using a shell to execute that command:
sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
The full command would be:
docker exec -it app_pgsql sh -c 'psql --host=127.0.0.1 --username=foo foo < /usr/src/app/migrations/*.sql'
try with sh -c "your long command"
This also works when piping a backup into the mysql command:
cat backup.sql | docker exec -i CONTAINER /usr/bin/mysql -u root --password=root DATABASE
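For completeness, when the SQL file lives on the host rather than inside the container, plain stdin redirection with -i (and without -t) works as well, because the redirection is then resolved by the host shell. A sketch reusing the names from the question, with a hypothetical host-side path:

docker exec -i app_pgsql psql --host=127.0.0.1 --username=foo foo < ./migrations/dump.sql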
You can use the database client to connect to your container and redirect the database file, then you can perform the restore.
Here is an example with MySQL: a container running MySQL, using the host network stack. Since the container is using the host network stack (assuming you don't have any restrictions on your MySQL or whatever database), you can connect via localhost and perform the commands transparently:
mysql -h 127.0.0.1 -u user -pyour_passwd database_name < db_backup.sql
You can do the same with PostgreSQL (Restore a postgres backup file using the command line?):
pg_restore --host 127.0.0.1 --port 5432 --username "postgres" --dbname "mydatabase" --no-password --clean "/home/dinesh/db/mydb.backup"
Seems like that "docker exec" does not support input redirection.. I will verify this and maybe open an issue for Docker Community at GitHub, if it is applicable.

How to use multiple terminals in the docker container?

I know it is weird to use multiple terminals in a docker container.
My purpose is to test some commands and eventually build a Dockerfile from those commands.
So I need to use multiple terminals, say, two: one to run some commands, the other to test those commands.
If I were using a real machine, I could ssh into it to get multiple terminals, but in Docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and use screen inside that bash session?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server. Because the server and client programs are compiled together, the default link method in Docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files on the server available from the client. If you really want to have two terminals into the same container, there is nothing stopping you from using ssh. I tested this sshd Dockerfile:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container, you can get its IP using:
docker inspect <id or name of container>
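If you only need the IP address, a Go-template format string narrows the output down (substitute your own container name):

docker inspect --format '{{ .NetworkSettings.IPAddress }}' <id or name of container>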
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another.
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program (by default $SHELL) in the given namespaces of that PID.
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
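Since -p 22 publishes the container's port 22 on a random host port, docker port tells you which one to connect to (49154 below is just an example value; the credentials depend on how sshd is set up in your image):

docker port <container> 22    # prints something like 0.0.0.0:49154
ssh root@localhost -p 49154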
When you are done with your explorations, you can proceed to create your Dockerfile as usual.
