How do I inject a local file as an argument to a command to run inside a docker container? - bash

Scenario:
I have a postgres container named db running on a machine. I am in a directory on the host and have an SQL script named patch.sql. I wish to apply this script to the database inside the container.
Were I to be inside the container and have the script also inside the container, I would run
psql -U user -d db -f patch.sql
Since I am outside the container, I could naively try
docker exec -i db psql -U user -d db -f patch.sql
but of course, this would look for a file named patch.sql inside the container, while it is actually on the host machine.
My current workaround is
cat patch.sql | docker exec -i db /bin/sh -c "cat > patch.sql"
docker exec -i db psql -U user -d db -f patch.sql
docker exec -i db rm patch.sql
Is there a way to elegantly reduce this to a one-liner?
I am aware of how to place the file inside the container; that is exactly what my workaround does. I am thinking of some trick with I/O redirection that feeds the file into the command.
I do not want to mount volumes, and I cannot anyway, since the container is already running. The idea is to avoid copying the file into the container.

You could try piping the content of patch.sql directly to psql, like
cat patch.sql | docker exec -i db psql -U user -d db -f -
or just
cat patch.sql | docker exec -i db psql -U user -d db
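Equivalently, plain input redirection avoids the extra cat process; a minimal one-liner under the same assumptions (container db, user user, database db):
docker exec -i db psql -U user -d db < patch.sql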

Related

Automating password change inside a Docker container

I need to use a bash script:
Launch the container
Generate a password
Enter the container
Run the 'cd /' command
Change the password using htpasswd to the generated one
I tried it like this:
docker restart c1
a = date +%s | sha256sum | base64 | head -c 32 ; echo
docker exec -u 0 -it c1 bash 'echo cd /'
htpasswd user.passwd webdav a
And so:
docker restart c1
docker exec -u 0 -it c1 bash
cd /
a = date +%s | sha256sum | base64 | head -c 32 ; echo
htpasswd user.passwd webdav a
With the first option, I get:
bash: echo cd /: No such file or directory
With the second one, it enters the container and does nothing
I will be grateful for any help.
I have tried many variations of the script, none of which helped.
You do not need Docker or debugging tools like docker exec just to generate an htpasswd file.
htpasswd is part of the Apache distribution, and you should be able to install it on your host system using your OS package manager. Since it just manipulates a credential file it doesn't need the actual server.
# On the host system, without using Docker at all
sudo apt-get update && sudo apt-get install apache2-utils
# Make sure to wrap the password-generating command in `$()`
a=$(date +%s | sha256sum | base64 | head -c 32)
# Make sure to use a variable reference `$a`
htpasswd user.passwd webdav "$a"
This gives you a user.passwd file on your local system. Now when you launch your container, you can bind-mount the file into the container:
docker run -d -p 80:80 ... \
-v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
httpd
The container will be immediately ready to use. If you delete and recreate this container, you do not need to repeat the manual setup step. If you need to launch multiple copies of the container, they can all have the same credentials file without doing manual steps.
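If you want a quick sanity check of the generated file before mounting it, htpasswd can verify a password in batch mode; a small sketch, assuming $a still holds the generated password:
# Verify that the stored hash matches the generated password
htpasswd -vb user.passwd webdav "$a"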

Docker: "pg_restore input file does not appear to be a valid archive" error

I back up a PostgreSQL database using the following command, and it creates an SQL file:
cmd /c 'docker exec -t <container-name> pg_dump <db_name> -U postgres -c -v > C:\\backup\\<db_name>.sql'
However, I cannot restore the sql backup file using the following command:
first I drop and create an empty db:
docker exec <container-name> bash -c "dropdb -U postgres <db_name>"
docker exec <container-name> bash -c "createdb -U postgres <db_name>"
then restore:
cmd /c "docker exec -i <container-name> pg_restore -C -U postgres -d
<db_name> -v < C:\\<db_name>.sql"
gives "pg_restore: error: input file does not appear to be a valid archive" error. So,
1. how can I restore the database with sql file?
2. how can I backup PostgreSql db in Docker on Windows?
A plain-text pg_dump is restored with psql, not with pg_restore.
I don't understand why you want to run that inside the container. Install a PostgreSQL client on the host operating system and simplify the procedure. Besides, a backup should be on a different machine than the database.
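For instance, a plain-text dump can be streamed into psql through docker exec; a sketch reusing the question's placeholders:
cmd /c "docker exec -i <container-name> psql -U postgres -d <db_name> < C:\\<db_name>.sql"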

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
The above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mounted folder, without exiting the container.
(I don't want to go with docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from Rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
Docker containers shut down automatically when their main process exits. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could instead add tail -f /dev/null as the last command in your bash script, so that the script never halts unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the main command of the container and replaces the CMD specified in the Dockerfile (that's why adding CMD tail -f /dev/null there doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it would stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
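If the container still exits, its exit code and logs usually reveal why; these are standard docker commands, shown here as a debugging sketch with the container name from the question:
# A non-zero exit code usually means the script itself failed
docker inspect -f '{{.State.ExitCode}}' container
# Shows whatever the script printed before the container stopped
docker logs container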

Unable to run queries from a file using psql command line with docker exec

I have a bash file that should bring the Postgres docker container online and then run a .sql file to create the databases, but it's throwing the error:
psql: error: provision-db.sql: No such file or directory
I have checked the path, and the file exists at the same level as this bash script. The following is the content of my bash file.
#!/usr/bin/env bash
docker-compose up -d db
# Ensure the Postgres server is online and usable
until docker exec -i boohoo.postgres pg_isready --host="${POSTGRES_HOST}" --username="${POSTGRES_USER}"
do
  echo "."
  sleep 1
done
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
And this is the provision-db.sql file.
DROP DATABASE "boo-hoo";
CREATE DATABASE "boo-hoo";
GRANT ALL PRIVILEGES ON DATABASE "boo-hoo" TO postgres;
This is the relevant part of docker-compose.yml:
version: '3.3'
services:
  db:
    container_name: boohoo.postgres
    hostname: postgres.boohoo
    image: postgres
    ports:
      - "15432:5432"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
The short version
This works
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
The long version
Multiple things are going on here.
1) Why does the following command not find provision-db.sql?
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
Because provision-db.sql is on your host and not in your container. Therefore, when you execute the psql command inside the container, it cannot find the file.
2) Why didn't my first solution work?
cat provision-db.sql | docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f -
should do the trick, assuming provision-db.sql exists on the host.
That is due to the fact that the variables ${POSTGRES_HOST} and ${POSTGRES_USER} get evaluated on your host machine, and I guess they are not set there. In addition, I forgot to specify the -w flag to avoid the password prompt.
3) Why does that work?
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
Well, let's go through it step by step.
First, we print the content of provision-db.sql, which resides on the host machine, to stdout, and pipe it to the next command via |.
docker exec executes a command in the specified container (boohoo.postgres). By specifying the -i flag we allow stdin from your host to go to stdin in the container <- that's important.
In the container, we execute bash -c, which is just a wrapper to avoid evaluating the shell variables on the host. We want the variables from the container, and by putting the command into single quotes we achieve that.
docker exec boohoo.postgres bash -c "echo $POSTGRES_USER"
evaluates the host env variable named POSTGRES_USER.
docker exec boohoo.postgres bash -c 'echo $POSTGRES_USER'
evaluates the container env variable named POSTGRES_USER.
Next we just have to get our postgres command in order.
psql -U ${POSTGRES_USER} -w -a -q -f -
-U specifies the user
-w never prompts for a password
-q runs quietly
-f - processes whatever comes from stdin
-f is an option for psql and not for docker exec, and psql is running inside the container, so it can only access the file if it is inside the container as well.
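As a side note, psql also reads commands from stdin when no -f is given, so the -f - can be dropped entirely; a minimal variant under the same assumptions:
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q'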

Can't find mongodump on Windows/Docker

I am trying to dump my MongoDB database, which is currently in a Docker container on Windows. I run the following command:
docker run --rm --link docker-mongodb_1:mongo --net docker_default -v /backup:/backup mongo bash -c "mongodump --out /backup/ --host mongo:27017"
The output is something like this (with no errors):
"writing db.entity to "
"done dumping db.entity"
However, I cannot find the actual dump. I have checked C:/backup, my local directory, and tried renaming the output and volumes, but with no luck. Does anyone know where the dump is stored?
I have been trying to do the same. I have written a shell script that performs this backup process as you require. You first need to run the container with a name (whatever you wish the container name to be):
BACKUP_DIR="/backup"
DB_CONTAINER_ID=$(docker ps -aqf "name=<**name of your container**>")
NETWORK_ID=$(docker inspect -f "{{ .NetworkSettings.Networks.root_default.NetworkID }}" $DB_CONTAINER_ID)
docker run -v $BACKUP_DIR:/backup --network $NETWORK_ID mongo:3.4 su -c "cd /backup && mongodump -h db -u <username> -p <password> --authenticationDatabase <db_name> --db <db_name>"
tar -zcvf $BACKUP_DIR/db.tgz $BACKUP_DIR/dump
rm -rf $BACKUP_DIR/dump
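To restore from the resulting archive later, a hypothetical sketch using mongorestore from the same image (the placeholders and variables are the same as above):
# Unpack the archive; tar stored the path without the leading /, so extract relative to /
tar -zxvf $BACKUP_DIR/db.tgz -C /
# Replay the dump into the database over the same Docker network
docker run -v $BACKUP_DIR:/backup --network $NETWORK_ID mongo:3.4 mongorestore -h db -u <username> -p <password> --authenticationDatabase <db_name> --db <db_name> /backup/dump/<db_name>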
