Automating password change inside a Docker container - bash

I need a bash script that will:
Launch the container
Generate a password
Enter the container
Run the 'cd /' command
Change the password using htpasswd to the generated one
I tried it like this:
docker restart c1
a = date +%s | sha256sum | base64 | head -c 32 ; echo
docker exec -u 0 -it c1 bash 'echo cd /'
htpasswd user.passwd webdav a
And also like this:
docker restart c1
docker exec -u 0 -it c1 bash
cd /
a = date +%s | sha256sum | base64 | head -c 32 ; echo
htpasswd user.passwd webdav a
With the first option, I get:
bash: echo cd /: No such file or directory
With the second one, it enters the container and does nothing.
I tried many variations of the script, but none of them helped. I would be grateful for any help.

You do not need Docker or debugging tools like docker exec just to generate an htpasswd file.
htpasswd is part of the Apache distribution, and you should be able to install it on your host system using your OS package manager. Since it just manipulates a credential file it doesn't need the actual server.
# On the host system, without using Docker at all
sudo apt-get update && sudo apt-get install apache2-utils
# Make sure to wrap the password-generating command in `$()`
a=$(date +%s | sha256sum | base64 | head -c 32)
# Make sure to use a variable reference `$a`; the -b flag takes the password
# from the command line, and -c creates the file if it does not exist yet
htpasswd -cb user.passwd webdav "$a"
This gives you a user.passwd file on your local system. Now when you launch your container, you can bind-mount the file into the container:
docker run -d -p 80:80 ... \
-v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
httpd
The container will be immediately ready to use. If you delete and recreate this container, you do not need to repeat the manual setup step. If you need to launch multiple copies of the container, they can all have the same credentials file without doing manual steps.
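For example, a minimal sketch of two containers reusing the same generated file (the container names and published ports here are just placeholders):
docker run -d --name webdav1 -p 8080:80 \
  -v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd:ro" \
  httpd
docker run -d --name webdav2 -p 8081:80 \
  -v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd:ro" \
  httpd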

Related

Unable to run queries from a file using psql command line with docker exec

I have a bash file that should bring the postgres docker container online and then run a .sql file to create the databases, but it's throwing the error:
psql: error: provision-db.sql: No such file or directory
I have checked the path and the file exists at the same level as this bash script. The following is the content of my bash file.
#!/usr/bin/env bash
docker-compose up -d db
# Ensure the Postgres server is online and usable
until docker exec -i boohoo.postgres pg_isready --host="${POSTGRES_HOST}" --username="${POSTGRES_USER}"
do
  echo "."
  sleep 1
done
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
And this is the provision-db.sql file.
DROP DATABASE "boo-hoo";
CREATE DATABASE "boo-hoo";
GRANT ALL PRIVILEGES ON DATABASE "boo-hoo" TO postgres;
This is the part of docker-compose.yml
version: '3.3'
services:
  db:
    container_name: boohoo.postgres
    hostname: postgres.boohoo
    image: postgres
    ports:
      - "15432:5432"
    environment:
      POSTGRES_USER: "postgres"
      POSTGRES_PASSWORD: "postgres"
The short version
This works
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
The long version
There are multiple things going on here.
1) Why does the following command not find provision-db.sql?
docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f provision-db.sql
Because provision-db.sql is on your host and not in your container. Therefore, when you execute the psql command inside the container, it cannot find the file.
2) Why didn't my first solution work?
cat provision-db.sql | docker exec -i boohoo.postgres psql -h "${POSTGRES_HOST}" -U "${POSTGRES_USER}" -a -q -f -
should have done the trick, assuming provision-db.sql is in your current directory on the host.
That is due to the fact that the variables ${POSTGRES_USER} and ${POSTGRES_PASSWORD} get evaluated on your host machine, and I guess they are not set there. In addition, I forgot to specify the -w flag to avoid the password prompt.
3) Why does that work?
cat provision-db.sql | docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f -'
Well, let's go through it step by step.
First, we print the content of provision-db.sql, which resides on the host machine to stdout and pipe it to the next command via |.
docker exec executes a command in the specified container (boohoo.postgres). By specifying the -i flag we allow stdin from your host to go to stdin in the container; that's the important part.
In the container, we execute bash -c, which is just a wrapper to avoid evaluating the shell variables on the host. We want the variables from the container, and by putting the command into single quotes we get exactly that.
docker exec boohoo.postgres bash -c "echo $POSTGRES_USER"
evaluates the host env variable named POSTGRES_USER, whereas
docker exec boohoo.postgres bash -c 'echo $POSTGRES_USER'
evaluates the container env variable named POSTGRES_USER.
Next we just have to get our postgres command in order.
psql -U ${POSTGRES_USER} -w -a -q -f -
-U specifies the user
-w never prompts for a password
-a echoes all input from the script
-q runs quietly
-f - reads the commands from stdin
Note that -f is an option for psql and not for docker exec; psql is running inside the container, so it can only access a file if that file is inside the container as well.
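As an aside, if you would rather have psql read a real file, a rough alternative (the /tmp path inside the container is just an assumption) is to copy the script in first with docker cp and clean it up afterwards:
docker cp provision-db.sql boohoo.postgres:/tmp/provision-db.sql
docker exec -i boohoo.postgres bash -c 'psql -U ${POSTGRES_USER} -w -a -q -f /tmp/provision-db.sql'
docker exec -i boohoo.postgres rm /tmp/provision-db.sql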

How do I inject a local file as an argument to a command to run inside a docker container?

Scenario:
I have a postgres container named db running on a machine. I am in a directory on the host and have an SQL script named patch.sql. I wish to apply this script to the database inside the container.
Were I to be inside the container and have the script also inside the container, I would run
psql -U user -d db -f patch.sql
Since I am outside the container, I could naively try
docker exec -i db psql -U user -d db -f patch.sql
but of course, this would look for a file named patch.sql inside the container, while it is actually on the host machine.
My current workaround is
cat patch.sql | docker exec -i db /bin/sh -c "cat $@ > patch.sql"
docker exec -i db psql -U user -d db -f patch.sql
docker exec -i db rm patch.sql
Is there a way to elegantly reduce this to a one-liner?
I am aware of how to place the file inside the container; that is exactly what my workaround does. I am thinking of some trick with I/O redirection to feed the file into the command.
I do not want to mount volumes, and I cannot do so anyway, since the container is already running. The idea is to avoid moving the file into the container.
Maybe you could try directly piping the patch.sql file content to psql, like
cat patch.sql | docker exec -i db psql -U user -d db -f -
or just
cat patch.sql | docker exec -i db psql -U user -d db
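Since docker exec -i connects your stdin to the container, plain shell input redirection should also work as a one-liner, without the cat:
docker exec -i db psql -U user -d db < patch.sql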

How to run multiple entrypoint scripts one after another inside docker container?

I am trying to match the host UID with container UID as below.
Dockerfile
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
USER deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
whoami # it outputs `deploy`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
  gosu root usermod -u ${HOST_CURRENT_USER_ID} deploy
  gosu root groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
whoami # It outputs as unknown user id 1000.
Please note the output of whoami above. Even If I changed the UID of deploy to host uid, the entrypoint script process doesn't get changed as the entrypoint shell has been called by UID 1000.
So I came up with a solution: make two entrypoint scripts, one to change the UID and another for the container's bootstrap process, which will run in a separate shell after I change the UID of deploy. So how can I make two entrypoints run one after another? E.g. something like
ENTRYPOINT ["/fix-uid.sh && /entrypoint.sh"]
It looks like you're designing a solution very similar to one that I've created. As ErikMD mentions, do not use gosu to switch from a user to root; you want to go the other way, from root to a user. Otherwise, you will have an open security hole inside your container that lets any user become root, defeating the purpose of running the container as a different user id.
For the solution that I put together, it works whether the container is run in production as just a user with no volume mounts, or in development with volume mounts, by initially starting the container as root. You can keep an identical Dockerfile and change the entrypoint to something along the lines of:
#!/bin/sh
if [ "$(id -u)" = "0" ]; then
fix-perms -r -u deploy -g deploy /var/www/${PROJECT_NAME}
exec gosu deploy "$#"
else
exec "$#"
fi
The fix-perms script above is from my base image, and includes the following bit of code:
# update the uid
if [ -n "$opt_u" ]; then
  OLD_UID=`getent passwd "${opt_u}" | cut -f3 -d:`
  NEW_UID=`ls -nd "$1" | awk '{print $3}'`
  if [ "$OLD_UID" != "$NEW_UID" ]; then
    echo "Changing UID of $opt_u from $OLD_UID to $NEW_UID"
    usermod -u "$NEW_UID" -o "$opt_u"
    if [ -n "$opt_r" ]; then
      find / -xdev -user "$OLD_UID" -exec chown -h "$opt_u" {} \;
    fi
  fi
fi
(Note, I really like your use of stat -c and will likely be updating my fix-perms script to leverage that over the ls command I have in there now.)
The important part to this is running the container. When you need the fix-perms code to run (which for me is only in development), I start the container as root. This can be a docker run -u root:root ... or user: "root:root" in a compose file. That launches the container as root initially, which triggers the first half of the if/else in the entrypoint: it runs fix-perms and then runs gosu deploy to drop from root to deploy before calling "$@", which is your command (CMD). The end result is that pid 1 in the container is now running your command as the deploy user.
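In development that might look something like the following (the image name, host path, and PROJECT_NAME value are assumptions, not part of the original setup):
docker run -d --name myapp-dev \
  -u root:root \
  -e PROJECT_NAME=myproject \
  -v "$PWD:/var/www/myproject" \
  my_image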
As an aside, if you really want an easier way to run multiple entrypoint fragments in a way that's easy to extend with child images, I use an entrypoint.d folder that is processed by an entrypoint script in my base image. The code to implement that logic is as simple as:
for ep in /etc/entrypoint.d/*.sh; do
  if [ -x "${ep}" ]; then
    echo "Running: ${ep}"
    "${ep}"
  fi
done
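A child image then only needs to drop an executable script into /etc/entrypoint.d/ (for example via a COPY instruction); a hypothetical fragment could be as small as:
#!/bin/sh
# /etc/entrypoint.d/10-app-setup.sh (hypothetical example; remember to chmod +x it)
echo "running one-off app setup before the main command starts"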
All of this can be seen, along with an example using nginx, at: https://github.com/sudo-bmitch/docker-base
The behavior you observe seems fairly normal: in your entrypoint script, you changed the UID associated with the username deploy, but the two whoami commands still run as the same user (identified by its UID in the first place, not by the username).
For more information about UIDs and GIDs in a Docker context, see e.g. that reference.
Note also that using gosu to re-become root is not a standard practice (see in particular that warning in the upstream doc).
For your use case, I'd suggest removing the USER deploy instruction and switching user at the very end, by adapting your entrypoint script as follows:
Dockerfile
(…)
RUN addgroup -g 1000 deploy \
&& adduser -D -u 1000 -G deploy -s /bin/sh deploy
COPY entrypoint.sh /
ENTRYPOINT ["/entrypoint.sh"]
CMD ["php-fpm7","-F"]
entrypoint.sh
#!/bin/sh
whoami # it outputs `root`
# Change UID of 'deploy' as per host user UID
HOST_CURRENT_USER_ID=$(stat -c "%u" /var/www/${PROJECT_NAME})
if [ ${HOST_CURRENT_USER_ID} -ne 0 ]; then
  usermod -u ${HOST_CURRENT_USER_ID} deploy
  groupmod -g ${HOST_CURRENT_USER_ID} deploy
fi
# don't forget the "exec" builtin
exec gosu ${HOST_CURRENT_USER_ID}:${HOST_CURRENT_USER_ID} "$@"
This can be tested using id, for example:
$ docker build -t test-gosu .
$ docker run --rm -it test-gosu /bin/sh
$ id
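To exercise the UID-matching branch you would typically also bind-mount the project directory that the entrypoint stats (the PROJECT_NAME value and host path here are assumptions):
$ docker run --rm -it -e PROJECT_NAME=myproject \
    -v "$PWD:/var/www/myproject" test-gosu id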

Clear logs in native Docker on Mac

I want to get rid of huge container log files in my docker environment.
I have trouble finding them when running native Docker on a Mac. I am not using the docker-machine (VirtualBox) setup. My docker version is 1.13.1.
When I do
docker inspect <container-name>
I see there is
"LogPath": "/var/lib/docker/containers/<container-id>/<container-id>-json.log
But there is not even directory /var/lib/docker on my mac (host).
I have also looked in
~/Library/Containers/com.docker.docker/
but didn't find any container specific loggings there.
I could use tail, but that is not always convenient for me.
So the question is: how can I clear the log files of my containers in my native Docker Mac environment?
The Docker daemon runs in a separate VM, so in order to clear the logs you should do the following steps:
First, you can find the log path inside the VM, with:
docker inspect --format='{{.LogPath}}' NAME|ID
You can connect to the VM with screen
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Here you can simply use output redirection to clear the log
> /var/lib/docker/containers/CONTAINER_ID/CONTAINER_ID-json.log
And finally you can detach from the screen session by pressing Control-a and then d.
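As a small convenience, you can print the exact redirection command on the Mac side first and then paste it into the screen session (the container name is just a placeholder):
echo "> $(docker inspect --format='{{.LogPath}}' <container_name>)"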
I added the following to my .bash_profile.
It gets the log path for the docker container, opens a screen session to the Docker VM, and deletes the log file.
clearDockerLog(){
  dockerLogFile=$(docker inspect $1 | grep -G '\"LogPath\": \"*\"' | sed -e 's/.*\"LogPath\": \"//g' | sed -e 's/\",//g')
  rmCommand="rm $dockerLogFile"
  screen -d -m -S dockerlogdelete ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
  screen -S dockerlogdelete -p 0 -X stuff $"$rmCommand"
  screen -S dockerlogdelete -p 0 -X stuff $'\n'
  screen -S dockerlogdelete -X quit
}
Use it as follows:
clearDockerLog <container_name>
This will remove all your docker logs in macOS.
echo "rm /var/lib/docker/containers/*/*.log" | nc -U -w 0 ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
This is the only solution that worked for macOS 10.14
docker run -it --rm --privileged --pid=host NAME nsenter -t 1 -m -u -n -i -- sh -c 'truncate -s0 /var/lib/docker/containers/*/*-json.log'
Replace NAME with any image that has nsenter available; the command enters the VM's namespaces and truncates every container's JSON log file.
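If you only want to empty one container's log instead of all of them, a variation of the same trick might look like this (debian is just an example image that ships both nsenter and truncate):
LOG=$(docker inspect --format='{{.LogPath}}' <container_name>)
docker run --rm --privileged --pid=host debian \
  nsenter -t 1 -m -u -n -i -- truncate -s0 "$LOG"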
Hope this helps
This worked for me, at least from the commandline: screen $(cat ~/Library/Containers/com.docker.docker/Data/vms/0/tty)
This might work better with the script if the above doesn't: screen /dev/ttys000
gist with more things to try

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command which I have seen in a number of places (i.e. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker for Mac allows users to access the host's SSH agent inside containers.
Here's an example command that lets you do it:
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/ssh-agent \
-e SSH_AUTH_SOCK="/ssh-agent" \
my_image
Note that you have to mount that specific path (/run/host-services/ssh-auth.sock) instead of the path contained in the $SSH_AUTH_SOCK environment variable, as you would do on Linux hosts.
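A quick way to check that the forwarding works is to list the keys from inside the container (this assumes ssh-add is available in the image):
docker run --rm -it \
  -v /run/host-services/ssh-auth.sock:/ssh-agent \
  -e SSH_AUTH_SOCK="/ssh-agent" \
  my_image ssh-add -l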
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer and created a script that will set up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket
# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
-o StrictHostKeyChecking=no \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile=/dev/null \
-o LogLevel=quiet \
-p 2022 docker@localhost \
-A -M -S $SSH_SOCK -f -n \
tail -f /dev/null
# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)
# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
-v $B2D_AGENT_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
"$#"
# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
-t hello-world "$#"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me accessing ssh-agent to forward keys worked on OSX Mavericks and docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget to use option -A which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environmental variable.
In another OS X terminal issue the docker run command with the SSH_AUTH_SOCK value from step 2 as follows:
docker run --rm -t -i \
-v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root@600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb
/Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer, brought into 2019, using Docker for Mac instead of boot2docker, which is now obsolete.
First, run an ssh server in a container, with /tmp on an exportable volume, like this:
docker run -v tmp:/tmp \
  -v ${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
  -d -p 2222:22 arvindr226/alpine-ssh
Then ssh into this container with agent forwarding
ssh -A -p 2222 root@localhost
Inside of that ssh session find out the current socket for ssh-agent
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below with the value you got in the step above:
docker run -it -v tmp:/tmp \
-e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v mounts the agent socket of the VM, not the one from your Mac.
If you set up your VirtualBox VM to share /tmp, it should work.
Could not open a connection to your authentication agent.
This error occurs when $SSH_AUTH_SOCK env var is set incorrectly on the host or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post
