I need a bash script that will:
Launch the container
Generate a password
Enter the container
Run the 'cd /' command
Change the password using htpasswd to the generated one
I tried it like this:
docker restart c1
a = date +%s | sha256sum | base64 | head -c 32 ; echo
docker exec -u 0 -it c1 bash 'echo cd /'
htpasswd user.passwd webdav a
And so:
docker restart c1
docker exec -u 0 -it c1 bash
cd /
a = date +%s | sha256sum | base64 | head -c 32 ; echo
htpasswd user.passwd webdav a
With the first option, I get:
bash: echo cd /: No such file or directory
With the second one, it enters the container and does nothing.
I tried many variations of the script, but none of them helped.
I would be grateful for any help.
You do not need Docker or debugging tools like docker exec just to generate an htpasswd file.
htpasswd is part of the Apache distribution, and you should be able to install it on your host system using your OS package manager. Since it just manipulates a credential file, it doesn't need the actual server.
# On the host system, without using Docker at all
sudo apt-get update && sudo apt-get install apache2-utils
# Make sure to wrap the password-generating command in `$()`
a=$(date +%s | sha256sum | base64 | head -c 32)
# Make sure to use a variable reference `$a`; -b takes the password from the
# command line and -c creates the file if it doesn't exist yet
htpasswd -bc user.passwd webdav "$a"
This gives you a user.passwd file on your local system. Now when you launch your container, you can bind-mount the file into the container:
docker run -d -p 80:80 ... \
-v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
httpd
The container will be immediately ready to use. If you delete and recreate this container, you do not need to repeat the manual setup step. If you need to launch multiple copies of the container, they can all have the same credentials file without doing manual steps.
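As a sketch of that last point, a second copy can reuse the exact same file; the host port and container name here are just illustrative:
# A second container sharing the same credentials file (port/name are arbitrary)
docker run -d -p 8081:80 \
  -v "$PWD/user.passwd:/usr/local/apache2/conf/user.passwd" \
  --name httpd-2 \
  httpd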
I seem to be stuck here. I'm attempting to write a bash function that starts x docker containers, with an array that holds exposed ports for the given app. I don't want to loop over the array, just the commands, while referencing the array to get the value. The function looks like this:
#!/bin/bash
declare -a HOSTS=( ["app1"]="8002"
                   ["app2"]="8003"
                   ["app3"]="8008"
                   ["app4"]="8009"
                   ["app5"]="8004"
                   ["app6"]="8007"
                   ["app7"]="8006" )

start() {
    for app in "$#"; do
        if [ "docker ps|grep $app" == "$app" ]; then
            docker stop "$app"
        fi
        docker run -it --rm -d --network example_example \
            --workdir=/home/docker/app/src/projects/"$app" \
            --volume "${PWD}"/example:/home/docker/app/src/example \
            --volume "${PWD}"/projects:/home/docker/app/src/projects \
            --volume "${PWD}"/docker_etc/example:/etc/example \
            --volume "${PWD}"/static:/home/docker/app/src/static \
            --name "$app" --hostname "$app" \
            --publish "${HOSTS["$app"]}":"${HOSTS["$app"]}" \
            example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}";
        echo "$app"
    done
}
And I want to pass arguments like so:
./script.sh start app1 app2 app4
Right now it isn't echoing the app, so that points towards the for loop being declared incorrectly... I could use some pointers on this.
This line:
if [ "docker ps|grep $app" == "$app" ];
doesn't do what you want. It looks like you mean to say:
if [ "$(docker ps | grep "$app")" == "$app" ];
but even then you could fail to detect two copies of the application running, and you aren't looking for the application name as a word (so if you look for rm you might find perform running and think rm was running).
You should consider, therefore, using:
if docker ps | grep -w -q "$app"
then …
fi
This runs the docker command and pipes the result to grep, and the if tests the exit status of grep. The -w looks for the value of "$app" as a complete word, and -q keeps grep quiet: it prints nothing and simply exits with status 0 if it found at least one matching line, or non-zero otherwise.
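A quick illustration of the -w behaviour, with plain shell and nothing Docker-specific:
echo "perform running" | grep -w -q rm && echo matched || echo no match   # -> no match
echo "rm running" | grep -w -q rm && echo matched || echo no match        # -> matched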
docker ps -f lets you conveniently check programmatically whether a particular image is running.
for app in "$@"; do   # note: "$@", not "$#", to loop over the given app names
    if docker ps -q -f name="$app" | grep -q .; then
        docker stop "$app"
    :
Unfortunately, docker ps does not set its exit code (at least not in the versions I have available -- I think it has been fixed in some development version after 17.06 but I'm not sure), so we have to use the ugly pipe to grep -q . to check whether the command produced any output. The -q flag on docker ps just minimizes the amount of stuff it prints (it will print just the container ID instead of a bunch of headers and columnar output for each matching container).
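For completeness, here is a sketch of the whole function with the other two bugs fixed as well: the array needs declare -A (with -a, the string keys are evaluated arithmetically and collapse to index 0), and the script needs to dispatch to the function. The image and network names are from the question; the volume and workdir flags are omitted for brevity.
#!/bin/bash
# -A declares an associative array; -a would silently collapse the string keys
declare -A HOSTS=( ["app1"]="8002" ["app2"]="8003" ["app3"]="8008"
                   ["app4"]="8009" ["app5"]="8004" ["app6"]="8007"
                   ["app7"]="8006" )

start() {
    for app in "$@"; do
        # Stop an already-running copy before starting a fresh one
        if docker ps -q -f name="$app" | grep -q .; then
            docker stop "$app"
        fi
        docker run -it --rm -d --network example_example \
            --name "$app" --hostname "$app" \
            --publish "${HOSTS[$app]}:${HOSTS[$app]}" \
            example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}"
        echo "$app"
    done
}

# Dispatch so that `./script.sh start app1 app2 app4` actually calls the function
"$@"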
I have a starting docker script here:
#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
The problem is that this script misbehaves: it deletes the old container every time the script is run, even when nothing has changed.
The "starting new container" section will always pull the most recent image. Here is an example output of docker pull when the local image is already up to date:
Status: Image is up to date for
my-example-registry.com:5050/web-client:latest
Is there any way to improve my script by adding a condition:
Before anything else, check via docker pull whether the local image is the most recent version available on the registry. Only if a newer image was pulled, proceed with stopping and deleting the old container and docker run the newly pulled image.
In this script, how do I parse the status to check whether the local image corresponds to the most up-to-date one available on the registry?
Maybe a docker command can do the trick, but I didn't manage to find a useful one.
Check for the string "Image is up to date" to know whether the pull found anything new, and exit early when the local image is already current. Note that the exit has to happen in the script itself, not inside a (...) subshell, or it won't stop anything:
if sudo docker pull my-example-registry.com:5050/web-client:latest |
    grep -q "Image is up to date"; then
    echo 'Already up to date. Exiting...'
    exit 0
fi
So change your script to:
#!/usr/bin/env bash
set -e
if sudo docker pull my-example-registry.com:5050/web-client:latest |
    grep -q "Image is up to date"; then
    echo 'Already up to date. Exiting...'
    exit 0
fi
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
Simply use docker-compose and you can remove all of the above.
docker-compose pull && docker-compose up
This will pull the image if a newer one exists, and up will only recreate the container if it actually has a newer image; otherwise it will do nothing.
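For reference, a minimal sketch of what that looks like end to end; the compose file below is a hypothetical equivalent of the script above, using the image and port from the question:
# Write a minimal compose file (hypothetical; adjust to your setup)
cat > docker-compose.yml <<'EOF'
version: "3"
services:
  web-client:
    image: my-example-registry.com:5050/web-client:latest
    ports:
      - "8080:80"
EOF
docker-compose pull && docker-compose up -d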
If you're using docker compose, here's my solution: I put the latest docker-compose.yml into an image of its own, right after I've pushed all of the needed images that are referenced in docker-compose.yml.
The server runs this as a cron job:
#!/usr/bin/env bash
docker login --username username --password password
if (( $? > 0 )); then
    echo 'Failed to login'
    exit 1
fi
# Grab latest config; if the image is different then we have a new update to make
pullContents=$(docker pull my/registry:config-holder)
if (( $? > 0 )); then
    echo 'Failed to pull image'
    exit 1
fi
if echo "$pullContents" | grep -q "Image is up to date"; then
    echo 'Image already up to date'
    exit 0
fi
cd /srv/www/
# Grab latest docker-compose.yml that we'll be needing
docker run -d --name config-holder my/registry:config-holder
docker cp config-holder:/home/docker-compose.yml docker-compose-new.yml
docker stop config-holder
docker rm config-holder
# Use new yml to pull latest images
docker-compose -f docker-compose-new.yml pull
# Stop server
docker-compose down
# Replace old yml file with our new one, and spin back up
mv docker-compose-new.yml docker-compose.yml
docker-compose up -d
Config holder dockerfile:
FROM bash
# This image exists just to hold the docker-compose.yml, so when updating remotely the server can pull this, get the latest docker-compose file, then pull the images it references
COPY docker-compose.yml /home/docker-compose.yml
# Ensures that the image is subtly different every time we deploy. This is required because we want the server to see that this image has changed, to trigger a new deployment
RUN bash -c "touch random.txt; echo $(echo $RANDOM | md5sum | head -c 20) >> random.txt"
# Wait forever
CMD exec bash -c "trap : TERM INT; sleep infinity & wait"
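On the publishing side, the flow would then be something like this (the Dockerfile name here is an assumption; adjust it to wherever the config-holder Dockerfile lives):
# Build and push the config-holder after the application images have been pushed
docker build -t my/registry:config-holder -f Dockerfile.config-holder .
docker push my/registry:config-holder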
What's the simplest way to get an environment variable from a docker container that has not been declared in the Dockerfile?
For instance, an environment variable that has been set through some docker exec container /bin/bash session?
I can do docker exec container env | grep ENV_VAR, but I would prefer something that just returns the value.
I've tried using docker exec container echo "$ENV_VAR", but the substitution seems to happen outside of the container, so I don't get the env var from the container, but rather the env var from my own computer.
Thanks.
To view all env variables:
docker exec container env
To get one:
docker exec container env | grep '^VARIABLE=' | cut -d'=' -f2-
(Anchoring the grep avoids substring matches, and -f2- keeps values that themselves contain an =.)
The proper way to run echo "$ENV_VAR" inside the container so that the variable substitution happens in the container is:
docker exec <container_id> bash -c 'echo "$ENV_VAR"'
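The single quotes are the important part; with double quotes, your local shell expands the variable before docker exec ever runs:
docker exec <container_id> bash -c "echo $ENV_VAR"    # expanded on the host
docker exec <container_id> bash -c 'echo "$ENV_VAR"'  # expanded in the container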
You can use printenv VARIABLE instead of /bin/bash -c 'echo "$VARIABLE"'. It's much simpler, and it does not perform substitution:
docker exec container printenv VARIABLE
The downside of using docker exec is that it requires a running container, so docker inspect -f might be handy if you're unsure a container is running.
Example #1. Output a list of space-separated environment variables in the specified container:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{$value}} {{end}}' container_name
the output will look like this:
ENV_VAR1=value1 ENV_VAR2=value2 ENV_VAR3=value3
Example #2. Output each env var on new line and grep the needed items, for example, the mysql container's settings could be retrieved like this:
docker inspect -f \
'{{range $index, $value := .Config.Env}}{{println $value}}{{end}}' \
container_name | grep MYSQL_
will output:
MYSQL_PASSWORD=secret
MYSQL_ROOT_PASSWORD=supersecret
MYSQL_USER=demo
MYSQL_DATABASE=demodb
MYSQL_MAJOR=5.5
MYSQL_VERSION=5.5.52
Example #3. Let's modify the example above to get bash-friendly output which can be used directly in your scripts:
docker inspect -f \
'{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' \
container_name | grep MYSQL
will output:
export MYSQL_PASSWORD=secret
export MYSQL_ROOT_PASSWORD=supersecret
export MYSQL_USER=demo
export MYSQL_DATABASE=demodb
export MYSQL_MAJOR=5.5
export MYSQL_VERSION=5.5.52
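Since those lines are valid shell, you can load them straight into your current session. This is just a sketch reusing container_name and the MYSQL filter from the example above; note that values containing spaces would need extra quoting:
# Import the container's MYSQL_* variables into the current shell
source <(docker inspect -f \
  '{{range $index, $value := .Config.Env}}export {{$value}}{{println}}{{end}}' \
  container_name | grep MYSQL)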
If you want to dive deeper, then go to Go’s text/template package documentation with all the details of the format.
Since docker inspect outputs plain JSON, we can process it with jq, and unlike the accepted answer we don't need to exec into the container.
docker inspect <NAME|ID> | jq '.[] | .Config.Env'
Output sample
[
"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
"NGINX_VERSION=1.19.4",
"NJS_VERSION=0.4.4",
"PKG_RELEASE=1~buster"
]
To retrieve a specific variable
docker inspect <NAME|ID> | jq -r '.[].Config.Env[]|select(match("^<VAR_NAME>"))|.[index("=")+1:]'
See the jq manual for the details.
None of the above answers show you how to extract a variable when there is no running container (if you use the echo approach with docker run, you won't get any output).
Simply run the image with printenv, like so:
docker run --rm <image> printenv <MY_VAR>
(Note that docker-compose run instead of docker run works too)
If by any chance you use VSCode and have the Docker extension installed, just right-click on the container you want to check (within the Docker extension), click Inspect, and search for env there; you will find all your environment variable values.
We can override the entrypoint with the docker run command, so this also works without a running container.
Example: show the PATH environment variable:
using bash and echo (note: the answer above claims that echo will not produce any output, which is incorrect):
docker run --rm --entrypoint bash <image> -c 'echo "$PATH"'
using printenv
docker run --rm --entrypoint printenv <image> PATH
#aisbaa's answer works if you don't care when the environment variable was declared. If you want the environment variable even if it has been declared inside an exec /bin/bash session, use something like:
IFS="=" read -r -a out <<< "$(docker exec container /bin/bash -c "env | grep ENV_VAR" 2>&1)"
It's not very pretty, but it gets the job done.
To then get the value, use:
echo ${out[1]}
This command inspects the environment of the docker stack processes (dockerd, containerd, containerd-shim) on the host:
pidof dockerd containerd containerd-shim | tr ' ' '\n' \
| xargs -L1 -I{} -- sudo xargs -a '/proc/{}/environ' -L1 -0
The first way we use to find the ENV variables is docker inspect <container name>
The second way is docker exec <first few characters of the container ID> bash -c 'echo "$ENV_VAR"'
There is a misconception in the question that causes confusion:
there is no persistent "running session" you can reach, so no bash session can change the environment for anyone else.
docker exec -ti container /bin/bash
starts a new shell process in the container, so if you do export VAR=VALUE, the variable goes away as soon as you leave that shell and won't exist anymore.
Perhaps a good example:
# assuming TESTVAR did not exist previously, this is empty
docker exec container env | grep TESTVAR
# -> TESTVAR=a new value!
docker exec container /bin/bash -c 'TESTVAR="a new value!" env' | grep TESTVAR
# again empty
docker exec container env | grep TESTVAR
The variables that env shows come from the Dockerfile, from the docker run command, from docker itself, and from whatever the entrypoint sets.
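To make the contrast concrete: a variable passed at creation time does persist across exec sessions. A sketch, using alpine as an arbitrary image:
# TESTVAR is set when the container is created, so every exec sees it
docker run -d --name demo -e TESTVAR="set at run time" alpine sleep 3600
docker exec demo env | grep TESTVAR   # -> TESTVAR=set at run time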
The other answers here are good. But if you really need the environment variables a program was started with, you can inspect the /proc/pid/environ contents in the container, where pid is the container-internal process id of the running command.
# environmental props
docker exec container cat /proc/pid/environ | tr '\0' '\n'
# you can check this is the correct pid by checking the ran command
docker exec container cat /proc/pid/cmdline | tr '\0' ' '
I have a shell script which runs as follows :
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to actually enter the postgres container, which is a running container. But the issue is that the shell script hangs on the 3rd line, during the 'docker exec' step.
My end goal is using the bash script, enter a running postgres container and run another bash script inside that container.
However the same command when I run it from command line, it works fine and gets me into the postgres container.
Please help, I have spent hours and hours trying to solve this, but made no progress.
Thanks again
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter= option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also give containers a --name when you run them, which will be less fraught with danger than the script you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging, as opposed to sitting there waiting for input (docker exec -i -t keeps stdin open and allocates a TTY, so an interactive shell will just wait for you to type). Try running a command directly:
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
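Applied to your original goal of running another bash script inside the container, you can pipe a local script to a non-interactive bash; the script name here is illustrative:
# No -t needed: scripts have no TTY, and bash reads the commands from stdin
docker exec -i postgres_whatever bash < ./inner_script.sh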