How to capture the output of a docker-compose exec command in a bash variable

I am writing a bash script to check the status of a mongodb instance running in a docker container. This code validates that I can successfully execute the mongo command inside the container:
cat <<END | docker-compose exec -T mongodb1 mongo --username root --password passwd
rs.status().myState
END
However, I would like to be able to store the stdout of rs.status().myState in a variable. Something similar to this:
MY_STATE=$(docker-compose exec -T mongodb1 mongo --username root --password passwd &&
rs.status().myState)
But I get the exception: uncaught exception: ReferenceError: invalid assignment left-hand side
How do I capture the output from the mongo shell running inside the container and store it in a variable?

No matter what it looks like on your terminal, you can't write a shell script that first starts some program and then types some input into it; that's what your last invocation looks like it's trying to do. If you try to run something like
some-command && \
input to some-command
then first the command runs to completion, with no input, and then the shell tries to run the input as a second command.
Your first command is probably closer to something that would actually work. If the input fits on a single line then I might write
echo 'input to some-command' | some-command
or, in the more specific case of your command,
MY_STATE=$(echo 'rs.status().myState' | docker-compose exec -T mongodb1 mongo --username root --password passwd)
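If you prefer not to pipe input at all, the mongo shell also accepts an --eval option, and --quiet suppresses the connection banner so that only the value lands in the variable; a minimal sketch using the same container and credentials:
MY_STATE=$(docker-compose exec -T mongodb1 \
  mongo --quiet --username root --password passwd \
  --eval 'rs.status().myState')
echo "$MY_STATE"   # should print a replica-set member state code such as 1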
You might reconsider whether you actually need docker-compose exec here. You can't run that without also having the ability to docker run a container that can take over the entire host system. If you have the MongoDB command-line tools available on your host, and if you've published a port with the Compose ports: option, then it might work to skip the docker-compose exec part
MY_STATE=$(echo 'rs.status().myState' | mongo --username root --password passwd)
If you're doing this for a health check, the other thing to consider is that, if a container's main process exits, the container will exit too. That's not a 100% guarantee: it's very possible for a container to not exit but also not be functional, maybe waiting for something in its environment to reappear (Kubernetes has much richer health checks). But if you can rely on the database server exiting when it becomes unhealthy, then you don't need a check like this at all.
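If you do end up wanting an external liveness probe anyway, the captured value can be turned directly into an exit status; a small sketch (assuming myState code 1, i.e. PRIMARY, is the healthy state you care about):
#!/bin/sh
# Succeed only when the replica-set member reports state 1 (PRIMARY).
state=$(echo 'rs.status().myState' | \
  docker-compose exec -T mongodb1 mongo --quiet --username root --password passwd)
[ "$state" = "1" ]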

Related

Send commands directly to a running process and indirectly (e.g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when being detached. I got this with:
_console_input="/app/input.buffer"
# Clear console buffers
true >$_console_input
# Start the main application
echo "[....] Starting Minecraft server..."
tail -f $_console_input | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$@" >>/app/input.buffer
The code can be found here
Does someone know a way of how to be able to now add the functionality to directly enter commands?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASE THREE AND FOUR: Use docker-compose (look at the use case "two", it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/sh
while IFS= read -rp '$ ' command; do
  printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
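With that in place, an attached run drops you into the mini-shell, where each line typed at the '$ ' prompt is appended to /app/input.buffer, and the detached form plus the existing console helper keeps working unchanged; for example:
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
docker exec spigot console "time set day"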

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even from this official Docker blog, using a bash-script as an entry-point still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e
if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"
    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi
    exec gosu postgres "$@"
fi
exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get Dockerfile using an Entrypoint as a bash script that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that's dependent on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$@"
And since that also failed, I think that rules out something else going wrong inside the script being the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to using something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, one that would also work with docker run for other purposes, but even with Docker's official blog and countless questions here and blog posts from other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
"$@" is just the command-line arguments as a string. You are providing none, so it is executing a null string. That exits and will kill the container. Note also that the exec command never returns to the running script: it replaces the current shell process with the command it runs, so it doesn't keep the script running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while true loop would probably work.
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$@" to run whatever was passed as the command.
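To make that concrete with the Dockerfile from the question (image tag test from build.sh), a rough sketch of how the pieces combine:
# ENTRYPOINT ["docker-entrypoint.sh"] plus CMD ["postgres"] makes the main process:
#   docker-entrypoint.sh postgres
# so "$@" inside the script is "postgres" and the script ends with "exec gosu postgres postgres".
docker run --rm test             # entrypoint gets the default CMD: postgres
docker run --rm test ls -l /     # command override: the script falls through to "exec ls -l /"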
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist; if it's a binary, it must be statically linked or all of its shared library dependencies must be in the image already. For your script you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Entering text into a docker container via ssh from bash file

What I am trying to do is setup a local development database and to prevent everyone having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is in the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a dockers container terminal?
In order to execute a command inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
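Applied to your script, that could look something like the sketch below; the /opt/init-db.sh path is only a placeholder for whatever setup commands your database needs:
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
# Run the setup non-interactively instead of opening an interactive shell (no -it needed).
docker exec "${1}" sh -c "echo 'initializing database...' && /opt/init-db.sh"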
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend looking at building an image with a Dockerfile instead. i.e. Once you figure out the commands to run, they would become RUN commands and the container after docker run would expose a local development database.
If you really want some commands to run within the shell after it is started and maintain the session, then depending on the base image you might be able to mount a set of profile scripts with the required commands, e.g. -v db_profile:/etc/profile.d where db_profile is a folder with the shell scripts you want to run. To get them to run you'd exec sh -l so the login startup scripts run.
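As a sketch of that second approach (the db_profile folder name and my-db-image are placeholders, and whether login shells source /etc/profile.d depends on the base image):
# db_profile/10-init-db.sh would hold the setup commands.
docker run -d --name personal -v "$PWD/db_profile:/etc/profile.d" my-db-image
docker exec -it personal sh -l   # the login shell sources /etc/profile.d/*.sh, then leaves you at a prompt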

What is the difference between running docker exec in terminal and in bash script

Let's assume I run the following command inside a script:
#!/usr/bin/env bash
docker run --name mydb --rm -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli -p "9999:5432" -v $PWD/db:/opt -d postgres
When I then run the following command to create a database it works fine.
docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 -c "CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';"
However when I add this line to the script above, so the script not only starts the postgres server but also creates the database it fails.
I do not really understand why I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I know I can instruct the docker postgres image to create a database on start, but this is actually not what I want to achieve. I am just using this as an example to understand the problem.
When you're running it in a script, it's most likely just happening too quickly. The docker run … command returns immediately, and then docker exec … is attempting to use PostgreSQL while the database server is still starting up. You need to wait for it to be ready before creating the extra database.
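One way to wait is to poll until the server accepts connections before issuing the CREATE DATABASE; a sketch using pg_isready, which ships in the postgres image:
# Retry until PostgreSQL inside the container accepts connections.
until docker exec mydb pg_isready -U kgalli >/dev/null 2>&1; do
  sleep 1
done
docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 -c "CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';"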
That said, the postgres image has functionality in its entrypoint script to run custom initialization scripts. You can put your CREATE DATABASE … statement into a .sql file (or a .sh script) and mount it into /docker-entrypoint-initdb.d in the container. The postgres container will automatically run it when the database server is ready.
The docs for this seem to have disappeared, but you can see the implementation in docker-entrypoint.sh.
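A sketch of that approach, where ./initdb is a placeholder directory holding a create-test-db.sql file with the CREATE DATABASE statement from above:
docker run --name mydb --rm \
  -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli \
  -p "9999:5432" \
  -v $PWD/db:/opt \
  -v $PWD/initdb:/docker-entrypoint-initdb.d \
  -d postgres
# Anything in /docker-entrypoint-initdb.d runs once, when the data directory is first initialized.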
Using docker run you are starting a new container; using docker exec you are executing a command in an already running container.
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
The docker exec command runs a new command in a running container.
If the container is paused, then the docker exec command will fail with an error
$ docker pause test
test
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ae3b36715d2 ubuntu:latest "bash" 17 seconds ago Up 16 seconds (Paused) test
$ docker exec test ls
FATA[0000] Error response from daemon: Container test is paused, unpause the container before exec
$ echo $?
1
(ref.1)
(ref.2)

docker-compose script with prompt…. better solution?

I have a bash script to start various docker-compose.yml(s)
One of these compose instances is docker-compose.password.yml, which creates a password file for mysql. For that I need to prompt the user to input a user name and then run a command in a service container that is not normally running.
Basically the only way I can think of to accomplish this is to run the container in an idle state, exec the command, and shut the container down. Is there a better way?
(It would be easier to do it directly with docker run, but then I would have to check whether the image is already available, and keep image definitions in the various docker-compose.ymls plus now also in the bash script.)
My solution:
docker-compose.password.yml
version: '2'
services:
  createpw:
    command:
      top -b -d 3600
then
docker-compose -f docker-compose.password.yml up -d
prompt the user for the credentials from my bash script, outside of docker
read -p "Input user name.$(echo $'\n> ')" username
and send it to the running docker
docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
and then docker-compose down
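Put together, the whole flow as a single script is roughly (file and service names as above):
#!/bin/bash
# Start the idle createpw service.
docker-compose -f docker-compose.password.yml up -d
# Ask for the user name outside of docker.
read -p "Input user name: " username
# Send it to the running container.
docker exec createpw /bin/bash -c "mysql_config_editor set --user=${username} --password"
# Tear the service down again.
docker-compose -f docker-compose.password.yml down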
Tried and not working:
I tried to have just a small sub-script prompting for the input, right under command:
command:
  /bin/bash /somewhere/createpassword.sh
This did produce the file, but the user was an empty string, as the prompt didn’t stop the docker execution. It didn’t matter if I used compose -d or not.
Any suggestions are welcome. Thanks.
