Use container environment variable in docker run without using bash -c

I have a WP-CLI container where I have to run the following command:
wp --allow-root core config --dbname=$MYSQL_DATABASE --dbuser=$MYSQL_USER --dbpass=$MYSQL_PASSWORD --dbhost=$WP_CLI_MYSQL_HOST --debug
When I run it from bash inside the container, I have no problem, but when I try:
docker-compose run --rm wordpress-cli --rm core config --dbname=$MYSQL_DATABASE --dbuser=$MYSQL_USER --dbpass=$MYSQL_PASSWORD --dbhost=$WP_CLI_MYSQL_HOST --allow-root --debug
All environment variables are evaluated on the host instead of in the container, so they are passed to the container empty.
I found in another question that using bash -c 'my command' will do the trick, but my ENTRYPOINT is the wp command, so I want to run it without wrapping it in bash.

Just escape the $ so they get passed through to the container:
docker-compose run --rm wordpress-cli --rm core config --dbname=\$MYSQL_DATABASE --dbuser=\$MYSQL_USER ...
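Why this works is easy to see with plain bash on the host, no docker needed: the host shell expands an unescaped $VAR before docker-compose ever sees it, while an escaped \$VAR is passed through literally.

```shell
MYSQL_DATABASE=hostdb

# Unescaped: the host shell substitutes its own value first
echo --dbname=$MYSQL_DATABASE    # prints: --dbname=hostdb

# Escaped: the literal string survives, so it can be expanded
# later with the container's value instead
echo --dbname=\$MYSQL_DATABASE   # prints: --dbname=$MYSQL_DATABASE
```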

Related

GitLab CI cannot start bash in container

Good day!
I am using the PowerShell shell executor for my GitLab runner. For a test I need to run bash in my php-fpm container, but I get a strange error:
docker exec -it phpfpm bash
error:
the input device is not a TTY. If you are using mintty, try prefixing the command with winpty
I also tried to run the command with the winpty prefix:
winpty docker exec -it phpfpm bash
or
winpty -Xallow-non-tty docker exec -it phpfpm bash
but got: "winpty" is not recognized as an internal or external command, operable program or batch file.
I installed Git with the standard Windows shell option instead of MinTTY. I also tried to install winpty separately, but it didn't work. Are there any options to solve this problem, or what could be the cause?

Jenkins console does not show the output of command runs on docker container

I am running the command below to execute my tests in a Docker container:
sudo docker exec -i 6d49272f772c bash -c "mvn clean install test"
The above command runs from a Jenkins shell build step, but the Jenkins console does not show the logs for the test execution.
I had a similar problem with docker start (which behaves similarly to docker exec). I used the -i option and it worked fine outside Jenkins, but the console in Jenkins didn't show any output from the command. I replaced -i with -a, similar to the following:
sudo docker container create -it --name container-name some-docker-image some-command
sudo docker container start -a container-name
sudo docker container rm -f container-name
The docker exec method doesn't have a -a option, so removing the -i option may work too (since you are not interacting with the container in Jenkins). If that doesn't work, you can convert to the commands above and achieve a similar result, with standard output being captured.

Dockerfile CMD for taking bash commands from host

I've created a Dockerfile with various compile and build tools. The goal of the Docker image is to standardize our development tools and make development easy and consistent.
Everything is installed.
What I am stuck on is how to keep the Docker container running and how to get a bash shell into it, so that I can run, for example, make.
If I use ENTRYPOINT /bin/bash, my container exits immediately. How do I keep the container running?
You should specify the command at run time: run your Docker container in interactive mode (-it) and set the command to /bin/bash:
docker run -it myDockerImage myCommandToExecuteInteractively
For instance:
docker run -it myDocker /bin/bash
Here is a real life example:
a) Pulling the most basic image
docker pull debian:jessie-slim
b) Let's have a bash there:
docker run -it debian:jessie-slim /bin/bash
c) Enjoy your shell.
A Docker container will run for as long as the command given by the CMD/ENTRYPOINT in your Dockerfile keeps running.
You can run your Docker container in interactive mode using the -it switches:
sudo docker run -it --entrypoint=/bin/bash <imagename>
Example : docker run -it --entrypoint=/bin/bash ubuntu:14.04
This will start an interactive shell in your container. Your container will exit as soon as you exit that shell.
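Another common pattern, not mentioned in the answers above, is to keep the container alive in the background with a long-running no-op command and attach shells on demand with docker exec. A minimal sketch (the image name my-dev-tools is hypothetical):

```dockerfile
FROM my-dev-tools
# Keep PID 1 alive indefinitely so the container never exits on its own;
# attach a shell with: docker exec -it <container> /bin/bash
CMD ["sleep", "infinity"]
```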

Piping docker run container ID to docker exec

In my development, I find myself issuing a docker run command followed by a docker exec command on the resulting container ID quite frequently. It's a little annoying to have to copy/paste the container ID between commands, so I was trying to pipe the container ID into my docker exec command.
Here's my example command.
docker run -itd image | xargs -i docker exec -it {} bash
This starts the container, but then I get the following error.
the input device is not a TTY
Does anyone have any idea how to get around this?
Edit: I also forgot to mention I have an ENTRYPOINT defined and cannot override that.
Do this instead:
ID=$(docker run -itd image) && docker exec -it $ID bash
Because xargs executes its arguments without allocating a new tty.
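The key piece is the $( ... ) command substitution, which captures the command's stdout into a shell variable without piping it to another process. A docker-free sketch of the same pattern (the ID is a stand-in for what docker run would print):

```shell
# Stand-in for: ID=$(docker run -itd image)
ID=$(echo 3f9a1b2c)
echo "docker exec -it $ID bash"   # prints: docker exec -it 3f9a1b2c bash
```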
If you just want to "bash" into the container, you do not have to pass the container ID around. You can simply run:
docker run -it --rm <image> /bin/bash
For example, if we take the ubuntu base image
docker run -it --rm ubuntu /bin/bash
root@f80f83eec0d4:/#
From the documentation:
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
--rm : Automatically remove the container when it exits
The command /bin/bash overrides the default command specified with the CMD instruction in the Dockerfile.

Cannot run ruby commands while running Docker

I have an image I built with Ruby in it. I am able to run Ruby commands, irb, and install gems inside the running container with:
docker run -it jikkujose/apple
I can also do this to list the files in the container:
docker run -it jikkujose/apple ls
But when I try to run Ruby commands, it fails:
docker run -it jikkujose/apple ruby -e "puts 'Hello'"
Error:
Error response from daemon: Cannot start container c888aa8d2c7510a672608744a69f00c5feda4509742d54ea2896b7ebce76c16d: [8] System error: exec: "ruby": executable file not found in $PATH
That is probably because the ruby executable is not in the PATH of the user running the container process (i.e. root, or the user specified with the USER instruction in the Dockerfile). The following two options might help with your problem.
1. Specify the full path to the ruby binary when running the container: docker run -it jikkujose/apple /usr/bin/ruby -e "puts 'Hello'"
2. Add /usr/bin to the path in the Dockerfile: ENV PATH /usr/bin:$PATH. I'm not 100% sure this works, but the ENV instruction in the Dockerfile should add this environment variable to the container (source: docker.com).
Alternatively, you can specify /usr/bin/ruby as the ENTRYPOINT in your Dockerfile, that is: ENTRYPOINT ["/usr/bin/ruby"]. Then you can run docker run -it jikkujose/apple -e "puts 'Hello'". Note that this makes the container run /usr/bin/ruby by default, and that you need to override the entrypoint if you want to run ls or other commands.
Edit:
A minimal viable Dockerfile solution is given below. Let us assume that /usr/bin is not already in the $PATH environment variable (in the Ubuntu image it actually is).
FROM ubuntu:latest
RUN apt-get update && apt-get install -y ruby
ENV PATH /usr/bin:$PATH
CMD ["bash"]
Running docker run --rm -it pathtest ruby -e "puts 'Hello'" now outputs Hello in the terminal.
Edit 2:
Ah, you built the image with docker commit. You can pass environment variables to the docker run command. To do this, simply run docker like so:
docker run --rm -e "PATH=/usr/bin" -it pathtest ruby -e "puts 'Hello'"
The -e option for docker run lets you specify or override an environment variable inside the container. Note that you will have to provide all paths you want $PATH to equal with this method.
You may also want to simply edit the PATH variable inside the container and then recommit the container so that /usr/bin is present in the $PATH environment variable stored in the container.
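The effect of -e is analogous to overriding a variable for a single process with env(1) on the host, which you can try without docker:

```shell
# Override PATH for one command only, the way -e "PATH=/usr/bin"
# overrides it for the container's main process
env PATH=/usr/bin /usr/bin/printenv PATH   # prints: /usr/bin
```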
It's possible PATH is not set correctly; therefore try:
docker run -it jikkujose/apple /usr/bin/ruby -e "puts 'Hello'"
or
docker run -it jikkujose/apple /bin/sh -c "/usr/bin/ruby -e \"puts 'Hello'\""
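The tricky part of the sh -c form is the nested quoting: the inner double quotes must be escaped so the whole -e program reaches ruby as a single argument. A minimal illustration with echo standing in for ruby:

```shell
# The escaped inner quotes survive into the string sh -c receives
/bin/sh -c "echo \"puts 'Hello'\""   # prints: puts 'Hello'
```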
