Assign a bash command's output to a variable in a Dockerfile

I need to assign the output of a bash command to a variable in a Dockerfile. Here is my attempt:
FROM centos:7
RUN data=$(ls /)
ENV DATA $data
After starting a container (docker run -it <image> bash), echo $DATA prints nothing. I have searched on Google but found nothing that works. I am stuck!
How do I assign the output of a bash command to a variable in a Dockerfile?

You can't do that, since each RUN instruction spawns its own shell; any variable it sets is gone when that shell exits.
Alternatively, you can save the information to some file, and use an ENTRYPOINT to set the env variable using some script once the container is running.
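A minimal sketch of that approach; the file names (entrypoint.sh, /tmp/data.txt) are illustrative, not from the original question:
# Dockerfile
FROM centos:7
# capture the command output into a file at build time
RUN ls / > /tmp/data.txt
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# entrypoint.sh
#!/bin/bash
# set the env variable at container start, then hand off to the requested command
export DATA="$(cat /tmp/data.txt)"
exec "$@"
With this, docker run -it <image> bash gives a shell where echo "$DATA" is populated.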

You can't carry a shell variable from one build step to the next: an image is built as a layered filesystem, and each RUN instruction executes its command in a fresh container which then exits, so anything the shell sets is lost. What you can do is write the assignment into a shell startup file so it is evaluated at run time:
FROM centos:7
RUN echo 'export data=$(ls /)' >> ~/.bashrc
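Note that this only takes effect in shells that actually read ~/.bashrc, i.e. interactive bash sessions (the image name below is a placeholder):
docker build -t myimage .
docker run -it myimage bash
echo "$data"    # populated here, but not visible to non-interactive commands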

Related

Docker image env variables overwritten by local machine

Why is it that when checking the env for an image I create, I get the image environment variables listed as expected, but when I try to access one of those env variables (e.g. $PATH), I get my local machine's environment variable output instead?
I believe I misunderstand how docker environment variables work. I'm attempting to run some commands against a docker container and am seeing what I consider unexpected behavior. I have created a simple example to try to demonstrate.
Dockerfile:
FROM node:12.13.0
ENV PATH="${PATH}:/custom-path/goes-here"
Commands:
docker build . --tag env-test
docker run env-test /bin/bash -c "env"
docker run env-test /bin/bash -c "$PATH"
Expected Output from final two commands.
docker run env-test /bin/bash -c "env".
...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
...
docker run env-test /bin/bash -c "echo $PATH"
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
Actual Output from final two commands
docker run env-test /bin/bash -c "env".
...
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/custom-path/goes-here
...
docker run env-test /bin/bash -c "echo $PATH"
/Users/local-machine-user/Downloads/google-cloud-sdk/bin:/Users/local-machine-user/.nvm/versions/node/v12.16.1/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin:/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Users/local-machine-user/Downloads/google-cloud-sdk/bin:/Library/Frameworks/Python.framework/Versions/3.7/bin
The output of running echo $PATH against the created image is my local machine's $PATH variable. What?
The primary thing I'm trying to do is execute a script against the Docker image that requires the environment variables I set in the image, but the script fails because the variables it sees end up being my local machine's rather than the ones specified in the image.
Say you're trying to run your third example:
docker run env-test /bin/bash -c "echo $PATH"
The first thing that happens here is that your local shell processes this command and does its usual set of expansions. Environment variable references in double quotes are expanded, for example. Once it's built the final command line, then the shell executes it.
A generally useful trick is to just put echo at the front of the command
echo docker run env-test /bin/bash -c "echo $PATH"
This will show you the command that would have been run, but not actually run it.
To make this work you need to prevent your local shell from expanding the environment variable, so that the shell launched in the container can do it. Either single quotes or backslash escaping will work:
docker run env-test /bin/sh -c 'echo $PATH'
docker run env-test /bin/sh -c "echo \$PATH"
The primary thing I'm trying to do is execute a script against the docker image that requires those environment variables I set in the image
The best way to approach this is probably to write a normal shell script and COPY it into your image. This saves both layers of quoting and confusion around which shell is processing things like variables. If you can't modify the image, an alternative is to bind-mount a script from the host.
# If the script is in the image
docker run --rm env-test path-echoer.sh
# If not
docker run --rm -v $PWD:/scripts env-test /scripts/path-echoer.sh
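For completeness, path-echoer.sh could be as trivial as this (the script name is just this answer's example):
#!/bin/sh
# runs inside the container, so it sees the image's PATH, not the host's
echo "$PATH"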
You should escape the dollar sign when using $PATH in a string - "echo \$PATH"
What happens is that when running this line:
docker run env-test /bin/bash -c "echo $PATH"
Bash first expands $PATH locally, then passes the resulting string to docker. So the command that actually gets run is something like:
docker run env-test /bin/bash -c "echo /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

Access a bash script variable outside the docker container in which the script is running

I have a bash script running inside a docker container. In this script, I set the value of some variable.
Can I somehow access the value of this variable outside the container?
I tried to make the variable "global" but could not figure out how to do it. Is it a good idea to make the required variable an environment variable inside the container?
How to reproduce
Create a bash script called temp.sh with the following contents:
a=$RANDOM
Now, run this file in a docker container as follows:
docker run -it --rm -v $(pwd):/opt alpine sh -c "sh /opt/temp.sh"
Desired behaviour: To be able to access the variable a outside the docker container
Credit: This comment by Mark
I mounted a directory on the docker filesystem using
docker run -v <host-file-system-directory>:<docker-file-system-directory>
In the bash script, I added
echo "$variable" >docker-file-system-directory/variable.txt
As I had mounted a host filesystem directory on the docker filesystem, I can still access variable.txt simply using cat <host-file-system-directory>/variable.txt
Note that docker-file-system-directory must be an absolute path, and not a relative path.
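Putting that together with the temp.sh example from the question (reusing the question's own /opt mount):
# temp.sh
a=$RANDOM
# absolute path inside the container, backed by the mounted host directory
echo "$a" > /opt/variable.txt

Then on the host:
docker run -it --rm -v $(pwd):/opt alpine sh -c "sh /opt/temp.sh"
cat variable.txt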
One way of achieving that is using docker exec, if your container is running and has access to bash.
#!/usr/bin/env bash
set -x
yourContainerName="testContainerName"
test=$(docker exec -i "${yourContainerName}" bash <<EOF
# do some work here e.g. execute your script
testVar="thisIsTest" # the value we want to access outside of container
echo \$testVar
EOF
)
echo $test
We pass a multiline script to the docker container, which at the end echoes the value we need. That value is then accessible from the shell that executed docker exec.
Output looks like this:
++ docker exec -i testContainerName bash
+ test=thisIsTest
+ echo thisIsTest
thisIsTest

Entering text into a docker container via ssh from bash file

What I am trying to do is set up a local development database, and to save everyone from going through all the steps I thought it would be useful to create a script.
What I have below stops once it is in the container's terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command into a docker container's terminal via a script?
In order to execute a command inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend building an image with a Dockerfile instead: once you figure out the commands to run, they become RUN instructions, and the container started by docker run would then expose a local development database.
If you really want some commands to run within the shell after it is started, and to keep the session open, then depending on the base image you might be able to mount a bash profile that has the required commands, e.g. -v db_profile:/etc/profile.d, where db_profile is a folder with the shell scripts you want to run. To get them to run, you'd exec sh -l so that the login startup scripts are sourced, as sketched below.
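A rough sketch of that profile-mount idea (devdb, my_db_image and init-db.sh are all made-up names):
# host: db_profile/init-db.sh
#!/bin/sh
echo "seeding the local development database..."
# the actual database setup commands would go here

# mount the profile directory, then use a login shell so /etc/profile.d is sourced
docker run -d --name devdb -v "$PWD/db_profile:/etc/profile.d" my_db_image
docker exec -it devdb sh -l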

Setting environment variables when running docker in detached mode

If I include the following line in /root/.bashrc:
export $A = "AAA"
then when I run the docker container in interactive mode (docker run -i), the $A variable keeps its value. However if I run the container in detached mode I cannot access the variable. Even if I run the container explicitly sourcing the .bashrc like
docker run -d my_image /bin/bash -c "cd /root && source .bashrc && echo $A"
such line produces an empty output.
So, why is this happening? And how can I set the environment variables defined in the .bashrc file?
Any help would be very much appreciated!
The first problem is that the command you are running has $A being interpreted by your host's shell (not the container's shell). On your host, $A is likely blank, so your command effectively becomes:
docker run -i my_image /bin/bash -c "cd /root && source .bashrc && echo "
Which does exactly as it says. We can escape the variable so it is sent to the container and properly evaluated there:
docker run -i my_image /bin/bash -c "echo \$A"
But this will also be blank because, although the container is, the shell is not in interactive mode. But we can force it to be:
docker run -i my_image /bin/bash -i -c "echo \$A"
Woohoo, we finally got our desired result. But with an added error from bash because there is no TTY. So, instead of interactive mode, we can just allocate a pseudo-TTY:
docker run -t my_image /bin/bash -i -c "echo \$A"
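For completeness: if the goal is simply to have $A set inside the container, the conventional mechanisms are ENV in the Dockerfile or -e on docker run, both of which also work in detached mode (no .bashrc needed):
# in the Dockerfile
ENV A=AAA
# or at run time; single quotes so the container's shell expands $A
docker run -e A=AAA my_image /bin/bash -c 'echo $A'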
After running some tests, it appears that when you run a container in detached mode, overriding the default environment variables doesn't always happen the way you want, depending on where in the Dockerfile the variable is used.
For example, running a container in detached mode like so:
docker run -d --name image_name_container image_name
whatever ENV variables you defined within the Dockerfile take effect everywhere (read on and you will see what "everywhere" means).
Example of a simple Dockerfile (alpine is just a lightweight Linux distribution):
FROM alpine:latest
# declaring a docker env variable and giving it a default value
ENV MY_ENV_VARIABLE dummy_value
# copying two dummy scripts into a place where I can execute them straight away
COPY ./start.sh /usr/sbin
COPY ./not_start.sh /usr/sbin
# in this script I could do: echo $MY_ENV_VARIABLE > /test1.txt
RUN not_start.sh
RUN echo $MY_ENV_VARIABLE > /test2.txt
# in this script I could do: echo $MY_ENV_VARIABLE > /test3.txt
ENTRYPOINT ["start.sh"]
Now if you want to run your container detached and override some ENV variables, like so:
docker run -d -e MY_ENV_VARIABLE=new_value --name image_name_container image_name
surprise! The var MY_ENV_VARIABLE is only overridden inside the script run by the ENTRYPOINT (and I checked: the same thing happens if you replace ENTRYPOINT with CMD). It would also be overridden in any subscript called from that start.sh. But references to MY_ENV_VARIABLE made in a RUN instruction, or in the Dockerfile itself, are not overridden.
In other words, $MY_ENV_VARIABLE resolves to dummy_value or new_value depending on whether you are in the ENTRYPOINT (or CMD) path or not.
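A quick way to check this yourself, assuming start.sh does the echo $MY_ENV_VARIABLE > /test3.txt mentioned in the comment above and then keeps the container alive:
docker run -d -e MY_ENV_VARIABLE=new_value --name image_name_container image_name
docker exec image_name_container cat /test2.txt   # dummy_value (baked in at build time)
docker exec image_name_container cat /test3.txt   # new_value (written by the ENTRYPOINT)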

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
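That is, once inside the session (using the question's own /path/to example):
cd /path/to && ./test.sh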
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile solved it for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (untested) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update below.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update below.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To avoid opening a new shell for every change, I now do the following:
In the Dockerfile itself I add RUN echo "source /scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the PATH. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or from another script), then you can use:
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to use multiple CMD
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this:
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
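Where mylocal.sh might look like this (a trivial sketch just to show the arguments arriving; bash -s reads the script from stdin and binds the following words to $1, $2, $3):
#!/bin/bash
echo "first:  $1"
echo "second: $2"
echo "third:  $3"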
