I have a Dockerfile in which I find myself constantly needing to call source /opt/ros/noetic/setup.bash.
e.g.:
RUN source /opt/ros/noetic/setup.bash \
&& SOME_COMMAND
RUN source /opt/ros/noetic/setup.bash \
&& SOME_OTHER_COMMAND
Is there a method to have this initialised in every RUN call in a Dockerfile?
I have tried adding it to ~/.bash_profile and using Docker's ENV instruction, with no luck.
TL;DR: what you want is feasible by copying your .sh script into /etc/profile.d/ and using the SHELL Dockerfile instruction to tweak the default shell.
Details:
To start with, consider the following sample script setup.sh:
#!/bin/sh
echo "# setup.sh"
ENV_VAR="some value"
some_fun() {
  echo "## some_fun"
}
Then, it can be noted that bash provides the --login CLI option:
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
When an interactive login shell exits, or a non-interactive login shell executes the exit builtin command, Bash reads and executes commands from the file ~/.bash_logout, if it exists.
− https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Bash-Startup-Files
Furthermore, instead of appending the setup.sh code in /etc/profile, you can take advantage of the /etc/profile.d folder that is read in the following way by most distributions:
$ docker run --rm -i debian:10 cat /etc/profile | tail -n 9
if [ -d /etc/profile.d ]; then
  for i in /etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
  unset i
fi
Note in particular that the .sh extension is mandatory, hence the naming of the minimal-working-example above: setup.sh (not setup.bash).
Finally, it is possible to rely on the SHELL instruction to replace the default shell used by RUN (["/bin/sh", "-c"]) with one that incorporates bash's --login option.
Concretely, you could phrase your Dockerfile like this:
FROM debian:10
# WORKDIR /opt/ros/noetic
# COPY setup.sh .
# RUN . /opt/ros/noetic/setup.sh && echo "ENV_VAR=$ENV_VAR"
# empty var here
RUN echo "ENV_VAR=$ENV_VAR"
# enable the extra shell init code
COPY setup.sh /etc/profile.d/
SHELL ["/bin/bash", "--login", "-c"]
# nonempty var and function
RUN echo "ENV_VAR=$ENV_VAR" && some_fun
# DISABLE the extra shell init code!
RUN rm /etc/profile.d/setup.sh
# empty var here
RUN echo "ENV_VAR=$ENV_VAR"
Outcome:
$ docker build -t test .
Sending build context to Docker daemon 6.144kB
Step 1/7 : FROM debian:10
---> ef05c61d5112
Step 2/7 : RUN echo "ENV_VAR=$ENV_VAR"
---> Running in 87b5c589ec60
ENV_VAR=
Removing intermediate container 87b5c589ec60
---> 6fdb70be76f9
Step 3/7 : COPY setup.sh /etc/profile.d/
---> e6aab4ebf9ef
Step 4/7 : SHELL ["/bin/bash", "--login", "-c"]
---> Running in d73b0d13df23
Removing intermediate container d73b0d13df23
---> ccbe789dc36d
Step 5/7 : RUN echo "ENV_VAR=$ENV_VAR" && some_fun
---> Running in 42fd1ae14c17
# setup.sh
ENV_VAR=some value
## some_fun
Removing intermediate container 42fd1ae14c17
---> de74831896a4
Step 6/7 : RUN rm /etc/profile.d/setup.sh
---> Running in bdd969a63def
# setup.sh
Removing intermediate container bdd969a63def
---> 5453be3271e5
Step 7/7 : RUN echo "ENV_VAR=$ENV_VAR"
---> Running in 0712cea427f1
ENV_VAR=
Removing intermediate container 0712cea427f1
---> 216a421f5659
Successfully built 216a421f5659
Successfully tagged test:latest
I have a Makefile that looks like this:
build-docker:
	DOCKER_BUILDKIT=1 docker build --ssh default=~/.ssh/id_rsa -t my-app .
If I run make build-docker I get the following error:
$ make build-docker
DOCKER_BUILDKIT=1 docker build --ssh default=~/.ssh/id_rsa -t my-app .
could not parse ssh: [default=~/.ssh/id_rsa]: stat ~/.ssh/id_rsa: no such file or directory
make: *** [Makefile:12: build-docker] Error 1
However, if I run the command directly in the shell it runs just fine:
$ DOCKER_BUILDKIT=1 docker build --ssh default=~/.ssh/id_rsa -t my-app .
[+] Building 65.5s (20/20) FINISHED
Why is this and how do I solve it?
You are running the same command, but in different shells. Your interactive shell is probably bash, but the shell make uses to run recipes is /bin/sh, which is (often) a minimal POSIX shell.
The special handling of ~ in an argument is a shell feature: it's not embedded in programs like docker or ssh. And it's not defined in POSIX; it's an additional feature that some shells, like bash, provide.
On my system:
bash$ echo foo=~
foo=/home/me
bash$ /bin/sh
$ echo foo=~
foo=~
To be portable you should use the full pathname or $HOME instead (remember that in a make recipe you have to double the $ to escape it from make: $$HOME).
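For example, a minimal sketch of the corrected recipe (assuming the key really lives at $HOME/.ssh/id_rsa; note the recipe line must be indented with a tab):
build-docker:
	DOCKER_BUILDKIT=1 docker build --ssh default=$$HOME/.ssh/id_rsa -t my-app .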
I have the following Dockerfile:
# => Build container
FROM node:14-alpine AS builder
WORKDIR /app
# split the dependencies from our source code so Docker builds can cache this step
# (unless we actually change dependencies)
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# => Run container
FROM nginx:alpine
# Nginx config
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
# Default port exposure
EXPOSE 8080
# Copy .env file and shell script to container
COPY --from=builder /app/dist .
COPY ./env.sh .
COPY .env .
USER root
# Add bash
RUN apk update && apk add --no-cache bash
# Make our shell script executable
RUN chmod +x env.sh
# Start Nginx server
CMD ["/bin/bash", "-c", "/usr/share/nginx/html/env.sh && nginx -g \"daemon off;\""]
But when I try to run it I get the following errors when trying to add bash:
------
> [stage-1 8/9] RUN apk update && apk add --no-cache bash:
#19 0.294 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
#19 0.358 140186175183688:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#19 0.360 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
#19 0.360 ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/main: Permission denied
#19 0.360 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: No such file or directory
#19 0.405 140186175183688:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#19 0.407 ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/community: Permission denied
#19 0.407 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: No such file or directory
#19 0.408 2 errors; 42 distinct packages available
------
executor failed running [/bin/sh -c apk update && apk add --no-cache bash]: exit code: 2
Which I need to add to execute this script:
#!/bin/bash
# Recreate config file
rm -rf ./env-config.js
touch ./env-config.js
# Add assignment
echo "window._env_ = {" >> ./env-config.js
# Read each line in .env file
# Each line represents key=value pairs
while read -r line || [[ -n "$line" ]];
do
  # Split env variables by character `=`
  if printf '%s\n' "$line" | grep -q -e '='; then
    varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
    varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
  fi
  # Read value of current variable if exists as Environment variable
  value=$(printf '%s\n' "${!varname}")
  # Otherwise use value from .env file
  [[ -z $value ]] && value=${varvalue}
  # Append configuration property to JS file
  echo " $varname: \"$value\"," >> ./env-config.js
done < .env
echo "}" >> ./env-config.js
From what I have read, adding USER root should fix this, but it does not in this case.
Any ideas on how to fix it?
You should be able to use /bin/sh as a standard Bourne shell; also, you should be able to avoid the sh -c wrapper in the CMD line.
First, rewrite the script using POSIX shell syntax. Scanning over the script, it seems like it is almost okay as-is: change the first line to #!/bin/sh, replace the bash-only [[ ... ]] tests with POSIX [ ... ], and correct the non-standard ${!varname} expansion (also see Dynamic variable names in Bash):
# Read value of current variable if exists as Environment variable
value=$(sh -c "echo \$$varname")
# Otherwise use value from .env file
[ -z "$value" ] && value=${varvalue}
You can try testing it using an alpine or busybox image with a more restricted shell, or setting the POSIXLY_CORRECT environment variable with bash.
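For instance, a rough smoke test under BusyBox's more restricted shell might look like this (an assumption: env.sh and its .env file sit in the current directory):
docker run --rm -v "$PWD:/work" -w /work busybox sh ./env.sh && cat env-config.js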
Secondly, there's a reasonably standard pattern of using ENTRYPOINT and CMD together. The CMD gets passed as arguments to the ENTRYPOINT, so if the ENTRYPOINT ends with exec "$@", it will replace itself with that command.
#!/bin/sh
# ^^ not bash
# Recreate config file
...
echo "window._env_ = {" >> ./env-config.js
...
echo "}" >> ./env-config.js
# Run the main container CMD
exec "$#"
Now, in your Dockerfile, you don't need to install GNU bash, because you're not using it, but you do need to correctly split out the ENTRYPOINT and CMD.
ENTRYPOINT ["/usr/share/nginx/html/env.sh"] # must be JSON-array syntax
CMD ["nginx", "-g", "daemon off;"] # can be either syntax
As an aside, the cmd1 && cmd2 syntax is basic shell syntax and not a bash extension, so you could write CMD ["/bin/sh", "-c", "cmd1 && cmd2"]; but, if you write a command without the JSON array syntax, Docker will insert the sh -c wrapper for you. You should almost never need to write out sh -c in a Dockerfile.
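To illustrate, here is a hedged sketch with placeholder commands cmd1 and cmd2 (the second form is commented out, since only the last CMD in a Dockerfile takes effect):
# Shell form; Docker inserts the /bin/sh -c wrapper for you:
CMD cmd1 && cmd2
# Equivalent explicit exec form (normally unnecessary to spell out):
# CMD ["/bin/sh", "-c", "cmd1 && cmd2"]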
I have a Dockerfile as below:
#1st stage - wildfly production image
FROM wildfly-setup:17.0.0 AS wildfly-prod
USER jenkins
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
#Second stage - test run image
FROM wildfly-prod AS wildfly-sedi-test
USER jenkins
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN /opt/wildfly/bin/init.sh
#CMD ["/opt/wildfly/bin/init.sh"]
And the bash script which I am running from the above Dockerfile is as below:
#!/bin/bash
if [ -e "$JBOSS_HOME/install/wildfly.sh" ] ; then
$JBOSS_HOME/install/wildfly.sh
rm $JBOSS_HOME/install/wildfly.sh
fi
# check for postgres running or not
cnt=0
psql_terminate=2
while (( $cnt < 120 && $psql_terminate != 0 )); do
postgres_isready -h $POSTGRES > /dev/null 2>&1
if [ $? -eq 0 ] ; then
let psql_terminate=0
echo $psql_terminate
fi
let cnt=cnt+1
sleep 1
done
if (( $psql_terminate == 0)) ; then
exec $JBOSS_HOME/bin/standalone.sh -c standalone-full.xml
else
echo "database unavailable."
exit 1
fi
In the Dockerfile, when I enable the CMD instruction it works, but with the RUN instruction it throws the error below while building the image:
The command '/bin/sh -c /opt/wildfly/bin/init.sh' returned a non-zero code: 1
Can somebody please help me on this?
Thanks in advance.
RUN will execute your bash script when building the image.
The RUN instruction will execute any commands in a new layer on
top of the current image and commit the results. The resulting
committed image will be used for the next step in the Dockerfile.
CMD will execute your bash script when starting the container.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
So I'm assuming that your script reaches exit 1 because it runs at build time, when no PostgreSQL server is reachable: the while loop times out after 120 attempts and falls through to the else branch. It is supposed to run when the container starts, not while building your image.
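Under that assumption, a minimal sketch of the fix is to drop the RUN line and keep the script as the container's startup command, as the commented-out line in your Dockerfile already suggests:
# Run the init script at container startup instead of at build time
CMD ["/opt/wildfly/bin/init.sh"]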
Suppose that I create a Dockerfile that just runs an echo command:
FROM alpine
ENTRYPOINT [ "echo" ]
and that I build it like this:
docker build -t my_echo .
If I run docker run --rm my_echo test it will output test as expected.
But how can I run the command to display an environment variable that is inside the container?
Example:
docker run --rm --env MYVAR=foo my_echo ???
How to access the $MYVAR variable that is in the container to display foo by replacing the ??? part of that command?
Note:
This is a simplified version of my real use case. My real use case is a WP-CLI Docker image that I built with a Dockerfile. It has the wp-cli command as the ENTRYPOINT.
I am trying to run a container based on this image to update a WordPress parameter with an environment variable. My command without Docker is wp-cli option update siteurl "http://example.com" where http://example.com would be in an environment variable.
This is the command I am trying to run (wp_cli is the name of my container):
docker run --rm --env WEBSITE_URL="http://example.com" wp_cli option update siteurl ???
It's possible to have the argument that immediately follows ["bash", "-c"] itself be a shell script that looks for sigils to replace. For example, consider the following script, which I'm going to call argEnvSubst:
#!/usr/bin/env bash
args=( "$#" ) # collect all arguments into a single array
for idx in "${!args[#]}"; do # iterate over the indices of that array...
arg=${args[$idx]} # ...and collect the associated values.
if [[ $arg =~ ^#ENV[.](.*)#$ ]]; then # if we have a value that matches a pattern...
varname=${BASH_REMATCH[1]} # extract the variable name from that pattern
args[$idx]=${!varname} # and replace the value with a lookup result
fi
done
exec "${args[#]}" # run our resulting array as a command.
Thus, argEnvSubst "echo" "#ENV.foobar#" will replace #ENV.foobar# with the value of the environment variable named foobar before it invokes echo.
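Wiring it in as a separate script might look like the following minimal sketch (the /usr/local/bin path is an assumption, and wp-cli is prepended per the use case described in the question):
COPY argEnvSubst /usr/local/bin/argEnvSubst
RUN chmod +x /usr/local/bin/argEnvSubst
# Prepend the real command (wp-cli here) so that
# "docker run IMAGE option update ..." becomes "wp-cli option update ..."
ENTRYPOINT ["/usr/local/bin/argEnvSubst", "wp-cli"]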
While I would strongly suggest injecting this into your image as a separate script, as sketched above, and naming that script as your ENTRYPOINT, it's possible to do it in-line:
ENTRYPOINT [ "bash", "-c", "args=(\"$#\"); for idx in \"${!args[#]}\"; do arg=${args[$idx]}; if [[ $arg =~ ^#ENV[.](.*)#$ ]]; then varname=${BASH_REMATCH[1]}; args[$idx]=${!varname}; fi; done; \"${args[#]}\"", "_" ]
...such that you can then invoke:
docker run --rm --env WEBSITE_URL="http://example.com" \
wp_cli option update siteurl '#ENV.WEBSITE_URL#'
Note the use of bash: this means alpine (which ships only BusyBox ash) isn't sufficient.
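If your base image is Alpine, one option (assuming the build has network access) is to install bash first:
RUN apk add --no-cache bash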
My Docker wrapper script works as intended when the current working directory does not contain spaces, however there is a bug when it does.
I have simplified an example to make use of the smallest official Docker image I could find and a well known GNU core utility. Of course this example is not very useful. In my real world use case, a much more complicated environment is packaged.
Docker Wrapper Script:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected
# See https://stackoverflow.com/questions/911168/how-to-detect-if-my-shell-script-is-running-through-a-pipe
if [[ -t 0 ]]; then
DOCKER_RUN_OPTIONS+="-i "
fi
if [[ -t 1 ]]; then
DOCKER_RUN_OPTIONS+="-t "
fi
WORK_DIR="$(realpath .)"
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
exec docker run ${DOCKER_RUN_OPTIONS} busybox:latest ls "$@"
You can save this somewhere as /tmp/docker_ls for example. Remember to chmod +x /tmp/docker_ls
Now you are able to use this Dockerized ls in any path which contains no spaces as follows:
/tmp/docker_ls -lah
/tmp/docker_ls -lah | grep 'r'
Note that /tmp/docker_ls -lah /path/to/something is not implemented. The wrapper script would have to be adapted to parse parameters and mount the path argument into the container.
Can you see why this would not work when the current working directory path contains spaces? What can be done to rectify it?
Solution:
@david-maze's answer solved the problem. Please see: https://stackoverflow.com/a/55763212/1782641
Using his advice I refactored my script as follows:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected. See - https://stackoverflow.com/questions/911168
if [[ -t 0 ]]; then IT+=(-i); fi
if [[ -t 1 ]]; then IT+=(-t); fi
USER="$(id -u $(logname)):$(id -g $(logname))"
WORKDIR="$(realpath .)"
MOUNT="type=bind,source=${WORKDIR},target=${WORKDIR}"
exec docker run --rm "${IT[@]}" --user "${USER}" --workdir "${WORKDIR}" --mount "${MOUNT}" busybox:latest ls "$@"
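With the arrays and quoting in place, a quick smoke test from a path containing spaces (the directory name is hypothetical):
mkdir -p "/tmp/dir with spaces" && cd "/tmp/dir with spaces"
/tmp/docker_ls -lah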
If your goal is to run a process on the current host directory as the current host user, you will find it vastly easier and safer to use a host process, and not an isolation layer like Docker that intentionally tries to hide these things from you. For what you’re showing I would just skip Docker and run
#!/bin/sh
ls "$#"
Most software is fairly straightforward to install without Docker, either using a package manager like APT or filesystem-level isolation like Python’s virtual environments and Node’s node_modules directory. If you’re writing this script then Docker is just getting in your way.
In a portable shell script there's no way to build "a list of words" in a way that keeps each word's boundaries intact. If you know you'll always want to pass some troublesome options then this is still fairly straightforward: include them directly in the docker run command and don't try to create a variable of options.
#!/bin/sh
RM_IT="--rm"
if [ -t 0 ]; then RM_IT="$RM_IT -i"; fi
if [ -t 1 ]; then RM_IT="$RM_IT -t"; fi
UID=$(id -u $(logname))
GID=$(id -g $(logname))
# We want the --rm -it options to be expanded into separate
# words; we want the volume options to stay as a single word
docker run $RM_IT "-u$UID:$GID" "-w$PWD" "-v$PWD:$PWD" \
busybox \
ls "$#"
Some shells like ksh, bash, and zsh have array types, but these shells may not be present on every system or environment (your busybox image doesn’t have any of these for example). You also might consider picking a higher-level scripting language that can more explicitly pass words into an exec type call.
I'm taking a stab at this to give you something to try:
Change this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
To this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source='${WORK_DIR}',target='${WORK_DIR}'"
Essentially, we are putting the ' characters in there to try to escape the spaces when the $DOCKER_RUN_OPTIONS variable is evaluated by bash in the exec docker command.
I haven't tried this - it's just a hunch / first shot.