Dockerfile throws an error while running a bash script from it

I have a Dockerfile as below:
#1st stage - wildfly production image
FROM wildfly-setup:17.0.0 AS wildfly-prod
USER jenkins
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
#Second stage - test run image
FROM wildfly-prod AS wildfly-sedi-test
USER jenkins
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN /opt/wildfly/bin/init.sh
#CMD ["/opt/wildfly/bin/init.sh"]
And the bash script that I am running from the above Dockerfile is below:
#!/bin/bash
if [ -e "$JBOSS_HOME/install/wildfly.sh" ] ; then
    $JBOSS_HOME/install/wildfly.sh
    rm $JBOSS_HOME/install/wildfly.sh
fi

# check for postgres running or not
cnt=0
psql_terminate=2
while (( $cnt < 120 && $psql_terminate != 0 )); do
    postgres_isready -h $POSTGRES > /dev/null 2>&1
    if [ $? -eq 0 ] ; then
        let psql_terminate=0
        echo $psql_terminate
    fi
    let cnt=cnt+1
    sleep 1
done

if (( $psql_terminate == 0 )) ; then
    exec $JBOSS_HOME/bin/standalone.sh -c standalone-full.xml
else
    echo "database unavailable."
    exit 1
fi
In the Dockerfile, when I enable the CMD instruction it works, but with the RUN instruction it throws the error below while building the image:
The command '/bin/sh -c /opt/wildfly/bin/init.sh' returned a non-zero code: 1
Can somebody please help me on this?
Thanks in advance.

RUN will execute your bash script when building the image.
The RUN instruction will execute any commands in a new layer on top of the current image and commit the results. The resulting committed image will be used for the next step in the Dockerfile.
CMD will execute your bash script when starting the container.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
So I'm assuming that your script reaches exit 1 because it is supposed to run when the container starts rather than while the image is building: at build time there is no Postgres instance reachable, so the wait loop times out and the script hits exit 1.
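Keeping the build-time setup in RUN and launching the script via CMD (your commented-out line) should therefore work. A minimal sketch of the second stage, assuming init.sh only needs to run at container start:

#Second stage - test run image
FROM wildfly-prod AS wildfly-sedi-test
USER jenkins
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
# Run the script when the container starts, not while the image builds:
CMD ["/opt/wildfly/bin/init.sh"]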

Related

Shell script not running as bash using Dockerfile

I need to split a comma-separated string into an array and run k6 for each of the array values in parallel. Since plain sh doesn't support arrays, I'm using bash in the script. I am not able to run it as bash using a Dockerfile in TeamCity.
Dockerfile:
FROM loadimpact/k6:0.34.1
COPY ./src/lib /lib
COPY ./src/scenarios /scenarios
COPY ./src/k6-run-all.sh /k6-run-all.sh
WORKDIR /
ENTRYPOINT []
RUN bash -c "./k6-run-all.sh"
Shell script:
#!/bin/bash
K6_RUN_OPTIONS=${K6_RUN_OPTIONS}
ENV_NAME=${ENV_NAME:-qa}
IS_TEST_RUN=${IS_TEST_RUN:-true}
SCENARIO_NAME=${SCENARIO_NAME:-"full-card-visa"}
GWC_PC_ID=${GWC_PC_ID}

IFS=',' read -r -a PCIds <<< "$GWC_PC_ID"
echo "Number of PC ids provided in environment variables=" ${#PCIds[@]}
if [[ ${#PCIds[@]} > 0 ]]; then
    for pcId in "$@"
    do
        ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js'' &
    done
fi

existCode=$?
if [ $existCode -ne 0 ]; then
    echo "Scenario $SCENARIO_NAME completed with the error"
    exit $existCode
fi
Error:
#9 [5/6] RUN bash -c "./k6-run-all.sh"
17:02:02 #9 0.356 /bin/sh: bash: not found
17:02:02 #9 ERROR: executor failed running [/bin/sh -c bash -c "./k6-run-all.sh"]: exit code: 127
17:02:02 ------
17:02:02 > [5/6] RUN bash -c "./k6-run-all.sh":
17:02:02 #9 0.356 /bin/sh: bash: not found
17:02:02 ------
17:02:02 failed to solve: executor failed running [/bin/sh -c bash -c "./k6-run-all.sh"]: exit code: 127
How to modify Dockerfile or shell script to run this shell script as bash?
Previously to run it as bash script the last line of Dockerfile used to be:
CMD ["sh", "-c", "./k6-run-all.sh"]
******* Edit: **********
Updated full script after knittl's answer (the current issue: after adding & for parallel runs, nothing inside the for loop runs, and there is no extra error or information in the logs; it is as if the loop were skipped):
K6_RUN_OPTIONS=${K6_RUN_OPTIONS}
ENV_NAME=${ENV_NAME:-qa}
IS_TEST_RUN=${IS_TEST_RUN:-true}
SCENARIO_NAME=${SCENARIO_NAME:-"full-card-visa"}
GWC_PC_ID=${GWC_PC_ID}

OPTIONS_VARIANT=""
if $IS_TEST_RUN; then
    OPTIONS_VARIANT="-test"
fi

SCENARIO_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
SCENARIO_PATH="${SCENARIO_DIR}/scenarios"
SCENARIO="${SCENARIO_PATH}/${SCENARIO_NAME}"
echo "Executing scenario path $SCENARIO"

SCENARIO_NAME=${SCENARIO:${#SCENARIO_PATH}+1:${#SCENARIO}}
echo "Scenario Name: $SCENARIO_NAME"
echo "Run option: $SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT"
echo "pc ids provided in environment variable=" $GWC_PC_ID

if [ -z "$GWC_PC_ID" ]
then
    ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js''
else
    for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' ');
    do
        ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js'' &
    done
fi

existCode=$?
if [ $existCode -ne 0 ]; then
    echo "Scenario $SCENARIO_NAME completed with the error"
    exit $existCode
fi
k6 Docker containers do not come with bash preinstalled, but with busybox. I see two options:
Create your own Docker image based off grafana/k6 and manually install bash in your image.
Rewrite your script to not rely on bashisms. Should be fairly easy: split your list of tests to run into one path per line and while read -r path; do …; done them.
Or if support for whitespace in filenames is not required, then for path in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do …; done
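A minimal sketch of the while read variant in plain sh (so it runs under busybox), assuming the comma-separated ids live in GWC_PC_ID; the echo is a placeholder for the real k6 invocation:

#!/bin/sh
# Split on commas, one item per line; printf adds the trailing newline
# that read needs to see the last item.
printf '%s\n' "$GWC_PC_ID" | tr ',' '\n' | while read -r pcId; do
    echo "would run k6 for PC id: $pcId"
done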
Note that your current script will return the exit code of your last k6 process, meaning that if any other test failed but the last one was successful, the error will be masked.
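To avoid masking failures, a sketch (plain sh, names borrowed from your script, k6 arguments abbreviated) that records the PID of each background run and waits on each one individually:

fail=0
pids=""
for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do
    GWC_PC_ID=$pcId k6 run "$SCENARIO/index.js" &
    pids="$pids $!"
done
# wait on each job separately so every failure is seen
for pid in $pids; do
    wait "$pid" || fail=1
done
exit $fail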
PS. Time to upgrade your base Docker image too. loadimpact/k6:0.34.1 is really old (exactly 1 year). It's better to switch to grafana/k6:0.40.0, which was released a week ago.

source in every Dockerfile RUN call

I have a Dockerfile, where I am finding myself constantly needing to call source /opt/ros/noetic/setup.bash.
e.g.:
RUN source /opt/ros/noetic/setup.bash \
&& SOME_COMMAND
RUN source /opt/ros/noetic/setup.bash \
&& SOME_OTHER_COMMAND
Is there a method to have this initialised in every RUN call in a Dockerfile?
I have tried adding it to ~/.bash_profile and using Docker's ENV instruction, with no luck.
TL;DR: what you want is feasible by copying your .sh script in /etc/profile.d/ and using the SHELL Dockerfile command to tweak the default shell.
Details:
To start with, consider the following sample script setup.sh:
#!/bin/sh
echo "# setup.sh"
ENV_VAR="some value"
some_fun() {
    echo "## some_fun"
}
Then, it can be noted that bash provides the --login CLI option:
When Bash is invoked as an interactive login shell, or as a non-interactive shell with the --login option, it first reads and executes commands from the file /etc/profile, if that file exists. After reading that file, it looks for ~/.bash_profile, ~/.bash_login, and ~/.profile, in that order, and reads and executes commands from the first one that exists and is readable. The --noprofile option may be used when the shell is started to inhibit this behavior.
When an interactive login shell exits, or a non-interactive login shell executes the exit builtin command, Bash reads and executes commands from the file ~/.bash_logout, if it exists.
− https://www.gnu.org/savannah-checkouts/gnu/bash/manual/bash.html#Bash-Startup-Files
Furthermore, instead of appending the setup.sh code in /etc/profile, you can take advantage of the /etc/profile.d folder that is read in the following way by most distributions:
$ docker run --rm -i debian:10 cat /etc/profile | tail -n 9
if [ -d /etc/profile.d ]; then
  for i in /etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
  unset i
fi
Note in particular that the .sh extension is mandatory, hence the naming of the minimal-working-example above: setup.sh (not setup.bash).
Finally, it is possible to rely on the SHELL command to replace the default shell used by RUN (in place of ["/bin/sh", "-c"]) to incorporate the --login option of bash.
Concretely, you could phrase your Dockerfile like this:
FROM debian:10
# WORKDIR /opt/ros/noetic
# COPY setup.sh .
# RUN . /opt/ros/noetic/setup.sh && echo "ENV_VAR=$ENV_VAR"
# empty var here
RUN echo "ENV_VAR=$ENV_VAR"
# enable the extra shell init code
COPY setup.sh /etc/profile.d/
SHELL ["/bin/bash", "--login", "-c"]
# nonempty var and function
RUN echo "ENV_VAR=$ENV_VAR" && some_fun
# DISABLE the extra shell init code!
RUN rm /etc/profile.d/setup.sh
# empty var here
RUN echo "ENV_VAR=$ENV_VAR"
Outcome:
$ docker build -t test .
Sending build context to Docker daemon 6.144kB
Step 1/7 : FROM debian:10
---> ef05c61d5112
Step 2/7 : RUN echo "ENV_VAR=$ENV_VAR"
---> Running in 87b5c589ec60
ENV_VAR=
Removing intermediate container 87b5c589ec60
---> 6fdb70be76f9
Step 3/7 : COPY setup.sh /etc/profile.d/
---> e6aab4ebf9ef
Step 4/7 : SHELL ["/bin/bash", "--login", "-c"]
---> Running in d73b0d13df23
Removing intermediate container d73b0d13df23
---> ccbe789dc36d
Step 5/7 : RUN echo "ENV_VAR=$ENV_VAR" && some_fun
---> Running in 42fd1ae14c17
# setup.sh
ENV_VAR=some value
## some_fun
Removing intermediate container 42fd1ae14c17
---> de74831896a4
Step 6/7 : RUN rm /etc/profile.d/setup.sh
---> Running in bdd969a63def
# setup.sh
Removing intermediate container bdd969a63def
---> 5453be3271e5
Step 7/7 : RUN echo "ENV_VAR=$ENV_VAR"
---> Running in 0712cea427f1
ENV_VAR=
Removing intermediate container 0712cea427f1
---> 216a421f5659
Successfully built 216a421f5659
Successfully tagged test:latest

Is it possible to access console on Tshock server which is running in docker container?

So my main goal is to access the TShock console so I can run some commands on the server directly.
From what I found, as soon as the command to run the server is executed there is no way to get back to the console, so:
I'd like to run the server in screen mode.
The Dockerfile basically runs a bash script, but I'm getting errors when I try to add "screen" to it:
bootstrap.sh: 33: bootstrap.sh: Syntax error: "fi" unexpected (expecting "then")
Entering script
I've tried everything I could find on Google, but nothing works :( This is my first time doing any scripting in bash, so I would be grateful for your understanding :)
Here is the link to original repo: https://github.com/ryansheehan/terraria/blob/master/tshock/bootstrap.sh
I would be glad for any hints on how to make this script work, or if there is any other simpler option to access the console of the server :)
I've added an extra line in the Dockerfile to install screen, so it looks like this now:
FROM alpine:3.11.6 AS base
RUN apk add --update-cache \
    unzip
# add the bootstrap file
COPY bootstrap.sh /tshock/bootstrap.sh
ENV TSHOCKVERSION=v4.4.0-pre12
ENV TSHOCKZIP=TShock4.4.0_Pre12_Terraria1.4.0.5.zip
# Download and unpack TShock
ADD https://github.com/Pryaxis/TShock/releases/download/$TSHOCKVERSION/$TSHOCKZIP /
RUN unzip $TSHOCKZIP -d /tshock && \
    rm $TSHOCKZIP && \
    chmod +x /tshock/TerrariaServer.exe && \
    # add executable perm to bootstrap
    chmod +x /tshock/bootstrap.sh
FROM mono:6.8.0.96-slim
LABEL maintainer="Ryan Sheehan <rsheehan@gmail.com>"
# documenting ports
EXPOSE 7777 7878
# env used in the bootstrap
ENV CONFIGPATH=/root/.local/share/Terraria/Worlds
ENV LOGPATH=/tshock/logs
ENV WORLD_FILENAME=""
# Allow for external data
VOLUME ["/root/.local/share/Terraria/Worlds", "/tshock/logs", "/plugins"]
# install nuget to grab tshock dependencies
RUN apt-get update -y && \
    apt-get install -y nuget && \
    apt-get install -y screen
# rm -rf /var/lib/apt/lists/* /tmp/*
# copy game files
COPY --from=base /tshock/ /tshock/
# Set working directory to server
WORKDIR /tshock
# run the bootstrap, which will copy the TShockAPI.dll before starting the server
ENTRYPOINT [ "/bin/sh", "bootstrap.sh" ]
And here is my modified code for bootstrap.sh:
#!/bin/sh
echo "Entering script"
if [ -z "$STY" ]; then
    echo "Opening screen mode ..."
    exec screen -dm -S terraria bin/bash "$0"
else
    echo "Continuing with script in screen mode"
    echo "\nBootstrap:\nworld_file_name=$WORLD_FILENAME\nconfigpath=$CONFIGPATH\nlogpath=$LOGPATH\n"

    echo "Copying plugins..."
    cp -Rfv /plugins/* ./ServerPlugins

    WORLD_PATH="/root/.local/share/Terraria/Worlds/$WORLD_FILENAME"
    if [ -z "$WORLD_FILENAME" ]; then
        echo "No world file specified in environment WORLD_FILENAME."
        if [ -z "$@" ]; then
            echo "Running server setup..."
        else
            echo "Running server with command flags: $@"
        fi
        mono --server --gc=sgen -O=all TerrariaServer.exe -configpath "$CONFIGPATH" -logpath "$LOGPATH" "$@"
    else
        echo "Environment WORLD_FILENAME specified"
        if [ -f "$WORLD_PATH" ]; then
            echo "Loading to world $WORLD_FILENAME..."
            mono --server --gc=sgen -O=all TerrariaServer.exe -configpath "$CONFIGPATH" -logpath "$LOGPATH" -world "$WORLD_PATH" "$@"
        else
            echo "Unable to locate $WORLD_PATH.\nPlease make sure your world file is volumed into docker: -v <path_to_world_file>:/root/.local/share/Terraria/Worlds"
            exit 1
        fi
    fi
fi
After removing empty spaces as @KamilCuk suggested, the script runs, but screen doesn't seem to be working.
Here is the output from console:
PS D:\TerrariaServer\Source\terraria\tshock> docker run --rm -p 7777:7777 -v D:/TerrariaServer/World:/root/.local/share/Terraria/Worlds --name="terraria" terraria-image -world /root/.local/share/Terraria/Worlds/TestWorld.wld
Entering screen mode
+ [ -z ]
+ echo Entering screen mode
+ screen -d -m -S terraria bin/bash bootstrap.sh
+ echo Screen mode activated
+ echo Continuing with script in screen mode
+ echo \nBootstrap:\nworld_file_name=\nconfigpath=/root/.local/share/Terraria/Worlds\nlogpath=/tshock/logs\n
+ echo Copying plugins...
+ cp -Rfv /plugins/* ./ServerPlugins
Screen mode activated
Continuing with script in screen mode
Bootstrap:
world_file_name=
configpath=/root/.local/share/Terraria/Worlds
logpath=/tshock/logs
Copying plugins...
cp: cannot stat '/plugins/*': No such file or directory
+ WORLD_PATH=/root/.local/share/Terraria/Worlds/
+ [ -z ]
+ echo No world file specified in environment WORLD_FILENAME.
+ [ -z -world /root/.local/share/Terraria/Worlds/TestWorld.wld ]
+ echo Running server with command flags: -world /root/.local/share/Terraria/Worlds/TestWorld.wld
+ mono --server --gc=sgen -O=all TerrariaServer.exe -configpath /root/.local/share/Terraria/Worlds -logpath /tshock/logs -world /root/.local/share/Terraria/Worlds/TestWorld.wld
No world file specified in environment WORLD_FILENAME.
Running server with command flags: -world /root/.local/share/Terraria/Worlds/TestWorld.wld
Error Logging Enabled.
TerrariaAPI Version: 2.1.0.0 (Protocol v1.4.0.5 (230), OTAPI 1.4.0.5)
[TShock] Info Config path has been set to /root/.local/share/Terraria/Worlds
[TShock] Info Log path has been set to /tshock/logs
TShock was improperly shut down. Please use the exit command in the future to prevent this.
TShock 4.4.0.0 (Go to sleep Patrikkk, Icy, Chris, Death, Axeel, Zaicon, hakusaro, Zack, and Yoraiz0r <3) now running.
AutoSave Enabled
Backups Enabled
Welcome to TShock for Terraria!
TShock comes with no warranty & is free software.
You can modify & distribute it under the terms of the GNU GPLv3.
[Server API] Info Plugin TShock v4.4.0.0 (by The TShock Team) initiated.
Terraria Server v1.4.0.5
Resetting game objects 1%
Resetting game objects 2%
Resetting game objects 3%
...
I don't directly see major issues with your entrypoint script, but I want to provide some steps I would take to resolve these kinds of issues:
Avoid the line if [ -z "$@" ].
See here why.
Suggestion: if [ -z "$*" ]
This line is problematic too: echo "Running server with command flags: $@"
See here why.
Suggestion: echo "Running server with command flags: $*"
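If the intent is to test whether any arguments were passed at all, checking the argument count is the unambiguous way to do it; a small sketch in plain sh:

# With several arguments, [ -z "$@" ] expands to several words and can
# become a syntax error; $# is the argument count and is always one word.
if [ "$#" -eq 0 ]; then
    echo "Running server setup..."
else
    echo "Running server with command flags: $*"
fi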
Sometimes it can help to enter the container and "play around". This allows you to work in your docker container like you would in any normal Linux shell.
Have a look at the --interactive, -i option for docker run.
Example
If you want to start your container, you can e.g. run a bash (...or sh, or whatever shell you are using):
docker run -it <img_name> <arguments> bash
This also works with exec when your container is already running and you want to enter it.
docker exec -it <container_id> bash
Try the bash -x option when invoking your entrypoint script to get a verbose output. Sometimes it helps finding the error.
ENTRYPOINT [ "/bin/bash", "-x" "bootstrap.sh" ]

In GCP, how can I get a parent cloud build to fail when a child build fails?

I have a monorepo set up and a cloudbuild.yaml file in the root of my repository spins off child cloud build jobs in the first step:
# Trigger builds for all packages in the repository.
- name: "gcr.io/cloud-builders/gcloud"
entrypoint: "bash"
args: [
"./scripts/cloudbuild/build-all.sh",
# Child builds don't have the git context, so pass them the SHORT_SHA.
"--substitutions=_TAG=$SHORT_SHA",
]
timeout: 1200s # 20 minutes
The build-all script is something I copied from the community builders repo:
#!/usr/bin/env bash

DIR_NAME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

set -e # Sets mode to exit on an error, without executing the remaining commands.

for d in {packages,ops/helm,ops/pulumi}/*/; do
    config="${d}cloudbuild.yaml"
    if [[ ! -f "${config}" ]]; then
        continue
    fi
    echo "Building $d ... "
    (
        gcloud builds submit . --config=${config} $*
    ) &
done

wait
It waits until all child builds are done before continuing to the next step... handy!
Only problem is, if any of the child builds fail, it will still continue to the next step.
Is there a way to make this step fail if any of the child builds fail? I guess my script isn't returning the correct error code...?
The set -e flag should make the script exit if any of the commands returns an error; however, you can also check the result of a command through the $? variable. For example, you can include the following lines:
echo "Building $d ... "
(
gcloud builds submit . --config=${config} $*
if [ $? == 1 ]; then #Check the status of the last command
echo "There was an error while building $d, exiting"
exit 1
fi
) &
So if there was an error, the script will exit with a status of 1 (error).
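Note, though, that set -e does not fire for failures inside backgrounded subshells, and in bash a bare wait returns 0 regardless of how the children exited. A sketch that records each child's PID and waits on them one by one, so that any failed child build fails this step (loop structure borrowed from build-all.sh above):

#!/usr/bin/env bash
pids=()
for d in {packages,ops/helm,ops/pulumi}/*/; do
    config="${d}cloudbuild.yaml"
    [[ -f "${config}" ]] || continue
    echo "Building $d ... "
    gcloud builds submit . --config="${config}" "$@" &
    pids+=("$!")
done

# wait on each child individually so a failed build fails the parent
status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=1
done
exit "$status"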

Testing server setup bash scripts

I'm just learning to write bash scripts.
I'm writing a script to set up a new server.
How should I go about testing the script?
i.e.
I use apt install for certain packages like apache, php etc., and then a couple of lines down there is an error.
I then need to fix the error and run the script again, but then it will run all the install commands again.
The system will probably say the package is already installed, but what if there are commands which append strings to files?
If these are run again, they will append the same string to the file a second time.
What is the best approach to writing bash scripts like this?
Can you do test runs which roll back everything after an error or at the end of the script?
Or, even better, have the script continue from the line where the error occurred the next time it is run?
I'm doing this on an Ubuntu 18.04 server.
It's a matter of how readable you want it to be, but:
[ -f .step01-done ] || your install command && touch .step01-done
[ -f .step02-done ] || your other install command && touch .step02-done
maybe a little easier to read:
if ! [ -f .step01-done ]; then
    if your install command ; then
        touch .step01-done
    fi
fi

if ! [ -f .step02-done ]; then
    if your other install command ; then
        touch .step02-done
    fi
fi
...or something in between.
Now, I would suggest creating a directory somewhere, maybe logging output from the commands to some file there (maybe tee it), and definitely putting all the files you create with touch there. That way, if you start the script from another directory by accident, it won't matter. You just need to make sure that apt-get, or whatever you use, actually returns false if it fails. It should.
You could even make a function that does it in a nice way...
#!/bin/bash

function do_cmd() {
    if [ -f "$1.done" ]; then
        echo "$2: skipping already completed step"
        return 0
    fi
    echo -n "$2: "
    $3 1> "$1.out" 2> "$1.err"
    if [ $? -eq 0 ]; then
        echo "ok"
        touch "$1.done"
        return 0
    else
        echo "failed"
        echo -e "see \"$1.out\" and/or \"$1.err\" for details."
        return 1
        # could "exit 1" instead
    fi
}
[ -d /root/mysetup ] || mkdir /root/mysetup
if ! [ -d /root/mysetup ]; then
    echo "failed to find or create /root/mysetup directory"
    exit 1
fi
cd /root/mysetup
# ---------------- your steps go here -------------------
do_cmd prog1 "installing prog1" "apt-get install prog1" || exit 1
do_cmd prog2 "installing prog2" "apt-get install prog2" || exit 1
do_cmd startfoo "starting foo service" "service foo start" || exit 1
echo "all setup functions finished."
You would use:
do_cmd identifier "description" "command or function"
identifier: unique identifier used when files are generated:
identifier.out: standard output from command
identifier.err: standard error from command
identifier.done: created when command is successful
description: this is actually printed to the terminal when the step is being executed.
command or function: this is the actual command to run
not sure why stackoverflow forced me to format that last bit as code but w/e
