Shell script not running as bash using Dockerfile - bash

I need to split a comma-separated string into an array and run k6 for each of the array values in parallel. Since plain sh doesn't support arrays, I am using bash in the script. I am not able to run it as bash using a Dockerfile in TeamCity.
Dockerfile:
FROM loadimpact/k6:0.34.1
COPY ./src/lib /lib
COPY ./src/scenarios /scenarios
COPY ./src/k6-run-all.sh /k6-run-all.sh
WORKDIR /
ENTRYPOINT []
RUN bash -c "./k6-run-all.sh"
Shell script:
#!/bin/bash
K6_RUN_OPTIONS=${K6_RUN_OPTIONS}
ENV_NAME=${ENV_NAME:-qa}
IS_TEST_RUN=${IS_TEST_RUN:-true}
SCENARIO_NAME=${SCENARIO_NAME:-"full-card-visa"}
GWC_PC_ID=${GWC_PC_ID}
IFS=',' read -r -a PCIds <<< "$GWC_PC_ID"
echo "Number of PC ids provided in environment variables=" ${#PCIds[#]}
if [[ ${#PCIds[#]} > 0 ]]; then
for pcId in "$#"
do
ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js'' &
done
fi
existCode=$?
if [ $existCode -ne 0 ]; then
echo "Scenario $SCENARIO_NAME completed with the error"
exit $existCode
fi
Error:
#9 [5/6] RUN bash -c "./k6-run-all.sh"
17:02:02 #9 0.356 /bin/sh: bash: not found
17:02:02 #9 ERROR: executor failed running [/bin/sh -c bash -c "./k6-run-all.sh"]: exit code: 127
17:02:02 ------
17:02:02 > [5/6] RUN bash -c "./k6-run-all.sh":
17:02:02 #9 0.356 /bin/sh: bash: not found
17:02:02 ------
17:02:02 failed to solve: executor failed running [/bin/sh -c bash -c "./k6-run-all.sh"]: exit code: 127
How do I modify the Dockerfile or the shell script so that this shell script runs with bash?
Previously, to run it as a shell script, the last line of the Dockerfile was:
CMD ["sh", "-c", "./k6-run-all.sh"]
Edit:
Updated full script after knittl's answer (the current issue: after adding & for parallel runs, nothing inside the for loop actually runs, and there is no extra error or information in the logs - it is as if the loop were skipped):
K6_RUN_OPTIONS=${K6_RUN_OPTIONS}
ENV_NAME=${ENV_NAME:-qa}
IS_TEST_RUN=${IS_TEST_RUN:-true}
SCENARIO_NAME=${SCENARIO_NAME:-"full-card-visa"}
GWC_PC_ID=${GWC_PC_ID}
OPTIONS_VARIANT=""
if $IS_TEST_RUN; then
OPTIONS_VARIANT="-test"
fi
SCENARIO_DIR="$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
SCENARIO_PATH="${SCENARIO_DIR}/scenarios"
SCENARIO="${SCENARIO_PATH}/${SCENARIO_NAME}"
echo "Executing scenario path $SCENARIO"
SCENARIO_NAME=${SCENARIO:${#SCENARIO_PATH}+1:${#SCENARIO}}
echo "Scenario Name: $SCENARIO_NAME"
echo "Run option: $SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT"
echo "pc ids provided in environment variable=" $GWC_PC_ID
if [ -z "$GWC_PC_ID" ]
then
ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js''
else
for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' ');
do
ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS ''$SCENARIO/index.js'' &
done
fi
existCode=$?
if [ $existCode -ne 0 ]; then
echo "Scenario $SCENARIO_NAME completed with the error"
exit $existCode
fi

k6 Docker containers do not come with bash preinstalled, but with busybox. I see two options:
Create your own Docker image based off grafana/k6 and manually install bash in your image.
Rewrite your script to not rely on bashisms. Should be fairly easy: split your list of tests to run into one path per line and while read -r path; do …; done them.
Or if support for whitespace in filenames is not required, then for path in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do …; done
Note that your current script returns the exit code of your last k6 process, so if an earlier test failed but the last one succeeded, the failure is masked.
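For example, a rough sketch of option 2 in plain POSIX sh (variable names are taken from your script; the wait loop collects each background k6's exit status so a failure in any run is reported instead of masked):
fail=0
pids=""
for pcId in $(printf '%s' "$GWC_PC_ID" | tr , ' '); do
    # each k6 run goes to the background so the tests execute in parallel
    ENV_NAME=$ENV_NAME RUN_OPTIONS=$SCENARIO_NAME-$ENV_NAME$OPTIONS_VARIANT GWC_PC_ID=$pcId k6 run $K6_RUN_OPTIONS "$SCENARIO/index.js" &
    pids="$pids $!"
done
# wait for every background job and remember whether any of them failed
for pid in $pids; do
    wait "$pid" || fail=1
done
exit "$fail"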
PS. Time to upgrade your base Docker image too. loadimpact/k6:0.34.1 is really old (exactly 1 year). It's better to switch to grafana/k6:0.40.0, which was released a week ago.
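And for option 1, a rough Dockerfile sketch based on that newer image (only the script COPY is shown - keep your other COPY lines; the USER root line is an assumption, since the k6 images run as an unprivileged user and installing packages needs root):
FROM grafana/k6:0.40.0
# switch to root so apk can install packages (assumption about the image's default user)
USER root
RUN apk add --no-cache bash
COPY ./src/k6-run-all.sh /k6-run-all.sh
# run the script when the container starts, not at build time
ENTRYPOINT ["bash", "/k6-run-all.sh"]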

Related

Cannot install bash in nginx:alpine

I have the following Dockerfile:
# => Build container
FROM node:14-alpine AS builder
WORKDIR /app
# split the dependencies from our source code so docker builds can cache this step
# (unless we actually change dependencies)
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# => Run container
FROM nginx:alpine
# Nginx config
WORKDIR /usr/share/nginx/html
RUN rm -rf ./*
COPY ./nginx/nginx.conf /etc/nginx/conf.d/default.conf
# Default port exposure
EXPOSE 8080
# Copy .env file and shell script to container
COPY --from=builder /app/dist .
COPY ./env.sh .
COPY .env .
USER root
# Add bash
RUN apk update && apk add --no-cache bash
# Make our shell script executable
RUN chmod +x env.sh
# Start Nginx server
CMD ["/bin/bash", "-c", "/usr/share/nginx/html/env.sh && nginx -g \"daemon off;\""]
But when I try to run it I get the following errors when trying to add bash:
------
> [stage-1 8/9] RUN apk update && apk add --no-cache bash:
#19 0.294 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/main/x86_64/APKINDEX.tar.gz
#19 0.358 140186175183688:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#19 0.360 fetch https://dl-cdn.alpinelinux.org/alpine/v3.14/community/x86_64/APKINDEX.tar.gz
#19 0.360 ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/main: Permission denied
#19 0.360 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/main: No such file or directory
#19 0.405 140186175183688:error:1416F086:SSL routines:tls_process_server_certificate:certificate verify failed:ssl/statem/statem_clnt.c:1914:
#19 0.407 ERROR: https://dl-cdn.alpinelinux.org/alpine/v3.14/community: Permission denied
#19 0.407 WARNING: Ignoring https://dl-cdn.alpinelinux.org/alpine/v3.14/community: No such file or directory
#19 0.408 2 errors; 42 distinct packages available
------
executor failed running [/bin/sh -c apk update && apk add --no-cache bash]: exit code: 2
I need bash in order to execute this script:
#!/bin/bash
# Recreate config file
rm -rf ./env-config.js
touch ./env-config.js
# Add assignment
echo "window._env_ = {" >> ./env-config.js
# Read each line in .env file
# Each line represents key=value pairs
while read -r line || [[ -n "$line" ]];
do
# Split env variables by character `=`
if printf '%s\n' "$line" | grep -q -e '='; then
varname=$(printf '%s\n' "$line" | sed -e 's/=.*//')
varvalue=$(printf '%s\n' "$line" | sed -e 's/^[^=]*=//')
fi
# Read value of current variable if exists as Environment variable
value=$(printf '%s\n' "${!varname}")
# Otherwise use value from .env file
[[ -z $value ]] && value=${varvalue}
# Append configuration property to JS file
echo " $varname: \"$value\"," >> ./env-config.js
done < .env
echo "}" >> ./env-config.js
From what I have read, adding USER root should fix this, but it does not in this case.
Any ideas on how to fix?
You should be able to use /bin/sh as a standard Bourne shell; also, you should be able to avoid the sh -c wrapper in the CMD line.
First, rewrite the script using POSIX shell syntax. Scanning over the script, it seems like it is almost okay as-is; change the first line to #!/bin/sh, and correct the non-standard ${!varname} expansion (also see Dynamic variable names in Bash)
# Read value of current variable if exists as Environment variable
value=$(sh -c "echo \$$varname")
# Otherwise use value from .env file
[[ -z $value ]] && value=${varvalue}
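An equivalent POSIX-compatible alternative for the dynamic lookup is eval (assuming $varname only ever contains a plain identifier, which the sed extraction above should ensure):
# indirect expansion without bash's ${!varname}
eval "value=\${$varname}"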
You can try testing it using an alpine or busybox image with a more restricted shell, or setting the POSIXLY_CORRECT environment variable with bash.
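For example, one quick way to smoke-test the rewritten script under busybox's ash before baking it into the image (the image tag and bind mount are only an example; .env must be in the current directory):
docker run --rm -v "$PWD:/work" -w /work busybox:1.36 sh ./env.sh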
Secondly, there's a reasonably standard pattern of using ENTRYPOINT and CMD together. The CMD gets passed as arguments to the ENTRYPOINT, so if the ENTRYPOINT ends with exec "$@", it will replace itself with that command.
#!/bin/sh
# ^^ not bash
# Recreate config file
...
echo "window._env_ = {" >> ./env-config.js
...
echo "}" >> ./env-config.js
# Run the main container CMD
exec "$#"
Now, in your Dockerfile, you don't need to install GNU bash, because you're not using it, but you do need to correctly split out the ENTRYPOINT and CMD.
ENTRYPOINT ["/usr/share/nginx/html/env.sh"] # must be JSON-array syntax
CMD ["nginx", "-g", "daemon off;"] # can be either syntax
As an aside, the cmd1 && cmd2 syntax is basic shell syntax and not a bash extension, so you could write CMD ["/bin/sh", "-c", "cmd1 && cmd2"]; but, if you write a command without the JSON array syntax, Docker will insert the sh -c wrapper for you. You should almost never need to write out sh -c in a Dockerfile.
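For instance, these two lines are equivalent; in the second (shell) form Docker inserts the /bin/sh -c wrapper for you (nginx is used here only as an example command):
CMD ["/bin/sh", "-c", "nginx -g 'daemon off;'"]
CMD nginx -g 'daemon off;'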

Dockerfile throws an error while running bash script from it

I have a Dockerfile as below:
#1st stage - wildfly production image
FROM wildfly-setup:17.0.0 AS wildfly-prod
USER jenkins
RUN mkdir /opt/wildfly/install && mkdir /opt/wildfly/install/config
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN mkdir -p $JBOSS_HOME/standalone/data/datastorage
#Second stage - test run image
FROM wildfly-prod AS wildfly-sedi-test
USER jenkins
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
RUN /opt/wildfly/bin/init.sh
#CMD ["/opt/wildfly/bin/init.sh"]
And the bash script which I am running from the above Dockerfile is as below:
#!/bin/bash
if [ -e "$JBOSS_HOME/install/wildfly.sh" ] ; then
$JBOSS_HOME/install/wildfly.sh
rm $JBOSS_HOME/install/wildfly.sh
fi
# check for postgres running or not
cnt=0
psql_terminate=2
while (( $cnt < 120 && $psql_terminate != 0 )); do
postgres_isready -h $POSTGRES > /dev/null 2>&1
if [ $? -eq 0 ] ; then
let psql_terminate=0
echo $psql_terminate
fi
let cnt=cnt+1
sleep 1
done
if (( $psql_terminate == 0)) ; then
exec $JBOSS_HOME/bin/standalone.sh -c standalone-full.xml
else
echo "database unavailable."
exit 1
fi
In the Dockerfile, when I enable the CMD command it works, but with the RUN command it throws the error below while building the image:
The command '/bin/sh -c /opt/wildfly/bin/init.sh' returned a non-zero code: 1
Can somebody please help me on this?
Thanks in advance.
RUN will execute your bash script when building the image.
The RUN instruction will execute any commands in a new layer on
top of the current image and commit the results. The resulting
committed image will be used for the next step in the Dockerfile.
CMD will execute your bash script when starting the container.
The main purpose of a CMD is to provide defaults for an executing container. These defaults can include an executable, or they can omit the executable, in which case you must specify an ENTRYPOINT instruction as well.
So I'm assuming that your script reaches exit 1 because it is supposed to run when the container starts instead of when building your image.
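A minimal sketch of the second stage with the script wired up as the container command instead (stage name and paths are taken from the question):
#Second stage - test run image
FROM wildfly-prod AS wildfly-sedi-test
USER jenkins
COPY --chown=jenkins:jenkins init.sh /opt/wildfly/bin
# run init.sh when the container starts, when $POSTGRES and the database are actually reachable
CMD ["/opt/wildfly/bin/init.sh"]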

Testing server setup bash scripts

I'm just learning to write bash scripts.
I'm writing a script to setup a new server.
How should I go about testing the script?
i.e.
I use apt install for certain packages like apache, php etc. and then a couple of lines down there is an error.
I then need to fix the error and run it again but it will run all the install commands again.
The system will probably say the package is already installed, but what about commands which append strings to files?
If those are run again, they will append the same string to the file a second time.
What is the best approach to write bash-scripts like this?
Can you do test runs which roll back everything after an error or at the end of the script?
Or, even better, can the script continue from the line where the error occurred the next time it is run?
I'm doing this on an Ubuntu 18.04 server.
It's a matter of how readable you want it to be, but
[ -f .step01-done ] || your install command && touch .step01-done
[ -f .step02-done ] || your other install command && touch .step02-done
maybe a little easier to read:
if ! [ -f .step01-done ]; then
if your install command ; then
touch .step01-done
fi
fi
if ! [ -f .step02-done ]; then
if your other install command ; then
touch .step02-done
fi
fi
...or something in between.
Now, I would suggest creating a directory somewhere, maybe logging output from the commands to some file there (maybe tee it), but definitely putting all these files you are creating with touch there. That way, if you start it from another directory by accident, it won't matter. You just need to make sure that apt-get or whatever you use actually returns false if it fails. It should.
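For example (the package name is only a placeholder; note that in a pipeline the step's exit status is tee's, so under bash you likely want set -o pipefail before relying on it):
set -o pipefail   # make the pipeline report apt-get's failure instead of tee's success
mkdir -p /root/mysetup
apt-get install -y apache2 2>&1 | tee /root/mysetup/step01.log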
You could even make a function that does it in a nice way...
#!/bin/bash
function do_cmd() {
if [ -f "$1.done" ]; then
echo "$2: skipping already completed step"
return 0
fi
echo -n "$2: "
$3 1> "$1.out" 2> "$1.err"
if [ $? -eq 0 ]; then
echo "ok"
touch "$1.done"
return 0
else
echo "failed"
echo -e "see \"$1.out\" and/or \"$1.err\" for details."
return 1
# could "exit 1" instead
fi
}
[ -d /root/mysetup ] || mkdir /root/mysetup
if ! [ -d /root/mysetup ]; then
echo "failed to find or create /root/mysetup directory
exit 1
fi
cd /root/mysetup
# ---------------- your steps go here -------------------
do_cmd prog1 "installing prog1" "apt-get install prog1" || exit 1
do_cmd prog2 "installing prog2" "apt-get install prog2" || exit 1
do_cmd startfoo "starting foo service" "service foo start" || exit 1
echo "all setup functions finished."
You would use:
do_cmd identifier "description" "command or function"
identifier: unique identifier used when files are generated:
identifier.out: standard output from command
identifier.err: standard error from command
identifier.done: created when command is successful
description: this is actually printed to the terminal when the step is being executed.
command or function: this is the actual command to run
not sure why stackoverflow forced me to format that last bit as code but w/e

Script can't find library when run as cron job, works fine when run manually

I have the script below set up as a cron job. It works when run manually, but when it runs from cron it fails to generate the files.
Below is my unix cron script.
#!/usr/local/bin/bash
var=`perl -w -e '$d=1*86400;#t=localtime (time -$d); printf "%.2d%.2d%.2d", $t[5]+1900,$t[4]+1,$t[3];'`
var="`echo $var |cut -c3-8`"
i=1;
while [ $i -le 8 ]
do
cd /home/svfe/bin
./bills_unloader -d $var -f $i
i=`expr $i + 1`
done
echo "Done !
When I try to debug the script, I am finding below error.
/usr/lib/hpux64/dld.so: Unable to find library 'libclntsh.so.11.1'.
/home/swa/swa2/autoload/bills_unloader.sh: line 19: 7078 Killed
./bills_unloader -d 170606 -f $i
Why is the command failing in cron, but working fine when executed manually?
Most probably you have the LD_LIBRARY_PATH variable set in your interactive (CLI) environment, but it is not available when the script is run under cron. Add a line:
declare -x > /tmp/variables.log.$(date +%s).$$
at the beginning of the script, then compare the logs from the manual run and the cron run. If necessary, set LD_LIBRARY_PATH properly in your script.
As it is an HP-UX system, it could also be SHLIB_PATH.
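For example, near the top of the cron script (the library path below is only a placeholder; copy the real value from the manual-run log produced by the declare -x line above):
export LD_LIBRARY_PATH=/opt/oracle/lib     # placeholder: use the path from your interactive environment
export SHLIB_PATH=$LD_LIBRARY_PATH         # HP-UX's equivalent search path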
You might be running the command as one user but the crontab under another.
If the command works fine as user "xyz", try the following in your crontab entry:
su - xyz -c sh

Bash script not working on a new dedicated server

Recently I have migrated to the new dedicated server which is running on the same operating system - FreeBSD 8.2. I got a root account access and all permissions have been set properly.
My problem is that the bash script I was running on the old server doesn't work on the new machine; the only error that appears while running the script is:
# sh script.sh
script.sh: 3: Syntax error: word unexpected (expecting ")")
Here is the code itself:
#!/usr/local/bin/bash
PORTS=(7777:GAME 11000:AUTH 12000:DB)
MESSG=""
for i in ${PORTS[@]} ; do
PORT=${i%%:*}
DESC=${i##*:}
CHECK=`sockstat -4 -l | grep :$PORT | awk '{print $3}' | head -1`
if [ "$CHECK" -gt 1 ]; then
echo $DESC[$PORT] "is up ..." $CHECK
else
MESSG=$MESSG"$DESC[$PORT] wylaczony...\n"
if [ "$DESC" == "AUTH" ]; then
MESSG=$MESSG"AUTH is down...\n"
fi
if [ "$DESC" == "GAME" ]; then
MESSG=$MESSG"GAME is down...\n"
fi
if [ "$DESC" == "DB" ]; then
MESSG=$MESSG"DB is down...\n"
fi
fi
done
if [ -n "$MESSG" ]; then
echo -e "Some problems ocurred:\n\n"$MESSG | mail -s "Problems" yet#another.com
fi
I don't really code in bash, so I don't know why this happened...
Bourne shell (sh) doesn't support arrays; that's why you're running into this error when you use
sh script.sh
Use bash instead
bash script.sh
Note: I suspect that sh script.sh worked on the old server because sh is linked to bash there.
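A quick way to confirm the difference (the exact error text varies by shell):
sh -c 'PORTS=(7777:GAME 11000:AUTH 12000:DB)'    # plain sh: syntax error, arrays are not POSIX
bash -c 'PORTS=(7777:GAME 11000:AUTH 12000:DB); echo "${PORTS[@]}"'    # bash: prints the three entries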
Also, you shouldn't need to run it through sh (that's what the #! on the first line is for - the OS will run the remainder of that line as a command and pass it the contents of the file to interpret). Just make the script executable:
chmod +x script.sh
and then you can just run it directly without the sh in front of the name.
It's possible that the default shell is not bash, so by running it through sh you're interpreting it with a different shell, which is what gives the error.
The code looks good. It is likely that your new dedicated server is running an older version of Bash than your old server, or maybe /usr/local/bin/bash is pointing to an older version.
Run
$ which bash
If the output is something other than /usr/local/bin/bash, change the shebang on the first line to that path. If it still does not work,
try replacing the third line:
PORTS=(7777:GAME 11000:AUTH 12000:DB)
with
PORTS=('7777:GAME' '11000:AUTH' '12000:DB')
and rerun the script.
If it still does not work, post the Bash version here by running
$ bash --version
Try with leading and trailing spaces:
PORTS=( 7777:GAME 11000:AUTH 12000:DB )
