Shell Script in Jenkins - bash

I am running a shell script in a Jenkins pipeline step.
The script does a Maven build and some other stuff.
I want the pipeline to fail if the Maven build fails.
I am not sure how to do that.
This is my script:
#!/usr/bin/env bash
echo "Acceptance Test"
cd .. && cd $PWD/transactions-ui
npm run acceptance-start && cd .. && cd $PWD/transactions-test/ && mvn verify -Dserenity.reports=email -Dwebdriver.driver=chrome -Dwebdriver.provided.mydriver=starter.util.RemoteChromeDriver ; cd .. ; cd $PWD/transactions-ui/ ; npm run acceptance-stop
echo "Test completed"
Following is my Jenkinsfile:
dir("${workspace}/transactions-test") {
    sh "${workspace}/transactions-test/run.sh"
}

It depends on how you define failure. If the exit code of your run.sh script is non-zero, the build will fail. Note that a script's exit status is that of its last command, so right now Jenkins sees the status of npm run acceptance-stop, not of mvn verify.
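A minimal sketch of propagating Maven's status explicitly (paths and flags taken from your script):
#!/usr/bin/env bash
echo "Acceptance Test"
cd ../transactions-ui
npm run acceptance-start
cd ../transactions-test
mvn verify -Dserenity.reports=email -Dwebdriver.driver=chrome -Dwebdriver.provided.mydriver=starter.util.RemoteChromeDriver
rc=$?    # remember Maven's exit status before the cleanup steps overwrite $?
cd ../transactions-ui
npm run acceptance-stop
echo "Test completed"
exit "$rc"    # Jenkins fails the sh step whenever this is non-zero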

Add set -e to your bash script to make it exit as soon as any command returns a non-zero status:
#!/usr/bin/env bash
set -e
echo "Acceptance Test"
...
See the bash set builtin for reference.
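One caveat worth adding: set -e only looks at the exit status of a whole pipeline, which is that of its last command. Bash's set -o pipefail catches failures earlier in a pipeline (a small demo):
set -e
false | true     # pipeline exits 0 (status of the last command), so the script continues
set -o pipefail
false | true     # the pipeline now reports failure, and set -e aborts the script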
If set -e doesn't fit your case, you can examine the result of each key command explicitly:
#!/usr/bin/env bash
echo "Acceptance Test"
cd ..
cd "$PWD/transactions-ui"
npm run acceptance-start
if [[ $? -ne 0 ]]; then
    echo "npm run acceptance-start failed"
    exit 1
fi
cd ..
cd "$PWD/transactions-test/"
mvn verify \
    -Dserenity.reports=email \
    -Dwebdriver.driver=chrome \
    -Dwebdriver.provided.mydriver=starter.util.RemoteChromeDriver
if [[ $? -ne 0 ]]; then
    echo "mvn verify failed"
    exit 1
fi
cd ..
cd "$PWD/transactions-ui/"
npm run acceptance-stop
if [[ $? -ne 0 ]]; then
    echo "npm run acceptance-stop failed"
    exit 1
fi
echo "Test completed"

Related

bash run command without exiting on error and tell me its exit code

From a bash script I want to run a command which might fail, store its exit code in a variable, and run a subsequent command regardless of that exit code.
Examples of what I'm trying to avoid:
Using set:
set +e # disable exit on error (it was explicitly enabled earlier)
docker exec $CONTAINER_NAME npm test
test_exit_code=$? # remember exit code of previous command
set -e # enable exit on error
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
Using if:
if docker exec $CONTAINER_NAME npm test ; then
    test_exit_code=$?
else
    test_exit_code=$?
fi
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
Is there a semantically straightforward way to tell bash "run command without exiting on error, and tell me its exit code"?
The best alternative I have is still confusing and requires comments to explain to subsequent developers (it's just a terser if/else):
docker exec $CONTAINER_NAME npm test && test_exit_code=$? || test_exit_code=$?
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
I believe you could just use the || operator, which is equivalent to an if-else.
Would the following address your use case? (otherwise feel free to comment!)
set -e # implied in a CI context
exit_status=0
docker exec "$CONTAINER_NAME" npm test || exit_status=$?
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
exit "$exit_status"
or more briefly:
set -e # implied in a CI context
docker exec "$CONTAINER_NAME" npm test || exit_status=$?
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
exit "${exit_status:-0}"
As an aside, if you are not interested in this exit status code, you can also do something like this:
set -e # implied in a CI context
docker exec "$CONTAINER_NAME" npm test || :
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
For more details on the || : tip, see e.g. this answer on Unix-&-Linux SE:
Which is more idiomatic in a bash script: || true or || :?
Very simply, save the return code if the command fails:
#!/usr/bin/env sh
# Implied by CI
set -e
# Initialise exit return code
rc=0
# Run the command, or save its error return code if it fails
docker exec "$CONTAINER_NAME" npm test || rc="$?"
printf '%s\n' "copying unit test result file to host"
# Run the other command regardless of whether the first one failed
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
# Exit with the return code of the first command
exit "$rc"
You could use a kind of try/catch: run the command in a subshell to get its exit code, then use a simple case statement to run other commands depending on the error code:
(
    # here goes your command which might fail
    exit 2
)
exit_code=$?
case "$exit_code" in
    0) echo "Successful execution"
       # do something
       ;;
    1) echo "Error type 1"
       # do something
       ;;
    2) echo "Error type 2"
       # do something
       ;;
    *) echo "Unknown error type: $exit_code"
       ;;
esac
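As a concrete illustration (my addition, not from the original answer), grep has documented distinct exit codes, 0 for a match, 1 for no match, and 2 for an error, so the same pattern maps cleanly onto it (app.log is a hypothetical file name):
grep -q "ERROR" app.log
case "$?" in
    0) echo "found errors in the log" ;;
    1) echo "log is clean" ;;
    *) echo "grep itself failed (missing file?)" ;;
esac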

How to do try and catch in a bash file?

I have a shell file (deploy.sh) that runs the following commands:
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...
I want to stop execution of the script when an error occurs in any of the commands.
Which command does that in shell?
If you want to test the success or failure of a command, you can rely on its exit code. Knowing that each command will return a 0 on success or any other number on failure, you have a few options on how to handle each command's errors.
|| handler
npm run build:prod || exit 1
if condition
if docker-compose -f .docker/docker-compose.ecr.yml build my-app; then
    printf "success\n"
else
    printf "failure\n"
    exit 1
fi
the $? variable
docker-compose -f .docker/docker-compose.ecr.yml push my-app
if [ $? -gt 0 ]; then
    printf "Failure\n"
    exit 1
fi
traps
err_report() {
    echo "Error on line $1"
}
trap 'err_report $LINENO' ERR
aws ecs update-service --cluster ...
set -e
To globally "exit on error", set -e will do just that. It won't give you much info, but it'll get the job done.
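Putting the options together for the question's deploy.sh, a minimal sketch (the commands are the asker's; the set -euo pipefail preamble is a common bash convention, not something the question mandates):
#!/bin/bash
set -euo pipefail    # exit on errors, on unset variables, and on pipeline failures
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...    # elided in the question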
You can use set -e to exit on errors. Even better, you can combine set -e with a trap function.
#!/bin/bash
set -e
trap 'catch $? $LINENO' EXIT
catch() {
    echo "catching!"
    if [ "$1" != "0" ]; then
        # error handling goes here
        echo "Error $1 occurred on $2"
    fi
}
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...
Source: https://medium.com/@dirk.avery/the-bash-trap-trap-ce6083f36700
Did a quick search on Google and it seems there isn't a built-in try/catch. The best option is to use &&, ||, or if ... else blocks; see the links below:
SO-Try Catch in bash
and
Linuxhint
Hope this helps

Script in gitlab-ci does not output errors

I have a shell script that I use on GitLab CI that runs a supplied command. The issue is that it doesn't display errors if the command fails; it just says ERROR: Job failed: exit code 1. I tried the script locally and it was able to output the failed command. Is there a way I could somehow force it to display the error through my script before it exits the job?
The particular part of my script
output="$( (cd "$FOLDER"; eval "$#") 2>&1 )"
if [ "$output" ]; then
echo -e "$output\n"
fi
One way to trap an error that works in every shell is to combine two commands with logical OR ||.
Catch error from subshell:
output="$( (cd "$FOLDER"; eval "$#") 2>&1 )" || errorcode="$?"
will save the error code from the previous command if it fails.
Exit with own error code if important command fails
output="$( (cd "$FOLDER"; eval "$#") 2>&1 )" || exit 12
For more complex things one can define a function that will be called after the OR.
handle_error() {
    # do stuff
    :    # placeholder command; a function body cannot consist only of a comment
}
output="$( (cd "$FOLDER"; eval "$@") 2>&1 )" || handle_error

bin sh start multiple processes

I am trying to start multiple processes for my development server by creating a #!/bin/sh file with shell commands in it.
E.g.:
#!/bin/sh
read -p "Start all servers? (Y/n)" answer
if test "$answer" = "Y"
then
    cd ./www/src; node ./index.js;
    cd ./www/; ./gulp;
    cd ./api/; nodemon ./init.js;
    cd ./api-test/; node ./index.js;
else
    echo "Cancelled."
fi
Because, for example, nodemon sets up a watch process and node starts an HTTP server, the first command (cd ./www/src; node ./index.js;) starts and never returns, so the script never continues on to start the other processes.
I can't figure out how to start all four processes independently of each other.
Anybody?
I would prefer to write some functions to spawn each process consistently:
One function, called spawn_once, will only run the command if it is not already running, therefore allowing only one instance
A second function, called spawn, will run the command even if it is already running (allowing multiple instances)
Use whichever is most appropriate for your use case
#!/bin/sh
# Spawn command $1 from path $2
spawn() {
    local cmd=$1; local path=$2
    ( [ ! -z "$path" ] && cd "$path"; eval "$cmd"; ) &
}
# Spawn command $1 from path $2, only if command $1 is not running
# in other words, run only a single instance
spawn_once() {
    local cmd=$1; local path=$2
    (
        [ ! -z "$path" ] && cd "$path"
        pgrep -u "$USER" -x "$cmd" >/dev/null || eval "$cmd"
    ) &
}
# Use spawn or spawn_once to start your multiple commands
# The format is: spawn "<command with args>" "<path>"
spawn "node ./index.js" "./www/src"
spawn_once "./gulp" "./www/" # only one instance allowed
spawn "node ./index.js" "./api-test/"
Explanation:
( [ ! -z "$path" ] && cd "$path"; eval "$cmd"; ) & : change directory (if a path argument was given) and run the command in a subshell, placed in the background with &, so it neither affects the current directory of the rest of the script nor blocks the script while the command runs.
spawn_once: pgrep -u "$USER" -x "$cmd" >/dev/null || eval "$cmd" : check with pgrep whether the command has already been started; otherwise (||) run the command.
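One addition of my own: the script itself exits right after spawning, while the children keep running in the background. If you want it to stay in the foreground until every server has stopped, append a wait:
# optional: block until all spawned background children have exited
wait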
You can use an ampersand (&) to execute a task in the background, for example:
#!/bin/sh
read -p "Start all servers? (Y/n)" answer
if test "$answer" = "Y"
then
    cd ./www/src
    node ./index.js &
    cd $OLDPWD && cd ./www/
    ./gulp &
    .........
else
    echo "Cancelled."
fi
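If you later need to stop or wait on an individual server, $! holds the PID of the most recently started background job (a small sketch of my own):
node ./index.js &
node_pid=$!                      # remember this server's PID
# ... later ...
kill "$node_pid"                 # stop just that server
wait "$node_pid" 2>/dev/null     # reap it and collect its exit status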

Catch failure in shell script

Pretty new to shell scripting. I am trying to do the following:
#!/bin/bash
unzip myfile.zip
#do stuff if unzip successful
I know that I can just chain the commands together with &&, but there is quite a chunk of them, and it would not be terribly maintainable.
You can use the exit status of the command explicitly in the test:
if ! unzip myfile.zip &> /dev/null; then
    # handle error here
    exit 1
fi
You can use $?. It returns:
- 0 if the command executed successfully.
- a non-zero value if the command was unsuccessful.
So you can do
#!/bin/bash
unzip myfile.zip
if [ "$?" -eq 0 ]; then
    # do stuff on unzip successful
    echo "unzip succeeded"
fi
Test
$ cat a
hello
$ echo $?
0
$ cat b
cat: b: No such file or directory
$ echo $?
1
The variable $? contains the exit status of the previous command. A successful exit status for (most) commands is (usually) 0, so just check for that...
#!/bin/bash
unzip myfile.zip
if [ "$?" -eq 0 ]
then
    # Do something
    echo "unzip succeeded"
fi
If you want the shell to check the result of the executed commands and stop interpretation when something returns a non-zero value, you can add set -e, which means "Exit immediately if a command exits with a non-zero status." I use this often in scripts.
#!/bin/sh
set -e
# here goes the rest
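Applied to the question, a minimal sketch:
#!/bin/bash
set -e
unzip myfile.zip
# this line is reached only if unzip succeeded
echo "unzip ok, doing stuff"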
