How to do try and catch in a bash file? - bash

I have a shell file (deploy.sh) that runs the following commands:
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...
I want to stop the execution of the script when an error occurs in one of the commands.
Which command does that in shell?

If you want to test the success or failure of a command, you can rely on its exit code. Knowing that each command will return a 0 on success or any other number on failure, you have a few options on how to handle each command's errors.
|| handler
npm run build:prod || exit 1
if condition
if docker-compose -f .docker/docker-compose.ecr.yml build my-app; then
    printf "success\n"
else
    printf "failure\n"
    exit 1
fi
the $? variable
docker-compose -f .docker/docker-compose.ecr.yml push my-app
if [ $? -gt 0 ]; then
    printf "Failure\n"
    exit 1
fi
traps
err_report() {
    echo "Error on line $1"
}
trap 'err_report $LINENO' ERR
aws ecs update-service --cluster ...
set -e
To globally "exit on error", set -e will do just that. It won't give you much info, but it'll get the job done.
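For example, applied to the deploy.sh from the question, a minimal sketch would be:
#!/bin/bash
set -e  # abort as soon as any command exits with a non-zero status
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...  # truncated here exactly as in the question
If any command fails, the ones after it never run and the script exits with that command's status.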

You can use set -e to exit on errors. Even better, you can combine set -e with a trap function.
#!/bin/bash
set -e
trap 'catch $? $LINENO' EXIT
catch() {
    echo "catching!"
    if [ "$1" != "0" ]; then
        # error handling goes here
        echo "Error $1 occurred on $2"
    fi
}
npm run build:prod
docker-compose -f .docker/docker-compose.ecr.yml build my-app
docker-compose -f .docker/docker-compose.ecr.yml push my-app
aws ecs update-service --cluster ...
source: https://medium.com/@dirk.avery/the-bash-trap-trap-ce6083f36700

Did a quick search on Google and it seems there isn't one. Best is to use && or || or if ... else blocks; see the links below (and the short sketch after them):
SO-Try Catch in bash
and
Linuxhint
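As a rough illustration of the && / || approach applied to the commands from the question (a sketch, not a drop-in script; the aws command is truncated as in the question):
npm run build:prod \
  && docker-compose -f .docker/docker-compose.ecr.yml build my-app \
  && docker-compose -f .docker/docker-compose.ecr.yml push my-app \
  && aws ecs update-service --cluster ... \
  || { echo "deploy step failed" >&2; exit 1; }
Each command runs only if the previous one succeeded, and the || branch at the end reports the failure and aborts.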
Hope this helps

Related

bash run command without exiting on error and tell me its exit code

From a bash script I want to run a command which might fail, store its exit code in a variable, and run a subsequent command regardless of that exit code.
Examples of what I'm trying to avoid:
Using set:
set +e # disable exit on error (it was explicitly enabled earlier)
docker exec $CONTAINER_NAME npm test
test_exit_code=$? # remember exit code of previous command
set -e # enable exit on error
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
Using if:
if docker exec $CONTAINER_NAME npm test ; then
    test_exit_code=$?
else
    test_exit_code=$?
fi
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
Is there a semantically straightforward way to tell bash "run command without exiting on error, and tell me its exit code"?
The best alternative I have is still confusing and requires comments to explain to subsequent developers (it's just a terser if/else):
docker exec $CONTAINER_NAME npm test && test_exit_code=$? || test_exit_code=$?
echo "copying unit test result file to host"
docker cp $CONTAINER_NAME:/home/test/test-results.xml .
exit $test_exit_code
I believe you could just use the || operator, which is equivalent to an "if / else" construct.
Would the following address your use case? (otherwise feel free to comment!)
set -e # implied in a CI context
exit_status=0
docker exec "$CONTAINER_NAME" npm test || exit_status=$?
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
exit "$exit_status"
or more briefly:
set -e # implied in a CI context
docker exec "$CONTAINER_NAME" npm test || exit_status=$?
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
exit "${exit_status:-0}"
As an aside, if you are not interested in this exit status code, you can also do something like this:
set -e # implied in a CI context
docker exec "$CONTAINER_NAME" npm test || :
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
For more details on the || : tip, see e.g. this answer on Unix-&-Linux SE:
Which is more idiomatic in a bash script: || true or || :?
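Both spellings behave the same way: the overall status of the line becomes 0, so set -e does not abort. A tiny sketch (some_failing_command is just a placeholder):
set -e
some_failing_command || true  # the line succeeds overall, so the script keeps going
some_failing_command || :     # ':' is the no-op builtin; identical effect
echo "still running"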
Very simply, save the return code if the command failed:
#!/usr/bin/env sh
# Implied by CI
set -e
# Initialise exit return code
rc=0
# Run the command, or save its error return code if it fails
docker exec "$CONTAINER_NAME" npm test || rc="$?"
printf '%s\n' "copying unit test result file to host"
# Run the other command regardless of whether the first one failed
docker cp "$CONTAINER_NAME:/home/test/test-results.xml" .
# Exit with the return code of the first command
exit "$rc"
You could use a kind of try/catch to get the exit code, then use a simple case statement to run other commands depending on the error exit code:
(
    # here goes your command, which might fail; "exit 2" simulates a failure
    exit 2
)
exit_code=$?
case "$exit_code" in
0) echo "Success execution"
#do something
;;
1) echo "Error type 1"
#do something
;;
2) echo "Error type 2"
#do something
;;
*) echo "Unknown error type: $exit_code"
;;
esac

Shell Script in Jenkins

I am running a shell script in a Jenkins pipeline step.
The script does a Maven build and some other things.
I want the pipeline to fail if the Maven build fails.
I am not sure how to do that.
This is my script
#!/usr/bin/env bash
echo "Acceptance Test"
cd .. && cd $PWD/transactions-ui
npm run acceptance-start && cd .. && cd $PWD/transactions-test/ && mvn verify -Dserenity.reports=email -Dwebdriver.driver=chrome -Dwebdriver.provided.mydriver=starter.util.RemoteChromeDriver ; cd .. ; cd $PWD/transactions-ui/ ; npm run acceptance-stop
echo "Test completed"
The following is my Jenkinsfile:
dir("${workspace}/transactions-test")
{
sh "${workspace}/transactions-test/run.sh"
}
It depends on how you define failure. If the exit code of your run.sh script is different from 0, the build will fail.
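Note that, as the script stands, its exit status is that of the final echo, so a Maven failure alone will not fail the sh step. A minimal sketch of propagating the Maven status explicitly (simplified; the cd/npm plumbing from the question is elided):
#!/usr/bin/env bash
echo "Acceptance Test"
mvn verify ...            # stand-in for the long command line in the question
status=$?                 # capture the Maven exit code before anything else overwrites $?
npm run acceptance-stop   # cleanup still runs even when the tests failed
echo "Test completed"
exit "$status"            # a non-zero status here fails the Jenkins sh step and the build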
Add set -e to your script to exit if any command returns a non-zero status:
#!/usr/bin/env bash
set -e
echo "Acceptance Test"
...
Bash set command for reference.
If set -e doesn't work for you, you can examine the result of each key command as follows:
#!/usr/bin/env bash
echo "Acceptance Test"
cd ..
cd $PWD/transactions-ui
npm run acceptance-start
if [[ $? -ne 0 ]]; then
    echo "npm run acceptance-start failed"
    exit 1
fi
cd ..
cd $PWD/transactions-test/
mvn verify \
-Dserenity.reports=email \
-Dwebdriver.driver=chrome \
-Dwebdriver.provided.mydriver=starter.util.RemoteChromeDriver
if [[ $? -ne 0 ]]; then
    echo "mvn verify failed"
    exit 1
fi
cd ..
cd $PWD/transactions-ui/
npm run acceptance-stop
if [[ $? -ne 0 ]]; then
    echo "npm run acceptance-stop failed"
    exit 1
fi
echo "Test completed"

Starting multiple services using shell script in Dockerfile

I am creating a Dockerfile to install and start the WebLogic 12c services using startup scripts at the "docker run" command. I am passing a shell script in the CMD instruction which executes the startWebLogic.sh and startNodeManager.sh scripts. But when I logged in to the container, it had started only the first script, startWebLogic.sh, and not the second one, which is obvious from the docker logs.
The same script, executed manually inside the container, starts both services. What is the right instruction for running the script so that it starts multiple processes in a container without exiting the container?
What am I missing in this script and in the Dockerfile? I know that a container can run only one process, but, even in a dirty way, how can I start multiple services for an application like WebLogic, which has a name server, node manager, and managed server, plus created managed domains and machines? The managed server can only be started when the WebLogic name server is running.
Script: startscript.sh
#!/bin/bash
# Start the first process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start my_first_process: $status"
    exit $status
fi
# Start the second process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
status=$?
if [ $status -ne 0 ]; then
    echo "Failed to start my_second_process: $status"
    exit $status
fi
while sleep 60; do
    ps aux | grep "Name=adminserver" | grep -q -v grep
    PROCESS_1_STATUS=$?
    ps aux | grep node | grep -q -v grep
    PROCESS_2_STATUS=$?
    # If the greps above find anything, they exit with 0 status
    # If they are not both 0, then something is wrong
    if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
        echo "One of the processes has already exited."
        exit 1
    fi
done
Truncated Dockerfile:
RUN unzip $WLS_PKG
RUN $JAVA_HOME/bin/java -Xmx1024m -jar /u01/app/oracle/$WLS_JAR -silent -responseFile /u01/app/oracle/wls.rsp -invPtrLoc /u01/app/oracle/oraInst.loc > install.log
RUN rm -f $WLS_PKG
RUN . $WLS_HOME/server/bin/setWLSEnv.sh && java weblogic.version
RUN java weblogic.WLST -skipWLSModuleScanning create_basedomain.py
WORKDIR /u01/app/oracle
CMD ./startscript.sh
docker build and run commands:
docker build -f Dockerfile-weblogic --tag="weblogic12c:startweb" /var/dprojects
docker run -d -it weblogic12c:startweb
docker exec -it 6313c4caccd3 bash
Please use supervisord for running multiple services in a docker container. It will make the whole process more robust and reliable.
Run supervisord -n as your CMD command and configure all your services in /etc/supervisord.conf.
Sample conf would look like:
[program:WebLogic]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
stderr_logfile = /var/log/supervisord/WebLogic-stderr.log
stdout_logfile = /var/log/supervisord/WebLogic-stdout.log
autorestart=unexpected
[program:NodeManager]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
stderr_logfile = /var/log/supervisord/NodeManager-stderr.log
stdout_logfile = /var/log/supervisord/NodeManager-stdout.log
autorestart=unexpected
It will handle all the things you are trying to do with a shell script.
Hope it helps!

Docker run with if statement within Docker Bamboo Task

I would like to run a docker openjdk:8-jdk container with the following command:
if [ "$GIT_BRANCH" = "master" ]; then ./gradlew publish; else echo Skipped because it is not master branch; fi
I tried to do the following:
docker run --rm openjdk:8-jdk "if [ \"$GIT_BRANCH\" = \"master\" ]; then echo hi; else echo bla; fi"
But I get the following error: "executable file not found in $PATH": unknown.
Furthermore, it is not possible for me to use the if statement like this:
if ...
docker run ...
else
echo Skipped
Because I have to run it as a Bamboo Docker task.
Since the command above is not executed within bash, bash has to be started first, like this:
docker run --rm openjdk:8-jdk /bin/bash -c "if [ \"$GIT_BRANCH\" = \"master\" ]; then ./gradlew publish; else echo Skipped because it is not master branch; fi"

CasperJS pass exit code to Bash

I have a problem with running my CasperJS tests on Travis CI.
Whenever a test fails, CasperJS returns status code 1, which is the correct status code to return for a failed test.
I am running all my tests with a bash script and I need the exit code of the tests in that bash script. I tried the $? variable, but this only returns whether the command was executed properly or not. Since it is executed properly, it always returns 0.
So my question is: Is there a way to pass the CasperJs-Test status code to my bash script?
The reason I need all this is that I am running my tests on Travis CI, and Travis always exits with status 0, since the test command itself executes correctly; I need Travis to exit with the proper exit codes.
UPDATE:
Here is my script:
#!/bin/sh
WIDGET_NAME=${1:-widget} # defaults to 'widget'
PORT=${2:-4001} # default port is 4001
SERVER_PORT=${3:-4002} # default port is 4002
TEST_CASES=${4:-./test/features/*/*/*-test.casper.js} # default run all subdirectories
# bail on errors
set -e
# switch to root folder
cd `dirname $0`/..
echo "Starting feature tests ..."
echo "- start App server on port $PORT"
WIDGET_NAME_PASCAL_CASE=`node -e "console.log(require('pascal-case') (process.argv[1]))" $WIDGET_NAME`
./node_modules/.bin/beefy app/widget.js $PORT \
--cwd public \
--index public/widget-test.html \
-- \
--standalone $WIDGET_NAME_PASCAL_CASE \
-t [ babelify --sourceMapRelative . ] \
-t browserify-shim \
--exclude moment 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process1.pid
sleep 1
echo "- start Fake API server on port $SERVER_PORT"
bin/fake-api $SERVER_PORT 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process2.pid
sleep 1
echo "- run feature tests"
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast | grep --line-buffered -v -e '^$' | grep --line-buffered -v "Unsafe JavaScript"
echo "- stop App and Fake API server"
kill -9 `cat /tmp/appointment-widget-tester-process*.pid`
rm /tmp/appointment-widget-tester-process*.pid
echo "done."
I have found my problem:
It lies in the nature of the | operator! The first operation is the start of my tests, and the second operation after the | operator is the grep. $? refers to the last command in the pipeline, so it returns the exit code of the grep, not of the mocha-casperjs runner.
A solution: Pipe output and capture exit status in Bash
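For reference, bash records the exit status of every command in a pipeline in the PIPESTATUS array, so the test step can be handled roughly like this (a simplified sketch of the linked approach):
mocha-casperjs $TEST_CASES --fail-fast | grep --line-buffered -v "Unsafe JavaScript"
test_exit_code=${PIPESTATUS[0]}  # status of mocha-casperjs, not of the grep
exit "$test_exit_code"
Note that PIPESTATUS is a bash feature, so the script's #!/bin/sh shebang would need to become #!/bin/bash. Alternatively, set -o pipefail makes $? reflect the rightmost command in the pipeline that exited non-zero.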
