I am running Jenkins as a CI/CD pipeline for a project. To make things easier for myself, I have created a bash script to run the tests and send a coverage report. Here is my bash script:
#!/bin/bash
echo $GIT_COMMIT # only needed for debugging
GIT_COMMIT=$(git log | grep -m1 -oE '[^ ]+$')
echo $GIT_COMMIT # only needed for debugging
./cc-test-reporter before-build
yarn test --coverage
./cc-test-reporter after-build -t simplecov --exit-code $? || echo "Skipping Code Climate coverage upload"
And this is how I am running it in Jenkins:
sh "jenkins/scripts/load_env_variables.sh test"
Jenkins runs the script; however, when the script fails, Jenkins does not exit but rather continues.
Any help with this please?
Use "set -e" in your script.
-e Exit immediately if a command exits with a non-zero status.
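As a minimal sketch, the script from the question with set -e added could look like this; once any command (for example yarn test --coverage) exits non-zero, the script aborts with that status and the Jenkins sh step fails the build:
#!/bin/bash
set -e   # exit immediately if any command exits with a non-zero status

GIT_COMMIT=$(git log | grep -m1 -oE '[^ ]+$')
echo "$GIT_COMMIT"   # only needed for debugging

./cc-test-reporter before-build
yarn test --coverage
# reached only when the tests passed, so $? is 0 here
./cc-test-reporter after-build -t simplecov --exit-code $? || echo "Skipping Code Climate coverage upload"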
Related
I have a Jenkinsfile written for Scripted Pipeline where I have the following piece of code:
sh """ cd $WORKSPACE
source myscript.sh
cd \${EXPORTED_VAR1}
.
.
.
"""
So I source myscript.sh, which in turn has a source command in it, say source their_script.sh. The problem is that their_script.sh contains a line like echo $0 | egrep -iqe string, and whenever egrep finds no match my Jenkins job exits. This started happening suddenly; it was working until yesterday.
I understand that grep returns status 1 when it finds no match, and that is why the job exits. But I want myscript to continue even if grep fails. I also understand that using set +e and set -e would keep the shell from exiting when grep fails, but I am not allowed to modify their_script.sh. If I add set +e when I call myscript itself, won't that keep every error from exiting the script, not just this one?
Is there any solution so that I can continue with my job even if grep fails?
Try supplying your own shebang line. By default the Jenkins sh step runs the block with sh -xe, which exits on the first failing command; a custom shebang overrides those flags, so dropping -e lets the script continue past a failing grep:
sh """#!/bin/bash -x
cd ${WORKSPACE}
## etc.
...
"""
How do I write a shell script that continues execution even if a specific command fails, but still reports that failure as an error later? I tried this:
#!/bin/bash
./node_modules/.bin/wdio wdio.conf.js --spec ./test/specs/login.test.js
rc=$?
echo "print here"
chown -R gitlab-runner /gpc_testes/
chown -R gitlab-runner /gpc_fontes/
exit $rc
However, the script stops when the wdio command fails.
You could use
command_that_would_fail || command_failed=1
# More code and even more
.
.
if [ ${command_failed:-0} -eq 1 ]
then
echo "command_that_would_fail failed"
fi
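Applied to the script from the question, a minimal sketch could look like the following; the || rc=$? guard keeps the script running (even if the runner enables set -e) while still recording the real exit status:
#!/bin/bash
rc=0
# run the tests, but don't stop the script if they fail; remember their exit code instead
./node_modules/.bin/wdio wdio.conf.js --spec ./test/specs/login.test.js || rc=$?

echo "print here"
chown -R gitlab-runner /gpc_testes/
chown -R gitlab-runner /gpc_fontes/

if [ "$rc" -ne 0 ]
then
    echo "wdio tests failed with exit code $rc" >&2   # report the failure at the end
fi
exit "$rc"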
Suppose the name of the script is test.sh. When executing it, run it with the command below:
./test.sh 2>>error.log
Errors from failing commands won't appear on the terminal but will be appended to error.log, which can be reviewed afterwards.
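If the normal output should be kept as well, a variant of the same idea (with hypothetical log file names) is:
./test.sh >>output.log 2>>error.log
This appends standard output to output.log and standard error to error.log, so both can be reviewed later.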
This is where I am running my tests in travis.yml:
# Run tests
script:
# Test application in Docker container
- ./tools/docker-test.sh
The shell script docker-test.sh looks like this:
#!/usr/bin/env bash
githash="$(git rev-parse --short HEAD)"
echo "-------------------------------------------------"
echo "| Running unit tests |"
echo "-------------------------------------------------"
docker create -it --name test eu.gcr.io/test/test:$githash
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm -fv test
The TravisCI build is a success even if the tests fail.
How can I get TravisCI to know if the tests have failed or not? I don't know whether this is a problem with errors not being propagated from Docker, errors not being propagated from the shell script, or TravisCI not knowing when the Go tests succeed or fail.
Your script is exiting with the status code of its last command, docker rm -fv test.
You need to capture the status code of the tests, then clean up Docker, then exit with that code.
This code example is from a slightly different question over here, but it's the same solution.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
    echo "Removing container"
    docker stop test || true
    docker rm -f test || true
    exit $RC
}
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?   # reached only if the tests pass; on failure, set -e exits and the trap uses the default RC=2
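Adapted to the docker-test.sh from the question, a minimal sketch without the trap could look like this; the important part is capturing the exit status of docker exec before the cleanup commands overwrite $?:
#!/usr/bin/env bash
githash="$(git rev-parse --short HEAD)"

docker create -it --name test "eu.gcr.io/test/test:$githash"
docker start test

docker exec test /bin/sh -c "go test ./..."
rc=$?   # exit status of the go tests, not of the cleanup commands below

docker stop test
docker rm -fv test

exit "$rc"   # Travis now sees the real test result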
I have a problem with running my CasperJS tests on Travis CI.
Whenever a test fails CasperJS returns status code 1, which would be the correct status code to be returned on a failed test.
I am running all my tests with a bash script, and I need the exit code of the tests inside that script. I tried the $? variable, but it only tells me whether the command was executed properly or not, and since it is, it always returns 0.
So my question is: is there a way to pass the CasperJS test status code to my bash script?
The reason I need all this is that I am running my tests on Travis CI, and Travis always exits with status 0 because the commands themselves execute correctly; I need Travis to exit with the proper exit code.
UPDATE:
Here is my script:
#!/bin/sh
WIDGET_NAME=${1:-widget} # defaults to 'widget'
PORT=${2:-4001} # default port is 4001
SERVER_PORT=${3:-4002} # default port is 4002
TEST_CASES=${4:-./test/features/*/*/*-test.casper.js} # default run all subdirectories
# bail on errors
set -e
# switch to root folder
cd `dirname $0`/..
echo "Starting feature tests ..."
echo "- start App server on port $PORT"
WIDGET_NAME_PASCAL_CASE=`node -e "console.log(require('pascal-case') (process.argv[1]))" $WIDGET_NAME`
./node_modules/.bin/beefy app/widget.js $PORT \
--cwd public \
--index public/widget-test.html \
-- \
--standalone $WIDGET_NAME_PASCAL_CASE \
-t [ babelify --sourceMapRelative . ] \
-t browserify-shim \
--exclude moment 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process1.pid
sleep 1
echo "- start Fake API server on port $SERVER_PORT"
bin/fake-api $SERVER_PORT 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process2.pid
sleep 1
echo "- run feature tests"
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast | grep --line-buffered -v -e '^$' | grep --line-buffered -v "Unsafe JavaScript"
echo "- stop App and Fake API server"
kill -9 `cat /tmp/appointment-widget-tester-process*.pid`
rm /tmp/appointment-widget-tester-process*.pid
echo "done."
I have found my problem:
It lies in the nature of the | operator. The first command in the pipeline starts my tests, and the command after the | is the grep; $? refers to the last command in the pipeline, so it returns the exit code of the grep, not of the mocha-casperjs runner.
A solution: Pipe output and capture exit status in Bash
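A minimal sketch of how that could look in the script above: PIPESTATUS is a bash array holding the exit codes of every command in the last pipeline, so the shebang has to be #!/bin/bash rather than #!/bin/sh, and the value must be read immediately after the pipeline:
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast \
  | grep --line-buffered -v -e '^$' \
  | grep --line-buffered -v "Unsafe JavaScript"
test_rc=${PIPESTATUS[0]}   # exit code of mocha-casperjs, not of the last grep

echo "- stop App and Fake API server"
kill -9 `cat /tmp/appointment-widget-tester-process*.pid`
rm /tmp/appointment-widget-tester-process*.pid

exit "$test_rc"
Alternatively, set -o pipefail makes the pipeline's exit status non-zero if any command in it fails, which combined with the existing set -e aborts the script as soon as the tests fail.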
What would be the correct format for the following, where I want to execute two scripts? The following is only executing the first one for me:
if ps aux | grep -E "[a]ffiliate_download.py|[g]oogle_download.py" > /dev/null
then
    echo "Script is already running. Skipping"
else
    exec "$DIR/affiliate_download.py"
    exec "$DIR/google_download.py"
fi
The exec command replaces the current shell process with the program it runs. Since the shell is no longer running, it can't run commands after that.
Just execute the commands normally:
else
    "$DIR/affiliate_download.py"
    "$DIR/google_download.py"
fi
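Put together, a minimal sketch of the whole check (assuming $DIR is set earlier in the script) would be:
#!/bin/bash
# Skip when either download script is already running; otherwise run both in sequence.
if ps aux | grep -E "[a]ffiliate_download.py|[g]oogle_download.py" > /dev/null
then
    echo "Script is already running. Skipping"
else
    "$DIR/affiliate_download.py"
    "$DIR/google_download.py"
fi
exec would only make sense for the very last command of the script, since nothing in the shell runs after it.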