CasperJS pass exit code to Bash

I have a problem with running my CasperJS tests on Travis CI.
Whenever a test fails, CasperJS returns status code 1, which is the correct status code for a failed test.
I run all my tests with a bash script and I need the exit code of the tests inside that script. I tried $?, but it only tells me whether the last command was executed properly or not. Since the command itself runs properly, it always returns 0.
So my question is: is there a way to pass the CasperJS test status code to my bash script?
The reason I need all this is that I am running my tests on Travis CI, and Travis always exits with status 0 since the test command executes correctly, but I need Travis to exit with the proper exit code.
UPDATE:
Here is my script:
#!/bin/sh
WIDGET_NAME=${1:-widget} # defaults to 'widget'
PORT=${2:-4001} # default port is 4001
SERVER_PORT=${3:-4002} # default port is 4002
TEST_CASES=${4:-./test/features/*/*/*-test.casper.js} # default run all subdirectories
# bail on errors
set -e
# switch to root folder
cd `dirname $0`/..
echo "Starting feature tests ..."
echo "- start App server on port $PORT"
WIDGET_NAME_PASCAL_CASE=`node -e "console.log(require('pascal-case') (process.argv[1]))" $WIDGET_NAME`
./node_modules/.bin/beefy app/widget.js $PORT \
    --cwd public \
    --index public/widget-test.html \
    -- \
    --standalone $WIDGET_NAME_PASCAL_CASE \
    -t [ babelify --sourceMapRelative . ] \
    -t browserify-shim \
    --exclude moment 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process1.pid
sleep 1
echo "- start Fake API server on port $SERVER_PORT"
bin/fake-api $SERVER_PORT 1>/dev/null &
echo $! > /tmp/appointment-widget-tester-process2.pid
sleep 1
echo "- run feature tests"
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast | grep --line-buffered -v -e '^$' | grep --line-buffered -v "Unsafe JavaScript"
echo "- stop App and Fake API server"
kill -9 `cat /tmp/appointment-widget-tester-process*.pid`
rm /tmp/appointment-widget-tester-process*.pid
echo "done."

I have found my problem:
It lies in the nature of the | operator! The first command in the pipeline starts my tests, and the command after the | is the grep. $? refers to the last command in the pipeline, so it returns the exit code of the grep, not of the mocha-casperjs runner.
A solution: Pipe output and capture exit status in Bash
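For reference, a minimal sketch of how that fix can be applied to the script above using Bash's PIPESTATUS array (note: this requires bash, not plain sh, and the pipeline is simplified here):
#!/bin/bash
# Run the tests, still filtering the output through grep
mocha-casperjs $TEST_CASES --viewport-width=800 --viewport-height=600 --fail-fast \
    | grep --line-buffered -v "Unsafe JavaScript"

# $? would be grep's exit code; PIPESTATUS[0] holds the exit code of mocha-casperjs
TEST_STATUS=${PIPESTATUS[0]}

# ... stop the App and Fake API servers, remove the pid files ...

# Propagate the test result to the caller (e.g. Travis CI)
exit $TEST_STATUS
Alternatively, set -o pipefail makes the whole pipeline fail if any command in it fails, which together with set -e would abort the script immediately; the PIPESTATUS approach has the advantage that the server cleanup still runs before exiting.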

Related

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called Keycloak using a shell script. To do that, export.sh is run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay and the export works perfectly.
But then I try to exit from the k8s pod with exit and directly end the shell script, so that it moves back to the pipeline host instead of staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu@example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops; the remaining commands are not executed, and it does not stay on ubuntu@example1.com.
Are there any solutions?
Run the commands inside the pod without an interactive shell by using a heredoc (EOF).
Note that it's not EOF but 'EOF': quoting the delimiter prevents variable expansion in the current (local) shell.
That way, things like /tmp/export/master-* in the inner script expand inside the pod, as you expect.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
<put your codes here, which you type interactively>
EOF
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested code.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
EOF
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether the scp succeeds or not, the script exits at this point.
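As a small standalone illustration of what the quoted delimiter changes (plain bash, independent of kubectl):
NAME=host-value

# Unquoted delimiter: the local shell expands $NAME before the inner bash sees it
bash <<EOF
echo "I see: $NAME"    # prints "I see: host-value"
EOF

# Quoted delimiter: the text is passed through verbatim, so $NAME (and globs
# like /tmp/export/master-*) are evaluated by the inner shell instead
bash <<'EOF'
echo "I see: $NAME"    # prints "I see: " because NAME is not set in the inner shell
EOF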

jenkins stuck after restarting remote service agent

This is part of a bigger script that checks the OS version and chooses the correct branch based on it.
(After checking the OS version, it goes to this if condition:)
if [ "$AK" = "$OS6" ]
then
if [ "$(ls -la /etc/init.d/discagent 2>/dev/null | wc -l)" == 1 ]
then
/etc/init.d/discagent restart 2>&1 > /dev/null
/etc/init.d/discagent status |& grep -qe 'running'
if [ $? -eq 0 ] ; then
echo " Done "
else
echo " Error "
fi
fi
fi
If I comment out the service discagent restart line, the pipeline passes.
But if I don't comment it out, it hangs without giving any errors, and the output file only shows the second server (out of several), the one it hangs on, without moving on to the next server.
What could be the issue?
P.S.
When running it directly on the server, it works.
Run this script manually on the servers that it will be running on.
You can use xtrace (-x), which shows each statement before it is executed. When you combine -v with -x (i.e. -xv), each line is printed as it is read, and then again with the variables substituted just before it is executed.
Using -u will raise an error as soon as an unset variable is referenced, which shows you exactly where that occurs.
Another tool is the trap command, which you can use to debug your bash script.
More information on the following webpage,
https://linuxconfig.org/how-to-debug-bash-scripts
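As a concrete example, here is roughly what those options look like applied to the snippet above (a sketch only; the discagent paths are taken from the question):
#!/bin/bash
# -x prints each command before it runs; -v prints each line as it is read,
# so -xv shows the raw source line followed by the expanded command.
# -u turns references to unset variables into errors.
set -xvu

# trap can run a handler on errors (or on EXIT/DEBUG) for extra context
trap 'echo "command failed on line $LINENO" >&2' ERR

/etc/init.d/discagent restart > /dev/null 2>&1
/etc/init.d/discagent status | grep -q running && echo " Done " || echo " Error "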

SSH not exiting properly inside if statement in bash heredoc

So I am running this script to check whether a Java server is up on a remote host by sshing into it. If it is down, I am trying to exit and run another script locally. However, after the exit command, it is still in the remote directory.
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
# want to exit ssh
exit
# after here when i check it is still in ssh
# I want to run another script locally in the same directory as the current script
./other_script.sh
else
echo "java server up"
fi;
EOF
The exit exits the ssh session, so execution never reaches the other_script.sh line in the heredoc. It would be better to place that call outside the heredoc and act on the exit status of the heredoc/ssh, like so:
ssh -i ec2-user@$DNS << EOF
if ! lsof -i | grep -q java ; then
echo "java server stopped running"
exit 7 # Set the exit status to a number that isn't standard in case ssh fails
else
echo "java server up"
fi;
EOF
if [[ $? -eq 7 ]]
then
./other_script.sh
fi

In a bash script, test error code from a called script

My bash script (init.sh) calls another script (script.sh), and I want to test the error code from script.sh before doing any further action in init.sh.
I thought about testing it with $?, but it does not work.
My init.sh is like the following:
#!/bin/bash
set -e
echo "Before call"
docker run -v $PWD:/t -w /t [command]
if [ $? == 1 ]; then
echo "Issue"
fi
echo "After call"
I only get the "Before call" on stdout and not the "After call".
I know for a fact that if I execute docker run -v $PWD:/t -w /t [command] alone with wrong arguments, then echo $? will rightly display 1.
I am thinking that I am not catching the exit code from script.sh, but from somewhere else.
Any ideas?
You are running the script with set -e. This means that if any command exits with a non-zero status, bash stops executing all subsequent lines. So here, if docker exits with status 1, the conditional that follows never gets a chance to run at all. Try this instead:
#!/bin/bash
set -e
echo "Before call"
if ! docker run -v $PWD:/t -w /t [command]; then
echo "Issue"
fi
echo "After call"
This runs the command inside the if test, which suppresses the effect of set -e described above and gives you a chance to catch the error. Note this will also catch all non-zero statuses, not just 1.
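If you do want the specific exit code while keeping set -e, one common idiom (shown here as a sketch, not part of the answer above) is to capture it with ||, which also suppresses the set -e exit:
#!/bin/bash
set -e
echo "Before call"

status=0
# "|| status=$?" prevents set -e from aborting and records the real exit code
docker run -v $PWD:/t -w /t [command] || status=$?

if [ $status -ne 0 ]; then
    echo "Issue (docker exited with status $status)"
fi
echo "After call"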
Bash numeric comparison operator is -eq, and not ==...
So:
#!/bin/bash
set -e
echo "Before call"
docker run -v $PWD:/t -w /t [command]
if [ $? -eq 1 ]; then
echo "Issue"
fi
echo "After call"
set -e is generally a bad idea. Sure, it may seem like a good idea to have your script exit automatically in the event of an unexpected error, but the problem is that set -e and you may have different ideas about what constitutes a fatal error.
Instead, do your own error handling.
#!/bin/bash
echo "Before call"
docker run -v $PWD:/t -w /t [command]
docker_status=$?
if [ $docker_status != 0 ]; then
echo "docker returned: $docker_status"
exit $docker_status
fi
echo "After call"
In this simple code, I've somewhat redundantly saved the value of $? to another variable first. This ensures that it is preserved after you start executing other commands that examine, log, or otherwise process the value of $?. Also, I'm logging and exiting here on any non-zero status, not just 1. In theory, you might take different action for an exit status of 1 than for an exit status of 2, but here we take the same log-then-exit action for any error.
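If you did want to branch on specific statuses, that could look something like this (a sketch; what individual codes mean depends entirely on the command you run):
docker run -v $PWD:/t -w /t [command]
docker_status=$?

case $docker_status in
    0) echo "Success" ;;
    1) echo "Handling exit status 1"; exit 1 ;;
    2) echo "Handling exit status 2"; exit 2 ;;
    *) echo "Unexpected exit status: $docker_status"; exit $docker_status ;;
esac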

Catching errors in Bash with glassfish commands [return code in pipes]

I am writing a bash script to manage deployments to a GF server for several environments. What I would like to know is how I can get the result of a GF command and then determine whether to continue or exit.
For example
Say I want to redeploy, I have this script
$GF_ASADMIN --port $GF_PORT redeploy --name $EAR_FILE_NAME --keepstate=true $EAR_FILE | tee -a $LOG
The variables are already defined. So GF will start to redeploy and either succeed or fail. I want to check whether it does and act accordingly. I have this right after it.
RC=$?
if [[ $RC -eq 0 ]];
then echoInfo "Application Successfully redeployed!" | tee -a $LOG;
else
echoError "Failed to redeploy application!"
exit 1
fi;
However, it doesn't really seem to work.
The problem is the pipe
$GF_ASADMIN ... | tee -a $LOG
$? reflects the return code of tee.
You are looking for PIPESTATUS. See man bash:
PIPESTATUS
An array variable (see Arrays below) containing a list of exit
status values from the processes in the most-recently-executed
foreground pipeline (which may contain only a single command).
See also this example to clarify PIPESTATUS:
false | true
echo ${PIPESTATUS[@]}
Output is: 1 0
The corrected code is:
RC=${PIPESTATUS[0]}
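Applied to the snippet from the question (keeping its variables and the echoInfo/echoError helpers), that would look roughly like:
$GF_ASADMIN --port $GF_PORT redeploy --name $EAR_FILE_NAME --keepstate=true $EAR_FILE | tee -a $LOG
RC=${PIPESTATUS[0]}   # must be read right after the pipeline, before any other command

if [[ $RC -eq 0 ]]; then
    echoInfo "Application Successfully redeployed!" | tee -a $LOG
else
    echoError "Failed to redeploy application!"
    exit 1
fi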
Or try using a code block redirect, for example:
{
    if "$GF_ASADMIN" --port $GF_PORT redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE"
    then
        echo Info "Application Successfully redeployed!"
    else
        echo Error "Failed to redeploy application!" >&2
        exit 1
    fi
} | tee -a "$LOG"
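Another option worth mentioning (not part of the answer above) is set -o pipefail, which makes the pipeline's exit status non-zero if any command in it fails:
set -o pipefail
$GF_ASADMIN --port $GF_PORT redeploy --name "$EAR_FILE_NAME" --keepstate=true "$EAR_FILE" | tee -a "$LOG"
RC=$?   # non-zero if asadmin (or tee) failed
Unlike PIPESTATUS, this does not tell you which command in the pipeline failed, only that one of them did.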
