I want to launch automated tests from a bash file, and it's important that the bash file exits with a non-zero status if a test fails.
The issue is that I need to run an afterscript when the tests are done.
How do I do it with logical statements?
set -e
python -m pytest
python ./afterscript.py
So, I need to run ./afterscript.py even if pytest fails, yet in case of test failure I need the script to exit with an error status after the afterscript has run.
There you go:
set -e
python -m pytest || status=$?
python ./afterscript.py
exit "${status:-0}"
That way, no matter what the exit status of python -m pytest is, the || branch suppresses set -e and the script is not terminated; the saved status is then propagated with exit once the afterscript has run (0 if the tests passed).
I do not know whether there is a problem in one of the steps of my GitHub workflow job.
The step looks like this:
- name: check_tf_code
  id: tf_check
  run: |
    terraform fmt -check
    echo '::set-output name=bash_return_Value::$?'
and the error output for this step in the log is:
Error: Process completed with exit code 3
What could be the problem here? Is GitHub not able to evaluate $? in bash to get the info whether the last (terraform) command was successful or not?
Run steps are run with set -eo pipefail (see docs); this means that if any command has a non-zero exit status, the job aborts, including when the command is part of a pipeline.
In your case, this means that if terraform fmt -check has a non-zero exit status, the next line is never run.
Then, your output would be literally $? because Bash doesn't expand anything in single quotes. You'd have to use
echo "::set-output name=bash_return_Value::$?"
but you already know that when this line is hit, the value of $? is 0.
If you want to do something in case a command fails, you could do
if ! terraform fmt -check; then
  # Do something
fi
because if conditions don't trigger the set -e behaviour.
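The failing status can also be captured inline without aborting the step; a minimal sketch, using false as a stand-in for terraform fmt -check:

```shell
#!/usr/bin/env bash
set -e   # same errexit behaviour as a GitHub Actions run step

rc=0
# `|| rc=$?` records the failure without triggering errexit,
# so the lines after it still run.
false || rc=$?   # stand-in for `terraform fmt -check`

echo "return value: $rc"
```

This prints return value: 1 and the script itself exits 0, so the step does not abort.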
I'm trying to set up a GitLab pipeline, so that certain exit_codes are okay for a script I'm executing.
I have tried both shell and a ruby script, but both seem to have the same behaviour.
test_job:
  stage: build
  image: ruby:3.0
  script:
    - chmod +x ci/wrap.sh
    - ./ci/wrap.sh
  allow_failure:
    exit_codes:
      - 64
As you can see, I am just executing the script and nothing more; my expectation would be that the exit status of the last executed script is used as the exit status for the job.
In the script I'm only calling exit 64, which should be an "allowed failure" in that case; the pipeline log however says that the job failed because of exit code 1:
How do I get GitLab to accept the exit code of this (or a ruby) script as the job exit code?
I found a way to fix this problem. Apparently GitLab Runner uses the -e flag, which means that any non-zero exit code cancels the job. This can be changed with set +e, but then you still need to capture the actual exit code for the job.
Using $? in two different lines of the configuration does not work, because GitLab inserts echo calls in between them.
So the exit code needs to be captured directly, example:
script:
  - set +e
  - ruby "ci/example.rb" || EXIT_CODE=$?
  - exit "${EXIT_CODE:-0}"
Here's my trick for turning off early failure and checking the result code later:
script:
  - set +e +o pipefail
  - python missing_module.py 2>&1 | grep found; result=$?
  - set -e -o pipefail
  - "[ $result == 0 ]"
This turns off early exit and runs a command that we consider to have an acceptable failure if "found" is in the error text. It then turns early exit back on and tests whether the exit code we saved was good or not.
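The effect of pipefail on a pipeline's exit status can be seen with a pair of throwaway commands (false and cat here are just stand-ins for a failing producer and a succeeding consumer):

```shell
#!/usr/bin/env bash

set +o pipefail
false | cat
echo "without pipefail: $?"   # 0: only the last command (cat) counts

set -o pipefail
false | cat
echo "with pipefail: $?"      # 1: the failure of `false` is reported
```
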
I am using Azure DevOps to build and deploy my git repo to a third-party VPS. I do this by logging into the server from Azure DevOps through SSH and executing a shell script that pulls the git repo and builds it with, e.g., vue-cli and Laravel.
When the bash script is executed I receive a lot of errors on nearly all commands, although everything is succeeding. Can anyone tell me how to get rid of these unless something is really failing (it would be nice to fail if npm build exits with code 1, for instance)?
See screenshot below.
Screenshots are only really helpful for visual issues. You can use PasteBin or etc to share long logs if necessary.
According to this issue Azure just follows the lead of whatever shell it's running code in. So, in Bash it continues unless explicitly told to stop.
To easily change this behavior you can add set -e (or set -o errexit) at the start of your script. The errexit option causes Bash to exit as soon as a command/etc returns a non-zero exit code.
Another worthy addition is the set -o pipefail option. If you've got any pipes like command1 | command2, this makes the pipeline's exit status the status of the rightmost command that failed, rather than always the status of the last command. So, if command1 fails above but command2 succeeds, the pipeline returns the failure code from command1 instead of overwriting it with command2's success.
Finally, set -u (or -o nounset) causes an error when unset variables are encountered during parameter expansion. If running in a non-interactive shell, it will also exit.
Many scripts combine these by running set -euo pipefail at the beginning to stop them from running after the first problem is encountered.
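A quick way to see nounset in action is to run the failing expansion in a child shell, since a non-interactive shell exits on the error (the variable names here are arbitrary placeholders):

```shell
#!/usr/bin/env bash
set -u

rc=0
# Expanding an unset variable under nounset is fatal in a non-interactive
# shell; run it in a child bash so the parent can inspect the exit status.
bash -c 'set -u; echo "$not_defined_anywhere"' 2>/dev/null || rc=$?
echo "nounset exit status: $rc"   # non-zero

# ${var:-default} expansions are exempt, which is the usual escape hatch.
echo "with a default: ${NOT_DEFINED:-fallback}"
```
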
If you want to explicitly force a bash script to exit you can use || and && accordingly. The expression command || exit will exit if the command fails and command && exit will exit if the command succeeds.
This seems to be a bug starting from npm v3.10.8. You can check this discussion.
As a workaround you can add this to package.json and run the command with the --no-optional switch:
"optionalDependencies": {
  "fsevents": "*"
},
Also, there's a possibility that your npm version is too old. You can use the Node.js tool installer task with version spec = 12.x to install newer Node.js and npm versions.
I am having trouble chaining scripts in npm. I am using webpack, running a build script, and would like to run a bash file afterwards. Both commands work, but not when chained.
In my package.json I have this:
"scripts": {
  "build-staging": "webpack --config webpack-staging.config.js -p || ./build-staging.sh"
},
If I run npm run build-staging, webpack runs the build and works fine. It does not run my build-staging.sh, however. If I manually run the bash file it works, so my issue is having it chain and run after the webpack script is finished. I've seen that || should do this, but no luck.
Am I using || wrong, or does the bash script not run because webpack does not 'kill' the script once finished? I am not able to run any more commands unless I use Ctrl+C, maybe that's the issue?
Thanks!
|| is only used to run a program if the previous command failed (returned a non-zero status).
$ bash -c "exit 0" || echo "This won't run"
$ bash -c "exit 1" || echo "This will run"
This will run
$
If you want your second script to run regardless, you could use
"scripts": {
  "build-staging": "webpack --config webpack-staging.config.js -p ; ./build-staging.sh"
},
Or if you only want it to run on success (which is more likely), you could use && instead of ||. Note that ; may not be supported by your platform. As mentioned in the comments, ; doesn't work on Windows, but && does.
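The three separators can be compared directly in a shell; a minimal sketch:

```shell
#!/usr/bin/env bash

true  && echo "&& runs after success"     # printed: left side succeeded
false && echo "skipped on failure"        # not printed: left side failed
false || echo "|| runs after failure"     # printed: left side failed
true  || echo "skipped on success"        # not printed: left side succeeded
false ;  echo "; runs unconditionally"    # printed regardless of status
```

Only the three "runs" lines are printed, which is why `;` (or `&&` for success-only chaining) is the right choice here, not `||`.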
Problem: I am trying to create an hg pre-commit hook. The commit always succeeds even if the only thing in the bash script invoked by the hook is "exit 1".
What I've Tried:
- Wrote a bash script that only had "exit 1" in it. (This still let the commit proceed. I would expect the commit to be aborted.)
- Put a pause in the bash script to verify it was being executed. (It was being executed.)
- Set precommit directly equal to "exit 1" in the hgrc file. This successfully aborted the commit, so I knew hooks were working.
- Wrote a python script that returned non-zero, and that stopped the commit from happening. This verified that hooks were working with python scripts.
- Called my bash script from my python script and printed out the return value of the bash script in python. This always printed zero, which tells me my bash script is not working correctly.
Bash Example:
hgrc file:
[hooks]
precommit = pre-commit.sh
pre-commit.sh:
#!/bin/sh
exit 1
Python Example:
hgrc file:
[hooks]
precommit = pre_commit.py
pre_commit.py:
#!/usr/bin/env python
import sys
import os
# I would expect this to print 1 since pre-commit.sh explicitly exits 1
# However this always prints 0
print os.system("pre-commit.sh")
# This is working fine. It successfully aborts the commit
sys.exit(1)
I just want to reiterate that this has nothing to do with python. I only used python to check the value of my bash script. The problem is my bash script always returns 0 and I need it to return non-zero in order to abort a commit.
I have a feeling this is due to me running Windows. I have git hooks working, but I believe that is because Git Bash interprets the script. Any suggestions? Thanks in advance.
You cannot use "pre-commit.sh" on Windows and expect it to work without telling Windows what to do with it.
You can use a .bat file like this (because the OS knows what to do with it!):
C:\temp>echo @echo test>pre-commit.bat
C:\temp>echo @exit 1 1>>pre-commit.bat
C:\temp>python -c "import os,sys;sys.exit(os.system('pre-commit.bat'))"
test
C:\temp>echo %ERRORLEVEL%
1