TeamCity: fail the build with a sane message

How can I fail a build with a sane message/status while running a Command Line build step?
Of course I can exit 1 in my script, but then I get an ugly 'Exit code 1' as the build result.

function fail_build {
    echo "##teamcity[buildProblem description='$1']" 1>&2
    exit 0
}
It could be used in a script like:
cd ./logs
if grep -Pqr 'error text regex' *; then
    fail_build "There are errors in logs"
fi
More on the TeamCity documentation page.
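The same helper works for any command whose non-zero exit code you want to surface as a readable build problem rather than a bare 'Exit code 1'; a minimal sketch (deploy.sh is just a placeholder for your own command):

./deploy.sh || fail_build "Deployment script failed, see the build log for details"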

Related

In GCP, how can I get a parent cloud build to fail when a child build fails?

I have a monorepo set up, and a cloudbuild.yaml file in the root of the repository spins off child Cloud Build jobs in the first step:
# Trigger builds for all packages in the repository.
- name: "gcr.io/cloud-builders/gcloud"
  entrypoint: "bash"
  args: [
      "./scripts/cloudbuild/build-all.sh",
      # Child builds don't have the git context, so pass them the SHORT_SHA.
      "--substitutions=_TAG=$SHORT_SHA",
    ]
  timeout: 1200s # 20 minutes
The build-all script is something I copied from the community builders repo:
#!/usr/bin/env bash
DIR_NAME="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"

set -e # Sets mode to exit on an error, without executing the remaining commands.

for d in {packages,ops/helm,ops/pulumi}/*/; do
    config="${d}cloudbuild.yaml"
    if [[ ! -f "${config}" ]]; then
        continue
    fi

    echo "Building $d ... "
    (
        gcloud builds submit . --config=${config} $*
    ) &
done
wait
It waits until all child builds are done before continuing to the next step... handy!
The only problem is that if any of the child builds fail, it still continues to the next step.
Is there a way to make this step fail if any of the child builds fail? I guess my script isn't returning the correct exit code...?
The set -e flag should make the script exit if any of the commands it runs fails; however, you can also check the exit status of a command through the $? variable. For example, you can include the following lines:
echo "Building $d ... "
(
gcloud builds submit . --config=${config} $*
if [ $? == 1 ]; then #Check the status of the last command
echo "There was an error while building $d, exiting"
exit 1
fi
) &
So if there was an error, that subshell exits with a status of 1 (error).
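Note that an exit 1 inside a backgrounded subshell does not by itself fail the parent script, and a bare wait discards the children's exit statuses, so the parent has to collect them itself. A minimal sketch of one way to do that, keeping the loop from the question (the pids array name is just illustrative):

pids=()
for d in {packages,ops/helm,ops/pulumi}/*/; do
    config="${d}cloudbuild.yaml"
    [[ -f "${config}" ]] || continue

    echo "Building $d ... "
    gcloud builds submit . --config="${config}" "$@" &
    pids+=($!)   # remember the PID of this child build
done

status=0
for pid in "${pids[@]}"; do
    wait "$pid" || status=1   # wait returns that child's exit code
done
exit "$status"   # a non-zero exit here fails the Cloud Build step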

Makefile result message during parallel build

Given a Makefile that is often run with the -j flag for parallel builds, I want it to terminate with a result message. I would like this message to say whether the build failed, and if it failed, what the error was. It doesn't have to say anything if the build succeeded (although it could), but it must warn the user when a target failed to build and why.
This behavior is already there during sequential builds, but not during parallel builds. Parallel builds interweave their output, and an error message is often overlooked because output from other targets can push the failed target's error off screen. A careless developer might see no errors on his/her screen and assume the build succeeded.
It seems like quite an intuitive feature and I've searched for an answer, but there don't appear to be any straightforward solutions. Any ideas?
You basically run
make -j 8 2> >(tee /tmp/error.log)
test $? -ne 0 && echo "build errors:"
cat /tmp/error.log
and you get all of stderr after the build finishes.
-- EDIT --
Updating to use tee, so the output goes both to stdout and into a file:
Make returns non-zero if one of its recipes fails, so you could do something like this from the command line (assuming a bash shell):
make 2>&1 | tee build.log
[ ${PIPESTATUS[0]} -eq 0 ] || ( echo "MAKE FAILED!"; grep --color "Error:" build.log )
The ${PIPESTATUS[0]} gives you the exit code of the first command (make 2>&1) as opposed to the exit status of the entire pipeline (which would be the exit status of tee if make failed). It is bash specific, so it won't work in zsh, for example.
Alternatively you could add the same logic as the top level target of a recursive make.
ifndef IN_RECURSION
export IN_RECURSION:=1

# PIPESTATUS is bash-specific, so run the recipe under bash
SHELL := /bin/bash

$(info At top level -- defining default target)

_default:
	@echo "doing recursive call of make"
	@$(MAKE) $(MAKECMDGOALS) IN_RECURSION=1 2>&1 | tee build.log; \
	[ $${PIPESTATUS[0]} -eq 0 ] || ( echo "MAKE FAILED!"; grep --color "Error:" build.log )

.PHONY: _default
endif
all:
....
Note that in this case the \ used to concatenate the two recipe lines is crucial, as the second command must run in the same shell instance as the first.
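The same PIPESTATUS idea can also be wrapped in a small stand-alone script that only prints a summary when make actually failed; a minimal sketch, assuming bash (the build.log name follows the answer above):

#!/bin/bash
# Run make, duplicating all output into build.log while it still shows on screen.
make -j 8 "$@" 2>&1 | tee build.log
status=${PIPESTATUS[0]} # exit code of make itself, not of tee (bash-specific)

if [ "$status" -ne 0 ]; then
    echo "MAKE FAILED!"
    grep --color "Error:" build.log
fi
exit "$status"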

How to trigger a fail in Travis CI?

One of my tests is a simple bash command with an if condition. I want Travis CI to consider the build as failed if the condition is positive.
I tried to do it this way (part of the .travis.yml file):
# ...
script:
  - npm run build
  - if [[ `git status --porcelain` ]]; then >&2 echo "Fail"; fi
# ...
But when the condition is positive, the message is just printed and the build is considered successful.
What should I do to make the build fail when the condition is positive?
Just add exit 1; after the echo. More info.
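With that change, the script section from the question would look something like this:

script:
  - npm run build
  - if [[ `git status --porcelain` ]]; then >&2 echo "Fail"; exit 1; fi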
If you just want to assert a condition but continue with testing, the following worked for me:
bash -c 'if [[ `git status --porcelain` ]]; then >&2 echo "Fail"; exit 1; fi'
This will mark the build result as failed but not terminate it.
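In context, that might look something like this; later script commands still run, but the build is reported as failed if the check returns non-zero (npm test here is just a placeholder for a subsequent command):

script:
  - bash -c 'if [[ `git status --porcelain` ]]; then >&2 echo "Fail"; exit 1; fi'
  - npm test # still runs; the build is marked failed because of the non-zero exit above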
Add return 1;
Travis compiles the different commands into a single bash script so exit 1 or travis_terminate 1 will abruptly interrupt the workflow and skip the after_script phase.
For complex commands that you want to make more readable and don't want to move to their own script, you can take advantage of YAML's literal scalar indicator:
script:
  - |
    if [[ `git status --porcelain` ]]; then
      >&2 echo "Fail"
      return 1
    fi

Understanding Bash if statement that invokes a command

Does anyone know what this is doing:
if ! /fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; then
    mkdir -p /usr/share/nginx/html
I understand the first part as saying "if the /fgallery/fgallery directory doesn't exist", but after that it is not clear to me.
In Bash, we can build an if based on the exit status of a command this way:
if command; then
    echo "Command succeeded"
else
    echo "Command failed"
fi
The then part is executed when the command exits with 0, and the else part otherwise.
Your code is doing exactly that, except that the ! negates the exit status, so the then branch (the mkdir) runs when the fgallery command fails.
It can be rewritten as:
/fgallery/fgallery -v -j3 /images /usr/share/nginx/html/ "${GALLERY_TITLE:-Gallery}"; fgallery_status=$?
if [ "$fgallery_status" -ne 0 ]; then
    mkdir -p /usr/share/nginx/html
fi
But the former construct is more elegant and less error prone.
See these posts:
How to conditionally do something if a command succeeded or failed
Why is testing "$?" to see if a command succeeded or not, an antipattern?

Shell script: when executing commands, do something if an error is returned

I am trying to automate a lot of our pre fs/db tasks, and one thing that bugs me is not knowing whether or not a command I issue REALLY happened. I'd like a way to watch for a return code from executing that command, so that if it fails to rm because of a permission-denied or any other error, it issues an exit.
Say I have a shell script such as:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
Pseudocode could be something similar to:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
if [returncode == 'error']
exit;
fi
How could I, for example, execute that rm command and exit if it is NOT removed? I will be adapting the answer to work with multiple other types of commands such as sed -i -e, cp, and umount.
Edit:
Let's suppose I have a write-protected file such as:
$ ls -lrt | grep protectedfile
-rwx------ 1 orasmq sapsys 0 Nov 14 12:39 protectedfile
And running the below script generates the following error because obviously there are no permissions:
rm: remove write-protected regular empty file `/tmp/protectedfile'? y
rm: cannot remove `/tmp/protectedfile': Operation not permitted
Here is what I worked out from your answers... is this the right way to do something like this? Also, how could I dump the error rm: cannot remove '/tmp/protectedfile': Operation not permitted to a logfile?
#! /bin/bash

function log(){
    # some logging code, simply writes to a file and then echoes out the input
    :
}

function quit(){
    read -p "Failed to remove protected file, permission denied?"
    log "Some log message, and somehow append the returned error message from rm"
    exit 1;
}

rm /tmp/protectedfile || quit;
If I understand correctly what you want, just use this:
rm blah/blah/blah || exit 1
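If, as in the question's edit, you also want rm's own error message (rm: cannot remove '/tmp/protectedfile': Operation not permitted) to end up in a logfile, you can redirect stderr on the same line before falling back to quit; a minimal sketch (the /tmp/script.log path is just an example):

rm /tmp/protectedfile 2>> /tmp/script.log || quit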
A possibility: a 'wrapper', so that you can retrieve the original command's stderr (and stdout?), and maybe also retry it a few times before giving up, etc.
Here is a version that redirects both stdout and stderr.
Of course you could choose not to redirect stdout at all (and usually you shouldn't, I guess, making the "try_to" function a bit more useful in the rest of the script!)
export timescalled=0 # external to the function itself

try_to () {
    let "timescalled += 1" # let "..." allows white spaces and simple arithmetics
    try_to_out="/tmp/try_to_${$}.${timescalled}"
    # tries to avoid collisions within the same script and/or if multiple scripts run in parallel

    zecmd="$1" ; shift ;

    "$zecmd" "$@" 2>"${try_to_out}.ERR" >"${try_to_out}.OUT"
    try_to_ret=$?
    # or: "$zecmd" "$@" >"${try_to_out}.ERR" 2>&1 to have both in the same file

    if [ "$try_to_ret" -ne "0" ]
    then log "error $try_to_ret while trying to : '${zecmd} $*' ..." "${try_to_out}.ERR"
         # provides a custom error message + the name of the stderr file from the command
         rm -f "${try_to_out}.ERR" "${try_to_out}.OUT" # before we exit, better delete these
         exit 1 # or exit $try_to_ret ?
    fi
    rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"
}
it's ugly, but could help ^^
Note that there are many things that could go wrong: 'timescalled' could become too high, the tmp file(s) might not be writable, zecmd could contain special characters, etc.
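For illustration, the wrapper might then be called like this (assuming a log function such as the one sketched in the question is defined first; the cp paths are hypothetical):

try_to rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl
try_to cp /some/source/file /some/destination/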
Usually one would use a command like this:
doSomething.sh
if [ $? -ne 0 ]
then
    echo "oops, i did it again"
    exit 1;
fi
B.T.W., searching for 'bash exit status' will already give you a lot of good results.
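Since the question mentions adapting this to several other commands (sed -i -e, cp, umount), another common pattern is to let the shell abort on the first failing command with set -e while sending stderr to a logfile; a rough sketch (the logfile path, sed expression, and the cp/umount targets are placeholders):

#!/bin/bash
set -e                        # abort the script as soon as any command fails
exec 2>> /tmp/script.log      # append all stderr (e.g. rm's error message) to a logfile

rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl
sed -i -e 's/old/new/' /oracle/$SAPSID/some.conf        # hypothetical edit
cp /oracle/$SAPSID/some.conf /oracle/$SAPSID/some.conf.bak   # hypothetical copy
umount /oracle/$SAPSID/mirrlogA                         # hypothetical mount point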
