Is it possible in some manner to gracefully exit/break in some middle step of GCB?
If a step's command exits with a non-zero code, the build flow stops, but the build is also marked as failed. I'd like to break out early while keeping a success status.
I found a solution which I consider a better workaround: let the build cancel itself using the gcloud SDK.
- name: 'gcr.io/cloud-builders/gcloud'
  id: 'Cancel current build if on master'
  entrypoint: 'sh'
  args:
    - '-c'
    - |
      test $BRANCH_NAME = "master" && gcloud builds cancel $BUILD_ID > /dev/null || true
Note that the service account that runs the builds (xxx@cloudbuild.gserviceaccount.com) needs the appropriate permissions to cancel builds.
That's not possible at the moment (even though it's an option that may be added in the future), but you can use a workaround to ignore the failure of any step. Using bash, you can achieve that with something like the following:
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args:
    - '-c'
    - |
      docker pull gcr.io/$PROJECT_ID/my-image || exit 0
The same question was addressed in a GitHub issue, where you can find more information.
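The effect of the `|| exit 0` trick can be seen in a plain shell, independent of Cloud Build (a minimal sketch; `false` stands in for the failing `docker pull`):

```shell
#!/bin/sh
# `false` stands in for a failing build command (e.g. `docker pull`).
# The inner shell exits 0 even though the command failed, so a CI step
# running this script would be reported as successful.
sh -c 'false || exit 0'
echo "step exit code: $?"   # prints: step exit code: 0
```

The failure is completely swallowed, so check the step's log if you still need to know whether the command actually succeeded.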
I have been working for two weeks now on a playbook to deploy CKAN with Ansible on RHEL. I managed to get the site up and running with:
- name: test ckan
  shell: ". {{ckan_user.home_path}}/{{ckan_site_name}}/bin/activate && paster serve /etc/ckan/{{ckan_site_name}}/development.ini"
  args:
    chdir: "{{ckan_user.home_path}}/{{ckan_site_name}}/src/ckan"
The result is that the playbook is stuck in an infinite loop, printing:
'Escalation succeeded'.
I'm sure that there is an easy fix, but I can't find it...
Any ideas are more than welcome.
Add “&& exit” or something similar after your shell command.
Also, register the task, then read the registered result in the next task to see what the output value is.
That way, you can stop or continue the play based on the output of the task.
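The register-and-check pattern suggested above could look roughly like this (a sketch, not tested against CKAN; the `async`/`poll: 0` part keeps the long-running server from blocking the play, and the URL/port is an assumption):

```yaml
- name: start the ckan test server in the background
  shell: ". {{ckan_user.home_path}}/{{ckan_site_name}}/bin/activate && paster serve /etc/ckan/{{ckan_site_name}}/development.ini"
  args:
    chdir: "{{ckan_user.home_path}}/{{ckan_site_name}}/src/ckan"
  async: 3600   # let the server run without blocking the play
  poll: 0
  register: ckan_serve

- name: check that the site responds
  uri:
    url: "http://localhost:5000"   # assumed host/port
  register: ckan_check
  failed_when: ckan_check.status != 200
```

Backgrounding the server is what breaks the "infinite loop": `paster serve` never exits on its own, so a synchronous `shell` task will wait on it forever.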
I'd like to run my build pipeline only when my repo is tagged with certain specific release tags. I can get the tag value from the CODEBUILD_WEBHOOK_TRIGGER environment variable, and I can conditionally execute code in my BUILD phase with some bash kung fu:
build:
  commands:
    - echo ${CODEBUILD_WEBHOOK_TRIGGER##*/}
    - |
      if expr "${CODEBUILD_WEBHOOK_TRIGGER}" : '^tag/30' >/dev/null; then
        git add *
        git commit -am "System commit"
        git push
        git tag ${CODEBUILD_WEBHOOK_TRIGGER##*/}
        git push origin ${CODEBUILD_WEBHOOK_TRIGGER##*/}
        echo Pushed the repo
      fi
This works fine; I only push when the tag matches a certain pattern.
Putting aside the brittleness of the above, what I really want to do is to terminate the entire build process in the INSTALL phase if my CODEBUILD_WEBHOOK_TRIGGER variable doesn't have a specific prefix. I'd like to skip all subsequent steps and exit the pipeline without error.
Is there a way to do this? It would be nice to minimize the resources I'm using.
It worked for me to stop the build with the aws CLI, using the provided CodeBuild environment variable ${CODEBUILD_BUILD_ID}:
- aws codebuild stop-build --id ${CODEBUILD_BUILD_ID}
For instance:
- |
  if expr "${CODEBUILD_WEBHOOK_TRIGGER}" : '^tag/30' >/dev/null; then
    . . .
  else
    aws codebuild stop-build --id ${CODEBUILD_BUILD_ID}
  fi
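Putting that into the INSTALL phase, a buildspec could short-circuit early like this (a sketch; `stop-build` is asynchronous, so the `sleep` is a crude way to wait for the stop to take effect, and the tag pattern is taken from the question):

```yaml
phases:
  install:
    commands:
      - |
        if ! expr "${CODEBUILD_WEBHOOK_TRIGGER}" : '^tag/30' >/dev/null; then
          echo "Not a release tag, stopping build"
          aws codebuild stop-build --id "${CODEBUILD_BUILD_ID}"
          sleep 60   # give the stop request time to take effect
        fi
```

Note that a stopped build is reported as STOPPED rather than SUCCEEDED, which may or may not count as "without error" for your purposes.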
CodeBuild natively supports tag filtering now. Documentation: https://docs.aws.amazon.com/codebuild/latest/userguide/sample-github-pull-request.html#sample-github-pull-request-filter-webhook-events.
Answering my own question: it turns out you can do this by specifying a branch filter in the source setup. The regular expression appears to match against anything that comes back from the webhook:
^tag/30
That works for my tag pattern.
The question still stands, though: I can imagine use cases where you want to short-circuit the build pipeline for some other reason.
Running this command inside a .gitlab-ci.yml:
task:
  script:
    - yes | true
    - yes | someOtherCommandWhichNeedsYOrN
Returns:
$ yes | true
ERROR: Job failed: exit status 1
Any clues, ideas why this happens or how to debug this?
Setup:
GitLab Runner in Docker
If running with set -o pipefail, a failure at any stage of a shell pipeline causes the entire pipeline to be considered failed. When yes tries to write to stdout but the program reading that output has stopped reading, the write is killed by SIGPIPE -- an expected failure, which the shell would normally ignore (it usually treats only the last component of a pipeline as relevant to the pipeline's exit status).
Turn this off for the remainder of your current script with set +o pipefail
Explicitly ignore a single failure: { yes || :; } | true
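You can reproduce the behaviour outside GitLab in a bash shell (a minimal sketch; `head -n 1` plays the role of `true` as a reader that stops early):

```shell
#!/bin/bash
# With pipefail, `yes` dying from SIGPIPE makes the whole pipeline fail.
set -o pipefail
yes | head -n 1 > /dev/null
echo "with pipefail: $?"      # non-zero (141 = 128 + SIGPIPE on Linux)

# Without pipefail, only the last command's status counts.
set +o pipefail
yes | head -n 1 > /dev/null
echo "without pipefail: $?"   # 0
```

This suggests the GitLab runner's shell is running with pipefail enabled, which is why the otherwise harmless `yes | true` fails the job.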
Can't comment yet.
I would extract the script into a file and run that file from the pipeline with some debug stuff in it, to see if you can reproduce it.
Make sure you make it to, and not past, the line in question.
I'd try the following to get some more info:
( set -x ; yes | true ; echo status: "${PIPESTATUS[@]}" )
See if you have some weird chars in the file or some weird modes set.
Make sure you are in the right shell; true can be a shell builtin, so that's worth checking too.
Good luck.
In my circle.yml file I have a post test that runs after the normal tests, and only when building master. I am trying to find a way to be alerted whether this post test succeeds or fails, but have the build pass regardless. Note that the build should still fail if any of the tests in the normal test suite fail; it is only this post test whose failures I want to see while still having the build succeed.
test:
  post:
    - |
      if [ master == $CIRCLE_BRANCH ]; then
        npm run extra-tests
      fi
disclaimer: CircleCI Developer Evangelist
You can do the following:
test:
  post:
    - |
      if [ master == $CIRCLE_BRANCH ]; then
        npm run extra-tests || true
      fi
The double pipe is an "or" in Bash. If the command to the left succeeds (exit code 0), we move on to the next line, ending the if block. If it fails, the command to the right of || runs instead, and that command always succeeds.
Just be careful: you'll only know whether any of these "extra tests" failed by logging into CircleCI's website, viewing the build, and expanding the build output for that section.
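That caveat can be softened a little by logging the failure yourself before swallowing it (a sketch; `extra_tests` is a hypothetical stand-in for `npm run extra-tests`):

```shell
#!/bin/sh
# Hypothetical stand-in for `npm run extra-tests`: a command that fails.
extra_tests() { return 1; }

# `||` swallows the failure but still leaves a visible trace in the log.
extra_tests || echo "WARNING: extra tests failed (ignored)"
echo "status seen by CI: $?"   # prints: status seen by CI: 0
```

The warning line makes the failure easy to spot when skimming the build output, while the step still reports success.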
I am trying to set up Travis CI to build a LaTeX report. When building the report, some steps have to be repeated, so the first time they are called they return a non-zero exit code.
My .travis.yml so far is
language: R
before_install:
  - tlmgr install index
script:
  - latex report
  - bibtex report
  - latex report
  - latex report
  - dvipdf report.dvi report.pdf
However in Travis Docs it states
If script returns a non-zero exit code, the build is failed, but continues to run before being marked as failed.
So if my first latex report command returns a non-zero exit code, it will fail the build.
I would only like the build to fail if the last latex report or dvipdf report failed.
Does anyone have any idea or help?
Thanks in advance.
Just append || true to your command.
(complex) Example:
- (docker run --rm -v $(pwd)/example:/workdir stocker-alert || true) 2>&1 | tee >(cat) | grep 'Price change within 1 day'
The docker command returns a non-zero exit code (because it's a negative test), but we want to continue anyway.
2>&1 - stderr is redirected to stdout (to be picked up by grep later)
tee - the output is printed (for debugging) and forwarded to grep
Finally, grep asserts that the output contains a required string. If not, grep returns a non-zero status, failing the build.
If we wanted to ignore grep's result as well, we would need another || true after grep.
Taken from schnatterer/stock-alert.
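The same pattern can be demonstrated with a stand-in for the docker command (a sketch: the subshell fails after printing the expected line, so grep succeeds and the overall pipeline exits 0):

```shell
#!/bin/sh
# The left side fails (exit 1) after printing the line we care about.
( echo "Price change within 1 day"; exit 1 ) 2>&1 \
  | tee /dev/null \
  | grep 'Price change within 1 day'
echo "pipeline status: $?"   # 0: grep found the line, the failure is ignored
```

Without pipefail, the pipeline's status is grep's status, which is exactly what makes grep usable as the assertion here.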
Not directly related to your original question, but I had much the same problem.
I found a solution using latexmk. This runs latex and bibtex as many times as needed.
If you look at my Travis configuration file, https://github.com/73VW/TechnicalReport/blob/master/.travis.yml, you will see that you simply have to add it to the apt dependencies.
Then you can run it like this: latexmk -pdf -xelatex [Your_latex_file]
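For reference, a minimal .travis.yml along those lines might look like this (a sketch; the exact apt package names are assumptions and may differ from the linked configuration):

```yaml
language: generic
addons:
  apt:
    packages:
      - texlive-latex-extra   # assumed TeX Live packages
      - latexmk
script:
  - latexmk -pdf report.tex   # reruns latex/bibtex as many times as needed
```

Because latexmk handles the reruns itself, only its final exit status matters, which sidesteps the original problem entirely.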