How to terminate a Google Cloud Build Pipeline properly? - google-cloud-build

A pipeline is automatically terminated when any step in the pipeline exits with an error code greater than 0; the build is then marked as failed. So far so good.
For instance, when the current branch is master, I want to run some extra steps such as tag and deploy after steps like build and test. When the current branch is not master, these extra steps should be skipped.
One workaround is to put a guard on each extra step that checks the current branch, but this seems inelegant.
How can I break out of a pipeline with exit code 0?

There is nothing 'out of the box' for this from Cloud Build yet.
The workaround is to add an if-statement guard to each conditional step, for example:
[[ "$BRANCH_NAME" == "master" ]] && your_command_here
You're going to have to change the step's entrypoint to a bash shell for this to work.
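As a minimal sketch (the builder image and deploy command are placeholders, not from the question), such a guarded step in cloudbuild.yaml could look like this:
steps:
- name: 'gcr.io/cloud-builders/gcloud'
  entrypoint: 'bash'
  args:
  - '-c'
  # the if-form exits 0 on other branches instead of failing the step
  - 'if [[ "$BRANCH_NAME" == "master" ]]; then gcloud app deploy; fi'
Note that with the bare [[ ... ]] && command form, the step exits 1 whenever the branch check fails, which would mark the whole build as failed; use the if-form or append || exit 0 to avoid that.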

Related

In Travis, is it possible to mark a stage as "Cancelled" instead of "Failed" when running a bash script?

There does exist a "Cancelled" state, which you can invoke by clicking the small x next to the job.
Is it possible to enter this cancelled state when running a bash script invoked by your .travis.yml? From Travis docs:
If script returns a non-zero exit code, the build is failed
So returning a different error code doesn't help. Is it just not doable?

How to skip an always-running task in Airflow after a few hours?

My example DAG:
Task1-->Task2-->Task3
I have a pipeline with a BashOperator task that should not stop (at least for a few hours).
Task1: It watches a folder for zip files and extracts them to another folder
#!/bin/bash
# watch the folder, extract each new ZIP's CSVs, then delete the archive
inotifywait -m /<path> -e create -e moved_to |
while read dir action file; do
    echo "The file '$file' appeared in directory '$dir' via '$action'"
    unzip -o -q "/<path>/$file" "*.csv" -d /<output_path>
    rm "/<path>/$file"
done
Task2: PythonOperator (loads the CSVs into a MySQL database after cleaning)
The problem is that my task is always running due to the loop, and I want it to proceed to the next task after (execution_date + x hours).
I was thinking of changing the trigger rules of the downstream task. I have tried execution_timeout on the BashOperator, but the task then shows as failed on the graph.
What should be my approach to solve this kind of problem?
There are several ways to address the issue you are facing.
Option 1: Use execution_timeout on the parent task and trigger_rule='all_done' on the child task. This is basically what you suggested, but just for clarification - Airflow doesn't mind that one of the tasks in the pipeline has failed. In your use case you describe it as a valid state for the task, so it's OK, but it's not very intuitive, as people often associate Failed with something being wrong, so it's understandable that this is not the preferred solution.
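A minimal sketch of Option 1 (Airflow 2-style imports; the DAG id, schedule, script paths and the 4-hour limit are placeholders, not from the question):
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator
from airflow.operators.python import PythonOperator


def load_csv_to_mysql():
    # placeholder for the question's cleaning + MySQL loading logic
    pass


with DAG("zip_watcher", start_date=datetime(2021, 1, 1), schedule_interval="@daily", catchup=False) as dag:
    # Task1 is killed and marked Failed once execution_timeout is exceeded
    task1 = BashOperator(
        task_id="watch_folder",
        bash_command="bash /<path>/watch_folder.sh ",  # trailing space keeps Airflow from treating the .sh path as a Jinja template
        execution_timeout=timedelta(hours=4),
    )

    # all_done lets Task2 run whether Task1 succeeded or timed out (Failed)
    task2 = PythonOperator(
        task_id="load_csv_to_mysql",
        python_callable=load_csv_to_mysql,
        trigger_rule="all_done",
    )

    task1 >> task2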
Option 2: Airflow has AirflowSkipException. You can set a timer in your Python code; if the timer exceeds the time you defined, then do:
from airflow.exceptions import AirflowSkipException
raise AirflowSkipException("Snap. Time is OUT")
This will set the parent task's status to Skipped, and the child task can then use trigger_rule='none_failed'. This way, if the parent task fails, it is due to an actual failure (not a timeout). A valid run will end with either Success or Skipped status.
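As a rough sketch of Option 2 (this assumes Task1 is rewritten as a PythonOperator callable so it can raise the exception itself; the helper script path and the 4-hour limit are made-up values):
import subprocess
import time

from airflow.exceptions import AirflowSkipException


def watch_folder_with_deadline(max_runtime_hours=4):
    deadline = time.monotonic() + max_runtime_hours * 3600
    while time.monotonic() < deadline:
        # one bounded pass of the watch/unzip work (placeholder script)
        subprocess.run(["bash", "/<path>/process_new_zips.sh"], check=True)
        time.sleep(30)
    # mark the task Skipped (not Failed) once the time budget is used up
    raise AirflowSkipException("Snap. Time is OUT")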

Jenkins always SUCCESS state when the Shell script actually failed

I'm facing this challenge in my current Jenkins setup: a shell (bash) script is executed remotely, and cases like the following still end with a SUCCESS status:
Permission denied while my installer is being copied
Unable to connect over SSH
Any suggestions on how I can fix these cases? Any pointers?
Thanks in advance
A pipeline step fails when a script or program returns a non-zero exit code. There are programs, Robocopy being a well-known example, whose exit codes do not follow that convention; Jenkins then does not understand that the program was unsuccessful and marks the pipeline as a success.
Basically, that is what you have to rely on here: make your script return a non-zero value whenever it detects one of those failures, and the pipeline will fail.
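A minimal sketch of that idea (host, user and paths are placeholders): check the remote copy and the SSH connection explicitly and exit non-zero so Jenkins marks the build as failed:
#!/bin/bash
set -euo pipefail  # abort on any unhandled error

# fail the build if the installer cannot be copied (e.g. permission denied)
if ! scp installer.sh user@remote-host:/opt/app/; then
    echo "Copying the installer failed" >&2
    exit 1
fi

# fail the build if the SSH connection or the remote command fails
if ! ssh user@remote-host 'bash /opt/app/installer.sh'; then
    echo "Remote execution over SSH failed" >&2
    exit 2
fi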

Mark as successful action for builds in Teamcity

I have a question to all the experienced Teamcity users out there.
I would like to exit out of a job based on a particular condition, but I do not want the job's status to be a failure. Is it possible to mark a job as successful even when you exit the job with "exit code 1"? Any pointers on achieving this (exiting a TeamCity job but marking it successful) some other way are also greatly appreciated!
Thanks!
You can use TeamCity service messages to update build status, e.g. write to the output
##teamcity[buildStatus status='SUCCESS' text='{build.status.text} and then made green']
to get the build status text concatenated with the string "and then made green".
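For example, a command-line build step could emit the message and then exit cleanly when the early-exit condition is met (SKIP_BUILD is a made-up variable; exiting 0 after the message avoids tripping the default non-zero-exit-code failure condition):
#!/bin/bash
if [[ "${SKIP_BUILD:-}" == "true" ]]; then
    echo "##teamcity[buildStatus status='SUCCESS' text='{build.status.text} and then made green']"
    exit 0   # a non-zero exit here would still trip the default failure condition
fi

# ... normal build commands continue here ...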
If you have a Command Line build step and you are using TeamCity 2017.2, you can also format stderr output as warnings instead of errors. Documentation: https://confluence.jetbrains.com/display/TCD10/Command+Line

Jenkins trigger conditional build steps in shell?

I am using Jenkins as a server to run cron jobs that are conditional on the success of other jobs. These can be run as multiple execute shell steps. I am specifically wondering if there is a way to make an execute shell step contingent on the exit status of the previous execute shell step.
This is the default behaviour. Each build step, such as an "Execute shell" step, returns an exit code (that of its last command). If it is 0, the next build step is executed. If it is not 0, Jenkins fails the build and skips straight to the post-build steps.
If your shell returns 0 on success, and everything else is a failure, just put several "Execute Shell" build steps one after another.
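As a tiny illustration (the script names are placeholders), two consecutive Execute shell steps already behave this way:
# Build step 1 ("Execute shell"): its last command's exit code decides what happens next
./run_nightly_export.sh

# Build step 2 ("Execute shell"): only runs if step 1 exited 0
./load_exported_data.sh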
