In Travis, is it possible to mark a stage as "Cancelled" instead of "Failed" when running a bash script?

There is a "Cancelled" state, which you can trigger manually by clicking the small x next to a job. Is it possible to enter this cancelled state when running a bash script invoked by your .travis.yml? From the Travis docs:
If script returns a non-zero exit code, the build is failed
So returning a different error code doesn't help. Is it just not doable?

Related

bash: stop subshell script marked as failed if one step exits with an error

I am running a script through the SLURM job scheduler on an HPC cluster.
I am invoking a subshell script through a master script.
The subshell script contains several steps. One step sometimes fails because of the quality of the data; this step is not required for the later steps, but if it fails, my whole subshell script is marked with a FAILED status in the job scheduler. However, I need this subshell script to have a COMPLETED status in the job scheduler, as it is a dependency in my master script.
I tried adding
set +e
in my subshell script right before the optional step, but it doesn't seem to work: I still get a non-zero exit code and a FAILED status in the job scheduler.
In short: I need the subshell script to have a COMPLETED status in the job scheduler, no matter whether that one particular step finishes with errors or not. I would appreciate any help with this.
For Slurm jobs submitted with sbatch, the job exit code is taken to be the return code of the submission script itself. The return code of a Bash script is that of the last command in the script.
So if you just end your script with exit 0, Slurm should consider it COMPLETED no matter what.
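A minimal sketch of such a submission script (the step names are hypothetical):

#!/bin/bash
#SBATCH --job-name=example_job    # hypothetical job name

./required_step.sh
./optional_quality_step.sh        # may fail; its exit code is discarded below
./another_required_step.sh

# The Slurm exit code is the script's return code, i.e. that of the
# last command, so ending with 'exit 0' yields a COMPLETED status.
exit 0

If the required steps should still be able to fail the job, test their exit codes explicitly and call exit 1 in those cases.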

Mark as successful action for builds in Teamcity

I have a question to all the experienced Teamcity users out there.
I would like to exit out of a job based on a particular condition, but I do not want the job's status to be a failure. Is it possible to mark a job as successful even when you exit with "exit code 1"? Any pointers on achieving this (exiting a TeamCity job but marking it as successful) some other way are greatly appreciated!
Thanks!
You can use TeamCity service messages to update build status, e.g. write to the output
##teamcity[buildStatus status='SUCCESS' text='{build.status.text} and then made green']
to get build status text concatenated with the and then made green string.
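For instance, a shell build step along these lines (the condition is hypothetical) should exit early without leaving the build red, since the service message overrides the status:

#!/bin/bash
if should_stop_early; then       # hypothetical condition
    # Override the build status before bailing out with a non-zero code.
    echo "##teamcity[buildStatus status='SUCCESS' text='stopped early and made green']"
    exit 1
fi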
If you have a Command Line build step and you are using TeamCity 2017.2, you can format stderr output as warnings. Documentation: https://confluence.jetbrains.com/display/TCD10/Command+Line

Autosys job not failing when the shell script fails

I am moving existing manual shell scripts to run via Autosys jobs. However, after adding exit 1 to each script so that failures are reported, the Autosys job does not fail, and Autosys shows the exit code as 0.
I tried the below simple script
#!/bin/ksh
exit 1;
When I execute this, the Autosys job shows a SUCCESS status. I have not changed the success code or max success code in Autosys; everything is at its default. What am I missing?

Jenkins trigger conditional build steps in shell?

I am using Jenkins as a server to run cron jobs that are conditional on the success of other jobs. These can be run as multiple execute shell steps. I am specifically wondering if there is a way to make an execute shell step contingent on the exit status of the previous execute shell step.
This is the default behaviour. Each build step, such as an "Execute Shell" step, returns an exit code (that of its last command). If it is 0, the next build step is executed. If it is non-zero, Jenkins FAILS the build and skips straight to the post-build steps.
If your shell returns 0 on success, and everything else is a failure, just put several "Execute Shell" build steps one after another.
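For example, with two consecutive "Execute Shell" build steps (the make targets are placeholders):

Step 1:
#!/bin/bash
set -e        # any failing command fails this step, and therefore the build
make build    # a non-zero exit here causes step 2 to be skipped

Step 2:
#!/bin/bash
make deploy   # runs only if step 1 exited with 0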

using $? when running several commands in parallel in bash

I'm creating a startup/shutdown script for WebSEAL. It's written to allow several instances to be stopped/started in parallel. The only problem is verifying that it completed without issue. With other infrastructures, I could simply grep for a particular keyword in the output (which I redirect to a log file), but WebSEAL does not give any success/error message.
Instead, I thought to use $? to capture the exit status in a dynamically named variable that is checked after the startups have occurred (during log consolidation).
Here is the code that starts/stops and then creates the variable
${PDCOMMAND} >> ${LOGDIR}/${APP}.txt 2>&1 &
let return_${APP}=$?
PDCOMMAND is a valid startup/stop command, e.g. pdweb start my_instance
APP is the name of the instance, e.g. my_instance
The goal is that return_${APP} (return_my_instance) will have a value of 0 (success) or 1 (failure) when I check it at a later point in the script.
Are there problems using $? for a command that may not have completed by the time $? is read, or is it only set once the command completes? So let's say I have 3 instances:
instance_1, instance_2, instance_3
if I ran the following:
pdweb start instance_1 &
let return_instance_1=$?
pdweb start instance_2 &
let return_instance_2=$?
pdweb start instance_3 &
let return_instance_3=$?
would return_instance_[1|2|3] have the correct values if the instances take unequal amounts of time to start? If instance_3 finishes before instance_1, for example, will the result of instance_3 still end up in return_instance_3?
Basically, I'm trying to figure out how the shell treats an asynchronous command with regard to its exit status.
Thanks in advance
No; an exit status only exists once the command finishes. (That's why it's called an "exit" status.) Immediately after cmd &, $? holds the status of launching the background job (effectively always 0), not the command's eventual exit status. If you successfully spawned a service and it is up and running, it does not yet have an exit status.
If I have guessed correctly what you are trying to accomplish, you could record the value of $! (the PID of the most recently started background job) after starting each instance, wait a "reasonable" time (a few seconds?), and check that the processes you started are still running. If they have terminated, there was a problem.
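Alternatively, if pdweb start itself returns once the startup attempt is done, you can record each PID from $! and then wait on it to collect the real exit status; here is a sketch under that assumption (the loop structure is not from the original script):

# Start each instance in the background and record its PID from $!.
for APP in instance_1 instance_2 instance_3; do
    pdweb start "${APP}" >> "${LOGDIR}/${APP}.txt" 2>&1 &
    eval "pid_${APP}=$!"
done

# 'wait <pid>' blocks until that process exits and returns its exit
# status, so each return_* variable receives the real result.
for APP in instance_1 instance_2 instance_3; do
    eval "pid=\${pid_${APP}}"
    wait "${pid}"
    eval "return_${APP}=$?"
done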
