Record shellcheck findings in Jenkins - shell

I'm looking for a way to record violation findings of shellcheck in my Jenkins Pipeline script. I was not able to find something so far. For other tools (Java, Python), I'm using Warnings Next Generation, but it does not seem to support shellcheck, yet. I'd like to have the violations visualized within my Jenkins Job dashboard. Does anyone have experience with that? Or perhaps a ready to use custom tool for Warnings NG?

I did find a feasible solution myself. As suggested in the comments, shellcheck offers a checkstyle output format, which can be parsed and visualized with Warnings NG. The following Pipeline stage definition works fine.
stage('Analyze') {
    steps {
        catchError(buildResult: 'SUCCESS') {
            sh """#!/bin/bash
            # The null command `:` always returns exit code 0, so this step does not fail and the following steps still run.
            shellcheck -f checkstyle *.sh > shellcheck.xml || :
            """
            recordIssues(tools: [checkStyle(pattern: 'shellcheck.xml')])
        }
    }
}
Running the build generates a nice trend diagram on the job dashboard.

Running shellcheck on all files and merging the output into a single XML file didn't play well with recordIssues in my case.
I had to create an individual report for each source file to make it work.
stage('Shellcheck') {
    steps {
        catchError(
            buildResult: hudson.model.Result.SUCCESS.toString(),
            stageResult: hudson.model.Result.UNSTABLE.toString(),
            message: "shellcheck error detected, but allowing job to continue!")
        {
            sh '''
            # shellcheck with all files in a single xml doesn't play well with the Jenkins report
            ret=0
            for file in $(grep -rl '^#!/.*/bash' src); do
                echo shellcheck ${file}
                mkdir -p .checkstyle/${file}/
                shellcheck -f checkstyle ${file} > .checkstyle/${file}/shellcheck.xml || (( ret+=$? ))
            done
            exit ${ret}
            '''
        } //catchError
    } //steps
    post {
        always {
            recordIssues(tools: [checkStyle(pattern: '.checkstyle/**/shellcheck.xml')])
        }
    } //post
} //stage

Related

Terraform GCP Instance Metadata Startup Script Issue

I've been working with Terraform, v0.15.4, for a few weeks now, and have gotten to grips with most of the lingo. I'm currently trying to create a cluster of RHEL 7 instances dynamically on GCP, and have, for the most part, got it to run okay.
I'm at the point of deploying an instance with certain metadata passed along to it for use in scripts built into the machine image for configuration thereafter. This metadata is typically just passed via an echo into a text file, which the scripts then pickup as required.
It's very simple: echo "STUFF" > file. Alas, I am hitting the same issue over and over and it's driving me insane. I've Googled around for ages, but all I can find are examples of exactly what I'm doing; the only difference is that theirs works and mine doesn't. So hopefully I can get some help here.
My 'makes it half-way' code is as follows:
resource "google_compute_instance" "GP_Master_Node" {
...
metadata_startup_script = <<-EOF
echo "hello
you" > /test.txt
echo "help
me" > /test2.txt
EOF
Now the instance with this does create successfully, but when I look on the instance I get one file called '/test.txt?' (or if I 'ls' the file, it shows as '/test.txt^M') and no second file. I can run any command instead of echo, and while the first finishes, the second (and later ones) do not. Why? What on earth is causing that?
I also found the following code, but it doesn't work for me at all; it fails with the error 'Blocks of type "metadata" are not expected here.'
resource "google_compute_instance" "GP_Master_Node" {
...
metadata {
startup-script = "echo test > /test.txt"
}
Okay! Simple answer to what was, in hindsight, a silly question (sort of). The file was somehow formatted in DOS, meaning the script required a line continuation character to run correctly (specifically \ at the end of each individual command). Code as follows:
resource "google_compute_instance" "GP_Master_Node" {
...
metadata_startup_script = <<-EOF
echo "hello
you" > /test.txt \
echo "help
me" > /test2.txt \
echo "example1" > /test3.txt \
echo "and so on..." > /final.txt
EOF
However, what also fixed my issue was simply 'refreshing' the file (there is probably a proper word for this). I created a brand new file using touch, 'more'd the original file contents to screen, and then copy-pasted them into the new one. On save the file was no longer DOS-formatted, and when I ran terraform the code worked as expected without requiring the line continuation characters at the end of commands.
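If you'd rather convert the line endings directly instead of recreating the file by hand, a small shell sketch like the following should also do the trick (main.tf is a placeholder file name; dos2unix, where installed, does the same job):
# Strip the DOS carriage returns (^M) that break the startup script
tr -d '\r' < main.tf > main.tf.unix && mv main.tf.unix main.tf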
Thank you to the commenters for the help :)

How do I exit immediately in conditional command? [duplicate]

I am making a presubmit script. It looks like this:
function presubmit() {
    gradle test android
    gradle test ios
    gradle test server
    git push origin master
}
I want the function to exit, if any of the tests fail so it won't push a bug to git. How?
1. Use a subshell ( .. ) with set -e; to make it more concise, you can do this:
build() {( set -e  # Fail early
    build_cmd_step_1
    build_cmd_step_2
    build_cmd_step_3
    ...
)}
Then, the function will fail on the first failure and you can intercept the exit status:
build
exit_status=$?
if [ ${exit_status} -ne 0 ]; then
    echo "We have error - build failed!"
    exit "${exit_status}"
fi
2. Alternatively, the && \ chaining inside a function also works (https://stackoverflow.com/a/51913013/1375784), though it can become unwieldy in a bigger function.
Both methods can be good, depending on your use case (in some cases using subshells may cause some unwanted side effects).
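For example, one such side effect is that variable assignments made inside the subshell do not survive the call; a minimal sketch using a subshell-bodied function, same idea as the {( ... )} pattern above:
build() (                    # function body is a subshell
    set -e
    status="success"         # assignment happens inside the subshell
)
build
echo "${status:-unset}"      # prints "unset": the assignment did not propagate to the caller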
The way I do it is to add && \ after every command in the function (except the last one).
function presubmit() {
    gradle test android && \
    gradle test ios && \
    gradle test server && \
    git push origin master
}
I would make the script more granular:
#!/bin/bash
function test() {
    gradle test android
    gradle test ios
    gradle test server
}
function push() {
    git push origin master
}
# this subshell runs similar to a try/catch block
(
    # set -e makes the subshell exit on the first error inside test or push
    set -e
    test
    push
)
# catch errors with this if
exit_status=$?
if [ ${exit_status} -ne 0 ]; then
    echo "We have error"
    exit ${exit_status}
fi
We track errors only inside test and push. You can add more actions outside of the subshell where test and push run. This way you can also create different scopes for errors (think of it as try/catch).
You can do this:
# declare a wrapper function for gradle
gradle() {
    command gradle "$@" || exit 1
}
presubmit() {
    gradle test android
    gradle test ios
    gradle test server
    git push origin master
}
declare -xf presubmit gradle
Call the function in a subshell as:
( presubmit )
Usually when I call a function and want an error message incase it fails I do this:
presubmit || { echo 'presubmit failed' ; exit 1; }
The || operator runs the command on its right only if the command on its left fails (returns a non-zero status).
Hope this helps :)
Others have given ways that map one to one to your case, but I think a more generic view is better. Also, using || and && for this is a cryptic way of writing scripts (read: prone to ending up with bugs).
I think the following is much easier to work with long term:
function presubmit() {
    if ! gradle test android
    then
        return 1
    fi
    if ! gradle test ios
    then
        return 1
    fi
    if ! gradle test server
    then
        return 1
    fi
    git push origin master
}
The exit status of the last command is returned by the function, so we do not need an if/then there.
In your specific case, to avoid the duplication, you could use a for loop like so:
function presubmit() {
    for name in android ios server
    do
        if ! gradle test ${name}
        then
            return 1
        fi
    done
    git push origin master
}
Now, you may instead want to look at a pre-push hook which would probably be much better since whether you run your script or not, the push won't happen unless the hook succeeds.
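For reference, a pre-push hook is just an executable script at .git/hooks/pre-push; git aborts the push whenever it exits non-zero. A minimal sketch reusing the tests from the question (assuming gradle is on the PATH):
#!/bin/bash
# .git/hooks/pre-push -- make it executable with: chmod +x .git/hooks/pre-push
set -e
gradle test android
gradle test ios
gradle test server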

bats - how can i echo the file name in a bats script for reporting

I have some bats scripts that I run to test some functionality
how can I echo the bats file name in the script?
my bats script looks like:
#!/usr/bin/env bats
load test_helper
echo $BATS_TEST_FILENAME
#test "run cloned mission" {
blah blah blah
}
in order for my report to appear as:
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS xxx
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
---- TEST NAME IS yyy
✓ run cloned mission
✓ run cloned mission
✓ addition using bc
but got the error
2: syntax error:
operand expected (error token is ".bats
2")
what is the correct way to do it?
I don't want to change the test names for this, only to echo the filename between the different test files.
Thanks.
TL;DR
Just output the file name from the setup function using a combination of prefixing the message with # and redirecting it to fd3 (documented in the project README).
#!/usr/bin/env bats
setup() {
    if [ "${BATS_TEST_NUMBER}" = 1 ]; then
        echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
    fi
}
@test "run cloned mission" {
    blah blah blah
}
All your options
Just use BASH
The simplest solution is to just iterate all test files and output the filename yourself:
for file in $(find ./ -name '*.bats'); do
    echo "--- TEST NAME IS ${file}"
    bats "${file}"
done
The downside of this solution is that you lose the summary at the end. Instead a summary will be given after each single file.
Use the setup function
The simplest solution within BATS is to output the file name from a setup function. I think this is the solution you are after.
The code looks like this:
setup() {
    if [ "${BATS_TEST_NUMBER}" = 1 ]; then
        echo "# --- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" >&3
    fi
}
A few things to note:
The output MUST begin with a hash #
There MUST be a space after the hash
The output MUST be redirected to file descriptor 3 (i.e. >&3)
A check is added to only output the file name once (for the first test)
The downside here is that the output might confuse people as it shows up in red.
Use a skipped #test
The next solution would be to just add the following as the first test in each file:
#test "--- TEST NAME IS $(basename ${BATS_TEST_FILENAME})" {
skip ''
}
The downside here is that it adds to the number of skipped tests...
Use an external helper function
The only other solution I can think of would be to create a test helper that lives in global scope and keeps tracks of its state.
Such code would look something like this:
output-test-name-helper.bash
#!/usr/bin/env bash
create_tmp_file() {
local -r fileName="$(basename ${BATS_TEST_FILENAME})"
if [[ ! -f "${BATS_TMPDIR}/${fileName}" ]];then
touch "${BATS_TMPDIR}/${fileName}"
echo "---- TEST NAME IS ${fileName}" >&2
fi
}
remove_tmp_file() {
rm "${BATS_TMPDIR}/$(basename ${BATS_TEST_FILENAME})"
}
trap remove_tmp_file EXIT
create_tmp_file
Which could then be loaded in each test:
#!/usr/bin/env bats
load output-test-name-helper
#test "run cloned mission" {
return 0
}
The major downside here is that there is no guarantee where the output will end up.
Adding output from outside the @test, setup and teardown functions can lead to unexpected results.
Such code will also be called (at least) once for every test, slowing down execution.
Open a pull-request
As a last resort, you could patch the code of BATS yourself, open a pull-request on the BATS repository and hope this functionality will be supported natively by BATS.
Conclusion
Life is a bunch of tradeoffs. Pick a solution that most closely fits your needs.
I've figured out a way to do this, but it requires you changing how you handle your individual setup in each file.
Create a helper file that defines a setup function that does as Potherca described above:
global.bash:
test_setup() { return 0; }
setup() {
(($BATS_TEST_NUMBER==1)) \
&& echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
test_setup
}
Then in your test, instead of calling setup you would just load 'global'.
If you need to create a setup for a specific file, then instead of creating a setup function, you'd create a test_setup function.
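A test file using that helper might then look like this (a minimal sketch; the file-specific test_setup and the test body are placeholders):
#!/usr/bin/env bats
load 'global'
# optional per-file setup; the shared setup() in global.bash calls this
test_setup() {
    mkdir -p "${BATS_TMPDIR}/my-fixtures"   # placeholder setup work
}
@test "example test" {
    run true
    [ "$status" -eq 0 ]
}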
Putting the echo in setup outputs the file name after the test name.
What I wound up doing is adding the file name to the test name itself:
test "${BATS_TEST_FILENAME##*/}: should …" {
…
}
Also, if going the setup route, the condition can be avoided with:
function setup() {
    echo "# --- $(basename "$BATS_TEST_FILENAME")" >&3
    function setup() {
        …
    }
    setup
}

How to mark a build unstable in Jenkins when running shell scripts

In a project I'm working on, we are using shell scripts to execute different tasks. Some are sh/bash scripts that run rsync, and some are PHP scripts. One of the PHP scripts is running some integration tests that output to JUnit XML, code coverage reports, and similar.
Jenkins is able to mark the jobs as successful / failed based on exit status. In PHP, the script exits with 1 if it has detected that the tests failed during the run. The other shell scripts run commands and use the exit codes from those to mark a build as failed.
// :: End of PHP script:
// If any tests have failed, fail the build
if ($build_error) exit(1);
In Jenkins Terminology, an unstable build is defined as:
A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.
How can I get Jenkins to mark a build as unstable instead of only success / failed when running shell scripts?
Modern Jenkins versions (since 2.26, October 2016) solved this: it's just an advanced option of the Execute shell build step!
You can simply choose and set an arbitrary exit value; if it matches, the build will be marked unstable. Just pick a value that is unlikely to be returned by a real process in your build.
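For example, if that advanced option is configured with 42 as the unstable exit code (an arbitrary choice), the shell side needs nothing more than a sketch like this (run-tests.sh is a placeholder):
# Execute shell build step; 42 is assumed to be configured as the "unstable" return value
if ! ./run-tests.sh; then
    exit 42    # Jenkins marks the build UNSTABLE instead of FAILED
fi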
It can be done without printing magic strings and using TextFinder. Here's some info on it.
Basically you need a .jar file from http://yourserver.com/cli available in shell scripts, then you can use the following command to mark a build unstable:
java -jar jenkins-cli.jar set-build-result unstable
To mark build unstable on error, you can use:
failing_cmd cmd_args || java -jar jenkins-cli.jar set-build-result unstable
The problem is that jenkins-cli.jar has to be available from shell script. You can either put it in easy-to-access path, or download in via job's shell script:
wget ${JENKINS_URL}jnlpJars/jenkins-cli.jar
Use the Text-finder plugin.
Instead of exiting with status 1 (which would fail the build), do:
if ($build_error) print("TESTS FAILED!");
Then in the post-build actions enable the Text Finder, set the regular expression to match the message you printed (TESTS FAILED!), and check the "Unstable if found" checkbox under that entry.
You should use Jenkinsfile to wrap your build script and simply mark the current build as UNSTABLE by using currentBuild.result = "UNSTABLE".
stage {
    status = /* your build command goes here */
    if (status == "MARK-AS-UNSTABLE") {
        currentBuild.result = "UNSTABLE"
    }
}
You should also be able to use Groovy and do what Text Finder did: marking a build as unstable with the Groovy Postbuild plugin.
if (manager.logContains("Could not login to FTP server")) {
    manager.addWarningBadge("FTP Login Failure")
    manager.createSummary("warning.gif").appendText("<h1>Failed to login to remote FTP Server!</h1>", false, false, false, "red")
    manager.buildUnstable()
}
Also see Groovy Postbuild Plugin
In my job script, I have the following statements (this job only runs on the Jenkins master):
# This is the condition test I use to set the build status as UNSTABLE
if [ ${PERCENTAGE} -gt 80 -a ${PERCENTAGE} -lt 90 ]; then
    echo WARNING: disc usage percentage above 80%
    # Download the Jenkins CLI JAR:
    curl -o jenkins-cli.jar ${JENKINS_URL}/jnlpJars/jenkins-cli.jar
    # Set build status to unstable
    java -jar jenkins-cli.jar -s ${JENKINS_URL}/ set-build-result unstable
fi
You can see this and a lot more information about setting build statuses on the Jenkins wiki: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
Configure the PHP build to produce an XML JUnit report:
<phpunit bootstrap="tests/bootstrap.php" colors="true" >
  <logging>
    <log type="junit" target="build/junit.xml"
         logIncompleteSkipped="false" title="Test Results"/>
  </logging>
  ....
</phpunit>
Finish the build script with status 0:
...
exit 0;
Add the post-build action Publish JUnit test result report for Test report XMLs. This plugin will change a Stable build to Unstable when tests are failing.
**/build/junit.xml
Add the Jenkins Text Finder plugin with console output scanning and the options unchecked. This plugin fails the whole build on a fatal error:
PHP Fatal error:
Duplicating my answer from here because I spent some time looking for this.
This is now possible in newer versions of Jenkins; you can do something like this:
#!/usr/bin/env groovy
properties([
    parameters([string(name: 'foo', defaultValue: 'bar', description: 'Fails job if not bar (unstable if bar)')]),
])
stage('Stage 1') {
    node('parent') {
        def ret = sh(
            returnStatus: true, // This is the key bit!
            script: '''if [ "$foo" = bar ]; then exit 2; else exit 1; fi'''
        )
        // ret can be any number/range, does not have to be 2.
        if (ret == 2) {
            currentBuild.result = 'UNSTABLE'
        } else if (ret != 0) {
            currentBuild.result = 'FAILURE'
            // If you do not manually error, the status will be set to "failed", but the
            // pipeline will still run the next stage.
            error("Stage 1 failed with exit code ${ret}")
        }
    }
}
The Pipeline Syntax generator shows you this in the advanced tab.
I find the most flexible way to do this is by reading a file in the Groovy Postbuild plugin.
import hudson.FilePath
import java.io.InputStream

def build = Thread.currentThread().executable
String unstable = null
if (build.workspace.isRemote()) {
    channel = build.workspace.channel
    fp = new FilePath(channel, build.workspace.toString() + "/build.properties")
    InputStream is = fp.read()
    unstable = is.text.trim()
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/build.properties"))
    InputStream is = fp.read()
    unstable = is.text.trim()
}
manager.listener.logger.println("Build status file: " + unstable)
if (unstable.equalsIgnoreCase('true')) {
    manager.listener.logger.println('setting build to unstable')
    manager.buildUnstable()
}
If the file contents are 'true' the build will be set to unstable. This will work on the local master and on any slaves you run the job on, and for any kind of scripts that can write to disk.
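For example, a shell build step only has to drop that flag file into the workspace when it detects a problem (a minimal sketch; warning_condition is a placeholder for your actual check):
# In the job's shell step: write the flag file that the Groovy post-build script reads
echo "false" > build.properties
if ! warning_condition; then
    echo "true" > build.properties
fi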
I thought I would post another answer for people that might be looking for something similar.
In our build job we have cases where we want the build to continue but be marked as unstable. For us, it relates to version numbers.
So, I wanted to set a condition on the build and set the build to unstable if that condition is met.
I used the Conditional step (single) option as a build step.
Then I used Execute system Groovy script as the build step that would run when that condition is met.
I used Groovy Command and set the script to the following:
import hudson.model.*
def build = Thread.currentThread().executable
build.@result = hudson.model.Result.UNSTABLE
return
That seems to work quite well.
I stumbled upon the solution here
http://tech.akom.net/archives/112-Marking-Jenkins-build-UNSTABLE-from-environment-inject-groovy-script.html
In addition to all the other answers, Jenkins also allows the use of the unstable() method (which is, in my opinion, clearer).
This method can be used with a message parameter which describes why the build is unstable.
In addition to this, you can use the returnStatus of your shell step (bat or sh) to enable this.
For example:
def status = bat(script: "<your command here>", returnStatus: true)
if (status != 0) {
    unstable("unstable build because script failed")
}
Of course, you can make something with more granularity depending on your needs and the return status.
Furthermore, for raising errors, you can also use warnError() in place of unstable(). It will indicate your build as failed instead of unstable, but the syntax is the same.
The TextFinder is good only if the job status hasn't been changed from SUCCESS to FAILED or ABORTED.
For such cases, use a groovy script in the PostBuild step:
errpattern = ~/TEXT-TO-LOOK-FOR-IN-JENKINS-BUILD-OUTPUT.*/
manager.build.logFile.eachLine { line ->
    errmatcher = errpattern.matcher(line)
    if (errmatcher.find()) {
        manager.build.@result = hudson.model.Result.NEW-STATUS-TO-SET
    }
}
See more details in a post I wrote about it:
http://www.tikalk.com/devops/JenkinsJobStatusChange/
As a lighter alternative to the existing answers, you can set the build result with a simple HTTP POST to access the Groovy script console REST API:
curl -X POST \
--silent \
--user "$YOUR_CREDENTIALS" \
--data-urlencode "script=Jenkins.instance.getItemByFullName( '$JOB_NAME' ).getBuildByNumber( $BUILD_NUMBER ).setResult( hudson.model.Result.UNSTABLE )" $JENKINS_URL/scriptText
Advantages:
no need to download and run a huge jar file
no kludges for setting and reading some global state (console text, files in workspace)
no plugins required (besides Groovy)
no need to configure an extra build step that is superfluous in the PASSED or FAILURE cases.
For this solution, your environment must meet these conditions:
The Jenkins REST API can be accessed from the slave.
The slave must have access to credentials that allow access to the Jenkins Groovy script REST API.
If you want to use a declarative approach, I suggest using code like this.
pipeline {
    stages {
        // create a separate stage only for the problematic command
        stage("build") {
            steps {
                sh "command"
            }
            post {
                failure {
                    // set status
                    unstable 'rsync was unsuccessful'
                }
                always {
                    echo "Do something at the end of stage"
                }
            }
        }
    }
    post {
        always {
            echo "Do something at the end of pipeline"
        }
    }
}
In case you want to keep everything in one stage, use catchError:
pipeline {
    stages {
        // create a separate stage only for the problematic command
        stage("build") {
            steps {
                catchError(stageResult: 'UNSTABLE') {
                    sh "command"
                }
                sh "other command"
            }
        }
    }
}
One easy way to set a build as unstable is to run exit 13 in your "Execute shell" block (assuming 13 is configured as the unstable return code in the step's advanced options).
You can just call "exit 1", and the build will fail at that point and not continue. I wound up making a passthrough make function to handle it for me, and call safemake instead of make for building:
function safemake {
    make "$@"
    if [ "$?" -ne 0 ]; then
        echo "ERROR: BUILD FAILED"
        exit 1
    else
        echo "BUILD SUCCEEDED"
    fi
}
