List Branches With Active Choices Using Groovy Script - bash

I'm trying to list GitHub branches for a parameterized Jenkins job using the Active Choices plugin. I tried the following Groovy scripts, but to no avail - no branch is retrieved/listed in the drop-down.
1.
def process = ("ssh-agent bash -c 'ssh-add /home/ubuntu/.ssh/id_rsa; git ls-remote -t -h git#github.com:username/repository.git'").execute()
return process.text.readLines().collect {
it.split()[1].replaceAll('refs/heads/', '').replaceAll('refs/tags/', '').replaceAll("\\^\\{\\}", '')
}
2.
def command = 'ssh-add /home/ubuntu/.ssh/id_rsa; git ls-remote -t -h git@github.com:username/repository.git'
def process = ["ssh-agent", "bash", "-c", command].execute()
return process.text.readLines().collect {
    it.split()[1].replaceAll('refs/heads/', '').replaceAll('refs/tags/', '').replaceAll("\\^\\{\\}", '')
}
3.
def process = ["ssh-agent", "bash", "-c", "ssh-add", "/home/ubuntu/.ssh/id_rsa", "git", "ls-remote", "-t", "-h", "git#github.com:username/repository.git"].execute()
return process.text.readLines().collect {
it.split()[1].replaceAll('refs/heads/', '').replaceAll('refs/tags/', '').replaceAll("\\^\\{\\}", '')
}
I suspect this has something to do with how Groovy parses nested shell commands, but I am not sure.
Your help would be much appreciated.
Update
I tried script 2 in groovysh as suggested by @cfrick and it works perfectly fine. It seems that the issue has something to do with my Jenkins version, because the Extensible Choice Parameter plugin is having the same issue with my Groovy script.
I decided to do a workaround for my needs and dropped this whole thing.
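For anyone debugging the same setup: the scripts above silently discard stderr, so any SSH or git error never shows up in Jenkins. A minimal Groovy sketch (assuming the same command and key path as above) that surfaces stderr and the exit code in the drop-down:
def command = 'ssh-add /home/ubuntu/.ssh/id_rsa; git ls-remote -t -h git@github.com:username/repository.git'
def process = ["ssh-agent", "bash", "-c", command].execute()
def out = new StringBuilder()
def err = new StringBuilder()
// Wait for the process and capture both streams instead of reading only stdout.
process.waitForProcessOutput(out, err)
if (process.exitValue() != 0) {
    // Show the failure (e.g. host-key prompt, key unreadable by the Jenkins user) as the only choice.
    return ["ERROR: " + err.toString().trim()]
}
return out.toString().readLines().collect {
    it.split()[1].replaceAll('refs/heads/', '').replaceAll('refs/tags/', '').replaceAll("\\^\\{\\}", '')
}
Seeing the stderr text usually points at a missing known_hosts entry or key permissions for the user the Jenkins master runs as.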

Related

How do I execute a Move script with the Aptos CLI?

Let's say I have a Move script like this:
script {
    use std::signer;
    use aptos_framework::aptos_account;
    use aptos_framework::aptos_coin;
    use aptos_framework::coin;

    fun main(src: &signer, dest: address, desired_balance: u64) {
        let src_addr = signer::address_of(src);
        let balance = coin::balance<aptos_coin::AptosCoin>(src_addr);
        if (balance < desired_balance) {
            aptos_account::transfer(src, dest, desired_balance - balance);
        };
        addr::my_module::do_nothing();
    }
}
This is calling functions on the aptos_coin.move module, which is deployed on chain. What it does isn't so important for this question, but in short, it checks whether the signer's balance is less than desired_balance and, if so, transfers the difference to the destination account.
Notice also how it calls a function in a Move module I've defined:
module addr::my_module {
    public entry fun do_nothing() { }
}
Where do I put these files? Do I need a Move.toml? How do I run my script with the CLI?
Let's run through how to execute a Move script with a step-by-step example; this should answer all your questions.
Make a new directory to work from:
mkdir testing
cd testing
Setup the Aptos CLI:
aptos init
The CLI will ask you which network you want to work with (e.g. devnet, testnet, mainnet). It will also ask you for your private key (which looks like this: 0xf1adc8d01c1a890f17efc6b08f92179e6008d43026dd56b71e7b0d9b453536be), or it can generate a new one for you, as part of setting up your account.
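If you prefer to avoid the interactive prompts, the setup can also be scripted. The flags below are an assumption based on recent CLI versions, so check aptos init --help on yours:
# Non-interactive variant of the step above; flag names assumed, verify with `aptos init --help`.
aptos init --network devnet --private-key 0xf1adc8d01c1a890f17efc6b08f92179e6008d43026dd56b71e7b0d9b453536be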
From here, initialize a new Move project:
aptos move init --name my_script
Now you need to make a file for your script. So, take the script you created above, and put it in sources/, e.g. like this:
testing/
    Move.toml
    sources/
        top_up.move
In other words, top_up.move should contain the script you included in the question.
Now do the same thing with the Move module, leaving you with this:
testing/
    Move.toml
    sources/
        top_up.move
        my_module.move
Now you can compile the script:
$ aptos move compile --named-addresses addr=81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e
Compiling, may take a little while to download git dependencies...
INCLUDING DEPENDENCY AptosFramework
INCLUDING DEPENDENCY AptosStdlib
INCLUDING DEPENDENCY MoveStdlib
BUILDING my_script
{
"Result": []
}
Note how I use the --named-addresses argument. This is necessary because in your code, you refer to this named address called addr. The compiler needs to know what this refers to. Instead of using this CLI argument, you could put something like this in your Move.toml:
[addresses]
addr = "b078d693856a65401d492f99ca0d6a29a0c5c0e371bc2521570a86e40d95f823"
Finally you can run the compiled script:
$ aptos move run-script --compiled-script-path build/my_script/bytecode_scripts/main.mv --args address:b078d693856a65401d492f99ca0d6a29a0c5c0e371bc2521570a86e40d95f823 --args u64:5
Do you want to submit a transaction for a range of [17000 - 25500] Octas at a gas unit price of 100 Octas? [yes/no] >
yes
{
"Result": {
"transaction_hash": "0x655f839a45c5f14ba92590c321f97c3c3f9aba334b9152e994fb715d5648db4b",
"gas_used": 178,
"gas_unit_price": 100,
"sender": "81e2e2499407693c81fe65c86405ca70df529438339d9da7a6fc2520142b591e",
"sequence_number": 53,
"success": true,
"timestamp_us": 1669811892262502,
"version": 370133122,
"vm_status": "Executed successfully"
}
}
Note that the path of the compiled script is under build/my_script/, not build/top_up/. This is because it uses the name of the project contained in Move.toml, which is my_script from when we ran aptos move init --name my_script.
So to answer one of your questions: yes, you need a Move.toml; you can't currently just execute a script file on its own with the CLI. The compiler needs it to determine which Aptos framework to use, for example.
See the code used in this answer here: https://github.com/banool/move-examples/tree/main/run_script.
See also how to do this with the Rust SDK instead of the CLI: How do I execute a Move script on Aptos using the Rust SDK?.
P.S. There is a more streamlined way to execute a script. Instead of running aptos move compile and then aptos move run-script --compiled-script-path separately, you can just do this:
$ aptos move run-script --script-path sources/top_up.move --args address:b078d693856a65401d492f99ca0d6a29a0c5c0e371bc2521570a86e40d95f823 --args u64:5
This will do both steps with a single CLI command. Note however that there are some major footguns with this approach, see https://github.com/aptos-labs/aptos-core/issues/5733. So I'd recommend using the previous two-step approach for now.

Record shellcheck findings in Jenkins

I'm looking for a way to record violation findings of shellcheck in my Jenkins Pipeline script. I was not able to find something so far. For other tools (Java, Python), I'm using Warnings Next Generation, but it does not seem to support shellcheck, yet. I'd like to have the violations visualized within my Jenkins Job dashboard. Does anyone have experience with that? Or perhaps a ready to use custom tool for Warnings NG?
I did find a feasible solution myself. As suggested in the comments, shellcheck offers a checkstyle output format, which can be parsed and visualized with Warnings NG. The following pipeline stage definition works fine.
stage('Analyze') {
    steps {
        catchError(buildResult: 'SUCCESS') {
            sh """#!/bin/bash
                # The null command `:` always exits 0, so the step continues even when shellcheck finds issues.
                shellcheck -f checkstyle *.sh > shellcheck.xml || :
            """
            recordIssues(tools: [checkStyle(pattern: 'shellcheck.xml')])
        }
    }
}
Running the build generates a nice trend diagram on the job page.
Running shellcheck for all files and merging the output into a single XML file didn't play well with recordIssues in my case.
I had to create an individual report for each source file to make it work.
stage('Shellcheck') {
    steps {
        catchError(
            buildResult: hudson.model.Result.SUCCESS.toString(),
            stageResult: hudson.model.Result.UNSTABLE.toString(),
            message: "shellcheck error detected, but allowing job to continue!")
        {
            sh '''
                # shellcheck with all files in a single xml doesn't play well with the jenkins report
                ret=0
                for file in $(grep -rl '^#!/.*/bash' src); do
                    echo shellcheck ${file}
                    mkdir -p .checkstyle/${file}/
                    shellcheck -f checkstyle ${file} > .checkstyle/${file}/shellcheck.xml || (( ret+=$? ))
                done
                exit ${ret}
            '''
        } // catchError
    } // steps
    post {
        always {
            recordIssues(tools: [checkStyle(pattern: '.checkstyle/**/shellcheck.xml')])
        }
    } // post
} // stage

bash use ruby code and save as variable

My end goal is to parse a Visual Studio Team Services SSH git URL and use it to clone origin and my fork. I'm on Windows and I use Git Bash; I've made a few shell scripts to help me clone things. Before, when we used gitweb, it was easy for me to parse: I could run git_clone myproject, git_clone myproject.git, or git_clone git://ourgitserver.ourcompany.com/myproject.git, and the script would clone the above as origin and also add a remote with my user name in the form of ssh://git@ourgitserver.ourcompany.com/myproject.git (and it handled namespaces well too). Well, we started using VSTS and I want to do the same thing.
The git_clone method changed a few times because of how people would tell/IM/email me the link for the git project. I wanted to be able to just copy and paste it with minimal changes. Thus far I have a simple git_vsts_clone which requires two parameters: the name of the project and the name of the repository. (What we called a namespace in gitweb maps to a VSTS project, and what we called a project maps to a VSTS repository.) For the time being I'd like to take either the SSH URL or the two parameters and do all the git things. In brief, this is what I have so far:
function git_vsts_clone {
    local projectName=$1
    local repositoryName=$2
    if MISSING_ARG "usage: git_vsts_clone <project name> <repositoryName>\n projectName must be provided\n repositoryName must be provided" $projectName; then return 1; fi;
    if MISSING_ARG "usage: git_vsts_clone <project name> <repositoryName>\n projectName must be provided\n repositoryName must be provided" $repositoryName; then return 1; fi;
    local gitServer="ssh://mycompany@vs-ssh.visualstudio.com:22/${projectName}/_ssh/${repositoryName}"
    local clonePath="/c/git/${projectName}/${repositoryName}"
    local user_name=${USER:-${USERNAME}}
    if [ ! -d $clonePath ]; then
        INFO "Cloning $gitServer"
        git clone $gitServer $clonePath || { ERROR "ERROR cloning $gitServer"; return 1;}
        pushd $clonePath
        INFO "Updating Submodules (gsui)"
        git submodule update --init
        INFO "adding user fork ${user_name}"
        git remote add $user_name $gitServer.$user_name
        git fetch $user_name
        popd
        INFO "Opening $clonePath in vscode"
    fi
    code $clonePath
}
Last time I tried to parse a URL in bash I struggled with splitting an item into an array, so I decided I'd try to use Ruby (since it has an easy split method). I've tried things like:
$ gitServer='ssh://mycompany@vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'
$ ruby -e "a = '$gitServer'; b=a.split('/'); p b"
["ssh:", "", "mycompany@vs-ssh.visualstudio.com:22", "someProject", "_ssh", "myRepo"]
$ foo=`ruby -e "a = '$gitServer'; b=a.split('/'); p b"`
$ echo "${foo[3]}"
$ echo "${foo[0]}"
["ssh:", "", "mycompany@vs-ssh.visualstudio.com:22", "someProject", "_ssh", "myRepo"]
So I dunno. I don't have to use Ruby, it just seemed like an easy solution... now not so much. So how can I get the project and repository name out of the URL in either bash, or bash using Ruby?
Here is a way you can get the values into environment variables using Ruby:
Assuming you have a URL environment variable containing the git repo url, such as created by the line below:
export URL='ssh://mycompany#vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'
You can do the following to put your desired values into other environment variables:
export PROJECT=`ruby -e "puts ENV['URL'].split('/')[3]"`
export REPO_NAME=`ruby -e "puts ENV['URL'].split('/')[5]"`
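If you'd rather skip Ruby entirely, the same split works in pure bash by reading the URL into an array with a custom IFS. A minimal sketch, using the example URL from the question:
#!/usr/bin/env bash
# Split the VSTS SSH URL on '/' into a bash array; the indices mirror Ruby's split.
url='ssh://mycompany@vs-ssh.visualstudio.com:22/someProject/_ssh/myRepo'
IFS='/' read -r -a parts <<< "$url"
project="${parts[3]}"      # someProject
repository="${parts[5]}"   # myRepo
echo "project=$project repository=$repository"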

Is it possible to execute Jenkins jobs from Powershell or Bash or Groovy scripts?

I have several separate, quite small Jenkins jobs.
Now I need to configure another job which, depending on the parameters selected by the user (probably through checkboxes?), will execute some of them.
I would like to start them from Powershell or Bash or Groovy script. Is it possible?
If you are using Groovy in a Postbuild/pipeline step, you can start the job via the Jenkins API.
For example, something like this for parameterless builds:
import hudson.model.*
Jenkins.instance.getItem("My Job").scheduleBuild(5)
And something like this for parameterized builds:
import hudson.model.*
Jenkins.instance.getItem("My Job").scheduleBuild( 5, new Cause.UpstreamCause( currentBuild ), new ParametersAction([ new StringParameterValue( "My Parameter Name", "My Parameter Value" ) ]));
You can also use the Jenkins REST API, for example by hitting the following URLs:
Parameterless:
curl -X POST JENKINS_URL/job/JOB_NAME/build
Parameterized:
curl -X POST JENKINS_URL/job/JOB_NAME/buildWithParameters?MyParameterName=MyParameterValue
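Note that on most real installations these endpoints require authentication. A sketch using a user name and API token (both placeholders; this assumes your Jenkins accepts basic auth with API tokens):
curl -X POST --user "myuser:my-api-token" "JENKINS_URL/job/JOB_NAME/buildWithParameters?MyParameterName=MyParameterValue"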
Sample:
import hudson.model.*

def actions = []
def plist = []
["ok":"ok", "label":"master"].each { k, v ->
    plist << new hudson.model.StringParameterValue(k, "$v", "")
}
actions.add(new hudson.model.ParametersAction(plist))
def future = Jenkins.instance.getItemByFullName("samples/testPipeline").scheduleBuild2(0, actions as hudson.model.Action[])
future.get().getLog()

How to mark a build unstable in Jenkins when running shell scripts

In a project I'm working on, we are using shell scripts to execute different tasks. Some are sh/bash scripts that run rsync, and some are PHP scripts. One of the PHP scripts is running some integration tests that output to JUnit XML, code coverage reports, and similar.
Jenkins is able to mark the jobs as successful / failed based on exit status. In PHP, the script exits with 1 if it has detected that the tests failed during the run. The other shell scripts run commands and use the exit codes from those to mark a build as failed.
// :: End of PHP script:
// If any tests have failed, fail the build
if ($build_error) exit(1);
In Jenkins Terminology, an unstable build is defined as:
A build is unstable if it was built successfully and one or more publishers report it unstable. For example if the JUnit publisher is configured and a test fails then the build will be marked unstable.
How can I get Jenkins to mark a build as unstable instead of only success / failed when running shell scripts?
Modern Jenkins versions (since 2.26, October 2016) solved this: it's just an advanced option for the Execute shell build step!
You can just choose and set an arbitrary exit value; if it matches, the build will be unstable. Just pick a value which is unlikely to be returned by a real process in your build.
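For example, if that advanced option (labelled something like "Exit code to set build unstable") is set to 42, a sketch of the shell step could be (run-tests.sh is a hypothetical stand-in for your real command):
#!/bin/bash
# Assumes the "Exit code to set build unstable" advanced option is set to 42.
if ! ./run-tests.sh; then    # hypothetical test runner
    exit 42                  # Jenkins maps this exit code to UNSTABLE instead of FAILURE
fi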
It can be done without printing magic strings and using TextFinder. Here's some info on it.
Basically you need the jenkins-cli.jar file (available from http://yourserver.com/cli) to be accessible from your shell scripts; then you can use the following command to mark a build unstable:
java -jar jenkins-cli.jar set-build-result unstable
To mark build unstable on error, you can use:
failing_cmd cmd_args || java -jar jenkins-cli.jar set-build-result unstable
The problem is that jenkins-cli.jar has to be available from the shell script. You can either put it in an easy-to-access path, or download it via the job's shell script:
wget ${JENKINS_URL}jnlpJars/jenkins-cli.jar
Use the Text-finder plugin.
Instead of exiting with status 1 (which would fail the build), do:
if ($build_error) print("TESTS FAILED!");
Then in the post-build actions enable the Text Finder, set the regular expression to match the message you printed (TESTS FAILED!), and check the "Unstable if found" checkbox under that entry.
You should use a Jenkinsfile to wrap your build script and simply mark the current build as UNSTABLE by using currentBuild.result = "UNSTABLE".
stage('Build') {
    def status = /* your build command goes here */
    if (status == "MARK-AS-UNSTABLE") {
        currentBuild.result = "UNSTABLE"
    }
}
You should also be able to use Groovy and do what Text Finder did: mark a build as unstable with the Groovy Postbuild plugin.
if (manager.logContains("Could not login to FTP server")) {
    manager.addWarningBadge("FTP Login Failure")
    manager.createSummary("warning.gif").appendText("<h1>Failed to login to remote FTP Server!</h1>", false, false, false, "red")
    manager.buildUnstable()
}
Also see Groovy Postbuild Plugin
In my job script, I have the following statements (this job only runs on the Jenkins master):
# This is the condition test I use to set the build status as UNSTABLE
if [ ${PERCENTAGE} -gt 80 -a ${PERCENTAGE} -lt 90 ]; then
    echo WARNING: disc usage percentage above 80%
    # Download the Jenkins CLI JAR:
    curl -o jenkins-cli.jar ${JENKINS_URL}/jnlpJars/jenkins-cli.jar
    # Set build status to unstable
    java -jar jenkins-cli.jar -s ${JENKINS_URL}/ set-build-result unstable
fi
You can see this and a lot more information about setting build statuses on the Jenkins wiki: https://wiki.jenkins-ci.org/display/JENKINS/Jenkins+CLI
Configure the PHP build to produce an XML JUnit report:
<phpunit bootstrap="tests/bootstrap.php" colors="true" >
<logging>
<log type="junit" target="build/junit.xml"
logIncompleteSkipped="false" title="Test Results"/>
</logging>
....
</phpunit>
Finish the build script with status 0:
...
exit 0;
Add the post-build action Publish JUnit test result report for "Test report XMLs". This plugin will change a stable build to unstable when tests are failing.
**/build/junit.xml
Add the Jenkins Text Finder plugin with console output scanning and the options unchecked. This plugin fails the whole build on a fatal error, e.g.:
PHP Fatal error:
Duplicating my answer from here because I spent some time looking for this:
This is now possible in newer versions of Jenkins; you can do something like this:
#!/usr/bin/env groovy
properties([
    parameters([string(name: 'foo', defaultValue: 'bar', description: 'Fails job if not bar (unstable if bar)')]),
])
stage('Stage 1') {
    node('parent') {
        def ret = sh(
            returnStatus: true, // This is the key bit!
            script: '''if [ "$foo" = bar ]; then exit 2; else exit 1; fi'''
        )
        // ret can be any number/range, does not have to be 2.
        if (ret == 2) {
            currentBuild.result = 'UNSTABLE'
        } else if (ret != 0) {
            currentBuild.result = 'FAILURE'
            // If you do not manually error the status will be set to "failed", but the
            // pipeline will still run the next stage.
            error("Stage 1 failed with exit code ${ret}")
        }
    }
}
The Pipeline Syntax snippet generator shows you this option under the advanced settings.
I find the most flexible way to do this is by reading a file in the Groovy Postbuild plugin.
import hudson.FilePath
import java.io.InputStream

def build = Thread.currentThread().executable
String unstable = null
if (build.workspace.isRemote()) {
    channel = build.workspace.channel
    fp = new FilePath(channel, build.workspace.toString() + "/build.properties")
    InputStream is = fp.read()
    unstable = is.text.trim()
} else {
    fp = new FilePath(new File(build.workspace.toString() + "/build.properties"))
    InputStream is = fp.read()
    unstable = is.text.trim()
}
manager.listener.logger.println("Build status file: " + unstable)
if (unstable.equalsIgnoreCase('true')) {
    manager.listener.logger.println('setting build to unstable')
    manager.buildUnstable()
}
If the file contents are 'true' the build will be set to unstable. This will work on the local master and on any slaves you run the job on, and for any kind of scripts that can write to disk.
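To complete the picture, the job's shell step only has to write that file into the workspace before the Groovy Postbuild script runs. A minimal sketch (run-integration-tests.sh is a hypothetical stand-in for whatever command decides the outcome):
#!/bin/bash
# Write the flag file that the Groovy Postbuild script above reads.
if ./run-integration-tests.sh; then
    echo "false" > build.properties
else
    echo "true" > build.properties
fi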
I thought I would post another answer for people that might be looking for something similar.
In our build job we have cases where we would want the build to continue, but be marked as unstable. For ours, it relates to version numbers.
So, I wanted to set a condition on the build and set the build to unstable if that condition is met.
I used the Conditional step (single) option as a build step.
Then I used Execute system Groovy script as the build step that would run when that condition is met.
I used "Groovy Command" and set the script to the following:
import hudson.model.*
def build = Thread.currentThread().executable
build.@result = hudson.model.Result.UNSTABLE
return
That seems to work quite well.
I stumbled upon the solution here
http://tech.akom.net/archives/112-Marking-Jenkins-build-UNSTABLE-from-environment-inject-groovy-script.html
In addition to all the other answers, Jenkins also allows the use of the unstable() method (which is, in my opinion, clearer).
This method can be used with a message parameter which describes why the build is unstable.
On top of this, you can use the returnStatus option of your shell step (bat or sh) to enable this.
For example:
def status = bat(script: "<your command here>", returnStatus: true)
if (status != 0) {
    unstable("unstable build because script failed")
}
Of course, you can make something with more granularity depending on your needs and the return status.
Furthermore, there is also warnError(), which takes a message in the same way: it wraps a block of steps and, if an error is raised inside that block, catches it and marks the build as unstable rather than failed.
The TextFinder is good only if the job status hasn't been changed from SUCCESS to FAILED or ABORTED.
For such cases, use a Groovy script in the post-build step:
errpattern = ~/TEXT-TO-LOOK-FOR-IN-JENKINS-BUILD-OUTPUT.*/
manager.build.logFile.eachLine { line ->
    errmatcher = errpattern.matcher(line)
    if (errmatcher.find()) {
        manager.build.@result = hudson.model.Result.NEW-STATUS-TO-SET
    }
}
See more details in a post I've written about it:
http://www.tikalk.com/devops/JenkinsJobStatusChange/
As a lighter alternative to the existing answers, you can set the build result with a simple HTTP POST to access the Groovy script console REST API:
curl -X POST \
--silent \
--user "$YOUR_CREDENTIALS" \
--data-urlencode "script=Jenkins.instance.getItemByFullName( '$JOB_NAME' ).getBuildByNumber( $BUILD_NUMBER ).setResult( hudson.model.Result.UNSTABLE )" $JENKINS_URL/scriptText
Advantages:
no need to download and run a huge jar file
no kludges for setting and reading some global state (console text, files in workspace)
no plugins required (besides Groovy)
no need to configure an extra build step that is superfluous in the PASSED or FAILURE cases.
For this solution, your environment must meet these conditions:
The Jenkins REST API can be accessed from the slave.
The slave must have access to credentials that allow it to access the Jenkins Groovy script REST API.
If you want to use a declarative approach, I suggest you use code like this.
pipeline {
    agent any
    stages {
        // create a separate stage only for the problematic command
        stage("build") {
            steps {
                sh "command"
            }
            post {
                failure {
                    // set status
                    unstable 'rsync was unsuccessful'
                }
                always {
                    echo "Do something at the end of stage"
                }
            }
        }
    }
    post {
        always {
            echo "Do something at the end of pipeline"
        }
    }
}
In case you want to keep everything in one stage, use catchError:
pipeline {
    agent any
    stages {
        // create a separate stage only for the problematic command
        stage("build") {
            steps {
                catchError(stageResult: 'UNSTABLE') {
                    sh "command"
                }
                sh "other command"
            }
        }
    }
}
One easy way to set a build as unstable is, in your "Execute shell" block, to run exit 13.
You can just call "exit 1", and the build will fail at that point and not continue. I wound up making a passthrough make function to handle it for me, and I call safemake instead of make for building:
function safemake {
    make "$@"
    if [ "$?" -ne 0 ]; then
        echo "ERROR: BUILD FAILED"
        exit 1
    else
        echo "BUILD SUCCEEDED"
    fi
}
