I have been running Fortify scans for some Java components. Below are the general steps I follow:
For a Java project:
mvn com.fortify.ps.maven.plugin:sca-maven-plugin:4.30:clean
mvn install -DskipTests -DSTABILITY_ID=1 -DRELEASE_NUMBER=0 -DBUILD_ID=1
mvn -Dfortify.sca.debug=true -Dfortify.sca.Xmx=1800M -Dfortify.sca.Xss=5M -DSTABILITY_ID=2 -DRELEASE_NUMBER=2 package com.fortify.ps.maven.plugin:sca-maven-plugin:4.30:translate
sourceanalyzer -b build_id -Xmx1800M -Xss4M -scan -f build_id_results.fpr -logfile scan.log -clobber-log -debug-verbose
After this, the FPR file is generated and uploaded to the server.
Now I have to do the same for a component that uses Gradle.
What commands would I have to use to generate the FPR files?
I still have to remove duplication, improve it a bit, and probably turn it into a plugin, but basically, try the following snippet.
/*
* Performs the Fortify security scan.
*
* 1) Runs source code translation.
* 2) Creates the export session file.
 * 3) Submits the export session file for processing via scp.
 *
 * Credentials and URL for the scp upload are obtained from the gradle.properties file
* (or can be passed from the command line through the -P switch).
* <ul>
* <li>fortifyUploadUsername</li>
* <li>fortifyUploadPassword</li>
* <li>fortifyUploadUrl</li>
* </ul>
*/
task fortify(group: 'fortify', description: 'Security analysis by HP Fortify') << {
    def fortifyBuildId = 'myProjectId'
    logger.debug "Running command: sourceanalyzer -b $fortifyBuildId -clean"
    exec {
        commandLine 'sourceanalyzer', '-b', fortifyBuildId, '-clean'
    }
    def classpath = configurations.runtime.asPath
    logger.debug "Running command: sourceanalyzer -b ${fortifyBuildId} -source ${sourceCompatibility} -cp $classpath src/**/*.java"
    exec {
        commandLine 'sourceanalyzer', '-b', fortifyBuildId, '-source', sourceCompatibility, '-cp', classpath, 'src/**/*.java'
    }
    def fortifyBuildFolder = 'build/fortify'
    new File(fortifyBuildFolder).mkdirs()
    def fortifyArtifactFileName = "$fortifyBuildId#${project.version}.mbs"
    def fortifyArtifact = "$fortifyBuildFolder/$fortifyArtifactFileName"
    logger.debug "Running command: sourceanalyzer -b ${fortifyBuildId} -build-label ${project.version} -export-build-session $fortifyArtifact"
    exec {
        commandLine 'sourceanalyzer', '-b', fortifyBuildId, '-build-label', project.version, '-export-build-session', "$fortifyArtifact"
    }
    logger.debug "Running command: sshpass -p <password> scp $fortifyArtifact <user>@$fortifyUploadUrl:$fortifyArtifactFileName"
    exec {
        commandLine 'sshpass', '-p', fortifyUploadPassword, 'scp', "$fortifyArtifact", "$fortifyUploadUsername@$fortifyUploadUrl:$fortifyArtifactFileName"
    }
}
{
"TEST_SCRIPTS":["test_1.py","test_2.py"],
"TEST_SCRIPTS1":"test_1.py;test_2.py"
}
I load this JSON file in my Jenkins pipeline using:
def load_config() {
    def config = readJSON file: "./test.json"
    return config
}
Now, I need a loop in the shell script that executes each Python file defined in TEST_SCRIPTS and TEST_SCRIPTS1.
stage('Test') {
    steps {
        script {
            config = load_config()
            sh """
            conda env create -n test_env_py37 -f conda.yaml
            conda activate test_env_py37
            # Below loop is not working. This env is huge, and mandatory for the code below to run
            for test_script in ${config.TEST_SCRIPTS};
            do
                python "\$test_script"
            done
            for test_script in ${config.TEST_SCRIPTS1};
            do
                python "\$test_script"
            done
            """
        }
    }
}
You can use a Groovy approach instead of a shell one, and do all the parsing logic using Groovy functionality.
Something like:
stage('Test') {
    steps {
        script {
            def config = readJSON file: "./test.json"
            def testScripts = config.TEST_SCRIPTS.collect { "python \"$it\"" }.join("\n")
            def testScripts1 = config.TEST_SCRIPTS1.split(';').collect { "python \"$it\"" }.join("\n")
            sh """
            conda env create -n test_env_py37 -f conda.yaml
            conda activate test_env_py37
            ${testScripts}
            ${testScripts1}
            """
        }
    }
}
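If you would rather keep the loop in shell, an alternative is to pass the raw values into the script and split them there. The sketch below uses a hardcoded stand-in for the interpolated config.TEST_SCRIPTS1 value, and echoes the command instead of actually invoking Python:

```shell
#!/bin/bash
# Stand-in for the value Groovy would interpolate from config.TEST_SCRIPTS1
TEST_SCRIPTS1="test_1.py;test_2.py"

# Split the semicolon-separated string into a bash array
IFS=';' read -ra scripts <<< "$TEST_SCRIPTS1"
for script in "${scripts[@]}"; do
    echo "python $script"   # replace echo with the real interpreter call
done
```

In the pipeline, the Groovy side would interpolate the string into the sh block (and for the TEST_SCRIPTS list, something like config.TEST_SCRIPTS.join(';') would produce the same shape); the echo is only there so the sketch is safe to run anywhere.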
I have the following task in my build.gradle file:
task myTask(type: Exec) {
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'cmd', '/c', 'whoami'
        standardOutput = stdout
    }
    println "Output: $stdout"
}
When I run my task with ./gradlew myTask, I get the following output:
> Configure project :
Output: retrovius
> Task :myTask FAILED
FAILURE: Build failed with an exception.
* What went wrong:
Execution failed for task ':myTask'.
> execCommand == null!
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output. Run with --scan to get full insights.
* Get more help at https://help.gradle.org
BUILD FAILED in 2s
1 actionable task: 1 executed
The task successfully outputs my username (retrovius), then fails anyway. Any pointers for what I'm doing wrong?
Depending on what you want to achieve, the answer you found is probably still not correct.
All tasks have two main stages: configuration and execution. Everything you put in the outermost block of the task definition is part of the configuration, and the exec method actually executes the command whenever that block of code is evaluated. So when you type:
task myTask() {
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'cmd', '/c', 'whoami'
        standardOutput = stdout
    }
    println "Output: $stdout"
}
Then it means you are running the whoami command no matter which task you specify. If you run gradle -i help, it will still print the username. I expect this is not what you intend.
Most of the time, you will want to run a command only when the task is actually executed. So if you want the command to run only when you type gradle -i myTask, you will need to defer it to the execution stage instead. There are two ways you can do that.
Either you can put everything in a doLast block like this:
task myTask() {
    doLast {
        def stdout = new ByteArrayOutputStream()
        exec {
            commandLine 'cmd', '/c', 'whoami'
            standardOutput = stdout
        }
        println "Output: $stdout"
    }
}
Or you can use the Exec type, like you already tried. The reason it didn't work for you is that you need to configure the task with the command you'd like to run, rather than actually running the command through the exec method. It could look like this:
task myTask(type: Exec) {
    commandLine 'cmd', '/c', 'whoami'
    standardOutput = new ByteArrayOutputStream()
    doLast {
        println "Output: $standardOutput"
    }
}
You can also probably get rid of the cmd /c part. And println should only be used for debugging - use logger.info (or logger.warn, etc.) if you need to output something to the user.
I figured out that the only thing I was doing wrong was to include the (type:Exec) in the definition of my task. If I place the following code in my build.gradle file:
task myTask() {
    def stdout = new ByteArrayOutputStream()
    exec {
        commandLine 'cmd', '/c', 'whoami'
        standardOutput = stdout
    }
    println "Output: $stdout"
}
I get the following output:
> Configure project :
Output: retrovius
BUILD SUCCESSFUL in 2s
My mistake must have been that I was defining the task to be of type Exec, but not giving it a command to run. This reveals a fundamental misunderstanding of the exec method and the Exec task type on my part. If anyone knows more specifically what I did wrong, please feel free to comment and explain or post a better answer.
I need Jenkins to run a shell script named create_environment.sh including the following command:
sed -i '' "s~variable \"backendPoolID\" { default = \".*\"~variable \"backendPoolID\" { default = \"$backend_address_pool_id\"~g" var.tf
When I run the script inside the Jenkins machine it works without a problem, but inside the Jenkins pipeline I get the Error:
sed: can't read s~variable "backendPoolID" { default = ".*"~variable "backendPoolID" { default = ""~g: No such file or directory
The pipeline step:
steps {
    withCredentials([azureServicePrincipal('xxxx')]) {
        dir("${WORKSPACE}/azure/terraform/deployment/${params.AZURE_ENV}") {
            echo " *************************** Deploy /${params.AZURE_ENV} ***************************** "
            sh "chmod 777 *"
            sh "./create_environment.sh"
            echo " *************************** Deploy erfolgreich ***************************** "
        }
    }
}
I already tried to replace sh " with sh """, but it didn't help.
Does anyone have experience with this?
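For reference, the error message is what GNU sed produces in this situation: with a space after -i, GNU sed treats the '' as the sed script itself, and the real script string is then read as a filename. A small sketch of a portable variant, using -i.bak (accepted by both GNU and BSD sed, no space before the suffix) against a hypothetical var.tf:

```shell
# Create a hypothetical var.tf to edit in place
printf 'variable "backendPoolID" { default = "old-id" }\n' > var.tf

backend_address_pool_id="new-id"
# -i.bak writes a backup and works the same on GNU and BSD sed
sed -i.bak "s~default = \".*\"~default = \"$backend_address_pool_id\"~g" var.tf
cat var.tf
```

After running, var.tf contains default = "new-id" and var.tf.bak keeps the original line. On Linux agents, plain -i (no suffix argument at all) also works.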
I have a Jenkins scripted pipeline set up where I execute a number of Maven builds. I want to treat one of them as non-fatal if the root cause is a known one.
I have tried to achieve that by inspecting the Exception's message, e.g.
try {
    sh "mvn -U clean verify sonar:sonar ${sonarcloudParams}"
} catch (Exception e) {
    if (e.getMessage().contains("not authorized to run analysis")) {
        echo "Marking build unstable due to missing SonarCloud onboarding. See https://cwiki.apache.org/confluence/display/SLING/SonarCloud+analysis for steps to fix."
        currentBuild.result = 'UNSTABLE'
    }
}
The problem is that the exception's message is not the one from Maven, but instead "script returned exit code 1".
There is no further information in e.getCause().
How can I access the cause of the Maven build failure inside my scripted pipeline?
You can get the command output, then check whether it contains the specific message.
def output = sh(
    script: "mvn -U clean verify sonar:sonar ${sonarcloudParams}",
    returnStdout: true
).trim()
echo "mvn cmd output: ${output}"
if (output.contains('not authorized to run analysis')) {
    currentBuild.result = 'UNSTABLE'
}
// Alternatively, parse the Jenkins job build log
def logUrl = env.BUILD_URL + 'consoleText'
def cmd = "curl -u \${JENKINS_AUTH} -k ${logUrl} | tail -n 50"
def logOutput = sh(returnStdout: true, script: cmd).trim()
echo "job build log: ${logOutput}"
if (logOutput.contains('not authorized to run analysis')) {
    currentBuild.result = 'UNSTABLE'
}
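Note that with returnStdout: true, a failing mvn makes the sh step throw before the output is ever assigned, so the first variant only sees the message when the command succeeds. A common workaround is to redirect the command's output to a file, capture the exit status separately, and grep the log afterwards. Sketched here in plain shell, with run_build as a hypothetical stand-in for the mvn invocation:

```shell
# Hypothetical stand-in for the failing mvn sonar:sonar call
run_build() {
    echo "not authorized to run analysis"
    return 1
}

set +e                       # don't abort on the expected failure
run_build > build.log 2>&1
status=$?
set -e

# Only downgrade for the known failure; anything else stays fatal
if [ "$status" -ne 0 ] && grep -q "not authorized to run analysis" build.log; then
    echo "MARK UNSTABLE"     # in the pipeline: currentBuild.result = 'UNSTABLE'
fi
```

In a Jenkinsfile the same idea is usually expressed with sh(returnStatus: true, script: "mvn ... > build.log 2>&1") followed by a readFile of the log.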
One option is to inspect the last log lines using
def sonarCloudNotEnabled = currentBuild.rawBuild.getLog(50).find { line ->
    line.contains("not authorized to run analysis")
}
However, this does not work by default. On the Jenkins instance I'm using it errors out with
Scripts not permitted to use method org.jenkinsci.plugins.workflow.support.steps.build.RunWrapper getRawBuild. Administrators can decide whether to approve or reject this signature.
I have this stage in my Jenkins pipeline:
stage('Build') {
    def mvnHome = tool 'M3'
    sh '''for f in i7j-*; do
        (cd $f && ${mvnHome}/bin/mvn clean package)
    done
    wait'''
}
In Jenkins » Manage Jenkins » Global Tool Configuration I have a Maven installation called M3, version 3.3.9.
When running this pipeline, mvnHome is empty because I get this in the log:
+ /bin/mvn clean install -Dmaven.test.skip=true
/var/lib/jenkins/jobs/***SNIP***/script.sh: 3: /var/lib/jenkins/jobs/***SNIP***/script.sh: /bin/mvn: not found
I did find a path /var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/M3 on the Jenkins server, which works, but I would prefer not to use a hard coded path to mvn in this script.
How do I fix this?
EDIT: Summary of the answer, using tool and withEnv.
My working code is now:
stage('Build') {
    def mvn_version = 'M3'
    withEnv(["PATH+MAVEN=${tool mvn_version}/bin"]) {
        sh '''for f in i7j-*; do
            (cd $f && mvn clean package -Dmaven.test.skip=true -Dadditionalparam=-Xdoclint:none | tee ../jel-mvn-$f.log) &
        done
        wait'''
    }
}
You can use your tools in the Jenkinsfile with the tool and withEnv snippets.
It should look like this:
def mvn_version = 'M3'
withEnv(["PATH+MAVEN=${tool mvn_version}/bin"]) {
    //sh "mvn clean package"
}
The easiest way is to use the tools directive:
pipeline {
    agent any
    tools {
        maven 'M3'
    }
    stages {
        stage('Build') {
            steps {
                sh 'mvn -B -DskipTests clean package'
            }
        }
    }
}
M3 is the name pre-configured in Global Tool Configuration, see the docs: https://jenkins.io/doc/book/pipeline/syntax/#tools
What about using the withMaven construct (provided by the Pipeline Maven Integration plugin):
withMaven(mavenOpts: MAVEN_OPTS, maven: 'M3', mavenLocalRepo: MAVEN_LOCAL_REPOSITORY, mavenSettingsConfig: MAVEN_SETTINGS) {
    sh "mvn ..."
}