How to automate Veracode scans

Hey, I am looking to use a Jenkins pipeline to automatically run a Veracode application scan. I know how to launch the scan manually using a few commands. I was just going to add these commands to a script and run them, but maybe there is a better way to do this? Something like this is over-engineered for my purposes: https://github.com/OLSPayments/veracode-scripts/blob/master/submitToVeracode.py.

I figured out that it can be done through a Jenkins pipeline. Here is an example:
pipeline {
    agent any   // the agent needs JDK 8, Maven, curl and unzip available
    stages {
        stage('Maven Build') {
            steps {
                sh 'mvn clean verify'
            }
        }
        stage('Veracode Pipeline Scan') {
            steps {
                sh 'curl -O https://downloads.veracode.com/securityscan/pipeline-scan-LATEST.zip'
                sh 'unzip -o pipeline-scan-LATEST.zip pipeline-scan.jar'
                sh """java -jar pipeline-scan.jar \
                        --veracode_api_id "${VERACODE_API_ID}" \
                        --veracode_api_key "${VERACODE_API_SECRET}" \
                        --file "build/libs/sample.jar" \
                        --fail_on_severity="Very High, High" \
                        --fail_on_cwe="80" \
                        --baseline_file "${CI_BASELINE_PATH}" \
                        --timeout "${CI_TIMEOUT}" \
                        --project_name "${env.JOB_NAME}" \
                        --project_url "${env.GIT_URL}" \
                        --project_ref "${env.GIT_COMMIT}"
                """
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'results.json', fingerprint: true
        }
    }
}
documentation: https://help.veracode.com/reader/tS9CaFwL4_lbIEWWomsJoA/G02kb80l3gTu_ygcuFODaw
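The ${VERACODE_API_ID} and ${VERACODE_API_SECRET} values above still have to come from somewhere; a common approach is to keep them in the Jenkins credentials store and bind them around the scan step. A minimal sketch, assuming a username/password credential with the hypothetical ID 'veracode-api-credentials':
stage('Veracode Pipeline Scan') {
    steps {
        withCredentials([usernamePassword(credentialsId: 'veracode-api-credentials', // hypothetical ID
                                          usernameVariable: 'VERACODE_API_ID',
                                          passwordVariable: 'VERACODE_API_SECRET')]) {
            // Single quotes: the shell expands the variables, so the secret never
            // goes through Groovy string interpolation or ends up in the build log.
            sh 'java -jar pipeline-scan.jar --veracode_api_id "$VERACODE_API_ID" --veracode_api_key "$VERACODE_API_SECRET" --file "build/libs/sample.jar"'
        }
    }
}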

Related

How to run aws bash commands consecutively?

How can I execute the following bash commands consecutively?
aws logs create-export-task --task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1"
aws logs create-export-task --task-name "cloudwatch-log-group-export" \
--log-group-name "/my/log/group2" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group2"
The problem I have with the above commands is that after the first command completes, the script gets stuck at the following output, so the second command is never reached.
{
"taskId": "0e3cdd4e-1e95-4b98-bd8b-3291ee69f9ae"
}
It seems that I should find a way to wait for cloudwatch-log-group-export1 task to complete.
You could create a waiter function which uses describe-export-tasks to get the current status of an export job.
An example of such a function:
wait_for_export() {
local sleep_time=${2:-10}
while true; do
job_status=$(aws logs describe-export-tasks \
--task-id ${1} \
--query "exportTasks[0].status.code" \
--output text)
echo ${job_status}
[[ $job_status == "COMPLETED" ]] && break
sleep ${sleep_time}
done
}
Then you use it:
task_id1=$(aws logs create-export-task \
--task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1" \
--query 'taskId' --output text)
wait_for_export ${task_id1}
# second export
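For completeness, the second export from the question would follow the same pattern, reusing the same waiter (a sketch built only from the commands shown above):
task_id2=$(aws logs create-export-task \
    --task-name "cloudwatch-log-group-export" \
    --log-group-name "/my/log/group2" \
    --from 1488708419000 --to 1614938819000 \
    --destination "my-s3-bucket" \
    --destination-prefix "my-log-group2" \
    --query 'taskId' --output text)
wait_for_export ${task_id2}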
By default the AWS CLI sends its output to a pager, which is what makes the command appear to hang.
You can avoid this by setting the AWS_PAGER environment variable to "" before executing the aws command.
export AWS_PAGER=""
aws logs create-export-task...
Or you can set it in the AWS config file (~/.aws/config):
[default]
cli_pager=

Does "sh" command in Jenkins file starts a new session or a new shell?

I'm writing a Jenkinsfile that first authenticates a session on AWS and then pushes a Docker image to the designated ECR. The code block below works fine and pushes the image to ECR:
stage('build and push images') {
steps {
sh """
sh assume_role.sh
source /tmp/${assume_role_session_name}
aws ecr get-login --region ${aws_region} --registry-ids ${ROLEARN} --no-include-email
docker build -t my-docker-image .
docker tag my-docker-image:latest ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}
docker push ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}
docker rmi -f my-docker-image:latest
"""
}
}
However, when I split the steps into individual sh commands (like below), docker push failed because the Jenkins agent was no longer authenticated, which means the authentication token isn't passed to the docker push command.
stage('build and push images') {
steps {
sh "assume_role.sh"
sh "source /tmp/${assume_role_session_name}"
sh "aws ecr get-login --region ${aws_region} --registry-ids ${ROLEARN} --no-include-email"
sh "docker build -t my-docker-image . "
sh "docker tag my-docker-image:latest ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}"
sh "docker push ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}"
sh "docker rmi -f my-docker-image:latest"
}
}
Thus, I suspect that each sh starts a new session in Jenkins steps, between which authentication tokens cannot be passed. I don't know whether my guess is correct or how to find evidence to support it.
I thought I would share my solution for how I overcame the annoying need to repeatedly assume the role in every sh block. Passing the extracted credentials (dynamically, of course) as environment variables solved the issue for me, and there was no need to re-authenticate in different scripts.
Adding the credentials to the environment variables forces each script to use them.
environment {
ACCESS = sh(
returnStdout: true,
script: '''
echo "$(aws \
sts assume-role \
--role-arn="arn:aws:iam::\${AWS_ACCOUNT_DEV}:role/\${ASSUME_ROLE}" \
--role-session-name="jenkins" \
--output json
)"
'''
).trim()
}
stages{
stage('Create env variables') {
steps {
script {
env.AWS_ACCESS_KEY_ID = sh(
returnStdout: true,
script: '''
echo "${ACCESS}" | jq -re '.Credentials.AccessKeyId'
'''
).trim()
env.AWS_SECRET_ACCESS_KEY = sh(
returnStdout: true,
script: '''
echo "${ACCESS}" | jq -re '.Credentials.SecretAccessKey'
'''
).trim()
env.AWS_SESSION_TOKEN = sh(
returnStdout: true,
script: '''
echo "${ACCESS}" | jq -re '.Credentials.SessionToken'
'''
).trim()
}
}
}
}
To your question, this StackOverflow answer describes what happens to the environment variables set within the sh execution.
Hope this helps ;)
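As for the question itself: each sh step runs in its own shell process, so shell-level state such as exported variables or a sourced credentials file does not carry over to the next step. A minimal sketch that demonstrates this (the stage and variable names are made up):
stage('Each sh is its own shell') {
    steps {
        sh 'export MY_TOKEN=abc123; echo "first step sees: $MY_TOKEN"'   // prints abc123
        sh 'echo "second step sees: ${MY_TOKEN:-nothing}"'               // MY_TOKEN is gone here
    }
}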

Pass environment variable to jenkins pipeline bash script

Hey, I'm trying to take the environment variable GIT_BRANCH and parse the part to the right of the /. I know this can be achieved with cut, like this: $(echo ${env.GIT_BRANCH} | cut -d \"/\" -f 2 )
Thing is, I cannot make it work in Jenkins pipelines; I get the error: bad substitution
pipeline {
agent any
stages {
stage('Build') {
steps {
sh "docker build -t jpq/jpq:test ."
}
}
stage('Test') {
steps {
sh "docker run jpq/jpq:test python3 tests.py"
}
}
stage('Push') {
steps {
sh '''#!/bin/bash
BRANCH=\$(echo \${env.GIT_BRANCH} | cut -d \"/\" -f 2 )
echo ${BRANCH}
docker tag jpq/jpq:test jpq/jpq:${BRANCH}
docker push jpq/jpq:test
'''
}
}
// stage('Deploy') {
// steps {
// }
// }
}
}
How can I correctly generate the BRANCH variable and pass it to the docker tag?
This should work:
stage('Push') {
steps {
sh '''#!/bin/bash
#printenv
BRANCH=$(echo ${GIT_BRANCH} | cut -d "/" -f2)
echo "Branch: ${BRANCH}"
'''
}
}
Note: to see which environment variables are available to the shell block, you can use printenv.
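An alternative is to skip the shell round-trip and do the split in Groovy, then hand the result to the sh step. A sketch, reusing the image names from the question:
stage('Push') {
    steps {
        script {
            // GIT_BRANCH is typically of the form "origin/branchname"
            def branch = env.GIT_BRANCH.tokenize('/').last()
            sh "docker tag jpq/jpq:test jpq/jpq:${branch}"
            sh "docker push jpq/jpq:${branch}"
        }
    }
}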

Maven tool is not set in Jenkins pipeline

I have this stage in my Jenkins pipeline:
stage('Build') {
def mvnHome = tool 'M3'
sh '''for f in i7j-*; do
(cd $f && ${mvnHome}/bin/mvn clean package)
done
wait'''
}
In Jenkins » Manage Jenkins » Global Tool Configuration I have a Maven installation called M3, version 3.3.9.
When running this pipeline, mvnHome is empty because I get this in the log:
+ /bin/mvn clean install -Dmaven.test.skip=true
/var/lib/jenkins/jobs/***SNIP***/script.sh: 3: /var/lib/jenkins/jobs/***SNIP***/script.sh: /bin/mvn: not found
I did find a path /var/lib/jenkins/tools/hudson.tasks.Maven_MavenInstallation/M3 on the Jenkins server, which works, but I would prefer not to use a hard coded path to mvn in this script.
How do I fix this?
EDIT: Summary of the answer, using tool and withEnv.
My working code is now:
stage('Build') {
def mvn_version = 'M3'
withEnv( ["PATH+MAVEN=${tool mvn_version}/bin"] ) {
sh '''for f in i7j-*; do
(cd $f && mvn clean package -Dmaven.test.skip=true -Dadditionalparam=-Xdoclint:none | tee ../jel-mvn-$f.log) &
done
wait'''
}
}
You can use your tools in a Jenkinsfile with the tool and withEnv snippets.
It should look like this:
def mvn_version = 'M3'
withEnv( ["PATH+MAVEN=${tool mvn_version}/bin"] ) {
//sh "mvn clean package"
}
The easiest way is to use the tools directive:
pipeline {
agent any
tools {
maven 'M3'
}
stages {
stage('Build') {
steps {
sh 'mvn -B -DskipTests clean package'
}
}
}
}
M3 is the name pre-configured in Global Tool Configuration, see the docs: https://jenkins.io/doc/book/pipeline/syntax/#tools
What about using the construct:
withMaven(mavenOpts: MAVEN_OPTS, maven: 'M3', mavenLocalRepo: MAVEN_LOCAL_REPOSITORY, mavenSettingsConfig: MAVEN_SETTINGS) {
sh "mvn ..."
}
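withMaven comes from the Pipeline Maven Integration plugin, so it only works if that plugin is installed. The optional arguments can be dropped; a minimal sketch using just the tool name from this question:
withMaven(maven: 'M3') {
    sh 'mvn -B clean package'
}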

Jenkins: Pipeline sh bad substitution error

A step in my pipeline uploads a .tar to an Artifactory server. I am getting a Bad substitution error when passing in env.BUILD_NUMBER, but the same command works when the number is hard-coded. The script is written in Groovy through Jenkins and is running in the Jenkins workspace.
sh 'curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar"'
returns the errors:
[Pipeline] sh
[Package_Deploy_Pipeline] Running shell script
/var/lib/jenkins/workspace/Package_Deploy_Pipeline#tmp/durable-4c8b7958/script.sh: 2:
/var/lib/jenkins/workspace/Package_Deploy_Pipeline#tmp/durable-4c8b7958/script.sh: Bad substitution
[Pipeline] } //node
[Pipeline] Allocate node : End
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
If I hard-code a build number and swap out ${env.BUILD_NUMBER}, I get no errors and the code runs successfully.
sh 'curl -v --user user:password --data-binary ${buildDir}package113.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package113.tar"'
I use ${env.BUILD_NUMBER} within other sh commands within the same script and have no issues in any other places.
This turned out to be a syntax issue. Wrapping the command in 's caused ${env.BUILD_NUMBER} to be passed literally instead of its value. I wrapped the whole command in "s and escaped the nested ones. Works fine now.
sh "curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT \"http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar\""
To pass Groovy parameters into bash scripts in Jenkins pipelines (which sometimes causes bad substitution errors), you have two options:
The triple double quotes way [ " " " ]
OR
the triple single quotes way [ ' ' ' ]
In triple double quotes you can render a normal Groovy variable using ${someVariable}; if it's an environment variable, use ${env.someVariable}; if it's a parameter injected into your job, use ${params.someVariable}.
example:
def YOUR_APPLICATION_PATH= "${WORKSPACE}/myApp/"
sh """#!/bin/bash
cd ${YOUR_APPLICATION_PATH}
npm install
"""
In triple single quotes things get a little trickier: you can pass the value through an environment variable and reference it with "\${someVariable}", or concatenate the Groovy variable using ''' + someVariable + '''.
examples:
def YOUR_APPLICATION_PATH= "${WORKSPACE}/myApp/"
sh '''#!/bin/bash
cd ''' + YOUR_APPLICATION_PATH + '''
npm install
'''
OR
pipeline{
agent { node { label "test" } }
environment {
YOUR_APPLICATION_PATH = "${WORKSPACE}/myapp/"
}
continue...
continue...
continue...
sh '''#!/bin/bash
cd "\${YOUR_APPLICATION_PATH}"   # inside the shell the variable has no env. prefix
npm install
'''
Actually, you seem to have misunderstood the env variable. In your sh block, you should access ${BUILD_NUMBER} directly.
Reason/Explanation: env represents the environment inside the script. This environment is used/available directly to anything that is executed, e.g. shell scripts.
Please also take care not to write anything to env.*; use withEnv{} blocks instead.
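A small sketch of the withEnv approach mentioned above (the variable name and value are made up for illustration):
withEnv(['TAR_NAME=package113.tar']) {          // hypothetical variable
    // plain shell expansion inside single quotes, no Groovy interpolation needed
    sh 'echo "uploading ${TAR_NAME}"'
}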
Usually the most common cause of the
Bad substitution
error is using sh instead of bash.
Especially when using Jenkins, if you're using an Execute shell step, make sure your command starts with a shebang, e.g. #!/bin/bash -xe or #!/usr/bin/env bash.
I can definitely tell you, it's all about sh shell and bash shell. I fixed this problem by specifying #!/bin/bash -xe as follows:
node {
stage("Preparing"){
sh '''#!/bin/bash -xe
colls=( col1 col2 col3 )
for eachCol in "${colls[@]}"
do
echo $eachCol
done
'''
}
}
I had this same issue when working on a Jenkins Pipeline for Amazon S3 Application upload.
My script was like this:
pipeline {
agent any
parameters {
string(name: 'Bucket', defaultValue: 's3-pipeline-test', description: 'The name of the Amazon S3 Bucket')
string(name: 'Prefix', defaultValue: 'my-website', description: 'Application directory in the Amazon S3 Bucket')
string(name: 'Build', defaultValue: 'public/', description: 'Build directory for the application')
}
stages {
stage('Build') {
steps {
echo 'Running build phase'
sh 'npm install' // Install packages
sh 'npm run build' // Build project
sh 'ls' // List project files
}
}
stage('Deploy') {
steps {
echo 'Running deploy phase'
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWSCredentials', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
sh 'aws s3 ls' // List AWS S3 buckets
sh 'aws s3 sync "${params.Build}" s3://"${params.Bucket}/${params.Prefix}" --delete' // Sync project files with AWS S3 Bucket project path
}
}
}
}
post {
success {
echo 'Deployment to Amazon S3 suceeded'
}
failure {
echo 'Deployment to Amazon S3 failed'
}
}
}
Here's how I fixed it:
Since the command interpolates Groovy variables, I had to change the single quotation marks (' ') in this line of the script:
sh 'aws s3 sync "${params.Build}" s3://"${params.Bucket}/${params.Prefix}" --delete' // Sync project files with AWS S3 Bucket project path
to double quotation marks (" "):
sh "aws s3 sync ${params.Build} s3://${params.Bucket}/${params.Prefix} --delete" // Sync project files with AWS S3 Bucket project path
So my script looked like this afterwards:
pipeline {
agent any
parameters {
string(name: 'Bucket', defaultValue: 's3-pipeline-test', description: 'The name of the Amazon S3 Bucket')
string(name: 'Prefix', defaultValue: 'my-website', description: 'Application directory in the Amazon S3 Bucket')
string(name: 'Build', defaultValue: 'public/', description: 'Build directory for the application')
}
stages {
stage('Build') {
steps {
echo 'Running build phase'
sh 'npm install' // Install packages
sh 'npm run build' // Build project
sh 'ls' // List project files
}
}
stage('Deploy') {
steps {
echo 'Running deploy phase'
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWSCredentials', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
sh 'aws s3 ls' // List AWS S3 buckets
sh "aws s3 sync ${params.Build} s3://${params.Bucket}/${params.Prefix} --delete" // Sync project files with AWS S3 Bucket project path
}
}
}
}
post {
success {
echo 'Deployment to Amazon S3 suceeded'
}
failure {
echo 'Deployment to Amazon S3 failed'
}
}
}
That's all
I hope this helps
I was having this issue while trying to include ${env.MAJOR_VERSION} in the name of an artifact jar file. I approached it by adding an environment step to the Jenkinsfile.
pipeline {
agent any
environment {
MAJOR_VERSION = 1
}
stages {
stage('build') {
steps {
sh 'ant -f build.xml -v'
}
}
}
post {
always{
archiveArtifacts artifacts: 'dist/*.jar', fingerprint: true
}
}
}
That solved the issue, and the bad substitution error no longer showed up in my Jenkins build output. So the environment step plays an important role in a Jenkinsfile.
The suggestion from @avivamg didn't work for me; here is the syntax which works for me:
sh "python3 ${env.WORKSPACE}/package.py --product productname " +
"--build_dir ${release_build_dir} " +
"--signed_product_dir ${signed_product_dir} " +
"--version ${build_version}"
I got a similar issue, but my use case is a little different.
steps{
sh '''#!/bin/bash -xe
VAR=TRIAL
echo $VAR
if [ -d /var/lib/jenkins/.m2/'\${params.application_name}' ]
then
echo 'working'
echo ${VAR}
else
echo 'not working'
fi
'''
}
}
Here I'm trying to declare a variable inside the script and also use a parameter from outside. After trying multiple ways, the following script worked:
stage('cleaning com/avizva directory'){
steps{
sh """#!/bin/bash -xe
VAR=TRIAL
echo \$VAR
if [ -d /var/lib/jenkins/.m2/${params.application_name} ]
then
echo 'working'
echo \${VAR}
else
echo 'not working'
fi
"""
}
}
Changes made:
Replaced triple single quotes with triple double quotes.
Whenever I want to refer to a local shell variable, I used an escape character:
$VAR --> \$VAR
This caused the error Bad Substitution:
pipeline {
agent any
environment {
DOCKER_IMAGENAME = "mynginx:latest"
DOCKER_FILE_PATH = "./docker"
}
stages {
stage('DockerImage-Build') {
steps {
sh 'docker build -t ${env.DOCKER_IMAGENAME} ${env.DOCKER_FILE_PATH}'
}
}
}
}
This fixed it: replacing ' with " on the sh command:
pipeline {
agent any
environment {
DOCKER_IMAGENAME = "mynginx:latest"
DOCKER_FILE_PATH = "./docker"
}
stages {
stage('DockerImage-Build') {
steps {
sh "docker build -t ${env.DOCKER_IMAGENAME} ${env.DOCKER_FILE_PATH}"
}
}
}
}
In my case the Jenkins script was failing inside the "sh" command line. E.g.:
sh 'npm run build' <-- fails, referring to package.json
It needed to be changed to:
sh 'npm run ng build....'
because ng is not found on the $PATH used by package.json.
