Assign extracted values from aws command to variables in jenkins pipeline - bash

def id
def state
pipeline {
    agent any
    stages {
        stage('aws') {
            steps {
                script {
                    /* extract load generator instanceId */
                    sh "aws ec2 describe-instances --filters 'Name=tag:Name,Values=xxx' --output text --query 'Reservations[*].Instances[*].{id:InstanceId,state:State.Name}' --region us-east-1"
                    echo "id and state: ${id} ${state}"
                }
            }
        }
    }
}
I am trying to extract the instance ID and state of the xxx instance using the above command, and the command itself returns the values.
But when I try to echo them I get null, so they are not being assigned to the ${id} and ${state} variables.
Is there any way I could assign them to the above variables in the Jenkins pipeline?
Note: I don't want to use jq.
Thanks

Your current implementation doesn't assign any variables, shell, Jenkins, or otherwise. id and state are just aliases for other fields inside the aws --query expression. To get those values into the pipeline, I'd recommend combining the output of the sh step with the readJSON step (part of the Pipeline Utility Steps plugin), with the aws command switched to --output json so readJSON can parse it. Then you can do something like this:
def id
def state
pipeline {
    agent any
    stages {
        stage('aws') {
            steps {
                script {
                    /* extract load generator instanceId; --output json so readJSON can parse the result */
                    instanceInfo = sh (
                        script: "aws ec2 describe-instances --filters 'Name=tag:Name,Values=xxx' --output json --query 'Reservations[*].Instances[*].{id:InstanceId,instanceState:State.Name}' --region us-east-1",
                        returnStdout: true
                    ).trim()
                    instanceJSON = readJSON text: instanceInfo
                    instanceJSON.each { instance ->
                        echo "${instance.id[0]}: ${instance.instanceState[0]}"
                    }
                }
            }
        }
    }
}
(I hand-fudged a couple of those items for my minimal test case; please post any errors you get and we'll clean things up)

Related

How to extract command output from the multi lines shell in Jenkins

How to get the output of kubectl describe deployment nginx | grep Image in an environment variable?
My code:
stage('Deployment') {
    script {
        sh """
        export KUBECONFIG=/tmp/kubeconfig
        kubectl describe deployment nginx | grep Image"""
    }
}
In this situation, you can access the environment variables in the pipeline scope through the env object, and assign values to its members to create new environment variables. You can also use the optional returnStdout parameter of the sh step to return the command's stdout and assign it to a Groovy variable (which is possible here because the code is inside a script block).
script {
    env.IMAGE = sh(script: 'export KUBECONFIG=/tmp/kubeconfig && kubectl describe deployment nginx | grep Image', returnStdout: true).trim()
}
Note you would also want to place the KUBECONFIG environment variable within the environment directive at the pipeline scope instead (unless the kubeconfig will be different in different scopes):
pipeline {
    environment { KUBECONFIG = '/tmp/kubeconfig' }
}
You can use the syntax:
someVariable = sh(returnStdout: true, script: some_script).trim()
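For example, a minimal sketch in context, assuming the same /tmp/kubeconfig and nginx deployment as above (the stage and variable names are only illustrative):
pipeline {
    agent any
    environment { KUBECONFIG = '/tmp/kubeconfig' }
    stages {
        stage('Deployment') {
            steps {
                script {
                    // capture the matching line; trim() removes the trailing newline
                    def image = sh(returnStdout: true, script: 'kubectl describe deployment nginx | grep Image').trim()
                    echo "Deployed image line: ${image}"
                }
            }
        }
    }
}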

Passing json to aws glue create-job after replacement done using jq

I have the following bash script that I execute in order to create new Glue Job via CLI:
#!/usr/bin/env bash
set -e
NAME=$1
PROFILE=$2
SCRIPT_LOCATION='s3://bucket/scripts/'$1'.py'
echo [*]--- Creating new job on AWS
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
I'm using jq as I need one of the values to be replaced at runtime before I pass the JSON as the --cli-input-json argument. How can I pass the JSON with the replaced value to this command? As of now, it just prints the JSON content (although with the value already replaced).
Running the command above causes the following error:
[*]--- Creating new job on AWS
{
    "Description": "Template for Glue Job",
    "LogUri": "",
    "Role": "arn:aws:iam::11111111111:role/role",
    "ExecutionProperty": {
        "MaxConcurrentRuns": 1
    },
    "Command": {
        "Name": "glueetl",
        "ScriptLocation": "s3://bucket/scripts/script.py",
        "PythonVersion": "3"
    },
    "DefaultArguments": {
        "--TempDir": "s3://temp/admin/",
        "--job-bookmark-option": "job-bookmark-disable",
        "--enable-metrics": "",
        "--enable-glue-datacatalog": "",
        "--enable-continuous-cloudwatch-log": "",
        "--enable-spark-ui": "true",
        "--spark-event-logs-path": "s3://assets/sparkHistoryLogs/"
    },
    "NonOverridableArguments": {
        "KeyName": ""
    },
    "MaxRetries": 0,
    "AllocatedCapacity": 0,
    "Timeout": 2880,
    "MaxCapacity": 0,
    "Tags": {
        "KeyName": ""
    },
    "NotificationProperty": {
        "NotifyDelayAfter": 60
    },
    "GlueVersion": "3.0",
    "NumberOfWorkers": 2,
    "WorkerType": "G.1X"
}
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws.exe: error: argument --cli-input-json: expected one argument
The command line
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json | jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
executes the command
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json,
takes its standard output and uses it as input to
jq '.Command.ScriptLocation = '\"$SCRIPT_LOCATION\"'' ./resources/config.json
(which ignores that input and reads from the file given as its argument instead). Please also note that spaces in $SCRIPT_LOCATION will break your script, because the variable is not properly quoted (your quotes are off).
To use the output of one command in the argument list of another command, you must use Command Substitution: outer_command --some-arg "$(inner_command)".
So your command should become:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq '.Command.ScriptLocation = "'"$SCRIPT_LOCATION"'"' ./resources/config.json)"
# or simplified with only double quotes:
aws glue create-job --profile $PROFILE --name $NAME --cli-input-json "$(jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json)"
See https://superuser.com/questions/1306071/aws-cli-using-cli-input-json-in-a-pipeline for additional examples.
Although, I have to admit I am not 100% certain that the JSON content can be passed directly on the command line. From looking at the docs and some official examples, it looks like this parameter expects a file name, not a JSON document's content. So it could be possible that your command in fact needs to be:
# if "-" filename is specially handled:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json | aws glue create-job --profile $PROFILE --name $NAME --cli-input-json -
# "-" filename not recognized:
jq ".Command.ScriptLocation = \"$SCRIPT_LOCATION\"" ./resources/config.json > ./resources/config.replaced.json && aws glue create-job --profile $PROFILE --name $NAME --cli-input-json file://./resources/config.replaced.json
Let us know which one worked.

How do I set secrets in a Jenkins step

I am looking for a solution to inject secrets only during a Jenkins step:
application.properties:
spring.datasource.username=mySecretValue
spring.datasource.password=mySecretValue
...
Current State:
stage('Test') {
    agent {
        docker {
            image 'myregistry.com/maven:3-alpine'
            reuseNode true
        }
    }
    steps {
        configFileProvider([configFile(fileId: 'maven-settings-my-services', variable: 'MAVEN_SETTINGS')]) {
            sh 'mvn -s $MAVEN_SETTINGS verify'
        }
    }
...
Thanks!
Option 1) Add a password job parameter for that secret. The job then has to be run manually, because someone needs to enter the secret.
// write the secret to application.properties at any stage
// prior to the test and deployment stages
sh "echo spring.datasource.password=${params.DB_PASSWORD} >> application.properties"
Option 2) Add the secret as a Jenkins Secret Text credential. But adding a credential needs Jenkins administrator access, and you also need to consider how it will be updated in the future.
stage('test or deployment') {
    environment {
        DB_PASSWORD = credentials('<credential_id_of_the_secret>')
    }
    steps {
        sh "echo spring.datasource.password=${env.DB_PASSWORD} >> application.properties"
    }
}
One way I did it was to attach the secrets with the Credentials plugin, variable by variable:
echo 'Attach properties for tests to property file:'
withCredentials([string(credentialsId: 'DB_PW', variable: 'SECRET_ENV')]) {
    sh 'echo spring.mydatabase.password=${SECRET_ENV} >> ./src/main/resources/application.properties'
}
Instead of "echo", "sed" would also an option to replace the empty value for the key instead of add the property to the end of the file.
The second way I did it was to attach a complete properties file instead of individual key/value pairs. The properties file contains all the properties needed for the tests:
echo 'Attach properties file for test runs:'
withCredentials([file(credentialsId: 'TEST_PROPERTIES', variable: 'APPLICATION_PROPERTIES')]) {
    dir("${env.WORKSPACE}") {
        // overwrite the properties used by the tests with the complete file from the credential
        sh 'cp "$APPLICATION_PROPERTIES" ./src/main/resources/application.properties'
    }
}
In both cases the secrets have to be deleted after the run, otherwise they can be viewed in plaintext under the workspace folder.
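One simple way to make sure of that is a post section that always cleans up, for example:
post {
    always {
        // remove the generated properties file so the secret does not linger in the workspace
        sh 'rm -f ./src/main/resources/application.properties'
        // or wipe the whole workspace instead (requires the Workspace Cleanup plugin)
        // cleanWs()
    }
}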

Does "sh" command in Jenkins file starts a new session or a new shell?

I observe a scenario while writing a Jenkinsfile that first authenticates a session on AWS and then pushes a Docker image to the designated ECR. The block below works fine and pushes the image to ECR:
stage('build and push images') {
    steps {
        sh """
        sh assume_role.sh
        source /tmp/${assume_role_session_name}
        aws ecr get-login --region ${aws_region} --registry-ids ${ROLEARN} --no-include-email
        docker build -t my-docker-image .
        docker tag my-docker-image:latest ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}
        docker push ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}
        docker rmi -f my-docker-image:latest
        """
    }
}
However, when I split the steps into individual sh commands (like below), docker push failed because the Jenkins agent had not been authenticated, which means the authentication token isn't passed through to the docker push command.
stage('build and push images') {
    steps {
        sh "assume_role.sh"
        sh "source /tmp/${assume_role_session_name}"
        sh "aws ecr get-login --region ${aws_region} --registry-ids ${ROLEARN} --no-include-email"
        sh "docker build -t my-docker-image ."
        sh "docker tag my-docker-image:latest ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}"
        sh "docker push ${ROLEARN}.dkr.ecr.${aws_region}.amazonaws.com/${ECR_name}:${ECS_TAG_VERSION}"
        sh "docker rmi -f my-docker-image:latest"
    }
}
Thus, I suspect that each sh step starts a new session, and authentication tokens cannot be passed between them. I don't know whether my guess is correct or how to find evidence to support it.
I thought I would share my solution for how I overcame the annoying need to repeatedly assume the role in every sh block. Passing the extracted credentials (dynamically, of course) as environment variables solved the issue for me, with no need to re-authenticate in different scripts.
Adding the credentials to the environment variables makes every script use them.
environment {
    ACCESS = sh(
        returnStdout: true,
        script: '''
            echo "$(aws \
                sts assume-role \
                --role-arn="arn:aws:iam::\${AWS_ACCOUNT_DEV}:role/\${ASSUME_ROLE}" \
                --role-session-name="jenkins" \
                --output json
            )"
        '''
    ).trim()
}
stages {
    stage('Create env variables') {
        steps {
            script {
                env.AWS_ACCESS_KEY_ID = sh(
                    returnStdout: true,
                    script: '''
                        echo "${ACCESS}" | jq -re '.Credentials.AccessKeyId'
                    '''
                ).trim()
                env.AWS_SECRET_ACCESS_KEY = sh(
                    returnStdout: true,
                    script: '''
                        echo "${ACCESS}" | jq -re '.Credentials.SecretAccessKey'
                    '''
                ).trim()
                env.AWS_SESSION_TOKEN = sh(
                    returnStdout: true,
                    script: '''
                        echo "${ACCESS}" | jq -re '.Credentials.SessionToken'
                    '''
                ).trim()
            }
        }
    }
}
To your question, this StackOverflow answer describes what happens to the environment variables set within the sh execution.
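In short, each sh step spawns its own shell process, so anything exported in one sh call is gone by the next; a quick way to see this on any agent:
steps {
    // FOO exists only inside this first shell invocation
    sh 'export FOO=bar; echo "first step: FOO=$FOO"'
    // a fresh shell is started here, so FOO is empty again
    sh 'echo "second step: FOO=$FOO"'
}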
Hope this helps ;)

Change groovy variables inside shell executor in Jenkins pipeline

I have a Jenkins pipeline job where I am taking some build variables as input, and if the variables are not passed by the user, I execute a script and get the value of those variables. Later I have to use the value of these variables to trigger other jobs.
So my code looks something like this:
node {
    withCredentials([[$class: 'StringBinding', credentialsId: 'DOCKER_HOST', variable: 'DOCKER_HOST']]) {
        env.T_RELEASE_VERSION = T_RELEASE_VERSION
        env.C_RELEASE_VERSION = C_RELEASE_VERSION
        env.N_RELEASE_VERSION = N_RELEASE_VERSION
        env.F_RELEASE_VERSION = F_RELEASE_VERSION
        ....
        stage concurrency: 1, name: 'consul-get-version'
        sh '''
        if [ -z ${T_RELEASE_VERSION} ]
        then
            export T_RELEASE_VERSION=$(ruby common/consul/services_prod_version.rb prod_t_release_version)
            aws ecr get-login --region us-east-1
            aws ecr list-images --repository-name t-server | grep ${T_RELEASE_VERSION}
        else
            aws ecr get-login --region us-east-1
            aws ecr list-images --repository-name t-server | grep ${T_RELEASE_VERSION}
        fi
        .......
        't-integ-pipeline' : {
            build job: 't-integ-pipeline', parameters: [[$class: 'StringParameterValue', name: 'RELEASE_VERSION', value: T_RELEASE_VERSION],
                [$class: 'BooleanParameterValue', name: 'FASTFORWARD_TO_DEPLOY', value: true]]
        },
        ......
The issue is that when I trigger the main job with an empty T_RELEASE_VERSION, the child job t-integ-pipeline is triggered with an empty value for the RELEASE_VERSION parameter.
How can I change a Groovy variable inside the shell executor and then access the modified value back in the Groovy executor?
When using env-inject it was possible to store the values in a properties file and then inject them as environment variables. I couldn't find any easy way to do that in a pipeline.
Here is a solution anyway: store the values in a file and read the file from the pipeline. Then use Eval or similar to transform the contents into a parsable object (a map).
Eval.me example: Serializing groovy map to string with quotes
Write/Read to file example:
https://wilsonmar.github.io/jenkins2-pipeline/
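A minimal sketch of that write-then-read idea, assuming the shell can emit a Groovy-style map literal; Eval.me may need script approval in a sandboxed pipeline, and the file name versions.txt is just an example:
node {
    // the shell computes the values and writes them out as a Groovy map literal
    sh 'echo "[t_release_version: \'1.2.3\', c_release_version: \'4.5.6\']" > versions.txt'
    // the pipeline reads the file back and evaluates it into a real map
    def versions = Eval.me(readFile('versions.txt').trim())
    echo "T: ${versions.t_release_version}, C: ${versions.c_release_version}"
}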
EDIT
Manish's solution, for readability:
sh 'ruby common/consul/services_prod_version.rb prod_n_release_version > status'
N_RELEASE_VERSION_NEW = readFile('status').trim()
sh 'ruby common/consul/services_prod_version.rb prod_q_release_version > status'
Q_RELEASE_VERSION_NEW = readFile('status').trim()
I found a way to change the Groovy variable from the shell without storing it in a file. There is an example in the git-tag-message-plugin; I use this method like below:
script {
    N_RELEASE_VERSION_NEW = getN_RELEASE_VERSION_NEW()
}

String getN_RELEASE_VERSION_NEW() {
    return sh(script: "ruby common/consul/services_prod_version.rb prod_n_release_version", returnStdout: true)?.trim()
}
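The captured value can then feed the downstream build call from the question directly; a hedged sketch for the T version, mirroring the helper above:
script {
    // resolve the version from Consul only when the user did not pass one in
    if (!T_RELEASE_VERSION) {
        T_RELEASE_VERSION = sh(script: "ruby common/consul/services_prod_version.rb prod_t_release_version", returnStdout: true).trim()
    }
    build job: 't-integ-pipeline', parameters: [
        [$class: 'StringParameterValue', name: 'RELEASE_VERSION', value: T_RELEASE_VERSION],
        [$class: 'BooleanParameterValue', name: 'FASTFORWARD_TO_DEPLOY', value: true]
    ]
}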
