AWS CodeBuild buildspec bash syntax error: bad substitution with if statement - bash

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo. Before looping through the directory path $TF_ROOT_DIR, I use a bash if statement to check whether the GitHub branch name $BRANCH_NAME is within the env variable $LIVE_BRANCHES. The bash if statement fails with the error: syntax error: bad substitution. When I reproduce the if statement within a local bash script, the if statement works as it's supposed to.
The env variables are defined in the CodeBuild project (see the project JSON below).
Here's a relevant snippet from the buildspec.yml:
version: 0.2

env:
  shell: bash

phases:
  build:
    commands:
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
The build log shows the same error: syntax error: bad substitution.
Here's the AWS CodeBuild project JSON to reproduce the CodeBuild project:
{
  "projects": [
    {
      "name": "terraform_validate_plan",
      "arn": "arn:aws:codebuild:us-west-2:xxxxx:project/terraform_validate_plan",
      "description": "Perform terraform plan and terraform validator",
      "source": {
        "type": "GITHUB",
        "location": "https://github.com/marshall7m/sparkify_end_to_end.git",
        "gitCloneDepth": 1,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "buildspec": "deployment/CI/dev/cfg/buildspec_terraform_validate_plan.yml",
        "reportBuildStatus": false,
        "insecureSsl": false
      },
      "secondarySources": [],
      "secondarySourceVersions": [],
      "artifacts": {
        "type": "NO_ARTIFACTS",
        "overrideArtifactName": false
      },
      "cache": {
        "type": "NO_CACHE"
      },
      "environment": {
        "type": "LINUX_CONTAINER",
        "image": "hashicorp/terraform:0.12.28",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
          {
            "name": "TF_ROOT_DIR",
            "value": "deployment",
            "type": "PLAINTEXT"
          },
          {
            "name": "LIVE_BRANCHES",
            "value": "(dev, prod)",
            "type": "PLAINTEXT"
          }
Here's the associated buildspec file content (buildspec_terraform_validate_plan.yml):
version: 0.2

env:
  shell: bash
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: TF_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: TF_AWS_SECRET_ACCESS_KEY_ID

phases:
  install:
    commands:
      # install/incorporate terraform validator?
  pre_build:
    commands:
      # CodeBuild environment variables:
      #   BRANCH_NAME   -- GitHub branch that triggered the CodeBuild project
      #   TF_ROOT_DIR   -- Directory within branch ($BRANCH_NAME) that will be iterated through for terraform planning and testing
      #   LIVE_BRANCHES -- Branches that represent a live cloud environment
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      - bash -version || echo "${BASH_VERSION}" || bash --version
      - |
        if [[ -z "${BRANCH_NAME}" ]]; then
          # extract branch from github webhook
          BRANCH_NAME=$(echo $CODEBUILD_WEBHOOK_HEAD_REF | cut -d'/' -f 3)
        fi
      - "echo Triggered Branch: $BRANCH_NAME"
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - "echo Terraform root directory: $TF_ROOT_DIR"
  build:
    commands:
      - |
        for dir in $TF_ROOT_DIR; do
          # get list of non-hidden directories within $dir/
          service_dir_list=$(find "${dir}" -type d | grep -v '/\.')
          for sub_dir in $service_dir_list; do
            # if $sub_dir contains .tf or .tfvars files
            if (ls ${sub_dir}/*.tf) > /dev/null 2>&1 || (ls ${sub_dir}/*.tfvars) > /dev/null 2>&1; then
              cd $sub_dir
              echo ""
              echo "*************** terraform init ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform init
              echo ""
              echo "*************** terraform plan ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform plan
              cd - > /dev/null
            fi
          done
        done
Given this is just a side project, all files that could be relevant to this problem are within a public repo here.
UPDATES
I tried adding a #!/bin/bash shebang line, but it resulted in the following CodeBuild error:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: #!/bin/bash
version: 0.2

env:
  shell: bash

phases:
  build:
    commands:
      - |
        #!/bin/bash
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
Solution
As mentioned by @Marcin, I used an AWS managed image within CodeBuild (aws/codebuild/standard:4.0) and downloaded Terraform within the install phase.
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -q
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && mv terraform /usr/local/bin/
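For anyone who needs to stay on the hashicorp/terraform image: that image is Alpine-based and ships no bash, so the env: shell: bash setting has nothing to invoke, and the commands evidently run under BusyBox sh, where bash-only expansions like ${LIVE_BRANCHES[*]} fail with "bad substitution". A minimal POSIX-sh rewrite of the branch check (a sketch; note LIVE_BRANCHES is the plain string "(dev, prod)" from the project JSON, not a real bash array):

      - |
        # POSIX sh has no bash arrays, so ${LIVE_BRANCHES[*]} triggers
        # "bad substitution"; a case statement does the same substring match.
        case " ${LIVE_BRANCHES} " in
          *"${BRANCH_NAME}"*)
            # Iterate only through BRANCH_NAME directory
            TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
            ;;
          *)
            # Iterate through both dev and prod directories
            TF_ROOT_DIR=${TF_ROOT_DIR}/*/
            ;;
        esac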

I tried to reproduce your issue, but it all works fine for me.
The only thing I've noticed is that you are using $BRANCH_NAME, but it's not defined anywhere. Even with $BRANCH_NAME missing, though, the buildspec.yml you've posted runs fine.
Update using hashicorp/terraform:0.12.28 image

Related

Bash: How to execute paths

I have a job in my gitlab-cicd.yml file:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${CI_PROJECT_DIR}/soap-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/rest-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
      - ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
And I want to change it to this one:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - *resolve_paths
I'm trying to use this bash script:
.resolve_paths: &resolve_paths |-
  if [ "${MODULE_FIRST}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/site/jacoco/index.html"
  fi
  if [ "${MODULE_SECOND}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/site/jacoco/index.html"
  fi
And right now I'm getting this error in pipeline:
WARNING: if [ "rest-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/site/jacoco/index.html"\nfi\nif [ "soap-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/site/jacoco/index.html"\nfi: no matching files ERROR: No files to upload
Can I execute [sic] paths using bash script like this?
No, scripts cannot alter the current YAML, particularly not if you specify the script (which is just a string) in a place where it is interpreted as a path.
You could trigger a dynamically generated YAML:
generate:
  stage: build
  script:
    - |
      exec > generated.yml
      echo ".resolve_paths: &resolve_paths"
      for module in "${MODULE_FIRST}" "${MODULE_SECOND}"; do
        [[ "$module" = UNKNOWN ]] && continue
        echo "- ${CI_PROJECT_DIR}/${module}/target/surefire-reports/"
        echo "- ${CI_PROJECT_DIR}/${module}/target/site/jacoco/index.html"
      done
      sed '1,/^\.\.\. *$/d' "${CI_CONFIG_PATH}"
  artifacts:
    paths:
      - generated.yml

run:
  stage: deploy
  trigger:
    include:
      - artifact: generated.yml
        job: generate

...
# Start of actual CI. When this runs, there will be an
# auto-generated job `.resolve_paths: &resolve_paths`.
# Put the rest of your CI (e.g. `unit_test:`) here.
But there are so many extensions in GitLab's YAML that you will likely find a far better solution, depending on what you plan to do with .resolve_paths. Maybe have a look at:
artifacts:exclude
additional jobs with rules:
A runtime alternative is sketched below.
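If the goal is just to avoid hard-coding the two module paths, a simpler runtime workaround (my sketch, not part of the answer above; the coverage/ directory name is arbitrary) is to copy whatever exists into one fixed directory and publish only that:

unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - mvn ${MAVEN_CLI_OPTS} clean test
    - mkdir -p coverage
    - |
      # Collect reports from whichever modules are enabled, skipping "UNKNOWN"
      for module in "${MODULE_FIRST}" "${MODULE_SECOND}"; do
        [ "$module" = UNKNOWN ] && continue
        cp -r "${module}/target/surefire-reports" "coverage/${module}-surefire-reports"
        cp "${module}/target/site/jacoco/index.html" "coverage/${module}-jacoco.html"
      done
  artifacts:
    expose_as: 'code coverage'
    paths:
      - coverage/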

MS Teams Not Working As Expected With GitHub Actions

I am facing an issue while sending notifications to Microsoft Teams using the GitHub Actions workflow YAML below. In the first job I use the correct "ls -lrt" command, and when job1 succeeds I get a success notification in Teams. To test the failure path, I purposely removed the hyphen (-) from "ls lrt" so that the second job fails and I can get a failure notification. The overall idea is that whether a job fails or succeeds, I must get a notification. But this is not happening. Any guidance and help would be appreciated.
name: msteams
on: push
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls -lrt
      - name: "testing_ms"
        if: always()
        uses: ./.github/actions
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls lrt
      - name: "testing ms"
        if: always()
        uses: ./.github/actions
As you can see in the YAML above, I'm using uses: ./.github/actions, so I kept the code below in another YAML file inside the .github/actions folder, parallel to my main GitHub Actions workflow YAML.
name: 'MS Notification'
description: 'Notification to MS Teams'
runs:
  using: "composite"
  steps:
    - id: notify
      shell: bash
      run: |
        echo "This is for testing"
        # step logic
        # Variables specific to this workflow
        PIPELINE_PUBLISHING_NAME="GitHub Actions Workflows Library"
        BRANCH_NAME="${GITHUB_REF#refs/*/}"
        PIPELINE_TEAMS_WEBHOOK=${{ secrets.MSTEAMS_WEBHOOK }}
        # Common logic for notifications
        TIME_STAMP=$(date '+%A %d %B %Y, %r - %Z')
        GITHUBREPO_OWNER=$(echo ${GITHUB_REPOSITORY} | cut -d / -f 1)
        GITHUBREPO_NAME=${GITHUB_REPOSITORY}
        GITHUBREPO_URL=${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
        SHA=${GITHUB_SHA}
        SHORT_SHA=${SHA::7}
        RUN_ID=${GITHUB_RUN_ID}
        RUN_NUM=${GITHUB_RUN_NUMBER}
        AUTHOR_AVATAR_URL="${{ github.event.sender.avatar_url }}"
        AUTHOR_HTML_URL="${{ github.event.sender.url }}"
        AUTHOR_LOGIN="${{ github.event.sender.login }}"
        COMMIT_HTML_URL="${GITHUBREPO_URL}/commit/${SHA}"
        COMMIT_AUTHOR_NAME="${{ github.event.sender.login }}"
        case ${{ job.status }} in
          failure )
            NOTIFICATION_COLOR="dc3545"
            NOTIFICATION_ICON="&#x274C"
            NOTIFICATION_STATUS="FAILURE"
            ;;
          success )
            NOTIFICATION_COLOR="28a745"
            NOTIFICATION_ICON="&#x2705"
            NOTIFICATION_STATUS="SUCCESS"
            ;;
          cancelled )
            NOTIFICATION_COLOR="17a2b8"
            NOTIFICATION_ICON="&#x2716"
            NOTIFICATION_STATUS="CANCELLED"
            ;;
          *)
            NOTIFICATION_COLOR="778899"
            NOTIFICATION_ICON="&#x2754"
            NOTIFICATION_STATUS="UNKNOWN"
            ;;
        esac
        # set pipeline version information if available
        if [[ '${{ env.CICD_PIPELINE_VERSION }}' != '' ]]; then
          PIPELINE_VERSION="(v. ${{ env.CICD_PIPELINE_VERSION }})"
        else
          PIPELINE_VERSION=""
        fi
        NOTIFICATION_SUMARY="${NOTIFICATION_ICON} ${NOTIFICATION_STATUS} - ${PIPELINE_PUBLISHING_NAME} [ ${BRANCH_NAME} branch ] >> ${{ github.workflow }} ${PIPELINE_VERSION} "
        TEAMS_WEBHOOK_URL="${PIPELINE_TEAMS_WEBHOOK}"
        # additional sections can be added to supply further workflow-specific information
        message-card_json_payload() {
          cat <<EOF
        {
          "@type": "MessageCard",
          "@context": "https://schema.org/extensions",
          "summary": "${NOTIFICATION_SUMARY}",
          "themeColor": "${NOTIFICATION_COLOR}",
          "title": "${NOTIFICATION_SUMARY}",
          "sections": [
            {
              "activityTitle": "**CI #${RUN_NUM} (commit [${SHORT_SHA}](${COMMIT_HTML_URL}))** on [${GITHUBREPO_NAME}](${GITHUBREPO_URL})",
              "activitySubtitle": "by ${COMMIT_AUTHOR_NAME} [${AUTHOR_LOGIN}](${AUTHOR_HTML_URL}) on ${TIME_STAMP}",
              "activityImage": "${AUTHOR_AVATAR_URL}",
              "markdown": true
            }
          ],
          "potentialAction": [
            {
              "@type": "OpenUri",
              "name": "View Workflow Run",
              "targets": [{
                "os": "default",
                "uri": "${GITHUBREPO_URL}/actions/runs/${RUN_ID}"
              }]
            },
            {
              "@type": "OpenUri",
              "name": "View Commit Changes",
              "targets": [{
                "os": "default",
                "uri": "${COMMIT_HTML_URL}"
              }]
            }
          ]
        }
        EOF
        }
        echo "NOTIFICATION_SUMARY ${NOTIFICATION_SUMARY}"
        echo "------------------------------------------------"
        echo "MessageCard payload"
        echo "$(message-card_json_payload)"
        echo "------------------------------------------------"
        HTTP_RESPONSE=$(curl -s -H "Content-Type: application/json" \
          --write-out "HTTPSTATUS:%{http_code}" \
          --url "${TEAMS_WEBHOOK_URL}" \
          -d "$(message-card_json_payload)"
        )
        echo "------------------------------------------------"
        echo "HTTP_RESPONSE $HTTP_RESPONSE"
        echo "------------------------------------------------"
        # extract the body
        HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
        # extract the status
        HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
        if [ ! $HTTP_STATUS -eq 200 ]; then
          echo "::error::Error sending MS Teams message card request [HTTP status: $HTTP_STATUS]"
          # print the body
          echo "$HTTP_BODY"
          exit 1
        fi
I don't know the entire answer for you, but right off I see the composite action trying to read secrets, which composite actions don't support. Try setting input params on the composite action to pass in what you need.
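For illustration, a minimal sketch of that input-passing approach (the input name teams-webhook is hypothetical). In the calling workflow, forward the secret explicitly:

- name: "testing_ms"
  if: always()
  uses: ./.github/actions
  with:
    teams-webhook: ${{ secrets.MSTEAMS_WEBHOOK }}

and in the composite action's action.yml, declare the input and read it instead of the secrets context:

inputs:
  teams-webhook:
    description: 'MS Teams incoming webhook URL'
    required: true
runs:
  using: "composite"
  steps:
    - id: notify
      shell: bash
      run: |
        # The webhook now arrives as an input, not a secret lookup
        PIPELINE_TEAMS_WEBHOOK="${{ inputs.teams-webhook }}"
        # ...rest of the notification logic as above...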

Jenkins pipeline - How to read the success status of build?

Below is the output after running the build (with success):
$ sam build
2019-06-02 15:36:37 Building resource 'SomeFunction'
2019-06-02 15:36:37 Running PythonPipBuilder:ResolveDependencies
2019-06-02 15:36:39 Running PythonPipBuilder:CopySource
Build Succeeded
Built Artifacts : .aws-sam/build
Built Template : .aws-sam/build/template.yaml
Commands you can use next
=========================
[*] Invoke Function: sam local invoke
[*] Package: sam package --s3-bucket <yourbucket>
[command] && echo "Yes" approach did not help me.
I tried to use this in Jenkins pipeline
def samAppBuildStatus = sh(script: '[cd sam-app-folder; sam build | grep 'Succeeded' ] && echo true', returnStatus: true) as Boolean
as a one-liner script command, but it does not work.
How can I grab the build success status with a bash script, for use in a Jenkins pipeline?
Use this to grab the exit status of the command:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build | grep "Succeeded"'
or this if you don't want to see any stderr in the output:
def samAppBuildStatus = sh returnStatus: true, script: 'cd sam-app-folder; sam build 2>&1 | grep "Succeeded"'
then later in your Jenkinsfile you can do something like this:
if (!samAppBuildStatus) {
    echo "build success [$samAppBuildStatus]"
} else {
    echo "build failed [$samAppBuildStatus]"
}
The reason for the ! is because the definitions of true and false between shell and groovy differ (0 is true for shell).
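If you only care about pass/fail rather than the "Succeeded" text, a slightly simpler sketch (assuming your sam version exits non-zero when the build fails) is to use the exit code of sam build itself:

def samBuildExit = sh(returnStatus: true, script: 'cd sam-app-folder && sam build')
if (samBuildExit == 0) {
    echo "build success"
} else {
    error "build failed with exit code ${samBuildExit}"
}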

How to fix "length must be less than 40" error in shippable.yml file?

Shippable CI UI is showing me the following error:
ERROR: 1 validation error detected: Value '[if [ develop == master ]; then xxx-xx-prod; else xxx-xx-dev; fi]' at 'environmentNames' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 40, Member must have length greater than or equal to 4]
This is my shippable.yml file:
branches:
  only:
    - develop
    - master

build:
  ci:
    - "echo 'CI is running'"
  post_ci:
    - "docker build -t=\"xxxx/xxx-xxxx:$BRANCH.$BUILD_NUMBER\" ."
    - "docker push xxxx/xxx-xxx:$BRANCH.$BUILD_NUMBER"
    - "pip install --upgrade botocore"
    - "pip install setuptools==34.0.1"

integrations:
  deploy:
    - application_name: seamless-ai
      env_name: if [ "$BRANCH" == "master" ]; then "xxx-xx-prod"; else "xxx-xx-dev"; fi
      image_name: xxxx/xxx-xxx
      image_tag: $BRANCH.$BUILD_NUMBER
      integrationName: AWS-int
      region: us-east-1
      type: aws
  hub:
    - integrationName: "Docker Hub"
      type: docker
language: node_js
So essentially, my issue is the following:
env_name: if [ "$BRANCH" == "master" ]; then "xxx-xx-prod"; else "xxx-xx-dev"; fi
What I need is: if the branch is master, then env_name must be xxx-xx-prod; otherwise, env_name must be xxx-xx-dev.
How can I fix this issue?
Since we see that $BRANCH gets evaluated inside the value, a possible solution could be to write the result to an env variable and then just reference that.
This can be done by adding this line to post_ci:
- if [ "$BRANCH" == "master" ]; then export ENV_NAME="xxx-xx-prod"; else export ENV_NAME="xxx-xx-dev"; fi
and then in deploy:
env_name: $ENV_NAME
I have no idea whether that actually works.
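Assembled, the relevant fragment of the shippable.yml would look like this (an untested sketch, per the caveat above):

build:
  post_ci:
    - if [ "$BRANCH" == "master" ]; then export ENV_NAME="xxx-xx-prod"; else export ENV_NAME="xxx-xx-dev"; fi

integrations:
  deploy:
    - application_name: seamless-ai
      env_name: $ENV_NAME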

Jenkins: Pipeline sh bad substitution error

A step in my pipeline uploads a .tar to an Artifactory server. I am getting a Bad substitution error when passing in env.BUILD_NUMBER, but the same command works when the number is hard-coded. The script is written in Groovy through Jenkins and runs in the Jenkins workspace.
sh 'curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar"'
returns the errors:
[Pipeline] sh
[Package_Deploy_Pipeline] Running shell script
/var/lib/jenkins/workspace/Package_Deploy_Pipeline#tmp/durable-4c8b7958/script.sh: 2:
/var/lib/jenkins/workspace/Package_Deploy_Pipeline#tmp/durable-4c8b7958/script.sh: Bad substitution
[Pipeline] } //node
[Pipeline] Allocate node : End
[Pipeline] End of Pipeline
ERROR: script returned exit code 2
If I hard-code a build number in place of ${env.BUILD_NUMBER}, I get no errors and the code runs successfully.
sh 'curl -v --user user:password --data-binary ${buildDir}package113.tar -X PUT "http://artifactory.mydomain.com/artifactory/release-packages/package113.tar"'
I use ${env.BUILD_NUMBER} within other sh commands within the same script and have no issues in any other places.
This turned out to be a syntax issue. Wrapping the command in single quotes caused ${env.BUILD_NUMBER} to be passed literally instead of its value. I wrapped the whole command in double quotes and escaped the nested ones. Works fine now.
sh "curl -v --user user:password --data-binary ${buildDir}package${env.BUILD_NUMBER}.tar -X PUT \"http://artifactory.mydomain.com/artifactory/release-packages/package${env.BUILD_NUMBER}.tar\""
In order to pass Groovy parameters into bash scripts in Jenkins pipelines (which sometimes causes bad substitutions), you have two options:
The triple double quotes way [ " " " ]
OR
the triple single quotes way [ ' ' ' ]
In triple double quotes you can render a normal Groovy variable using ${someVariable}; an environment variable using ${env.someVariable}; and a parameter injected into your job using ${params.someVariable}.
example:
def YOUR_APPLICATION_PATH= "${WORKSPACE}/myApp/"
sh """#!/bin/bash
cd ${YOUR_APPLICATION_PATH}
npm install
"""
In triple single quotes things get a little bit tricky: you can pass the value through an environment variable and reference it as "\${someVariable}", or concatenate the Groovy variable using ''' + someVariable + '''.
examples:
def YOUR_APPLICATION_PATH= "${WORKSPACE}/myApp/"
sh '''#!/bin/bash
cd ''' + YOUR_APPLICATION_PATH + '''
npm install
'''
OR
pipeline {
    agent { node { label "test" } }
    environment {
        YOUR_APPLICATION_PATH = "${WORKSPACE}/myapp/"
    }
    // continue...
    sh '''#!/bin/bash
    cd "\${YOUR_APPLICATION_PATH}"
    npm install
    '''
}
Actually, you seem to have misunderstood the env variable. In your sh block, you should access ${BUILD_NUMBER} directly.
Reason/Explanation: env represents the environment inside the script. This environment is used/available directly to anything that is executed, e.g. shell scripts.
Please also pay attention to not write anything to env.*, but use withEnv{} blocks instead.
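A minimal sketch of that point (the echo command is hypothetical): Jenkins exports BUILD_NUMBER into the shell's environment, so a single-quoted, non-interpolated Groovy string works as-is:

// The shell, not Groovy, expands ${BUILD_NUMBER} here
sh 'echo "Uploading package${BUILD_NUMBER}.tar"'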
Usually the most common cause of the Bad substitution error is using sh instead of bash.
Especially when using Jenkins, if you're using Execute shell, make sure your Command starts with shebang, e.g. #!/bin/bash -xe or #!/usr/bin/env bash.
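For example, this Execute shell step (a trivial sketch) only works once the shebang forces bash:

#!/bin/bash -xe
# Arrays are a bash feature; under plain sh the expansion below
# would fail with "Bad substitution".
colors=(red green blue)
echo "${colors[@]}"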
I can definitely tell you it's all about the sh shell vs. the bash shell. I fixed this problem by specifying #!/bin/bash -xe as follows:
node {
    stage("Preparing") {
        sh '''#!/bin/bash -xe
        colls=( col1 col2 col3 )
        for eachCol in "${colls[@]}"
        do
            echo $eachCol
        done
        '''
    }
}
I had this same issue when working on a Jenkins Pipeline for Amazon S3 Application upload.
My script was like this:
pipeline {
    agent any
    parameters {
        string(name: 'Bucket', defaultValue: 's3-pipeline-test', description: 'The name of the Amazon S3 Bucket')
        string(name: 'Prefix', defaultValue: 'my-website', description: 'Application directory in the Amazon S3 Bucket')
        string(name: 'Build', defaultValue: 'public/', description: 'Build directory for the application')
    }
    stages {
        stage('Build') {
            steps {
                echo 'Running build phase'
                sh 'npm install' // Install packages
                sh 'npm run build' // Build project
                sh 'ls' // List project files
            }
        }
        stage('Deploy') {
            steps {
                echo 'Running deploy phase'
                withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWSCredentials', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
                    sh 'aws s3 ls' // List AWS S3 buckets
                    sh 'aws s3 sync "${params.Build}" s3://"${params.Bucket}/${params.Prefix}" --delete' // Sync project files with AWS S3 Bucket project path
                }
            }
        }
    }
    post {
        success {
            echo 'Deployment to Amazon S3 succeeded'
        }
        failure {
            echo 'Deployment to Amazon S3 failed'
        }
    }
}
Here's how I fixed it:
Seeing that it's an interpolation of variables, I had to change the single quotation marks (' ') in this line of the script:
sh 'aws s3 sync "${params.Build}" s3://"${params.Bucket}/${params.Prefix}" --delete' // Sync project files with AWS S3 Bucket project path
to double quotation marks (" "):
sh "aws s3 sync ${params.Build} s3://${params.Bucket}/${params.Prefix} --delete" // Sync project files with AWS S3 Bucket project path
So my script looked like this afterwards:
pipeline {
agent any
parameters {
string(name: 'Bucket', defaultValue: 's3-pipeline-test', description: 'The name of the Amazon S3 Bucket')
string(name: 'Prefix', defaultValue: 'my-website', description: 'Application directory in the Amazon S3 Bucket')
string(name: 'Build', defaultValue: 'public/', description: 'Build directory for the application')
}
stages {
stage('Build') {
steps {
echo 'Running build phase'
sh 'npm install' // Install packages
sh 'npm run build' // Build project
sh 'ls' // List project files
}
}
stage('Deploy') {
steps {
echo 'Running deploy phase'
withCredentials([[$class: 'AmazonWebServicesCredentialsBinding', accessKeyVariable: 'AWS_ACCESS_KEY_ID', credentialsId: 'AWSCredentials', secretKeyVariable: 'AWS_SECRET_ACCESS_KEY']]) {
sh 'aws s3 ls' // List AWS S3 buckets
sh "aws s3 sync ${params.Build} s3://${params.Bucket}/${params.Prefix} --delete" // Sync project files with AWS S3 Bucket project path
}
}
}
}
post {
success {
echo 'Deployment to Amazon S3 suceeded'
}
failure {
echo 'Deployment to Amazon S3 failed'
}
}
}
That's all. I hope this helps!
I was having the issue of ${env.MAJOR_VERSION} not being substituted in the artifact jar name, so I approached it by adding an environment step to the Jenkinsfile.
pipeline {
    agent any
    environment {
        MAJOR_VERSION = 1
    }
    stages {
        stage('build') {
            steps {
                sh 'ant -f build.xml -v'
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'dist/*.jar', fingerprint: true
        }
    }
}
That solved the issue, and the bad substitution no longer appeared in my Jenkins build output. The environment step plays a major role in a Jenkinsfile.
The suggestion from @avivamg didn't work for me; here is the syntax that works for me:
sh "python3 ${env.WORKSPACE}/package.py --product productname " +
"--build_dir ${release_build_dir} " +
"--signed_product_dir ${signed_product_dir} " +
"--version ${build_version}"
I got a similar issue, but my use case is a little different:
steps {
    sh '''#!/bin/bash -xe
    VAR=TRIAL
    echo $VAR
    if [ -d /var/lib/jenkins/.m2/'\${params.application_name}' ]
    then
        echo 'working'
        echo ${VAR}
    else
        echo 'not working'
    fi
    '''
}
Here I'm trying to declare a variable inside the script and also use a parameter from outside. After trying multiple ways, the following script worked:
stage('cleaning com/avizva directory') {
    steps {
        sh """#!/bin/bash -xe
        VAR=TRIAL
        echo \$VAR
        if [ -d /var/lib/jenkins/.m2/${params.application_name} ]
        then
            echo 'working'
            echo \${VAR}
        else
            echo 'not working'
        fi
        """
    }
}
Changes made:
- Replaced triple single quotes with triple double quotes.
- Whenever I want to refer to a local shell variable, I use an escape character: $VAR --> \$VAR
This caused the error Bad Substitution:
pipeline {
    agent any
    environment {
        DOCKER_IMAGENAME = "mynginx:latest"
        DOCKER_FILE_PATH = "./docker"
    }
    stages {
        stage('DockerImage-Build') {
            steps {
                sh 'docker build -t ${env.DOCKER_IMAGENAME} ${env.DOCKER_FILE_PATH}'
            }
        }
    }
}
This fixed it: replacing ' with " in the sh command:
pipeline {
    agent any
    environment {
        DOCKER_IMAGENAME = "mynginx:latest"
        DOCKER_FILE_PATH = "./docker"
    }
    stages {
        stage('DockerImage-Build') {
            steps {
                sh "docker build -t ${env.DOCKER_IMAGENAME} ${env.DOCKER_FILE_PATH}"
            }
        }
    }
}
The Jenkins script can also fail inside the "sh" command line itself. E.g.:
sh 'npm run build' <-- fails, referring to package.json
needs to be changed to:
sh 'npm run ng build....'
because ng is not on the $PATH visible to the scripts in package.json.
