GitLab CI rules not working with extends and individual rules - bash

Below are two jobs in the build stage.
By default, a common condition is applied to both via the extends keyword, through .ifawsdeploy.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, connect-admin-server should run, and that part works as expected.
If no value is provided for $ADMIN_SERVER_IP, create-admin-server should run instead, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"
#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem
connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt
create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How can I fix that?
I tried the step below, based on suggestions in the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
I then included the above in the extends: of a job:
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax

You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want (a sketch of the anchor approach follows the example below).
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
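The YAML-anchor variant would look roughly like the following sketch. The .aws_deploy_rule name is hypothetical, and note that anchors only work within a single file, whereas !reference also works across included files:

.aws_deploy_rule: &aws_deploy_rule
  if: '$TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

create-admin-server:
  rules:
    - *aws_deploy_rule   # shared rule pulled in via the anchor
    - ... # your additional rules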
If no value provided to $ADMIN_SERVER_IP then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there is no rule that ever allows the job to run. You need either a condition that evaluates to true, or a default case (an item with no if: condition), in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
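Putting the two parts of this answer together, one way the pieces could fit (a sketch, not a drop-in replacement) is to place the never rule first so it takes precedence, and let the referenced .ifawsdeploy rule act as the case that allows the job to run:

create-admin-server:
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never                        # skip whenever an existing admin server IP is supplied
    - !reference [.ifawsdeploy, rules]   # otherwise fall through to the shared aws conditions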

You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, plus the pipeline records: Pipeline no value - GitLab and Pipeline has value - GitLab.

Related

GitHub Actions: Passing JSON data to another job

I'm attempting to pass an array of dynamically fetched data from one GitHub Action job to the actual job doing the build. This array will be used as part of a matrix to build for multiple versions. However, I'm encountering an issue when the bash variable storing the array is evaluated.
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          sudo apt-get install -y jq && \
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          VERSIONS=($MAINNET $TESTNET) && \
          echo "${VERSIONS[@]}" && \
          VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s) && \
          echo $VERSION_JSON && \
          echo '::set-output name=value::$VERSION_JSON'
        shell: bash
      - id: debug
        run: |
          echo "Result: ${{ steps.matrix.outputs.value }}"
  changes:
    needs: setup
    runs-on: ubuntu-latest
    # Set job outputs to values from filter step
    outputs:
      core: ${{ steps.filter.outputs.core }}
      package: ${{ steps.filter.outputs.package }}
    strategy:
      matrix:
        TEST: [buy, cancel, create_auction_house, delegate, deposit, execute_sale, sell, update_auction_house, withdraw_from_fee, withdraw_from_treasury, withdraw]
        SOLANA_VERSION: ${{fromJson(needs.setup.outputs.versions)}}
    steps:
      - uses: actions/checkout@v2
      # For pull requests it's not necessary to checkout the code
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            core:
              - 'core/**'
            package:
              - 'auction-house/**'
      - name: debug
        id: debug
        working-directory: ./auction-house/program
        run: echo ${{ needs.setup.outputs.versions }}
In the setup job above, the two versions are evaluated to a bash array (in VERSIONS) and converted into a JSON array to be passed to the next job (in VERSION_JSON). The last echo in the matrix step results in a print of [ "1.10.31", "1.11.1" ], but the debug step prints out this:
Run echo "Result: "$VERSION_JSON""
  echo "Result: "$VERSION_JSON""
  shell: /usr/bin/bash -e {0}
  env:
    CARGO_TERM_COLOR: always
    RUST_TOOLCHAIN: stable
Result:
The changes job also results in an error:
Error when evaluating 'strategy' for job 'changes'.
.github/workflows/program-auction-house.yml (Line: 44, Col: 25): Unexpected type of value '$VERSION_JSON', expected type: Sequence.
It definitely seems like the $VERSION_JSON variable isn't actually being evaluated properly, but I can't figure out where the evaluation is going wrong.
For echo '::set-output name=value::$VERSION_JSON' you need to use double quotes, otherwise bash will not expand $VERSION_JSON.
set-output is also not happy with multi-line data. For your case, you can use jq -s -c so the output will be one line.
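Applied to the setup step above, the relevant lines could look roughly like this sketch (it keeps the question's ::set-output syntax, which GitHub has since deprecated in favor of writing to $GITHUB_OUTPUT):

      - id: matrix
        run: |
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
          # -s slurps the two JSON strings into one array, -c keeps it on a single line
          VERSION_JSON=$(echo $MAINNET $TESTNET | jq -s -c)
          # double quotes so the shell expands $VERSION_JSON before set-output records it
          echo "::set-output name=value::$VERSION_JSON"
        shell: bash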

Bash: How to execute paths

I have a job in my gitlab-cicd.yml file:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${CI_PROJECT_DIR}/soap-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/rest-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
      - ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
And I want to change it to this one:
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - *resolve_paths
I tried to use this bash script:
.resolve_paths: &resolve_paths |-
  if [ "${MODULE_FIRST}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/site/jacoco/index.html"
  fi
  if [ "${MODULE_SECOND}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/site/jacoco/index.html"
  fi
And right now I'm getting this error in the pipeline:
WARNING: if [ "rest-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/site/jacoco/index.html"\nfi\nif [ "soap-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/site/jacoco/index.html"\nfi: no matching files ERROR: No files to upload
Can I execute [sic] paths using bash script like this?
No, scripts cannot alter the current YAML, particularly not if you specify the script (which is just a string) in a place where it is interpreted as a path.
You could trigger a dynamically generated YAML:
generate:
  stage: build
  script:
    - |
      exec > generated.yml
      echo ".resolve_paths: &resolve_paths"
      for module in "${MODULE_FIRST}" "${MODULE_SECOND}"; do
        [[ "$module" = UNKNOWN ]] && continue
        echo "- ${CI_PROJECT_DIR}/${module}/target/surefire-reports/"
        echo "- ${CI_PROJECT_DIR}/${module}/target/site/jacoco/index.html"
      done
      sed '1,/^\.\.\. *$/d' "${CI_CONFIG_PATH}"
  artifacts:
    paths:
      - generated.yml

run:
  stage: deploy
  trigger:
    include:
      - artifact: generated.yml
        job: generate

...
# Start of actual CI. When this runs, there will be an
# auto-generated job `.resolve_paths: &resolve_paths`.
# Put the rest of your CI (e.g. `unit_test:`) here.
But there are so many extensions in GitLab's YAML that you will likely find a much better solution, depending on what you plan to do with .resolve_paths. Maybe have a look at:
- artifacts:exclude
- additional jobs with rules: (see the sketch below)
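For the second option, a minimal sketch (assuming $MODULE_FIRST is a CI/CD variable holding a module directory name, as in the question, and that the goal is simply to collect that module's artifacts when it exists) could be a per-module job gated by a rule; a second, analogous job would cover $MODULE_SECOND:

unit_test_first_module:
  stage: test
  image: $MAVEN_IMAGE
  rules:
    - if: '$MODULE_FIRST != "UNKNOWN"'
  script:
    - mvn ${MAVEN_CLI_OPTS} clean test
  artifacts:
    paths:
      - ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/surefire-reports/
      - ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/site/jacoco/index.html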

AWS CodeBuild buildspec bash syntax error: bad substitution with if statement

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo. Before looping through the directory path $TF_ROOT_DIR, I'm using a bash if statement to check if the GitHub branch name $BRANCH_NAME is within an env variable $LIVE_BRANCHES. As you can see in the error screenshot below, the bash if statement outputs the error: syntax error: bad substitution. When I reproduce the if statement within a local bash script, the if statement works as it's supposed to.
Here are the env variables defined in the CodeBuild project:
Here's a relevant snippet from the buildspec.yml:
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
Here's the build log that shows the syntax error:
Here's the AWS CodeBuild project JSON to reproduce the CodeBuild project:
{
  "projects": [
    {
      "name": "terraform_validate_plan",
      "arn": "arn:aws:codebuild:us-west-2:xxxxx:project/terraform_validate_plan",
      "description": "Perform terraform plan and terraform validator",
      "source": {
        "type": "GITHUB",
        "location": "https://github.com/marshall7m/sparkify_end_to_end.git",
        "gitCloneDepth": 1,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "buildspec": "deployment/CI/dev/cfg/buildspec_terraform_validate_plan.yml",
        "reportBuildStatus": false,
        "insecureSsl": false
      },
      "secondarySources": [],
      "secondarySourceVersions": [],
      "artifacts": {
        "type": "NO_ARTIFACTS",
        "overrideArtifactName": false
      },
      "cache": {
        "type": "NO_CACHE"
      },
      "environment": {
        "type": "LINUX_CONTAINER",
        "image": "hashicorp/terraform:0.12.28",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
          {
            "name": "TF_ROOT_DIR",
            "value": "deployment",
            "type": "PLAINTEXT"
          },
          {
            "name": "LIVE_BRANCHES",
            "value": "(dev, prod)",
            "type": "PLAINTEXT"
          }
Here's the associated buildspec file content: (buildspec_terraform_validate_plan.yml)
version: 0.2
env:
  shell: bash
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: TF_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: TF_AWS_SECRET_ACCESS_KEY_ID
phases:
  install:
    commands:
      # install/incorporate terraform validator?
  pre_build:
    commands:
      # CodeBuild environment variables
      # BRANCH_NAME -- GitHub branch that triggered the CodeBuild project
      # TF_ROOT_DIR -- Directory within branch ($BRANCH_NAME) that will be iterated through for terraform planning and testing
      # LIVE_BRANCHES -- Branches that represent a live cloud environment
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      - bash -version || echo "${BASH_VERSION}" || bash --version
      - |
        if [[ -z "${BRANCH_NAME}" ]]; then
          # extract branch from github webhook
          BRANCH_NAME=$(echo $CODEBUILD_WEBHOOK_HEAD_REF | cut -d'/' -f 3)
        fi
      - "echo Triggered Branch: $BRANCH_NAME"
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - "echo Terraform root directory: $TF_ROOT_DIR"
  build:
    commands:
      - |
        for dir in $TF_ROOT_DIR; do
          #get list of non-hidden directories within $dir/
          service_dir_list=$(find "${dir}" -type d | grep -v '/\.')
          for sub_dir in $service_dir_list; do
            #if $sub_dir contains .tf or .tfvars files
            if (ls ${sub_dir}/*.tf) > /dev/null 2>&1 || (ls ${sub_dir}/*.tfvars) > /dev/null 2>&1; then
              cd $sub_dir
              echo ""
              echo "*************** terraform init ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform init
              echo ""
              echo "*************** terraform plan ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform plan
              cd - > /dev/null
            fi
          done
        done
Given this is just a side project, all files that could be relevant to this problem are within a public repo here.
UPDATES
I tried adding a #!/bin/bash shebang line, but it resulted in the CodeBuild error:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: #!/bin/bash
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        #!/bin/bash
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
Solution
As mentioned by @Marcin, I used an AWS managed image within CodeBuild (aws/codebuild/standard:4.0) and downloaded Terraform within the install phase.
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -q
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && mv terraform /usr/local/bin/
I tried to reproduce your issue, but it all works fine for me.
The only thing I've noticed is that you are using $BRANCH_NAME, but it's not defined anywhere. Even with $BRANCH_NAME missing, though, the buildspec.yml you've posted runs fine.
Update: tested with the hashicorp/terraform:0.12.28 image as well.

How do I pass boolean Environmental Variables to a `when` step in CircleCI?

I want to do something along the lines of
commands:
  send-slack:
    parameters:
      condition:
        type: env_var_name
    steps:
      - when:
          # only send if it's true
          condition: << parameters.condition >>
          steps:
            - run: # do some stuff if it's true
jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do Some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - send-slack:
          text: "yay"
          condition: success_message
      - send-slack:
          text: "nay"
          condition: failure_message
Based on this documentation, you cannot use environment variables as conditions in CircleCI. This is because the when logic is evaluated when the configuration is processed (i.e., before the job actually runs and the environment variables are set). As an alternative, I would add the logic to a separate run step (or to the same initial one).
jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do Some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - run:
          name: Send Message
          command: |
            if $success_message; then
              # Send success message
            fi
            if $failure_message; then
              # Send failure message
            fi
Here is a relevant ticket on the CircleCI discussion board.

How to fix "length must be less than 40" error in shippable.yml file?

Shippable CI UI is showing me the following error:
ERROR: 1 validation error detected: Value '[if [ develop == master ]; then xxx-xx-prod; else xxx-xx-dev; fi]' at 'environmentNames' failed to satisfy constraint: Member must satisfy constraint: [Member must have length less than or equal to 40, Member must have length greater than or equal to 4]
This is my shippable.yml file:
branches:
  only:
    - develop
    - master

build:
  ci:
    - "echo 'CI is running'"
  post_ci:
    - "docker build -t=\"xxxx/xxx-xxxx:$BRANCH.$BUILD_NUMBER\" ."
    - "docker push xxxx/xxx-xxx:$BRANCH.$BUILD_NUMBER"
    - "pip install --upgrade botocore"
    - "pip install setuptools==34.0.1"

integrations:
  deploy:
    -
      application_name: seamless-ai
      env_name: if [ "$BRANCH" == "master" ]; then "xxx-xx-prod"; else "xxx-xx-dev"; fi
      image_name: xxxx/xxx-xxx
      image_tag: $BRANCH.$BUILD_NUMBER
      integrationName: AWS-int
      region: us-east-1
      type: aws
  hub:
    -
      integrationName: "Docker Hub"
      type: docker

language: node_js
So essentially, my issue is the following:
env_name: if [ "$BRANCH" == "master" ]; then "xxx-xx-prod"; else "xxx-xx-dev"; fi
What I need to do is the following:
If the branch is master, then env_name must be xxx-xx-prod; otherwise, env_name must be xxx-xx-dev.
How can I fix this issue?
Since we see that $BRANCH gets evaluated inside the value, a possible solution could be to write the result to an env variable and then just use that instead.
This can be done by adding this line to post_ci:
  - if [ "$BRANCH" == "master" ]; then export ENV_NAME="xxx-xx-prod"; else export ENV_NAME="xxx-xx-dev"; fi
and then in deploy:
  env_name: $ENV_NAME
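Put together (untested, same caveat as noted below), the relevant parts of the shippable.yml would look roughly like this:

build:
  post_ci:
    - "docker build -t=\"xxxx/xxx-xxxx:$BRANCH.$BUILD_NUMBER\" ."
    - "docker push xxxx/xxx-xxx:$BRANCH.$BUILD_NUMBER"
    # derive the environment name from the branch and export it for the deploy integration
    - if [ "$BRANCH" == "master" ]; then export ENV_NAME="xxx-xx-prod"; else export ENV_NAME="xxx-xx-dev"; fi

integrations:
  deploy:
    -
      application_name: seamless-ai
      env_name: $ENV_NAME
      image_name: xxxx/xxx-xxx
      image_tag: $BRANCH.$BUILD_NUMBER
      integrationName: AWS-int
      region: us-east-1
      type: aws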
I have no idea whether that actually works.
