How do I pass boolean Environment Variables to a `when` step in CircleCI?

I want to do something along the lines of:

```yaml
commands:
  send-slack:
    parameters:
      condition:
        type: env_var_name
    steps:
      - when:
          # only send if it's true
          condition: << parameters.condition >>
          steps:
            - run: # do some stuff if it's true

jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - send-slack:
          text: "yay"
          condition: success_message
      - send-slack:
          text: "nay"
          condition: failure_message
```

Based on this documentation, you cannot use environment variables as conditions in CircleCI. This is because the `when` logic is evaluated when the configuration is processed (i.e., before the job actually runs and before any environment variables are set). As an alternative, you can move the logic into a separate `run` step (or into the same initial one):
```yaml
jobs:
  deploy:
    steps:
      - run:
          name: Prepare Message
          command: |
            # Do some stuff dynamically to figure out what message to send
            # and save it to success_message or failure_message
            echo "export success_message=true" >> $BASH_ENV
            echo "export failure_message=false" >> $BASH_ENV
      - run:
          name: Send Message
          command: |
            # $success_message expands to the command `true` or `false`,
            # so each body runs only when its variable is "true"
            if $success_message; then
              echo "Send success message here"
            fi
            if $failure_message; then
              echo "Send failure message here"
            fi
```
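One caveat: `if $success_message; then` only works because the variable expands to the literal command `true` or `false`. If the values might ever be anything else, a string comparison is safer. A minimal sketch of the same step with that change:

```yaml
      - run:
          name: Send Message
          command: |
            # Compare against the literal string "true" instead of
            # executing the variable's value as a command.
            if [ "$success_message" = "true" ]; then
              echo "Send success message here"
            fi
            if [ "$failure_message" = "true" ]; then
              echo "Send failure message here"
            fi
```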
Here is a relevant ticket on the CircleCI discussion board.

Related

Unable to get the value of a variable inside a variable in Azure Pipelines

I'm trying to use variables inside variables in Azure Pipelines.
Below is an example of the bash script:

```bash
#!/bin/bash
customer=google
environment=preprod
android_google_preprod_account_activation_url=preprod.google.com
echo "Customer is $customer"
echo "Environment is $environment"
var1=android_${customer}_${environment}_account_activation_url
echo "variable is $var1"
echo "original value is ${!var1}"
```

I get the expected output for the above bash script when I run it on my Ubuntu server, with NO errors:

```
Customer is google
Environment is preprod
variable is android_google_preprod_account_activation_url
original value is preprod.google.com
```
The YAML code for Azure Pipelines is:

```yaml
parameters:
  - name: customer
    displayName: 'select customer'
    type: string
    values:
      - google
  - name: environment
    displayName: 'select environment'
    type: string
    values:
      - preprod

variables:
  - group: android-${{ parameters.customer }}-${{ parameters.environment }}
  - name: var1
    value: android-${{ parameters.customer }}-${{ parameters.environment }}-account-activation-url

steps:
  - script: |
      echo "Customer is $(customer)"
      echo "Environment is $(environment)"
      echo "variable is $(var1)"
      echo "original value is $(!var1)"
    displayName: 'echo variables'
```
The value of `android-google-preprod-account-activation-url` is taken from a variable group in the Library.
It gives me an error for the 4th line:

```
invalid indirect expansion
```

The first 3 lines output as expected. The expected output is:

```
Customer is google
Environment is preprod
variable is android_google_preprod_account_activation_url
original value is preprod.google.com
```

Is there a different syntax that needs to be followed in Azure Pipelines?
I'm not a bash expert, but you're trying to use parameter expansion (see: What is indirect expansion? What does `${!var*}` mean?). That applies to bash variables, though; when you define variables in the DevOps pipeline, you have to use them as environment variables or through the macro syntax, something like:

```bash
android_google_preprod_account_activation_url=preprod.google.com
echo "Customer is $(customer)"
echo "Environment is $(environment)"
var1=android_$(customer)_$(environment)_account_activation_url
echo "variable is $var1"
echo "original value is ${!var1}"
```
The macro syntax `$(varName)` is proprietary to Azure Pipelines and interpolates variable values at runtime; it is not the same as the `${varName}` syntax in Bash scripts.
For your case, you can try the compile-time syntax `${{ variables.varName }}` to get the value in the pipeline:

```yaml
echo "original value is $(${{ variables.var1 }})"
```

With the above change, after you trigger the pipeline:

At compile time, the expression `${{ variables.var1 }}` is replaced with the actual value `android_google_preprod_account_activation_url`, so `$(${{ variables.var1 }})` becomes `$(android_google_preprod_account_activation_url)`.
Then at runtime, that macro expression resolves to the correct value, `preprod.google.com`.
Below is an example I have tested on my side:

```yaml
variables:
  android_google_preprod_account_activation_url: 'preprod.google.com'
  var1: 'android_google_preprod_account_activation_url'

jobs:
  - job: A
    displayName: 'Job A'
    pool:
      vmImage: ubuntu-latest
    steps:
      - checkout: none
      - task: Bash@3
        displayName: 'Print variables'
        inputs:
          targetType: inline
          script: |
            echo "android_google_preprod_account_activation_url = $(android_google_preprod_account_activation_url)"
            echo "var1 = $(var1)"
            echo "original value = $(${{ variables.var1 }})"
```
For more details, see the related document "Understand variable syntax".
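If the variable name can only be determined at runtime (so the compile-time `${{ }}` syntax is not an option), another sketch worth testing: resolve the value in bash, then publish it with a `task.setvariable` logging command for later steps. This assumes the library variable surfaces as an uppercase, underscore-separated environment variable, which is how Azure Pipelines normally exports variables:

```yaml
steps:
  - bash: |
      # Build the name, then normalize it to the uppercase form
      # Azure uses when exporting variables into the environment.
      var1="android_${CUSTOMER}_${ENVIRONMENT}_account_activation_url"
      lookup="${var1^^}"
      value="${!lookup}"
      echo "resolved value is $value"
      # Publish the resolved value for subsequent steps.
      echo "##vso[task.setvariable variable=activationUrl]$value"
    displayName: 'resolve variable at runtime'
    env:
      CUSTOMER: $(customer)
      ENVIRONMENT: $(environment)
  - bash: |
      echo "activation url from previous step: $(activationUrl)"
    displayName: 'use resolved value'
```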

GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default there is a common condition, set via the `extends` keyword with `.ifawsdeploy`.
Only one of the two should run: if the variable `$ADMIN_SERVER_IP` is provided, then `connect_admin_server` should run, and that works.
If no value is provided for `$ADMIN_SERVER_IP`, then `create_admin_server` should run, but it is not running.
```yaml
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"

# export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem

connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt

create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME # to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
```
How can I fix that?
I tried the step below, following suggestions in the comments section:

```yaml
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
```

and included the above in the `extends:` of a job:

```yaml
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
```

But I am getting a syntax error in the `.ifteardownordestroy` part:

```
jobs:destroy-admin-server:rules:rule if invalid expression syntax
```
You are overriding `rules:` in your job that extends `.ifawsdeploy`. `rules:` are not combined in this case -- the definition of `rules:` in the job takes complete precedence.
Take for example the following configuration:

```yaml
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
```

In the above example, the `myjob` job only has rules `a` and `b` in effect. Rules `one` and `two` are completely ignored because they are overridden in the job configuration.
Instead of using `extends:`, you can use `!reference` to preserve and combine rules. You can also use YAML anchors if you want.

```yaml
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
```
> If no value provided to $ADMIN_SERVER_IP then create_admin_server should run

Lastly, pay special attention to your rules:

```yaml
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
```

As written, there is no rule that ever allows the job to run. You either need a case that evaluates true, or a default case (an item with no `if:` condition), for the job to run. To get the behavior you expect, you probably want your default case to be `on_success`:

```yaml
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
```
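Putting it together, a minimal sketch of two mutually exclusive jobs (job bodies and the platform checks from `.ifawsdeploy` elided for brevity):

```yaml
connect-admin-server:
  stage: build
  rules:
    # Run only when an existing admin server IP is provided.
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != ""'
  script:
    - echo "connect to existing admin server at $ADMIN_SERVER_IP"

create-admin-server:
  stage: build
  rules:
    # Skip when an admin server already exists...
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != ""'
      when: never
    # ...otherwise run by default.
    - when: on_success
  script:
    - echo "create a new admin server"
```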
You can change your rules to:

```yaml
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
```

or:

```yaml
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
```

I have a sample here: try-rules-stackoverflow-72545625 - GitLab, with pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.

Update declared variables in GitHub Actions workflow

How does one go about updating a variable that is declared in a GitHub Actions workflow?
Consider the following:

```yaml
name: Test Variable

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  DAY_OF_WEEK: Monday

jobs:
  job1:
    name: Job1
    runs-on: ubuntu-latest
    env:
      Greeting: Hello
    steps:
      - name: "Say Hello John it's Monday"
        run: |
          echo $Greeting=Holla
          echo "$Greeting $First_Name. Today is $DAY_OF_WEEK!"
        env:
          First_Name: John
      - name: "Eval"
        run: echo $Greeting $First_Name
```

Here I'm attempting to update `Greeting` and then eval it later, but GH is throwing:

```
Invalid workflow file. You have an error in your yaml syntax on line 21.
```

So, if I were to update `Greeting`, `First_Name`, and `DAY_OF_WEEK`, how would I go about doing that?
Update
I fixed the YAML syntax, but the variable is not updated. The output for Eval is:

```
Run echo $Greeting $First_Name
  echo $Greeting $First_Name
  shell: /usr/bin/bash -e {0}
  env:
    DAY_OF_WEEK: Monday
    Greeting: Hello
Hello
```
Assign a variable:

```yaml
run: echo "Greeting=HOLLA" >> $GITHUB_ENV
```

Use the variable:

```yaml
run: echo "$Greeting"
```

Docs:
https://docs.github.com/en/actions/using-workflows/workflow-commands-for-github-actions#setting-an-environment-variable
(Also make sure your YAML file's indentation is correct.)
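Note that values written to `$GITHUB_ENV` only take effect in subsequent steps, not in the step that writes them. A minimal sketch tying it together (assuming, per the answer above, that the `$GITHUB_ENV` value takes precedence over the job-level `env:` in later steps):

```yaml
jobs:
  job1:
    runs-on: ubuntu-latest
    env:
      Greeting: Hello
    steps:
      - name: Update Greeting
        # $Greeting is still "Hello" inside this step;
        # the new value applies from the next step onward.
        run: echo "Greeting=Holla" >> $GITHUB_ENV
      - name: Eval
        # Prints "Holla".
        run: echo "$Greeting"
```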

Bash: How to execute paths

I have a job in my gitlab-cicd.yml file:

```yaml
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${CI_PROJECT_DIR}/soap-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/rest-service/target/surefire-reports/
      - ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
      - ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
```

And I want to change it to this one:

```yaml
unit_test:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - *tests_variables_export
    - mvn ${MAVEN_CLI_OPTS} clean test
    - cat ${CI_PROJECT_DIR}/rest-service/target/site/jacoco/index.html
    - cat ${CI_PROJECT_DIR}/soap-service/target/site/jacoco/index.html
  artifacts:
    expose_as: 'code coverage'
    paths:
      - *resolve_paths
```

I tried to use this bash script:

```yaml
.resolve_paths: &resolve_paths |-
  if [ "${MODULE_FIRST}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_FIRST}/target/site/jacoco/index.html"
  fi
  if [ "${MODULE_SECOND}" != "UNKNOWN" ]; then
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/surefire-reports/"
    echo "- ${CI_PROJECT_DIR}/${MODULE_SECOND}/target/site/jacoco/index.html"
  fi
```

And right now I'm getting this error in the pipeline:

```
WARNING: if [ "rest-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/rest-service/target/site/jacoco/index.html"\nfi\nif [ "soap-service" != "UNKNOWN" ]; then\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/surefire-reports/"\n echo "- /builds/minv/common/testcommons/taf-api-support/soap-service/target/site/jacoco/index.html"\nfi: no matching files ERROR: No files to upload
```

Can I resolve the artifact paths with a bash script like this?
No, scripts cannot alter the current YAML, particularly not if you specify the script (which is just a string) in a place where it is interpreted as a path.
You could trigger a dynamically generated YAML:

```yaml
generate:
  stage: build
  script:
    - |
      exec > generated.yml
      echo ".resolve_paths: &resolve_paths"
      for module in "${MODULE_FIRST}" "${MODULE_SECOND}"; do
        [[ "$module" = UNKNOWN ]] && continue
        echo "- ${CI_PROJECT_DIR}/${module}/target/surefire-reports/"
        echo "- ${CI_PROJECT_DIR}/${module}/target/site/jacoco/index.html"
      done
      sed '1,/^\.\.\. *$/d' "${CI_CONFIG_PATH}"
  artifacts:
    paths:
      - generated.yml

run:
  stage: deploy
  trigger:
    include:
      - artifact: generated.yml
        job: generate

...
# Start of actual CI. When this runs, there will be an
# auto-generated job `.resolve_paths: &resolve_paths`.
# Put the rest of your CI (e.g. `unit_test:`) here.
```

But there are so many extensions in GitLab's YAML that you will likely find a much better solution, depending on what you plan to do with `.resolve_paths`. Maybe have a look at:

- `artifacts:exclude`
- additional jobs with `rules:` (see the sketch after this list)
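For instance, instead of computing paths at runtime, you could declare one job per module and let `rules:` skip the ones that don't apply. A minimal sketch, assuming `MODULE_FIRST`/`MODULE_SECOND` hold the module directory names and that variables are allowed in `artifacts:paths`:

```yaml
.unit_test_template:
  stage: test
  image: $MAVEN_IMAGE
  script:
    - mvn ${MAVEN_CLI_OPTS} clean test
  artifacts:
    expose_as: 'code coverage'
    paths:
      - ${MODULE}/target/surefire-reports/
      - ${MODULE}/target/site/jacoco/index.html

unit_test_first:
  extends: .unit_test_template
  variables:
    MODULE: $MODULE_FIRST
  rules:
    # Skip this job when the module is not configured.
    - if: '$MODULE_FIRST != "UNKNOWN"'

unit_test_second:
  extends: .unit_test_template
  variables:
    MODULE: $MODULE_SECOND
  rules:
    - if: '$MODULE_SECOND != "UNKNOWN"'
```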

How to override an environment variable in GitHub Actions?

Given I have the following:

```yaml
name: MyWorkFlow

on: push

env:
  FOO: bar1

jobs:
  myJob:
    runs-on: ubuntu-latest
    steps:
      - run: export FOO=bar2
      - run: echo $FOO
```

The output is 'bar1'. Is there any way I can override these environment variables?
run: echo "FOO=1234" >> $GITHUB_ENV
