GitHub Actions: Passing JSON data to another job - bash

I'm attempting to pass an array of dynamically fetched data from one GitHub Action job to the actual job doing the build. This array will be used as part of a matrix to build for multiple versions. However, I'm encountering an issue when the bash variable storing the array is evaluated.
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      versions: ${{ steps.matrix.outputs.value }}
    steps:
      - id: matrix
        run: |
          sudo apt-get install -y jq && \
          MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]') && \
          VERSIONS=($MAINNET $TESTNET) && \
          echo "${VERSIONS[@]}" && \
          VERSION_JSON=$(echo "${VERSIONS[@]}" | jq -s) && \
          echo $VERSION_JSON && \
          echo '::set-output name=value::$VERSION_JSON'
        shell: bash
      - id: debug
        run: |
          echo "Result: ${{ steps.matrix.outputs.value }}"
  changes:
    needs: setup
    runs-on: ubuntu-latest
    # Set job outputs to values from filter step
    outputs:
      core: ${{ steps.filter.outputs.core }}
      package: ${{ steps.filter.outputs.package }}
    strategy:
      matrix:
        TEST: [buy, cancel, create_auction_house, delegate, deposit, execute_sale, sell, update_auction_house, withdraw_from_fee, withdraw_from_treasury, withdraw]
        SOLANA_VERSION: ${{fromJson(needs.setup.outputs.versions)}}
    steps:
      - uses: actions/checkout@v2
      # For pull requests it's not necessary to checkout the code
      - uses: dorny/paths-filter@v2
        id: filter
        with:
          filters: |
            core:
              - 'core/**'
            package:
              - 'auction-house/**'
      - name: debug
        id: debug
        working-directory: ./auction-house/program
        run: echo ${{ needs.setup.outputs.versions }}
In the setup job above, the two versions are collected into a bash array (VERSIONS) and converted into a JSON array (VERSION_JSON) to be passed to the next job. The last echo in the matrix step prints [ "1.10.31", "1.11.1" ], but the debug step prints out this:
Run echo "Result: "$VERSION_JSON""
echo "Result: "$VERSION_JSON""
shell: /usr/bin/bash -e {0}
env:
CARGO_TERM_COLOR: always
RUST_TOOLCHAIN: stable
Result:
The changes job also results in an error:
Error when evaluating 'strategy' for job 'changes'.
.github/workflows/program-auction-house.yml (Line: 44, Col: 25): Unexpected type of value '$VERSION_JSON', expected type: Sequence.
It definitely seems like the $VERSION_JSON variable isn't actually being evaluated properly, but I can't figure out where the evaluation is going wrong.

For echo '::set-output name=value::$VERSION_JSON' you need to use double quotes, or bash will not expand $VERSION_JSON.
set-output is also not happy with multi-line data. For your case, you can use jq -s -c so the output will be one line.
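A minimal sketch of the corrected step with both fixes applied (double quotes so the shell expands the variable, and jq -s -c so the JSON array stays on one line), keeping the rest of the setup job as above:

- id: matrix
  shell: bash
  run: |
    MAINNET=$(curl https://api.mainnet-beta.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
    TESTNET=$(curl https://api.testnet.solana.com -X POST -H "Content-Type: application/json" -d '{"jsonrpc":"2.0","id":1, "method":"getVersion"}' | jq '.result["solana-core"]')
    # -s slurps the two values into a JSON array, -c prints it on a single line
    VERSION_JSON=$(echo "$MAINNET" "$TESTNET" | jq -s -c '.')
    echo "$VERSION_JSON"
    # double quotes so bash expands $VERSION_JSON before set-output sees it
    echo "::set-output name=value::$VERSION_JSON"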

Related

How to split a string that is separated by comma, into multiple strings using Bash

I am creating a GitHub workflow using workflow_dispatch for my input data:
workflow_dispatch:
  inputs:
    Browser:
      description: 'Select what browsers to use'
      required: true
      default: 'Chrome'
And I have a setup job, where I take the workflow_dispatch data and transform it into JSON so I can use it in my matrix:
jobs:
  setup:
    runs-on: ubuntu-latest
    outputs:
      matrix-browser: ${{ steps.set-matrix-browser.outputs.matrix-browser }}
    steps:
      - uses: actions/checkout@v2
      - id: set-matrix-browser
        run: |
          testBro="${{ github.event.inputs.Browser }}"
          echo "::set-output name=matrix-browser::[\"${testBro}\"]"
          echo "[\"${testBro}\"]"
So, my question is:
If I have two or more browsers in github.event.inputs.Browser = "Chrome, Safari, Edge", how can I split them so that each browser becomes a separate string?
I want my output to look like this:
["Chrome", "Safari", "Edge"]
but instead I get this:
["Chrome, Safari, Edge"]
Can you please suggest how I need to change this line of code?
echo "::set-output name=matrix-browser::[\"${testBro}\"]"
I've tried something like this:
echo "::set-output name=matrix-browser::[\"${testBro | tr "," "\n"}\"]"
Assuming testBro="Chrome,Safari,Edge"
echo [\"$testBro\"] |sed 's/,/\", \"/g' or for the full line echo "::set-output name=matrix-browser:: $(echo [\"$testBro\"] |sed 's/,/\", \"/g')"
gives the output
::set-output name=matrix-browser:: ["Chrome", "Safari", "Edge"]
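Dropped into the workflow, the whole step could look roughly like this (a sketch; like the line above, it assumes the input is comma-separated without spaces, e.g. "Chrome,Safari,Edge"):

- id: set-matrix-browser
  run: |
    testBro="${{ github.event.inputs.Browser }}"
    # wrap the whole value in brackets/quotes, then turn each comma into '", "'
    matrix=$(echo [\"$testBro\"] | sed 's/,/\", \"/g')
    echo "::set-output name=matrix-browser::${matrix}"
    echo "${matrix}"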
I figured it out:
echo "::set-output name=matrix-browser::[\"${testBro//', '/\",\"}\"]"
Thanks everybody

GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default a common condition is set, using the extends keyword for that: .ifawsdeploy.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, then connect_admin_server should run, and that part works.
If no value is provided for $ADMIN_SERVER_IP, then create_admin_server should run, but it is not running.
.ifawsdeploy:
rules:
- if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'
variables:
TEST_CREATE_ADMIN:
#value: aws
description: "Platform, currently aws only"
SUB_PLATFORM:
value: aws
description: "Platform, currently aws only"
REGION:
value: "us-west-2"
description: "region where to deploy company"
PACKAGEURL:
value: "http://somerpmurl.x86_64.rpm"
description: "company rpm file url"
ACCOUNT_NAME:
value: "testsubaccount"
description: "Account name of sub account to refer in the deployment, no need to match in AWS"
ROLE_ARN:
value: "arn:aws:iam::491483064167:role/uat"
description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
tfenv_version: "1.1.9"
DEV_PUB_KEY:
description: "Optional public key file to add access to admin server"
ADMIN_SERVER_IP:
description: "Existing Admin Server IP Address"
ADMIN_SERVER_SSH_KEY:
description: "Existing Admin Server SSH_KEY PEM content"
#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
- aws configure set region $REGION
- aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
- aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
- aws sts get-caller-identity
- aws configure set source_profile default --profile $ACCOUNT_NAME
- aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
- aws sts get-caller-identity --profile $ACCOUNT_NAME
- aws configure set region $REGION --profile $ACCOUNT_NAME
.copy_remote_log: &copy_remote_log
- if [ -e outfile ]; then rm outfile; fi
- copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
- new_copy_command=${copy_command/"%s"/"outfile"}
- new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
- echo $new_copy_command
- new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
- echo $new_copy_command
- sleep 10
- eval $new_copy_command
.check_remote_log: &check_remote_log
- sleep 10
- grep Error outfile || true
- sleep 10
- returnCode=$(grep -c Error outfile) || true
- echo "Return code received $returnCode"
- if [ $returnCode -ge 1 ]; then exit 1; fi
- echo "No errors"
.prepare_ssh_key: &prepare_ssh_key
- echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
- cat ssh_key.pem
- sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
- sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
- perl -p -i -e 's/\s/\n/g' ssh_key.pem
- sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
- sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
- cat ssh_key.pem
- chmod 400 ssh_key.pem
connect-admin-server:
stage: build
allow_failure: true
image:
name: amazon/aws-cli:latest
entrypoint: [ "" ]
rules:
- if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
extends:
- .ifawsdeploy
script:
- TF_IN_AUTOMATION=true
- yum update -y
- yum install git unzip gettext jq -y
- echo "Your admin server key and info are added as artifacts"
# Copy the important terraform outputs to files for artifacts to pass into other jobs
- *prepare_ssh_key
- echo "ssh -i ssh_key.pem ec2-user#${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
- echo "scp -q -i ssh_key.pem %s ec2-user#${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
- test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
- echo $test_pre_command
- test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
- echo $test_command
- echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
- *copy_remote_log
- echo "Now checking log file for returnCode"
- *check_remote_log
artifacts:
untracked: true
when: always
paths:
- "$CI_PROJECT_DIR/ssh_key.pem"
- "$CI_PROJECT_DIR/ssh_command.txt"
- "$CI_PROJECT_DIR/scp_command.txt"
after_script:
- cat $CI_PROJECT_DIR/ssh_key.pem
- cat $CI_PROJECT_DIR/ssh_command.txt
- cat $CI_PROJECT_DIR/scp_command.txt
create-admin-server:
stage: build
allow_failure: false
image:
name: amazon/aws-cli:latest
entrypoint: [ "" ]
rules:
- if: '$ADMIN_SERVER_IP != ""'
when: never
extends:
- .ifawsdeploy
script:
- echo "admin server $ADMIN_SERVER_IP"
- TF_IN_AUTOMATION=true
- yum update -y
- yum install git unzip gettext jq -y
- *configure_aws_cli
- aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
- git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)#gitlab.com/company-oss/project-n-setup.git"
# Install tfenv
- git clone https://github.com/tfutils/tfenv.git ~/.tfenv
- ln -s ~/.tfenv /root/.tfenv
- ln -s ~/.tfenv/bin/* /usr/local/bin
# Install terraform 1.1.9 through tfenv
- tfenv install $tfenv_version
- tfenv use $tfenv_version
# Copy the tfvars temp file to the terraform setup directory
- cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
- cd project-n-setup/$SUB_PLATFORM/
- envsubst < admin_server.temp_tfvars > admin_server.tfvars
- rm -rf .terraform || exit 0
- cat ~/.aws/config
- terraform init -input=false
- terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
- echo "Your admin server key and info are added as artifacts"
# Copy the important terraform outputs to files for artifacts to pass into other jobs
- terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
- terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
- terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
- cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
- cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
artifacts:
untracked: true
paths:
- "$CI_PROJECT_DIR/ssh_key.pem"
- "$CI_PROJECT_DIR/ssh_command.txt"
- "$CI_PROJECT_DIR/scp_command.txt"
- "$CI_PROJECT_DIR/terraform.tfstate"
- "$CI_PROJECT_DIR/admin_server.tfvars"
How can I fix that?
I tried the step below, following suggestions from the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And included the above in extends of a job.
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
If no value provided to $ADMIN_SERVER_IP then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there are no rules that allow the job to run ever. You either need a case that will evaluate true for the job to run, or to have a default case (an item with no if: condition) in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
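Putting both points together, a sketch of what the create-admin-server rules could look like (assuming you still want to keep the shared .ifawsdeploy conditions via !reference):

create-admin-server:
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
    - !reference [.ifawsdeploy, rules]

Here the never rule short-circuits when an IP is provided; otherwise evaluation falls through to the referenced rules.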
You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, and the pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.

Github Actions variable showing output as null

This is a step in my workflow:
- name: Fetching INT creds
  if: ${{ github.event.inputs.resourcesCreated == 'true' }}
  id: int-creds
  run: |
    az login --service-principal -u X -p Y --tenant Z
    INTEGRATION=$(az identity show --name int-${{ steps.service-id-valid.outputs.service-id-valid }} --resource-group XxX --query 'clientId' -otsv)
    echo "::set-output name=int-creds-valid::${INTEGRATION}"
    echo "${{ steps.int-creds.outputs.int-creds-valid }}"
I've added the last echo command in order to see the output (for checking purposes).
However, I always receive it as null.
When I echo $INTEGRATION, I do see the requested output.
Any idea why the set-output command seems not to work?
I spent a few hours looking for a workaround but couldn't find one. Thanks.

Curl command in Github action fails on "no such file or directory"

I am running my Github Action on a self-hosted machine. The action should send a curl command:
name: Build and Publish
on:
  push:
    branches:
      - master
jobs:
  trigger:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: curl command
        working-directory: /home/ubuntu/actions-runner/_work/${{ github.repository }}
        run: |
          DATA= '{"data": { "repository" : "${{ github.repository }}", "package_version" : "0.1.0-${{ github.sha }}", "branch" : "${{ github.sha }}" }}'
          printf -v content "$DATA"
          curl --request 'POST' --data "$content" 'http://<my-url>/actions'
The output seems strange, as it is looking for a certain path with an .sh script and fails with "no such file or directory":
Run DATA= '{"data": { "repository" : "iz/<my-repo>", "package_version" : "0.1.0-41b1005d15069bbb515b52416721b84", "branch" : "41b1005d15069bbb515b52416721b84" }}'
DATA= '{"data": { "repository" : "iz/<my-repo>", "package_version" : "0.1.0-41b1005d15069bbb515b52416721b84", "branch" : "41b1005d15069bbb515b52416721b84" }}'
printf -v content "$DATA"
curl --request 'POST' --data "$content" 'http://<my-url>/actions'
shell: /usr/bin/bash -e {0}
/home/ubuntu/actions-runner/_work/_temp/5fee3c-a409-4db-48f-cb1d16d15d.sh: line 1: {"data": { "repository" : "iz/<my-repo>", "package_version" : "0.1.0-41b1005d15069bbb515b52416721b84", "branch" : "41b1005d15069bbb515b52416721b84" }}:
No such file or directory
Error: Process completed with exit code 127.
Try removing the space after DATA=. With the space, bash thinks you are temporarily setting DATA to empty while executing the command '{"data": { "repository" :... instead of setting DATA to that value permanently (i.e. non-temporarily).
Set X to Y (until overwritten):
X=Y
Set X to Y temporarily during execution of CMD:
X=Y CMD
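A sketch of the corrected step, assuming the payload and endpoint stay the same (the only change is removing the space after DATA= so it becomes a plain variable assignment):

- name: curl command
  working-directory: /home/ubuntu/actions-runner/_work/${{ github.repository }}
  run: |
    # no space after '=': this assigns DATA instead of running the JSON as a command
    DATA='{"data": { "repository" : "${{ github.repository }}", "package_version" : "0.1.0-${{ github.sha }}", "branch" : "${{ github.sha }}" }}'
    printf -v content "$DATA"
    curl --request 'POST' --data "$content" 'http://<my-url>/actions'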

MS Teams Not Working As Expected With a GitHub Action

I am facing an issue while sending notifications to Microsoft Teams using the GitHub Actions workflow YAML below. As you can see, in the first job I am using the correct "ls -lrt" command, and when job1 succeeds I get a success notification in Teams. To get a failure notification, I purposely removed the hyphen (-) from "ls lrt" so that the second job fails and I can get the failure notification. The overall idea is that whether any job fails or succeeds, I must get a notification. But this is not happening for me. Any guidance and help would be appreciated.
name: msteams
on: push
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls -lrt
      - name: "testing_ms"
        if: always()
        uses: ./.github/actions
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls lrt
      - name: "testing ms"
        if: always()
        uses: ./.github/actions
As you can see in the YAML above, I am using uses: ./.github/actions, so I kept the code below in another YAML file in the .github/actions folder, parallel to my main GitHub Actions workflow YAML above.
name: 'MS Notification'
description: 'Notification to MS Teams'
runs:
using: "composite"
steps:
- id: notify
shell: bash
run: |
echo "This is for testing"
# step logic
# Specific to this workflow variables set
PIPELINE_PUBLISHING_NAME="GitHub Actions Workflows Library"
BRANCH_NAME="${GITHUB_REF#refs/*/}"
PIPELINE_TEAMS_WEBHOOK=${{ secrets.MSTEAMS_WEBHOOK }}
# Common logic for notifications
TIME_STAMP=$(date '+%A %d %B %Y, %r - %Z')
GITHUBREPO_OWNER=$(echo ${GITHUB_REPOSITORY} | cut -d / -f 1)
GITHUBREPO_NAME=${GITHUB_REPOSITORY}
GITHUBREPO_URL=${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
SHA=${GITHUB_SHA}
SHORT_SHA=${SHA::7}
RUN_ID=${GITHUB_RUN_ID}
RUN_NUM=${GITHUB_RUN_NUMBER}
AUTHOR_AVATAR_URL="${{github.event.sender.avatar_url}}"
AUTHOR_HTML_URL="${{github.event.sender.url}}"
AUTHOR_LOGIN="${{github.event.sender.login}}"
COMMIT_HTML_URL="${GITHUBREPO_URL}/commit/${SHA}"
COMMIT_AUTHOR_NAME="${{github.event.sender.login}}"
case ${{ job.status }} in
failure )
NOTIFICATION_COLOR="dc3545"
NOTIFICATION_ICON="&#x274C"
NOTIFICATION_STATUS="FAILURE"
;;
success )
NOTIFICATION_COLOR="28a745"
NOTIFICATION_ICON="&#x2705"
NOTIFICATION_STATUS="SUCCESS"
;;
cancelled )
NOTIFICATION_COLOR="17a2b8"
NOTIFICATION_ICON="&#x2716"
NOTIFICATION_STATUS="CANCELLED"
;;
*)
NOTIFICATION_COLOR="778899"
NOTIFICATION_ICON=&#x2754""
NOTIFICATION_STATUS="UNKOWN"
;;
esac
# set pipeline version information if available
if [[ '${{ env.CICD_PIPELINE_VERSION}}' != '' ]];then
PIPELINE_VERSION="(v. ${{ env.CICD_PIPELINE_VERSION}})"
else
PIPELINE_VERSION=""
fi
NOTIFICATION_SUMARY="${NOTIFICATION_ICON} ${NOTIFICATION_STATUS} - ${PIPELINE_PUBLISHING_NAME} [ ${BRANCH_NAME} branch ] >> ${{ github.workflow }} ${PIPELINE_VERSION} "
TEAMS_WEBHOOK_URL="${PIPELINE_TEAMS_WEBHOOK}"
# addtional sections can be added to specify additional, specific to its workflow, information
message-card_json_payload() {
cat <<EOF
{
"#type": "MessageCard",
"#context": "https://schema.org/extensions",
"summary": "${NOTIFICATION_SUMARY}",
"themeColor": "${NOTIFICATION_COLOR}",
"title": "${NOTIFICATION_SUMARY}",
"sections": [
{
"activityTitle": "**CI #${RUN_NUM} (commit [${SHORT_SHA}](COMMIT_HTML_URL))** on [${GITHUBREPO_NAME}](${GITHUBREPO_URL})",
"activitySubtitle": "by ${COMMIT_AUTHOR_NAME} [${AUTHOR_LOGIN}](${AUTHOR_HTML_URL}) on ${TIME_STAMP}",
"activityImage": "${AUTHOR_AVATAR_URL}",
"markdown": true
}
],
"potentialAction": [
{
"#type": "OpenUri",
"name": "View Workflow Run",
"targets": [{
"os": "default",
"uri": "${GITHUBREPO_URL}/actions/runs/${RUN_ID}"
}]
},
{
"#type": "OpenUri",
"name": "View Commit Changes",
"targets": [{
"os": "default",
"uri": "${COMMIT_HTML_URL}"
}]
}
]
}
EOF
}
echo "NOTIFICATION_SUMARY ${NOTIFICATION_SUMARY}"
echo "------------------------------------------------"
echo "MessageCard payload"
echo "$(message-card_json_payload)"
echo "------------------------------------------------"
HTTP_RESPONSE=$(curl -s -H "Content-Type: application/json" \
--write-out "HTTPSTATUS:%{http_code}" \
--url "${TEAMS_WEBHOOK_URL}" \
-d "$(message-card_json_payload)"
)
echo "------------------------------------------------"
echo "HTTP_RESPONSE $HTTP_RESPONSE"
echo "------------------------------------------------"
# extract the body
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
# extract the status
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
if [ ! $HTTP_STATUS -eq 200 ]; then
echo "::error::Error sending MS Teams message card request [HTTP status: $HTTP_STATUS]"
# print the body
echo "$HTTP_BODY"
exit 1
fi
I don't know the entire answer for you, but right off I see the composite action trying to read secrets, which composite actions don't support. Try setting input parameters on the composite action to pass in what you need.
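A sketch of that approach: declare an input on the composite action and pass the secret in from the calling workflow. The input name teams-webhook is illustrative, not from the original action.

# .github/actions/action.yml
name: 'MS Notification'
description: 'Notification to MS Teams'
inputs:
  teams-webhook:
    description: 'MS Teams incoming webhook URL'
    required: true
runs:
  using: "composite"
  steps:
    - id: notify
      shell: bash
      run: |
        # read the webhook from the action input instead of the secrets context
        PIPELINE_TEAMS_WEBHOOK="${{ inputs.teams-webhook }}"
        # ...rest of the notification logic unchanged

# in the calling workflow
- name: "testing_ms"
  if: always()
  uses: ./.github/actions
  with:
    teams-webhook: ${{ secrets.MSTEAMS_WEBHOOK }}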
