How do nested anchors work in YAML?

I have the following YAML file. After following some other anchor tutorials, this looks like it should work, but when I pass it through some YAML validators, it doesn't:
echo_baz: &echo_baz
  - run:
      name: echo baz
      command: echo baz

jobs:
  myjobs:
    steps:
      - mystep
      - run:
          name: echo foo
          command: echo foo
      <<: *echo_baz
      - run:
          name: echo bar
          command: echo bar
using the yamllint tool:
13:7 error syntax error: expected <block end>, but found '?' (syntax)
using http://www.yamllint.com/ gives me
(): did not find expected '-' indicator while parsing a block collection at line 9 column 7
I keep staring and I don't find anything wrong with it.

Looking at the merge spec, you need to merge into a map, not a sequence. So keep the sequence indicator ("-") in the content part and define the anchor in terms of its keys only.
echo_baz: &echo_baz
  run:
    name: echo baz
    command: echo baz
jobs:
  myjobs:
    steps:
      - mystep
      - run:
          name: echo foo
          command: echo foo
      - <<: *echo_baz
      - run:
          name: echo bar
          command: echo bar
Produces:
{
  "echo_baz": {
    "run": {
      "name": "echo baz",
      "command": "echo baz"
    }
  },
  "jobs": {
    "myjobs": {
      "steps": [
        "mystep",
        {
          "run": {
            "name": "echo foo",
            "command": "echo foo"
          }
        },
        {
          "run": {
            "name": "echo baz",
            "command": "echo baz"
          }
        },
        {
          "run": {
            "name": "echo bar",
            "command": "echo bar"
          }
        }
      ]
    }
  }
}

Related

MS Teams notification not working as expected with GitHub Actions

I am facing an issue while sending notifications to Microsoft Teams using the GitHub Actions workflow YAML below. As you can see, in the first job I use the correct "ls -lrt" command, and when job1 succeeded I got a success notification in Teams. To get a failure notification, I purposely removed the hyphen (-) from the "ls lrt" command so that the second job would fail. The overall idea is that whether any job fails or succeeds, I must get a notification, but this is not happening for me. Any guidance and help would be appreciated.
name: msteams
on: push
jobs:
  job1:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls -lrt
      - name: "testing_ms"
        if: always()
        uses: ./.github/actions
  job2:
    runs-on: ubuntu-latest
    needs: job1
    steps:
      - uses: actions/checkout@v2
      - name: test run
        run: ls lrt
      - name: "testing ms"
        if: always()
        uses: ./.github/actions
As you can see in the YAML above, I am using uses: ./.github/actions, so I put the code below in another YAML file inside the .github/actions folder, alongside my main GitHub Actions workflow YAML.
name: 'MS Notification'
description: 'Notification to MS Teams'
runs:
  using: "composite"
  steps:
    - id: notify
      shell: bash
      run: |
        echo "This is for testing"
        # step logic
        # Variables specific to this workflow
        PIPELINE_PUBLISHING_NAME="GitHub Actions Workflows Library"
        BRANCH_NAME="${GITHUB_REF#refs/*/}"
        PIPELINE_TEAMS_WEBHOOK=${{ secrets.MSTEAMS_WEBHOOK }}
        # Common logic for notifications
        TIME_STAMP=$(date '+%A %d %B %Y, %r - %Z')
        GITHUBREPO_OWNER=$(echo ${GITHUB_REPOSITORY} | cut -d / -f 1)
        GITHUBREPO_NAME=${GITHUB_REPOSITORY}
        GITHUBREPO_URL=${GITHUB_SERVER_URL}/${GITHUB_REPOSITORY}
        SHA=${GITHUB_SHA}
        SHORT_SHA=${SHA::7}
        RUN_ID=${GITHUB_RUN_ID}
        RUN_NUM=${GITHUB_RUN_NUMBER}
        AUTHOR_AVATAR_URL="${{ github.event.sender.avatar_url }}"
        AUTHOR_HTML_URL="${{ github.event.sender.url }}"
        AUTHOR_LOGIN="${{ github.event.sender.login }}"
        COMMIT_HTML_URL="${GITHUBREPO_URL}/commit/${SHA}"
        COMMIT_AUTHOR_NAME="${{ github.event.sender.login }}"
        case ${{ job.status }} in
          failure )
            NOTIFICATION_COLOR="dc3545"
            NOTIFICATION_ICON="&#x274C"
            NOTIFICATION_STATUS="FAILURE"
            ;;
          success )
            NOTIFICATION_COLOR="28a745"
            NOTIFICATION_ICON="&#x2705"
            NOTIFICATION_STATUS="SUCCESS"
            ;;
          cancelled )
            NOTIFICATION_COLOR="17a2b8"
            NOTIFICATION_ICON="&#x2716"
            NOTIFICATION_STATUS="CANCELLED"
            ;;
          * )
            NOTIFICATION_COLOR="778899"
            NOTIFICATION_ICON="&#x2754"
            NOTIFICATION_STATUS="UNKNOWN"
            ;;
        esac
        # set pipeline version information if available
        if [[ '${{ env.CICD_PIPELINE_VERSION }}' != '' ]]; then
          PIPELINE_VERSION="(v. ${{ env.CICD_PIPELINE_VERSION }})"
        else
          PIPELINE_VERSION=""
        fi
        NOTIFICATION_SUMARY="${NOTIFICATION_ICON} ${NOTIFICATION_STATUS} - ${PIPELINE_PUBLISHING_NAME} [ ${BRANCH_NAME} branch ] >> ${{ github.workflow }} ${PIPELINE_VERSION} "
        TEAMS_WEBHOOK_URL="${PIPELINE_TEAMS_WEBHOOK}"
        # additional sections can be added for information specific to a workflow
        message-card_json_payload() {
          cat <<EOF
        {
          "@type": "MessageCard",
          "@context": "https://schema.org/extensions",
          "summary": "${NOTIFICATION_SUMARY}",
          "themeColor": "${NOTIFICATION_COLOR}",
          "title": "${NOTIFICATION_SUMARY}",
          "sections": [
            {
              "activityTitle": "**CI #${RUN_NUM} (commit [${SHORT_SHA}](COMMIT_HTML_URL))** on [${GITHUBREPO_NAME}](${GITHUBREPO_URL})",
              "activitySubtitle": "by ${COMMIT_AUTHOR_NAME} [${AUTHOR_LOGIN}](${AUTHOR_HTML_URL}) on ${TIME_STAMP}",
              "activityImage": "${AUTHOR_AVATAR_URL}",
              "markdown": true
            }
          ],
          "potentialAction": [
            {
              "@type": "OpenUri",
              "name": "View Workflow Run",
              "targets": [{
                "os": "default",
                "uri": "${GITHUBREPO_URL}/actions/runs/${RUN_ID}"
              }]
            },
            {
              "@type": "OpenUri",
              "name": "View Commit Changes",
              "targets": [{
                "os": "default",
                "uri": "${COMMIT_HTML_URL}"
              }]
            }
          ]
        }
        EOF
        }
        echo "NOTIFICATION_SUMARY ${NOTIFICATION_SUMARY}"
        echo "------------------------------------------------"
        echo "MessageCard payload"
        echo "$(message-card_json_payload)"
        echo "------------------------------------------------"
        HTTP_RESPONSE=$(curl -s -H "Content-Type: application/json" \
          --write-out "HTTPSTATUS:%{http_code}" \
          --url "${TEAMS_WEBHOOK_URL}" \
          -d "$(message-card_json_payload)"
        )
        echo "------------------------------------------------"
        echo "HTTP_RESPONSE $HTTP_RESPONSE"
        echo "------------------------------------------------"
        # extract the body
        HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
        # extract the status
        HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
        if [ ! $HTTP_STATUS -eq 200 ]; then
          echo "::error::Error sending MS Teams message card request [HTTP status: $HTTP_STATUS]"
          # print the body
          echo "$HTTP_BODY"
          exit 1
        fi
I don't know the entire answer for you, but right off I see the composite action trying to read secrets, which composite actions don't support. Try defining input parameters on the composite action and passing in what you need.
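A sketch of that workaround, assuming nothing beyond the standard actions syntax; the input name webhook_url is illustrative. The workflow passes the secret in explicitly:

```yaml
# main workflow: pass the secret to the composite action as an input
- name: "testing_ms"
  if: always()
  uses: ./.github/actions
  with:
    webhook_url: ${{ secrets.MSTEAMS_WEBHOOK }}
```

and the composite action declares and reads the input instead of touching the secrets context:

```yaml
# .github/actions/action.yml
name: 'MS Notification'
description: 'Notification to MS Teams'
inputs:
  webhook_url:
    description: 'Teams incoming webhook URL'
    required: true
runs:
  using: "composite"
  steps:
    - id: notify
      shell: bash
      run: |
        PIPELINE_TEAMS_WEBHOOK="${{ inputs.webhook_url }}"
        # ... rest of the notification script unchanged
```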

Using jq in GitLab CI YAML

I have a JSON file, sample.json. Below is a snippet from sample.json:
{
  "AddOnModules": {
    "Description": "add on modules",
    "Type": "Array",
    "AllowedValues": [
      "a",
      "b",
      "c"
    ],
    "value": []
  }
}
I'm trying to provide a value to AddOnModules through a GitLab CI variable (parameter-value) at runtime while running the pipeline. The following is a snippet of the pipeline:
stages:
  - deploy
# Job to deploy for development
dev-deploy:
  variables:
  before_script:
    - apk add jq
  image: python:3.7.4-alpine3.9
  script:
    - tmp=$(mktemp)
    - jq -r --arg add_on_modules "$add_on_modules" '.AddOnModules.value |= .+ [$add_on_modules] ' sample.json > "$tmp" && mv "$tmp" sample.json
    - cat sample.json
  stage: deploy
  tags:
    - docker
    - linux
  only:
    variables:
      - $stage =~ /^deploy$/ && $deployment_mode =~ /^dev$/
I'm giving the value of the variable add_on_modules as "a","b" through GitLab CI while running the pipeline. cat sample.json then shows:
{
  "AddOnModules": {
    "Description": "add on modules",
    "Type": "Array",
    "AllowedValues": [
      "a",
      "b",
      "c"
    ],
    "value": [ "\"a\",\"b\""]
  }
}
Extra double quotes are being prepended and appended, while the existing ones are escaped.
I want output something like -
{
  "AddOnModules": {
    "Description": "add on modules",
    "Type": "Array",
    "AllowedValues": [
      "a",
      "b",
      "c"
    ],
    "value": ["a","b"]
  }
}
It looks like I'm missing something in the jq command:
- jq -r --arg add_on_modules "$add_on_modules" '.AddOnModules.value |= .+ [$add_on_modules] ' sample.json > "$tmp" && mv "$tmp" sample.json
I tried the -r/--raw-output flag with jq, but with no success. Any suggestions on how to solve this?
This is how I'm running the pipeline -
[Screenshot: pipeline run]
If $add_on_modules is ["a","b"]
If you can set add_on_modules to the textual representation of a JSON array, then you would use --argjson, like so:
add_on_modules='["a","b"]'
jq -r --argjson add_on_modules "$add_on_modules" '
.AddOnModules.value += $add_on_modules
' sample.json
If $add_on_modules is the string "a","b"
add_on_modules='"a","b"'
jq -r --arg add_on_modules "$add_on_modules" '
.AddOnModules.value += ($add_on_modules|split(",")|map(fromjson))
' sample.json
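Both variants can be checked locally against a stripped-down file (assuming jq is installed; the file content here is reduced to the relevant key):

```shell
# minimal stand-in for sample.json
cat > sample.json <<'EOF'
{"AddOnModules": {"value": []}}
EOF

# variant 1: the variable already holds a JSON array, so parse it with --argjson
add_on_modules='["a","b"]'
jq -c --argjson add_on_modules "$add_on_modules" \
   '.AddOnModules.value += $add_on_modules' sample.json
# -> {"AddOnModules":{"value":["a","b"]}}

# variant 2: the variable holds the string "a","b"; split on commas and
# parse each piece as JSON to strip the embedded quotes
add_on_modules='"a","b"'
jq -c --arg add_on_modules "$add_on_modules" \
   '.AddOnModules.value += ($add_on_modules | split(",") | map(fromjson))' sample.json
# -> {"AddOnModules":{"value":["a","b"]}}
```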

AWS CodeBuild buildspec bash syntax error: bad substitution with if statement

Background:
I'm using an AWS CodeBuild buildspec.yml to iterate through directories from a GitHub repo. Before looping through the directory path $TF_ROOT_DIR, I'm using a bash if statement to check if the GitHub branch name $BRANCH_NAME is within an env variable $LIVE_BRANCHES. As you can see in the error screenshot below, the bash if statement outputs the error: syntax error: bad substitution. When I reproduce the if statement within a local bash script, the if statement works as it's supposed to.
Here are the env variables defined in the CodeBuild project:
Here's a relevant snippet from the buildspec.yml:
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
Here's the build log that shows the syntax error:
Here's the AWS CodeBuild project JSON to reproduce the CodeBuild project:
{
  "projects": [
    {
      "name": "terraform_validate_plan",
      "arn": "arn:aws:codebuild:us-west-2:xxxxx:project/terraform_validate_plan",
      "description": "Perform terraform plan and terraform validator",
      "source": {
        "type": "GITHUB",
        "location": "https://github.com/marshall7m/sparkify_end_to_end.git",
        "gitCloneDepth": 1,
        "gitSubmodulesConfig": {
          "fetchSubmodules": false
        },
        "buildspec": "deployment/CI/dev/cfg/buildspec_terraform_validate_plan.yml",
        "reportBuildStatus": false,
        "insecureSsl": false
      },
      "secondarySources": [],
      "secondarySourceVersions": [],
      "artifacts": {
        "type": "NO_ARTIFACTS",
        "overrideArtifactName": false
      },
      "cache": {
        "type": "NO_CACHE"
      },
      "environment": {
        "type": "LINUX_CONTAINER",
        "image": "hashicorp/terraform:0.12.28",
        "computeType": "BUILD_GENERAL1_SMALL",
        "environmentVariables": [
          {
            "name": "TF_ROOT_DIR",
            "value": "deployment",
            "type": "PLAINTEXT"
          },
          {
            "name": "LIVE_BRANCHES",
            "value": "(dev, prod)",
            "type": "PLAINTEXT"
          }
Here's the associated buildspec file content: (buildspec_terraform_validate_plan.yml)
version: 0.2
env:
  shell: bash
  parameter-store:
    AWS_ACCESS_KEY_ID_PARAM: TF_AWS_ACCESS_KEY_ID
    AWS_SECRET_ACCESS_KEY_PARAM: TF_AWS_SECRET_ACCESS_KEY_ID
phases:
  install:
    commands:
      # install/incorporate terraform validator?
  pre_build:
    commands:
      # CodeBuild environment variables
      # BRANCH_NAME -- GitHub branch that triggered the CodeBuild project
      # TF_ROOT_DIR -- Directory within branch ($BRANCH_NAME) that will be iterated through for terraform planning and testing
      # LIVE_BRANCHES -- Branches that represent a live cloud environment
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY_ID_PARAM
      - export AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY_PARAM
      - bash -version || echo "${BASH_VERSION}" || bash --version
      - |
        if [[ -z "${BRANCH_NAME}" ]]; then
          # extract branch from github webhook
          BRANCH_NAME=$(echo $CODEBUILD_WEBHOOK_HEAD_REF | cut -d'/' -f 3)
        fi
      - "echo Triggered Branch: $BRANCH_NAME"
      - |
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - "echo Terraform root directory: $TF_ROOT_DIR"
  build:
    commands:
      - |
        for dir in $TF_ROOT_DIR; do
          # get list of non-hidden directories within $dir/
          service_dir_list=$(find "${dir}" -type d | grep -v '/\.')
          for sub_dir in $service_dir_list; do
            # if $sub_dir contains .tf or .tfvars files
            if (ls ${sub_dir}/*.tf) > /dev/null 2>&1 || (ls ${sub_dir}/*.tfvars) > /dev/null 2>&1; then
              cd $sub_dir
              echo ""
              echo "*************** terraform init ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform init
              echo ""
              echo "*************** terraform plan ******************"
              echo "******* At directory: ${sub_dir} ********"
              echo "*************************************************"
              terraform plan
              cd - > /dev/null
            fi
          done
        done
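As an aside, the find ... | grep -v '/\.' pipeline in the build phase is what filters out hidden directories; a standalone check with throwaway paths:

```shell
# create a visible and a hidden directory under a temp root
tmp=$(mktemp -d)
mkdir -p "$tmp/app" "$tmp/.hidden"

# list only non-hidden directories: grep -v '/\.' drops any path
# containing a dot immediately after a slash
find "$tmp" -type d | grep -v '/\.'
```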
Given this is just a side project, all files that could be relevant to this problem are within a public repo here.
UPDATES
Tried adding a #!/bin/bash shebang line, but it resulted in the CodeBuild error:
Phase context status code: COMMAND_EXECUTION_ERROR Message: Error while executing command: #!/bin/bash
version: 0.2
env:
  shell: bash
phases:
  build:
    commands:
      - |
        #!/bin/bash
        if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then
          # Iterate only through BRANCH_NAME directory
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/${BRANCH_NAME}/
        else
          # Iterate through both dev and prod directories
          TF_ROOT_DIR=${TF_ROOT_DIR}/*/
        fi
      - echo $TF_ROOT_DIR
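For what it's worth, "bad substitution" is the message a strictly POSIX sh (such as dash) prints for the ${LIVE_BRANCHES[*]} array expansion, while bash accepts it even when the variable is an ordinary string, which suggests the buildspec command was not actually running under bash despite shell: bash. A quick local comparison (variable values copied from the CodeBuild env vars):

```shell
# bash: ${VAR[*]} on a plain string expands to the string itself, so the match works
bash -c 'LIVE_BRANCHES="(dev, prod)"; BRANCH_NAME=dev
if [[ " ${LIVE_BRANCHES[*]} " == *"$BRANCH_NAME"* ]]; then echo matched; fi'

# a strictly POSIX sh (e.g. dash) rejects the same expansion with
# "Bad substitution"; on systems where /bin/sh is bash it still succeeds
sh -c 'LIVE_BRANCHES="(dev, prod)"; echo "${LIVE_BRANCHES[*]}"' || echo "sh rejected it"
```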
Solution
As mentioned by @Marcin, I used an AWS managed image in CodeBuild (aws/codebuild/standard:4.0) and downloaded Terraform in the install phase.
phases:
  install:
    commands:
      - wget https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_amd64.zip -q
      - unzip terraform_${TERRAFORM_VERSION}_linux_amd64.zip && mv terraform /usr/local/bin/
I tried to reproduce your issue, but it all works fine for me.
The only thing I've noticed is that you are using $BRANCH_NAME, but it's not defined anywhere. Even with $BRANCH_NAME missing, though, the buildspec.yml you've posted runs fine.
Update: it also runs using the hashicorp/terraform:0.12.28 image.

How to execute a multiline command in Ansible 2.9.10 on Fedora 32

I want to execute a command on a remote machine using Ansible 2.9.10. First I tried this:
ansible kubernetes-root -m command -a "cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}"
Obviously that does not work, so I read this guide and tried this:
- hosts: kubernetes-root
  remote_user: root
  tasks:
    - name: add docker config
      shell: >
        cat > /etc/docker/daemon.json <<EOF
      {
      "exec-opts": ["native.cgroupdriver=systemd"],
      "log-driver": "json-file",
      "log-opts": {
        "max-size": "100m"
      },
      "storage-driver": "overlay2",
      "registry-mirrors":[
        "https://kfwkfulq.mirror.aliyuncs.com",
        "https://2lqq34jg.mirror.aliyuncs.com",
        "https://pee6w651.mirror.aliyuncs.com",
        "http://hub-mirror.c.163.com",
        "https://docker.mirrors.ustc.edu.cn",
        "https://registry.docker-cn.com"
      ]
      }
and execute it like this:
[dolphin#MiWiFi-R4CM-srv playboook]$ ansible-playbook add-docker-config.yaml
[WARNING]: Invalid characters were found in group names but not replaced, use
-vvvv to see details
ERROR! We were unable to read either as JSON nor YAML, these are the errors we got from each:
JSON: Expecting value: line 1 column 1 (char 0)
Syntax Error while loading YAML.
could not find expected ':'
The error appears to be in '/home/dolphin/source-share/source/dolphin/dolphin-scripts/ansible/playboook/add-docker-config.yaml': line 7, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
cat > /etc/docker/daemon.json <<EOF
{
^ here
Is there any way to achieve this? How can I fix it?

Your playbook should work fine; you just have to add some indentation after the shell clause line and change the > to |.
Here is the updated playbook:
---
- name: play name
  hosts: dell420
  gather_facts: false
  vars:
  tasks:
    - name: run shell task
      shell: |
        cat > /tmp/temp.file << EOF
        {
          "exec-opts": ["native.cgroupdriver=systemd"],
          "log-driver": "json-file",
          "log-opts": {
            "max-size": "100m"
          },
          "storage-driver": "overlay2",
          "registry-mirrors":[
            "https://kfwkfulq.mirror.aliyuncs.com",
            "https://2lqq34jg.mirror.aliyuncs.com",
            "https://pee6w651.mirror.aliyuncs.com",
            "http://hub-mirror.c.163.com",
            "https://docker.mirrors.ustc.edu.cn",
            "https://registry.docker-cn.com"
          ]
        }
        EOF
Not sure what is wrong with the ad-hoc command; I tried a few things but didn't manage to make it work.
Hope this helps.
EDIT:
As pointed out by Zeitounator, the ad-hoc command will work if you use the shell module instead of command. Example:
ansible -i hosts dell420 -m shell -a 'cat > /tmp/temp.file <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2",
"registry-mirrors":[
"https://kfwkfulq.mirror.aliyuncs.com",
"https://2lqq34jg.mirror.aliyuncs.com",
"https://pee6w651.mirror.aliyuncs.com",
"http://hub-mirror.c.163.com",
"https://docker.mirrors.ustc.edu.cn",
"https://registry.docker-cn.com"
]
}
EOF
'
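As an alternative to the heredoc entirely, a task like this could use Ansible's stock copy module with its content parameter, which sidesteps the shell quoting problem; a minimal sketch (the file body is abbreviated here):

```yaml
- name: add docker config
  copy:
    dest: /etc/docker/daemon.json
    content: |
      {
        "exec-opts": ["native.cgroupdriver=systemd"],
        "log-driver": "json-file"
      }
```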

Why does jq not see environment variables when run in a script?

I have the following JSON file:
{
  "1": {
    "media_content": "test3.xspf"
  },
  "2": {
    "media_content": "test3.xspf"
  }
}
In the terminal, using bash as shell, I can execute the following commands:
export schedules="1"
echo $(jq '.[env.schedules]["media_content"]' json_file.json)
Which outputs this:
test3.xspf
So it works as expected, but when I place that jq command in a script and run it, it just returns null.
I did echo the value of schedules to make sure it is non-null inside the script, and it is OK:
echo $schedules
But I could not find the reason why this command works when run directly in the shell and does not work when run from a script.
I run the script in the following ways:
bash script.sh
./script.sh
PS: yes, I did offer execute permission: chmod +x script.sh
HINT: env.schedules represents the environment variable 'schedules', and I did make sure that it is assigned in the script before calling jq.
EDIT: I am posting now a whole script, specifying the files tree.
There is one directory containing:
script.sh
json_file.json
static.json
script.sh:
export zone=$(cat static.json | jq '.["1"]');
echo "json block: "$zone
export schedules="$(echo $zone | jq '.schedules')"
echo "environment variable: "$schedules
export media_content=$(jq '.[env.schedules]["media_content"]' json_file.json)
echo "What I want to get: \"test3.xspf\""
echo "What I get: "$media_content
json_file.json:
{
  "1": {
    "media_content": "test3.xspf"
  },
  "2": {
    "media_content": "test3.xspf"
  }
}
static.json:
{
  "1": {
    "x": "0",
    "y": "0",
    "width": "960",
    "height": "540",
    "schedules": "1"
  }
}
If I run the script, it displays:
json block: { "x": "0", "y": "0", "width": "960", "height": "540", "schedules": "1" }
environment variable: "1"
What I want to get: "test3.xspf"
What I get: null
If I hardcode the variable:
export schedules="1"
The problem no longer occurs
The problem is simple.
It's not jq's fault.
It's the improper way the schedules value is passed to the next command.
You have to remove the "s that surround the variable's value. Add a second command that uses sed to do that:
export schedules="$(echo $zone | jq '.schedules')"
schedules=$( echo $schedules | sed s/\"//g )
Long answer
Let's see. Here schedules is a string and echo shows its value as 1:
export schedules="1" ; echo $schedules
The same happens even when the double quotes are omitted:
export schedules=1 ; echo $schedules
But the result of this command carries additional "s:
export schedules=$(echo $zone | jq '.schedules')
If you print it now you will see them:
echo $schedules # "1"
So just remove the "s from the value:
schedules=$( echo $schedules | sed s/\"//g )
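The same cleanup can be done inside jq itself: the -r/--raw-output flag prints strings without the surrounding quotes, so the sed step becomes unnecessary. A small sketch with an inline JSON document (assuming jq is installed):

```shell
zone='{"schedules": "1"}'

# without -r, jq prints the JSON encoding, quotes included
schedules=$(echo "$zone" | jq '.schedules')
echo "$schedules"    # "1"

# with -r, jq prints the raw string
schedules=$(echo "$zone" | jq -r '.schedules')
echo "$schedules"    # 1
```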