How to extract and reuse a common part of this YAML fragment?

In a CircleCI config.yml file I have a number of jobs defined in a similar way:
defaults: &defaults
  working_directory: ~/repo/appengine
  docker:
    - image: circleci/python

version: 2
jobs:
  deploy_uat:
    <<: *defaults
    steps:
      - attach_workspace:
          at: ~/repo
      - checkout
      - run: *setup_secret
      - run: *enable_npm
      - run: *appengine_dep
      - run: *webview_dep
      - run: *apps_dep
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_UAT_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/uat-env.json
      - run: deployt.sh uat
  deploy_dev:
    # ... Skipped for brevity
  deploy_staging:
    # ...
I would like to further simplify the YAML to something like this:
defaults: &defaults
  working_directory: ~/repo/appengine
  docker:
    - image: circleci/python

# Common steps
deploy_steps: &deploy_steps
  steps:
    - attach_workspace:
        at: ~/repo
    - checkout
    - run: *setup_secret
    - run: *enable_npm
    - run: *appengine_dep
    - run: *webview_dep
    - run: *apps_dep

version: 2
jobs:
  deploy_uat:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_UAT_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/uat-env.json
      - run: deployt.sh uat
  deploy_dev:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_DEV_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/dev-env.json
      - run: deployt.sh dev
  deploy_staging:
    <<: *defaults
    steps:
      *deploy_steps
      - run:
          name: Setup key file
          command: |
            mkdir ~/gcloud_keys
            echo ${GCLOUD_STAGING_ENV_KEY} | base64 --decode --ignore-garbage > ${HOME}/gcloud_keys/staging-env.json
      - run: deployt.sh staging
However, if I do it this way, I get a "did not find expected key" error at the *deploy_steps line.
If I change it to:
deploy_uat:
  <<: *defaults
  steps:
    <<: *deploy_steps
  # ...
I get the same error.
What is the right way to write a simpler YAML config?

Well, the value of steps is expected to be an array. In the first case, there is an alias which points to a mapping (containing a steps key), followed by two sequence items. This is not a valid YAML structure and won't even get past the parser.
In the second case, you are using the (deprecated) merge key. That is only defined for mappings; there is no equivalent for sequences.
What you want to do is to merge two sequences inside YAML. There is no way to do that as YAML is not a programming language and does not support transformations on input data (apart from the merge key, which current YAML devs agree was a bad idea from the start).
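To make that concrete, here is a minimal sketch of the only alias usage that is valid for a sequence (reusing the deploy_steps name from the question): the alias may stand in for the whole value of steps, but there is no syntax for "this alias plus more items":

deploy_steps: &deploy_steps        # a complete sequence node
  - attach_workspace:
      at: ~/repo
  - checkout

deploy_uat:
  steps: *deploy_steps             # valid: the alias is the entire value
  # steps:
  #   - *deploy_steps              # would parse, but produces a nested list
  #   - run: deployt.sh uat        # rather than appending to deploy_steps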
Since YAML does not allow you to do what you want, you can turn to templating languages like Jinja, which is what Ansible and SaltStack do to enable such things in their YAML configs. Since CircleCI does not support a templating layer, you would need to write a script yourself that transforms your input YAML into the version CircleCI understands. It's up to you whether that is a feasible solution to your problem.

Related

Setting a GitHub Action environment variable with bash before reusable workflow

I have scoured the forums and couldn't find a solution.
I have a config.yml file which contains a set of key/value pairs. One of those pairs I want to set as a GITHUB_ENV variable to be used in my workflow. But I am running into issues, as the logs say:
"reusable workflows should be referenced at the top-level 'jobs.*.uses' key, not within steps"
How do I get around this?
name: "Deploy The Kraken"
on:
push:
branches:
- dev
pull_request:
branches:
- dev
workflow_dispatch:
jobs:
call-build:
steps:
- name: Set env
run: |
echo "FEATURE_BRANCH=$(cat config.yml | awk -F: '/^branch:/ { print $2 }'
| sed 's/ //g')" >> $GITHUB_ENV
- name: Test
run: echo $FEATURE_BRANCH
- uses: ######/#####/.github/workflows/kraken.yml#$FEATURE_BRANCH
with:
environment: #####
workspace: #####
contract: #####
production-ref: #####
I have tried multiple variations of placing the shell command in different parts. But I still can't get it to point to my chosen branch.
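For reference, this is roughly the placement the error message is pointing at (a sketch only; OWNER/REPO and the input are placeholders). The catch is that the ref in jobs.<job_id>.uses has to be a literal string, not an expression or environment variable, which is why pointing it at $FEATURE_BRANCH does not work:

jobs:
  call-build:
    # a reusable workflow is referenced at the job level, not inside steps
    uses: OWNER/REPO/.github/workflows/kraken.yml@dev
    with:
      environment: my-environment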

How can I run PNPM workspace projects as parallel jobs on GitHub Actions?

Given a repository structure with two packages like this:
$ tree
.
└── packages
    ├── foo
    └── bar
$ cat pnpm-workspace.yaml
packages:
  - 'packages/**'
$ pnpm -s m ls --depth -1
monorepo /monorepo
@mono/foo@0.0.0 /monorepo/packages/foo
@mono/bar@0.0.0 /monorepo/packages/bar
I'd like to run GitHub Actions CI such that it automatically runs each project as a separate job. Here I've set up a job that does that parallelization manually:
name: CI
on:
  push:
jobs:
  build:
    strategy:
      matrix:
        package: ["@mono/foo", "@mono/bar"]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: pnpm/action-setup@v2
        with:
          version: 6.9.1
      - run: pnpm run --filter ${{ matrix.package }} test
That runs fast because each project runs as a parallel job, but I don't want to manually maintain that matrix.package list. How can I get pnpm to provide the list of workspace projects and feed it into GitHub Actions CI?
Hmmm... I hit my head against this a bit more and found a solution.
I first made a package.json script to turn the pnpm output into a JSON array fragment:
$ cat package.json
{
  "scripts": {
    "list-packages": "echo [$(pnpm -s m ls --depth -1 | tr \" \" \"\n\" | grep -o \"@.*@\" | rev | cut -c 2- | rev | sed -e 's/\\(.*\\)/\"\\1\"/' | paste -sd, - )]",
  }
}
$ pnpm -s list-packages
["@mono/foo","@mono/bar"]
(I'm not good enough with shell to know if there's a much easier way to express this transformation so I'd be happy to learn!)
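(For what it's worth, a possibly simpler variant of the same transformation, assuming this pnpm version can emit JSON from ls and that jq is available on the runner, could look like this untested sketch:)

$ pnpm -s m ls --depth -1 --json | jq -c '[.[].name | select(. != "monorepo")]'
["@mono/foo","@mono/bar"]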
I then followed GitHub documentation on dynamically setting matrix variables and created this workflow:
name: CI
on:
  push:
  workflow_dispatch:
jobs:
  packages:
    runs-on: ubuntu-latest
    outputs:
      matrix: ${{ steps.set-matrix.outputs.matrix }}
    steps:
      - uses: actions/checkout@v2
      - uses: pnpm/action-setup@v2
        with:
          version: 6.9.1
      - run: pnpm -s list-packages
      - id: set-matrix
        run: echo "::set-output name=matrix::{\"package\":$(pnpm -s list-packages)}"
  build:
    needs: packages
    strategy:
      matrix: ${{ fromJson(needs.packages.outputs.matrix) }}
    runs-on: ubuntu-latest
    steps:
      - run: echo ${{ matrix.package }}
The packages job now takes the output of $(pnpm -s list-packages) and puts it into the matrix variable, and that makes GitHub Actions run them all in parallel.
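(A side note for readers today: GitHub has since deprecated the ::set-output workflow command; on current runners the set-matrix step can append to $GITHUB_OUTPUT instead, with everything else unchanged:)

      - id: set-matrix
        run: echo "matrix={\"package\":$(pnpm -s list-packages)}" >> "$GITHUB_OUTPUT"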

GitHub Actions pass list of variables to shell script

Using GitHub Actions, I would like to invoke a shell script with a list of directories.
(Essentially equivalent to passing an Ansible vars list to the shell script)
I don't really know how; is this even possible? Here's what I have so far, how could one improve this?
name: CI
on:
  push:
    branches:
      - master
    tags:
      - v*
  pull_request:
jobs:
  run-script:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v2
      - name: Run script on targets
        run: ./.github/workflows/script.sh {{ targets }}
        env:
          targets:
            - FolderA/SubfolderA/
            - FolderB/SubfolderB/
Today I was able to do this with the following YAML (truncated):
...
with:
  targets: |
    FolderA/SubfolderA
    FolderB/SubfolderB
The actual GitHub Action passes this as an argument like the following:
runs:
  using: docker
  image: Dockerfile
  args:
    - "${{ inputs.targets }}"
What this does is simply send the parameters as a single string with newline characters embedded, which can then be iterated over, much like an array, in a POSIX-compliant manner via the following shell code:
#!/bin/sh -l
targets="${1}"
for target in $targets
do
echo "Proof that this code works: $target"
done
That should accomplish your desired task, if I understand the question correctly. You can always run something like sh ./script.sh $target inside the loop if your use case requires it.
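If you are not building a custom action at all, the same newline-separated-string idea also works directly in a workflow step; a minimal sketch, reusing the paths and script location from the question (the TARGETS variable name is just illustrative):

jobs:
  run-script:
    runs-on: ubuntu-20.04
    env:
      TARGETS: |
        FolderA/SubfolderA/
        FolderB/SubfolderB/
    steps:
      - uses: actions/checkout@v2
      - name: Run script on targets
        run: |
          # unquoted $TARGETS splits on the embedded newlines
          for target in $TARGETS; do
            ./.github/workflows/script.sh "$target"
          done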

How can I access GitHub Action environment variables within a Bash script run by the Action?

I'm not able to access environment variables defined at the top-level of a GitHub Action configuration file from within a script run by the action.
For example, given the following config file:
name: x-pull-request
on: pull_request
env:
  FOO: bar
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: does a thing
        run: ./scripts/do-a-thing.sh
... and the following script:
X=${FOO:-default}
echo "X: $X" # X: default
The FOO environment variable defined in the config file is not available to the script and the default value is being used.
So, how can I access the environment variable from a Bash script run by the build step? Am I missing a prefix or something? (I know values defined in the input hash require you to use an INPUT_ prefix when referencing them.)
You can use env at any level: at the workflow root, in jobs, and in steps.
I wrote a test action and a test script to validate it:
The action file:
name: Env tests
on: push
env:
  FOO_ROOT: bar on root
jobs:
  test:
    runs-on: ubuntu-latest
    env:
      FOO_JOB: bar on job
    steps:
      - uses: actions/checkout@v1
      - name: Test envs
        run: ./env-test.sh
        env:
          FOO_STEP: bar on step
The script file:
#!/usr/bin/env bash
echo "FOO_ROOT: $FOO_ROOT"
echo "FOO_JOB: $FOO_JOB"
echo "FOO_STEP: $FOO_STEP"
echo " "
printenv
The results:
FOO_ROOT: bar on root
FOO_JOB: bar on job
FOO_STEP: bar on step
LEIN_HOME=/usr/local/lib/lein
M2_HOME=/usr/share/apache-maven-3.6.3
...
Check my results; in fact, I don't know why it didn't work on your side, because it should work.
If someone else is searching for a solution: you have to explicitly pass the environment variables to the bash script, otherwise they are not available:
name: x-pull-request
on: pull_request
env:
  FOO: bar
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - name: does a thing
        run: ./scripts/do-a-thing.sh $FOO
and in the script you have to map the parameter to a variable.
X=$1
echo "X: $X" # X: bar

Can't share global variable value between jobs in gitlab ci yaml file

I'm trying to build an application using GitLab CI.
The name of the generated file depends on the time, in this format:
DEV_APP_yyyyMMddhhmm
(example: DEV_APP_201810221340, corresponding to today's date, 2018/10/22 13:40).
How can I store this name in a global variable inside the .gitlab-ci.yml file?
Here is my .gitlab-ci.yml file:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_DRIVER: overlay
  SPRING_PROFILES_ACTIVE: gitlab-ci
  # TIME: ""
  # BRANCH: ""
  # REC_BUILD_NAME: ""
  TIME: "timex"
  BRANCH: "branchx"
  DEV_BUILD_NAME: "DEV_APP_x"
stages:
  - preparation
  - build
  - package
  - deploy
  - manual_rec_build
  - manual_rec_package
job_preparation:
  stage: preparation
  script:
    - echo ${TIME}
    - export TIME=$(date +%Y%m%d%H%M)
    - "BRANCH=$(echo $CI_BUILD_REF_SLUG | sed 's/[^[[:alnum:]]/_/g')"
    - "DEV_BUILD_NAME=DEV_APP_${BRANCH}_${TIME}"
    - echo ${TIME}
maven-build:
  image: maven:3-jdk-8
  stage: build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  # when: manual
docker-build:
  stage: package
  script:
    - echo ${TIME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  only:
    - merge-requests
    - /^feature\/sprint.*$/
    - /^DEV_.*$/
  when: manual
k8s-deploy-production:
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo ${TIME}
    - echo "$GOOGLE_KEY" > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone europe-west1-c
    - gcloud config set project actuator-sample
    - gcloud config set container/use_client_certificate True
    - gcloud container clusters get-credentials actuator-example
    - kubectl delete secret registry.gitlab.com
    - kubectl create secret docker-registry registry.gitlab.com --docker-server=https://registry.gitlab.com --docker-username=myUserName --docker-password=$REGISTRY_PASSWD --docker-email=myEmail@gmail.com
    - kubectl apply -f deployment.yml --namespace=production
  environment:
    name: production
    url: https://example.production.com
  when: manual
job_manual_rec_build:
  image: maven:3-jdk-8
  stage: manual_rec_build
  script:
    - echo ${TIME}
    - "mvn package -B"
  artifacts:
    paths:
      - target/*.jar
  when: manual
  # allow_failure: false
job_manual_rec_package:
  stage: manual_rec_package
  variables:
  script:
    - echo ${TIME}
    - echo ${DEV_BUILD_NAME}
    - docker build -t registry.gitlab.com/mourad.sellam/actuator-simple:${DEV_BUILD_NAME} .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker push registry.gitlab.com/mourad.sellam/actuator-simple
  artifacts:
    paths:
      - target/*.jar
  when: on_success
#test 1
When I call echo ${TIME} in the later jobs, it displays "timex" (the initial placeholder), not the value exported in job_preparation; the echo fails to show the updated value.
Could you tell me how to store a global variable and set it in each job?
Check if GitLab 13.0 (May 2020) could help in your case:
Inherit environment variables from other jobs
Passing environment variables (or other data) between CI jobs is now possible.
By using the dependencies keyword (or needs keyword for DAG pipelines), a job can inherit variables from other jobs if they are sourced with dotenv report artifacts.
This offers a more graceful approach for updating variables between jobs compared to artifacts or passing files.
See documentation and issue.
You can inherit environment variables from dependent jobs.
This feature makes use of the artifacts:reports:dotenv report feature.
Example with dependencies keyword.
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION  # => hello
  dependencies:
    - build
Example with the needs keyword:
build:
  stage: build
  script:
    - echo "BUILD_VERSION=hello" >> build.env
  artifacts:
    reports:
      dotenv: build.env

deploy:
  stage: deploy
  script:
    - echo $BUILD_VERSION  # => hello
  needs:
    - job: build
      artifacts: true
You can use artifacts for passing data between jobs. Here's an example from Flant that checks a previous manual decision in the pipeline:
approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/approved
  artifacts:
    paths:
      - .ci_status/

NOT approve:
  script:
    - mkdir -p .ci_status
    - echo $(date +%s) > .ci_status/not_approved
  artifacts:
    paths:
      - .ci_status/

deploy to production:
  script:
    - if [[ $(cat .ci_status/not_approved) > $(cat .ci_status/approved) ]]; then echo "Need approve from release engineer!"; exit 1; fi
    - echo "deploy to production!"
There's an open issue 47517, 'Pass variables between jobs', on GitLab CE:
CI/CD often needs to pass information from one job to another and
artifacts can be used for this, although it's a heavy solution with
unintended side effects. Workspaces is another proposal for passing
files between jobs. But sometimes you don't want to pass files at all,
just a small bit of data.
I have faced the same issue, and worked around it by storing the data in a file, then accessing it in the other jobs.
You kind of can... The way I went about it:
You can send a POST request to the project to save a variable:
export VARIABLE=secret
curl --request POST --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/" --form "key=VARIABLE" --form "value=$VARIABLE"
and clean up after the work/trigger is finished:
curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.seznam.net/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"
I'm not sure it's supposed to be used this way, but it does the trick. The variable is accessible to all the following jobs (especially useful when you use trigger and a script in that job is not an option).
Just please make sure you run the cleanup job even if the previous ones fail...
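One way to guarantee that is to give the cleanup its own job marked when: always, so it runs even when earlier jobs fail; a rough sketch (the cleanup stage itself is hypothetical and would have to be added to stages):

cleanup_variable:
  stage: cleanup
  when: always
  script:
    # delete the project variable created earlier, regardless of pipeline outcome
    - curl --request DELETE --header "PRIVATE-TOKEN: $CI_ACCESS_TOKEN" "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/variables/VARIABLE"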
