GitHub Actions Job Based on Multiple Conditions from Inputs

I've been trying to create conditions for jobs in GitHub Actions, but I can't seem to get it working.
I have the following inputs:
on:
  workflow_dispatch:
    inputs:
      env:
        description: 'Select the Environment'
        type: choice
        required: true
        options:
          - SIT
          - UAT
      op:
        description: 'Deploy or Delete Apps'
        type: choice
        required: true
        options:
          - Deploy
          - Delete
      ver:
        description: 'Type the app version'
        required: true
and the jobs below:
jobs:
  create-sit-app:
    runs-on: ubuntu-latest
    name: 'Deploy App for SIT'
    if: |
      (${{ github.event.inputs.env }} == 'SIT' && ${{ github.event.inputs.op }} == 'Deploy')
    steps:
      ........
      ........
      ........
I also tried this:
(${{ github.event.inputs.env == 'SIT' }} && ${{ github.event.inputs.op == 'Deploy' }})
And this:
${{ github.event.inputs.env == 'SIT' }} && ${{ github.event.inputs.op == 'Deploy' }}

Managed to do it like this:
if: (github.event.inputs.env == 'SIT' && github.event.inputs.op == 'Deploy')
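For completeness, a minimal sketch of the working job (the key point is that a job-level `if:` is evaluated as one expression, so the operands should not each be wrapped in their own `${{ }}` block as in the earlier attempts):

```yaml
jobs:
  create-sit-app:
    runs-on: ubuntu-latest
    name: 'Deploy App for SIT'
    # The whole comparison is one expression; no ${{ }} wrappers needed here
    if: github.event.inputs.env == 'SIT' && github.event.inputs.op == 'Deploy'
    steps:
      - run: echo "Deploying version ${{ github.event.inputs.ver }} to SIT"
```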

Related

Github actions: merge artifacts after matrix steps

I have a job in a GitHub Actions workflow that runs unit tests and then uploads reports to Jira Xray. The thing is, the tests step takes quite a while to complete, so I want to split execution into a few smaller chunks using a matrix.
I did this for linting and it works well; however, for unit tests I'm struggling with how to collect and merge all the reports so they can be uploaded after all matrix jobs are done.
Here's how the current unit tests job looks:
unit-test:
  runs-on: ubuntu-latest
  needs: setup
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: npx nx affected:test --parallel=3 --base=${{ env.BASE_REF }} --head=HEAD # actual unit tests
    - name: Check file existence # checking whether there are reports at all
      if: success() || failure()
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        # all reports will be placed in this directory
        # for matrix jobs, reports are separated between agents, so they must be merged
        files: 'reports/**/test-*.xml'
    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'reports/**/test-*.xml' # that's where I need to grab all reports
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true
The matrix job for linting looks like this. I would like to do the same for unit tests, while still collecting all the reports as in the job above:
linting:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    matrix:
      step: ${{ fromJson(needs.setup.outputs.lint-bins) }} # this will be something like [1,2,3,4]
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    # some Node.js logic to run a few jobs; it uses "execSync" from "child_process" to invoke the task
    - run: node scripts/ci-run-many.mjs --target=lint --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
Figured it out myself:
unit-test:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    fail-fast: false
    matrix:
      step: ${{ fromJson(needs.setup.outputs.unit-test-bins) }}
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: node scripts/ci-run-many.mjs --target=test --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
    - name: Upload reports' artifacts
      if: success() || failure()
      uses: actions/upload-artifact@v3
      with:
        name: ${{ env.RUN_UNIQUE_ID }}_artifact_${{ matrix.step }}
        if-no-files-found: ignore
        path: reports
        retention-days: 1
process-test-data:
  runs-on: ubuntu-latest
  needs: unit-test
  if: success() || failure()
  steps:
    - uses: actions/checkout@v3
    - name: Download reports' artifacts
      uses: actions/download-artifact@v3
      with:
        path: downloaded_artifacts
    - name: Place reports' artifacts
      run: rsync -av downloaded_artifacts/*/*/ unit_test_reports/
    - name: Check reports existence
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        files: 'unit_test_reports/**/test-*.xml'
    - name: Import results to Xray
      run: ls -R unit_test_reports/
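The rsync step above is what merges the per-matrix-job results: each matrix job uploads its own artifact, download-artifact places each one in its own subdirectory, and the copy flattens them into a single tree. A standalone sketch of that merge with hypothetical file names (simplified to one level of nesting; the real layout depends on how upload-artifact stored the paths):

```shell
# Simulate two downloaded matrix artifacts, each carrying a reports tree
mkdir -p downloaded_artifacts/artifact_1/reports downloaded_artifacts/artifact_2/reports
echo '<testsuite/>' > downloaded_artifacts/artifact_1/reports/test-a.xml
echo '<testsuite/>' > downloaded_artifacts/artifact_2/reports/test-b.xml

# Merge the contents of every artifact folder into one tree
# (same effect as `rsync -av downloaded_artifacts/*/ unit_test_reports/`)
mkdir -p unit_test_reports
for dir in downloaded_artifacts/*/; do
  cp -R "$dir". unit_test_reports/
done

ls -R unit_test_reports
```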

Can I have a GitHub Actions Workflow without any Jobs inside?

When translating existing Azure DevOps YAML pipelines to GitHub Actions YAML, I noticed some of my Azure pipelines were just templates calling other YAML files.
trigger:
  - master
resources:
  repositories:
    - repository: templates
      type: git
      name: 'Cloud Integration\PipelineTemplates'
name: $(Build.SourceBranchName)_$(Build.Reason)_$(rev:r)
variables:
  - group: var-lc-integration-emailservice
  - name: logicapp_workflows
    value: false
  - name: base_resources
    value: false
  - name: functionapp_resources
    value: false
  - name: functionapp
    value: false
  - name: ia_resources
    value: false
  - name: ia_configs
    value: false
  - name: apim_resources
    value: true
stages:
  - template: yml-templates\master.yml@templates
    parameters:
      logicapp_workflows: ${{ variables.logicapp_workflows }}
      base_resources: ${{ variables.base_resources }}
      functionapp_resources: ${{ variables.functionapp_resources }}
      functionapp: ${{ variables.functionapp }}
      ia_resources: ${{ variables.ia_resources }}
      ia_configs: ${{ variables.ia_configs }}
      apim_resources: ${{ variables.apim_resources }}
While writing a GitHub workflow for the above Azure pipeline, can we have a "dummy job", or no job at all, within a workflow to solve this?
If I understand correctly, yes: see reusing GitHub Actions workflows.
It allows you to call another workflow from your repository and provide inputs/secrets as necessary.
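As a hedged sketch of what that looks like (hypothetical file names; the called workflow must itself declare an `on: workflow_call` trigger):

```yaml
# Caller workflow: the job is nothing but a reference to another workflow,
# so no steps (and no dummy job) are needed here.
name: Caller
on:
  push:
    branches: [ master ]
jobs:
  call-template:
    uses: ./.github/workflows/master-template.yml  # hypothetical reusable workflow
    with:
      apim_resources: true
    secrets: inherit
```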

Sequencing GitHub actions job on conditions

I have Job A, Job B, and Job C. When Job A completes:
If Job B runs, I need Job C to run after Job B has completed with success.
If Job B is skipped, I need Job C to run if Job A has completed with success.
See below for a code snippet:
check_if_containers_exist_to_pass_to_last_known_tagger_job: # (Job A)
  name: check_if_containers_exist
  environment: test
  runs-on: ubuntu-latest
  #needs: [push_web_to_ecr, push_cron_###_to_ecr, push_to_###_shared_ecr, push_to_###_ecr]
  needs: push_###_to_shared_ecr
  #if: ${{ github.ref == 'refs/heads/main' }}
  outputs:
    signal_job: ${{ steps.step_id.outputs.run_container_tagger_job }}
  steps:
    - name: Configure AWS credentials
      id: config-aws-creds
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-west-2
    - name: Check if container exists (if containers don't exist, don't run the last-known tagging job for rollback)
      id: step_id
      run: |
        aws ecr describe-images --repository-name anonymizer --image-ids imageTag=testing-latest
        if [ $? == 254 ]; then echo "::set-output name=run_container_tagger_job::false"; else echo "::set-output name=run_container_tagger_job::true"; fi
tag_latest_testing_containers_as_last_known_testing_containers: # (Job B)
  needs: check_if_containers_exist_to_pass_to_last_known_tagger_job
  if: needs.check_if_containers_exist_to_pass_to_last_known_tagger_job.outputs.signal_job == 'true'
  uses: ###/###/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: testing-latest
    new_tag_to_apply_to_containers: last-known-testing
    aws-region: eu-west-2
    run_cron_and_cycle_containers: false
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
tag_testing_containers_to_testing_latest: # (Job C)
  needs: [check_if_containers_exist_to_pass_to_last_known_tagger_job, tag_latest_testing_containers_as_last_known_testing_containers]
  if: ${{ always() }}
  uses: ###/##/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: dev-${{ github.sha }}
    new_tag_to_apply_to_containers: james-cron-test
    aws-region: eu-west-2
    run_cron_and_cycle_containers: true
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
    ENVIRONMENT_AWS_ACCESS_KEY_ID: ${{ secrets.TESTING_AWS_ACCESS_KEY_ID }}
    ENVIRONMENT_AWS_SECRET_ACCESS_KEY: ${{ secrets.TESTING_AWS_SECRET_ACCESS_KEY }}
It might not be the most elegant solution, but it works.
The workaround consists of adding two extra steps at the end of Job A, both set to always execute (if: always()).
The first creates a text file and writes the job status into it.
The second uploads this text file as an artifact.
Then, in Job B and Job C, you add steps to download the artifact and read the status of Job A, to then perform (or skip) specific operations.
Here is a demo of how it might look:
jobs:
  JOB_A:
    name: Job A
    ...
    steps:
      - name: Some steps of job A
        ...
      - name: Create file status_jobA.txt and write the job status into it
        if: always()
        run: |
          echo ${{ job.status }} > status_jobA.txt
      - name: Upload file status_jobA.txt as an artifact
        if: always()
        uses: actions/upload-artifact@v1
        with:
          name: pass_status_jobA
          path: status_jobA.txt
  JOB_B:
    needs: [JOB_A]
    if: always()
    name: Job B
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as an output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: steps.set_outputs.outputs.status_jobA == 'success'
        run: |
          ...
  JOB_C:
    needs: [JOB_A]
    if: always()
    name: Job C
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as an output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: steps.set_outputs.outputs.status_jobA == 'failure'
        run: |
          ...
Note that here all jobs will always run, but Job B's steps after the check will only run when Job A succeeded, and Job C's steps after the check will only run when Job A failed.
Job A --> Success --> Job B + Job C checks --> Job B steps
Job A --> Failure --> Job B + Job C checks --> Job C steps
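As an alternative to passing the status through an artifact, Job C can also inspect the results of its dependencies directly via the `needs` context. A hedged sketch using the job names from the question, assuming Job C should run when Job B succeeded, or when Job B was skipped but Job A succeeded:

```yaml
tag_testing_containers_to_testing_latest: # (Job C)
  needs: [check_if_containers_exist_to_pass_to_last_known_tagger_job, tag_latest_testing_containers_as_last_known_testing_containers]
  # always() keeps Job C from being auto-skipped when Job B is skipped;
  # the result checks then encode "A succeeded and B did not fail"
  if: |
    always() &&
    needs.check_if_containers_exist_to_pass_to_last_known_tagger_job.result == 'success' &&
    needs.tag_latest_testing_containers_as_last_known_testing_containers.result != 'failure'
```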

Azure Pipelines Data Driven Matrix

In GitHub Actions, I can write a matrix job like so:
jobs:
  test:
    name: Test-${{ matrix.template }}-${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
        template: ['API', 'GraphQL', 'Orleans', 'NuGet']
    steps:
      #...
This will run every combination of os and template. In Azure Pipelines, you have to specify each combination manually like so:
stages:
  - stage: Test
    jobs:
      - job: Test
        strategy:
          matrix:
            Linux:
              os: ubuntu-latest
              template: API
            Mac:
              os: macos-latest
              template: API
            Windows:
              os: windows-latest
              template: API
            # ...continued
        pool:
          vmImage: $(os)
        timeoutInMinutes: 20
        steps:
          #...
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
The answer is yes. This is a known issue that has already been reported on GitHub:
Add cross-product matrix strategy
In addition, there is a workaround mentioned in the official documentation:
Note
The matrix syntax doesn't support automatic job scaling but you can
implement similar functionality using the each keyword. For an
example, see nedrebo/parameterized-azure-jobs.
jobs:
  - template: azure-pipelines-linux.yml
    parameters:
      images: [ 'archlinux/base', 'ubuntu:16.04', 'ubuntu:18.04', 'fedora:31' ]
      pythonVersions: [ '3.5', '3.6', '3.7' ]
      swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
  - template: azure-pipelines-windows.yml
    parameters:
      images: [ 'vs2017-win2016', 'windows-2019' ]
      pythonVersions: [ '3.5', '3.6', '3.7' ]
      swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
azure-pipelines-windows.yml:
jobs:
  - ${{ each image in parameters.images }}:
      - ${{ each pythonVersion in parameters.pythonVersions }}:
          - ${{ each swVersion in parameters.swVersions }}:
              - job:
                displayName: ${{ format('OS:{0} PY:{1} SW:{2}', image, pythonVersion, swVersion) }}
                pool:
                  vmImage: ${{ image }}
                steps:
                  - script: echo OS version &&
                      wmic os get version &&
                      echo Lets test SW ${{ swVersion }} on Python ${{ pythonVersion }}
Not an ideal solution, but for now, you can loop over parameters. Write a template like the following, and pass your data to it.
# jobs loop template
parameters:
  jobs: []

jobs:
  - ${{ each job in parameters.jobs }}: # Each job
      - ${{ each pair in job }}: # Insert all properties other than "steps"
          ${{ if ne(pair.key, 'steps') }}:
            ${{ pair.key }}: ${{ pair.value }}
        steps: # Wrap the steps
          - task: SetupMyBuildTools@1 # Pre steps
          - ${{ job.steps }} # User's steps
          - task: PublishMyTelemetry@1 # Post steps
            condition: always()
See here for more examples: https://github.com/Microsoft/azure-pipelines-yaml/blob/master/design/each-expression.md#scenario-wrap-jobs

Github Action Cypress Run silently fails on test errors

I'm using a GitHub Action to run Cypress e2e tests, but when the tests fail the job still passes.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The reason I would like the job to fail is to get notified, either via the GitHub job failing or with a Slack notification like this:
- uses: 8398a7/action-slack@v3
  if: job.status == 'failure'
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Have you tried adding an if: ${{ failure() }} at the end?
I ended up creating my own curl request which notifies Slack when anything fails on a specific job. Adding it to the end worked for me.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
+
+     - name: Notify Test Failed
+       if: ${{ failure() }}
+       run: |
+         curl -X POST -H "Content-type:application/json" --data "{\"type\":\"mrkdwn\",\"text\":\".\n❌ *Cypress Run Failed*:\n*Branch:* $GITHUB_REF\n*Repo:* $GITHUB_REPOSITORY\n*SHA1:* $GITHUB_SHA\n*User:* $GITHUB_ACTOR\n.\n\"}" ${{ secrets.SLACK_WEBHOOK }}
