GitHub Actions: merge artifacts after matrix steps - continuous-integration

I have a job in a GitHub Actions workflow that runs unit tests and then uploads reports to Jira Xray. The test step takes quite a while to complete, so I want to split task execution into a few smaller chunks using a matrix.
I did this for linting and it works well, but for unit tests I'm struggling with how to collect and merge all the reports so they can be uploaded after all matrix jobs are done.
Here's what the current unit-test job looks like:
unit-test:
  runs-on: ubuntu-latest
  needs: setup
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: npx nx affected:test --parallel=3 --base=${{ env.BASE_REF }} --head=HEAD # actual unit tests
    - name: Check file existence # checking whether there are reports at all
      if: success() || failure()
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        # all reports will be placed in this directory
        # for matrix jobs the reports are separated between agents, so it's required to merge them
        files: 'reports/**/test-*.xml'
    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'reports/**/test-*.xml' # that's where I need to grab all reports
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true
The matrix job for linting looks like this. I would like to do the same for unit tests, but at the same time I want to collect all the reports, as in the job above:
linting:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    matrix:
      step: ${{ fromJson(needs.setup.outputs.lint-bins) }} # this will be something like [1,2,3,4]
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    # some nodejs logic to run a few jobs, it uses "execSync" from "child_process" to invoke the task
    - run: node scripts/ci-run-many.mjs --target=lint --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
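The setup job itself isn't shown in the question. For context, a minimal sketch of how such a job could expose the bins as JSON outputs consumed by fromJson() above; the step id and the hard-coded bin values are assumptions, not the asker's actual code:

setup:
  runs-on: ubuntu-latest
  outputs:
    lint-bins: ${{ steps.bins.outputs.lint-bins }}
    unit-test-bins: ${{ steps.bins.outputs.unit-test-bins }}
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    # Hypothetical step: decide how many chunks to split the work into
    # and publish the lists as JSON arrays for the matrix jobs to consume.
    - id: bins
      run: |
        echo "lint-bins=[1,2,3,4]" >> "$GITHUB_OUTPUT"
        echo "unit-test-bins=[1,2,3,4]" >> "$GITHUB_OUTPUT"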

Figured it out myself:
unit-test:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    fail-fast: false
    matrix:
      step: ${{ fromJson(needs.setup.outputs.unit-test-bins) }}
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: node scripts/ci-run-many.mjs --target=test --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
    - name: Upload reports' artifacts
      if: success() || failure()
      uses: actions/upload-artifact@v3
      with:
        name: ${{ env.RUN_UNIQUE_ID }}_artifact_${{ matrix.step }}
        if-no-files-found: ignore
        path: reports
        retention-days: 1

process-test-data:
  runs-on: ubuntu-latest
  needs: unit-test
  if: success() || failure()
  steps:
    - uses: actions/checkout@v3
    - name: Download reports' artifacts
      uses: actions/download-artifact@v3
      with:
        path: downloaded_artifacts
    - name: Place reports' artifacts
      run: rsync -av downloaded_artifacts/*/*/ unit_test_reports/
    - name: Check reports existence
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        files: 'unit_test_reports/**/test-*.xml'
    - name: Import results to Xray
      run: ls -R unit_test_reports/
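The last step above only lists the merged reports. To actually push them to Xray, the import step from the original single job can be reused unchanged apart from the path; a sketch using the same action and secrets as in the question, with only testPaths adjusted to the merged directory:

    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'unit_test_reports/**/test-*.xml' # merged reports from all matrix runs
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true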

Related

Automatic new release in GitHub Actions

I have no knowledge of YAML; I am a designer, not a programmer, but I followed a step-by-step guide to create an automation that would create an automatic release. It is returning: Error: Input required and not supplied: tag_name
YAML code:
name: Release
on:
  push:
    branches:
      - master
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  FILE_NAME: "Dark-Everywhere"
  FILE_EXTENSION: ".zip"
  BRANCHES: "1.19.3,1.19,1.18"
  PACKAGE_NAME: "assets,pack.mcmeta,pack.png"
  TAG_NAME: "1.0.0"
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Create a release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ env.GITHUB_TOKEN }}
        with:
          tag_name: "v${{ env.TAG_NAME }}"
          release_name: "Dark Everywhere ${{ env.TAG_NAME }}šŸŒ™"
          draft: true
          prerelease: false
      - name: Generate a zip archive for each branch
        run: |
          for branch in $(echo $BRANCHES | tr "," "\\n"); do
            zip -r "$FILE_NAME-$branch$FILE_EXTENSION" $PACKAGE_NAME
          done
      - name: Upload files
        uses: actions/upload-artifact@v2
        with:
          name: "$FILE_NAME-$branch$FILE_EXTENSION"
          path: "$FILE_NAME-$branch$FILE_EXTENSION"
      - name: Update release
        uses: actions/update-release@v1
        env:
          GITHUB_TOKEN: ${{ env.GITHUB_TOKEN }}
        with:
          release_id: ${{ env.RELEASE_ID }}
          body: "Build description here"
I already tried adding TAG_NAME: "1.0.0" inside env in the code, but the error persists.

Lambda Deploy with Github Actions - deploy succeeds but pipeline fails

I am trying to deploy a Lambda function with the Go runtime. The AWS console indicates that the function was indeed updated when the pipeline runs (the Last Modified date is updated in the Lambda console), but GitHub Actions indicates a failure (the following error is shown twice):
2022/04/04 00:08:30 ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:***:function:get_all_products_go
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "cd2b5f3f-c245-4c55-9440-9bdc078bb2b9"
  },
  Message_: "The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:***:function:get_all_products_go",
  Type: "User"
}
Here is my pipeline configuration YAML file:
name: Deploy Lambda function
on:
  push:
    branches:
      - main
jobs:
  deploy-lambda:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: '1.17.4'
      - name: Install dependencies
        run: |
          go version
          go mod init storefront-lambdas
          go mod tidy
      - name: Build binary
        run: |
          GOOS=linux GOARCH=amd64 go build get-all-products.go && zip deployment.zip get-all-products
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: default deploy
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: us-east-2
          function_name: get_all_products_go
          zip_file: deployment.zip
          memory_size: 128
          timeout: 10
          handler: get-all-products
          role: arn:aws:iam::xxxxxxxxxx:role/lambda-basic-execution
          runtime: go1.x

Sequencing GitHub Actions jobs on conditions

I have Job A, Job B, and Job C.
When Job A completes:
If Job B runs, I need Job C to run (after Job B has completed with success).
If Job B is skipped, I need Job C to run (if Job A has completed with success).
See below for a code snippet:
check_if_containers_exist_to_pass_to_last_known_tagger_job: # (Job A)
  name: check_if_containers_exist
  environment: test
  runs-on: ubuntu-latest
  #needs: [push_web_to_ecr, push_cron_###_to_ecr, push_to_###_shared_ecr, push_to_###_ecr]
  needs: push_###_to_shared_ecr
  #if: ${{ github.ref == 'refs/heads/main' }}
  outputs:
    signal_job: ${{ steps.step_id.outputs.run_container_tagger_job }}
  steps:
    - name: Configure AWS credentials
      id: config-aws-creds
      uses: aws-actions/configure-aws-credentials@v1
      with:
        aws-access-key-id: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
        aws-secret-access-key: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
        aws-region: eu-west-2
    - name: Check if container exists (If containers don't exist then don't run last known tagging job for rollback)
      id: step_id
      run: |
        aws ecr describe-images --repository-name anonymizer --image-ids imageTag=testing-latest
        if [ $? == 254 ]; then echo "::set-output name=run_container_tagger_job::false"; else echo "::set-output name=run_container_tagger_job::true"; fi

tag_latest_testing_containers_as_last_known_testing_containers: # (Job B)
  needs: check_if_containers_exist_to_pass_to_last_known_tagger_job
  if: needs.check_if_containers_exist_to_pass_to_last_known_tagger_job.outputs.signal_job == 'true'
  uses: ###/###/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: testing-latest
    new_tag_to_apply_to_containers: last-known-testing
    aws-region: eu-west-2
    run_cron_and_cycle_containers: false
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}

tag_testing_containers_to_testing_latest: # (Job C)
  needs: [check_if_containers_exist_to_pass_to_last_known_tagger_job, tag_latest_testing_containers_as_last_known_testing_containers]
  if: ${{ always() }}
  uses: ###/##/.github/workflows/container-tagger.yml@###
  with:
    tag_to_identify_containers: dev-${{ github.sha }}
    new_tag_to_apply_to_containers: james-cron-test
    aws-region: eu-west-2
    run_cron_and_cycle_containers: true
  secrets:
    SHARED_AWS_ACCESS_KEY_ID: ${{ secrets.SHARED_AWS_ACCESS_KEY_ID }}
    SHARED_AWS_SECRET_ACCESS_KEY: ${{ secrets.SHARED_AWS_SECRET_ACCESS_KEY }}
    ENVIRONMENT_AWS_ACCESS_KEY_ID: ${{ secrets.TESTING_AWS_ACCESS_KEY_ID }}
    ENVIRONMENT_AWS_SECRET_ACCESS_KEY: ${{ secrets.TESTING_AWS_SECRET_ACCESS_KEY }}
It might not be the most elegant solution, but it works.
The workaround consists of adding two extra steps at the end of Job A and setting both steps to always execute (if: always()).
The first one creates a text file and writes the job status into it.
The second one uploads this text file as an artifact.
Then, in Job B and Job C, you add steps to download the artifact and read the status of Job A, and then perform (or skip) specific operations.
Here is a demo of how it might look:
jobs:
  JOB_A:
    name: Job A
    ...
    steps:
      - name: Some steps of job A
        ...
      - name: Create file status_jobA.txt and write the job status into it
        if: always()
        run: |
          echo ${{ job.status }} > status_jobA.txt
      - name: Upload file status_jobA.txt as an artifact
        if: always()
        uses: actions/upload-artifact@v1
        with:
          name: pass_status_jobA
          path: status_jobA.txt

  JOB_B:
    needs: [JOB_A]
    if: always()
    name: Job B
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: ${{ steps.set_outputs.outputs.status_jobA == 'success' }}
        run: |
          ...

  JOB_C:
    needs: [JOB_A]
    if: always()
    name: Job C
    ...
    steps:
      - name: Download artifact pass_status_jobA
        uses: actions/download-artifact@v1
        with:
          name: pass_status_jobA
      - name: Set the status of Job A as output parameter
        id: set_outputs
        run: echo "::set-output name=status_jobA::$(<pass_status_jobA/status_jobA.txt)"
      - name: Check Job A status
        if: ${{ steps.set_outputs.outputs.status_jobA == 'failure' }}
        run: |
          ...
Note that here all jobs will always run, but the Job B steps after the check will only run when Job A succeeded, and the Job C steps after the check will only run when Job A failed.
Job A --> Success --> Job B + Job C checks --> Job B steps
Job A --> Failure --> Job B + Job C checks --> Job C steps
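As an aside, the sequencing described in the question (run C after B when B runs, or after A alone when B is skipped) can also be expressed directly with job-level conditions on the needs context, without passing status files around. A minimal sketch, not the approach used above:

JOB_C:
  needs: [JOB_A, JOB_B]
  # always() keeps C from being skipped just because B was skipped;
  # the condition then requires A to succeed and B to succeed or be skipped.
  if: ${{ always() && needs.JOB_A.result == 'success' && (needs.JOB_B.result == 'success' || needs.JOB_B.result == 'skipped') }}
  runs-on: ubuntu-latest
  steps:
    - run: echo "Job C runs"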

Azure Pipelines Data Driven Matrix

In GitHub Actions, I can write a matrix job like so:
jobs:
  test:
    name: Test-${{ matrix.template }}-${{ matrix.os }}
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macOS-latest]
        template: ['API', 'GraphQL', 'Orleans', 'NuGet']
    steps:
      #...
This will run every combination of os and template. In Azure Pipelines, you have to specify each combination manually like so:
stages:
  - stage: Test
    jobs:
      - job: Test
        strategy:
          matrix:
            Linux:
              os: ubuntu-latest
              template: API
            Mac:
              os: macos-latest
              template: API
            Windows:
              os: windows-latest
              template: API
            # ...continued
        pool:
          vmImage: $(os)
        timeoutInMinutes: 20
        steps:
          #...
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
Is it possible to create a data driven matrix strategy similar to GitHub Actions?
The answer is yes. This is a known issue that has already been reported on GitHub:
Add cross-product matrix strategy
In addition, there is a workaround for this issue mentioned in the official documentation:
Note: The matrix syntax doesn't support automatic job scaling, but you can implement similar functionality using the each keyword. For an example, see nedrebo/parameterized-azure-jobs.
jobs:
- template: azure-pipelines-linux.yml
  parameters:
    images: [ 'archlinux/base', 'ubuntu:16.04', 'ubuntu:18.04', 'fedora:31' ]
    pythonVersions: [ '3.5', '3.6', '3.7' ]
    swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
- template: azure-pipelines-windows.yml
  parameters:
    images: [ 'vs2017-win2016', 'windows-2019' ]
    pythonVersions: [ '3.5', '3.6', '3.7' ]
    swVersions: [ '1.0.0', '1.1.0', '1.2.0', '1.3.0' ]
azure-pipelines-windows.yml:
jobs:
- ${{ each image in parameters.images }}:
  - ${{ each pythonVersion in parameters.pythonVersions }}:
    - ${{ each swVersion in parameters.swVersions }}:
      - job:
        displayName: ${{ format('OS:{0} PY:{1} SW:{2}', image, pythonVersion, swVersion) }}
        pool:
          vmImage: ${{ image }}
        steps:
        - script: echo OS version &&
            wmic os get version &&
            echo Lets test SW ${{ swVersion }} on Python ${{ pythonVersion }}
Not an ideal solution, but for now, you can loop over parameters. Write a template like the following, and pass your data to it.
# jobs loop template
parameters:
  jobs: []

jobs:
- ${{ each job in parameters.jobs }}:   # Each job
  - ${{ each pair in job }}:            # Insert all properties other than "steps"
      ${{ if ne(pair.key, 'steps') }}:
        ${{ pair.key }}: ${{ pair.value }}
    steps:                              # Wrap the steps
    - task: SetupMyBuildTools@1         # Pre steps
    - ${{ job.steps }}                  # User's steps
    - task: PublishMyTelemetry@1        # Post steps
      condition: always()
See here for more examples: https://github.com/Microsoft/azure-pipelines-yaml/blob/master/design/each-expression.md#scenario-wrap-jobs
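For completeness, a sketch of how the loop template above might be consumed from the main pipeline; the template file name and the job contents here are hypothetical, only the parameter shape (a jobs list whose entries carry their own properties plus steps) is taken from the template:

jobs:
- template: jobs-loop.yml   # the loop template shown above
  parameters:
    jobs:
    - job: Linux
      pool:
        vmImage: ubuntu-latest
      steps:
      - script: echo Running on Linux
    - job: Windows
      pool:
        vmImage: windows-latest
      steps:
      - script: echo Running on Windows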

Github Action Cypress Run silently fails on test errors

Using a GitHub Action to run Cypress e2e tests, but when tests fail the job still passes.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The reason I would like the job to fail is to get notified, either via the GitHub job failing or with a Slack notification like this:
- uses: 8398a7/action-slack@v3
  if: job.status == 'failure'
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Have you tried adding an if: ${{ failure() }} at the end?
I ended up creating my own curl request which notifies Slack when anything fails on a specific job. Adding it to the end worked for me.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
      - name: Notify Test Failed
        if: ${{ failure() }}
        run: |
          curl -X POST -H "Content-type:application/json" --data "{\"type\":\"mrkdwn\",\"text\":\".\nāŒ *Cypress Run Failed*:\n*Branch:* $GITHUB_REF\n*Repo:* $GITHUB_REPOSITORY\n*SHA1:* $GITHUB_SHA\n*User:* $GITHUB_ACTOR\n.\n\"}" ${{ secrets.SLACK_WEBHOOK }}
