I'm trying to create a pipeline on GitHub Actions that deploys to EKS, but I'm getting the following error on the Build & Push Image step:
------
[2/2] ADD /target/customer.jar customer.jar:
Dockerfile:5
4 |
5 | >>> ADD /target/customer.jar customer.jar
6 |
7 | ENV JAVA_OPTS="-Xmx256m -Xms256m -XX:MetaspaceSize=48m -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
error: failed to solve: failed to compute cache key: failed to walk /tmp/buildkit-mount023157727/target: lstat /tmp/buildkit-mount023157727/target: no such file or directory
Error: Process completed with exit code 1.
I think it has something to do with the build context of that step, because I can build the image locally with the same Dockerfile (after building the project and creating the target folder first, of course).
Any suggestions? What am I missing?
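One thing that may help narrow it down is verifying that target/customer.jar still exists in the runner's workspace immediately before the buildx step runs, for example with a throwaway diagnostic step like this (just a debugging sketch, not part of the fix):

      - name: Debug build context
        run: ls -la target/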
My Dockerfile:
FROM openjdk:11-jre as release
ADD /target/customer.jar customer.jar
ENV JAVA_OPTS="-Xmx256m -Xms256m -XX:MetaspaceSize=48m -XX:+UseG1GC -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap"
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar customer.jar" ]
My Pipeline file:
name: Release
on:
  pull_request:
    branches:
      - main
env:
  RELEASE_REVISION: "pr-${{ github.event.pull_request.number }}-${{ github.event.pull_request.head.sha }}"
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_REGION: ${{ secrets.AWS_REGION }}
  KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  KUBE_NAMESPACE: default
  ECR_REPOSITORY: my-cool-application
jobs:
  release:
    name: Release
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up JDK 11
        uses: actions/setup-java@v2
        with:
          java-version: '11'
          distribution: 'adopt'
      - name: Build with Maven
        run: mvn -B package --file pom.xml
      - name: Cancel Previous Runs
        uses: styfle/cancel-workflow-action@0.4.1
        with:
          access_token: ${{ github.token }}
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: ${{ github.event.pull_request.head.sha }}
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ env.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ env.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@master
      - name: Docker cache layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-single-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-single-buildx
      - name: Build & Push Image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
        run: |
          docker buildx create --use
          docker buildx build \
            --cache-from=type=local,src=/tmp/.buildx-cache \
            --cache-to=type=local,dest=/tmp/.buildx-cache-new \
            --tag ${{ env.RELEASE_IMAGE }} \
            --target release \
            --push \
            .
          rm -rf /tmp/.buildx-cache
          mv /tmp/.buildx-cache-new /tmp/.buildx-cache
      - name: Deploy to Kubernetes cluster
        uses: kodermax/kubectl-aws-eks@master
        env:
          RELEASE_IMAGE: ${{ steps.login-ecr.outputs.registry }}/${{ env.ECR_REPOSITORY }}:${{ env.RELEASE_REVISION }}
        with:
          args: set image deployment/my-pod app=${{ env.RELEASE_IMAGE }} --record -n $KUBE_NAMESPACE
Related
I have the following workflow for GitHub Actions. It runs a matrix to build and publish images, saves each image name to the job outputs at the end, and then uses those outputs as parameters to the deploy command.
name: Build and Deploy
on:
  push:
    branches: [main]
    # Publish semver tags as releases.
    tags: ["v*.*.*"]
env:
jobs:
  set-env:
    name: Set Environment Variables
    runs-on: ubuntu-latest
    outputs:
      version: ${{ steps.main.outputs.version }}
      created: ${{ steps.main.outputs.created }}
      # repository: ${{ steps.main.outputs.repository }}
    steps:
      - id: main
        run: |
          echo "version=$(echo ${GITHUB_SHA} | cut -c1-7)" >> $GITHUB_OUTPUT
          echo "created=$(date -u +'%Y-%m-%dT%H:%M:%SZ')" >> $GITHUB_OUTPUT
  package-services:
    runs-on: ubuntu-latest
    needs: set-env
    permissions:
      contents: read
      packages: write
    outputs:
      containerImage-disposal: ${{ steps.image-tag.outputs.image-disposal-web-api }}
      containerImage-executive: ${{ steps.image-tag.outputs.image-executive-web-api }}
      containerImage-finance-management: ${{ steps.image-tag.outputs.image-finance-management-web-api }}
      containerImage-fm-background-task: ${{ steps.image-tag.outputs.image-finance-management-background-tasks-web-api }}
      containerImage-legal: ${{ steps.image-tag.outputs.image-legal-web-api }}
      containerImage-licensing: ${{ steps.image-tag.outputs.image-licensing-web-api }}
      containerImage-vehicle-management: ${{ steps.image-tag.outputs.vehicle-management-web-api }}
      containerImage-vehicle-rental: ${{ steps.image-tag.outputs.vehicle-rental-web-api }}
      containerImage-workshop-management: ${{ steps.image-tag.outputs.workshop-management-web-api }}
      containerImage-wm-clearance: ${{ steps.image-tag.outputs.workshop-management-clearance-web-api }}
    strategy:
      matrix:
        services:
          [
            { "appName": "disposal-web-api" },
            { "appName": "executive-web-api" },
            { "appName": "finance-management-web-api" },
            { "appName": "finance-management-background-tasks" },
            { "appName": "legal-web-api" },
            { "appName": "licensing-web-api" },
            { "appName": "vehicle-management-web-api" },
            { "appName": "vehicle-rental-web-api" },
            { "appName": "workshop-management-web-api" },
            { "appName": "workshop-management-clearance-web-api" },
          ]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3
      - name: Output image tag
        id: image-tag
        run: |
          echo "image-${{ matrix.services.appName }}=${{ env.IMAGE_NAME }}.${{ matrix.services.appName }}:sha-${{ needs.set-env.outputs.version }}" >> $GITHUB_OUTPUT
  deploy:
    if: ${{ github.event_name != 'pull_request' }}
    runs-on: ubuntu-latest
    needs: [package-services]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      - name: Azure Login
        uses: azure/login@v1
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}
      - name: Deploy bicep
        uses: azure/CLI@v1
        with:
          inlineScript: |
            az group create -g ifms-gmt-demo-westeurope -l westeurope
            az deployment group create -g ifms-gmt-demo-westeurope \
              -f ./deploy/containerapps/main.bicep \
              -p \
                imageNamespace=${{ secrets.DOCKERHUB_USERNAME }} \
                postgresAdministratorLogin=${{ secrets.POSTGRES_ADMINISTRATOR_LOGIN_DEMO }} \
                postgresAdministratorLoginPassword=${{ secrets.POSTGRES_ADMINISTRATOR_LOGIN_PASSWORD_DEMO }} \
                disposalImage='${{ needs.package-services.outputs.containerImage-disposal }}' \
                executiveImage='${{ needs.package-services.outputs.containerImage-executive }}' \
                financeManagementImage='${{ needs.package-services.outputs.containerImage-finance-management }}' \
                financeManagementBackgroundTasksImage='${{ needs.package-services.outputs.containerImage-fm-background-tasks }}' \
                legalImage='${{ needs.package-services.outputs.containerImage-legal }}' \
                licensingImage='${{ needs.package-services.outputs.containerImage-licensing }}' \
                vehicleManagementImage='${{ needs.package-services.outputs.containerImage-vehicle-management }}' \
                vehicleRentalImage='${{ needs.package-services.outputs.containerImage-vehicle-rental }}' \
                workshopManagementImage='${{ needs.package-services.outputs.containerImage-workshop-management }}' \
                workshopManagementClearanceImage='${{ needs.package-services.outputs.containerImage-wm-clearance }}'
For some weird reason all parameters get populated except for vehicleManagementImage, vehicleRentalImage, workshopManagementImage, and workshopManagementClearanceImage.
As can be seen below, they come through as empty strings:
Run azure/CLI@v1
  with:
    inlineScript: az group create -g ifms-gmt-demo-westeurope -l westeurope
  az deployment group create -g ifms-gmt-demo-westeurope \
    -f ./deploy/containerapps/main.bicep \
    -p \
      imageNamespace=*** \
      postgresAdministratorLogin=*** \
      postgresAdministratorLoginPassword=*** \
      disposalImage='ifms.gmt.disposal-web-api:sha-4a1bf13' \
      executiveImage='ifms.gmt.executive-web-api:sha-4a1bf13' \
      financeManagementImage='ifms.gmt.finance-management-web-api:sha-4a1bf13' \
      financeManagementBackgroundTasksImage='' \
      legalImage='ifms.gmt.legal-web-api:sha-4a1bf13' \
      licensingImage='ifms.gmt.licensing-web-api:sha-4a1bf13' \
      vehicleManagementImage='' \
      vehicleRentalImage='' \
      workshopManagementImage='' \
      workshopManagementClearanceImage=''
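For what it's worth, another pattern for carrying one value per matrix leg into a later job is to write each tag to a small file, upload it as an artifact, and download all of them in the deploy job. This is only a rough sketch; the file and artifact names below are invented for illustration, not taken from the real workflow:

      # in package-services, once per matrix leg
      - name: Save image tag
        run: echo "${{ env.IMAGE_NAME }}.${{ matrix.services.appName }}:sha-${{ needs.set-env.outputs.version }}" > tag-${{ matrix.services.appName }}.txt
      - uses: actions/upload-artifact@v3
        with:
          name: image-tag-${{ matrix.services.appName }}
          path: tag-${{ matrix.services.appName }}.txt

      # in deploy, before the bicep step
      - uses: actions/download-artifact@v3
        with:
          path: image-tags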
I have no knowledge of YAML (I am a designer, not a programmer), but I followed a step-by-step guide to create an automation that makes a release automatically. However, it is returning Error: Input required and not supplied: tag_name.
YAML code:
name: Release
on:
  push:
    branches:
      - master
env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
  FILE_NAME: "Dark-Everywhere"
  FILE_EXTENSION: ".zip"
  BRANCHES: "1.19.3,1.19,1.18"
  PACKAGE_NAME: "assets,pack.mcmeta,pack.png"
  TAG_NAME: "1.0.0"
jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Create a release
        uses: actions/create-release@v1
        env:
          GITHUB_TOKEN: ${{ env.GITHUB_TOKEN }}
        with:
          tag_name: "v${{ env.TAG_NAME }}"
          release_name: "Dark Everywhere ${{ env.TAG_NAME }}"
          draft: true
          prerelease: false
      - name: Generate a zip archive for each branch
        run: |
          for branch in $(echo $BRANCHES | tr "," "\\n"); do
            zip -r "$FILE_NAME-$branch$FILE_EXTENSION" $PACKAGE_NAME
          done
      - name: Upload files
        uses: actions/upload-artifact@v2
        with:
          name: "$FILE_NAME-$branch$FILE_EXTENSION"
          path: "$FILE_NAME-$branch$FILE_EXTENSION"
      - name: Update release
        uses: actions/update-release@v1
        env:
          GITHUB_TOKEN: ${{ env.GITHUB_TOKEN }}
        with:
          release_id: ${{ env.RELEASE_ID }}
          body: "Build description here"
I already tried adding TAG_NAME: "1.0.0" inside env in the code, but the error persists.
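One thing that might help narrow down where the empty value comes from is echoing the value the release step would receive, right before it runs (purely a diagnostic sketch, not a fix):

      - name: Debug tag name
        run: |
          echo "TAG_NAME env var is: $TAG_NAME"
          echo "tag_name input would be: v${{ env.TAG_NAME }}"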
I have a job in a GitHub Actions workflow that runs unit tests and then uploads the reports to Jira Xray. The problem is that the tests step takes quite a while to complete, so I want to split the test execution into a few smaller chunks using a matrix.
I did this for linting and it works well; however, for unit tests I'm struggling with how to collect and merge all the reports so they can be uploaded after all matrix jobs are done.
Here's what the current unit tests job looks like:
unit-test:
  runs-on: ubuntu-latest
  needs: setup
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: npx nx affected:test --parallel=3 --base=${{ env.BASE_REF }} --head=HEAD # actual unit tests
    - name: Check file existence # checking whether there're reports at all
      if: success() || failure()
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        # all reports will be placed in this directory
        # for matrix job reports will be separated between agents, so it's required to merge them
        files: 'reports/**/test-*.xml'
    - name: Import results to Xray
      if: (success() || failure()) && steps.check_files.outputs.files_exists == 'true' && github.event_name == 'push'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'reports/**/test-*.xml' # that's where I need to grab all reports
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true
The matrix job for linting looks like this. I would like to do the same for the unit tests, but I also still want to collect all the reports, as the job above does:
linting:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    matrix:
      step: ${{ fromJson(needs.setup.outputs.lint-bins) }} # this will be something like [1,2,3,4]
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    # some nodejs logic to run few jobs, it uses "execSync" from "child_process" to invoke the task
    - run: node scripts/ci-run-many.mjs --target=lint --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
Figured it out myself:
unit-test:
  runs-on: ubuntu-latest
  needs: [setup]
  strategy:
    fail-fast: false
    matrix:
      step: ${{ fromJson(needs.setup.outputs.unit-test-bins) }}
  steps:
    - uses: actions/checkout@v3
      with:
        fetch-depth: 0
    - uses: actions/cache@v3
      with:
        path: ${{ env.CACHE_NODE_MODULES_PATH }}
        key: build-${{ hashFiles('**/package-lock.json') }}
    - run: node scripts/ci-run-many.mjs --target=test --outputTarget=execute --partNumber=${{ matrix.step }} --base=${{ env.BASE_REF }} --head=HEAD
    - name: Upload reports' artifacts
      if: success() || failure()
      uses: actions/upload-artifact@v3
      with:
        name: ${{ env.RUN_UNIQUE_ID }}_artifact_${{ matrix.step }}
        if-no-files-found: ignore
        path: reports
        retention-days: 1
process-test-data:
  runs-on: ubuntu-latest
  needs: unit-test
  if: success() || failure()
  steps:
    - uses: actions/checkout@v3
    - name: Download reports' artifacts
      uses: actions/download-artifact@v3
      with:
        path: downloaded_artifacts
    - name: Place reports' artifacts
      run: rsync -av downloaded_artifacts/*/*/ unit_test_reports/
    - name: Check reports existence
      id: check_files
      uses: andstor/file-existence-action@v1
      with:
        files: 'unit_test_reports/**/test-*.xml'
    - name: Import results to Xray
      run: ls -R unit_test_reports/
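The last step above only lists the merged reports; presumably the real upload is the same mikepenz/xray-action step from the original job, now pointed at the merged folder. A sketch along those lines, reusing the same secrets and project key as before:

    - name: Import results to Xray
      if: steps.check_files.outputs.files_exists == 'true'
      uses: mikepenz/xray-action@v2
      with:
        username: ${{ secrets.XRAY_CLIENT_ID }}
        password: ${{ secrets.XRAY_CLIENT_SECRET }}
        testFormat: 'junit'
        testPaths: 'unit_test_reports/**/test-*.xml'
        projectKey: 'MY_KEY'
        combineInSingleTestExec: true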
I am trying to deploy a Lambda function with the Go runtime. The AWS console indicates that the function was indeed updated when the pipeline runs (the Last Modified date changes in the Lambda console), but GitHub Actions reports a failure (the following error is shown twice):
2022/04/04 00:08:30 ResourceConflictException: The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:***:function:get_all_products_go
{
  RespMetadata: {
    StatusCode: 409,
    RequestID: "cd2b5f3f-c245-4c55-9440-9bdc078bb2b9"
  },
  Message_: "The operation cannot be performed at this time. An update is in progress for resource: arn:aws:lambda:us-east-2:***:function:get_all_products_go",
  Type: "User"
}
Here is my pipeline configuration YAML file:
name: Deploy Lambda function
on:
  push:
    branches:
      - main
jobs:
  deploy-lambda:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: '1.17.4'
      - name: Install dependencies
        run: |
          go version
          go mod init storefront-lambdas
          go mod tidy
      - name: Build binary
        run: |
          GOOS=linux GOARCH=amd64 go build get-all-products.go && zip deployment.zip get-all-products
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: default deploy
        uses: appleboy/lambda-action@master
        with:
          aws_access_key_id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws_secret_access_key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws_region: us-east-2
          function_name: get_all_products_go
          zip_file: deployment.zip
          memory_size: 128
          timeout: 10
          handler: get-all-products
          role: arn:aws:iam::xxxxxxxxxx:role/lambda-basic-execution
          runtime: go1.x
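Since the error says another update for the same function is still in progress, one workaround that might be worth trying (a hedged sketch, not a confirmed fix) is having the workflow wait for the function to settle before the deploy step runs, using the AWS CLI that is already available on the runner and the credentials configured above:

      - name: Wait for pending Lambda updates
        run: aws lambda wait function-updated --function-name get_all_products_go --region us-east-2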
I'm using a GitHub Action to run Cypress e2e tests, but when the tests fail the job still passes.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
The reason I would like the job to fail is to get notified, either via the GitHub job failing or with a Slack notification like this:
- uses: 8398a7/action-slack@v3
  if: job.status == 'failure'
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
Have you tried adding an if: ${{ failure() }} at the end?
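Applied to the Slack step from the question, that would look roughly like this (same action and secrets as above, only the condition changes):

- uses: 8398a7/action-slack@v3
  if: ${{ failure() }}
  with:
    status: ${{ job.status }}
    fields: repo
    channel: '#dev'
    mention: here
    text: "E2E tests failed"
  env:
    SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}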
I ended up creating my own curl request that notifies Slack when anything fails in a specific job. Adding it to the end of the job worked for me.
name: E2E
on:
  push:
    branches: [ master ]
    paths-ignore: [ '**.md' ]
  schedule:
    - cron: '0 8-20 * * *'
jobs:
  cypress-run:
    runs-on: ubuntu-16.04
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Cypress run
        uses: cypress-io/github-action@v2
        continue-on-error: false
        with:
          record: true
        env:
          CYPRESS_RECORD_KEY: ${{ secrets.CYPRESS_RECORD_KEY }}
+
+     - name: Notify Test Failed
+       if: ${{ failure() }}
+       run: |
+         curl -X POST -H "Content-type:application/json" --data "{\"type\":\"mrkdwn\",\"text\":\".\n*Cypress Run Failed*:\n*Branch:* $GITHUB_REF\n*Repo:* $GITHUB_REPOSITORY\n*SHA1:* $GITHUB_SHA\n*User:* $GITHUB_ACTOR\n.\n\"}" ${{ secrets.SLACK_WEBHOOK }}