Deploy lambdas in parallel with CodeBuild - aws-lambda

I have a CodePipeline configured with my single GitHub repo to trigger build & deploy on any commit.
My repo contains dozens of individual lambda functions.
The pre_build step does a full clean & restore. Then in the build step, I have a long list of commands like this:
- dotnet build -c Debug
- cd folder1
- dotnet lambda deploy-function --configuration Debug --region $Region --config-file aws-lambda-tools-test.json
- cd ../folder2
- dotnet lambda deploy-function --configuration Debug --region $Region --config-file aws-lambda-tools-test.json
- cd ../folder3
- dotnet lambda deploy-function --configuration Debug --region $Region --config-file aws-lambda-tools-test.json
My question is, is there any way for me to execute these deployments in parallel?
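One option that might work here (a sketch only, not tested against this exact setup): since each dotnet lambda deploy-function call is independent, the deploys can be pushed into the background with & inside a single build command and then waited on, so they run concurrently in one CodeBuild container. The folder names and config file below come from the question; the loop and failure handling are assumptions:

  build:
    commands:
      - dotnet build -c Debug
      # Run each deploy in a background subshell so the `cd` doesn't leak,
      # collect the PIDs, then wait on each one so a failed deploy fails the build.
      - |
        rc=0
        pids=""
        for dir in folder1 folder2 folder3; do
          ( cd "$dir" && dotnet lambda deploy-function --configuration Debug --region $Region --config-file aws-lambda-tools-test.json ) &
          pids="$pids $!"
        done
        for pid in $pids; do wait "$pid" || rc=1; done
        [ "$rc" -eq 0 ]

An alternative at the pipeline level would be CodeBuild batch builds or several parallel CodeBuild actions in CodePipeline, but the shell approach keeps the existing single buildspec.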

Related

Gitlab CI/CD fail with "bash: line 132: go: command not found"

We have installed GitLab on our own server. We are looking to use the GitLab CI/CD pipeline to build and release our software; to that end, I'm working on a POC. I have created a project with the following .gitlab-ci.yml:
variables:
  GOOS: linux
  GOARCH: amd64

stages:
  - test
  - build
  - deb-build

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - go mod tidy
  script:
    - go test ./...

build_binary:
  stage: build
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .

build deb:
  stage: deb-build
  image: ubuntu:latest
  before_script:
    - mkdir -p deb-build/usr/local/bin/
    - chmod -R 0755 deb-build/*
    - mkdir build
  script:
    - cp newer deb-build/usr/local/bin/
    - dpkg-deb --build deb-build release-1.1.1.deb
    - mv release-1.1.1.deb build
  artifacts:
    paths:
      - build/*
TL;DR: I have updated the gitlab-ci.yml; the failing and passing job logs are in the gists linked below.
What I have noticed is that the error persists when I use the shared runner (GJ7z2Aym). If I register a runner myself (i.e. a specific runner) with
gitlab-runner register --non-interactive --url "https://gitlab.sboxdc.com/" --registration-token "<register_token>" --description "" --executor "docker" --docker-image "docker:latest"
I see the build passing without any problem.
Failed case.
https://gist.github.com/meetme2meat/0676c2ee8b78b3683c236d06247a8a4d
One that Passed
https://gist.github.com/meetme2meat/058e2656595a428a28fcd91ba68874e8
The failing job is using a runner with the shell executor, which was probably set up when you configured your GitLab instance. This can be seen in the logs in these lines:
Preparing the "shell" executor
Using Shell executor...
The shell executor ignores your job's image: config. It runs the job script directly on the machine hosting the runner and tries to find the go binary on that machine (failing in your case). It's a bit like running go commands on an Ubuntu box without having go installed.
Your successful job is using a runner with the docker executor, running your job's script in a golang:latest image as you requested. It's like running docker run golang:latest sh -c '[your script]'. This can be seen in the job's logs:
Preparing the "docker" executor
Using Docker executor with image golang:latest ...
Pulling docker image golang:latest ...
Using docker image sha256:05e[...] for golang:latest with digest golang@sha256:04f[...]
What you can do:
Make sure you configure a runner with a docker executor. Your config.toml would then look like:
[[runners]]
  # ...
  executor = "docker"
  [runners.docker]
    # ...
It seems you already did that by registering your own runner.
Configure your jobs to use this runner via job tags. You can set the tag docker_executor on your Docker runner (when registering or via the GitLab UI) and set up something like:
build_binary:
  stage: build
  # Tags a runner must have to run this job
  tags:
    - docker_executor
  image: golang:latest
  artifacts:
    untracked: true
  script:
    - GOOS=$GOOS GOARCH=$GOARCH go build -o newer .
See Runner registration and Docker executor for details.
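For example, the runner could be registered with the tag directly (a sketch based on the registration command from the question; the tag name is just an illustration, and the job's own image: entry should still take precedence over the default image):

gitlab-runner register --non-interactive \
  --url "https://gitlab.sboxdc.com/" \
  --registration-token "<register_token>" \
  --executor "docker" \
  --docker-image "docker:latest" \
  --tag-list "docker_executor"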
Since you have used image: golang:latest, go should be in the $PATH.
You need to check at which stage it is failing: run_tests or build_binary.
Add echo $PATH to your script steps to check what $PATH is actually used.
Check also whether the error comes from the lack of git, which Go uses to access modules' remote repositories. See this answer as an example.
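A minimal way to do both checks in the failing job could look like this (a sketch; only the echo/which/git lines are additions, the rest mirrors the run_tests job from the question):

run_tests:
  stage: test
  image: golang:latest
  before_script:
    - echo $PATH       # confirm which PATH the job actually sees
    - which go || true # confirm whether the go binary can be found at all
    - git --version    # go needs git to fetch module dependencies
    - go mod tidy
  script:
    - go test ./...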
From your gists, the default GitLab runner uses a shell executor (which knows nothing about Go)
Instead, the second one uses a Docker executor, based on the Go image.
Registering the (Docker) runner is therefore the right way to ensure the expected executor.

Cache Cloud Native Buildpacks/Paketo.io pack CLI builds on GitHub Actions (e.g. with Spring Boot/Java/Maven buildpacks)?

I'm working on a Spring Boot application that should be packaged into an OCI container using Cloud Native Buildpacks / Paketo.io. I build it with GitHub Actions, where my workflow build.yml looks like this:
name: build
on: [push]

jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}

      - name: Install pack CLI via apt. See https://buildpacks.io/docs/tools/pack/#pack-cli
        run: |
          sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
          sudo apt-get update
          sudo apt-get install pack-cli

      - name: Build app with pack CLI
        run: pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base

      - name: Tag & publish to Docker Hub
        run: |
          docker tag spring-boot-buildpack jonashackt/spring-boot-buildpack:latest
          docker push jonashackt/spring-boot-buildpack:latest
Now the step Build app with pack CLI takes relatively long, since it always downloads the Paketo builder Docker image and then does a full fresh build. This means downloading the JDK and every single Maven dependency. Is there a way to cache a Paketo build on GitHub Actions?
Caching the Docker images on GitHub Actions might be an option - which doesn't seem to be that easy. Another option would be to leverage the Docker official docker/build-push-action Action, which is able to cache the buildx-cache. But I didn't get the combination of pack CLI and buildx-caching to work (see this build for example).
Finally I stumbled upon the general Cloud Native Buildpacks approach on how to cache in the docs:
Cache Images are a way to preserve build optimizing layers across different host machines. These images can improve performance when using pack in ephemeral environments such as CI/CD pipelines.
I found this concept quite nice, since it uses a second cache image, which gets published to a container registry of your choice. This image is then used for all Paketo pack CLI builds on every machine where you append the --cache-image parameter, be it your local desktop or any CI/CD platform (like GitHub Actions).
In order to use the --cache-image parameter, we also have to use the --publish flag (since the cache image needs to get published to your container registry!). This also means we need to log into the container registry before we're able to run our pack CLI command. Using Docker Hub this is something like:
echo $DOCKER_HUB_TOKEN | docker login -u YourUserNameHere --password-stdin
Also the Paketo builder image must be a trusted one. As the docs state:
By default, any builder suggested by pack builder suggest is considered trusted.
Since I use a suggested builder, I don't have to do anything here. If you want to use another builder that isn't trusted by default, you need to run a pack config trusted-builders add your/builder-to-trust:bionic command before the final pack CLI command.
Here's the cache-enabled pack CLI command for building a Spring Boot app with Docker Hub as the container registry:
pack build index.docker.io/yourApplicationImageNameHere:latest \
--builder paketobuildpacks/builder:base \
--path . \
--cache-image index.docker.io/yourCacheImageNameHere:latest \
--publish
Finally the GitHub Action workflow to build and publish the example Spring Boot app https://github.com/jonashackt/spring-boot-buildpack looks like this:
name: build
on: [push]

jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2

      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}

      - name: Install pack CLI via the official buildpack Action https://github.com/buildpacks/github-actions#setup-pack-cli-action
        uses: buildpacks/github-actions/setup-pack@v4.1.0

      - name: Build app with pack CLI using Buildpack Cache image (see https://buildpacks.io/docs/app-developer-guide/using-cache-image/) & publish to Docker Hub
        run: |
          pack build index.docker.io/jonashackt/spring-boot-buildpack:latest \
            --builder paketobuildpacks/builder:base \
            --path . \
            --cache-image index.docker.io/jonashackt/spring-boot-buildpack-paketo-cache-image:latest \
            --publish
Note that when using the pack CLI's --publish flag, we no longer need the extra "Tag & publish to Docker Hub" step, since pack CLI already does that for us.

Google App Engine Flex Container Deployment Issues

I am trying to deploy my Go 1.14 microservices application onto Google's Flexible environment. I read that there are issues with finding GOROOT and with obtaining the correct dependencies.
I wanted to use the flexible environment because I needed to do port forwarding. Since my domain name was used to run the actual application, I wanted port 8081 to run my microservices.
I followed the instruction from this link:
https://blog.cubieserver.de/2019/go-modules-with-app-engine-flexible/
I tried option 3.
This is my gitlab-ci.yaml configuration file:
# .gitlab-ci.yaml
stages:
  - build
  - deploy

build_server:
  stage: build
  image: golang:1.14
  script:
    - go mod vendor
    - go install farmcycle.us/user/farmcycle
  artifacts:
    paths:
      - vendor/

deploy_app_engine:
  stage: deploy
  image: google/cloud-sdk:270.0.0
  script:
    - echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
    - gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
    - gcloud --quiet --project $PROJECT_ID app deploy app.yaml
  after_script:
    - rm /tmp/$CI_PIPELINE_ID.json
This is my app.yaml configuration file:
runtime: go
env: flex

network:
  forwarded_ports:
    - 8081/tcp
When I deployed this using the GitLab CI pipeline, the Build stage passed, but the Deploy stage failed.
Running with gitlab-runner 13.4.1 (e95f89a0)
on docker-auto-scale 72989761
Preparing the "docker+machine" executor
Preparing environment
00:03
Getting source from Git repository
00:04
Downloading artifacts
00:02
Executing "step_script" stage of the job script
00:03
$ echo $SERVICE_ACCOUNT > /tmp/$CI_PIPELINE_ID.json
$ gcloud auth activate-service-account --key-file /tmp/$CI_PIPELINE_ID.json
Activated service account credentials for: [farmcycle-hk1996@appspot.gserviceaccount.com]
$ gcloud --quiet --project $PROJECT_ID app deploy app.yaml
ERROR: (gcloud.app.deploy) Staging command [/usr/lib/google-cloud-sdk/platform/google_appengine/go-app-stager /builds/JLiu1272/farmcycle-backend/app.yaml /builds/JLiu1272/farmcycle-backend /tmp/tmprH6xQd/tmpSIeACq] failed with return code [1].
------------------------------------ STDOUT ------------------------------------
------------------------------------ STDERR ------------------------------------
2020/10/10 20:48:27 staging for go1.11
2020/10/10 20:48:27 Staging Flex app: failed analyzing /builds/JLiu1272/farmcycle-backend: cannot find package "farmcycle.us/user/farmcycle/api" in any of:
($GOROOT not set)
/root/go/src/farmcycle.us/user/farmcycle/api (from $GOPATH)
GOPATH: /root/go
--------------------------------------------------------------------------------
Running after_script
00:01
Running after script...
$ rm /tmp/$CI_PIPELINE_ID.json
Cleaning up file based variables
00:01
ERROR: Job failed: exit code 1
This was the error. Honestly I am not really sure what this error is and how to fix it.
Surprisingly, even using the latest Go runtime, runtime: go1.15, go modules appear to not be used. See golang-docker.
However, Flex builds your app into a container regardless of runtime and so, in this case, it may be better to use a custom runtime and build your own Dockerfile?
runtime: custom
env: flex
Then you get to use e.g. Go 1.15 and go modules (without vendoring) and whatever else you'd like. For a simple main.go that uses modules e.g.:
FROM golang:1.15 as build

ARG PROJECT="flex"
WORKDIR /${PROJECT}

COPY go.mod .
RUN go mod download
COPY main.go .

RUN GOOS=linux \
    go build -a -installsuffix cgo \
    -o /bin/server \
    .

FROM gcr.io/distroless/base-debian10

COPY --from=build /bin/server /
USER 999
ENV PORT=8080
EXPOSE ${PORT}
ENTRYPOINT ["/server"]
This ought to be possible with Google's recently announced support for buildpacks but I've not tried it.

Bitbucket Pipeline to deploy .Net Core API to AWS Serverless Application Model

We are new to Bitbucket CI/CD pipeline creation and AWS deployments.
We are trying to build and deploy/publish a .Net core (3.1) API to AWS Lambda as Serverless Application Model using Bitbucket pipeline. We have come across the below link to deploy to AWS.
https://bitbucket.org/atlassian/aws-sam-deploy/src/master/?_ga=2.18577786.215737733.1592837802-1805179932.1542868670
We were able to get the project to build and test, but the AWS Lambda SAM deployment is not working correctly. Please find our yaml file below:
image: mcr.microsoft.com/dotnet/core/sdk:3.1

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - export PROJECT_NAME=MyProject
          - export TEST_NAME=MyProject.Tests
          - dotnet restore
          - dotnet build $PROJECT_NAME
          - dotnet test $TEST_NAME
    - step:
        name: deploy AWS SAM
        script:
          - pipe: atlassian/aws-sam-deploy:0.3.4
            variables:
              AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID} # Optional if already defined in the context.
              AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY} # Optional if already defined in the context.
              COMMAND: 'deploy'
I think we are missing a few steps in the "deploy AWS SAM" step, but we are not sure how to provide the path to the published libraries or any other required steps. We are not using Docker in our project.
Can someone help?
I was facing the same issue. I found a solution that is not the best one, but works.
You can install the Amazon Lambda Tools in your pipeline image and call lambda deploy-serverless. Also, you need to install something to compress (zip) the package.
The "dotnet lambda deploy-serverless" does the same as the MS VS 2019 with AWS Toolkit.
image: mcr.microsoft.com/dotnet/core/sdk:3.1

pipelines:
  default:
    - step:
        caches:
          - dotnetcore
        script: # Modify the commands below to build your repository.
          - apt-get update && apt-get install -y zip
          - dotnet tool install -g Amazon.Lambda.Tools
          - export PATH="$PATH:/root/.dotnet/tools"
          - dotnet lambda deploy-serverless
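To make that step self-contained it may also need AWS credentials and a deployment bucket. A hedged sketch: the stack and bucket names below are placeholders, --region/--stack-name/--s3-bucket are standard Amazon.Lambda.Tools options (normally read from aws-lambda-tools-defaults.json if present), and the credentials are expected as secured repository variables:

    - step:
        name: deploy AWS SAM
        script:
          - apt-get update && apt-get install -y zip
          - dotnet tool install -g Amazon.Lambda.Tools
          - export PATH="$PATH:/root/.dotnet/tools"
          # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are read from the environment,
          # so define them as secured repository variables in Bitbucket.
          - dotnet lambda deploy-serverless --region us-east-1 --stack-name my-api-stack --s3-bucket my-deployment-bucket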

How to deploy Spring boot application with GitLab serverless?

I configured a Google Cloud demo project and created a cluster for it in the GitLab Serverless settings for a Hello World Spring Boot application. The only information I can find on deploying applications is https://docs.gitlab.com/ee/user/project/clusters/serverless/#deploying-serverless-applications, which might explain how to deploy a Ruby application only. I'm not sure about that because none of the variables used in the script are explained, and the hint
Note: You can reference the sample Knative Ruby App to get started.
is somewhat confusing because if I'm not familiar with building Ruby applications (which I'm not), that won't get me started.
Following the instruction to put
stages:
  - build
  - deploy

build:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  only:
    - master
  script:
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  image: gcr.io/triggermesh/tm@sha256:e3ee74db94d215bd297738d93577481f3e4db38013326c90d57f873df7ab41d5
  only:
    - master
  environment: production
  script:
    - echo "$CI_REGISTRY_IMAGE"
    - tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
in .gitlab-ci.yml causes the deploy stage to fail due to
$ tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait
2019/02/09 11:08:09 stat /root/.kube/config: no such file or directory, falling back to in-cluster configuration
2019/02/09 11:08:09 Can't read config file
ERROR: Job failed: exit code 1
My Dockerfile, which builds fine locally, looks as follows:
FROM maven:3-jdk-11
COPY . .
RUN mvn --batch-mode --update-snapshots install
EXPOSE 8080
CMD java -jar target/hello-world-0.1-SNAPSHOT.jar
(the version in the filename doesn't make sense for further deployment, but that's a follow-up problem).
The reason is a mismatch between the environment value specified in .gitlab-ci.yml and the GitLab Kubernetes configuration; see https://docs.gitlab.com/ee/user/project/clusters/#troubleshooting-missing-kubeconfig-or-kube_token for details.
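In practice that means (a sketch of the idea): the cluster added in GitLab has an environment scope (e.g. * or production), and the deploy job's environment: must fall inside that scope, otherwise $KUBECONFIG is never injected into the job and tm falls back to the missing /root/.kube/config:

deploy:
  stage: deploy
  environment: production    # must match the cluster's environment scope configured in GitLab
  script:
    - test -n "$KUBECONFIG"  # sanity check that GitLab injected the cluster config
    - tm -n "$KUBE_NAMESPACE" --config "$KUBECONFIG" deploy service "$CI_PROJECT_NAME" --from-image "$CI_REGISTRY_IMAGE" --wait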
