Why does apex deploy assign two aliases to the same version? - aws-lambda

I have a pipeline with these commands:
apex deploy --env prod --alias prod --region us-west-2
apex deploy --env qa --alias qa --region us-west-2
apex deploy --env dev --alias dev --region us-west-2
Every time the pipeline is executed, two aliases (dev and qa) of one Lambda end up on the same version.
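A likely explanation (a hedged guess, not confirmed in the thread): apex only publishes a new Lambda version when the deployed package actually changes, so when identical code is deployed for two environments in a row, both aliases can end up pointing at the same version. You can verify which version each alias targets with the AWS CLI (the function name is hypothetical):

# my-function is a placeholder; list each alias and the version it points to
aws lambda list-aliases \
  --function-name my-function \
  --region us-west-2 \
  --query 'Aliases[].{Alias:Name,Version:FunctionVersion}'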

Related

Laravel Vapor Docker runtime with GitLab CI does not work

I use Laravel Vapor to deploy our microservices based on Laravel. This works very well as long as an app and its dependencies are not too large, but beyond that it gets a little tricky.
Vapor provides a Docker runtime for this case, which lets you deploy apps of up to 10 GB in size.
For local development we usually use Laradock.io because it's easy and flexible.
That means if we deploy from our local environment, it is easy to enter the workspace container and run the vapor deploy commands. After enabling the Docker client for the workspace container, it works properly with the Vapor Docker runtime.
But now we have integrated the deployment process into a GitLab CI pipeline. That works very well for our small services with the Vapor PHP runtime.
But with the Docker runtime I am stuck on the CI deployment.
The Docker runtime needs a running Docker instance where Vapor is invoked. That means in the .gitlab-ci.yml I have to use an image with Docker and PHP installed to invoke the Vapor scripts.
So I created a Docker image based on the Laradock workspace container, but the GitLab runner always exits with the error message that no Docker daemon is available.
This is the related part of my .gitlab-ci.yml (the image is only available locally):
testing:
  image:
    name: lexitaldev/vapor-docker-deploy:latest
    pull_policy: never
  securityContext:
    privileged: true
  environment: testing
  stage: deploy
  only:
    - test
  script:
    - composer install
    - php vendor/bin/vapor deploy test
This is the specific output:
Error Output:
================
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I've also tried using the standard 'laravelphp/vapor:php80' image and installing Docker in the before_script section:
before_script:
  - apk add docker
  - addgroup root docker
But nothing helped. It seems there is a problem with the docker.sock.
Has anybody managed to add a Vapor Docker runtime deployment to their CI scripts?
Best,
Michael
You only need to add the dind service; but after you do that, it will throw an error related to the image GitLab creates for your pipelines. So you also need to register a runner with volumes, the privileged flag, and tags.
I did it using gitlab-runner on my machine:
sudo gitlab-runner register -n \
--url {{ your_url }} \
--registration-token {{your_token}} \
--executor docker \
--description "{{ Describe your runner }}" \
--docker-image "docker:20.10.12-alpine3.15" \
--docker-privileged \
--docker-volumes="/certs/client" \
--docker-volumes="cache" \
--docker-volumes="/var/run/docker.sock:/var/run/docker.sock"
--tag-list {{ a_tag_for_your_pipeline }}
Once you have done that, you need to use a stable Docker version in your .gitlab-ci.yml file. For some reason it did not work when I tried version 20 or latest:
image: docker:stable
services:
  - name: docker:stable-dind
before_script:
  - echo $CI_JOB_TOKEN | docker login $CI_REGISTRY -u $CI_REGISTRY_USER --password-stdin
build:
  tags:
    - {{the tag you defined in your runner}}
  variables:
    IMAGE_TAG: $CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG
  script:
    - echo $IMAGE_TAG
    - docker build -t $IMAGE_TAG -f {{your Dockerfile}} .
    - docker push $IMAGE_TAG
All the variables are predefined by GitLab, so don't worry, you can copy and paste. Also, I added some of the advice GitLab gives in its documentation for registering your Docker image in the GitLab Container Registry.
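If you run dind with TLS enabled (the reason the runner above mounts /certs/client), the pattern in the GitLab documentation also sets these variables in the job; a hedged sketch, with the port and path following the documented defaults:

variables:
  DOCKER_HOST: tcp://docker:2376    # dind's TLS port
  DOCKER_TLS_CERTDIR: "/certs"      # cert dir shared between job and dind service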

How to configure the Gradle version in the AWS CodeBuild image?

This is my buildspec.yml:
phases:
  install:
    runtime-versions:
      java: corretto11
  pre_build:
    commands:
      - COMMIT_ID_SHORT=`echo "${CODEBUILD_RESOLVED_SOURCE_VERSION}" | cut -c1-8`
      - TAG=`echo "${MAJOR}.${MINOR}.${COMMIT_ID_SHORT}"`
      - aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin ${ECR_URL}
      - export CODEARTIFACT_AUTH_TOKEN=`aws codeartifact get-authorization-token --domain cloud-infra-packages --domain-owner 715371302281 --query authorizationToken --output text`
  build:
    commands:
      - gradle clean build
      - docker build -t ${APP}:${TAG} -f Dockerfile .
      - docker tag ${APP}:${TAG} ${ECR_URL}/${APP}:${TAG}
But I got an error message saying the Gradle version is not supported. I found that the Gradle version in the CodeBuild standard image is 5.6.0, so how can I change the Gradle version when building the project in AWS CodeBuild?
I already fixed the problem: use ./gradlew to build the project, not gradle.
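A minimal sketch of the adjusted build phase, assuming the Gradle wrapper (gradlew plus the gradle/wrapper directory) is committed to the repository; if it is not, it can be generated locally with gradle wrapper --gradle-version <version>:

build:
  commands:
    - chmod +x gradlew        # make sure the wrapper script is executable
    - ./gradlew clean build   # uses the Gradle version pinned by the wrapper, not the image's 5.6.0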

Laravel Vapor on AWS CodePipeline

Laravel Vapor is built entirely on the AWS platform, but it does not use AWS CodePipeline to deploy code. Has anyone tried AWS CodePipeline to deploy Vapor code?
I can launch an Ubuntu server, install the required PHP extensions, and run the vapor deploy staging command in AWS CodeDeploy. I am wondering whether there is a better way to deploy Laravel Vapor.
Finally, I made the changes below in the buildspec.yml file and used the Ubuntu instance.
install:
  commands:
    - nohup /usr/local/bin/dockerd --host=unix:///var/run/docker.sock --host=tcp://127.0.0.1:2375 --storage-driver=overlay2 &
    - timeout 15 sh -c "until docker info; do echo .; sleep 1; done"
    #- command
    #- echo Logging in to Amazon ECR
pre_build:
  commands:
    - docker run --name myvapor -d -e VAPOR_API_TOKEN=MY_VAPOR_API_TOKEN --volume $(pwd):/~ --workdir /~ teamnovu/laravel-vapor
    - docker exec myvapor vapor deploy staging
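Note that starting dockerd inside the build like this only works when the CodeBuild project has privileged mode enabled. A hedged sketch of turning that on via the CLI; the project name and environment values here are hypothetical:

# my-vapor-build is a placeholder project name; image and compute type are assumptions
aws codebuild update-project \
  --name my-vapor-build \
  --environment "type=LINUX_CONTAINER,image=aws/codebuild/standard:5.0,computeType=BUILD_GENERAL1_SMALL,privilegedMode=true"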

Select a different region in Serverless based on the branch

I'm relatively new to Serverless. I'm building a Lambda function and need to deploy it to different stages and different regions; for example, for the development stage I need to deploy to the us-east-1 region, and for the production stage to a different region. How can I do that based on my branches, so that a merge to develop deploys to us-east-1 and a merge to master deploys to us-east-2?
Thanks in advance
You can use a bash script and override the default serverless.yml definition.
For instance:
provider:
  stage: dev
  region: us-west-1
Then the script checks the branch and sets environment variables to override the default values (dev and us-west-1):
#!/usr/bin/env bash
set -e
BRANCH=$(git rev-parse --abbrev-ref HEAD)
MASTER='master'
DEVELOP='develop'
if [[ $BRANCH == $MASTER ]]; then
  STAGE="prod"
  AWS_REGION="us-west-2"
elif [[ $BRANCH == $DEVELOP ]]; then
  STAGE="dev"
  AWS_REGION="us-west-1"
fi

if [ -z ${STAGE+x} ]; then
  echo "Not deploying changes"
  exit 0
fi
echo "Prepare dependencies"
npm install
echo "Deploying from branch $BRANCH to stage $STAGE in region $AWS_REGION"
npx serverless deploy --stage $STAGE --region $AWS_REGION
Assuming that when you deploy you use
serverless deploy --stage development
or alternatively, for the master branch, the shorthand with the production stage
sls deploy -s production
You could specify the regions to choose from in your serverless.yml:
provider:
  region: ${self:custom.region.${self:custom.myStage}}
custom:
  myStage: ${opt:stage, self:provider.stage}
  region:
    production: us-east-2
    development: us-east-1
Selecting the stage when merging depends on the CI/CD service you use: map the develop branch to the development stage and master to production.
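With that mapping in place, the region is resolved from the stage at deploy time, for example:

sls deploy -s production    # region resolves to us-east-2
sls deploy -s development   # region resolves to us-east-1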

GitLab CI/CD: Do not destroy a docker container after building

My use case is the following: I want to deploy a PHP app to Docker after a branch is updated and then leave it running for manual testing.
So I would like to access it under a certain URL.
What I have done so far is create a GitLab Docker runner:
gitlab-ci-multi-runner register --url "https://git.example.com/ci" --registration-token xxxx \
--description "dockertest" \
--executor docker \
--docker-image "php:7.0-apache" \
--docker-services mariadb:latest
And I have a .gitlab-ci.yml:
stages:
  - deploy

job-deploy-docker:
  only:
    - docker-ci-test
  stage: deploy
  environment: docker
  tags:
    - docker
  image: php:7.0-apache
  services:
    - mariadb
  script:
    - apt-get update && apt-get --assume-yes install mysql-client
    - mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mariadb -e "create database ci"
The deploy job runs ('Job succeeded') and then the container is destroyed.
How can I avoid destroying the container? Of course, it should be cleaned up or reused when a new push is made to the branch.
How can I make the web app accessible under a certain external URL (branchname.ci.example.com)?
Is that even the right approach, or should I do it differently?
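One common pattern, offered here only as a hedged sketch (it is not from this thread): run the deploy job on a shell-executor runner so that containers you start outlive the job, and start one named, detached container per branch, replacing it on each push. A reverse proxy outside the pipeline would then map branchname.ci.example.com to that container. The tag and image names below are assumptions:

job-deploy-docker:
  stage: deploy
  only:
    - docker-ci-test
  tags:
    - shell                                            # a shell-executor runner, not the docker executor
  script:
    - docker build -t "app:$CI_COMMIT_REF_SLUG" .
    - docker rm -f "app-$CI_COMMIT_REF_SLUG" || true   # replace the previous deployment for this branch
    - docker run -d --name "app-$CI_COMMIT_REF_SLUG" "app:$CI_COMMIT_REF_SLUG"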
