As part of the build process, I need to download some Python packages from my private GCS bucket and install them in the Dockerfile. This could be done using a multi-stage build like this one:
FROM google/cloud-sdk:alpine as gcloud
WORKDIR /packages
RUN gsutil cp gs://... /packages
FROM python:3.7.0
COPY --from=gcloud /packages .
but I have a big problem with injecting the service account from my local machine so that gsutil is able to perform the copy. I tried capturing the content of the service account JSON via ARG, but that does not work with Cloud Build substitutions due to its JSON format:
gcloud builds submit . --substitutions _SERVICE_ACCOUNT_CONTENT="$(cat path_to_service_account.json)"
I also thought about shared volumes between steps, but that does not work with docker build. Any ideas how I can do this with Cloud Build?
My recommendation is to do this outside your Docker build, and to leverage Cloud Build's built-in authentication mechanism (through the metadata server) to download the files.
In your Cloud Build steps:
steps:
- name: 'gcr.io/cloud-builders/gsutil'
  args: ["cp", "gs://...", "./packages"]
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'gcr.io/<PROJECT_ID>/......', '.']
In your Dockerfile:
FROM python:3.7.0
COPY packages/* /packages/
.....
Simply grant the Cloud Build service account access to your bucket.
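For example, a minimal sketch with gsutil (the project number and bucket name are placeholders for your own values):

# Give the Cloud Build service account read access to the bucket
gsutil iam ch serviceAccount:<PROJECT_NUMBER>@cloudbuild.gserviceaccount.com:objectViewer gs://your-private-bucket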
Related
I'm working on a Spring Boot application that should be packaged into an OCI container using Cloud Native Buildpacks / Paketo.io. I build it with GitHub Actions, where my workflow build.yml looks like this:
name: build
on: [push]
jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via apt. See https://buildpacks.io/docs/tools/pack/#pack-cli
        run: |
          sudo add-apt-repository ppa:cncf-buildpacks/pack-cli
          sudo apt-get update
          sudo apt-get install pack-cli
      - name: Build app with pack CLI
        run: pack build spring-boot-buildpack --path . --builder paketobuildpacks/builder:base
      - name: Tag & publish to Docker Hub
        run: |
          docker tag spring-boot-buildpack jonashackt/spring-boot-buildpack:latest
          docker push jonashackt/spring-boot-buildpack:latest
Now the Build app with pack CLI step takes relatively long, since it always downloads the Paketo builder Docker image and then does a full fresh build, which means downloading the JDK and every single Maven dependency each time. Is there a way to cache a Paketo build on GitHub Actions?
Caching the Docker images on GitHub Actions might be an option - which doesn't seem to be that easy. Another option would be to leverage the Docker official docker/build-push-action Action, which is able to cache the buildx-cache. But I didn't get the combination of pack CLI and buildx-caching to work (see this build for example).
Finally I stumbled upon the general Cloud Native Buildpacks approach on how to cache in the docs:
Cache Images are a way to preserve build optimizing layers across
different host machines. These images can improve performance when
using pack in ephemeral environments such as CI/CD pipelines.
I found this concept quite nice, since it uses a second cache image, which gets published to a container registry of your choice. This image is then used by every Paketo pack CLI build on any machine where you append the --cache-image parameter, be it your local desktop or any CI/CD platform (like GitHub Actions).
In order to use the --cache-image parameter, we also have to use the --publish flag (since the cache image needs to be published to your container registry). This also means we need to log into the container registry before we're able to run our pack CLI command. Using Docker Hub, this looks something like:
echo $DOCKER_HUB_TOKEN | docker login -u YourUserNameHere --password-stdin
Also the Paketo builder image must be a trusted one. As the docs state:
By default, any builder suggested by pack builder suggest is
considered trusted.
Since I use a suggested builder, I don't have to do anything here. If you want to use another builder that isn't trusted by default, you need to run a pack config trusted-builders add your/builder-to-trust:bionic command before the final pack CLI command.
Here's the cache-enabled pack CLI command, in case you want to build a Spring Boot app using Docker Hub as the container registry:
pack build index.docker.io/yourApplicationImageNameHere:latest \
    --builder paketobuildpacks/builder:base \
    --path . \
    --cache-image index.docker.io/yourCacheImageNameHere:latest \
    --publish
Finally, the GitHub Actions workflow to build and publish the example Spring Boot app https://github.com/jonashackt/spring-boot-buildpack looks like this:
name: build
on: [push]
jobs:
  build-with-paketo-push-2-dockerhub:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Login to DockerHub Container Registry
        run: echo $DOCKER_HUB_TOKEN | docker login -u jonashackt --password-stdin
        env:
          DOCKER_HUB_TOKEN: ${{ secrets.DOCKER_HUB_TOKEN }}
      - name: Install pack CLI via the official buildpacks Action https://github.com/buildpacks/github-actions#setup-pack-cli-action
        uses: buildpacks/github-actions/setup-pack@v4.1.0
      - name: Build app with pack CLI using Buildpack Cache image (see https://buildpacks.io/docs/app-developer-guide/using-cache-image/) & publish to Docker Hub
        run: |
          pack build index.docker.io/jonashackt/spring-boot-buildpack:latest \
            --builder paketobuildpacks/builder:base \
            --path . \
            --cache-image index.docker.io/jonashackt/spring-boot-buildpack-paketo-cache-image:latest \
            --publish
Note that with the pack CLI's --publish flag, we also don't need an extra Tag & publish to Docker Hub step anymore, since pack already does that for us.
I'm using GitLab CI for deployment.
I'm running into a problem when the review branch is deleted.
stop_review:
  variables:
    GIT_STRATEGY: none
  stage: cleanup
  script:
    - echo "$AWS_REGION"
    - echo "Stopping review branch"
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - echo "$CI_COMMIT_REF_NAME"
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  only:
    - branches
  except:
    - master
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
The error is: This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I have tried different GIT_STRATEGY values; can someone point me in the right direction?
In order to run serverless remove, you'll need to have the serverless.yml file available, which means the actual repository will need to be cloned (or that file needs to get to the runner in some way).
A serverless.yml configuration file is required when you run serverless remove because the Serverless Framework lets you provision not only the resources described by the framework's own YML configuration but also additional resources (like raw CloudFormation in AWS), which may or may not live outside the specified app or stage CF stack entirely.
In fact, you can also provision infrastructure into other providers as well (AWS, GCP, Azure, OpenWhisk, or actually any combination of these).
So it's not sufficient to simply identify the stage name when running sls remove, you'll need the full serverless.yml template.
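As a sketch of the fix, assuming the default clone behaviour is acceptable for this job: drop GIT_STRATEGY: none (or set it to fetch) so the runner checks out the repository and serverless.yml is present when serverless remove runs:

stop_review:
  variables:
    GIT_STRATEGY: fetch  # ensure the repo (and thus serverless.yml) is checked out
  stage: cleanup
  script:
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual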
Right now, I deploy my (Spring Boot) application to an EC2 instance like this:
Build JAR file on local machine
Deploy/Upload JAR via scp command (Ubuntu) from my local machine
I would like to automate that process, but:
without using Jenkins + Rundeck CI/CD tools
without using AWS CodeDeploy service since that does not support GitLab
Question: Is it possible to perform these 2 simple steps (that are now done manually: building and deploying via scp) with GitLab CI/CD tools, and if so, can you present the simple steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've defined.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs a gradle container and uses gradle to build the app. GitLab CI artifacts capture the built jar (my_app.jar), which is passed on to the deploy job.
The deploy job runs an ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project-level CI/CD variables that will be passed in to your CI jobs, as sketched below.
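For instance, a sketch using a hypothetical file-type CI/CD variable SSH_PRIVATE_KEY holding the deploy key (the variable name and target host are assumptions):

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    # SSH_PRIVATE_KEY is a hypothetical file-type CI/CD variable; GitLab exposes its file path
    - scp -i "$SSH_PRIVATE_KEY" -o StrictHostKeyChecking=no my_app.jar user@target.server:/my_app.jar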
Create a shell file with the following contents.
#!/bin/bash
# Copy the JAR file to EC2 via SCP, with the PEM key in the home directory (usually /home/ec2-user)
scp -i user_key.pem file.txt ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user
# SSH into the EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@my.ec2.id.amazonaws.com /bin/bash <<-'END2'
# The following commands will be executed automatically by bash.
# Consider this a remote shell script.
killall java
java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
echo 'done'
# Once completed, the shell will exit.
END2
In 2020, this should be easier with GitLab 13.0 (May 2020), using the older Auto DevOps feature (introduced in GitLab 11.0, June 2018).
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
Overview
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn’t been a simple way to deploy to Amazon Web Services. As a result, Gitlab users had to spend a lot of time figuring out their own configuration.
In Gitlab 13.0, Auto DevOps has been extended to support deployment to AWS!
Gitlab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and Gitlab does the rest! With the elimination of the complexities, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
define the AWS-typed environment variables AWS_ACCESS_KEY_ID, AWS_ACCOUNT_ID and AWS_REGION, and
enable Auto DevOps.
Then, your ECS deployment will be automatically built for you with a complete, automatic delivery pipeline.
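As a minimal sketch of what that looks like in .gitlab-ci.yml (assuming the documented Auto DevOps template and the AUTO_DEVOPS_PLATFORM_TARGET switch; the three AWS variables above are set in the project's CI/CD settings):

include:
  - template: Auto-DevOps.gitlab-ci.yml

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS  # tell Auto DevOps to deploy to ECS instead of Kubernetes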
See documentation and issue
We have separate builds for the frontend and backend of the application, and we need to pull the dist build of the frontend into the backend project during the build. During the build, curl cannot write to the desired location.
In detail, we are using Spring Boot as the backend serving an Angular 2 frontend, so we need to pull the frontend files into the src/main/resources/static folder.
image: maven:3.3.9

pipelines:
  default:
    - step:
        script:
          - curl -s -L -v --user xxx:XXXX https://api.bitbucket.org/2.0/repositories/apprentit/rent-it/downloads/release_latest.tar.gz -o src/main/resources/static/release_latest.tar.gz
          - tar -xf src/main/resources/static/release_latest.tar.gz -C src/main/resources/static
          - mvn package -X
As a result, the build fails with the following output from curl:
* Failed writing body (0 != 16360)
Note: I've tried the same with the maven-exec-plugin; the result was the same. The solution works on my local machine, naturally.
I would try running these commands from a local docker run of the image you're specifying (maven:3.3.9). I found that to be the most helpful way to debug things that behave differently in Pipelines than in my local environment.
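For example, something like this drops you into a comparable environment (the mount path is an assumption):

# Reproduce the Pipelines environment locally with the same image
docker run -it --rm -v "$PWD":/workdir -w /workdir maven:3.3.9 bash
# ...then run the pipeline's curl/tar/mvn commands by hand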
To your specific question: yes, you can download external content from the Pipeline run. I have a Pipeline that clones other repos from BitBucket via HTTP into the running container.
I've been working on a SlackBot project based in Scala using Gradle and have been looking into ways to leverage Gitlab-CI for the purpose of deploying to AWS EC2.
I am able to fully build and test my application with Gitlab-CI.
How can I perform a deployment from Gitlab-CI to Amazon EC2 Using CodeDeploy and CodePipeline?
An answer follows as a guide for doing this.
I have created a set of sample files to go with the Guide provided below.
These files are available at the following link: https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/
Scope
This guide assumes the following
Gitlab EE hosted project - may work on private CE/EE instances (not tested)
Gitlab as the GIT versioning repository
Gitlab-CI as the Continuous Integration Engine
Existing AWS account
AWS EC2 as the target production or staging system for the deployment
AWS EC2 Instance running Amazon Linux AMI
AWS S3 as the storage facility for deployment files
AWS CodeDeploy as the Deployment engine for the project
AWS CodePipeline as the Pipeline for deployment
The provided .gitlab-ci.yml sample is based on a Java/Scala + Gradle project.
The script is provided as a generic example and will need to be adapted to your specific needs when implementing Continuous Delivery through this method.
The guide will assume that the user has basic knowledge about AWS services and how to perform the necessary tasks.
Note: The guide provided in this sample uses the AWS console to perform tasks. While there are likely CLI equivalent for the tasks performed here, these will not be covered throughout the guide.
Motivation
The motivation for creating these scripts and deployment guide came from the lack of availability of a proper tutorial showing how to implement Continuous Delivery using Gitlab and AWS EC2.
Gitlab introduced their freely available CI engine by partnering with Digital Ocean, which enables user repositories to benefit from good quality CI for free.
One of the main advantages of using Gitlab is that they provide built-in Continuous Integration containers for running through the various steps and validate a build.
Unfortunately, neither GitLab nor AWS provides an integration that allows performing Continuous Delivery following passing builds.
This Guide and Scripts (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/) provide a simplified version of the steps that I've undertaken in order to have a successful CI and CD using both Gitlab and AWS EC2 that can help anyone else get started with this type of implementation.
Setting up the environment on AWS
The first step in ensuring a successful Continuous Delivery process is to set up the necessary objects on AWS in order to allow the deployment process to succeed.
AWS IAM User
The initial requirement will be to set up an IAM user:
https://console.aws.amazon.com/iam/home#users
Create a user
Attach the following permissions:
CodePipelineFullAccess
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployFullAccess
Inline Policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "autoscaling:*",
                "codedeploy:*",
                "ec2:*",
                "elasticloadbalancing:*",
                "iam:AddRoleToInstanceProfile",
                "iam:CreateInstanceProfile",
                "iam:CreateRole",
                "iam:DeleteInstanceProfile",
                "iam:DeleteRole",
                "iam:DeleteRolePolicy",
                "iam:GetInstanceProfile",
                "iam:GetRole",
                "iam:GetRolePolicy",
                "iam:ListInstanceProfilesForRole",
                "iam:ListRolePolicies",
                "iam:ListRoles",
                "iam:PassRole",
                "iam:PutRolePolicy",
                "iam:RemoveRoleFromInstanceProfile",
                "s3:*"
            ],
            "Resource": "*"
        }
    ]
}
Generate security credentials
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Note: Please keep these credentials in a safe location. You will need them in a later step.
AWS EC2 instance & Role
Instance Role for CodeDeploy
https://console.aws.amazon.com/iam/home#roles
Create a new Role that will be assigned to your EC2 instance in order to access S3.
Set the name according to your naming conventions (ie. MyDeploymentAppRole)
Select Amazon EC2 in order to allow EC2 instances to run other AWS services
Attach the following policies:
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployRole
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Launch Instance
https://console.aws.amazon.com/ec2/v2/home
Click on Launch Instance and follow these steps:
Select Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type
Select the required instance type (t2.micro by default)
Next
Select IAM Role to be MyDeploymentAppRole (based on the name created in the previous section)
Next
Select Appropriate Storage
Next
Tag your instance with an appropriate name (ie. MyApp-Production-Instance)
add additional tags as required
Next
Configure Security group as necessary
Next
Review and Launch your instance
You will be given the option to either generate new SSH keys or use existing ones. Please select the appropriate method.
Setting up instance environment
Install CodeDeploy Agent
Log into your newly created EC2 instance and follow the instructions:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
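For Amazon Linux, the documented procedure boils down to roughly the following (the S3 bucket in the URL is region-specific, so check it against the linked page):

sudo yum update -y
sudo yum install -y ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto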
CodeDeploy important paths:
CodeDeploy Deployment base directory: /opt/codedeploy-agent/deployment-root/
CodeDeploy Log file: /var/log/aws/codedeploy-agent/codedeploy-agent.log
Tip: run tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log to keep track of the deployment in real time.
Install your project prerequisites
If your project has any prerequisites to run, make sure that you install those before running the deployment, otherwise your startup script may fail.
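For the Java/Scala + Gradle sample used in this guide, that means at least a Java runtime, for example (the package name assumes a Java 8 runtime on Amazon Linux):

# Install a JRE so the startup script can run the application jar
sudo yum install -y java-1.8.0-openjdk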
AWS S3 repository
https://console.aws.amazon.com/s3/home
In this step, you will need to create an S3 bucket that will be holding your deployment files.
Simply follow these steps:
Choose Create Bucket
Select a bucket name (ie. my-app-codepipeline-deployment)
Select a region
In the console for your bucket select Properties
Expand the Versioning menu
choose Enable Versioning
AWS CodeDeploy
https://console.aws.amazon.com/codedeploy/home#/applications
Now that the basic elements are set, we are ready to create the Deployment application in CodeDeploy
To create a CodeDeploy deployment application follow these steps:
Select Create New Application
Choose an Application Name (ie. MyApp-Production )
Choose a Deployment Group Name (ie. MyApp-Production-Fleet)
Select the EC2 Instances that will be affected by this deployment - Search by Tags
Under Key Select Name
Under Value Select MyApp-Production-Instance
Under Service Role, Select MyDeploymentAppRole
Click on Create Application
Note: You may assign the deployment to any relevant Tag that applied to the desired instances targeted for deployment. For simplicity's sake, only the Name Tag has been used to choose the instance previously defined.
AWS CodePipeline
https://console.aws.amazon.com/codepipeline/home#/dashboard
The next step is to proceed with creating the CodePipeline, which is in charge of performing the connection between the S3 bucket and the CodeDeploy process.
To create a CodePipeline, follow these steps:
Click on Create Pipeline
Name your pipeline (ie. MyAppDeploymentPipeline )
Next
Set the Source Provider to Amazon S3
set Amazon S3 location to the address of your bucket and target deployment file (ie. s3://my-app-codepipeline-deployment/myapp.zip )
Next
Set Build Provider to None - This is already handled by Gitlab-CI as will be covered later
Next
Set Deployment Provider to AWS CodeDeploy
set Application Name to the name of your CodeDeploy Application (ie. MyApp-Production)
set Deployment Group to the name of your CodeDeploy Deployment Group (ie. MyApp-Production-Fleet )
Next
Create or Choose a Pipeline Service Role
Next
Review and click Create Pipeline
Setting up the environment on Gitlab
Now that the AWS environment has been prepared to receive the application deployment, we can proceed with setting up the CI environment and settings to ensure that the code is built and deployed to an EC2 instance using S3, CodeDeploy and the CodePipeline.
Gitlab Variables
In order for the deployment to work, we will need to set a few environment variables in the project repository.
In your Gitlab Project, navigate to the Variables area for your project and set the following variables:
AWS_DEFAULT_REGION => your AWS region
AWS_SECRET_ACCESS_KEY => your AWS user credential secret key (obtained when you generated the credentials for the user)
AWS_ACCESS_KEY_ID => your AWS user credential key ID (obtained when you generated the credentials for the user)
AWS_S3_LOCATION => the location of your deployment zip file (ie. s3://my-app-codepipeline-deployment/my_app.zip )
These variables will be accessible by the scripts executed by the Gitlab-CI containers.
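As a quick sanity check, an extra job (or script line) can confirm that the AWS CLI picks the variables up; aws sts get-caller-identity reads exactly these environment variables:

check-aws:
  script:
    - pip install awscli
    - aws sts get-caller-identity  # uses AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY / AWS_DEFAULT_REGION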
Startup script
A simple startup script has been provided (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/blob/master/deploy/extras/my_app.sh) to allow the deployment to perform the following tasks:
Start the application and create a PID file
Check the status of the application through the PID file
Stop the application
You may find this script under deploy/extras/my_app.sh
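As a rough sketch of what such a wrapper can look like (paths and jar name are hypothetical; the linked my_app.sh is the authoritative version):

#!/bin/bash
# Hypothetical PID-file wrapper; see deploy/extras/my_app.sh for the real script
PID_FILE=/home/ec2-user/my_app.pid
JAR=/home/ec2-user/my_app/my_app.jar

case "$1" in
  start)
    nohup java -jar "$JAR" > /dev/null 2>&1 &
    echo $! > "$PID_FILE"
    ;;
  status)
    if kill -0 "$(cat "$PID_FILE" 2>/dev/null)" 2>/dev/null; then echo running; else echo stopped; fi
    ;;
  stop)
    kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    ;;
esac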
Creating gitlab-ci.yml
The gitlab-ci.yml file is in charge of performing the Continuous Integration tasks associated with a given commit.
It acts as a simplified group of shell scripts that are organized in stages which correspond to the different phases in your Continuous Integration steps.
For more information on the details and reference, please refer to the following two links:
http://docs.gitlab.com/ce/ci/quick_start/README.html
http://docs.gitlab.com/ce/ci/yaml/README.html
You may validate the syntax of your gitlab-ci.yml file at any time with the following tool: https://gitlab.com/ci/lint
For the purpose of deployment, we will cover only the last piece of the sample provided with this guide:
deploy-job:
  # Script to run for deploying the application to AWS
  script:
    - apt-get --quiet install --yes python-pip  # AWS CLI requires python-pip (python is installed by default)
    - pip install -U pip                        # pip update
    - pip install awscli                        # AWS CLI installation
    - $G build -x test -x distTar               # build the project with Gradle
    - $G distZip                                # create the distribution zip for deployment
    - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION    # upload the zip to S3, where CodePipeline/CodeDeploy picks it up
  # requires previous CI stages to succeed in order to execute
  when: on_success
  stage: deploy
  environment: production
  cache:
    key: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    untracked: true
    paths:
      - build/
  # Applies only to tags matching the regex, ie: v1.0.0-My-App-Release
  only:
    - /^v\d+\.\d+\.\d+-.*$/
  except:
    - branches
    - triggers
This part represents the whole job associated with the deployment, following any previous CI stages.
The relevant part associated with the deployment is this:
# Script to run for deploying the application to AWS
script:
  - apt-get --quiet install --yes python-pip  # AWS CLI requires python-pip (python is installed by default)
  - pip install -U pip                        # pip update
  - pip install awscli                        # AWS CLI installation
  - $G build -x test -x distTar               # build the project with Gradle
  - $G distZip                                # create the distribution zip for deployment
  - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION    # upload the zip to S3, where CodePipeline/CodeDeploy picks it up
The first step involves installing the python package management system: pip.
pip is required to install the AWS CLI, which is necessary to upload the deployment file to AWS S3.
In this example, we are using Gradle (invoked via the environment variable $G); Gradle provides a module to automatically zip the deployment files. Depending on the type of project you are deploying, the method for generating the distribution zip file my_app.zip will differ.
The aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION command uploads the distribution zip file to the Amazon S3 location that we defined earlier. This file is then automatically detected by CodePipeline, processed and sent to CodeDeploy.
Finally, CodeDeploy performs the necessary tasks through the CodeDeploy agent as specified by the appspec.yml file.
Creating appspec.yml
The appspec.yml defines the behaviour to be followed by CodeDeploy once a deployment file has been received.
A sample file has been provided with this guide, along with sample scripts to be executed during the various phases of the deployment.
Please refer to the specification for the CodeDeploy AppSpec for more information on how to build the appspec.yml file: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
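As a rough sketch of the shape of such a file (the file paths and script names here are hypothetical; the hook names match the deployment steps listed at the end of this guide):

version: 0.0
os: linux
files:
  - source: my_app
    destination: /home/ec2-user/my_app
hooks:
  ApplicationStop:
    - location: scripts/stop_application.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 60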
Generating the Deployment ZipFile
In order for CodeDeploy to work properly, you must create a properly generated zip file of your application.
The zip file must contain:
Zip root
appspec.yml => CodeDeploy deployment instructions
deployment stage scripts
the provided samples would be placed in the scripts directory of the zip file and require the my_app.sh script to be present at the root of your application directory (ie. the my_app directory in the zip)
distribution code - in our example it would be under the my_app directory
Tools such as Gradle and Maven are capable of generating distribution zip files with certain alterations to the zip generation process.
If you do not use such a tool, you may have to instruct Gitlab-CI to generate this zip file in a different manner; this method is outside of the scope of this guide.
Deploying your application to EC2
The final step in this guide is actually performing a successful deployment.
The stages of Continuous integration are defined by the rules set in the gitlab-ci.yml. The example provided with this guide will initiate a deploy for any reference matching the following regex: /^v\d+\.\d+\.\d+-.*$/.
In this case, pushing a Tag v1.0.0-My-App-Alpha-Release through git onto your remote Gitlab would initiate the deployment process. You may adjust these rules as applicable to your project requirements.
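For example:

# Pushing a tag that matches /^v\d+\.\d+\.\d+-.*$/ kicks off the deploy job
git tag v1.0.0-My-App-Alpha-Release
git push origin v1.0.0-My-App-Alpha-Release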
The gitlab-ci.yml example provided would perform the following jobs when detecting the Tag v1.0.0-My-App-Alpha-Release:
build job - compile the sources
test job - run the unit tests
deploy-job - compile the sources, generate the distribution zip, upload zip to Amazon S3
Once the distribution zip has been uploaded to Amazon S3, the following steps happen:
CodePipeline detects the change in the revision of the S3 zip file
CodePipeline validates the file
CodePipeline sends signal that the bundle for CodeDeploy is ready
CodeDeploy executes the deployment steps
Start - initialization of the deployment
Application Stop - Executes defined script for hook
DownloadBundle - Gets the bundle file from the S3 repository through the CodePipeline
BeforeInstall - Executes defined script for hook
Install - Copies the contents to the deployment location as defined by the files section of appspec.yml
AfterInstall - Executes defined script for hook
ApplicationStart - Executes defined script for hook
ValidateService - Executes defined script for hook
End - Signals the CodePipeline that the deployment has completed successfully
References
Gitlab-CI QuickStart: http://docs.gitlab.com/ce/ci/quick_start/README.html
Gitlab-CI .gitlab-ci.yml: http://docs.gitlab.com/ce/ci/yaml/README.html
AWS CodePipeline Walkthrough: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Install or Reinstall the AWS CodeDeploy Agent: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
AWS CLI Getting Started - Env: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment
AppSpec Reference: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
autronix's answer is awesome, although in my case I had to give up the CodePipeline part due to the following error: The deployment failed because a specified file already exists at this location : /path/to/file. This is because I already have files at the location, since I'm using an existing instance with a server already running on it.
Here is my workaround:
In the .gitlab-ci.yml, here is what I changed:
deploy:
  stage: deploy
  script:
    - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"  # download and install the AWS CLI
    - unzip awscliv2.zip
    - ./aws/install
    - aws deploy push --application-name App-Name --s3-location s3://app-deployment/app.zip  # add the revision to the s3 bucket
    - aws deploy create-deployment --application-name App-Name --s3-location bucket=app-deployment,key=app.zip,bundleType=zip --deployment-group-name App-Name-Fleet --deployment-config-name CodeDeployDefault.OneAtATime --file-exists-behavior OVERWRITE  # order the deployment of the new revision
  when: on_success
  only:
    refs:
      - dev
The important part is the aws deploy create-deployment line with its --file-exists-behavior flag. There are three options available; OVERWRITE was the one I needed, and since I couldn't manage to set this flag with CodePipeline, I went with the CLI option.
I've also changed the upload of the .zip a bit. Instead of creating the .zip myself, I'm using the aws deploy push command, which creates a .zip for me on the s3 bucket.
There is really nothing else to modify.