CI pipeline for AWS ECS from a GitLab repository

I want to create a pipeline for ECS deployment.
My .gitlab-ci.yml file looks like this:
stages:
  - build
  - test
  - review
  - deploy
  - production
  - cleanup

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS
  CI_AWS_ECS_CLUSTER: shopservice
  CI_AWS_ECS_SERVICE: sample-app-service
  CI_AWS_ECS_TASK_DEFINITION: first-run-task-definition:1
  AWS_REGION: us-west-2

include:
  - template: Jobs/Build.gitlab-ci.yml
  - template: Jobs/Deploy/ECS.gitlab-ci.yml
But the pipeline fails at the production_ecs stage with the following error:
Using docker image sha256:9920fa2b45873efd675cf992d7130e8a691133fd51ec88b2765d977f82695263 for registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest with digest registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs@sha256:c833e508b00451a09639e96fb7551f62abb4f75ba1d31f88b4dc8299c608e0dd ...
$ ecs update-task-definition
Unable to locate credentials. You can configure credentials by running "aws configure".
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
Why am I getting this error? How can I configure AWS credentials for the ECS deployment?

You need to pass your credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) as environment variables and also set the region via the environment variable AWS_DEFAULT_REGION. You can set them by going to your project (or group) and navigating to Settings > CI/CD.
You can find the documentation for ECS deployments here: https://docs.gitlab.com/ee/ci/cloud_deployment/#deploy-your-application-to-the-aws-elastic-container-service-ecs
and for configuring the AWS CLI (which is used by the template and the GitLab AWS Docker images) here: https://docs.gitlab.com/ee/ci/cloud_deployment/#run-aws-commands-from-gitlab-cicd
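For illustration, a minimal sketch of what the resulting configuration might look like (the CI_AWS_ECS_* values are taken from the question; AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are assumed to be defined as masked variables under Settings > CI/CD > Variables rather than committed to the file):

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS
  AWS_DEFAULT_REGION: us-west-2              # region the aws CLI uses in the deploy jobs
  CI_AWS_ECS_CLUSTER: shopservice
  CI_AWS_ECS_SERVICE: sample-app-service
  CI_AWS_ECS_TASK_DEFINITION: first-run-task-definition:1
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY: set as masked CI/CD variables, not here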

Related

[Laravel + AWS]: Passing stored ENV variables into an EC2 container via an ECS deployment task results in the .env file being ignored

I'm working with a dockerized Laravel 8 website hosted on an ECS-managed EC2 instance.
Deployments are managed by AWS CodePipeline.
The code is stored in GitHub.
Production images are built by CodeBuild.
An ECS service runs a production release task to push that prod image to alternating EC2 instances.
This works well. I'm having problems, however, altering how I provision environment variables to newly released containers. During early development these were provided in a static .env file, then generated in CodeBuild during the build stage, and are now specified in the deployment task as stored AWS Systems Manager variables.
The goal is to allow the automatic provision of env variables without storing secrets in the codebase or build artifacts, or having to SSH into containers, and that's achieved.
However, I'd still like to run php artisan key:generate in the build stage to create a new app key when the production site is released, rather than storing that statically in AWS.
The problem
Whenever I specify any environment variables in the ECS deployment task, any environment variables I have provided in the site's .env file (specifically, those built in the CodeBuild build stage) are ignored.
Here's a snippet of the relevant buildspec.yml section:
build:
  commands:
    - echo Building front-end assets...
    - npm run prod
    - echo Installing composer libraries...
    - composer install
    - echo Creating .env file...
    - touch .env
    - echo Generating app key...
    - php artisan key:generate
On release the site will throw a missing app key error, implying that php artisan key:generate failed - yet CodeBuild logging reports that it has succeeded. If I remove the environment variables from the ECS task then the generated app key is read correctly and the site works.
Illuminate\Encryption\MissingAppKeyException
No application encryption key has been specified.
It appears, basically, that if I want to provide some environment variables via ECS deployment task injection, I have to provide all environment variables that way, because any others will be ignored.
Any insights into why the ECS task environment variables method could result in the .env (or its contents) being ignored?
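For context, injecting variables through the ECS task definition looks roughly like the snippet below (the container name, variable names, and SSM parameter ARN are placeholders, not values from the question):

"containerDefinitions": [
  {
    "name": "laravel-app",
    "environment": [
      { "name": "APP_ENV", "value": "production" }
    ],
    "secrets": [
      { "name": "DB_PASSWORD", "valueFrom": "arn:aws:ssm:us-east-1:123456789012:parameter/prod/db-password" }
    ]
  }
]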

Deployed wrong app to Google Cloud, how to prevent?

When I run gcloud app deploy I get the message:
ERROR: (gcloud.app.deploy) The required property [project] is not currently set.
You may set it for your current workspace by running:
$ gcloud config set project VALUE
or it can be set temporarily by the environment variable [CLOUDSDK_CORE_PROJECT]
Setting the project in my current workspace caused me to deploy the wrong app, so I don't want to do that again. I want to try deploying using the environment variable option listed above. How do I do this? What is the deploy syntax to use CLOUDSDK_CORE_PROJECT? I thought this would come from my app.yaml but haven't gotten it working.
You should configure gcloud with your project ID:
gcloud config set project <PROJECT_ID>
You can find your project ID in your GCP account.
You should be able to pass the project as part of the app deploy command:
gcloud app deploy ~/my_app/app.yaml --project=PROJECT
Look at the examples in the Documentation.
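If you specifically want the environment-variable form mentioned in the error message, prefixing the command should also work (the project ID below is a placeholder):
CLOUDSDK_CORE_PROJECT=my-project-id gcloud app deploy ~/my_app/app.yaml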
I had the same error message recently in a job which worked fine before.
Old:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set compute/zone $GC_REGION
    - gcloud config set project $GC_PROJECT
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
New:
# Deploy template
.deploy_template: &deploy_definition
  image: google/cloud-sdk
  stage: deploy
  script:
    - echo $GOOGLE_KEY > key.json
    - gcloud auth activate-service-account --key-file key.json
    - gcloud config set project $GC_PROJECT
    - gcloud config set compute/zone $GC_REGION
    - gcloud container clusters get-credentials $GC_XXX_CLUSTER
    - kubectl set image deployment/XXXXXX --record
I just changed the order so that the project is set before compute/zone, and it worked like before.

serverless remove lambda using GitLab CI

I'm using gitlab CI for deployment.
I'm running into a problem when the review branch is deleted.
stop_review:
  variables:
    GIT_STRATEGY: none
  stage: cleanup
  script:
    - echo "$AWS_REGION"
    - echo "Stopping review branch"
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - echo "$CI_COMMIT_REF_NAME"
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  only:
    - branches
  except:
    - master
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
The error is: This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file.
I have tried different GIT_STRATEGY values; can someone point me in the right direction?
In order to run serverless remove, you'll need to have the serverless.yml file available, which means the actual repository will need to be cloned. (or that file needs to get to GitLab in some way).
It's required to have a serverless.yml configuration file available when you run serverless remove because the Serverless Framework allows users to provision infrastructure using not only the framework's YML configuration but also additional resources (like CloudFormation in AWS) which may or may not live outside of the specified app or stage CF Stack entirely.
In fact, you can also provision infrastructure into other providers as well (AWS, GCP, Azure, OpenWhisk, or actually any combination of these).
So it's not sufficient to simply identify the stage name when running sls remove; you'll need the full serverless.yml template.
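As a minimal sketch of the fix, assuming serverless.yml sits at the repository root and the ref still exists when the stop job runs, dropping GIT_STRATEGY: none (or switching it to fetch) makes the config file available to the job:

stop_review:
  variables:
    GIT_STRATEGY: fetch   # clone/fetch the repo so serverless.yml is present
  stage: cleanup
  script:
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual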

Using Packer, how does one build an Amazon ECR image remotely?

My problem:
I want a Docker image, built by Packer (and Ansible), saved as an artifact in the Amazon EC2 Container Registry (ECR).
My limitations:
The build needs to be triggered by Bitbucket Pipelines. Therefore the build steps need to be executed either in Bitbucket Pipelines itself or in an AWS EC2 instance/container.
This is because not all dev machines necessarily have the permissions/packages to build from their local environment. I only want these images to be built as a result of an automated CI process.
What I have tried:
Using Packer, I am able to build AMIs remotely, and I am able to build Docker images with Packer (built locally and pushed to Amazon ECR).
However, the Bitbucket Pipeline, which itself executes build steps within a Docker container, does not have access to the Docker daemon and therefore cannot use docker run.
The error I receive in Bitbucket Pipelines:
+ packer build ${BITBUCKET_CLONE_DIR}/build/pipelines_builder/template.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: hashicorp/packer
docker: Using default tag: latest
docker: latest: Pulling from hashicorp/packer
docker: 88286f41530e: Pulling fs layer
...
...
docker: 08d16a84c1fe: Pull complete
docker: Digest: sha256:c093ddf4c346297598aaa13d3d12fe4e9d39267be51ae6e225c08af49ec67fc0
docker: Status: Downloaded newer image for hashicorp/packer:latest
==> docker: Starting docker container...
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker426823595:/packer-files -d -i -t hashicorp/packer /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
==> docker: See 'docker run --help'.
==> docker:
Build 'docker' errored: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Some builds didn't complete successfully and had errors:
--> docker: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Builds finished but no artifacts were created.
The following quote says it all (taken from link):
Other commands, such as docker run, are currently forbidden for
security reasons on our shared build infrastructure.
So, I know why this is happening. It is a limitation I am faced with, and I am aware that I need to find an alternative.
A Possible Solution:
The only solution I can think of at the moment is, a Bitbucket Pipeline using an image with terraform and ansible installed, containing the following:
ansible-local:
  terraform apply (spins up an instance/container from an AMI with ansible and packer installed)
ansible-remote (to the instance mentioned above):
  clone the devops repo with the packer build script on it
  execute the packer build command (the build depends on ansible and creates an EC2 Container Registry image)
ansible-local:
  terraform destroy
Is the above solution a viable option? Are there alternatives? Can Packer not run commands and commit from a container running remotely in ECS?
My long term solution will be to only use bitbucket pipelines to trigger lambda functions in AWS, which will spin up containers in our EC2 Container Registry and perform the builds there. More control, and we can have devs trigger the lambda functions from their machines (with more bespoke dynamic variables).
I set up some terraform scripts which can be executed from any CI tool, with a few pre-requisites:
CI tool must have API access tokens to AWS (the only cloud provider supported so far)
CI tool must be able to run Terraform or dockerized Terraform container
This will spin up a fresh EC2 instance in your own VPC of your choosing and execute a script.
For this stack overflow question, that script would contain some packer commands to build and push a docker image. The AMI for the EC2 instance would need packer and docker installed.
Find more info at: https://github.com/dnk8n/remote-provisioner
My understanding of the problem blocking you is that the Bitbucket Pipelines runners (normally called agents) don't have enough permission to do the job (terraform apply, packer build) against your AWS account.
Since Bitbucket Pipelines agents run in the Bitbucket cloud, not in your AWS account (where you could assign an IAM role to them), you should create an IAM user (policies and permissions are listed below) and set its AWS API keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN) as environment variables in your pipeline.
You can refer to this document on how to add your AWS credentials to Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/deploy-to-amazon-aws-875304040.html
With that, you can run packer or terraform commands without issues.
For the minimum policies you need to assign to run packer build, refer to this document:
https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CopyImage",
      "ec2:CreateImage",
      "ec2:CreateKeypair",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSnapshot",
      "ec2:CreateTags",
      "ec2:CreateVolume",
      "ec2:DeleteKeypair",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSnapshot",
      "ec2:DeleteVolume",
      "ec2:DeregisterImage",
      "ec2:DescribeImageAttribute",
      "ec2:DescribeImages",
      "ec2:DescribeInstances",
      "ec2:DescribeRegions",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSnapshots",
      "ec2:DescribeSubnets",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:GetPasswordData",
      "ec2:ModifyImageAttribute",
      "ec2:ModifyInstanceAttribute",
      "ec2:ModifySnapshotAttribute",
      "ec2:RegisterImage",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:TerminateInstances"
    ],
    "Resource": "*"
  }]
}
For terraform plan/apply, you need to assign broader permissions, because Terraform can manage nearly all AWS resources.
Second, for your existing requirement you only need to run packer and terraform commands; you don't need to run docker commands in the Bitbucket pipeline.
So a normal pipeline with the above AWS API environment variables should work directly:
image: hashicorp/packer

pipelines:
  default:
    - step:
        script:
          - packer build <your_packer_json_file>
You should be fine running terraform commands in the hashicorp/terraform image as well.
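For example, a Terraform step would follow the same pattern (a sketch; it assumes the image provides a shell, which Bitbucket Pipelines requires, and that your Terraform configuration lives at the repository root):

image: hashicorp/terraform

pipelines:
  default:
    - step:
        script:
          - terraform init
          - terraform plan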
I think I would approach it like this:
Have a Packer build that produces a "Docker build AMI" that you can run on EC2. Essentially it would just be an AMI with Docker pre-installed, plus anything else you need. This Packer build can be stored in another BitBucket Git repo and you can have this image built and pushed to EC2 through another BitBucket pipeline so that any changes to your build AMI are automatically built and pushed as an AMI. As you already suggest, using the AWS Builder here for that.
Have a Terraform script as part of your current project that is called by your BitBucket pipeline to spin up an instance of the above "Docker Build" AMI when your pipeline starts (e.g. terraform apply)
Use the Packer Docker Builder on the above EC2 instance to build and push your Docker image to ECR (applying your Ansible scripts).
Terraform destroy the environment once the build is complete
Doing that keeps everything in infrastructure as code and should make it fairly trivial for you to move the Docker build locally into the BitBucket pipeline if they offer support for running docker run at any point.

How to deploy with Gitlab-Ci to EC2 using AWS CodeDeploy/CodePipeline/S3

I've been working on a SlackBot project based in Scala using Gradle and have been looking into ways to leverage Gitlab-CI for the purpose of deploying to AWS EC2.
I am able to fully build and test my application with Gitlab-CI.
How can I perform a deployment from Gitlab-CI to Amazon EC2 Using CodeDeploy and CodePipeline?
Answer to follow as a Guide to do this.
I have created a set of sample files to go with the Guide provided below.
These files are available at the following link: https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/
Scope
This guide assumes the following
Gitlab EE hosted project - may work on private CE/EE instances (not tested)
Gitlab as the GIT versioning repository
Gitlab-CI as the Continuous Integration Engine
Existing AWS account
AWS EC2 as the target production or staging system for the deployment
AWS EC2 Instance running Amazon Linux AMI
AWS S3 as the storage facility for deployment files
AWS CodeDeploy as the Deployment engine for the project
AWS CodePipeline as the Pipeline for deployment
The provided .gitlab-ci.yml sample is based on a Java/Scala + Gradle project.
The script is provided as a generic example and will need to be adapted to your specific needs when implementing Continuous Delivery through this method.
The guide will assume that the user has basic knowledge about AWS services and how to perform the necessary tasks.
Note: The guide provided in this sample uses the AWS console to perform tasks. While there are likely CLI equivalent for the tasks performed here, these will not be covered throughout the guide.
Motivation
The motivation for creating these scripts and deployment guide came from the lack of availability of a proper tutorial showing how to implement Continuous Delivery using Gitlab and AWS EC2.
Gitlab introduced their freely available CI engine by partnering with Digital Ocean, which enables user repositories to benefit from good quality CI for free.
One of the main advantages of using Gitlab is that they provide built-in Continuous Integration containers for running through the various steps and validate a build.
Unfortunately, neither Gitlab nor AWS provides an integration that would allow performing Continuous Delivery following passing builds.
This Guide and Scripts (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/) provide a simplified version of the steps that I've undertaken in order to have a successful CI and CD using both Gitlab and AWS EC2 that can help anyone else get started with this type of implementation.
Setting up the environment on AWS
The first step in ensuring a successful Continuous Delivery process is to set up the necessary objects on AWS in order to allow the deployment process to succeed.
AWS IAM User
The initial requirement will be to set up an IAM user:
https://console.aws.amazon.com/iam/home#users
Create a user
Attach the following permissions:
CodePipelineFullAccess
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployFullAccess
Inline Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:*",
        "codedeploy:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
Generate security credentials
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Note: Please keep these credentials in a safe location. You will need them in a later step.
AWS EC2 instance & Role
Instance Role for CodeDeploy
https://console.aws.amazon.com/iam/home#roles
Create a new Role that will be assigned to your EC2 Instance in order to access S3.
Set the name according to your naming conventions (ie. MyDeploymentAppRole)
Select Amazon EC2 in order to allow EC2 instances to run other AWS services
Attach the following policies:
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployRole
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Launch Instance
https://console.aws.amazon.com/ec2/v2/home
Click on Launch Instance and follow these steps:
Select Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type
Select the required instance type (t2.micro by default)
Next
Select IAM Role to be MyDeploymentAppRole (based on the name created in the previous section)
Next
Select Appropriate Storage
Next
Tag your instance with an appropriate name (ie. MyApp-Production-Instance)
add additional tags as required
Next
Configure Security group as necessary
Next
Review and Launch your instance
You will be provided with the possibility to either generate or use SSH keys. Please select the appropriate applicable method.
Setting up instance environment
Install CodeDeploy Agent
Log into your newly created EC2 instance and follow the instructions:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
CodeDeploy important paths:
CodeDeploy Deployment base directory: /opt/codedeploy-agent/deployment-root/
CodeDeploy Log file: /var/log/aws/codedeploy-agent/codedeploy-agent.log
Tip: run tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log to keep track of the deployment in real time.
Install your project prerequisites
If your project has any prerequisites to run, make sure that you install those before running the deployment, otherwise your startup script may fail.
AWS S3 repository
https://console.aws.amazon.com/s3/home
In this step, you will need to create an S3 bucket that will be holding your deployment files.
Simply follow these steps:
Choose Create Bucket
Select a bucket name (ie. my-app-codepipeline-deployment)
Select a region
In the console for your bucket select Properties
Expand the Versioning menu
choose Enable Versioning
AWS CodeDeploy
https://console.aws.amazon.com/codedeploy/home#/applications
Now that the basic elements are set, we are ready to create the Deployment application in CodeDeploy
To create a CodeDeploy deployment application follow these steps:
Select Create New Application
Choose an Application Name (ie. MyApp-Production )
Choose a Deployment Group Name (ie. MyApp-Production-Fleet)
Select the EC2 Instances that will be affected by this deployment - Search by Tags
Under Key Select Name
Under Value Select MyApp-Production-Instance
Under Service Role, Select MyDeploymentAppRole
Click on Create Application
Note: You may assign the deployment to any relevant Tag that applied to the desired instances targeted for deployment. For simplicity's sake, only the Name Tag has been used to choose the instance previously defined.
AWS CodePipeline
https://console.aws.amazon.com/codepipeline/home#/dashboard
The next step is to proceed with creating the CodePipeline, which is in charge of performing the connection between the S3 bucket and the CodeDeploy process.
To create a CodePipeline, follow these steps:
Click on Create Pipeline
Name your pipeline (ie. MyAppDeploymentPipeline )
Next
Set the Source Provider to Amazon S3
set Amazon S3 location to the address of your bucket and target deployment file (ie. s3://my-app-codepipeline-deployment/myapp.zip )
Next
Set Build Provider to None - This is already handled by Gitlab-CI as will be covered later
Next
Set Deployment Provider to AWS CodeDeploy
set Application Name to the name of your CodeDeploy Application (ie. MyApp-Production)
set Deployment Group to the name of your CodeDeploy Deployment Group (ie. MyApp-Production-Fleet )
Next
Create or Choose a Pipeline Service Role
Next
Review and click Create Pipeline
Setting up the environment on Gitlab
Now that the AWS environment has been prepared to receive the application deployment, we can proceed with setting up the CI environment and settings to ensure that the code is built and deployed to an EC2 Instance using S3, CodeDeploy and the CodePipeline.
Gitlab Variables
In order for the deployment to work, we will need to set a few environment variables in the project repository.
In your Gitlab Project, navigate to the Variables area for your project and set the following variables:
AWS_DEFAULT_REGION => your AWS region
AWS_SECRET_ACCESS_KEY => your AWS user credential secret key (obtained when you generated the credentials for the user)
AWS_ACCESS_KEY_ID => your AWS user credential key ID (obtained when you generated the credentials for the user)
AWS_S3_LOCATION => the location of your deployment zip file (ie. s3://my-app-codepipeline-deployment/my_app.zip )
These variables will be accessible by the scripts executed by the Gitlab-CI containers.
Startup script
A simple startup script has been provided (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/blob/master/deploy/extras/my_app.sh) to allow the deployment to perform the following tasks:
Start the application and create a PID file
Check the status of the application through the PID file
Stop the application
You may find this script under deploy/extras/my_app.sh
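The linked file is the authoritative version; as a rough, hypothetical sketch, such a wrapper usually looks something like this (the application path and binary name are placeholders):

#!/bin/bash
# Minimal start/status/stop wrapper built around a PID file (sketch only)
APP_DIR=/opt/my_app
PID_FILE=$APP_DIR/my_app.pid

case "$1" in
  start)
    nohup "$APP_DIR/bin/my_app" > "$APP_DIR/my_app.log" 2>&1 &
    echo $! > "$PID_FILE"                       # record the PID for later checks
    ;;
  status)
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
      echo "running (pid $(cat "$PID_FILE"))"
    else
      echo "stopped"
    fi
    ;;
  stop)
    [ -f "$PID_FILE" ] && kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    ;;
esac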
Creating gitlab-ci.yml
The gitlab-ci.yml file is in charge of performing the Continuous Integration tasks associated with a given commit.
It acts as a simplified group of shell scripts that are organized in stages which correspond to the different phases in your Continuous Integration steps.
For more information on the details and reference, please refer to the following two links:
http://docs.gitlab.com/ce/ci/quick_start/README.html
http://docs.gitlab.com/ce/ci/yaml/README.html
You may validate the syntax of your gitlab-ci.yml file at any time with the following tool: https://gitlab.com/ci/lint
For the purpose of deployment, we will cover only the last piece of the sample provided with this guide:
deploy-job:
  # Script to run for deploying application to AWS
  script:
    - apt-get --quiet install --yes python-pip # AWS CLI requires python-pip, python is installed by default
    - pip install -U pip                       # pip update
    - pip install awscli                       # AWS CLI installation
    - $G build -x test -x distTar              # Build the project with Gradle
    - $G distZip                               # creates distribution zip for deployment
    - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION   # Uploads the zipfile to S3; AWS CodePipeline/CodeDeploy picks it up from there
  # requires previous CI stages to succeed in order to execute
  when: on_success
  stage: deploy
  environment: production
  cache:
    key: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    untracked: true
    paths:
      - build/
  # Applies only to tags matching the regex, ie: v1.0.0-My-App-Release
  only:
    - /^v\d+\.\d+\.\d+-.*$/
  except:
    - branches
    - triggers
This part represents the whole job associated with the deployment, following the previous C.I. stages, if any.
The relevant part associated with the deployment is this:
# Script to run for deploying application to AWS
script:
  - apt-get --quiet install --yes python-pip # AWS CLI requires python-pip, python is installed by default
  - pip install -U pip                       # pip update
  - pip install awscli                       # AWS CLI installation
  - $G build -x test -x distTar              # Build the project with Gradle
  - $G distZip                               # creates distribution zip for deployment
  - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION   # Uploads the zipfile to S3; AWS CodePipeline/CodeDeploy picks it up from there
The first step involves installing the python package management system: pip.
pip is required to install AWS CLI, which is necessary to upload the deployment file to AWS S3
In this example, we are using Gradle (defined by the environment variable $G); Gradle provides a module to automatically zip the deployment files. Depending on the type of project you are deploying, the method for generating the distribution zip file my_app.zip will differ.
The aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION command uploads the distribution zip file to the Amazon S3 location that we defined earlier. This file is then automatically detected by CodePipeline, processed and sent to CodeDeploy.
Finally, CodeDeploy performs the necessary tasks through the CodeDeploy agent as specified by the appspec.yml file.
Creating appspec.yml
The appspec.yml defines the behaviour to be followed by CodeDeploy once a deployment file has been received.
A sample file has been provided along with this guide along with sample scripts to be executed during the various phases of the deployment.
Please refer to the specification for the CodeDeploy AppSpec for more information on how to build the appspec.yml file: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
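As a rough illustration of the structure (paths and script names below are placeholders; see the sample in the linked repository for the real files), an EC2 appspec.yml typically looks like this:

version: 0.0
os: linux
files:
  - source: my_app
    destination: /opt/my_app
hooks:
  ApplicationStop:
    - location: scripts/stop_application.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_application.sh
      timeout: 60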
Generating the Deployment ZipFile
In order for CodeDeploy to work properly, you must create a properly generated zip file of your application.
The zip file must contain, at the zip root:
appspec.yml => the CodeDeploy deployment instructions
the deployment stage scripts - the provided samples would be placed in a scripts directory inside the zip, and they require the my_app.sh script to be present at the root of your application directory (ie. the my_app directory in the zip)
the distribution code - in our example it would be under the my_app directory
Tools such as Gradle and Maven are capable of generating distribution zip files with certain alterations to the zip generation process.
If you do not use such a tool, you may have to instruct Gitlab-CI to generate this zip file in a different manner; this method is outside of the scope of this guide.
Deploying your application to EC2
The final step in this guide is actually performing a successful deployment.
The stages of Continuous integration are defined by the rules set in the gitlab-ci.yml. The example provided with this guide will initiate a deploy for any reference matching the following regex: /^v\d+\.\d+\.\d+-.*$/.
In this case, pushing a Tag v1.0.0-My-App-Alpha-Release through git onto your remote Gitlab would initiate the deployment process. You may adjust these rules as applicable to your project requirements.
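For example, assuming the remote is named origin:
git tag v1.0.0-My-App-Alpha-Release
git push origin v1.0.0-My-App-Alpha-Release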
The gitlab-ci.yml example provided would perform the following jobs when detecting the Tag v1.0.0-My-App-Alpha-Release:
build job - compile the sources
test job - run the unit tests
deploy-job - compile the sources, generate the distribution zip, upload zip to Amazon S3
Once the distribution zip has been uploaded to Amazon S3, the following steps happen:
CodePipeline detects the change in the revision of the S3 zip file
CodePipeline validates the file
CodePipeline sends signal that the bundle for CodeDeploy is ready
CodeDeploy executes the deployment steps
Start - initialization of the deployment
Application Stop - Executes defined script for hook
DownloadBundle - Gets the bundle file from the S3 repository through the CodePipeline
BeforeInstall - Executes defined script for hook
Install - Copies the contents to the deployment location as defined by the files section of appspec.yml
AfterInstall - Executes defined script for hook
ApplicationStart - Executes defined script for hook
ValidateService - Executes defined script for hook
End - Signals the CodePipeline that the deployment has completed successfully
References
Gitlab-CI QuickStart: http://docs.gitlab.com/ce/ci/quick_start/README.html
Gitlab-CI .gitlab-ci.yml: http://docs.gitlab.com/ce/ci/yaml/README.html
AWS CodePipeline Walkthrough: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Install or Reinstall the AWS CodeDeploy Agent: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
AWS CLI Getting Started - Env: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment
AppSpec Reference: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
autronix's answer is awesome, although in my case I had to give up the CodePipeline part due to the following error: The deployment failed because a specified file already exists at this location: /path/to/file. This is because I already have files at that location, since I'm using an existing instance with a server already running on it.
Here is my workaround:
In the .gitlab-ci.yml, here is what I changed:
deploy:
  stage: deploy
  script:
    - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" # Downloading and installing awscli
    - unzip awscliv2.zip
    - ./aws/install
    - aws deploy push --application-name App-Name --s3-location s3://app-deployment/app.zip # Adding revision to s3 bucket
    - aws deploy create-deployment --application-name App-Name --s3-location bucket=app-deployment,key=app.zip,bundleType=zip --deployment-group-name App-Name-Fleet --deployment-config-name CodeDeployDefault.OneAtATime --file-exists-behavior OVERWRITE # Ordering the deployment of the new revision
  when: on_success
  only:
    refs:
      - dev
The important part is the aws deploy create-deployment line with its --file-exists-behavior flag. There are three options available; OVERWRITE was the one I needed, and since I couldn't manage to set this flag with CodePipeline, I went with the CLI option.
I've also changed the upload of the .zip a bit. Instead of creating the .zip myself, I'm using the aws deploy push command, which creates the .zip for me in the S3 bucket.
There is really nothing else to modify.
