CodeBuild workflow with environment variables - bash

I have a monolithic GitHub project (a monorepo) containing multiple applications that I'd like to integrate with an AWS CodeBuild CI/CD workflow. My issue is that if I make a change to one application, I don't want to redeploy the others. Essentially, I want to create a logical fork that deploys differently based on the files changed in a particular commit.
Basically my project repository looks like this:
- API
  - node_modules
  - package.json
  - dist
  - src
- REACTAPP
  - node_modules
  - package.json
  - dist
  - src
- scripts
  - 01_install.sh
  - 02_prebuild.sh
  - 03_build.sh
- .ebextensions
In terms of deployment, my API project gets deployed to Elastic Beanstalk and my REACTAPP gets deployed as static files to S3. I've tried a few things, but decided that the only viable approach is to manually perform this deploy step within my own 03_build.sh script, because there's no way to build this dynamically within CodeBuild's Deploy step (I could be wrong).
Anyway, my issue is that I essentially need to create a decision tree to determine which project gets executed, so if I make a change to API and push, it doesn't automatically deploy REACTAPP to S3 unnecessarily (and vice versa).
I managed to get this working on localhost by updating environment variables at certain points in the build process and then reading them in separate steps. However, this fails on CodeBuild because of permission issues, i.e. I don't seem to be able to update env variables from within the CI process itself.
Explicitly, my buildspec.yml looks like this:
version: 0.2
env:
  variables:
    VARIABLES: 'here'
    AWS_ACCESS_KEY_ID: 'XXXX'
    AWS_SECRET_ACCESS_KEY: 'XXXX'
    AWS_REGION: 'eu-west-1'
    AWS_BUCKET: 'mybucket'
phases:
  install:
    commands:
      - sh ./scripts/01_install.sh
  pre_build:
    commands:
      - sh ./scripts/02_prebuild.sh
  build:
    commands:
      - sh ./scripts/03_build.sh
I'm running my own shell scripts to perform some logic and I'm trying to pass variables between scripts: install->prebuild->build
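A note on why this is harder than it looks: each sh ./scripts/... command starts its own child process, and exported variables die with that process, so nothing exported inside 01_install.sh can survive into the pre_build phase. You can demonstrate the effect locally:

# each script runs as a child process; its exports never reach the parent
sh -c 'export TEST_API=true'        # the child shell exits, TEST_API dies with it
echo "TEST_API=${TEST_API:-unset}"  # prints: TEST_API=unset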
To give one example, here's the 01_install.sh where I diff each project version to determine whether it needs to be updated (excuse any minor errors in bash):
#!/bin/bash
# STAGE 1
# _______________________________________
# API PROJECT INSTALL
# Do if API version was changed in prepush (this is just a sample and I'll likely end up storing the version & previous version within the package.json):
# diff exits non-zero when the files differ, i.e. when the version changed
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  echo "🤖 Installing dependencies in API folder..."
  # run npm in a subshell so the script stays in the repo root afterwards
  (cd ./api/ && npm install)
  # Set a variable to be used by the 02_prebuild.sh script
  TEST_API="true"
  export TEST_API
else
  echo "No change to API"
fi
# ______________________________________
# REACTAPP PROJECT INSTALL
# Do if REACTAPP version number has changed (similar to above):
...
Then in my next stage, 02_prebuild.sh, I read these variables to determine whether I should run tests on the project:
#!/bin/bash
# STAGE 2
# _________________________________
# API PROJECT PRE-BUILD
# Do if install was initiated
if [[ $TEST_API == "true" ]]; then
  echo "🤖 Run tests on API project..."
  # subshell again, so later commands still run from the repo root
  (cd ./api/ && npm run tests)
  echo $TEST_API
  BUILD_API="true"
  export BUILD_API
else
  echo "Don't test API"
fi
# ________________________________
# TODO: Complete for REACTAPP, similar to above
...
In my final script I use the BUILD_API variable to build to the dist folder, then I deploy that to either Elastic Beanstalk (for API) or S3 (for REACTAPP).
When I run this locally it works; however, when I run it on CodeBuild I get a permissions failure, presumably because my bash scripts cannot export environment variables. I'm wondering if anyone knows how to update environment variables from within the build process itself, or if anyone has a better approach to achieve my goals (a conditional/variable build process on CodeBuild).
EDIT:
So an approach that I've managed to get working is: instead of using env variables, I'm creating new files with specific names using fs, then reading the contents of those files to make logical decisions. I can access these files from each of the bash scripts, so it works pretty elegantly with some automatic cleanup.
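For anyone curious, a bash equivalent of that idea looks roughly like this (I used Node's fs, but the .flags directory and flag-file names here are just illustrative):

# 01_install.sh - record the decision as a flag file instead of an env var
mkdir -p .flags
if ! diff ./api/version.json ./api/old_version.json > /dev/null 2>&1; then
  touch .flags/TEST_API
fi

# 02_prebuild.sh - later scripts test for the flag file instead of $TEST_API
if [ -f .flags/TEST_API ]; then
  echo "🤖 Run tests on API project..."
  (cd ./api/ && npm run tests)
  touch .flags/BUILD_API
fi

# end of 03_build.sh - automatic cleanup
rm -rf .flags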
I won't edit the original question as it's still an issue and I'd like to know how/if other people solved this. I'm still playing around with how to actually use the eb deploy and s3 CLI commands within the build scripts, as CodeBuild does not seem to come with the EB CLI installed and my .ebextensions file does not seem to be honoured.
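On the missing EB CLI: it can be installed inside the build container with pip, since CodeBuild images ship with Python. A sketch of what the deploy half of 03_build.sh could then do (the environment name is a placeholder, and eb deploy assumes an initialized .elasticbeanstalk config has been committed):

# install the EB CLI in the container, then deploy each changed project
pip install --upgrade awsebcli
if [ -f .flags/BUILD_API ]; then
  (cd ./api && eb deploy my-api-env)              # placeholder environment name
fi
if [ -f .flags/BUILD_REACTAPP ]; then
  aws s3 sync ./reactapp/dist "s3://$AWS_BUCKET"  # static files to S3
fi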

Source control repos like GitHub can be configured to send a POST event to an API endpoint when you push to a branch. You can consume this POST request in Lambda through API Gateway. The event data includes which files were modified by the commit, so the Lambda function can process it to figure out what to deploy. If you're struggling to deploy to your servers from the CodeBuild container, you might want to try posting an artifact to S3 as an installable package and then have your server grab it from there.
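To make that concrete: the push event payload lists the touched paths per commit, so the decision logic can be as simple as this sketch (assuming the standard GitHub push payload saved as event.json, and jq available):

# collect every path added, modified or removed by the pushed commits
CHANGED=$(jq -r '.commits[] | (.added[], .modified[], .removed[])' event.json)
if grep -q '^API/' <<< "$CHANGED"; then
  echo "deploy API to Elastic Beanstalk"
fi
if grep -q '^REACTAPP/' <<< "$CHANGED"; then
  echo "deploy REACTAPP to S3"
fi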

Related

Reuse a conda environment within the same gitlab CI pipeline?

Motivation
My main goal is this: Within a pipeline, I would like to reuse as much as possible (i.e. not build the conda environment multiple times, if all jobs share the same environment).
In my project, I use conda as dependency manager and GitLab CI/CD for continuous integration. For the sake of simplicity, let's say I have a build job and a test job. The most straightforward approach would be to create the conda environment from the environment.yml in every job and then do the actual work. This adds an overhead of several minutes to each job. It also seems wasteful, since I would like to build the environment once in the build job and then reuse it in my test job (especially when creating multiple jobs for different tests).
Research Results
The first thing I need to do is to set the CONDA_ENVS_PATH to somewhere in my project directory.
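For illustration, that could look like this in a job script (a sketch; the directory name is my own choice):

# keep conda environments inside the project directory so a later job
# on the same runner can find them again
export CONDA_ENVS_PATH="${CI_PROJECT_DIR}/.conda-envs"
conda env create -f environment.yml
source activate "${CI_PROJECT_NAME}"   # same activation as in the CI file below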
I've looked at gitlab's caching mechanism, but found that it only helps for the same job across repeated runs of the pipeline, not for different jobs within the same run of a pipeline.
I've also looked at gitlab's artifacts mechanism, but found that due to the up- and download involved, it doesn't reduce run time significantly (basically I save time by not downloading many small packages and not having to compile them again, but lose time compressing and decompressing the environment).
I've also tried to make use of GIT_CLEAN_FLAGS by setting it to none in my test job. That way, the conda environment is not deleted when getting the latest data from git. This does cause a serious speedup in my pipelines, but it does not work all the time: some jobs fail to find the conda environment, and a simple rerun magically works. Of course, in a CI/CD setting, this nondeterminism is not practical.
As a workaround to the original question, we've come up with an intermediate solution. We introduce a docker image, holding our custom environment, through a minimal Dockerfile and a couple of changes to our .gitlab-ci.yml (see below for an example). By only executing the job that builds our custom docker image when the dockerfile or environment changed, we save valuable time on each run. At the same time, we keep the full flexibility in our environment definition and can adjust it exactly how we usually would: by changing the environment.yml.
Question
All solutions tried so far are not really satisfactory. Thus my question is: How can my test job reuse the same conda environment as my build job in gitlab-ci?
In case someone else would like to use a similar setup: Here is my current approach:
# Dockerfile
FROM continuumio/miniconda3:latest
COPY environment.yml .
RUN conda env create -f environment.yml
ENTRYPOINT [""]
# .gitlab-ci.yml
# Use the latest version of this project's docker file
# This will be the default image for all jobs unless specified otherwise
image: $CI_REGISTRY_IMAGE:latest

# Change cache directories to be inside the project directory since we can
# only cache local items.
variables:
  PRE_COMMIT_HOME: "${CI_PROJECT_DIR}/.cache/pre-commit"

stages:
  - build
  - test

# Make conda environment available to all jobs
# This expects the conda environment to have the same name as the gitlab project path
# Avoid dashes and other non-alphabetical characters
default:
  before_script:
    - source activate "${CI_PROJECT_NAME}"

# Build the docker image including the correct conda environment for subsequent jobs
# This assumes a docker image registry being configured for your gitlab instance
dockerimage:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  rules:
    - changes:
        - Dockerfile
        - environment.yml
  before_script: [ ]
  script:
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"username\":\"$CI_REGISTRY_USER\",\"password\":\"$CI_REGISTRY_PASSWORD\"}}}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $CI_REGISTRY_IMAGE:latest

# Run pytest
pytest:
  stage: test
  script:
    - conda install pytest-cov
    - pytest tests --cov=src
Edit: I replaced the example code for using GIT_CLEAN_FLAGS with our most recent approach: using a custom docker image.
Disclaimer: I saw this and this question, but they are both dated, don't have satisfying answers, and I only found them after writing this question, so I hope my additional question increases the discoverability of the topic.

serverless remove lambda using GitLab CI

I'm using gitlab CI for deployment.
I'm running into a problem when the review branch is deleted.
stop_review:
  variables:
    GIT_STRATEGY: none
  stage: cleanup
  script:
    - echo "$AWS_REGION"
    - echo "Stopping review branch"
    - serverless config credentials --provider aws --key ${AWS_ACCESS_KEY_ID} --secret ${AWS_SECRET_ACCESS_KEY}
    - echo "$CI_COMMIT_REF_NAME"
    - serverless remove --stage=$CI_COMMIT_REF_NAME --verbose
  only:
    - branches
  except:
    - master
  environment:
    name: review/$CI_COMMIT_REF_NAME
    action: stop
  when: manual
The error is: This command can only be run in a Serverless service directory. Make sure to reference a valid config file in the current working directory if you're using a custom config file
I have tried different GIT_STRATEGY values; can someone point me in the right direction?
In order to run serverless remove, you'll need to have the serverless.yml file available, which means the actual repository will need to be cloned (or that file needs to get to GitLab in some way).
It's required to have a serverless.yml configuration file available when you run serverless remove, because the Serverless Framework allows users to provision infrastructure not only through the framework's YML configuration but also through additional resources (like CloudFormation in AWS) which may or may not live outside of the specified app or stage CF Stack entirely.
In fact, you can also provision infrastructure into other providers as well (AWS, GCP, Azure, OpenWhisk, or actually any combination of these).
So it's not sufficient to simply identify the stage name when running sls remove; you'll need the full serverless.yml template.
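The simplest fix is therefore to drop GIT_STRATEGY: none so the runner checks the repository out again. If checkout was disabled because the review branch may already be gone, a workaround is to fetch just serverless.yml from the default branch via the GitLab API before running the remove (a sketch; GITLAB_TOKEN is an assumed CI variable holding an access token):

# pull serverless.yml from the default branch so the job can run without a checkout
curl --header "PRIVATE-TOKEN: $GITLAB_TOKEN" \
  --output serverless.yml \
  "$CI_API_V4_URL/projects/$CI_PROJECT_ID/repository/files/serverless.yml/raw?ref=master"
serverless remove --stage="$CI_COMMIT_REF_NAME" --verbose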

Azure DevOps ThirdParty Tools for build / Deployment

pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY

Environment variables:
- $OCCS_CODE_LOCATION: path to the location of all OCCS code
- $OCCS_ADMIN_URL: URL for the administration interface on the target Commerce Cloud instance
- $OCCS_APPLICATION_KEY: application key to use to log into the target Commerce Cloud administration interface
So I want to use an Azure DevOps repository for CI/CD.
In the code block above, you can see that I have specified the dcu and Python code as two tasks.
dcu is a third-party Node.js tool from Oracle that is needed to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, there is the Python (or Node.js) code that I want to invoke against the REST API to publish the changes.
So where do I place those files and how do I invoke them?
*********** Update **************
I set up the self-hosted agent pool and am able to access the system.
I started executing basic bash code, but ran into two issues:
1) Git extracts the repository files to _work/1/s; I'm not sure how that path is decided. How can I change that location?
2) I cd into the correct path (verified with 'pwd'), but the 'dcu' command fails. I tried npm and a few other commands and they fail too, while things like mkdir and rmdir create and remove folders correctly from the desired path. When I run the 'dcu' command manually from the terminal on the system, it works fine as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1. Create an Azure Git repo to include the dcu zip file and your .py files. You can follow the steps in this thread to create an Azure Git repo and push local files to it.
2. Create an Azure build pipeline. Please check here to create a YAML pipeline. Here is a good tutorial to get you started.
To create a classic UI pipeline, please choose Use the classic editor in the pipeline setup wizard, and choose Start with an empty job to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3. Click "+" and search for the Extract files task to unzip the DCU zip file. Click the 3 dots on the Destination folder field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4. Click "+" to add a PowerShell task. Run the script below to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), define them on the Variables tab.
cd $(agent.builddirectory)   # the folder where the unzipped dcu files reside
npm install -g
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5. Add a Use Python version task to define a Python version to execute your .py file.
6. Add a Python script task to run your .py file. Click the 3 dots on the Script path field to locate your publishDCUAuthoredChanges.py file (this .py file and the dcu zip file were pushed to the Azure Git repo in step 1 above).
You should be able to run the script from the question above in the Azure DevOps pipeline.
Update:
_work/1/s is the default working folder for the agent. You cannot change it. Though there are ways to change the location where the source code is cloned from git, the tasks' working directory still defaults to that folder.
However, you can change the working directory inside the tasks, and there are predefined variables you can use to refer to the places on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
The .sh scripts in the _temp folder are generated automatically by the agent; they contain the scripts from the bash task.
For the above dcu command not found error, you can try adding the dcu command's path to the Path system variable in your local machine's environment variables. (A path set in the user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine.)
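Alternatively, you can extend PATH inside the bash task itself for that job only (a sketch; c:\dcu is an assumed install location, matching the example below):

# prepend the dcu install folder to PATH for this task only
export PATH="/c/dcu:$PATH"
dcu.cmd --putAll "$OCCS_CODE_LOCATION" --node "$OCCS_ADMIN_URL" --applicationKey "$OCCS_APPLICATION_KEY"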
Or you can use the physical path to the dcu command in the bash task. For example, let's say dcu.cmd is at c:\dcu\dcu.cmd on the local machine. Then in the bash task, use the script below to run the dcu command.
c:/dcu/dcu.cmd --putAll ...

How to deploy with Gitlab-Ci to EC2 using AWS CodeDeploy/CodePipeline/S3

I've been working on a SlackBot project based in Scala using Gradle and have been looking into ways to leverage Gitlab-CI for the purpose of deploying to AWS EC2.
I am able to fully build and test my application with Gitlab-CI.
How can I perform a deployment from Gitlab-CI to Amazon EC2 Using CodeDeploy and CodePipeline?
Answer to follow, as a guide to doing this.
I have created a set of sample files to go with the Guide provided below.
These files are available at the following link: https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/
Scope
This guide assumes the following
Gitlab EE hosted project - may work on private CE/EE instances (not tested)
Gitlab as the GIT versioning repository
Gitlab-CI as the Continuous Integration Engine
Existing AWS account
AWS EC2 as the target production or staging system for the deployment
AWS EC2 Instance running Amazon Linux AMI
AWS S3 as the storage facility for deployment files
AWS CodeDeploy as the Deployment engine for the project
AWS CodePipeline as the Pipeline for deployment
The provided .gitlab-ci.yml sample is based on a Java/Scala + Gradle project.
The script is provided as a generic example and will need to be adapted to your specific needs when implementing Continuous Delivery through this method.
The guide will assume that the user has basic knowledge about AWS services and how to perform the necessary tasks.
Note: The guide provided in this sample uses the AWS console to perform tasks. While there are likely CLI equivalent for the tasks performed here, these will not be covered throughout the guide.
Motivation
The motivation for creating these scripts and deployment guide came from the lack of availability of a proper tutorial showing how to implement Continuous Delivery using Gitlab and AWS EC2.
Gitlab introduced their freely available CI engine by partnering with Digital Ocean, which enables user repositories to benefit from good quality CI for free.
One of the main advantages of using Gitlab is that they provide built-in Continuous Integration containers for running through the various steps and validate a build.
Unfortunately, neither Gitlab nor AWS provides an integration that would allow performing Continuous Delivery following passing builds.
This Guide and Scripts (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/) provide a simplified version of the steps that I've undertaken in order to have a successful CI and CD using both Gitlab and AWS EC2 that can help anyone else get started with this type of implementation.
Setting up the environment on AWS
The first step in ensuring a successful Continuous Delivery process is to set up the necessary objects on AWS in order to allow the deployment process to succeed.
AWS IAM User
The initial requirement will be to set up an IAM user:
https://console.aws.amazon.com/iam/home#users
Create a user
Attach the following permissions:
CodePipelineFullAccess
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployFullAccess
Inline Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:*",
        "codedeploy:*",
        "ec2:*",
        "elasticloadbalancing:*",
        "iam:AddRoleToInstanceProfile",
        "iam:CreateInstanceProfile",
        "iam:CreateRole",
        "iam:DeleteInstanceProfile",
        "iam:DeleteRole",
        "iam:DeleteRolePolicy",
        "iam:GetInstanceProfile",
        "iam:GetRole",
        "iam:GetRolePolicy",
        "iam:ListInstanceProfilesForRole",
        "iam:ListRolePolicies",
        "iam:ListRoles",
        "iam:PassRole",
        "iam:PutRolePolicy",
        "iam:RemoveRoleFromInstanceProfile",
        "s3:*"
      ],
      "Resource": "*"
    }
  ]
}
Generate security credentials
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Note: Please keep these credentials in a safe location. You will need them in a later step.
AWS EC2 instance & Role
Instance Role for CodeDeploy
https://console.aws.amazon.com/iam/home#roles
Create a new Role that will be assigned to your EC2 Instance in order to access S3.
Set the name according to your naming conventions (i.e. MyDeploymentAppRole)
Select Amazon EC2 in order to allow EC2 instances to run other AWS services
Attach the following policies:
AmazonEC2FullAccess
AmazonS3FullAccess
AWSCodeDeployRole
Note: The policies listed above are very broad in scope. You may adjust to your requirements by creating custom policies that limit access only to certain resources.
Launch Instance
https://console.aws.amazon.com/ec2/v2/home
Click on Launch Instance and follow these steps:
Select Amazon Linux AMI 2016.03.3 (HVM), SSD Volume Type
Select the required instance type (t2.micro by default)
Next
Select IAM Role to be MyDeploymentAppRole (based on the name created in the previous section)
Next
Select Appropriate Storage
Next
Tag your instance with an appropriate name (ie. MyApp-Production-Instance)
add additional tags as required
Next
Configure Security group as necessary
Next
Review and Launch your instance
You will be provided with the possibility to either generate or use SSH keys. Please select the appropriate applicable method.
Setting up instance environment
Install CodeDeploy Agent
Log into your newly created EC2 instance and follow the instructions:
http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
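The gist of those instructions for Amazon Linux at the time of writing is below (the S3 bucket is region-specific; eu-west-1 is shown as an example, so check the linked page for your region):

# abbreviated CodeDeploy agent install for Amazon Linux
sudo yum -y update
sudo yum -y install ruby wget
cd /home/ec2-user
wget https://aws-codedeploy-eu-west-1.s3.amazonaws.com/latest/install
chmod +x ./install
sudo ./install auto
sudo service codedeploy-agent status   # verify the agent is running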
CodeDeploy important paths:
CodeDeploy Deployment base directory: /opt/codedeploy-agent/deployment-root/
CodeDeploy Log file: /var/log/aws/codedeploy-agent/codedeploy-agent.log
Tip: run tail -f /var/log/aws/codedeploy-agent/codedeploy-agent.log to keep track of the deployment in real time.
Install your project prerequisites
If your project has any prerequisites to run, make sure that you install those before running the deployment, otherwise your startup script may fail.
AWS S3 repository
https://console.aws.amazon.com/s3/home
In this step, you will need to create an S3 bucket that will be holding your deployment files.
Simply follow these steps:
Choose Create Bucket
Select a bucket name (ie. my-app-codepipeline-deployment)
Select a region
In the console for your bucket select Properties
Expand the Versioning menu
choose Enable Versioning
AWS CodeDeploy
https://console.aws.amazon.com/codedeploy/home#/applications
Now that the basic elements are set, we are ready to create the Deployment application in CodeDeploy
To create a CodeDeploy deployment application follow these steps:
Select Create New Application
Choose an Application Name (ie. MyApp-Production )
Choose a Deployment Group Name (ie. MyApp-Production-Fleet)
Select the EC2 Instances that will be affected by this deployment - Search by Tags
Under Key Select Name
Under Value Select MyApp-Production-Instance
Under Service Role, Select MyDeploymentAppRole
Click on Create Application
Note: You may assign the deployment to any relevant Tag that applied to the desired instances targeted for deployment. For simplicity's sake, only the Name Tag has been used to choose the instance previously defined.
AWS CodePipeline
https://console.aws.amazon.com/codepipeline/home#/dashboard
The next step is to proceed with creating the CodePipeline, which is in charge of performing the connection between the S3 bucket and the CodeDeploy process.
To create a CodePipeline, follow these steps:
Click on Create Pipeline
Name your pipeline (ie. MyAppDeploymentPipeline )
Next
Set the Source Provider to Amazon S3
set Amazon S3 location to the address of your bucket and target deployment file (ie. s3://my-app-codepipeline-deployment/myapp.zip )
Next
Set Build Provider to None - This is already handled by Gitlab-CI as will be covered later
Next
Set Deployment Provider to AWS CodeDeploy
set Application Name to the name of your CodeDeploy Application (ie. MyApp-Production)
set Deployment Group to the name of your CodeDeploy Deployment Group (ie. MyApp-Production-Fleet )
Next
Create or Choose a Pipeline Service Role
Next
Review and click Create Pipeline
Setting up the environment on Gitlab
Now that the AWS environment has been prepared to receive the application deployment, we can proceed with setting up the CI environment and settings to ensure that the code is built and deployed to an EC2 Instance using S3, CodeDeploy and the CodePipeline.
Gitlab Variables
In order for the deployment to work, we will need to set a few environment variables in the project repository.
In your Gitlab Project, navigate to the Variables area for your project and set the following variables:
AWS_DEFAULT_REGION => your AWS region
AWS_SECRET_ACCESS_KEY => your AWS user credential secret key (obtained when you generated the credentials for the user)
AWS_ACCESS_KEY_ID => your AWS user credential key ID (obtained when you generated the credentials for the user)
AWS_S3_LOCATION => the location of your deployment zip file (ie. s3://my-app-codepipeline-deployment/my_app.zip )
These variables will be accessible by the scripts executed by the Gitlab-CI containers.
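As a quick sanity check, the AWS CLI picks these variables up automatically, so a job script can verify the credentials before attempting an upload:

# should print the IAM user created earlier; fails fast if credentials are wrong
aws sts get-caller-identity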
Startup script
A simple startup script has been provided (https://gitlab.com/autronix/gitlabci-ec2-deployment-samples-guide/blob/master/deploy/extras/my_app.sh) to allow the deployment to perform the following tasks:
Start the application and create a PID file
Check the status of the application through the PID file
Stop the application
You may find this script under deploy/extras/my_app.sh
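For orientation, a minimal PID-file wrapper of this kind could look like the sketch below (illustrative only; the launch command is an assumption and the script in the repository differs):

#!/bin/bash
# my_app.sh - minimal start/status/stop wrapper around a PID file
APP_CMD="java -jar my_app.jar"     # assumption: how the app is launched
PID_FILE="/var/run/my_app.pid"

case "$1" in
  start)
    nohup $APP_CMD > /dev/null 2>&1 &
    echo $! > "$PID_FILE"          # record the PID for later checks
    ;;
  status)
    if [ -f "$PID_FILE" ] && kill -0 "$(cat "$PID_FILE")" 2>/dev/null; then
      echo "running (pid $(cat "$PID_FILE"))"
    else
      echo "stopped"
    fi
    ;;
  stop)
    [ -f "$PID_FILE" ] && kill "$(cat "$PID_FILE")" && rm -f "$PID_FILE"
    ;;
esac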
Creating gitlab-ci.yml
The gitlab-ci.yml file is in charge of performing the Continuous Integration tasks associated with a given commit.
It acts as a simplified group of shell scripts that are organized in stages which correspond to the different phases in your Continuous Integration steps.
For more information on the details and reference, please refer to the following two links:
http://docs.gitlab.com/ce/ci/quick_start/README.html
http://docs.gitlab.com/ce/ci/yaml/README.html
You may validate the syntax of your gitlab-ci.yml file at any time with the following tool: https://gitlab.com/ci/lint
For the purpose of deployment, we will cover only the last piece of the sample provided with this guide:
deploy-job:
  # Script to run for deploying application to AWS
  script:
    - apt-get --quiet install --yes python-pip   # AWS CLI requires python-pip, python is installed by default
    - pip install -U pip                         # pip update
    - pip install awscli                         # AWS CLI installation
    - $G build -x test -x distTar                # Build the project with Gradle
    - $G distZip                                 # creates distribution zip for deployment
    - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION     # Uploads the zipfile to S3, for AWS CodePipeline/CodeDeploy to pick up
  # requires previous CI stages to succeed in order to execute
  when: on_success
  stage: deploy
  environment: production
  cache:
    key: "$CI_BUILD_NAME/$CI_BUILD_REF_NAME"
    untracked: true
    paths:
      - build/
  # Applies only to tags matching the regex, e.g. v1.0.0-My-App-Release
  only:
    - /^v\d+\.\d+\.\d+-.*$/
  except:
    - branches
    - triggers
This part represents the whole deployment job, which follows the previous C.I. stages, if any.
The relevant part associated with the deployment is this:
# Script to run for deploying application to AWS
script:
  - apt-get --quiet install --yes python-pip   # AWS CLI requires python-pip, python is installed by default
  - pip install -U pip                         # pip update
  - pip install awscli                         # AWS CLI installation
  - $G build -x test -x distTar                # Build the project with Gradle
  - $G distZip                                 # creates distribution zip for deployment
  - aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION     # Uploads the zipfile to S3, for AWS CodePipeline/CodeDeploy to pick up
The first step involves installing the python package management system: pip.
pip is required to install AWS CLI, which is necessary to upload the deployment file to AWS S3
In this example, we are using Gradle (invoked via the environment variable $G); Gradle provides a module to automatically zip the deployment files. Depending on the type of project you are deploying, the method for generating the distribution zip file my_app.zip will differ.
The aws s3 cp $BUNDLE_SRC $AWS_S3_LOCATION command uploads the distribution zip file to the Amazon S3 location that we defined earlier. This file is then automatically detected by CodePipeline, processed and sent to CodeDeploy.
Finally, CodeDeploy performs the necessary tasks through the CodeDeploy agent as specified by the appspec.yml file.
Creating appspec.yml
The appspec.yml defines the behaviour to be followed by CodeDeploy once a deployment file has been received.
A sample file is provided with this guide, together with sample scripts to be executed during the various phases of the deployment.
Please refer to the specification for the CodeDeploy AppSpec for more information on how to build the appspec.yml file: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
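To give a feel for the shape of the file, here is a minimal sketch (hook script names and paths are illustrative, not the sample repository's exact contents):

version: 0.0
os: linux
files:
  - source: my_app
    destination: /opt/my_app
hooks:
  ApplicationStop:
    - location: scripts/application_stop.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/application_start.sh
      timeout: 60
  ValidateService:
    - location: scripts/validate_service.sh
      timeout: 60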
Generating the Deployment ZipFile
In order for CodeDeploy to work properly, you must create a properly generated zip file of your application.
The zip file must contain:
- Zip root
  - appspec.yml => CodeDeploy deployment instructions
  - deployment stage scripts - the provided samples would go in the scripts directory of the zip file; they require the my_app.sh script to be present at the root of your application directory (i.e. the my_app directory in the zip)
  - distribution code - in our example it would be under the my_app directory
Tools such as Gradle and Maven are capable of generating distribution zip files with certain alterations to the zip generation process.
If you do not use such a tool, you may have to instruct Gitlab-CI to generate this zip file in a different manner; this method is outside of the scope of this guide.
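For orientation only, the gist is to zip up the layout described above; something along these lines (paths are illustrative):

# assemble the CodeDeploy bundle by hand when no build tool does it for you
mkdir -p bundle/scripts bundle/my_app
cp appspec.yml bundle/
cp deploy/scripts/*.sh bundle/scripts/
cp -r build/dist/* bundle/my_app/        # your distribution output
(cd bundle && zip -r ../my_app.zip .)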
Deploying your application to EC2
The final step in this guide is actually performing a successful deployment.
The stages of Continuous integration are defined by the rules set in the gitlab-ci.yml. The example provided with this guide will initiate a deploy for any reference matching the following regex: /^v\d+\.\d+\.\d+-.*$/.
In this case, pushing a Tag v1.0.0-My-App-Alpha-Release through git onto your remote Gitlab would initiate the deployment process. You may adjust these rules as applicable to your project requirements.
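Concretely, triggering a deployment is then just a matter of:

# create and push a tag matching the deployment regex
git tag v1.0.0-My-App-Alpha-Release
git push origin v1.0.0-My-App-Alpha-Release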
The gitlab-ci.yml example provided would perform the following jobs when detecting the Tag v1.0.0-My-App-Alpha-Release:
build job - compile the sources
test job - run the unit tests
deploy-job - compile the sources, generate the distribution zip, upload zip to Amazon S3
Once the distribution zip has been uploaded to Amazon S3, the following steps happen:
CodePipeline detects the change in the revision of the S3 zip file
CodePipeline validates the file
CodePipeline sends signal that the bundle for CodeDeploy is ready
CodeDeploy executes the deployment steps
Start - initialization of the deployment
Application Stop - Executes defined script for hook
DownloadBundle - Gets the bundle file from the S3 repository through the CodePipeline
BeforeInstall - Executes defined script for hook
Install - Copies the contents to the deployment location as defined by the files section of appspec.yml
AfterInstall - Executes defined script for hook
ApplicationStart - Executes defined script for hook
ValidateService - Executes defined script for hook
End - Signals the CodePipeline that the deployment has completed successfully
References
Gitlab-CI QuickStart: http://docs.gitlab.com/ce/ci/quick_start/README.html
Gitlab-CI .gitlab-ci.yml: http://docs.gitlab.com/ce/ci/yaml/README.html
AWS CodePipeline Walkthrough: http://docs.aws.amazon.com/codepipeline/latest/userguide/getting-started-w.html
Install or Reinstall the AWS CodeDeploy Agent: http://docs.aws.amazon.com/codedeploy/latest/userguide/how-to-run-agent-install.html
AWS CLI Getting Started - Env: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-environment
AppSpec Reference: http://docs.aws.amazon.com/codedeploy/latest/userguide/app-spec-ref.html
autronix's answer is awesome, although in my case I had to give up the CodePipeline part due to the following error: The deployment failed because a specified file already exists at this location : /path/to/file. This is because I already have files at the location, since I'm using an existing instance with a server already running on it.
Here is my workaround :
In the .gitlab-ci.yml here is what I changed :
deploy:
  stage: deploy
  script:
    - curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" # Download and install the AWS CLI
    - unzip awscliv2.zip
    - ./aws/install
    - aws deploy push --application-name App-Name --s3-location s3://app-deployment/app.zip # Add the revision to the S3 bucket
    - aws deploy create-deployment --application-name App-Name --s3-location bucket=app-deployment,key=app.zip,bundleType=zip --deployment-group-name App-Name-Fleet --deployment-config-name CodeDeployDefault.OneAtATime --file-exists-behavior OVERWRITE # Order the deployment of the new revision
  when: on_success
  only:
    refs:
      - dev
The important part is the aws deploy create-deployment line with its --file-exists-behavior flag. There are three options available; OVERWRITE was the one I needed, and since I couldn't manage to set this flag with CodePipeline, I went with the CLI option.
I've also changed the upload of the .zip a bit. Instead of creating the .zip myself, I'm using the aws deploy push command, which creates the .zip for me on the S3 bucket.
There is really nothing else to modify.

GitLab CI .yml file and connection with Web server

I have set up GitLab on a local server with an Ubuntu installation. I also managed to set up GitLab CI with runners, and now I'm struggling a bit with them. I have a PHP project which a couple of guys are working on. We have set up a .gitlab-ci.yml file to deploy files to the web server (which is on the same local server, just under a different folder).
The main problem is that GitLab CI (the runner, basically) deploys all files every time, not just the pushed ones.
We would like to have an option to deploy only the files that changed.
Current yml file looks like:
pages:
  stage: deploy
  script:
    - mkdir -p /opt/lampp/htdocs/web/wp2/
    - mkdir -p .public
    - yes | cp -rf * .public /opt/lampp/htdocs/web/wp2/
  only:
    - push
So, am I missing something huge here, or is there a possibility to deploy just the file that was pushed to the repository? The runner is set to react on every push.
Thank you in advance for your kind replies.
cheers!
Now, after some days and invested hours, some notes did the trick... so with the
pages:
  stage: deploy
  script:
    - mkdir -p /opt/lampp/htdocs/web/wp2/
    - mkdir -p .public
    - cp -rfu * .public /opt/lampp/htdocs/web/wp2/
script I have managed to achieve what I needed. The "-rfu" part was the key: the -u flag makes cp replace a file only if the source is newer than the destination (the web server in my case).
So, this worked for me in the .yml file, and even though CI Lint says the syntax is not correct, the runner reports success. I hope someone will find this useful :)
