Using Packer, how does one build an Amazon ECR image remotely?

My problem:
I want a Docker image, built by Packer (and Ansible), saved as an artifact in the Amazon EC2 Container Registry (ECR).
My limitations:
The build needs to be triggered by Bitbucket Pipelines. Therefore the build steps need to be executed either in Bitbucket Pipelines itself or in an AWS EC2 instance/container.
This is because not all dev machines necessarily have the permissions/packages to build from their local environment. I only want these images to be built as a result of an automated CI process.
What I have tried:
Using Packer, I am able to build AMIs remotely. And I am able to build Docker images using Packer (built locally and pushed remotely to Amazon ECR).
However, the Bitbucket Pipeline, which already executes build steps within a Docker container, does not have access to the Docker daemon, so it cannot run docker run.
The error I receive in Bitbucket Pipelines:
+ packer build ${BITBUCKET_CLONE_DIR}/build/pipelines_builder/template.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: hashicorp/packer
docker: Using default tag: latest
docker: latest: Pulling from hashicorp/packer
docker: 88286f41530e: Pulling fs layer
...
...
docker: 08d16a84c1fe: Pull complete
docker: Digest: sha256:c093ddf4c346297598aaa13d3d12fe4e9d39267be51ae6e225c08af49ec67fc0
docker: Status: Downloaded newer image for hashicorp/packer:latest
==> docker: Starting docker container...
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker426823595:/packer-files -d -i -t hashicorp/packer /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
==> docker: See 'docker run --help'.
==> docker:
Build 'docker' errored: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Some builds didn't complete successfully and had errors:
--> docker: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Builds finished but no artifacts were created.
The following quote says it all (taken from link):
Other commands, such as docker run, are currently forbidden for
security reasons on our shared build infrastructure.
So I know why this is happening. It is a limitation I am faced with, and I am aware that I need to find an alternative.
A Possible Solution:
The only solution I can think of at the moment is a Bitbucket Pipeline, using an image with Terraform and Ansible installed, that does the following (a rough sketch follows the list):
ansible-local:
    terraform apply (spins up an instance/container from an AMI with Ansible and Packer installed)
ansible-remote (to the instance mentioned above):
    clone the devops repo with the packer build script on it
    execute the packer build command (the build depends on Ansible and creates an EC2 Container Registry image)
ansible-local:
    terraform destroy
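For illustration only, a minimal sketch of what such a bitbucket-pipelines.yml could look like, assuming a hypothetical pre-built image with Terraform and Ansible installed; the image, inventory and playbook names below are made up:

image: my-org/terraform-ansible:latest      # hypothetical image with Terraform + Ansible

pipelines:
  default:
    - step:
        script:
          # 1. spin up a temporary build instance from an AMI with Packer/Ansible installed
          - terraform init
          - terraform apply -auto-approve
          # 2. run the remote build: clone the devops repo and execute packer build there
          - ansible-playbook -i build_inventory remote_packer_build.yml
          # 3. tear the temporary instance down again
          - terraform destroy -auto-approve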
Is the above solution a viable option? Are there alternatives? Can Packer not run commands and commit from a container running remotely in ECS?
My long-term solution will be to use Bitbucket Pipelines only to trigger Lambda functions in AWS, which will spin up containers (in ECS, from our EC2 Container Registry images) and perform the builds there. More control, and we can have devs trigger the Lambda functions from their machines (with more bespoke dynamic variables).

I set up some Terraform scripts which can be executed from any CI tool, with a few prerequisites:
CI tool must have API access tokens to AWS (the only cloud provider supported so far)
CI tool must be able to run Terraform or dockerized Terraform container
This will spin up a fresh EC2 instance in a VPC of your choosing and execute a script.
For this stack overflow question, that script would contain some packer commands to build and push a docker image. The AMI for the EC2 instance would need packer and docker installed.
Find more info at: https://github.com/dnk8n/remote-provisioner
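As a rough illustration, a Bitbucket step driving such scripts via the dockerized Terraform image might look like the sketch below; the directory name and the -auto-approve flags are assumptions, not taken from the linked repository:

image: hashicorp/terraform

pipelines:
  default:
    - step:
        script:
          # AWS access keys are expected as secured repository variables
          - cd provision                       # hypothetical directory holding the Terraform scripts
          - terraform init
          - terraform apply -auto-approve      # spins up the EC2 instance and runs the build script
          - terraform destroy -auto-approve    # tears it down afterwards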

My understanding is that what is blocking you is that the Bitbucket Pipelines build agents do not have enough permission to do the job (terraform apply, packer build) against your AWS account.
Since Bitbucket Pipelines agents run in Bitbucket Cloud, not in your AWS account (where you could assign an IAM role to them), you should create an IAM user with the required policies and permissions (listed below) and set its AWS API keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN) as environment variables in your pipeline.
You can refer to this document on how to add your AWS credentials to Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/deploy-to-amazon-aws-875304040.html
With that, you can run packer or terraform commands without issues.
For the minimum policies you need to assign in order to run packer build, refer to this document:
https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": [
      "ec2:AttachVolume",
      "ec2:AuthorizeSecurityGroupIngress",
      "ec2:CopyImage",
      "ec2:CreateImage",
      "ec2:CreateKeypair",
      "ec2:CreateSecurityGroup",
      "ec2:CreateSnapshot",
      "ec2:CreateTags",
      "ec2:CreateVolume",
      "ec2:DeleteKeypair",
      "ec2:DeleteSecurityGroup",
      "ec2:DeleteSnapshot",
      "ec2:DeleteVolume",
      "ec2:DeregisterImage",
      "ec2:DescribeImageAttribute",
      "ec2:DescribeImages",
      "ec2:DescribeInstances",
      "ec2:DescribeRegions",
      "ec2:DescribeSecurityGroups",
      "ec2:DescribeSnapshots",
      "ec2:DescribeSubnets",
      "ec2:DescribeTags",
      "ec2:DescribeVolumes",
      "ec2:DetachVolume",
      "ec2:GetPasswordData",
      "ec2:ModifyImageAttribute",
      "ec2:ModifyInstanceAttribute",
      "ec2:ModifySnapshotAttribute",
      "ec2:RegisterImage",
      "ec2:RunInstances",
      "ec2:StopInstances",
      "ec2:TerminateInstances"
    ],
    "Resource": "*"
  }]
}
For terraform plan/apply, you need to assign broader permissions, because Terraform can manage nearly all AWS resources.
Second, for your existing requirement you only need to run packer and terraform commands; you needn't run any docker commands in the Bitbucket pipeline. So a normal pipeline with the above AWS API environment variables should work directly:
image: hashicorp/packer

pipelines:
  default:
    - step:
        script:
          - packer build <your_packer_json_file>
You should be fine running terraform commands in the image hashicorp/terraform as well.

I think I would approach it like this:
Have a Packer build that produces a "Docker build AMI" that you can run on EC2. Essentially it would just be an AMI with Docker pre-installed, plus anything else you need. This Packer build can be stored in another BitBucket Git repo with its own BitBucket pipeline, so that any change to your build AMI is automatically built and registered as a new AMI. As you already suggest, you would use the AWS builder for that.
Have a Terraform script as part of your current project that is called by your BitBucket pipeline to spin up an instance of the above "Docker build" AMI when your pipeline starts (e.g. terraform apply).
Use the Packer Docker Builder on the above EC2 instance to build and push your Docker image to ECR (applying your Ansible scripts).
Run terraform destroy to tear the environment down once the build is complete.
Doing it that way keeps everything in infrastructure as code and should make it fairly trivial for you to move the Docker build into the BitBucket pipeline itself if they offer support for running docker run at any point.

Related

CI pipeline for AWS ECS from gitlab repository

I want to create a pipeline for ECS deployment.
So my .gitlab-ci.yml file is like that:
stages:
  - build
  - test
  - review
  - deploy
  - production
  - cleanup

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS
  CI_AWS_ECS_CLUSTER: shopservice
  CI_AWS_ECS_SERVICE: sample-app-service
  CI_AWS_ECS_TASK_DEFINITION: first-run-task-definition:1
  AWS_REGION: us-west-2

include:
  - template: Jobs/Build.gitlab-ci.yml
  - template: Jobs/Deploy/ECS.gitlab-ci.yml
But my pipeline failed at the production_ecs stage, and I am getting an error:
Using docker image sha256:9920fa2b45873efd675cf992d7130e8a691133fd51ec88b2765d977f82695263 for registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs:latest with digest registry.gitlab.com/gitlab-org/cloud-deploy/aws-ecs@sha256:c833e508b00451a09639e96fb7551f62abb4f75ba1d31f88b4dc8299c608e0dd ...
$ ecs update-task-definition
Unable to locate credentials. You can configure credentials by running "aws configure".
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
Why am I getting this error? How can I configure AWS on ECS?
You need to pass your credentials (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY) as environment variables and also set the region as the environment variable AWS_DEFAULT_REGION. You can set them by going to your project (or group) and navigating to Settings > CI/CD.
You can find the documentation for ecs deployments here: https://docs.gitlab.com/ee/ci/cloud_deployment/#deploy-your-application-to-the-aws-elastic-container-service-ecs
and for configuring the aws cli (which is used in the template and GitLab AWS Docker images) here : https://docs.gitlab.com/ee/ci/cloud_deployment/#run-aws-commands-from-gitlab-cicd
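As a sketch, the non-secret part of that configuration could look like the following, with AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY kept as masked variables under Settings > CI/CD rather than in the file (the region value is just an example):

include:
  - template: Jobs/Build.gitlab-ci.yml
  - template: Jobs/Deploy/ECS.gitlab-ci.yml

variables:
  AUTO_DEVOPS_PLATFORM_TARGET: ECS
  CI_AWS_ECS_CLUSTER: shopservice
  CI_AWS_ECS_SERVICE: sample-app-service
  CI_AWS_ECS_TASK_DEFINITION: first-run-task-definition:1
  AWS_DEFAULT_REGION: us-west-2
  # AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are defined as masked CI/CD variables,
  # not in this file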

How can we pass the path of aws on my local machine to a Jenkins image running in a Docker container?

I am trying to pass the path of aws on my host machine to Jenkins, which will be run in a Docker container. I downloaded the Jenkins image and I am trying to use aws cli commands in a Jenkins pipeline in order to build a Node.js application and then deploy it to an S3 bucket. For that I need the aws cli in the Jenkins image that I am running through Docker. As far as I know, once you run any image in a Docker container it is a separate environment in itself, so Jenkins will not know that I have aws installed on my Mac unless I pass it the path of aws on my Mac, which is what I am trying to do with the
-v $(which aws): $(which aws)
command.
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $(which aws):$(which aws) jenkins/jenkins:2.190.2
However, after I run this container on the command line, it shows the following error response:
docker: Error response from daemon: Mounts denied:
The path /usr/local/bin/aws
is not shared from OS X and is not known to Docker.
According to some of the answers I found on Stack Overflow, I then tried to add the path of aws in Docker's file sharing panel. When I added the path of aws in Docker, it showed that
The path /usr is reserved by Docker however it may be possible to export specific subdirectories.
I have not been able to get around this. I tried adding the whole
/usr/local/bin/aws
path in the Docker file sharing panel, but it still shows the same problem. Does anyone have any idea what else we can do in order to pass the path of aws on my local machine to the Jenkins image that I am trying to run in a Docker container?
You need to install aws-cli in your Docker image, and then you will be able to use aws-cli inside your container.
FROM jenkins/jenkins:2.190.2
USER root
RUN apt-get update && \
apt-get install awscli -y
USER jenkins
-v, or volumes, is not designed to bind host executables; it is designed for files and folders used as persistent storage. If you need an executable, you need to add it to your Docker image.
To be able to save (persist) data and also to share data
between containers, Docker came up with the concept of volumes. Quite
simply, volumes are directories (or files) that are outside of the
default Union File System and exist as normal directories and files on
the host filesystem.
understanding-volumes-docker
For this question
I am trying to use aws CLI command in jenkins pipeline in order to
build nodejs application and then deploy it to s3 bucket.
If you are running inside AWS, you can assign an IAM role to the Jenkins server and you will not be required to bind host credentials.
Or, if you are outside AWS, then you just need to bind-mount the host's AWS config and credentials:
docker run -d -p 8080:8080 -p 50000:50000 -v ~/jenkins_directory:/var/jenkins_home -v $HOME/.aws/:/var/jenkins_home/.aws/ jenkins/jenkins:2.190.2
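For what it is worth, the same run command expressed as a docker-compose.yml sketch (the service name is arbitrary and the host paths are assumptions that may need adjusting):

version: "3"
services:
  jenkins:
    image: jenkins/jenkins:2.190.2
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ~/jenkins_directory:/var/jenkins_home
      # mount the host's AWS config and credentials instead of the aws binary
      - ~/.aws:/var/jenkins_home/.aws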

Running jenkins on docker (on Windows)... Proper steps to run pipeline job

I'm trying to implement docker with jenkins and am not sure if I am on the right track.
Given:
Running jenkins on docker from Windows
Plan on fetching code from github, building the solution, running functional tests, etc on a container somehow
What I've currently done:
(1) Installed Docker on Windows
(2) Successfully launched Jenkins on Docker with the command
"docker run –name myJenkins -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ jenkins/jenkins:lts"
I believe this step binds the docker volume to my host machine's directory. This allows me to view and access the Jenkins content.
(3) In my host machine's Jenkins directory, I've created a plugins.txt (containing a variety of Jenkins plugins I want installed) and a Dockerfile. The Dockerfile installs the plugins specified in the plugins.txt file.
FROM jenkins/jenkins:lts
COPY plugins.txt /usr/share/jenkins/ref/plugins.txt
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt
(4) In the windows command prompt, I built the Dockerfile with the command "docker build -t new_jenkins_image ."
(5) I stop my current container "myJenkins" and create a new container with the command "docker run --name myJenkins2 -p 8080:8080 -p 50000:50000 -v ~/Jenkins:/var/jenkins_home/ new_jenkins_image". This loads up Jenkins with the newly installed jenkins plugins.
What I'm stuck/confused on
(1) Do I have to create a new container with a new name every time I want to install new jenkins plugins through the Dockerfile? This seems like a manual process as well... There has to be a better way.
(2) I started a basic jenkins pipeline job with the "Pipeline script from SCM" option. I entered the correct repository URL and credentials but left the "Script Path" blank for now (I do not have a Jenkinsfile yet). When I executed the build, Jenkins did not fetch the code from github.
java.lang.IllegalArgumentException: Empty path not permitted.
at org.eclipse.jgit.treewalk.filter.PathFilter.create(PathFilter.java:80)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:205)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:249)
at org.eclipse.jgit.treewalk.TreeWalk.forPath(TreeWalk.java:281)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:171)
at jenkins.plugins.git.GitSCMFile$3.invoke(GitSCMFile.java:165)
at jenkins.plugins.git.GitSCMFileSystem$3.invoke(GitSCMFileSystem.java:193)
at org.jenkinsci.plugins.gitclient.AbstractGitAPIImpl.withRepository(AbstractGitAPIImpl.java:29)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.withRepository(CliGitAPIImpl.java:72)
at jenkins.plugins.git.GitSCMFileSystem.invoke(GitSCMFileSystem.java:189)
at jenkins.plugins.git.GitSCMFile.content(GitSCMFile.java:165)
at jenkins.scm.api.SCMFile.contentAsString(SCMFile.java:338)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:110)
at org.jenkinsci.plugins.workflow.cps.CpsScmFlowDefinition.create(CpsScmFlowDefinition.java:67)
at org.jenkinsci.plugins.workflow.job.WorkflowRun.run(WorkflowRun.java:293)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Finished: FAILURE
I believe it's because the docker container does not have git installed? The container cannot access the Git or MSBuild from my host machine... Do I have to create a new container here to simply fetch the code?
Can someone explain to me what I'm missing or where I went wrong?
From my understanding, the process goes like this: Create new pipeline job -> select pipeline script from scm -> enter repo URL, credentials, branch to build and Jenkinsfile -> Jenkinsfile will execute instructions to compile, test, and deploy.
Where does the Dockerfile come into play here? Is my thought process on the right track?
You need to create a new container every time you change/update the image, but it is not required to give it a new name each time. Did you stop and remove the previously running container? If not, Docker gives errors like "a container with the same name cannot start". So stop and remove your previous container, and you will be able to start a new container (with the same name) from the updated image.
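One way to avoid the manual stop/remove/run cycle is a small compose file; this is only a sketch, assuming the Dockerfile from step (3) sits in the same directory, so that docker-compose up -d --build rebuilds the image and recreates the container under the same name:

version: "3"
services:
  jenkins:
    build: .                     # builds the image from the Dockerfile in this directory
    container_name: myJenkins
    ports:
      - "8080:8080"
      - "50000:50000"
    volumes:
      - ~/Jenkins:/var/jenkins_home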
Yes, you need to install Git in the same container to pull the code; it cannot access Git on the host machine. But the error you are showing looks like a validation error (I mean Jenkins validates the input even before trying to pull the code; if you add some fake path it will throw the next error, e.g. that Git is not found).
Your thought process is on the right track: create a new pipeline job -> select pipeline script from SCM -> enter repo URL, credentials, branch to build and Jenkinsfile -> the Jenkinsfile will execute instructions to compile, test, and deploy.
At the end of the question you mentioned a different Dockerfile; I assume you are talking about the Dockerfile in your repository (Git). You can run your pipeline in a Docker agent. This removes the need to set everything up on the Jenkins host, meaning you don't need to install dependencies on the host to run your pipeline code. For example, if you are trying to execute some Node.js code in the pipeline, you would need to set up Node.js on the Jenkins host before you run it; to get rid of this you can run the pipeline in a container where everything is pre-installed. But I don't think you can use this feature if you are running Jenkins itself in Docker; you would need to set up Jenkins directly on the host in that case.

How to deploy web application to AWS instance from GitLab repository

Right now, I deploy my (Spring Boot) application to EC2 instance like:
Build JAR file on local machine
Deploy/Upload JAR via scp command (Ubuntu) from my local machine
I would like to automate that process, but:
without using Jenkins + Rundeck CI/CD tools
without using AWS CodeDeploy service since that does not support GitLab
Question: Is it possible to perform these 2 simple steps (that are now done manually - building and deploying via scp) with GitLab CI/CD tools, and if so, can you present simple steps to do it?
Thanks!
You need to create a .gitlab-ci.yml file in your repository with CI jobs defined to do the two tasks you've described.
Here's an example to get you started.
stages:
  - build
  - deploy

build:
  stage: build
  image: gradle:jdk
  script:
    - gradle build
  artifacts:
    paths:
      - my_app.jar

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update
    - apt-get -y install openssh-client
    - scp my_app.jar target.server:/my_app.jar
In this example, the build job runs a Gradle container and uses Gradle to build the app. GitLab CI artifacts are used to capture the built jar (my_app.jar), which is passed on to the deploy job.
The deploy job runs an Ubuntu container, installs openssh-client (for scp), then executes scp to copy my_app.jar (passed from the build job) to the target server.
You have to fill in the actual details of building and copying your app. For secrets like SSH keys, set project level CI/CD variables that will be passed in to your CI jobs.
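For example, a hedged sketch of how the deploy job could consume an SSH private key stored in a CI/CD variable; the variable name SSH_PRIVATE_KEY, the user and the target path are assumptions:

deploy:
  stage: deploy
  image: ubuntu:latest
  script:
    - apt-get update && apt-get -y install openssh-client
    # load the key from the SSH_PRIVATE_KEY CI/CD variable into an ssh agent
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
    - scp -o StrictHostKeyChecking=no my_app.jar user@target.server:/home/user/my_app.jar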
Create a shell file with the following contents.
#!/bin/bash
# Copy JAR file to EC2 via SCP with PEM in home directory (usually /home/ec2-user)
scp -i user_key.pem file.txt ec2-user@my.ec2.id.amazonaws.com:/home/ec2-user
# SSH to EC2 instance
ssh -T -i "bastion_keypair.pem" ec2-user@y.ec2.id.amazonaws.com /bin/bash <<-'END2'
# The following commands will be executed automatically by bash.
# Consider this as a remote shell script.
killall java
java -jar ~/myJar.jar server ~/config.yml &>/dev/null &
echo 'done'
# Once completed, the shell will exit.
END2
In 2020, this should be easier with GitLab 13.0 (May 2020), using the older Auto DevOps feature (introduced in GitLab 11.0, June 2018).
Auto DevOps provides pre-defined CI/CD configuration allowing you to automatically detect, build, test, deploy, and monitor your applications.
Leveraging CI/CD best practices and tools, Auto DevOps aims to simplify the setup and execution of a mature and modern software development lifecycle.
Overview
But now (May 2020):
Auto Deploy to ECS
Until now, there hasn’t been a simple way to deploy to Amazon Web Services. As a result, Gitlab users had to spend a lot of time figuring out their own configuration.
In Gitlab 13.0, Auto DevOps has been extended to support deployment to AWS!
Gitlab users who are deploying to AWS Elastic Container Service (ECS) can now take advantage of Auto DevOps, even if they are not using Kubernetes. Auto DevOps simplifies and accelerates delivery and cloud deployment with a complete delivery pipeline out of the box. Simply commit code and Gitlab does the rest! With the elimination of the complexities, teams can focus on the innovative aspects of software creation!
In order to enable this workflow, users need to:
define AWS typed environment variables: ‘AWS_ACCESS_KEY_ID’, ‘AWS_ACCOUNT_ID’ and ‘AWS_REGION’, and
enable Auto DevOps.
Then, your ECS deployment will be automatically built for you with a complete, automatic, delivery pipeline.
See documentation and issue

gitlab-runner - docker in docker (dind) and push to GitLab registry

I wish to use the GitLab Container Registry to temporarily store my newly built Docker image; in order to have Docker work (i.e. docker login, docker build, docker push), I applied the docker-in-docker executor; then, from the GitLab Pipelines error messages, I realized I need to place a Dockerfile at the project root:
$ docker build --pull -t $CONTAINER_TEST_IMAGE .
unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /builds/xxxxx.com/laravel/Dockerfile: no such file or directory
My Dockerfile includes centos:7, PHP, Node.js, Composer and Sass installations. I observe that after each commit, the GitLab runner goes through the Dockerfile once and installs all of them from the beginning, which makes the whole build stage very slow and feels odd: why, when I only want to amend one word in my project, do I need to install so many things again for deployment?
In my imagination, it would be nice if I could pre-build a Docker image from a Dockerfile that contains the installations mentioned above plus Docker itself (so that docker login, docker build and docker push can work), store it on the GitLab runner server, and after each commit reuse this image to build the new image to be pushed to the GitLab Container Registry.
However, I faced 2 problems:-
1) Even if I include the Docker installation in the pre-built Docker image, I cannot start Docker with systemctl (systemctl start docker), due to a D-Bus problem:
Failed to get D-Bus connection: Operation not permitted
moreover, some articles also mention that Docker running in a container should not run background services;
2) when I use dind, it requires a Dockerfile at the project root; with a pre-built Docker image, I actually have no use for this Dockerfile at the project root; hence, is dind the wrong option?
Actually, what is the proper way to push a Laravel project image to the GitLab Container Registry? (Where should those npm install and composer install commands be placed?)
image: docker:latest

services:
  - docker:dind

stages:
  - build
  - test
  - deploy

variables:
  CONTAINER_TEST_IMAGE: xxxx
  CONTAINER_RELEASE_IMAGE: yyyy

before_script:
  - docker login -u xxx -p yyy registry.gitlab.com

build:
  stage: build
  script:
    - npm install here?
    - docker build --pull -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
There are many questions in your post. I would like to target them as follows:
You can pre-build a Docker image and then use it in your .gitlab-ci.yml file. This can be used to add your specific dependencies.
image: my-custom-image    # your pre-built image
services:
  - docker:dind
It is important to add the service to the configuration.
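To make that concrete, a hedged sketch of a build job that uses a pre-built image together with dind and GitLab's predefined registry variables (the pre-built image name is a placeholder; the rest relies on standard predefined CI variables):

image: my-custom-image    # pre-built image with node, composer, the docker client, etc.

services:
  - docker:dind

build:
  stage: build
  script:
    # log in to the GitLab Container Registry using predefined CI variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"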
Regarding your problem about trying to run the Docker service inside the gitlab-ci.yml: you actually don't need to do that. GitLab exposes the Docker engine to the executor (either via unix:///var/run/docker.sock or tcp://localhost:2375/). Note that if the runners are executed in a Kubernetes environment, you need to specify DOCKER_HOST as follows:
variables:
  DOCKER_HOST: tcp://localhost:2375/
Your question about where to place npm install is more a fundamental question about how Docker images are built. In short, npm install should be placed in the Dockerfile. For a longer description, please see this for example.
Some references:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-executor
https://nodejs.org/en/docs/guides/nodejs-docker-webapp/
