My problem:
I want a Docker image saved as an artifact in the Amazon EC2 Container Registry (ECR), built by Packer (and Ansible).
My limitations:
The build needs to be triggered by Bitbucket Pipelines. Therefore the build steps need to be executed either in Bitbucket Pipelines itself or in an AWS EC2 instance/container.
This is because not all dev machines necessarily have the permissions/packages to build from their local environment. I only want these images to be built as a result of an automated CI process.
What I have tried:
Using Packer, I am able to build AMIs remotely. And I am able to build Docker images using Packer (built locally and pushed remotely to Amazon ECR).
However the Bitbucket Pipeline, which already executes build steps within a Docker container, does not have access to the Docker daemon, so it cannot run 'docker run'.
The error I receive in Bitbucket Pipelines:
+ packer build ${BITBUCKET_CLONE_DIR}/build/pipelines_builder/template.json
docker output will be in this color.
==> docker: Creating a temporary directory for sharing data...
==> docker: Pulling Docker image: hashicorp/packer
docker: Using default tag: latest
docker: latest: Pulling from hashicorp/packer
docker: 88286f41530e: Pulling fs layer
...
...
docker: 08d16a84c1fe: Pull complete
docker: Digest: sha256:c093ddf4c346297598aaa13d3d12fe4e9d39267be51ae6e225c08af49ec67fc0
docker: Status: Downloaded newer image for hashicorp/packer:latest
==> docker: Starting docker container...
docker: Run command: docker run -v /root/.packer.d/tmp/packer-docker426823595:/packer-files -d -i -t hashicorp/packer /bin/bash
==> docker: Error running container: Docker exited with a non-zero exit status.
==> docker: Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
==> docker: See 'docker run --help'.
==> docker:
Build 'docker' errored: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Some builds didn't complete successfully and had errors:
--> docker: Error running container: Docker exited with a non-zero exit status.
Stderr: docker: Error response from daemon: authorization denied by plugin pipelines: Command not supported..
See 'docker run --help'.
==> Builds finished but no artifacts were created.
The following quote says it all (taken from link):
Other commands, such as docker run, are currently forbidden for
security reasons on our shared build infrastructure.
So, I know why this is happening. It is a limitation I am faced with, and I am aware that I need to find an alternative.
A Possible Solution:
The only solution I can think of at the moment is a Bitbucket Pipeline, using an image with Terraform and Ansible installed, containing the following steps (sketched after this list):
ansible-local:
    terraform apply (spins up an instance/container from an AMI with Ansible and Packer installed)
ansible-remote (to the instance mentioned above):
    clone the devops repo with the Packer build script on it
    execute the packer build command (the build depends on Ansible and creates an EC2 Container Registry image)
ansible-local:
    terraform destroy
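A rough, untested sketch of what that pipeline could look like (the image name, inventory and playbook names are placeholders, not a working configuration):

image: <image-with-terraform-and-ansible>

pipelines:
  default:
    - step:
        script:
          - terraform init
          - terraform apply -auto-approve             # spin up the builder instance from the AMI
          - ansible-playbook -i inventory build.yml   # on the instance: clone the devops repo and run packer build
          - terraform destroy -auto-approve           # tear the builder instance down again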
Is the above solution a viable option? Are there alternatives? Can Packer not run commands and commit from a container running remotely in ECS?
My long term solution will be to only use Bitbucket Pipelines to trigger lambda functions in AWS, which will spin up containers (using images from our EC2 Container Registry) and perform the builds there. More control, and we can have devs trigger the lambda functions from their machines (with more bespoke dynamic variables).
I set up some Terraform scripts which can be executed from any CI tool, with a few prerequisites:
The CI tool must have API access tokens to AWS (the only cloud provider supported so far)
The CI tool must be able to run Terraform or a dockerized Terraform container
This will spin up a fresh EC2 instance in a VPC of your choosing and execute a script.
For this Stack Overflow question, that script would contain some Packer commands to build and push a Docker image. The AMI for the EC2 instance would need Packer and Docker installed.
Find more info at: https://github.com/dnk8n/remote-provisioner
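For this question, a hypothetical version of that script (repository, path and template names are placeholders; it assumes the AMI already has git, Packer and Docker installed) could look like:

#!/bin/bash
set -e
# runs on the freshly provisioned EC2 instance
git clone git@bitbucket.org:<your-team>/devops.git
cd devops/build/pipelines_builder
packer build template.json    # docker builder + ECR post-processors build and push the image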
My understanding of what is blocking you is that the Bitbucket Pipelines build agents do not have enough permission to do the job (terraform apply, packer build) against your AWS account.
Since Bitbucket Pipelines agents run in the Bitbucket cloud, not in your AWS account (where you could assign an IAM role to them), you should create an IAM user with the policies and permissions listed below and assign its AWS API keys (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and optionally AWS_SESSION_TOKEN) as environment variables in your pipeline.
You can refer to this document on how to add your AWS credentials to Bitbucket Pipelines:
https://confluence.atlassian.com/bitbucket/deploy-to-amazon-aws-875304040.html
With that, you can run packer or terraform commands without issues.
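As an illustration only (the values below are placeholders; in Bitbucket Pipelines you would set them as secured repository variables rather than exporting them in the script), the commands then pick the credentials up from the environment:

export AWS_ACCESS_KEY_ID=<your_access_key_id>
export AWS_SECRET_ACCESS_KEY=<your_secret_access_key>
export AWS_SESSION_TOKEN=<optional_session_token>   # only needed for temporary credentials
packer build template.json                          # the amazon builders read the credentials from the environment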
For the minimum policies you need to assign to run packer build, refer to this document:
https://www.packer.io/docs/builders/amazon.html#using-an-iam-task-or-instance-role
{
"Version": "2012-10-17",
"Statement": [{
"Effect": "Allow",
"Action" : [
"ec2:AttachVolume",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:CopyImage",
"ec2:CreateImage",
"ec2:CreateKeypair",
"ec2:CreateSecurityGroup",
"ec2:CreateSnapshot",
"ec2:CreateTags",
"ec2:CreateVolume",
"ec2:DeleteKeypair",
"ec2:DeleteSecurityGroup",
"ec2:DeleteSnapshot",
"ec2:DeleteVolume",
"ec2:DeregisterImage",
"ec2:DescribeImageAttribute",
"ec2:DescribeImages",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"ec2:DescribeSecurityGroups",
"ec2:DescribeSnapshots",
"ec2:DescribeSubnets",
"ec2:DescribeTags",
"ec2:DescribeVolumes",
"ec2:DetachVolume",
"ec2:GetPasswordData",
"ec2:ModifyImageAttribute",
"ec2:ModifyInstanceAttribute",
"ec2:ModifySnapshotAttribute",
"ec2:RegisterImage",
"ec2:RunInstances",
"ec2:StopInstances",
"ec2:TerminateInstances"
],
"Resource" : "*"
}]
}
For terraform plan/apply, you may need to assign broader permissions, because Terraform can manage nearly all AWS resources.
Second, for your current requirement you only need to run packer and terraform commands; you don't need to run any docker commands in the Bitbucket pipeline.
So a normal pipeline with the above AWS API environment variables should work directly:
image: hashicorp/packer

pipelines:
  default:
    - step:
        script:
          - packer build <your_packer_json_file>
You should be able to run Terraform commands in the hashicorp/terraform image as well.
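For example, a minimal sketch of an extra step using that image (Terraform backend and variable configuration omitted; -auto-approve just keeps the step non-interactive):

- step:
    image: hashicorp/terraform
    script:
      - terraform init
      - terraform apply -auto-approve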
I think I would approach it like this:
Have a Packer build that produces a "Docker build AMI" that you can run on EC2. Essentially it would just be an AMI with Docker pre-installed, plus anything else you need. This Packer build can be stored in another BitBucket Git repo, and you can have this image built and pushed to EC2 through another BitBucket pipeline, so that any changes to your build AMI are automatically built and pushed as an AMI. As you already suggest, you would use the AWS builder here for that.
Have a Terraform script as part of your current project that is called by your BitBucket pipeline to spin up an instance of the above "Docker build" AMI when your pipeline starts, e.g. terraform apply.
Use the Packer Docker builder on that EC2 instance to build and push your Docker image to ECR (applying your Ansible scripts); see the sketch below.
Terraform destroy the environment once the build is complete
Doing that keeps everything in infrastructure as code and should make it fairly trivial for you to move the Docker build locally into the BitBucket pipeline if they offer support for running docker run at any point.
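A minimal sketch of the Packer template referred to above (Docker builder, Ansible provisioner, and ECR push via post-processors) might look like the following; the account ID, region, repository name, base image and playbook path are all placeholders:

{
  "builders": [
    {
      "type": "docker",
      "image": "ubuntu:16.04",
      "commit": true
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "./playbook.yml"
    }
  ],
  "post-processors": [
    [
      {
        "type": "docker-tag",
        "repository": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app",
        "tag": "latest"
      },
      {
        "type": "docker-push",
        "ecr_login": true,
        "login_server": "https://123456789012.dkr.ecr.us-east-1.amazonaws.com/"
      }
    ]
  ]
}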
I'm new to Jenkins and I have been searching around but I couldn't find what I was looking for.
I'd like to know how to run a docker command in Jenkins (Build - Execute Shell):
Example: docker run hello-world
I have set Docker Installation to "Install latest from docker.io" in Jenkins' Configure System and have also installed several Docker plugins. However, it still didn't work.
Can anyone help me point out what else should I check or set?
John
One of the following plugins should work fine:
CloudBees Docker Custom Build Environment Plugin
CloudBees Docker Pipeline Plugin
I normally run my builds on slave nodes that have docker pre-installed.
I came across another generic solution. Since I'm not expert creating a Jenkins-Plugin out of this, here the manual steps:
Create/change your Jenkins container (I use Portainer) with the environment variable DOCKER_HOST=tcp://192.168.1.50 (when using the unix socket protocol you also have to mount your Docker socket) and append :/var/jenkins_home/bin to the existing PATH variable
On your Docker host, copy the docker binary into the Jenkins container: docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/
Restart the Jenkins container
Now you can use the docker command from any script or from the command line. The changes will persist across image updates.
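A hypothetical command-line version of that setup (the IP address, port, image tag and volume name are placeholders; via Portainer you would set the same environment variable in the container's configuration):

docker run -d --name jenkins \
  -e DOCKER_HOST=tcp://192.168.1.50:2375 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
docker exec jenkins mkdir -p /var/jenkins_home/bin
# copy the docker CLI from the host into a directory on Jenkins' PATH
docker cp /usr/bin/docker jenkins:/var/jenkins_home/bin/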
This is an abstract question and I hope that I am able to describe this clear.
Basically: what is the workflow for distributing source code to a Kubernetes cluster that is running in production? Since you don't run Docker with -v in production, how do you update running pods?
In production:
Do you use SaltStack to update each container in each pod?
Or
Do you rebuild Docker images and restart every pod?
Locally:
With Vagrant you can share a local folder for source code. With Docker you can use -v, but if you have Kubernetes running locally, how would you mirror production as closely as possible?
If you use Vagrant with boot2docker, how can you combine this with Docker -v?
The short answer is that you shouldn't "distribute source code"; you should rather "build and deploy". In terms of Docker and Kubernetes, you build by building and uploading the container image to the registry and then performing a rolling update with Kubernetes.
It would probably help to take a look at the specific example script, but the gist is in the usage summary of the current Kubernetes CLI:
kubecfg [OPTIONS] [-u <time>] [-image <image>] rollingupdate <controller>
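Put together, the build-and-deploy loop described above might look roughly like this (registry, image and controller names are placeholders):

docker build -t registry.example.com/myapp:v2 .
docker push registry.example.com/myapp:v2
kubecfg -image registry.example.com/myapp:v2 rollingupdate myapp-controller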
If you intend to try things out in development, and are looking for instant code update, I'm not sure Kubernetes helps much there. It has been designed for production systems, and shadow deploys are not the kind of thing one does sanely.
I have a question about Docker and using it in development on Windows.
I have boot2docker installed and I am able to run a container and access it with the IP provided by the "boot2docker ip" command. But how should I set up my project on Windows to edit the code of my app in the container? For example, I have a container with lighttpd and some HTML5 and JS app inside. How can I enable my host machine (Windows) to access this code?
I know I could just make a git repository on my local machine and commit code to a remote repo in the container, but it is not very practical.
If I understand correctly, you're developing inside a Docker container and you are trying to get your source out of that container?
I guess the easiest way would be to put your source inside a volume shared with the boot2docker VM and then use scp to get those files back.
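For example (a sketch only; the paths and image name are placeholders, 192.168.59.103 is just the usual boot2docker default IP, and the VM's SSH user is docker):

# on the boot2docker VM: run the container with the source in a VM-side volume
docker run -d -v /mnt/sda1/myapp:/var/www/myapp my-lighttpd-image
# from the host: copy the files back over scp (check the IP with "boot2docker ip")
scp -r docker@192.168.59.103:/mnt/sda1/myapp ./myapp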
On the other hand, if you want to share a folder between the boot2docker VM and your Windows host, I suggest you read this tutorial: http://www.incrediblemolk.com/sharing-a-windows-folder-with-the-boot2docker-vm/
Hope it was helpful.
I have Jenkins running on EC2; I use the standard Amazon AMI based on CentOS.
I would like to set up the SLOCCOUNT plugin the same way it runs on my dev machine (running Ubuntu), but I can't find the package in the Amazon AWS package repository (searching for sloccount* brings no answer).
Does anyone know if SLOCCOUNT is in the AWS repository and, if so, what its name is?
Thanks in advance
didier
It's not in the AWS repository but I've had luck getting the rpm from the fedora repos.
wget http://www.mirrorservice.org/sites/download.fedora.redhat.com/pub/fedora/linux/development/rawhide/x86_64/os/Packages/s/sloccount-2.26-12.fc17.x86_64.rpm
worked for me.
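After downloading it, something like the following should install it, assuming the Fedora package's dependencies resolve on the Amazon AMI:

sudo yum localinstall sloccount-2.26-12.fc17.x86_64.rpm
# or, without dependency resolution:
sudo rpm -ivh sloccount-2.26-12.fc17.x86_64.rpm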