Basically I want to create a "snapshot" of my current Ubuntu box, which has compiled binaries and various apt-get packages installed on it. I want to turn this into a Docker image, stored as a file in an S3 bucket that my AWS EC2 instances can mount (or download), so I can distribute it to them.
Is it possible to achieve this, and how do you get started?
You won't be able to take a snapshot of a current box and use it as a docker container, but you can certainly create a container to use on your EC2 instances.
Create a Dockerfile that builds the system exactly as you want it.
Once you've created the perfect Dockerfile, export a container to a tarball
Upload the tarball to S3
On your EC2 instances, download the tarball and import it as a Docker image (a sketch of these steps follows below).
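A minimal sketch of steps 2–4, assuming an image tagged myimage, a container created from it, and a bucket named your-bucket (all placeholder names):

# Build the image from your Dockerfile and create a container from it
docker build -t myimage .
docker create --name snapshot myimage

# Export the container's filesystem to a tarball and upload it to S3
docker export snapshot | gzip > myimage.tar.gz
aws s3 cp myimage.tar.gz s3://your-bucket/myimage.tar.gz

# On each EC2 instance: download the tarball and import it as an image
aws s3 cp s3://your-bucket/myimage.tar.gz .
gunzip -c myimage.tar.gz | docker import - myimage:latest

Note that docker export/import flattens the filesystem and drops image metadata such as CMD and EXPOSE, whereas docker save/docker load preserves it; either pair works with the same S3 workflow.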
Are you planning to use something like s3fs to mount an S3 bucket? Otherwise you can just copy the tarball from your bucket either in a user-data boot script or during a Chef/Puppet/Ansible provisioning step. It depends on how you want to structure it.
I am attempting to update Grafana dashboards/data sources automatically inside a Grafana Docker image, using the relevant exported JSON, which is stored (and routinely updated) in GitHub/Bitbucket.
E.g.:
Docker image running Grafana
The Dockerfile adds a Bash script which pulls from a Git source,
The script then copies the JSON files into the relevant directories (/etc/grafana/provisioning/datasource + /dashboards).
Graphs and data sources are updated without manual intervention (other than updating the JSON stored in GitHub or Bitbucket).
I have exec'd into the Grafana Docker container; Grafana runs on a very basic Linux system, so practically none of the usual commands (e.g. git, wget, apt) are available.
Would I be silly in thinking I should create a Dockerfile from the base Debian image, run an apt update and install git inside, and then somehow run Grafana and the script inside that image?
Please feel free to ask for more information.
Consider a simpler approach that uses docker volumes:
The Grafana container uses Docker volumes for /etc/grafana/provisioning/datasource + /dashboards.
Those Docker volumes are shared with another Docker container that you create.
Your Docker container runs an incoming-webhook server that is publicly reachable.
If that webhook is triggered, then your script runs.
That script git-pulls the changes from your repo and copies the JSON files into the relevant directories. The "relevant directories" are the Docker volumes shared between your container and the Grafana container.
You register a webhook in the GitHub repo that fires on each push to master.
The whole process is automated and looks like this:
You push to master in your GitHub repo with the relevant sources.
Your container running the incoming-webhook server is poked by GitHub.
Your container executes a script.
That script git-pulls the GitHub repo and copies the JSON files into the shared volumes.
If you need, for example, to restart the Grafana container from that script, you can mount the Docker socket (-v /var/run/docker.sock:/var/run/docker.sock) and execute docker commands from inside the container. A sketch of this setup follows below.
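A minimal sketch of the wiring with plain docker commands, assuming named volumes and a generic webhook image (grafana-datasources, grafana-dashboards, your-webhook-image, and the port are all placeholders):

# Shared volumes for the provisioning data (names are illustrative)
docker volume create grafana-datasources
docker volume create grafana-dashboards

# Grafana mounts the shared volumes as its provisioning directories
docker run -d --name grafana -p 3000:3000 \
  -v grafana-datasources:/etc/grafana/provisioning/datasources \
  -v grafana-dashboards:/etc/grafana/provisioning/dashboards \
  grafana/grafana

# The webhook container mounts the same volumes plus the Docker socket,
# and runs your git-pull-and-copy script whenever GitHub calls the webhook
docker run -d --name grafana-sync -p 9000:9000 \
  -v grafana-datasources:/etc/grafana/provisioning/datasources \
  -v grafana-dashboards:/etc/grafana/provisioning/dashboards \
  -v /var/run/docker.sock:/var/run/docker.sock \
  your-webhook-image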
I am using Kubernetes, provided by Azure Kubernetes Service (AKS), to deploy all my microservices.
Whenever I release an update of a microservice, which has been happening frequently over the last month, it pulls the new image from the Azure Container Registry.
I was trying to figure out where these images reside in the cluster.
Docker stores pulled images in /var/lib/docker, and since Kubernetes uses Docker under the hood, maybe it stores the images somewhere similar.
But if this is the case, how can I delete the old images from the cluster that are not in use anymore?
Clusters with Linux node pools created on Kubernetes v1.19 or greater default to containerd as their container runtime (see Container runtime configuration).
To manually remove unused images on a node running containerd:
Identify node names:
kubectl get nodes
Start an interactive debugging container on a node (Connect with SSH to Azure Kubernetes Service):
kubectl debug node/aks-agentpool-11045208-vmss000003 -it --image=mcr.microsoft.com/aks/fundamental/base-ubuntu:v0.0.11
Set up crictl in the debugging container (check for newer releases of crictl):
The host node's filesystem is available at /host, so configure crictl to use the host node's containerd.sock.
curl -sL https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.23.0/crictl-v1.23.0-linux-amd64.tar.gz | tar xzf - -C /usr/local/bin \
&& export CONTAINER_RUNTIME_ENDPOINT=unix:///host/run/containerd/containerd.sock IMAGE_SERVICE_ENDPOINT=unix:///host/run/containerd/containerd.sock
Remove unused images on the node:
crictl rmi --prune
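If you want to review what is on the node before pruning everything, you can list images and remove them individually first (assuming crictl is configured as in the previous step; the image reference is a placeholder):

crictl images
crictl rmi <image-id-or-reference>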
You are correct in guessing that it's mostly up to Docker, or rather to whatever the active CRI plugin is. The kubelet automatically cleans up old images when disk space runs low, so it's rare that you ever need to touch this directly, but if you did (and are using Docker as your runtime) then it would be the same docker image commands as normal.
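For reference, the kubelet's image garbage collection is threshold-based: it starts removing unused images once disk usage on the image filesystem crosses a high-water mark and stops once it drops below a low-water mark. A sketch of the relevant kubelet flags (the values shown are the upstream defaults; on a managed AKS node pool you would normally tune these through the node pool's kubelet configuration rather than by editing flags on the node):

# GC kicks in above the high threshold and frees images until below the low threshold
kubelet --image-gc-high-threshold=85 --image-gc-low-threshold=80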
I was trying to figure out where these images reside in the cluster.
Testing confirms that each node in the AKS cluster has the Docker engine installed, and the images are stored just as you say: the image layers are in the directory /var/lib/docker/.
how can I delete the old images from the cluster that are not in use anymore?
You can do this with the Docker CLI inside the node. Follow the steps in Connect with SSH to Azure Kubernetes Service (AKS) cluster nodes to connect to the node, then delete the image with docker rmi image_name:tag. Be careful with it and make sure the image really is no longer needed; a sketch follows below.
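A minimal sketch once you have a shell on the node (the image name is a placeholder, and docker image prune is an assumed shortcut that removes everything no container references):

docker images
docker rmi myregistry.azurecr.io/myservice:1.0.3
docker image prune -a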
I have an AWS EC2 instance running, with a Maven project running on Tomcat 7. I am using Jenkins for CI, so whenever a new push happens to GitHub, Jenkins starts a build and, after the build completes, uploads the WAR file to AWS S3.
Where I am stuck is that I cannot find a way to deploy the WAR file to the AWS EC2 instance.
I have tried to use CodeDeploy, but at one point it showed me that it supports only tar, tar.gz and zip. Is there any way to deploy the WAR file to the AWS EC2 instance from S3?
Thank you.
You can use AWS CodeDeploy, which can manage deployments from an S3 bucket and automate deployment of your files/scripts to EC2 instances.
From the Overview of a Deployment
Here's how it works:
First, you create deployable content – such as web pages, executable files, setup scripts, and so on – on your local development machine or similar environment, and then you add an application specification file (AppSpec file). The AppSpec file is unique to AWS CodeDeploy; it defines the deployment actions you want AWS CodeDeploy to execute. You bundle your deployable content and the AppSpec file into an archive file, and then upload it to an Amazon S3 bucket or a GitHub repository. This archive file is called an application revision (or simply a revision).

Next, you provide AWS CodeDeploy with information about your deployment, such as which Amazon S3 bucket or GitHub repository to pull the revision from and which set of instances to deploy its contents to. AWS CodeDeploy calls a set of instances a deployment group. A deployment group contains individually tagged instances, Amazon EC2 instances in Auto Scaling groups, or both.

Each time you successfully upload a new application revision that you want to deploy to the deployment group, that bundle is set as the target revision for the deployment group. In other words, the application revision that is currently targeted for deployment is the target revision. This is also the revision that will be pulled for automatic deployments.

Next, the AWS CodeDeploy agent on each instance polls AWS CodeDeploy to determine what and when to pull the revision from the specified Amazon S3 bucket or GitHub repository.

Finally, the AWS CodeDeploy agent on each instance pulls the target revision from the specified Amazon S3 bucket or GitHub repository and, using the instructions in the AppSpec file, deploys the contents to the instance.

AWS CodeDeploy keeps a record of your deployments so that you can get information such as deployment status, deployment configuration parameters, instance health, and so on.
The good part is that CodeDeploy has no additional cost; you only pay for the resources (EC2, S3) used in your pipeline.
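To address the tar/tar.gz/zip limitation directly: you don't deploy the bare WAR; you zip the WAR together with an appspec.yml and deploy that bundle. A rough sketch, assuming Tomcat 7's webapps directory and placeholder names (my-app, my-deployment-group, your-bucket) that you would replace with your own:

# Build the CodeDeploy bundle: the WAR plus an appspec.yml
mkdir -p bundle/scripts
cp target/sample.war bundle/

cat > bundle/appspec.yml <<'EOF'
version: 0.0
os: linux
files:
  - source: /sample.war
    destination: /var/lib/tomcat7/webapps
hooks:
  ApplicationStart:
    - location: scripts/restart_tomcat.sh
      timeout: 300
      runas: root
EOF

cat > bundle/scripts/restart_tomcat.sh <<'EOF'
#!/bin/bash
service tomcat7 restart
EOF

# Zip the bundle, push it to S3, and trigger a deployment
(cd bundle && zip -r ../sample-app.zip .)
aws s3 cp sample-app.zip s3://your-bucket/sample-app.zip
aws deploy create-deployment \
  --application-name my-app \
  --deployment-group-name my-deployment-group \
  --s3-location bucket=your-bucket,key=sample-app.zip,bundleType=zip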
Assuming you have already created an S3 bucket:
Step 1: Create an IAM user / role that has access to the S3 bucket where you are placing the WAR file.
Step 2: Write a custom script which will download the WAR file from S3 to your EC2 instance.
You can also use the AWS CLI to download contents from S3 to your local machine.
Create a startup.sh file and add these contents
#!/bin/bash
# Download the WAR from S3, deploy it as the ROOT webapp, and restart Tomcat
aws s3 cp s3://com.yoursitename/warFile/sample.war /tmp
sudo mv /tmp/sample.war /var/lib/tomcat/webapps/ROOT.war
sudo service tomcat restart
Is it possible to create an AMI from an ISO?
I am implementing a build system which takes a base ISO, modifies it, installs stuff, and then outputs it both as an .ovf and as an AMI.
The .ovf output works. But for an AMI, all I could figure out is that it needs a pre-existing AMI. Is this correct?
Is there any way to use an ISO and generate an AMI?
Thanks.
When you say "from ISO", that tells me you're looking to create a trusted base VM: you want to install from scratch locally first and import that into EC2 as a trusted private AMI. If you don't mind using veewee, there's an awesome post using veewee instead of Packer here: veewee. It's all set up for CentOS; all you need to do is clone it and tweak it for your use case.
But since you're looking at Packer like I was, what you need is the virtualbox-iso builder in Packer and some AWS CLI commands to upload the OVA and create an AMI out of it; Packer unfortunately doesn't have a post-processor for this. Then you can use Vagrant to reference the new AMI for EC2-based development and use the vagrant-aws plugin to create new AMIs out of your trusted base AMI.
Here are the steps to follow:
1.) Create an S3 bucket for image imports.
2.) Set up your AWS account. Create the 'vmimport' IAM role and policy, as well as an X.509 key and cert pair if you don't have them; you'll need these to register a private AMI. You will also reference the bucket's name in the policy.
3.) Build a VM with VirtualBox using packer's virtualbox-iso builder and have it output an image in ova format.
4.) Use the AWS CLI with your AWS account to upload the OVA to the bucket you created (aws s3 cp).
5.) Register the OVA as an AMI. You will use the aws ec2 import-image command for this. (This part can take a long time: 30 minutes to an hour.)
You can track progress with aws ec2 describe-import-image-tasks. The AMI will appear in your Private AMIs list when it's done. A sketch of steps 4 and 5 follows below.
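A rough sketch of steps 4 and 5, assuming the build produced output.ova, a bucket named my-import-bucket, and a placeholder import task ID (replace all of these with your own values):

# Upload the OVA produced by the virtualbox-iso build
aws s3 cp output.ova s3://my-import-bucket/images/output.ova

# Kick off the import (requires the vmimport role from step 2)
aws ec2 import-image \
  --description "trusted base image" \
  --disk-containers Format=ova,UserBucket="{S3Bucket=my-import-bucket,S3Key=images/output.ova}"

# Poll until the task shows "completed"; the new AMI ID appears in the output
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0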
There's a useful little Vagrant plugin called vagrant-ami which lets you create custom EC2 AMIs:
$ vagrant create-ami new_image --name my-ami --desc "My AMI"
Then you can replace the AMI ID in your Vagrantfile with your custom one.
I have some micro instances with EBS volumes, and from the EC2 console you can right-click and create an AMI image of the whole system.
But I bought some High-Memory Reserved Instances which have 500 GB of storage, so I installed an "instance store" Ubuntu AMI image.
Now I have configured everything on my server and want to create an instance-store AMI image, so that I can launch new servers from it and don't have to install everything again.
How can I do this?
This is how you do it with Ubuntu:
Launch desired instance from here (pick one without EBS storage): http://cloud-images.ubuntu.com/releases/precise/release/
Follow this guide here (look below for hints concerning Ubuntu): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html
First you need to create you public key and certificate using this guide (you will need them later): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-certificate
Also note your AWS Account ID:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-account-id
Upload your pk and cert to your ubuntu instance that you downloaded:
scp -i <path-to-your-ec2key>.pem <your-account-pk>.pem <your-account-cert>.pem ubuntu@<yourinstance>.<yourzone>.compute.amazonaws.com:~/
That puts the pk file and cert file in your home directory on your running instance. Now log in and move them to the /mnt directory so that they do not get included when you bundle your AMI.
Now modify your image to your heart's content.
Install EC2 AMI Tools: sudo apt-get install ec2-ami-tools
Run the following command to create your bundle: ec2-bundle-vol -k <your-account-pk>.pem -c <your-account-cert>.pem -u <user_id>
See the guide above for the rest. You need to upload your bundle to S3 and then register your AMI so you can launch it; a sketch of those two commands follows below.
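A rough sketch of the upload and registration, assuming the default bundle location under /tmp and placeholder bucket/credential values (double-check the flags against the guide above):

# Upload the bundle produced by ec2-bundle-vol (it writes to /tmp by default)
ec2-upload-bundle -b <your-s3-bucket> -m /tmp/image.manifest.xml \
  -a <your-access-key-id> -s <your-secret-access-key>

# Register the uploaded bundle as an instance-store AMI
aws ec2 register-image --image-location <your-s3-bucket>/image.manifest.xml \
  --name "my-instance-store-ami"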
Good Luck!