Is it possible to create an AMI from an ISO?
I am implementing a build system which takes a base ISO, modifies it, installs packages, and then outputs it as both an .ovf and an AMI.
The .ovf output works. But for the AMI, all I could figure out is that it needs a pre-existing AMI. Is this correct?
Is there any way to start from an ISO and generate an AMI?
Thanks.
When you say from ISO, that tells me you're looking to create a trusted base VM. You want to install from scratch locally first and import that to EC2 as a trusted private AMI. If you don't mind using veewee, there's an awesome post on doing this with veewee instead of Packer here: veewee. It's all set up for CentOS; all you need to do is clone it and tweak it for your use case.
But since you're looking for Packer like I was, what you need is the virtualbox-iso builder in Packer plus some aws-cli commands to upload the OVA and create an AMI out of it. Packer doesn't have a post-processor for this, unfortunately. Then you can use Vagrant to reference the new AMI for EC2-based development, and use the vagrant-aws plugin to create new AMIs out of your trusted base AMI.
Here are the steps to follow:
1.) Create an S3 bucket for image imports.
2.) Set up your AWS account. Create the 'vmimport' IAM role and policy, as well as an X.509 key and certificate pair if you don't already have them. You'll need these to register a private AMI. You will also reference the bucket's name in the policy.
3.) Build a VM with VirtualBox using Packer's virtualbox-iso builder and have it output an image in OVA format.
4.) Use aws-cli with your AWS account to upload the OVA to the bucket you created (the aws s3 cp command).
5.) Register the OVA as an AMI. You will use the aws ec2 import-image command for this. (This part can take a long time: 30 minutes to an hour.)
You can track progress with aws ec2 describe-import-image-tasks. The AMI will appear in your Private AMIs list when it's done. A sketch of these commands follows.
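Here is a minimal sketch of the workflow with aws-cli; the bucket name, template and file names, the policy JSON files, and the import-task ID are placeholders you would adapt:
# 1.) bucket for image imports
$ aws s3 mb s3://my-import-bucket
# 2.) create the 'vmimport' role and attach its policy; trust-policy.json
#     and role-policy.json follow AWS's documented vmimport examples and
#     must reference my-import-bucket
$ aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json
$ aws iam put-role-policy --role-name vmimport --policy-name vmimport --policy-document file://role-policy.json
# 3.) build the OVA (template.json uses the virtualbox-iso builder with "format": "ova")
$ packer build template.json
# 4.) upload the OVA that Packer produced
$ aws s3 cp output-virtualbox-iso/base.ova s3://my-import-bucket/base.ova
# 5.) register it as an AMI, then poll the task until it completes
$ aws ec2 import-image --description "trusted base" --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=base.ova}"
$ aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0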
There is a useful little Vagrant plugin called vagrant-ami which lets you create custom EC2 AMIs:
$ vagrant create-ami new_image --name my-ami --desc "My AMI"
Then you can replace the AMI ID in your Vagrantfile with your custom one.
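If you don't have the plugin yet, it installs like any other Vagrant plugin (assuming it is published under that name):
$ vagrant plugin install vagrant-ami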
I have set up a Windows server on Google Cloud. I also have a Google Storage Bucket. I want to transfer a zip to the VM. How can I do this?
I figured this out. Follow the directions to create your VM and storage bucket.
Start your VM and RDP into the server. THEN, from WITHIN the VM instance (in the Google Cloud SDK shell), run:
gsutil -m cp -r gs://[bucket]/[your file] C:/Users/[computer name]/[location]
Replace [computer name] with your user's name on Windows, and replace [location] with the folder you want to transfer the file to.
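For example, with a hypothetical bucket my-bucket, a file backup.zip, and a user named admin:
gsutil -m cp -r gs://my-bucket/backup.zip C:/Users/admin/Downloads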
I have a very interesting problem. Following is my current workflow of deployment in Amazon EC2 in classic mode.
Deploy host inside my Company's network.
Deploy Target is EC2 machine in AWS.
Custom Ruby gems live inside the company's Git account (hence gems cannot be installed from outside my company's network).
To overcome the problem mentioned in point #3, I have used reverse tunnelling between the deploy host and the deploy target (a sketch is below).
I am using capistrano for deployment.
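The reverse tunnel is plain SSH; a minimal sketch, with made-up host names and ports, that exposes an internal gem server to the target:
# forward the target's local port 8808 back to the internal gem server
$ ssh -R 8808:gems.internal.example.com:80 ec2-user@deploy-target
# on the target, gem sources can then point at http://localhost:8808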
Now the problem arises because we decided to move from Amazon Classic to Amazon VPC, with the deploy target having only a private IP address. Here is the workflow I thought of for deploying code to VPC instances:
Create a deploy host in Amazon VPC and attach a public DNS name to it so that I can access it from my main deploy host (which is inside my company's network).
Deploy the code by running the deployment scripts from AWS deploy host.
The problem is that I am not able to find a way to install gems which are hosted inside my company's Git account. Can you guys help me with this problem?
Prior to deployment, you can just set up Git mirrors of your production repositories by pushing to bare Git repositories on your AWS deploy host.
Then that AWS deploy host also has access to your VPC so you can do the deployment from there.
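A minimal sketch of that mirroring, with placeholder host and repository names:
# on the AWS deploy host: create a bare repository to push into
$ ssh deploy@aws-deploy-host 'git init --bare /srv/git/my-gem.git'
# from inside the company network: mirror the internal repo up
$ git clone --mirror git@internal-git:my-gem.git
$ cd my-gem.git
$ git push --mirror deploy@aws-deploy-host:/srv/git/my-gem.git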
Hope it helps.
Download the gems first and then copy them to the EC2 instance in the VPC using scp:
scp -r -i key vendor/cache ubuntu@ip-address:~/ruby-app
Then run gem install gem-name from inside that folder; it will install the gem from the local files whose names match.
To get the gems in the first place, run bundle package; this downloads all the gems into the vendor/cache folder. Then move those files to the EC2 instance, as sketched below.
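Putting it together, a sketch of the whole flow (key file, address, and paths are placeholders):
# on a machine with access to the internal gem server
$ bundle package                  # downloads gems into vendor/cache
$ scp -r -i key vendor/cache ubuntu@ip-address:~/ruby-app/vendor/cache
# on the EC2 instance
$ cd ~/ruby-app && bundle install --local    # installs only from vendor/cache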
Basically I want to create a "snapshot" of my current Ubuntu box, which has compiled binaries and various apt-get packages installed on it. I want to turn this into a Docker image, as a file that I can distribute to my AWS EC2 instances; it will be stored in an S3 bucket that the EC2 instances mount.
Is it possible to achieve this, and how do you get started?
You won't be able to take a snapshot of a current box and use it as a docker container, but you can certainly create a container to use on your EC2 instances.
Create a Dockerfile that builds the system exactly as you want it.
Once you've created the perfect Dockerfile, export a container to a tarball
Upload the tarball to S3
On your EC2 instances, download the tarball and import it as a Docker image (a sketch of all four steps follows).
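A minimal sketch of those steps, with placeholder image, container, and bucket names:
# build the image from your Dockerfile
$ docker build -t mybase .
# run a container from it, then export its filesystem to a tarball
$ docker run --name export-me mybase true
$ docker export export-me > mybase.tar
# upload the tarball to S3
$ aws s3 cp mybase.tar s3://my-bucket/mybase.tar
# on each EC2 instance: download and import as an image
$ aws s3 cp s3://my-bucket/mybase.tar .
$ docker import mybase.tar mybase:latest
Note that docker export/import flatten the image; docker save and docker load would preserve layers and metadata instead.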
Are you planning to use something like s3fs to mount an S3 bucket? Otherwise you can just copy the tarball from your bucket either as a userdata boot script or during a chef/puppet/ansible provisioning step. Depends how you want to structure it.
How can I name my Amazon EC2 EBS volumes using the AWS Console? By default the name field is empty, and I can see no option to edit this, unlike the actual EC2 instance.
The "name" field ist just a tag. To edit this, klick on your EBS volume, go to "Tags" in the lower panel and you will already find a tag "Name" there. Fill in your desired name as the value and it will also show up in the overview panel.
The "Name" that shows up in the AWS console is just a tag. This applies to AMIs, volumes, instances, etc..
You can change it from the command line using the ec2-api-tools with the following command:
ec2addtag <entity_id> --tag Name=<name>
You can install ec2-api-tools (Ubuntu) by calling:
sudo apt-get install ec2-api-tools
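For example, to name a volume (the volume ID is a placeholder):
$ ec2addtag vol-1a2b3c4d --tag Name=my-data-volume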
I have some micro instances with EBS volumes, and from the EC2 console you can right-click and create an AMI image of the whole system.
But I bought some High-Memory Reserved Instances which had 500GB of storage, so I installed an "instance store" Ubuntu AMI image.
Now I have configured everything on my server and want to create an instance-store AMI image, so that I can launch that image on new servers and don't have to install everything again.
How can I do this?
This is how you do it with Ubuntu:
Launch desired instance from here (pick one without EBS storage): http://cloud-images.ubuntu.com/releases/precise/release/
Follow this guide here (look below for hints concerning Ubuntu): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html
First you need to create your public key and certificate using this guide (you will need them later): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-certificate
Also note your AWS Account ID:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-account-id
Upload your pk and cert to the Ubuntu instance that you launched:
scp -i <path-to-your-ec2key>.pem <your-account-pk>.pem <your-account-cert>.pem ubuntu@<yourinstance>.<yourzone>.compute.amazonaws.com:~/
That puts the pk-file and cert-file in your home directory in your running instance. Now log in and move these to the /mnt directory so that they do not get included when you bundle your AMI.
Now modify your image to your heart's content.
Install EC2 AMI Tools: sudo apt-get install ec2-ami-tools
Run the following command to create your bundle: ec2-bundle-vol -k <your-account-pk>.pem -c <your-account-cert>.pem -u <user_id>
See the guide above for the rest: you need to upload your bundle to S3 and then register your AMI so you can launch it. A sketch of those last commands is below.
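A minimal sketch of the upload/register tail end (bucket, keys, and names are placeholders; ec2-bundle-vol writes its output to /tmp by default):
$ ec2-upload-bundle -b my-ami-bucket -m /tmp/image.manifest.xml -a <access-key-id> -s <secret-access-key>
$ ec2-register my-ami-bucket/image.manifest.xml -n my-instance-store-ami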
Good Luck!