Sorry if this is supposed to be easily understood from the docs, but I didn't get it - if I spin up an EC2 instance using one of the readily available Ubuntu EBS-boot AMIs, install a bunch of stuff and move some files around under "/", and then create an instance store-backed AMI using ec2-bundle-vol, will the data that was actually residing on the EBS volume mounted at "/" make it into the AMI?
From a user's point of view, I would expect to find the same things under "/" in a future spin-up of my custom AMI that I had in the original instance. It would also make sense for Amazon to snapshot the contents of "/" to create my AMI (otherwise, what would it snapshot?), even though the AMI itself is instance store-based while the original instance was EBS-backed.
Please help me understand this.
What I'm referring to:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html
http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/CLTRG-ami-bundle-vol.html
Thanks.
Yes, the data residing on the EBS volume mounted as the root volume will make it into the AMI.
From the AWS documentation: "By default, the AMI bundling process creates a compressed, encrypted collection of files in the /tmp directory that represent your root volume." http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-an-ami-instance-store.html
It will of course exclude the private keys and bash history... unless you use the --no-filter option: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/CLTRG-ami-bundle-vol.html
Procedure for the conversion:
It's basically the standard procedure for creating an instance store-backed AMI that needs to be followed.
You will have to indicate a compatible kernel when registering the AMI, though (a sketch for looking it up follows the steps below).
set up the EC2 CLI tools on the instance you want to convert (if not already installed)
get an X.509 certificate and private key (it can be self-signed: openssl req -x509 -newkey rsa:2048 -keyout private-key.pem -out cert.pem -days 385 -nodes)
connect to the instance you want to convert
move your X.509 certificate and private key to /tmp/: mv private-key.pem cert.pem /tmp/
create the folder /tmp/out/: mkdir /tmp/out
create your bundle: ec2-bundle-vol -k /tmp/private-key.pem -c /tmp/cert.pem -u <account_id> -r x86_64 -d /tmp/out See the documentation for more details: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/CLTRG-ami-bundle-vol.html You may need to change the block device mapping (e.g. -B root=/dev/sda1)
upload the bundle to a S3 bucket: ec2-upload-bundle -b <bucket_name>/<bundle_folder>/<bundle_name> -a <access_key> -s <secret_key> -m /tmp/out/image.manifest.xml --region <aws_region>
register the AMI: ec2-register --kernel <kernel_id> --region <aws_region> --name "<ami_name>" --description "<ami_description>" <bucket_name>/<bundle_folder>/<bundle_name>/image.manifest.xml -O <access_key> -W <secret_key> See the documentation for more details: http://docs.aws.amazon.com/AWSEC2/latest/CommandLineReference/ApiReference-cmd-RegisterImage.html (see --root-device-name and -b options)
The device mapping and volume organisation are different between EBS-backed and instance store-backed instances, so you need to make sure everything is where the system expects it to be.
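For the compatible kernel mentioned above, here is a minimal sketch of one way to look it up, assuming the newer AWS CLI is installed (the instance ID and region are placeholders): read the kernel attribute of the original EBS-backed instance and pass the returned aki-* ID to ec2-register --kernel.
aws ec2 describe-instance-attribute --instance-id <instance_id> --attribute kernel --region <aws_region>
# prints the kernel as, e.g., "KernelId": { "Value": "aki-xxxxxxxx" }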
I'm working with multiple instances (10 and more), and I want to configure them without logging in to each of them. I'm currently looking at Puppet and it seems to be what I need. I've tried it for two instances and it works, but I installed Puppet manually on both instances and also manually sent the certificate from the agent via puppet agent. Is there any way to install Puppet automatically and send the certificate for each node without accessing them?
You can use scripts within UserData to autoconfigure your instance (see Running Commands on Your Linux Instance at Launch) by installing Puppet, configuring it, and running it. Keep in mind that UserData is normally limited to 16 KB and that the data in there is stored Base64-encoded.
You can also build your own AMI with configuration scripts that run on boot, and then use that to download configuration from a central server, or read it out of userdata (e.g. curl http://169.254.169.254/latest/user-data | bash -s).
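As a rough sketch of that approach (assumptions, not part of the original setup: an apt-based Ubuntu AMI, a reachable Puppet master at the hypothetical hostname puppet.example.com, and autosigning or an admin signing the CSR on the master side), the user data could be something like:
#!/bin/bash
# Install the Puppet agent at first boot (assumes an Ubuntu/apt-based AMI)
apt-get update -y
apt-get install -y puppet
# Contact the master, wait for the certificate to be signed, and apply the catalog;
# puppet.example.com is a placeholder, and this relies on autosigning on the master
puppet agent --server puppet.example.com --waitforcert 60 --test
With autosigning enabled on the master, no manual certificate exchange is needed on either side.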
As another example, here is something we had in our CloudFormation template that installed a configuration service on our hosts.
"UserData": { "Fn::Base64" : { "Fn::Join" : [ "\n", [
"#!/bin/sh",
"curl -k -u username:password -f -s -o /etc/init.d/ec2 https://scriptserver.example.com/scripts/ec2",
"chmod a+x /etc/init.d/ec2",
"/etc/init.d/ec2 start"] ] } }
Ideally the 'scriptserver' is in the same VPC since the username and password aren't terribly secure (they're stored unencrypted on the machine, the script server, and in the Cloudformation and EC2 services).
The advantage of bootstrapping everything with userdata instead of building an AMI is flexibility: you can update your bootstrap scripts, generate new instances, and you're done. The disadvantages are speed, since you'll have to wait for everything to install and configure each time an instance launches (beware CloudFormation timeouts), and stability, since if your script installs packages from a public repository (e.g. apt-get install mysql), the packages can be updated at any time, potentially introducing untested software into your environment. The workaround for the latter is to install software from locations you control, as sketched below.
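For example, a minimal sketch of that workaround (the repository URL, distribution name, and key file are all hypothetical) is to point apt at a mirror you control before installing anything:
# Replace the public sources with an internal mirror (URL is a placeholder)
echo "deb http://apt.internal.example.com/ubuntu precise main" > /etc/apt/sources.list
# Trust the mirror's signing key (key file is a placeholder)
apt-key add /tmp/internal-repo-key.gpg
apt-get update
apt-get install -y mysql-server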
If I want to download all the contents of a directory on S3 to my local PC, which command should I use, cp or sync?
Any help would be highly appreciated.
For example,
if I want to download all the contents of "this folder" to my desktop, would it look like this?
aws s3 sync s3://"myBucket"/"this folder" C:\\Users\Desktop
Using aws s3 cp from the AWS Command-Line Interface (CLI) will require the --recursive parameter to copy multiple files.
aws s3 cp s3://myBucket/dir localdir --recursive
The aws s3 sync command will, by default, copy a whole directory. It will only copy new/modified files.
aws s3 sync s3://mybucket/dir localdir
Just experiment to get the result you want.
Documentation:
cp command
sync command
I just used version 2 of the AWS CLI. For the s3 commands, there is also a --dryrun option now to show you what will happen:
aws s3 --dryrun cp s3://bucket/filename /path/to/dest/folder --recursive
In case you need to use another profile, especially for cross-account access, you need to add the profile to the config file:
[profile profileName]
region = us-east-1
role_arn = arn:aws:iam::XXX:role/XXXX
source_profile = default
and then, if you are accessing only a single file:
aws s3 cp s3://crossAccountBucket/dir localdir --profile profileName
In case you want to download a single file, you can try the following command:
aws s3 cp s3://bucket/filename /path/to/dest/folder
You have many options to do that, but the best one is to use the AWS CLI.
Here's a walk-through:
Download and install AWS CLI in your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account (see the aws configure example after this walkthrough).
Sync the S3 bucket using:
aws s3 sync s3://yourbucket/yourfolder /local/path
In the above command, replace the following fields:
yourbucket/yourfolder >> your S3 bucket and the folder that you want to download.
/local/path >> path in your local system where you want to download all the files.
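For the "Configure AWS CLI" step above, a minimal sketch of what that typically looks like (the key values shown are the standard AWS documentation placeholders, not real credentials):
aws configure
# AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
# AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
# Default region name [None]: us-east-1
# Default output format [None]: json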
The sync method first lists both the source and destination paths and copies only the differences (by name, size, etc.).
The cp --recursive method lists only the source path and copies (overwrites) everything to the destination path.
If you may already have matching files in the destination path, I would suggest sync, as one LIST request on the destination path will save you many unnecessary PUT requests - meaning it's cheaper and possibly faster.
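To see the difference without transferring anything, both commands accept --dryrun (the bucket and paths are placeholders); run against a destination that already has the files, sync should list far fewer operations than cp --recursive:
aws s3 cp s3://myBucket/dir localdir --recursive --dryrun
aws s3 sync s3://myBucket/dir localdir --dryrun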
Question: Will aws s3 sync s3://myBucket/this_folder/object_file C:\\Users\Desktop also create "this_folder" in C:\Users\Desktop?
If not, what would be the solution to copy/sync while keeping the folder structure of S3? I mean, I have many files in different S3 bucket folders sorted by year, month, and day, and I would like to copy them locally with the folder structure kept.
Is it possible to create an AMI from an ISO?
I am implementing a build system which uses a base ISO, modifies it, installs stuff, and then outputs it as .ovf and AMI.
.ovf works, but for AMI, all I could figure out is that it needs a pre-existing AMI. Is this correct?
Is there any way to use an ISO and generate an AMI?
Thanks.
When you say "from ISO", that tells me you're looking to create a trusted base VM. You want to install from scratch locally first and import that to EC2 as a trusted private AMI. If you don't mind using veewee, there's an awesome post using veewee instead of Packer here: veewee. It's all set up for CentOS; all you need to do is clone it and tweak it for your use case.
But since you're looking for Packer like I was, what you need is the virtualbox-iso builder in Packer and some aws-cli commands to upload the OVA and create an AMI out of it. Packer unfortunately doesn't have a post-processor for this. Then you can use Vagrant to reference the new AMI for EC2-based development and use the vagrant-aws plugin to create new AMIs out of your trusted base AMI.
Here are the steps to follow:
1.) Create an S3 bucket for image imports.
2.) Set up your AWS account. Create the 'vmimport' IAM role and policy, as well as an X.509 key and cert pair if you don't have one. You'll need this to register a private AMI. You will also reference the bucket's name in the policy.
3.) Build a VM with VirtualBox using Packer's virtualbox-iso builder and have it output an image in OVA format.
4.) Use aws-cli with your AWS account to upload the OVA to the bucket you created, using the aws s3 cp command.
5.) Register the OVA as an AMI. You will use the aws ec2 import-image command for this. (This part can take a long time, 30 min - 1 hour.)
You can track progress with aws ec2 describe-import-image-tasks. The AMI will appear in your private AMI list when it's done. A sketch of steps 4 and 5 follows below.
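A minimal sketch of steps 4 and 5 with the aws-cli (the bucket name, OVA file name, description, and import task ID are placeholders; the task ID comes from the import-image output):
aws s3 cp ./output/my-image.ova s3://my-import-bucket/my-image.ova
aws ec2 import-image --description "Trusted base image" --disk-containers "Format=ova,UserBucket={S3Bucket=my-import-bucket,S3Key=my-image.ova}"
aws ec2 describe-import-image-tasks --import-task-ids import-ami-0123456789abcdef0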
There is a useful little Vagrant plugin called vagrant-ami which lets you create custom EC2 AMIs:
$ vagrant create-ami new_image --name my-ami --desc "My AMI"
Then you can replace the AMI ID in your Vagrantfile with your custom one.
I have some micro instances with EBS volumes, and from the EC2 console you can right-click and create an AMI image of the whole system.
But I bought some high-memory reserved instances which had 500 GB of storage, so I installed an "instance store" Ubuntu AMI image.
Now I have configured everything on my server and want to create an instance store AMI image so that I can launch new servers from it and don't have to install everything again.
How can I do this?
This is how you do it with Ubuntu:
Launch the desired instance from here (pick one without EBS storage): http://cloud-images.ubuntu.com/releases/precise/release/
Follow this guide here (look below for hints concerning Ubuntu): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/creating-snapshot-s3-linux.html
First you need to create your public key and certificate using this guide (you will need them later): http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-certificate
Also note your AWS Account ID:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-credentials.html#using-credentials-account-id
Upload your pk and cert to the Ubuntu instance that you launched:
scp -i <path-to-your-ec2key>.pem <your-account-pk>.pem <your-account-cert>.pem ubuntu@<yourinstance>.<yourzone>.compute.amazonaws.com:~/
That puts the pk file and cert file in your home directory on your running instance. Now log in and move them to the /mnt directory so that they do not get included when you bundle your AMI.
Now modify your image to your heart's content.
Install EC2 AMI Tools: sudo apt-get install ec2-ami-tools
Run the following command to create your bundle: ec2-bundle-vol -k /mnt/<your-account-pk>.pem -c /mnt/<your-account-cert>.pem -u <user_id>
See the guide above for the rest. You need to upload your bundle to S3 and then register your AMI so you can launch it.
Good Luck!
My company is currently using s3fs and EC2 from AWS. We have mounted our S3 buckets on our EC2 instances, but after some time (a week, for example) some of the buckets unmount by themselves and our server instances become nearly useless. The error is "Transport endpoint not connected."
S3fs version: 1.61, built from source
FUSE version: 2.84.1, built from source
OS: Linux, Ubuntu 11.04
Is there some kind of safe mechanism for preventing (or at least detecting) these problems?
Great insight - I hadn't thought about this. But here are 3 precautionary steps we can take:
1) Create an auto-mount via /etc/fstab so that, in the unlikely event that the instance goes down, S3 gets mounted back on once it comes back up.
2) and/or, if you prefer, create a secondary auto-mount using cron:
echo "/usr/bin/s3fs [s3 bucket name] [mountpoint path] -o allow_other" >> automount-s3
sudo mv automount-s3 /usr/sbin
sudo chown root:ubuntu /usr/sbin/automount-s3
sudo chmod +x /usr/sbin/automount-s3
crontab -e
add this line
@reboot /usr/sbin/automount-s3
3) I would also create another hourly cron job to check whether S3 is still mounted - this can be done by checking if a dummy file exists in your EC2 path. If the file doesn't exist, cron will do a manual mount by calling "/usr/bin/s3fs -o allow_other [s3 bucket name] [mountpoint path]". It would be good to trigger an email to the admin and log it in the system as well (see the sketch below).
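A rough sketch of such an hourly check (the bucket name, mount point, marker file, and admin address are placeholders, and it assumes a working mail command on the instance):
#!/bin/sh
# Re-mount the s3fs bucket if the marker file has disappeared (run hourly from cron)
MOUNTPOINT=/mnt/s3bucket            # placeholder mount point
MARKER="$MOUNTPOINT/.mount-check"   # dummy file created once after the first successful mount
if [ ! -f "$MARKER" ]; then
    /usr/bin/s3fs my-bucket-name "$MOUNTPOINT" -o allow_other
    logger "s3fs re-mounted $MOUNTPOINT"
    echo "s3fs re-mounted $MOUNTPOINT on $(hostname)" | mail -s "s3fs re-mount" admin@example.com
fi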
s3fs is a nice idea, but keep in mind that even though the calls to S3 might be somewhat internal (or let's say "on their network"), you're still mounting a filesystem over HTTP. That is not going to be stable in the long run.
Maybe you can rephrase your question to ask for alternatives and share what you're trying to accomplish by using any kind of (I'm guessing) shared network filesystem. I can see the appeal, but with Amazon EC2 people usually use a shared-nothing approach, and anything extra network-related should be avoided so that instances can be recycled more easily, etc.
Happy to extend my answer.