Production went down today with a "no disk space remaining" error. After deleting files and restarting the machine, it still came up with the same error, even when I just try to touch a new empty file.
It is probably caused by running out of inodes. I went ahead and created an "Image", which seems to create an AMI, but after launching an instance of that AMI the same problem persisted, probably because it is using the same EBS volume.
Question is: how do I snapshot the EBS volume and then connect a new volume to the AMI as the root fs?
You are correct that the "Create Image" command creates an Amazon Machine Image (AMI). If you start a new EC2 instance from this AMI, it will contain the same data as the machine that was imaged. That's why you are copying your existing problem to the new instance.
Check your disk space with df -h to confirm that you have space available.
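Since inode exhaustion produces the same error as a full disk, it is worth checking both; a quick sketch:

    df -h    # free disk space per filesystem
    df -i    # inode usage; if IUse% shows 100%, no new files can be
             # created even when df -h still reports free space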
If you require more disk space, you can copy your disk to a larger volume as follows:
Option 1: If you already have an AMI of the volume:
Launch a new instance using the AMI, but expand the size of the volume in the Add Storage options
Option 2: If you want to retain the same instance:
Stop your instance
Create Snapshot of the EBS Volume
Create Volume from the Snapshot, specifying a larger storage size
Detach the original root volume
Attach the new volume in its place (keep the same Device identifier)
In both cases, after startup confirm that the filesystem has automatically expanded. If not, use the resize2fs command to extend the filesystem (growing the partition first with growpart, if needed).
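For reference, here is a minimal sketch of Option 2 using the AWS CLI. All IDs, the size, the Availability Zone, and the device name are placeholders; substitute your own values.

    # Stop the instance so the root volume can be swapped safely
    aws ec2 stop-instances --instance-ids i-0123456789abcdef0
    # Snapshot the existing root volume
    aws ec2 create-snapshot --volume-id vol-0aaaaaaaaaaaaaaaa \
        --description "root volume backup"
    # Create a larger volume from the snapshot (same AZ as the instance)
    aws ec2 create-volume --snapshot-id snap-0bbbbbbbbbbbbbbbb \
        --size 20 --availability-zone us-east-1a --volume-type gp2
    # Swap the volumes, keeping the same device identifier
    aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa
    aws ec2 attach-volume --volume-id vol-0cccccccccccccccc \
        --instance-id i-0123456789abcdef0 --device /dev/xvda
    aws ec2 start-instances --instance-ids i-0123456789abcdef0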
When you create an image of an EC2 instance, it also takes snapshots of the attached volumes. You can see this under "Images > AMIs"; the snapshot information is visible in the "Block Devices" column of the table (by default, this column is not visible).
Now, if you are getting the "no disk space error", you need to increase the size of root volume. You can do that by following the link below:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-expand-volume.html
Title says it all. I inadvertently terminated an EC2 instance and desperately need to restart/relaunch it!
Any help? It would be disastrous if I had to start all over! Does anyone know how I can fix this?
Basically it goes like this:
Your machine is gone, you cannot restart, you need to create a new instance
all the data you had on an instance store volume is gone (you cannot recover it)
If you had an EBS volume attached with "delete on termination" enabled, the latest data is gone. You can recover only if you have a snapshot of that volume.
If you had an EBS volume attached without the "delete on termination" flag, you can recover that data
So what you can do:
check your snapshots and volumes in the EC2 console. If you have no snapshots/volumes, you cannot recover anything
if you have root volumes,
make a snapshot of those you want to recover
from the snapshot, make an image (AMI)
from that AMI, launch a new instance
if you don't have root volumes,
create volumes from snapshots, if you have any snapshots of the data you need to recover
create a new EC2 instance
attach and mount the volumes to the new instance (see the sketch below)
read and back up your data
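If you only need to pull files off a surviving volume, a minimal sketch of the attach-and-mount step with the AWS CLI follows; the IDs, device names, and mount point are placeholders.

    # Attach the surviving volume to the new instance as a secondary disk
    aws ec2 attach-volume --volume-id vol-0aaaaaaaaaaaaaaaa \
        --instance-id i-0123456789abcdef0 --device /dev/sdf
    # Then, on the instance itself (the device may appear as /dev/xvdf,
    # with a partition such as /dev/xvdf1 if it was a root volume):
    sudo mkdir -p /mnt/recovery
    sudo mount /dev/xvdf1 /mnt/recovery
    # Copy your data off, e.g. with rsync or scp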
I'm using a community AMI. It's great, but some of the stuff in it is outdated. Every time I spin up a new machine based on it, I have to update all the libraries. I want to instead update them once and save the modified image. It's an EBS-backed AMI. I tried creating a snapshot off of the volume of the running instance and then creating an AMI from the snapshot. The resulting AMI does indeed have all the modifications I made, but the operating system is different! The original AMI has Ubuntu, while the thing that comes out is "other linux", and some things are not working (CUDA). Both "RAM disk ID" and "kernel ID" in the original AMI details are blank, so I leave them as "default" when creating the new AMI.
The preferred way to save a modified EC2 instance is to burn an AMI directly from the running instance, rather than taking a snapshot of its root volume.
If for any reason all you have is a snapshot of the root volume of a previously running instance, you have to follow this process to create a bootable AMI: launch one of the stock EC2 AMIs, one that has the same OS as your EBS snapshot. Create an EBS volume from that snapshot. Stop the newly launched instance. Detach its root volume, attach the volume you created from the EBS snapshot as the root volume, and start the instance. See Launching a Linux Instance from a Backup. NOTE: Although you can create a Windows AMI from a snapshot, you cannot successfully launch an instance from the AMI.
The easiest way to save an AMI with new modifications is to create the AMI image directly from the running instance, rather than simply taking a snapshot of the running volume.
From the AWS Management Console, click on the instance, then right-click Image -> Create Image.
From that dialog, set the Name, Description, etc. Make sure to leave No Reboot unchecked. Adjust the volume settings in the Instance Volumes section.
Note that your instance will reboot during the image creation process. Make sure you are prepared for the temporary loss of service of the instance during this time.
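The same can be done with the AWS CLI; a short sketch, with the instance ID and name as placeholders. Leaving out --no-reboot corresponds to leaving the No Reboot box unchecked, so the instance restarts to guarantee a consistent image.

    aws ec2 create-image --instance-id i-0123456789abcdef0 \
        --name "my-updated-ami" \
        --description "Community AMI with updated libraries"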
I ran into some issues with my EC2 micro instance and had to terminate it and create a new one in its place. But it seems even though the old instance is no longer visible in the list, it is still using up some space on my disk. My df -h is listed below:
    Filesystem      Size  Used Avail Use% Mounted on
    /dev/xvda1      7.8G  7.0G  719M  91% /
When I go to the EC2 console I see there are 3 volumes, each 8 GB, in the list. One of them is attached (/dev/xvda) and is showing as "in-use". The other 2 are simply showing as "available".
Is the terminated instance really using up my disk space? If yes, how to free it up?
I have just solved my problem by running this command:

    sudo apt autoremove

A lot of old packages were removed, for instance many old kernel header packages like linux-aws-headers-4.4.0-1028.
Amazon Elastic Block Storage (EBS) is a service that provides virtual disks for use with Amazon EC2. It is network-attached storage that persists even when an EC2 instance is stopped or terminated.
When launching an Amazon EC2 instance, a boot volume is automatically attached to the instance. The contents of the boot volume is copied from an Amazon Machine Image (AMI), which can be chosen from a pre-populated list (including the ability to create your own AMI).
When an Amazon EC2 instance is Stopped, all EBS volumes remain attached to the instance. This allows the instance to be Started with the same configuration as when it was stopped.
When an Amazon EC2 instance is Terminated, EBS volumes might or might not be deleted, based upon the Delete on Termination setting of each volume:
By default, boot volumes are deleted when an instance is terminated. This is because the volume was originally just a copy of an AMI, so there is unlikely to be any important data on the volume. (Hint: Don't store data on a boot volume.)
Additional volumes default to "do not delete on termination", on the assumption that they contain data that should be retained. When the instance is terminated, these volumes will remain in an Available state, ready to be attached to another instance.
So, if you do not require any content on your remaining EBS volumes, simply delete them. In future, when launching instances, keep an eye on the Delete on Termination setting to make the clean-up process simpler.
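As a sketch, the unattached volumes can be listed and deleted with the AWS CLI (the volume ID below is a placeholder; double-check before deleting, as deletion is irreversible):

    # List volumes that are not attached to any instance
    aws ec2 describe-volumes --filters Name=status,Values=available \
        --query 'Volumes[].[VolumeId,Size]' --output table
    # Delete one you no longer need
    aws ec2 delete-volume --volume-id vol-0aaaaaaaaaaaaaaaa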
Please note that the df -h command only shows currently-attached volumes. It does not show volumes in the Available state, since they are not visible to that instance. The concept of "disk space" typically refers to the space within an EBS volume, while "EBS storage" refers to the volumes themselves. So, the 7 GB used relates only to that specific (boot) volume.
If you are running out of space on an EBS volume, see: Expanding the Storage Space of an EBS Volume on Linux. Expanding the volume involves:
Creating a snapshot
Creating a new (bigger) volume from the snapshot
Swapping the disks (requiring a Stop/Start if you are swapping a boot volume)
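If the extra space does not appear automatically after the swap, the partition and filesystem can be grown from inside the instance. A sketch for an ext4 root volume; the device names are examples, so check yours with lsblk first:

    lsblk                        # compare the partition size with the volume size
    sudo growpart /dev/xvda 1    # grow partition 1 to fill the volume
    sudo resize2fs /dev/xvda1    # grow the ext4 filesystem to fill the partition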
These 2 steps add an extra hard drive to your EC2 and format it for use:
Attach an extra hard drive (EBS: Elastic Block Storage) to an EC2
Format an EBS drive attached to an EC2
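Roughly, formatting and mounting the new volume from inside the instance looks like this (the device name and mount point are examples; confirm the device with lsblk, and note that mkfs erases everything on the volume):

    sudo mkfs -t ext4 /dev/xvdf    # format the new, empty volume
    sudo mkdir -p /data            # create a mount point
    sudo mount /dev/xvdf /data     # mount the volume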
Here's pricing info. The Free Tier includes 30 GB. Afterward it's $1.25/month for 10 GB on a General Purpose SSD (gp2).
To see how much space you are using/need:
Check your current disk use/available in Linux with df -h.
Check the size of a directory in Linux with du -sh [path].
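For example:

    df -h              # free space on each mounted filesystem
    du -sh /var/log    # total size of one directory (the path is just an example)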
So I create an instance using one of the Public AMI EBS Ubuntu flavors. I create an EBS volume and attach it to the instance. I format the volume and add an entry to /etc/fstab to mount it on /vol. I add mysql to the AMI and move the data files to the EBS volume I formatted and mounted at /vol. I then create an AMI from the running instance. Then I terminate the running instance.
I start a new instance using the freshly created AMI (with mysql on it). The /vol is mounted and has the mysql data files: good, I expect that. Here's where I am confused: when I create any directories or files on the EBS volume /vol, they aren't there any more after I terminate the instance and create a new one. The mysql stuff is there, but none of the new stuff I created. Aren't those files and directories supposed to be there? Or am I misunderstanding how this works?
When you create an AMI, "Amazon EC2 powers down the instance, takes images of any volumes that were attached, creates and registers the AMI, and then reboots the instance." -Amazon. When the AMI is used to launch an instance, the images (snapshots) of the attached drives are used to create new volumes. It is these new volumes that are attached to the new instance, not your original EBS. (This generates lots of orphan volumes and snapshots with ongoing use.)
There is no automatic attaching of the EBS volume you created. What is automatically attached is the volume EC2 creates at the time of launching the instance from your AMI! EC2 creates this volume from the snapshot it made of your EBS volume at the time of the AMI creation!
The way to avoid clone volumes being created and attached to new instances is simple: detach your volumes before making AMIs. You then need to attach your EBS volumes manually, either through the EC2 web console, or programmatically with the .NET or Java SDKs, scripts, or command line tools.
EBS volumes are not tied to an AMI, only to the literal instance you attach them to. When you created your AMI and a new instance from that, the EBS is not cloned, nor does it follow you to the new instance.
You could move the EBS drive to the new instance manually. Alternatively, you could snapshot the EBS volume and clone a new drive from that.
I have an EBS-backed instance running on EC2. I'm using it to do some computationally intensive text processing on around 16 GB of data, which is stored on sdb (i.e. the larger EBS volume associated with the instance).
I'd like to parallelize the processing by creating replicas of this instance, each with its own copy of the data. I can create an AMI from the instance, but I need the image to include BOTH sda (the root EBS volume) AND ALSO sdb, the volume where all the data is. How can I make a replica of the whole package?
Creating an image in the AWS Management Console just copies sda (i.e. the root volume, which is too small to hold my data).
Is this even possible?
(PS: I don't even see the sdb volume in the AWS Management Console Elastic Block Store->Volumes panel)
Thanks!
I once needed this sort of setup, where I had to run MySQL on an EBS-backed machine with the data stored in a separate EBS volume. The AMI had to be such that every time you instantiate it, it has the data volume (with the static data in it) attached. This is how I did it:
Created an EBS-backed instance from an existing image
Attached an EBS volume, performed mkfs on it, mounted it on /database
Copied data to the volume, e.g. under /database/mysql
Created an image of this setup from the AMI web console.
Now, every time I launch this image, the volume with all the data is there. I just mount it on /database and things get going.
I am not sure if this is helpful to you, but your problem seemed close to this.
Update after #NAD's comment
Yeah, the AMI creation process excludes anything that is under
/sys
/proc
/dev
/media
/mnt
So, the trick is to not have anything you want bundled up with your AMI under these directories.
Also, if you have a volume that you want to auto-mount at boot, register it in /etc/fstab.
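A sketch of such an entry (the device, mount point, and filesystem type are examples; referencing the volume by UUID is more robust than by device name):

    # device    mount point  fstype  options          dump  pass
    /dev/xvdf   /database    ext4    defaults,nofail  0     2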