Amazon Elastic File System (EFS) not automatically adding capacity - amazon-ec2

I created an EFS volume and mounted it on a Linux EC2 instance, and in turn shared the EFS mount via Samba for use on a Windows EC2 instance.
When my drive reached about 150 GB, it did not grow automatically to accommodate new files.
This guide is essentially what I followed to create my drive mount.
It doesn't seem to me that Windows being involved (Windows is officially not supported for EFS) should prevent the filesystem from growing as files are added to it over the network.

Related

Running out of disk space on EC2

I ran into some issues with my EC2 micro instance and had to terminate it and create a new one in its place. But it seems that even though the old instance is no longer visible in the list, it is still using up some of my disk space. My df -h output is listed below:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  7.0G  719M  91% /
When I go to the EC2 console I see there are 3 volumes, each 8 GB, in the list. One of them is attached (/dev/xvda) and is showing as "in-use". The other 2 are simply showing as "Available".
Is the terminated instance really using up my disk space? If so, how do I free it up?
I have just solved my problem by running this command:
sudo apt autoremove
A lot of old packages were removed, for instance many packages like linux-aws-headers-4.4.0-1028.
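If you want to preview what would be removed before committing, a dry run is a safe first step (a minimal sketch; the package list will differ on your system):

# Preview which packages autoremove would delete, without removing anything
sudo apt-get autoremove --dry-run

# Remove them, then confirm the space reclaimed
sudo apt-get autoremove
df -h /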
Amazon Elastic Block Store (EBS) is a service that provides virtual disks for use with Amazon EC2. It is network-attached storage that persists even when an EC2 instance is stopped or terminated.
When launching an Amazon EC2 instance, a boot volume is automatically attached to the instance. The contents of the boot volume are copied from an Amazon Machine Image (AMI), which can be chosen from a pre-populated list (including the ability to create your own AMI).
When an Amazon EC2 instance is Stopped, all EBS volumes remain attached to the instance. This allows the instance to be Started with the same configuration as when it was stopped.
When an Amazon EC2 instance is Terminated, EBS volumes might or might not be deleted, based upon the Delete on Termination setting of each volume:
By default, boot volumes are deleted when an instance is terminated. This is because the volume was originally just a copy of an AMI, so there is unlikely to be any important data on the volume. (Hint: Don't store data on a boot volume.)
Additional volumes default to "do not delete on termination", on the assumption that they contain data that should be retained. When the instance is terminated, these volumes will remain in an Available state, ready to be attached to another instance.
So, if you do not require any content on your remaining EBS volumes, simply delete them. In future, when launching instances, keep an eye on the Delete on Termination setting to make the clean-up process simpler.
Please note that the df -h command only shows currently-attached volumes. It does not show volumes in the Available state, since they are not visible to that instance. The concept of "disk space" typically refers to the space within an EBS volume, while "EBS storage" refers to the volumes themselves. So the 7 GB shown as used relates only to that specific (boot) volume.
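To see the distinction yourself, compare lsblk (block devices attached to this instance) with df -h (mounted filesystems); volumes sitting in the Available state appear in neither. A quick sketch:

# Block devices attached to this instance (EBS volumes show up here once attached)
lsblk

# Mounted filesystems and their usage
df -h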
If you are running out of space on an EBS volume, see: Expanding the Storage Space of an EBS Volume on Linux. Expanding the volume involves the following steps, sketched afterward with the AWS CLI:
Creating a snapshot
Creating a new (bigger) volume from the snapshot
Swapping the disks (requiring a Stop/Start if you are swapping a boot volume)
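As a rough illustration with the AWS CLI (all IDs, the size, the Availability Zone, and the device name below are placeholders, not values from the question):

# 1. Snapshot the existing volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "pre-resize backup"

# 2. Create a larger volume from that snapshot, in the same Availability Zone as the instance
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 --size 20 --availability-zone us-east-1a

# 3. Swap the disks (Stop the instance first if you are replacing the boot volume)
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 --instance-id i-0123456789abcdef0 --device /dev/xvda

# 4. Once the instance is running again, grow the filesystem to fill the new volume (ext4 shown)
sudo resize2fs /dev/xvda1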
These 2 steps add an extra hard drive to your EC2 instance and format it for use; a combined sketch follows the links:
Attach an extra hard drive (EBS: Elastic Block Storage) to an EC2
Format an EBS drive attached to an EC2
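Combined, the two steps look roughly like this (the volume/instance IDs, device name, and mount point are placeholders; note that mkfs destroys any existing data on the volume):

# Attach an existing EBS volume to the instance
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/xvdf

# On the instance: create a filesystem, then mount it
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data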
Here's the pricing info. The Free Tier includes 30 GB. After that, it's $1.25/month for 10 GB on a General Purpose SSD (gp2).
To see how much space you are using and how much you need:
Check your current disk usage and available space in Linux with df -h.
Check the size of a directory in Linux with du -sh [path].
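For example, to see overall usage and then find which top-level directories are consuming the most space (a minimal sketch; adjust paths to your layout):

# Overall filesystem usage
df -h

# Size of each top-level directory, sorted with the largest last
sudo du -sh /* 2>/dev/null | sort -h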

What's the difference between attach and mount in EBS for Amazon EC2

If I attach an EBS volume to my EC2 instance, isn't that the same as mounting it? According to various guides they do different things. What's the difference between attaching an EBS volume and mounting one?
Attaching a volume simply presents it as a block device to the instance. This action only makes the device visible to the operating system.
To use it, you will need to format it and mount it into the file system; a sketch follows. "Mount" is terminology used more often in Linux than in Windows: in Linux you actually use the mount command to assign the device to a point in the file system, while in Windows you would assign a drive letter to the volume via Disk Management.
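On Linux the two halves of the process look roughly like this (assuming the new volume shows up as /dev/xvdf; your device name may differ):

# After "attach" (console or CLI) the raw block device is visible but not yet usable
lsblk                              # /dev/xvdf appears with no mount point

# "Mount" assigns it to a point in the file system (format first only if the volume is new and empty)
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /mnt/myvolume
sudo mount /dev/xvdf /mnt/myvolume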

Create a directory structure in Amazon Elastic Block Store (EBS) and save files from Java Servlets?

I am very new to this and am trying it for the first time. I have learned that Amazon Elastic Block Store (EBS) can be used much like a hard disk when mounted on Amazon EC2. Now I wish to create a directory structure in EBS and save files to it from a Java Servlet.
I have also learned that the code the servlet uses on the development machine can also be used to create a directory structure and access files in EBS:
@MultipartConfig(
    location = "d:\\tmp",
    fileSizeThreshold = 1024 * 1024,
    maxFileSize = 1024 * 1024 * 5,
    maxRequestSize = 1024 * 1024 * 5 * 5
)
I have Amazon Linux installed on my Amazon EC2 instance; any pointers would be a great help.
EBS isn't just similar to a hard disk; it behaves exactly like a hard disk from the perspective of your application (except that it's slower than a desktop hard disk unless you stripe multiple EBS volumes into a software RAID configuration).
After you have mounted your EBS volume, you use the EBS storage exactly as you would any other storage.
Instructions on how to mount the volume for Linux can be found here:
http://docs.amazonwebservices.com/AWSEC2/latest/UserGuide/ebs-using-volumes.html
Following those steps, you would end up with a directory
/mnt/data-store
that corresponds to the EBS volume. If you don't like the name data-store you can change it to something else.
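Condensed from that guide, the steps look roughly like this (assuming the volume appears as /dev/xvdf; check with lsblk, and only run mkfs on a new, empty volume):

sudo mkfs -t ext4 /dev/xvdf        # create a filesystem (new volumes only)
sudo mkdir /mnt/data-store
sudo mount /dev/xvdf /mnt/data-store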
I did notice your example code refers to "d:\\tmp", which is a Windows path specification, but you state that your instance is running Linux. Be sure to adjust any paths to point to /mnt/data-store.
The easiest way to start using an EBS volume is to launch an EBS-backed EC2 instance. Those come pre-connected to an EBS volume (it is the boot volume).

Can some other Azure role read unwiped disk data left by my role when my role's Azure VM is reclaimed?

Suppose my Azure role stores some data on the VM's local disk and is then terminated. The local disk was mapped onto some physical storage, so the data stored on the local disk was written into that storage. When my role terminates, the VM is reclaimed and the physical storage is reclaimed as well.
Now some other role is started, and its local disk happens to be mapped onto the same physical storage that my role used. I'm well aware that the logical structure of the new local disk is completely rebuilt and any files left by my role will just disappear. However, the physical storage underneath the newly created logical disk happens to be the same.
Specifically, suppose the new role creates an empty file, calls SetEndOfFile() to "extend" the file, then opens it for reading and reads the data currently stored in those blocks of the logical disk. Unless special measures are taken in the Azure infrastructure, I'm not sure this won't result in extending the file over data stored by my role and reading that data.
Is it technically possible for the new role to read the data written by my role?
The short answer is no.
All I/O requests from the guest OS are handled by the hypervisor, and the hypervisor ensures that an instance can only access its assigned storage.
The only way to get access to data from old roles is to gain physical access to the containers and grab it from there (if you ever succeed in getting past the datacenter's physical security measures and into the sealed containers). And even then it's not going to be easy: as I understand it, logical disks do not map one-to-one to individual physical drives but to clusters of drives, so physically your data will be dispersed across several disks as well.
Furthermore, there are also official disposal procedures in place that ensure all data is removed from disks that are being disposed of.
Kind regards,
Yves
+1 to @Yves, and yes, the answer is NO.
I would like to add more information on how virtual drives are created and used in any Windows Azure VM. As you may already know, each role (Web or Worker) has a minimum of 3 virtual drives in it:
Drive E: is the application drive, which is 1 GB fixed size and created dynamically by the Fabric Controller (FC) from the package uploaded by the user. This drive is not designed to hold any user data. It is created per deployment, so it is new for every role. This drive is provided to the Azure virtual machine by the FC and attached to the VM at provisioning time.
Drive D: is the OS/system drive (about 25 GB) which is attached to the role and identical for every role with the same OS version. It is read-only for a web role; however, startup tasks and worker roles can write to it. It is the dedicated OS drive, and users should not place any of their content on it.
Drive C: is the user drive where user data is located. When you configure local storage in your application, that storage is created here. This drive is created virtually, and its size depends on the role VM size.
On a Windows Azure host machine you can create a small VM or an extra-large VM, so depending on your role size your VM will get a ~250 GB C: drive or a 2 TB C: drive, and this storage is acquired from the host machine.
On the host machine there is a bunch of disks connected to provide a large logical space, enough to fulfill local-storage requirements from small to large VM sizes. When the role VM is provisioned, depending on what kind of role VM is created on the host, a virtual HDD is created from that total logical space and attached to the VM as the user drive.
When there is a Guest OS update or an Azure application update:
The update happens on Drive D: through a diff image
The update happens on Drive E: through a diff image
Drive C: is the user drive, so "Local Storage" is not directly affected by a guest or role update; however, if the "Local Storage - Clean on Role Recycle" property is set, the local storage will be cleaned when the role is recycled.
So what happens when you remove your application from Azure:
The OS Drive D:/diff drive is discarded
The application Drive E:/diff drive is also discarded
The user Drive C: is removed and the space is claimed back by the host machine
When a new VM is later created on the host machine, a new user drive C: is created and its space is allocated from the available physical space; it could be anywhere from ~250 GB to ~2 TB depending on the role VM size.
Even when the next extra-large guest VM provisioned on the host machine requires the maximum-size 2 TB virtual disk, the VHD is rebuilt from scratch. So back-to-back 2 TB virtual disks for an XLarge VM are still not the same.
So there is no chance that your old files could be recovered from a previous disk, even using the file-system API you mentioned in your question above.
(Sorry for the large post.)

Can I create an AMI that includes multiple EBS volumes (i.e. both sda and sdb)

I have an EBS-backed instance running on EC2. I'm using it to do some computationally intensive text processing on around 16 GB of data, which is stored on sdb (i.e. the larger EBS volume associated with the instance).
I'd like to parallelize the processing by creating replicas of this instance, each with its own copy of the data. I can create an AMI from the instance, but I need the image to include BOTH sda (the root EBS volume) AND ALSO sdb, the volume where all the data is. How can I make a replica of the whole package?
Creating an image in the AWS Management Console just copies sda (i.e. the root volume, which is too small to hold my data).
Is this even possible?
(PS: I don't even see the sdb volume in the AWS Management Console Elastic Block Store->Volumes panel)
Thanks!
I once needed this sort of setup, where I had to run MySQL on an EBS-backed machine with the data stored in a separate EBS volume. The AMI had to be such that every time you instantiate it, it has the data volume (with the static data in it) attached. This is how I did it:
Created an EBS-backed instance from an existing image
Attached an EBS volume, ran mkfs on it, and mounted it on /database
Copied the data to the volume, e.g. under /database/mysql
Created an image of this setup from the AMI web console.
Now, every time I launch this image, the volume with all the data is there. I just mount it on /database and things get going.
I am not sure if this is helpful to you, but your problem seemed close to this.
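In command form, the instance-side part of that recipe is roughly the following (the device name, paths, and IDs are placeholders, not exact values from my setup):

# Attach the data volume (console or CLI), then on the instance:
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /database
sudo mount /dev/xvdf /database

# Copy the data in, e.g. a MySQL data directory
sudo cp -a /var/lib/mysql /database/mysql

# Create the image (web console, or the equivalent CLI call)
aws ec2 create-image --instance-id i-0123456789abcdef0 --name "mysql-with-data"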
Update after @NAD's comment
Yeah, the AMI creation process excludes anything that is under:
/sys
/proc
/dev
/media
/mnt
So the trick is to not keep anything you want bundled into your AMI under these directories.
Also, if you have a volume that you want auto-mounted at boot, register it in /etc/fstab, as in the sketch below.
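For example, an /etc/fstab entry along these lines mounts the volume at boot (device, mount point, and filesystem type are assumptions; using a UUID from blkid is more robust than a raw device name):

# device      mount point   type   options           dump  pass
/dev/xvdf     /database     ext4   defaults,nofail   0     2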
