I have hosted my app on an AWS EC2 instance with 16 GiB of RAM and 8 GB of storage. I save logs there, and sometimes I get a "memory full" issue. Can anyone suggest how I can increase the storage, and how much it will cost?
Try the following before increasing instance storage or upgrading the instance type.
Clear temporary files and application logs on a daily basis.
Execute the following command to open the crontab:
crontab -e
Enter the following line into the crontab:
0 1 * * * sudo find /tmp -type f -atime +10 -delete
This cron job clears all application temporary files last accessed more than 10 days ago, and it runs at 1 AM every day.
Also remove older log files through a cron job.
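A hedged sketch of that companion log-cleanup entry; the path /var/log/myapp and the 10-day window are assumptions to adjust for your app. The cron line is shown as a comment, and the find expression is demonstrated on a scratch directory:

```shell
# Hypothetical cron entry (add via `crontab -e`); path and window are assumptions:
#   30 1 * * * sudo find /var/log/myapp -name '*.log' -type f -mtime +10 -delete

# The find expression itself, demonstrated on a scratch directory:
mkdir -p /tmp/logdemo
touch /tmp/logdemo/fresh.log
touch -d '20 days ago' /tmp/logdemo/stale.log   # fake an old log file
find /tmp/logdemo -name '*.log' -type f -mtime +10 -delete
ls /tmp/logdemo   # stale.log is gone; fresh.log remains
```

Note that -mtime (last modified) is usually a better fit for log files than -atime (last accessed), since many filesystems mount with relaxed atime updates.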
If you are using AWS for your infrastructure, I would suggest uploading logs to CloudWatch using the CloudWatch Logs agent.
Refer to the following link for the logs agent quick start:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/QuickStartEC2Instance.html
If you enable CloudWatch Logs, all of your application logs will be streamed to CloudWatch, and you will not need to keep even 10 days of logs on the instance.
What's the class of the EC2 instance? You can monitor your instance's performance using CloudWatch and then decide (based on CPU and memory usage) which new instance type to migrate to. For cost estimation you can use the link provided by @Wayne Phipps.
If you have a memory problem, why do you want more storage?
Which class of EC2 instance are you using?
To increase volume, you can attach an additional EBS volume to the EC2 instance; to increase memory, you can change the instance class (for example, if you are using t2.micro, move to a larger type).
Auto Scaling can also be an option, or you can migrate to another EC2 instance with more space and memory.
If you are using Docker images or containers (for example, via Jenkins), you can use this command to clean up the space consumed by unwanted (dangling) Docker images:
sudo docker rmi $(sudo docker images -f "dangling=true" -q)
I want to work with an EBS snapshot in an EMR job. Because the mapper reads from the snapshot, I want the snapshot mounted on every node. Is there an easy way to do that other than logging in to each node? I guess I could make the first step of my mapreduce job to mount it, but that seems wrong. Is there an easier way to do it?
It is possible, but you'll have to jump through some hoops to get it to work, assuming you have a recipe to create an EBS volume from the EBS snapshot in a shell script. EMR provides bootstrap actions, which are just shell scripts you create and run. Bootstrap actions run before any jobs (steps, in EMR) are allowed to run.
Here are the steps you need to have your shell script perform:
Create a new EBS volume based on your snapshot. The aws binary is installed on all EMR instances, so that's your best bet. Assuming you know the snapshot id, this should be straightforward:
http://docs.aws.amazon.com/cli/latest/reference/ec2/create-volume.html
Make sure you set the DeleteOnTermination attribute on the attachment.
You will need to parse the response to get the EBS volume id.
Attach the volume you just created (using the EBS volume id) to the current instance:
http://docs.aws.amazon.com/cli/latest/reference/ec2/attach-volume.html
To get the current instance id, use the metadata service:
wget -q -O - http://instance-data/latest/meta-data/instance-id
Once you have your shell script, you need to upload it to S3, and then add that script as a bootstrap action to your cluster:
http://docs.aws.amazon.com/ElasticMapReduce/latest/DeveloperGuide/emr-plan-bootstrap.html
Also beware, you will be charged for each EBS volume you create, so ensure the delete on termination logic is setup properly!
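Putting the steps above together, a sketch of such a bootstrap action. The snapshot ID, device name, and mount point are placeholders for your own values, and the script is only written to a local file here (you would upload it to S3 and register it on the cluster):

```shell
cat > mount-snapshot.sh <<'EOF'
#!/bin/bash
set -e
SNAPSHOT_ID="snap-0123456789abcdef0"   # placeholder: your snapshot id
DEVICE="/dev/sdf"                      # placeholder: device name to attach as

# Current instance id and availability zone from the metadata service
INSTANCE_ID=$(wget -q -O - http://instance-data/latest/meta-data/instance-id)
AZ=$(wget -q -O - http://instance-data/latest/meta-data/placement/availability-zone)

# 1. Create a volume from the snapshot; --query extracts the volume id
VOLUME_ID=$(aws ec2 create-volume --snapshot-id "$SNAPSHOT_ID" \
  --availability-zone "$AZ" --query VolumeId --output text)

# 2. Attach it to this instance once it is available
aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
aws ec2 attach-volume --volume-id "$VOLUME_ID" \
  --instance-id "$INSTANCE_ID" --device "$DEVICE"

# 3. Flag the volume for deletion when the instance terminates
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" \
  --block-device-mappings "[{\"DeviceName\":\"$DEVICE\",\"Ebs\":{\"DeleteOnTermination\":true}}]"

# 4. Mount it (the kernel may expose /dev/sdf as /dev/xvdf)
aws ec2 wait volume-in-use --volume-ids "$VOLUME_ID"
sudo mkdir -p /mnt/snapshot
sudo mount /dev/xvdf /mnt/snapshot
EOF
```

The instance profile on the EMR nodes needs IAM permission for these ec2 calls, which the default EMR role may not grant.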
I ran into some issues with my EC2 micro instance and had to terminate it and create a new one in its place. But it seems that even though the old instance is no longer visible in the list, it is still using up some space on my disk. My df -h output is below:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 7.8G 7.0G 719M 91% /
When I go to the EC2 console I see there are 3 volumes, each 8 GB, in the list. One of them (/dev/xvda) is attached and shows as "in-use". The other 2 simply show as "Available".
Is the terminated instance really using up my disk space? If yes, how do I free it up?
I have just solved my problem by running this command:
sudo apt autoremove
and a lot of old packages were removed, for instance many files like linux-aws-headers-4.4.0-1028.
Amazon Elastic Block Storage (EBS) is a service that provides virtual disks for use with Amazon EC2. It is network-attached storage that persists even when an EC2 instance is stopped or terminated.
When launching an Amazon EC2 instance, a boot volume is automatically attached to the instance. The contents of the boot volume are copied from an Amazon Machine Image (AMI), which can be chosen from a pre-populated list (including the ability to create your own AMI).
When an Amazon EC2 instance is Stopped, all EBS volumes remain attached to the instance. This allows the instance to be Started with the same configuration as when it was stopped.
When an Amazon EC2 instance is Terminated, EBS volumes might or might not be deleted, based upon the Delete on Termination setting of each volume:
By default, boot volumes are deleted when an instance is terminated. This is because the volume was originally just a copy of an AMI, so there is unlikely to be any important data on the volume. (Hint: Don't store data on a boot volume.)
Additional volumes default to "do not delete on termination", on the assumption that they contain data that should be retained. When the instance is terminated, these volumes will remain in an Available state, ready to be attached to another instance.
So, if you do not require any content on your remaining EBS volumes, simply delete them. In future, when launching instances, keep an eye on the Delete on Termination setting to make the clean-up process simpler.
Please note that the df -h command only shows currently-attached volumes. It does not show volumes in the Available state, since they are not visible to that instance. The concept of "disk space" typically refers to the space within an EBS volume, while "EBS storage" refers to the volumes themselves. So, the 7 GB used relates only to that specific (boot) volume.
If you are running out of space on an EBS volume, see: Expanding the Storage Space of an EBS Volume on Linux. Expanding the volume involves:
Creating a snapshot
Creating a new (bigger) volume from the snapshot
Swapping the disks (requiring a Stop/Start if you are swapping a boot volume)
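Those three steps can be sketched with the AWS CLI. All IDs, the availability zone, and the target size below are placeholders, and the script is only written to a file here rather than executed:

```shell
cat > grow-volume.sh <<'EOF'
#!/bin/bash
set -e
OLD_VOLUME="vol-0aaaaaaaaaaaaaaaa"   # placeholder: current volume id
INSTANCE="i-0bbbbbbbbbbbbbbbb"       # placeholder: instance id
AZ="us-east-1a"                      # placeholder: same AZ as the instance
NEW_SIZE=20                          # placeholder: new size in GiB

# 1. Snapshot the existing volume
SNAP=$(aws ec2 create-snapshot --volume-id "$OLD_VOLUME" \
  --query SnapshotId --output text)
aws ec2 wait snapshot-completed --snapshot-ids "$SNAP"

# 2. Create a bigger volume from the snapshot
NEW_VOLUME=$(aws ec2 create-volume --snapshot-id "$SNAP" \
  --size "$NEW_SIZE" --availability-zone "$AZ" --query VolumeId --output text)
aws ec2 wait volume-available --volume-ids "$NEW_VOLUME"

# 3. Swap the disks (Stop the instance first if this is the boot volume)
aws ec2 detach-volume --volume-id "$OLD_VOLUME"
aws ec2 attach-volume --volume-id "$NEW_VOLUME" \
  --instance-id "$INSTANCE" --device /dev/xvda

# After starting the instance, grow the filesystem to fill the volume, e.g.:
#   sudo resize2fs /dev/xvda1
EOF
```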
These 2 steps add an extra hard drive to your EC2 instance and format it for use:
Attach an extra hard drive (EBS: Elastic Block Storage) to an EC2
Format an EBS drive attached to an EC2
Here's pricing info. Free Tier includes 30GB. Afterward it's $1.25/month for 10GB on a General Purpose SSD (gp2).
To see how much space you are using/need:
Check your current disk use/available in Linux with df -h.
Check the size of a directory in Linux with du -sh [path].
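A quick illustration of both commands on a scratch directory (the paths are throwaway examples):

```shell
mkdir -p /tmp/dudemo
dd if=/dev/zero of=/tmp/dudemo/blob bs=1M count=5 status=none   # a 5 MB file
du -sh /tmp/dudemo   # size of the directory, roughly 5.0M
df -h /              # Size/Used/Avail for the root volume
# To find the biggest consumers, sort directory sizes, e.g.:
#   sudo du -sh /var/* 2>/dev/null | sort -rh | head
```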
I am looking to have multiple Amazon EC2 instances use the same data store. Amazon does not condone mounting an S3 Bucket as a file system, so I am trying to avoid this solution. Is there a way to synchronize an EBS volume with S3 or would it be best to use rsync and cron?
Do you really have to have the files locally available from within EBS? What if instead you served them to yourself via CloudFront, and restricted the permissions so that only your instances (or only your Security Group) could see the files?
Come Fall 2015 you'll be able to use Elastic File System (EFS) for this. But until then, I would suppose the next best thing is to use the AWS command line to sync down from S3 to your volume:
aws s3 sync s3://my-bucket/my/folder /mnt/my-ebs/
After the initial run, that sync command is surprisingly fast. So from there you could just cron it to run hourly or so?
I see these steps for setting up the disk for a MapR installation at this link:
To determine if a disk or partition is ready for use by MapR:
Run the command sudo lsof to determine whether any processes are already using the disk or partition.
There should be no output when running sudo fuser, indicating there is no process accessing the specific disk or partition.
The disk or partition should not be mounted, as checked via the output of the mount command.
The disk or partition should not have an entry in the /etc/fstab file.
The disk or partition should be accessible to standard Linux tools such as mkfs. You should be able to successfully format the partition using a command like sudo mkfs.ext3, as this is similar to the operations MapR performs during installation. If mkfs fails to access and format the partition, then it is highly likely MapR will encounter the same problem.
I am having trouble achieving this on an Amazon EC2 instance.
Steps that I have tried:
I created a large EC2 instance.
Created a snapshot of the volume associated with that instance.
Created a new 500 GB volume from the snapshot created above.
I am not sure how to unmount this new volume and make it available for MapR. I also see an entry in /etc/fstab for this new volume.
Can someone give a step-by-step approach to create a disk or partition which satisfies the above-mentioned criteria for MapR?
MapR runs on raw disks, e.g. directly on /dev/sdb. Use the disksetup command to add disks to MapR. See http://mapr.com/doc/display/MapR/disksetup for information on how to use it.
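To satisfy the criteria listed in the question, a sketch of the clean-up before handing the disk to MapR; /dev/xvdf and the /opt/mapr/server/disksetup path are assumptions based on a default install, and the script is only written to a file here rather than executed:

```shell
cat > prepare-disk.sh <<'EOF'
#!/bin/bash
set -e
DISK="/dev/xvdf"                          # placeholder: the 500 GB volume you attached

sudo umount "$DISK" 2>/dev/null || true   # make sure it is not mounted
sudo sed -i "\|$DISK|d" /etc/fstab        # remove its /etc/fstab entry
sudo fuser "$DISK" || true                # no output means nothing is using it

# Sanity check from the MapR docs: mkfs should be able to format the disk
sudo mkfs.ext3 -F "$DISK"

# Hand the raw disk to MapR via a disk-list file
echo "$DISK" | sudo tee /tmp/disks.txt
sudo /opt/mapr/server/disksetup -F /tmp/disks.txt
EOF
```

Note disksetup reformats the disk itself, so the mkfs.ext3 run is purely the readiness check the MapR docs describe, not a required formatting step.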
As I understand it, for an EBS-backed EC2 instance, its root device will be an EBS volume. Now if I want the content of the EBS volume to be a snapshot that I took earlier (of the root device of another EBS-backed EC2 instance), how can I do that?
The short version is that you find the snapshot in the AWS management console, click the Launch button, and follow the steps in the wizard (to e.g. select availability zone).
There is a detailed walk through here:
http://www.techrepublic.com/blog/datacenter/how-to-create-a-new-ami-from-a-snapshot-and-launch-a-new-vm/5349
This can also be done a number of other ways, including
From the command line / a script
Programmatically through the API
Automatically e.g. using Auto Scaling