Signal kill on Amazon EC2 (ec2-user) - Go

When I run my Go application on an EC2 server (as ec2-user), which talks to an S3 bucket, I get the error below. The server has 450 MB of RAM, of which 72 MB is free. What could the issue be? Is this error related to the RAM size?
I believe it's a RAM issue, but I would like to know the various reasons this error can occur.
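In Go, a bare "signal: killed" usually means the process received SIGKILL from outside, and on a memory-starved instance the most common sender is the kernel's OOM killer. A minimal diagnostic sketch, using standard Linux commands and nothing application-specific:

# Show total/used/free memory and swap, in MB
free -m
# Look for OOM-killer activity in the kernel log; a line like
# "Out of memory: Kill process ..." confirms the kernel killed it
sudo dmesg | grep -iE "out of memory|killed process"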

Related

AWS EMR Hadoop MapReduce physical memory limit error

I keep getting this error when running some of my steps:
Container [pid=5784,containerID=container_1482150314878_0019_01_000015] is running beyond physical memory limits. Current usage: 5.6 GB of 5.5 GB physical memory used; 10.2 GB of 27.5 GB virtual memory used. Killing container.
I searched the web and people say to increase the memory limits. This error occurs even after I already increased the limits to the maximum allowed on the instance type I'm using (c4.xlarge). Can I get some assistance with this error and how to solve it?
Also, I don't understand why MapReduce throws this error instead of just swapping, or even running slower but continuing to work...
NOTE: This error started happening after I changed to a custom output compression codec, so it is probably related to that.
Thanks!
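For reference, YARN deliberately kills a container that exceeds its requested physical memory rather than letting it swap, so the usual fix is to request bigger containers for the job. A hedged sketch of per-job overrides at submission time (the jar, class, and paths are placeholders; the -D flags only take effect if the job's main class uses ToolRunner/GenericOptionsParser, and the values are illustrative, not tested):

# Request 6 GB containers with a JVM heap around 80% of that.
# The values must fit within the node's YARN allocation; since the
# asker has already hit the c4.xlarge ceiling, a larger instance
# type may be needed.
hadoop jar my-job.jar MyMainClass \
  -D mapreduce.map.memory.mb=6144 \
  -D mapreduce.map.java.opts=-Xmx4915m \
  -D mapreduce.reduce.memory.mb=6144 \
  -D mapreduce.reduce.java.opts=-Xmx4915m \
  input/ output/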

Running out of disk space on EC2

I ran into some issues with my EC2 micro instance and had to terminate it and create a new one in its place. But it seems that even though the old instance is no longer visible in the list, it is still using up some of my disk space. My df -h output is listed below:
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1      7.8G  7.0G  719M  91% /
When I go to the EC2 console I see three 8 GB volumes in the list. One of them is attached (/dev/xvda) and shows as "in-use"; the other two simply show as "Available".
Is the terminated instance really using up my disk space? If so, how do I free it up?
I just solved my problem by running this command:
sudo apt autoremove
A lot of old packages were removed, for instance many old kernel header packages like linux-aws-headers-4.4.0-1028.
Amazon Elastic Block Storage (EBS) is a service that provides virtual disks for use with Amazon EC2. It is network-attached storage that persists even when an EC2 instance is stopped or terminated.
When launching an Amazon EC2 instance, a boot volume is automatically attached to the instance. The contents of the boot volume are copied from an Amazon Machine Image (AMI), which can be chosen from a pre-populated list (including the ability to create your own AMI).
When an Amazon EC2 instance is Stopped, all EBS volumes remain attached to the instance. This allows the instance to be Started with the same configuration as when it was stopped.
When an Amazon EC2 instance is Terminated, EBS volumes might or might not be deleted, based upon the Delete on Termination setting of each volume:
By default, boot volumes are deleted when an instance is terminated. This is because the volume was originally just a copy of an AMI, so there is unlikely to be any important data on the volume. (Hint: Don't store data on a boot volume.)
Additional volumes default to "do not delete on termination", on the assumption that they contain data that should be retained. When the instance is terminated, these volumes will remain in an Available state, ready to be attached to another instance.
So, if you do not require any content on your remaining EBS volumes, simply delete them. In future, when launching instances, keep an eye on the Delete on Termination setting to make the clean-up process simpler.
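If you prefer the command line, the leftover volumes can be listed and removed with the AWS CLI; a minimal sketch (the volume ID is a placeholder):

# List volumes that are not attached to any instance
aws ec2 describe-volumes --filters Name=status,Values=available \
  --query "Volumes[*].{ID:VolumeId,Size:Size}" --output table
# Delete a volume you no longer need (placeholder ID)
aws ec2 delete-volume --volume-id vol-0123456789abcdef0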
Please note that the df -h command only shows currently-attached volumes. It does not show volumes in the Available state, since they are not visible to that instance. The term "disk space" typically refers to the space within an EBS volume, while "EBS storage" refers to the volumes themselves. So, the 7 GB shown as used relates only to that specific (boot) volume.
If you are running out of space on an EBS volume, see: Expanding the Storage Space of an EBS Volume on Linux. Expanding the volume involves the following (a CLI sketch follows the list):
Creating a snapshot
Creating a new (bigger) volume from the snapshot
Swapping the disks (requiring a Stop/Start if you are swapping a boot volume)
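Those three steps map onto AWS CLI calls roughly like this; a hedged sketch, with all IDs, the size, and the Availability Zone as placeholders:

# 1. Snapshot the existing volume
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0
# 2. Create a larger volume from that snapshot (same AZ as the instance)
aws ec2 create-volume --snapshot-id snap-0123456789abcdef0 \
  --size 16 --availability-zone us-east-1a --volume-type gp2
# 3. Swap the disks: stop the instance, then detach/attach
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0fedcba9876543210 \
  --instance-id i-0123456789abcdef0 --device /dev/xvda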
These two steps add an extra hard drive to your EC2 instance and format it for use (a shell sketch follows the links):
Attach an extra hard drive (EBS: Elastic Block Storage) to an EC2
Format an EBS drive attached to an EC2
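In outline, once the volume is attached, the format-and-mount side looks like this (assuming the new device shows up as /dev/xvdf; the device name and mount point are illustrative):

# Confirm the device name of the newly attached volume
lsblk
# Create a filesystem on it (this erases the volume), then mount it
sudo mkfs -t ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data
# Add an entry to /etc/fstab if it should mount on every boot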
Here's the pricing info: the Free Tier includes 30 GB. Afterward it's $1.25/month for 10 GB on a General Purpose SSD (gp2).
To see how much space you are using/need:
Check your current disk use/available in Linux with df -h.
Check the size of a directory in Linux with du -sh [path].

Hadoop 2.x on Amazon EC2 t2.micro

I'm trying to install and configure Hadoop 2.6 on an Amazon EC2 t2.micro instance (the free-tier one, with only 1 GB of RAM) in pseudo-distributed mode.
I could configure and start all the daemons (i.e. NameNode, DataNode, ResourceManager, NodeManager). But when I try to run a MapReduce wordcount example, it fails.
I don't know if it's failing due to low memory (since a t2.micro has only 1 GB of memory and some of it is taken up by the host OS, Ubuntu in my case), or whether it could be some other reason.
I'm using the default memory settings. If I tweak everything down to minimum memory settings, will that solve the problem? What is the minimum memory in MB that can be assigned to containers?
Thanks a lot, guys. I'd appreciate any information you can provide.
Without tweaking any memory settings, I could only sometimes run a pi example with 1 mapper and 1 reducer on the free-tier t2.micro instance; it fails most of the time.
Using the memory-optimized r3.large instance with 15 GB of RAM, everything works perfectly. All jobs complete without failure.
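If you do want to experiment with shrinking the defaults instead of moving to a bigger instance, the relevant knobs are the YARN node/scheduler allocations and the per-job container sizes. A hedged sketch, with values that are illustrative guesses for a 1 GB node rather than tested minimums:

# In yarn-site.xml (before starting the daemons), set roughly:
#   yarn.nodemanager.resource.memory-mb  = 768
#   yarn.scheduler.minimum-allocation-mb = 128
#   yarn.scheduler.maximum-allocation-mb = 768
# Then submit jobs with small container requests; the jar path is a
# placeholder, and -D works because the bundled examples use ToolRunner.
# The AM container must also fit under the 768 MB ceiling.
hadoop jar hadoop-mapreduce-examples.jar pi \
  -D mapreduce.map.memory.mb=256 \
  -D mapreduce.reduce.memory.mb=256 \
  -D yarn.app.mapreduce.am.resource.mb=256 \
  1 1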

SQL Server using too much memory

I have installed SQL Server 2008 R2 Express on my desktop machine (Windows 7).
I have only one local server running (./SQLEXPRESS), but the SQL Server process is taking all the RAM it possibly can.
On a machine with 3 GB of RAM, things start to get slow, so I limited the maximum amount of RAM for the server; now SQL Server constantly gives error messages saying that there is not enough memory. It's using 1 GB of RAM with only one local server and two completely empty databases; how is 1 GB of RAM not enough?
When the process starts, it uses a reasonable amount of memory (around 80 MB), but it keeps increasing until it reaches the defined maximum and starts complaining about not having enough memory available. At that point I have to restart the server to use it again.
I have read about a hotfix for one of the errors I got from SQL Server:
There is insufficient system memory in resource pool 'internal' to run this query
But it's already installed on my SQL Server.
Why is it using so much memory?
You can try configuring the 'max server memory' configuration option. For additional details, check:
http://technet.microsoft.com/en-us/library/ms178067(v=sql.105).aspx
http://support.microsoft.com/kb/321363
http://social.msdn.microsoft.com/Forums/en-US/sqldatabaseengine/thread/df51cd87-68ce-439a-87fa-d5e5cd93ab31
I had a problem like this.
You can increase the memory limit of the database server.
In the SQL Server properties, choose Memory; there is a "Maximum server memory (in MB)" field you can increase.
Or do the same thing with a query:
EXEC sp_configure 'Show Advanced Options', 1;
GO
RECONFIGURE;
GO
EXEC sp_configure 'max server memory (MB)', 3500;
GO
RECONFIGURE;
GO
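To see how much memory the instance is actually holding at any moment, one option (a dynamic management view available since SQL Server 2008) is a sketch like this:

-- Current physical memory held by this SQL Server process, in MB
SELECT physical_memory_in_use_kb / 1024 AS memory_in_use_mb
FROM sys.dm_os_process_memory;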

Does Creating an Image of an Amazon EC2 Linux instance cause downtime?

Does creating an image of an Amazon EC2 Linux instance cause any downtime? Can I image a running server?
Already answered correctly, but I wanted to add a couple of caveats:
--no-reboot, no guarantee: When you create an image of an instance with an EBS-backed root device, you may opt for --no-reboot, but AWS warns about this: it does not guarantee the integrity of the file system. If it's a really busy instance with heavy read/write activity going on, you may get a corrupted image.
Instance store, no reboot: Creating an instance-store-backed image has never required a reboot for me. It's three simple steps -- bundle the image, upload it to S3, and register the image -- with no reboot anywhere in the process.
It is my understanding that the No Reboot option prevents image creation from rebooting the instance.
If you are an API user, it likewise provides a --no-reboot argument to do this.
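For the CLI route, a minimal sketch (the instance ID and image name are placeholders; mind the file-system consistency caveat above):

# Create an AMI from a running instance without stopping it
aws ec2 create-image --instance-id i-0123456789abcdef0 \
  --name "my-server-backup" --no-reboot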
