CloudWatch alarms for memory and disk space for multiple EC2 instances - amazon-ec2

I need to configure memory and disk space alarms for multiple EC2 instances. I found the "Across all instances" option, but it does not list memory or disk space metrics, so do I need to create alarms for each instance separately?
I tried the "Across all instances" option, but it did not show memory or disk space metrics.
I need one alarm each for memory, CPU, and disk space covering multiple EC2 instances.

No metrics are showing because you first need to publish them to CloudWatch; memory and disk space are not collected by default. To publish them, install the CloudWatch agent on each instance, or use the older monitoring scripts and schedule them with cron.
Here you can find more info on how to send RAM and disk space metrics to CloudWatch:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/mon-scripts.html#mon-scripts-systems
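Once the agent or scripts are publishing data, the memory and disk metrics show up as custom metrics with an InstanceId dimension, so in practice you create one alarm per instance, which is easy to script. A rough sketch with boto3, assuming the System/Linux namespace and MemoryUtilization metric name used by the legacy monitoring scripts (the CloudWatch agent defaults to the CWAgent namespace and mem_used_percent instead) and placeholder instance IDs:

import boto3

cloudwatch = boto3.client("cloudwatch")

# Placeholder instance IDs; in practice you would list them with
# ec2.describe_instances() or pass them in from your tooling.
instance_ids = ["i-0123456789abcdef0", "i-0fedcba9876543210"]

for instance_id in instance_ids:
    cloudwatch.put_metric_alarm(
        AlarmName=f"memory-high-{instance_id}",
        Namespace="System/Linux",          # namespace used by the legacy scripts
        MetricName="MemoryUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=80.0,
        ComparisonOperator="GreaterThanOrEqualToThreshold",
        # AlarmActions=[<SNS topic ARN>] would go here to get notified.
    )

A disk space alarm follows the same pattern with the DiskSpaceUtilization metric, which the scripts publish with Filesystem and MountPath dimensions in addition to InstanceId.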

Related

What is the default disk space of an ECS task running on an EC2 cluster?

I'm using an ECS cluster running on EC2 instances, and I have several instances of my app running as tasks in the cluster. I want to add a cache layer for my app; this layer will write data to disk. I also want to know how much memory an AWS ECS task gives my container, and what happens to the files I had in my container after a deployment.
I already searched Google for answers, but I only found information for tasks running on a Fargate cluster.

Monitoring EBS volumes for instances with CloudWatch Agent and CDK

I'm trying to set up a way to monitor disk usage for instances belonging to an Auto Scaling group, and add an alarm for when the volumes attached to those instances are almost full.
Since it seems there are no metrics normally offered by Amazon for that, I resorted to using the CloudWatch agent to get what I wanted. So far so good: I can create graphs and alarms for the metrics I want using the CloudWatch console.
My issue is how to automate everything with CDK. How can I automate the creation of the metric for each instance, without knowing the instance id beforehand? Is there a solution for this issue?
You can install and configure the CloudWatch agent via EC2 user data, and the Auto Scaling group can use a launch template to launch its EC2 instances. All of this can be done with the AWS CDK.
There is an example from this open source project for your reference.
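As a minimal sketch of that approach with the Python CDK (v2) — the VPC, instance type, and the ssm:AmazonCloudWatch-linux parameter holding the agent configuration are assumptions for illustration — the user data that installs and starts the agent is attached to the Auto Scaling group, so every instance the group launches publishes disk metrics without you knowing instance IDs in advance:

from aws_cdk import Stack, aws_autoscaling as autoscaling, aws_ec2 as ec2
from constructs import Construct

class MonitoredAsgStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        vpc = ec2.Vpc(self, "Vpc", max_azs=2)

        # Install the CloudWatch agent and load its configuration from an
        # SSM parameter (the parameter name here is just an example).
        user_data = ec2.UserData.for_linux()
        user_data.add_commands(
            "yum install -y amazon-cloudwatch-agent",
            "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl"
            " -a fetch-config -m ec2 -s -c ssm:AmazonCloudWatch-linux",
        )

        autoscaling.AutoScalingGroup(
            self, "Asg",
            vpc=vpc,
            instance_type=ec2.InstanceType("t3.micro"),
            machine_image=ec2.MachineImage.latest_amazon_linux2(),
            user_data=user_data,
            min_capacity=1,
            max_capacity=3,
        )

Because the user data is part of the group's launch template, the same configuration applies to all current and future instances, including replacements.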
Another approach you could take is using AWS Systems Manager. Essentially, you install the SSM agent on your instances and create an SSM Document (think shell/Python script) that runs your setup script/automation.
You then create a State Manager Association, tying the SSM Document to your instances based on EC2 tags, e.g. Application=MyApp or Team=MyTeam. This way, you don't have to provide any resource IDs, just a tag key/value pair that can cover multiple instances and future instance replacements. You can schedule it to run at specific times (cron) or at a certain frequency (rate) to enforce state.
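A rough sketch of such an association with boto3 — the tag key/value, schedule, and the choice of the AWS-managed AWS-ConfigureAWSPackage document to install the CloudWatch agent are illustrative assumptions:

import boto3

ssm = boto3.client("ssm")

# Target instances by tag rather than by instance ID, so replacement
# instances carrying the same tag are picked up automatically.
ssm.create_association(
    Name="AWS-ConfigureAWSPackage",
    Targets=[{"Key": "tag:Application", "Values": ["MyApp"]}],
    Parameters={
        "action": ["Install"],
        "name": ["AmazonCloudWatchAgent"],
    },
    # State Manager re-applies the association on this schedule,
    # enforcing the desired state on new and existing instances.
    ScheduleExpression="rate(30 minutes)",
    AssociationName="install-cloudwatch-agent",
)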

service was unable to place a task because no container instance met requirements. The closest matching container-instance has insufficient CPU units

I am trying to run 2 tasks on the same ECS container instance. The container instance is a t2.large EC2 instance.
One of the tasks (which is a daemon) starts fine and is RUNNING.
The other task which is an application, does not start and I see the following errors in the Events tab.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient CPU units available. For more information, see the Troubleshooting section.
service test-service was unable to place a task because no container instance met all of its requirements. The closest matching container-instance xxxxxx has insufficient memory available. For more information, see the Troubleshooting section.
I looked at the CPU and memory section for the container instance and the values are -
            Registered    Available
CPU         1024 units    1014 units
Memory      985 MiB       729 MiB
My task definition for the task that does not run has the following CPU and memory values -
"memory": 512,
"cpu": 10
The daemon that successfully runs on the same EC2 container instance also has the same values for memory and CPU.
I read through the AWS docs at https://aws.amazon.com/premiumsupport/knowledge-center/ecs-container-instance-requirement-error/ and tried to reduce the CPU and memory requirements for the test-service task definition, but nothing helped. I also changed the instance type to something bigger, but that did not help either.
Can someone please help me with what CPU and memory values I should use for both tasks (the daemon and the application) so they can run on the same EC2 container instance?
Note: I cannot add another container instance to the ECS cluster.
The task definition sets the CPU limit to 10 units, which is probably insufficient in your case. ECS can manage CPU resources dynamically when you set the value to 0; however, that is not possible for memory.
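As a hedged sketch of that suggestion with boto3 (the family, container name, image, and memory reservation are placeholders), a container definition that lets ECS allocate CPU dynamically sets cpu to 0 and keeps the memory reservation small enough for both tasks to fit on the instance:

import boto3

ecs = boto3.client("ecs")

ecs.register_task_definition(
    family="test-service",
    containerDefinitions=[
        {
            "name": "app",                       # placeholder name
            "image": "my-registry/app:latest",   # placeholder image
            "essential": True,
            # cpu=0 means no fixed CPU units are reserved; ECS shares
            # CPU between containers dynamically.
            "cpu": 0,
            # memoryReservation is a soft limit; the scheduler only
            # needs this much free memory to place the task.
            "memoryReservation": 256,
        }
    ],
)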

Cloudwatch default metrics EC2 DiskReadOps and DiskWriteOps not reporting

By default, AWS EC2 instances include the DiskReadOps and DiskWriteOps metrics for attached EBS volumes. I have checked multiple running EC2 instances, both Windows and Linux, and they all display no data other than 0.
See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.html
Any idea what the issue may be?
The EC2 instance DiskReadOps and DiskWriteOps metrics apply to instance store volumes only. They do not cover EBS volumes attached to the instance.
If you go to the EBS section of CloudWatch Metrics, you can identify the volume attached to the instance and look for the VolumeReadOps and VolumeWriteOps metrics.
This was an oversight on my part when first asking this question.
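For reference, a small sketch of reading those EBS metrics with boto3 (the volume ID is a placeholder; in practice you would look it up from the instance's block device mappings):

import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
end = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EBS",
    MetricName="VolumeReadOps",
    Dimensions=[{"Name": "VolumeId", "Value": "vol-0123456789abcdef0"}],
    StartTime=end - timedelta(hours=1),
    EndTime=end,
    Period=300,                      # 5-minute buckets
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"])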

Error in Auto Scaling in EC2?

I am using the Auto Scaling feature.
I set up everything, but instances were automatically launched and terminated even though the instance never reached the threshold.
I followed the steps:
Created an instance
Created a load balancer and registered an instance
Created a launch configuration
Created a CloudWatch alarm for CPU >= 50%
An Auto Scaling policy that launches and terminates instances when CPU >= 50%
But as soon as I apply the policy, instances begin launching and terminating without any CPU load, and it keeps repeating:
Cause: At 2014-01-14T10:51:08Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.
StartTime: 2014-01-14T10:51:08.791Z
Cause: At 2014-01-14T10:02:16Z an instance was taken out of service in response to a system health-check.
UPDATE
Documentation:
Follow the instructions on how to Set Up an Auto-Scaled and Load-Balanced Application
Notes:
An instance created outside of the Auto Scaling group can be added to the Elastic Load Balancer, but it will not be monitored or managed by the Auto Scaling group.
An instance created outside of the Auto Scaling group can be marked as unhealthy by the Elastic Load Balancer if its health check fails, but that will not cause the Auto Scaling group to spawn a new instance. Because your manually created instance is not counted by the group, the group sees an actual capacity of 0 and launches a new instance to meet the desired capacity, which is what the "increasing the capacity from 0 to 1" cause above shows.
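For reference, a rough sketch of wiring the alarm to a scaling policy with boto3, in the spirit of the linked documentation (the group name, threshold, and periods are placeholders; a scale-in policy and its alarm would be created the same way with a lower threshold):

import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

# Simple scaling policy: add one instance when the alarm fires.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",           # placeholder group name
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,
    Cooldown=300,
)

# Alarm on the group's average CPU so scaling is driven by real load,
# not by instances outside the group.
cloudwatch.put_metric_alarm(
    AlarmName="asg-cpu-high",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "my-asg"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=50.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],
)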
