By default, AWS EC2 instances include DiskReadOps and DiskWriteOps metrics for attached EBS volumes. I have checked multiple running EC2 instances, both Windows and Linux, and they all show no values other than 0.
See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.html
Any idea what the issue may be?
The EC2 instance DiskReadOps and DiskWriteOps metrics apply to instance store volumes only. They do not refer to EBS volumes attached to the instance.
If you go to the EBS section of the CloudWatch Metrics console, you can identify the volume ID attached to the instance and look for the VolumeReadOps and VolumeWriteOps metrics.
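For example, here is a minimal sketch that pulls VolumeReadOps for a volume from the AWS/EBS namespace using the AWS SDK for JavaScript v3; the region, volume ID, and time range are placeholders:

    // Sketch: fetch VolumeReadOps for a specific EBS volume with the AWS SDK for JavaScript v3.
    // The region, volume ID and time range are placeholders; adjust them for your environment.
    import {
      CloudWatchClient,
      GetMetricStatisticsCommand,
    } from "@aws-sdk/client-cloudwatch";

    const cloudwatch = new CloudWatchClient({ region: "us-east-1" });

    async function volumeReadOps(volumeId: string) {
      const now = new Date();
      const result = await cloudwatch.send(
        new GetMetricStatisticsCommand({
          Namespace: "AWS/EBS",
          MetricName: "VolumeReadOps",
          Dimensions: [{ Name: "VolumeId", Value: volumeId }],
          StartTime: new Date(now.getTime() - 60 * 60 * 1000), // last hour
          EndTime: now,
          Period: 300, // most EBS volume types emit metrics at 5-minute granularity
          Statistics: ["Sum"],
        })
      );
      return result.Datapoints ?? [];
    }

    volumeReadOps("vol-0123456789abcdef0").then(console.log);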
This was an oversight on my part when first asking this question.
I'm trying to set up a way to monitor disk usage for instances belonging to an AutoScaling Group, and add an alarm when the volumes associated to the instances are almost full.
Since it seems there are no metrics offered by Amazon out of the box to do that, I resorted to using the CloudWatch agent to get what I wanted. So far so good; I can create graphs and alarms for the metrics I want using the CloudWatch console.
My issue is how to automate everything with CDK. How can I automate the creation of the metric for each instance, without knowing the instance id beforehand? Is there a solution for this issue?
You can install and configure the CloudWatch agent via EC2 user data, and the Auto Scaling group uses a launch template to launch EC2 instances. All of this can be done with the AWS CDK.
There is an example from this open source project for your reference.
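As an illustration only, a rough CDK (TypeScript) sketch of that idea could look like the following; the construct names, SSM parameter path holding the agent config, and package manager are assumptions, not something prescribed by the project linked above:

    // Sketch: an Auto Scaling group whose instances install and start the CloudWatch agent
    // from user data. The construct names and the SSM parameter path holding the agent
    // config are illustrative assumptions.
    import { Stack, StackProps } from "aws-cdk-lib";
    import { Construct } from "constructs";
    import * as ec2 from "aws-cdk-lib/aws-ec2";
    import * as iam from "aws-cdk-lib/aws-iam";
    import * as autoscaling from "aws-cdk-lib/aws-autoscaling";

    export class MonitoredAsgStack extends Stack {
      constructor(scope: Construct, id: string, props?: StackProps) {
        super(scope, id, props);

        const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });

        const asg = new autoscaling.AutoScalingGroup(this, "MonitoredAsg", {
          vpc,
          instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.SMALL),
          machineImage: ec2.MachineImage.latestAmazonLinux2(),
          minCapacity: 1,
          maxCapacity: 3,
        });

        // CloudWatchAgentServerPolicy lets the agent publish metrics; the instance also
        // needs read access to the config parameter (not shown here).
        asg.role.addManagedPolicy(
          iam.ManagedPolicy.fromAwsManagedPolicyName("CloudWatchAgentServerPolicy")
        );

        // Install the CloudWatch agent and start it with a config stored in SSM Parameter Store.
        asg.addUserData(
          "yum install -y amazon-cloudwatch-agent",
          "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl " +
            "-a fetch-config -m ec2 -c ssm:/my-app/cloudwatch-agent-config -s"
        );
      }
    }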
Another approach you could take is using AWS Systems Manager. Essentially, you install the SSM Agent on your instances, and create an SSM document (think shell/Python script) that will run your setup script/automation.
You then create a State Manager association, tying the SSM document to your instances based on EC2 tags, e.g. Application=MyApp or Team=MyTeam. This way, you don't have to provide any resource IDs, just the tag key-value pair, which could cover multiple instances and future instance replacements. You can schedule it to run at specific times (cron) or at a certain frequency (rate) to enforce state.
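As a rough sketch of that idea in CDK (TypeScript), assuming a hypothetical tag pair, config parameter path, and script body:

    // Sketch: an SSM Command document plus a State Manager association that targets
    // instances by tag rather than by instance ID. The tag pair, parameter path and
    // script body are placeholders.
    import { Stack } from "aws-cdk-lib";
    import { Construct } from "constructs";
    import * as ssm from "aws-cdk-lib/aws-ssm";

    export class DiskMonitoringSsmStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const setupDoc = new ssm.CfnDocument(this, "SetupCloudWatchAgentDoc", {
          documentType: "Command",
          content: {
            schemaVersion: "2.2",
            description: "Install and start the CloudWatch agent",
            mainSteps: [
              {
                action: "aws:runShellScript",
                name: "installAgent",
                inputs: {
                  runCommand: [
                    "yum install -y amazon-cloudwatch-agent",
                    "/opt/aws/amazon-cloudwatch-agent/bin/amazon-cloudwatch-agent-ctl -a fetch-config -m ec2 -c ssm:/my-app/cloudwatch-agent-config -s",
                  ],
                },
              },
            ],
          },
        });

        // Apply the document to every instance tagged Application=MyApp, and re-apply
        // it every 30 minutes to enforce state.
        new ssm.CfnAssociation(this, "SetupAssociation", {
          name: setupDoc.ref, // resolves to the generated document name
          targets: [{ key: "tag:Application", values: ["MyApp"] }],
          scheduleExpression: "rate(30 minutes)",
        });
      }
    }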
I want to change the instance type of an AWS EC2 instance based on the CPU utilization of the instance. Basically I want the instance type to be t3.large if the CPU utilization is > 80% and t3.medium if it is below that. Is there any way to achieve this without any downtime? Can I achieve this with AWS CloudFormation? If so, can you provide me a guide or link which I can refer to?
Unfortunately, you can't do this. Changing instance type requires stopping your instance:
You must stop your Amazon EBS–backed instance before you can change its instance type.
However, you could set up your instance(s) to operate in an Auto Scaling group. This way you could use an UpdatePolicy to perform a rolling or replacing update. This can be used to ensure that part or all of your fleet keeps operating while the new instance types are rolled out.
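For example, a minimal CDK (TypeScript) sketch of an Auto Scaling group with a rolling UpdatePolicy might look like this; the construct names, instance type, and capacities are placeholders:

    // Sketch: an Auto Scaling group with a rolling UpdatePolicy, so that changing the
    // instance type and redeploying replaces instances in batches instead of all at once.
    // The construct names, sizes and capacities are placeholders.
    import { Stack } from "aws-cdk-lib";
    import { Construct } from "constructs";
    import * as ec2 from "aws-cdk-lib/aws-ec2";
    import * as autoscaling from "aws-cdk-lib/aws-autoscaling";

    export class ResizableFleetStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 2 });

        new autoscaling.AutoScalingGroup(this, "Fleet", {
          vpc,
          // Change this value and redeploy; the UpdatePolicy below controls the rollout.
          instanceType: new ec2.InstanceType("t3.medium"),
          machineImage: ec2.MachineImage.latestAmazonLinux2(),
          minCapacity: 2,
          maxCapacity: 4,
          // Replace instances one at a time, keeping at least one in service throughout.
          updatePolicy: autoscaling.UpdatePolicy.rollingUpdate({
            maxBatchSize: 1,
            minInstancesInService: 1,
          }),
        });
      }
    }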
Is there a way to view/monitor AWS SageMaker's usage of EC2 instances?
I am running a SageMaker endpoint and tried to find its instances (ml.p3.2xlarge in this case) in the EC2 UI, but couldn't find them.
ml EC2 instances do not appear in the EC2 console. You can find their metrics in CloudWatch, though, and create dashboards to monitor what you need.
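For example, per-instance hardware metrics for an endpoint are published under the /aws/sagemaker/Endpoints namespace, so a dashboard could be sketched roughly like this in CDK (TypeScript); the endpoint and variant names are placeholders:

    // Sketch: a CloudWatch dashboard graphing CPU utilization of the ml.* instances behind
    // a SageMaker endpoint. The endpoint and variant names are placeholders; per-instance
    // hardware metrics appear under the /aws/sagemaker/Endpoints namespace.
    import { Stack, Duration } from "aws-cdk-lib";
    import { Construct } from "constructs";
    import * as cloudwatch from "aws-cdk-lib/aws-cloudwatch";

    export class EndpointDashboardStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const cpu = new cloudwatch.Metric({
          namespace: "/aws/sagemaker/Endpoints",
          metricName: "CPUUtilization",
          dimensionsMap: {
            EndpointName: "my-endpoint", // placeholder
            VariantName: "AllTraffic",   // the default variant name
          },
          statistic: "Average",
          period: Duration.minutes(1),
        });

        const dashboard = new cloudwatch.Dashboard(this, "EndpointDashboard");
        dashboard.addWidgets(
          new cloudwatch.GraphWidget({ title: "Endpoint CPU utilization", left: [cpu] })
        );
      }
    }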
In my project I have a constraint where all of the traffic received will go to a certain IP. The Elastic IP feature works well for this.
My question is, considering we are using Amazon's docker service (ECS) without autoscaling (so instances/tasks will be scaled manually), can we treat the instances created by the ECS service as we would treat normal, on-demand instances? As in they won't be terminated/stopped unless explicitly done by a user (or API call or whatever).
As is mentioned in the Scaling a Cluster documentation, if you created your cluster after November 24th, 2015 using either the first-run wizard or the Create Cluster wizard, then an Auto Scaling group would have been created to manage the instances backing your cluster.
For the most part, the answer to your question is yes: the instances wouldn't normally get replaced. It is also important to note that, because the cluster is backed by an Auto Scaling group, Auto Scaling may replace unhealthy instances for you. If an instance fails its EC2 health checks for some reason, it will be marked as unhealthy and scheduled for replacement.
By default, my understanding is there are no CloudWatch alarms or scaling policies affecting this Auto Scaling group, so it should only be when an instance becomes unhealthy that it would get replaced.
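If you want to double-check that, here is a small sketch using the AWS SDK for JavaScript v3; the group name below is just a placeholder for whatever the ECS wizard created:

    // Sketch: confirm that the cluster's Auto Scaling group has no scaling policies
    // attached, using the AWS SDK for JavaScript v3. The region and group name below
    // are placeholders.
    import {
      AutoScalingClient,
      DescribePoliciesCommand,
    } from "@aws-sdk/client-auto-scaling";

    const autoScaling = new AutoScalingClient({ region: "us-east-1" });

    async function listScalingPolicies(asgName: string) {
      const result = await autoScaling.send(
        new DescribePoliciesCommand({ AutoScalingGroupName: asgName })
      );
      // An empty list means nothing will scale the group in or out automatically;
      // instances are then only replaced when they fail health checks.
      console.log(result.ScalingPolicies ?? []);
    }

    listScalingPolicies("EC2ContainerService-my-cluster-EcsInstanceAsg");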
Amazon's new EBS-based EC2 instances have two options to shutdown: terminate or stop. Stopped instances can be later started again, automatically continuing from the same EBS root disk state they had when they were stopped.
But what happens when an Amazon data center has a hardware failure and the EC2 instance is forced to shut down? Does it terminate or stop? If the instance has been configured to stop by default on shutdown, can I rely on it being stopped also in this situation, and on being able to start it again later?
An EC2 instance can be terminated at any time and one must indeed account for this, as already mentioned in David's answer (+1). You can arrange for a failed instance's Elastic Block Store (EBS) volume to remain available regardless, though; see e.g. the respective FAQ entry, What happens to my data when a system terminates?:
The data stored on a local instance store will persist only as long as that instance is alive. However, data that is stored on an Amazon EBS volume will persist independently of the life of the instance. If you are using an Amazon EBS volume as a root partition, you will need to set the Delete On Terminate flag to "N" for your Amazon EBS volume to persist outside the life of the instance. [emphasis mine]
This is explained in more detail in section 2. Delete on Termination within Eric Hammond's recommended article Three Ways to Protect EC2 Instances from Accidental Termination and Loss of Data:
Though EBS volumes created and attached to an instance at instantiation are preserved through a “stop”/”start” cycle, by default they are destroyed and lost when an EC2 instance is terminated. This behavior can be changed with a delete-on-termination boolean value buried in the documentation for the --block-device-mapping option of ec2-run-instances.
He is referring to the ec2-run-instances documentation, and all this is meanwhile illustrated in more detail within Amazon EC2 Root Device Storage Concepts as well:
By default, the root device volume and the other volumes created when an Amazon EBS-backed instance is launched are automatically deleted when the instance terminates [...]. You can change the default behavior by setting the DeleteOnTermination flag to the value you want when you launch the instance. For an example of how to change the flag at launch time, see Using Amazon EC2 Root Device Storage.
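The same flag can also be set with newer tooling; for instance, a minimal CDK (TypeScript) sketch that keeps the root volume after termination (the device name and volume size here are assumptions that depend on the AMI) might be:

    // Sketch: launch an instance whose root EBS volume survives termination by setting
    // deleteOnTermination to false in the block device mapping. The device name and volume
    // size are assumptions that depend on the AMI.
    import { Stack } from "aws-cdk-lib";
    import { Construct } from "constructs";
    import * as ec2 from "aws-cdk-lib/aws-ec2";

    export class PersistentRootVolumeStack extends Stack {
      constructor(scope: Construct, id: string) {
        super(scope, id);

        const vpc = new ec2.Vpc(this, "Vpc", { maxAzs: 1 });

        new ec2.Instance(this, "Instance", {
          vpc,
          instanceType: new ec2.InstanceType("t3.micro"),
          machineImage: ec2.MachineImage.latestAmazonLinux2(),
          blockDevices: [
            {
              deviceName: "/dev/xvda", // root device name for Amazon Linux 2 AMIs
              volume: ec2.BlockDeviceVolume.ebs(8, {
                deleteOnTermination: false, // keep the volume when the instance terminates
              }),
            },
          ],
        });
      }
    }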
I assume you mean the CPU-related hardware fails, rather than the network disk. The way I treat EC2 is to create a system that can go up and down without data loss. For anything important, you should use an S3 bucket, not EBS.