Sagemaker usage of EC2 instances - amazon-ec2

Is there a way to view/monitor AWS Sagemaker's usage of EC2 instances?
I am running a Sagemaker endpoint and tried to find its instances (ml.p3.2xlarge in this case) in the EC2 UI, but couldn't find them.

SageMaker's ml.* instances do not appear in the EC2 console. Their metrics are published to CloudWatch, though, where you can build dashboards to monitor whatever you need.
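For example, here is a rough AWS CLI sketch (the endpoint name, variant name, and time window are placeholders) that pulls an endpoint's invocation count from CloudWatch:

# Placeholder endpoint/variant names and dates -- adjust to your endpoint.
aws cloudwatch get-metric-statistics \
  --namespace AWS/SageMaker \
  --metric-name Invocations \
  --dimensions Name=EndpointName,Value=my-endpoint Name=VariantName,Value=AllTraffic \
  --start-time 2020-01-01T00:00:00Z --end-time 2020-01-02T00:00:00Z \
  --period 300 --statistics Sum

Instance-level metrics for the endpoint, such as CPUUtilization and MemoryUtilization, live under the /aws/sagemaker/Endpoints namespace.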

Related

AWS ECS: Will Service Auto Scaling create EC2 instances automatically?

I am confused about how Service Auto Scaling works. Will it create EC2 instances automatically?
I created a scaling policy and added it to a cluster's service, but it does not create any EC2 instances to place my required number of tasks. Is anything wrong with my settings? I checked the [Events] tab and saw "service s2 was unable to place a task because no container instance met all of its requirements.", but shouldn't it create an EC2 instance if no instance meets the requirements? Please give me some advice, thanks in advance.
but shouldn't it create an EC2 instance if no instance meets the requirements
Not really. There are two types of scaling policies: scaling policies on an ECS service and scaling policies on the ECS cluster. Instances are added based on cluster scaling policies, and that's what you should set up in addition to your service scaling policy.
AWS has a couple of detailed tutorials on scaling ECS clusters:
https://aws.amazon.com/blogs/compute/automatic-scaling-with-amazon-ecs/
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/cloudwatch_alarm_autoscaling.html
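In short, the tutorials above boil down to a scaling policy on the Auto Scaling group that backs the cluster, triggered by a CloudWatch alarm. A minimal sketch, assuming an ASG named ecs-cluster-asg and a cluster named my-cluster (names and thresholds are placeholders):

# Simple scale-out policy on the ASG behind the ECS cluster.
aws autoscaling put-scaling-policy \
  --auto-scaling-group-name ecs-cluster-asg \
  --policy-name scale-out-on-high-reservation \
  --scaling-adjustment 1 \
  --adjustment-type ChangeInCapacity

# Alarm on the cluster's CPUReservation metric that fires the policy above
# (replace the alarm-actions ARN with the PolicyARN returned by the previous call).
aws cloudwatch put-metric-alarm \
  --alarm-name ecs-cpu-reservation-high \
  --namespace AWS/ECS \
  --metric-name CPUReservation \
  --dimensions Name=ClusterName,Value=my-cluster \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 75 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:autoscaling:region:account:scalingPolicy:...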
AWS Elastic Container Service (ECS) has two launch types for deploying containers on AWS, and with both you do not need to worry about orchestrating the containers (tasks, in AWS terms) yourself:
Fargate (available in a few regions, such as N. Virginia)
EC2 launch type
I guess you are using the second option to deploy your application on ECS, where you configure scaling for tasks/containers, not for EC2 instances. For auto scaling of EC2 instances you should look into AWS Auto Scaling groups (ASGs). As far as ECS is concerned, you need the following building blocks (a CLI sketch of them follows the list):
Cluster
Task definition (memory, network, and storage configuration of the tasks)
Service (which contains the EC2 instance configuration)
Auto scaling policies, if you want to auto-scale tasks
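A minimal CLI sketch of those building blocks (the cluster name, task family, container image, and counts are placeholders; the EC2 instances themselves still come from an ASG as noted above):

# Create the cluster.
aws ecs create-cluster --cluster-name my-cluster

# Register a minimal task definition with one container.
aws ecs register-task-definition \
  --family web-task \
  --container-definitions '[{"name":"web","image":"nginx","memory":256,"portMappings":[{"containerPort":80}]}]'

# Create a service that keeps two copies of the task running on EC2.
aws ecs create-service \
  --cluster my-cluster \
  --service-name web-service \
  --task-definition web-task \
  --desired-count 2 \
  --launch-type EC2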

Launching an ECS service on an EC2 instance I created

I have created my own EC2 instance in AWS; the AMI is the AWS ECS-optimized AMI, and I want to launch my ECS service from that instance. I previously discussed the same thing and tried that approach; the link is below:
Microservice Deployment Using AWS ECS Service
I created my cluster and pointed the instance at that cluster name by putting the following snippet in the advanced user data section:
#!/bin/bash
# Register this instance with the existing ECS cluster at boot
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
I followed the cluster creation documentation at the following link:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create_cluster.html
But no luck: when I create the cluster and the ECS task definitions, the tasks are created and launched onto one EC2 instance, and a second EC2 instance is created from the user data above, so I end up with two EC2 instances even though I had already created my own ECS-optimized one.
What I am looking for is how to launch the ECS service from my own AMI (the one I created). In other words, I need to launch my ECS service on my own EC2 instance (the machine I created from the Amazon ECS-optimized AMI).
The reason behind this requirement is that I don't want my services launched on a machine I did not create myself; I need them launched on my own machine. I also need to host my Angular application on that same machine, so I need control over it. How can I do this?
Sounds like you just need to create a Launch Configuration. With it you can specify the user data that should be applied when a host is set up.
After you create your Launch Configuration, create a new Auto Scaling Group based off of it (there's a drop-down to select the launch configuration you want to use).
From here, any new instances launched under that ASG will apply the settings you've configured in the associated Launch Configuration.
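Something along these lines, assuming the user data file contains the ECS_CLUSTER snippet from the question (the AMI ID, instance type, instance profile, and subnet are placeholders):

# Launch configuration that applies the ECS user data to every new host.
aws autoscaling create-launch-configuration \
  --launch-configuration-name ecs-node-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.medium \
  --iam-instance-profile ecsInstanceRole \
  --user-data file://ecs-userdata.sh

# ASG built from that launch configuration.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name ecs-node-asg \
  --launch-configuration-name ecs-node-lc \
  --min-size 1 --max-size 2 --desired-capacity 1 \
  --vpc-zone-identifier subnet-0123456789abcdef0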

How to update code on multiple EC2 instances?

I created an Elastic Load Balancer in front of two EC2 instances. However, I discovered an issue that requires me to update the code on both EC2 instances.
I can access each instance individually and pull the code from GitHub, or I could create a new AMI and launch a replacement instance, but both options are cumbersome.
How can I synchronize code between the two EC2 instances?
In situations like this, either a code pipeline (for example AWS CodePipeline/CodeDeploy) would be helpful, or, better yet, switch to Elastic Beanstalk.
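As a rough sketch of the pipeline route, CodeDeploy can push the same commit to every instance behind the load balancer (the application name, deployment group, repository, and commit ID below are placeholders):

# Deploy one commit to all instances in the deployment group, one at a time.
aws deploy create-deployment \
  --application-name my-web-app \
  --deployment-group-name production-fleet \
  --github-location repository=my-org/my-repo,commitId=abc123 \
  --deployment-config-name CodeDeployDefault.OneAtATime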

Cloudwatch default metrics EC2 DiskReadOps and DiskWriteOps not reporting

AWS EC2 instances by default include DiskReadOps and DiskWriteOps metrics for attached EBS volumes. I have checked multiple running EC2 instances, both Windows and Linux, and they all report no data other than 0.
See http://docs.aws.amazon.com/AmazonCloudWatch/latest/monitoring/ec2-metricscollected.html
Any idea what the issue may be?
The EC2 instance DiskReadOps and DiskWriteOps metrics apply to instance store volumes only; they do not cover EBS volumes attached to the instance.
If you go to the EBS section of CloudWatch Metrics, you can identify the volume attached to the instance and look at its VolumeReadOps and VolumeWriteOps metrics.
This was an oversight on my part when first asking this question.
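For reference, a minimal CLI sketch for pulling VolumeReadOps for one volume (the volume ID and time window are placeholders):

# Read VolumeReadOps for an attached EBS volume from CloudWatch.
aws cloudwatch get-metric-statistics \
  --namespace AWS/EBS \
  --metric-name VolumeReadOps \
  --dimensions Name=VolumeId,Value=vol-0123456789abcdef0 \
  --start-time 2020-01-01T00:00:00Z --end-time 2020-01-02T00:00:00Z \
  --period 300 --statistics Sum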

Amazon EC2 cluster setup

I am working on an HDFS high availability project.
I have configured Hadoop on one Amazon EC2 instance. It is a small instance (AMI: Ubuntu Server).
I want to form a cluster of EC2 instances, so I am thinking of replicating the same machine. Does anybody know how to duplicate this instance onto another EC2 instance? If so, please share.
Thanks!
If your instance is EBS-backed, you can create an image (snapshot) of it and then launch as many instances as you want from it.
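Roughly like this with the CLI (the instance ID, image name, count, and instance type are placeholders): create an image of the configured Hadoop node, then launch copies from it.

# Image the configured Hadoop node.
aws ec2 create-image --instance-id i-0123456789abcdef0 --name hadoop-node

# Launch copies from the resulting AMI.
aws ec2 run-instances --image-id ami-0123456789abcdef0 \
  --count 3 --instance-type m1.small --key-name my-key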
