Error in autoscaling in ec2? - amazon-ec2

I am using the autoscaling feature.
I set the entire thing up, but instances are automatically launched and terminated even though the CPU never reaches the threshold.
I followed the steps:
Created an instance
Created a load balancer and registered an instance
Created a launch configuration
Created a CloudWatch alarm for CPU >= 50%
An Auto Scaling policy that launches and terminates instances when CPU >= 50%
But as soon as I apply the policy, instances begin launching and terminating without any CPU load, and this continues indefinitely:
Cause: At 2014-01-14T10:51:08Z an instance was started in response to a difference between desired and actual capacity, increasing the capacity from 0 to 1.
StartTime: 2014-01-14T10:51:08.791Z
Cause: At 2014-01-14T10:02:16Z an instance was taken out of service in response to a system health-check.
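The two cause lines above point at the group's desired capacity and health checks, not the CPU alarm, as the driver of the launch/terminate cycle. A way to confirm this (a sketch; "my-asg" is a placeholder for your Auto Scaling group name):

```shell
# Inspect the group's capacity settings and health-check configuration.
# If HealthCheckType is ELB with a short grace period, instances that
# fail the load balancer health check get cycled regardless of CPU.
aws autoscaling describe-auto-scaling-groups \
  --auto-scaling-group-names my-asg \
  --query 'AutoScalingGroups[0].{Min:MinSize,Desired:DesiredCapacity,Max:MaxSize,HealthCheck:HealthCheckType,Grace:HealthCheckGracePeriod}'

# List recent scaling activities with their recorded causes
aws autoscaling describe-scaling-activities \
  --auto-scaling-group-name my-asg
```

These commands require valid AWS credentials and region configuration to run.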
UPDATE

Documentation:
Follow the instructions on how to Set Up an Auto-Scaled and Load-Balanced Application
Notes:
An instance created outside of the Auto Scaling group can be added to the Elastic Load Balancer, but it will not be monitored or managed by the Auto Scaling group.
An instance created outside of the Auto Scaling group can be marked as unhealthy by the Elastic Load Balancer if its health check fails, but that will not cause the Auto Scaling group to spawn a new instance.
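If you want an instance that was created outside the group to be managed by it, it can be attached explicitly (a sketch; the instance ID and group name are placeholders):

```shell
# Attach an existing instance to the Auto Scaling group so it becomes
# managed by the group. Note: this increments the group's desired
# capacity by one.
aws autoscaling attach-instances \
  --instance-ids i-0123456789abcdef0 \
  --auto-scaling-group-name my-asg
```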

Related

Does the ECS update-service command mark the container instance state as DRAINING when used with the --force-new-deployment option?

The command:
aws ecs update-service --service my-http-service --task-definition amazon-ecs-sample --force-new-deployment
As per the AWS docs: You can use this option (--force-new-deployment) to trigger a new deployment with no service definition changes. For example, you can update a service's tasks to use a newer Docker image with the same image/tag combination (my_image:latest) or to roll Fargate tasks onto a newer platform version.
My question: if I use --force-new-deployment (as I will reuse the existing tag and task definition), will the underlying ECS instance automatically be set to the DRAINING state, so that no new task starts on the EXISTING container instance that is supposed to go away during the rolling-update deployment?
In other words, is there any chance:
that a new task is placed on the existing/old container instance that is supposed to go away during the rolling update?
Also, what happens to tasks already running on this existing/old container instance that is supposed to go away during the rolling update?
Ref: https://docs.aws.amazon.com/cli/latest/reference/ecs/update-service.html
Please note that no container instance goes anywhere with the update-service command. It only creates a new deployment under the ECS service and, once the new tasks become healthy, removes the old task(s).
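You can watch the new deployment replace the old one (a sketch; cluster and service names are placeholders):

```shell
# Trigger the forced deployment
aws ecs update-service --cluster my-cluster --service my-http-service \
  --force-new-deployment

# Poll the service: the new PRIMARY deployment scales up while the old
# ACTIVE deployment scales down; no container instance state changes.
aws ecs describe-services --cluster my-cluster --services my-http-service \
  --query 'services[0].deployments[].{Status:status,Running:runningCount,Desired:desiredCount}'
```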
Edit 1:
What about the requests that were being served by the old tasks?
I am assuming the tasks are behind an Application Load Balancer. In this case, old tasks will be deregistered from the ALB.
Note: In the following discussion, target is the ECS Task.
To give you a brief description of how the deregistration delay works with ECS, the following is the sequential order of events when a task connected to an ALB is stopped. This can be triggered by a scale-in event, deployment of a new task definition, a decrease in the number of tasks, a forced deployment, etc.
1. ECS sends a DeregisterTargets call and the targets change status to "draining". New connections will not be sent to these targets.
2. If the deregistration delay elapses and there are still in-flight requests, the ALB terminates them and clients receive 5XX responses originating from the ALB.
3. The targets are deregistered from the target group.
4. ECS sends the stop call to the tasks and the ECS agent gracefully stops the containers (SIGTERM).
5. If the containers do not stop within the stop timeout period (ECS_CONTAINER_STOP_TIMEOUT, 30s by default), they are force-stopped (SIGKILL).
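The stop timeout in the last step is an agent setting on the container instance (the 120s value below is only an illustration):

```shell
# /etc/ecs/ecs.config on the container instance: give containers more
# time to shut down cleanly before SIGKILL (default is 30s)
echo "ECS_CONTAINER_STOP_TIMEOUT=120s" >> /etc/ecs/ecs.config
```

On newer platform versions the same thing can also be set per container via the stopTimeout field in the task definition.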
As per the ELB documentation [1], if a deregistering target has no in-flight requests and no active connections, Elastic Load Balancing immediately completes the deregistration process without waiting for the deregistration delay to elapse. However, even though target deregistration is complete, the target's status is displayed as draining, and the old task's target remains visible in the target group console with status draining until the deregistration delay elapses.
ECS is designed to stop the task only after the draining process completes, as mentioned in the ECS documentation [2]; the ECS service therefore waits for the target group to finish draining before issuing the stop call.
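Since the service waits out the full deregistration delay, shortening it on the target group is a common way to speed up deployments (a sketch; the ARN is a placeholder and 30s is only an illustration, the default is 300s):

```shell
# Reduce how long the ALB keeps draining targets before ECS is allowed
# to stop the old tasks
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef \
  --attributes Key=deregistration_delay.timeout_seconds,Value=30
```

The trade-off is that long-running in-flight requests are cut off sooner with a 5XX.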
Ref:
[1] Target Groups for Your Application Load Balancers - Deregistration Delay - https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-target-groups.html#deregistration-delay
[2] Updating a Service - https://docs.aws.amazon.com/AmazonECS/latest/developerguide/update-service.html

ECS Blue/Green Deployment with CodeDeploy - Autoscaling Instances Number

I'm using ECS (on EC2) with blue/green deployment powered by AWS CodeDeploy.
After creating a new task definition, and updating the service to use it, a new deployment on AWS CodeDeploy starts. However, the new task is not able to run on my service due to this error:
service my-service-dev was unable to place a task because no container instance met all of its requirements...
I understand that the current instances used by the cluster cannot run the new task. If I manually add new instances, by increasing the minimal capacity in the auto-scaling group of the cluster, then the deployment takes place successfully.
I wanted to know if there is a way to make this happen automatically. Increasing the maximum capacity (in the same place) doesn't seem to help.
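One possible way to get this behavior automatically (an assumption on my part, not something the question confirms you can use): an ECS capacity provider with managed scaling lets ECS grow the Auto Scaling group itself when tasks cannot be placed. All names and the ARN below are placeholders:

```shell
# Create a capacity provider backed by the cluster's ASG, with managed
# scaling enabled so ECS adjusts the group's desired capacity as needed
aws ecs create-capacity-provider \
  --name my-cp \
  --auto-scaling-group-provider "autoScalingGroupArn=arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:0fedcba9-8765-4321-0fed-cba987654321:autoScalingGroupName/my-asg,managedScaling={status=ENABLED,targetCapacity=100}"

# Make it the cluster's default strategy
aws ecs put-cluster-capacity-providers \
  --cluster my-cluster \
  --capacity-providers my-cp \
  --default-capacity-provider-strategy capacityProvider=my-cp,weight=1
```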

Lifecycle of an EC2 Container Service Instance

In my project I have a constraint where all of the traffic received will go to a certain IP. The Elastic IP feature works well for this.
My question is, considering we are using Amazon's docker service (ECS) without autoscaling (so instances/tasks will be scaled manually), can we treat the instances created by the ECS service as we would treat normal, on-demand instances? As in they won't be terminated/stopped unless explicitly done by a user (or API call or whatever).
As is mentioned in the Scaling a Cluster documentation, if you created your cluster after November 24th, 2015 using either the first run wizard, or the Create Cluster wizard, then an Autoscaling group would have been created to manage the instances backing your cluster.
For the most part, the answer to your question is yes: the instances wouldn't normally get replaced. It is also important to note that, because the cluster is backed by an Auto Scaling group, Auto Scaling may replace unhealthy instances for you. If an instance fails its EC2 health checks for some reason, it will be marked as unhealthy and scheduled for replacement.
By default, my understanding is that there are no CloudWatch alarms or scaling policies affecting this Auto Scaling group, so an instance should only be replaced when it becomes unhealthy.
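If you truly never want the group to replace instances on its own, that replacement behavior can be suspended (a sketch; the group name is a placeholder, and suspending processes should be done with care):

```shell
# Stop Auto Scaling from terminating instances that fail health checks
aws autoscaling suspend-processes \
  --auto-scaling-group-name my-asg \
  --scaling-processes ReplaceUnhealthy

# Re-enable it later with:
# aws autoscaling resume-processes --auto-scaling-group-name my-asg \
#   --scaling-processes ReplaceUnhealthy
```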

How to replace ECS cluster instances without downtime or reduced redundancy?

I currently have a try-out environment with ~16 services divided over 4 micro-instances. Instances are managed by an autoscaling group (ASG). When I need to update the AMI of my cluster instances, currently I do:
Create new launch config, edit ASG with new launch config.
Detach all instances with replacement option from the ASG and wait until the new ones are listed in the cluster instance list.
MANUALLY find and deregister the old instances from the ECS cluster (very tricky)
Now the services are killed by ECS due to deregistering the instances :(
Wait 3 minutes until the services are restarted on the new instances
MANUALLY find the EC2 instances in the EC2 instance list and terminate them (be very very careful not to terminate the new ones).
With this approach I have about 3 minutes of downtime, and I shudder at the idea of doing this in production environments. Is there a way to do this without downtime while keeping the overall number of instances the same (so without 200% scaling settings, etc.)?
You can update the Launch Configuration with the new AMI and then assign it to the ASG. Make sure to include the following in the user-data section:
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
Then terminate one instance at a time, and wait until the new one is up and automatically registered before terminating the next.
This could be scriptable to be automated too.
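A sketch of what that script could look like, using ECS instance draining so tasks are moved before each instance is terminated (cluster and group names are placeholders; it assumes the new launch configuration is already attached to the ASG):

```shell
#!/usr/bin/env bash
set -euo pipefail
CLUSTER=my-cluster
ASG=my-asg

for arn in $(aws ecs list-container-instances --cluster "$CLUSTER" \
               --query 'containerInstanceArns[]' --output text); do
  # Drain the old container instance so ECS reschedules its tasks
  aws ecs update-container-instances-state --cluster "$CLUSTER" \
    --container-instances "$arn" --status DRAINING

  # Wait until no tasks are left running on it
  while [ "$(aws ecs describe-container-instances --cluster "$CLUSTER" \
              --container-instances "$arn" \
              --query 'containerInstances[0].runningTasksCount' \
              --output text)" != "0" ]; do
    sleep 15
  done

  # Terminate the EC2 instance without decrementing desired capacity,
  # so the ASG launches a replacement from the new launch configuration
  ec2_id=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
             --container-instances "$arn" \
             --query 'containerInstances[0].ec2InstanceId' --output text)
  aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id "$ec2_id" --no-should-decrement-desired-capacity
done
```

Note that draining an instance before its replacement registers temporarily reduces capacity by one; to avoid even that, bump the desired capacity by one before the loop and restore it afterwards.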

How to perform autoscaling in ec2 inside a vpc?

I want to perform autoscaling without using CLI tools. I want to do it from the console itself.
The instance is in a VPC; how can I apply the Auto Scaling policy to it?
Any lead is appreciated.
Thanks in advance.
Documentation:
Follow the instructions on how to Set Up an Auto-Scaled and Load-Balanced Application
Notes:
An instance created outside of the Auto Scaling group can be added to the Elastic Load Balancer, but it will not be monitored or managed by the Auto Scaling group.
An instance created outside of the Auto Scaling group can be marked as unhealthy by the Elastic Load Balancer if its health check fails, but that will not cause the Auto Scaling group to spawn a new instance.
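For reference, the VPC-specific part of that setup expressed as CLI calls (all names and IDs are placeholders): the group is pinned to VPC subnets via --vpc-zone-identifier.

```shell
# Create the group inside the VPC by listing its subnets; the console's
# "Network"/"Subnet" fields map to this flag
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-asg \
  --launch-configuration-name my-launch-config \
  --min-size 1 --max-size 3 \
  --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210" \
  --load-balancer-names my-elb
```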
