AWS Fleet Management vs Dynamic Scaling - amazon-ec2

What is the difference between AWS EC2 Fleet Management and Dynamic Scaling?
Fleet Management performs health checks and keeps your fleet at the desired compute capacity. Dynamic Scaling automatically increases or decreases compute capacity based on load or other metrics.
Both of these seem pretty much the same. What is the main difference between them?
Can you please explain with an example?

I have written a post about this, and you can find it here: Dynamic scaling VS Fleet management scaling
Fleet management
Used for:
Replacing unhealthy instances;
Distributing instances among Availability Zones to maximize resilience. E.g.: you're running instances in the us-east-1 region, so Auto Scaling can provision instances across the following AZs: us-east-1a, us-east-1b, us-east-1c, us-east-1d, and us-east-1e;
Dynamic scaling
Used for:
Scaling based on CloudWatch alarm metrics or a metric type (more on that later) when a threshold is met, or taking different measures depending on how a CloudWatch alarm threshold is breached.
Types of Dynamic scaling:
Simple scaling: scales based on a single CloudWatch alarm metric and applies the measures you define;
Step scaling: scales based on different levels of a CloudWatch alarm metric and applies the actions you define;
Target tracking scaling: scales based on a metric type but delegates the action to be taken to AWS (see the sketch after this list);
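As a rough illustration of that last type, a target tracking policy can be attached to an existing Auto Scaling group with a few lines of Python/boto3. The group name and the 50% CPU target below are placeholder values, not anything prescribed by AWS:

    # Minimal sketch: target tracking policy on an existing Auto Scaling group.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="my-asg",          # hypothetical group name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,                # keep average CPU around 50%
        },
    )

AWS then decides when and by how much to scale in order to hold the metric near the target, which is exactly the "delegates the action to AWS" part.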
Which one to use?
That's not quite the right question to ask. You get Fleet Management out of the box, with the option of configuring Dynamic Scaling to take custom actions.

Related

Which AWS EC2 instance type should I select for long-running, important apps?

I have been checking all the EC2 instance options and, of all the types, I'm debating between choosing a reserved instance or a dedicated instance.
Spot and on-demand don't meet the requirement, I think.
I don't have more information about the type of app to be run.
It sounds like you are concerned with the availability (or up time) of different EC2 instances. The difference between reserved, dedicated, spot and on-demand has more to do with cost than availability. AFAIK AWS does not guarantee different levels of availability for different EC2 instance types or cost structures.
Dedicated instances are ones that won't run on the same hardware as instances belonging to other AWS accounts, but they can run on the same hardware as other non-dedicated instances in the same account. Dedicated instances shouldn't have any effect on availability.
The other options (reserved, spot and on-demand) are just different cost structures. They don't affect performance or availability.
AWS advertises 99.99% uptime. This is from their SLA:
AWS will use commercially reasonable efforts to make the Included Services each available for each AWS region with a Monthly Uptime Percentage of at least 99.99%, in each case during any monthly billing cycle (the “Service Commitment”). In the event any of the Included Services do not meet the Service Commitment, you will be eligible to receive a Service Credit as described below.
So any instance will be appropriate for important, long-running apps; you just need to select an instance type that is large enough and pick the right cost structure. You probably don't need dedicated instances unless you have determined that CPU steal time is going to impact your app's performance.
I don't think it's instance types you need to worry about - they are just different sizes of machines with different ratios of CPU/RAM/IO. High availability can only be achieved by designing it into both the application and the infrastructure. Reservation types (spot/on-demand/dedicated) are more about cost optimisation based on how long and how much capacity you need.
Spot instances are transient - you can reserve a spot instance for up to 6 hours, after which it will be terminated. Probably not a good fit, but it depends on how long "long running" is. You get a significant discount for using spot instances if you can live with the limitations.
On-demand and reserved are basically the same service, but on-demand is pay-as-you-go, whereas a reserved instance is a fixed-length contract with upfront payment. Reserved instances give you the best value for money over the long term, but on-demand gives you the flexibility to change instance size or turn the instance off to save money.
Going shared vs dedicated is generally a compliance question, as it gets a lot more expensive to reserve your own hardware to run only your own instances.
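For what it's worth, tenancy is just a launch parameter rather than a different kind of instance. A minimal boto3 sketch of requesting dedicated tenancy (the AMI ID and instance type below are placeholder values):

    # Sketch: launching an instance with dedicated tenancy.
    import boto3

    ec2 = boto3.client("ec2")

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",     # hypothetical AMI
        InstanceType="m5.large",
        MinCount=1,
        MaxCount=1,
        Placement={"Tenancy": "dedicated"},  # "default" means shared hardware
    )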

Distributed calculation on Cloud Foundry with help of auto-scaling

I have some computation intensive and long-running task. It can easily be split into sub-tasks and also it would be kind of easy to aggregate the results later on. For example Map/Reduce would work well.
I have to solve this on Cloud Foundry, and there I want to take advantage of auto-scaling, that is, the creation of additional instances due to high CPU load. Normally I use Spring Boot for developing my CF apps.
Any ideas are welcome on how to divide and conquer in an elastic way on CF. It would be great to have as many instances created as CF sees fit, without needing to configure the number of available application instances in the application. I would also need to trigger the creation of instances by loading the CPUs enough to provoke auto-scaling.
I have to solve this on Cloud Foundry
It sounds like you're on the right track here. The main thing is that you need to write your app so that it can coexist with multiple instances of itself (or perhaps break it into a primary node that coordinates work and multiple worker apps). However you architect the app, being able to scale up instances is critical. You can then simply cf scale to add or remove nodes and increase capacity.
If you wanted to get clever, you could set up a pipeline to run your jobs. Step one would be to scale up the worker nodes of your app, step two would be to schedule the work to run, step three would be to clean up and scale down your nodes.
I'm suggesting this because manual scaling is going to be the simplest path forward (please read on for why).
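A minimal sketch of that pipeline idea, driving the cf CLI from Python. The app name, instance counts, and the completion check are placeholders for whatever your application actually uses to distribute and track work:

    # Rough pipeline sketch: scale up, run the work, scale back down.
    import subprocess
    import time

    APP = "worker-app"  # hypothetical app name

    def cf_scale(app: str, instances: int) -> None:
        # "cf scale APP -i N" adjusts the number of application instances
        subprocess.run(["cf", "scale", app, "-i", str(instances)], check=True)

    def work_is_done() -> bool:
        # Placeholder: poll your queue, database, or an app endpoint here.
        return True

    cf_scale(APP, 10)          # step 1: scale up the worker nodes
    # step 2: submit the work through whatever mechanism your app uses (queue, HTTP, ...)
    while not work_is_done():  # step 3a: wait for the sub-tasks to finish
        time.sleep(30)
    cf_scale(APP, 1)           # step 3b: clean up and scale back down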
and there I want to take advantage of auto-scaling, that is, the creation of additional instances due to high CPU load.
As to autoscaling, I think it's possible, but I also think it's making the problem more complicated than it needs to be. Autoscaling by CPU on Cloud Foundry is not as simple as it seems. The way Linux reports CPU usage, you can exceed 100%: it's 100% per CPU core. Pair this with the fact that you may not know how many CPU cores are on your Cells (for example, if you're using a public CF provider) and the fact that the number of cores could change over time (if your provider changes hardware), and it becomes difficult to know at what point you should scale your application.
If you must autoscale, I would suggest trying to autoscale on some other metric. Which metrics are available will depend on the autoscaler tool you are using. The best case would be if you could use a custom metric; then you could scale on work queue length or something else that's relevant to your application. If custom metrics are not supported, you could always hack together your own autoscaler that works with metrics relevant to your application (you can scale up and down by adjusting the instance count of your app using the CF API).
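If you did go the hand-rolled route, it could be as small as a loop that polls your own backlog metric and adjusts the instance count. Everything below (app name, throughput assumption, queue check) is a made-up placeholder, not a real CF autoscaler API:

    # Hand-rolled autoscaler sketch: scale the worker app with the queue depth.
    import math
    import subprocess
    import time

    APP = "worker-app"          # hypothetical app name
    JOBS_PER_INSTANCE = 20      # assumed throughput per instance
    MIN_INSTANCES, MAX_INSTANCES = 1, 20

    def queue_length() -> int:
        # Placeholder: query RabbitMQ, a database table, etc.
        return 0

    while True:
        backlog = queue_length()
        desired = max(MIN_INSTANCES,
                      min(MAX_INSTANCES, math.ceil(backlog / JOBS_PER_INSTANCE)))
        subprocess.run(["cf", "scale", APP, "-i", str(desired)], check=True)
        time.sleep(60)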
You might also be able to hack together a solution based on the metrics that your autoscaler does provide. For example, you could artificially inflate a metric that your autoscaler does support in proportion to the workload you need to process.
You could also just scale up when your work day starts and scale down at the end of the day. It's not dynamic, but it's simple and it will get you some efficiency improvements.
Hope that helps!

How to monitor different metrics for scaling an application on Amazon

Is it possible to make scaling decisions based on multiple metrics? For example, I want to check both "CPU Utilization" and "number of requests" together and then decide whether I need to scale my application or not.
At this time, I do not believe this is possible as desired based on https://forums.aws.amazon.com/thread.jspa?messageID=223753 and https://forums.aws.amazon.com/thread.jspa?threadID=94984.
You do have the avenue of using custom CloudWatch metrics as your source for scaling. So, rather than trying to use two metrics, you could create your own metric based off that other data to decide how/when to scale.
Here's the original blog post regarding the availability of custom CloudWatch metrics. Also, here's a post about using AutoScaling with them.
This should give you the ability to be as granular as you like in terms of what constitutes a scalable event for your system.
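As a rough sketch of that approach, you could blend the two measurements into one custom metric, publish it, and then point a scaling policy at it. The namespace, metric name, and weighting below are invented for illustration:

    # Sketch: publish a blended custom CloudWatch metric to scale on.
    import boto3

    cloudwatch = boto3.client("cloudwatch")

    cpu_utilization = 62.0      # percent, however you collect it
    requests_per_minute = 900   # however you collect it

    # Blend the two inputs into a single 0-100 "pressure" score (arbitrary weights).
    scaling_score = 0.5 * cpu_utilization + 0.5 * min(100.0, requests_per_minute / 10.0)

    cloudwatch.put_metric_data(
        Namespace="MyApp/Scaling",              # hypothetical namespace
        MetricData=[{
            "MetricName": "ScalingScore",
            "Value": scaling_score,
            "Unit": "None",
        }],
    )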

AWS Autoscaling and Elastic load balancing

For my application I am using Auto Scaling without Elastic Load Balancing. Is there any performance issue with using Auto Scaling directly, without an ELB?
David is right.
Auto Scaling allows you to scale instances (based on CloudWatch metrics, a single event, or a recurring schedule).
Suppose you have three instances running (scaled with Auto Scaling): how is traffic going to reach them? You need to implement load balancing somewhere; that's why Elastic Load Balancing is so useful.
Without it, your traffic can only be directed in a poorly engineered manner.
See slide #5 of this presentation on SlideShare to get a sense of the architecture: http://www.slideshare.net/harishganesan/scale-new-business-peaks-with-auto-scaling
Autoscaling determines, based on some measurement (CPU load is a common measurement), whether or not to increase/decrease the number of instances running.
Load balancing relates to how you distribute traffic to your instances, based on domain name lookup, etc. Somewhere you must have knowledge of which IP addresses are currently assigned to the instances that Auto Scaling creates.
You can have multiple IP address entries for A records in the DNS settings, and machines will be allocated in a roughly round-robin fashion from that pool. But keeping the pool up to date in real time is hard.
The load balancer gives you an easy mechanism to provide a single interface/IP address to the outside world and it has knowledge of which instances it is load balancing in real time.
If you are using autoscaling, unless you are going to create a fairly complex monitoring and DNS updating system, you can reasonably assume that you must use a load balancer as well.
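On AWS, the "keeping the pool up to date" problem is usually solved by attaching the Auto Scaling group to the load balancer, so that instances the group launches are registered and deregistered automatically. A rough boto3 sketch using a target group (the group name and target group ARN are placeholders):

    # Sketch: let the Auto Scaling group register its instances with the LB.
    import boto3

    autoscaling = boto3.client("autoscaling")

    autoscaling.attach_load_balancer_target_groups(
        AutoScalingGroupName="my-asg",  # hypothetical group name
        TargetGroupARNs=[
            "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/0123456789abcdef",
        ],
    )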

Rule of thumb for determining web server scale up and down?

What's a good rule of thumb for determining whether to scale the number of cloud-based web servers I have running up or down? Are there any other metrics besides request execution time, processor utilization, available memory, and requests per second that should be monitored for this purpose? Should the weighted average, standard deviation, or some other calculation be used for deciding when to scale up or down? And finally, are there any particular values that are best for determining when to add or reduce server instances?
This question of dynamically allocating compute instances brings back memories of my control systems classes in engineering school. It seems we should be able to apply classical digital control systems algorithms (think PID loops and Z-transforms) to scaling servers. Spinning up a server instance is analogous to moving an engine throttle's stepper motor one notch to increase the fuel and oxygen rate in response to increased load. Respond too slowly and the performance is sluggish (overdamped), too quickly and the system becomes unstable (underdamped).
In both the compute and physical domains, the goal is to have resources match load. The good news is that compute systems are among the easier control systems to deal with, since having too many resources doesn't cause instability; it just costs money, similar to generating electricity in a system with a resistor bank to burn off excess.
It’s great to see how the fundamentals keep coming around again! We learned all that for a reason.
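To make the analogy concrete, the simplest digital version is a purely proportional controller that sizes the fleet to match measured load. The toy sketch below uses made-up numbers and is, in spirit, what an AWS target tracking policy does for you:

    # Toy proportional controller: size the fleet in proportion to utilization.
    import math

    def desired_instances(current_instances: int,
                          measured_cpu_pct: float,
                          target_cpu_pct: float = 50.0,
                          min_instances: int = 1,
                          max_instances: int = 20) -> int:
        # Hotter than the target -> grow proportionally; cooler -> shrink.
        raw = current_instances * measured_cpu_pct / target_cpu_pct
        return max(min_instances, min(max_instances, math.ceil(raw)))

    print(desired_instances(4, 80.0))  # -> 7: scale out when CPU is above target
    print(desired_instances(4, 20.0))  # -> 2: scale in when CPU is well below target

Real systems add damping on top of this (cooldown periods, evaluation windows) to avoid the underdamped oscillation described above.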
Your question touches on what is currently a hot research area. That said, web-server scaling can be automated by cloud providers in different ways; for the details of how it works and which metrics affect scaling up and down, you can glance at this paper.
Amazon has announced Elastic Beanstalk, which lets you deploy an application to Amazon's EC2 (Elastic Compute Cloud) and have it scale up or down, by launching or terminating server instances, according to demand. There is no additional cost for using Elastic Beanstalk; you are charged for the instances you use.
You can also check out Auto Scaling, which AWS offers:
Auto Scaling allows you to scale your Amazon EC2 capacity automatically up or down according to conditions you define. With Auto Scaling, you can ensure that the number of Amazon EC2 instances you're using increases seamlessly during demand spikes to maintain performance and decreases automatically during demand lulls to minimize costs. Auto Scaling is particularly well suited for applications that experience hourly, daily, or weekly variability in usage. Auto Scaling is enabled by Amazon CloudWatch and available at no additional charge beyond Amazon CloudWatch fees.
I recommend reading the details from Amazon AWS to dig into how their system handles scaling web servers up and down.
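As a rough sketch of how that CloudWatch-driven scaling is wired together, here is a scale-out policy plus the alarm that triggers it in boto3. The group name, alarm name, threshold, and adjustment are placeholder values:

    # Sketch: simple scaling policy triggered by a CloudWatch CPU alarm.
    import boto3

    autoscaling = boto3.client("autoscaling")
    cloudwatch = boto3.client("cloudwatch")

    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-asg",   # hypothetical group name
        PolicyName="scale-out-on-cpu",
        PolicyType="SimpleScaling",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,              # add one instance per alarm breach
        Cooldown=300,
    )

    cloudwatch.put_metric_alarm(
        AlarmName="web-asg-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],  # fire the scaling policy on breach
    )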
