Beanstalk does the typical thing an autoscaling app would do: it waits for demand (in the form of web requests) to spike, then spawns new instances for the cluster from the registered AMI.
My question is: if I have an actor-based system, can I manually trigger the expansion of the grid, or do I have to route requests to my agents through some kind of HTTP pipeline that AWS would recognize?
Seems like other solutions on AWS (e.g. Hadoop) allow for explicit control. Also, other solutions on top of AWS (e.g. Heroku) do as well.
This is an old question but I think an answer might be useful for those that come after. This looks like a case for AWS Autoscaling. See http://aws.amazon.com/documentation/autoscaling/.
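For the record, you don't need to route traffic through an HTTP pipeline: the Auto Scaling API lets you set a group's desired capacity directly. A minimal sketch with boto3 (the group name and capacity are made up for illustration):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Manually expand the grid: ask the group for 8 instances right now.
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="my-actor-grid",  # hypothetical group name
        DesiredCapacity=8,
        HonorCooldown=False,  # act immediately, ignoring the cooldown timer
    )

Your actor system can call this itself when its own metrics (mailbox depth, actor count, etc.) say it needs more capacity, with no HTTP front end involved.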
Hypothesis
Suppose I want to roll out my own FaaS hosting, a service like Lambda, not on Lambda.
Analogy
I have an abstract understanding of other cloud services as follows:
1. Infrastructure as a service (IaaS): Create virtual machines for tenants on your hardware.
2. Platform as a service (PaaS): Create a VM and run a script that loads the required environment.
The above could also be achieved with Docker images.
What about FaaS?
AWS uses Firecracker microVMs for Lambda functions. But what's not clear is how the VMs are started and stopped, and how they're orchestrated across multiple pieces of hardware in a multi-tenant environment. Could someone explain how the complete life cycle works?
The main features of AWS Lambda and Cloud Functions are compared in https://cloud.google.com/docs/compare/aws/compute#faas_comparison.
I can share what I know, which is Google Cloud Functions.
Triggers
Cloud Functions can be triggered in two ways: by an HTTP request or by an event (see "Events and Triggers" in the documentation). Events are things that happen in your project: a file is updated in Cloud Storage or Cloud Firestore, a Compute Engine instance (VM) is initialized, or source code is updated in your repository.
Any of these events can trigger a Cloud Function. When triggered, the function is executed in a VM and receives the HTTP request (or event payload) plus context information to perform its duty.
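To make that concrete, here is a minimal sketch of the two trigger styles using the Python runtime (function and bucket names are made up):

    # main.py -- minimal sketches of both Cloud Functions trigger styles.

    def hello_http(request):
        """HTTP-triggered: 'request' is a Flask request object."""
        name = request.args.get("name", "world")
        return f"Hello, {name}!"

    def on_file_uploaded(event, context):
        """Event-triggered (e.g. a Cloud Storage upload): 'event' is the
        payload dict, 'context' carries metadata such as the event ID."""
        print(f"File {event['name']} uploaded to bucket {event['bucket']}")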
Auto-scaling and machine-type
If the volume of requests arriving at a Cloud Function increases, it auto-scales. That is, instead of having one VM executing one request at a time, you will have multiple VMs, each serving one request at a time. On any given instance, only one request at a time will be handled.
If you want more information, you can check it on the official documentation.
I'm hoping to move an application to AWS.
I would like to use Auto Scaling so that not all my EC2 instances are running when application usage is quiet.
My problem is.....
I have one service account used for all communication between the various components of the application and the servers in that environment.
We have a security exception within my company which allows us to use the service account to perform its actions on each individual server.
Every time we introduce a new server to the environment, we have to request that the security team update our exception list to allow the new server in as well.
There is no automatic method for doing this. We have to submit a request to the security team asking for the new server to be added to the exception.
So while Auto Scaling would be perfect, how can it work in this case if, each time a server is added, the security team needs to be notified so they can add the new server to the exception list?
Thanks
You can get notifications when your Auto Scaling group scales either up or down. SNS can send a variety of things, including SMS (text) messages to a cell phone.
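Roughly, wiring an Auto Scaling group to an SNS topic looks like this (the group name and topic ARN are placeholders; the topic would carry an SMS or email subscription for the security team):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Send a notification to the topic whenever the group launches
    # or terminates an instance.
    autoscaling.put_notification_configuration(
        AutoScalingGroupName="my-app-asg",  # hypothetical
        TopicARN="arn:aws:sns:us-east-1:123456789012:scaling-events",
        NotificationTypes=[
            "autoscaling:EC2_INSTANCE_LAUNCH",
            "autoscaling:EC2_INSTANCE_TERMINATE",
        ],
    )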
While this would work, it is incredibly manual. The goal of an Auto Scaling group is to let the environment expand and contract without human intervention. I personally would not implement this because, depending on the availability of your security team, they may be a bottleneck to scaling up. If for some reason they miss the scale-up event that signals them to do something, then you've got orphan machines that you're paying for that are doing nothing.
Additionally, there are ways to script the provisioning of a new machine, so perhaps there is a way to add what you want automatically. AWS calls this user data - you can learn a bit more about it from the AWS EC2 docs.
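As a sketch of that idea: user data runs on first boot, so the instance could report itself to whatever intake the security team exposes. Everything below - the endpoint URL in particular - is hypothetical:

    #!/usr/bin/env python3
    # User-data sketch: on first boot, report this instance to a
    # hypothetical security-exception intake so it can be allow-listed.
    import json
    import urllib.request

    SECURITY_API_URL = "https://security.example.com/exceptions"  # hypothetical

    # Ask the instance metadata service (IMDS) for this instance's ID.
    # Note: with IMDSv2 enforced you would first fetch a session token.
    with urllib.request.urlopen(
        "http://169.254.169.254/latest/meta-data/instance-id", timeout=5
    ) as resp:
        instance_id = resp.read().decode()

    req = urllib.request.Request(
        SECURITY_API_URL,
        data=json.dumps({"instance_id": instance_id}).encode(),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req, timeout=10)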
But ultimately I'd really take a step back and look at your architecture. If you can't script the machine provisioning then autoscaling is not very worthwhile - it's just plain "have devops add another machine if needed and hope they remember to take it down when it's not needed".
I am looking forward to working with AWS Lambda and Node.js.
They call it serverless, so is it a better way to host our code than traditional hosting on servers?
I am open to suggestions, thanks in advance!
It is called serverless because you don't manage and maintain the underlying server or the runtime.
Basically, you write your code in one of the supported languages, say Node.js, and then configure events that will trigger your code.
For example, in AWS the events can be an API Gateway call, an SQS message, an SNS notification, etc.
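A minimal handler sketch (shown in the Python runtime; the Node.js shape is analogous):

    def handler(event, context):
        # 'event' carries the trigger payload: an API Gateway request,
        # a batch of SQS messages, an SNS notification, etc.
        records = event.get("Records", [])  # present for SQS and SNS events
        return {"statusCode": 200, "body": f"processed {len(records)} records"}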
So it can be better depending on what you are planning on doing.
Do note that there are certain limits that AWS imposes by default on accounts for AWS Lambda.
Also, there can be a slight startup penalty (a "cold start") for a Lambda.
A plus point of Lambda versus hosting your code on EC2 is that with Lambda you don't get charged if your code is not used/triggered.
However, do note that for functions that have heavy, sustained usage it might be cheaper to host your code on your own EC2 instance.
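As a rough, illustrative back-of-the-envelope (the prices are approximate us-east-1 on-demand figures that change over time, so treat every number here as an assumption):

    # Illustrative only -- approximate prices, free tiers ignored.
    requests_per_month = 10_000_000
    avg_duration_s = 0.2
    memory_gb = 0.5

    lambda_compute = requests_per_month * avg_duration_s * memory_gb * 0.0000166667  # $/GB-second
    lambda_requests = (requests_per_month / 1_000_000) * 0.20                        # $/1M requests
    print(f"Lambda: ~${lambda_compute + lambda_requests:.2f}/month")  # ~$18.67

    ec2_small = 0.0208 * 24 * 30  # one t3.small running all month
    print(f"EC2:    ~${ec2_small:.2f}/month")  # ~$14.98

At this volume the single always-on instance already edges out Lambda, and the gap widens as traffic grows; at low or bursty volume, Lambda's pay-per-use wins.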
Most importantly, a Lambda function has to be stateless.
Considering all the above factors you can take a call on whether AWS Lambda and Serverless Architecture fits your needs.
It seems to me that there are two alternative ways of having an autoscaled web service running on EC2 instances behind an ELB:
1) Create an Auto Scaling launch configuration that specifies the image ID of my custom AMI (and the instance type to use). Then, when the Auto Scaling trigger fires, it will simply spin up new EC2 instances from that AMI (a sketch follows after option 2).
2) Use ELB and ECS instead, as ECS seems to have its own Auto Scaling feature.
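For option 1, I imagine the setup looking roughly like this (a boto3 sketch; the AMI ID, names, and zones are made up):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Launch configuration pointing at my custom AMI (hypothetical IDs).
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-lc-v1",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.small",
    )

    # Auto Scaling group that spins up instances from that configuration.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-asg",
        LaunchConfigurationName="web-lc-v1",
        MinSize=2,
        MaxSize=10,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )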
In what circumstances is it better to use ECS?
The two options are not mutually exclusive. To answer your last question first, you use ECS when you run containerized applications. You can scale these applications using the Service Auto Scaling feature in ECS. This will help you bring up additional containers when you're running short of resources to handle incoming requests.
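For reference, Service Auto Scaling is configured through the Application Auto Scaling API; a minimal sketch (the cluster and service names are placeholders):

    import boto3

    aas = boto3.client("application-autoscaling")

    # Register the ECS service's desired task count as a scalable target.
    aas.register_scalable_target(
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",  # hypothetical names
        ScalableDimension="ecs:service:DesiredCount",
        MinCapacity=2,
        MaxCapacity=20,
    )

    # Target-tracking policy: add tasks when average CPU runs hot.
    aas.put_scaling_policy(
        PolicyName="cpu-target-tracking",
        ServiceNamespace="ecs",
        ResourceId="service/my-cluster/my-service",
        ScalableDimension="ecs:service:DesiredCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration={
            "TargetValue": 60.0,
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ECSServiceAverageCPUUtilization"
            },
        },
    )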
This is different from scaling EC2 instances with a launch configuration. You'll need additional instances when you run out of resources to spin up new containers in your ECS cluster. By the way, you should always use a launch configuration to bring new EC2 instances into an Auto Scaling group bound to your ECS cluster, since it makes everything a lot easier. See for example this tutorial.
As for the ELB, with ECS you are actually better off using an ALB. Containerized applications map their exposed ports to random ports on the host machine, which makes registration with the load balancer a lot more complicated. However, the ALB is integrated with ECS so that, whenever you launch a new task, the ECS service running the task registers the task as a target for the ALB (see this link for a more detailed explanation). It takes care of everything on your behalf.
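The registration described above happens because the service is created with a target group attached; roughly (the ARN and names are placeholders):

    import boto3

    ecs = boto3.client("ecs")
    ecs.create_service(
        cluster="my-cluster",
        serviceName="web",
        taskDefinition="web-task:1",
        desiredCount=2,
        loadBalancers=[{
            # ECS registers each task's host:port with this ALB target group.
            "targetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                              "123456789012:targetgroup/web/0123456789abcdef",
            "containerName": "web",
            "containerPort": 80,
        }],
    )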
I've already got an RDS instance configured and running, but currently we're still running on our old web host. We'd like to decrease latency by hosting the code within AWS.
Two concerns:
1) Future scalability
2) Redundancy ... not a huge concern but AWS does occasionally go down.
Has anyone had this problem where they just need to cheaply run what is essentially a database interface, via a language such as PHP or Ruby, in two regions (with one as a failover)?
Does Amazon offer something that automatically manages resources, that's also cost effective?
Amazon's Elastic Beanstalk service supports both PHP and Ruby apps natively, and allows you to scale your app servers automatically.
In a second region, run an RDS read replica off of your primary instance (easy to set up in RDS) and have another Beanstalk environment there ready as a failover.
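A cross-region read replica can be created with a single API call; a sketch (the identifiers are placeholders, and note that cross-region replication requires the source's full ARN):

    import boto3

    # Run this against the failover region (e.g. us-west-2).
    rds = boto3.client("rds", region_name="us-west-2")
    rds.create_db_instance_read_replica(
        DBInstanceIdentifier="app-db-replica",
        # Source given as an ARN because it lives in another region.
        SourceDBInstanceIdentifier="arn:aws:rds:us-east-1:123456789012:db:app-db",
    )

If the primary region goes down, you promote the replica to a standalone instance and point the failover Beanstalk environment at it.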