Dynamic EC2 resourcing in declarative CloudFormation/Terraform

We are moving our infrastructure to CloudFormation since it makes it much easier to describe the infrastructure declaratively. This works fantastically well for things like security groups, routing, VPCs, and transit gateways.
However, we have two issues we are struggling with that I don't think fit the declarative, infrastructure-as-code paradigm that tools like Terraform and CloudFormation embody.
(1) We have a business requirement to run a scheduled batch at specific times of day. These runs are very computationally intensive. To save costs, we run them on an EC2 instance that is brought up at that time and torn down when the batch finishes. However, this seems to require a temporary change to the Terraform/CF files, then a change back. Is there a more native way of doing this?
(2) We store each client's firewall rules for their load balancer (ALB) and allow clients to edit them on demand. This information cannot live in the Terraform/CF files, since clients can change it at any time.
Is there a way of properly doing these things in CF/Terraform?

(1) If you have to use EC2, you could create a Lambda function that starts your EC2 instances, then create a CloudWatch Events rule that triggers the Lambda at your specified date/time. For more details see https://aws.amazon.com/premiumsupport/knowledge-center/start-stop-lambda-cloudwatch/. Once the job is done, have the EC2 instance shut itself down using the AWS SDK or AWS CLI.
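For illustration, here is a minimal sketch of such a Lambda in Python with boto3 (the environment variable name and how the instance ID gets there are assumptions you would wire up yourself):

import os
import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    # Hypothetical env var holding the batch instance's ID.
    instance_id = os.environ["BATCH_INSTANCE_ID"]
    ec2.start_instances(InstanceIds=[instance_id])
    return {"started": instance_id}

On the instance itself, the batch job's last step can call aws ec2 stop-instances --instance-ids <id>, or simply sudo shutdown -h now if the instance's shutdown behaviour is configured to stop it.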
Alternatively, you could use AWS Lambda to run the batch job itself; you only get charged while the Lambda runs. Likewise, create a CloudWatch Events rule to schedule it.
(2) You could store the firewall rules in your own DB and modify the actual ALB security group rules using the AWS SDK. I don't think it's a good idea to store these things in Terraform/CF. IMHO Terraform/CF are great for declaring infrastructure, but they aren't a good fit for resources that change dynamically, especially when changed by third parties like your clients.
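A minimal sketch of what those SDK calls might look like in Python with boto3; the security group ID, CIDR, and port are placeholders for values you would read from your own DB:

import boto3

ec2 = boto3.client("ec2")

def allow_client_cidr(sg_id, cidr, port=443):
    # Add an ingress rule to the ALB's security group for a client CIDR.
    ec2.authorize_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        }],
    )

def revoke_client_cidr(sg_id, cidr, port=443):
    # Remove the matching rule when the client deletes it.
    ec2.revoke_security_group_ingress(
        GroupId=sg_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "IpRanges": [{"CidrIp": cidr}],
        }],
    )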

Related

Databricks or AWS Lambda for low throughput event driven architecture

I am looking to set up an event-driven architecture to process messages from SQS and load them into AWS S3. The events will be low volume, and I was looking at either Databricks or AWS Lambda to process these messages, as those are the two tools we have already procured.
I wanted to understand which one would be best to use. I'm struggling to differentiate them for this task, as the throughput is only up to 1000 messages per day and unlikely to go higher at the moment, so both are capable.
I just wanted to see what other people would consider, and what they see as the differentiators between the two products, so I can make sure this is as future-proofed as it can be.
We have used Lambda more where I work, and it may help to keep things consistent, as we have more AWS skills in house; but we are looking to build out Databricks capability, and I do personally find it easier to use.
If it were big data, the decision would have been easier.
Thanks
AWS Lambda seems to be a much better choice in this case. Following are some benefits you will get with Lambda as compared to Databricks.
Pros:
Free of cost: The AWS Lambda free tier covers 1 million requests per month and 400,000 GB-seconds of compute time per month, which means your request rate of 1000/day will easily be covered.
Very simple setup: The Lambda function implementation will be very straightforward. Connect the SQS queue to your Lambda function using the AWS Console or the AWS CLI. The Lambda function code will be just a couple of lines: it receives the message from the SQS queue and writes it to S3 (see the sketch after this list).
Logging and monitoring: You won't need any separate setup to track performance metrics (how many messages Lambda processed, how many succeeded, how long they took); these are generated automatically by AWS CloudWatch. You also get a built-in retry mechanism: just specify the retry policy and AWS Lambda will take care of the rest.
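As a rough illustration of how small that function can be, here is a sketch in Python with boto3, assuming an SQS trigger and a hypothetical bucket name:

import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-landing-bucket"  # assumption: replace with your bucket

def handler(event, context):
    # With an SQS trigger, messages arrive batched under event["Records"].
    for record in event["Records"]:
        key = "messages/{}.json".format(uuid.uuid4())
        s3.put_object(Bucket=BUCKET, Key=key, Body=record["body"].encode("utf-8"))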
Cons:
One drawback of this approach is that each invocation of Lambda will write to a separate file in S3, because S3 doesn't provide APIs to append to existing objects. So you will get 1000 files in S3 per day. Maybe you are fine with this (it depends on what you want to do with the data in S3). If not, you will either need a separate job to merge the files periodically, or have the Lambda download the existing file from S3, append to it, and upload it back, which makes the Lambda a bit more complex.
Databricks, on the other hand, is built for a different kind of use case: loading large datasets from Amazon S3 and performing analytics, SQL-like queries, building ML models, etc. It won't be a good fit for this use case.

Fargate vs Lambda, when to use which?

I'm pretty new to the whole Serverless landscape, and am trying to wrap my head around when to use Fargate vs Lambda.
I am aware that Fargate is a serverless subset of ECS, and Lambda is serverless as well but driven by events. But I'd like to be able to explain the two paradigms in simple terms to other folks that are familiar with containers but not that much with AWS and serverless.
Currently we have a couple of physical servers in charge of receiving text files, parsing them out, and populating several db tables with the results. Based on my understanding, I think this would be a use case better suited for Lambda because the process that parses out the text files is triggered by a schedule, is not long running, and ramps down when not in use.
However, if we were to port over one of our servers that receive API calls, we would probably want to use Fargate because we would always need at least one instance of the image up and running.
In terms of containers, and in very general terms would it be safe to say that if the container is designed to do:
docker run <some_input>
Then it is a job for Lambda.
But if the container is designed to do something like:
docker run --expose 80
Then it is a job for Fargate.
Is this a good analogy?
That's the start of a good analogy. However, Lambda also has limitations in terms of available CPU and RAM, and a maximum run time of 15 minutes per invocation. So anything that needs more resources, or needs to run for longer than 15 minutes, would be a better fit for Fargate.
Also, I'm not sure why you say something is a better fit for Fargate because you "always need at least one instance running". Lambda + API Gateway is a great fit for API calls: API Gateway is always ready to receive the API call, and it will then invoke a Lambda function to process it (if the response isn't already cached).
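For example, a Lambda behind API Gateway's proxy integration is just a function that returns a status code and a body; a minimal sketch in Python:

import json

def handler(event, context):
    # API Gateway (proxy integration) passes query parameters in the event.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": "hello " + name}),
    }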
It is important to note that with Lambda you don't need to build, secure, or maintain a container; you just worry about the code. As mentioned already, Lambda has a maximum run time limit and a 3 GB memory limit (CPU increases proportionally with memory). Also, if it is used sporadically, it may need to be pre-warmed (called on a schedule) for extra performance.
Fargate manages Docker containers, which you need to define, maintain, and secure. If you need more control over the environment where your code runs, you could use a container (or a server), but that again comes with the management burden. You also get more options for memory/CPU size and for how long a task can run.
Even for an API server, as you mentioned, you could put API Gateway in front and call Lambda.
As Mark has already mentioned, you can use Lambda + API Gateway to expose your Lambda function as an API.
But Lambda has significant limitations on function execution: there are restrictions on the supported programming languages, memory consumption, and execution time (recently increased to 15 minutes from the earlier 5). This is where AWS Fargate can help, by combining the benefits of the container world and the serverless (FaaS) world. Here you worry only about the container (its CPU and memory requirements, IAM policies, ...) and leave the rest to Amazon ECS by choosing the Fargate launch type. ECS will choose the right instance type, manage your cluster and its auto scaling, and keep utilization optimal.
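For instance, running a one-off task with the Fargate launch type via boto3 looks roughly like this (the cluster, task definition, and subnet are hypothetical names):

import boto3

ecs = boto3.client("ecs")

response = ecs.run_task(
    cluster="my-cluster",            # assumption
    taskDefinition="my-task:1",      # assumption
    launchType="FARGATE",
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],  # assumption
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])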
This is the right analogy, but it is not by itself enough to explain the two paradigms.
In general, Lambda is more suitable for serverless applications. Its nature is function-as-a-service (FaaS): it does simple tasks, and that's all. Don't expect too much more.
It should be considered the first option for a serverless module, but it has more limitations and restrictions. Module architecture is shaped by functional and non-functional requirements, the surrounding infrastructure, and many other factors.
To make a decision, you should at minimum review restrictions such as:
Portability
Environment control
Trigger type
Response time
Response size
Process time
Memory usage
These are the main factors, but the list doesn't cover all the factors and restrictions to weigh between these two serverless technologies.
To learn more, I recommend this article: https://greenm.io/aws-lambda-or-aws-fargate/

Oracle Monitoring on AWS: EM Express vs Cloud Control

I am looking for general advice from anyone who has experience monitoring Oracle RDS databases in AWS. The system that I am working with will involve several enterprise Oracle RDS databases (on the order of a few dozen) in AWS. My organization is considering two options for monitoring:
Setting up Cloud Control in AWS, by housing the OMS and the repository database on an EC2 instance and enabling the OEM_AGENT on our RDS instances.
Relying entirely on EM Express/CloudWatch and any other third party software that we can use without the overhead of Cloud Control.
The concern with option 1 is that it undermines our reasons for moving to RDS, namely, to remove some of the administrative overhead of maintaining traditional on-premises Oracle databases. The OEM repository database cannot be housed in RDS as the OMS requires SYS-level access to the repository and RDS does not allow for this. As a result, having Cloud Control would require a lot of the kind of maintenance we were hoping to move away from.
The problem with option 2 is mainly the lack of metric alerting. CloudWatch/Enhanced Monitoring provide some basic metrics for alerts, but more specific metrics and alerts, such as those on alert log errors, tablespaces, long-running queries, archive area used, etc., are lacking. We do not mind the lack of centralization, as we will simply create an internal page with links to all of the different databases, and EM Express gives us what we need from a performance monitoring standpoint. The only concern really is the lack of metrics alerting. If there is not some other way of doing this, we may also simply write our own PL/SQL scripts to trigger the alerts.
However, I am curious to know how others solved this problem or even just generally, what kinds of AWS-based Oracle monitoring systems have been set up and how they work.
This is a problem almost all enterprises moving to the cloud face today: companies move to the cloud to shed some of their admin tasks, and then discover they can't do all the customization they had on-prem.
So, here is how you can make option 2 better, especially to address your concern:
The only concern really is the lack of metrics alerting
RDS events are a good way of monitoring. You can subscribe to the events and get notified in multiple ways, such as a group email, a Slack channel, or a third-party monitoring tool like PagerDuty.
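For example, subscribing a pre-existing SNS topic to RDS instance events with boto3 might look like this; the subscription name, topic ARN, and event categories are placeholders:

import boto3

rds = boto3.client("rds")

rds.create_event_subscription(
    SubscriptionName="oracle-rds-alerts",                        # assumption
    SnsTopicArn="arn:aws:sns:us-east-1:123456789012:db-alerts",  # placeholder
    SourceType="db-instance",
    EventCategories=["availability", "failure", "low storage"],
    Enabled=True,
)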
Use RDS events' integration with Lambda. I strongly suggest having a look at Lambda: apart from subscribing to the events, you can also trigger a Lambda function to take action on certain events. We use Lambda to work around the slave-skip error in MySQL.
Another use case for Lambda is as an alternative to cron jobs: things like checking disk space every day, or making sure incremental backups were taken overnight (see the sketch below).
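A minimal sketch of such a scheduled check in Python with boto3, assuming a hypothetical DB instance identifier and threshold:

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    # Pull the last hour's average free storage for one RDS instance.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="FreeStorageSpace",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-oracle-db"}],  # assumption
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=3600,
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    if points and points[0]["Average"] < 5 * 1024**3:  # under ~5 GiB free
        print("Low storage - alert here (SNS, Slack, PagerDuty, ...)")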
Let me know if you have specific questions on how to implement these options; I will be glad to add more information.

CloudFormation extras when compared with Chef

What can I do with CloudFormation that I cannot do with Chef? It looks like Chef supports spawning nodes in EC2, and I can read back the information about the spawned nodes (IP, ...), so what would I still need CloudFormation for?
Very little, but what it does makes it worth using.
The primary advantage of CloudFormation is that Amazon ties created resources together. In other words, if your application comprises a DB, four web servers, an autoscaling group, a launch configuration, ingress rules, some VPC subnets, an internet gateway or two, and a VPN connection, you can manage them in a single place, as a CloudFormation stack. Want to shoot them? That's easy. Kill the stack and every resource dies with it.
Sure, you could technically do this with Chef and the Amazon API. CloudFormation is little more than a way of generating extremely sophisticated sets of AWS API calls + Magic (tm), so you could always roll your own. Netflix sort of did that with their open-source tool, Asgard, and RightScale and other services are more or less that. If those tools don't meet your needs, or if you can't afford them, CloudFormation is a nice supplement to AWS deployments. In fact, you don't even have to choose: it's quite simple to launch chef-solo from CloudFormation, which lets you leverage the advantages of both.

Basic AWS questions

I'm a newbie on AWS, and it has so many products (EC2, Load Balancer, EBS, S3, SimpleDB, etc.) and so many docs that I can't figure out where to start.
My goal is to be ready for scalability.
Suppose I want to set up a simple webserver which accesses a database in MongoLab. I suppose I need one EC2 instance to run it. At this point, do I need anything more (EBS, S3, etc.)?
At some point my app will have enough traffic that I must scale it. I was thinking of starting a new copy (instance) of my EC2 machine. But then it will have another IP. So how is traffic distributed between both EC2 instances? Is that done automatically? Must I hire a Load Balancer service to distribute the traffic? Will I then have to pay for 2 EC2 instances and 1 LB? At this point, do I need anything more (e.g. an Elastic IP)?
Welcome to the club, Sony Santos.
AWS is a very powerful architecture, but with this power comes responsibility. I, and presumably many others, have learned the hard way building applications using AWS's services.
You ask, where do I start? This is actually a very good question, but you probably won't like my answer: you need to read and research all the technologies offered by Amazon, and even other providers such as Rackspace, GoGrid, Google's Cloud, and Azure. Amazon is not easy to get going with, but it's not really meant to be; its focus is on being very customizable and having a very extensive API. But let's get back to your question.
To run a simple webserver you need to start an EC2 instance. This instance by default runs on a disk drive called EBS. Essentially an EBS drive is a normal hard drive, except that you can do lots of other cool stuff with it, like detach it from one server and move it to another. S3 is really more of a file storage system; it's more useful if you have a bunch of images, or if you want to store a lot of backups of your databases, but it's not a requirement for a simple webserver. Just running an EC2 instance is all you need; everything else happens behind the scenes.
If your app reaches a lot of traffic you have two options. You can scale your machine up by shutting it off and starting it with a larger instance type. Generally speaking this is the easiest thing to do, but you'll get to a point where you either cannot handle all the traffic with one instance even at the larger size and decide you need two, OR you'll want a more fault-tolerant application that will stay online in the event of a failure or an update.
If you create a second instance you will need some form of load balancing. I recommend using Amazon's Elastic Load Balancer, as it's easy to configure and its integration with the cloud is better than Round Robin DNS or an application like HAProxy. Elastic Load Balancers are not expensive; I believe they cost around $18/month plus the data that passes through the load balancer.
But no, you don't need anything else to scale up your site. Two EC2 instances and an ELB will do the trick.
Additional questions you didn't ask but probably should have.
How often does an EC2 instance experience hardware failure and crash my server? What can I do if this happens?
It happens frequently, usually in batches. Sometimes I go months without any problems, then a few servers will crash at a time. But it's definitely something you should plan for; I didn't in the beginning, and I paid for it. Make sure you create scripts, have backups, and have a backup plan ready in case your server fails. Be OK with it being down, or have a load-balanced solution from day one.
What's the hardest part about scalability?
Testing, testing, testing. Don't ever assume anything. Also, be prepared for sudden spikes in your traffic. You have to be prepared for anything: if your page goes from 1 to 1000 people overnight, are you prepared to handle it? Have you tested what you "think" will happen?
Best of luck and have fun... I know I have :)
