Send logs from AWS to Elastic Cloud (Elasticsearch)

I am using Elastic Cloud (hosted Elasticsearch) to index my app data. Now I want to start streaming logs from my AWS Lambda functions to my Elastic Cloud account. I have googled around and can see that there are a couple of ways to do this:
1. Functionbeat
2. CloudWatch -> Elasticsearch subscription filter
3. CloudWatch -> Lambda subscription filter
My questions are:
Which is the most cost-efficient and performant way to stream logs from AWS CloudWatch to Elastic Cloud?
For Functionbeat, is it necessary to first send the logs to an S3 bucket? (I am referring to this: https://www.elastic.co/guide/en/beats/functionbeat/current/configuration-functionbeat-options.html)

First question:
Since Functionbeat is deployed to Lambda in the case of AWS, options 1 and 3 cost the same. Option 1 is faster to deploy, because with option 3 you have to create the Lambda function yourself.
As for performance, it of course depends on the implementation, but I would guess there is no big difference between the two methods unless millisecond-level latency matters to you.
If you are using Elastic Cloud you can't use option 2, which works with Amazon Elasticsearch Service. These are two completely different services (see this page, I know it's a bit confusing!).
Second question:
No, you don't have to. Functionbeat reads logs directly from CloudWatch.
The S3 bucket is used to store the Functionbeat package itself before it is deployed to Lambda.

Related

Amazon CloudWatch SubscriptionFilter Elasticsearch Terraform support

I am trying to stream CloudWatch logs to Elasticsearch using the Elasticsearch subscription filter.
I want to automate this with Terraform, but I couldn't find out whether Terraform supports this resource type.
Please let me know if it is feasible.
Neither the AWS REST API nor the AWS CLI has any such thing as a subscription to Elasticsearch. Only the following subscription destinations are supported:
An Amazon Kinesis stream belonging to the same account as the subscription filter, for same-account delivery.
A logical destination that belongs to a different account, for cross-account delivery.
An Amazon Kinesis Firehose delivery stream that belongs to the same account as the subscription filter, for same-account delivery.
An AWS Lambda function that belongs to the same account as the subscription filter, for same-account delivery.
What you see in the AWS Console is a console-only shortcut for that. Basically, when you create a "subscription" to ES, the console just provisions a Lambda function and creates a subscription to it. The Lambda receives the log events and ingests them into the ES domain.
Therefore, to get logs into ES with Terraform, you have to construct such a "subscription" yourself. This is done through an actual subscription filter that targets a Lambda function. To simplify development, you can take the Lambda function that AWS creates and use that, instead of developing your own code for shipping logs to ES.
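To make that concrete, here is a rough sketch of the two API calls such a "subscription" boils down to, using the AWS SDK for JavaScript v3. The region, log group name, and function ARN below are placeholders, not values from the question. In Terraform, the same pair corresponds to the aws_lambda_permission and aws_cloudwatch_log_subscription_filter resources.

    import {
      CloudWatchLogsClient,
      PutSubscriptionFilterCommand,
    } from "@aws-sdk/client-cloudwatch-logs";
    import { LambdaClient, AddPermissionCommand } from "@aws-sdk/client-lambda";

    const REGION = "us-east-1";                                        // placeholder
    const LOG_GROUP = "/aws/lambda/my-app";                            // placeholder
    const SHIPPER_FUNCTION_ARN =
      "arn:aws:lambda:us-east-1:123456789012:function:logs-to-es";     // placeholder

    async function subscribeLogGroupToLambda(): Promise<void> {
      const lambda = new LambdaClient({ region: REGION });
      const logs = new CloudWatchLogsClient({ region: REGION });

      // Allow CloudWatch Logs to invoke the shipper function.
      await lambda.send(
        new AddPermissionCommand({
          FunctionName: SHIPPER_FUNCTION_ARN,
          StatementId: "cloudwatch-logs-invoke",
          Action: "lambda:InvokeFunction",
          Principal: "logs.amazonaws.com",
          SourceArn: `arn:aws:logs:us-east-1:123456789012:log-group:${LOG_GROUP}:*`,
        })
      );

      // Create the subscription filter that streams log events to the function.
      await logs.send(
        new PutSubscriptionFilterCommand({
          logGroupName: LOG_GROUP,
          filterName: "ship-to-elasticsearch",
          filterPattern: "", // empty pattern = forward all log events
          destinationArn: SHIPPER_FUNCTION_ARN,
        })
      );
    }

    subscribeLogGroupToLambda().catch(console.error);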

How does Functions as a Service (FaaS) hosting work under the hood?

Hypothesis
Suppose I want to roll out my own FaaS hosting, a service like Lambda, but not on Lambda.
Analogy
I have an abstract understanding of other cloud services as follows:
1. Infrastructure as a Service (IaaS): create virtual machines for tenants on your hardware.
2. Platform as a Service (PaaS): create a VM and run a script that loads the required environment.
The above could also be achieved with Docker images.
What about FaaS?
AWS uses Firecracker microVMs for Lambda functions, but what's not clear is how the VMs are started and stopped, and how they're orchestrated across multiple pieces of hardware in a multi-tenant environment. Could someone explain how the complete life cycle works?
The main features of AWS Lambda and Cloud Functions can be compared at
https://cloud.google.com/docs/compare/aws/compute#faas_comparison
I can add information about what I know, which is Google Cloud Functions.
Triggers
Cloud Functions can be triggered in two ways: by an HTTP request or by an event (see Events and Triggers). Events are things that happen in your project: a file is updated in Cloud Storage or Cloud Firestore, a Compute Engine instance (VM) is initialized, or the source code is updated in your repository.
Any of these events can be the trigger of a Cloud Function. When triggered, the function is executed in a VM that receives an HTTP request and context information to do its work.
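For the HTTP case, a minimal function looks roughly like this. This is only a sketch using the Functions Framework for Node.js; the function name helloHttp and the greeting are illustrative, not part of the question.

    import * as functions from "@google-cloud/functions-framework";

    // Register an HTTP-triggered function named "helloHttp".
    functions.http("helloHttp", (req, res) => {
      // The platform passes Express-style request and response objects.
      const name = (req.query.name as string) ?? "world";
      res.status(200).send(`Hello, ${name}!`);
    });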
Auto-scaling and machine-type
If the volume of requests arriving at a Cloud Function increases, it auto-scales. That is, instead of having one VM executing one request at a time, you will have several VMs, each serving one request at a time; on any given instance, only one request is processed at a time.
If you want more information, you can check the official documentation.

Accessing Amazon Aurora from Lambda?

I am a beginner in AWS development and I have a question regarding accessing Amazon Aurora from Lambda.
I have read that all instances of Amazon Aurora need to be created inside a VPC. However, it seems that Lambda incurs massive latency setting up an elastic network interface (ENI) every time it tries to access resources inside a VPC:
https://medium.freecodecamp.org/lambda-vpc-cold-starts-a-latency-killer-5408323278dd
Since this can increase the cold start time by around 10 seconds, is there a way to avoid this ENI setup latency while using Lambda to access Amazon RDS?
No. There is currently no "good" way to reliably prevent the cold start.
(1) Keeping the Lambda function warm can help reduce the problem, but it will still be present.
(2) The only other way would be to run your RDS instance "outside" a VPC (i.e. make it publicly accessible) and secure it using security groups. But this is a really bad idea for a lot of reasons (Lambda IP addresses change, so you would have to leave the RDS instance wide open to any attacker; it violates AWS best practices; etc.).
AWS Lambda + RDS is currently not suitable if you need responsiveness. That's why Amazon is pushing the use of DynamoDB with Lambda so much (since it is accessed over HTTPS).
TL;DR: if you need responsiveness + security, stay away from Lambda + RDS.
What you need to do is make sure your Lambda's execution role has the AWSLambdaVPCAccessExecutionRole policy attached to it.
The ENI is created on cold start. Avoid the cold start by creating another Lambda that invokes your current Lambda on a schedule to keep it warm.
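A minimal sketch of that keep-warm pattern in Node.js/TypeScript follows. The scheduled rule itself is not shown, and the early-return check assumes the default CloudWatch scheduled-event payload; adjust it to whatever ping you configure.

    import type { Handler } from "aws-lambda";

    export const handler: Handler = async (event) => {
      // Scheduled CloudWatch Events / EventBridge pings have this shape by default;
      // return early so the warm-up invocations stay cheap.
      if (event?.source === "aws.events" && event?.["detail-type"] === "Scheduled Event") {
        return { warmedUp: true };
      }

      // ...normal work here, e.g. querying Aurora over the VPC-attached ENI...
      return { statusCode: 200, body: "real work done" };
    };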

AWS Lambda vs Elastic Beanstalk

I'm new to AWS.
I am going to develop a RESTful app which will be hosted on AWS.
I decided to use:
Amazon S3 for static content
Amazon Cognito User Pool for authentication
Amazon DynamoDB as the database
I am confused about where my app itself is going to be hosted. I have two ideas for that:
AWS Lambda + API Gateway
Can I implement the entire app on it?
Elastic Beanstalk
Can I integrate all the above AWS services with it?
(Backend in .NET Core Web API 2.0)
Please guide me.
With the experience of a year and a half of working with the cloud, I can now give a proper answer to my own question.
Yes.
It is possible to use API Gateway + Lambda for the entire back end, but then you have to manage most of the app logic from the front end, and there you take a risk because the source code can be viewed by the public.
Keeping all your business logic in the client code is not good practice, and keeping all the logic in Lambda is not easy or cost effective either: when you build a real-world app you will need thousands of functions, and to do one task you may have to call many functions (each call is billed as function run time), so it can get very expensive.
The best solution is hosting the back end on Elastic Beanstalk and the front end on S3. If you have a heavy task, you can create Lambda functions for that.
Lambda is best for CPU-bound tasks, but not for holding all of the application logic.
Since you might not be interested in managing the underlying system, you should opt for AWS Lambda + API Gateway.
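For a sense of what that looks like, here is a minimal sketch of one endpoint behind API Gateway with the Lambda proxy integration, written in Node.js/TypeScript; the path, names, and response body are placeholders, and regardless of the runtime the handler just returns a statusCode/body object.

    import type { APIGatewayProxyHandler } from "aws-lambda";

    // One "microservice" endpoint, e.g. GET /items/{id} behind API Gateway.
    export const getItem: APIGatewayProxyHandler = async (event) => {
      const id = event.pathParameters?.id;
      if (!id) {
        return { statusCode: 400, body: JSON.stringify({ message: "id is required" }) };
      }

      // A real handler would read the item from DynamoDB and check the Cognito
      // claims available on event.requestContext.authorizer.
      return { statusCode: 200, body: JSON.stringify({ id, note: "placeholder" }) };
    };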

How to set up AWS Lambda with microservices in Node.js

I am looking forward to working with AWS Lambda and Node.js.
They call it serverless, so is it a better way to host our code than traditional hosting servers?
I am open to suggestions, thanks in advance!
It is called serverless because you don't manage and maintain the underlying server and the runtime.
Basically, you write your code in one of the supported languages, say Node.js, and then configure the events that will trigger your code.
For example, in the case of AWS, the events can be an API Gateway call, an SQS message, an SNS notification, etc.
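As a sketch of what handling one of those events looks like, here is an SQS-triggered handler in Node.js/TypeScript; the queue wiring (the event source mapping) is configured separately and not shown.

    import type { SQSHandler } from "aws-lambda";

    // Invoked by an SQS event source mapping; each invocation gets a batch of records.
    export const handler: SQSHandler = async (event) => {
      for (const record of event.Records) {
        // record.body is the raw message payload sent to the queue.
        const message = JSON.parse(record.body);
        console.log("processing message", message);
        // ...do the actual (stateless) work here...
      }
    };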
So it can be better depending on what you are planning on doing.
Do note that there are certain limits that AWS imposes by default on accounts for AWS Lambda.
Also, there can be a slight startup penalty for a Lambda.
A plus point of Lambda vs. hosting your code on EC2 is that with Lambda you don't get charged if your code is not used/triggered.
However, do note that for functions that have heavy usage it might be better to host your own EC2 instance.
Most importantly, a Lambda has to be stateless.
Considering all the above factors, you can decide whether AWS Lambda and a serverless architecture fit your needs.
