Terraform: cloudwatch logs to elasticsearch

I am trying to push CloudWatch logs to Elasticsearch using either a Lambda function or Amazon Kinesis. I have the log groups set up and the Elasticsearch domain running using Terraform. Please suggest how I can push the logs from the log group to Elasticsearch, and please share Terraform code for this if you have it.

This answer documents some example Terraform code for creating a Lambda and CloudWatch subscription that ships logs from a CloudWatch log group to a Sumologic HTTP collector (just a basic HTTP POST endpoint). The CloudWatch subscription invokes the Lambda every time a new batch of log entries is posted to the log group.
The cloudwatch-sumologic-lambda referred to in that Terraform code was patterned off of the Sumologic Lambda example.
I'd imagine you would want to do something similar, but rewriting the Lambda to format the HTTP request however Elasticsearch requires. I'd bet some quick googling on your part will turn up plenty of examples.
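For the subscription side of that wiring, a minimal Terraform sketch might look like the following. It assumes an aws_lambda_function.logs_to_elasticsearch and an aws_cloudwatch_log_group.app are already defined elsewhere in your config; all names here are placeholders rather than anything from the Sumologic example.

    # Allow CloudWatch Logs to invoke the (assumed) forwarding Lambda.
    resource "aws_lambda_permission" "allow_cloudwatch_logs" {
      statement_id  = "AllowExecutionFromCloudWatchLogs"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.logs_to_elasticsearch.function_name
      principal     = "logs.amazonaws.com"
      source_arn    = "${aws_cloudwatch_log_group.app.arn}:*"
    }

    # Subscribe the Lambda to the log group; every new batch of log events
    # in the group is delivered to the Lambda, which then posts it to Elasticsearch.
    resource "aws_cloudwatch_log_subscription_filter" "to_elasticsearch" {
      name            = "logs-to-elasticsearch"
      log_group_name  = aws_cloudwatch_log_group.app.name
      filter_pattern  = "" # an empty pattern forwards all log events
      destination_arn = aws_lambda_function.logs_to_elasticsearch.arn

      depends_on = [aws_lambda_permission.allow_cloudwatch_logs]
    }

The Lambda body itself is where the Elasticsearch-specific formatting (bulk API payloads, index names, signing) would live.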
As an alternative to all this Terraform config, though, you can just go to your CloudWatch console, select the log group you're interested in and select "Stream to Amazon ElasticSearch".
I think that will only work if you're using the AWS "ElasticSearch Service" offering, though - meaning that if you installed and configured Elasticsearch on some EC2 instances yourself, it probably won't work.

Related

Cloudfoundry logs to Elastic SAAS

The Cloud Foundry documentation does not mention the Elastic SaaS service:
https://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html
So I was wondering if anyone has done it, and how?
I know one way is to run a Logstash instance in CF, feed the syslog to it and then ship it to Elastic. But I'm wondering if there is a direct option that skips the Logstash deployment on CF?
PS. We also log using the ECS format.
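If whatever sits in front of your Elastic SaaS deployment can accept syslog, a user-provided service with a syslog drain is the usual way to forward app logs without running anything extra on CF. Below is a rough sketch using the community cloudfoundry Terraform provider; the cloudfoundry_user_provided_service resource comes from the cloudfoundry-community provider (not HashiCorp), the space ID and drain URL are placeholders, and the receiving endpoint still has to translate syslog into Elasticsearch documents.

    # Sketch: a user-provided service whose syslog drain points at a
    # placeholder endpoint that must accept syslog traffic.
    variable "space_id" {
      description = "ID of the Cloud Foundry space that owns the drain (placeholder)"
      type        = string
    }

    resource "cloudfoundry_user_provided_service" "elastic_drain" {
      name             = "elastic-log-drain"
      space            = var.space_id
      syslog_drain_url = "syslog-tls://logs.example.com:6514" # placeholder
    }

Apps bound to that service (for example via cf bind-service) have their logs drained to the URL.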

EKS EFK logging approach

I am trying to decide on an approach for log processing in an EKS cluster. The idea is to use EFK. We thought we could use Fluentd to push the logs to Elasticsearch, but most of the blogs use Fluentd to send the logs to CloudWatch and then a Lambda to send the CloudWatch logs to Elasticsearch. Why is this approach preferred? What could be the drawbacks of pushing logs directly to Elasticsearch?
Thanks!
I have been using EFK in EKS and sending logs directly to Elasticsearch using a dynamic index key.
My Elasticsearch cluster is also running inside Kubernetes and I am running Fluentd as a DaemonSet. I haven't found any problems with this approach yet.

OpenStack VM creation using alerts from Splunk

As per my understanding, in AWS we can combine AWS CloudWatch and AWS Elastic Beanstalk to automate VM creation. For example, we can configure CloudWatch to trigger an alert on a certain condition and, depending on that, create or alter a VM. Is there a way to do the same with OpenStack using Terraform scripts?
Currently we are creating and managing OpenStack VMs using Terraform and Ansible scripts. We have Splunk for dashboards and alerts. Is there a way to execute Terraform scripts for VMs when we get an alert from Splunk? Please correct me if my understanding is wrong.
Is there a way to execute Terraform scripts for VMs when we get an alert from Splunk?
AWX (or its Tower friend) will trivially(?) do that via /api/v2/job_templates/{id}/launch/. If there needs to be some API massaging (either to keep the credentials out of Splunk or to reshape the webhook payload), then I would guess a Lambda function could do that.
I would guess that if you are using Terraform to drive Ansible (instead of the other way around), then you could use Atlantis or TerraHub in roughly the same manner.

AWS Aurora - adding error logs to Cloudwatch? (errorlog)

I know you can export Aurora's audit logs to CloudWatch, but I don't see any way to do that for the error logs/errorlog. That would enable us to set up a CloudWatch alarm for failed scheduled events, etc. Am I just missing it?
It looks like they support it for MySQL RDS, but not Aurora.
I already have code that will query the error log files, but being able to set up an alert would make more sense (plus I wouldn't have to scan the logs every x minutes).
The AWS docs do not provide any info about this on the English page, but they do on the French page.
Please see the link here.
You can then translate it with Chrome.
Publishing Aurora MySQL logs to CloudWatch Logs using the AWS Management Console.
You can publish Aurora MySQL logs to CloudWatch Logs from the console. To publish Aurora MySQL logs from the console:
Open the Amazon RDS console at https://console.aws.amazon.com/rds/
In the navigation pane, choose Clusters.
Choose the Aurora MySQL DB cluster whose log data you want to publish.
For Actions, choose Modify cluster.
In the Log exports section, choose the logs that you want to start publishing to CloudWatch Logs.
Choose Continue, and then choose Modify cluster on the summary page.
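If the cluster is managed with Terraform rather than the console, the same setting maps (as far as I can tell) to the enabled_cloudwatch_logs_exports argument on aws_rds_cluster. A minimal sketch, with placeholder names and credentials:

    variable "db_password" {
      type      = string
      sensitive = true
    }

    resource "aws_rds_cluster" "aurora" {
      cluster_identifier = "my-aurora-cluster" # placeholder
      engine             = "aurora-mysql"
      master_username    = "admin"
      master_password    = var.db_password

      # Publish the Aurora MySQL error log to CloudWatch Logs;
      # "audit", "general" and "slowquery" can be added to the list as well.
      enabled_cloudwatch_logs_exports = ["error"]
    }

Once the error log group exists in CloudWatch Logs, a metric filter plus alarm covers the failed-scheduled-events alerting mentioned in the question.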

Send logs from multiple AWS Lambdas to S3

I have a process that includes many Lambdas called in sequence. Currently each Lambda logs to its own CloudWatch log group. I need a way to aggregate the logs from all the Lambdas into one place (S3 or CloudWatch). I tried to change the name of the CloudWatch log group in the Lambda's context, but that did not work. Can anybody suggest possible solutions?
To do what you want, you'll have to create another Lambda function that uses those particular log groups as its event source.
You can find more information about it under Supported Event Sources - CloudWatch Logs.
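In Terraform terms that fan-in is just a subscription filter per source log group, all pointing at the aggregator Lambda. A sketch, assuming an aws_lambda_function.log_aggregator defined elsewhere and with placeholder log group names:

    # Hypothetical list of the per-Lambda log groups to aggregate.
    variable "source_log_groups" {
      type    = list(string)
      default = ["/aws/lambda/step-one", "/aws/lambda/step-two"]
    }

    data "aws_caller_identity" "current" {}
    data "aws_region" "current" {}

    # Let CloudWatch Logs invoke the aggregator for each source log group.
    resource "aws_lambda_permission" "allow_logs" {
      for_each      = toset(var.source_log_groups)
      statement_id  = "AllowLogs${replace(each.value, "/", "-")}"
      action        = "lambda:InvokeFunction"
      function_name = aws_lambda_function.log_aggregator.function_name
      principal     = "logs.amazonaws.com"
      source_arn    = "arn:aws:logs:${data.aws_region.current.name}:${data.aws_caller_identity.current.account_id}:log-group:${each.value}:*"
    }

    # One subscription filter per source log group, all delivering to the aggregator.
    resource "aws_cloudwatch_log_subscription_filter" "to_aggregator" {
      for_each        = toset(var.source_log_groups)
      name            = "to-aggregator"
      log_group_name  = each.value
      filter_pattern  = "" # forward every log event
      destination_arn = aws_lambda_function.log_aggregator.arn

      depends_on = [aws_lambda_permission.allow_logs]
    }

The aggregator function can then write the combined stream wherever it needs to go (S3, a single log group, etc.).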
