toggle specific plugin in FluentD (through k8s manifest file) - elasticsearch

I have an EFK stack that outputs EKS cluster logs to both Elasticsearch and S3. I wonder if there's a way to add a switch to enable/disable the S3 output, maybe using an ENV variable in the FluentD manifest file. Would appreciate help if anyone knows how to implement this feature.
P.S.: I can share files as needed.
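A minimal sketch of one way to implement such a switch (this is not from an accepted answer; the env var name, paths, bucket and directory layout below are assumptions for illustration): keep the S3 <store> block in its own file, let fluent.conf @include a conf.d directory, and have the container entrypoint copy the S3 file into conf.d only when an env var set in the DaemonSet manifest is "true".

    # Fragment of the FluentD DaemonSet manifest (hypothetical names and paths)
    containers:
      - name: fluentd
        image: your-fluentd-image:tag        # whatever image you already use
        env:
          - name: S3_OUTPUT_ENABLED          # the switch: "true" or "false"
            value: "true"
        command: ["sh", "-c"]
        args:
          - |
            # copy the optional S3 output config into the included directory
            # only when the switch is on, then start fluentd as usual
            mkdir -p /fluentd/etc/conf.d
            if [ "$S3_OUTPUT_ENABLED" = "true" ]; then
              cp /fluentd/etc/optional/s3-output.conf /fluentd/etc/conf.d/
            fi
            exec fluentd -c /fluentd/etc/fluent.conf

    # fluent.conf: single match with @type copy; the S3 store is only present
    # when s3-output.conf was copied into conf.d at startup
    <match kubernetes.**>
      @type copy
      <store>
        @type elasticsearch
        host elasticsearch
        port 9200
      </store>
      @include conf.d/*.conf
    </match>

    # /fluentd/etc/optional/s3-output.conf (hypothetical bucket/region)
    <store>
      @type s3
      s3_bucket my-log-bucket
      s3_region us-east-1
      path logs/
    </store>

If the glob in @include matches nothing, FluentD simply loads no extra store, so only the Elasticsearch output remains active.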

Related

How to configure Filebeat as an agent in a Kubernetes cluster

I am trying to add ELK to my project which is running on Kubernetes. I want logs to go through Filebeat -> Logstash and then to Elasticsearch. I prepared my filebeat.yml file, and in my company Filebeat is configured as an agent in the cluster, which I don't really know the meaning of. I want to know how to configure Filebeat in this case. Do I just add the file to the project so it will be taken into consideration once the pod starts, or how does it work?
You can configure Filebeat in a couple of ways.
1 - You can run it as a DaemonSet, meaning each node of your Kubernetes architecture will have one Filebeat pod. Usually, in this architecture, you need only one filebeat.yml configuration file, where you set the inputs, filters, outputs (output to Logstash, Elasticsearch, etc.), and so on. In this case, Filebeat will need root access inside your cluster.
2 - Using Filebeat as a sidecar with your application's k8s resource. You can configure an emptyDir volume in the Deployment/StatefulSet, share it with the Filebeat sidecar, and set Filebeat to monitor this directory (see the sketch after this list).
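A minimal sketch of the sidecar variant (the ConfigMap name, image tags, paths and Logstash host are assumptions, not taken from the question):

    # filebeat.yml shipped in a ConfigMap (hypothetical name: filebeat-sidecar-config)
    filebeat.inputs:
      - type: filestream
        id: app-logs
        paths:
          - /var/log/app/*.log       # the directory shared via emptyDir
    output.logstash:
      hosts: ["logstash:5044"]       # assumed Logstash service name and port

    # Relevant part of the Deployment pod spec
    spec:
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: filebeat-config
          configMap:
            name: filebeat-sidecar-config
      containers:
        - name: app
          image: my-app:latest                      # your application image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app               # the app writes its log files here
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:8.12.2
          args: ["-c", "/etc/filebeat.yml", "-e"]
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app
              readOnly: true                        # sidecar only reads the shared dir
            - name: filebeat-config
              mountPath: /etc/filebeat.yml
              subPath: filebeat.yml

The point of the emptyDir volume is simply that both containers in the pod see the same /var/log/app directory: the application writes files there and the Filebeat sidecar tails them.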

Cloudfoundry logs to Elastic SaaS

In the Cloud Foundry documentation, the Elastic SaaS service is not mentioned:
https://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html
So I was wondering if anyone has done it, and how?
I know one way is to use a Logstash instance in CF, feed the syslog to it, and then ship it to Elastic. But I'm wondering if there is a direct possibility that skips the Logstash deployment on CF?
PS. We also log using the ECS format.

Sending log files/data from one EC2 instance to another

So I have one EC2 instance with Logstash, Elasticsearch and Kibana installed on it, and I have another EC2 instance that's running a dummy Apache server. Now I know that I should install Filebeat on the Apache server instance to send the log files to the Logstash instance, but I'm not sure how to configure the files.
My main goal is to send the log files from one instance to the other for processing and viewing, i.e. in ES and Kibana. Any help or advice is greatly appreciated.
Thanks in advance!
Cheers!
So as you have already stated, the easiest way to send log events from one machine to an Elastic instance is to install the Filebeat agent on the machine where Apache is running.
Filebeat has its own Apache module that makes the configuration even easier! In the module you specify the paths of the desired log files.
Then you also need to configure Filebeat itself. In the filebeat.yml you need to define the Logstash destination under
output.logstash
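For example, a minimal filebeat.yml plus Apache module config could look like this (the Logstash IP/port and log paths are placeholders you'd replace with your own):

    # filebeat.yml on the Apache instance
    filebeat.config.modules:
      path: ${path.config}/modules.d/*.yml
      reload.enabled: false

    output.logstash:
      hosts: ["10.0.0.12:5044"]      # private IP and Beats port of the Logstash EC2 instance

    # modules.d/apache.yml (enabled with: filebeat modules enable apache)
    - module: apache
      access:
        enabled: true
        var.paths: ["/var/log/apache2/access.log*"]
      error:
        enabled: true
        var.paths: ["/var/log/apache2/error.log*"]

On the Logstash instance you then need a pipeline with a beats input listening on the same port (5044 by default) so it can receive the events from Filebeat.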
This configuration guide goes into more detail.
Take a look at the filebeat.yml reference for all the configuration settings.
If you are familiar with docker, there is also a guide on how to run filebeat on docker.
Have fun! :-)

Showing crashed/terminated pod logs on Kibana

I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and fortunately, it's working fine.
Now I want to push the logs of terminated/crashed pods (which we get by describing the pod, but not as docker logs) to my Kibana instance as well.
I checked on my server for those logs, but they don't seem to be stored anywhere on my machine (inside /var/log/).
Maybe it's not enabled, or I might not be aware of where to find them.
If these logs are available in a log file similar to the system log then I think it would be very easy to put them on Kibana.
It would be a great help if anyone can help me achieve this.
You need to use kube-state-metrics, by which you can get all pod-related metrics. You can configure kube-state-metrics to connect to Elasticsearch. It will create an index for the different kinds of metrics. Then you can easily use that index to display your charts/graphs in the Kibana UI.
https://github.com/kubernetes/kube-state-metrics
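One note on top of this answer: kube-state-metrics itself only exposes Prometheus-format metrics over HTTP, so a common way to get them into Elasticsearch is Metricbeat's kubernetes module pointed at the kube-state-metrics service (a sketch; the service name, port and Elasticsearch host are assumptions):

    # metricbeat.yml fragment
    metricbeat.modules:
      - module: kubernetes
        metricsets: ["state_pod", "state_container"]   # pod/container state from kube-state-metrics
        period: 10s
        hosts: ["kube-state-metrics:8080"]             # assumed in-cluster service name and port

    output.elasticsearch:
      hosts: ["https://elasticsearch:9200"]            # assumed Elasticsearch endpoint

The state_pod metricset reports the pod status/phase, which is what you would chart in Kibana to see terminated or crashed pods.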

How to change default GKE stackdriver logging to fluentd

My GKE cluster is currently on default settings and is logging to Stackdriver. However, I would like to be able to log to the Elastic Stack that I am deploying at elastic.co.
https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd
I see that I am able to customize filtering and parsing of the default fluentd DaemonSet, but how do I install the Elasticsearch output plugin so that I can stream logs to my Elasticsearch endpoint instead of Stackdriver?
The tutorial you linked to answers your question. You need to create a GKE cluster without the built-in fluentd (by passing the --no-enable-cloud-logging flag when creating the cluster) and then install a custom DaemonSet with the fluentd configuration you want to use.
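A rough sketch of what the custom DaemonSet's output config could contain (the image has to bundle fluent-plugin-elasticsearch already, e.g. one of the fluentd-kubernetes-daemonset images; the host, credentials and other values below are placeholders):

    # ConfigMap fragment mounted into the custom fluentd DaemonSet
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fluentd-config
      namespace: logging
    data:
      output.conf: |
        <match kubernetes.**>
          @type elasticsearch                      # provided by fluent-plugin-elasticsearch
          host my-deployment.es.us-central1.gcp.cloud.es.io   # your Elastic Cloud endpoint
          port 9243
          scheme https
          user elastic
          password "#{ENV['ELASTIC_PASSWORD']}"    # injected from a Secret via the DaemonSet env
          logstash_format true
        </match>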
