Collect stdout/stderr logs with a sidecar container - Elasticsearch

I use a company-provided cloud environment that doesn't allow us to run a node-level log collector and asks us to use a sidecar container to ship logs to ELK. However, our container prints all its logs to stdout/stderr, following the 12-factor app practice.
Is there any way to collect stdout/stderr logs using a sidecar container (preferably Filebeat)? I was checking the documentation, but it isn't clearly mentioned anywhere.
There's another way, in which I print all the logs to a specific directory within the app container and share that volume with the sidecar Filebeat container. However, with this approach I also have to think about log rotation, which introduces more complexity. (I probably need yet another container to aggregate and rotate the logs.)
I think the recommended approach could be:
container logs go to stdout/stderr
Filebeat is deployed as a DaemonSet
hints-based autodiscover is used (see the sketch below)
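
A minimal filebeat.yml sketch for that setup, assuming the Filebeat DaemonSet mounts /var/log/containers from the host and that an Elasticsearch endpoint is reachable at elasticsearch:9200 (both are assumptions):

```yaml
# Hints-based autodiscover: Filebeat watches the Kubernetes API and, for pods
# without explicit co.elastic.logs/* annotations, falls back to tailing that
# pod's container log files under /var/log/containers.
filebeat.autodiscover:
  providers:
    - type: kubernetes
      hints.enabled: true
      hints.default_config:
        type: container
        paths:
          - /var/log/containers/*${data.kubernetes.container.id}.log

# Assumed endpoint; point this at your ELK ingest endpoint.
output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]
```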

Consider one of the options below:
Deploy a Fluentd DaemonSet to collect the logs from all pods on each node.
OR
Deploy a Fluent Bit DaemonSet to collect the logs and forward them to a Fluentd pod.
Configure the Fluentd pod to push the logs to Elasticsearch (a config sketch follows).
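
For the second option, a sketch of the Fluentd aggregator's configuration wrapped in a ConfigMap. The name, namespace, and Elasticsearch host are assumptions, and the elasticsearch output requires the fluent-plugin-elasticsearch plugin in the Fluentd image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-aggregator-config   # hypothetical name
  namespace: logging                # hypothetical namespace
data:
  fluent.conf: |
    # Accept records forwarded by the Fluent Bit DaemonSet
    <source>
      @type forward
      port 24224
    </source>

    # Push everything to Elasticsearch (assumed service name and port)
    <match **>
      @type elasticsearch
      host elasticsearch.logging.svc
      port 9200
      logstash_format true
    </match>
```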

Related

Kubernetes: How can I save logs of the pod before termination using PreStop hook?

I want the pods running in my cluster to save their logs somewhere just before termination, so that I can access these logs later and know the termination reason.
Can this be accomplished using the PreStop hook? If yes, please guide me on how to do so.
Any other approaches are also welcome.
Use Fluentd or Fluent Bit to send logs to a log aggregation system such as Elasticsearch (EFK stack) or Splunk.
Fluentd can run as a DaemonSet on each node and send logs to EFK/Splunk.
Fluent Bit can run as a sidecar and send logs to EFK/Splunk (a pod sketch is shown below the reference link).
https://kubernetes.io/docs/concepts/cluster-administration/logging/#basic-logging-in-kubernetes
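
A sketch of the sidecar variant, assuming the application writes its log files to a directory that can be placed on a shared emptyDir; image names and paths are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-logging-sidecar
spec:
  containers:
    - name: app
      image: my-app:latest              # assumption: app writes files under /var/log/app
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
    - name: fluent-bit
      image: fluent/fluent-bit:latest   # sidecar tails the shared files and ships them
      volumeMounts:
        - name: app-logs
          mountPath: /var/log/app
          readOnly: true
  volumes:
    - name: app-logs
      emptyDir: {}                      # survives container restarts, not pod deletion
```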

Kubernetes pods writing to persistent volume, need to push the logs to Elasticsearch

I have Kubernetes pods writing logs to multiple log files on a persistent volume (an NFS drive). I need a way to push the logs from those files to Elasticsearch in real time.
I am trying to set up Filebeat as a sidecar container, but I'm not sure how it will help.
Please suggest a recommended approach, with examples.
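
A sketch of what such a sidecar's filebeat.yml could look like, assuming the NFS volume is mounted into the Filebeat container at /mnt/logs and Elasticsearch is reachable at elasticsearch:9200 (both are assumptions):

```yaml
# Filebeat sidecar: tail the application's log files on the shared NFS mount
# and push new lines to Elasticsearch as they are written.
filebeat.inputs:
  - type: filestream
    id: nfs-app-logs            # illustrative input id
    paths:
      - /mnt/logs/*.log         # assumed mount path of the persistent volume

output.elasticsearch:
  hosts: ["http://elasticsearch:9200"]   # assumed endpoint
```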

Need to ship logs to Elastic from EKS

We have an EKS cluster running, and we are looking for best practices to ship application logs from pods to Elastic.
In the EKS workshop there is an option to ship the logs to CloudWatch and then on to Elastic.
We wondered if there is an option to ship the logs directly to Elastic, or what the best practices are.
Additional requirement:
We need to determine which namespace the logs are coming from and deliver them to a dedicated index per namespace.
You can deploy the EFK stack in the Kubernetes cluster. Follow this reference: https://github.com/acehko/kubernetes-examples/tree/master/efk/production
Fluentd is deployed as a DaemonSet, so one replica runs on each node, collecting the logs from all pods and pushing them to Elasticsearch (a sketch of per-namespace index routing follows).
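
To meet the per-namespace index requirement, a sketch of the relevant Fluentd output section as a ConfigMap. It assumes the fluent-plugin-elasticsearch and fluent-plugin-kubernetes_metadata_filter plugins are installed in the Fluentd image, and the names, namespace, and Elasticsearch host are illustrative:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-es-config      # hypothetical name
  namespace: logging           # hypothetical namespace
data:
  fluent.conf: |
    # Enrich each record with pod and namespace metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>

    # Write to one index per namespace, e.g. logs-default-2024.01.01
    <match kubernetes.**>
      @type elasticsearch
      host elasticsearch.logging.svc   # assumed service
      port 9200
      logstash_format true
      logstash_prefix logs-${$.kubernetes.namespace_name}
      <buffer tag, $.kubernetes.namespace_name>
        @type memory
        flush_interval 5s
      </buffer>
    </match>
```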

Unable to view the Kubernetes logs in Kibana dashboard

I am trying to set up log monitoring of a Kubernetes cluster using EFK. I get the Kibana dashboard, but it doesn't show any logs from the Kubernetes cluster.
Here is the link which I followed for this task. By default my dashboard shows one view; after I changed the index pattern in the dashboard, it showed another (screenshots omitted).
My question is: how can I view the logs of each and every pod in the Kubernetes cluster?
Could anybody suggest how to do log monitoring of a Kubernetes cluster using EFK?
Note: in order for Fluentd to work, every Kubernetes node must be labeled with beta.kubernetes.io/fluentd-ds-ready=true, as otherwise the Fluentd DaemonSet will ignore them.
Have you made sure to address this?
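
The reason the label matters is the DaemonSet's nodeSelector. A minimal sketch of such a DaemonSet (the label is from the note above; the image, namespace, and volume layout are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      # Nodes missing this label never get a Fluentd pod scheduled,
      # so their container logs never reach Elasticsearch.
      nodeSelector:
        beta.kubernetes.io/fluentd-ds-ready: "true"
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch  # illustrative image
          volumeMounts:
            - name: varlog
              mountPath: /var/log
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
```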

Getting logs of Tomcat containers running in Kubernetes pods using Fluentd, Elasticsearch and Kibana

We are using Kubernetes, and we have multiple Tomcat/JWS containers running in multiple pods. What would be the best approach for centralized logging using Fluentd, Elasticsearch, and Kibana?
The main purpose is to get the Tomcat logs from the containers running in the pods (for example access.log and catalina.log), as well as the logs of the application deployed on Tomcat.
We also need to differentiate the logs coming from the different pods (Tomcat containers).
I followed the link below:
https://access.redhat.com/documentation/en/red-hat-enterprise-linux-atomic-host/7/getting-started-with-containers/chapter-11-using-the-atomic-rsyslog-container-image
From this I am only able to get the container logs, but not the Tomcat logs.
-Praveen
Have a look at this example:
https://github.com/kubernetes/contrib/tree/master/logging/fluentd-sidecar-es
The basic idea is to deploy an additional Fluentd container in your pod and share a volume between the containers. The application container writes the logs into the volume, and the Fluentd container mounts the same volume read-only and feeds the logs to Elasticsearch. In the default configuration the log events get a tag like "file.application.log".
We are evaluating this setup at the moment, but we have several application containers with the same logfile name, so there is still work to do.
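
A sketch of such a sidecar Fluentd configuration as a ConfigMap, assuming the Tomcat container writes its logs into a volume shared at /usr/local/tomcat/logs and that the pod name is exposed to the sidecar through the downward API (all names are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: tomcat-fluentd-config   # hypothetical name
data:
  fluent.conf: |
    # Tail the Tomcat logs from the shared volume
    <source>
      @type tail
      path /usr/local/tomcat/logs/catalina.out,/usr/local/tomcat/logs/localhost_access_log.*.txt
      pos_file /tmp/tomcat.log.pos
      tag file.tomcat
      <parse>
        @type none
      </parse>
    </source>

    # Stamp every record with the pod name so logs from different pods
    # can be told apart in Kibana (POD_NAME injected via the downward API)
    <filter file.tomcat>
      @type record_transformer
      <record>
        pod_name "#{ENV['POD_NAME']}"
      </record>
    </filter>
```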
