ELK to monitor Kubernetes - performance

I have a Kubernetes cluster running and have set up the ELK stack on a different machine.
Now I want to ship logs from the Kubernetes cluster to ELK. How can I achieve this?
The ELK stack is outside the cluster.

Have you tried Fluentd? It is a logging agent that collects logs and can ship them to Elasticsearch.
UPDATE
I just found some examples in the kops repo. You can check them here.
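As a minimal sketch of what that Fluentd setup looks like, the fragment below tails container log files and sends them to an Elasticsearch instance outside the cluster. The host name and file paths are assumptions to adapt, and the output requires the fluent-plugin-elasticsearch plugin to be installed:

```
# Hypothetical fluentd.conf fragment
<source>
  @type tail
  path /var/log/containers/*.log        # where kubelet symlinks container logs
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  <parse>
    @type json
  </parse>
</source>

<match kubernetes.**>
  @type elasticsearch                   # needs fluent-plugin-elasticsearch
  host elk.example.com                  # assumed external ELK host
  port 9200
  logstash_format true                  # writes daily logstash-YYYY.MM.DD indices
</match>
```

Fluentd is typically run as a DaemonSet so every node ships its own containers' logs.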

You can run Filebeat to collect logs from Kubernetes.
Follow the instructions in the documentation.
After you download kubernetes.yaml, change:
- name: ELASTICSEARCH_HOST
  value: [your Elasticsearch domain]
- name: ELASTICSEARCH_PORT
  value: "9200"
- name: ELASTICSEARCH_USERNAME
  value: elastic
- name: ELASTICSEARCH_PASSWORD
  value: changeme
Pay attention! You need admin privileges to create the Filebeat ServiceAccount.
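The usual workflow, sketched below, is to fetch the manifest from the elastic/beats repository, edit the environment variables above, and apply it with cluster-admin rights. The version in the URL is an assumption; pick the branch matching your Elastic Stack version:

```shell
# Download the Filebeat DaemonSet manifest (version path is an assumption)
curl -L -O https://raw.githubusercontent.com/elastic/beats/7.9/deploy/kubernetes/filebeat-kubernetes.yaml
# Edit the ELASTICSEARCH_* env vars, then deploy
kubectl apply -f filebeat-kubernetes.yaml
# Verify one Filebeat pod is running per node
kubectl get pods -n kube-system -l k8s-app=filebeat
```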

We can use the EFK stack for Kubernetes logging and monitoring. We need a Kubernetes cluster with the following capabilities:
Ability to run privileged containers.
Helm and Tiller enabled.
StatefulSets and dynamic volume provisioning: Elasticsearch is deployed as a StatefulSet on Kubernetes. It’s best to use the latest version of Kubernetes (v1.10 as of this writing).
Please refer to https://platform9.com/blog/kubernetes-logging-and-monitoring-the-elasticsearch-fluentd-and-kibana-efk-stack-part-2-elasticsearch-configuration/ for a step-by-step guide.

You can use a logging library like Winston to ship logs to Elasticsearch with the plugins it provides.
It is very straightforward to set up.
In my Node application I used this
Winston plugin
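As a minimal sketch, the community winston-elasticsearch transport can be attached to a Winston logger alongside the console transport. The Elasticsearch URL and index prefix below are assumptions:

```javascript
// Sketch: ship Node.js application logs to Elasticsearch
// via the community winston-elasticsearch transport.
const winston = require('winston');
const { ElasticsearchTransport } = require('winston-elasticsearch');

const esTransport = new ElasticsearchTransport({
  level: 'info',
  indexPrefix: 'app-logs',                              // assumed index prefix
  clientOpts: { node: 'http://elk.example.com:9200' },  // assumed ELK endpoint
});

const logger = winston.createLogger({
  transports: [new winston.transports.Console(), esTransport],
});

// Structured metadata is stored as fields on the indexed document
logger.info('order created', { orderId: 42 });
```

This pushes logs directly from the application process, so unlike the Filebeat/Fluentd approaches it needs network access from each pod to Elasticsearch.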

Related

How to configure Filebeat as an agent in a Kubernetes cluster

I am trying to add ELK to my project, which runs on Kubernetes. I want logs to go through Filebeat -> Logstash and then to Elasticsearch. I prepared my filebeat.yml file, and in my company Filebeat is configured as an agent in the cluster, which I don't really understand. How do I configure Filebeat in this case? Do I just add the file to the project so it is taken into consideration once the pod starts, or how does it work?
You can configure Filebeat in a couple of ways.
1 - You can run it as a DaemonSet, meaning each node of your Kubernetes cluster will run one Filebeat pod. Usually, in this architecture, you'll need only one filebeat.yaml configuration file, where you set the inputs, filters, outputs (output to Logstash, Elasticsearch, etc.), and so on. In this case, Filebeat will need root access inside your cluster.
2 - You can run Filebeat as a sidecar alongside your application's Kubernetes resource. You configure an emptyDir volume in the Deployment/StatefulSet, share it with the Filebeat sidecar, and set Filebeat to monitor this directory.
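A sketch of option 2 is below: the application writes its logs into a shared emptyDir, and the Filebeat sidecar tails that directory. Image tags, mount paths, and the ConfigMap name are assumptions:

```yaml
# Sketch: app container + Filebeat sidecar sharing an emptyDir volume
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: { app: my-app }
  template:
    metadata:
      labels: { app: my-app }
    spec:
      volumes:
        - name: app-logs
          emptyDir: {}
        - name: filebeat-config
          configMap:
            name: filebeat-sidecar-config   # assumed ConfigMap holding filebeat.yml
      containers:
        - name: my-app
          image: my-app:latest              # assumed application image
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app       # the app writes its log files here
        - name: filebeat
          image: docker.elastic.co/beats/filebeat:7.9.0
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app       # Filebeat tails the same directory
            - name: filebeat-config
              mountPath: /usr/share/filebeat/filebeat.yml
              subPath: filebeat.yml
```

Unlike the DaemonSet approach, the sidecar needs no root access to the node, but you pay for one extra container per pod.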

metricbeat agent running on ELK cluster?

Does Metricbeat always need an agent running separately from the ELK cluster, or does it provide a plugin/agent/approach to run Metricbeat on the cluster side?
If I understand your question, you want to know if there is a way to monitor your cluster without installing a Beat.
You can enable monitoring in the Stack Monitoring tab of Kibana.
If you want more, Beats are standalone, pluggable components that work with Logstash or Elasticsearch.
The latest versions of the Elastic Stack (formerly known as ELK) offer more centralized configuration in Kibana, and version 7.9 introduced a unified Elastic Agent (in beta) that gathers several Beats into one and lets you manage your "fleet" of agents within Kibana.
But the information used by your Beats (CPU, RAM, logs, etc.) is not directly part of Elastic,
so you'll still have to install a daemon on your system.
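For the built-in self-monitoring mentioned above, collection can be switched on through the cluster settings API. The setting name is the documented `xpack.monitoring.collection.enabled`; the endpoint below assumes Elasticsearch is reachable on localhost:9200:

```
# Enable legacy self-monitoring collection on the cluster
curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"xpack.monitoring.collection.enabled": true}}'
```

This gives cluster-level metrics in the Stack Monitoring UI without any Beat, but host-level metrics (CPU, RAM of the machines) still require Metricbeat or the Elastic Agent.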

Need to ship logs to elastic from EKS

We have an EKS cluster running and we are looking for best practices to ship application logs from pods to Elastic.
In the EKS workshop there is an option to ship the logs to CloudWatch and then to Elastic.
I wonder whether there is an option to ship the logs directly to Elastic, and what the best practices are.
Additional requirement:
We need the logs to indicate which namespace they come from, and to be delivered to a dedicated index per namespace.
You can deploy the EFK stack in the Kubernetes cluster. Follow the reference --> https://github.com/acehko/kubernetes-examples/tree/master/efk/production
Fluentd is deployed as a DaemonSet so that one replica runs on each node, collecting the logs from all pods and pushing them to Elasticsearch.
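For the per-namespace index requirement, fluent-plugin-elasticsearch supports placeholders in the index prefix, drawing the namespace from the metadata added by the Kubernetes metadata filter. A sketch of the output section (host and record layout are assumptions):

```
# Assumes records have been enriched by fluent-plugin-kubernetes_metadata_filter,
# so each record carries a kubernetes.namespace_name field.
<match kubernetes.**>
  @type elasticsearch
  host elk.example.com                         # assumed Elasticsearch host
  port 9200
  logstash_format true
  logstash_prefix ${$.kubernetes.namespace_name}   # one index prefix per namespace
  <buffer tag, $.kubernetes.namespace_name>        # chunk by namespace so the
    @type memory                                   # placeholder can be resolved
  </buffer>
</match>
```

Each namespace then gets its own daily index, e.g. `payments-2020.09.01`, which also makes per-team retention and access control in Elasticsearch straightforward.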

How to change default GKE stackdriver logging to fluentd

My GKE cluster currently has default settings and logs to Stackdriver. However, I would like to be able to log to the Elastic Stack that I am deploying at elastic.co.
https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd
I see that I am able to customize the filtering and parsing of the default fluentd DaemonSet, but how do I install the Elasticsearch output plugin so that I can stream logs to my Elasticsearch endpoint instead of Stackdriver?
The tutorial you linked to answers your question. You need to create a GKE cluster without the built-in fluentd (by passing the --no-enable-cloud-logging flag when creating the cluster) and then install a custom daemon set with the fluentd configuration you want to use.
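The Elasticsearch output plugin has to be baked into the Fluentd image that the custom DaemonSet runs. A sketch of such an image (the base image tag is an assumption; the plugin gem is the standard fluent-plugin-elasticsearch):

```
# Dockerfile sketch: custom Fluentd image for the DaemonSet
FROM fluent/fluentd:v1.11-1
USER root
# Install the Elasticsearch output plugin
RUN gem install fluent-plugin-elasticsearch --no-document
USER fluent
```

Point the DaemonSet at this image and use `@type elasticsearch` in the `<match>` section of your Fluentd configuration instead of the Stackdriver output.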

How can I install the stackdriver elasticsearch plugin to monitor an ES instance running inside k8s v1.11 on GKE?

I'm running an elasticsearch cluster using StatefulSets on Google Container Engine (GKE) (my k8s configs are very similar to the ones here: https://github.com/pires/kubernetes-elasticsearch-cluster/tree/master/stateful)
I created the k8s cluster with --enable-stackdriver-kubernetes
Now I want to also install & use the Stackdriver elasticsearch plugin:
https://cloud.google.com/monitoring/agent/plugins/elasticsearch
Should I install the Stackdriver monitoring agent+plugin inside the ES pods? or on the nodes?
If you're using the Stackdriver logging agent to generate/export logs for pods running Elasticsearch on a Kubernetes cluster, you can have Stackdriver Logging enabled for the cluster (it is enabled by default and can be enabled/disabled through the Console); the Stackdriver logging agent will then be deployed on the cluster.
If the logging agent is running on the cluster, logs from each container are automatically gathered, formatted, and exported by the logging agent to Stackdriver Logging [1] for the deployed pods/containers, including Elasticsearch.
Kubernetes handles the monitoring agent differently from Compute Engine instances. If Stackdriver Monitoring is enabled for the cluster, pods running the Kubernetes Engine version of the Stackdriver agent are deployed; in the case of Kubernetes these come in the form of Heapster pods, as explained in more detail here [2].
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/loggin
[2] https://cloud.google.com/monitoring/kubernetes-engine/customizing
