How to use the Kubernetes client library to watch all events

Hi guys,
I want to watch all Kubernetes events, and I found the source code here: https://github.com/kubernetes/client-go/blob/master/informers/events/v1beta1/event.go
However, I cannot find any examples of how to use these functions.
Can anyone help me? Thanks a lot!

You can collect the event logs with kubectl or the REST API [2] as JSON, then send the logs to fluentd for centralized monitoring in something like Elasticsearch.
[0] is a good sample; it is for OpenShift, but if you replace the oc command with the kubectl command, it is the same for Kubernetes (OpenShift is enterprise Kubernetes).
[1] shows how to implement the fluentd - Elasticsearch stack.
I hope this helps you.
[0] https://docs.openshift.com/container-platform/3.9/security/monitoring.html#security-monitoring-events
[1] https://docs.fluentd.org/v0.12/articles/recipe-json-to-elasticsearch
[2] https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.10/#list-all-namespaces-292
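For example, to dump every event in every namespace as JSON (add --watch to keep streaming new ones as they arrive):

kubectl get events --all-namespaces -o json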

One or several of these could help:
"watches" for (quote) "...efficient change notifications on resources" - see Kubernetes API Concepts as well as the API Reference for a particular version. Example: GET /api/v1/namespaces/test/pods?watch=1&resourceVersion=10245
Event Read Operations.
kubectl get allows you to specify the -w or --watch flag to start watching updates to a particular object.
I believe the events are for a particular resource or collection of resources, not for all resources.
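If you want to do this from Go with client-go, here is a minimal sketch that watches events in all namespaces through a shared informer. It uses the core/v1 events informer; the v1beta1 informer in the file you linked is wired up the same way. The kubeconfig handling is simplified and error handling is minimal:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (use rest.InClusterConfig inside a pod).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// A factory created without a namespace option watches all namespaces.
	factory := informers.NewSharedInformerFactory(clientset, 0)
	eventInformer := factory.Core().V1().Events().Informer()

	// Print every event the informer sees; Update/Delete hooks work the same way.
	eventInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			ev := obj.(*corev1.Event)
			fmt.Printf("%s %s/%s: %s\n", ev.Reason, ev.Namespace, ev.InvolvedObject.Name, ev.Message)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // block forever
}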

Related

How can we get the nginx access log in Laravel

As the title says, I need to get data from the nginx access log so I can process it and store it in a DB. Does anyone have any ideas about this? Thank you for reading this post.
You should not store nginx logs in the DB and try to read them through Laravel; it will very quickly cause you performance and storage issues, especially in production. Another issue: if you have several servers, how would you aggregate all the logs?
Common practice is to use NoSQL for such tasks. You can set up another dedicated server to which you export all your logs and analyze them there. You install an exporter on every one of your servers, point it at your log file, and it ships the logs to a central logs server. You can set this up yourself using something like the ELK stack; with ELK you can use Filebeat and Logstash for this.
Better would be to use one of the hosted services out there, such as GCP Logging, Splunk, etc. You have to pay for them, but they offer a lot of benefits. Splunk provides you with an exporter; with GCP you could use fluentd. If you are using containers, you can also set up a fluentd container and shared volumes to export the logs.
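If you just want to see the shape of the idea before reaching for Filebeat or fluentd, here is a rough Go sketch that reads an nginx access log and indexes each line into Elasticsearch over its plain REST API. The log path, index name, and Elasticsearch URL are placeholders, and a real shipper would also handle rotation, retries, and batching (which is exactly what Filebeat does for you):

package main

import (
	"bufio"
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"time"
)

func main() {
	// Placeholder path; point this at your real access log.
	f, err := os.Open("/var/log/nginx/access.log")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		doc, err := json.Marshal(map[string]string{
			"message":    scanner.Text(),
			"@timestamp": time.Now().UTC().Format(time.RFC3339),
		})
		if err != nil {
			continue
		}
		// One document per log line, into a hypothetical "nginx-access" index.
		resp, err := http.Post("http://localhost:9200/nginx-access/_doc",
			"application/json", bytes.NewReader(doc))
		if err != nil {
			log.Println(err)
			continue
		}
		resp.Body.Close()
	}
}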

Elastic Uptime monitors using Heartbeat - few monitors are missing in Kibana

I have an ELK setup on an EC2 server, with Beats such as Metricbeat, Filebeat, and Heartbeat.
I have set up Elastic APM for some applications, like Jenkins and SonarQube.
In Uptime I can see only a few monitors, such as SonarQube and Jenkins;
the other applications are missing.
Also, when I look for yesterday's data, it is not available in Elasticsearch for a particular application.
The best way to troubleshoot what is going on is to check whether the events from Heartbeat are being collected. The Uptime application only displays events from Heartbeat, so that is the Beat you need to check.
First, check the connectivity of Heartbeat and the configured output:
heartbeat test output
Secondly, check whether the events are being generated. You can do this by commenting out your existing output (likely Elasticsearch/Elastic Cloud) and enabling either the console output or the file output. Then start Heartbeat and check whether events are being generated. If they are, then it might be something on the backend side of things; maybe Elasticsearch is rejecting the documents and refusing to index them.
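For example, in heartbeat.yml you can temporarily swap the outputs like this (a minimal sketch; the hosts value is a placeholder):

#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.console:
  pretty: true

Restart Heartbeat, and if the monitors are running you should see the events printed on stdout.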
Apropos, Elastic is implementing a native Jenkins plugin that allows you to observe your CI pipeline using OpenTelemetry compatible backends such as Elastic APM. You can learn more about this plugin here.

Kubernetes event logs to Elasticsearch

I'm trying to forward Kubernetes event logs to Elasticsearch using fluentd. I currently use fluent/fluentd-kubernetes-daemonset:v1.10.1-debian-elasticsearch7-1.0 as the container image to forward my application logs to the Elasticsearch cluster. I've searched enough, and my problem is that this image doesn't have enough documentation on how to accomplish this task (i.e., forwarding Kubernetes event-related logs).
I've found this plugin from Splunk which has the desired output, but it adds overhead:
add the above plugin's gem to bundler,
install essential tools like make etc.,
install the plugin.
Sure, I can do the above steps using an init container, but these operations add ~200 MB of disk space. I'd like to know if this can be accomplished with a smaller footprint or in another way.
Any help is appreciated.
Thanks.
You can try this: https://github.com/opsgenie/kubernetes-event-exporter
It is able to export Kube events to Elasticsearch.
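To give an idea of the shape of its configuration, a minimal setup routing everything to Elasticsearch looks roughly like the sketch below; the hosts value and index name are placeholders, and you should check the project README for the current schema:

route:
  routes:
    - match:
        - receiver: es
receivers:
  - name: es
    elasticsearch:
      hosts:
        - http://elasticsearch:9200
      index: kube-events

It runs as a single small Deployment inside the cluster, so the footprint is much smaller than baking extra gems into the fluentd image.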

Showing crashed/terminated pod logs on Kibana

I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and fortunately, it's working fine.
Now I want to push the logs of all terminated/crashed pods (which we get by describing the pod, but not via docker logs) to my Kibana instance as well.
I checked my server for those logs, but they don't seem to be stored anywhere on the machine (inside /var/log/);
maybe it's not enabled, or I might not be aware of where to find them.
If these logs were available in a log file, similar to the system log, then I think it would be very easy to put them on Kibana.
It would be great if anyone could help me achieve this.
You can use kube-state-metrics, which gives you all pod-related metrics. You then need something to ship those metrics into Elasticsearch, since kube-state-metrics itself only exposes them for scraping (see the sketch below for one way to do it). That will create indices for the different kinds of metrics, and you can then easily use those indices to display your charts/graphs in the Kibana UI.
https://github.com/kubernetes/kube-state-metrics
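One common way to scrape kube-state-metrics into Elasticsearch is Metricbeat's kubernetes module pointed at the kube-state-metrics service. A minimal sketch of the module config; the host and the chosen metricsets are assumptions to adapt:

- module: kubernetes
  metricsets:
    - state_pod
    - state_container
  period: 10s
  hosts: ["kube-state-metrics:8080"]

Note that this gives you pod state, restart counts, and termination reasons as metrics, not the actual log lines of the crashed containers.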

Sensu AWS plugin to get EC2 metrics for instances under a load balancer

I have been trying to write an AWS Sensu plugin that gets the instance IDs of all the healthy instances under a load balancer, then gets the stats for each of the instances, like CPU utilization, network in, network out, etc., and generates graphs using Graphite and Grafana.
I searched the open-source plugins in the Sensu community but could not find any. Is it possible to write a script or plugin for this? Or has anyone done it before?
Kindly help me out.
I don't believe a Sensu-specific plugin exists for this. However, since Sensu can run any Nagios plugin, you could use one of those: this one looks like it would get basic information on how many hosts are healthy. You could also write your own plugin using your language of choice (check out the available SDKs) to get more detailed metrics for each of the instances; a rough sketch follows below.
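As a starting point, here is a rough Go sketch (aws-sdk-go v1) that lists the InService instances behind a classic ELB and prints their average CPU utilization in Graphite's plaintext format, which is what a Sensu metric check is expected to emit. The region and load balancer name are placeholders, and repeating the CloudWatch call with MetricName NetworkIn or NetworkOut covers the other stats:

package main

import (
	"fmt"
	"log"
	"time"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/cloudwatch"
	"github.com/aws/aws-sdk-go/service/elb"
)

func main() {
	sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
	elbClient := elb.New(sess)
	cw := cloudwatch.New(sess)

	// Healthy (InService) instances registered with the load balancer.
	health, err := elbClient.DescribeInstanceHealth(&elb.DescribeInstanceHealthInput{
		LoadBalancerName: aws.String("my-load-balancer"), // placeholder
	})
	if err != nil {
		log.Fatal(err)
	}

	for _, s := range health.InstanceStates {
		if aws.StringValue(s.State) != "InService" {
			continue
		}
		id := aws.StringValue(s.InstanceId)

		// Average CPU over the last 10 minutes from CloudWatch.
		stats, err := cw.GetMetricStatistics(&cloudwatch.GetMetricStatisticsInput{
			Namespace:  aws.String("AWS/EC2"),
			MetricName: aws.String("CPUUtilization"),
			Dimensions: []*cloudwatch.Dimension{
				{Name: aws.String("InstanceId"), Value: aws.String(id)},
			},
			StartTime:  aws.Time(time.Now().Add(-10 * time.Minute)),
			EndTime:    aws.Time(time.Now()),
			Period:     aws.Int64(600),
			Statistics: []*string{aws.String("Average")},
		})
		if err != nil || len(stats.Datapoints) == 0 {
			continue
		}
		// Graphite plaintext: <metric.path> <value> <timestamp>
		fmt.Printf("aws.ec2.%s.cpu %.2f %d\n",
			id, aws.Float64Value(stats.Datapoints[0].Average), time.Now().Unix())
	}
}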
I wrote a plugin to do the same. It used to work fine back then, and I have been testing it on a newer version of the API. Let me know if you face any problems; I will help fix them.
