We use ELK for log management and Nagios for centralized monitoring. I want to set up pattern-match alerts in Kibana and have them notify via Nagios. Can you please let me know if this is feasible?
The Cloud Foundry documentation on third-party log management services does not mention the Elastic SaaS offering:
https://docs.cloudfoundry.org/devguide/services/log-management-thirdparty-svc.html
So I was wondering whether anyone has done this, and how?
I know one way is to run a Logstash instance in CF, feed the syslog drain to it, and then ship the logs to Elastic; that receive-and-forward pattern is sketched below. But is there a direct option that skips deploying Logstash on CF?
PS. We also log using the ECS format.
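For reference, here is a minimal sketch of that receive-and-forward pattern, using Filebeat's syslog input as the drain endpoint instead of a full Logstash; the listen port and the Elastic endpoint are placeholder assumptions, not anything the Cloud Foundry docs prescribe:

    filebeat.inputs:
      - type: syslog
        protocol.tcp:
          host: "0.0.0.0:5140"   # the CF syslog drain points here (placeholder port)

    output.elasticsearch:
      hosts: ["https://my-deployment.es.example.io:9243"]   # placeholder Elastic SaaS endpoint
      username: "elastic"
      password: "changeme"

Either way, as far as I know something has to terminate the syslog drain, since Elasticsearch does not accept raw syslog directly.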
My use case is this: I have a Java application running on WebLogic, and I want to monitor this application's logs in real time. The logs are created using log4j. Is it possible to use or configure Kibana to monitor these logs in real time?
Yes, you can, but Kibana needs that log data in Elasticsearch first. You export/load the log data into Elasticsearch using either Filebeat or Logstash, then use Kibana to set up watchers, alerts, etc., to notify you about your 400s, 401s, 500s, and other error codes.
Not sure if you have an Elasticsearch cluster built already, but Kibana works against Elasticsearch, not directly against the log files on the application's machine; see the sketch below.
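A minimal Filebeat sketch of that route follows; the log path, multiline pattern, and cluster address are placeholder assumptions (log4j stack traces usually need multiline handling so each event lands in Elasticsearch as one document):

    filebeat.inputs:
      - type: log
        paths:
          - /opt/app/logs/application.log        # placeholder path to the log4j output file
        multiline.pattern: '^\d{4}-\d{2}-\d{2}'  # a new event starts with a date stamp
        multiline.negate: true
        multiline.match: after                   # fold stack-trace lines into the preceding event

    output.elasticsearch:
      hosts: ["http://localhost:9200"]           # placeholder cluster address

With data flowing, a Kibana index pattern over the Filebeat indices gives you the near-real-time view in Discover.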
I have an ELK setup on an EC2 server, with Beats such as Metricbeat, Filebeat, and Heartbeat. I have set up Elastic APM for some applications, such as Jenkins and SonarQube.
Now, in Uptime, I can see only a few monitors (SonarQube and Jenkins); the other applications are missing.
Also, data from yesterday is not available in Elasticsearch for a particular application.
The best way to troubleshoot what is going on is to check whether the events from Heartbeat are being collected. The Uptime application only displays events from Heartbeat, so that is the Beat you need to check.
First, check the connectivity of Heartbeat and the configured output:
heartbeat test output
Second, check whether events are being generated at all. You can check this by commenting out your existing output (likely Elasticsearch/Elastic Cloud) and enabling either the console output or the file output, as in the sketch below. Then start Heartbeat and check whether events are being generated. If they are, it might be something on the backend side of things; maybe Elasticsearch is rejecting the documents sent and refusing to index them.
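A minimal heartbeat.yml sketch of that debugging step (the monitor target and cluster address are placeholders):

    heartbeat.monitors:
      - type: http
        id: sonarqube-http
        schedule: '@every 10s'
        urls: ["http://sonarqube.internal:9000"]   # placeholder monitor target

    # Real output commented out while debugging:
    #output.elasticsearch:
    #  hosts: ["http://localhost:9200"]

    # Print generated events to stdout instead:
    output.console:
      pretty: true

If events show up on the console for every configured monitor, the gap is on the delivery or indexing side rather than in Heartbeat itself.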
Incidentally, Elastic is implementing a native Jenkins plugin that allows you to observe your CI pipeline using OpenTelemetry-compatible backends such as Elastic APM. You can learn more about this plugin here.
I am currently working on the ELK setup for my Kubernetes clusters. I set up logging for all the pods and fortunately, it's working fine.
Now I want to push the logs of terminated/crashed pods (the information you see with kubectl describe, but not via docker logs) to my Kibana instance as well.
I checked my server for those logs, but they don't seem to be stored anywhere on the machine (inside /var/log/); maybe it's not enabled, or I may simply not know where to find them.
If these logs were available in a log file, similar to the system log, I think it would be very easy to get them into Kibana.
It would be a great help if anyone could help me achieve this.
You can use kube-state-metrics, which exposes all pod-related metrics, including the state of terminated/crashed pods. You can then ship those metrics to Elasticsearch, for example by pointing Metricbeat's kubernetes module at kube-state-metrics, as sketched below. That creates indices for the different kinds of metrics, and you can use those indices to build your charts/graphs in the Kibana UI.
https://github.com/kubernetes/kube-state-metrics
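A minimal Metricbeat sketch of that wiring (the kube-state-metrics service address and cluster address are placeholders; the state_pod and state_container metricsets carry the pod/container status fields, including terminated and waiting states):

    metricbeat.modules:
      - module: kubernetes
        metricsets: ["state_pod", "state_container"]
        period: 10s
        hosts: ["kube-state-metrics.kube-system.svc:8080"]   # placeholder service address

    output.elasticsearch:
      hosts: ["http://localhost:9200"]                       # placeholder cluster address

Note that this gives you the state of terminated/crashed pods as metrics; the descriptive text from kubectl describe comes from Kubernetes events, which Metricbeat's kubernetes event metricset can collect separately.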
We are using ELK to monitor system performance and our application logs. If there is an error in the logs, we want to create an issue in ServiceNow from ELK. Is there a way to do this? Any pointers would help.
I don't know about ELK specifically, but perhaps you could make a SOAP/REST call to do it?
Just make sure your ELK service account has sufficient permissions, and get the WSDL by going to http://yourinstance.service-now.com/tablename.do?WSDL
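On the ELK side, recent Kibana versions also ship a native ServiceNow ITSM connector that alerting rules can call, which may spare you hand-rolled SOAP/REST calls. A minimal sketch of preconfiguring it in kibana.yml (the instance URL and credentials are placeholders):

    xpack.actions.preconfigured:
      my-servicenow:
        name: 'ServiceNow ITSM'
        actionTypeId: .servicenow
        config:
          apiUrl: https://yourinstance.service-now.com   # placeholder instance URL
        secrets:
          username: elk_service_account                  # placeholder service account
          password: changeme

An alerting rule on the error condition can then use this connector to open the ServiceNow incident.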