We get data into Kibana, but we can't make any sense of it. We want to visualize the OS metrics in Kibana, but I can't seem to get them as percentages, and I want them to update automatically.
We are using the full ELK stack with Metricbeat, and we want the data to go through Logstash to keep it more future proof.
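For reference, a minimal Metricbeat sketch of this kind of setup, assuming the system module and a Logstash output; the Logstash host, port and collection period are placeholders. The cpu.metrics option makes the CPU metricset report percentages instead of raw ticks:

    # metricbeat.yml -- a minimal sketch; the Logstash host/port and the 10s period are assumptions
    metricbeat.modules:
      - module: system
        period: 10s                          # how often the OS metrics are collected
        metricsets: ["cpu", "memory", "load"]
        cpu.metrics: ["percentages"]         # report CPU usage as percentages instead of raw ticks

    output.logstash:
      hosts: ["logstash.example.com:5044"]   # ship through Logstash rather than straight to Elasticsearch

The pre-built Metricbeat system dashboards in Kibana already show CPU and memory usage as percentages, and the "Refresh every" setting in Kibana's time picker handles the automatic updates.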
Related
When it comes to centralized log tools, I see a lot of comparisons of ELK vs EFK vs Loki vs others.
But I have a hard time finding information about "ELG": ELK (or EFK) but with Grafana instead of Kibana.
I know Grafana can use Elasticsearch as a datasource, so it should technically work. But how good is it? Are there any drawbacks compared to using Kibana? Maybe there are more existing dashboards for Kibana than for Grafana when it comes to logs?
I am asking this because I would like to have one UI for both my metrics dashboards and my logs dashboards.
Kibana is part of the stack, so it is deeply integrated with Elasticsearch, and you get a lot of pre-built dashboards and apps inside Kibana, like SIEM and Observability. If you use Filebeat, Metricbeat or any other beat to collect data, they ship with dashboards for a lot of systems, services and devices, so it is pretty easy to visualize your data without much work; basically you just need to follow the documentation.
But if you have data that doesn't fit one of the pre-built dashboards, or you want more flexibility and to create your own dashboards, Kibana needs more work than Grafana. Kibana also only works with Elasticsearch, so if you have other datasources you would need to put that data into Elasticsearch first. On the other hand, if you want map visualizations, the Kibana Maps app is pretty good.
The Grafana plugin for Elasticsearch has some small bugs, but overall it works fine, and things will probably improve since Elastic and Grafana have partnered to improve the plugin.
So: if all your data is in Elasticsearch, use Kibana; if you have different datasources, use Grafana.
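For the "one UI" route, here is a sketch of adding Elasticsearch as a Grafana datasource via a provisioning file; the URL, index pattern and version fields are assumptions, and the exact keys vary between Grafana releases (the same datasource can also be added by hand under Configuration > Data Sources):

    # grafana/provisioning/datasources/elasticsearch.yml -- a sketch; adjust to your Grafana and Elasticsearch versions
    apiVersion: 1
    datasources:
      - name: Elasticsearch-logs
        type: elasticsearch
        access: proxy
        url: http://elasticsearch:9200      # assumed Elasticsearch endpoint
        database: "filebeat-*"              # index pattern (newer releases move this under jsonData.index)
        jsonData:
          timeField: "@timestamp"           # field used for the time axis
          esVersion: 70                     # match your Elasticsearch major version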
In my ELK setup, when Filebeat is stopped for some time, Kibana only starts showing data again from the timestamp at which Filebeat was restarted. No data is available (under the Discover tab) for the timeframe during which Filebeat was not running. Once Filebeat is started, there are initial spikes in the Discover tab, which means the backlog is being indexed under the wrong timestamp.
How can I resolve this?
This is because some component (probably Logstash) is adding @timestamp at the moment the data is sent to Elasticsearch. The solution: set @timestamp at the source, and don't overwrite it later.
You can set @timestamp with a Filebeat processor, or in Logstash, or by adding a valid @timestamp to your logs at the source and logging directly in JSON, foregoing any regex/grok or processors.
Whichever way you choose, go through the whole pipeline and make sure nothing else tampers with @timestamp.
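As an example, a sketch of the Filebeat route, assuming your application writes an ISO 8601 field called event_time (the field name and layout here are placeholders; the timestamp processor is available in recent Filebeat versions and writes the parsed value into @timestamp by default, and the Logstash date filter can do the same job):

    # filebeat.yml -- a sketch; "event_time" and its layout are assumptions about your log format
    processors:
      - timestamp:
          field: event_time                  # field parsed out of the log line
          layouts:
            - '2006-01-02T15:04:05Z07:00'    # Go-style reference layout (RFC 3339)
          test:
            - '2023-04-05T12:30:45Z'         # sample value validated against the layouts at startup
      - drop_fields:
          fields: ["event_time"]             # optional: drop the original field once @timestamp is set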
The objective is to create a dashboard in Kibana that includes visualizations based on some special queries to monitor Elasticsearch health and status, like GET /_cluster/settings?include_defaults=true&filter_path=defaults. The problem is that this query is not based on any index. How can I go about it?
Please install the free version of X-Pack; cluster monitoring is free.
I am using that already.
I have started working with ELK recently and have a question about handling multiple types of logs.
I have two sets of logs on my server that I want to analyse, one from my Android application and the other from my website. I have successfully shipped the logs from this server via Filebeat to the ELK server.
I have created two filters, one for each type of log, and have successfully imported these logs into Logstash and then Kibana.
This link helped me do the above:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
The above link shows how to use the logs from the Filebeat index in Kibana and start analysing them (I successfully did this for one type of log). But the problem I am facing is that since these two kinds of logs are very different, they need to be analysed differently. How do I do this in Kibana? Should I create multiple Filebeat indices and import them, should it be just one single index, or some other way? I am not very clear on this (I could not find much documentation), so I would appreciate some help and guidance here.
Elasticsearch organizes data by index and type. Elastic used to compare these to SQL concepts, but now offers a new explanation.
Since you say that the logs are very different, Elastic's advice is that you should use different indices.
In Kibana, a visualization is tied to an index pattern. If you have one panel from each index, you can still show them both on the same dashboard.
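A sketch of the separate-indices approach with Filebeat, where the paths and index names are placeholders; this assumes you ship directly to Elasticsearch, but if you keep Logstash in the middle (as in the DigitalOcean tutorial) the same fields.log_type value can drive a conditional in your Logstash output instead:

    # filebeat.yml -- a sketch; log paths and index names are assumptions
    filebeat.inputs:
      - type: log
        paths:
          - /var/log/android-app/*.log       # assumed location of the Android application logs
        fields:
          log_type: android                  # custom field used for routing
      - type: log
        paths:
          - /var/www/website/logs/*.log      # assumed location of the website logs
        fields:
          log_type: website

    output.elasticsearch:
      hosts: ["localhost:9200"]
      indices:                               # route each event to its own index based on the custom field
        - index: "android-logs-%{+yyyy.MM.dd}"
          when.equals:
            fields.log_type: "android"
        - index: "website-logs-%{+yyyy.MM.dd}"
          when.equals:
            fields.log_type: "website"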
I am having an issue in Kibana. It does not show any results in the Discover tab.
Please look here for more information.
Are there any Kibana alternatives that the community has used? I searched the internet and could only find the elasticsearch-head plugin. If nothing works, I will consume the Elasticsearch JSON feed using .NET and ASP.NET charts.
The only thing I know of would be Grafana, but it won't support Elasticsearch until version 2.5. So for now you're going to have to make do with Kibana or manual labor.
EDIT
Grafana 2.5 has been released and features an Elasticsearch query editor.
I assume you are talking about Kibana 4 or 5. When this happens to me, it usually means that the time filter is set to a period for which there is no data, that the documents do not have timestamps, or that the mapping of the timestamp field is not set to 'date'. My suggestion is to use Kibana 3 as your discovery panel. Here is a link to a fork that supports aggregations and Elasticsearch 2.x and 5.x.
https://github.com/immunochomik/kibana3
In Kibana 3 you can remove the time filter completely, so the time histogram will try to show you all the data in the index; and if there are no timestamps, you can still look at the data in terms panels and documents panels.
Another interesting alternative is Redash; you can build dashboards combining many data sources, including Elasticsearch. The drawback is that you need to know how to write a query.
Open source options: Grafana, Redash
If you are open to commercial solutions, Knowi might be an option for more advanced needs (multi-index/multi-database joins, AI, etc.). See their Elasticsearch playground.