When it comes to centralized logging tools, I see a lot of comparisons of ELK vs. EFK vs. Loki vs. others.
But I have a hard time finding information about "ELG": ELK (or EFK) but with Grafana instead of Kibana.
I know Grafana can use Elasticsearch as a data source, so it should technically work. But how well does it work? Are there any drawbacks compared to using Kibana? Maybe there are more existing dashboards for Kibana than for Grafana when it comes to logs?
I am asking because I would like a single UI for both my metrics dashboards and my logs dashboards.
Kibana is part of the stack, so it is deeply integrated with Elasticsearch, and you get a lot of pre-built dashboards and apps inside Kibana, like SIEM and Observability. If you use Filebeat, Metricbeat, or any other Beat to collect data, you get pre-built dashboards for many systems, services, and devices, so it is pretty easy to visualize your data without much work; basically, you just need to follow the documentation.
But if you have data that doesn't fit one of the pre-built dashboards, or you want more flexibility and to create your own dashboards, Kibana needs more work than Grafana. Kibana also only works with Elasticsearch, so if you have other data sources, you would need to put that data into Elasticsearch first. Also, if you want map visualizations, Kibana's Maps app is pretty good.
The Grafana plugin for Elasticsearch has some small bugs, but overall it works fine, and things will probably improve since Elastic and Grafana have partnered to improve the plugin.
So, if all your data is in Elasticsearch, use Kibana; if you have different data sources, use Grafana.
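For reference, pointing Grafana at Elasticsearch is just a data source definition. A minimal provisioning sketch, assuming a recent Grafana version; the URL and index pattern are placeholders:

```yaml
# Hypothetical Grafana provisioning file, e.g. provisioning/datasources/elasticsearch.yaml.
# The node URL and index pattern are assumptions; adjust to your cluster.
apiVersion: 1
datasources:
  - name: Elasticsearch-logs
    type: elasticsearch
    access: proxy
    url: http://elasticsearch:9200
    jsonData:
      index: "filebeat-*"
      timeField: "@timestamp"
```

The same data source can then back both log panels and metric panels, which is the single-UI setup the question asks about.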
We get data into Kibana, but we can't make any sense of it. We want to visualize OS metrics in Kibana, but we don't seem to get them as percentages, and we want them to update automatically.
We are using the full ELK stack with Metricbeat, and we want the data to go through Logstash to keep things more future-proof.
I have a ton of services: Node(s), MySQL(s), Redis(s), Elastic(s)...
I want to monitor how they connect to each other: connection rate, number of alive connections... (e.g., Node1 creates 30 connections per second to Node2/MySQL/Redis), like the HAProxy stats page in the image attached below.
Currently I have two options:
HAProxy (proxy): I want to use a single HAProxy service to achieve this, but it seems very hard to use ACLs to detect which connections need to be forwarded to which service.
ELK (log center): I would need to create log files on each service (Node, MySQL, Redis...) and then show them in the log center. That looks like a ton of work without a built-in feature like the HAProxy stats page.
How should I do this? Is a log center a good fit in this case?
The problem
I think your problem is not collecting and pipelining the statistics into Elasticsearch, but rather the ton of work needed to extract metrics from your services, since most of them do not expose metrics as files/logs.
You'd need to export them with some custom script, log them, capture them with Filebeat, and stream them to Logstash for text processing and metric extraction so they are indexed in a way that supports some sort of analytics, before finally sending them to Elasticsearch.
My take on the answer
At least for the three services you've referenced, there are Prometheus exporters readily available, and you can find them here. The exporters are simple processes that query your services' native statistics APIs and expose a Prometheus metrics endpoint for Prometheus to scrape (poll).
After you have Prometheus scraping the metrics, you can display them in dashboards via Grafana (the de facto visualization layer for Prometheus), or bulk-export your metrics wherever you want (Elasticsearch, etc.) for visualization and exploration.
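To make that concrete, here is a minimal scrape-configuration sketch, assuming each exporter runs on its documented default port; the hostnames are placeholders:

```yaml
# Hypothetical prometheus.yml snippet; hostnames are placeholders.
scrape_configs:
  - job_name: mysql
    static_configs:
      - targets: ["mysql-host:9104"]    # mysqld_exporter default port
  - job_name: redis
    static_configs:
      - targets: ["redis-host:9121"]    # redis_exporter default port
  - job_name: haproxy
    static_configs:
      - targets: ["haproxy-host:9101"]  # haproxy_exporter default port
```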
Conclusion
The benefits of this approach:
- Prometheus can auto-discover new nodes you add to your networks
- Readily available exporters for HAProxy, Redis, and MySQL
- No code needed; each exporter requires only minimal configuration specific to the monitored technology. Exporters can easily be containerized and deployed if your environment is container-oriented; otherwise, you just need to run each exporter on the correct machine
- Prometheus is very, very easy to deploy
Use ELK - the Elasticsearch, Logstash, and Kibana stack - with Filebeat.
Filebeat - ships the log file content to Logstash.
Logstash - scans, filters, and forwards the needed content to Elasticsearch.
Elasticsearch - works as the database, storing the content from Logstash as JSON documents.
Kibana - lets you search for the required info; you can also plot graphs and other visuals from the relevant data.
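As a rough sketch of the first hop in that pipeline, assuming a hypothetical log path and hostname:

```yaml
# Hypothetical filebeat.yml sketch: tail a log file and ship it to Logstash.
# The path and hostname are assumptions for illustration.
filebeat.inputs:
  - type: log
    paths:
      - /var/log/myapp/*.log
output.logstash:
  hosts: ["logstash-host:5044"]
```

Logstash would listen on the matching Beats port (5044 by convention), apply its filters, and forward the result to Elasticsearch.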
Can Kibana's console (in Dev Tools) be used for writing and running Elasticsearch queries? I am new to Elasticsearch and very confused when it comes to doing hands-on work with it. Thank you in advance.
Kibana Dev Tools makes calling the Elasticsearch APIs easier, so you can develop whatever you want in Dev Tools, such as aggregation calls or query strings, before calling the APIs.
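For example, the console uses a shorthand for the REST APIs; a couple of hypothetical calls, where the index name "my-logs" is made up:

```
# Hypothetical Kibana Dev Tools console calls; "my-logs" is a made-up index.
PUT my-logs/_doc/1
{ "message": "user login", "level": "info" }

GET my-logs/_search
{ "query": { "match": { "level": "info" } } }
```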
On the other hand, in your application you should use an SDK, like Elasticsearch JS for JavaScript, so the queries and aggregations you developed in Kibana can be used in your application. You can also monitor your shards' health, put mappings for your indexes, and use more functionality, all of which can be found in the documentation; you can find the JS API documentation here.
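As a minimal sketch of that hand-off, using the official @elastic/elasticsearch client (v8-style API); the node URL, index name, and query are assumptions for illustration:

```typescript
// Minimal sketch using @elastic/elasticsearch (v8-style API).
// The node URL, index name, and query are assumptions for illustration.
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

async function run() {
  // The same query you might have prototyped in Kibana Dev Tools.
  const result = await client.search({
    index: 'my-logs',
    query: { match: { level: 'info' } },
  });
  console.log(result.hits.hits);
}

run().catch(console.error);
```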
You can use Kibana Dev Tools to invoke REST API commands for cluster-level actions such as taking snapshots, restoring them, etc., and also to index simple documents. But if you are looking to write data to Elastic on a regular basis, like ingesting server/app logs or server metrics (CPU, memory, disk usage, etc.), you should look at installing Filebeat or Metricbeat.
I have started working with ELK recently and have a question regarding the handling of multiple types of logs.
I have two sets of logs on my server that I want to analyse: one from my Android application and the other from my website. I have successfully transferred logs from this server to the ELK server via Filebeat.
I have created two filters, one for each type of log, and have successfully imported these logs into Logstash and then Kibana.
This link helped me do the above:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
The above link says to use the logs in the filebeat index in Kibana and start analysing (which I did successfully for one type of log). But the problem I am facing is that since these two kinds of logs are very different, they need to be analysed differently. How do I do this in Kibana? Should I create multiple Filebeat indexes and import them, or should it be just one single index, or some other way? I am not very clear on this (and could not find much documentation), so I would appreciate help and guidance here.
Elasticsearch organizes by index and type. Elastic used to compare these to SQL concepts, but now offers a new explanation.
Since you say that the logs are very different, Elastic's guidance is that you should use different indexes.
In Kibana, a visualization is tied to an index. If you have one panel from each index, you can show them both on the same dashboard.
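To illustrate the separate-index approach, here is a hypothetical Logstash output that routes each log type to its own index, assuming the two Filebeat inputs tag their events "android" and "website" (the tags and index names are made up):

```
# Hypothetical Logstash output: route by tag to separate indexes.
# The tags are assumptions; set them on the corresponding Filebeat inputs.
output {
  if "android" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "android-logs-%{+YYYY.MM.dd}"
    }
  } else if "website" in [tags] {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "website-logs-%{+YYYY.MM.dd}"
    }
  }
}
```

Each index then gets its own index pattern in Kibana, and panels built on either pattern can sit on the same dashboard.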
I am having an issue in Kibana: it does not show any results in the Discover tab.
Please look here for more information.
Are there any Kibana alternatives that the community has used? I searched the internet and could only find the Elasticsearch Head plugin. If nothing works, then I will work on consuming the Elasticsearch JSON feed using .NET and ASP.NET charts.
The only thing I know of would be Grafana, but that won't support ES until version 2.5. So currently you're going to have to make do with Kibana or manual labor.
EDIT
Grafana 2.5 has been released and features an Elasticsearch query editor.
I assume you are talking about Kibana 4 or 5. When this happens to me, it usually means that the time filter is set to a period for which there is no data, or the documents do not have timestamps, or the mapping of the timestamp field is not set to 'date'. One solution is to use Kibana 3 as your discovery panel. Here is a link to a fork that supports aggregations and Elasticsearch 2.x and 5.x:
https://github.com/immunochomik/kibana3
In Kibana 3 you can remove the time filter completely, so the time histogram will try to show you all the data in the index; also, if there are no timestamps, you can still look at the data in terms panels and documents panels.
Another interesting alternative is Redash; you can build dashboards combining many sources of data, including Elasticsearch. The drawback is that you need to know how to write a query.
Open source options: Grafana, Redash
If you are open to commercial solutions, Knowi might be an option for more advanced needs (multi-index/multi-database joins, AI, etc.). See their Elasticsearch playground.