I am using ES and Kibana 7.12.0
I am having an issue where my Kibana instance will randomly take a long time to carry out actions; sometimes even fetching the list of spaces can take 20s. I have a 3-node ES cluster and a single node hosting Kibana. I tried an entirely separate Kibana instance, but it had no impact on the behavior. In the Dev Tools section of Kibana I wrote a simple query against ES and timed it: sometimes it completes in 20-50ms, and other times it takes 9-20s. When I run the same query with curl directly against the ES cluster, it always completes in under 100ms. I have also tried altering the Kibana configuration to point at each ES node individually, to see whether a particular node was causing the issue, but that had no impact.
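For reference, this is roughly how I timed the query outside of Kibana (a minimal sketch in Python; the host, index name, and query are placeholders for my actual setup):

```python
# Rough timing harness: run the same query repeatedly against ES directly
# and compare wall-clock time with the "took" value ES reports.
# The host, index, and query below are placeholders.
import time
import requests

ES_URL = "http://es-node-1:9200/my-index/_search"  # placeholder node/index
QUERY = {"query": {"match_all": {}}, "size": 10}

for _ in range(20):
    start = time.perf_counter()
    resp = requests.post(ES_URL, json=QUERY, timeout=30)
    wall_ms = (time.perf_counter() - start) * 1000
    print(f"status={resp.status_code} took={resp.json().get('took')}ms wall={wall_ms:.0f}ms")
```

Run this way, the wall-clock time stays close to the reported `took`; only requests routed through Kibana show the 9-20s stalls.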
I did see this pattern in the metrics for Kibana: it seems suspicious to me that the memory follows that pattern. Anyone have any ideas on how I can resolve this?
Thank you!
Turns out this was caused by an issue in Kibana 7.12.0; upgrading to 7.12.1 resolved it.
As indicated here
I am running Elasticsearch on a dedicated server for a SaaS platform. The problem is that when cron jobs execute and massively update/insert new values in Elasticsearch, the front office (the site) sometimes fails to connect to Elasticsearch (the connection returns false).
Does anyone know what the problem could be and how it can be fixed? We are running the latest stable Elasticsearch version.
This happens on and off: when I refresh the page in the front office, sometimes it cannot connect to Elasticsearch; after another refresh it works again, and so on, until the heavy load passes.
We have NVMe drives, and Elasticsearch runs only on that server as a single node (not a multi-node cluster).
When I say massively, I mean 1,000-2,000 updates per second.
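For reference, the front-office query path looks roughly like this (a sketch using the Python Elasticsearch client; our actual client code differs, and the host, index, and retry settings shown here are placeholders I am experimenting with as a stopgap):

```python
# Sketch of the front-office query path, with client-side retries added
# as a stopgap while the cron jobs hammer the node.
# Host, index, and query are placeholders for the real setup.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "http://localhost:9200",  # placeholder host
    retry_on_timeout=True,    # retry requests that time out under load
    max_retries=3,            # give the node a few chances before giving up
    request_timeout=10,       # don't let a front-office page hang forever
)

def search_products(term):
    # A typical front-office search; fails fast instead of hanging.
    return es.search(
        index="products",  # placeholder index
        query={"match": {"name": term}},
        size=20,
    )
```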
I have started working with the ELK stack recently and have a question about handling multiple types of logs.
I have two sets of logs on my server that I want to analyse: one from my Android application and the other from my website. I have successfully shipped the logs from this server to the ELK server via Filebeat.
I have created two filters, one for each type of log, and have successfully imported these logs into Logstash and then Kibana.
This link helped me do the above:
https://www.digitalocean.com/community/tutorials/how-to-install-elasticsearch-logstash-and-kibana-elk-stack-on-centos-7
The above link says to use the logs from the filebeat index in Kibana and start analysing (which I successfully did for one type of log). But the problem I am facing is that since these two kinds of logs are very different, they need to be analysed differently. How do I do this in Kibana? Should I create multiple Filebeat indexes and import them, or should it be a single index, or something else entirely? I am not very clear on this (I could not find much documentation), so I would appreciate some help and guidance here.
Elasticsearch organizes by index and type. Elastic used to compare these to SQL concepts, but now offers a new explanation.
Since you say that the logs are very different, Elastic is saying that you should use different indexes.
In Kibana, each visualization is tied to an index. If you have one panel from each index, you can show them both on the same dashboard.
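To illustrate the separation (a minimal sketch in Python; the host, index names, and documents are made up):

```python
# Sketch: two very different log types kept in separate indices.
# Host, index names, and documents are made up for illustration.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

# Each log type goes into its own index.
es.index(index="logs-android", document={"level": "ERROR", "msg": "NPE in MainActivity"})
es.index(index="logs-website", document={"status": 500, "path": "/checkout"})

# Each analysis only touches its own index...
android_errors = es.search(index="logs-android", query={"match": {"level": "ERROR"}})

# ...but a wildcard still lets you search across both when needed.
all_logs = es.search(index="logs-*", query={"match_all": {}})
```

In Kibana you would then create one index pattern per index (or a wildcard pattern such as `logs-*` covering both) and point each visualization at the appropriate one.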
I have a Couchbase cluster set up as the primary source for data. From this, a subset of the data is synced to an Elasticsearch cluster via the Couchbase Transport Plugin for Elasticsearch (https://github.com/couchbaselabs/elasticsearch-transport-couchbase), which sets up an XDCR stream from Couchbase to Elasticsearch.
Due to some issues with the Elasticsearch cluster, all data needs to be synced again from Couchbase to Elasticsearch. I have tried recreating the XDCR stream, but that does not seem to help, as it only copies a very small subset of the documents. Is there a way this can be achieved?
Additional details
Couchbase version: 3.1.0
Number of Couchbase documents: 50K+
Documents synced to Elasticsearch: around 700 (expected 20K+)
If a document in Couchbase is modified, it is successfully synced to Elasticsearch
The issue you're experiencing is likely in one of the following: XDCR, the Couchbase Transport Plugin for Elasticsearch, or Elasticsearch itself.
Start by checking for XDCR errors. You can find your XDCR logs using these instructions. Be aware that the Transport Plugin uses XDCR v1 and almost everything else in Couchbase uses v2.
Consult the advice in troubleshooting the Couchbase Transport Plugin for Elasticsearch. The instructions should work for you even though they are from the 4.0 docs.
Pay attention to how your documents are being mapped to Elasticsearch. You mention that you're expecting only a subset of documents to be synced to Elasticsearch, so it's possible that you have lost a setting or misconfigured something. You can enable logging and observe a small set of test data. At TRACE level, you should be able to see each document that is inspected.
If all of that fails, make sure the basics are working by indexing the beer sample dataset, following the directions in the Couchbase docs. ES is probably not the issue, but testing with a fresh ES instance will rule out problems on that side.
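As a quick sanity check on the Elasticsearch side, you can also compare the document count with what you expect from Couchbase (a minimal sketch using the Python client; the host and index name are placeholders):

```python
# Sanity check: how many documents actually made it into Elasticsearch?
# Host and index name are placeholders for your setup.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

count = es.count(index="couchbase-sync")["count"]  # placeholder index
print(f"Documents in ES: {count}")  # compare with the 20K+ you expect
```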
I am having an issue in Kibana: it does not show any results in the Discover tab.
Please look here for more information.
Are there any Kibana alternatives that the community has used? I searched the internet and could only find the elasticsearch-head plugin. If nothing works, I will work on consuming the Elasticsearch JSON feed using .NET and ASP.NET charts.
The only thing I know of would be Grafana, but that won't support ES until version 2.5. So for now you're going to have to make do with Kibana or manual labor.
EDIT
Grafana 2.5 has been released and features an Elasticsearch query editor.
I assume you are talking about Kibana 4 or 5. When this happens to me, it usually means that the time filter is set to a period for which there is no data, or the documents do not have timestamps, or the mapping of the timestamp field is not set to 'date'. One solution is to use Kibana 3 as your discovery panel. Here is a link to a fork that supports aggregations and Elasticsearch 2.x and 5.x:
https://github.com/immunochomik/kibana3
In Kibana 3 you can remove the time filter completely, so the time histogram will try to show you all the data in the index; and if there are no timestamps, you can still look at the data in terms panels and documents panels.
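Before falling back to Kibana 3, it is worth confirming how the timestamp field is actually mapped (a minimal sketch using the Python client, assuming a typeless-mapping Elasticsearch, 7.x or later; on older versions the mappings are nested one level deeper under the type name; the host and index pattern are placeholders):

```python
# Check whether the timestamp field is mapped as 'date'.
# Host and index pattern are placeholders; assumes typeless mappings (ES 7+).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

mapping = es.indices.get_mapping(index="logstash-*")  # placeholder pattern
for index in mapping:
    props = mapping[index]["mappings"].get("properties", {})
    ts_type = props.get("@timestamp", {}).get("type", "no @timestamp field")
    # Anything other than 'date' here explains an empty Discover tab.
    print(index, "->", ts_type)
```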
Another interesting alternative is Redash: you can build dashboards combining many sources of data, including Elasticsearch. The drawback is that you need to know how to write a query.
Open source options: Grafana, Redash
If you are open to commercial solutions, Knowi might be an option for more advanced needs (multi-index/multi-database joins, AI, etc.). See their Elasticsearch playground.
I have a lot of data indexed in my Elasticsearch.
I deleted the Elasticsearch folder, extracted a fresh zip of Elasticsearch again, and started the Elasticsearch server.
I am surprised because, after starting the new Elasticsearch server, I again found all the old data, and this problem persists again and again.
Can anyone please help me? I don't want all the old data indexed in Elasticsearch.
Regards
Given the cluster health response, it's not a problem with multiple nodes running in the same cluster, as Igor suggested. I'd suggest you check the running Java processes. You might have an Elasticsearch instance hanging somewhere that keeps writing to that folder.
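One way to confirm which data directory the running node is actually writing to (a minimal sketch using the Python client; the host is a placeholder):

```python
# Ask the running node(s) which data path they are configured with.
# Host is a placeholder; path.data may be absent if the default is used.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder host

info = es.nodes.info()
for node_id, node in info["nodes"].items():
    path_settings = node.get("settings", {}).get("path", {})
    data_path = path_settings.get("data", "default (<ES_HOME>/data)")
    print(node["name"], "->", data_path)
    # A node pointing at the old folder (or a node you didn't expect at all)
    # is where the stale data is coming from.
```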