Why are Elasticsearch and Kibana ceaselessly generating garbage data? - elasticsearch

I just installed Elasticsearch and Kibana, with X-Pack enabled. I never put any data into Elasticsearch, but when I use Kibana to monitor Elasticsearch, I see the amount of stored data increasing on its own. I have to stop Elasticsearch and Kibana to make it stop. Why does this happen, and how can I fix it?
PS: the Elasticsearch and Kibana version is 7.11.1.
The increasing number is visible under Kibana > Stack Monitoring > Elasticsearch > Overview > Data.
The highlighted value grew from 2 MB to 314 MB; I don't know what is producing it.
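Not an authoritative diagnosis, but with X-Pack enabled this growth is usually Stack Monitoring writing its own metrics into hidden `.monitoring-*` indices, i.e. the cluster monitoring itself. A quick way to check, assuming Elasticsearch is reachable on localhost:9200:

```shell
# List the monitoring indices and their sizes; in a fresh X-Pack setup
# these are typically what grows even with no user data indexed.
curl -s 'http://localhost:9200/_cat/indices/.monitoring-*?v&h=index,store.size'

# If you don't need self-monitoring, collection can be switched off
# with a dynamic cluster setting:
curl -s -X PUT 'http://localhost:9200/_cluster/settings' \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"xpack.monitoring.collection.enabled": false}}'
```

Disabling collection stops the growth; the existing `.monitoring-*` indices can then be deleted or left to expire.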

Related

Which log message indicates that the index is sent from filebeat to elasticsearch?

I'm trying to set up Filebeat, Elasticsearch and Kibana on microk8s (a single-node Kubernetes implementation) using the Helm charts from https://github.com/elastic/helm-charts.
My aim is to make the Filebeat index show up in Kibana. The Metricbeat index already shows up. I had it working for a few seconds, then it disappeared, and I have now spent about 30 hours over the last few days trying to make it appear again. Due to the sheer number of things I've tried, I cannot list them all; none of them seemed to make more sense than the others.
After enabling all log messages in filebeat.yml with
logging.level: debug
logging.selectors: '*'
logging.metrics.enabled: false
I still have no clue whether output is sent to Elasticsearch or not. Or whether Elasticsearch receives it. Or receives it and rejects it. Therefore I need a hint as to which message indicates that the data for the index was sent to Elasticsearch, so that I can invest the next 30 hours in debugging either Filebeat or Elasticsearch.
I'm aware that the official docs don't use Helm charts, but it would be slightly masochistic not to use them in the Kubernetes world. The docs only say "apply YAML file xy", which is not really documentation from Elastic, but just the shortest possible example of kubectl apply anyway.
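One low-effort way to narrow this down is to ask Elasticsearch directly whether any Filebeat index exists, rather than scanning Filebeat's debug log. A sketch, assuming you can reach the cluster on localhost:9200 (in microk8s you may first need `kubectl port-forward` or the in-cluster service name):

```shell
# Does a Filebeat index exist at all? An empty result means nothing
# has been created yet, i.e. the problem is upstream of Elasticsearch.
curl -s 'http://localhost:9200/_cat/indices/filebeat-*?v'

# If the index exists, count its documents; a count of 0 means the
# index template was applied but no events have been indexed.
curl -s 'http://localhost:9200/filebeat-*/_count'
```

If the index exists and the count grows, Elasticsearch is receiving data and the issue is on the Kibana side (e.g. a missing index pattern); if not, the Filebeat output or its connectivity is the place to debug.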

Speed up parsing in Elasticsearch Logstash Kibana

I am new to ELK [Elasticsearch, Logstash and Kibana]. I installed Elasticsearch, Logstash and Kibana on one server, then installed Logstash on two more machines. Total RAM in each system is around 30 GB, and the total volume of files to parse is around 300 GB. It took 6 days to filter out the searched items [I searched for a 10-digit number, a timestamp and the name of one API across all these logs] and display them in Kibana. Am I doing something wrong here? Is there any other way to speed up this process?
Thanks in Advance,
Paul
You can filter based on time in the Kibana UI. Also, if you are pushing to Logstash from any Beats shipper, Logstash takes time to push the data on to Elasticsearch.
There are many Beats applications that can push data directly to Elasticsearch.
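If Logstash itself is the bottleneck, the standard knobs are the pipeline worker and batch settings in `logstash.yml`. The values below are illustrative only, not recommendations; tune them for your hardware:

```yaml
# logstash.yml – pipeline throughput settings (illustrative values)
pipeline.workers: 8        # defaults to the number of CPU cores
pipeline.batch.size: 500   # events per worker batch (default 125)
pipeline.batch.delay: 50   # ms to wait before flushing a partial batch
```

Larger batches trade a little latency and memory for throughput; also make sure the JVM heap configured for Logstash is large enough to hold the bigger in-flight batches.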

Elasticsearch not immediately available for search through Logstash

I want to send queries to Elasticsearch through the Elasticsearch plugin within Logstash for every event in process. However, Logstash sends requests to Elasticsearch in bulk, and indexed events are not immediately made available for search in Elasticsearch. It seems to me that there will be a lag (up to a second or more) between an event passing through Logstash and it being searchable. I don't know how to solve this.
Do you have any idea ?
Thank you for your time.
Joe
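There is no free way to make bulk-indexed documents searchable instantly, but the window can be shrunk: call the refresh API after indexing, or index with `refresh=wait_for` so the request only returns once the document is searchable. A sketch with curl, assuming a local cluster and a hypothetical index name `my-index`:

```shell
# Force a refresh so everything indexed so far becomes searchable now.
curl -s -X POST 'http://localhost:9200/my-index/_refresh'

# Or, when indexing a single document, block until it is searchable:
curl -s -X PUT 'http://localhost:9200/my-index/_doc/1?refresh=wait_for' \
  -H 'Content-Type: application/json' \
  -d '{"field": "value"}'
```

Note that frequent forced refreshes are expensive; by default Elasticsearch refreshes roughly every second anyway, so this is worth it only when per-event read-after-write really is required.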

Visualize Elasticsearch index size in Kibana

Is it possible to show the size (physical size, e.g. MB) of one or more ES indices in Kibana?
Thanks
Kibana only:
It's not possible out of the box to view the disk size of indices in Kibana.
Use the cat indices API to find out how big your indices are (that's even possible without Kibana).
If you need to view that data in Kibana, index the output of the cat command into a dedicated Elasticsearch index and analyse it there.
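For reference, a sketch of the cat indices call, assuming Elasticsearch on localhost:9200; the `h=` parameter selects columns and `s=` sorts them:

```shell
# Size per index, largest first – works without Kibana.
curl -s 'http://localhost:9200/_cat/indices?v&h=index,store.size,pri.store.size&s=store.size:desc'
```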
If other plugins/tools than Kibana are acceptable, read the following:
Check the Elasticsearch community plugins. The Head plugin (which I would recommend) gives you the info you want, in addition to many other details, like stats about your shards, nodes, etc.
Alternatively you could use the commercial Marvel plugin from Elastic. I have never used it before, but it should be capable of what you want, and much more. Marvel is likely overkill for what you want, though, so I wouldn't recommend it in the first place.
Although not a Kibana plugin, Cerebro is the official replacement for Kopf and runs as a standalone web server that can connect remotely to Elasticsearch instances. The UI is very informative and functional.
https://github.com/lmenezes/cerebro

Elasticsearch getting data repeatedly

I index data from MySQL into Elasticsearch, but when I restart Elasticsearch (or re-run the import), the search results contain duplicates. What should I do? Can anybody help?
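Duplicates usually mean each import run indexes the same rows with fresh auto-generated `_id`s. If every document gets a deterministic `_id` derived from the row's primary key, re-importing overwrites the existing document instead of duplicating it (in Logstash's elasticsearch output this is the `document_id` option). A minimal sketch in Python, assuming a hypothetical primary-key column named `id`:

```python
import hashlib

def stable_doc_id(row: dict, key_fields=("id",)) -> str:
    """Build a deterministic Elasticsearch _id from a row's key columns.

    Re-indexing the same row then updates the existing document rather
    than creating a duplicate. `key_fields` is an assumption here:
    substitute your table's actual primary-key column(s).
    """
    raw = "|".join(str(row[f]) for f in key_fields)
    return hashlib.sha1(raw.encode("utf-8")).hexdigest()

row = {"id": 42, "name": "Paul"}
print(stable_doc_id(row))  # same key columns always yield the same _id
```

Pass this value as the document `_id` when indexing; Elasticsearch then treats repeated imports of the same row as updates, not inserts.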
