I've set up an Elastic Stack 5.3 to aggregate logs from a bunch of servers, with Filebeat in each of the servers scraping the logs and sending them to a centralised Logstash, Elasticsearch and Kibana.
I've set up my Logstash configuration to extract some custom string fields, but I want to adjust the index template so their type is "keyword" rather than "text". I've found the configuration directives to specify my own template, but where can I find Logstash's default template so I can use it as a starting point? I've searched under /etc/logstash and /usr/share/logstash (I've installed a vanilla Logstash 5.3 RPM on RHEL 7) but couldn't find anything.
Any good example of how to create a non-standard index template on Logstash 5.x would be really handy; most of the examples I have found predate Beats and the new string types in 5.x. The documentation leaves something to be desired.
The default Elasticsearch index template can be found in the logstash-output-elasticsearch plugin repository at https://github.com/logstash-plugins/logstash-output-elasticsearch/tree/master/lib/logstash/outputs/elasticsearch
You'll find several templates in there, for ES 2.x, 5.x and 6.x; the one you're looking for is probably the 5.x one.
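Once you've downloaded the 5.x template and changed the mapping for your custom fields to "keyword", you can point Logstash at your copy. A minimal sketch of the elasticsearch output, assuming you saved the edited template as /etc/logstash/my-template.json (the path and host are placeholders):

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Use the edited copy instead of the plugin's built-in template
    template           => "/etc/logstash/my-template.json"
    template_name      => "logstash"
    template_overwrite => true
  }
}
```

template_overwrite => true makes Logstash replace the template already installed in Elasticsearch; otherwise it leaves the existing one in place.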
Why are there two modules in Metricbeat for ES?
elasticsearch
elasticsearch-xpack
Both have the same configuration in the modules.d directory.
The Kibana page for the Elasticsearch module suggests using the elasticsearch module,
but the documentation of the Elasticsearch module suggests the latter one:
Alternatively, run metricbeat modules disable elasticsearch and metricbeat modules enable elasticsearch-xpack.
It's so confusing. I think that if I need to use ES with X-Pack, then I should use the latter module. But from 6.7.0 onwards, ES ships the basic features of X-Pack with the open-source distribution.
Thanks.
The configurations are almost the same: the elasticsearch-xpack module has the option xpack.enabled: true, which is not present in the elasticsearch module, and in the elasticsearch-xpack module you also do not specify any metricsets.
If you are using the monitoring UI in Kibana, then you should use the elasticsearch-xpack module, which will collect the metrics that Kibana needs.
If you are not using the monitoring UI in Kibana, or are not even using Kibana and just want to collect the metrics, then you need to use the elasticsearch module and specify the metricsets that you want to collect.
The elasticsearch-xpack module is just the elasticsearch module without any metricsets configured and with the option xpack.enabled: true.
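To make the difference concrete, here is a sketch of the two module files as they typically look in modules.d (the hosts, period, and metricset selection are assumptions):

```yaml
# modules.d/elasticsearch.yml -- plain metrics collection, you choose the metricsets
- module: elasticsearch
  metricsets: ["node", "node_stats", "index"]
  period: 10s
  hosts: ["http://localhost:9200"]

# modules.d/elasticsearch-xpack.yml -- feeds the Kibana monitoring UI
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]
```

With xpack.enabled: true, Metricbeat decides internally which metricsets to collect and ships them in the format the monitoring UI expects, which is why you don't list any metricsets yourself.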
This is specifically for Zipkin's Elasticsearch storage connector, which does not expire old data on its own (the docs point to Curator instead).
Is there a way to automatically remove old traces as part of the Elasticsearch configuration (rather than building yet another service or cron job)? Since I am using it for a development server, I just need it wiped every hour or so.
From the Zipkin docs:
There is no support for TTL through this SpanStore. It is recommended instead to use Elastic Curator to remove indices older than the point you are interested in.
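If you do go the Curator route, the cleanup can be a single action file that you run on whatever schedule you like. A hedged sketch, assuming Zipkin's default daily indices with the zipkin prefix (the file name and unit_count are placeholders):

```yaml
# delete_zipkin.yml -- run with: curator --config curator.yml delete_zipkin.yml
actions:
  1:
    action: delete_indices
    description: Delete Zipkin indices older than one day
    options:
      ignore_empty_list: true
    filters:
      - filtertype: pattern
        kind: prefix
        value: zipkin
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 1
```

Note that Zipkin's Elasticsearch storage writes daily indices, so deletion granularity is per day; wiping "every hour or so" isn't achievable this way without a custom delete-by-query job.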
I want to upgrade all my Elasticsearch indices from version 5.6.2 to 6.8.
I don't want to use X-Pack for the migration.
I am not using any Elastic products.
So, is there a proper guideline for this migration?
I could do it manually: write new mappings without types for my existing indices and then dump the documents from my current indices into new ones.
But I have more than 150 indices, and creating new mappings and migrating them manually will take time.
Elastic is the company behind Elasticsearch, so by using that you are using an Elastic product ;-)
The only upgrade assistant I'm aware of is in the free Basic (X-Pack) version of Kibana. Otherwise you'll have to manually work through the changes described in https://www.elastic.co/guide/en/kibana/6.0/xpack-upgrade-assistant.html.
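The manual path described in the question (new mapping, then copy documents) can at least be scripted against the reindex API rather than dumping data externally. A sketch for one index, in Kibana console notation; the index names and the single-type mapping are placeholders:

```
PUT myindex-v2
{
  "mappings": {
    "doc": {
      "properties": {
        "message": { "type": "text" }
      }
    }
  }
}

POST _reindex
{
  "source": { "index": "myindex" },
  "dest":   { "index": "myindex-v2" }
}
```

With 150+ indices you'd loop over the output of GET _cat/indices in a small script, but the per-index pattern stays the same.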
I have installed two instances of ES on my machine. One is version 5 (localhost:9200) and the other is version 6 (localhost:9500). Version 5 is used to index and store data alone, while version 6 is used for analytics with Kibana dashboards. When I start Kibana, it automatically stops, stating that all the ES instances should be on the same version. Is there any way I can stop Kibana from reading localhost:9200?
Like @Abhijit Bashetti stated in the comment, you need to modify the kibana.yml file in order to point Kibana at the Elasticsearch instance you wish.
You should change "localhost:9200" to "localhost:9500" in order for Kibana to reach the ES v6 instance.
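For reference, the relevant setting in kibana.yml for Kibana 6.x is elasticsearch.url (the host and port here match the question; adjust as needed):

```yaml
# kibana.yml -- point Kibana at the v6 cluster only
elasticsearch.url: "http://localhost:9500"
```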
We have been using Elasticsearch 1.x in production for some time now, with millions of records.
We want to upgrade the version from 1.x to 6.x as:
There have been multiple updates by the company and the support for older versions is discontinued.
1.x does not support Kibana.
What's the best way to do it with explicit steps on data security?
Thanks!
I recently did a migration from Elasticsearch 1.5 to 6.2.
The steps that need to be performed:
Update the mappings; a lot has changed between those two versions (as just one example, the _all field is disabled starting from 6.0). The official documentation should help you here.
After you've updated the mappings, you'll need another cluster set up with the desired version of Elasticsearch. Also update your Logstash/Kibana if needed.
Enable the new cluster to access your old one by adding it to reindex.remote.whitelist in elasticsearch.yml: reindex.remote.whitelist: oldhost:9200
For each index that you need to migrate, manually create a new index in your new cluster with the updated mappings from step 1.
Reindex from remote to pull the documents from the old index into the new 6.x index.
Full documentation regarding this one is available here - https://www.elastic.co/guide/en/elasticsearch/reference/current/reindex-upgrade-remote.html
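The whitelist and per-index reindex steps above can be sketched like this; oldhost, old-index and new-index are placeholders, and the mapping body stands in for whatever you produced in step 1. First, in elasticsearch.yml on the new 6.x cluster:

```yaml
reindex.remote.whitelist: "oldhost:9200"
```

Then, per index (Kibana console notation):

```
PUT new-index
{
  "mappings": {
    "doc": {
      "properties": {
        "message": { "type": "text" }
      }
    }
  }
}

POST _reindex
{
  "source": {
    "remote": { "host": "http://oldhost:9200" },
    "index": "old-index"
  },
  "dest": { "index": "new-index" }
}
```

Reindex-from-remote streams documents over HTTP, so the old 1.x cluster only needs to be reachable; it doesn't need to understand the new mappings.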