List all routing values - Elasticsearch

Is there any way to get all the routing values I have already used to index different documents? For example, I am using the month to define the routing: all documents with July as the response date are routed to a single shard, and so on. Can I get all the months that have been used as routing values through some query?
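One approach, assuming the month used for routing is also stored as a regular document field, is a terms aggregation over that field, since the values it returns are exactly the routing values in use. A minimal sketch in Dev Tools syntax (the index name my-index and the keyword field month are hypothetical):

# Hypothetical names: assumes the routing month is also indexed
# as a keyword field called "month"
GET my-index/_search
{
  "size": 0,
  "aggs": {
    "routing_values": {
      "terms": { "field": "month", "size": 12 }
    }
  }
}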

Related

Suggest Feature in Elasticsearch

I am trying to implement the suggest feature (Suggest Usage | Elasticsearch .NET Client [8.4] | Elastic) for handling misspelled words in my search implementation.
My search query is executed across multiple indices, but when I try to use the suggest functionality I run into failures due to unmapped fields.
Suppose I have an index named people with a field "name", and another index named news with a field "title". My query is executed across both indices at the same time, and the search query has rules defined for both the name and title fields. But when using suggest, I only want to return suggestions for the name field in the people index as part of the same query. As a result, my news index returns a failure that no mapping is found for the field name.
Is there a workaround in the suggest functionality by which I can specify an index name for the field mentioned in suggest, OR can I ignore unmapped fields and continue to return search results from the other index (news) without returning any suggestions for misspelled words for that index?
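One possible workaround, sketched here in Dev Tools syntax rather than the .NET client, is to split the suggest portion into a separate request scoped only to the index that actually maps the field (the people/name names below follow the hypothetical example above):

# Term suggester scoped to the "people" index only, so the "news"
# index with its unmapped "name" field is never consulted
POST people/_search
{
  "size": 0,
  "suggest": {
    "name-suggestions": {
      "text": "jhon",
      "term": { "field": "name" }
    }
  }
}

The main search can still run across both indices; only the suggest call is issued separately.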

geoip.location does not work with modified index names sent via Logstash

geoip.location has the geo_point datatype when an event is sent from Logstash to Elasticsearch with the default index name. Because geoip.location is a geo_point, I can view the plotted locations on maps in Kibana, as Kibana looks for the geo_point datatype for maps.
geoip.location becomes geoip.location.lat and geoip.location.lon with the number datatype when an event is sent from Logstash to Elasticsearch with a modified index name. Because of this I'm not able to view the plotted locations on maps in Kibana.
I don't understand why Elasticsearch would behave differently when I add data to a modified index name. Is this a bug in Elasticsearch?
For my use case I need to use a modified index name, as I need a new index for each day. The plan is to store the logs of a particular day in a single index, so if there are 7 days I need 7 indices, each containing one day's logs (a new index should be created based on the current date).
I searched around for a solution, but I'm not able to comprehend it and make it work for me. Kindly help me out on this.
Update (what I did after reading xeraa's answer):
In Dev Tools in Kibana:
GET _template/logstash showed the allowed patterns in the index_patterns property, along with other properties.
I included my pattern (dave*) inside index_patterns and triggered the PUT request. You have to pass the entire existing body content (which you receive from the GET request) in the PUT request along with your required index_patterns, otherwise the default settings will disappear, because the PUT API replaces the template with whatever you pass in the PUT body:
PUT _template/logstash
{
  ...
  "index_patterns": [
    "logstash-*", "dave*"
  ],
  ...
}
I'd guess that there is a template set up for the default name, which isn't being applied when you rename the index.
Check with GET _template whether any template matches your old index name, and update its settings so that it also gets applied to the new one.

Elasticsearch query specifying an index name using today's date

I'm using Logstash to populate ES with a number of metrics from our live services across a number of machines. Logstash creates a new index each day, and I am finding that querying ES without specifying the index runs slowly (I currently maintain 5 days of indices). If I specify the specific index, e.g. today's:
.es(index=logstash-2018.01.15, q=examplequery)
it runs very quickly.
Is there a way I can specify today's index using the date field?
e.g.
.es(index=logstash-'get date', q=examplequery)
You can use the following to get the indices for today's date:
.es(index='<logstash-{now/d}>')
An interesting read with all the options available in Elasticsearch to include date math in index names:
https://www.elastic.co/guide/en/elasticsearch/reference/current/date-math-index-names.html
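The same date math also works in a direct search request; per the page above, the special characters in the index name must be percent-encoded in the request path. A minimal sketch:

# /%3Clogstash-%7Bnow%2Fd%7D%3E is the URL-encoded form of <logstash-{now/d}>,
# which resolves to today's daily index (e.g. logstash-2018.01.15)
GET /%3Clogstash-%7Bnow%2Fd%7D%3E/_search
{
  "query": { "match_all": {} }
}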
By looking at the syntax I guess you are using Timelion or something else that uses the query string. There is a good tutorial here that includes specifying index patterns:
https://www.elastic.co/blog/timelion-tutorial-from-zero-to-hero
In your case it will be
.es(index=logstash-*, q=examplequery)
or
.es(index=logstash-2018.01.*, q=examplequery)
if you need this year's January and the index pattern is 'logstash-YYYY.MM.dd'.

Elasticsearch - Find documents with a conflicting field type

I'm using Elasticsearch 5.6.2 with Kibana and I'm currently facing a problem.
My documents are indexed on the field timestamp, which is normally an integer, but recently somebody logged a document with a timestamp that is not an integer, and Kibana complains about a conflicting type.
The Discover panel displays nothing and the following errors pop up:
Saved "field" parameter is now invalid. Please select a new field.
Discover: "field" is a required parameter
How can I look for the document(s) causing these conflicts, so as to find the service creating the bad logs?
The field type (either integer or text/keyword) is not defined per document but per index (in the mappings). I guess you are manipulating time-series data, and you probably have an index per day (or per month, or ...).
In Kibana Dev Tools:
List the created indices with GET _cat/indices.
For each index (logstash-2017.09.28 in my example), do a GET logstash-2017.09.28/_mapping and check the type of the timestamp field.
The field type is probably different between indices.
You won't be able to change the field type on indices that already exist, and deleting documents won't solve your problem. The only solutions are to drop the index or to reindex the whole index with a new field type (in a specific mapping).
To avoid this problem on future indices, the solution is to create an index template with a mapping stating that the timestamp field is of type date (or whatever type you need), as sketched below.
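A minimal sketch of such a template for ES 5.6 (the template name timestamp_fix is hypothetical, and the field is assumed to be the question's timestamp):

# Legacy template syntax for ES 5.x; newer versions use "index_patterns"
# and, later, composable templates instead of "template" and "_default_"
PUT _template/timestamp_fix
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "properties": {
        "timestamp": { "type": "date" }
      }
    }
  }
}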

Filter by timestamp in nested fields in Kibana 4

Is it possible to create a mapping (e.g. nested) that allows filtering by the individual timestamps of orders, where the orders are nested properties of the indexed documents (products)?
In other words, I would like to define a time range in Kibana and receive a list of matching products that contain any orders matching the given time range.
As far as I know, Kibana cannot handle nested JSON, so you first need to flatten it to standard JSON.
As per your question ("I would like to define a time range in Kibana and receive a list of matching products that contain any orders matching the given time range"):
For this you can load the flattened data into Kibana and filter the time range in the Kibana UI using the Time Filter, which will show you the orders matching the time range via the timestamp field, as mentioned.
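For raw Elasticsearch queries (outside Kibana), the mapping the question describes does work; a minimal sketch with hypothetical products/orders names, using current typeless mapping syntax (ES versions before 7 need a document type in the mapping):

# Each product holds nested orders, each with its own timestamp
PUT products
{
  "mappings": {
    "properties": {
      "orders": {
        "type": "nested",
        "properties": {
          "timestamp": { "type": "date" }
        }
      }
    }
  }
}

# Products that contain at least one order in the given time range
GET products/_search
{
  "query": {
    "nested": {
      "path": "orders",
      "query": {
        "range": {
          "orders.timestamp": { "gte": "2015-01-01", "lte": "2015-01-31" }
        }
      }
    }
  }
}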
