Elasticsearch multiple indices wildcard query string not working - elasticsearch

In the current [5.0] Elasticsearch docs it is said that
all multi-index APIs support the following URL query string parameters:
ignore_unavailable and allow_no_indices
I deleted all existing indices and tried to create a new one with a mapping:
curl -XDELETE "http://elastic:elastic@127.0.0.1:9200/mail-*?pretty=true"
curl -XPUT "http://elastic:elastic@127.0.0.1:9200/mail-*?ignore_unavailable=true&pretty=true" -d '{
  "mappings": {
    "ex": {
      "properties": {
        ...
I got this error:
"request [/mail-*] contains unrecognized parameter: [ignore_unavailable]"
I need to create this mapping because indices are created by Logstash, with a new index every day: index => "mail-%{+YYYY.MM.dd}"
If I remove the wildcard from the index name, it works!
Why do I need to do this? Because I use the geoip filter in Logstash, but geoip.location does not have the type "geo_point", and the Kibana tile map doesn't work without it.
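Since a mapping cannot be PUT to a wildcard index name, one possible workaround is an index template matching mail-*, so the mapping is applied each time Logstash creates the daily index. A minimal sketch, assuming ES 5.x, the type ex from above, and a hypothetical template name mail_template:

# Hypothetical template; applies geo_point to geoip.location on every
# newly created index whose name matches mail-*.
curl -XPUT "http://elastic:elastic@127.0.0.1:9200/_template/mail_template?pretty" -H 'Content-Type: application/json' -d'
{
  "template": "mail-*",
  "mappings": {
    "ex": {
      "properties": {
        "geoip": {
          "properties": {
            "location": { "type": "geo_point" }
          }
        }
      }
    }
  }
}'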

Related

How to create a document in Elasticsearch to save data and to search it?

Here is my requirement. I have three levels of data that I am getting from a DB. When I search for Developer, I should get all the values for Developer, such as GEO and GRAPH from data2, in a list; when it comes to Support, the values should contain SERVER and Data in a list. Then, based on the selection from data1, data3 should be searchable; for example, when we select Developer, then GeoPos and GraphPos...
The logic I need to implement here uses Elasticsearch.
data1      data2   data3
Developer  GEO     GeoPos
Developer  GRAPH   GraphPos
Support    SERVER  ServerPos
Support    Data    DataPos
This is what I have done to create the index and to get the values:
curl -X PUT "http://localhost:9200/mapping_log" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "data1": { "type": "text", "fields": { "keyword": { "type": "keyword" } } },
      "data2": { "type": "text", "fields": { "keyword": { "type": "keyword" } } },
      "data3": { "type": "text", "fields": { "keyword": { "type": "keyword" } } }
    }
  }
}'
For searching values, I am not sure what I am going to get. Can you please help with the search DSL query too?
curl -X GET "localhost:9200/mapping_log/_search?pretty" -H 'Content-Type: application/json' -d'
{
  "query": {
    "match": {
      "data1.data2": "product"
    }
  }
}'
How do I create documents for this type of data? Can we create the JSON and post it through Postman or curl?
If your documents are not yet indexed in Elasticsearch, you first need to ingest them into an existing index with the aid of Logstash; you can find many configuration files for your type of input database.
Before transforming your documents, create an index in Elasticsearch with a multi-field mapping. You can also use dynamic mapping (Elasticsearch's default mapping) and change your DSL query accordingly, but I recommend using a multi-field mapping, as follows:
PUT /mapping
{
  "mappings": {
    "properties": {
      "rating": { "type": "float" },
      "content": { "type": "text" },
      "author": {
        "properties": {
          "name": { "type": "text" },
          "email": { "type": "keyword" }
        }
      }
    }
  }
}
The result will be:
[Screenshot: Mapping result]
Then you can query the fields in the Kibana Dev Tools with a DSL query like the one below:
GET /mapping/_search
{
  "query": {
    "match": { "author.email": "SOMEMAIL" }
  }
}
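As for the second part of the question: yes, documents are plain JSON posted over HTTP, via curl or Postman. A minimal sketch, assuming the mapping_log index created above on a typeless (ES 7+) cluster:

# Index one row of the sample data as a JSON document.
curl -X POST "localhost:9200/mapping_log/_doc?pretty" -H 'Content-Type: application/json' -d'
{
  "data1": "Developer",
  "data2": "GEO",
  "data3": "GeoPos"
}'

In Postman, the same request is a POST to localhost:9200/mapping_log/_doc with the JSON as the raw body and a Content-Type: application/json header.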

Change geoip.location mapping from number to geo_point

I am using a Logstash geoip filter to derive location data from my Filebeat IIS logs:
filter {
  geoip {
    source => "clienthost"
  }
}
But the data type in Elasticsearch is:
geoip.location.lon = NUMBER
geoip.location.lat = NUMBER
But in order to map points, I need to have
geoip.location = GEO_POINT
Is there a way to change the mapping?
I tried posting a changed mapping
sudo curl -XPUT "http://localhost:9200/_template/filebeat" -d @/etc/filebeat/geoip-mapping-new.json
with a new definition but it's not making a difference:
{
  "mappings": {
    "geoip": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  },
  "template": "filebeat-*"
}
Edit: I've tried this with both ES/Kibana/Logstash 5.6.3 and 5.5.0.
This is not a solution, but I deleted all the data and reinstalled ES, Kibana, Logstash, and Filebeat 5.5.
And now ES recognizes location as a geo_point. I guess that previously, even though I had changed the mapping, there was still data that had been mapped incorrectly, and Kibana was assuming the incorrect data type; a reindex of the complete data would probably have fixed the problem.
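For completeness, a sketch of that reindex approach (the index name is hypothetical; a corrected template only affects indices created after it is installed, so existing data has to be reindexed into a new index that matches the template pattern):

curl -XPOST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": { "index": "filebeat-2017.11.01" },
  "dest": { "index": "filebeat-2017.11.01-reindexed" }
}'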

Elasticsearch query with wildcards

Using the data from the Elasticsearch tutorials as an example, the following URI search hits 9 records,
curl -XGET 'remotehost:9200/bank/_search?q=city:R*d&_source_include=city&pretty'
while the following request body search hits 0 records:
curl -XGET 'remotehost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "wildcard": { "city": "R*d" } },
  "_source": ["city"]
}'
But the two methods should be equivalent to each other. Any idea why this is happening? I am using Elasticsearch 5.5.1 in Docker.
You can get your expected result with the command below, which adds an extra .keyword to the city field in your query.
curl -XGET 'localhost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'{"query": {"wildcard": {"city.keyword": "R*d"} }, "_source": ["city"]}'
Reason for adding .keyword
When you insert data into Elasticsearch, you will notice a .keyword field, and that field is not_analyzed. By default, the field you insert data into is analyzed with the standard analyzer, and a .keyword multi-field is added alongside it. If you create a field city with data, Elasticsearch creates the field city with the standard analyzer and adds the multi-field city.keyword, which is not_analyzed.
In your case you need a not_analyzed field for a wildcard query, so your query should target the city.keyword field, which is not_analyzed by default.
In the first case, you sent a GET request to Elasticsearch with a query parameter; Elasticsearch automatically converts that query into the request body format.
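To illustrate that conversion: the q= parameter of a URI search is shorthand for a query_string query, so the first command roughly corresponds to this request body:

# The URI search ?q=city:R*d is shorthand for a query_string query.
curl -XGET 'remotehost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "query": { "query_string": { "query": "city:R*d" } },
  "_source": ["city"]
}'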
For a reliable source, you can follow the official docs:
The string field has split into two new types: text, which should be
used for full-text search, and keyword, which should be used for
keyword search.
To make things better, Elasticsearch decided to borrow an idea that
initially stemmed from Logstash: strings will now be mapped both as
text and keyword by default. For instance, if you index the
following simple document:
{
  "foo": "bar"
}
Then the following dynamic mappings will be created:
{
  "foo": {
    "type": "text",
    "fields": {
      "keyword": {
        "type": "keyword",
        "ignore_above": 256
      }
    }
  }
}
As a consequence, it will both be possible to perform full-text search
on foo, and keyword search and aggregations using the foo.keyword
field.
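A quick sketch of that keyword side, reusing the bank index from this question: a terms aggregation must target the not_analyzed sub-field.

# Aggregate on the keyword sub-field; aggregating on the analyzed
# "city" text field would fail, since fielddata is disabled by default.
curl -XGET 'remotehost:9200/bank/_search?pretty' -H 'Content-Type: application/json' -d'
{
  "size": 0,
  "aggs": {
    "cities": { "terms": { "field": "city.keyword" } }
  }
}'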

Change mapping for a field for ALL Logstash-created indices

I would like to change the type of the field location to geo_point. I'm using ES with Logstash and, as y'all know, indices are generated with the name logstash-yyyy-mm-dd.
I first created a logstash index and named it logstash-2016-03-29, like so:
curl -XPUT 'http://localhost:9200/logstash-2016-03-29'
Then I changed the mapping for, supposedly, all the indices called logstash-*, using the following:
curl -XPOST "http://localhost:9200/logstash-*/_mapping/logs" -d '{
  "properties" : {
    "location" : { "type" : "geo_point" }
  }
}'
And when I ran the Logstash configuration file, all the location fields in the index logstash-2016-03-29 were indeed of type geo_point.
However, today, the auto-generated index logstash-2016-03-30 has the field location of type string instead of geo_point. I thought the type would be applied to ANY index whose name matches logstash-*. Apparently, I was wrong. How can I fix this so that any future index created by Logstash with a location field has that field typed as geo_point instead of string?
Thanks.
You should define it using an index template:
curl -XPUT localhost:9200/_template/template_2 -d '
{
  "template" : "logstash-*",
  "mappings" : {
    "logs" : {
      "properties" : {
        "location" : { "type" : "geo_point" }
      }
    }
  }
}'
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
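Note that a template only applies to indices created after it is registered; existing indices such as logstash-2016-03-30 keep their old mapping. To verify the template was stored:

curl -XGET 'localhost:9200/_template/template_2?pretty'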

How to create a common mapping template for indices?

For the app I created, the indices are generated once a week, and the type and nature of the data does not vary; that means I need the same mapping for each of these indices. Is it possible in Elasticsearch to apply the same mapping to all the indices as they are created? That would save me the overhead of defining the mapping each time an index is created.
Definitely, you can use what is called an index template. Since your mapping type is stable, that's the perfect condition for using index templates.
It's as easy as creating an index. See below: whenever you want to index a document into an index whose name matches my_*, ES will select that template and create the index for you using the given mappings, settings, and aliases:
curl -XPUT localhost:9200/_template/template_1 -d '{
  "template" : "my_*",
  "settings" : {
    "number_of_shards" : 1
  },
  "aliases" : {
    "my_alias" : {}
  },
  "mappings" : {
    "my_type" : {
      "properties" : {
        "my_field": { "type": "string" }
      }
    }
  }
}'
It's basically the technique used by Logstash when it needs to index new logs for each new day in a new daily index.
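For instance (my_index and the document below are purely illustrative), indexing into any name matching my_* auto-creates the index with the template applied:

# Auto-creates my_index using template_1's settings, aliases, and mappings.
curl -XPUT 'localhost:9200/my_index/my_type/1?pretty' -d'
{ "my_field": "some value" }'

# Confirm that the mapping came from the template.
curl -XGET 'localhost:9200/my_index/_mapping?pretty'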
You can employ an index template to address your problem. The official documentation can be found here.
A use case showing how to apply it, with examples, can be found in this blog.
