Elasticsearch not storing field, what am I doing wrong?

I have something like the following template in my Elasticsearch. I only want certain parts of the data returned, so I turned _source off and explicitly set store for the fields I want.
{
  "template_1" : {
    "order" : 20,
    "template" : "test*",
    "settings" : { },
    "mappings" : {
      "_default_" : {
        "_source" : {
          "enabled" : false
        }
      },
      "type_1" : {
        "mydata" : {
          "store" : "yes",
          "type" : "string"
        }
      }
    }
  }
}
However, when I query the data, I don't get the fields back. The query does work if I enable the _source field. I am just starting with Elasticsearch, so I am not quite sure what I am doing wrong. Any help would be appreciated.

Field definitions should be wrapped in the properties section of your mapping:
"type_1" : {
"properties": {
"mydata" :
"store" : "yes",
"type" : "string"
}
}
}
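With _source disabled, the stored fields also have to be requested explicitly at search time. A minimal sketch, assuming an index named test_index that matches the test* template pattern (the index name is a placeholder); older versions use the fields option shown here, while 5.0+ renames it to stored_fields:
curl -XGET 'localhost:9200/test_index/type_1/_search' -d '{
  "query" : { "match_all" : {} },
  "fields" : ["mydata"]
}'
Each hit should then return the stored value under fields.mydata rather than _source.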

Related

Setting enabled to true Elasticsearch

I'm new to Elasticsearch. I have an index type as follows:
{
  "myindex" : {
    "mappings" : {
      "systemChanges" : {
        "_all" : {
          "enabled" : false
        },
        "properties" : {
          "autoChange" : {
            "type" : "boolean"
          },
          "changed" : {
            "type" : "object",
            "enabled" : false
          },
          "created" : {
            "type" : "date",
            "format" : "strict_date_optional_time||epoch_millis"
          }
        }
      }
    }
  }
}
I'm unable to fetch documents having changed.new = completed. After some research I found that it's because the changed field is set to enabled : false, and I need to change that. I tried the following:
curl -X PUT "localhost:9200/myindex/" -H 'Content-Type: application/json' -d' {
  "mappings" : {
    "systemChanges" : {
      "properties" : {
        "changed" : {
          "enabled" : true
        }
      }
    }
  }
}'
But I'm getting the following error:
{"error":{"root_cause":[{"type":"index_already_exists_exception","reason":"already exists","index":"myindex"}],"type":"index_already_exists_exception","reason":"already exists","index":"myindex"},"status":400}
How can I change the enabled to true in order to fetch the details of the changed.new field?
You are trying to create an index with a name that already exists, hence the error.
See the link below for updating a mapping:
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-put-mapping.html
The enabled setting can be updated on existing fields using the PUT mapping API.
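For reference, a minimal sketch of that PUT mapping call, reusing the myindex index and systemChanges type from the question. Note that, depending on the Elasticsearch version, flipping enabled on an existing object field may still be rejected, so treat this as an illustration of the API shape rather than a guaranteed fix:
curl -X PUT "localhost:9200/myindex/_mapping/systemChanges" -H 'Content-Type: application/json' -d'
{
  "properties" : {
    "changed" : {
      "enabled" : true
    }
  }
}'
If the update is rejected, the usual fallback is to create a new index with the desired mapping and reindex the data into it.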

elasticsearch percolator filter fails

I'm percolating a document against a percolator and that works OK. But when I try to filter which percolator queries the document is percolated against, using the query ids, it doesn't return any results. For example:
{
  "doc" : {
    "text" : "This is the text within my document"
  },
  "highlight" : {
    "order" : "score",
    "pre_tags" : ["<example>"],
    "post_tags" : ["</example>"],
    "fields" : {
      "text" : { "number_of_fragments" : 0 }
    }
  },
  "filter" : { "ids" : { "values" : [11, 15] } },
  "size" : 100
}
I know for sure that those ids are correct, but I always obtain "matches" : [ ]. When I don't use the filter, ES retrieves the correct matches.
Thanks for your help.
I think I've solved it. It seems that the filter only works on the "metadata" fields of the indexed queries, meaning you have to add custom fields to the queries indexed in the percolator in order to use them for filtering when you need to.
Using my previous example, I would have to index percolator queries like:
{
  "query" : {
    "match_phrase" : {
      "text" : "document"
    }
  },
  "id" : 11
}
Adding "manually" a redundant id field in order to use it later as filter reference.
At percolation time, you have to use something like:
{
  "doc" : {
    "text" : "This is the text within my document"
  },
  "filter" : { "match" : { "id" : 11 } },
  "highlight" : {
    "order" : "score",
    "pre_tags" : ["<example>"],
    "post_tags" : ["</example>"],
    "fields" : {
      "text" : { "number_of_fragments" : 0 }
    }
  },
  "size" : 100
}
This restricts percolation to only that query. Complementary information can be found here.

How to add default values while adding a new field in existing mapping in elasticsearch

This is my existing mapping in Elasticsearch for one of the child document types:
sessions" : {
"_routing" : {
"required" : true
},
"properties" : {
"operatingSystem" : {
"index" : "not_analyzed",
"type" : "string"
},
"eventDate" : {
"format" : "dateOptionalTime",
"type" : "date"
},
"durations" : {
"type" : "integer"
},
"manufacturer" : {
"index" : "not_analyzed",
"type" : "string"
},
"deviceModel" : {
"index" : "not_analyzed",
"type" : "string"
},
"applicationId" : {
"type" : "integer"
},
"deviceId" : {
"type" : "string"
}
},
"_parent" : {
"type" : "userinfo"
}
}
In the above mapping, the "durations" field is an integer array. I need to update the existing mapping by adding a new field called "durationCount" whose default value should be the size of the durations array.
PUT sessions/_mapping
{
  "properties" : {
    "sessionCount" : {
      "type" : "integer"
    }
  }
}
Using the above request I am able to update the existing mapping, but I cannot figure out how to assign a value (which would vary for each session document, since it should be the size of its durations array) while updating the mapping. Any ideas?
Well, two recommendations here:
Instead of adding a default value, you can adjust for it at query time using a missing filter. Let's say you want to search based on a match query: instead of just the match query, use a bool whose should clause contains the match and a missing filter, inside a filtered query (see the sketch after these recommendations). This way, documents that do not have the field are also accounted for.
If you absolutely need the value in that field for existing documents, you need to reindex the whole set of documents, or use the out-of-the-box plugin, update by query.
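As an illustration of the first recommendation, a rough sketch in the 1.x-era query DSL; the index name myindex and the value 3 are placeholders, and the field name should be whatever the new count field ends up being called:
curl -XPOST 'localhost:9200/myindex/sessions/_search' -d '{
  "query" : {
    "filtered" : {
      "query" : { "match_all" : {} },
      "filter" : {
        "bool" : {
          "should" : [
            { "term" : { "durationCount" : 3 } },
            { "missing" : { "field" : "durationCount" } }
          ]
        }
      }
    }
  }
}'
The should clause matches documents whose durationCount is 3 as well as documents that do not have the field at all.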

Specify Routing on Index Alias's Term Lookup Filter

I am using Logstash, ElasticSearch and Kibana to allow multiple users to log in and view the log data they have forwarded. I have created index aliases for each user. These restrict their results to contain only their own data.
I'd like to assign users to groups, and allow users to view data for the computers in their group. I created a parent-child relationship between the groups and the users, and I created a term lookup filter on the alias.
My problem is, I receive a RoutingMissingException when I try to apply the alias.
Is there a way to specify the routing for the term lookup filter? How can I lookup terms on a parent document?
I posted the mapping and alias below, but a full gist recreation is available at this link.
curl -XPUT 'http://localhost:9200/accesscontrol/' -d '{
  "mappings" : {
    "group" : {
      "properties" : {
        "name" : { "type" : "string" },
        "hosts" : { "type" : "string" }
      }
    },
    "user" : {
      "_parent" : { "type" : "group" },
      "_routing" : { "required" : true, "path" : "group_id" },
      "properties" : {
        "name" : { "type" : "string" },
        "group_id" : { "type" : "string" }
      }
    }
  }
}'
# Create the logstash alias for cvializ
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
  "actions" : [
    { "remove" : { "index" : "logstash-2014.04.25", "alias" : "cvializ-logstash-2014.04.25" } },
    {
      "add" : {
        "index" : "logstash-2014.04.25",
        "alias" : "cvializ-logstash-2014.04.25",
        "routing" : "intern",
        "filter" : {
          "terms" : {
            "host" : {
              "index" : "accesscontrol",
              "type" : "user",
              "id" : "cvializ",
              "path" : "group.hosts"
            },
            "_cache_key" : "cvializ_hosts"
          }
        }
      }
    }
  ]
}'
In attempting to find a workaround for this error, I submitted a bug to the ElasticSearch team, and received an answer from them. It was a bug in ElasticSearch where the filter is applied before the dynamic mapping, causing some erroneous output. I've included their workaround below:
PUT /accesscontrol/group/admin
{
  "name" : "admin",
  "hosts" : ["computer1","computer2","computer3"]
}
PUT /_template/admin_group
{
  "template" : "logstash-*",
  "aliases" : {
    "template-admin-{index}" : {
      "filter" : {
        "terms" : {
          "host" : {
            "index" : "accesscontrol",
            "type" : "group",
            "id" : "admin",
            "path" : "hosts"
          }
        }
      }
    }
  },
  "mappings" : {
    "example" : {
      "properties" : {
        "host" : {
          "type" : "string"
        }
      }
    }
  }
}
POST /logstash-2014.05.09/example/1
{
  "message" : "my sample data",
  "@version" : "1",
  "@timestamp" : "2014-05-09T16:25:45.613Z",
  "type" : "example",
  "host" : "computer1"
}
GET /template-admin-logstash-2014.05.09/_search

Kibana 3 bettermap plotting lat/long in wrong location on map

I am trying to use bettermap in Kibana 3 to see lat/long data. My geo location is shown on the map in Africa, whereas the lat/long I have specified is in the US.
Any help on this would be appreciated.
My mapping file is:
{"mappings" : {
"livestats" : {
"_source" : {
"enabled" : true
},
"_timestamp" : {
"enabled" : true
},
"_all" : {
"enabled" : false
},
"properties" : {
"state" : { "type" : "string", "index" : "not_analyzed", "store" : "yes" },
"LatLng" : { "type" : "geo_point" , "index" : "analyzed", "store" : "yes"}
}
}
}
}
and dummy data is
{"create":{"_index":"livestats","_type":"livestats"}}
{"state":"CA","LatLng":[-118.252,34.0433]}
thanks
Try with lon,lat order in the dummy data.
The tooltip on bettermap says: "geoJSON array! Long,Lat NOT Lat,Long"
The problem is that you have to specify the full field name in bettermap (Kibana 3.0.0). The autosuggested field names, shown when you preload the field names of your index, are misleading.
Here is my suggestion:
Replace the mapping of LatLng with:
"LatLng" : { "type" : "geo_point"}
Make sure you comply with GeoJSON Specification: [lon,lat] and not [lat,lon]
In Kibana, use the following field:
"livestats.LatLng"
That should do the trick!
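Pulling those suggestions together, a minimal sketch of the simplified mapping and re-indexed dummy data; the index and type names are kept from the question, and the coordinate array stays in [lon,lat] order as GeoJSON expects:
curl -XPUT 'localhost:9200/livestats' -d '{
  "mappings" : {
    "livestats" : {
      "properties" : {
        "state" : { "type" : "string", "index" : "not_analyzed" },
        "LatLng" : { "type" : "geo_point" }
      }
    }
  }
}'
curl -XPOST 'localhost:9200/_bulk' --data-binary '
{"create":{"_index":"livestats","_type":"livestats"}}
{"state":"CA","LatLng":[-118.252,34.0433]}
'
In the bettermap panel, the field is then referenced as livestats.LatLng.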
