Index fields with hyphens in Elasticsearch
I'm trying to work out how to configure elasticsearch so that I can make query string searches with wildcards on fields that include hyphens.
I have documents that look like this:
{
  "tags": [
    "deck-clothing-blue",
    "crew-clothing",
    "medium"
  ],
  "name": "Crew t-shirt navy large",
  "description": "This is a t-shirt",
  "images": [
    { "id": "ba4a024c96aa6846f289486dfd0223b1", "type": "Image" },
    { "id": "ba4a024c96aa6846f289486dfd022503", "type": "Image" }
  ],
  "type": "InventoryType",
  "header": {}
}
I have tried to use a word_delimiter filter and a whitespace tokenizer:
{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 1
    },
    "analysis" : {
      "filter" : {
        "tags_filter" : {
          "type" : "word_delimiter",
          "type_table" : ["- => ALPHA"]
        }
      },
      "analyzer" : {
        "tags_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["tags_filter"]
        }
      }
    }
  },
  "mappings" : {
    "yacht1" : {
      "properties" : {
        "tags" : {
          "type" : "string",
          "analyzer" : "tags_analyzer"
        }
      }
    }
  }
}
But these are the searches (for tags) and their results:
deck* -> match
deck-* -> no match
deck-clo* -> no match
Can anyone see where I'm going wrong?
Thanks :)
The index analyzer is fine (though I'd lose the filter), but your search analyzer isn't specified, so the standard analyzer is used to search the tags field. It strips out the hyphen and then queries against the pieces (run curl "localhost:9200/_analyze?analyzer=standard" -d "deck-*" to see what I mean).
Basically, "deck-*" is being searched for as "deck *": no indexed token is just "deck", so it fails.
"deck-clo*" is being searched for as "deck clo*": again, no token is just "deck" or starts with "clo", so the query fails.
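To see the mismatch concretely, you can compare the two analyzers' output on the same tag (a sketch; it assumes the settings from the question were applied to an index named yacht1):

```
curl "localhost:9200/yacht1/_analyze?analyzer=standard" -d "deck-clothing-blue"
# tokens: deck, clothing, blue

curl "localhost:9200/yacht1/_analyze?analyzer=tags_analyzer" -d "deck-clothing-blue"
# token: deck-clothing-blue
```

The standard analyzer splits on the hyphen, so the single indexed tag and the split-up query terms never line up.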
I'd make the following modifications:
"analysis" : {
  "analyzer" : {
    "default" : {
      "tokenizer" : "whitespace",
      "filter" : ["lowercase"]  <--- you don't need this filter, just thought it was a nice touch
    }
  }
}
Then get rid of the special analyzer on the tags field:
"mappings" : {
  "yacht1" : {
    "properties" : {
      "tags" : {
        "type" : "string"
      }
    }
  }
}
let me know how it goes.
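For reference, a sketch of a search you could use to verify the change once the index is rebuilt with the whitespace default analyzer (the index name is assumed, since the question doesn't show it):

```
GET /yacht1/_search
{
  "query": {
    "query_string": {
      "query": "tags:deck-clo*"
    }
  }
}
```

With hyphenated tags now kept as single tokens and the query no longer split apart, all three searches (deck*, deck-*, deck-clo*) should match the document.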
Related
How do I configure elastic search to use the icu_tokenizer?
I'm trying to search text indexed by Elasticsearch with the icu_tokenizer but can't get it working. My test case is to tokenize the sentence "Hello. I am from Bangkok", in Thai สวัสดี ผมมาจากกรุงเทพฯ, which should be tokenized to the five words สวัสดี, ผม, มา, จาก, กรุงเทพฯ (sample from Elasticsearch: The Definitive Guide). Searching with any of the last four words fails for me. Searching with either of the space-separated words สวัสดี or ผมมาจากกรุงเทพฯ works fine.

If I specify the icu_tokenizer on the command line, like

curl -XGET 'http://localhost:9200/icu/_analyze?tokenizer=icu_tokenizer' -d "สวัสดี ผมมาจากกรุงเทพฯ"

it tokenizes to five words. My settings are:

curl http://localhost:9200/icu/_settings?pretty
{
  "icu" : {
    "settings" : {
      "index" : {
        "creation_date" : "1474010824865",
        "analysis" : {
          "analyzer" : {
            "nfkc_cf_normalized" : [ "icu_normalizer" ],
            "tokenizer" : "icu_tokenizer"
          }
        }
      },
      "number_of_shards" : "5",
      "number_of_replicas" : "1",
      "uuid" : "tALRehqIRA6FGPu8iptzww",
      "version" : {
        "created" : "2040099"
      }
    }
  }
}

The index is populated with

curl -XPOST 'http://localhost:9200/icu/employee/' -d '
{
  "first_name" : "John",
  "last_name" : "Doe",
  "about" : "สวัสดี ผมมาจากกรุงเทพฯ"
}'

Searching with

curl -XGET 'http://localhost:9200/_search' -d'
{
  "query" : {
    "match" : {
      "about" : "กรุงเทพฯ"
    }
  }
}'

returns nothing ("hits" : [ ]). Performing the same search with either สวัสดี or ผมมาจากกรุงเทพฯ works fine. I guess I've misconfigured the index; how should it be done?
The missing part is the mapping: the document field has to specify which analyzer to use.

[Index]: icu, [type]: employee, [field]: about

PUT /icu
{
  "settings": {
    "analysis": {
      "analyzer": {
        "icu_analyzer" : {
          "char_filter": [ "icu_normalizer" ],
          "tokenizer" : "icu_tokenizer"
        }
      }
    }
  },
  "mappings": {
    "employee" : {
      "properties": {
        "about" : {
          "type": "text",
          "analyzer": "icu_analyzer"
        }
      }
    }
  }
}

Test the custom analyzer with the following DSL:

POST /icu/_analyze
{
  "text": "สวัสดี ผมมาจากกรุงเทพฯ",
  "analyzer": "icu_analyzer"
}

The result should be [สวัสดี, ผม, มา, จาก, กรุงเทพฯ].

My suggestion would be: the Kibana Dev Tools console can help you craft queries effectively.
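Once the index is recreated with that mapping and the employee document is re-indexed, the originally failing search should find it (a sketch, using the names from the question):

```
GET /icu/_search
{
  "query" : {
    "match" : {
      "about" : "กรุงเทพฯ"
    }
  }
}
```

This time the query term is analyzed with icu_analyzer and matches the กรุงเทพฯ token produced at index time, so the document is returned.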
Elasticsearch: simple query string with latin characters
I'm using a simple query string over the following text: "Jiboia de três metros é capturada em avenida de Governador". Obs: this is the content of my message field.

My query string (no results):

"simple_query_string" : {
  "query" : "tr\u00eas",
  "fields" : ["message", "author.name", "author.id"],
  "default_operator" : "AND"
}

My query string (1 result):

"simple_query_string" : {
  "query" : "Jiboia",
  "fields" : ["message", "author.name", "author.id"],
  "default_operator" : "AND"
}

Is there a trick for Latin characters? My mapping (abridged to the parts relevant to the question; the raw_content object contains several hundred more not_analyzed string fields, elided here for readability):

{
  "mentions-2016.02.26" : {
    "aliases" : { "mentions_ro" : {}, "mentions_rw" : {} },
    "mappings" : {
      "mention" : {
        "dynamic_templates" : [
          {
            "analyzer" : {
              "mapping" : { "type" : "string", "index" : "not_analyzed", "store" : "no" },
              "match" : "*",
              "match_mapping_type" : "string"
            }
          }
        ],
        "date_detection" : false,
        "properties" : {
          "author" : {
            "properties" : {
              "id" : { "type" : "string" },
              "name" : { "type" : "string" }
            }
          },
          "message" : { "type" : "string" },
          "raw_content" : {
            "properties" : { /* … hundreds of not_analyzed string fields elided … */ }
          },
          "words" : { "type" : "string", "index" : "not_analyzed" }
        }
      }
    },
    "settings" : {
      "index" : {
        "refresh_interval" : "2s",
        "number_of_shards" : "7",
        "gc_deletes" : "1814400",
        "creation_date" : "1456497520658",
        "number_of_replicas" : "2",
        "version" : { "created" : "1050299" },
        "uuid" : "sp4CJpxMRf-_z0bUtHTrjA"
      }
    },
    "warmers" : {}
  }
}
You need to let Elasticsearch know how to handle your characters. I did an example using a custom word_delimiter filter, created at index-creation time like this:

curl -XPUT "http://192.168.99.100:9200/my_index" -d'
{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 1
    },
    "analysis" : {
      "filter" : {
        "custom_filter" : {
          "type" : "word_delimiter",
          "type_table" : ["ê => ALPHA", "Ê => ALPHA"]
        }
      },
      "analyzer" : {
        "custom_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["lowercase", "custom_filter"]
        }
      }
    }
  },
  "mappings" : {
    "my_type" : {
      "properties" : {
        "msg" : {
          "type" : "string",
          "analyzer" : "custom_analyzer"
        }
      }
    }
  }
}'

I just created an analyzer using a filter that knows ê and Ê need to be interpreted as letters. After that I index a document with that field:

curl -XPOST "http://192.168.99.100:9200/my_index/my_type" -d'
{
  "msg" : "três"
}'

and a search on the msg field for "três" will work :D
I found the problem. I'm using JavaScript's atob function to decode the message when indexing it into Elasticsearch. The atob function does not handle my Latin characters well and breaks them. I changed atob to the native Buffer class in Node.js. Obs: the default analyzer works perfectly with Latin chars! Sorry!
Indexing a comma-separated value field in Elastic Search
I'm using Nutch to crawl a site and index it into Elastic search. My site has meta-tags, some of them containing comma-separated list of IDs (that I intend to use for search). For example: contentTypeIds="2,5,15". (note: no square brackets). When ES indexes this, I can't search for contentTypeIds:5 and find documents whose contentTypeIds contain 5; this query returns only the documents whose contentTypeIds is exactly "5". However, I do want to find documents whose contentTypeIds contain 5. In Solr, this is solved by setting the contentTypeIds field to multiValued="true" in the schema.xml. I can't find how to do something similar in ES. I'm new to ES, so I probably missed something. Thanks for your help!
Create a custom analyzer which splits the indexed text into tokens by commas. Then you can search. If you don't care about relevance, you can use a filter to search through your documents. My example shows how to search with a term filter. Below is how to do this with the sense plugin.

DELETE testindex

PUT testindex
{
  "index" : {
    "analysis" : {
      "tokenizer" : {
        "comma" : {
          "type" : "pattern",
          "pattern" : ","
        }
      },
      "analyzer" : {
        "comma" : {
          "type" : "custom",
          "tokenizer" : "comma"
        }
      }
    }
  }
}

PUT /testindex/_mapping/yourtype
{
  "properties" : {
    "contentType" : {
      "type" : "string",
      "analyzer" : "comma"
    }
  }
}

PUT /testindex/yourtype/1
{ "contentType" : "1,2,3" }

PUT /testindex/yourtype/2
{ "contentType" : "3,4" }

PUT /testindex/yourtype/3
{ "contentType" : "1,6" }

GET /testindex/_search
{ "query" : { "match_all" : {} } }

GET /testindex/_search
{
  "filter" : {
    "term" : {
      "contentType" : "6"
    }
  }
}

Hope it helps.
On newer Elasticsearch versions, the char_group tokenizer is a simpler way to split on commas (and other characters):

POST _analyze
{
  "tokenizer": {
    "type": "char_group",
    "tokenize_on_chars": [
      "whitespace",
      "-",
      "\n",
      ","
    ]
  },
  "text": "QUICK,brown, fox"
}

This returns the tokens QUICK, brown and fox.
Specify Routing on Index Alias's Term Lookup Filter
I am using Logstash, Elasticsearch and Kibana to allow multiple users to log in and view the log data they have forwarded. I have created index aliases for each user; these restrict their results to contain only their own data.

I'd like to assign users to groups, and allow users to view data for the computers in their group. I created a parent-child relationship between the groups and the users, and I created a term lookup filter on the alias. My problem is that I receive a RoutingMissingException when I try to apply the alias.

Is there a way to specify the routing for the term lookup filter? How can I look up terms on a parent document? I posted the mapping and alias below, but a full gist recreation is available at this link.

curl -XPUT 'http://localhost:9200/accesscontrol/' -d '{
  "mappings" : {
    "group" : {
      "properties" : {
        "name" : { "type" : "string" },
        "hosts" : { "type" : "string" }
      }
    },
    "user" : {
      "_parent" : { "type" : "group" },
      "_routing" : {
        "required" : true,
        "path" : "group_id"
      },
      "properties" : {
        "name" : { "type" : "string" },
        "group_id" : { "type" : "string" }
      }
    }
  }
}'

# Create the logstash alias for cvializ
curl -XPOST 'http://localhost:9200/_aliases' -d '{
  "actions" : [
    {
      "remove" : {
        "index" : "logstash-2014.04.25",
        "alias" : "cvializ-logstash-2014.04.25"
      }
    },
    {
      "add" : {
        "index" : "logstash-2014.04.25",
        "alias" : "cvializ-logstash-2014.04.25",
        "routing" : "intern",
        "filter" : {
          "terms" : {
            "host" : {
              "index" : "accesscontrol",
              "type" : "user",
              "id" : "cvializ",
              "path" : "group.hosts"
            },
            "_cache_key" : "cvializ_hosts"
          }
        }
      }
    }
  ]
}'
In attempting to find a workaround for this error, I submitted a bug to the Elasticsearch team and received an answer from them. It was a bug in Elasticsearch where the filter is applied before the dynamic mapping, causing some erroneous output. I've included their workaround below:

PUT /accesscontrol/group/admin
{
  "name" : "admin",
  "hosts" : ["computer1", "computer2", "computer3"]
}

PUT /_template/admin_group
{
  "template" : "logstash-*",
  "aliases" : {
    "template-admin-{index}" : {
      "filter" : {
        "terms" : {
          "host" : {
            "index" : "accesscontrol",
            "type" : "group",
            "id" : "admin",
            "path" : "hosts"
          }
        }
      }
    }
  },
  "mappings" : {
    "example" : {
      "properties" : {
        "host" : { "type" : "string" }
      }
    }
  }
}

POST /logstash-2014.05.09/example/1
{
  "message" : "my sample data",
  "@version" : "1",
  "@timestamp" : "2014-05-09T16:25:45.613Z",
  "type" : "example",
  "host" : "computer1"
}

GET /template-admin-logstash-2014.05.09/_search
Elasticsearch search for words having '#' character
For example, I am currently searching like this:

http://localhost:9200/posts/post/_search?q=content:%23sachin

But I am getting all the results with 'sachin' and not '#sachin'. Also, I am writing a regular expression for getting the count of terms. The facet looks like this:

"facets" : {
  "content" : {
    "terms" : {
      "field" : "content",
      "size" : 1000,
      "all_terms" : false,
      "regex" : "#sachin",
      "regex_flags" : [ "DOTALL", "CASE_INSENSITIVE" ]
    }
  }
}

This is not returning any values. I think it has something to do with escaping the '#' inside the regular expression, but I am not sure how to do it. I have tried escaping it with \ and \\, but neither worked. Can anyone help me in this regard?
This article gives information on how to preserve @ and # using custom analyzers: https://web.archive.org/web/20160304014858/http://www.fullscale.co/blog/2013/03/04/preserving_specific_characters_during_tokenizing_in_elasticsearch.html

curl -XPUT 'http://localhost:9200/twitter' -d '{
  "settings" : {
    "index" : {
      "number_of_shards" : 1,
      "number_of_replicas" : 1
    },
    "analysis" : {
      "filter" : {
        "tweet_filter" : {
          "type" : "word_delimiter",
          "type_table" : ["@ => ALPHA", "# => ALPHA"]
        }
      },
      "analyzer" : {
        "tweet_analyzer" : {
          "type" : "custom",
          "tokenizer" : "whitespace",
          "filter" : ["lowercase", "tweet_filter"]
        }
      }
    }
  },
  "mappings" : {
    "tweet" : {
      "properties" : {
        "msg" : {
          "type" : "string",
          "analyzer" : "tweet_analyzer"
        }
      }
    }
  }
}'

This doesn't deal with facets directly, but redefining the type of those special characters in the analyzer could help.
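To confirm the '#' survives analysis, you can run the custom analyzer against a sample string (a sketch; on older Elasticsearch versions, pass the text with -d instead of a JSON body):

```
POST /twitter/_analyze
{
  "analyzer": "tweet_analyzer",
  "text": "#sachin scores a century"
}
# expected tokens: #sachin, scores, a, century
```

The facet regex "#sachin" then has an actual term to match, since the hashtag is kept as a single lowercase token.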
Another approach worth considering is to index a special (e.g. "reserved") word instead of the hash symbol, for example HASHSYMBOLCHAR. Make sure that you replace '#' chars in the query as well.
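One way to sketch that substitution inside Elasticsearch itself, so documents don't need pre-processing, is a mapping char filter (all names here are illustrative, not from the question):

```
PUT /posts_v2
{
  "settings": {
    "analysis": {
      "char_filter": {
        "hash_to_word": {
          "type": "mapping",
          "mappings": ["# => HASHSYMBOLCHAR"]
        }
      },
      "analyzer": {
        "hashtag_analyzer": {
          "type": "custom",
          "char_filter": ["hash_to_word"],
          "tokenizer": "standard",
          "filter": ["lowercase"]
        }
      }
    }
  }
}
```

Because the same analyzer is applied at search time by default, a match query for "#sachin" is rewritten the same way (to the single token hashsymbolcharsachin) and finds the matching documents.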