In my situation, the field value is something like "abc,123", and I want it to be searchable by either "abc" or "123".
My index mapping looks like the code below:
{
"myfield": {
"type": "text",
"analyzer": "stop",
"search_analyzer": "stop"
}
}
But when I test it with the ES _analyze API, I get this result:
{
"tokens": [
{
"token": "abc",
"start_offset": 0,
"end_offset": 3,
"type": "word",
"position": 0
}
]
}
"123" was lost.
If I want to meet my situation, do I need to choose some other analyzer or just to add some special configs?
You need to choose the standard analyzer instead, since the stop analyzer breaks text into terms whenever it encounters a character that is not a letter and also removes stop words like 'the'. In your case, "abc,123" produces only the token abc with the stop analyzer. The standard analyzer returns both abc and 123, as shown below:
POST _analyze
{
"analyzer": "standard",
"text": "abc, 123"
}
Output:
{
"tokens": [
{
"token": "abc",
"start_offset": 0,
"end_offset": 3,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "123",
"start_offset": 5,
"end_offset": 8,
"type": "<NUM>",
"position": 1
}
]
}
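For the mapping itself, the change is just swapping the analyzer name. A minimal sketch, assuming an index called my_index and the typeless mapping format of recent ES versions (since standard is the default for text fields, you could also simply omit the analyzer settings):
PUT my_index
{
  "mappings": {
    "properties": {
      "myfield": {
        "type": "text",
        "analyzer": "standard"
      }
    }
  }
}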
EDIT 1: Using the Simple Pattern Split Tokenizer
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "my_tokenizer"
}
},
"tokenizer": {
"my_tokenizer": {
"type": "simple_pattern_split",
"pattern": ","
}
}
}
}
}
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "abc,123"
}
Output:
{
"tokens": [
{
"token": "abc",
"start_offset": 0,
"end_offset": 3,
"type": "word",
"position": 0
},
{
"token": "123",
"start_offset": 4,
"end_offset": 7,
"type": "word",
"position": 1
}
]
}
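To use this in practice, the field would point at my_analyzer, and a match query for either side of the comma should then find the document. A rough sketch, assuming the typeless mapping format and a field named myfield as in the question:
PUT my_index/_mapping
{
  "properties": {
    "myfield": {
      "type": "text",
      "analyzer": "my_analyzer"
    }
  }
}

GET my_index/_search
{
  "query": {
    "match": {
      "myfield": "123"
    }
  }
}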
In Elasticsearch, I am trying to use an analyzer on a field with a filter that replaces all characters after a ? with whitespace. To do so, I am using the following filter:
"filter_name":{
"type": "pattern_replace",
"pattern": "\\?(.*)",
"replacement": ""
}
But this is not working as expected. Is there something I'm missing?
Use pattern: "(?<=\\?)(.*)" and replacement: ""
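For instance, plugged into the filter definition from the question, the suggestion would look roughly like this (the lookbehind keeps the ? itself and strips everything after it):
"filter_name": {
  "type": "pattern_replace",
  "pattern": "(?<=\\?)(.*)",
  "replacement": ""
}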
Please see below. I've created a sample mapping and a sample _analyze query to show how the tokens are created. Note that I've used a pattern_replace char_filter, which is applied before the tokenizer, rather than a token filter.
Mapping
PUT my_index
{
"settings": {
"analysis": {
"analyzer": {
"my_analyzer": {
"tokenizer": "standard",
"char_filter": [
"my_char_filter"
]
}
},
"char_filter": {
"my_char_filter": {
"type": "pattern_replace",
"pattern": "(?=.*)\\?(.*)",
"replacement": ""
}
}
}
}
}
Query
POST my_index/_analyze
{
"analyzer": "my_analyzer",
"text": "Do you know? Life is crazy"
}
Analysis Result
{
"tokens": [
{
"token": "Do",
"start_offset": 0,
"end_offset": 2,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "you",
"start_offset": 3,
"end_offset": 6,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "know",
"start_offset": 7,
"end_offset": 26,
"type": "<ALPHANUM>",
"position": 2
}
]
}
Hope this helps!
I have this index with a custom analyzer named pipe. When I try to test it, it returns every character rather than pipe-delimited words.
I am building this for a use case where an input line of keywords looks like crockpot refried beans|corningware replacement|crockpot lids|recipe refried beans, and Elasticsearch should return matches after the string has been split apart.
{
"keywords": {
"aliases": {
},
"mappings": {
"cloud": {
"properties": {
"keywords": {
"type": "text",
"analyzer": "pipe"
}
}
}
},
"settings": {
"index": {
"number_of_shards": "5",
"provided_name": "keywords",
"creation_date": "1513890909384",
"analysis": {
"analyzer": {
"pipe": {
"type": "custom",
"tokenizer": "pipe"
}
},
"tokenizer": {
"pipe": {
"pattern": "|",
"type": "pattern"
}
}
},
"number_of_replicas": "1",
"uuid": "DOLV_FBbSC2CBU4p7oT3yw",
"version": {
"created": "6000099"
}
}
}
}
}
When I try to test it following this guide:
curl -XPOST 'http://localhost:9200/keywords/_analyze' -d '{
"analyzer": "pipe",
"text": "pipe|pipe2"
}'
I get back char-by-char results.
{
"tokens": [
{
"token": "p",
"start_offset": 0,
"end_offset": 1,
"type": "word",
"position": 0
},
{
"token": "i",
"start_offset": 1,
"end_offset": 2,
"type": "word",
"position": 1
},
{
"token": "p",
"start_offset": 2,
"end_offset": 3,
"type": "word",
"position": 2
},
{
"token": "e",
"start_offset": 3,
"end_offset": 4,
"type": "word",
"position": 3
},
Good work, you're almost there. Since the pipe character | is a special character in regular expressions, you need to escape it like this:
"tokenizer": {
"pipe": {
"pattern": "\\|", <--- change this
"type": "pattern"
}
}
And then your analyzer will work and produce this:
{
"tokens": [
{
"token": "pipe",
"start_offset": 0,
"end_offset": 4,
"type": "word",
"position": 0
},
{
"token": "pipe2",
"start_offset": 5,
"end_offset": 10,
"type": "word",
"position": 1
}
]
}
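As a side note, a character class works as well and avoids the double backslash inside the JSON; this is just an alternative, not a requirement:
"tokenizer": {
  "pipe": {
    "type": "pattern",
    "pattern": "[|]"
  }
}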
I have one field in which I am storing values like O5467508 (starting with the letter "O").
Below is my query.
{"from":0,"size":10,"query":{"bool":{"must":[{"match":{"field_LIST_105":{"query":"o5467508","type":"phrase_prefix"}}},{"bool":{"must":[{"match":{"RegionName":"Virginia"}}]}}]}}}
It gives me the correct result, but when I search for only the numeric value "5467508", the query result is empty.
Thanks in advance.
One possible solution that could help you is the word_delimiter filter with the option preserve_original, which keeps the original token.
Something like this:
{
"settings": {
"analysis": {
"analyzer": {
"so_analyzer": {
"type": "custom",
"tokenizer": "keyword",
"filter": [
"lowercase",
"my_word_delimiter"
]
}
},
"filter": {
"my_word_delimiter": {
"type": "word_delimiter",
"preserve_original": true
}
}
}
},
"mappings": {
"my_type": {
"properties": {
"field_LIST_105": {
"type": "text",
"analyzer": "so_analyzer"
}
}
}
}
}
I did a quick analysis test, and these are the tokens it gives me:
{
"tokens": [
{
"token": "o5467508",
"start_offset": 0,
"end_offset": 8,
"type": "word",
"position": 0
},
{
"token": "o",
"start_offset": 0,
"end_offset": 1,
"type": "word",
"position": 0
},
{
"token": "5467508",
"start_offset": 1,
"end_offset": 8,
"type": "word",
"position": 1
}
]
}
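To reproduce these tokens, an _analyze request against an index created with the settings above could look like this (the index name my_index is an assumption):
POST my_index/_analyze
{
  "analyzer": "so_analyzer",
  "text": "O5467508"
}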
For more information - https://www.elastic.co/guide/en/elasticsearch/reference/current/analysis-word-delimiter-tokenfilter.html
I have a string property called summary that has analyzer set to trigrams and search_analyzer set to words.
"filter": {
"words_splitter": {
"type": "word_delimiter",
"preserve_original": "true"
},
"english_words_filter": {
"type": "stop",
"stop_words": "_english_"
},
"trigrams_filter": {
"type": "ngram",
"min_gram": "2",
"max_gram": "20"
}
},
"analyzer": {
"words": {
"filter": [
"lowercase",
"words_splitter",
"english_words_filter"
],
"type": "custom",
"tokenizer": "whitespace"
},
"trigrams": {
"filter": [
"lowercase",
"words_splitter",
"trigrams_filter",
"english_words_filter"
],
"type": "custom",
"tokenizer": "whitespace"
}
}
I need input query strings like React and HTML (or React, html) to match documents whose summary contains the words React, reactjs, react.js, html, or html5. The more matching keywords a document has, the higher its score should be (ideally, I would also expect lower scores on documents where just one word matches, and not even at 100%).
The thing is, I guess react.js is currently split into both react and js, since I also get all the documents that contain js. On the other hand, Reactjs returns nothing. I also think I need words_splitter in order to ignore the comma.
You can solve the problem with names like react.js by using a keyword_marker filter and defining the analyzer so that it uses that filter. This will prevent react.js from being split into react and js tokens.
Here is an example configuration for the filter:
"filter": {
"keywords": {
"type": "keyword_marker",
"keywords": [
"react.js",
]
}
}
And the analyzer:
"analyzer": {
"main_analyzer": {
"type": "custom",
"tokenizer": "standard",
"filter": [
"lowercase",
"keywords",
"synonym_filter",
"german_stop",
"german_stemmer"
]
}
}
You can see whether your analyzer behaves as required using the analyze command:
GET /<index_name>/_analyze?analyzer=main_analyzer&text="react.js is a nice library"
This should return the following tokens where react.js is not tokenized:
{
"tokens": [
{
"token": "react.js",
"start_offset": 1,
"end_offset": 9,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "is",
"start_offset": 10,
"end_offset": 12,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "a",
"start_offset": 13,
"end_offset": 14,
"type": "<ALPHANUM>",
"position": 2
},
{
"token": "nice",
"start_offset": 15,
"end_offset": 19,
"type": "<ALPHANUM>",
"position": 3
},
{
"token": "library",
"start_offset": 20,
"end_offset": 27,
"type": "<ALPHANUM>",
"position": 4
}
]
}
For words that are similar but not exactly the same, such as React.js and Reactjs, you could use a synonym filter. Do you have a fixed set of keywords that you want to match?
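If the set is fixed, a synonym filter sketch could look like the following; the name synonym_filter matches the analyzer above, and the synonym list itself is only an illustrative assumption you would extend:
"filter": {
  "synonym_filter": {
    "type": "synonym",
    "synonyms": [
      "reactjs, react.js"
    ]
  }
}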
I found a solution.
Basically, I define the word_delimiter filter with catenate_all enabled:
"words_splitter": {
"catenate_all": "true",
"type": "word_delimiter",
"preserve_original": "true"
}
and pass it to the words analyzer together with a keyword tokenizer:
"words": {
"filter": [
"words_splitter"
],
"type": "custom",
"tokenizer": "keyword"
}
Calling http://localhost:9200/sample_index/_analyze?analyzer=words&pretty=true&text=react.js I get the following tokens:
{
"tokens": [
{
"token": "react.js",
"start_offset": 0,
"end_offset": 8,
"type": "word",
"position": 0
},
{
"token": "react",
"start_offset": 0,
"end_offset": 5,
"type": "word",
"position": 0
},
{
"token": "reactjs",
"start_offset": 0,
"end_offset": 8,
"type": "word",
"position": 0
},
{
"token": "js",
"start_offset": 6,
"end_offset": 8,
"type": "word",
"position": 1
}
]
}
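With those tokens indexed, a plain match query on the summary field should now hit the document for reactjs, react, or react.js alike. A quick check, using the sample_index name from the _analyze call above:
GET sample_index/_search
{
  "query": {
    "match": {
      "summary": "reactjs"
    }
  }
}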
I'm using Elasticsearch 2.2.0 and I'm trying to use the lowercase + asciifolding filters on a field.
This is the output of http://localhost:9200/myindex/
{
"myindex": {
"aliases": {},
"mappings": {
"products": {
"properties": {
"fold": {
"analyzer": "folding",
"type": "string"
}
}
}
},
"settings": {
"index": {
"analysis": {
"analyzer": {
"folding": {
"token_filters": [
"lowercase",
"asciifolding"
],
"tokenizer": "standard",
"type": "custom"
}
}
},
"creation_date": "1456180612715",
"number_of_replicas": "1",
"number_of_shards": "5",
"uuid": "vBMZEasPSAyucXICur3GVA",
"version": {
"created": "2020099"
}
}
},
"warmers": {}
}
}
And when I try to test the folding custom filter using the _analyze API, this is what I get as an output of http://localhost:9200/myindex/_analyze?analyzer=folding&text=%C3%89sta%20est%C3%A1%20loca
{
"tokens": [
{
"end_offset": 4,
"position": 0,
"start_offset": 0,
"token": "Ésta",
"type": "<ALPHANUM>"
},
{
"end_offset": 9,
"position": 1,
"start_offset": 5,
"token": "está",
"type": "<ALPHANUM>"
},
{
"end_offset": 14,
"position": 2,
"start_offset": 10,
"token": "loca",
"type": "<ALPHANUM>"
}
]
}
As you can see, the returned tokens are Ésta, está, loca instead of esta, esta, loca. What's going on? It seems that the folding analyzer is being ignored.
Looks like a simple typo when you are creating your index.
In your "analysis":{"analyzer":{...}} block, this:
"token_filters": [...]
Should be
"filter": [...]
Check the documentation for confirmation of this. Because your filter array wasn't named correctly, ES completely ignored it and just decided to use the standard analyzer. Here is a small example written using the Sense Chrome plugin. Execute the requests in order:
DELETE /test
PUT /test
{
"analysis": {
"analyzer": {
"folding": {
"type": "custom",
"filter": [
"lowercase",
"asciifolding"
],
"tokenizer": "standard"
}
}
}
}
GET /test/_analyze
{
"analyzer":"folding",
"text":"Ésta está loca"
}
And the results of the last GET /test/_analyze:
"tokens": [
{
"token": "esta",
"start_offset": 0,
"end_offset": 4,
"type": "<ALPHANUM>",
"position": 0
},
{
"token": "esta",
"start_offset": 5,
"end_offset": 9,
"type": "<ALPHANUM>",
"position": 1
},
{
"token": "loca",
"start_offset": 10,
"end_offset": 14,
"type": "<ALPHANUM>",
"position": 2
}
]
}
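To apply the same fix to the existing myindex without recreating it, the analysis settings can be updated while the index is closed; a sketch (existing documents would still need to be reindexed to pick up the corrected analyzer):
POST /myindex/_close

PUT /myindex/_settings
{
  "analysis": {
    "analyzer": {
      "folding": {
        "type": "custom",
        "tokenizer": "standard",
        "filter": [
          "lowercase",
          "asciifolding"
        ]
      }
    }
  }
}

POST /myindex/_open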