Match fails in Elasticsearch

I have the following index in which I index mail addresses.
PUT _myindex
{
  "settings": {
    "analysis": {
      "filter": {
        "email": {
          "type": "pattern_capture",
          "preserve_original": true,
          "patterns": [
            "^(.*?)@",
            "(\\w+(?=.*@))"
          ]
        }
      },
      "analyzer": {
        "email": {
          "tokenizer": "uax_url_email",
          "filter": [ "lowercase", "email", "unique" ]
        }
      }
    }
  },
  "mappings": {
    "emails": {
      "properties": {
        "email": {
          "type": "text",
          "analyzer": "email"
        }
      }
    }
  }
}
My e-mail addresses have the following form: "example.elastic@yahoo.com". When I index them, they get analyzed into example.elastic@yahoo.com, example.elastic, elastic, and example.
When I run a match query
GET _myindex/_search
{
  "query": {
    "match": {
      "email": "example.elastic@yahoo.com"
    }
  }
}
or use example, elastic, or Elastic as the query string, it works and retrieves results. But the problem is that when I search for "example.elastic.blabla@yahoo.com", it also returns the same results. What can be the problem?

Using a term query instead of a match query will solve this.
The reason is that the match query applies the analyzer to the search term, and will therefore match what is stored in the index. The term query does not apply any analyzer to the search term, so it will only look for that exact term in the index.
Ref: https://stackoverflow.com/a/23151332/6546289
GET _myindex/_search
{
  "query": {
    "term": {
      "email": "example.elastic@yahoo.com"
    }
  }
}
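To see why the match query also matches the longer address, you can inspect the tokens the email analyzer emits for it (a quick check with the _analyze API, using the index and analyzer defined above):
GET _myindex/_analyze
{
  "analyzer": "email",
  "text": "example.elastic.blabla@yahoo.com"
}
Both addresses produce overlapping tokens such as example and elastic, so the analyzed match query finds the same documents; the unanalyzed term query does not.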

Related

Place an Analyzer on a specific array item in a nested object

I have the following mapping
"mappings":{
"properties":{
"name": {
"type": "text"
},
"age": {
"type": "integer"
},
"customProps":{
"type" : "nested",
"properties": {
"key":{
"type": "keyword"
},
"value": {
"type" : "keyword"
}
}
}
}
}
example data
{
  "name": "person1",
  "age": 10,
  "customProps": [
    { "hairColor": "blue" },
    { "height": "120" }
  ]
},
{
  "name": "person2",
  "age": 30,
  "customProps": [
    { "jobTitle": "software engineer" },
    { "salaryAccount": "AvGhj90AAb" }
  ]
}
I want to be able to search for documents by salary account, case insensitive; I am also searching using a wildcard.
An example query is:
{
  "query": {
    "bool": {
      "should": [
        {
          "nested": {
            "path": "customProps",
            "query": {
              "bool": {
                "must": [
                  { "match": { "customProps.key": "salaryAccount" } },
                  { "wildcard": { "customProps.value": "*AvG*" } }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
I tried adding an analyzer with PUT using the following syntax:
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_case_insensitive": {
            "tokenizer": "keyword",
            "filter": "lowercase"
          }
        }
      }
    }
  },
  "mappings": {
    "people": {
      "properties": {
        "customProps": {
          "properties": {
            "value": {
              "type": "keyword",
              "analyzer": "analyzer_case_insensitive"
            }
          }
        }
      }
    }
  }
}
I'm getting the following error:
"type" : "mapper_parsing_exception",
"reason" : "Root mapping definition has unsupported parameters: [people: {properties={customProps={properties={value={analyzer=analyzer_case_insensitive, type=keyword}}}}}]"
Any idea how to apply the analyzer to the salaryAccount object in the array when it exists?
Your use case is quite clear: you want to search on the value of salaryAccount only when this key exists in the customProps array.
There are some issues with your mapping definition:
You cannot define a custom analyzer for a keyword type field; instead you can use a normalizer.
Based on the mapping definition at the beginning of the question, you seem to be using Elasticsearch 7.x, but in the second mapping definition you added a mapping type (people), which is deprecated in 7.x.
There is no need to add the key and value fields to the index mapping.
Adding a working example with index mapping, search query, and search result
Index Mapping:
PUT myidx
{
  "mappings": {
    "properties": {
      "customProps": {
        "type": "nested"
      }
    }
  }
}
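The example documents from the question can then be indexed as-is; dynamic mapping creates customProps.salaryAccount with a .keyword sub-field (document ID chosen arbitrarily):
POST myidx/_doc/2
{
  "name": "person2",
  "age": 30,
  "customProps": [
    { "jobTitle": "software engineer" },
    { "salaryAccount": "AvGhj90AAb" }
  ]
}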
Search Query:
You need to use an exists query to check whether a field exists or not. The case_insensitive param of the wildcard query is available since Elasticsearch 7.10; if you are using an earlier version, you need a normalizer to achieve case-insensitive matching (see the sketch after the search result below).
POST myidx/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "nested": {
            "path": "customProps",
            "query": {
              "bool": {
                "must": [
                  {
                    "exists": {
                      "field": "customProps.salaryAccount"
                    }
                  },
                  {
                    "wildcard": {
                      "customProps.salaryAccount.keyword": {
                        "value": "*aVg*",
                        "case_insensitive": true
                      }
                    }
                  }
                ]
              }
            }
          }
        }
      ]
    }
  }
}
Search Result:
"hits" : [
{
"_index" : "myidx",
"_type" : "_doc",
"_id" : "2",
"_score" : 2.0,
"_source" : {
"name" : "person2",
"age" : 30,
"customProps" : [
{
"jobTitle" : "software engineer"
},
{
"salaryAccount" : "AvGhj90AAb"
}
]
}
}
]
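For versions before 7.10, where the wildcard query has no case_insensitive param, one option is to pre-declare the field with a lowercase normalizer and search with a lowercased pattern. A minimal sketch, assuming you know the key names up front:
PUT myidx
{
  "settings": {
    "analysis": {
      "normalizer": {
        "lowercase_norm": {
          "type": "custom",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "customProps": {
        "type": "nested",
        "properties": {
          "salaryAccount": {
            "type": "keyword",
            "normalizer": "lowercase_norm"
          }
        }
      }
    }
  }
}
With this mapping, a wildcard query for *avg* (all lowercase) matches AvGhj90AAb, because the normalizer lowercases the value at index time.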

Undesired Stopwords in Elastic Search

I am using Elasticsearch 6. This is the query:
PUT /semtesttest
{
  "settings": {
    "index": {
      "analysis": {
        "filter": {
          "my_stop": {
            "type": "stop",
            "stopwords_path": "analysis1/stopwords.csv"
          },
          "synonym": {
            "type": "synonym",
            "synonyms_path": "analysis1/synonym.txt"
          }
        },
        "analyzer": {
          "my_analyzer": {
            "tokenizer": "standard",
            "filter": [ "synonym", "my_stop" ]
          }
        }
      }
    }
  },
  "mappings": {
    "all_questions": {
      "dynamic": "strict",
      "properties": {
        "kbaid": {
          "type": "integer"
        },
        "answer": {
          "type": "text"
        },
        "question": {
          "type": "text",
          "analyzer": "my_analyzer"
        }
      }
    }
  }
}
PUT /semtesttest/all_questions/1
{
  "question": "this is hippie"
}
GET /semtesttest/all_questions/_search
{
  "query": {
    "fuzzy": { "question": { "value": "hippie", "fuzziness": 2 } }
  }
}
GET /semtesttest/all_questions/_search
{
  "query": {
    "fuzzy": { "question": { "value": "this is", "fuzziness": 2 } }
  }
}
In synonym.txt I have:
this, that, money => sainai
In stopwords.csv I have:
hello
how
are
you
The first GET ('hippie') returns empty; only the second GET ('this is') returns results.
What is the problem? It looks like the stop words "this is" are filtered out in the first query, but I have specified my stop words explicitly.
fuzzy is a term-level query. It is not going to analyze the input, so your query was looking for the exact term this is (applying some fuzzy fun).
So you either want to build a query off those two terms, or use a full-text query instead. If fuzziness is important, I think the only full-text query that supports it is match:
GET /semtesttest/all_questions/_search?pretty
{
  "query": {
    "match": { "question": { "query": "this is", "fuzziness": 2 } }
  }
}
If matching phrases is important, you may want to look at this answer and work with span queries.
This might also help you so you can see how your analyzer is being used:
GET /semtesttest/_analyze?analyzer=my_analyzer&field=question&text=this is
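On recent Elasticsearch versions, where the query-string form of _analyze is no longer supported, the same check takes a JSON body:
GET /semtesttest/_analyze
{
  "field": "question",
  "text": "this is"
}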

Exact match in Elasticsearch after incorporating hunspell filter

We have added the hunspell filter to our Elasticsearch instance. Nothing fancy...
{
  "index": {
    "analysis": {
      "tokenizer": {
        "comma": {
          "type": "pattern",
          "pattern": ","
        }
      },
      "filter": {
        "en_GB": {
          "type": "hunspell",
          "language": "en_GB"
        }
      },
      "analyzer": {
        "comma": {
          "type": "custom",
          "tokenizer": "comma"
        },
        "en_GB": {
          "filter": [
            "lowercase",
            "en_GB"
          ],
          "tokenizer": "standard"
        }
      }
    }
  }
}
Now, though, we seem to have lost the built-in facility to do exact-match queries using quotation marks. Searching for "lace" will also do an equal-score search for "lacy", for example. I understand this is kind of the point of including hunspell, but I would like to be able to force exact matches by using quotes.
I am doing boolean queries for this, by the way, along the lines of (in Java):
"bool" : {
"must" : {
"query_string" : {
"query" : "\"lace\"",
"fields" :
...
or (Postman direct to 9200) ...
{
  "query" : {
    "query_string" : {
      "query" : "\"lace\"",
      "fields" :
      ....
Is this possible? I'm guessing this might be something we would do in the tokenizer, but I'm not quite sure where to start.
You will not be able to handle this at the tokenizer level, but you can tweak the configuration at the mapping level to use multi-fields: keep an unanalyzed copy of the same field and use it later in queries to support your use case.
You can update your mappings like the following:
"mappings": {
"desc": {
"properties": {
"labels": {
"type": "string",
"analyzer": "en_GB",
"fields": {
"raw": {
"type": "keyword"
}
}
}
}
}
}
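Note that adding the raw sub-field to an existing index only takes effect for documents indexed afterwards; to populate it for existing documents you would need to reindex, e.g. (index names here are placeholders):
POST _reindex
{
  "source": { "index": "old_index" },
  "dest": { "index": "new_index" }
}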
Further, modify your query to search on the raw field instead of the analyzed field:
{
  "query": {
    "bool": {
      "must": [{
        "query_string": {
          "default_field": "labels.raw",
          "query": "lace"
        }
      }]
    }
  }
}
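If you want unquoted searches to keep the hunspell-stemmed behaviour and only quoted ones to be exact, your application can choose the field per query. A combined sketch (field name labels taken from the mapping above) that scores exact matches higher:
{
  "query": {
    "bool": {
      "should": [
        { "query_string": { "default_field": "labels", "query": "lace" } },
        { "query_string": { "default_field": "labels.raw", "query": "lace", "boost": 2 } }
      ]
    }
  }
}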
Hope this helps
Thanks

How do I search for partial accented keyword in elasticsearch?

I have the following Elasticsearch settings:
"settings": {
"index":{
"analysis":{
"analyzer":{
"analyzer_keyword":{
"tokenizer":"keyword",
"filter":["lowercase", "asciifolding"]
}
}
}
}
}
The above works fine for the following keywords:
Beyoncé
Céline Dion
The above data is stored in Elasticsearch as beyonce and celine dion respectively.
I can search for Celine or Celine Dion without the accent and I get the same results. However, the moment I search for Céline, I don't get any results. How can I configure Elasticsearch to search for partial keywords with the accent?
The query body looks like:
{
  "track_scores": true,
  "query": {
    "bool": {
      "must": [
        {
          "multi_match": {
            "fields": [ "name" ],
            "type": "phrase",
            "query": "Céline"
          }
        }
      ]
    }
  }
}
and the mapping is
"mappings" : {
"artist" : {
"properties" : {
"name" : {
"type" : "string",
"fields" : {
"orig" : {
"type" : "string",
"index" : "not_analyzed"
},
"simple" : {
"type" : "string",
"analyzer" : "analyzer_keyword"
}
},
}
I would suggest this mapping and then go from there:
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "analyzer_keyword": {
            "tokenizer": "whitespace",
            "filter": [
              "lowercase",
              "asciifolding"
            ]
          }
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "analyzer_keyword"
        }
      }
    }
  }
}
Confirm that the same analyzer is getting used at query time (a quick check is shown after this list). Here are some possible reasons why that might not be happening:
you specify a separate analyzer at query time on purpose that is not performing similar analysis
you are using a term or terms query, for which no analyzer is applied (see Term Query and the section titled "Why doesn't the term query match my document?")
you are using a query_string query (e.g. see Simple Query String Query); I have found that when specifying multiple fields with different analyzers, I have needed to separate the fields into separate queries and specify the analyzer parameter (working with version 2.0)
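The _analyze API shows what the analyzer produces at query time (the index name test is assumed from the mapping above):
GET test/_analyze
{
  "analyzer": "analyzer_keyword",
  "text": "Céline"
}
This should return the single token celine, which is what the stored beyonce / celine dion values are compared against.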

Elasticsearch - no hit though there should be a result

I've encountered the following problem with Elasticsearch; does anyone know where I should troubleshoot?
I'm happily retrieving results with the following query:
{
  "query" : {
    "match" : { "name" : "A1212001" }
  }
}
But when I refine the value of the search field "name" to a substring, I get no hit:
{
  "query" : {
    "match" : { "name" : "A12120" }
  }
}
"A12120" is a substring of already hit query "A1212001"
If you don't have too many documents, you can go with a regexp query
POST /index/_search
{
  "query" : {
    "regexp" : {
      "name" : "A12120.*"
    }
  }
}
or even a wildcard one
POST /index/_search
{
  "query" : {
    "wildcard" : { "name" : "A12120*" }
  }
}
However, as @Waldemar suggested, if you have many documents in your index, the best approach is to use an EdgeNGram tokenizer, since the above queries are not ultra-performant.
First, you define your index settings like this:
PUT index
{
  "settings" : {
    "analysis" : {
      "analyzer" : {
        "my_analyzer" : {
          "type": "custom",
          "tokenizer" : "edge_tokens",
          "filter": [ "lowercase" ]
        }
      },
      "tokenizer" : {
        "edge_tokens" : {
          "type" : "edgeNGram",
          "min_gram" : "1",
          "max_gram" : "10",
          "token_chars": [ "letter", "digit" ]
        }
      }
    }
  },
  "mappings": {
    "my_type": {
      "properties": {
        "name": {
          "type": "string",
          "analyzer": "my_analyzer",
          "search_analyzer": "standard"
        }
      }
    }
  }
}
Then, when indexing a document whose name field contains A1212001, the following tokens will be indexed: A, A1, A12, A121, A1212, A12120, A121200, A1212001. So when you search for A12120 you'll find a match.
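You can verify the produced tokens with the _analyze API, using the index and analyzer defined above:
GET index/_analyze
{
  "analyzer": "my_analyzer",
  "text": "A1212001"
}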
Are you using a match query? That query checks terms stored inside Lucene, and your stored term is A1212001. If you need to find part of a term, you can use a regexp query, but be aware that regex has some internal cost: the shard has to check all of its terms.
If you need a more "professional" way to search part of a term, you can use NGrams.
