Elasticsearch - Searching with hyphens

Elasticsearch 1.6
I want to index text that contains hyphens, for example U-12, U-17, WU-12, t-shirt... and to be able to use a "Simple Query String" query to search on them.
Data sample (simplified):
{"title":"U-12 Soccer",
"comment": "the t-shirts are dirty"}
As there are already quite a lot of questions about hyphens, I tried the following solution:
Use a char filter: ElasticSearch - Searching with hyphens in name.
So I went for this mapping:
{
  "settings": {
    "analysis": {
      "char_filter": {
        "myHyphenRemoval": {
          "type": "mapping",
          "mappings": [
            "-=>"
          ]
        }
      },
      "analyzer": {
        "default": {
          "type": "custom",
          "char_filter": [ "myHyphenRemoval" ],
          "tokenizer": "standard",
          "filter": [
            "standard",
            "lowercase"
          ]
        }
      }
    }
  },
  "mappings": {
    "test": {
      "properties": {
        "title": {
          "type": "string"
        },
        "comment": {
          "type": "string"
        }
      }
    }
  }
}
Searching is done with the following query:
{"_source":true,
"query":{
"simple_query_string":{
"query":"<Text>",
"default_operator":"AND"
}
}
}
What works:
"U-12", "U*", "t*", "ts*"
What didn't work:
"U-*", "u-1*", "t-*", "t-sh*", ...
So it seems the char filter is not executed on search strings?
What could I do to make this work?

The answer is really simple:
Quote from Igor Motov: Configuring the standard tokenizer
By default the simple_query_string query doesn't analyze the words
with wildcards. As a result it searches for all tokens that start with
i-ma. The word i-mac doesn't match this request because during
analysis it's split into two tokens i and mac and neither of these
tokens starts with i-ma. In order to make this query find i-mac you
need to make it analyze wildcards:
{
  "_source": true,
  "query": {
    "simple_query_string": {
      "query": "u-1*",
      "analyze_wildcard": true,
      "default_operator": "AND"
    }
  }
}

The quote from Igor Motov is correct: you have to add "analyze_wildcard": true in order to make it work with wildcards. But it is important to notice that the hyphen actually tokenizes "u-12" into "u" and "12", two separate words.
If preserving the original text is important, do not use the mapping char filter; otherwise it is quite useful.
Imagine that you have "m0-77", "m1-77" and "m2-77": if you search for m*-77 you will get zero hits. However, you can replace "-" (hyphen) with AND in order to connect the two separate words, and then search for m* AND 77, which will give you the correct hits.
You can do this replacement on the client side.
In your case, u-* becomes:
{
  "query": {
    "simple_query_string": {
      "query": "u AND 1*",
      "analyze_wildcard": true
    }
  }
}
and t-sh* becomes:
{
  "query": {
    "simple_query_string": {
      "query": "t AND sh*",
      "analyze_wildcard": true
    }
  }
}
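A minimal sketch of that client-side rewrite in Python (the splitting rule is an assumption; adapt it to whatever query syntax your front end accepts):

def rewrite_hyphenated_term(term):
    """Turn a hyphenated term like 'u-1*' into 'u AND 1*' so that every
    token produced by the standard tokenizer is required to match."""
    # Split on hyphens and drop empty fragments (e.g. from a trailing '-').
    parts = [p for p in term.split("-") if p]
    return " AND ".join(parts)

query_body = {
    "query": {
        "simple_query_string": {
            "query": rewrite_hyphenated_term("t-sh*"),  # -> "t AND sh*"
            "analyze_wildcard": True
        }
    }
}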

If anyone is still looking for a simple workaround to this issue, replace the hyphen with an underscore _ when indexing data.
For example, O-000022334 should be indexed as O_000022334.
When searching, apply the same replacement to the search term, and convert the underscore back to a hyphen when displaying results. This way you can search for "O-000022334" and it will find the correct match.
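A rough sketch of that workaround in Python (purely illustrative; the field and helper names are made up):

def to_indexed_form(value):
    # "O-000022334" is stored as "O_000022334" so the standard tokenizer
    # keeps it as a single token instead of splitting it on the hyphen.
    return value.replace("-", "_")

def to_display_form(value):
    # Convert back to the hyphenated form when showing results.
    return value.replace("_", "-")

doc = {"order_id": to_indexed_form("O-000022334")}   # what gets indexed
search_term = to_indexed_form("O-000022334")         # what gets queried
print(to_display_form(doc["order_id"]))              # prints "O-000022334"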

Related

Elastic query bool must match issue

Below is the query part of an Elasticsearch GET API call run via the command line inside an OpenShift pod. I get matching as well as non-matching elements in the fetch of 2000 documents. How can I limit the result to only the matching elements?
I specifically want to get only documents matching {"kubernetes.container_name": "xyz"}.
Any suggestions will be appreciated.
-d ' {\"query\": { \"bool\" :{\"must\" :{\"match\" :{\"kubernetes.container_name\":\"xyz\"}},\"filter\" : {\"range\": {\"#timestamp\": {\"gte\": \"now-2m\",\"lt\": \"now-1m\"}}}}},\"_source\":[\"#timestamp\",\"message\",\"kubernetes.container_name\"],\"size\":2000}'"
For exact matches there are two things you would need to do:
Make use of Term Queries
Ensure that the field is of the keyword datatype.
The text datatype goes through an analysis phase.
For example, if your data is This is a beautiful day, then during ingestion the text datatype would break the words down into tokens, lowercase them [this, is, a, beautiful, day] and add them to the inverted index. This happens via the Standard Analyzer, which is the default analyzer applied to text fields.
So when you query, the analyzer is applied again at query time and Elasticsearch checks whether the words are present in the respective documents. As a result, you see documents appearing even without an exact match.
In order to do an exact match, you need to make use of keyword fields, as they do not go through the analysis phase.
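To see that tokenization concretely, you can run the sentence through the _analyze API; a minimal sketch using Python's requests library against a local cluster (the URL is an assumption):

import requests

# Ask Elasticsearch how the standard analyzer tokenizes the sentence.
resp = requests.post(
    "http://localhost:9200/_analyze",
    json={"analyzer": "standard", "text": "This is a beautiful day"}
)
print([t["token"] for t in resp.json()["tokens"]])
# ['this', 'is', 'a', 'beautiful', 'day']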
What I'd suggest is to create a keyword sibling field for the text field you have, in the manner shown below, and then re-ingest all the data:
Mapping:
PUT my_sample_index
{
  "mappings": {
    "properties": {
      "kubernetes": {
        "type": "object",
        "properties": {
          "container_name": {
            "type": "text",
            "fields": {          <--- Note this
              "keyword": {       <--- This is container_name.keyword field
                "type": "keyword"
              }
            }
          }
        }
      }
    }
  }
}
Note that I'm assuming you are making use of object type.
Request Query:
POST my_sample_index/_search
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "kubernetes.container_name.keyword": {
              "value": "xyz"
            }
          }
        }
      ]
    }
  }
}
Hope this helps!

In Elasticsearch, how do I search for an arbitrary substring?

In Elasticsearch, how do I search for an arbitrary substring, perhaps including spaces? (Searching for part of a word isn't quite enough; I want to search any substring of an entire field.)
I imagine it has to be in a keyword field, rather than a text field.
Suppose I have only a few thousand documents in my Elasticsearch index, and I try:
"query": {
"wildcard" : { "description" : "*plan*" }
}
That works as expected--I get every item where "plan" is in the description, even ones like "supplantation".
Now, I'd like to do
"query": {
"wildcard" : { "description" : "*plan is*" }
}
...so that I might match documents with "Kaplan isn't" among many other possibilities.
It seems this isn't possible with wildcard, match prefix, or any other query type I might see. How do I simply search on any substring? (In SQL, I would just do description LIKE '%plan is%')
(I am aware any such query would be slow or perhaps even impossible for large data sets.)
Have you tried the regexp query in Elasticsearch? It sure does sound like something you might be interested in.
I was hoping there might be something built-in for this in Elasticsearch, given that this simple substring search seems like a very basic capability (thinking about it, it is implemented as strstr() in C, LIKE '%%' in SQL, Ctrl+F in most text editors, String.IndexOf in C#, etc.), but this seems not to be the case. Note that the regexp query doesn't support case insensitivity, so I also needed to pair it with this custom analyzer, so that the index is all lowercase. Then I can convert my search string to lowercase as well.
{
  "settings": {
    "analysis": {
      "analyzer": {
        "lowercase_keyword": {
          "type": "custom",
          "tokenizer": "keyword",
          "filter": [ "lowercase" ]
        }
      }
    }
  },
  "mappings": {
    ...
    "description": { "type": "text", "analyzer": "lowercase_keyword" },
  }
}
Example query:
"query": {
"regexp" : { "description" : ".*plan is.*" }
}
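Putting the two together, a rough Python sketch of that flow (index name, field name, and the local URL are assumptions; re.escape keeps regex metacharacters in the user's input from being interpreted):

import re
import requests

def substring_query(user_input):
    # Lowercase the input to match the lowercase_keyword-analyzed index,
    # and escape it so characters like '.' or '*' are treated literally.
    pattern = ".*" + re.escape(user_input.lower()) + ".*"
    return {"query": {"regexp": {"description": pattern}}}

resp = requests.post(
    "http://localhost:9200/my_index/_search",
    json=substring_query("plan is")
)
print(resp.json()["hits"]["total"])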
Thanks to Jai Sharma for leading me; I just wanted to provide more detail.

Elasticsearch find missing word in phrase

How can I use Elasticsearch to find a missing word in a phrase? For example, I want to find all documents which contain the pattern make * great again. I tried using a wildcard query, but it returned no results:
{
  "fields": [
    "file_name",
    "mime_type",
    "id",
    "sha1",
    "added_at",
    "content.title",
    "content.keywords",
    "content.author"
  ],
  "highlight": {
    "encoder": "html",
    "fields": {
      "content.content": {
        "number_of_fragments": 5
      }
    },
    "order": "score",
    "tags_schema": "styled"
  },
  "query": {
    "wildcard": {
      "content.content": "make * great again"
    }
  }
}
If I put in a word and use a match_phrase query I get results, so I know I have data which matches the pattern.
Which type of query should I use? Or do I need to add some type of custom analyzer to the field?
Wildcard queries operate on terms, so if you use it on an analyzed field, it will actually try to match every term in that field separately. In your case, you can create a not_analyzed sub-field (such as content.content.raw) and run the wildcard query on that. Or just map the actual field to not be analyzed, if you don't need to query it in other ways.
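A hedged sketch of that approach in Python with the requests library (index name, type name, and URL are assumptions; not_analyzed is the pre-5.x equivalent of a keyword field, and changing the mapping of an existing field generally means recreating the index and reindexing):

import requests

# Add a raw (not analyzed) copy of content.content for wildcard queries.
mapping = {
    "properties": {
        "content": {
            "properties": {
                "content": {
                    "type": "string",
                    "fields": {
                        "raw": {"type": "string", "index": "not_analyzed"}
                    }
                }
            }
        }
    }
}
requests.put("http://localhost:9200/my_index/_mapping/my_type", json=mapping)

# The wildcard then runs against the whole field value, spaces included.
query = {"query": {"wildcard": {"content.content.raw": "make * great again"}}}
print(requests.post("http://localhost:9200/my_index/_search", json=query).json())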

Elasticsearch term query does not give any results

I am very new to Elasticsearch and I have to perform the following query:
GET book-lists/book-list/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "term": {
                "title": "Sociology"
              }
            },
            {
              "term": {
                "idOwner": "17xxxxxxxxxxxx45"
              }
            }
          ]
        }
      }
    }
  }
}
According to the Elasticsearch API, it is equivalent to pseudo-SQL:
SELECT document
FROM book-lists
WHERE title = "Sociology"
AND idOwner = 17xxxxxxxxxxxx45
The problem is that my document looks like this:
{
  "_index": "book-lists",
  "_type": "book-list",
  "_id": "AVBRSvHIXb7carZwcePS",
  "_version": 1,
  "_score": 1,
  "_source": {
    "title": "Sociology",
    "books": [
      {
        "title": "The Tipping Point: How Little Things Can Make a Big Difference",
        "isRead": true,
        "summary": "lorem ipsum",
        "rating": 3.5
      }
    ],
    "numberViews": 0,
    "idOwner": "17xxxxxxxxxxxx45"
  }
}
And the Elasticsearch query above doesn't return anything.
Whereas, this query returns the document above:
GET book-lists/book-list/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "term": {
                "numberViews": "0"
              }
            },
            {
              "term": {
                "idOwner": "17xxxxxxxxxxxx45"
              }
            }
          ]
        }
      }
    }
  }
}
This makes me suspect that the fact that "title" is the name of two different fields has something to do with it.
Is there a way to fix this without having to rename any of the fields? Or am I missing something somewhere else?
Thanks for anyone trying to help.
Your problem is described in the documentation.
I suspect that you don't have any explicit mapping on your index, which means elasticsearch will use dynamic mapping.
For string fields, it will pass the string through the standard analyzer which lowercases it (among other things). This is why your query doesn't work.
Your options are:
Specify an explicit mapping on the field so that it isn't analyzed before storing in the index (index: not_analyzed).
Clean your term query before sending it to elasticsearch (in this specific query lowercasing will work, but note that the standard analyzer also does other things like remove stop words, so depending on the title you may still have issues).
Use a different query type (e.g., query_string instead of term), which will analyze the query before running it.
Looking at the sort of data you are storing you probably need to specify an explicit not_analyzed mapping.
For option three your query would look something like this:
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "query_string": {
                "fields": ["title"],
                "analyzer": "standard",
                "query": "Sociology"
              }
            },
            {
              "term": {
                "idOwner": "17xxxxxxxxxxxx45"
              }
            }
          ]
        }
      }
    }
  }
}
Note that the query_string query has special syntax (e.g., OR and AND are not treated as literals) which means you have to be careful what you give it. For this reason explicit mapping with a term filter is probably more appropriate for your use case.
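For option one, a rough sketch of what the explicit mapping and the unchanged term filter could look like, shown here with Python's requests library (the URL is an assumption, and because title is an existing analyzed field you would have to recreate the index and reindex for the change to take effect):

import requests

# Map title as not_analyzed so it is stored verbatim and a term filter
# can match the exact string "Sociology".
mapping = {
    "book-list": {
        "properties": {
            "title": {"type": "string", "index": "not_analyzed"}
        }
    }
}
requests.put("http://localhost:9200/book-lists/_mapping/book-list", json=mapping)

# The original term-based filter then works unchanged.
query = {
    "query": {
        "filtered": {
            "filter": {
                "bool": {
                    "must": [
                        {"term": {"title": "Sociology"}},
                        {"term": {"idOwner": "17xxxxxxxxxxxx45"}}
                    ]
                }
            }
        }
    }
}
print(requests.post(
    "http://localhost:9200/book-lists/book-list/_search", json=query
).json())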
I have described this issue in this blog.
The issue arises due to the default tokenization in Elasticsearch.
In the same post, I have outlined two solutions.
One is enabling the not_analyzed flag on the required field, and the other is to use the keyword tokenizer.
To expand on solarissmoke's solution, while the contents of that field will be passed through the standard analyzer, your query will not. If you refer to the Elasticsearch documentation on the term query, you will see that term queries are not analyzed.
The match query is probably more appropriate for your case. What you query will be analyzed in the same way as the contents of the title field by default. The query_string query brings a lot more to the table and you should review the documentation if you plan on using that.
So, again, it is pretty much what you had, with a small tweak:
GET book-lists/book-list/_search
{
  "query": {
    "filtered": {
      "filter": {
        "bool": {
          "must": [
            {
              "match": {
                "title": "Sociology"
              }
            },
            {
              "term": {
                "idOwner": "17xxxxxxxxxxxx45"
              }
            }
          ]
        }
      }
    }
  }
}
It is important to note that passing a lowercased version of the terms to the term query (a hack, which does not seem like a good idea given what solarissmoke describes about the other features of the standard analyzer, like the stop filter), using the query_string query, or using the match query is still very different from the SQL query you described:
SELECT document
FROM book-lists
WHERE title = "Sociology"
AND idOwner = 17xxxxxxxxxxxx45
With those Elasticsearch queries, you can match records where idOwner might be the same but title might be something like "Another Sociology Title", which is different from what you would expect with that SQL. Here is some great material from the documentation and another Stack Overflow post that elaborates on what is going on, where term queries and filters are appropriate, and how to get exact matches:
Elasticsearch : Finding Exact Values
Stackoverflow : Exact (not substring) matching in Elasticsearch

problems with phrase matching in elasticsearch

I'm trying to perform Phrase matching using elasticsearch.
Here is what I'm trying to accomplish:
data -
1: {
  "test": {
    "title": "text1 text2"
  }
}
2: {
  "test": {
    "title": "text3 text4"
  }
}
3: {
  "test": {
    "title": "text5"
  }
}
4: {
  "test": {
    "title": "text6"
  }
}
Search terms:
If I lookup for "text0 text1 text2 text3" - It should return #1 (matches full string)
If I lookup for "text6 text5 text4 text3" - It should return #4, #3, but not #2 as its not in same order.
Here is what I've tried:
set the index_analyzer as keyword, and search_analyzer as standard
also tried creating custom tokens
but none of my solutions allow me to match a substring of the search query against the keyword in the document.
If anyone has written similar queries, can you share how the mappings are configured and what kind of query is being used?
What I see here is this: You want your search to match on any tokens sent from the query. If those tokens do match, it must be an exact match to the title.
This means that indexing your title field as a keyword would get you that mandatory exact match. However, the standard analyzer at search time would never match titles containing spaces, as you'd have your index token ["text1 text2"] and your search tokens ["text1", "text2"]. You can't use a phrase match with any slop value, or else your token order requirement will be ignored.
So, what you really need is to generate keyword tokens during indexing, but generate shingles whenever you search. Your shingles will maintain order, and if one of them matches, consider it a go. I would set it to not output unigrams, but do allow unigrams if there are no shingles. This means that if you have just one word, it will output that token, but if it can combine your search words into various numbers of shingled tokens, it will not emit single-word tokens.
PUT
{ "settings":
{
"analysis": {
"filter": {
"my_shingle": {
"type": "shingle",
"max_shingle_size": 50,
"output_unigrams": false
}
},
"analyzer": {
"my_shingler": {
"filter": [
"lowercase",
"asciifolding",
"my_shingle"
],
"type": "custom",
"tokenizer": "whitespace"
}
}
}
}
}
Then you just want to set your type mapping to use the keyword analyzer for index and the `my_shingler` analyzer for search.
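A rough sketch of that mapping, assuming the settings above went into an index called my_index with a type called test; this uses the index_analyzer/search_analyzer parameters from Elasticsearch 1.x (as in the question) and Python's requests library against a local cluster:

import requests

# Index titles with the keyword analyzer (stored as one token),
# search with my_shingler (query words combined into ordered shingles).
mapping = {
    "test": {
        "properties": {
            "title": {
                "type": "string",
                "index_analyzer": "keyword",
                "search_analyzer": "my_shingler"
            }
        }
    }
}
requests.put("http://localhost:9200/my_index/_mapping/test", json=mapping)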
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/analysis-shingle-tokenfilter.html
