I am trying to retrieve boolean queries stored in ES using the percolate API.
The index mapping is given below:
curl -XPUT 'localhost:9200/my-index' -d '{
"mappings": {
"taggers": {
"properties": {
"content": {
"type": "string"
}
}
}
}
}'
I am inserting records like this (the queries use proper boolean syntax, i.e. AND, OR, NOT, etc., as in the example below):
curl -XPUT 'localhost:9200/my-index/.percolator/1' -d '{
"query" : {
"match" : {
"content" : "Audi AND BMW"
}
}
}'
And then I am posting a document to get the matching queries:
curl -XGET 'localhost:9200/my-index/my-type/_percolate' -d '{
"doc" : {
"content" : "I like audi very much"
}
}'
In the above case no queries should match, because the stored query is "Audi AND BMW", yet it still returns the record. That means the AND condition is being ignored. I cannot figure out why it is not working for boolean queries.
You need to register this query instead: a match query does not understand the AND operator (it treats it as the ordinary token and), but query_string does.
curl -XPUT 'localhost:9200/my-index/.percolator/1' -d '{
"query" : {
"query_string" : {
"query" : "Audi AND BMW",
"default_field": "content"
}
}
}'
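Alternatively, if all you need is for every term to be required (rather than full AND/OR/NOT syntax), a match query with its operator set to and would also work. A minimal sketch, not taken from the question (registered here under a separate id 2):
curl -XPUT 'localhost:9200/my-index/.percolator/2' -d '{
"query" : {
"match" : {
"content" : {
"query" : "Audi BMW",
"operator" : "and"
}
}
}
}'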
I have a field containing values like 2011-10-20 with the mapping:
"joiningDate": { "type": "date", "format": "dateOptionalTime" }
The following query ends up in a SearchPhaseExecutionException.
"wildcard" : { "ingestionDate" : "2011*" }
It seems ES (v1.1) doesn't offer this out of the box. This post suggests scripting (the unaccepted answer says even more). I'll try that; I'm just asking if anyone has done it already.
Expectation
A search string 13 should match all documents where the joiningDate field has values such as:
2011-10-13
2013-01-11
2100-13-02
I'm not sure I understand your needs correctly, but I would suggest using a range query on the date field.
The query below returns the results you want:
{
"query": {
"range": {
"joiningDate": {
"gt": "2011-01-01",
"lt": "2012-01-01"
}
}
}
}
I hope this helps.
Edit (searching for dates containing "13" itself)
I suggest using the multi_field functionality of Elasticsearch.
It lets you index the joiningDate field as two different field types at the same time.
Please see and try the example code below.
Create an index:
curl -XPUT 'localhost:9200/blacksmith'
Define a mapping in which the joiningDate field has type multi_field:
curl -XPUT 'localhost:9200/blacksmith/my_type/_mapping' -d '{
"my_type" : {
"properties" : {
"joiningDate" : {
"type": "multi_field",
"fields" : {
"joiningDate" : {
"type" : "date",
"format" : "dateOptionalTime"
},
"verbatim" : {
"type" : "string",
"index" : "not_analyzed"
}
}
}
}
}
}'
Index 4 documents (3 of them containing "13"):
curl -s -XPOST 'localhost:9200/blacksmith/my_type/1' -d '{ "joiningDate": "2011-10-13" }'
curl -s -XPOST 'localhost:9200/blacksmith/my_type/2' -d '{ "joiningDate": "2013-01-11" }'
curl -s -XPOST 'localhost:9200/blacksmith/my_type/3' -d '{ "joiningDate": "2130-12-02" }'
curl -s -XPOST 'localhost:9200/blacksmith/my_type/4' -d '{ "joiningDate": "2014-12-02" }' # no 13
Run a wildcard query against the joiningDate.verbatim field, NOT the joiningDate field:
curl -XGET 'localhost:9200/blacksmith/my_type/_search?pretty' -d '{
"query": {
"wildcard": {
"joiningDate.verbatim": {
"wildcard": "*13*"
}
}
}
}'
We're trying to set up and use the percolator, but we aren't quite getting the results we expect.
First, I register a few queries:
curl -XPUT 'localhost:9200/index-234234/.percolator/query1' -d '{
"query" : {
"range" : {
"price" : { "gte": 100 }
}
}
}'
curl -XPUT 'localhost:9200/index-234234/.percolator/query2' -d '{
"query" : {
"range" : {
"price" : { "gte": 200 }
}
}
}'
And then, when I try to match it against 150, which should ideally match only query1, instead it matches both queries:
curl -XGET 'localhost:9200/index-234234/message/_percolate' -d '{
"doc" : {
"price" : 150
}
}'
{"took":4,"_shards":{"total":5,"successful":5,"failed":0},"total":2,"matches":[{"_index":"index-234234","_id":"query1"},{"_index":"index-234234","_id":"query2"}]}
Any pointers as to why this is happening would be much appreciated.
The problem is that you are registering your percolator queries before setting up the mapping for the document type. The percolator then has to register the queries without a defined mapping, which is an issue particularly for range queries.
Start over by deleting the index, and then run this mapping command first:
curl -XPOST localhost:9200/index-234234 -d '{
"mappings" : {
"message" : {
"properties" : {
"price" : {
"type" : "long"
}
}
}
}
}'
Then execute your previous commands (register the two percolator queries and percolate the document) and you will get the correct response:
{"took":3,"_shards":{"total":5,"successful":5,"failed":0},"total":1,"matches":[{"_index":"index-234234","_id":"query1"}]}
You may find this discussion from a couple of years ago helpful:
http://grokbase.com/t/gg/elasticsearch/124x6hq4ev/range-query-in-percolate-not-working
Not a solution, but this works for me (without knowing why); the exact commands are shown below:
Register both percolator queries
Do the _percolate request (returns your result: "total": 2)
Register both percolator queries again (both are now at version 2)
Do the _percolate request again (returns the right result: "total": 1)
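That is, reusing the exact commands from the question (each repeated PUT bumps the query to version 2):
curl -XPUT 'localhost:9200/index-234234/.percolator/query1' -d '{
"query" : {
"range" : {
"price" : { "gte": 100 }
}
}
}'
curl -XPUT 'localhost:9200/index-234234/.percolator/query2' -d '{
"query" : {
"range" : {
"price" : { "gte": 200 }
}
}
}'
curl -XGET 'localhost:9200/index-234234/message/_percolate' -d '{
"doc" : {
"price" : 150
}
}'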
I have just started exploring Elasticsearch. I created a document as follows:
curl -XPUT "http://localhost:9200/cities/city/1" -d'
{
"name": "Saint Louis"
}'
I now tried to do a fuzzy search on the name field with a Levenshtein distance of 5, as follows:
curl -XGET "http://localhost:9200/_search" -d'
{
"query": {
"fuzzy": {
"name" : {
"value" : "St. Louis",
"fuzziness" : 5
}
}
}
}'
But it's not returning any match. I expect the Saint Louis record to be returned. How can I fix my query?
Thanks.
The problem with your query is that the maximum allowed edit distance is 2.
In the case above, what you probably want is a synonym mapping St. to Saint; that would give you the match. Of course, this depends on your data, since St could also mean "street".
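As an illustration only (the index, analyzer and filter names below are made up, and you would need to reindex your documents into it), a synonym filter mapping st to saint could be set up like this:
curl -XPUT 'localhost:9200/cities-synonyms' -d '{
"settings": {
"analysis": {
"filter": {
"st_synonym": {
"type": "synonym",
"synonyms": ["st => saint"]
}
},
"analyzer": {
"name_synonyms": {
"tokenizer": "standard",
"filter": ["lowercase", "st_synonym"]
}
}
}
},
"mappings": {
"city": {
"properties": {
"name": { "type": "string", "analyzer": "name_synonyms" }
}
}
}
}'
With that in place, a plain match query for St. Louis on the name field is analyzed to saint and louis and matches the document.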
If you just want to test fuzzy searching, you could try this example:
curl -XGET "http://localhost:9200/_search" -d'
{
"query": {
"fuzzy": {
"name" : {
"value" : "Louiee",
"fuzziness" : 2
}
}
}
}'
I am experimenting with Elasticsearch parent/child using some simple examples from fun-with-elasticsearch-s-children-and-nested-documents/. I am able to query child elements by running the query from the blog:
curl -XPOST localhost:9200/authors/bare_author/_search -d '{
However, I could not tweak the example for a has_parent query. Can someone please point out what I am doing wrong, as I keep getting 0 results?
This is what I tried:
# returns 0 hits
curl -XPOST localhost:9200/authors/book/_search -d '{
"query": {
"has_parent": {
"type": "bare_author",
"query" : {
"filtered": {
"query": { "match_all": {}},
"filter" : {"term": { "name": "Alastair Reynolds"}}
}
}
}
}
}'
# did not work either
curl -XPOST localhost:9200/authors/book/_search -d '{
"query": {
"has_parent" : {
"type" : "bare_author",
"query" : {
"term" : {
"name" : "Alastair Reynolds"
}
}
}
}
}'
This works with match, but it's only matching on the first name:
# works, but matches just the first name
curl -XPOST localhost:9200/authors/book/_search -d '{
"query": {
"has_parent" : {
"type" : "bare_author",
"query" : {
"match" : {"name": "Alastair"}
}
}
}
}'
I suppose you are using the default mappings and thus analyzing the name field with the standard analyzer. The term query and term filter, on the other hand, don't analyze their input, so you are searching for the single token Alastair Reynolds while the index contains alastair and reynolds as two separate, lowercased tokens.
The match query returns results because its input is analyzed (hence lowercased underneath), so it finds matches. Just change your term query into a match query; it will match even with multiple terms, because the input will be tokenized on whitespace and a boolean (or dis_max) query will be generated from the resulting terms.
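Based on your last query, just with the full name, something like this should work (untested sketch):
curl -XPOST localhost:9200/authors/book/_search -d '{
"query": {
"has_parent" : {
"type" : "bare_author",
"query" : {
"match" : {"name": "Alastair Reynolds"}
}
}
}
}'
If you want both terms to be required rather than or-ed together, you can additionally set the match query's "operator" to "and".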
For a particular query, how can I define separate query analyzers per field (phonetic_name, name)? Do I just define search_analyzers for phonetic_name and name in the put mapping of the index/type?
{
"query_string" : {
"fields" : ["phonetic_name", "name^5"],
"query" : "italian food",
"use_dis_max" : true
}
}
You can specify the analyzer for a field when the index is created, for example:
curl -s -XPOST localhost:9200/myindex -d '{
"mappings":{
"mytype":{
"properties":{
"field1":{"store":"yes","index":"not_analyzed","type":"string"},
"field2":{"store":"yes","analyzer":"whitespace","type":"string"},
"field3":{"store":"yes","analyzer":"simple","type":"string"},
}
}
}
}'
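If the goal is specifically a different analyzer at query time than at index time, the 1.x string mapping also accepts index_analyzer and search_analyzer. A sketch with built-in analyzers only (the index name and analyzer choices are placeholders, not a recommendation):
curl -s -XPOST localhost:9200/myotherindex -d '{
"mappings":{
"mytype":{
"properties":{
"phonetic_name":{"type":"string","index_analyzer":"simple","search_analyzer":"whitespace"},
"name":{"type":"string","index_analyzer":"standard","search_analyzer":"standard"}
}
}
}
}'
A query_string query over those fields then picks up each field's search analyzer from the mapping, so nothing extra is needed per request.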