Elasticsearch does not filter as expected

I am using Elasticsearch 1.4.
I have an index:
curl -XPUT "http://localhost:49200/customer" -d '{"mappings": {"venues": {"properties": {"party_id": {"type": "string"},"sup_party_id": {"type": "string"},"location": {"type": "geo_point"} } } }}'
And I put in some data, for instance:
curl -XPOST "http://localhost:49200/customer/venues/RO2" -d '{ "party_id":"RO2", "sup_party_id": "SUP_GT_R1A_0001","location":{ "lat":"21.030347","lon":"105.842896" }}'
curl -XPOST "http://localhost:49200/customer/venues/RO3" -d '{ "party_id":"RO3", "sup_party_id": "SUP_GT_R1A_0004","location":{ "lat":"20.9602051","lon":"105.78709179999998" }}'
and my filter is:
{"constant_score":
{"filter":
{"and":
[{"terms":
{"sup_party_id":["SUP_GT_R1A_0004","SUP_GT_R1A_0001","RO2","RO3","RO4"]
}
},{"geo_bounding_box":
{"location":
{"top_left":{"lat":25.74546096707413,"lon":70.43503197075188},
"bottom_right":{"lat":6.342579199578783,"lon":168.96042259575188}
}
}
}]
}
}
}
The above query does not return any data, but it does return data when I remove the following terms filter:
{"terms":
{"sup_party_id":["SUP_GT_R1A_0004","SUP_GT_R1A_0001","RO2","RO3","RO4"]
}
}
Please show me the problem; any suggestions are appreciated!

That's because the sup_party_id field is an analyzed string: the standard analyzer lowercases it at index time, so the exact uppercase terms in your filter never match (a terms filter does not analyze its input). Change your mapping as follows and it will work. Note that you cannot change the mapping of an existing field in place, so you will need to delete the index, recreate it, and reindex your data:
curl -XPUT "http://localhost:49200/customer" -d '{
"mappings": {
"venues": {
"properties": {
"party_id": {
"type": "string"
},
"sup_party_id": {
"type": "string",
"index": "not_analyzed" <--- add this
},
"location": {
"type": "geo_point"
}
}
}
}
}'
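You can verify what was actually indexed with the _analyze API (a quick sanity check; the port is taken from the question). With the original mapping this should return the single lowercased token sup_gt_r1a_0001, which is why the uppercase values in the terms filter never match:
curl -XGET "http://localhost:49200/customer/_analyze?field=sup_party_id&pretty" -d 'SUP_GT_R1A_0001'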

Related

error in elasticsearch while importing single line via json

ubuntu@ip-172-31-5-121:~$ curl -XPUT localhost:9200/movies -d'
{
"mapping": {
"properties": {
"year": {
"type": "date"
}
}
}
}'
{"error":{"root_cause":[{"type":"parse_exception","reason":"unknown key [mapping] for create index"}],"type":"parse_exception","reason":"unknown key [mapping] for create index"},"status":400}
TL;DR:
Be careful with typos: "mappings" takes an s.
To solve it:
curl -XPUT localhost:9200/movies -d'
{
"mappings": {
"properties": {
"year": {
"type": "date"
}
}
}
}
'
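If in doubt, you can read the mapping back afterwards to confirm it was applied:
curl localhost:9200/movies/_mapping?pretty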

Copy field value to a new field in existing index

I have a document whose structure contains an object field with a nested field inside it. The nested field stores all interactions that occurred in an internal communication.
I now need to create a new field inside the nested field, with a new analyzer, which will store the contents of the old field.
How can I copy the data from the old field to the new field inside the nested field?
My document:
curl -XPUT 'localhost:9200/problems?pretty' -H 'Content-Type: application/json' -d '
{
"settings": {
"number_of_shards": 1
},
"mappings": {
"problem": {
"properties": {
"problemid": {
"type": "long"
},
"subject": {
"type": "text",
"index": true
},
"usermessage": {
"type": "object",
"properties": {
"content": {
"type": "nested",
"properties": {
"messageid": {
"type": "long",
"index": true
},
"message": {
"type": "text",
"index": true
}
}
}
}
}
}
}
}
}'
My New Field:
curl -XPUT 'localhost:9200/problems/_mapping/problem?pretty' -H 'Content-Type: application/json' -d '
{
"properties": {
"usermessage": {
"type": "object",
"properties": {
"content": {
"type": "nested",
"properties": {
"message_accents" : {
"type" : "text",
"analyzer" : "ignoreaccents"
}
}
}
}
}
}
}
'
Data Example:
{
"problemid": 1,
"subject": "Test",
"usermessage": {
"content": [
{
"messageid": 1,
"message": "Hello"
},
{
"messageid": 2,
"message": "Its me"
}
]
}
}
My script to copy fields:
curl -XPOST 'localhost:9200/problems/_update_by_query' -H 'Content-Type: application/json' -d '
{
"query": {
"match_all": {
}
},
"script": "ctx._source.usermessage.content.message_accents = ctx._source.usermessage.content.message"
}'
I tried the code below, but it didn't work; it returns an error.
curl -XPOST 'localhost:9200/problems/_update_by_query' -H 'Content-Type: application/json' -d '
{
"query": {
"match_all": {
}
},
"script": "ctx._source.usermessage.content.each { elm -> elm.message_accents = elm.message }"
}
'
Error:
"script":"ctx._source.usermessage.content.each { elm -> elm.message_accents = elm.message }","lang":"painless","caused_by":{"type":"illegal_argument_exception","reason":"unexpected token ['{'] was expecting one of [{, ';'}]."}},"status":500}%

Best way to search/index the data - with and without whitespace

I am having a problem indexing and searching for words that may or may not contain whitespace. Below is an example.
Here is how the mappings are set up:
curl -s -XPUT 'localhost:9200/test' -d '{
"mappings": {
"properties": {
"name": {
"street": {
"type": "string",
"index_analyzer": "index_ngram",
"search_analyzer": "search_ngram"
}
}
}
},
"settings": {
"analysis": {
"filter": {
"desc_ngram": {
"type": "edgeNGram",
"min_gram": 3,
"max_gram": 20
}
},
"analyzer": {
"index_ngram": {
"type": "custom",
"tokenizer": "keyword",
"filter": [ "desc_ngram", "lowercase" ]
},
"search_ngram": {
"type": "custom",
"tokenizer": "keyword",
"filter": "lowercase"
}
}
}
}
}'
This is how I built the index:
curl -s -XPUT 'localhost:9200/test/name/1' -d '{ "street": "Lakeshore Dr" }'
curl -s -XPUT 'localhost:9200/test/name/2' -d '{ "street": "Sunnyshore Dr" }'
curl -s -XPUT 'localhost:9200/test/name/3' -d '{ "street": "Lake View Dr" }'
curl -s -XPUT 'localhost:9200/test/name/4' -d '{ "street": "Shore Dr" }'
Here is an example of the query that is not working correctly:
curl -s -XGET 'localhost:9200/test/_search?pretty=true' -d '{
"query":{
"bool":{
"must":[
{
"match":{
"street":{
"query":"lake shore dr",
"type":"boolean"
}
}
}
]
}
}
}';
If a user attempts to search for "Lake Shore Dr", I want to only match to document 1/"Lakeshore Dr"
If a user attempts to search for "Lakeview Dr", I want to only match to document 3/"Lake View Dr"
So is the issue with how I am setting up the mappings (the tokenizer? edgeNGram vs. nGram? the size of the ngrams?) or with the query (I have tried things like setting minimum_should_match and the analyzer to use)? Either way, I have not been able to get the desired results.
Thanks all.
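One way to see why the match behaves like this is to compare the tokens the two analyzers emit for the failing inputs (a debugging aid rather than a fix; the analyzer names come from the mapping above):
curl -s -XGET 'localhost:9200/test/_analyze?analyzer=index_ngram&pretty' -d 'Lakeshore Dr'
curl -s -XGET 'localhost:9200/test/_analyze?analyzer=search_ngram&pretty' -d 'lake shore dr'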

accessing _id or _parent fields in script query in elasticsearch

When writing a search query with a script, I can access fields using "doc['myfield']":
curl -XPOST 'http://localhost:9200/index1/type1/_search' -d '
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"script": {
"script": "doc[\"myfield\"].value>0",
"params": {},
"lang":"python"
}
}
}
}
}'
How do I go about accessing the _id or _parent fields?
The "ctx" object does not seem to be available in a search query (while it is accessible in an update API request; why is that?).
Mind you, I am using the python language instead of MVEL, but both pose the same question.
By default, both the document id and the parent id are indexed in the uid format type#id. Elasticsearch provides a few methods that can be used to extract the type and the id from a uid string. Here is an example of using these methods in MVEL:
curl -XDELETE localhost:9200/test
curl -XPUT localhost:9200/test -d '{
"settings": {
"index.number_of_shards": 1,
"index.number_of_replicas": 0
},
"mappings": {
"doc": {
"properties": {
"name": {
"type": "string"
}
}
},
"child_doc": {
"_parent": {
"type": "doc"
},
"properties": {
"name": {
"type": "string"
}
}
}
}
}'
curl -XPUT "localhost:9200/test/doc/1" -d '{"name": "doc 1"}'
curl -XPUT "localhost:9200/test/child_doc/1-1?parent=1" -d '{"name": "child 1-1 of doc 1"}'
curl -XPOST "localhost:9200/test/_refresh"
echo
curl "localhost:9200/test/child_doc/_search?pretty=true" -d '{
"script_fields": {
"uid_in_script": {
"script": "doc[\"_uid\"].value"
},
"id_in_script": {
"script": "org.elasticsearch.index.mapper.Uid.idFromUid(doc[\"_uid\"].value)"
},
"parent_uid_in_script": {
"script": "doc[\"_parent\"].value"
},
"parent_id_in_script": {
"script": "org.elasticsearch.index.mapper.Uid.idFromUid(doc[\"_parent\"].value)"
},
"parent_type_in_script": {
"script": "org.elasticsearch.index.mapper.Uid.typeFromUid(doc[\"_parent\"].value)"
}
}
}'
echo
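Since the question mentions the python language: the uid format is just type#id, so you should also be able to extract the id by splitting on '#' instead of calling the Java helpers. An untested sketch, assuming the lang-python plugin is installed:
curl "localhost:9200/test/child_doc/_search?pretty=true" -d '{
"script_fields": {
"parent_id_in_script": {
"script": "doc[\"_parent\"].value.split(\"#\")[1]",
"lang": "python"
}
}
}'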

How to match on prefix in Elasticsearch

Let's say that in my Elasticsearch index I have a field called "dots" which contains a string of punctuation-separated words (e.g. "first.second.third").
I need to search for e.g. "first.second" and get back all entries whose "dots" field is exactly "first.second" or starts with "first.second.".
I have trouble understanding how text querying works; at least, I have not been able to create a query that does the job.
Elasticsearch has the Path Hierarchy Tokenizer, which was created exactly for this use case. Here is an example of how to set it up for your index:
# Create a new index with custom path_hierarchy analyzer
# See http://www.elasticsearch.org/guide/reference/index-modules/analysis/pathhierarchy-tokenizer.html
curl -XPUT "localhost:9200/prefix-test" -d '{
"settings": {
"analysis": {
"analyzer": {
"prefix-test-analyzer": {
"type": "custom",
"tokenizer": "prefix-test-tokenizer"
}
},
"tokenizer": {
"prefix-test-tokenizer": {
"type": "path_hierarchy",
"delimiter": "."
}
}
}
},
"mappings": {
"doc": {
"properties": {
"dots": {
"type": "string",
"analyzer": "prefix-test-analyzer",
//"index_analyzer": "prefix-test-analyzer", //deprecated
"search_analyzer": "keyword"
}
}
}
}
}'
echo
# Put some test data
curl -XPUT "localhost:9200/prefix-test/doc/1" -d '{"dots": "first.second.third"}'
curl -XPUT "localhost:9200/prefix-test/doc/2" -d '{"dots": "first.second.foo-bar"}'
curl -XPUT "localhost:9200/prefix-test/doc/3" -d '{"dots": "first.baz.something"}'
curl -XPOST "localhost:9200/prefix-test/_refresh"
echo
# Test searches.
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
"query": {
"term": {
"dots": "first"
}
}
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
"query": {
"term": {
"dots": "first.second"
}
}
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true" -d '{
"query": {
"term": {
"dots": "first.second.foo-bar"
}
}
}'
echo
curl -XPOST "localhost:9200/prefix-test/doc/_search?pretty=true&q=dots:first.second"
echo
There is also a much easier way, as pointed out in the Elasticsearch documentation:
just use:
{
"text_phrase_prefix" : {
"fieldname" : "yourprefix"
}
}
or since 0.19.9:
{
"match_phrase_prefix" : {
"fieldname" : "yourprefix"
}
}
instead of:
{
"prefix" : {
"fieldname" : "yourprefix"
}
Have a look at prefix queries.
$ curl -XGET 'http://localhost:9200/index/type/_search' -d '{
"query" : {
"prefix" : { "dots" : "first.second" }
}
}'
You could use wildcard characters in your query, something like this:
$ curl -XGET 'http://localhost:9200/myapp/_search' -d '{
"query": {
"wildcard": { "dots": "first.second*" }
}
}'
more examples about the syntax at: http://lucene.apache.org/core/old_versioned_docs/versions/2_9_1/queryparsersyntax.html
I was looking for a similar solution, but matching only a prefix. I found #imtov's answer, which got me almost there, but with one change: switching the analyzers around:
"mappings": {
"doc": {
"properties": {
"dots": {
"type": "string",
"analyzer": "keyword",
"search_analyzer": "prefix-test-analyzer"
}
}
}
}
instead of
"mappings": {
"doc": {
"properties": {
"dots": {
"type": "string",
"index_analyzer": "prefix-test-analyzer",
"search_analyzer": "keyword"
}
}
}
}
This way adding:
'{"dots": "first.second"}'
'{"dots": "first.third"}'
will index only these full tokens, without also storing the first, second, and third tokens separately.
Yet searching for either
first.second.anyotherstring
first.second
will correctly return only the first entry:
'{"dots": "first.second"}'
Not exactly what you asked for, but somewhat related, so I thought it could help someone.
