elasticsearch update a source field without re-indexing entire document

I have document {customerID: 111, name: bob, approved: yes}
The field "approved" is not indexed. I have a mapping set as "approved": { "type" : "string", "index" : "no" }
So only the fields "customerID" and "name" are indexed.
How can I update just the approved field in the _source without re-indexing the entire document? I can pass a partial document to update, such as {approved: no}.
Is this possible?

What you're looking for is a partial update. Internally it still re-indexes the whole document (a get, merge, and re-index happen implicitly), but you leave that hassle to Elasticsearch and save the extra network round trip of fetching and re-sending the document yourself. In principle ES could optimize this away for non-indexed fields, but as far as I know it doesn't do so currently.
POST so/t3/1
{
  "name": "Bob",
  "id": 1,
  "approved": "no"
}

POST so/t3/1/_update
{
  "doc": {
    "approved": "yes"
  }
}

GET so/t3/_search

Searching after the update returns the document with the new value in _source:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "so",
"_type": "t3",
"_id": "1",
"_score": 1,
"_source": {
"name": "Bob",
"id": 1,
"approved": "yes"
}
}
]
}
}
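
If the document might not exist yet, the same partial update can create it on the fly with doc_as_upsert (a minimal sketch against the toy so/t3 index from this answer):

POST so/t3/1/_update
{
  "doc": {
    "approved": "no"
  },
  "doc_as_upsert": true
}

When document 1 exists, only the approved field is merged into it; when it doesn't, the partial doc becomes the new document.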

Related

ElasticSearch: Exact match on Keyword datatype field with array of values

In ElasticSearch, I have a mapping for an email field and title field as given below:
{
  "person": {
    "mappings": {
      "_doc": {
        "email": {
          "type": "keyword",
          "boost": 80
        },
        "title": {
          "type": "text",
          "boost": 70
        }
      }
    }
  }
}
Each person can have more than one email address and title. So, I'm storing the values in arrays.
I use query_string to search for persons with an email address and/or title. Email address needs to match exactly.
I have indexed a document with the following data. Calling GET person/_search in Kibana will yield the following document in the result.
{
"took": 0,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "person",
"_type": "_doc",
"_id": "101",
"_score": 1,
"_source": {
"title": """["Actor", "Hero", "Model"]""",
"email": """["jdepp#hotmail.com", "johnny#hollywood.com", "jdepp#gmail.com", "johnny.depp#yahoo.com"]""",
"SEARCH_ENTITY": "PERSON"
}
}
]
}
}
Now, when I add an email search parameter, I don't get the document back in the result. Remember that email is of type keyword.
Request:
GET person/_search
{
  "query": {
    "query_string": {
      "query": "SEARCH_ENTITY:PERSON AND (email: (johnny.depp#yahoo.com))"
    }
  }
}
Response:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
But the same kind of query works for title field which is of type text.
Request:
GET person/_search
{
  "query": {
    "query_string": {
      "query": "SEARCH_ENTITY:PERSON AND (title: ((actor)))"
    }
  }
}
Response:
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 20.137747,
"hits": [
{
"_index": "person",
"_type": "_doc",
"_id": "101",
"_score": 20.137747,
"_source": {
"ID": "101",
"title": """["Actor", "Hero", "Model"]""",
"email": """["jdepp#hotmail.com", "johnny#hollywood.com", "jdepp#gmail.com", "johnny.depp#yahoo.com"]"""
}
}
]
}
}
Can someone tell me what I need to do to make this work for the email field, which is of keyword type?
Note: if I store only one email address without using an array, it works fine.
Thanks.
Make sure you parse the JSON array strings in title and email into real arrays before you index your docs, like so:
POST person/_doc/101
{
  "title": [
    "Actor",
    "Hero",
    "Model"
  ],
  "email": [
    "jdepp#hotmail.com",
    "johnny#hollywood.com",
    "jdepp#gmail.com",
    "johnny.depp#yahoo.com"
  ],
  "SEARCH_ENTITY": "PERSON"
}
Nothing needs to be changed about the mapping -- just the field values.
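Once the values are indexed as real arrays, exact matching on the keyword field behaves as expected. As a quick check (a sketch reusing the document above), a term query looks up the exact value:

GET person/_search
{
  "query": {
    "term": {
      "email": "johnny.depp#yahoo.com"
    }
  }
}

A keyword field holding an array indexes each element separately, so the term query (and the original query_string request) matches as soon as any element equals the given value.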

MLT (More Like This) elasticsearch query

I'm trying to use elasticsearch MLT (More Like This) query.
Only one doc in store:
{
  "_index": "monitors",
  "_type": "monitor",
  "_id": "AVTnvJ8SancUpEdFLMiq",
  "_score": 1,
  "_source": {
    "ProcessGroup": "test",
    "ProcessName": "test",
    "OpName": "test",
    "Domain": "test",
    "LogLevel": "Info",
    "StartDateTime": "2016-05-04 04:46:47",
    "EndDateTime": "2016-05-04 04:47:47",
    "MessageDateTime": "2016-05-04 04:46:47",
    "ApplicationCode": "test",
    "Status": "10"
  }
}
Query:
POST /_search
{
  "query": {
    "more_like_this": {
      "fields": ["ProcessName"],
      "like": "test",
      "min_term_freq": 1,
      "max_query_terms": 12
    }
  }
}
ProcessName is a not_analyzed field.
I expected to get this document back in the response, but instead I got nothing:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 2,
"successful": 2,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
Why is that?
Another question: suppose I have search-engine docs and I search for "stph". I expect to get a "Stephan Curry" suggestion because it's commonly searched. Fuzzy search doesn't fit because the edit distance is greater than 2, so is an MLT query a good option for this scenario?

I want to use a wildcard query for url in elasticsearch. I am using elasticsearch 2.3.0

My index looks like this:
GET pibtest1/_search
{
"took": 5,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 11,
"max_score": 1,
"hits": [
{
"_index": "pibtest1",
"_type": "SearchTech",
"_id": "_update",
"_score": 1,
"_source": {
"script": "ctx._source.remove(\"wiki_collection\")"
}
},
{
"_index": "pibtest1",
"_type": "SearchTech",
"_id": "http://www.searchtechnologies.com/bundles/jquery?v=gOdOgfykTFJnypePAvGweyMPwl-krhx8ntIhefPKelg1",
"_score": 1,
"_source": {
"extension": {
"X-Parsed-By": "org.apache.tika.parser.DefaultParser",
"Content-Encoding": "ISO-8859-1",
"resourceName": "http://www.searchtechnologies.com/bundles/jquery?v=gOdOgfykTFJnypePAvGweyMPwl-krhx8ntIhefPKelg1"
},
"keywords": "keywords-NOT-PROVIDED",
"default_collection": true,
"wiki_collection": false,
"description": "description-NOT-PROVIDED",
"connectorSpecific": {
"discoveredBy": "http://www.searchtechnologies.com/",
"xslt": "false",
"pathFromSeed": "E",
"md5": "OKTGVLEWTE5V4PWXUBM2RK3KMQ"
},
"title": "Title-NOT-PROVIDED",
"url": "http://www.searchtechnologies.com/bundles/jquery?v=gOdOgfykTFJnypePAvGweyMPwl-krhx8ntIhefPKelg1",
"remove": "wiki_collection",
"UD": "http://www.searchtechnologies.com/bundles/jquery?v=gOdOgfykTFJnypePAvGweyMPwl-krhx8ntIhefPKelg1",
Now I want to use a wildcard query to search for few url which includes some pattern(for eg. http://www.searchtechnologies.com/bundles)
This is my wildcard query:
GET pibtest1/_search
{
  "query": {
    "wildcard": {
      "url": {
        "value": "http://www.searchtechnologies.com/bundles*"
      }
    }
  }
}
I am using the "*" wildcard, which matches any character sequence, but I am not getting any results. My output looks like this:
{
"took": 11,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
I want my results to include the URLs that match the "http://www.searchtechnologies.com/bundles" pattern. Any help would be appreciated.
Based on the comments, your url field is an analyzed field, so when you index a document the URL is tokenized into terms like ["www.searchtechnologies.com", "v", "jquery", "gOdOgfykTFJnypePAvGweyMPwl", ...], and the wildcard is matched against those individual terms, so your query won't match the full URL.
You should delete your index.
Insert a mapping that specifies the url field as not analyzed ({"index": "not_analyzed"}).
Insert your data again.
Run the wildcard query (see the sketch below).
If you don't want to delete your index because of the downtime, check: https://www.elastic.co/blog/changing-mapping-with-zero-downtime
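
A minimal sketch of those steps for ES 2.x (pibtest1 and SearchTech are just the index and type names from the question; the mapping only shows the url field, so add your other fields as needed):

DELETE pibtest1

PUT pibtest1
{
  "mappings": {
    "SearchTech": {
      "properties": {
        "url": {
          "type": "string",
          "index": "not_analyzed"
        }
      }
    }
  }
}

After re-indexing your documents, the wildcard query from the question should return the matching URLs:

GET pibtest1/_search
{
  "query": {
    "wildcard": {
      "url": {
        "value": "http://www.searchtechnologies.com/bundles*"
      }
    }
  }
}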

How to filter out elements from an array that don't match the query?

Stack Overflow won't let me write that much example code, so I put it on a gist.
So I have this index
with this mapping
and here is a sample document I insert into the newly created mapping
this is my query
GET products/paramSuggestions/_search
{
  "size": 10,
  "query": {
    "filtered": {
      "query": {
        "match": {
          "paramName": {
            "query": "col",
            "operator": "and"
          }
        }
      }
    }
  }
}
this is the unwanted result I get from the previous query:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.33217794,
"hits": [
{
"_index": "products",
"_type": "paramSuggestions",
"_id": "1",
"_score": 0.33217794,
"_source": {
"productName": "iphone 6",
"params": [
{
"paramName": "color",
"value": "white"
},
{
"paramName": "capacity",
"value": "32GB"
}
]
}
}
]
}
}
and finally the wanted result, i.e. how I want the query result to look:
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.33217794,
"hits": [
{
"_index": "products",
"_type": "paramSuggestions",
"_id": "1",
"_score": 0.33217794,
"_source": {
"productName": "iphone 6",
"params": [
{
"paramName": "color",
"value": "white"
}
]
}
}
]
}
}
What should the query look like to achieve the wanted result, with the array field filtered down to only the items that match the query? In other words, all other non-matching array items should not appear in the final result.
The final result is the _source document that you indexed. There is no feature that lets you mask field elements of your document out of the Elasticsearch response.
That said, depending on your goal, you can look into how highlighters and suggesters identify the result terms matching the query, or possibly roll your own client-side masking using the info returned from setting "explain": true in your query; a highlighting sketch follows.
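
For instance, a highlighter can at least tell you which paramName values matched, which the client can then use to prune the params array itself. A sketch against the index from the question (same match query as above, with a highlight block added):

GET products/paramSuggestions/_search
{
  "size": 10,
  "query": {
    "match": {
      "paramName": {
        "query": "col",
        "operator": "and"
      }
    }
  },
  "highlight": {
    "fields": {
      "paramName": {}
    }
  }
}

The hit still carries the full _source, but the highlight section only lists the values that matched (in this case the color entry), which is enough to drop the non-matching params entries client-side.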

Does an empty field in a document take up space in elasticsearch?

Does an empty field in a document take up space in elasticsearch?
For example, in the case below, is the total amount of space used to store the document the same in Case A as in Case B (assuming the field "colors" is defined in the mapping)?
Case A
{"features":
"price": 1,
"colors":[]
}
Case B
{"features":
"price": 1,
}
If you keep the default settings, the original document is stored in the _source field, so there will be a difference: the original document of case A is bigger than that of case B.
Otherwise, there should be no difference: for case A, no term is added to the index for the colors field, since it's empty.
You can use the _size field to see the size of the original document as indexed, which is the size of the _source field:
POST stack
{
  "mappings": {
    "features": {
      "_size": { "enabled": true, "store": true },
      "properties": {
        "price": {
          "type": "byte"
        },
        "colors": {
          "type": "string"
        }
      }
    }
  }
}

PUT stack/features/1
{
  "price": 1
}

PUT stack/features/2
{
  "price": 1,
  "colors": []
}

POST stack/features/_search
{
  "fields": [
    "_size"
  ]
}
The last query outputs the following result, which shows that document 2 takes up more space than document 1:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 1,
"hits": [
{
"_index": "stack",
"_type": "features",
"_id": "1",
"_score": 1,
"fields": {
"_size": 16
}
},
{
"_index": "stack",
"_type": "features",
"_id": "2",
"_score": 1,
"fields": {
"_size": 32
}
}
]
}
}
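
To confirm that the empty array adds nothing to the inverted index (only to _source), an exists query on colors matches neither document. A sketch for the same toy index (the exact exists syntax varies slightly across ES versions):

POST stack/features/_search
{
  "query": {
    "exists": {
      "field": "colors"
    }
  }
}

Neither document is returned, because an empty array produces no indexed values for the colors field.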
