How can I make Elasticsearch return nested values in the format hits {value1: ..., value2: ..., value3: ..., etc.}?
This is my request:
{
"_source": 0,
"query": {
"bool": {
"must": [
{
"nested": {
"path": "photo",
"query": {
"bool": {
"must": [
{
"match": {
"photo.hello": "true"
}
}
]
}
},
"inner_hits" : {}
}
}
]
}
}
}
And this is the response I get:
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 1.2231436,
"hits": [
{
"_index": ".3eautiful",
"_type": "profile",
"_id": "6UAaCls5iSgavEtFE2qMX902Xmb2",
"_score": 1.2231436,
"inner_hits": {
"photo": {
"hits": {
"total": 1,
"max_score": 1.2231436,
"hits": [
{
"_index": ".3eautiful",
"_type": "profile",
"_id": "6UAaCls5iSgavEtFE2qMX902Xmb2",
"_nested": {
"field": "photo",
"offset": 0
},
"_score": 1.2231436,
"_source": {
"hello": "true",
"i_am_superCOOL": "true",
"xoxox": "true",
"id": "-KSDRx5BN54JHitoq7Wb"
}
}
]
}
}
}
},
{
"_index": ".3eautiful",
"_type": "profile",
"_id": "KDFbeXrOedf7b6NVRGMO0HDIFgx1",
"_score": 1.2231436,
"inner_hits": {
"photo": {
"hits": {
"total": 2,
"max_score": 1.2231436,
"hits": [
{
"_index": ".3eautiful",
"_type": "profile",
"_id": "KDFbeXrOedf7b6NVRGMO0HDIFgx1",
"_nested": {
"field": "photo",
"offset": 1
},
"_score": 1.2231436,
"_source": {
"alahu": "true",
"hello": "true",
"same": "true",
"smukais": "true",
"id": "-KSDJzyUC_N5je-cR2aT"
}
},
{
"_index": ".3eautiful",
"_type": "profile",
"_id": "KDFbeXrOedf7b6NVRGMO0HDIFgx1",
"_nested": {
"field": "photo",
"offset": 0
},
"_score": 1.2231436,
"_source": {
"hello": "true",
"same": "true",
"selfyyy": "true",
"superSexy": "true",
"id": "-KPn4p7spS8NO7IVSLdF"
}
}
]
}
}
}
}
]
}
}
I am using a two-dimensional dynamic attribute search. The problem with this approach is that a single user can produce 20 results, but I need the result to be property-based.
For now I have just stuck to the same format.
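As a side note (only a hedged sketch, not something from the request above): inner_hits accepts its own options such as name and size, so if the problem is the sheer number of nested hits coming back per user, they can be capped per document, e.g.:
"inner_hits": {
  "name": "matched_photos",
  "size": 20
}
The name "matched_photos" and the size value here are only examples.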
Right now I am using a hunspell dictionary in my ES search. It behaves strangely and I don't understand why. For example, I have several entries in my index with the word "перец" ("pepper") in different forms:
1 ч. л. смеси перцев горошком;
2–3 колечка красного перца чили с семенами;
черный молотый перец;
and several entries with the word "колодец" ("well") in different forms:
несколько колодцев;
3 колодца;
1 колодец;
My index has the following settings:
PUT http://localhost:9200/ingredient
Content-Type: application/json
{
"settings": {
"analysis": {
"analyzer": {
"custom_analyzer": {
"tokenizer": "standard",
"filter": [
"lowercase",
"ru_RU",
"my_stemmer"
],
"char_filter": [
"html_strip"
]
}
},
"filter": {
"my_stemmer": {
"type": "stemmer",
"language": "russian"
},
"ru_RU": {
"type": "hunspell",
"locale": "ru_RU"
}
}
}
},
"mappings": {
"properties": {
"name": {
"type": "text",
"analyzer": "custom_analyzer"
}
}
}
}
When I make my search query for "колодец" like this:
GET http://localhost:9200/ingredient/_search?pretty
Content-Type: application/json
{
"query": {
"query_string": {
"query": "колодец",
"default_field": "name"
}
}
}
I receive the following JSON:
{
"took": 4,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 3,
"relation": "eq"
},
"max_score": 5.0841255,
"hits": [
{
"_index": "ingredient",
"_type": "_doc",
"_id": "2940d2bc-59ca-4c41-98d6-803d50913d04",
"_score": 5.0841255,
"_source": {
"name": "несколько колодцев",
"id": "2940d2bc-59ca-4c41-98d6-803d50913d04",
"_meta": {}
}
},
{
"_index": "ingredient",
"_type": "_doc",
"_id": "2940d2bc-59ca-4c41-98d6-803d50913d05",
"_score": 5.0841255,
"_source": {
"name": "3 колодца",
"id": "2940d2bc-59ca-4c41-98d6-803d50913d05",
"_meta": {}
}
},
{
"_index": "ingredient",
"_type": "_doc",
"_id": "2940d2bc-59ca-4c41-98d6-803d50913d06",
"_score": 5.0841255,
"_source": {
"name": "1 колодец",
"id": "2940d2bc-59ca-4c41-98d6-803d50913d06",
"_meta": {}
}
}
]
}
}
Response code: 200 (OK); Time: 45ms; Content length: 1199 bytes
But when I make a similar request with "перец":
GET http://localhost:9200/ingredient/_search?pretty
Content-Type: application/json
{
"query": {
"query_string": {
"query": "перец",
"default_field": "name"
}
}
}
I only get this:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"skipped": 0,
"failed": 0
},
"hits": {
"total": {
"value": 23,
"relation": "eq"
},
"max_score": 3.1693017,
"hits": [
{
"_index": "ingredient",
"_type": "_doc",
"_id": "9c72cba2-2986-40dd-b15b-0df0288e91f1",
"_score": 2.8541024,
"_source": {
"name": "свежемолотый черный перец",
"id": "9c72cba2-2986-40dd-b15b-0df0288e91f1",
"_meta": {}
}
},
]
}
}
I get neither "1 ч. л. смеси перцев горошком" nor "2–3 колечка красного перца чили с семенами".
It seems strange to me because "колодец" and "перец" form their morphological variants in a similar way. Do I have this problem because my hunspell dictionary is not complete enough? If so, where can I find a more complete hunspell dictionary, or another dictionary, for the Russian language?
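One way to narrow this down (a diagnostic sketch, not part of the original question) is to run the custom analyzer directly against the phrases that fail to match and compare the tokens produced for "перец", "перцев" and "перца":
GET http://localhost:9200/ingredient/_analyze
Content-Type: application/json
{
  "analyzer": "custom_analyzer",
  "text": "1 ч. л. смеси перцев горошком"
}
If "перцев" does not reduce to the same stem that "перец" produces, the hunspell dictionary (or the order of ru_RU and my_stemmer in the filter chain) is the likely culprit.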
I'm using Elasticsearch 2.4.
I need to get all purchases that match all queries.
I'm currently using the inner_hits feature, but it doesn't work as expected because it only shows the match of the current nested query; the problem is the combination with the main document query.
I have this mapping, and below I created an example with my comments:
PUT /example_contact_purchases
{
"mappings": {
"contact": {
"dynamic": false,
"properties": {
"name": {
"type": "string"
},
"country": {
"type": "string"
},
"purchases": {
"type": "nested",
"properties": {
"uuid":{
"type":"string"
},
"brand":{
"type":"string"
}
}
}
}
}
}
}
POST example_contact_purchases/contact
{
"name" : "Fran",
"country": "ES",
"purchases" : [
{
"uuid" : "23",
"brand":"Sony"
},
{
"uuid":"23",
"brand":"Sony"
}
]
}
POST example_contact_purchases/contact
{
"name" : "Jhon",
"country": "UK",
"purchases" : [
{
"uuid" : "45",
"brand": "Lenovo"
},
{
"uuid":"23",
"brand":"Sony"
},
{
"uuid":"77",
"brand":"HP"
}
]
}
POST example_contact_purchases/contact
{
"name" : "Lucas",
"country": "ES",
"purchases" : [
{
"uuid" : "45",
"brand": "Lenovo"
},
{
"uuid":"23",
"brand":"Sony"
},
{
"uuid":"77",
"brand":"HP"
}
]
}
GET example_contact_purchases/contact/_search
{
"query": {
"bool": {
"should": [
{"bool": {
"must": [
{
"query_string": {
"query": "country:ES"
}
},
{
"nested": {
"path": "purchases",
"inner_hits":{
"name":"0"
},
"filter": {
"query": {
"query_string": {
"query": "(purchases.brand:Sony)"
}
}
}
}
}
]
}},
{"bool": {
"must": [
{
"query_string": {
"query": "country:UK"
}
},
{
"nested": {
"path": "purchases",
"inner_hits":{
"name":"1"
},
"filter": {
"query": {
"query_string": {
"query": "(purchases.uuid:45)"
}
}
}
}
}
]
}
}
]
}
}
}
I am using a simple query like this:
"(country.raw:ES AND purchases.brand:Sony) OR (country:UK AND purchases.uuid:45)"
And the result of the search query is:
{
"took": 10,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": 0.5949223,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJJdZXthyTIlmcERM",
"_score": 0.5949223,
"_source": {
"name": "Jhon",
"country": "UK",
"purchases": [
{
"uuid": "45",
"brand": "Lenovo"
},
{
"uuid": "23",
"brand": "Sony"
},
{
"uuid": "77",
"brand": "HP"
}
]
},
"inner_hits": {
"0": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJJdZXthyTIlmcERM",
"_nested": {
"field": "purchases",
"offset": 1
},
"_score": 1,
"_source": {
"uuid": "23",
"brand": "Sony"
}
}
]
}
},
"1": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJJdZXthyTIlmcERM",
"_nested": {
"field": "purchases",
"offset": 0
},
"_score": 1,
"_source": {
"uuid": "45",
"brand": "Lenovo"
}
}
]
}
}
}
},
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJKBHXthyTIlmcERN",
"_score": 0.5949223,
"_source": {
"name": "Lucas",
"country": "ES",
"purchases": [
{
"uuid": "45",
"brand": "Lenovo"
},
{
"uuid": "23",
"brand": "Sony"
},
{
"uuid": "77",
"brand": "HP"
}
]
},
"inner_hits": {
"0": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJKBHXthyTIlmcERN",
"_nested": {
"field": "purchases",
"offset": 1
},
"_score": 1,
"_source": {
"uuid": "23",
"brand": "Sony"
}
}
]
}
},
"1": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJKBHXthyTIlmcERN",
"_nested": {
"field": "purchases",
"offset": 0
},
"_score": 1,
"_source": {
"uuid": "45",
"brand": "Lenovo"
}
}
]
}
}
}
},
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJI1SXthyTIlmcERL",
"_score": 0.5139209,
"_source": {
"name": "Fran",
"country": "ES",
"purchases": [
{
"uuid": "23",
"brand": "Sony"
},
{
"uuid": "23",
"brand": "Sony"
}
]
},
"inner_hits": {
"0": {
"hits": {
"total": 2,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJI1SXthyTIlmcERL",
"_nested": {
"field": "purchases",
"offset": 1
},
"_score": 1,
"_source": {
"uuid": "23",
"brand": "Sony"
}
},
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJI1SXthyTIlmcERL",
"_nested": {
"field": "purchases",
"offset": 0
},
"_score": 1,
"_source": {
"uuid": "23",
"brand": "Sony"
}
}
]
}
},
"1": {
"hits": {
"total": 0,
"max_score": null,
"hits": []
}
}
}
}
]
}
}
Unfortunately, the first result is wrong:
"inner_hits": {
"0": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJJdZXthyTIlmcERM",
"_nested": {
"field": "purchases",
"offset": 1
},
"_score": 1,
"_source": {
"uuid": "23",
"brand": "Sony"
}
}
]
}
},
"1": {
"hits": {
"total": 1,
"max_score": 1,
"hits": [
{
"_index": "example_contact_purchases",
"_type": "contact",
"_id": "AXFfJJdZXthyTIlmcERM",
"_nested": {
"field": "purchases",
"offset": 0
},
"_score": 1,
"_source": {
"uuid": "45",
"brand": "Lenovo"
}
}
]
}
}
}
It should only show the purchase for Jhon (UK) with parameters:
{"uuid": "45", "brand": "Lenovo"} (the inner_hits entry named "1")
Thanks
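For reference, one hedged workaround (not from the original post, and only a sketch): tag each bool branch with a named query via _name. Every top-level hit then carries a matched_queries array, so the client can tell which branch actually matched and ignore the inner_hits block belonging to the other branch:
GET example_contact_purchases/contact/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "bool": {
            "_name": "es_sony",
            "must": [
              { "query_string": { "query": "country:ES" } },
              {
                "nested": {
                  "path": "purchases",
                  "inner_hits": { "name": "0" },
                  "filter": { "query": { "query_string": { "query": "purchases.brand:Sony" } } }
                }
              }
            ]
          }
        },
        {
          "bool": {
            "_name": "uk_45",
            "must": [
              { "query_string": { "query": "country:UK" } },
              {
                "nested": {
                  "path": "purchases",
                  "inner_hits": { "name": "1" },
                  "filter": { "query": { "query_string": { "query": "purchases.uuid:45" } } }
                }
              }
            ]
          }
        }
      ]
    }
  }
}
With this, the hit for Jhon would report matched_queries: ["uk_45"], so only the inner_hits block named "1" is relevant for that document.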
I'm looking for a solution to find duplicate (exact) docs in Elasticsearch.
I've read https://qbox.io/blog/minimizing-document-duplication-in-elasticsearch and tried it, but its results are not what I expected. As an example, this is my simple sample query:
GET /last_month_ads/_search
{
"size": 0,
"fields": [
"title"
],
"aggs": {
"duplicateCount": {
"terms": {
"field": "title",
"size" : 3
},
"aggs": {
"duplicateDocuments": {
"top_hits": {}
}
}
}
}
}
and the result is
{
"took": 981,
"timed_out": false,
"_shards": {
"total": 2,
"successful": 2,
"failed": 0
},
"hits": {
"total": 482909,
"max_score": 0,
"hits": []
},
"aggregations": {
"duplicateCount": {
"doc_count_error_upper_bound": 11667,
"sum_other_doc_count": 1958146,
"buckets": [
{
"key": "CM",
"doc_count": 46867,
"duplicateDocuments": {
"hits": {
"total": 46867,
"max_score": 1,
"hits": [
{
"_index": "last_month_ads",
"_type": "ads",
"_id": "AV73EtoBQTqkjEa7YQG1",
"_score": 1,
"_source": {
"id": "20642316",
"cat_id": "43606",
"user_id": "1825875",
"title": "125 CM HOME",
"desc": "DESC"
}
},
{
"_index": "last_month_ads",
"_type": "ads",
"_id": "AV73EtpdQTqkjEa7YQHc",
"_score": 1,
"_source": {
"id": "20642379",
"cat_id": "43604",
"user_id": "4642299",
"title": "Home with Big CM",
"desc": "DESC"
}
},
{
"_index": "last_month_ads",
"_type": "ads",
"_id": "AV73Etp6QTqkjEa7YQHp",
"_score": 1,
"_source": {
"id": "20642409",
"cat_id": "43607",
"user_id": "4813303",
"title": "100 of live CM is here ",
"desc": "DESC"
}
}
]
}
}
},
}
]
}
}
}
I'm looking for exact (or similar) duplicate titles, not the most frequent words in titles. How can I get duplicate (similar) docs in Elasticsearch?
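For reference, a minimal sketch of the usual approach (this assumes a not_analyzed or keyword sub-field such as title.raw, which the question does not show): aggregate on the untokenized title and keep only buckets with more than one document via min_doc_count:
GET /last_month_ads/_search
{
  "size": 0,
  "aggs": {
    "duplicate_titles": {
      "terms": {
        "field": "title.raw",
        "min_doc_count": 2,
        "size": 100
      },
      "aggs": {
        "duplicateDocuments": {
          "top_hits": { "size": 3 }
        }
      }
    }
  }
}
Aggregating on the analyzed title field, as in the query above, buckets individual terms like "CM" rather than whole titles, which is why unrelated ads end up in the same bucket.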
How to sort by match, prioritising the left-most matched words
Explanation
Sort the prefix query results by the word they match, but prioritise matches in words further to the left.
Tests I've made
Data
DELETE /test
PUT /test
PUT /test/person/_mapping
{
"properties": {
"name": {
"type": "multi_field",
"fields": {
"name": {"type": "string"},
"original": {
"type": "string",
"index": "not_analyzed"
}
}
}
}
}
PUT /test/person/1
{"name": "Berta Kassulke"}
PUT /test/person/2
{"name": "Kaley Bartoletti"}
PUT /test/person/3
{"name": "Kali Hahn"}
PUT /test/person/4
{"name": "Karolann Klein"}
PUT /test/person/5
{"name": "Sofia Mandez Kaloo"}
The mapping was added for the 'sort on original value' test.
Simple query
Query
POST /test/person/_search
{
"query": {
"prefix": {"name": {"value": "ka"}}
}
}
Result
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 4,
"max_score": 1,
"hits": [
{
"_index": "test",
"_type": "person",
"_id": "4",
"_score": 1,
"_source": {
"name": "Karolann Klein"
}
},
{
"_index": "test",
"_type": "person",
"_id": "5",
"_score": 1,
"_source": {
"name": "Sofia Mandez Kaloo"
}
},
{
"_index": "test",
"_type": "person",
"_id": "1",
"_score": 1,
"_source": {
"name": "Berta Kassulke"
}
},
{
"_index": "test",
"_type": "person",
"_id": "2",
"_score": 1,
"_source": {
"name": "Kaley Bartoletti"
}
},
{
"_index": "test",
"_type": "person",
"_id": "3",
"_score": 1,
"_source": {
"name": "Kali Hahn"
}
}
]
}
}
With sorting
Request
POST /test/person/_search
{
"query": {
"prefix": {"name": {"value": "ka"}}
},
"sort": {"name": {"order": "asc"}}
}
Result
{
"took": 7,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 4,
"max_score": null,
"hits": [
{
"_index": "test",
"_type": "person",
"_id": "2",
"_score": null,
"_source": {
"name": "Kaley Bartoletti"
},
"sort": [
"bartoletti"
]
},
{
"_index": "test",
"_type": "person",
"_id": "1",
"_score": null,
"_source": {
"name": "Berta Kassulke"
},
"sort": [
"berta"
]
},
{
"_index": "test",
"_type": "person",
"_id": "3",
"_score": null,
"_source": {
"name": "Kali Hahn"
},
"sort": [
"hahn"
]
},
{
"_index": "test",
"_type": "person",
"_id": "5",
"_score": null,
"_source": {
"name": "Sofia Mandez Kaloo"
},
"sort": [
"kaloo"
]
},
{
"_index": "test",
"_type": "person",
"_id": "4",
"_score": null,
"_source": {
"name": "Karolann Klein"
},
"sort": [
"karolann"
]
}
]
}
}
With sort on original value
Query
POST /test/person/_search
{
"query": {
"prefix": {"name": {"value": "ka"}}
},
"sort": {"name.original": {"order": "asc"}}
}
Result
{
"took": 6,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 4,
"max_score": null,
"hits": [
{
"_index": "test",
"_type": "person",
"_id": "1",
"_score": null,
"_source": {
"name": "Berta Kassulke"
},
"sort": [
"Berta Kassulke"
]
},
{
"_index": "test",
"_type": "person",
"_id": "2",
"_score": null,
"_source": {
"name": "Kaley Bartoletti"
},
"sort": [
"Kaley Bartoletti"
]
},
{
"_index": "test",
"_type": "person",
"_id": "3",
"_score": null,
"_source": {
"name": "Kali Hahn"
},
"sort": [
"Kali Hahn"
]
},
{
"_index": "test",
"_type": "person",
"_id": "4",
"_score": null,
"_source": {
"name": "Karolann Klein"
},
"sort": [
"Karolann Klein"
]
},
{
"_index": "test",
"_type": "person",
"_id": "5",
"_score": null,
"_source": {
"name": "Sofia Mandez Kaloo"
},
"sort": [
"Sofia Mandez Kaloo"
]
}
]
}
}
Intended result
Sorted by name ascending, but prioritising the matches on the left-most words:
Kaley Bartoletti
Kali Hahn
Karolann Klein
Berta Kassulke
Sofia Mandez Kaloo
Good question. One way to achieve this would be with a combination of an edge ngram filter and a span_first query.
These are my settings:
{
"settings": {
"analysis": {
"analyzer": {
"my_custom_analyzer": {
"tokenizer": "standard",
"filter": ["lowercase",
"edge_filter",
"asciifolding"
]
}
},
"filter": {
"edge_filter": {
"type": "edgeNGram",
"min_gram": 2,
"max_gram": 8
}
}
}
},
"mappings": {
"person": {
"properties": {
"name": {
"type": "string",
"analyzer": "my_custom_analyzer",
"search_analyzer": "standard",
"fields": {
"standard": {
"type": "string"
}
}
}
}
}
}
}
After that I indexed your sample documents. Then I wrote the following query with dis_max. Notice that the end parameter of the first span_first query is 1, so that clause will prioritize (score higher) a match in the left-most word. I sort first by score and then by name.
{
"query": {
"dis_max": {
"tie_breaker": 0.7,
"boost": 1.2,
"queries": [
{
"match": {
"name": "ka"
}
},
{
"span_first": {
"match": {
"span_term": {
"name": "ka"
}
},
"end": 1
}
},
{
"span_first": {
"match": {
"span_term": {
"name": "ka"
}
},
"end": 2
}
}
]
}
},
"sort": [
{
"_score": {
"order": "desc"
}
},
{
"name.standard": {
"order": "asc"
}
}
]
}
The result I get:
"hits": [
{
"_index": "esedge",
"_type": "policy_data",
"_id": "2",
"_score": 0.72272325,
"_source": {
"name": "Kaley Bartoletti"
},
"sort": [
0.72272325,
"bartoletti"
]
},
{
"_index": "esedge",
"_type": "policy_data",
"_id": "3",
"_score": 0.72272325,
"_source": {
"name": "Kali Hahn"
},
"sort": [
0.72272325,
"hahn"
]
},
{
"_index": "esedge",
"_type": "policy_data",
"_id": "4",
"_score": 0.72272325,
"_source": {
"name": "Karolann Klein"
},
"sort": [
0.72272325,
"karolann"
]
},
{
"_index": "esedge",
"_type": "policy_data",
"_id": "1",
"_score": 0.54295504,
"_source": {
"name": "Berta Kassulke"
},
"sort": [
0.54295504,
"berta"
]
},
{
"_index": "esedge",
"_type": "policy_data",
"_id": "5",
"_score": 0.2905494,
"_source": {
"name": "Sofia Mandez Kaloo"
},
"sort": [
0.2905494,
"kaloo"
]
}
]
I hope this helps.
I am using Elasticsearch via NEST (C#). I have a large list of information about people:
{
firstName: 'Frank',
lastName: 'Jones',
City: 'New York'
}
I'd like to be able to filter and sort this list of items by lastName, and also to order by length, so that people who have only 5 characters in their last name appear at the beginning of the result set, followed by people with 10 characters.
So, in pseudocode, I'd like to do something like:
list.wildcard("j*").sort(m => lastName.length)
You can do the sorting with script-based sorting.
As a toy example, I set up a trivial index with a few documents:
PUT /test_index
POST /test_index/doc/_bulk
{"index":{"_id":1}}
{"name":"Bob"}
{"index":{"_id":2}}
{"name":"Jeff"}
{"index":{"_id":3}}
{"name":"Darlene"}
{"index":{"_id":4}}
{"name":"Jose"}
Then I can order search results like this:
POST /test_index/_search
{
"query": {
"match_all": {}
},
"sort": {
"_script": {
"script": "doc['name'].value.length()",
"type": "number",
"order": "asc"
}
}
}
...
{
"took": 2,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 4,
"max_score": null,
"hits": [
{
"_index": "test_index",
"_type": "doc",
"_id": "1",
"_score": null,
"_source": {
"name": "Bob"
},
"sort": [
3
]
},
{
"_index": "test_index",
"_type": "doc",
"_id": "4",
"_score": null,
"_source": {
"name": "Jose"
},
"sort": [
4
]
},
{
"_index": "test_index",
"_type": "doc",
"_id": "2",
"_score": null,
"_source": {
"name": "Jeff"
},
"sort": [
4
]
},
{
"_index": "test_index",
"_type": "doc",
"_id": "3",
"_score": null,
"_source": {
"name": "Darlene"
},
"sort": [
7
]
}
]
}
}
To filter by length, I can use a script filter in a similar way:
POST /test_index/_search
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"script": {
"script": "doc['name'].value.length() > 3",
"params": {}
}
}
}
},
"sort": {
"_script": {
"script": "doc['name'].value.length()",
"type": "number",
"order": "asc"
}
}
}
...
{
"took": 3,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 3,
"max_score": null,
"hits": [
{
"_index": "test_index",
"_type": "doc",
"_id": "4",
"_score": null,
"_source": {
"name": "Jose"
},
"sort": [
4
]
},
{
"_index": "test_index",
"_type": "doc",
"_id": "2",
"_score": null,
"_source": {
"name": "Jeff"
},
"sort": [
4
]
},
{
"_index": "test_index",
"_type": "doc",
"_id": "3",
"_score": null,
"_source": {
"name": "Darlene"
},
"sort": [
7
]
}
]
}
}
Here's the code I used:
http://sense.qbox.io/gist/22fef6dc5453eaaae3be5fb7609663cc77c43dab
P.S.: If any of the last names contain spaces, you might want to use "index": "not_analyzed" on that field.
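For completeness, a hedged sketch of that mapping tweak (the raw sub-field name is only an example); the sort and filter scripts would then reference doc['name.raw'] so that a last name containing spaces is treated as a single value:
PUT /test_index
{
  "mappings": {
    "doc": {
      "properties": {
        "name": {
          "type": "string",
          "fields": {
            "raw": {
              "type": "string",
              "index": "not_analyzed"
            }
          }
        }
      }
    }
  }
}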