I have a nested query where I want to compare the last two inputs and display the smaller one.
For example:
"price_history":[
{"id":0,
"price":16.99,
"date":"2021-02-07"
},
{"id":1,
"price":20.99,
"date":"2021-02-08"
},
{"id":2,
"price":16.99,
"date":"2021-02-09"
}
]
So I want only id 1 and 2 to be compared and only id 2 to be shown.
I am looking for help to build such a query and am open to any other data model suggestions.
I understand you want to compare the last 2 prices and display the lowest. That value can be calculated at index time, so it should be computed there instead of re-calculated on every query.
I will show you how to do it at index time, assuming you have no control over the software that is ingesting the data, using an ingest pipeline that processes your documents before they are stored in Elasticsearch.
We are going to create a new field called best_price that stores this price, so you can then run queries against this field instead of calculating it on each query.
Ingesting data
POST test_uzer/_doc
{
"price_history": [
{
"id": 0,
"price": 16.99,
"date": "2021-02-07"
},
{
"id": 1,
"price": 20.99,
"date": "2021-02-08"
},
{
"id": 2,
"price": 16.99,
"date": "2021-02-09"
}
]
}
POST test_uzer/_doc
{
"price_history": [
{
"id": 0,
"price": 1.99,
"date": "2021-02-07"
},
{
"id": 1,
"price": 15.99,
"date": "2021-02-08"
},
{
"id": 2,
"price": 16.99,
"date": "2021-02-09"
}
]
}
Creating the ingest pipeline
PUT _ingest/pipeline/best_price
{
"description": "return the best price between the 2 last",
"processors": [
{
"script": {
"lang": "painless",
"source": "def prices = ctx.price_history; def length = prices.length; ctx.best_price = prices[length - 1].price > prices[length - 2].price ? prices[length - 2].price : prices[length - 1].price"
}
}
]
}
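As a sanity check, here is the same logic as the Painless script above, sketched in plain Python (no Elasticsearch involved): take the last two entries of price_history and keep the lower price.

```python
def best_price(doc):
    """Return the lower of the last two prices in doc['price_history'],
    mirroring the Painless ternary in the ingest pipeline."""
    prices = doc["price_history"]
    last, prev = prices[-1]["price"], prices[-2]["price"]
    return prev if last > prev else last

doc = {"price_history": [
    {"id": 0, "price": 16.99, "date": "2021-02-07"},
    {"id": 1, "price": 20.99, "date": "2021-02-08"},
    {"id": 2, "price": 16.99, "date": "2021-02-09"},
]}
print(best_price(doc))  # -> 16.99
```

For the sample document above, ids 1 and 2 are compared and 16.99 (id 2) wins, which is exactly what the pipeline writes into best_price.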
Reindexing the data to have the new field
POST _reindex
{
"source": {
"index": "test_uzer"
},
"dest": {
"index": "test_uzer_new",
"pipeline": "best_price"
}
}
Set the ingest pipeline as the default so it applies to all new documents
PUT test_uzer_new/_settings
{
"index": {
"default_pipeline": "best_price"
}
}
Ingest document to test
POST test_uzer_new/_doc
{
"price_history": [
{
"id": 0,
"price": 2,
"date": "2021-02-07"
},
{
"id": 1,
"price": 3,
"date": "2021-02-08"
},
{
"id": 2,
"price": 1,
"date": "2021-02-09"
}
]
}
best_price should be 1
POST test_uzer_new/_search
{
"took" : 0,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 3,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "test_uzer_new",
"_type" : "_doc",
"_id" : "a8oz-ncBRP0FeAG5geN1",
"_score" : 1.0,
"_source" : {
"best_price" : 16.99,
"price_history" : [
{
"date" : "2021-02-07",
"price" : 16.99,
"id" : 0
},
{
"date" : "2021-02-08",
"price" : 20.99,
"id" : 1
},
{
"date" : "2021-02-09",
"price" : 16.99,
"id" : 2
}
]
}
},
{
"_index" : "test_uzer_new",
"_type" : "_doc",
"_id" : "bMpE-ncBRP0FeAG54ONR",
"_score" : 1.0,
"_source" : {
"best_price" : 15.99,
"price_history" : [
{
"date" : "2021-02-07",
"price" : 1.99,
"id" : 0
},
{
"date" : "2021-02-08",
"price" : 15.99,
"id" : 1
},
{
"date" : "2021-02-09",
"price" : 16.99,
"id" : 2
}
]
}
},
{
"_index" : "test_uzer_new",
"_type" : "_doc",
"_id" : "bspJ-ncBRP0FeAG59uOi",
"_score" : 1.0,
"_source" : {
"best_price" : 1,
"price_history" : [
{
"date" : "2021-02-07",
"price" : 2,
"id" : 0
},
{
"date" : "2021-02-08",
"price" : 3,
"id" : 1
},
{
"date" : "2021-02-09",
"price" : 1,
"id" : 2
}
]
}
}
]
}
}
Works!
Of course there are many ways to achieve what you want, but the point is to index and pre-calculate as much data as you can before querying. This will make your searches faster.
Related
I'm trying to get all the documents with highest field value (+ conditional term filter)
Given the Employees mapping
Name Department Salary
----------------------------
Tomcat Dev 100
Bobcat QA 90
Beast QA 100
Tom Dev 100
Bob Dev 90
In SQL it would look like
select * from Employees where Salary = (select max(salary) from Employees)
expected output
Name Department Salary
----------------------------
Tomcat Dev 100
Beast QA 100
Tom Dev 100
and
select * from Employees where Salary = (select max(salary) from Employees where Department ='Dev' )
expected output
Name Department Salary
----------------------------
Tomcat Dev 100
Tom Dev 100
Is it possible with Elasticsearch ?
The below should help.
Looking at your data, I've come up with the following mapping:
Mapping:
PUT my-salary-index
{
"mappings": {
"properties": {
"name": {
"type": "keyword"
},
"department":{
"type": "keyword"
},
"salary":{
"type": "float"
}
}
}
}
Sample Documents:
POST my-salary-index/_doc/1
{
"name": "Tomcat",
"department": "Dev",
"salary": 100
}
POST my-salary-index/_doc/2
{
"name": "Bobcat",
"department": "QA",
"salary": 90
}
POST my-salary-index/_doc/3
{
"name": "Beast",
"department": "QA",
"salary": 100
}
POST my-salary-index/_doc/4
{
"name": "Tom",
"department": "Dev",
"salary": 100
}
POST my-salary-index/_doc/5
{
"name": "Bob",
"department": "Dev",
"salary": 90
}
Solutions:
Scenario 1: Return all employees with max salary
POST my-salary-index/_search
{
"size": 0,
"aggs": {
"my_employees_salary":{
"terms": {
"field": "salary",
"size": 1, <--- Note this
"order": {
"_key": "desc"
}
},
"aggs": {
"my_employees": {
"top_hits": { <--- Note this. Top hits aggregation
"size": 10
}
}
}
}
}
}
Note that I've made use of a Terms Aggregation with a Top Hits aggregation chained to it. I'd suggest going through the linked docs to understand both aggregations.
So basically you just need to retrieve the first bucket of the Terms Aggregation, which is why I've set size: 1. Also note the order, in case your requirement is to retrieve the lowest instead.
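As a rough plain-Python analogy (not Elasticsearch code), the terms + top_hits combination does this: bucket employees by salary, order buckets by salary descending, keep only the top bucket, and list its documents.

```python
from collections import defaultdict

employees = [
    {"name": "Tomcat", "department": "Dev", "salary": 100},
    {"name": "Bobcat", "department": "QA",  "salary": 90},
    {"name": "Beast",  "department": "QA",  "salary": 100},
    {"name": "Tom",    "department": "Dev", "salary": 100},
    {"name": "Bob",    "department": "Dev", "salary": 90},
]

def top_salary_bucket(docs):
    buckets = defaultdict(list)
    for doc in docs:
        buckets[doc["salary"]].append(doc)  # terms aggregation on salary
    top_key = max(buckets)                  # order: {_key: desc}, size: 1
    return top_key, buckets[top_key]        # top_hits for that bucket

key, hits = top_salary_bucket(employees)
print(key, [h["name"] for h in hits])  # -> 100 ['Tomcat', 'Beast', 'Tom']
```

The returned bucket matches the expected SQL output: all three employees on the maximum salary.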
Scenario 1 Response:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 5,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"my_employees_salary" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 2,
"buckets" : [
{
"key" : 100.0,
"doc_count" : 3,
"my_employees" : {
"hits" : {
"total" : {
"value" : 3,
"relation" : "eq"
},
"max_score" : 1.0,
"hits" : [
{
"_index" : "my-salary-index",
"_type" : "_doc",
"_id" : "1",
"_score" : 1.0,
"_source" : {
"name" : "Tomcat",
"department" : "Dev",
"salary" : 100
}
},
{
"_index" : "my-salary-index",
"_type" : "_doc",
"_id" : "3",
"_score" : 1.0,
"_source" : {
"name" : "Beast",
"department" : "QA",
"salary" : 100
}
},
{
"_index" : "my-salary-index",
"_type" : "_doc",
"_id" : "4",
"_score" : 1.0,
"_source" : {
"name" : "Tom",
"department" : "Dev",
"salary" : 100
}
}
]
}
}
}
]
}
}
}
Scenario 2: Return all employee with max salary from particular department
POST my-salary-index/_search
{
"size": 0,
"query": {
"bool": {
"must": [
{
"term": {
"department": "Dev"
}
}
]
}
},
"aggs": {
"my_employees_salary":{
"terms": {
"field": "salary",
"size": 1,
"order": {
"_key": "desc"
}
},
"aggs": {
"my_employees": {
"top_hits": {
"size": 10
}
}
}
}
}
}
There are many ways to do this, but the idea is that you filter the documents before applying the aggregation on top of them. That way it is more efficient.
Note that I've just added a bool condition to the aggregation query mentioned in the solution for Scenario 1.
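Sketched in plain Python, Scenario 2 is the same max-salary bucketing applied after a department filter (the bool/term query), illustrated with the sample employees:

```python
employees = [
    {"name": "Tomcat", "department": "Dev", "salary": 100},
    {"name": "Bobcat", "department": "QA",  "salary": 90},
    {"name": "Beast",  "department": "QA",  "salary": 100},
    {"name": "Tom",    "department": "Dev", "salary": 100},
    {"name": "Bob",    "department": "Dev", "salary": 90},
]

def max_salary_in(docs, department):
    # term filter on department runs first, shrinking the doc set
    filtered = [d for d in docs if d["department"] == department]
    top = max(d["salary"] for d in filtered)        # terms agg, size 1, desc
    return [d["name"] for d in filtered if d["salary"] == top]  # top_hits

print(max_salary_in(employees, "Dev"))  # -> ['Tomcat', 'Tom']
```

Filtering first is why the query is efficient: the aggregation only ever sees the "Dev" documents.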
Scenario 2 Response
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 3,
"relation" : "eq"
},
"max_score" : null,
"hits" : [ ]
},
"aggregations" : {
"my_employees_salary" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 1,
"buckets" : [
{
"key" : 100.0,
"doc_count" : 2,
"my_employees" : {
"hits" : {
"total" : {
"value" : 2,
"relation" : "eq"
},
"max_score" : 0.53899646,
"hits" : [
{
"_index" : "my-salary-index",
"_type" : "_doc",
"_id" : "1",
"_score" : 0.53899646,
"_source" : {
"name" : "Tomcat",
"department" : "Dev",
"salary" : 100
}
},
{
"_index" : "my-salary-index",
"_type" : "_doc",
"_id" : "4",
"_score" : 0.53899646,
"_source" : {
"name" : "Tom",
"department" : "Dev",
"salary" : 100
}
}
]
}
}
}
]
}
}
}
You can also think of making use of SQL access if you have the licensed version of X-Pack.
Hope this helps.
I couldn't find any answer on how to do a simple thing in Elasticsearch 6.8: I need to filter nested objects.
Index
{
"settings": {
"index": {
"number_of_shards": "5",
"number_of_replicas": "1"
}
},
"mappings": {
"human": {
"properties": {
"cats": {
"type": "nested",
"properties": {
"name": {
"type": "text"
},
"breed": {
"type": "text"
},
"colors": {
"type": "integer"
}
}
},
"name": {
"type": "text"
}
}
}
}
}
Data
{
"name": "iridakos",
"cats": [
{
"colors": 1,
"name": "Irida",
"breed": "European Shorthair"
},
{
"colors": 2,
"name": "Phoebe",
"breed": "european"
},
{
"colors": 3,
"name": "Nino",
"breed": "Aegean"
}
]
}
I want to select the human with name = "iridakos" and the cats whose breed contains 'European' (ignoring case).
Only two cats should be returned.
Million thanks for helping.
For nested datatypes, you need to make use of nested queries.
Elasticsearch always returns the entire document as a response. Note that the nested datatype means every item in the list is treated as a document in itself.
Hence, in addition to the entire document, if you also want to know the exact nested hits, you need to make use of the inner_hits feature.
Below query should help you.
POST <your_index_name>/_search
{
"query": {
"bool": {
"must": [
{
"match": {
"name": "iridakos"
}
},
{
"nested": {
"path": "cats",
"query": {
"match": {
"cats.breed": "european"
}
},
"inner_hits": {}
}
}
]
}
}
}
Response:
{
"took" : 3,
"timed_out" : false,
"_shards" : {
"total" : 1,
"successful" : 1,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : {
"value" : 1,
"relation" : "eq"
},
"max_score" : 0.74455214,
"hits" : [
{
"_index" : "my_cat_index",
"_type" : "_doc",
"_id" : "1", <--- The document that hit
"_score" : 0.74455214,
"_source" : {
"name" : "iridakos",
"cats" : [
{
"colors" : 1,
"name" : "Irida",
"breed" : "European Shorthair"
},
{
"colors" : 2,
"name" : "Phoebe",
"breed" : "european"
},
{
"colors" : 3,
"name" : "Nino",
"breed" : "Aegean"
}
]
},
"inner_hits" : { <---- Note this
"cats" : {
"hits" : {
"total" : {
"value" : 2, <---- Count of nested doc hits
"relation" : "eq"
},
"max_score" : 0.52354836,
"hits" : [
{
"_index" : "my_cat_index",
"_type" : "_doc",
"_id" : "1",
"_nested" : {
"field" : "cats",
"offset" : 1
},
"_score" : 0.52354836,
"_source" : { <---- First Nested Document
"breed" : "european"
}
},
{
"_index" : "my_cat_index",
"_type" : "_doc",
"_id" : "1",
"_nested" : {
"field" : "cats",
"offset" : 0
},
"_score" : 0.39019167,
"_source" : { <---- Second Nested Document
"breed" : "European Shorthair"
}
}
]
}
}
}
}
]
}
}
Note in your response how the inner_hits section would appear where you would find the exact hits.
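What inner_hits adds can be mimicked client-side, as a rough sketch: the parent document matches as a whole, and the nested cats whose breed contains the token "european" (case-insensitive) are reported separately. The lowercase-and-split step below is only a crude stand-in for the analyzed match on cats.breed.

```python
human = {
    "name": "iridakos",
    "cats": [
        {"colors": 1, "name": "Irida",  "breed": "European Shorthair"},
        {"colors": 2, "name": "Phoebe", "breed": "european"},
        {"colors": 3, "name": "Nino",   "breed": "Aegean"},
    ],
}

def matching_cats(doc, breed_token):
    # crude stand-in for the analyzed match query on cats.breed
    return [c for c in doc["cats"]
            if breed_token in c["breed"].lower().split()]

print([c["name"] for c in matching_cats(human, "european")])  # -> ['Irida', 'Phoebe']
```

Exactly two cats match, mirroring the count of 2 in the inner_hits section of the response.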
Hope this helps!
You could use something like this:
{
"query": {
"bool": {
"must": [
{ "match": { "name": "iridakos" }},
{ "match": { "cats.breed": "European" }}
]
}
}
}
To search on a cat's breed, you can use dot notation.
I want to fetch the common words of a list of users, sorted by total count.
example:
I have a index of words used by a user.
docs:
[
{
user_id: 1,
word: 'food',
count: 2
},
{
user_id: 1,
word: 'thor',
count: 1
},
{
user_id: 1,
word: 'beer',
count: 7
},
{
user_id: 2,
word: 'summer',
count: 12
},
{
user_id: 2,
word: 'thor',
count: 4
},
{
user_id: 1,
word: 'beer',
count: 2
},
..otheruserdetails..
]
input: user_ids: [1, 2]
desired output:
[
{
'word': 'beer',
'total_count': 9
},
{
'word': 'thor',
'total_count': 5
}
]
what I have so far:
fetch all docs using user_id in user_id list (bool should query)
process docs in app layer.
loop through each keyword
check if keyword is present for each user_id
if yes, find count
else, dispose and go to next keyword
However, this is not feasible because the word docs are going to grow huge and the app layer won't keep up. Is there any way to move this into an ES query?
You can use a Terms aggregation together with a Value Count aggregation.
You can look at the Terms aggregation as a "Group By". The output will give a unique list of user ids, the list of all words under each user, and finally the count of each word.
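The "Group By" analogy can be sketched in plain Python (no Elasticsearch involved): bucket the docs by user_id, then by word, counting occurrences, which mirrors the nested terms/value_count aggregations in the DSL request below.

```python
from collections import defaultdict

docs = [
    {"user_id": 1, "word": "food", "count": 2},
    {"user_id": 1, "word": "thor", "count": 1},
    {"user_id": 2, "word": "thor", "count": 4},
]

def group_by_user_then_word(docs, user_ids):
    out = defaultdict(lambda: defaultdict(int))
    for d in docs:
        if d["user_id"] in user_ids:           # terms query on user_id
            out[d["user_id"]][d["word"]] += 1  # terms agg + value_count
    return {u: dict(w) for u, w in out.items()}

print(group_by_user_then_word(docs, [1, 2]))
# -> {1: {'food': 1, 'thor': 1}, 2: {'thor': 1}}
```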
{
"from": 0,
"size": 10,
"query": {
"terms": {
"user_id": [
"1",
"2"
]
}
},
"aggs": {
"users": {
"terms": {
"field": "user_id",
"size": 10
},
"aggs": {
"words": {
"terms": {
"field": "word.keyword",
"size": 10
},
"aggs": {
"word_count": {
"value_count": {
"field": "word.keyword"
}
}
}
}
}
}
}
}
Result (excerpt of the response):
"hits" : [
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "gFRzr3ABAWOsYG7t2tpt",
"_score" : 1.0,
"_source" : {
"user_id" : 1,
"word" : "thor",
"count" : 1
}
},
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "flRzr3ABAWOsYG7t0dqI",
"_score" : 1.0,
"_source" : {
"user_id" : 1,
"word" : "food",
"count" : 2
}
},
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "f1Rzr3ABAWOsYG7t19ps",
"_score" : 1.0,
"_source" : {
"user_id" : 2,
"word" : "thor",
"count" : 4
}
},
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "gVRzr3ABAWOsYG7t8NrR",
"_score" : 1.0,
"_source" : {
"user_id" : 1,
"word" : "food",
"count" : 2
}
},
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "glRzr3ABAWOsYG7t-Npj",
"_score" : 1.0,
"_source" : {
"user_id" : 1,
"word" : "thor",
"count" : 1
}
},
{
"_index" : "index89",
"_type" : "_doc",
"_id" : "g1Rzr3ABAWOsYG7t_9po",
"_score" : 1.0,
"_source" : {
"user_id" : 2,
"word" : "thor",
"count" : 4
}
}
]
},
"aggregations" : {
"users" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : 1,
"doc_count" : 4,
"words" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "food",
"doc_count" : 2,
"word_count" : {
"value" : 2
}
},
{
"key" : "thor",
"doc_count" : 2,
"word_count" : {
"value" : 2
}
}
]
}
},
{
"key" : 2,
"doc_count" : 2,
"words" : {
"doc_count_error_upper_bound" : 0,
"sum_other_doc_count" : 0,
"buckets" : [
{
"key" : "thor",
"doc_count" : 2,
"word_count" : {
"value" : 2
}
}
]
}
}
]
}
}
You can use aggregations along with a filter for the users, like below:
{
"size": 0,
"aggs": {
"words_stats": {
"filter": {
"terms": {
"user_id": [
"1",
"2"
]
}
},
"aggs": {
"words": {
"terms": {
"field": "word.keyword"
},
"aggs": {
"total_count": {
"sum": {
"field": "count"
}
}
}
}
}
}
}
}
The results will be:
{
"key" : "beer",
"doc_count" : 2,
"total_count" : {
"value" : 9.0
}
},
{
"key" : "thor",
"doc_count" : 2,
"total_count" : {
"value" : 5.0
}
},
{
"key" : "food",
"doc_count" : 1,
"total_count" : {
"value" : 2.0
}
},
{
"key" : "summer",
"doc_count" : 1,
"total_count" : {
"value" : 12.0
}
}
Here is what I ended up doing:
I referred to Rakesh Chandru's and jaspreet chahal's answers and came up with this. This query handles both the intersection and the sorting.
Process:
filter by user_ids
group by (terms aggs) on the keyword (word in the example)
order by the aggregated (sum) counts
{
size: 0, // because we do not want result of filtered records
query: {
terms: { user_id: user_ids } // filter by user_ids
},
aggs: {
group_by_keyword: {
terms: {
field: "keyword", // group by keyword
min_doc_count: 2, // where count >= 2
order: { agg_count: "desc" }, // order by count
size
},
aggs: {
agg_count: {
sum: {
field: "count" // aggregating count
}
}
}
}
}
}
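The same pipeline can be sketched in plain Python to check the expected output: filter by user_ids, group by word, require a word to appear in at least 2 docs (min_doc_count: 2), sum the counts, and sort descending by that sum.

```python
from collections import defaultdict

docs = [
    {"user_id": 1, "word": "food",   "count": 2},
    {"user_id": 1, "word": "thor",   "count": 1},
    {"user_id": 1, "word": "beer",   "count": 7},
    {"user_id": 2, "word": "summer", "count": 12},
    {"user_id": 2, "word": "thor",   "count": 4},
    {"user_id": 1, "word": "beer",   "count": 2},
]

def common_words(docs, user_ids):
    totals, hits = defaultdict(int), defaultdict(int)
    for d in docs:
        if d["user_id"] in user_ids:       # terms query on user_id
            totals[d["word"]] += d["count"]  # sum sub-aggregation
            hits[d["word"]] += 1             # doc count per bucket
    result = [{"word": w, "total_count": t}
              for w, t in totals.items() if hits[w] >= 2]  # min_doc_count: 2
    return sorted(result, key=lambda r: -r["total_count"])  # order by sum desc

print(common_words(docs, [1, 2]))
# -> [{'word': 'beer', 'total_count': 9}, {'word': 'thor', 'total_count': 5}]
```

This reproduces the desired output from the question: beer (7 + 2 = 9) and thor (1 + 4 = 5).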
I want to fetch documents between two dates according to a start and end date.
This is Mapping:
PUT data
{
"mappings": {
"_doc": {
"properties": {
"product_code": {"type": "keyword"},
"color_code": {"type": "keyword"},
"warehouse_id": {"type": "short"},
"stock": {"type": "float"},
"inventory_start_date": {
"type": "date",
"format": "yyyy-MM-dd"
},
"inventory_end_date": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
}
}
}
This is list of my data:
POST _bulk
{ "index" : { "_index" : "data", "_type" : "_doc" } }
{ "product_code" : "20001", "color_code" : "001", "warehouse_id" : 5, "stock" : 10,"inventory_start_date" : "2019-01-01","inventory_end_date" : "2019-01-04"}
{ "index" : { "_index" : "data", "_type" : "_doc" } }
{ "product_code" : "20001", "color_code" : "001", "warehouse_id" : 5, "stock" : 4, "inventory_start_date" : "2019-01-04","inventory_end_date" : "2019-01-07"}
{ "index" : { "_index" : "data", "_type" : "_doc" } }
{ "product_code" : "20001", "color_code" : "001", "warehouse_id" : 5, "stock" : 0, "inventory_start_date" : "2019-01-07","inventory_end_date" : "2019-01-07"}
inventory_start_date and inventory_end_date delimit the period for which the stock amount applies.
Here is the question: how can I fetch documents within a 2-date range? For example, fetching documents between 2019-01-05 and 2019-01-06, or fetching documents between 2019-01-01 and 2019-01-03.
Considering the documents you've mentioned, if I want the list of documents with inventory between e.g. 2019-01-01 and 2019-01-04, I would express the requirement as two range queries on the two date fields, wrapped in a bool must clause:
inventory_start_date - between 2019-01-01 and 2019-01-04
inventory_end_date - between 2019-01-01 and 2019-01-04
Query
POST myindex/_search
{
"query": {
"bool": {
"must": [
{
"range": {
"inventory_start_date": {
"gte": "2019-01-01",
"lte": "2019-01-04"
}
}
},
{
"range": {
"inventory_end_date": {
"gte": "2019-01-01",
"lte": "2019-01-04"
}
}
}
]
}
}
}
Response
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 5,
"successful" : 5,
"skipped" : 0,
"failed" : 0
},
"hits" : {
"total" : 1,
"max_score" : 2.0,
"hits" : [
{
"_index" : "myindex",
"_type" : "_doc",
"_id" : "1",
"_score" : 2.0,
"_source" : {
"product_code" : "20001",
"color_code" : "001",
"warehouse_id" : 5,
"stock" : 10,
"inventory_start_date" : "2019-01-01",
"inventory_end_date" : "2019-01-04"
}
}
]
}
}
Likewise, for any other different date ranges, you would need to construct your query accordingly.
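The matching rule the bool/must query encodes can be sketched in plain Python: a document matches only if both its inventory_start_date and inventory_end_date fall inside the queried window.

```python
from datetime import date

docs = [
    {"stock": 10, "inventory_start_date": date(2019, 1, 1), "inventory_end_date": date(2019, 1, 4)},
    {"stock": 4,  "inventory_start_date": date(2019, 1, 4), "inventory_end_date": date(2019, 1, 7)},
    {"stock": 0,  "inventory_start_date": date(2019, 1, 7), "inventory_end_date": date(2019, 1, 7)},
]

def in_window(doc, gte, lte):
    # both range clauses must hold (bool/must semantics)
    return (gte <= doc["inventory_start_date"] <= lte and
            gte <= doc["inventory_end_date"] <= lte)

hits = [d for d in docs if in_window(d, date(2019, 1, 1), date(2019, 1, 4))]
print([d["stock"] for d in hits])  # -> [10]
```

As in the response above, only the first document matches the 2019-01-01 to 2019-01-04 window; the second fails because its end date (2019-01-07) falls outside it.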
Let me know if this helps.
I really think that what I'm trying to do is fairly simple. I'm simply trying to query for N tags. A clear example of this was asked and answered over at "Elasticsearch: How to use two different multiple matching fields?". Yet, that solution doesn't seem to work for the latest version of ES (more likely, I'm simply doing it wrong).
To show the current data and to demonstrate a working query, see below:
{
"query": {
"filtered": {
"filter": {
"terms": {
"Price": [10,5]
}
}
}
}
}
Here are the results for this. As you can see, 5 and 10 are showing up (this demonstrates that basic queries do work):
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 6,
"successful" : 6,
"failed" : 0
},
"hits" : {
"total" : 4,
"max_score" : 1.0,
"hits" : [ {
"_index" : "labelsample",
"_type" : "entry",
"_id" : "AVLGnGMYXB5vRcKBZaDw",
"_score" : 1.0,
"_source" : {
"Category" : [ "Medium Signs" ],
"Code" : "a",
"Name" : "Sample 1",
"Timestamp" : 1.455031083799152E9,
"Price" : "10",
"IsEnabled" : true
}
}, {
"_index" : "labelsample",
"_type" : "entry",
"_id" : "AVLGnGHHXB5vRcKBZaDF",
"_score" : 1.0,
"_source" : {
"Category" : [ "Small Signs" ],
"Code" : "b",
"Name" : "Sample 2",
"Timestamp" : 1.45503108346191E9,
"Price" : "5",
"IsEnabled" : true
}
}, {
"_index" : "labelsample",
"_type" : "entry",
"_id" : "AVLGnGILXB5vRcKBZaDO",
"_score" : 1.0,
"_source" : {
"Category" : [ "Medium Signs" ],
"Code" : "c",
"Name" : "Sample 3",
"Timestamp" : 1.455031083530215E9,
"Price" : "10",
"IsEnabled" : true
}
}, {
"_index" : "labelsample",
"_type" : "entry",
"_id" : "AVLGnGGgXB5vRcKBZaDA",
"_score" : 1.0,
"_source" : {
"Category" : [ "Medium Signs" ],
"Code" : "d",
"Name" : "Sample 4",
"Timestamp" : 1.4550310834233E9,
"Price" : "10",
"IsEnabled" : true
}
}]
}
}
As a side note: the following bool query gives the exact same results:
{
"query": {
"bool": {
"must": [{
"terms": {
"Price": [10,5]
}
}]
}
}
}
Notice Category...
Let's simply copy/paste Category into a query:
{
"query": {
"filtered": {
"filter": {
"terms": {
"Category" : [ "Medium Signs" ]
}
}
}
}
}
This gives the following gem:
{
"took" : 1,
"timed_out" : false,
"_shards" : {
"total" : 6,
"successful" : 6,
"failed" : 0
},
"hits" : {
"total" : 0,
"max_score" : null,
"hits" : [ ]
}
}
Again, here's the bool query version that gives the same 0-hit result:
{
"query": {
"bool": {
"must": [{
"terms": {
"Category" : [ "Medium Signs" ]
}
}]
}
}
}
In the end, I definitely need something similar to "Category" : [ "Medium Signs", "Small Signs" ] working (in concert with other label queries and minimum_should_match as well), but I can't even get this bare-bones query to work.
I have zero clue why this is. I pored over the docs for hours, trying everything I could see. Do I need to look into debugging various encodings? Is my syntax archaic?
The problem here is that Elasticsearch is analyzing and tokenizing the Category field, while the terms filter expects an exact match. One solution here is to add a raw field to Category inside your entry mapping:
PUT labelsample
{
"mappings": {
"entry": {
"properties": {
"Category": {
"type": "string",
"fields": {
"raw": {
"type": "string",
"index": "not_analyzed"
}
}
},
"Code": {
"type": "string"
},
"Name": {
"type": "string"
},
"Timestamp": {
"type": "date",
"format": "epoch_millis"
},
"Price": {
"type": "string"
},
"IsEnabled": {
"type": "boolean"
}
}
}
}
}
...and filter on the raw field:
GET labelsample/entry/_search
{
"query": {
"filtered": {
"filter": {
"terms": {
"Category.raw" : [ "Medium Signs" ]
}
}
}
}
}
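To see why the original filter found nothing, here is a rough sketch of the mismatch: the analyzed field stores lowercased tokens, while a terms filter looks for the exact value. The lowercase-and-split function below is only a crude approximation of the standard analyzer, but it shows the effect; the raw (not_analyzed) field keeps the original string, so the exact match succeeds.

```python
def analyze(text):
    # rough approximation of the standard analyzer: lowercase + tokenize
    return text.lower().split()

stored_tokens = analyze("Medium Signs")  # what the analyzed "Category" holds
raw_value = "Medium Signs"               # what "Category.raw" holds

print("Medium Signs" in stored_tokens)   # -> False: terms filter on Category misses
print("Medium Signs" == raw_value)       # -> True:  terms filter on Category.raw hits
```

A terms filter on the analyzed field would only match the individual tokens ("medium", "signs"), never the original multi-word value, which is exactly the 0-hit result seen above.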