In Elasticsearch, when running a simple query like:
GET miindex-*/mytype/_search
{
  "query": {
    "query_string": {
      "analyze_wildcard": true,
      "query": "*"
    }
  }
}
It returns a response like:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 28,
    "max_score": 1,
    "hits": [
    ...
So I parse response.hits.hits to get the actual records.
However, if you run another type of query, e.g. an aggregation, the response is totally different:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "failed": 0
  },
  "hits": {
    "total": 28,
    "max_score": 0,
    "hits": []
  },
  "aggregations": {
    "myfield": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
      ...
and I need to look in a different property instead, response.aggregations.myfield.buckets, which gets even more complicated if you have more than one aggregation.
So my question is very simple: isn't there a way to get Elasticsearch to always respond with just the fields I want, the way SQL does?
E.g.
SELECT author, bookid FROM books
Would return:
{"author":"rogers", "bookid":099991}
{"author":"peter", "bookid":099992}
SELECT COUNT(author) As count_author, author, count(bookid) As count_bookid, bookid FROM books GROUP BY author, bookid
Would return:
{"count_author":4, "author":"rogers", "count_bookid":9, "bookid":099991}
{"count_author":8, "author":"peter", "count_bookid":9, "bookid":099992}
Is there a way to return only the fields I want and nothing else, without having to dig into nested JSON objects? (I want this because I'm building many reports and want a single function that parses every response the same way.)
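Short of a SQL-style response, one workable approach is a small normalizing helper that turns either response shape into a flat list of dicts. A minimal sketch in Python (the function name and the assumption of single-level terms aggregations are mine, not an Elasticsearch feature):

```python
def flatten_response(response):
    """Return a list of flat dicts from either a plain search response
    or a response carrying top-level terms aggregations."""
    aggs = response.get("aggregations")
    if aggs:
        rows = []
        for name, agg in aggs.items():
            # one row per bucket: {aggregation_name: key, "count": doc_count}
            for bucket in agg.get("buckets", []):
                rows.append({name: bucket["key"], "count": bucket["doc_count"]})
        return rows
    # plain search: just return the stored documents
    return [hit["_source"] for hit in response["hits"]["hits"]]
```

Nested sub-aggregations would still need recursion, but for flat reports this keeps every caller on one code path.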
I have just started with Elasticsearch and am using the NEST API for my .NET application. I have an index with some records inserted, and I am now trying to get a distinct list of a document field's values. I have this working in Postman, but I do not know how to port the JSON aggregation body to a NEST call. Here is the call I am trying to port to the NEST C# API:
{
  "size": 0,
  "aggs": {
    "hosts": {
      "terms": {
        "field": "host"
      }
    }
  }
}
Here is the result, which brings me to my next question: how would I parse the result, or map it to a POCO? I am only interested in the distinct list of values for the field, in this case 'host'; I really just want an enumerable of strings back. I do not care about the counts at this point.
{
  "took": 0,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 3,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "hosts": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": "hoyt",
          "doc_count": 3
        }
      ]
    }
  }
}
I was able to get the results I am after with the following code:
var result = await client.SearchAsync<SyslogEntryIndex>(s => s
    .Size(0)
    .Aggregations(a => a.Terms("hosts", t => t.Field(f => f.Host))));

// Walk the "hosts" terms aggregation and collect the distinct bucket keys
List<string> hosts = new List<string>();
foreach (BucketAggregate v in result.Aggregations.Values)
{
    foreach (KeyedBucket<object> item in v.Items)
    {
        hosts.Add((string)item.Key);
    }
}
return hosts;
I want to apply document-level security in Elasticsearch, but once I provide more than one value in the user metadata I get no matches.
I am creating a role and a user in Elasticsearch, passing values inside the user metadata to the role, on whose basis the search should happen. It works fine if I give one value.
For creating role:
PUT _xpack/security/role/my_policy
{
  "indices": [{
    "names": ["my_index"],
    "privileges": ["read"],
    "query": {
      "template": {
        "source": "{\"bool\": {\"filter\": [{\"terms_set\": {\"country_name\": {\"terms\": {{#toJson}}_user.metadata.country_name{{/toJson}},\"minimum_should_match_script\":{\"source\":\"params.num_terms\"}}}}]}}"
      }
    }
  }]
}
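For reference, here is a sketch (in Python, values mirroring the example user's metadata) of roughly what that template should render to once {{#toJson}}_user.metadata.country_name{{/toJson}} substitutes the user's metadata; the exact rendered string is my assumption:

```python
import json

# Assumed rendering of the role's query template for a user whose
# metadata has country_name = ["india", "japan"]
country_name = ["india", "japan"]
rendered = {
    "bool": {
        "filter": [{
            "terms_set": {
                "country_name": {
                    "terms": country_name,
                    "minimum_should_match_script": {"source": "params.num_terms"},
                }
            }
        }]
    }
}
print(json.dumps(rendered))
```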
And for user:
PUT _xpack/security/user/jack_black
{
  "username": "jack_black",
  "password": "testtest",
  "roles": ["my_policy"],
  "full_name": "Jack Black",
  "email": "jb@tenaciousd.com",
  "metadata": {
    "country_name": ["india", "japan"]
  }
}
I expect the output to be results for india and japan only. If the user searches for anything else they should get no results.
However, I do not see any results at all:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 0,
    "max_score": null,
    "hits": []
  }
}
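One thing worth noting about the role above: with minimum_should_match_script set to params.num_terms, a terms_set query matches only documents whose country_name field contains all of the supplied terms, which would explain an empty result once the metadata holds two values. A small sketch of that matching rule (plain Python illustrating the semantics, not Elasticsearch code):

```python
def terms_set_matches(doc_terms, user_terms, minimum_should_match):
    # terms_set counts how many of the supplied terms occur in the
    # document's field, then compares against minimum_should_match
    matched = sum(1 for t in user_terms if t in doc_terms)
    return matched >= minimum_should_match

user_terms = ["india", "japan"]
# With minimum_should_match == params.num_terms == len(user_terms),
# a document tagged only "india" no longer matches:
print(terms_set_matches(["india"], user_terms, len(user_terms)))           # False
print(terms_set_matches(["india", "japan"], user_terms, len(user_terms)))  # True
```

If the intent is "match documents whose country is any of the user's countries", a plain terms query built from the metadata list would behave that way instead.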
I am very new to Elasticsearch storage and am looking for a way to list all the fields under _source. So far I have found ways to get the values of the fields defined under _source, but not a way to list the fields themselves. For example, I have the document below:
{
  "took": 1,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 2,
    "max_score": 1,
    "hits": [
      {
        "_index": "my_product",
        "_type": "_doc",
        "_id": "B2LcemUBCkYSNbJBl-G_",
        "_score": 1,
        "_source": {
          "email": "123@abc.com",
          "product_0": "iWLKHmUBCkYSNbJB3NZR",
          "product_price_0": "10",
          "link_0": ""
        }
      }
    ]
  }
}
So, from the example above, I would like to get the field names email, product_0, product_price_0, and link_0, which are under _source. I have been retrieving the values by parsing the array returned from the ES API, but what should go at the ? mark to get the field names: $result['hits']['hits'][0]['_source'][?]
Note: I am using PHP to insert data into ES and retrieve data from it.
If I understood correctly, you need array_keys:
array_keys($result['hits']['hits'][0]['_source'])
I have the following Elasticsearch query, and I want to apply a timeout, so I used the "timeout" parameter:
GET testdata-2016.04.14/_search
{
  "size": 10000,
  "timeout": "1ms"
}
I have set the timeout to 1 ms, but the query takes more than 5000 ms. I have also tried the query like this:
GET testdata-2016.04.14/_search?timeout=1ms
{
  "size": 10000
}
In both cases, I get the response below after approximately 5000 ms:
{
  "took": 126,
  "timed_out": false,
  "_shards": {
    "total": 5,
    "successful": 5,
    "failed": 0
  },
  "hits": {
    "total": 26536,
    "max_score": 1,
    "hits": [
      {
        ...................
        ...................
      }
    ]
  }
}
I am not sure what is happening here. Is anything missing in the queries above?
Please help.
I have searched for a solution but have not found one that works.
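For context, timeout in Elasticsearch is a best-effort, per-shard bound on the collection phase rather than a hard cutoff, so a request can still take longer than the value given; the reliable signal is the timed_out flag in the response body. A minimal check over the response (plain Python, field names as in the output above; the helper name is my own):

```python
def gave_partial_results(response):
    # True only when Elasticsearch stopped collecting early on some shard,
    # in which case the hits may be incomplete
    return bool(response.get("timed_out"))

response = {"took": 126, "timed_out": False, "hits": {"total": 26536}}
print(gave_partial_results(response))  # False
```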
My scan/scroll is working fine with one index:
http://localhost:9200/2014-07-10/picture/_search?search_type=scan&scroll=1m
So, now I'm trying to do the same thing but using multiple indexes.
http://localhost:9200/2014-07-*/picture/_search?search_type=scan&scroll=1m
This is returning a huge scroll_id:
{
"_scroll_id": "c2NhbjsxMjk7OTA1Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNDQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA0OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNjg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNDU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDcyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNjY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwOTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDU0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDUyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwOTk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDY3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDcwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDgyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDgwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDQ6enJqZ1R0blJRVzItRmlOVjVqc
3dOUTs5MDc4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExOTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTE0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTEyOTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDc6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTEzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTMyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNjA6enJqZ1R0b
lJRVzItRmlOVjVqc3dOUTs5MTI1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNTk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTI2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjc6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNzI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTYxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTEzNjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTcwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNTY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7MTt0b3RhbF9oaXRzOjY1NjI7",
"took": 15,
"timed_out": false,
"_shards": {
"total": 129,
"successful": 129,
"failed": 0
},
"hits": {
"total": 6562,
"max_score": 0,
"hits": []
}
}
So when I try to scroll with this scroll_id, the request returns CONN_REFUSED and crashes the server.
Is this a known problem? Maybe a performance issue? Or is scanning over multiple indexes not possible?
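One assumption worth checking: a scroll over 129 shards produces a very long scroll_id, and passing it as a URL query parameter can exceed URL-length limits; sending it in the request body avoids that. A hedged sketch using Python's standard library (the host is the one from the question, the id is a placeholder for the long one above, and newer Elasticsearch versions are assumed to accept a JSON body here):

```python
import json
import urllib.request

# Placeholder for the long _scroll_id returned by the first request
scroll_id = "c2Nhbjsx..."

# Putting the scroll id in a JSON body sidesteps URL-length limits
body = json.dumps({"scroll": "1m", "scroll_id": scroll_id}).encode()
req = urllib.request.Request(
    "http://localhost:9200/_search/scroll",
    data=body,
    headers={"Content-Type": "application/json"},
)
# response = json.load(urllib.request.urlopen(req))
```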