My scan/scroll is working fine with one index:
http://localhost:9200/2014-07-10/picture/_search?search_type=scan&scroll=1m
So, now I'm trying to do the same thing but using multiple indexes.
http://localhost:9200/2014-07-*/picture/_search?search_type=scan&scroll=1m
This is returning a huge scroll_id:
{
"_scroll_id": "c2NhbjsxMjk7OTA1Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNDQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA0OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNjg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNDU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDcyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNjY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDQ4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwOTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDU0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDUyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwOTk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDY3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDcwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA1Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDgyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA2NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5MTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDgwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTM5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5MDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExOTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwNzM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MDc1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTE0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTEyOTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDc6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTEzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTExNjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkwODU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTMyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ0OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMDk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA4ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTA5Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Mjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTE6enJqZ1R0blJR
VzItRmlOVjVqc3dOUTs5MTIyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjg6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ3OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Mzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMTI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTIzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2OTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzM6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ4OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNjA6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTI1OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Nzp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjQ6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTQ5OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2ODp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNTk6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTI2OnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1NDp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMjc6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNzI6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTYxOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTEzNjp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxMzU6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUyOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE2Njp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNzE6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTcwOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7OTE1NTp6cmpnVHRuUlFXMi1GaU5WNWpzd05ROzkxNTY6enJqZ1R0blJRVzItRmlOVjVqc3dOUTs5MTUzOnpyamdUdG5SUVcyLUZpTlY1anN3TlE7MTt0b3RhbF9oaXRzOjY1NjI7",
"took": 15,
"timed_out": false,
"_shards": {
"total": 129,
"successful": 129,
"failed": 0
},
"hits": {
"total": 6562,
"max_score": 0,
"hits": []
}
}
So, when I try to scroll with this scroll_id, it returns CONN_REFUSED and crashes the server.
Is this a problem? Maybe a performance issue? Or is scanning over multiple indices not possible?
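For what it's worth, the scroll_id of a scan search encodes one entry per shard, so across 129 shards it is expected to be this long, and scanning multiple indices is supported. A guess rather than a confirmed diagnosis: if the follow-up scroll request puts that id in the URL, the request line can exceed the HTTP server's length limit; sending the id in the request body avoids that. A minimal curl sketch (Elasticsearch 1.x style, <scroll_id> is a placeholder for the value returned above):
# Pass the scroll_id as the request body instead of in the URL,
# so the very long id does not exceed the HTTP request-line limit.
curl -XGET 'http://localhost:9200/_search/scroll?scroll=1m' -d '<scroll_id>'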
When I wanted to view the logs in Kibana, I received this error:
1 of 37 shards failed. The data you are seeing might be incomplete or wrong.
This is the response:
{
"took": 10,
"timed_out": false,
"_shards": {
"total": 21,
"successful": 20,
"skipped": 20,
"failed": 1,
"failures": [
{
"shard": 0,
"index": "tourism-2022.12.11",
"node": null,
"reason": {
"type": "no_shard_available_action_exception",
"reason": null,
"index_uuid": "j2J6dUvTQ_q7qeyyU56bag",
"shard": "0",
"index": "tourism-2022.12.11"
}
}
]
},
"hits": {
"total": 0,
"max_score": 0,
"hits": []
}
}
I deleted some indices and expanded the PVC, but nothing worked.
Check the cluster status via the Kibana console; if you don't have Kibana available, convert the commands to curl requests.
Check the Elasticsearch cluster status:
GET /_cluster/health?wait_for_status=yellow&timeout=50s
Check index status:
GET /_cat/indices/tourism-2022.12.11?v=true&s=index
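For reference, the same checks converted to curl requests (assuming Elasticsearch is reachable on localhost:9200; adjust host, port and credentials to your setup):
# Cluster health (same as the Kibana console command above)
curl -XGET 'localhost:9200/_cluster/health?wait_for_status=yellow&timeout=50s&pretty'
# Status of the affected index
curl -XGET 'localhost:9200/_cat/indices/tourism-2022.12.11?v=true&s=index'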
If all shards are green, do you have documents available in your index?
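A quick way to check that, again as a curl request against the same hypothetical host:
# Count the documents in the affected index
curl -XGET 'localhost:9200/tourism-2022.12.11/_count?pretty'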
I am very new to Elasticsearch and I am looking for a way to list all the fields under _source. So far, I have found ways to get the values of the different fields defined under _source, but not a way to list all the field names. For example, I have the document below:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"skipped": 0,
"failed": 0
},
"hits": {
"total": 2,
"max_score": 1,
"hits": [
{
"_index": "my_product",
"_type": "_doc",
"_id": "B2LcemUBCkYSNbJBl-G_",
"_score": 1,
"_source": {
"email": "123#abc.com",
"product_0": "iWLKHmUBCkYSNbJB3NZR",
"product_price_0": "10",
"link_0": ""
}
}
]
}
}
So, from the above example, I would like to get the field names like email, product_0, product_price_0 and link_0, which are under _source. I have been retrieving the values by parsing the array returned from the Elasticsearch API, but what should go at the ? mark to get the field names: $result['hits']['hits'][0]['_source'][?]
Note: I am using PHP to insert data into Elasticsearch and retrieve data from it.
If I understood correctly, you need array_keys:
array_keys($result['hits']['hits'][0]['_source'])
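For the example document above, that call returns an array like ['email', 'product_0', 'product_price_0', 'link_0'], which you can then loop over to read each field name together with its value.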
In Elasticsearch, when doing a simple query like:
GET miindex-*/mytype/_search
{
"query": {
"query_string": {
"analyze_wildcard": true,
"query": "*"
}
}
}
It returns a format like:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 28,
"max_score": 1,
"hits": [
...
So I parse response.hits.hits to get the actual records.
However, if you are doing another type of query, e.g. an aggregation, the response is totally different:
{
"took": 1,
"timed_out": false,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"hits": {
"total": 28,
"max_score": 0,
"hits": []
},
"aggregations": {
"myfield": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
...
and I actually need to look in another JSON property, response.aggregations.myfield.buckets, which gets even more complicated if you have more than one aggregation.
So, my question is simple: isn't there a way to get Elasticsearch to always respond with just the fields I want, just like in SQL?
E.g.
SELECT author, bookid FROM books
Would return:
{"author":"rogers", "bookid":099991}
{"author":"peter", "bookid":099992}
SELECT COUNT(author) As count_author, author, count(bookid) As count_bookid, bookid FROM books GROUP BY author, bookid
Would return:
{"count_author":4, "author":"rogers", "count_bookid":9, "bookid":099991}
{"count_author":8, "author":"peter", "count_bookid":9, "bookid":099992}
Is there a way to show only the fields I want and nothing else, without having to dig into nested JSON objects and so on? (I want this because I'm building many reports and want a simple function that parses each response in a uniform way.)
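Not a full SQL-style answer, but two response-shaping features may get close, depending on your Elasticsearch version: source filtering to pick the document fields, and the filter_path request parameter to strip away the surrounding envelope. A hedged curl sketch against the miindex-* example above (the author and bookid fields are just the illustrative names from the SQL example):
# Return only author and bookid from each document (source filtering) and
# reduce the response to just those _source objects (filter_path). For an
# aggregation you would filter on aggregations.myfield.buckets instead.
curl -XGET 'localhost:9200/miindex-*/_search?filter_path=hits.hits._source&pretty' -H 'Content-Type: application/json' -d '
{
  "_source": ["author", "bookid"],
  "query": { "match_all": {} }
}'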
I have the following Elasticsearch query and I want to apply a timeout, so I used the "timeout" parameter.
GET testdata-2016.04.14/_search
{
"size": 10000,
"timeout": "1ms"
}
I have set the timeout to 1ms, but I observed that the query takes more than 5000ms. I have also tried the query as below:
GET testdata-2016.04.14/_search?timeout=1ms
{
"size": 10000
}
In both cases, I am getting the response below after approximately 5000ms.
{
"took": 126,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 26536,
"max_score": 1,
"hits": [
{
...................
...................
}
}
}
I am not sure what is happening here. Is anything missing in the above queries? I have tried to find a solution on Google but didn't find anything that works.
Please help.
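One observation, not a confirmed diagnosis: the search "timeout" parameter only bounds query execution on the shards, and the "took" value only measures server-side time, so fetching and transferring 10,000 documents can still make the overall request take several seconds. If the goal is to bound the whole round trip, a client-side timeout is one option, e.g. with curl:
# Client-side cap on the whole request (connect + transfer), separate from
# Elasticsearch's own "timeout" search parameter.
curl --max-time 5 -XGET 'localhost:9200/testdata-2016.04.14/_search?timeout=1ms' -d '
{
  "size": 10000
}'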
I'm seeing weird behaviour with IDs on Elasticsearch 1.2.0 (recently upgraded from 1.0.1).
A search retrieves my document, showing the correct value for _id:
curl 'myServer:9200/global/_search?q=someField:something'
The result is:
{
"took": 79,
"timed_out": false,
"_shards": {
"total": 12,
"successful": 12,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 17.715034,
"hits": [
{
"_index": "global",
"_type": "user",
"_id": "7a113e4f-44de-3b2b-a3f1-fb881da1b00a",
...
}
]
}
}
But a direct lookup by ID doesn't:
curl 'myServer:9200/global/user/7a113e4f-44de-3b2b-a3f1-fb881da1b00a'
The result is:
{
"_index": "global",
"_type": "user",
"_id": "7a113e4f-44de-3b2b-a3f1-fb881da1b00a",
"found": false
}
It seems that this happens on documents that have previously been updated using custom scripting.
Any ideas?
I think you should upgrade to 1.2.1.
According to the release notes (http://www.elasticsearch.org/blog/elasticsearch-1-2-1-released/), there are some problems, especially with get:
`There was a routing bug in Elasticsearch 1.2.0 that could have a number of bad side effects on the cluster. Possible side effects include:
documents that were indexed prior to the upgrade to 1.2.0 may not be accessible via get. A search would find these documents, but not a direct get of the document by ID.`
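If it helps, you can confirm which version the cluster is actually running before and after the upgrade, using the same host as above:
# The root endpoint reports the version of the node that answers;
# _cat/nodes lists the version of every node in the cluster.
curl 'myServer:9200/'
curl 'myServer:9200/_cat/nodes?v&h=name,version'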