I am trying to download a complete Elasticsearch index using:
curl -o output_filename -m 600 -GET 'http://ip/index/_search?q=*&size=7000000'.
But it's giving this error:
{"error":"ArrayIndexOutOfBoundsException[-131072]","status":500}
How can I download the complete index data?
The scroll API is what you're looking for, which supports proper pagination:
Scrolling is not intended for real time user requests, but rather for processing large amounts of data
It's the same /_search endpoint, but you additionally pass the ?scroll=<timeout> parameter.
Be sure to understand what the timeout in e.g. scroll=1m means: it keeps your scroll context alive until you request the next batch/page.
Use the scroll_id from the response to request the next batch/page.
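For illustration, here is a minimal sketch of that flow using the Python client (elasticsearch-py); the host, index name and batch size below are placeholders, not taken from your setup:

from elasticsearch import Elasticsearch

es = Elasticsearch("http://ip:9200")  # placeholder host

# Open a scroll context that is kept alive for 1 minute between batches.
resp = es.search(index="index", scroll="1m", size=1000,
                 body={"query": {"match_all": {}}})
scroll_id = resp["_scroll_id"]
hits = resp["hits"]["hits"]

while hits:
    for doc in hits:
        print(doc["_id"])  # process/store each document here
    # Request the next batch/page; this also renews the 1m timeout.
    resp = es.scroll(scroll_id=scroll_id, scroll="1m")
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

es.clear_scroll(scroll_id=scroll_id)  # free the scroll context when done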
When I send a POST request, I receive a warning:
org.elasticsearch.client.RestClient: request [POST http://localhost:9200/_search?typed_keys=true&max_concurrent_shard_requests=5&ignore_unavailable=false&expand_wildcards=open&allow_no_indices=true&ignore_throttled=true&search_type=query_then_fetch&batched_reduce_size=512]
returned 1 warnings: [299 Elasticsearch-7.14.2-6bc13727ce758c0e943c3c21653b3da82f627f75 "this request accesses system indices: [.apm-agent-configuration, .apm-custom-link, .kibana_7.13.4_001, .kibana_task_manager_7.13.4_001, .tasks], but in a future major version, direct access to system indices will be prevented by default"]
Now, I understand that system indices will be hidden in the future and will not be directly accessible. What is the correct usage or command to send so that this warning is not displayed?
Your use of POST http://localhost:9200/_search queries all indices in Elasticsearch, which you probably don't really want to be doing.
You're better off specifying which indices you want to query.
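For example, target a specific index in the URL (my-index is a placeholder for your own index name):
POST http://localhost:9200/my-index/_search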
My friend stored 65,000 documents on Elasticsearch cloud, and I would like to retrieve all of them (using Python). However, when I run my current script, there is an error stating:
RequestError(400, 'search_phase_execution_exception', 'Result window is too large, from + size must be less than or equal to: [10000] but was [30000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting.')
My script:

from elasticsearch import Elasticsearch

es = Elasticsearch(cloud_id=cloud_id, http_auth=(username, password))
docs = es.search(body={"query": {"match_all": {}}, "_source": ["_id"], "size": 65000})
What would be the easiest way to retrieve all those documents without being limited to 10,000 docs? Thanks.
The limit has been set so that the result set does not overwhelm your nodes. Results occupy memory on the Elasticsearch node, so the bigger the result set, the bigger the memory footprint and the impact on the nodes.
Depending on what you want to do with the retrieved documents,
use the scroll API (as suggested in your error message) if it's a batch job. Be mindful of the lifetime of the scroll context in that case (there is a sketch after these links).
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html#request-body-search-scroll
or use search_after:
https://www.elastic.co/guide/en/elasticsearch/reference/current/search-request-body.html#request-body-search-search-after
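As a rough sketch of the scroll-based option in Python (assuming the same client object as in your script; helpers.scan wraps the scroll API for you):

from elasticsearch import Elasticsearch
from elasticsearch.helpers import scan

es = Elasticsearch(cloud_id=cloud_id, http_auth=(username, password))

# scan() drives the scroll API under the hood and yields every matching hit,
# so the 10000-document result window is never exceeded by a single request.
all_ids = [hit["_id"]
           for hit in scan(es,
                           query={"query": {"match_all": {}}},
                           _source=False,   # only the ids are needed here
                           size=1000)]      # documents fetched per scroll batch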
You should use the scroll API and get the results over several calls. The scroll API will return the results in batches of at most 10,000 (which remain available for the amount of time you indicate in the call), and you can then paginate through them and obtain them thanks to a scroll_id.
The error message itself mentions how you can solve the issue; look carefully at this part of the error message:
This limit can be set by changing the [index.max_result_window] index
level setting.
Please refer to the update index settings API for how to change that.
So for your index it would look like this:

PUT /<your-index-name>/_settings
{
  "index" : {
    "max_result_window" : 65000
  }
}

Note that 65000 equals the total number of docs in your index.
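If you prefer to apply that setting from Python, here is a minimal sketch using the same client object as in your script (the index name is a placeholder):

from elasticsearch import Elasticsearch

es = Elasticsearch(cloud_id=cloud_id, http_auth=(username, password))

# Raise the result window for this one index; keep the memory impact
# mentioned above in mind, since the node must hold the whole result set.
es.indices.put_settings(index="your-index-name",
                        body={"index": {"max_result_window": 65000}})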
I am trying to retrieve data from an index in Elasticsearch. I configured the "QueryElasticSearchHttp" processor and it works just fine. However, when I try to use the ScrollElasticsearchHttp processor with the same URL, query, and index properties, and set 'scroll' to the default of 1 minute, it doesn't work.
I get a 404 error response: "Elasticsearch returned code 404 with message Not found".
I am also tailing the log on the ES cluster, and I see this error:
[DEBUG][o.e.a.s.TransportSearchScrollAction] [2] Failed to execute query phase
org.elasticsearch.transport.RemoteTransportException:[127.0.0.1:9300][indices:data/read/search[phase/query+fetch/scroll]]
Caused by: org.elasticsearch.search.SearchContextMissingException: No search context found for id [2]
at org.elasticsearch.search.SearchService.getExecutor(SearchService.java:457) ~[elasticsearch-7.5.2.jar:7.5.2]
I am on Apache NiFi 1.10.0.
Here is the config for the processor:
I should see a total of 441 hits, and with page size 20 I should see 23 queries being made to ES.
But I don't get a single result back. I have tried higher values for "scroll" and also played around with "page size" to no avail.
I also noticed that even though the ScrollElasticsearchHttp processor is set to run every 1 minute, I don't see the error repeated every minute in the ES log.
Update:
When I cleared the state via the UI ("View state" -> "Clear State"), I was able to make a single call that returned a page full of hits in one flowfile.
However, there are more pages to be retrieved. How do I make the processor go and fetch the next page?
My understanding was that a single invocation of ScrollElasticsearchHttp would page through all the result sets and bring in each page as one flowfile. Is this not correct?
Please decrease the scheduling time to around 10-20 seconds. Then, every 10-20 seconds, the processor will fetch the next set of records based on your page size.
You can check the state value while the fetching process is in progress, i.e. you will find a scroll id in it. Once the fetching process is complete, the state value will change to "finishedQuery" : true.
I'm using OpenDJ 3.0.0 release version.
I have two base DNs: the first is dc=tenant1 and the second is dc=tenant2. The VLV index I created is based on dc=tenant1, but the LDAP search happens on dc=tenant2.
Here is the VLV index definition:
filter:
(&(objectClass=ns-nationsky-base-subject)(uid=)(cn=))
base dn: dc=tenant1
sort order: uid cn mail
scope: one level
I get "# Server-side sort failed: Unwilling to Perform" when I try to use ldapsearch with a VLV control, like below:
/ldapsearch -p 1389 -h localhost -D 'cn=Directory Manager' -w 'password' -b 'ou=People,ou=Subjects,dc=tenant2' -G 0:2000:1:0 -s one --sortorder uid "(uid=a)" cn
It all works well, but there will always be a "# Server-side sort failed: Unwilling to Perform" error if there are too many entries on my server (say 15,000).
From the access log, I can see an unindexed search:
[19/Sep/2016:23:06:38 +0800] SEARCH REQ conn=35 op=1 msgID=2 base="ou=People,ou=Subjects,dc=tenant2" scope=one filter="(uid=a)" attrs="cn"
[19/Sep/2016:23:06:40 +0800] SEARCH RES conn=35 op=1 msgID=2 result=0 nentries=8458 unindexed etime=2543
Any idea how I can fix it?
A VLV index and its queries are really meant to browse a well-known set of entries (like all users), not varying sets of entries.
So, in order to use a VLV index, the search request must match the base, the scope, the filter and the sort order defined for that index (and filters should be constant).
If the VLV index was defined with (&(objectClass=ns-nationsky-base-subject)(uid=)(cn=)), then a search with (uid=a) will not match the index and thus cannot be used.
Server-side sorting is a very expensive request; this is why, when there is no index, the server will refuse to sort many entries (governed by index-entry-limit). While it is possible to increase this limit, it has very serious implications for the amount of resources used in the server and may seriously impact the server's performance.
I need to use the Google Custom Search API (https://developers.google.com/custom-search/v1/overview). That page says:
For CSE users, the API provides 100 search queries per day for free.
If you need more, you may sign up for billing in the Developers
Console. Additional requests cost $5 per 1000 queries, up to 10k
queries per day.
I already signed up for billing in the Developers Console. However, I still cannot retrieve more than 100 results. What else should I do? https://www.googleapis.com/customsearch/v1?cx=CSE_INSTANCE&key=API_KEY&q=QUERY&start=100
{ error: { errors: [ { domain: "global", reason: "invalid", message: "Invalid Value" } ], code: 400, message: "Invalid Value" } }
Query: Definition
https://support.google.com/customsearch/answer/1361951
Any actual user query from a Google Site Search engine, including but
not limited to search engines installed on your website using XML,
iFrame, or the Custom Search Element.
That means you would probably need to send eleven queries to get more than 100 results.
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=1
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=11
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=21
GET ...
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=81
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=91
GET https://www.googleapis.com/customsearch/v1?&q=QUERY&...&start=101
Check every response, and if the error code is 400 you can stop: there is probably no need to send the next (&start=previous+10) request.
Now you can merge the responses and start building your results page.
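A minimal Python sketch of that loop, using the requests library (the key, engine id and query values are placeholders):

import requests

API_KEY = "YOUR_API_KEY"       # placeholder
CSE_ID = "YOUR_CSE_INSTANCE"   # placeholder
QUERY = "example query"        # placeholder

items = []
for start in range(1, 102, 10):  # start = 1, 11, 21, ..., 91, 101
    resp = requests.get(
        "https://www.googleapis.com/customsearch/v1",
        params={"key": API_KEY, "cx": CSE_ID, "q": QUERY, "start": start},
    )
    if resp.status_code == 400:  # no more pages available, stop here
        break
    items.extend(resp.json().get("items", []))

# items now holds the merged results for building the results page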
Google Custom Search and Google Site Search return up to 10 results
per query. If you want to display more than 10 results to the user,
you can issue multiple requests (using the start=0, start=11 ...
parameters) and display the results on a single page. In this case,
Google will consider each request as a separate query, and if you are
using Google Site Search, each query will count towards your limit.
There might be a better way to do this than the one I described above (but I'm not sure about batching API calls).
And (finally) a possible answer to your question: I ran more than a few tests, but I haven't had any luck with start greater than 100 (I was getting the same as you: <Response [400]>). I'm using a "Browser key" from my billing-enabled project. That could mean we can't get the 101st, 102nd, 103rd, etc. results with the CSE API.
The API documentation says it never returns more than 100 items.
https://developers.google.com/custom-search/v1/reference/rest/v1/cse/list
start
integer (uint32 format)
The index of the first result to return. The default number of results
per page is 10, so &start=11 would start at the top of the second page
of results. Note: The JSON API will never return more than 100
results, even if more than 100 documents match the query, so setting
the sum of start + num to a number greater than 100 will produce an
error. Also note that the maximum value for num is 10.