Accessing Additional Results
By default, each Nearby Search or Text Search returns up to 20 establishment results per query; however, each search can return as many as 60 results, split across three pages. If your search will return more than 20, then the search response will include an additional value — next_page_token. Pass the value of the next_page_token to the pagetoken parameter of a new search to see the next set of results. If the next_page_token is null, or is not returned, then there are no further results.
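A minimal sketch of walking all three pages with the Nearby Search web service, assuming the Python requests library and a placeholder API key; the short sleep is there because a freshly issued next_page_token takes a moment to become valid:

import time
import requests

API_KEY = "YOUR_API_KEY"  # placeholder
URL = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
params = {"key": API_KEY, "location": "52.2297,21.0122", "radius": 1500, "type": "restaurant"}

results = []
while True:
    resp = requests.get(URL, params=params).json()
    results.extend(resp.get("results", []))
    token = resp.get("next_page_token")
    if not token:
        break  # no next_page_token: no further results (at most 60 in total)
    time.sleep(2)  # give the token time to become valid before requesting the next page
    params = {"key": API_KEY, "pagetoken": token}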
How is next_page_token billed? Is only the first response from the Places API charged? To put the question differently: if I use next_page_token to request the spots beyond the first 20, does that request count toward billing somewhere?
Related
From reading the documentation I can see that the maximum number of results per response can be set to 50. See link below.
https://learn.microsoft.com/en-us/rest/api/cognitiveservices-bingsearch/bing-custom-search-api-v7-reference#count
What is the maximum number of total results this API can return?
If a search returned a total of 400 results, would the Bing Custom Search API be able to return them all?
For example, the Google Custom Search API will only return a maximum of 100 results per query (10 pages of 10 results).
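For reference, paging with this API is done with the count and offset parameters from the linked reference. A rough sketch in Python; the endpoint, configuration id, subscription key, and exact parameter casing are assumptions to verify against the documentation:

import requests

ENDPOINT = "https://api.bing.microsoft.com/v7.0/custom/search"  # assumed endpoint
HEADERS = {"Ocp-Apim-Subscription-Key": "YOUR_SUBSCRIPTION_KEY"}  # placeholder key

def fetch_page(query, config_id, offset, count=50):
    # count is capped at 50 per the reference; offset selects where the page starts.
    params = {"q": query, "customconfig": config_id, "count": count, "offset": offset}
    return requests.get(ENDPOINT, headers=HEADERS, params=params).json()

offset, collected = 0, []
while True:
    data = fetch_page("example query", "YOUR_CONFIG_ID", offset)
    items = data.get("webPages", {}).get("value", [])
    if not items:
        break  # no more results returned for this offset
    collected.extend(items)
    offset += len(items)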
I am trying to get all documents in an index. I tried the following:
1) Getting the total number of records first and then setting the /_search?size= parameter. This doesn't work, as the size parameter is restricted to 10,000.
2) Paginating by making multiple calls with the parameters '?size=1000&from=9000'.
This worked while 'from' was below 9000, but once it exceeds 9000 I again get this size restriction error:
"Result window is too large, from + size must be less than or equal to: [10000] but was [100000]. See the scroll api for a more efficient way to request large data sets. This limit can be set by changing the [index.max_result_window] index level setting"
So how can I retrieve all documents in the index? I read some answers suggesting the scroll API, and even the documentation states:
"While a search request returns a single “page” of results, the scroll API can be used to retrieve large numbers of results (or even all results) from a single search request, in much the same way as you would use a cursor on a traditional database."
But I couldn't find any sample query to get all records in a single request.
I have a total of 388794 documents in the index.
Also note that this is a one-time call, so I am not worried about performance concerns.
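As an aside, the error message above points at the index.max_result_window setting. Raising it is possible, though it only pushes the same ceiling further out; a minimal sketch, assuming a local cluster and a placeholder index name:

import requests

# Raise the from + size ceiling for one index (placeholder name; adjust as needed).
requests.put(
    "http://localhost:9200/my-index/_settings",
    json={"index": {"max_result_window": 400000}},
)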
Figured out the solution. The scroll API is the proper way to do it. Here's how it works:
In the first call to fetch the documents, a size (say, 1000) can be provided, along with a scroll parameter specifying how long the search context stays open before it times out (e.g. 1m for one minute). Since the goal here is every document, a match_all query works:
POST /index/type/_search?scroll=1m
{
  "size": 1000,
  "query": {
    "match_all": {}
  }
}
For all subsequent calls we can use the scroll_id returned in the response of the previous call to get the next chunk of records.
POST /_search/scroll
{
"scroll" : "1m",
"scroll_id" : "DnF1ZXJ5VGhIOLSJJKSVNNZZND344D123RRRBNMBBNNN==="
}
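Putting the two calls together, a minimal sketch of the full loop in Python, assuming the requests library, a local cluster, and the same index/type path as above (newer Elasticsearch versions drop the type segment):

import requests

ES = "http://localhost:9200"

# Initial search: opens a scroll context that stays alive for 1 minute per call.
resp = requests.post(
    ES + "/index/type/_search?scroll=1m",
    json={"size": 1000, "query": {"match_all": {}}},
).json()
docs = resp["hits"]["hits"]
scroll_id = resp["_scroll_id"]

# Keep pulling the next chunk until a page comes back empty.
while True:
    resp = requests.post(
        ES + "/_search/scroll",
        json={"scroll": "1m", "scroll_id": scroll_id},
    ).json()
    hits = resp["hits"]["hits"]
    if not hits:
        break
    docs.extend(hits)
    scroll_id = resp["_scroll_id"]

# Optionally free the scroll context once done.
requests.delete(ES + "/_search/scroll", json={"scroll_id": scroll_id})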
I'm trying to figure out how to accomplish pagination with a multi_match query using Elasticsearch.
The scroll and search_after APIs seem like they won't work. scroll isn't meant for real-time user requests, per the documentation. search_after requires some unique field per document and requires you to sort on that field, per the documentation, but when using a multi_match query you're basically sorting by the score.
So, the only thing I've thought of so far is to do the following:
Send back the last document's id and score and use the score as the sort field. But this could potentially return duplicate documents if other documents were added in between two queries.
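A minimal sketch of that idea, sorting by _score with a unique field as a tiebreaker; the id field, index name, field names, and local endpoint are assumptions for illustration:

import requests

ES = "http://localhost:9200/my-index/_search"

body = {
    "size": 20,
    "query": {"multi_match": {"query": "search terms", "fields": ["title", "body"]}},
    # Sort by score first, then by a unique field so ties break deterministically.
    "sort": [{"_score": "desc"}, {"id": "asc"}],
}
page1 = requests.post(ES, json=body).json()["hits"]["hits"]

# For the next page, pass the sort values of the last hit as search_after.
body["search_after"] = page1[-1]["sort"]
page2 = requests.post(ES, json=body).json()["hits"]["hits"]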
If you want to paginate, the first option is to use the from and size parameters in your query. From the documentation:
Pagination of results can be done by using the from and size parameters. The from parameter defines the offset from the first result you want to fetch. The size parameter allows you to configure the maximum amount of hits to be returned.
Though from and size can be set as request parameters, they can also be set within the search body. from defaults to 0, and size defaults to 10.
Note that from + size can not be more than the index.max_result_window index setting which defaults to 10,000. See the Scroll or Search After API for more efficient ways to do deep scrolling.
If you don't need to paginate over more than 10k results, it's your best choice. The max_result_window setting can be increased, but performance will degrade as the selected page number increases.
But of course, if documents are added while the user is paginating, they will show up in the results and your pagination can be slightly inaccurate.
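A minimal from/size sketch along those lines, assuming a local cluster and a placeholder index name:

import requests

ES = "http://localhost:9200/my-index/_search"
PAGE_SIZE = 20

def fetch_page(page_number):
    # from is the offset of the first hit on the requested page.
    body = {
        "from": page_number * PAGE_SIZE,
        "size": PAGE_SIZE,
        "query": {"match_all": {}},
    }
    return requests.post(ES, json=body).json()["hits"]["hits"]

first_page = fetch_page(0)   # hits 0-19
second_page = fetch_page(1)  # hits 20-39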
For a given date range in the query and with a search_after parameter, I am able to successfully extract the relevant results. How do I figure out when I am at the end of the search results for the given date range, so that I don't have to continue querying with the search_after parameter?
There is a pretty cool "trick" that does not involve any additional queries or knowledge of the total number of results:
Say you have a page size of 20. Instead of asking elasticsearch for 20 results, ask it for 21.
If you got 21 results back, only use the first 20 of them. But you now know that the next query will have at least one more result (If you use the sort values of the 20th result for the search_after parameter, not the 21st!).
If you get 20 results or fewer, there will be no additional results.
This GitHub issue gives some more details on why Elasticsearch does not have this feature out of the box: https://github.com/elastic/elasticsearch/issues/22364
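A minimal sketch of the trick, assuming a page size of 20, a local endpoint, and a hypothetical unique id field used for the sort:

import requests

ES = "http://localhost:9200/my-index/_search"
PAGE_SIZE = 20

def fetch_page(search_after=None):
    body = {
        "size": PAGE_SIZE + 1,  # ask for one extra hit to detect whether a next page exists
        "query": {"match_all": {}},
        "sort": [{"id": "asc"}],
    }
    if search_after is not None:
        body["search_after"] = search_after
    hits = requests.post(ES, json=body).json()["hits"]["hits"]
    page = hits[:PAGE_SIZE]
    has_more = len(hits) > PAGE_SIZE
    # Use the sort values of the last hit on the page, not the extra 21st hit.
    next_after = page[-1]["sort"] if has_more else None
    return page, next_after

page, cursor = fetch_page()
while cursor is not None:
    page, cursor = fetch_page(cursor)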
You can either keep querying until it starts returning zero results, or, since the response does return the total, keep track of how many you've already retrieved and stop searching once you've reached that total. (I do a combination of both.)
First, it seems this isn't related to the unindexed-search privilege. I tried with the Root DN user and got the same problem.
My Case:
I have 5000 user entries; each entry contains "xxx#XXX.com" in the "mail" attribute.
And I have a VLV with sort order: +uid +cn +mail
I try the filter "(mail=.com)" with VLV, trying to get a paged result with the total count returned. I understand that the returned entries will exceed the 4000 limit, and I understand that SSS (server-side sorting) is a very expensive request (this is for an admin, so the operation won't happen too often).
My question is: in this case, should I accept it and tell the user to narrow down the search, or are there any possible solutions to this?
Thanks,
Wayne
No, this is not related to the unindexed-search privilege, but to internal administrative limits.
VLV requests (and sort requests) will work without proper indexing only if they process fewer than 4000 entries.
Otherwise, a proper VLV index is required, and to be used it must match all the parameters of the search: base, scope, filter, and sort order.