Currently, I'm using the Files: list API below to get all the files:
https://developers.google.com/drive/api/v3/reference/files/list
The attached screenshot contains the parameters used, where I've provided pageSize=1000. This only returns 1000 files per call, and I have to set pageToken to the nextPageToken value from the previous response.
Is there a way for the API to return all the files instead of having to set pageToken to the nextPageToken value from the previous response? Please advise.
Answer: No, there is no way to list more than 1000 files without pagination.
Additional information:
If you check the documentation that you yourself have linked, you will notice that it states the default page size is 100; that means if you don't send the pageSize parameter, it will automatically be set to 100 by the system.
You will also notice that it states "Acceptable values are 1 to 1000, inclusive," which means the maximum you can set pageSize to is 1000.
If you want additional files, you need to use the nextPageToken to get the next set of rows.
There is no way around pagination if you want more than 1000 rows. I don't know what you're doing, but you could perhaps use the q parameter to search for just the files you are looking for and thereby keep the response under 1000.
Also note that in my testing I get around 100 files back per page for My Drives and around 460 for shared drives; I have never gotten the full 1000 back for either type. You will need to iterate each "page" in turn.
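For illustration, here is a minimal sketch of that loop using the Node.js googleapis client (the client library and an already-authorized auth object are assumptions; the same pageToken dance applies in any language):

const { google } = require('googleapis');

// Collect every file by following nextPageToken until it disappears.
async function listAllFiles(auth) {
  const drive = google.drive({ version: 'v3', auth });
  const files = [];
  let pageToken;
  do {
    const res = await drive.files.list({
      pageSize: 1000, // the documented maximum per call
      fields: 'nextPageToken, files(id, name)',
      pageToken, // undefined on the first call
    });
    files.push(...(res.data.files || []));
    pageToken = res.data.nextPageToken; // absent on the last page
  } while (pageToken);
  return files;
}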
Related
I am trying to invoke the following Slack API to fetch private and public channels:
https://api.slack.com/methods/conversations.list
As per the Slack documentation, 200 channels are returned at a time by default when the limit is set to 1000.
I am passing types="private_channel,public_channel" to get the private as well as the public channels.
If I pass types=public_channel with limit 1000 or 9999, 162 channels are returned.
If I pass types=private_channel,public_channel with limit 1000 or 9999, 105 channels are returned.
Can anybody please explain this?
With the way pagination works in that API, it's possible to get fewer than the number of results you're asking for, even if there are more results in the total collection to return. You'll need to check if there are additional pages of results and crawl through all of them to build the complete set.
This is because of the way data is retrieved on the back end: the raw fetch includes archived data and data of other types, and all the filtering for your result happens after the data is fetched, so additional API calls are required to get the next window of data to be filtered and then presented to you.
Here's the relevant documentation:
It's possible to receive fewer results than your specified limit, even when there are additional results to retrieve. Avoid the temptation to check the size of results against the limit to conclude the results have been completely returned. Instead, check the next_cursor value in the response_metadata object to make sure that it's empty, null, or non-existent.
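For illustration, here is a minimal sketch of that crawl with the official @slack/web-api Node client (the client library, token, and required scopes are assumptions not stated in the question):

const { WebClient } = require('@slack/web-api');

// Follow next_cursor until it is empty/absent, regardless of page sizes.
async function listAllChannels(token) {
  const client = new WebClient(token);
  const channels = [];
  let cursor;
  do {
    const res = await client.conversations.list({
      types: 'public_channel,private_channel',
      limit: 200, // a request size, not a guaranteed page size
      cursor,
    });
    channels.push(...res.channels);
    cursor = res.response_metadata && res.response_metadata.next_cursor;
  } while (cursor);
  return channels;
}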
In JMeter I need to perform a large search and count the number of rows which are returned. The maximum is 50000 rows.
The number of returned rows is shown on the website after a search: "Number of returned rows: xx".
Alternatively, I can count the rows inside the HTTP response.
I have tried using a regex post-processor to count the number of rows returned; the problem is that JMeter freezes, since the HTTP response is so large.
I have also tried to extract the text directly from the website, unsuccessfully. I guess one can't do that, since the information is not in the HTTP response?
So:
Is there some faster and less demanding way to count all the returned rows inside an HTTP response body?
Or is there some way to get the text directly from the website?
Thank you.
It looks like your application is buggy; I don't think that returning 50000 entries in a single shot is something people should be doing, as it creates extra network traffic and consumes a lot of resources on both the server and the client (browser) side. I would rather expect some form of pagination when it comes to operating on large amounts of data.
If you're totally sure that your application works as expected, you can try using the Boundary Extractor, which is available since JMeter 4.0.
Due to the specifics of its internal implementation it consumes fewer resources and acts faster than the Regular Expression Extractor, so the load you will be able to conduct from a single machine will be higher.
Check out The Boundary Extractor vs. the Regular Expression Extractor in JMeter article for more information
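If the Boundary Extractor still struggles, another option (offered here as an alternative, not something the original poster tried) is a JSR223 PostProcessor with a plain substring scan, which avoids regex backtracking entirely. A rough sketch, assuming a JVM that still bundles a JavaScript engine (on newer JMeter/Java setups you would write the same thing in Groovy) and assuming '<tr' marks a row in your response:

var body = prev.getResponseDataAsString(); // prev = the sampler's result
var count = 0;
var idx = body.indexOf('<tr'); // '<tr' is an assumed row marker
while (idx !== -1) {
  count++;
  idx = body.indexOf('<tr', idx + 1);
}
vars.put('rowCount', '' + count); // read back later as ${rowCount}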
Yes, you can get that count from matchNr, which is appended after the reference name. Use a Regular Expression Extractor to match any name or id and set Match No. to -1 (match all occurrences).
For example, if the regex reference name is totalcount, you can then fetch the count using ${totalcount_matchNr}.
I perform a Bing API search for webpages with the query "cameras".
The first "page" of results (offset=0, count=50) returns 49 actual results. It also returns a totalEstimatedMatches of 114000000 -- 114 million. Neat, that's a lot of results.
The second "page" of results (offset=49, count=50) performs similarly...
...until I reach page 7 (offset=314, count=50). Suddenly totalEstimatedMatches is 544.
And the actual count of results returned per-page trails off precipitously from there. In fact, over 43 "pages" of results, I get 413 actual results, of which only 311 have unique URLs.
This appears to happen for any query after a small number of pages.
Is this expected behavior? There's no hint from the API documentation that exhaustive pagination should lead to this behavior... but there you have it.
Each time the API is called, the search API obtains a group of possible matches starting at the offset in the result set, and then filters the results based on different parameters (e.g. spam, duplicates, SafeSearch setting, etc.), finally leaving a final result set. If the final result set after filtering and optimization contains more than count results, then a number of results equal to count is returned. If the count parameter is more than the final result set count, then the whole final result set is returned, which will be fewer than count results. If the search API is called again, passing in the offset parameter to get the next set of results, the filtering process happens again on the next set of results, which means it may also contain fewer than count results.
You should not expect the full count parameter number of results to always be returned for each API call. If further search results beyond the number returned are required then the query should be called again, passing in the offset parameter with a value equal to the number of results returned in the previous API call. This also means that when making subsequent API calls, the offset parameter should never be a hard coded value and should always be calculated based on the results of previous queries.
totalEstimatedMatches can also add to confusion around the Bing Search API results. The word ‘estimated’ is important because the number is an estimation based on an initial quick result set, prior to the filtering described above. Additionally, the totalEstimatedMatches value can change as you iterate through the result set by making subsequent API calls with increasing offset values. The totalEstimatedMatches should only be used as a rough guide indicating the magnitude of the possible result set, and it should not be used to determine the number of results that will ultimately be returned. To query all of the possible results you should continue making API calls, passing in offset with a value of the sum of the results returned in previous calls, until that sum is greater than totalEstimatedMatches of the most recent API call.
Note that you can see this same behavior by going to bing.com directly and using a query such as https://www.bing.com/search?q=bill+gates&count=50. Notice that you will get around 34 results with a totalEstimatedMatches of ~567,000 (valid as of June 2017, future searches may change), and if you click the 'next page' arrow you will see that the next query executed will start at the offset of the 34 returned in the first query (ie. https://www.bing.com/search?q=bill+gates&count=50&first=34). If you click ‘next’ several more times you may see the totalEstimatedMatches also change from page to page.
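To make the offset bookkeeping concrete, here is a rough Node.js sketch (the v7 endpoint, header name, and response shape are assumptions based on the current Bing Web Search documentation; fetch requires Node 18+):

// Advance offset by the number of results actually returned, and stop once
// the running total passes the latest totalEstimatedMatches.
async function bingSearchAll(query, apiKey, maxResults) {
  const endpoint = 'https://api.bing.microsoft.com/v7.0/search';
  const results = [];
  let offset = 0;
  let totalEstimatedMatches = Infinity;
  while (results.length < maxResults && offset < totalEstimatedMatches) {
    const url = endpoint + '?q=' + encodeURIComponent(query) +
                '&count=50&offset=' + offset;
    const res = await fetch(url, {
      headers: { 'Ocp-Apim-Subscription-Key': apiKey },
    });
    const data = await res.json();
    const page = (data.webPages && data.webPages.value) || [];
    if (page.length === 0) break; // nothing left after filtering
    results.push(...page);
    offset += page.length; // advance by what was RETURNED, never by count
    totalEstimatedMatches = data.webPages.totalEstimatedMatches; // may shrink
  }
  return results;
}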
This seems to be expected behavior. The Web Search API is not a crawler API, so it only delivers results that the algorithms deem relevant for a human. Simply put, most humans won't skim through more than a few pages of results; furthermore, they expect to find relevant results on the first page.
If you could retrieve results in the millions, you could simply copy their search index and Bing would be out of business.
Search indices seem to be matters of political and economic power; as far as I know there are only four relevant search indices worldwide: from Google, from Microsoft (Bing), from Russia, and from China.
Those who control the search, control the Spice... ;-)
I am experimenting with the google custom search API (free version) for performing image search. I would like to commence with the paid version. However, I have some difficulties in understanding the pricing and some documented query parameters in the API calls at https://developers.google.com/custom-search/json-api/v1/using_rest#api-specific_query_parameters
1) In the free version, we have 100 queries/day. If I understood correctly, 1 query means a single API call, and this call can return a maximum of 10 results (since the parameter 'num' takes a maximum value of 10). Is this the case for both the free and paid versions, or is it possible to retrieve more results per API request in the paid version? Precisely, can 'num' take values greater than 10?
2) The parameter 'start' is documented as the index of the first result to return. In the free version, I cannot get more than 100 results for a specific query (parameter 'q'). To summarize precisely: I can get 10 results per API call, each call with the parameter 'start' taking the values 1, 11, ..., 91 and the same value for 'q'. The API call returns an error for any value of 'start' greater than 91. Isn't the free version supposed to allow 100 API calls? Or perhaps this restriction is in place to prevent retrieving more than 100 results per search term 'q'?
3) In the paid version, are API calls which return non-200 responses billed for as well?
4) In the paid version, how many API calls can be made for a specific search term 'q'?
5) Do you think there are particular restrictions with respect to the number of results that apply specific to image search only?
Thanks in advance for your help.
The results are paginated: the search returns 10 results per page. If you want more, you need to set 'start' to 11 to get the next 10, and so on. It is an exact imitation of what happens in a Google UI search; if you have trouble picturing it, go to Google search and observe the results. They should match almost exactly. The 'num' parameter is the number of results per page.
In the free version you have 100 queries free per day. Anything beyond that costs 0.5 cents per request, and you cannot make more than 10k calls per day. So "free" is not entirely free.
In the "paid" version you can buy in bulk. AFAIK there is no daily limit: you can "buy", let us say, 11000 requests for $55 (11000 × $0.005) and use them all up in one day. But the paid version will be discontinued soon :(. Please check this blog for info: https://customsearch.googleblog.com/
I am wondering how the each method on a Parse.Query counts towards the request execution limits. I am building an app that will need to perform a function on many objects (could be more than 1000) in a Parse class.
For example (in JavaScript),
var query = new Parse.Query(Parse.User);
query.equalTo('anObjectIWant',true); //there could be more than 1000 objects I want
query.each(function(object){
doSomething(object); //doSomething does NOT involve another Parse request
});
So, will the above code count as 1 request towards my Parse application execution limit (you get 30/second free), or will each object (each invocation of the callback) use one request (so 1000 objects would be 1000 requests)?
I have evaluated the resource usage by observing the number of API requests made by query.each() for different result set sizes. The bottom line is that (at the moment of writing) this function is using the default query result count limit of 100. Thus if your query matches up to 100 results it will make 1 API request, 2 API requests for 101-200 and so forth.
This behavior cannot be changed by manually increasing the limit with query.limit(1000). If you do, you will get an error when you subsequently call query.each() (this is also mentioned in the documentation).
Therefore, consider implementing this functionality manually (e.g., with recursive query.find() calls), which lets you set the query limit to 1000 and thus, in the best case, consume only one-tenth of the API requests that query.each() would.
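A rough sketch of that manual approach with the classic Parse JS SDK (the findAll helper below is hypothetical, and note that skip-based paging has its own server-side cap):

// Page through find() at the maximum limit of 1000 per request,
// instead of each() with its fixed batch size of 100.
function findAll(makeQuery, skip, accumulated) {
  skip = skip || 0;
  accumulated = accumulated || [];
  var query = makeQuery();
  query.limit(1000); // the maximum find() allows
  query.skip(skip);
  return query.find().then(function (results) {
    accumulated = accumulated.concat(results);
    if (results.length < 1000) {
      return accumulated; // last page reached
    }
    return findAll(makeQuery, skip + 1000, accumulated);
  });
}

// Usage, mirroring the query from the question:
findAll(function () {
  var q = new Parse.Query(Parse.User);
  q.equalTo('anObjectIWant', true);
  return q;
}).then(function (objects) {
  objects.forEach(doSomething);
});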
This would count as 1 or 2 requests, depending on where it runs:
If it is run from a Cloud Code function, it counts as 2: 1 for the Cloud Code call + 1 for the query. Since a query gets its results all at once, it is a single call.
If it is placed within a "beforeSave" trigger or similar, only the query is counted: 1 API call.
So you should be pretty fine, as long as you don't trigger another Parse API call for each result.
I would not be surprised if the .each method queried the server on each iteration.
You can actually check this using their control panel; just look at the number of requests being made.
We left Parse after doing some prototyping. One of the reasons was that, while using proper and suggested code from the Parse website, I managed to generate 6500 requests a day while being the only user of the app.
Using our own API, we are down to no more than 100.