I am wondering how the each command on a Parse Query counts towards the request execution limits. I am building an app that will need to perform a function on many objects (could be more than 1000) in a parse class.
For example (in JavaScript),
var query = new Parse.Query(Parse.User);
query.equalTo('anObjectIWant', true); // there could be more than 1000 matching objects
query.each(function(object) {
  doSomething(object); // doSomething does NOT involve another Parse request
});
So, will the above code count as 1 request towards my Parse application request limit (you get 30/second free), or will each object (each invocation of the each callback) use one request, so that 1000 objects would be 1000 requests?
I have evaluated the resource usage by observing the number of API requests made by query.each() for different result set sizes. The bottom line is that (at the time of writing) this function uses the default query result limit of 100. Thus if your query matches up to 100 results it will make 1 API request, 2 API requests for 101-200 results, and so forth.
This behavior cannot be changed by manually raising the limit with query.limit(1000). If you do, you will get an error when you subsequently call query.each() (this is also mentioned in the documentation).
Therefore, consider implementing this functionality manually (e.g., via repeated query.find() calls), which lets you set the query limit to 1000 and thus, in the best case, consume only one-tenth of the API requests that query.each() would.
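A minimal sketch of that manual approach, paging with limit/skip. The page-fetching function is injected here so the loop can be shown without the Parse SDK; in real code it would do `query.limit(l); query.skip(s); return query.find();`. All names (fetchAll, runQuery, fakeQuery) are hypothetical:

```javascript
// Fetch all matching objects in pages of up to 1000, instead of
// query.each()'s fixed page size of 100. `runQuery(limit, skip)` stands
// in for a Parse query.find() call with those limit/skip values.
async function fetchAll(runQuery, pageSize = 1000) {
  const results = [];
  let skip = 0;
  while (true) {
    const page = await runQuery(pageSize, skip);
    results.push(...page);
    if (page.length < pageSize) break; // last (possibly partial) page
    skip += pageSize;
  }
  return results;
}

// Example with a fake backend of 2500 objects (made-up data):
const data = Array.from({ length: 2500 }, (_, i) => i);
let requests = 0;
const fakeQuery = (limit, skip) => {
  requests++;
  return Promise.resolve(data.slice(skip, skip + limit));
};

fetchAll(fakeQuery).then(all => {
  console.log(all.length, requests); // 2500 objects in 3 requests instead of 25
});
```

In a real Parse app you would also want a stable sort order (e.g. by objectId) so that pages don't shift between requests.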
This would count as 1 or 2 requests, depending on where it runs:
If it is run from a Cloud Code function, it counts as 2: 1 for the Cloud Code call plus 1 for the query. Since a query fetches its results all at once, it is a single call.
If it is placed within a "beforeSave" hook or similar, only the query is counted: 1 API call.
So you should be pretty fine as long as you don't trigger another Parse API request for each result.
I would not be surprised if the .each method queried the server on each iteration.
You can actually check this using their control panel; just look at the number of requests being made.
We left Parse after doing some prototyping. One of the reasons was that, while using proper and suggested code from the Parse website, I managed to generate 6,500 requests a day while being the only user of the app.
Using our own API, we are down to no more than 100.
Related
I have gone through various REST API documentation, but I still don't have a clear understanding of statelessness. Can someone explain it in layman's terms? How is it different from a stateful API? Why are almost all the APIs we develop today RESTful?
Stateless/stateful is a general term and means the same thing for a REST API as for anything else.
Stateless means that when you make a call to some entity, the response (output) depends only on your input. That is, for any number of calls with the same input you will ALWAYS get the same response. Example: the request "what does 2 + 2 equal?" will always get the answer 4, no matter how many times you ask and regardless of whether other requests came in between yours.
Stateful means that the output depends on your input and on some internal state. Example: "Please add 2 to the current number that you hold. What is the new number?" Assuming the internal state (the current number) starts at 0, the first query returns 2, the same query a second time returns 4, and so forth.
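The two examples above can be sketched in a few lines of JavaScript (statelessAdd and statefulAdd are made-up names, purely for illustration):

```javascript
// Stateless: the answer depends only on the request itself.
function statelessAdd(a, b) {
  return a + b; // same inputs, same output, every time
}

// Stateful: the answer also depends on what the server remembers
// from earlier requests.
let current = 0; // server-side session state
function statefulAdd(n) {
  current += n;
  return current; // same input, different output on repeat calls
}

console.log(statelessAdd(2, 2)); // 4
console.log(statelessAdd(2, 2)); // 4 again
console.log(statefulAdd(2));     // 2
console.log(statefulAdd(2));     // 4 -- depends on hidden state
```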
Each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server. -- Fielding, 2000
Note that this is a constraint on semantics: all of the information that you need to understand the meaning of the request is in the request itself.
Counter-example: the LIST command in FTP, where a null pathname argument "implies the user's current working or default directory", is not stateless -- the current working directory is session state that the server needs to remember from previous requests.
I am trying to invoke the following Slack API to fetch private and public channels.
https://api.slack.com/methods/conversations.list
As per the Slack documentation, 200 channels are returned at a time even when limit is set to 1000.
I am passing types=private_channel,public_channel to get both private and public channels.
If I pass types=public_channel with limit 1000 or 9999, 162 channels are returned.
If I pass types=private_channel,public_channel with limit 1000 or 9999, 105 channels are returned.
Can anybody explain this?
With the way pagination works in that API, it's possible to get fewer than the number of results you're asking for, even if there are more results in the total collection to return. You'll need to check if there are additional pages of results and crawl through all of them to build the complete set.
This is because of the way data is retrieved in the back end -- it includes archived data and data of other types, and all the filtering for your result happens after the data is fetched, so additional API calls are required to get the next window of data to be filtered and then presented to you.
Here's the relevant documentation:
It's possible to receive fewer results than your specified limit, even when there are additional results to retrieve. Avoid the temptation to check the size of results against the limit to conclude the results have been completely returned. Instead, check the next_cursor value in the response_metadata object to make sure that it's empty, null, or non-existent.
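A sketch of that crawl in JavaScript; `callSlack` is a hypothetical stand-in for the HTTP call to conversations.list (the cursor-following loop is the point here, not the transport):

```javascript
// Collect all channels by following next_cursor, rather than trusting
// that a single response holds everything up to `limit`.
// `callSlack(cursor)` stands in for a GET to /api/conversations.list
// with types, limit, and (on later calls) cursor parameters.
async function listAllChannels(callSlack) {
  const channels = [];
  let cursor;
  do {
    const resp = await callSlack(cursor);
    channels.push(...resp.channels);
    // Keep paging until next_cursor is empty, null, or absent.
    cursor = resp.response_metadata && resp.response_metadata.next_cursor;
  } while (cursor);
  return channels;
}

// Fake two-page response for illustration:
const pages = [
  { channels: ['general', 'random'], response_metadata: { next_cursor: 'abc' } },
  { channels: ['secret'], response_metadata: { next_cursor: '' } },
];
let call = 0;
listAllChannels(() => Promise.resolve(pages[call++])).then(all => {
  console.log(all.length); // 3 channels collected across 2 API calls
});
```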
We have finally started using the High Level REST Client to ease the development of queries from a backend engineering perspective. For indexing, we are using client.update(request, RequestOptions.DEFAULT) so that new documents are created and existing ones modified.
The issue we are seeing is that indexing is delayed by almost 5 minutes. I see that the client uses async HTTP calls internally, but that should not take so long. I looked for some timing options inside the library and didn't find anything. Am I missing something, or is this missing from the official documentation?
Since you have refresh_interval: -1 in your index settings, the index is never refreshed unless you do it manually, which is why you don't see the data just after it has been updated.
You have three options here:
A. You can call the _update endpoint with the refresh=true (or refresh=wait_for) parameter to make sure that the index is refreshed just after your update.
B. You can simply set refresh_interval: 1s (or any other duration that makes sense for you) in your index settings, to make sure the index is automatically refreshed on a regular basis.
C. You can explicitly call index/_refresh on your index to refresh it whenever you think is appropriate.
Option B is the one that usually makes sense in most use cases.
There are several references on using refresh=wait_for, but I had a hard time finding out what exactly needs to be done in the REST High Level Client.
For all of you who are searching for this answer:
IndexRequest request = new IndexRequest(index, DOC_TYPE, id);
request.setRefreshPolicy(WriteRequest.RefreshPolicy.WAIT_UNTIL);
In JMeter I need to perform a large search and count the number of rows which are returned. The maximum is 50,000 rows.
The number of rows returned is shown on the website after a search: "Number of returned rows: xx".
Alternatively, I can count the rows inside the HTTP response.
I have tried to use a Regular Expression post-processor to count the number of rows returned, but JMeter freezes since the HTTP response is so large.
I have also tried to extract the text directly from the website, unsuccessfully. I guess one can't do that, since the information is not in the HTTP response?
So:
Is there some faster and less demanding way to count all the returned rows inside an HTTP response body?
Or is there some way to get the text directly from the website?
Thank you.
It looks like your application is buggy; I don't think that returning 50,000 entries in a single shot is something people should be doing, as it creates extra network traffic and consumes a lot of resources on both the server and client (browser) side. I would rather expect some form of pagination when it comes to operating on large amounts of data.
If you're totally sure that your application works as expected, you can try the Boundary Extractor, which has been available since JMeter 4.0.
Due to the specifics of its internal implementation, it consumes fewer resources and acts faster than the Regular Expression Extractor, so the load you will be able to generate from a single machine will be higher.
Check out The Boundary Extractor vs. the Regular Expression Extractor in JMeter article for more information
Yes, you can get that count from the matchNr value that JMeter stores alongside the extracted matches. Use a Regular Expression Extractor to match any per-row name or id, and set Match No. to -1 so that all occurrences are captured.
For example, if the regex variable name is totalcount, you can then fetch the count using ${totalcount_matchNr}.
I perform a Bing API search for webpages with the query cameras.
The first "page" of results (offset=0, count=50) returns 49 actual results. It also returns a totalEstimatedMatches of 114000000 -- 114 million. Neat, that's a lot of results.
The second "page" of results (offset=49, count=50) performs similarly...
...until I reach page 7 (offset=314, count=50). Suddenly totalEstimatedMatches is 544.
And the actual count of results returned per-page trails off precipitously from there. In fact, over 43 "pages" of results, I get 413 actual results, of which only 311 have unique URLs.
This appears to happen for any query after a small number of pages.
Is this expected behavior? There's no hint from the API documentation that exhaustive pagination should lead to this behavior... but there you have it.
Each time the API is called, it obtains a group of possible matches starting at the given offset in the result set, and then filters out results based on different parameters (e.g. spam, duplicates, SafeSearch setting, etc.), leaving a final result set. If the final result set after filtering and optimization holds more than count results, then count results are returned. If count is larger than the final result set, the whole final result set is returned, which will be fewer than count results. If the search API is called again, passing in the offset parameter to get the next set of results, then the filtering process happens again on that next set, which means it may also return fewer than count.
You should not expect the full count parameter number of results to always be returned for each API call. If further search results beyond the number returned are required then the query should be called again, passing in the offset parameter with a value equal to the number of results returned in the previous API call. This also means that when making subsequent API calls, the offset parameter should never be a hard coded value and should always be calculated based on the results of previous queries.
totalEstimatedMatches can also add to confusion around the Bing Search API results. The word ‘estimated’ is important because the number is an estimation based on an initial quick result set, prior to the filtering described above. Additionally, the totalEstimatedMatches value can change as you iterate through the result set by making subsequent API calls with increasing offset values. The totalEstimatedMatches should only be used as a rough guide indicating the magnitude of the possible result set, and it should not be used to determine the number of results that will ultimately be returned. To query all of the possible results you should continue making API calls, passing in offset with a value of the sum of the results returned in previous calls, until that sum is greater than totalEstimatedMatches of the most recent API call.
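The pagination rule described above (advance offset by the number of results actually returned, and treat totalEstimatedMatches only as a moving upper bound) can be sketched like this; `search` and `collectAll` are hypothetical names, and `search(offset, count)` stands in for the HTTPS call to the Bing Web Search endpoint:

```javascript
// Page through results, advancing `offset` by the number of results
// actually returned (never a hard-coded step), and stopping once the
// running total passes the most recent totalEstimatedMatches.
async function collectAll(search, count = 50) {
  const results = [];
  let offset = 0;
  let estimated = Infinity;
  while (offset < estimated) {
    const page = await search(offset, count);
    if (page.webPages.value.length === 0) break;     // nothing more to fetch
    results.push(...page.webPages.value);
    offset += page.webPages.value.length;            // sum of returned results
    estimated = page.webPages.totalEstimatedMatches; // may shrink on later pages
  }
  return results;
}

// Fake two-page result set whose estimate shrinks, as described above:
const fakePages = [
  { webPages: { value: new Array(50).fill('r'), totalEstimatedMatches: 1000000 } },
  { webPages: { value: new Array(30).fill('r'), totalEstimatedMatches: 80 } },
];
let n = 0;
collectAll(() => Promise.resolve(fakePages[n++])).then(all => {
  console.log(all.length); // 80 results collected before the estimate is reached
});
```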
Note that you can see this same behavior by going to bing.com directly and using a query such as https://www.bing.com/search?q=bill+gates&count=50. Notice that you will get around 34 results with a totalEstimatedMatches of ~567,000 (valid as of June 2017; future searches may change), and if you click the 'next page' arrow you will see that the next query executed starts at the offset of the 34 results returned by the first query (i.e. https://www.bing.com/search?q=bill+gates&count=50&first=34). If you click 'next' several more times you may see the totalEstimatedMatches change from page to page.
This seems to be expected behavior. The Web Search API is not a crawler API; it only delivers results that the algorithms deem relevant for a human. Simply put, most humans won't skim through more than a few pages of results, and they expect to find relevant results on the first page.
If you could retrieve the results in the millions, you could simply copy their search index and Bing would be out of business.
Search indices seem to be things of political and economic power, as far as I know there are only four relevant search indices world wide: from Google, from Microsoft (Bing), from Russia, and from China.
Those who control the search, control the Spice... ;-)