What counts as a Parse request? - parse-platform

Because of the change to its Terms, Parse now limits the number of requests per second, which is a good thing, but do Parse Push and Parse Analytics count as requests?

Anytime you make a network call to Parse on behalf of your app via the iOS, Android, JavaScript, Windows, Xamarin, or Unity SDKs, or the REST API, it counts as an API request.
This includes things like finds, saves, and logins, among other kinds of requests. It also includes requests to send push notifications, although each push is counted as a single request regardless of how many recipients are targeted. Serving Parse files counts as an API request, including static assets served from Parse Hosting.
Analytics requests have a special exemption: you can send analytics events at any time without being limited by your app's request limit, as noted on Parse's Plans page.
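For concreteness, here is a minimal sketch of a push sent through Parse's REST API (the keys are placeholders). However many installations subscribe to the channel, this counts as one request:

```javascript
// Hedged sketch: one push call = one API request, regardless of
// how many devices the "giants" channel has. Keys are placeholders.
async function sendPush() {
  const res = await fetch("https://api.parse.com/1/push", {
    method: "POST",
    headers: {
      "X-Parse-Application-Id": "YOUR_APP_ID",
      "X-Parse-REST-API-Key": "YOUR_REST_API_KEY",
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      channels: ["giants"],
      data: { alert: "The Giants won against the Mets 2-3." },
    }),
  });
  console.log(res.status); // 200 on success
}
```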

Here is the link to the official FAQ: https://parse.com/plans/faq
What is considered an API request?
In general, anytime you make a network call to Parse on behalf of your app using one of the Parse SDKs or the REST API, it counts as an API request.
What types of operations are counted as API requests?
Queries, saves, and logins, among other kinds of operations, are taken into account when determining the number of requests generated by your app. A request to send a push notification sent by an SDK counts as a single request regardless of how many installations are targeted. Batched requests are counted based on the number of operations performed in each batch. Serving Parse files counts as an API request, including static assets served from Parse Hosting. Analytics requests do not count towards your app's request limit.
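To illustrate the per-operation counting of batches, here is a hedged sketch against the REST batch endpoint: one HTTP call, but the three creates below are billed as three API requests. The class name and keys are placeholders:

```javascript
// One HTTP call to /1/batch, counted as three API requests
// (one per operation). "GameScore" and the keys are placeholders.
// Run in an ES module (top-level await).
const batch = {
  requests: [139, 141, 142].map((score) => ({
    method: "POST",
    path: "/1/classes/GameScore",
    body: { score, playerName: "Sean Plott" },
  })),
};

const res = await fetch("https://api.parse.com/1/batch", {
  method: "POST",
  headers: {
    "X-Parse-Application-Id": "YOUR_APP_ID",
    "X-Parse-REST-API-Key": "YOUR_REST_API_KEY",
    "Content-Type": "application/json",
  },
  body: JSON.stringify(batch),
});
console.log(await res.json()); // one result object per operation
```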

Related

How can we increase throughput on the LUIS programmatic API?

When using the LUIS programmatic API, we get frequent 429 errors ("too many requests") when doing a half-dozen GET and POST requests. We've inserted a pause in our code to deal with this.
We have a paid subscription key to LUIS, which indicates we should get 50 requests/second (see https://azure.microsoft.com/en-us/pricing/details/cognitive-services/language-understanding-intelligent-services/). However, it seems the paid subscription key can only be used for hitting the application endpoint. For Ocp-Apim-Subscription-Key in request headers, we must use our "programmatic key", which is associated with the Starter_Key, which is (apparently) rate-limited.
Am I missing something here? How do we get more throughput on the LUIS programmatic API?
One of our engineers arrived at the following answer, so I'm posting it here.
The programmatic API is limited to 5 requests per second, 100K requests per month for everyone. Your paid subscription key is only for the endpoint API, not for the programmatic API.
If you need more throughput:
Put your API requests in a queue. It is unlikely that you need to update your LUIS model non-stop, 5 times per second -- you probably just have a burst of updates. Put them in a queue to stay within the limit (see the sketch after this list).
Don't try using the same user account to manage multiple LUIS models. Set up additional accounts for each model, which gives you additional programmatic keys. Each programmatic key gives you 5 requests per second.
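A minimal sketch of the queueing idea, assuming the 5 requests/second limit (the LUIS URL and the key value are placeholders):

```javascript
// Drains at most `limit` queued jobs per second.
class RateLimitedQueue {
  constructor(limit = 5) {
    this.limit = limit;
    this.jobs = [];
    setInterval(() => this.drain(), 1000);
  }

  // Returns a promise that settles with the job's result.
  enqueue(job) {
    return new Promise((resolve, reject) => {
      this.jobs.push({ job, resolve, reject });
    });
  }

  drain() {
    // Take up to `limit` jobs off the front of the queue and run them.
    for (const { job, resolve, reject } of this.jobs.splice(0, this.limit)) {
      job().then(resolve, reject);
    }
  }
}

// Usage: wrap each programmatic-API call in a job.
const queue = new RateLimitedQueue(5);
const luisUrl = "https://example.com/luis/api/..."; // placeholder
queue.enqueue(() =>
  fetch(luisUrl, { headers: { "Ocp-Apim-Subscription-Key": "YOUR_PROGRAMMATIC_KEY" } })
);
```

This smooths a burst of updates into a steady 5 requests/second instead of failing with 429s.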

Google Maps API token/authentication

Google Maps API provides an Autocompletion service.
According to this blog post (official?), this service is limited only by the requirement to display a "powered by Google" logo.
When I'm using the JS library (http://maps.google.com/maps/api/js?sensor=false&libraries=places) I'm not sending any key information. But in a sniffer I can see some token GET parameter, which seems to be generated by the library.
Which limit information is correct?
How can Google track usage without a key (in case it is limited by requests per day)?
Is it possible to retrieve autosuggestions via JS (from google.maps.places.Autocomplete), but then use the reference (without storing it) on the backend to load place details (similar to the getPlace() functionality of an Autocomplete object)? If this is not limited, how do I generate the token?
Google Places API Web Service
The Google Places API Web Service enforces a default limit of 1,000 requests per 24 hour period, which you can increase free of charge. If your app exceeds the limit, the app will start failing. Verify your identity to get up to 150,000 requests per 24 hour period, by enabling billing on the Google Developers Console.
Now check at the very top of that page:
Note: These limits do not apply to the Places Library in the Google Maps JavaScript API, which is covered by the Google Maps JavaScript API limits. If you are developing a web based application that only needs to search for places, and does not submit new places, you should use the Places Library of the Google Maps JavaScript API rather than the Google Places API Web Service. The Places Library assigns a quota to each end user rather than to each key. This means that your available quota increases with your user base rather than being capped at a fixed amount.
They are probably using the IP address to identify different end users.
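To sketch the last part of the question: the Places Library flow looks roughly like this (assuming the Maps JavaScript API was loaded with &libraries=places; the input element id is hypothetical). The place_id it yields could then be resolved on the backend via the Place Details web service, where the Web Service quota quoted above applies:

```javascript
// Assumes the Maps JavaScript API is loaded with &libraries=places.
const input = document.getElementById("search"); // hypothetical element id
const autocomplete = new google.maps.places.Autocomplete(input);

autocomplete.addListener("place_changed", () => {
  const place = autocomplete.getPlace(); // details for the chosen suggestion
  // place.place_id can be sent to your backend and looked up there,
  // but that lookup is billed against the Web Service quota, not per user.
  console.log(place.place_id, place.formatted_address);
});
```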

Choosing a proxy caching approach to work with Google Calendar API

I need a suitable caching approach for use with an enterprise portal showing data from the Google Calendar API. What algorithms or design patterns are best applicable?
The Google Calendar API is limited by number of requests per day (defaults to 10,000 requests/day - I have requested more) and rate of access (5 requests/second/user).
There are two core API methods that I expect to use, one to get a list of user calendars (1 API hit) and one to download the events of an individual calendar (1 API hit per calendar).
Both the calendar list and individual calendars contain etag values which can be used to help avoid unnecessary API requests. If you have a list of individual calendar etag values, then you can see if any of these have changed by just querying the calendar list. (Unfortunately an HTTP 304 Not Modified response is still counted as an API hit.)
Also, I don't really want to download and cache the entire calendar contents (so maybe just a few days or weeks at a time).
I need to find an approach which tries to minimize the number of API calls but doesn't try to store everything. It also needs to be able to cope with occasionally fetching data from unchanged calendars because the "time sliding window" on the calendar data has moved on. I would like the system to be backed by data storage so that multiple portal instances could share the same data.
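A hedged sketch of the refresh loop this implies (the api and store objects are hypothetical stand-ins for the Calendar API client and the shared data storage):

```javascript
// Etag-driven refresh over a sliding time window.
// `api` and `store` are hypothetical wrappers: `api` for the Calendar
// API client, `store` for the database shared by all portal instances.
async function refreshCalendars(api, store, windowStart, windowEnd) {
  const list = await api.getCalendarList(); // 1 API hit

  for (const cal of list.items) {
    const cached = await store.get(cal.id);

    // Re-fetch when the calendar changed (etag differs) or when the
    // sliding window has moved past what was cached earlier.
    const windowMoved = !cached || cached.windowEnd < windowEnd;
    if (cached && cached.etag === cal.etag && !windowMoved) continue;

    const events = await api.getEvents(cal.id, windowStart, windowEnd); // 1 API hit
    await store.put(cal.id, { etag: cal.etag, windowEnd, events });
  }
}
```

Comparing etags against the single calendar-list call limits the per-calendar downloads to calendars that actually changed (or whose cached window has expired).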

POSTing entities to WebAPI in batch?

Do I need to send individual entity updates to WebAPI, or can I POST an array of them and send them all at once? It seems like a dumb question, but I can't find anything that says one way or another.
Brad has a blog post that talks about implementing batching support in Web API.
Also, the Web API samples project on CodePlex has a sample for doing batching in Web API hosted on ASP.NET.
It seems like Web API 2 has support for this.
From the site (Web API Request Batching):
Request batching is a useful way of minimizing the number of messages that are passed between the client and the server. This reduces network traffic and provides a smoother, less chatty user interface. This feature will enable Web API users to batch multiple HTTP requests and send them as a single HTTP request.
There are a number of samples for different scenarios on this page.
https://aspnetwebstack.codeplex.com/wikipage?title=Web+API+Request+Batching
You will have to create an action that accepts a collection of items.
If all you have is an action that accepts a single item, then you need to send separate requests.
With batching, always think about how you would report failures and whether the failure of a single item should invalidate the whole batch.
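On the client, a collection-accepting action reduces to a single POST of an array. A sketch, where the /api/orders route and the entity shape are hypothetical:

```javascript
// One request carrying many entities, assuming the server exposes
// an action bound to a collection (e.g. POST /api/orders taking Order[]).
const orders = [
  { id: 1, total: 10.5 },
  { id: 2, total: 20.0 },
];

const res = await fetch("/api/orders", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify(orders),
});
// Returning a per-item result body makes partial failures reportable
// without rejecting the whole batch.
console.log(await res.json());
```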

Should I do API requests server side or client side?

I am trying to make a web app using Express.js and CoffeeScript that pulls data from Amazon, Last.fm, and Bing's web APIs.
Users can request data such as the prices for a specific album from a specific band, upcoming concert times and locations for a band, etc... stuff like that.
My question is: should I make these API calls client-side using jQuery and getJSON or should they be server-side? I've done client-side requests; how would I even make an API call from the server side?
I just want to know what the best practice is, and also if someone could point me in the right direction for making server-side API requests, that would be very helpful.
Thanks!
There are two key considerations for this question:
Do calls incur any data access? Are the results just going to be written to the screen?
How & where do you plan to handle errors? How do you handle throttling?
Item #2 is really important here because web services go down all of the time for a whole host of reasons. Your calls to Bing, Amazon, and Last.fm will fail probably 1% or 0.1% of the time (based on my experience).
To make requests using server-side JS, you probably want to take a look at the Request package on NPM.
It's often good to abstract away your storage and dependent services to isolate changes and offer a consolidated and consistent web API for your application. But sometimes, if you have a good hypermedia web API (RESTful responses link to other resources), you could reference a resource link from another service in the response from your service (e.g. an SO request could reference a user's Gravatar image/resource). There's no one size fits all - it depends on whether you want to encapsulate the dependency or integrate with it.
It might be beneficial to make the web API requests from your own service and expose the results via Express.js as your own web APIs.
Making HTTP web API requests is easy from Node. Here's another SO post covering that:
HTTP GET Request in Node.js Express
Well, the way you describe it, I think you may want to fetch data from Amazon, Last.fm, and so on, process it with Node, save it in your database, and provide your own API.
You can use Node's http.request() to fetch the data and build your own REST API with Express.js.
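A minimal sketch of that shape (the upstream URL and the route are placeholders):

```javascript
// Proxy an upstream API behind your own Express route, so the
// client never talks to Amazon/Last.fm/Bing directly.
const express = require("express");
const https = require("https");

const app = express();

app.get("/api/album-prices/:album", (req, res) => {
  const upstream =
    "https://example.com/prices?album=" + encodeURIComponent(req.params.album);

  https
    .get(upstream, (apiRes) => {
      let body = "";
      apiRes.on("data", (chunk) => (body += chunk));
      apiRes.on("end", () => {
        // Post-process / cache here before answering the client.
        res.type("json").send(body);
      });
    })
    .on("error", () => res.status(502).json({ error: "upstream failed" }));
});

app.listen(3000);
```

Handling the upstream calls server-side also keeps your API keys off the client and gives you one place to add the error handling and throttling discussed above.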
