GetStream update limit in free plan

What are the limitations of the free plan in GetStream?
I have 8 members and 5 administrators following each other's feeds, and I always get an email from GetStream alerting me about the rate limit when I create the follow relationships.
Now I have an issue when updating activities. Perhaps I have reached my update limit, because sometimes when I try to create an activity, I get an ERROR TIMEDOUT.

We send out two rate limit messages: one for API calls, and one for feed updates. Our API call rate limit is about 2000 activities per minute, but feed updates are more like 50-100 per minute on the free plan. Setting up a follow relationship will trigger some feed updates as old activities get copied from other feeds to the new follower's feed.
When you do hit a rate limit, we don't stop your incoming traffic, but we de-prioritize it slightly so it takes a little longer to catch up. Our API v2, coming out soon, will report rate-limit information in API responses, so you'll have more visibility into how close you are to hitting those limits before getting emails.
Regarding timeouts, which region is your app in (us-east, us-west, eu-central) and where are you located relative to that region? We're going to be rolling out multi-region support later this year to minimize latencies there as well.
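If the follow fan-out itself is what's tripping the limit, one workaround is to pace the follow calls server-side. A minimal sketch with the getstream Node client (the 2-second spacing and the API_KEY/API_SECRET/APP_ID names are assumptions, not official GetStream guidance):

// Hypothetical pacing helper: create follow relationships slowly so the
// resulting feed copies stay under ~50-100 feed updates per minute.
const stream = require("getstream");
const client = stream.connect(API_KEY, API_SECRET, APP_ID);

async function followSlowly(pairs, delayMs = 2000) {
  for (const { followerId, targetId } of pairs) {
    // Each follow copies recent activities into the follower's feed,
    // and those copies count against the feed-update limit.
    await client.feed("timeline", followerId).follow("user", targetId);
    await new Promise((resolve) => setTimeout(resolve, delayMs));
  }
}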

Related

What is "sf_max_daily_api_calls"?

Does anyone know what the "sf_max_daily_api_calls" parameter in Heroku mappings does? I don't want to assume it's a daily limit for write operations per object, and I can't find an explanation.
I tried to open a ticket with Heroku, but in their support ticket form the "Which application?" drop-down is required, and none of the support categories offer anything to choose from there; the only option is "Please choose...".
I've tried to find any reference to this field and can't - I can only see it used in Heroku's Quick Start guide, without an explanation. I'm working on a very busy object, read/write, and want to understand any limitations I need to account for.
Salesforce orgs have a rolling 24-hour limit on daily API calls. The limit is very generous in test orgs (sandboxes) - 5M calls, because you can make stupid mistakes there. In production it's lower. A bit counterintuitive, but it protects their resources and forces you to write optimised code/integrations...
You can see your limit in Setup -> Company Information. There's a formula in the documentation; roughly speaking, you gain more of that limit with every user license you purchase (more for "real" internal users, less for community users), the same as with data storage limits.
Also, every API call is supposed to return your current usage (in a special tag for the SOAP API, in a header for the REST API), so I'm not sure why you'd have to hardcode anything...
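For the REST API that usage arrives in the Sforce-Limit-Info response header. A minimal sketch of reading it (assuming Node 18+ fetch in an ES module; INSTANCE_URL and ACCESS_TOKEN are placeholders for your org's values):

// Read current API usage from the header Salesforce returns on REST calls.
const response = await fetch(`${INSTANCE_URL}/services/data/v52.0/limits`, {
  headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
});

// The header value looks like "api-usage=1234/15000".
const limitInfo = response.headers.get("Sforce-Limit-Info");
const [used, max] = limitInfo.replace("api-usage=", "").split("/").map(Number);

console.log(`Used ${used} of ${max} daily API calls`);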
If you write your operations right, the limit can be very generous. I have no idea how Heroku Connect works internally; ideally you'd spot a mention of "Bulk API 2.0" in its documentation, or try to find out whether it operates synchronously or asynchronously.
A normal, old-school synchronous update via the SOAP API lets you process 200 records at a time, spending 1 API call. The REST Bulk API accepts CSV/JSON/XML of up to 10K records and processes them asynchronously; you poll for an "is it done yet" result... So starting a job, uploading the data, closing the job, and then checking only, say, once a minute can easily total 4 API calls, and you can process millions of records before hitting the limit.
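A hedged sketch of that four-call flow using Bulk API 2.0's ingest endpoints (the paths follow Salesforce's documented pattern, but treat csvData and the credentials as placeholders):

const base = `${INSTANCE_URL}/services/data/v52.0/jobs/ingest`;
const auth = { Authorization: `Bearer ${ACCESS_TOKEN}` };

// 1. Create the job (1 API call).
const job = await (await fetch(base, {
  method: "POST",
  headers: { ...auth, "Content-Type": "application/json" },
  body: JSON.stringify({ object: "Account", operation: "update" }),
})).json();

// 2. Upload the CSV payload (1 API call).
await fetch(`${base}/${job.id}/batches`, {
  method: "PUT",
  headers: { ...auth, "Content-Type": "text/csv" },
  body: csvData, // placeholder: your records as CSV text
});

// 3. Close the job so Salesforce starts processing it (1 API call).
await fetch(`${base}/${job.id}`, {
  method: "PATCH",
  headers: { ...auth, "Content-Type": "application/json" },
  body: JSON.stringify({ state: "UploadComplete" }),
});

// 4. Poll about once a minute until it's done (1 API call per check).
let state;
do {
  await new Promise((resolve) => setTimeout(resolve, 60000));
  state = (await (await fetch(`${base}/${job.id}`, { headers: auth })).json()).state;
} while (state !== "JobComplete" && state !== "Failed");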
When all else fails - you've exhausted your options, can't optimise any further, can't purchase more user licenses - I think they sell "packets" of additional API calls; contact your account representative. But there are lots of things you can try before that, not least setting up a warning for when you hit, say, a 30% threshold.

What is the minimum interval allowed for calling List Payment API

One of our cafes has a coffee station where the barista needs to see orders (payments) as they come in from the registers. We are not able to use the Webhooks feature because it does not allow us to filter by location and register, and our volume is too high. So I am developing an iOS app which will periodically call the Payments API for that specific location to get the latest transactions, using the begin_time parameter to fetch transactions since the last query. The app will be deployed on only one device, and we would like to make the call every 5-10 seconds; it will probably pull down 1-3 transactions per call. Is there a minimum interval that is recommended or enforced?
Thanks,
Mike
The only minimum interval you would have to worry about is the one imposed by rate limits, and at that rate you won't have any problems. You can read more about error codes (including rate limiting) in Square's official documentation.
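For what it's worth, the polling pattern from the question might look like this (a hedged sketch against the v1 List Payments endpoint; LOCATION_ID and ACCESS_TOKEN are placeholders, and a production version would also guard against overlapping requests):

// Every 10 seconds, fetch payments created since the last successful query.
let lastSeen = new Date().toISOString();

setInterval(async () => {
  const url = `https://connect.squareup.com/v1/${LOCATION_ID}/payments` +
              `?begin_time=${encodeURIComponent(lastSeen)}`;
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${ACCESS_TOKEN}` },
  });
  if (res.status === 429) return; // rate limited: skip this tick, retry next one
  const payments = await res.json();
  if (payments.length > 0) {
    lastSeen = payments[payments.length - 1].created_at; // advance the cursor
    payments.forEach((p) => console.log("New payment:", p.id));
  }
}, 10000);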

Error 429: Insufficient tokens (DefaultGroupUSER-100s) What defines a user?

tl;dr: do 100 devices all using the same Client ID count as 100 users, each with their own limits, or as one user sharing a single limit?
I have a webpage which reads and writes to a Google Sheet.
Because the webpage needs to know if a cell has changed, it polls the server once every 1000ms:
var pollProcId = window.setInterval(pollForInput, 1000);
where pollForInput does a single:
gapi.client.sheets.spreadsheets.values.get(request).then(callback);
When I tried to use this app with a class of 100 students, I got many 429 error codes (more than successful reads) in response to google.apps.sheets.v4.SpreadsheetsService.GetValues requests.
Many of my users never got as far as seeing even the first request come back.
As far as I can make out, these are AnalyticsDefaultGroupUSER-100s errors which, according to the error responses page:
Indicates that the requests per 100 seconds per user per project quota
has been exhausted.
But with my app only requesting once per 1000 milliseconds, I wouldn't expect to see this many 429s: with a limit of 100 requests per 100 seconds (1 per second), only users whose sessions ran longer than 100 seconds should have received a 429.
I know I should implement exponential backoff (which I'll do, I promise), but I'm worried I'm misunderstanding what a "user" in this context is.
Each user is using their own device (so presumably has a different IP address) but they are all using my "Client ID".
Does this scenario count as many users making one request per second, or a single user making a hundred requests per second?
The "user" in the per-user quota means a single end user making requests. Take the Sheets API: it has a quota of 100 read requests per 100 seconds per user, so a single user can make roughly one read request per second. Note that read requests have a quota of the same shape as write requests, but the two quotas are separate and don't share the same limit.
If you want a higher quota than the default, you can apply for one using the quota increase form, or visit your developer console and click the pencil icon next to the quota you want to increase.
I also suggest implementing exponential backoff as soon as possible, because it can help you avoid this kind of error.
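A minimal backoff wrapper around the values.get call from the question might look like this (a sketch assuming the rejected gapi error exposes the HTTP status as err.status):

function getValuesWithBackoff(request, maxRetries = 5) {
  let attempt = 0;
  const tryOnce = () =>
    gapi.client.sheets.spreadsheets.values.get(request).then(
      (response) => response,
      (err) => {
        // Only retry on 429 (quota exhausted), and give up after maxRetries.
        if (err.status !== 429 || attempt >= maxRetries) throw err;
        // Double the delay each attempt, plus jitter to spread out clients.
        const delayMs = Math.pow(2, attempt++) * 1000 + Math.random() * 1000;
        return new Promise((resolve) =>
          setTimeout(() => resolve(tryOnce()), delayMs)
        );
      }
    );
  return tryOnce();
}

// Drop-in replacement for the original call:
getValuesWithBackoff(request).then(callback);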
Hope it helps.

Does the Google Analytics API throttle requests?

Does the Google Analytics API throttle requests?
We have a batch script that I have just moved from v2 to v3 of the API and the requests go through quite well for the first bit (50 queries or so) and then they start taking 4s or so each. Is this Google throttling us?
While Matthew is correct, I have another possibility for you. The Google Analytics API caches your requests to some extent. Let me try and explain.
I have a customer/site that I request data from, and while testing I noticed some strange things:
- The first million rows of results would come back within an acceptable amount of time.
- After a million rows, things started to slow down; results were taking several times as long, and instead of 5 minutes we were waiting 20 minutes or more for them to return.
Example:
Request URL :
https://www.googleapis.com/analytics/v3/data/ga?ids=ga:34896748&dimensions=ga:date,ga:sourceMedium,ga:country,ga:networkDomain,ga:pagePath,ga:exitPagePath,ga:landingPagePath&metrics=ga:entrances,ga:pageviews,ga:exits,ga:bounces,ga:timeOnPage,ga:uniquePageviews&filters=ga:userType%3D%3DReturning+Visitor;ga:deviceCategory%3D%3Ddesktop&start-date=2014-05-12&end-date=2014-05-22&start-index=236001&max-results=2000&oauth_token={OauthToken}
Request Time (seconds:milliseconds): 0:484
Request URL :
https://www.googleapis.com/analytics/v3/data/ga?ids=ga:34896748&dimensions=ga:date,ga:sourceMedium,ga:country,ga:networkDomain,ga:pagePath,ga:exitPagePath,ga:landingPagePath&metrics=ga:entrances,ga:pageviews,ga:exits,ga:bounces,ga:timeOnPage,ga:uniquePageviews&filters=ga:userType%3D%3DReturning+Visitor;ga:deviceCategory%3D%3Ddesktop&start-date=2014-05-12&end-date=2014-05-22&start-index=238001&max-results=2000&oauth_token={OauthToken}
Request Time (seconds:milliseconds): 7:968
I did a lot of testing, stopping and starting my application, and I couldn't figure out why the data came back so fast in the beginning and so slowly later.
Now, I have some contacts on the Google Analytics development team, the people in charge of the API. So I made a test app, logged some results showing my issue, and sent it off to them with the question: are you throttling me?
They were also perplexed, and told me there is no throttle on the API, only the flood-protection limit that Matthew speaks of. My developer contact forwarded it to the people in charge of traffic.
Fast forward a few weeks. It seems that when we request a bunch of data, Google caches it for us; it's saved on the server in case we request it again. By restarting my application I was accessing the cached data, so it returned fast. When I let the application run longer, I would eventually reach non-cached data and the requests would take longer to return.
I asked how long data is cached for; the answer was that there is no set time. So I don't think you are being throttled. I think your initial speedy requests hit cached data and your slower requests hit non-cached data.
Email back from google:
Hi Linda,
I talked to the engineers and they had a look. The response was
basically that they think it's because of caching. The response is
below. If you could do some additional queries to confirm the behavior
it might be helpful. However, what they need to determine is if it's
because you are querying and hitting cached results (because you've
already asked for that data). Anyway, take a look at the comments
below and let me know if you have additional questions or results that
you can share.
Summary from talking to engineer: "Items not already in our cache will
exhibit a slower retrieval processing time than items already present
in the cache. The first query loads the response into our cache and
typical query times without using the cache is about 7 seconds and
with using the cache is a few milliseconds. We can also confirm that
you are not hitting any rate limits on our end, as far as we can tell.
To confirm if this is indeed what's happening in your case, you might
want to rerun verified slow queries a second time to see if the next
query speeds up considerably (this could be what you're seeing when
you say you paste the request URL into a browser and results return
instantly)."
-- IMBA Google Analytics API Developer --
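One way to confirm the caching explanation yourself, as the engineers suggest, is to time the same query twice and compare (a sketch assuming Node 18+ fetch; QUERY_URL stands for a full v3 request URL like the examples above):

// Time a request, then repeat it; a much faster second run suggests
// you're hitting cached results rather than being throttled.
async function timeRequest(url) {
  const start = Date.now();
  const res = await fetch(url);
  await res.json();
  return Date.now() - start;
}

const first = await timeRequest(QUERY_URL);
const second = await timeRequest(QUERY_URL);
console.log(`first: ${first}ms, second: ${second}ms`);
// e.g. "first: 7968ms, second: 484ms" would point to caching.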
Google's Analytics API does have a rate limit, per their docs: https://developers.google.com/analytics/devguides/reporting/core/v3/coreErrors
However, it should not cause delayed requests; rather, the request should be returned with a response of: 403 userRateLimitExceeded
Description of that error:
Indicates that the user rate limit has been exceeded. The maximum rate limit is 10 qps per IP address. The default value set in Google Developers Console is 1 qps per IP address. You can increase this limit in the Google Developers Console to a maximum of 10 qps.
Google's recommended course of action:
Retry using exponential back-off. You need to slow down the rate at which you are sending the requests.

Pricing: Are push notifications really free?

According to the parse.com pricing page, push notifications are free up to 1 million unique recipients.
API calls are free up to 30 requests / second.
I want to make sure there is no catch here.
An example will clarify: I have 100K subscribed users. I will send weekly push notifications to them. In a month, that will be 4 push "blasts" with 100K recipients each. Is this covered by the free tier? Would this count as 4 API calls, 400K API calls, or some other amount?
100k users is 1/10 the advertised unique recipient limit, so that should be okay.
Remember that there's a 10-second request timeout, too. So the only way to blast 100k pushes within the free-tier resource limits is to create a scheduled job that spends about 2 hours (at a safe rate of 15 req/sec) doing pushes and writing state so you can pick up later where you left off.
Assuming there's no hidden gotcha (you'll probably need to discover those empirically), I think the only gotcha in plain sight is the fact that the free tier allows only one (1) scheduled job. Any other long-running processing - and there is bound to be some with 100k users - is going to have to share that job, making the what-should-this-single-job-work-on-now logic pretty complex.
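A sketch of that single-job bookkeeping, written against modern Parse Server conventions for illustration (parse.com's original Cloud Code used a callback style, and the "PushCursor" class here is hypothetical):

Parse.Cloud.job("weeklyPushBlast", async () => {
  // Load (or create) the saved cursor recording how far the blast got.
  const cursor =
    (await new Parse.Query("PushCursor").first({ useMasterKey: true })) ||
    new Parse.Object("PushCursor", { offset: 0 });

  // Fetch the next slice of installations to target in this run.
  const batchQuery = new Parse.Query(Parse.Installation);
  batchQuery.skip(cursor.get("offset"));
  batchQuery.limit(1000);
  const batch = await batchQuery.find({ useMasterKey: true });

  // ... send pushes to this batch at a safe rate ...

  // Persist progress so the next scheduled run resumes where this one stopped.
  cursor.set("offset", cursor.get("offset") + batch.length);
  await cursor.save(null, { useMasterKey: true });
});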
You should take a look at the FAQ for Parse.com:
https://www.parse.com/plans/faq
What is considered an API request?
Anytime you make a network call to
Parse on behalf of your app using one of the Parse SDKs or REST API,
it counts as an API request. This does include things like queries,
saves, logins, amongst other kinds of requests. It also includes
requests to send push notifications, although this is seen as a single
request regardless of how many recipients are targeted. Serving Parse
files counts as an API request, including static assets served from
Parse Hosting. Analytics requests do have a special exemption. You can
send us your analytics events any time without being limited by your
app's request limit.
