I'm hosting my API on Heroku (example endpoint: https://some-api.herokuapp.com/some/request). I'm not currently paying for any subscription on Heroku.
Is there a limit to the number of calls to this API that I can make, e.g. per hour?
There is no limit on requests your application can receive. You should implement rate limiting in your own code.
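Heroku won't throttle for you, so if you need a cap you have to enforce it yourself. Below is a minimal sketch of per-client rate limiting for a Python/Flask app; the 60-requests-per-minute figure, the endpoint path, and the in-memory counter are illustrative assumptions, not Heroku-imposed values:

import time
from collections import defaultdict
from flask import Flask, jsonify, request

app = Flask(__name__)

WINDOW_SECONDS = 60
MAX_REQUESTS = 60  # illustrative cap per client per window

# client address -> (window_start, request_count)
_counters = defaultdict(lambda: (0.0, 0))

def allow(client_id):
    now = time.time()
    window_start, count = _counters[client_id]
    if now - window_start >= WINDOW_SECONDS:
        _counters[client_id] = (now, 1)  # start a fresh window
        return True
    if count < MAX_REQUESTS:
        _counters[client_id] = (window_start, count + 1)
        return True
    return False

@app.route("/some/request")
def some_request():
    if not allow(request.remote_addr):
        # Ask the client to back off, as most rate-limited APIs do.
        return jsonify(error="rate limit exceeded"), 429
    return jsonify(ok=True)

if __name__ == "__main__":
    app.run()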
The limit seems to be 4500 calls per hour:
https://devcenter.heroku.com/articles/limits
I have implemented an API that adheres to Snowflake's Asynchronous External Function specification.
In our system we use AWS API Gateway, a Lambda function, and a Third-Party API (TPA).
In our scenario, we store certain information in a Snowflake table and try to enrich this table using a Snowflake external user-defined function.
We are able to enrich the table when the number of records is small. If we try to enrich 3 million records, then after some time our TPA starts sending HTTP 429. This is an indicator telling our Lambda function to slow down the rate of Snowflake's requests.
We understand this, and the moment the Lambda function gets the HTTP 429 it sends the HTTP 429 back to Snowflake in any polling/POST requests. The expectation is that Snowflake will slow down the request rate rather than throwing an error and stopping further processing.
Below is the response we send back to Snowflake:
{
  "statusCode": 429
}
And this situation persists, which makes it look like Snowflake is not respecting HTTP 429 in the request-reply pattern.
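For completeness, here is a minimal sketch of the handler behaviour described above; the TPA client and its throttling exception are placeholders standing in for our real code:

import json

class ThirdPartyThrottled(Exception):
    """Placeholder for the TPA signalling HTTP 429."""

def call_third_party_api(event):
    """Placeholder for the real TPA call; raises ThirdPartyThrottled on 429."""
    raise ThirdPartyThrottled()

def lambda_handler(event, context):
    try:
        rows = call_third_party_api(event)
    except ThirdPartyThrottled:
        # Pass the throttling signal straight back to Snowflake so it
        # slows down and retries the unprocessed batches.
        return {"statusCode": 429}
    return {"statusCode": 200, "body": json.dumps({"data": rows})}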
Snowflake does handle HTTP 4xx responses when working with external functions.
Have you engaged support? I have worked with customers who had this issue, and the Snowflake team is able to review it.
AWS API Gateway has a default limit of 10,000 requests per second.
Please review Designing High Performance External Functions:
Remote services should return HTTP response code 429 when overloaded. If Snowflake sees HTTP 429, Snowflake scales back the rate at which it sends rows, and retries sending batches of rows that were not processed successfully.
Your options for resolution are:
Work with AWS to increase your API Gateway rate limit.
However, some proxy services, including Amazon API Gateway and Azure API Management, have default usage limits. When the request rate exceeds the limit, these proxy services throttle requests. If necessary, you might need to ask AWS or Azure to increase your quota on your proxy service.
or
Try using a smaller warehouse, so that Snowflake sends less volume to API Gateway per second. This has the obvious drawback of making the enrichment run more slowly.
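If you go that route, resizing is a one-line statement; here is a sketch using the snowflake-connector-python package, where the account details and warehouse name are illustrative:

import snowflake.connector

# Connection parameters are illustrative; use your real account settings
# and a proper secret store for the password.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="...",
    warehouse="ENRICH_WH",
)
try:
    # A smaller warehouse means fewer concurrent batches hitting API Gateway.
    conn.cursor().execute(
        "ALTER WAREHOUSE ENRICH_WH SET WAREHOUSE_SIZE = 'XSMALL'"
    )
finally:
    conn.close()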
Our API service has several endpoints and one of them is critical for the user experience because it directly affects the page load time.
Is there a way to prioritize calls to GET /api/priority_endpoint over GET /api/regular_endpoint so the prioritized requests spend less time in the request queue?
Thanks...
No, this is not possible. All requests are sent to a random dyno as soon as they reach the router.
The only way you could do this would be by writing your own request queue in your app's code.
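For what it's worth, here is a minimal sketch of such an in-app queue in Python, where the handler names and worker count are purely illustrative:

import itertools
import threading
from queue import PriorityQueue

HIGH, LOW = 0, 1          # lower number is served first
_seq = itertools.count()  # tiebreaker so equal priorities never compare handlers
jobs = PriorityQueue()

def worker():
    while True:
        _priority, _tie, handler, arg = jobs.get()
        try:
            handler(arg)
        finally:
            jobs.task_done()

for _ in range(4):  # small illustrative worker pool
    threading.Thread(target=worker, daemon=True).start()

def handle_priority_endpoint(request_id):
    print("priority request", request_id)

def handle_regular_endpoint(request_id):
    print("regular request", request_id)

# Work for /api/priority_endpoint is dequeued ahead of regular work
# that is already waiting.
jobs.put((LOW, next(_seq), handle_regular_endpoint, 1))
jobs.put((HIGH, next(_seq), handle_priority_endpoint, 2))
jobs.join()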
I am trying to post multiple messages in Yammer through only one account.
I have a service account on Yammer. Through that service account I want to post multiple messages on behalf of multiple users, but the REST API of Yammer has rate limits.
Is there any way to post multiple messages without hitting the rate limits?
The rate limits cannot be exceeded; you'll get a 429 Too Many Requests error. The rate limits are per app per user, so if you have a legitimate reason for posting a lot of messages in a very short amount of time, some creative thinking around these boundaries can help.
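For example, since the limits are per app per user, posting through each user's own token spreads the load, and backing off on 429 keeps each token inside its budget. A sketch in Python, where the tokens are illustrative (the messages endpoint is Yammer's real REST endpoint):

import time
import requests

def post_message(token, body, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.post(
            "https://www.yammer.com/api/v1/messages.json",
            headers={"Authorization": "Bearer " + token},
            data={"body": body},
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Throttled: back off exponentially before retrying.
        time.sleep(2 ** attempt)
    raise RuntimeError("still throttled after %d retries" % max_retries)

# Illustrative per-user tokens (e.g. obtained via Yammer impersonation).
user_tokens = {"alice": "token-a", "bob": "token-b"}
for user, token in user_tokens.items():
    post_message(token, "Hello from " + user)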
Currently the documentation just says:
If Connect API endpoints receive too many requests associated with the same application or access token in a short time window, they might respond with a 429 Too Many Requests error. If this occurs, try your request again at a later time.
Much appreciated!
Currently, Connect API rate limits are on the order of 10 QPS. This limit might change in the future and should not be relied on.
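If you want to stay clear of the limit proactively rather than react to 429s, a simple client-side throttle is one option. A sketch in Python; the 10 QPS target mirrors the figure above and should be treated as provisional:

import time

class Throttle:
    """Blocks just long enough to keep calls at or below a target rate."""

    def __init__(self, qps):
        self.min_interval = 1.0 / qps
        self.last_call = 0.0

    def wait(self):
        now = time.monotonic()
        sleep_for = self.last_call + self.min_interval - now
        if sleep_for > 0:
            time.sleep(sleep_for)
        self.last_call = time.monotonic()

throttle = Throttle(qps=10)
for i in range(25):
    throttle.wait()
    print("request", i)  # replace with the actual Connect API call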
My application sends notifications to customers in bulk. For example, at 8 am every day a back-end system generates notifications for 50K customers, and those notifications should be delivered within a reasonable time.
During performance testing I discovered that sending a single push request to the C2DM server takes about 400 ms, which is far too long. I was told that a production quota may provide better performance, but will it bring that down to 10 ms?
Besides, I need C2DM performance figures before going to production because they may affect the implementation: sending the requests from multiple threads, using an asynchronous HTTP client, etc.
Does anyone know of C2DM server benchmarks or any performance-related server implementation guidelines?
Thanks,
Artem
I use App Engine, which makes delivering a newsletter to 1 million users a very painful task, mostly because the C2DM API doesn't support delivery to multiple users in one call, so you need one HTTP request per user.
The time the C2DM server takes to respond will depend on your latency to the Google servers. In my case (App Engine) it's very small.
My advice is to create as many threads as possible, since the threads will mostly be waiting on network IO. If you ever exceed the quota, you can always ask permission for more traffic.
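A sketch of that advice in Python, fanning the per-user requests out over a thread pool; the URL is the historical C2DM endpoint, while the auth token and registration IDs are illustrative:

from concurrent.futures import ThreadPoolExecutor
import requests

C2DM_URL = "https://android.apis.google.com/c2dm/send"
AUTH_TOKEN = "..."  # illustrative ClientLogin token

def send_push(registration_id):
    resp = requests.post(
        C2DM_URL,
        headers={"Authorization": "GoogleLogin auth=" + AUTH_TOKEN},
        data={
            "registration_id": registration_id,
            "collapse_key": "newsletter",
            "data.message": "Your daily update",
        },
    )
    return resp.status_code

registration_ids = ["reg-%d" % i for i in range(50000)]  # illustrative
# Many threads make sense here because each one mostly waits on network IO.
with ThreadPoolExecutor(max_workers=100) as pool:
    statuses = list(pool.map(send_push, registration_ids))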