What's the rate limit on the Square Connect API?

Currently the documentation just says:
If Connect API endpoints receive too many requests associated with the same application or access token in a short time window, they might respond with a 429 Too Many Requests error. If this occurs, try your request again at a later time.
Much appreciated!

Currently, Connect API rate limits are on the order of 10 QPS. This limit might change in the future and should not be relied on.
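Given that answer, a client can protect itself by pacing its own calls rather than waiting for 429s. A minimal sketch of a client-side throttle; the 10 QPS figure is taken from the answer above and, as noted, may change:

```python
import time

class Throttle:
    """Client-side throttle: spaces calls out so we stay under a QPS cap."""

    def __init__(self, max_qps):
        self.min_interval = 1.0 / max_qps
        self.last_call = 0.0

    def wait(self):
        """Sleep just long enough to respect the cap, then record the call."""
        now = time.monotonic()
        elapsed = now - self.last_call
        if elapsed < self.min_interval:
            time.sleep(self.min_interval - elapsed)
        self.last_call = time.monotonic()

# Call throttle.wait() before each Connect API request.
throttle = Throttle(max_qps=10)
```

This only limits a single process; if you run multiple workers, each needs its own share of the budget (or a shared limiter such as one backed by Redis).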

Related

What is the slack API workspace rate limit for chat.postMessage

I am trying to use chat.postMessage to post messages in Slack, and I want to know its rate limit for a workspace.
I found this in the documentation:
chat.postMessage has special rate limiting conditions. It will generally allow an app to post 1 message per second to a specific channel. There are limits governing your app's relationship with the entire workspace above that, limiting posting to several hundred messages per minute. Generous burst behavior is also granted.
But what I could not understand is:
What exactly does "several hundred messages per minute" mean?
After how many messages will the app be blocked?
Can anyone please guide me through this? Thanks in advance.
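Slack doesn't publish the exact workspace-wide number, but when you cross it the API responds with HTTP 429 and a Retry-After header telling you how long to back off. A minimal sketch of honoring that header; the `send` callable returning a `(status, headers)` pair is a stand-in for your actual HTTP call, not Slack's SDK:

```python
import time

def post_with_retry(send, max_retries=3):
    """Call send() (returns (status_code, headers)); on HTTP 429,
    sleep for the Retry-After duration before trying again."""
    for attempt in range(max_retries + 1):
        status, headers = send()
        if status != 429:
            return status
        # Slack's 429 responses include Retry-After (in seconds).
        time.sleep(int(headers.get("Retry-After", 1)))
    return status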

Opayo - How to handle timeouts?

I currently manage an integration with Opayo Direct (v4.00). We send requests to Opayo, which mostly work fine, but occasionally they time out (our timeout limit is currently set to 20s, which is ages for a consumer to wait).
Does anyone know of a way to either:
1. Retry the payment request without double charging the consumer? or,
2. Send a follow-up request to get the status of the submitted payment?
For 2. it looks like you need a valid transaction identifier from Opayo, which of course we won't have, as the request has timed out.
I can't see any mention of idempotency or guidance for what to do in this situation, even in their most recent API specification (PI integration).
Has anyone come up with a workable solution for this problem, other than change provider?
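One pattern that can work without provider-side idempotency support: generate and persist your own unique transaction reference *before* sending, so that after a timeout you can reconcile against your reference in Opayo's reporting instead of retrying blindly. A hedged sketch; the field names below are illustrative, not confirmed Opayo API fields:

```python
import uuid

def build_payment_request(amount_pence, currency="GBP"):
    """Attach a merchant-generated unique reference to the request.

    If the request times out, look this reference up in the provider's
    transaction reporting before deciding whether to retry, so the
    consumer is never charged twice. Field names are illustrative.
    """
    return {
        "vendorTxCode": f"order-{uuid.uuid4()}",  # our id, created client-side
        "amount": amount_pence,
        "currency": currency,
    }

req = build_payment_request(2500)
# Persist req["vendorTxCode"] durably BEFORE sending the HTTP request;
# on timeout, search reporting for that code rather than resubmitting.
```

Whether this is viable depends on Opayo's reporting actually being searchable by a merchant-supplied reference; if it isn't, the only safe options are a manual reconciliation step or, as you say, a provider with proper idempotency support.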

Prioritizing specific endpoints on Heroku to skip / spend less time in request queue

Our API service has several endpoints and one of them is critical for the user experience because it directly affects the page load time.
Is there a way to prioritize calls to GET /api/priority_endpoint over GET /api/regular_endpoint so the prioritized requests spend less time in the request queue?
Thanks...
No, this is not possible. All requests are sent to a random dyno as soon as they reach the router.
The only way you could do this would be by writing your own request queue in your app's code.
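A minimal sketch of that in-app approach using Python's standard-library priority queue. The endpoint paths come from the question; the worker loop that actually handles the dequeued requests is omitted, and the sequence number is an assumption to keep equal-priority requests FIFO:

```python
import queue

# Lower number = served first; unknown paths default to regular priority.
PRIORITY = {"/api/priority_endpoint": 0, "/api/regular_endpoint": 1}

work = queue.PriorityQueue()

def enqueue(path, seq):
    # seq breaks ties so equal-priority requests keep arrival order.
    work.put((PRIORITY.get(path, 1), seq, path))

enqueue("/api/regular_endpoint", 1)
enqueue("/api/priority_endpoint", 2)
_, _, first = work.get()  # the priority endpoint comes out first
```

Note this only reorders work *inside* one dyno after the router has delivered the request; it cannot change queueing at the Heroku router itself.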

How to post bulk messages in yammer?

I am trying to post multiple messages in Yammer through only one account.
I have a service account on Yammer. Through that service account I want to post multiple messages on behalf of multiple users, but the REST API of Yammer has rate limits.
Is there any way to post multiple messages without running into the rate limits?
The rate limits cannot be exceeded; you'll get a 429 Too Many Requests error. The rate limits are per app per user, though, so if you have a legitimate reason for posting a lot of messages in a very short amount of time, some creative thinking around these boundaries can help.
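Since the limits are per app *per user*, one such "creative" approach is to post each message with the corresponding user's own token (via impersonation or delegated consent) rather than funneling everything through the single service account. A hedged sketch with a hypothetical token store:

```python
from itertools import cycle

# Hypothetical store of per-user tokens obtained legitimately; the limit
# applies per app per user, so each token has its own budget.
user_tokens = {"alice": "tok-a", "bob": "tok-b"}

def assign_tokens(messages):
    """Round-robin messages across user tokens so no single token
    absorbs the whole burst."""
    tokens = cycle(user_tokens.items())
    return [(msg, next(tokens)[1]) for msg in messages]
```

Each (message, token) pair would then be posted with that token, still with 429 handling on each call in case any single user's budget is exhausted.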

Google C2DM server side performance

My application sends notifications to customers in bulk. For example, at 8am every day a back-end system generates notifications for 50K customers, and those notifications should be delivered in a reasonable time.
During performance testing I've discovered that sending a single push request to the C2DM server takes about 400 ms, which is far too long. I was told that a production quota may provide better performance, but will it reduce to 10 ms?
Besides, I need C2DM performance marks before going to production because it may affect the implementation - sending the requests from multiple threads, using asynchronous http client etc.
Does anyone know of any C2DM server benchmarks or performance-related server implementation guidelines?
Thanks,
Artem
I use App Engine, which makes delivering a newsletter to 1 million users a very painful task, mostly because the C2DM API doesn't support multi-user delivery, so you have to create one HTTP request per user.
The time the C2DM server takes to respond will depend on your latency to the Google servers. In my case (App Engine) it's very small.
My advice is to create as many threads as possible, since the threads will mostly be waiting on network IO. If you ever exceed the quota, you can always ask for more traffic.
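That threading advice can be sketched with a standard thread pool; the `send_push` function below is a placeholder for the real C2DM HTTP call, which is what makes the work IO-bound:

```python
from concurrent.futures import ThreadPoolExecutor

def send_push(registration_id):
    """Placeholder for the real C2DM HTTP POST. Because the call is
    network-bound, each thread spends most of its time waiting on IO."""
    return f"sent:{registration_id}"

def send_all(registration_ids, workers=32):
    # A generous worker count is fine here: the threads block on the
    # network, not the CPU. Tune workers against your quota and latency.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(send_push, registration_ids))
```

With 50K recipients at ~400 ms per request, 32 concurrent workers brings the naive ~5.5 hours of serial sending down to roughly 10 minutes, which is why concurrency matters more here than shaving the per-request latency.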
