Yammer API limiting my requests to 5 requests per minute - yammer

I can't seem to get the Yammer API to allow more than 5 requests per minute, so I have put a 6-second delay between requests.
But the API page says I should be able to get:
API calls are subject to rate limiting. Exceeding any rate limits will result in all endpoints returning a status code of 429 (Too Many Requests). Rate limits are per user per app. There are four rate limits:
Autocomplete: 10 requests in 10 seconds.
Messages: 10 requests in 30 seconds.
Notifications: 10 requests in 30 seconds.
All Other Resources: 10 requests in 10 seconds.

We found a bug in the api_messages_controller that was doubling the request count because of a before filter. A fix went out, so you should be good now.
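
For reference, a minimal client-side throttle along the lines of the 6-second delay mentioned in the question could look like the sketch below. The endpoint URL and token are placeholders, and the 6-second gap is just a conservative margin under the limits quoted above.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ThrottledYammerClient {
    // Assumption: spacing calls at least 6 s apart keeps us well under
    // "10 requests in 30 seconds" (one request every 3 s would already do).
    private static final long MIN_GAP_MS = 6_000;

    private final HttpClient http = HttpClient.newHttpClient();
    private long lastRequestAt = 0;

    public synchronized HttpResponse<String> get(String url, String token) throws Exception {
        long wait = lastRequestAt + MIN_GAP_MS - System.currentTimeMillis();
        if (wait > 0) {
            Thread.sleep(wait);               // enforce the minimum gap between calls
        }
        lastRequestAt = System.currentTimeMillis();
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Authorization", "Bearer " + token)   // placeholder token
                .GET()
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString());
    }
}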

Related

How to execute API calls at a 30-second interval without waiting for the response

I have to run a script for an examination centre.
The requirement is that the image APIs should be hit at a 30-second interval.
I have added a Loop Controller, and inside it a timer of 30 seconds.
But the problem is that the image APIs' response time is > 1, and we want to run the next API call after 30 seconds; currently the next loop iteration executes only after the response from the first API is received, i.e. after 60 seconds.
So is there a way that the next API call is made after 30 seconds even if the response to the first call has not arrived yet?
If you don't care about response times and other metrics, you can just set the response timeout to 1000 ms under the Advanced tab of the HTTP Request sampler (or, even better, of the HTTP Request Defaults; this way the setting will be propagated to all HTTP Request samplers in the HTTP Request Defaults scope).
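
Outside of JMeter, the same idea, firing the next request on a fixed 30-second schedule regardless of whether the previous response has arrived, can be sketched with an asynchronous client. The endpoint below is a placeholder.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class FixedIntervalRequests {
    public static void main(String[] args) {
        HttpClient http = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://example.com/image-api"))   // placeholder endpoint
                .GET()
                .build();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // sendAsync returns immediately, so a slow response never delays the next tick.
        scheduler.scheduleAtFixedRate(
                () -> http.sendAsync(request, HttpResponse.BodyHandlers.ofString())
                          .thenAccept(r -> System.out.println("status: " + r.statusCode())),
                0, 30, TimeUnit.SECONDS);
    }
}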

Rate limiting WebClient requests after a certain number of retries?

Is it possible to rate limit WebClient requests for a URL, or any requests after a defined number of retries, using resilience4j or otherwise?
Following is sample code.
webClient.post()
.bodyValue(...)
.header(...)
.uri(...)
.retrieve()
.bodyToMono(Void.class)
.retryWhen(...) // every 15 mins, for 1 day
.subscribe();
An example use case: say 10,000 requests a day need to be sent to 100 different URLs. There are retries for the case when a URL is temporarily unavailable.
But if a URL comes back up after a few hours, it would receive a large number of accumulated requests, which I would like to rate limit.
Effectively, I don't want to rate limit the whole operation, but only specific URLs that have been unavailable for a long time, or requests that have been retried 'x' number of times. Is there a way to achieve this?
Not to be confused with a circuit breaker requirement: it's not exactly an issue to keep retrying every few minutes, at least in this context.
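
One way to approach this, assuming Resilience4j's reactor module is on the classpath, is to attach a per-URL RateLimiter to the reactive chain so that retries against a URL that just came back up are released at a bounded rate. A sketch with illustrative limits and retry policy:

import java.time.Duration;

import org.springframework.web.reactive.function.client.WebClient;

import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import io.github.resilience4j.ratelimiter.RateLimiterRegistry;
import io.github.resilience4j.reactor.ratelimiter.operator.RateLimiterOperator;
import reactor.util.retry.Retry;

public class PerUrlRateLimitedSender {
    // One limiter per target URL: at most 5 requests per second each,
    // waiting up to 5 minutes for a permit before failing. Values are illustrative.
    private final RateLimiterRegistry registry = RateLimiterRegistry.of(
            RateLimiterConfig.custom()
                    .limitForPeriod(5)
                    .limitRefreshPeriod(Duration.ofSeconds(1))
                    .timeoutDuration(Duration.ofMinutes(5))
                    .build());

    private final WebClient webClient = WebClient.create();

    public void send(String url, Object body) {
        RateLimiter limiter = registry.rateLimiter(url);   // keyed by URL

        webClient.post()
                .uri(url)
                .bodyValue(body)
                .retrieve()
                .bodyToMono(Void.class)
                // Every (re)subscription acquires a permit, so the retries that
                // pile up while a URL is down are released at the limiter's rate.
                .transformDeferred(RateLimiterOperator.of(limiter))
                .retryWhen(Retry.fixedDelay(96, Duration.ofMinutes(15)))  // every 15 min, ~1 day
                .subscribe();
    }
}

Limiting only after 'x' retries would need a custom Retry that tracks the attempt count; that part is left out of the sketch.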

Incorrect graph generated by JMeter listeners Hits per Second and Composite Graph

I am learning JMeter and having a problem reading the graph listener output.
I created a Thread Group with 8 threads, ramp-up 1, and loop forever,
and added the listeners Active Threads Over Time, Hits per Second, and Response Times Over Time.
Result:
a. Active Threads Over Time gives the correct result, with a maximum of 8 threads.
b. Hits per Second gives a really weird graph, showing 148 hits/sec.
Trying to debug, I changed the thread count to 1; Hits per Second still generates a weird graph with 20 hits/sec.
Any idea why this is happening?
I use the latest release, JMeter 3.0.
As I clarified here, jp@gc - Hits per Second shows the total number of requests sent to the server per second. Per second is the default; it can be changed in the settings tab.
When you have 1 user, JMeter sends 18-20 requests per second (loop forever keeps sending requests for the user as soon as the previous response arrives). So the user was able to make about 19 requests in a second. When you have 8 users, the test plan sends around 133 requests per second. It seems to work fine; nothing weird here.
When you have 8 users, JMeter has no issue sending the first 8 requests (the first request for each thread). But the subsequent requests for each thread are sent only after the response to the previous request is received. (If you have any timers to simulate user think time, the user will wait for that duration after the response before sending the next request.)
If 1 user is able to make 19 requests per second (or the server processed 19 requests per second), then 8 users should be able to send 152 requests per second. But when you increase the user load / the number of requests sent to the server, the server's throughput (the number of requests it can process per unit of time) also increases gradually, as shown in the picture. If you keep increasing the users, at some point the throughput (hits per second) saturates and does not increase beyond that. So maybe the server got saturated here at 133 requests per second, which is why we do not see 152 requests for 8 users. To understand the behavior, you need to increase the user load (ramp up) slowly.
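
The arithmetic behind that reasoning, spelled out with the numbers quoted in the answer (purely illustrative figures, not anything JMeter guarantees):

public class HitsPerSecondEstimate {
    public static void main(String[] args) {
        // With "loop forever" and no timers, each thread fires its next request
        // as soon as the previous response arrives, so the per-thread rate is
        // roughly 1 / (average response time).
        double rateWithOneUser = 19.0;   // measured hits/sec with 1 thread
        int threads = 8;

        double expected = rateWithOneUser * threads;   // 152 hits/sec if the server scaled linearly
        double observed = 133.0;                       // measured hits/sec with 8 threads

        System.out.printf("expected %.0f hits/s, observed %.0f hits/s -> server is saturating%n",
                expected, observed);
    }
}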
Check here for a few tips on JMeter.

What does "extra requests" mean in Parse.com services?

In Parse's pricing FAQ it is mentioned that "If your app hits its request limit, your extra requests will begin to fail with error code 155 (RequestLimitExceeded)".
What does "extra requests" mean?
Are they the requests made within the same minute? Day? Month? Or all requests made until the request limit is increased?
From the Parse.com FAQs:
The request limit is calculated on a per-minute basis. For example, if an app is set to 30 requests/second, your app will hit its request limit once it makes more than 1,800 requests over a 60 second period. If your app hits its request limit, your extra requests will begin to fail with error code 155 (RequestLimitExceeded). To prevent the requests from failing you should adjust the request limit slider for the relevant app on the Account Overview page. Please note that you can see your actual requests/second on the Performance Analytics tab.
Your plan supports a certain number of requests per second. Extra requests are all requests that are started after the limit defined by your plan has already been reached. As outlined above, it is calculated on a per-minute basis: if more than 60 * your rate per second requests are made with your API key per minute, some of them will be extra requests and fail.
The FAQ entry actually contains an example: If your plan allows for 30 requests to be made per second, every minute 60 * 30 = 1800 requests are allowed. After the minute has passed, the counter will be reset.
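
A minimal sketch of that per-minute accounting, just to make the numbers concrete (the 30 requests/second rate is the FAQ's example; Parse does the real bookkeeping server-side):

public class RequestBudget {
    // Example plan rate: 30 requests/second => 30 * 60 = 1800 requests per minute.
    private static final int REQUESTS_PER_SECOND = 30;
    private static final int PER_MINUTE_LIMIT = REQUESTS_PER_SECOND * 60;

    private long windowStart = System.currentTimeMillis();
    private int usedThisMinute = 0;

    // Returns true if the request fits in the current minute; false means it
    // would be an "extra" request and fail with error 155 (RequestLimitExceeded).
    public synchronized boolean tryConsume() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= 60_000) {    // the minute has passed: counter resets
            windowStart = now;
            usedThisMinute = 0;
        }
        if (usedThisMinute < PER_MINUTE_LIMIT) {
            usedThisMinute++;
            return true;
        }
        return false;
    }
}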

Raised Google Drive API Per-user limit, still getting userRateLimitExceeded errors

Similar to Raising Google Drive API per-user limit does not prevent rate limit exceptions
In the Drive API Console, the quotas look like this:
Despite the per-user limit being set to an unnecessarily high requests/sec value, I am still getting rate errors at the user level.
What I'm doing:
I am using approximately 8 threads uploading to Drive, and they ALL implement a robust exponential back-off of 1, 2, 4, 8, 16, 32, 64 sec respectively (pretty excessive back-off, but necessary IMHO). The problem still persists through all of this back-off in some of the threads.
Is there some other rate that is not being advertised / cannot be set?
I'm nowhere near the requests/sec, and still have 99.53% total quota. Why am I still getting userRateLimitExceeded errors?
userRateLimitExceeded is basically flood protection. It's used to prevent people from sending too many requests too fast.
Indicates that the user rate limit has been exceeded. The maximum rate limit is 10 qps per IP address. The default value set in Google Developers Console is 1 qps per IP address. You can increase this limit in the Google Developers Console to a maximum of 10 qps.
You need to slow your code down by implementing exponential backoff.
Make a request to the API
Receive an error response that has a retry-able error code
Wait 1s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 2s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 4s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 8s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 16s + random_number_milliseconds seconds
Retry request
If you still get an error, stop and log the error.
The idea is that every time you see that error, you wait a few seconds and then try to send it again. If you get the error again, you wait a little longer.
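
A minimal sketch of that backoff loop (the request itself is a placeholder supplied by the caller; only the wait pattern follows the steps above):

import java.util.Random;
import java.util.function.BooleanSupplier;

public class ExponentialBackoff {
    private static final Random RANDOM = new Random();
    private static final int MAX_RETRIES = 5;

    // attempt is a placeholder for the actual Drive API call; it should return
    // true on success and false on a retry-able error such as userRateLimitExceeded.
    public static boolean callWithBackoff(BooleanSupplier attempt) throws InterruptedException {
        for (int retry = 0; retry <= MAX_RETRIES; retry++) {
            if (attempt.getAsBoolean()) {
                return true;                                  // request succeeded
            }
            if (retry == MAX_RETRIES) {
                break;                                        // still failing: stop and log
            }
            long waitMillis = (1L << retry) * 1000 + RANDOM.nextInt(1000);
            Thread.sleep(waitMillis);                         // 1s, 2s, 4s, 8s, 16s (+ random ms)
        }
        return false;
    }
}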
Quota user:
Now, I am not sure how your application works, but if all the requests are coming from the same IP, this could cause your issue. As you can see from the quota, you get 10 requests per second per user. How does Google know what a user is? It looks at the IP address. If all your requests are coming from the same IP, then it's one user and you are locked to 10 requests per second.
You can get around this by adding quotaUser to your request.
quotaUser - Alternative to userIp.
Lets you enforce per-user quotas from a server-side application even in cases when the user's IP address is unknown. This can occur, for example, with applications that run cron jobs on App Engine on a user's behalf.
You can choose any arbitrary string that uniquely identifies a user, but it is limited to 40 characters.
Overrides userIp if both are provided.
Learn more about capping usage.
If you send a different quotaUser on every request, say a random number, then Google thinks it's a different user and will assume it's only one request in the 10 seconds. It's a little trick to get around the IP limitation when running server applications that send everything from the same IP.
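
Since quotaUser is just a standard query parameter, on a plain HTTP call it can simply be appended to the request URL. A rough sketch, where the file ID, token and user key are placeholders (the generated client libraries expose an equivalent setting):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class QuotaUserExample {
    public static void main(String[] args) throws Exception {
        String fileId = "FILE_ID";            // placeholder
        String accessToken = "ACCESS_TOKEN";  // placeholder
        String quotaUser = "user-1234";       // string identifying the end user (max 40 chars)

        // quotaUser tells Google which end user the request counts against,
        // instead of lumping every request under the server's IP address.
        URI uri = URI.create("https://www.googleapis.com/drive/v3/files/" + fileId
                + "?quotaUser=" + quotaUser);

        HttpRequest request = HttpRequest.newBuilder(uri)
                .header("Authorization", "Bearer " + accessToken)
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
    }
}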
