We sometimes face an issue while pulling information from Yammer.
The following is the error:
https://www.yammer.com/oauth2/access_token.json?client_id=q1gKtZ6UyinT6WWYqTEoag&client_secret=xYKbPYYJLX5U6cyjPuUEdq9DMfKCixXBo7QaGw4QHuU&code=CZnZejAnylQITtsVN7wxVw
[error] play - Cannot invoke the action, eventually got an error: java.lang.RuntimeException: Failed : HTTP error code : 429 : null
[error] application -
You are exceeding the rate limits when pulling data. See below:
API calls are subject to rate limiting. Exceeding any rate limits will result in all endpoints returning a status code of 429 (Too Many Requests). Rate limits are per user per app. There are four rate limits:
Autocomplete: 10 requests in 10 seconds.
Messages: 10 requests in 30 seconds.
Notifications: 10 requests in 30 seconds.
All Other Resources: 10 requests in 10 seconds.
https://developer.yammer.com/v1.0/docs/rest-api-rate-limits
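If bursts are unavoidable, one option is to treat a 429 as a signal to wait out the 10-second window before retrying. This is a minimal sketch, not the HTTP stack the application necessarily uses; the class name, helper name, and retry count are made up:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class YammerBackoff {
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Hypothetical helper: retries a GET when Yammer answers 429 (Too Many Requests),
    // pausing long enough to fall back under the 10-requests-per-10-seconds window.
    static HttpResponse<String> getWithBackoff(String url) throws Exception {
        for (int attempt = 0; attempt < 5; attempt++) {
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
            HttpResponse<String> response = CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
            if (response.statusCode() != 429) {
                return response;       // success, or an error that is not rate limiting
            }
            Thread.sleep(10_000L);     // wait for the rate-limit window to pass, then retry
        }
        throw new RuntimeException("Still rate limited after 5 attempts");
    }
}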
I am doing a load test on my system using JMeter. The requirement is to generate 150 requests per minute, constantly, for a duration of 20 minutes.
I tried the approaches below.
First, I tried this configuration:
No of threads - 3000 [150 req/min * 20 mins]
Ramp-up period - 1200 sec [20 mins * 60]
But the test stopped after creating 2004 threads, giving this error:
Failed to start the native thread for java.lang.Thread “Thread Group 1-2004”
Uncaught Exception java.lang.OutOfMemoryError: unable to create native thread: possibly out of memory or process/resource limits reached in thread Thread[#51,StandardJMeterEngine,6,main]. See log file for details
Then I used the Concurrency Thread Group with the following settings:
Target concurrency - 150
ramp up time - 1 min
hold target rate time - 20 mins
But here the number of samples collected is more than 3000 [150 req/min * 20 mins], which I feel is not correct.
Is it possible to create the exact load according to my requirement in JMeter (150 req/min for a duration of 20 mins), or should I explore other tools like Locust?
I also tried with precision timers (screenshots attached).
Your understanding of the relationship between users and hits per second is not correct.
When a JMeter thread (virtual user) is started, it begins executing Samplers as fast as it can. The throughput (number of requests per second) mainly depends on the response time.
For example:
you have 1 user and 1 second response time - the load will be 1 request per second
you have 1 user and 2 seconds response time - the load will be 0.5 requests per second
you have 2 users and 2 seconds response time - the load will be 1 request per second
you have 4 users and 2 seconds response time - the load will be 2 requests per second
etc.
If you want to slow down JMeter to the desired number of requests per minute, it can be done using Timers, for example:
Constant Throughput Timer
Precise Throughput Timer
Throughput Shaping Timer
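To tie this back to the requirement: 150 requests per minute is 2.5 requests per second, so you only need enough threads to sustain that rate plus a timer to cap it. A back-of-the-envelope sketch (the 2-second response time is an assumption; measure your real value):

public class LoadSizing {
    public static void main(String[] args) {
        // Threads needed = target throughput * average response time (Little's Law).
        double targetPerSecond = 150.0 / 60.0;   // 150 req/min = 2.5 req/s
        double avgResponseSeconds = 2.0;         // assumed average response time
        double threadsNeeded = Math.ceil(targetPerSecond * avgResponseSeconds);
        System.out.printf("Threads needed: %.0f, timer target: 150 samples/min%n", threadsNeeded);
    }
}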
I thought Solana/Metaplex etc. should be able to handle large numbers of transactions in quick succession. I just wrote a load test to do 50 mints of an existing SPL token (that has Metaplex token data associated with it).
In my code I don't specify any particular node/RPC, rather just the cluster, i.e. testnet.
What should I be doing here?
{"name":"Error","message":"failed to get info about account 2fvtsp6U6iDVhJvox5kRpUS6jFAStk847zATX3cpsVD8: Error: 429 Too Many Requests: {\"jsonrpc\":\"2.0\",\"error\":{\"code\": 429, \"message\":\"Too many requests from your IP, contact your app developer or support#rpcpool.com.\"}, \"id\": \"76627a31-4522-4ebb-ae22-5861fa6781f0\" } \r\n","stack":"Error: failed to get info about account 2fvtsp6U6iDVhJvox5kRpUS6jFAStk847zATX3cpsVD8: Error: 429 Too Many Requests: {\"jsonrpc\":\"2.0\",\"error\":{\"code\": 429, \"message\":\"Too many requests from your IP, contact your app developer or support#rpcpool.com.\"}, \"id\": \"76627a31-4522-4ebb-ae22-5861fa6781f0\" } \r\n\n at Connection.getAccountInfo (/Users/ffff/dev/walsingh/TOKENPASS/tpass-graphql/graphql/node_modules/#metaplex/js/node_modules/#solana/web3.js/lib/index.cjs.js:5508:13)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (node:internal/process/task_queues:96:5)\n at async Token.getAccountInfo (
The 429 issue you are running into is an RPC rate limit. Testnet has the following rate limits at the time of writing:
Maximum number of requests per 10 seconds per IP: 100
Maximum number of requests per 10 seconds per IP for a single RPC: 40
Maximum concurrent connections per IP: 40
Maximum connection rate per 10 seconds per IP: 40
Maximum amount of data per 30 seconds: 100 MB
You probably ran into one of these limits. The general recommendation is to get access to one of the RPCs without rate limits, as the public endpoints are not meant for testing how many transactions you can get through.
QuickNode, Triton, and GenesysGo provide RPC infrastructure you can use.
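If you do stay on the public endpoint for a while, a rough client-side throttle can keep a batch of mints under the per-IP window. This is only a sketch; the numbers mirror the testnet limits listed above, and the class name is made up:

import java.util.ArrayDeque;
import java.util.Deque;

public class RpcThrottle {
    private static final int MAX_REQUESTS = 100;   // testnet: 100 requests per 10 s per IP
    private static final long WINDOW_MS = 10_000L;
    private final Deque<Long> sentAt = new ArrayDeque<>();

    // Call before each RPC request; sleeps until one more request fits in the window.
    public void acquire() throws InterruptedException {
        long now = System.currentTimeMillis();
        while (!sentAt.isEmpty() && now - sentAt.peekFirst() > WINDOW_MS) {
            sentAt.pollFirst();                     // forget requests older than 10 seconds
        }
        if (sentAt.size() >= MAX_REQUESTS) {
            Thread.sleep(WINDOW_MS - (now - sentAt.peekFirst()));
            acquire();                              // re-check after waiting
            return;
        }
        sentAt.addLast(System.currentTimeMillis());
    }
}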
I'm using the Google Sheets API in 4 different apps.
In 2 of them I'm getting this error long before I actually reach the limit:
GSheet error: rateLimitExceeded: Quota exceeded for quota group 'ReadGroup' and limit 'Read requests per user per 100 seconds' of service 'sheets.googleapis.com' for consumer 'project_number:xxx'.
I also get the same error but for writing.
You can see here that my limit is 500 requests per 100 seconds and I max out at around 15, yet I still get this error a lot.
Any ideas?
I have a facade which calls 3 different services for some types of requests and finally orchestrates the responses before sending the response back to the client. Here it is mandatory that all 3 services are up and serving as expected; the client request cannot be served if even one of them is down.
I am looking for a circuit breaker to solve this problem. The circuit breaker should respond with an error code if even one of the services is down. I was checking the resilience4j circuit breaker and it doesn't fit my problem.
https://resilience4j.readme.io/docs/circuitbreaker
Is there any other open-source option available?
Why doesn't it fit your problem?
You can protect every service with a CircuitBreaker. As soon as one of the CircuitBreakers is open, you can short-circuit and directly return an error response to your client.
A CircuitBreaker works on a protected function as below:
Thread <—> CircuitBreaker <—> Protected_Function
A Protected_Function can call one or more microservices. Usually we use one Protected_Function per external microservice call, because we can then tune the resilience based on the profile or behavior of that particular microservice. But as your requirement is different, we can have 3 calls under one Protected_Function.
As per your explanation above, your Façade is calling 3 microservices (assume in series). What you can do is call your Façade, or all 3 services, through or inside a Protected_Function:
@CircuitBreaker(name = "OVERALL_PROTECTION")
public Your_Response Protected_Function(Your_Request request) {
    Call_To_Service_1();
    Call_To_Service_2();
    Call_To_Service_3();
    return Orchestrate_Your_Response();
}
Further, you can add resilience settings for OVERALL_PROTECTION in your YAML property file as below (I have used a count-based sliding window):
resilience4j.circuitbreaker:
  backends:
    OVERALL_PROTECTION:
      registerHealthIndicator: true
      slidingWindowSize: 100 # start rate calc after 100 calls
      minimumNumberOfCalls: 100 # minimum calls before the CircuitBreaker can calculate the error rate
      permittedNumberOfCallsInHalfOpenState: 10 # number of permitted calls when the CircuitBreaker is half open
      waitDurationInOpenState: 10s # time that the CircuitBreaker should wait before transitioning from open to half-open
      failureRateThreshold: 50 # failure rate threshold in percentage
      slowCallRateThreshold: 100 # consider all transactions under interceptor for slow call rate
      slowCallDurationThreshold: 2s # if a call is taking more than 2s then increase the error rate
      recordExceptions: # increment error rate if one of the following exceptions occurs
        - org.springframework.web.client.HttpServerErrorException
        - java.io.IOException
        - org.springframework.web.client.ResourceAccessException
You can also use a time-based sliding window instead of a count-based one if you wish. I have added a # comment in front of each parameter in the configuration as a self-explanation.
resilience4j.retry:
  instances:
    OVERALL_PROTECTION:
      maxRetryAttempts: 5
      waitDuration: 100
      retryExceptions:
        - org.springframework.web.client.HttpServerErrorException
        - java.io.IOException
        - org.springframework.web.client.ResourceAccessException
The above configuration will retry up to 5 times if one of the exceptions listed under retryExceptions occurs.
resilience4j.ratelimiter:
  instances:
    OVERALL_PROTECTION:
      timeoutDuration: 100ms # the default time a thread waits for a permission
      limitRefreshPeriod: 1000 # the period of a limit refresh; after each period the rate limiter resets its permission count to limitForPeriod
      limitForPeriod: 25 # the number of permissions available during one limit refresh period
The above configuration will allow a maximum of 25 transactions per second.
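Putting it together, a minimal sketch of how the facade method could combine the three annotations, assuming the resilience4j Spring Boot starter; the class name, the fallback method, and the placeholder service calls are made up for illustration:

import io.github.resilience4j.circuitbreaker.annotation.CircuitBreaker;
import io.github.resilience4j.ratelimiter.annotation.RateLimiter;
import io.github.resilience4j.retry.annotation.Retry;
import org.springframework.stereotype.Service;

@Service
public class FacadeService {

    // All three patterns reference the OVERALL_PROTECTION configuration from the YAML above.
    // "fallback" is a hypothetical handler invoked when the CircuitBreaker is open
    // or the call ultimately fails, so the client gets an error response immediately.
    @CircuitBreaker(name = "OVERALL_PROTECTION", fallbackMethod = "fallback")
    @Retry(name = "OVERALL_PROTECTION")
    @RateLimiter(name = "OVERALL_PROTECTION")
    public String protectedFunction(String request) {
        String r1 = callToService1(request);   // placeholder downstream calls
        String r2 = callToService2(request);
        String r3 = callToService3(request);
        return r1 + r2 + r3;                   // orchestrate the combined response
    }

    private String fallback(String request, Throwable t) {
        return "Service temporarily unavailable: " + t.getMessage();
    }

    private String callToService1(String r) { return "s1"; }
    private String callToService2(String r) { return "s2"; }
    private String callToService3(String r) { return "s3"; }
}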
I have created an API using Django REST Framework.
The API communicates with GCP Cloud Storage to store profile images (around 1 MB per picture).
While performing load testing (around 1000 requests/s) against that server, I encountered the following error.
It seems to be a GCP Cloud Storage max-request issue, but I am unable to figure out a solution.
Exception Type: SSLError at /api/v1/users
Exception Value: HTTPSConnectionPool(host='www.googleapis.com', port=443): Max retries exceeded with url: /storage/v1/b/<gcp-bucket-name>?projection=noAcl (Caused by SSLError(SSLError("bad handshake: SysCallError(-1, 'Unexpected EOF')",),))
Looks like you have the answer to your question here:
"...buckets have an initial IO capacity of around 1000 write requests
per second...As the request rate for a given bucket grows, Cloud
Storage automatically increases the IO capacity for that bucket"
Therefore it auto-scales automatically. The only thing is that you need to increase the requests/s gradually, as described here:
"If your request rate is expected to go over these thresholds, you should start with a request rate below or near the thresholds and then double the request rate no faster than every 20 minutes"
It looks like your bucket should get an increase of I/O capacity that will work in the future.
You are actually right at the edge (1000 req/s), and I guess this is what is causing your error.
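For instance, following the doubling guidance quoted above, a ramp schedule for the load test might look like this (a sketch only; the target rate is a made-up example):

public class RampSchedule {
    public static void main(String[] args) {
        // Start near the documented 1000 req/s threshold and double no faster
        // than every 20 minutes, per the Cloud Storage guidance quoted above.
        long rate = 1000;
        long targetRate = 8000;   // hypothetical target load
        int minute = 0;
        while (rate < targetRate) {
            System.out.printf("t=%3d min: hold %d req/s%n", minute, rate);
            minute += 20;
            rate *= 2;
        }
        System.out.printf("t=%3d min: reach %d req/s%n", minute, targetRate);
    }
}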