What is the behavior of read and write timeouts in OkHttp?
Is the timeout exception triggered when the whole request exceeds the timeout duration, or when the socket doesn't receive (read) or send (write) any packet for that duration?
I think it is the second behavior, but could someone clarify this?
Thanks in advance.
The timeouts are triggered when you block for too long. On read that occurs if the server doesn't send you response data. On write it occurs if the server doesn't read the request you sent. Or if the network makes it seem like that's what's happening!
Timeouts are continuous: if the timeout is 3 seconds and the response is 5 bytes, an extreme case might succeed in 15 seconds, as long as the server sends something every 3 seconds. In other words, the timeout is reset after every successful I/O operation.
Okio’s Timeout class also offers a deadline abstraction that is concerned with the total time spent.
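To make the distinction concrete, here is a minimal sketch of configuring both kinds of limits with OkHttp's builder (assuming OkHttp 3.12+ on the classpath, where `callTimeout` is available). Each per-operation timeout bounds a single blocking read or write, while `callTimeout` is a deadline on the whole call:

```java
import okhttp3.OkHttpClient;
import java.util.concurrent.TimeUnit;

public class TimeoutConfig {
    public static void main(String[] args) {
        // readTimeout fires if no byte arrives for 3s; writeTimeout fires if
        // no byte can be sent for 3s. A slow-but-steady response that delivers
        // something every 3 seconds never trips either of them.
        OkHttpClient client = new OkHttpClient.Builder()
                .connectTimeout(3, TimeUnit.SECONDS)
                .readTimeout(3, TimeUnit.SECONDS)
                .writeTimeout(3, TimeUnit.SECONDS)
                .build();

        // callTimeout is the deadline-style limit: it caps the total time for
        // the entire request/response exchange, however steady the traffic.
        OkHttpClient bounded = client.newBuilder()
                .callTimeout(15, TimeUnit.SECONDS)
                .build();

        System.out.println(client.readTimeoutMillis());   // 3000
        System.out.println(bounded.callTimeoutMillis());  // 15000
    }
}
```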
Related
I have set read and write timeouts to 5 seconds. But I have a handler responsible for adding data from an Excel file, and it takes more than 5 seconds to complete the request.
So what I want to do is increase the read/write timeout by X amount of time if the request is not completed within 5 seconds.
I have implemented the TimeoutHandler provided by the net/http package to cancel the request if it is not completed within the specified time.
What are the ways to do this? How can I manipulate the timeout?
This is only for a demo and Alexa (Amazon echo) doesn't support us pushing text to it to be spoken randomly so we want to pull off a hack.
User speaks into Alexa
We have our Lambda execute an action and then, hopefully, sleep and wait on an API response, which will not happen until we do something
Then we may post a response from another user
Lambda now returns the text
In this way, we are trying to simulate two way communication through Alexa.
Do I have to worry about Alexa timing out? If so, how long will it take? Will my Lambda timeout as well (I am assuming I can just sleep in that code or hang on a remote call)?
The response timeout is set by your AWS Lambda backend. If you do not change it, the timeout defaults to 3 seconds. The rules for configuring the timeout are documented in the Lambda FAQs:
Q: How long can an AWS Lambda function execute?
All calls made to AWS Lambda must complete execution within 300
seconds. The default timeout is 3 seconds, but you can set the timeout
to any value between 1 and 300 seconds.
If your response processing takes long enough to create a noticeable wait, the Echo device will flash its light ring in a rapid circle to indicate work taking place. This will continue, blocking any other interaction with the Echo device, until the response is returned or the backing Lambda function reaches its timeout limit.
I'm not sure what the maximum timeout is for Alexa, but I just tried a 60-second execution and it seemed to work. Lambda lets you set the timeout of the request under Configuration/Advanced Settings. There is a box for minutes, but I have never tried raising the timeout beyond tens of seconds.
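The "sleep and wait on an API response" step from the question can be sketched as a bounded polling loop inside the Lambda handler. This is only an illustration: `checkForReply` is a hypothetical stand-in for whatever API the other user's response arrives through, and the deadline should be kept safely under the configured Lambda timeout so the function can return fallback speech instead of being killed mid-flight:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class ReplyPoller {
    // Poll until a reply appears or the deadline passes, sleeping between
    // attempts. deadlineMillis should be shorter than the Lambda timeout.
    public static String awaitReply(Supplier<Optional<String>> checkForReply,
                                    long deadlineMillis, long pollIntervalMillis)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + deadlineMillis;
        while (System.currentTimeMillis() < deadline) {
            Optional<String> reply = checkForReply.get();
            if (reply.isPresent()) {
                return reply.get();
            }
            Thread.sleep(pollIntervalMillis);
        }
        return "Sorry, no response arrived in time."; // fallback speech text
    }
}
```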
I have following test result from load test -
timeStamp,elapsed,label,responseCode,Latency
1447675626444,9,API1,201,9
1447675626454,1151,API2,404,Not Found,1151
As is evident, the call to API2 fails, and there is a delay of 10 ms between the start of the two calls.
I know that label timeStamp is time from epoch but is it the time when request was fired from client or the time when last response byte was received?
If latter then how do I find the time when request was fired from client?
The first field is the request start time. Latency is the time from that timestamp until the first response byte is received; elapsed is the time from that timestamp until the complete response is received. So in your case,
444: API1 request went out. 9 milliseconds later, at
453: First byte AND last byte of API1 response are received, because latency equals elapsed
454: API2 request went out
If you're using a regular thread group in JMeter with two samplers, the second request is not sent out until the response to the first sampler is completely received. Your issue would seem to be something other than pure sequence of calls.
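The arithmetic above can be checked directly against the sample rows (field layout taken from the CSV header: timestamp + latency = first byte, timestamp + elapsed = last byte):

```java
public class JtlTimes {
    public static void main(String[] args) {
        long api1Start = 1447675626444L, elapsed = 9, latency = 9;
        long api2Start = 1447675626454L;

        long firstByte = api1Start + latency;   // when the first byte arrived
        long lastByte  = api1Start + elapsed;   // when the full response arrived
        long startGap  = api2Start - api1Start; // gap between the two requests

        System.out.println(firstByte); // 1447675626453
        System.out.println(lastByte);  // 1447675626453 (same: response in one go)
        System.out.println(startGap);  // 10 ms
    }
}
```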
==
To clarify what @Mike said about "request is sent behind two or three lines code.":
The timestamp is when the JMeter sampler code marked the request start event and made a log entry. After that, the JVM has to execute a few lines of code to use the Apache HttpClient object to create a TCP connection, assemble an HTTP request, and then send it out over possibly several TCP packets. On any modern system, the difference between the timestamp and the actual request going out will be less than a few milliseconds. If this timing is important for you to measure, JMeter isn't really the right tool; you should use a network sniffer like Wireshark to find the timestamp of when the first packet was actually transmitted.
As you said, the timestamp is the sampler start time; the request itself is sent a few lines of code later. You cannot get the exact timestamp at which the request is sent. As far as I know, it doesn't affect your performance results.
If you want to measure the interval between the request being sent and the response being returned, you would need another API for that.
Similar to Raising Google Drive API per-user limit does not prevent rate limit exceptions
In the Drive API Console, the quotas look like this:
Despite the per-user limit being set to an unnecessarily high number of requests/sec, I am still getting rate errors at the user level.
What I'm doing:
I am using approx 8 threads uploading to Drive, and they are ALL implementing a robust exponential back-off of 1, 2, 4, 8, 16, 32, 64 seconds respectively (pretty excessive back-off, but necessary imho). The problem still persists through all of this back-off in some of the threads.
Is there some other rate that is not being advertised / cannot be set?
I'm nowhere near the requests/sec, and still have 99.53% total quota. Why am I still getting userRateLimitExceeded errors?
userRateLimitExceeded is basically flood protection. It's used to prevent people from sending too many requests too fast.
Indicates that the user rate limit has been exceeded. The maximum rate
limit is 10 qps per IP address. The default value set in Google
Developers Console is 1 qps per IP address. You can increase this
limit in the Google Developers Console to a maximum of 10 qps.
You need to slow your code down, by implementing Exponential Backoff.
Make a request to the API
Receive an error response that has a retry-able error code
Wait 1s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 2s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 4s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 8s + random_number_milliseconds seconds
Retry request
Receive an error response that has a retry-able error code
Wait 16s + random_number_milliseconds seconds
Retry request
If you still get an error, stop and log the error.
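The steps above can be sketched as a retry loop. This is only an outline: `attempt` is a hypothetical stand-in for your actual API call plus retryable-error check, and the jitter keeps parallel threads from retrying in lockstep:

```java
import java.util.Random;
import java.util.function.IntPredicate;

public class ExponentialBackoff {
    static final Random RANDOM = new Random();

    // Returns true once a request succeeds, false after maxRetries
    // retryable failures. attempt.test(n) should return true on success.
    public static boolean callWithBackoff(IntPredicate attempt, int maxRetries)
            throws InterruptedException {
        for (int retry = 0; retry <= maxRetries; retry++) {
            if (attempt.test(retry)) {
                return true; // success
            }
            if (retry == maxRetries) {
                break; // still failing: stop and log the error
            }
            // 1s, 2s, 4s, 8s, 16s ... plus up to 1s of random jitter
            long waitMillis = (1L << retry) * 1000 + RANDOM.nextInt(1000);
            Thread.sleep(waitMillis);
        }
        return false;
    }
}
```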
The idea is that every time you see that error you wait a few seconds then try and send it again. If you get the error again you wait a little longer.
Quota user:
Now, I am not sure how your application works, but if all the requests are coming from the same IP, this could cause your issue. As you can see from the quota, you get 10 requests per second per user. How does Google know it's a user? It looks at the IP address. If all your requests are coming from the same IP, then it's one user, and you are locked to 10 requests per second.
You can get around this by adding QuotaUser to your request.
quotaUser - Alternative to userIp. Link
Lets you enforce per-user quotas from a server-side application even in cases when the user's IP address is unknown. This can occur,
for example, with applications that run cron jobs on App Engine on a
user's behalf.
You can choose any arbitrary string that uniquely identifies a user, but it is limited to 40 characters.
Overrides userIp if both are provided.
Learn more about capping usage.
If you send a different quotaUser on every request, say a random number, then Google thinks it's a different user and counts each request against a separate per-user quota. It's a little trick to get around the IP limitation when running server applications that send everything from the same IP.
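Since quotaUser is just a standard query parameter, attaching it per end-user can be as simple as appending it to the request URL. A minimal sketch (the endpoint URL is illustrative; the 40-character cap comes from the quoted documentation above):

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

public class QuotaUserParam {
    // Append a quotaUser derived from your own user identifier. The value
    // is an arbitrary string, but Google caps it at 40 characters.
    public static String withQuotaUser(String baseUrl, String userId) {
        String value = userId.length() > 40 ? userId.substring(0, 40) : userId;
        String sep = baseUrl.contains("?") ? "&" : "?";
        return baseUrl + sep + "quotaUser="
                + URLEncoder.encode(value, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String url = withQuotaUser(
                "https://www.googleapis.com/drive/v3/files", "user-1234");
        System.out.println(url);
        // https://www.googleapis.com/drive/v3/files?quotaUser=user-1234
    }
}
```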
I have an application running in CF8 which does calls to external systems like search engine and ldaps often. But at times some request never gets the response and is shown always in the the active request list.
Even though there is a request timeout set in the administration, it's not getting applied in these scenarios.
I have around 5 requests still pending for the last 20 hours!
My server settings are as below
Timeout Requests after ( seconds) : 300 sec
Max no of simultaneous requests : 20
Maximum number of running JRun threads : 50
Maximum number of running JRun threads : 1000
Timeout requests waiting in queue after 300 seconds
I read through some articles and found there are cases where threads are never answered or killed, but I don't have a solid solution for how to time them out or kill them automatically.
I'd really appreciate it if you have some ideas on this :)
The ColdFusion timeout does not apply to 'third party' connections.
A long-running LDAP query, for example, will take as long as it needs. When the calling template gets the result from the query your timeout will apply.
This often leads to confusion when interpreting errors: you will get an error claiming that whichever function ran after the long-running request caused the timeout.
Further reading available here
You can (and probably should) set a timeout on the CFLDAP call itself. http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7f97.html
Thanks, Antony, for recommending my blog entry CF911: Lies, Damned Lies, and CF Request Timeouts...What You May Not Realize. This problem of requests not timing out when expected can be very troublesome and a surprise for most.
But Anooj, while that at least explains WHY they don't die (and you can't kill them within CF), one thing to consider is that you may be able to kill them in the REMOTE server being called, in your case, the LDAP server.
You may be able to go to the administrator of THAT server and, on showing them that CF has a long-running request, they may be able to spot and resolve the problem. And if they can, that may free the connection from CF, and your request will then stop.
I have just added a new section on this idea to the bottom of that blog entry, as "So is there really nothing I can do for the hung requests?"
Hope that helps.