Within a request on an ApiController, I'm tracking the time spent awaiting the SqlConnection to open.
await t.TrackDependencyAsync(async() => { await sqlConnection.OpenAsync(); return true; }, "WaitingSqlConnection");
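(For reference, TrackDependencyAsync here is a custom helper, not something from the Application Insights SDK. A minimal sketch of what such a wrapper might look like, assuming t is a TelemetryClient, is below; the exact implementation is not the point of the question.)

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;

public static class TelemetryClientExtensions
{
    // Sketch of a timing wrapper: awaits the action, measures how long it took,
    // and reports the duration as a dependency. Names and overload are illustrative.
    public static async Task<T> TrackDependencyAsync<T>(
        this TelemetryClient telemetry, Func<Task<T>> action, string dependencyName)
    {
        var startTime = DateTimeOffset.UtcNow;
        var stopwatch = Stopwatch.StartNew();
        var success = true;
        try
        {
            return await action();
        }
        catch
        {
            success = false;
            throw;
        }
        finally
        {
            stopwatch.Stop();
            telemetry.TrackDependency("SQL", dependencyName, "OpenAsync",
                startTime, stopwatch.Elapsed, success);
        }
    }
}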
If my endpoint is not called for at least 5 minutes, the next call sees a huge OpenAsync duration (around 3 s) instead of a near-instant one.
I'd like to understand the cause so I can eliminate this slowness.
UPDATE
I created an endpoint that does nothing but open the SqlConnection. If I wait more than 5 minutes, call that OpenConnection endpoint, and then call any other request, the OpenConnection call incurs the waiting cost mentioned above but the subsequent request does not.
Hence, I scheduled a job on Azure to run every minute and call the OpenConnection endpoint. However, when I make requests from my HTTP client, I still incur the waiting time, as if the opened SqlConnection were somehow tied to the HTTP client's IP...
Also, that 5-minute window is typical of a DNS TTL... However, 3 s is far too long for a DNS lookup of the database endpoint, so it can't be that.
UPDATE 2
The time observed at the HTTP client level seems to be the result of both waiting for the connection and some other latencies (DNS lookup?).
Here is a table summarizing what I observe:
UPDATE 3
The difference between rows 3 and 4 of my table is time spent in TCP/IP connect and HTTPS handshake, according to Fiddler. Let's not focus on that in this post, only on the time spent waiting for the SqlConnection to open.
UPDATE 4
Actually, I think both waiting times have the same cause.
The server needs to "keep alive" its connection to the database and the client needs to "keep alive" its connection to the server.
UPDATE 5
I had a job running every 4 minutes to open the SqlConnection, but once in a while it still incurred the waiting cost. So I think the inactivity window is 4 minutes, not 5 (hence I updated this post's title).
So I updated my scheduled job to run every minute. Then I realised it was still incurring the waiting cost, but now regularly, every 30 minutes (hence I updated this post's title again).
These two durations correlate suspiciously well with the Azure Load Balancer idle timeout.
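If those idle timeouts really are the culprit, one workaround I'm considering is a server-side keep-alive that actually sends traffic on the pooled connection rather than merely opening it. A minimal sketch, assuming a plain timer inside the web app; the class name and the 60-second interval are illustrative assumptions:

using System;
using System.Data.SqlClient;
using System.Threading;

// Sketch: keep the pooled SQL connection warm by exercising it more often
// than the suspected idle timeout.
public sealed class SqlKeepAlive : IDisposable
{
    private readonly Timer _timer;
    private readonly string _connectionString;

    public SqlKeepAlive(string connectionString)
    {
        _connectionString = connectionString;
        // Fire every 60 seconds, well under the 4-minute idle window.
        _timer = new Timer(_ => Ping(), null, TimeSpan.Zero, TimeSpan.FromSeconds(60));
    }

    private void Ping()
    {
        try
        {
            using (var connection = new SqlConnection(_connectionString))
            using (var command = new SqlCommand("SELECT 1", connection))
            {
                connection.Open();        // grabs a connection from the pool
                command.ExecuteScalar();  // sends real traffic so the idle timer resets
            }
        }
        catch
        {
            // Ignore failures here; this is only a warm-up and the next tick retries.
        }
    }

    public void Dispose() => _timer.Dispose();
}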
Related
I have this strange issue where sometimes, if I make two AJAX requests to my Apache 2.2 server in rapid succession, the second request waits for the first to finish before completing.
For example, I have two requests: one that sleeps for 10 seconds and one that returns immediately. If I run the request that returns immediately by itself, it always returns within 300 ms. However, if I call the request that takes 10 seconds and then call the request that returns right away, about 50% of the time the second request waits until the first finishes, and Chrome reports that the request took about 10 seconds before receiving a response. The other half of the time the quick request returns right away.
I can't find any pattern that makes it behave one way or the other; it just randomly blocks the quick AJAX request sometimes and behaves as expected at other times. I'm working on a dev server that only I am accessing, and I've set several variables such as MaxRequestsPerChild to a high value.
Does anyone have any idea why Apache, seemingly at random, is turning my AJAX requests into synchronous requests?
Here is the code I'm running:
$.ajax({ async: true, dataType: 'json', url: '/progressTest', success: function (d) { console.log('FINAL', d); } });       // Sleeps for 10 seconds
$.ajax({ async: true, dataType: 'json', url: '/progressTestStatus', success: function (d) { console.log('STATUS', d); } }); // Takes ~300 ms
And here are two screen shots. The first where it behaved as expected and the second where it waited for the slow process to finish first (in the example the timeout was set to 3 seconds).
UPDATE: Per the comments below - this appears to be related to Chrome only performing one request at a time. Any ideas why Chrome would set such a low limit on async requests?
The problem is not with Apache but with Google Chrome limiting the number of concurrent requests to your development server. I can only make guesses as to why it's limited to one request. Here are a couple:
1) Do you have many tabs open? There is a limit on the total number of concurrent connections, and if you have many tabs making requests with keep-alive you may be at that limit and able to establish only one connection to your server. If that's the case, you might be able to fix it by adding Keep-Alive to your own output headers.
2) Do you have any extensions enabled? Some extensions do weird things to the browser. Try disabling all your extensions and making the same requests. If that works, enable them one at a time to find the culprit.
I'm developing a web site using CakePHP. I'm analyzing it with Firebug + YSlow and the Google Chrome developer tools. In an AJAX request I see a large waiting time of about 6 s, while the receiving time is tiny (66 ms), which causes great latency in the request. Does anybody know why the waiting time is so large?
Waiting time - the time from sending the request until the first byte is received, which involves round-trip time. There can be latency if your server is far away from your machine. It usually requires 3 round trips: one for the DNS lookup, one for establishing the TCP connection, and one for the request/response pair. For example, with a 100 ms round-trip time, those three trips alone would add roughly 300 ms before any server-side processing.
Receiving time - this will be small if only a small amount of data is being downloaded from the server to the client.
For further reference: http://www.webperformancematters.com/journal/2007/7/24/latency-bandwidth-and-response-times.html
My guess is that you might be performing a SQL query as part of the resource you are calling via AJAX. If so, you may need to tune your query or indexes to improve its speed. Can you post some code so we can review it?
I have hosted a state machine workflow as a WCF service, and the workflow is called from ASP.NET code. I used netTcpContextBinding for the workflow hosting. The problem is that if a SendReceive activity within the workflow takes a long time (say 1 minute) to execute, it throws a transaction aborted error and terminates. I have already set the binding values for the send, receive, open, and close timeouts to their maximum values in both web.config and app.config.
How can I overcome this issue?
A TransactionScope has a default timeout of 60 seconds, so if whatever you are doing inside it takes longer, it will time out and abort. You can increase the timeout on the TransactionScope, but quite frankly 60 seconds is already quite long. In most cases you are better off doing any long-running work to collect data before the transaction and keeping the transaction itself as short as possible.
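If you do need more time, the timeout can be raised when the scope is created. A rough sketch follows; the five-minute value is purely illustrative, and machine.config's maxTimeout setting may still cap whatever you pass here:

using System;
using System.Transactions;

public static class LongRunningWork
{
    public static void Execute()
    {
        // Raise the TransactionScope timeout above the 60-second default.
        var options = new TransactionOptions
        {
            IsolationLevel = IsolationLevel.ReadCommitted,
            Timeout = TimeSpan.FromMinutes(5)   // illustrative value only
        };

        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            // ... the work that currently exceeds 60 seconds ...
            scope.Complete();
        }
    }
}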
I have an application running on CF8 which often makes calls to external systems such as a search engine and LDAP servers. At times, though, some requests never get a response and remain in the active request list indefinitely.
Even though a request timeout is set in the administrator, it is not being applied in these scenarios.
I have around 5 requests that have been pending for the last 20 hours!
My server settings are as follows:
Timeout Requests after ( seconds) : 300 sec
Max no of simultaneous requests : 20
Maximum number of running JRun threads : 50
Maximum number of queued JRun threads : 1000
Timeout requests waiting in queue after 300 seconds
I read through some articles and found that there are cases where threads never get a response and are never killed. But I don't have a solid solution for how to time these out or kill them automatically.
I'd really appreciate any ideas on this :)
The ColdFusion timeout does not apply to 'third party' connections.
A long-running LDAP query, for example, will take as long as it needs. Only when the calling template gets the result back from the query does your timeout apply again.
This often leads to confusion when interpreting errors: you will get an error blaming whichever function runs after the long-running request for causing the timeout.
Further reading is available in the CF911 blog entry referenced below.
You can (and probably should) set a timeout on the CFLDAP call itself. http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7f97.html
Thanks, Antony, for recommending my blog entry CF911: Lies, Damned Lies, and CF Request Timeouts...What You May Not Realize. This problem of requests not timing out when expected can be very troublesome and a surprise for most.
But Anooj, while that at least explains WHY they don't die (and you can't kill them within CF), one thing to consider is that you may be able to kill them in the REMOTE server being called, in your case, the LDAP server.
You may be able to go to the administrator of THAT server and, by showing them that CF has a long-running request, they may be able to spot and resolve the problem. If they can, that may free the connection from CF and your request will then stop.
I have just added a new section on this idea to the bottom of that blog entry, as "So is there really nothing I can do for the hung requests?"
Hope that helps.
My actual script execution time is less than a millisecond, and yet the total time the response takes is about 250 ms, 1000 times more, on a typical AJAX call. Even in environments where I have a reliable T1 connection, the responses still take 50-100 ms.
Background info:
Calls are being made via POST/GET through AJAX (jQuery).
The backend is PHP/MySQL on Joyent servers.
The information shown below comes from Firebug's Net tab:
DNS Lookup = 0
Connecting = 46ms
Sending = 0ms
Waiting = 172ms
Receiving = 0ms
You need to move closer to the servers. :) It sounds like the speed of light is your bottleneck.
Have a look at the traceroute of your network packets to the server.