Ajax request delay 1 second

This time my problem is the delay between the AJAX request and the PHP file's response. I checked it with Google Chrome's network statistics, and it shows an almost exactly 1 second wait time every time the function loops and sends an AJAX GET request. This makes my script pretty much unusable, because it slows things down so much that the browser becomes unresponsive until the full loop has executed.
I tried removing all the MySQL queries to rule out MySQL as the problem, and the delay still existed. I'm fairly sure it's not MySQL taking that long to execute.
Does anyone have any idea what might be causing this delay? Maybe it's some setting on my PC, or something with AJAX?
Thank You
Maybe this will explain better.

YES! I've figured it out. Apparently my MySQL connection was taking its time; the reason was that I had used localhost to initiate the connection instead of 127.0.0.1. Now it's much, MUCH faster! :D No reason to blame AJAX; simply switching to 127.0.0.1 did the trick.

Related

CloudKit. Slow connection to Container Database first time

My question is about CloudKit and the delay I see when I run a CKQueryOperation to fetch records from the public database.
I have run a lot of tests and, definitely, this only happens when I run the query for the first time or when I haven't used the app for a long time. In that situation, when I run the query I have to wait several seconds before getting records. But if I repeat the request a couple of seconds later (or if I cancel the first one and launch it again), then everything is fast and perfect.
Does CloudKit have any "cache" for queries already launched, so that the next one (in the short term) is faster? Or is there something about establishing the connection the first time, after which the connection is kept alive?
I really have tried a lot of things and the result is always the same.
Please, do you have any clue about this behaviour?

No response from the host: snmpwalk

I have implemented an AgentX subagent using mib2c.create-dataset.conf (with cache enabled).
In my snmpd.conf: agentXTimeout 15
In the testtable.h file I have changed the cache value as below...
#define testTABLE_TIMEOUT 60
According to my understanding, it reloads the data every 60 seconds.
Now my issue is that if the table contains more than a certain amount of data, it takes a noticeable amount of time to load.
If I fire an SNMPWALK while that load is in progress, it gives me "no response from the host". Likewise, if I SNMPWALK the whole table and testTABLE_TIMEOUT expires partway through, it stops midway and shows the same error (no response from the host).
Please tell me how to solve this. My table holds a large amount of data, and it changes frequently.
I read somewhere:
(when the agent receives a request for something in this table and the cache is older than the defined timeout (12s > 10s), then it does re-load the data. This is the expected behaviour.
However the agent does not automatically release the local cache (i.e. call the 'free' routine) as soon as the timeout has expired.
Instead this is handled by a regular "garbage collection" run (once a minute), which will free any stale caches.
In the meantime, a request that tries to use that cache will spot that it's expired, and reload the data.)
Is there any connection between these two? I can't quite get it... How do I resolve my problem?
Unfortunately, if your data set is very large and it takes a long time to load then you simply need to suffer the slow load and slow response. You can try and load the data on a regular basis using snmp_alarm or something so it's immediately available when a request comes in, but that doesn't really solve the problem either since the request could still come right after the alarm is triggered and the agent will still take a long time to respond.
So... the best thing to do is optimize your load routine as much as possible, and possibly simply increase the timeout that the manager uses. For snmpwalk, for example, you might add -t 30 to the command line arguments and I bet everything will suddenly work just fine.
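As a concrete illustration of that last suggestion (the community string, host address, and table name below are placeholders, not values from the setup above):
snmpwalk -v 2c -c public -t 30 192.0.2.1 testTable
Here -t 30 raises the per-request timeout to 30 seconds, which gives the agent time to finish reloading its cache before the manager gives up and reports "no response from the host".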

Apache Makes some AJAX Request Behave Synchronously

I have this strange issue where sometimes, if I make two AJAX requests to my Apache 2.2 server in rapid succession, the second request will wait for the first to finish before it completes.
For example, I have two requests, one that sleeps for 10 seconds and one that returns immediately. If I run the request that returns immediately by itself, it always returns within 300ms. However, if I call the request that takes 10 seconds and then call the request that returns right away, about 50% of the time the second request waits until the first finishes, and Chrome reports that the request took about 10 seconds before receiving a response. The other half of the time the quick request returns right away.
I can't find any pattern that makes it behave one way or the other; it just randomly blocks the quick AJAX request sometimes, and other times it behaves as expected. I'm working on a dev server that only I am accessing, and I've set several variables such as MaxRequestsPerChild to a high value.
Does anyone have any idea why Apache, seemingly at random, is turning my AJAX requests into synchronous requests?
Here is the code I'm running:
$.ajax({async:true,dataType:'json',url:'/progressTest',success:function(d){console.log('FINAL',d)}}); // Sleeps for 10 seconds
$.ajax({async:true,dataType:'json',url:'/progressTestStatus',success:function(d){console.log('STATUS',d)}}); // Takes ~300ms
And here are two screen shots. The first where it behaved as expected and the second where it waited for the slow process to finish first (in the example the timeout was set to 3 seconds).
UPDATE: Per the comments below - this appears to be related to Chrome only performing one request at a time. Any ideas why Chrome would set such a low limit on async requests?
The problem is not with Apache but with Google Chrome limiting the number of concurrent requests to your development server. I can only make guesses as to why it's limited to one request. Here are a couple:
1) Do you have many tabs open? There is a limit to the total number of concurrent connections, and if you have many tabs making requests with KeepAlive you may be at that limit and able to establish only one connection to your server. If that's the case, you might be able to fix it by adding KeepAlive to your own output headers.
2) Do you have some extensions enabled? Some extensions do weird things to the browser. Try disabling all your extensions and making the same requests. If it works, then enable them one at a time to find the culprit extension.

How to simulate browser timeout of ajax request?

I'm trying to secure my web application against timeouts of ajax requests.
To do it, I obviously need to simulate such a timeout.
From what I've found here:
http://kb.mozillazine.org/Network.http.connect.timeout#Background
the Firefox timeout is system-dependent, and from what I've found here: http://support.microsoft.com/kb/181050 the IE timeout period is 60 minutes by default.
So I see the following ways to simulate a timeout:
make the server wait 60 minutes (yuck ;))
change the IE timeout period to a smaller value (which requires registry changes)
configure a proxy between the client and the server and make it timeout
All the ways above seem like overkill to me. Does anyone know an easier way (possibly in a different browser)?
Thanks!
Wouldn't it be much easier to simply set the AJAX timeout to 1 millisecond? Even on localhost it will always time out at that value. This is the method I always use. The only thing you don't exercise with this approach is the actual "feel" that your preferred timeout period gives to the end user (i.e., does 3 seconds feel long, is 2 seconds too short?). But if you're just looking to exercise the code path behind the error response, this does the trick for me.
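For instance, a minimal jQuery sketch of that idea (the URL is a placeholder; the absurdly short timeout exists only to force the timeout branch):
$.ajax({
    url: '/your-endpoint',   // placeholder; any endpoint that cannot answer within 1ms
    dataType: 'json',
    timeout: 1,              // unrealistically short, so the request always times out
    success: function (d) { console.log('unexpectedly fast', d); },
    error: function (jqXHR, textStatus) {
        if (textStatus === 'timeout') {
            // run the same error handling your application uses for real timeouts
            console.log('simulated AJAX timeout');
        }
    }
});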
Eventually the easiest way for me was to simulate the timeout by setting ReceiveTimeout in the registry under HKEY_CURRENT_USER\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings,
as described here:
http://support.microsoft.com/kb/181050
Darshan's solution might also work, but I only tested the above. Thank you all for your help!
What's the harm in setting KeepAliveTimeout in the registry under
HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Internet Settings?
More information can be found here:
http://support.microsoft.com/kb/181050
It's simple: set the timeout to 10 milliseconds, like this: xhr.timeout = 10;
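A similar sketch with a plain XMLHttpRequest, since xhr.timeout is specified in milliseconds (the URL is a placeholder):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/your-endpoint', true);  // placeholder URL; asynchronous request
xhr.timeout = 10;                         // far shorter than any realistic response time
xhr.ontimeout = function () {
    console.log('simulated timeout');     // exercise your normal timeout handling here
};
xhr.onload = function () {
    console.log('response beat the timeout', xhr.status);
};
xhr.send();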

ColdFusion requests never time out for LDAP requests!

I have an application running on CF8 which often makes calls to external systems such as a search engine and LDAP servers. But at times a request never gets its response and stays in the active request list indefinitely.
Even though a request timeout is set in the Administrator, it is not being applied in these scenarios.
I have around 5 requests that have been pending for the last 20 hours!
My server settings are as below:
Timeout requests after (seconds): 300
Maximum number of simultaneous requests: 20
Maximum number of running JRun threads: 50
Maximum number of running JRun threads: 1000
Timeout requests waiting in queue after 300 seconds
I read through some articles and found there are cases where threads never receive a response and are never killed. But I don't have a solid solution for how to time these out or kill them automatically.
I'd really appreciate it if you guys have some ideas on this :)
The ColdFusion timeout does not apply to 'third party' connections.
A long-running LDAP query, for example, will take as long as it needs. Only when the calling template gets the result back from the query does your timeout apply.
This often leads to confusion when interpreting errors: you will get an error blaming whichever function runs after the long-running request for causing the timeout.
Further reading available here
You can (and probably should) set a timeout on the CFLDAP call itself. http://help.adobe.com/en_US/ColdFusion/9.0/CFMLRef/WSc3ff6d0ea77859461172e0811cbec22c24-7f97.html
Thanks, Antony, for recommending my blog entry CF911: Lies, Damned Lies, and CF Request Timeouts...What You May Not Realize. This problem of requests not timing out when expected can be very troublesome and a surprise for most.
But Anooj, while that at least explains WHY they don't die (and you can't kill them within CF), one thing to consider is that you may be able to kill them in the REMOTE server being called, in your case, the LDAP server.
You may be able to go to the administrator of THAT server and, by showing them that CF has a long-running request against it, they may be able to spot and resolve the problem. And if they can, that may free the connection from CF, and your request will then stop.
I have just added a new section on this idea to the bottom of that blog entry, as "So is there really nothing I can do for the hung requests?"
Hope that helps.
