I have a dynamic AJAX app, and I save its state when the user closes the Internet Explorer window.
It works fine in all browsers, but in IE there is a problem: after I close the application tab twice, I can't connect to the server anymore.
My theory is that the connection to the server fails to complete while the tab is being closed, and somehow IE7 thinks it still has 2 outstanding connections to the server and therefore queues new connections indefinitely.
Has anyone experienced this? Is there a workaround or solution?
In IE, if you use long-polling AJAX requests, you have to close the XHR connection on 'unload'. Otherwise the browser keeps it alive, even after you navigate away from your site. These kept-alive connections then cause the hang, because the browser hits its maximum open connection limit.
This problem does not happen in other browsers.
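For example, a minimal sketch of tearing the poll down on unload (the endpoint and names here are illustrative, not from the original app):

// Keep a reference to the long-poll request so it can be aborted on unload.
let pollRequest: XMLHttpRequest | null = null;

function startLongPoll(): void {
  pollRequest = new XMLHttpRequest();
  pollRequest.open("GET", "/poll", true); // "/poll" is a placeholder endpoint
  pollRequest.onload = () => {
    // ...handle the server's response, then re-issue the poll...
    startLongPoll();
  };
  pollRequest.send();
}

// Explicitly release the connection when the page is torn down, so IE
// does not keep counting it against its per-host connection limit.
window.onunload = () => {
  if (pollRequest) {
    pollRequest.abort();
    pollRequest = null;
  }
};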
Well, you can get around the connection limit easily enough: create a wildcard domain and have your app round-robin the subdomains, e.g. a.rsrc.dmvnoc.com, b.rsrc.dmvnoc.com, and so on, as I did for my netMail application. Without this trick, preloading all the images takes almost 30 seconds on a LAN (because of MSIE's low per-host connection limit), but with it, the images download in about a second.
If you need to combine scripts with this trick, just set document.domain to the parent domain in the new scripts.
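A rough sketch of the round-robin idea (reusing the subdomains above; the helper function is made up for illustration):

// Cycle resource requests across several subdomains so no single host
// hits the browser's per-host connection limit.
const hosts = ["a.rsrc.dmvnoc.com", "b.rsrc.dmvnoc.com", "c.rsrc.dmvnoc.com"];
let next = 0;

function resourceUrl(path: string): string {
  const host = hosts[next];
  next = (next + 1) % hosts.length;
  return `https://${host}/${path}`;
}

// Usage: preloading images spreads the downloads over the subdomains.
for (const name of ["logo.png", "toolbar.png", "icons.png"]) {
  const img = new Image();
  img.src = resourceUrl(`images/${name}`);
}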
However, you might want to checkpoint the state on every change anyway: the user might lose their network connection, or their computer might crash. If you want to reduce network traffic, have the client simply set a cookie containing the relevant state. You can fit an awful lot in there (about 3000 bytes), and the server gets it automatically on the next connection anyway, where it can save the results (as it presently does) and remove the cookie to signal that the state has been saved.
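As a sketch of the cookie approach (the cookie name and state shape are invented for illustration):

// Checkpoint UI state into a cookie on every change; the server receives
// it automatically with the next request and can clear it once saved.
function saveStateToCookie(state: object): void {
  const value = encodeURIComponent(JSON.stringify(state));
  if (value.length < 3000) { // stay within the size budget mentioned above
    document.cookie = `appState=${value}; path=/`;
  }
}

saveStateToCookie({ openFolder: "inbox", selectedMessage: 42 });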
Practical Challenge:
I have a LoadRunner script that runs against a mocked app which does not have a logout button (yet).
The test runs fine with stable response times for about 10 minutes, but after that the response times spike, the server hits 99% memory usage, and transactions start to fail.
I suspect this is because the script does not terminate the vusers after each run, so running sessions that are never terminated build up against the server. But I might be wrong.
Anyway, I want to programmatically close each run after it has completed the business process.
I have read somewhere that web_set_sockets_option("SHUTDOWN_MODE", "ABRUPT") could be used for this, but I want to be sure that this function actually does what I want. What does 'ABRUPT' mean?
Are there better ways of closing sessions? Clicking the browser's close button during recording does not result in anything being captured in the script.
It's a server-side issue with session aging. Your website's server admin can adjust the timeout value for sessions in which no activity has taken place. By default, most places have this set to 30 minutes. Trim it to what you need rather than taking the server's default value.
Also, you may have hit a leak situation if resources are constantly accumulated on the server side but never released.
Based on your question I assume you're using the Web/HTML protocol. I agree that the core issue is that your app's sessions should expire more gracefully, and probably sooner. But in order to get past this while testing, you can try the following. It isn't a guarantee, but it has sometimes worked for me when dealing with similar situations. Try changing the Run-time Settings for the script:
Run-time Settings > Browser > Browser Emulation
Make sure the box for "Simulate a new user on each iteration" is checked. You can also try playing with the other settings here, such as clearing the cache on each iteration. Depending on the server's session settings, this can cause a new connection to be established with the web server on each iteration. Again, this isn't 100%, but it has worked for me from time to time.
Try this:
web_set_sockets_option("CLOSE_KEEPALIVE_CONNECTIONS", "1"); // close keep-alive connections instead of leaving them open
Okay, I know it sounds generic. But I mean on an AJAX level. I've tried using Firebug to track the NET connections and posts, and it's a mystery. Does anyone know how they do the instant, constant autosave without destroying the network/browser?
My guess (and this is only a guess) is that Google uses a push service. This seems like the most viable option given that their chat client (which is also integrated within the window) uses one to deliver "real time" messages with minimal latency.
I'm betting they have a whole setup that manages everything connection-related and sends flags to trigger specific elements. You won't see connection triggers because the initial page visit establishes the connection, which then just hangs open for the entire duration you have the page open. E.g.:
You visit the page
The browser establishes a connection to api.docs.google.com (an illustrative endpoint) and keeps it open.
The client-side code then sends various commands and receives an assortment of responses.
These commands are sent back and forth until you either:
Lose the connection (timeout, etc.) in which case it's re-established
The browser window is closed
An example of how I imagine a typical exchange:
Client -> Server: DOC_FETCH mydocument.doc
Server -> Client: DOC_CONTENT mydocument.doc 15616 ...
Client -> Server: DOC_AUTOSAVE mydocument.doc 24335 ...
Client -> Server: IM collaboratorName Hi Joe!
Server -> Client: IM_OK collaboratorName OK
Server -> Client: AUTOSAVE_OK mydocument.doc OK
Here the DOC_FETCH command is saying "I want the data." The server replies with the corresponding DOC_CONTENT <docname> <length> <contents>. Then the client triggers DOC_AUTOSAVE <docname> <length> <content>. Given the number of potential simultaneous requests, I would bet they keep "context" in the requests/responses, so that after something is sent it can be matched up with its reply. In this example, the client knows the IM_OK matches the second request (IM), and the AUTOSAVE_OK matches the first request (DOC_AUTOSAVE). Something like how AOL's IM protocol works.
Again, this is only a guess.
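A toy sketch of that request/response matching over a single long-lived connection (using a WebSocket purely for illustration; every name and URL here is invented, since Google's actual protocol is not public):

// Match responses to requests over one persistent connection by
// tagging every message with a correlation id.
type ReplyHandler = (response: string) => void;
const pending = new Map<number, ReplyHandler>();
let nextId = 0;

const socket = new WebSocket("wss://example.com/docs"); // placeholder URL

function send(command: string, onReply: ReplyHandler): void {
  const id = nextId++;
  pending.set(id, onReply);
  socket.send(`${id} ${command}`);
}

socket.onmessage = (event: MessageEvent) => {
  // Replies come back as "<id> <payload>"; route each one by its id.
  const [id, ...rest] = (event.data as string).split(" ");
  const handler = pending.get(Number(id));
  if (handler) {
    pending.delete(Number(id));
    handler(rest.join(" "));
  }
};

socket.onopen = () => {
  send("DOC_AUTOSAVE mydocument.doc 24335 ...", (reply) => console.log(reply));
  send("IM collaboratorName Hi Joe!", (reply) => console.log(reply));
};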
--
To prove this, use something like Ethereal (now Wireshark) and see if you can see the information transferring in the background.
We have WCF services that operate over multiple protocols for different customers. Most work fine, but when we use SSL the connections take a long time to close. Opening a connection is no problem, but closing is very slow.
The strangest behavior is that the close time is directly proportional to the amount of data transmitted on the connection. If just a few bytes are sent from the server to the client, the connection closes almost instantly, but a search that returns several hundred rows takes as long to close the connection as the original search did. It seems that the results are retransmitted back to the server for verification before the connection will close.
An error is almost never thrown, but the connection close time essentially doubles the required time to execute a call.
Here are the basic settings:
Custom binding
Binary encoding
Reliable session, Ordered=true
Binding element is HttpsTransportBindingElement
using RemoteCertificateValidationCallback
All of the proxies are constructed programmatically with ChannelFactory.
We found that the problem was ReliableSession. Before the connection closes, ReliableSession tries to verify everything that was sent over it. This sounds like a good idea, but it is essentially worthless, because even if it found something that didn't verify, it is too late to do anything about it.
Bottom line: ReliableSession isn't very reliable.
Just a theory: it could be that it writes to a log when the proxy closes and you get an extra hit due to decryption, or that it does not cache HTTPS results.
Do you have any WCF logging turned on?
Does the CPU spike when you close the proxy?
Could you check if it is actually sending two requests to the server?
I'm using Firebug 1.5.2 and, while testing a site before production release, I can see a huge amount of time consumed by the 'Blocking' parts of the requests.
What exactly does the 'Blocking' mean?
"Blocking" previously (earlier versions of FireBug) was called "Queuing". It actually means that request is sitting in queue waiting for available connection. As far as I know number of persistent connections by default is limited in last versions of Firefox to 6, IE8 also 6. Earlier it was only 2. It can be changed by user in browser settings.
Also as I know that while javascript file is loading, all other resources (css, images) are blocked
Blocking is a term used to describe an event that stops other events or code from processing (within the same thread).
For example if you use "blocking" sockets then code after the socket request has been made will not be processed until the request is complete (within the same thread).
Asynchronous (non-blocking) activities simply make the request and let other code run while the request happens in the background.
In your situation it basically means that certain parts of Firebug/the browser cannot proceed until other parts are complete, i.e. the browser is waiting for an image to download before downloading more.
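To illustrate the difference in browser terms (synchronous XHR is deprecated on the main thread, but it shows the point; "/data" is a placeholder URL):

// Blocking: a synchronous XMLHttpRequest halts all script on the page
// until the response arrives.
const xhr = new XMLHttpRequest();
xhr.open("GET", "/data", false); // third argument false = synchronous
xhr.send();
console.log(xhr.responseText); // nothing else ran while we waited

// Non-blocking: the request runs in the background and a callback fires
// when it completes; other code keeps executing in the meantime.
fetch("/data")
  .then((response) => response.text())
  .then((text) => console.log(text));
console.log("this line runs before the fetch completes");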
As far as I know, two things can cause components to block others from loading:
The browser's enforced (but usually configurable) limit on how many resources can be loaded in parallel from a particular host at a time.
Inline JavaScript, which can cause the browser to wait and see whether it needs to go ahead with downloading the rest of the components at all (just in case the JavaScript redirects or replaces the content of the page).
It means "waiting for connection". As explained in the official documentation by Mozilla, "Blocking" is "Time spent in a queue waiting for a network connection." That can be due to Firefox hitting its internal parallel connections limit, as explained there and in answers here.
It can also mean "waiting because server is busy". One possible reason for "Blocking" times is missing in the official documentation linked above: it can happen when the server cannot provide a connection at the time because it is overloaded. In that case, the connection request goes into a queue on the server until it can be processed once a worker process becomes free [source].
In a technical sense, such a connection is not yet established because the request is awaiting accept() from the server [source]. And maybe that is why it is subsumed under "Blocking" by Firefox, as it could also be considered "Time spent in a queue waiting for a network connection".
(This behaviour is not fully consistent as of Firefox 51, though: for the first URL called up in a new tab, the time before the server accepts the connection request is not counted at all in the "Timings" tab; it is only counted for subsequent URLs entered. Either of the two behaviours could be a bug, I don't know which one.)
The session timeout in web applications typically denotes the idle time - i.e. the period of time when the user doesn't work with the application.
Now, what if an automated script is written that posts a request every 5 minutes - wouldn't that user's session go on endlessly? And if so, won't this approach heavily load the application and hurt its performance in the long run?
Running an automated call to the server, say via an AJAX request, will keep the session alive. Typically that's the point though. An interesting side effect of this is that if the request happens predictably and regularly, you can use it as a "ping" to determine if the user's browser is still open. If one or two pings are missed, you can close the session earlier and actually free up resources sooner than if you just let the session time out.
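For instance, a server-side sketch of that early-close idea (framework-agnostic; the session store and closeSession are stand-ins for whatever your server uses):

// Track the last ping per session; a sweeper closes any session that
// has missed two consecutive 5-minute pings.
const PING_INTERVAL_MS = 5 * 60 * 1000;
const lastPing = new Map<string, number>();

// Called by the ping endpoint on every request.
function onPing(sessionId: string): void {
  lastPing.set(sessionId, Date.now());
}

function closeSession(sessionId: string): void {
  lastPing.delete(sessionId);
  // ...release the server-side resources held by this session...
}

setInterval(() => {
  const cutoff = Date.now() - 2 * PING_INTERVAL_MS;
  for (const [id, seen] of lastPing) {
    if (seen < cutoff) closeSession(id); // two pings missed: browser is gone
  }
}, PING_INTERVAL_MS);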
Yes, and yes.
This is why, if you're going to write an application for the web, you really want to find a way to implement it without using server-side sessions. Usually you can find ways to implement the same functionality using cookies; then the session data is client-side, so who cares if sessions stay active permanently.
I did something similar for an application that relies heavily on session data.
What I did was set the IIS session timeout to a relatively low number, say 10 minutes, and then have a timed AJAX call ping a blank page every 5 minutes.
The overhead on this is actually fairly low, as all you are doing is requesting a blank page, and if a person closes their browser, the session ends within 10 minutes.
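A minimal sketch of that client-side timer (the blank page URL is whatever you set up for this purpose):

// Ping a blank page every 5 minutes so IIS keeps the session alive while
// the browser stays open; once the window closes, the pings stop and the
// 10-minute timeout ends the session.
const FIVE_MINUTES = 5 * 60 * 1000;

setInterval(() => {
  const ping = new XMLHttpRequest();
  ping.open("GET", "/keepalive.aspx", true); // placeholder blank page
  ping.send();
}, FIVE_MINUTES);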
You want to keep the session as small as possible. That said, if everyone starts doing that, of course it will load your application, with or without sessions. If you think your users are compelled to do that, consider why, as either your application is missing an important feature or it is forcing them into something.
Now, regardless of that, if you are expecting lots of users to be active at the same time, so many that a single server won't do, then you will end up keeping the session out of process. If the session is in SQL Server, it is just saved data, so in that case we wouldn't be talking about memory usage.
Well... I guess "it depends". The first question you should ask yourself is whether you even need session state.
If you have an automated process, my guess is that you don't really need to use session.
In that case, either turn it off or don't worry about it.
I guess your session table would be a little larger, but on the other hand you won't be constantly tearing down and recreating sessions. I don't see how this would "heavily load" the application; I suppose it would depend on the application itself and how much memory is used to maintain session state.
It would allow the user's session to go on endlessly, as long as their browser stays open. If you need to keep a session alive for an extended period of time, you could also track sessions in the database rather than in memory.
Also, if you are worried about indefinitely open sessions, you could implement a timeout based on when the session was opened, in addition to the extended-idle timeout.