Close connection/session in LoadRunner

Practical Challenge:
I have a LoadRunner script that runs against an app that is being mocked and does not have a logout button (yet).
The test runs fine with stable response times for about 10 minutes, but after that response times spike, the server goes to 99% memory usage, and transactions start to fail.
I suspect this is because the script does not terminate the Vusers' sessions after each run, so running sessions build up against the server and are never terminated. But I might be wrong.
Anyway, I want to programmatically close each session after it has completed the business process.
I have read somewhere that web_set_sockets_option("SHUTDOWN_MODE", "ABRUPT") could be used for this, but I want to be sure that this function actually does what I want, and what does 'ABRUPT' mean?
Are there better ways of closing sessions? Clicking the browser's close button during recording does not result in anything being captured in the script.

This is a server-side session-aging issue. Your website's server admin can adjust the timeout value for sessions where no activity has taken place. By default, most places have this set to 30 minutes. Trim it to what you need rather than taking the default value on the server.
Also, you may have hit a leak situation if resources are constantly accumulated on the server side but never released.
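For example, if the server side happens to be ASP.NET on IIS (an assumption here; other stacks have an equivalent knob), the idle timeout can be trimmed in web.config:

<configuration>
  <system.web>
    <!-- End sessions after 10 minutes of inactivity instead of the server default -->
    <sessionState timeout="10" />
  </system.web>
</configuration>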

Based on your question I assume you're using the Web/HTML protocol. I agree that the core issue is that your app's sessions should expire more elegantly and probably sooner. But to get past this while testing, you can try the following. It isn't a guarantee, but it has worked for me in the past when dealing with similar situations. Try changing your Run-time Settings for the script:
Run-time Settings > Browser > Browser Emulation
Make sure you have the box checked for "Simulate a new user on each iteration". You can also try playing with the other settings here, like clearing the cache on each iteration. This can cause a new connection to the server to be established for each iteration, depending on the server's session settings. Again, this isn't 100%, but it has worked for me from time to time.

try this:
// Close keep-alive connections instead of leaving them open for reuse
web_set_sockets_option("CLOSE_KEEPALIVE_CONNECTIONS", "1");

Session corrupt using aspnet_state service

We have for some time now been experiencing problems with data being saved in our SQL database.
Sometimes records are saved with data that does not match the rest of the row, making it seem as if, at some point, data is being 'swapped' for something else, perhaps another user's data, before being passed to the database.
We use TransactionScopes throughout with an isolation level of ReadCommitted, which makes me think the data-integrity issue lies within the application rather than at the database level.
We use the session extensively, and we are starting to think that the times of the corrupt data coincide with the times we deploy updates to the system during the day.
We do use the aspnet_state service to persist the session over application restarts.
Our users rely on terminal sessions, so multiple users all log into the same server and launch the system via a browser.
In the past we noticed users logging in with the same domain credentials, but we are now relatively confident that users log in with unique accounts.
99.9% of the data is correct but we have been struggling to understand what could be causing this intermittent data integrity issue.
We are now limiting our deploys to outside working hours on pain of death, but this is not always possible.
Can anyone shed light on why/how this might be happening?
EDIT: We have now isolated this to the DAL layer, see SQL query returns incorrect value in multi user environment
I have recently been fighting this, and had a similar problem to yours: around 95% of the data written back was correct. I looked at various possible reasons; the main culprit was that some users on the network had downloaded Chrome and were opening records within Chrome, breaking our session IDs, as Chrome ignored our sessions.
The other cause was users either not closing the browser or not logging off the application, allowing either the same user or a completely different user to pick up and reuse the session ID.
After introducing a browser check to reject Chrome, educating the users to make sure they log off, and moving any updates outside busy periods, the problem was just about gone.
I forgot to mention: on your IIS it is also best to turn off caching in Output Caching; for both the user-mode and kernel-mode caches, set it to prevent caching.

IIS Orphaned Requests

We have IIS 7 running a Classic ASP app and I've been noticing the following issue lately. Over the course of the day, if I look at Server Node --> Worker Processes, some requests seem to pile up there. The elapsed time is something crazy like 12 hours by the end of the day. These requests all sit in the ExecuteRequestHandler stage.
There is no way anything is executing for that long, and I cannot seem to reproduce the issue. I have tried dumping w3wp.exe, using FRT, and all that good stuff, but I have some general questions:
Is there a setting that controls WHEN IIS stops a request? To be specific: in development, if I purposely design a page to be slow (e.g. update a SQL table that's locked) and then close the browser while monitoring the requests in IIS, I see that the request still sits there for about 20 seconds before being removed. Is that 20 seconds a random interval, or can it be SET somewhere? To be clear, it's not that the page takes 20 seconds to execute; it would execute forever in this test case, but IIS seems to give up on it 20 or so seconds after I close the browser.
Is there some way to see "orphaned" requests, i.e. requests in the app pool that nobody is waiting for anymore?
What else can I do to try and debug this? A dump of w3wp says there are client connections with an HTTP request state of HTR_READING_CLIENT_REQUEST.
I keep getting suggestions to modify IIS config settings such as AspRequestQueueMax, but every time I try looking those up in ApplicationHost.config I don't see them set, so either I'm looking in the wrong place, or default values are not explicitly written to the config. This raises two questions: (a) how do you READ these config values, i.e. get the current value, and (b) how do you SET them?
A Classic ASP request will keep running until the script timeout is reached, regardless of whether the client is connected or not. I believe the default is 90 seconds, but an .ASP file can override this by setting the Server.ScriptTimeout property directly (which is pretty common). If your request queue is filling up then this is likely the reason and changing the defaults will not help.
If you can edit the ASP code, you can add logic like this in potentially long-running sections:
' Stop processing as soon as the client has disconnected
If Not Response.IsClientConnected Then Call Response.End()
You can also do a global search of your code for Server.ScriptTimeout to understand where the abuse is coming from.
If you do want to change the default script timeout, here is where it is stored:
https://www.iis.net/configreference/system.webserver/asp/limits
To change via the IIS7 GUI go to: (web site) > (features view) > ("IIS" category) > "ASP" > expand "Limits Properties" node > "Script Time-out"
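If you prefer the command line over the GUI, AppCmd can read and write the same section; a sketch, assuming a default IIS 7 install (double-check the attribute names against the link above):

%windir%\system32\inetsrv\appcmd.exe list config /section:system.webServer/asp
%windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/asp /limits.scriptTimeout:"00:02:00" /commit:apphost
%windir%\system32\inetsrv\appcmd.exe set config /section:system.webServer/asp /limits.requestQueueMax:"3000" /commit:apphost

The list command answers the "how do you READ these" question: defaults are not written to ApplicationHost.config explicitly, but list config shows the effective values, defaults included.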

What exactly happens when I change number of Azure role instances?

I observe the following weird behavior. I have an Azure web role which is deployed on the live Azure cloud. Now I click "Configure" in the Azure Management Portal and change the number of instances, and the portal shows some "activity". Now I open the browser, navigate to the URL assigned to my deployment, and start refreshing the page about once every two seconds. The page reloads fine many times, and then for some time it stops reloading and requests are rejected; then, after something like half a minute, requests are handled normally again.
What is happening? Is the web server temporarily stopped? How do I change number of instances so that HTTP requests to the role are handled at all times?
When you change the configuration, your current instances might be restarted. This is probably what you ran into, and why your website didn't respond for about 30 seconds.
Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether it's because of the role restarting.
What you are doing is manual. Have you looked at the SDK for autoscaling Azure?
http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications
Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.
Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the config (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests always go to the same instance due to keep-alive.
It would be interesting to know what the behavior is if you disable keep-alives for your webrole. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.
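If you want to try that experiment, a minimal sketch of disabling HTTP keep-alive, assuming the web role runs on full IIS so that web.config's system.webServer section applies:

<configuration>
  <system.webServer>
    <!-- Each request opens a fresh connection, so the load balancer can pick a different instance every time -->
    <httpProtocol allowKeepAlive="false" />
  </system.webServer>
</configuration>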

Session timeout in web applications

The session timeout in web applications typically denotes the idle time - i.e. the period of time when the user doesn't work with the application.
Now, what if there is an automated script written that posts a request every 5 minutes - wouldn't that user's session go on endlessly? This being the case, won't this approach heavily load the application affecting its performance in the long run?
Running an automated call to the server, say via an AJAX request, will keep the session alive. Typically that's the point though. An interesting side effect of this is that if the request happens predictably and regularly, you can use it as a "ping" to determine if the user's browser is still open. If one or two pings are missed, you can close the session earlier and actually free up resources sooner than if you just let the session time out.
Yes, and Yes.
This is why if you're going to write an application for the web, you really want to find a way to implement it without using server side sessions. Usually, you will be able to find ways to implement the same functionality using cookies -- then the session data is client-side so who cares if they stay active permanently.
I did something similar for an application that relies heavily on session data.
What I did was set the IIS timeout to a relatively low number, say 10 minutes, then have a timed AJAX call that pings a blank page every 5 minutes.
The overhead of this is actually fairly low, as all you are doing is requesting a blank page, and if a person closes their browser, the session ends within 10 minutes.
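A minimal client-side sketch of that ping (the /blank.aspx URL is a placeholder; the intervals are the ones from this answer):

// Request a blank page every 5 minutes so IIS keeps the session alive
// while the browser is open; once it closes, the pings stop and the
// session dies within the 10-minute IIS timeout.
setInterval(function () {
    var xhr = new XMLHttpRequest();
    xhr.open("GET", "/blank.aspx?ts=" + new Date().getTime(), true); // timestamp defeats caching
    xhr.send(null);
}, 5 * 60 * 1000);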
You want to keep the session as small as possible. That said, if everyone starts doing that, of course it will load your application, with or without session state. If you think your users feel compelled to do that, consider why: either your application is missing an important feature or it is forcing them into something.
Now, regardless of that, if you are expecting lots of users to be active at the same time, so many that a single server won't do, then you will end up having the session out of process. If the session is in SQL Server, it is just saved data, so in that case we wouldn't be talking about memory usage.
Well... I guess "it depends". The first question you should ask yourself is whether you even need session state.
If you have an automated process, my guess is that you don't really need to use session.
In that case, either turn it off or don't worry about it.
I guess your session table would be a little bit larger, but on the other hand you won't be tearing down and recreating the session. I don't see how this would "heavily load" the application. I suppose it would depend on the application itself and how much memory is used to maintain session state.
It would allow the user's session to go on endlessly, as long as they have their browser open. If you need to keep a session alive for an extended period of time, you could also track the sessions via the DB rather than in memory.
Also, if you are worried about indefinitely open sessions, you could implement an absolute timeout from when the session was opened, in addition to the idle timeout.

IE onunload save bug

I have a dynamic, AJAX-y app, and I save the state when the user closes the browser window.
It works OK in all browsers, but in IE there is a problem: after I close the application tab twice, I can't connect to the server anymore.
My theory is that the connection to the server fails to complete while the tab is being closed, and somehow IE7 thinks that it has two outstanding connections to the server and therefore queues new connections indefinitely.
Has anyone experienced this? Any workaround or solution?
In IE, if you use a long-polling AJAX request, you have to close down the XHR connection on 'unload'. Otherwise it will be kept alive by the browser, even if you navigate away from your site. These kept-alive connections will then cause the hang, because your browser will hit its maximum open-connection limit.
This problem does not happen in other browsers.
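Something like this, assuming the app keeps a reference to its long-polling request (the variable name here is illustrative):

var pollXhr = null; // the app's long-polling XMLHttpRequest lives here

window.onunload = function () {
    if (pollXhr) {
        pollXhr.abort(); // explicitly release the connection so IE doesn't count it toward its per-host limit
        pollXhr = null;
    }
};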
Well, you can get around the connection limit easily enough; simply create a wildcard domain and instruct your app to round-robin the subdomains, e.g. a.rsrc.dmvnoc.com, b.rsrc.dmvnoc.com, etc., as I did for my netMail application. Without this trick, preloading all the images takes almost 30 seconds on a LAN (because of MSIE's low connection limit), but with it, the images download in about a second.
If you need to combine scripts with this trick, just set document.domain to the parent in the new scripts.
However, you might want to checkpoint the state on change anyway: the user might lose their network connection, or their computer might crash. If you want to reduce network traffic, have the client simply set a cookie that contains the relevant state. You can fit an awful lot in there (3000 bytes or so), and the server gets it automatically on the next connection anyway, where it can save the results (as it presently does) and remove the cookie to signal that it has saved the state.
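A sketch of the cookie checkpoint (the cookie name and state format are up to the app):

function checkpointState(serializedState) {
    // Stays within the ~3000-byte budget mentioned above; the server reads
    // the cookie on the next request, saves the state, and removes it.
    document.cookie = "appState=" + encodeURIComponent(serializedState) + "; path=/";
}

// Call it on every significant change instead of waiting for unload:
checkpointState("folder=inbox;msg=42");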
