First-time poster, so go easy on me.
I am currently trying to address a performance issue when hitting my web service after a one-minute period of inactivity. After one minute of that user not hitting the web service, the next call takes 15 seconds before the service operation actually executes. If you keep making random service operation calls (different operations each time, so it isn't simply caching one call), the service returns immediately (in less than a second).
Here are some timings I took so you can see how I arrived at the one minute of inactivity:
2:04PM
2:16PM --15 seconds
2:21PM --15 seconds
2:24PM --15 seconds
2:25PM --15 seconds
Again, if you hit the web service continuously, without a one-minute period of inactivity, then ALL operations return in less than a second.
Here are some details regarding my web service:
WCF, WebHttpBinding, RESTful, using HTTPS.
Basic Authentication + custom authentication using an IDispatchMessageInspector. Authentication happens with EVERY call (except to the Initialization.aspx page); a simplified sketch of the inspector follows this list.
A custom Initialization.aspx page has been created, which is called every night after the Application Pool is recycled. This page caches a bunch of global data used by all users and also triggers the initial compilation.
The Application Pool ONLY recycles every night at 2 AM. The worker process is never killed off because the idle timeout is disabled.
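For context, the custom authentication piece is an IDispatchMessageInspector along these lines (simplified; ValidateCredentials is a stand-in for our real check against the credential store):

    using System;
    using System.ServiceModel;
    using System.ServiceModel.Channels;
    using System.ServiceModel.Dispatcher;

    public class BasicAuthInspector : IDispatchMessageInspector
    {
        // Runs on every incoming call, before the operation executes.
        public object AfterReceiveRequest(ref Message request,
            IClientChannel channel, InstanceContext instanceContext)
        {
            var http = (HttpRequestMessageProperty)
                request.Properties[HttpRequestMessageProperty.Name];
            string authHeader = http.Headers["Authorization"];
            if (string.IsNullOrEmpty(authHeader) || !ValidateCredentials(authHeader))
                throw new FaultException("Access denied.");
            return null; // no correlation state needed
        }

        public void BeforeSendReply(ref Message reply, object correlationState)
        {
            // Nothing to do on the way out.
        }

        private static bool ValidateCredentials(string authHeader)
        {
            // Placeholder: decode "Basic <base64(user:pass)>" and check
            // the credentials against the store (Active Directory here).
            return true;
        }
    }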
I've heard about reliableSession, but as the setting implies, it sounds like it would only apply to PerSession services, not PerCall.
Is there any way to resolve this, or am I stuck resorting to "pinging" the server every 45 seconds with a dummy service operation?
Found out the issue. We have multiple domain controllers. When a user was being authenticated, the lookup would start at the forest level and work its way down to the domain controller the server actually resided on. The firewalls that were put in place were blocking all of the domain controllers except that one.
So basically, authentication would fail against each of the other domain controllers in turn until it finally reached the only one it could talk to, and those failed attempts accounted for the 15-second delay.
You can fix this a number of ways, but we just created firewall rules to allow the web server to communicate with the domain controller that users needed to be authenticated against.
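For illustration, the rule amounted to something along these lines (the IP and port list are examples; the ports you actually need depend on how authentication is performed):

    rem Example only: let the web server reach the correct domain
    rem controller on typical AD ports (Kerberos, RPC, LDAP, SMB, LDAPS).
    netsh advfirewall firewall add rule name="WebServer to DC" dir=out action=allow protocol=TCP remoteip=10.0.0.10 remoteport=88,135,389,445,636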
I have a service being load tested by a third party. A few minutes after starting, we start to see requests hanging for a very long period of time and the caller ultimately times out (after 60 seconds).
They are testing with 15 users with each user using two devices at once, so a total of 30 connections.
The service is a simple façade over a more complex operation that calls an external system. Benchmarking our communications with the external system suggests that everything is responding in the time we would expect (sub-200 ms).
The IIS logs reveal a number of very long requests (> 200 sec) that ultimately do return a 200 but carry the Win32 error code ERROR_NETNAME_DELETED (error 64). I have checked the service log and can match each response to its request (based on the SOAP message ID), and I can see that we do eventually respond with the correct information (although the client has long since given up).
Any ideas as to what could be causing this behavior? We're hosting in IIS using wsHttpBinding, and we're using WS-Security with X.509 certificates (message & transport encryption).
We don't have benchmark logging inside our service, but the code is a very simple mapping of the WCF request to the server request, making the request, and mapping the response back to the WCF response. We do this manually, and there is no parsing involved (straight assignments).
After a detailed investigation, including getting Microsoft support involved, we found we were hitting the serviceThrottling defaults, specifically maxConcurrentSessions. We determined this from perfmon; there is a counter for it. We were initially unsure why we saw this, as the service behaved fine when called from a .NET client.
It turns out that the Java consumer of this application, built on CXF, was not respecting the WSDL (specifically the WS-SecureConversation part) and was not closing its sessions when it closed its connection.
Our solution was to jack up maxConcurrentSessions to a high number, set the inactivityTimeout down low (to a minute) to force session abandonment, and set establishSecurityContext to false so the WSS negotiation doesn't consume an additional session.
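In configuration terms it looked roughly like this (names and numbers are illustrative, not our production values):

    <system.serviceModel>
      <behaviors>
        <serviceBehaviors>
          <behavior name="Throttled">
            <!-- Raised well above the default so abandoned sessions
                 don't starve new callers. -->
            <serviceThrottling maxConcurrentSessions="500" />
          </behavior>
        </serviceBehaviors>
      </behaviors>
      <bindings>
        <wsHttpBinding>
          <binding name="Secure">
            <security mode="Message">
              <!-- Avoids the WS-SecureConversation handshake consuming
                   an additional session per client. -->
              <message clientCredentialType="Certificate"
                       establishSecurityContext="false" />
            </security>
          </binding>
        </wsHttpBinding>
      </bindings>
    </system.serviceModel>

Where the one-minute inactivityTimeout lives depends on your binding; with a custom binding it sits on the security element's localServiceSettings.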
The solution is inelegant, as the service logs are littered with errors about forced session closures, but it fixed the issue we were seeing. Unfortunately we had a hard requirement for WS-Security, so our solution needed to stay within that constraint.
I hope this helps someone, as this was an interesting and time-consuming problem to pin down.
I want to increase the session timeout, which is currently set to 20 minutes. How can I increase or decrease it, for example to one hour (60 minutes)?
There are a few ways to accomplish what you need, as we ran into the same issue when doing our NetSuite integration.
You can make a dummy search call every couple of minutes. We searched for a bogus transaction that we knew would never be created, limited to a single date in the distant past, so the search would return very quickly with zero results. (A sketch of this approach appears after these options.)
Implement single sign-on (SSO). This is the preferred method. Once you initiate single sign-on, if the session has previously timed out you can quickly create a new session using tokens, without asking the user for their username/password again.
We had a service that needed to be consumed at two different points in the application that did not know about each other. The way we got around this while still using one service was to save the service's cookies in a shared location. When the service was needed by either part of the application, it would recreate the service from the cookies. If the session had timed out, we would recreate the service and update the cookies. This method became outdated once we implemented single sign-on, since we could then just create the service from the tokens as needed, with the tokens stored in a shared location.
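For the first option, the keep-alive boils down to something like this (a sketch; RunDummySearch is a placeholder, not a NetSuite API call):

    using System;
    using System.Timers;

    public class SessionKeepAlive
    {
        private readonly Timer _timer;

        public SessionKeepAlive()
        {
            // Fire every couple of minutes, comfortably inside the
            // 20-minute session timeout.
            _timer = new Timer(TimeSpan.FromMinutes(2).TotalMilliseconds);
            _timer.Elapsed += (sender, e) => RunDummySearch();
            _timer.AutoReset = true;
        }

        public void Start() { _timer.Start(); }
        public void Stop() { _timer.Stop(); }

        private void RunDummySearch()
        {
            // Placeholder: issue the bogus search described above, e.g.
            // a transaction that can never exist, constrained to a single
            // date in the distant past, so it returns instantly with zero
            // results but still counts as session activity.
        }
    }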
Hope this helped.
There is no standard way that I know of in NetSuite. You could, though, use a browser plugin to refresh the page or click the home button every 19 minutes. That would work if, for example, the person is AFK.
There is no way to change the web service request timeout period (for synchronous operations it lasts approximately 15 minutes, after which the operation is terminated on the server side). The general practice for long-running operations that take more than 15 minutes is to use asynchronous requests.
I have a .NET 3.5, BasicHttpBinding, no-security WCF service hosted on IIS 6.0.
I have service throttling bumped up as per MS recommendations.
My service operation is being called a few hundred times concurrently, and at some point the client gets a timeout exception (at 59:00, which is what both the server and client timeouts are set to).
If I raise the timeout it just hits the new limit.
It seems like the application just "freezes" somewhere, and we have not been able to figure out why.
WCF tracing on the server side doesn't come up with anything.
Any ideas regarding what could be the issue?
Thanks
I assume your web service is not using the new async/await, especially for the database calls. If so, it's because you are blocking your limited pool of threads.
In more detail: IIS/ASP.NET only creates a limited number of threads to handle requests. The first, say, 8 requests spin up threads and start working. At some point they hit the database (I am assuming a traditional n-tier app), and those threads sleep while they wait. The next, say, 992 requests hit IIS and are held in a queue.
At some point the database calls return, the threads process the results and send data to the client, and another 8 requests are dequeued, hit the database, and so on.
However, each batch of 8 requests takes a finite time to complete. With over 900 requests ahead of them, the last requests in the queue sit behind roughly 992 / 8, i.e. about 124 batches, so they wait at least 124 * latency * number of round trips before they even start. If your latency * number of round trips is high, your last request will take a very long time to even get dequeued, hence the timeout.
Two remedies exist. The first is to create more threads, which will eat your memory until IIS crashes. The second is to use .NET 4.5 and async/await.
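Here is a sketch of the second remedy (the contract and query are illustrative; the point is that await releases the thread back to the pool while the database round trip is in flight):

    using System.Data.SqlClient;
    using System.ServiceModel;
    using System.Threading.Tasks;

    [ServiceContract]
    public interface IOrderService
    {
        // Task-returning operations let WCF complete the call
        // asynchronously (.NET 4.5+).
        [OperationContract]
        Task<int> GetOrderCountAsync(string customerId);
    }

    public class OrderService : IOrderService
    {
        public async Task<int> GetOrderCountAsync(string customerId)
        {
            using (var conn = new SqlConnection("...connection string..."))
            using (var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Orders WHERE CustomerId = @id", conn))
            {
                cmd.Parameters.AddWithValue("@id", customerId);
                await conn.OpenAsync();   // thread returns to the pool here...
                // ...and here, instead of sleeping on the round trip.
                return (int)await cmd.ExecuteScalarAsync();
            }
        }
    }

With this shape, those 992 queued requests no longer wait behind 8 sleeping threads; the same small thread pool keeps dequeuing work while the database calls are outstanding.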
See here for more information
I observe the following weird behavior. I have an Azure web role deployed to the live Azure cloud. I click "Configure" in the Azure Management Portal and change the number of instances, and the portal shows some "activity". I then open the browser, navigate to the URL assigned to my deployment, and start refreshing the page about once every two seconds. The page reloads fine many times, then for some time it stops reloading and the requests are rejected; after something like half a minute, requests are handled normally again.
What is happening? Is the web server temporarily stopped? How do I change the number of instances so that HTTP requests to the role are handled at all times?
When you change the configuration, your current instances may be restarted. That is most likely what you ran into, and why your website did not respond for about 30 seconds.
Please have a look at http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleenvironment.changing.aspx and check whether it is the role restarting.
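A minimal sketch of handling that event in the role (whether you leave Cancel as false depends on whether your role can absorb the change without recycling):

    using Microsoft.WindowsAzure.ServiceRuntime;

    public class WebRole : RoleEntryPoint
    {
        public override bool OnStart()
        {
            // Fires before a configuration/topology change is applied.
            // Setting e.Cancel = true forces the instance to recycle;
            // leaving it false tells Azure the role can keep running.
            RoleEnvironment.Changing += (sender, e) =>
            {
                e.Cancel = false;
            };
            return base.OnStart();
        }
    }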
What you are doing is manual. Have you looked at the SDK for autoscaling Azure?
http://channel9.msdn.com/posts/Autoscaling-Windows-Azure-applications
Check out the demo at the 18-minute mark. It doesn't answer your question directly, but it's a much more configurable/dynamic way of scaling Azure.
Azure updates your roles one update domain at a time, so in theory you should see no downtime when updating the configuration (provided you have at least two instances). However, if you refresh the browser every couple of seconds, it's possible that your requests always go to the same instance due to keep-alive.
It would be interesting to know what the behavior is if you disable keep-alives for your web role. Note that this will have a performance impact, so you'll probably want to re-enable keep-alives after the exercise.
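If you want to try that, keep-alives can be turned off for the site in web.config (IIS 7+):

    <!-- Disable HTTP keep-alive for the test, then re-enable it
         afterwards, since this costs a TCP handshake per request. -->
    <system.webServer>
      <httpProtocol allowKeepAlive="false" />
    </system.webServer>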
I find that my WCF service takes 8-10 seconds to respond to the first hit. After that, it takes less than a second.
Any thoughts?
Probably due to .NET's cold start. Have you looked at setting up the IIS Warm-Up Module, which initializes dependencies before the first request arrives?
From the Learn IIS website:
Decrease the response time for first requests by pre-loading worker processes. The IIS Application Warm-Up module lets you configure the Web application to be pre-loaded before the first request arrives so that the worker process responds to the first Web request more quickly.
Increase reliability by pre-loading worker processes when overlapped recycling occurs. Because the recycled worker process in an overlapped recycling scenario only communicates its readiness and starts accepting requests after it finishes loading and initializing the resources as specified by the configuration, pre-loading the dependencies reduces the response times for the first requests.
Customize the pre-loading of applications. You can configure the IIS Application Warm-Up module to initialize Web applications by using specific Web pages and user identities. This makes it possible to create specific initialization processes that can be executed synchronously or asynchronously, depending on the initialization logic. In addition, these procedures can use specific identities to ensure a proper initialization.
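For reference, the Warm-Up module beta was later superseded by the IIS Application Initialization module, whose configuration looks roughly like this (the page name is an example):

    <!-- web.config: hit a warm-up page before the first real request. -->
    <system.webServer>
      <applicationInitialization doAppInitAfterRestart="true">
        <add initializationPage="/Warmup.aspx" />
      </applicationInitialization>
    </system.webServer>

You would typically pair this with setting the application pool's startMode to AlwaysRunning, so the worker process spins up without waiting for a request.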