Initial requests take 3-5 seconds; subsequent calls take under 500 milliseconds. The service makes a lightweight stored procedure call, and we find no latency when we profile it.
This service is not hit very frequently, and the process model's Idle Time-out is 20 minutes (the default). We tried hitting it every 19 minutes, but that made no difference.
The service runs under IIS 7.5 with .NET Framework 4.5.
The first request for an ASP.NET website always takes some time because the code needs to be compiled. You can install a module for IIS that automatically makes that initial request so you don't get that slowdown yourself.
http://www.iis.net/downloads/microsoft/application-initialization
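For reference, the module is driven by configuration rather than code. Here is a minimal sketch of the kind of settings involved, assuming IIS 7.5 with the Application Initialization module installed; the pool name, site name, and warm-up page below are placeholders:

```xml
<!-- applicationHost.config (placeholder names): start the worker process
     without waiting for a request and preload the application. -->
<applicationPools>
  <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>
<sites>
  <site name="MySite">
    <application path="/" applicationPool="MyAppPool" preloadEnabled="true" />
  </site>
</sites>

<!-- web.config: have the module issue a warm-up request after each restart. -->
<system.webServer>
  <applicationInitialization doAppInitAfterRestart="true">
    <add initializationPage="/Warmup.aspx" />
  </applicationInitialization>
</system.webServer>
```

With this in place, the module's warm-up request, rather than the first real user, pays the startup cost described above.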
I have an ASP.NET Core 6 Web API that calls a WCF service on the same server, hosted in a different IIS site.
The issue is that when I call the WCF service, there is a long wait before the call actually reaches it (I have also tested the WCF service directly with SoapUI using the same settings; the result is normal and does not show this issue). I traced the Current Connections counter in Performance Monitor and it looks like the image below:
I test this with a SoapUI load test, configured with 50 threads at once and 2 runs per thread, and I get this chart for the Current Connections counter. The chart also matches the SoapUI progress percentage: during the long waits the percentage stays at 0%, and at the peaks it goes up. Because of this waiting, the load test's response time ends up at nearly 30 seconds.
I have a web app with an Angular front end and a .NET Web API back end.
There are times when the application becomes slow, and during those periods I have seen some of the service calls take longer, as shown in the network capture in the Chrome developer tools. If I call the same services directly using Postman or JMeter they are very fast, but through the UI they take time.
[Update] The slow services are random. On the next run, other services take time; they may not be the ones that were found slower earlier.
This happens sometimes, not always. When the app performs fine, all services respond in milliseconds, and they continue to do so when called directly as well.
Any pointers?
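One way to narrow this kind of problem down is to log the server-side elapsed time per request and compare it with the Chrome network timings: if the server numbers stay in milliseconds during a slow episode while the browser shows seconds, the time is being lost in queueing or on the client rather than in the API itself. Below is a minimal sketch for classic ASP.NET Web API, assuming a message handler can be registered; the Trace call is just a placeholder for whatever logging the project already uses:

```csharp
using System.Diagnostics;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

// Hypothetical timing handler: logs how long each Web API request spends
// on the server so it can be compared with the Chrome network waterfall.
public class TimingHandler : DelegatingHandler
{
    protected override async Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken)
    {
        var stopwatch = Stopwatch.StartNew();
        HttpResponseMessage response = await base.SendAsync(request, cancellationToken);
        stopwatch.Stop();

        // Placeholder logging; swap in the project's logger.
        Trace.WriteLine(string.Format("{0} {1} took {2} ms",
            request.Method, request.RequestUri.PathAndQuery, stopwatch.ElapsedMilliseconds));

        return response;
    }
}

// Registration (e.g. in WebApiConfig.Register):
// config.MessageHandlers.Add(new TimingHandler());
```

Consistently fast server-side numbers during a slow spell would match the Postman/JMeter observation and point at the browser's per-host connection limits, proxies, or thread-pool queueing rather than the services themselves.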
First-time poster, so go easy on me.
I am currently trying to address a performance issue that shows up when hitting my web service after a one-minute period of inactivity. Literally after one minute of THAT user not hitting the web service, the next call takes 15 seconds before it actually reaches the service operation. If you keep making random service operation calls (not the same operation each time, just so you don't think the call is being "cached"), the service returns immediately (in less than a second).
Here are some "timings" I took so you can see how I arrived at the one minute of inactivity:
2:04PM
2:16PM --15 seconds
2:21PM --15 seconds
2:24PM --15 seconds
2:25PM --15 seconds
Again, if you hit the web service continuously without a one-minute period of inactivity, then ALL methods return in less than a second.
Here are some details regarding my web service:
WCF, WebHttpBinding, RESTful, over HTTPS.
Basic Authentication + custom authentication using an IDispatchMessageInspector. Authentication happens with EVERY call (except to the Initializer.aspx page); a rough sketch of what that inspector looks like follows this list.
A custom Initialization.aspx page has been created and is called every night after the Application Pool is recycled. This page caches a bunch of global data used by all users and also kicks off that compilation.
The Application Pool ONLY recycles every night at 2 AM. The worker process is never killed off because the idle timeout is disabled.
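Here is the promised sketch of what such an inspector looks like, to show where the per-call authentication cost sits; the ValidateUser call is a placeholder for whatever store the real inspector checks (here, ultimately Active Directory):

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;
using System.Text;

// Hypothetical sketch of the kind of inspector described above: Basic
// Authentication credentials are read from the HTTP header and validated
// on every call.
public class BasicAuthInspector : IDispatchMessageInspector
{
    public object AfterReceiveRequest(ref Message request,
        IClientChannel channel, InstanceContext instanceContext)
    {
        var httpRequest = (HttpRequestMessageProperty)
            request.Properties[HttpRequestMessageProperty.Name];

        string authHeader = httpRequest.Headers["Authorization"];
        if (authHeader == null || !authHeader.StartsWith("Basic "))
            throw new FaultException("Missing credentials.");

        string[] credentials = Encoding.UTF8
            .GetString(Convert.FromBase64String(authHeader.Substring(6)))
            .Split(new[] { ':' }, 2);

        // Placeholder: in the scenario above, this is where the Active
        // Directory / domain controller round trip happens.
        if (!ValidateUser(credentials[0], credentials[1]))
            throw new FaultException("Invalid credentials.");

        return null; // no correlation state needed
    }

    public void BeforeSendReply(ref Message reply, object correlationState) { }

    private bool ValidateUser(string user, string password)
    {
        return true; // placeholder
    }
}
```

The relevant point is that AfterReceiveRequest runs on every call, so anything slow inside it, such as the directory lookup behind ValidateUser, shows up directly as per-request latency.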
I have heard about reliableSession, but as the setting implies, that sounds like it would only apply to PerSession, not PerCall.
Is there any way to resolve this, or am I stuck resorting to "pinging" the server every 45 seconds with a dummy service operation?
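For completeness, the "pinging" workaround mentioned above could be as small as the sketch below, assuming a hypothetical cheap /KeepAlive operation exists on the service:

```csharp
using System;
using System.Net;
using System.Threading;

// Hypothetical keep-alive pinger: hits a dummy operation every 45 seconds
// so the service never sees a full minute of inactivity.
public static class ServicePinger
{
    private static Timer _timer;

    public static void Start()
    {
        _timer = new Timer(_ =>
        {
            try
            {
                using (var client = new WebClient())
                {
                    // "/KeepAlive" is a placeholder for a cheap no-op operation.
                    client.DownloadString("https://myserver/MyService.svc/KeepAlive");
                }
            }
            catch (WebException)
            {
                // Ignore transient failures; this is only a warm-up ping.
            }
        }, null, TimeSpan.Zero, TimeSpan.FromSeconds(45));
    }
}
```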
Found out the issue. We have multiple domain controllers. When a user was being authenticated, the lookup would start at the forest level and work its way down to the domain controller the server actually resided on. The firewalls that had been put in place were blocking all of the domain controllers except the one the server resided on.
So basically, authentication would fail to reach each of the other domain controllers in turn until it finally hit the only one it could.
You can fix this a number of ways, but we simply created firewall rules to allow the web server to communicate with the domain controller the users needed to be authenticated against.
I have two REST endpoints driving some navigation in a website. Both produce nearly the same response, but one gets its data straight from the DB, whereas the other first has to ask a search engine (Solr) for some data and then make the DB calls.
If I profile both endpoints with JProfiler, I get a higher runtime (approx. 60% more) for the second one (about 31 ms vs. 53 ms). That's as expected.
Profile result:
If I look at the same AJAX calls from the client side, I get a very different picture:
The faster of the two calls takes about 146 ms of waiting and network time.
The slower of the two calls takes about 1.4 seconds of waiting and network time.
Frontend timing is measured with the Chrome developer tools. The server is a Tomcat 7.0.30 running in STS 3.2. Client and server live on the same system, and the DB and Solr are external, so there should be no network latency between Tomcat and the browser. As a side note, the faster response has the bigger payload (2.6 vs. 4.5 KB).
I have no idea why the slower of the two calls takes about 60% more server time but in total nearly 1000% more "frontend time".
The question is: is there any way I can figure out where this timing difference originates?
By default, the CPU views in JProfiler show times in the "Runnable" thread state. If a thread reads data from a socket connection or waits for some condition, that time is not included in the "Runnable" thread state.
In the upper right corner of the CPU views there is a thread state selector. If you change that to "All states", you will get times that you can compare with the wall clock times from the browser.
I have a .NET 3.5 WCF service using BasicHttpBinding with no security, hosted on IIS 6.0.
Service throttling has been bumped up as per MS recommendations.
My service operation gets called a few hundred times concurrently, and at some point the client gets a timeout exception (59:00, which is what both the server and client timeouts are set to).
If I raise the timeout, it just hits the new limit.
It seems like the application just "freezes" somewhere, and we have not been able to figure out why this happens.
WCF tracing on the server side doesn't come up with anything.
Any ideas regarding what could be the issue?
Thanks
I assume your web service is not using the new async/await, especially with regard to the database calls. In that case, it's because you are blocking your limited pool of threads.
In more detail: IIS/ASP.NET only creates a limited number of threads to handle requests. The first, say, 8 requests spin up threads and start working. At some point they hit the database (I am assuming a traditional n-tier app), and those threads block while they wait. The next, say, 992 requests hit IIS and are held in a queue.
At some point the database calls return, the threads process the results and send the data to the client. Then another 8 requests are dequeued, hit the database, and so on.
However, each batch of 8 requests takes a finite amount of time to complete. With over 900 requests ahead of them (roughly 900 queued / 8 concurrent, i.e. 100+ batches), the last 100 or so requests will wait at the very least 100 * latency * number of round trips before they can even start. If your latency * number of round trips is high, your last request will sit a long time before it is even dequeued, hence the timeout.
Two remedies exist. The first, creating more threads, will use up all your memory until IIS crashes. The second is to use .NET 4.5 and async/await.
See here for more information
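A minimal sketch of what the second remedy looks like for a WCF operation (the service name, query, and connection string are placeholders); in .NET 4.5 a Task-returning operation hands the worker thread back to the pool while the database call is in flight:

```csharp
using System.Data.SqlClient;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface IOrderService
{
    // Task-returning service operations are supported from .NET 4.5 onwards.
    [OperationContract]
    Task<string> GetOrderStatusAsync(int orderId);
}

public class OrderService : IOrderService
{
    public async Task<string> GetOrderStatusAsync(int orderId)
    {
        // Placeholder connection string and query.
        using (var connection = new SqlConnection(
            "Data Source=.;Initial Catalog=Shop;Integrated Security=True"))
        using (var command = new SqlCommand(
            "SELECT Status FROM Orders WHERE Id = @id", connection))
        {
            command.Parameters.AddWithValue("@id", orderId);

            // The request thread is handed back to the pool here instead of
            // blocking for the duration of the database round trip.
            await connection.OpenAsync();
            return (string)await command.ExecuteScalarAsync();
        }
    }
}
```

With the blocking version, every in-flight database call holds one of those scarce threads hostage; with the async version, the same small pool keeps dequeuing new requests while earlier ones wait on the database.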