How can I optimize MVC and IIS pipeline to obtain higher speed?

I am doing performance tweaking of a simple app that uses MVC on IIS 7.5.
I have a StopWatch starting up in Application_BeginRequest and I take a snapshot at Controller.OnActionExecuting.
So I measure the time spent in the entire IIS pipeline: from request receipt to the moment execution finally gets to my controller.
I obtain 700 microseconds on my 3GHz quad-core (project compiled Release x64), and I wonder where the bottleneck is, especially hearing some people say that one can get up to 8000 page loads per second with MVC.
How can I optimize MVC and IIS pipeline to obtain higher speed?
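For reference, a minimal sketch of that kind of pipeline measurement (illustrative names, not the poster's actual code) could look like this:

using System.Diagnostics;
using System.Web;
using System.Web.Mvc;

// Global.asax.cs: start a Stopwatch as soon as the request enters the pipeline.
public class MvcApplication : HttpApplication
{
    protected void Application_BeginRequest()
    {
        HttpContext.Current.Items["PipelineTimer"] = Stopwatch.StartNew();
    }
}

// Base controller: snapshot the elapsed time once execution reaches the action.
public class TimedController : Controller
{
    protected override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        var sw = (Stopwatch)HttpContext.Items["PipelineTimer"];
        // Elapsed here covers routing, the module pipeline, and controller creation.
        Trace.WriteLine("Pipeline time: " + sw.Elapsed.TotalMilliseconds + " ms");
        base.OnActionExecuting(filterContext);
    }
}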

The tool "IIS Tuner" may be helpful. It is an open source tool and you may investigate the tricks that application made. the tool is available at codeplex

I obtain 700 microseconds on my 3GHz quad-core (project compiled Release x64), and I wonder where the bottleneck is, especially hearing some people say that one can get up to 8000 page loads per second with MVC.
Note that a result of 700 µs in the pipeline is not incompatible with getting throughput of 8,000 requests per second. (You may be confusing response time with throughput.) If 8,000 people simultaneously made requests and each one was fulfilled less than one second later, that would be 8,000 requests per second regardless of whether the response time was 1 µs, 700 µs, or 700 ms.
Is 700 microseconds too long for IIS+MVC pipeline to run on every page load?
Not necessarily. You'd have to evaluate whether or not you're actually getting saturated with requests.
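As a rough sanity check using Little's law: concurrency ≈ throughput × response time, so 8,000 requests/s × 0.0007 s ≈ 5.6 requests in flight at any moment, which a quad-core machine can sustain comfortably. The 700 µs pipeline overhead would only cap you near 1 / 0.0007 ≈ 1,400 requests/s if a single thread had to serve every request strictly one after another.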

The tool "IIS Tuner" may be helpful.
And it makes WCAT not work. Any views?
More details below:
D:\Program Files\IIS Resources\WCAT Client>wcclient.exe localhost
wcclient.exe 5.2.3652 - Web Capacity Analysis Toolkit Client.
Copyright (c) 1995-2002 Microsoft Corporation. All rights reserved.
Compiled May 29 2003, 16:28:20
Connecting main client thread...
Connected.
Waiting for Config Message: Connecting Dead controller thread...
Done.
IP version requested for testing is unspecified
Receiving script header message: Done.
Receiving string table: Receiving 1 script pages ...
Fail to resolve server address for IP supported by the client: localhost
Connecting client abort notification...
Failed to resolve server address(es).

Have you looked into:
- async controllers? ASP.NET processes are limited to 12 threads (or 12 threads per CPU; I'm not sure which).
- there are a bunch of micro-optimization tricks (for example, MVC loads all the view engines by default; when you only need Razor, remove the others, as shown in the sketch below)
So, there are definitely ways to improve performance, and you have full control over the HTML in MVC as well (no viewstate, no obtrusive markup, no unnecessary postbacks, etc.).
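A minimal sketch of the view-engine trick mentioned above (standard System.Web.Mvc API; the class is just the usual Global.asax template):

using System.Web.Mvc;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Drop every registered view engine except Razor so MVC does not probe
        // the WebForms (.aspx/.ascx) view locations on every view lookup.
        ViewEngines.Engines.Clear();
        ViewEngines.Engines.Add(new RazorViewEngine());

        // ... existing area/route/filter registration stays as before ...
    }
}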

Related

Sitecore page load slowness

I'm using Sitecore instance 9.1, Solr 7.2.1, and SXA 1.8.
I have deployed the environment on Azure and while monitoring incoming requests (to CD instance), I've noticed slowness in loading some pages at specific times.
I've explored App Insights and found unexplainable behavior: the request is taking 28.7 seconds while its breakdown shows executions of only milliseconds. How is that possible, and how can I explain what's happening during the extra 28 seconds on the App Service?
I've checked the profiler and it shows that the thread is taking only 1042.48 ms. How is that possible?
This is an intermittent issue that happens during the day; regular requests are served within 3 to 4 seconds.
I noticed that Azure often shows a profile trace for a "similar", but completely different request when clicking from the End-to-end transaction view. You can check this by comparing the timestamp and URL of the profile trace and the transaction you clicked from.
For example, I see a transaction logged at 8:58:39 PM, 2021-09-25 with 9.1 s response time:
However, when I click the profile trace icon, Azure takes me to a trace that was captured 10 minutes earlier, at 08:49:20 PM, 2021-09-25 and took only 121.64 ms:
So, if the issue you experience is intermittent and you cannot replicate it easily, try looking at the profile traces with the Slowest wall clock time by going to Application Insights → Performance → Drill into profile traces:
This will show you the worst-performing requests captured by the profiler at the top of the list:
In order to figure out why it is slow, you'll need to understand what happens internally, for example:
How is the wall clock time spent while processing your request?
Are there any locks internally?
The source of that data is dynamic profiling, which Azure can do on demand.
The IIS stats report would show you the slowest requests, so you could look into the Thread Time distribution to see where those 28 seconds are spent:
In Sitecore, when the application starts, the initial prefetch configuration allows it to pre-populate the prefetch caches. Pre-heated prefetch caches help reduce the processing time of incoming requests, but loading them takes time during startup.
A Sitecore XP instance can also take too long to load because of a performance issue in the CatalogRepository.GetCatalogItems method; this is expected to be fixed in upcoming updates.
See the Sitecore knowledge base.
In Sitecore XP 9.0 the initial prefetch configuration was revised. The prefetch cache for the core database was configured to include items that are used to render the Sitecore Client interface.
The Sitecore Client interface is not used on Content Delivery instances. Disabling the initial prefetch configuration for the Core database helps avoid excessive resource consumption on the SQL Server hosting the Core database.
Change the configuration of the Core database in the \App_Config\Sitecore.config file.
Refer to the Sitecore knowledge base.

MaxConcurrentRequest in selfhost application

I have a self-hosted SignalR application. Everything is OK, but when there are more than 5,000 users, they start reconnecting rapidly. I know that the default value of appConcurrentRequestLimit is 5000, so I ran this:
cd %windir%\system32\inetsrv
appcmd.exe set config /section:system.webserver/serverRuntime /appConcurrentRequestLimit:100000
but nothing changed. I increased maxConcurrentRequestsPerCPU and requestQueueLimit according to this,
but I still have the problem.
I'm using Windows Server 2012 and IIS 8.
You are shooting in the dark here, and you have no data about the actual performance and what's happening. The users could reconnect because of different reasons (server timeouts, regular interval reconnects, server errors). There are countless possibilities.
The correct way to know what's happening and measure performance is to run a Baseline performance load test using the default configuration, and collect the relevant performance counters like current requests, queued requests, current connections, max connections etc.
You should also collect any relevant Error logs on the server that could help you figure out what's happening.
You can find the full list of performance counters you need below:
Memory
.NET CLR Memory\# Bytes in all Heaps (for w3wp)
ASP.NET
ASP.NET\Requests Current
ASP.NET\Requests Queued
ASP.NET\Requests Rejected
CPU
Processor Information\% Processor Time
TCP/IP
TCPv6\Connections Established
TCPv4\Connections Established
Web Service
Web Service\Current Connections
Web Service\Maximum Connections
Threading
.NET CLR LocksAndThreads\# of current logical Threads
.NET CLR LocksAndThreads\# of current physical Threads
Once you have your baseline performance results on a graph, then you can modify configuration (e.g. modify the number of concurrent requests like you tried above) and then re-run your test, and collect again the same performance counters.
The performance counter results will speak for themselves, and they will lead you to a solution.
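If you prefer to sample those counters from code rather than PerfMon, a small console sketch along these lines works; the category, counter, and instance names below are the standard ones but may need adjusting on your server:

using System;
using System.Diagnostics;
using System.Threading;

class CounterSampler
{
    static void Main()
    {
        // Read-only counters; instance names like "_Total" may need adjusting.
        var counters = new[]
        {
            new PerformanceCounter("ASP.NET", "Requests Current"),
            new PerformanceCounter("ASP.NET", "Requests Queued"),
            new PerformanceCounter("Web Service", "Current Connections", "_Total"),
        };

        for (int i = 0; i < 60; i++)   // one sample per second for a minute
        {
            foreach (var c in counters)
                Console.WriteLine("{0:HH:mm:ss} {1}\\{2} = {3}",
                    DateTime.Now, c.CategoryName, c.CounterName, c.NextValue());
            Thread.Sleep(1000);
        }
    }
}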
You can generate the load with a tool like Crank:
https://github.com/SignalR/SignalR/tree/dev/src/Microsoft.AspNet.SignalR.Crank
In addition you can also check the SignalR troubleshooting guide:
http://www.asp.net/signalr/overview/testing-and-debugging/troubleshooting

Troubleshooting MVC4 Web API Performance Issues

I have an asp.net mvc4 web api interface that gets about 54k requests a day.
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
I have 3 web servers behind a load balancer that are setup to handle the http requests.
On average, response times are ~300 ms. However, lately something has gone awry (or maybe it has always been there), as there is sporadic behavior with response times coming back in 10-20 seconds. This would be for the same request hitting the same server directly instead of through the load balancer.
GIVEN:
- System has been passed down to me, so there may be gaps in the IIS configuration, etc.
- Database: SQL Server 2008R2
- Web Servers: Windows Server 2008R2 Enterprise SP1
- IIS 7.5
- Using MemoryCache aggressively with Model and Business Objects with eviction set to 2hrs
- Looked at the logs but really don't see anything significantly relevant
- One application pool...no other LOB applications running on this server
Assumptions & Ask:
Somehow I'm thinking that something is recycling the application pool, or IIS worker threads are shutting down and restarting, thus causing each new request to warm up and re-cache itself. It's so sporadic that it's tough to troubleshoot right now. The same request to the same server comes back as fast as expected (back-to-back N requests), in about 300 ms, since it was cached; but wait about 5-10-20 minutes and that same request to the same server takes 16 seconds.
I have limited tracing to go by as these are prod systems, so I can only expose so much logging detail. Any help and information on attacking this, or similar behavior somebody else has run into, is appreciated. Thanks.
UPDATE:
The w3wp.exe process grows to ~3 GB. Somehow it gets wiped out and the PID changes, so either it is killing itself or something else is killing it every 3-4 minutes. I see tons of warnings in my web server (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
After 4-5 days of assessing IIS and configuration versus internal code issues, I finally found the issue, with little to no help from WinDbg or the DebugDiag IIS tools. Those tools contain so much information, even with mini dumps or log trace stacks, that they can be red herrings. The best bet was to reproduce it by setting up a "copy intelligently" instance of a production system, which we did not have at the time; it took a while for ops to set something up.
Needless to say, the problem had to do with over-caching business objects. There was one race condition where updates on a certain table were updating an attribute on the corresponding business object (updates were coming from multiple servers), which was causing an out-of-control stack overflow that pretty much caused the caching to recursively cache itself to death, thus causing the w3wp.exe process to die and pseudo-recycle itself. It was one of those edge cases that was incredibly hard to test and repro in a non-production environment.
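Not the poster's code, but a sketch of one common way to avoid the kind of re-entrant, racy cache population described above, using MemoryCache.AddOrGetExisting so that concurrent requests never rebuild the same entry on top of each other (the 2-hour eviction follows the question; the helper name is hypothetical):

using System;
using System.Runtime.Caching;

static class BusinessObjectCache
{
    static readonly MemoryCache Cache = MemoryCache.Default;

    public static T GetOrAdd<T>(string key, Func<T> factory) where T : class
    {
        var fresh = new Lazy<T>(factory);
        // AddOrGetExisting returns the entry that was already cached,
        // or null if our Lazy<T> won the race and was inserted.
        var existing = (Lazy<T>)Cache.AddOrGetExisting(
            key, fresh, DateTimeOffset.UtcNow.AddHours(2));
        return (existing ?? fresh).Value;
    }
}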

ASP.NET MVC lost in finding bottleneck

I have an ASP.NET MVC app which accepts file uploads and has result polling using SignalR. The app is hosted on a Prod server with IIS 7, 4 GB RAM, and a two-core CPU.
The app on the Dev server works perfectly, but when I host it on the Prod server with about 50,000 users per day, the app becomes unresponsive after five minutes of running. The web page request time increases dramatically and it takes about 30 seconds to load one page. I have tried to record every MvcApplication.Application_BeginRequest event call and got 9,000 hits in 5 minutes. I'm not sure whether this is an acceptable number of hits for an app like this.
I have used ANTS Performance Profiler (not useful for profiling the Prod app; it is slow and eats all the memory) to profile the code, but the profiler does not show any time-delay issues in my code/MSSQL queries.
Also, I have tried to monitor for CPU and RAM spikes but didn't find any. CPU usage sometimes goes to 15% but never higher, and memory usage is normal.
I suspect there is something wrong with request or thread limits in ASP.NET/IIS 7, but I don't know how to profile it.
Could someone suggest any profiling solutions which could help in this situation? I've been trying to hunt the problem down for two weeks already without any result :(
You may try using MiniProfiler, and more specifically the MiniProfiler.MVC3 NuGet package, which is specifically created for ASP.NET MVC applications. It will show you all kinds of useful information, such as the time spent in different methods during the execution of the request.
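A minimal sketch of the wiring (this follows the MiniProfiler 2.x/3.x style that the MiniProfiler.MVC3 package targeted; details vary between versions):

using StackExchange.Profiling;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_BeginRequest()
    {
        MiniProfiler.Start();   // profile every incoming request
    }

    protected void Application_EndRequest()
    {
        MiniProfiler.Stop();    // store the timings for this request
    }
}

// Then add @StackExchange.Profiling.MiniProfiler.RenderIncludes() to _Layout.cshtml,
// and wrap suspect code in named steps:
// using (MiniProfiler.Current.Step("Process uploaded file")) { ... }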

asmx web service slower after upgrade

I am transferring a C# ASP.NET ASMX web service to Windows 2008 and IIS 7 (64-bit), from IIS 6 (32-bit). I got an approximate web method performance time of about 160 ms before. On IIS 7, I'm now getting about 320 ms, even after reducing the web method down to almost no code to execute. I realize there is a compilation time on the first call. This timing is after about 20 calls and the time seems to have stabilized.
I would like to reduce the time to run the web method from 320 ms to under 200 ms. This is to help handle the case where several calls would need to be processed. Another problem is when I ramp up 20 calls in 1 second, once in a while one of the calls will take about 3 seconds. This is also not desirable.
I've tried compiling in release mode and removing the debug compilation setting from the web.config. The .asmx file just references the class to load in the DLL binary.
Something that is different is that iis7 is configured to show more detailed error messages to help with setup. However, since this is only when an error occurs I don't see how it could be slowing a regular call down.
I've tried both integrated pipeline mode and the classic pipeline mode and still get similar times. I've also tried setting the default compilation language to C#. I've tried checking the ping time to verify it is not the network. IIS has some database connections setup from the time when there was code in the web service method, but now that it does basically nothing I don't believe that should be an issue.
FYI: the problem was not ASMX, IIS 7, IIS 6, debug mode, or the pipeline mode. The new IIS 7 64-bit server started off with an internal IP address and then graduated to a public IP address. The IP address made a difference due to differences in network routing. The IIS 6 test case was based on a public IP. Once I used the public IP for the IIS 7 server, the times were reasonably similar.
