Is there a memory limit for w3wp.exe? - asp.net-mvc-3

Is there a max memory size for w3wp.exe? Mine is getting up to about 2.5-3G then seems to crash/reset.
Per the "GIVEN" dimensions below I setup some counters and noticed that the w3wp.exe will service http requests then reset to 0 along with the w3wp.exe process crashing (changing pids). As a result REQUESTS_QUEUED and ACTIVE_REQUESTS grow large causing delays in processing until the w3wp.exe can restart itself. It's doing this every 3-4min so more than likely due to heavy system volume during peak load. But not sure if it's a memory issue or not.
I see tons of warnings in my webserver (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
RESULT: Customers are reporting sporadic response times for http requests.
Can I increase this memory limit or reconfigure IIS to handle increased load?
GIVEN:
System has been passed down to me so there may be gaps with IIS configuration, etc.
Database: SQL Server 2008R2
Web Servers: Windows Server 2008R2 Enterprise SP1 (64bit, 64G RAM)
IIS 7.5
Using MVC4 Web API, with MemoryCache used aggressively for Model and Business Objects, with eviction set to 2 hrs (roughly as sketched below this list)
Looked at the logs but really don't see anything significantly relevant
One application pool...no other LOB applications running on this server
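For context, a minimal sketch of what a 2-hour eviction with MemoryCache typically looks like (illustrative only; the helper name and loader are not from the actual application):

    using System;
    using System.Runtime.Caching;

    public static class BusinessObjectCache
    {
        // The default MemoryCache instance shared by the app domain.
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // Illustrative helper: cache a business object for 2 hours (absolute
        // expiration), matching the "eviction set to 2 hrs" described above.
        public static T GetOrAdd<T>(string key, Func<T> loader) where T : class
        {
            var cached = Cache.Get(key) as T;
            if (cached != null)
            {
                return cached;
            }

            var value = loader();
            var policy = new CacheItemPolicy
            {
                AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(2)
            };
            Cache.Set(key, value, policy);
            return value;
        }
    }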

Is the application pool set to run in 32-bit mode? That can cause memory issues even if you have plenty of RAM. On a 64-bit system, the memory limit for a 32-bit process is 4 GB.
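If you'd rather check that flag from code than click through IIS Manager, here is a small sketch using the Microsoft.Web.Administration API (run elevated on the web server; "MyApplication" is the pool name from the warning above):

    using System;
    using Microsoft.Web.Administration; // reference Microsoft.Web.Administration.dll (ships with IIS)

    class CheckAppPoolBitness
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                // "MyApplication" is the pool name from the warning in the question.
                ApplicationPool pool = serverManager.ApplicationPools["MyApplication"];
                Console.WriteLine("Enable32BitAppOnWin64: " + pool.Enable32BitAppOnWin64);
            }
        }
    }

If Enable32BitAppOnWin64 is true, the worker process is 32-bit and capped at 4 GB regardless of the 64 GB of RAM on the box.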

Actually, after solving the root cause, in which overuse of MemoryCache was crashing the w3wp.exe process, I can safely say that an MVC4 Web API service can grow up to 20 GB from a baseline of 3 GB (64-bit machine and application pool). At least that was the last level I saw before the eviction policy started cleaning things up. Probably a bit excessive in footprint, but the application is very fast, returning machine-learning-targeted content in under 100 ms.
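For what it's worth, MemoryCache can also be given an explicit memory cap rather than relying only on time-based eviction. A hedged sketch of a named cache created with limits (the values are illustrative, not what this application used):

    using System.Collections.Specialized;
    using System.Runtime.Caching;

    public static class CacheFactory
    {
        // A named MemoryCache with explicit limits, so the worker-process footprint
        // is bounded even before time-based eviction kicks in. Values are illustrative.
        public static MemoryCache CreateBoundedCache()
        {
            var config = new NameValueCollection
            {
                { "cacheMemoryLimitMegabytes", "4096" },   // hard cap on cache memory
                { "physicalMemoryLimitPercentage", "50" }, // % of physical RAM the cache may use
                { "pollingInterval", "00:02:00" }          // how often the limits are enforced
            };
            return new MemoryCache("BusinessObjects", config);
        }
    }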

Related

Unexplained memory usage on Azure Windows App Service Plan - Drill down missing

We have a memory problem with our Azure Windows App Service Plan (service level is P1v3 with 1 instance – this means 8 GB memory).
We are running two small .NET 6 App Services on it (some web APIs), that use custom containers – without problems.
They’re not in production and receive a very low number of requests.
However, when looking at the service plan's memory usage in Diagnose and Solve Problems / Memory Analysis, we consistently see an unexplained 80% memory usage.
And the real problem occurs when we try to start a third app service on the plan. We get this "out of memory" error in our log stream:
ERROR - Site: app-name-dev - Unable to start container.
Error message: Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem xxxx:
The paging file is too small for this operation to complete."}
So it looks like Docker doesn't have enough memory to start the container. Maybe because of the 80% memory usage?
But our apps actually have very low memory needs. When running them locally on dev machines, we see about 50-150 MB memory usage (when no requests occur).
In Azure, the Private Bytes graph in "Availability and Performance" shows very moderate consumption for the bigger of the two apps.
Unfortunately, the “Memory drill down” is unavailable:
(needless to say, waiting hours doesn’t change the message…)
Even stranger, stopping all App Services in the App Service Plan still shows a Memory Percentage of 60% for the plan.
Obviously some memory is being retained by something...
So the questions are:
Is it normal to have 60% memory usage in an App Service Plan with no App Services running?
If not, could this be due to a memory leak in our app? But app services are run in supposedly isolated containers, so I'm not sure this is possible. Any other explanation is of course welcome :-)
Why can't we access the memory drill down?
Any tips on the best way to fit "small" Docker containers with low memory usage in Azure App Service? (or maybe in another Azure resource type...). It's a bit frustrating to only be able to use 3 GB out of an 8 GB machine...
Further details:
First app is a .NET 6 based app, with its docker image based on aspnet:6.0-nanoserver-ltsc2022
Second app is also a .NET 6 based app, but has some windows DLL dependencies, and therefore is based on aspnet:6.0-windowsservercore-ltsc2022
Thanks in advance!
EDIT:
I added more details and changed the questions a bit since I was able to stop all app services tonight.

Trouble - IIS suddenly high memory usage

A web service on my IIS server is significantly increasing the RAM it uses.
It normally works in the range of 200-700 MB, but for a few days now it has suddenly been using 3, 4, 5 GB of RAM.
As a stopgap so users aren't blocked, I kill the process from Task Manager and it goes back to normal, but some time later it increases again:
(Task Manager screenshot)
I used Performance Monitor and saw that this is the counter that increases:
(Performance Monitor screenshot)
I really don't know how to solve this, I'm stuck, can anyone help me?
There are a lot of reasons why your IIS worker process could be using a lot of memory. To start, you should look at which web requests are currently executing in IIS to see if that helps you identify the issue and troubleshoot the IIS worker process.
Via the IIS Worker Processes
Via the IIS Management Console, you can view the running worker processes. You can see which IIS application pool is consuming the most resources and view the currently running web requests. After selecting "Worker Processes" from the main IIS menu, you can see the currently running IIS worker processes. If you double-click on a worker process, you can see all of the currently executing requests.
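If the GUI isn't convenient, roughly the same information can be pulled programmatically. A sketch using the Microsoft.Web.Administration API, assuming it runs elevated on the web server:

    using System;
    using Microsoft.Web.Administration;

    class ListWorkerProcessRequests
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                foreach (WorkerProcess wp in serverManager.WorkerProcesses)
                {
                    Console.WriteLine("Pool: {0}, PID: {1}, State: {2}",
                        wp.AppPoolName, wp.ProcessId, wp.State);

                    // Requests that have been executing for at least 0 ms,
                    // i.e. everything currently in flight for this worker process.
                    foreach (Request request in wp.GetRequests(0))
                    {
                        Console.WriteLine("  {0} {1} ({2} ms)",
                            request.Verb, request.Url, request.TimeElapsed);
                    }
                }
            }
        }
    }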

IIS 8 Application Pools take 30 mins+ starting and don't respond to requests

OS: Windows Server 2012 Standard
IIS: 8.0.9200.16384
Processor: 4x Xeon 2.67Ghz CPU
RAM: 40GB
Problem:
We have recently enabled IIS's AutoStart feature, and since doing so the start-up time for our application pools has gone up considerably. The application pool appears to be running, but it seems to ramp up its CPU usage to the maximum 25% for about 30 minutes, and the websites running in that pool don't respond until this has completed. We have checked the event log and there don't appear to be any faults. We have checked the logging in our preload function and it appears to only take about 60-90 seconds.
How can we diagnose what is causing the delay in the application pools starting up?
Background:
We are serving up multiple copies of the same ASP.NET MVC3 application from multiple application pools (20 sites per pool). We have approximately 8 pools serving up 160 sites. We have an IProcessHostPreloadClient built which preloads some settings from the database when the site starts up. We have a second server with the same basic specs but only 3 pools of 20, which only takes approx 5 minutes per pool to start up.
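For readers unfamiliar with it, a minimal sketch of what an IProcessHostPreloadClient implementation generally looks like (the class name and the commented-out settings call are illustrative, not the code referenced above):

    using System.Diagnostics;
    using System.Web.Hosting;

    // Registered in applicationHost.config as a serviceAutoStartProvider and then
    // referenced from the site/pool autoStart settings (this class name is illustrative).
    public class WarmupPreloadClient : IProcessHostPreloadClient
    {
        public void Preload(string[] parameters)
        {
            var timer = Stopwatch.StartNew();

            // Hypothetical placeholder for the "preload some settings from the
            // database" step the question describes. Anything slow done here
            // delays the pool reporting itself as ready.
            // LoadSettingsFromDatabase();

            Trace.WriteLine("Preload completed in " + timer.ElapsedMilliseconds + " ms");
        }
    }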
For anyone interested here is what we did to resolve/mitigate the issue:
Break up our sites into smaller groupings per application pools (this reduces the startup time per application pool). We went with 10 sites per pool.
Change to using the IIS 8 Application Initialize 'PreloadEnabled' option rather than the serviceAutoStartProvider for site initialization.
When deploying new code, don't restart the application pools, instead use the app_offline.htm feature to unload the application and restart it.
The app_offline.htm feature is the key one for us; it means we are able to deploy new versions of our software without stopping and starting the application pools and incurring the start-up time penalty. Also, incrementally restarting application pools helps reduce the strain on the CPU, which meant we got a consistent start-up time for each pool. This is only required when we do an IIS reset or server restart (rarely).

How to increase memory and cache size for application pool in IIS 7 efficiently

I have searched the internet for how to increase memory and cache size for application pools in IIS 7, but the topics are scattered and I don't know the effect of combining those settings together.
Can somebody describe how I can increase memory and cache size for application pools in IIS 7?
In my understanding, output cache can be set only at the IIS level and not specifically for an application pool. Whatever is set at the IIS level is applied to all the web sites under it. So effectively you cannot apply a max cache size at the web application level.
If you are using windows 7 professional (IIS features vary depending on the operating system) if you open IIS manager and click on the server name, in the features view there is an Output Caching feature. You can edit that to set the max cache size. If you set it to a very high value, it will use up a lot of your RAM and could deteriorate the performance of the whole box.
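If you prefer scripting it over the GUI, the same setting lives in the server-level system.webServer/caching section. A sketch using the Microsoft.Web.Administration API; the maxCacheSize attribute name and the value are assumptions to verify against your applicationHost.config:

    using Microsoft.Web.Administration;

    class SetOutputCacheLimit
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                // Server-level configuration (applicationHost.config).
                Configuration config = serverManager.GetApplicationHostConfiguration();
                ConfigurationSection caching = config.GetSection("system.webServer/caching");

                // Assumed attribute: maximum output cache size in MB.
                caching["maxCacheSize"] = 1024;

                serverManager.CommitChanges();
            }
        }
    }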
The application pool itself can have a private memory limit and a virtual memory limit:
Private Memory Limit: the maximum amount of private memory (in KB) a worker process can consume before causing the application pool to recycle.
Virtual Memory Limit: the maximum amount of virtual memory (in KB) a worker process can consume before causing the application pool to recycle.
Both of the above settings are set to 0 by default, which means no limit is set.
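Both limits can also be set outside the GUI. A sketch using the Microsoft.Web.Administration API; the pool name and values are illustrative, and both values are in KB as described above:

    using Microsoft.Web.Administration;

    class SetAppPoolMemoryLimits
    {
        static void Main()
        {
            using (var serverManager = new ServerManager())
            {
                ApplicationPool pool = serverManager.ApplicationPools["MyAppPool"]; // illustrative name

                // Both limits are in KB; 0 (the default) means no limit.
                pool.Recycling.PeriodicRestart.PrivateMemory = 2097152; // recycle at ~2 GB private bytes
                pool.Recycling.PeriodicRestart.Memory = 4194304;        // recycle at ~4 GB virtual memory

                serverManager.CommitChanges();
            }
        }
    }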
Long story short: Raising the output cache size at the IIS server level is the best option that suits your needs.

Troubleshooting MVC4 Web API Performance Issues

I have an asp.net mvc4 web api interface that gets about 54k requests a day.
http://myserv.x.com/api/123/getstuff?whatstuff=thisstuff
I have 3 web servers behind a load balancer that are setup to handle the http requests.
On average response times are ~300ms. However, lately something has gone awry (or maybe it has always been there) as there is sporadic behavior of response times coming back in 10-20sec. This would be for the same request hitting the same server directly instead of through the load balancer.
GIVEN:
- System has been passed down to me so there may be gaps with IIS configuration, etc.
- Database: SQL Server 2008R2
- Web Servers: Windows Server 2008R2 Enterprise SP1
- IIS 7.5
- Using MemoryCache aggressively with Model and Business Objects with eviction set to 2hrs
- Looked at the logs but really don't see anything significantly relevant
- One application pool...no other LOB applications running on this server
Assumptions & Ask:
Somehow I'm thinking that something is recycling the application pool, or IIS worker threads are shutting down and restarting, thus causing each new request to warm up and re-cache itself. It's so sporadic that it's tough to troubleshoot right now. The same request to the same server comes back as fast as expected (back-to-back N requests), since it was cached, in about 300 ms... but wait about 5-10-20 min and that same request to the same server takes 16 seconds.
I have limited tracing to go by as these are prod systems so I can only expose so much logging details. Any help and information attacking this or similar behavior somebody else has run into is appreciated. Thx
UPDATE:
The w3wp.exe process grows to ~3 GB. Somehow it gets wiped out and the PID changes, so either it is killing itself or something else is killing it every 3-4 min. I see tons of warnings in my web server (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
After 4-5 days of assessing IIS and configuration vs. internal code issues, I finally found the issue, with little to no help from WinDbg or the DebugDiag IIS tools. Those tools contain so much information, even with mini dumps or log trace stacks, that they can be red herrings. The best bet was to reproduce it by setting up an intelligently copied instance of a production system, which we did not have at the time, and it took a bit for ops to set something up.
Needless to say, the problem had to do with over-caching business objects. There was a race condition where updates on a certain table were updating an attribute on the corresponding business object (updates were coming from multiple servers), which was causing an OOC stack overflow that pretty much caused the caching to recursively cache itself to death, thus causing the w3wp.exe process to die and pseudo-recycle itself. It was one of those edge cases that was incredibly hard to test and reproduce in a non-production environment.
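The fix itself isn't shown above, but as a general illustration (not the code from this system), one common way to keep concurrent updates from repeatedly re-populating the same cache entry is to serialize the load per key:

    using System;
    using System.Collections.Concurrent;
    using System.Runtime.Caching;

    public static class GuardedCache
    {
        private static readonly MemoryCache Cache = MemoryCache.Default;

        // One lock object per cache key, so concurrent requests for the same
        // business object don't all rebuild (and re-cache) it at the same time.
        private static readonly ConcurrentDictionary<string, object> KeyLocks =
            new ConcurrentDictionary<string, object>();

        public static T GetOrAdd<T>(string key, Func<T> loader, TimeSpan ttl) where T : class
        {
            var cached = Cache.Get(key) as T;
            if (cached != null)
            {
                return cached;
            }

            object keyLock = KeyLocks.GetOrAdd(key, _ => new object());
            lock (keyLock)
            {
                // Re-check after acquiring the lock: another thread may have
                // populated the entry while we were waiting.
                cached = Cache.Get(key) as T;
                if (cached != null)
                {
                    return cached;
                }

                var value = loader();
                Cache.Set(key, value, new CacheItemPolicy
                {
                    AbsoluteExpiration = DateTimeOffset.UtcNow.Add(ttl)
                });
                return value;
            }
        }
    }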
