My ASP.NET MVC3 application is hosted in IIS 7, and I wish to reduce memory consumption by reducing the IIS recycle time interval to 1000 minutes. Are there any side effects to doing this?
There are no side effects to reducing the recycling interval in IIS, and a recycle doesn't even take much time. Recycling protects you from memory leaks. For more info check this link.
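For reference, the recycle interval is the application pool's periodic restart time. A minimal sketch of what that looks like in applicationHost.config, assuming a pool named "MyAppPool" as a placeholder (1000 minutes is 16 hours 40 minutes):

<applicationPools>
  <add name="MyAppPool">
    <recycling>
      <!-- Recycle the worker process every 1000 minutes (16h40m). -->
      <periodicRestart time="16:40:00" />
    </recycling>
  </add>
</applicationPools>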
I see a degradation in response times within my applications. After a server restart, response times are acceptable. However, after some time, which depends on the workload on the system, the response times degrade and the server has to be restarted to return to good performance.
Are you monitoring the Java heap usage with verbose garbage collection (GC) logs?
The behavior you describe can happen if the heap has enough free space after a restart, then gradually fills with long-lived objects as the workload runs. This may be caused by the heap simply being too small, or the application may have a memory leak, using heap and not releasing it for collection when the associated work is completed. When there is not enough free heap space, the application work slows down because the JVM spends excessive time running GC.
You can learn more about Java GC troubleshooting in our documentation:
https://www.eclipse.org/openj9/docs/vgclog/
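If you are not already collecting verbose GC logs, they can be enabled at JVM startup. A minimal sketch, assuming an OpenJ9-based JVM as in the linked docs (app.jar and the log file name are placeholders):

java -verbose:gc -Xverbosegclog:verbosegc.log -jar app.jar

The resulting log shows each GC cycle's pause times and heap occupancy; steadily rising occupancy after collections usually distinguishes a leak from a heap that is simply too small.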
You can also open a support case to get assistance from WebSphere/Java troubleshooting experts, if you have a support arrangement with IBM.
We've recently ported an API from IIS hosting to a console app (to be hosted with OWIN + TopShelf as a service) and have been performance profiling the two hosting options using JMeter.
We throw 18 threads at the APIs and get differing results between the IIS-hosted and console-hosted versions, specifically:
Response times through IIS are slower. This isn't surprising as the pipeline in IIS is more involved.
Throughput through IIS is consistent; i.e., we don't see significant increases/decreases (we achieve 5,500 requests/responses per minute).
Throughput when hosted in a console app starts off very high (20,000 per minute) but degrades quickly to approximately 4,500 per minute over a 10-minute period.
We're trying to determine the cause of this throughput drop when hosting as a console. Why do we start at 20,000 requests per minute (presumably calculated from initial response times before the test has run for a full minute) but degrade to 4,500?
Other things of note: CPU isn't a concern. It fluctuates at first but settles below 30%, and memory averages 1.34GB on a 4GB RAM machine.
Why might the throughput in IIS be stable, and why does it degrade when hosted using Microsoft OWIN hosting in a console app (given stable CPU and memory)?
Incidentally, we're trying to isolate pieces of code that could cause the degradation.
Any thoughts on this would be appreciated.
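For context, a minimal sketch of the console-host shape we're describing, assuming the Microsoft.AspNet.WebApi.OwinSelfHost package (the URL, class names, and route are placeholders; TopShelf would wrap Main in a service):

using System;
using System.Web.Http;
using Microsoft.Owin.Hosting;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Standard Web API routing, served directly through the OWIN pipeline.
        var config = new HttpConfiguration();
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional });
        app.UseWebApi(config);
    }
}

public class Program
{
    public static void Main()
    {
        using (WebApp.Start<Startup>("http://localhost:9000/"))
        {
            Console.WriteLine("Listening on http://localhost:9000/");
            Console.ReadLine();
        }
    }
}

One configuration difference worth ruling out: ASP.NET under IIS uses server GC by default, while a console app defaults to workstation GC unless <gcServer enabled="true"/> is set in the exe's app.config; under sustained load that difference alone can produce the kind of gradual throughput decay described above.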
OS: Windows Server 2012 Standard
IIS: 8.0.9200.16384
Processor: 4x Xeon 2.67GHz CPU
RAM: 40GB
Problem:
We have recently enabled IIS's AutoStart feature; since doing so, our start-up time for the application pools has gone up considerably. The application pool appears to be running, but it seems to ramp its CPU usage up to the maximum of 25% for about 30 minutes, and the websites running in that pool don't respond until this has completed. We have checked the event log and there don't appear to be any faults. We have checked the logging in our preload function, and this appears to take only about 60-90 seconds.
How can we diagnose what is causing the delay in the application pools starting up?
Background:
We are serving up multiple copies of the same ASP.NET MVC3 application from multiple application pools (20 sites per pool); we have approximately 8 pools serving 160 sites. We have an IProcessHostPreloadClient implementation that preloads some settings from the database when a site starts up. We have a second server with the same basic specs but only 3 pools of 20, which takes only approx 5 minutes per pool to start up.
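For reference, the preload client is just a class implementing IProcessHostPreloadClient; a rough sketch of the shape of ours (the class name and settings loader are hypothetical placeholders):

using System.Web.Hosting;

public class SettingsPreloadClient : IProcessHostPreloadClient
{
    // Called by IIS when the application is preloaded, before any real request.
    public void Preload(string[] parameters)
    {
        // Warm up the expensive work so the first request doesn't pay for it,
        // e.g. reading per-site settings from the database.
        SettingsCache.LoadFromDatabase(); // hypothetical helper
    }
}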
For anyone interested, here is what we did to resolve/mitigate the issue:
Break up our sites into smaller groupings per application pool (this reduces the startup time per application pool). We went with 10 sites per pool.
Change to using the IIS 8 Application Initialization 'PreloadEnabled' option rather than the serviceAutoStartProvider for site initialization (see the config sketch below).
When deploying new code, don't restart the application pools, instead use the app_offline.htm feature to unload the application and restart it.
The app_offline.htm feature is the key one for us; it means we are able to deploy new versions of our software without stopping and starting the application pools and incurring the start-up time penalty. Also, incrementally restarting application pools helps reduce the strain on the CPU, which meant we got a consistent start-up time for each pool. This is only required when we do an IIS reset or server restart (rarely).
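For anyone making the same change, the 'PreloadEnabled' setup from point 2 looks roughly like this in applicationHost.config (site and pool names are placeholders, and bindings etc. are omitted):

<applicationPools>
  <!-- Start the worker process eagerly rather than on first request. -->
  <add name="MyPool" startMode="AlwaysRunning" />
</applicationPools>
<sites>
  <site name="MySite" id="1">
    <!-- preloadEnabled triggers IIS 8 Application Initialization for this app. -->
    <application path="/" applicationPool="MyPool" preloadEnabled="true" />
  </site>
</sites>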
I have a site hosted on Windows Azure shared websites. It just got suspended for going over memory usage limit of 512MB/hour.
I do use .NET caching rather heavily (to prevent repeated calls to the database/external APIs, etc.).
Is that caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that, e.g., the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
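For example, a cap on the default MemoryCache instance can be set in web.config; a sketch, with illustrative numbers only:

<system.runtime.caching>
  <memoryCache>
    <namedCaches>
      <!-- Limit the default cache to 100MB or 50% of physical memory,
           rechecked every two minutes. -->
      <add name="Default"
           cacheMemoryLimitMegabytes="100"
           physicalMemoryLimitPercentage="50"
           pollingInterval="00:02:00" />
    </namedCaches>
  </memoryCache>
</system.runtime.caching>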
Even if you stop using the Cache yourself, it can still be used by the framework/libraries. I have the same problem (interestingly, in free mode the memory limit is 1024MB, but in shared mode it is lowered to 512MB).
From what I can see, the memory amount Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15" />
  </caching>
</system.web>
Several days ago I set 300MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250MB.
But anyway, this is a very unclear, strange, and "wrong" solution IMHO.
UPDATE
Got suspended again this morning. Temporarily converted to standard mode with small instance (1.7 GB RAM).
My WorkingSet counter is now about 200MB (with a PeakWorkingSet of 330MB). BUT! The GC's CollectionCount has increased roughly 8x (Gen0 is 1800 collections instead of 250 in less than a day).
My current theory is that in "shared" mode websites run inside a "big" VM with a lot of memory, and the garbage collector simply doesn't need to run often, leading to a longer "garbage lifetime" and more memory consumption.
I have no access to my developer machine right now for verification, but I'm planning to convert the site to a web role in a cloud service ASAP, with an extra-small instance (the cost is comparable to the shared website cost)...
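If you want to check the numbers yourself, a quick sketch of the kind of counter dump I'm comparing between instance sizes (names are placeholders):

using System;
using System.Diagnostics;

public static class GcStats
{
    public static void Dump()
    {
        // Collection counts per generation plus the process memory counters
        // that the Azure portal figure appears to track.
        var proc = Process.GetCurrentProcess();
        Console.WriteLine(
            "Gen0={0} Gen1={1} Gen2={2} Private={3:N0} WorkingSet={4:N0}",
            GC.CollectionCount(0),
            GC.CollectionCount(1),
            GC.CollectionCount(2),
            proc.PrivateMemorySize64,
            proc.WorkingSet64);
    }
}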
It might be worth profiling with perfmon on your local machine first to see whether it hits the limits under normal use, then look at configuring the logging on Azure and digging through that as well.
Also, ensuring everything is precompiled and that you're not loading modules you don't need can really affect performance on Azure.
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.
Is there a max memory size for w3wp.exe? Mine is getting up to about 2.5-3GB and then seems to crash/reset.
Per the "GIVEN" dimensions below I setup some counters and noticed that the w3wp.exe will service http requests then reset to 0 along with the w3wp.exe process crashing (changing pids). As a result REQUESTS_QUEUED and ACTIVE_REQUESTS grow large causing delays in processing until the w3wp.exe can restart itself. It's doing this every 3-4min so more than likely due to heavy system volume during peak load. But not sure if it's a memory issue or not.
I see tons of warnings in my web server (IIS) log:
A process serving application pool 'MyApplication' suffered a fatal
communication error with the Windows Process Activation Service. The
process id was '1732'. The data field contains the error number.
RESULT: Customers are reporting sporadic response times for HTTP requests.
Can I increase this memory limit or reconfigure IIS to handle increased load?
GIVEN:
The system has been passed down to me, so there may be gaps in the IIS configuration, etc.
Database: SQL Server 2008R2
Web Servers: Windows Server 2008R2 Enterprise SP1 (64bit, 64G RAM)
IIS 7.5
Using MVC4 Web API, aggressively caching model and business objects in MemoryCache with eviction set to 2 hours
Looked at the logs but really don't see anything significantly relevant
One application pool...no other LOB applications running on this server
Is the application pool set to run in 32-bit mode? That can cause memory issues even if you have plenty of RAM. On a 64-bit system, the memory limit for a 32-bit process is 4 GB.
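A quick way to check, and flip, that setting from the command line, assuming the pool name from the warning above:

%windir%\system32\inetsrv\appcmd.exe list apppool "MyApplication" /text:enable32BitAppOnWin64
%windir%\system32\inetsrv\appcmd.exe set apppool "MyApplication" /enable32BitAppOnWin64:false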
Actually, after solving the root cause, overuse of MemoryCache that was crashing the w3wp.exe process, I can safely say that an MVC4 Web API service can grow up to 20GB... from a baseline of 3GB (64-bit machine and application pool). At least, that was the last level I saw before the eviction policy started cleaning things up. Probably a bit excessive in footprint, but the application is very fast, returning machine-learning-targeted content in under 100ms.
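For anyone curious, a simplified sketch of the bounded-cache pattern being described (class and key names are placeholders; the actual fix isn't shown here):

using System;
using System.Runtime.Caching;

public static class ContentCache
{
    public static void Put(string key, object value)
    {
        MemoryCache.Default.Set(key, value, new CacheItemPolicy
        {
            // Matches the 2-hour eviction mentioned above; pair this with a
            // cacheMemoryLimitMegabytes cap in config so memory stays bounded.
            AbsoluteExpiration = DateTimeOffset.UtcNow.AddHours(2)
        });
    }

    public static object Get(string key)
    {
        return MemoryCache.Default.Get(key);
    }
}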