How to reduce memory usage on a Windows Azure shared website? - memory-management

I have a site hosted on Windows Azure shared websites. It just got suspended for going over the memory usage limit of 512 MB/hour.
I use .NET caching rather heavily (to prevent repeated calls to the database/external APIs, etc.).
Is that caching a no-no on shared websites on Windows Azure?

Do you use System.Runtime.Caching? You should be able to limit the amount of memory that, for example, the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
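If you are on .NET 4 with System.Runtime.Caching, here is a rough sketch of capping MemoryCache explicitly. The cache name, the 100 MB cap and the polling interval are placeholder assumptions, not values from the question; the config keys mirror the memoryCache attributes described on the linked MSDN page.
using System.Collections.Specialized;
using System.Runtime.Caching;

public static class CacheFactory
{
    public static MemoryCache CreateLimitedCache()
    {
        var config = new NameValueCollection();
        config.Add("cacheMemoryLimitMegabytes", "100");      // hard cap on cache size
        config.Add("physicalMemoryLimitPercentage", "10");   // or cap as a % of physical RAM
        config.Add("pollingInterval", "00:02:00");           // how often the limits are enforced
        return new MemoryCache("AppCache", config);
    }
}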

Even if you stop using the Cache, it can still be used by the framework and libraries. I have the same problem (interestingly, in free mode the memory limit is 1024 MB, but the shared one is lowered to 512 MB).
As far as I can see, the memory amount that Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with the caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
Several days ago I set 300 MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250 MB.
Anyway, this is a very unclear, strange and "wrong" solution IMHO.
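A minimal sketch for sanity-checking what ASP.NET actually applied: it logs the process private bytes that Azure appears to meter alongside the cache limits ASP.NET reports. The class and method names are made up for illustration.
using System.Diagnostics;
using System.Web;

public static class MemoryDiagnostics
{
    // Returns a one-line snapshot you can write to a log or a debug page.
    public static string Snapshot()
    {
        var process = Process.GetCurrentProcess();
        return string.Format(
            "PrivateMemorySize64={0:N0} bytes, EffectivePrivateBytesLimit={1:N0}, EffectivePercentagePhysicalMemoryLimit={2}%",
            process.PrivateMemorySize64,
            HttpRuntime.Cache.EffectivePrivateBytesLimit,
            HttpRuntime.Cache.EffectivePercentagePhysicalMemoryLimit);
    }
}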
UPDATE
Got suspended again this morning. Temporarily converted to standard mode with a small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8x (Gen0 is at 1800 collections instead of 250 in less than a day).
My current theory is that in "shared" mode the websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to a longer "garbage life" and higher memory consumption.
I have no access to my developer machine right now to verify this, but I'm planning to convert the site to a web role in a cloud service ASAP - with an extra small instance (the cost is comparable to the shared website cost)...
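A minimal sketch of the counters one could log on both configurations to compare GC behavior - plain framework calls, nothing Azure-specific:
using System;
using System.Runtime;

public static class GcDiagnostics
{
    // Collection counts per generation, plus whether server GC is enabled.
    public static string Snapshot()
    {
        return string.Format(
            "Gen0={0}, Gen1={1}, Gen2={2}, ServerGC={3}, TotalMemory={4:N0}",
            GC.CollectionCount(0),
            GC.CollectionCount(1),
            GC.CollectionCount(2),
            GCSettings.IsServerGC,
            GC.GetTotalMemory(false));
    }
}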

It might be worth profiling with perfmon on your local machine first to see whether it hits the limits under normal use, then look at configuring logging on Azure and digging through that as well.
Also, ensuring everything is precompiled and that you're not loading modules you don't need can really affect performance on Azure.

I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.

Related

CacheServiceEmulator.exe >3GB RAM usage?

I am developing an application for Windows Azure using the latest SDK.
At the moment I am implementing the session provider using the cache, but the emulator's memory consumption is completely out of proportion.
The cache is implemented as a "very small" worker role (max. 768 MB RAM).
Does anyone know if this is normal or if I have some misconfiguration?
I can't say whether this is the "right" behavior, but I am seeing the same thing. In my case, CacheServiceEmulator.exe starts at about 300 KB of memory and keeps consuming more and more. What I have been told is that this is expected behavior: the service is expected to consume up to 30% of your system's memory over an extended period of time.
One other note: I noticed that you're using an X-Small instance for the cache worker role. Please note that the minimum size supported for caching is Small. Thought I should mention that as well.

How many users should an EC2 Micro instance be able to handle with only an nginx server?

I have an iOS social app.
The app talks to my server to do updates and retrieval fairly often, mostly small text as JSON. Sometimes users upload pictures, which my web server then uploads to an S3 bucket. No pictures or any other type of file will be served from the web server.
The EC2 Micro Ubuntu 13.04 instance runs PHP 5.5, PHP-FPM and NGINX. Caching is handled by ElastiCache using Redis, and the application connects to a separate m1.large MongoDB server for the database. The content can be fairly dynamic, since the newsfeed changes often.
I am a total newbie in regards to configuring NGINX for performance and I am trying to see whether I've configured my server properly or not.
I am using Siege to load-test my server, but I can't find any statistics on how many concurrent users / page loads my system should be able to handle, so I don't know whether I've done something right or wrong.
What number of concurrent users / page loads should my server be able to handle?
If I can't get hold of statistics from experience, what would count as easy, medium, and extreme load for my micro instance?
I am aware that there are several other questions asking similar things. But none provide any sort of estimates for a similar system, which is what I am looking for.
I haven't tried nginx on a micro instance, for the reasons Jonathan pointed out. If you consume your CPU burst you will be throttled very hard and your app will become unusable.
If you want to follow that path I would recommend:
Try to cap CPU usage for nginx and php5-fpm to make sure you do not go over the threshold that triggers CPU penalties. I have no idea what that threshold is. I believe the main problem with a micro instance is maintaining consistent CPU availability. If you go over the cap you are screwed.
Try to use fastcgi_cache, if possible. You want to hit php5-fpm only if really needed.
Keep in mind that gzipping on the fly will eat a lot of CPU. I mean a lot of CPU (for an instance that has almost no CPU power). If you can use gzip_static, do it. But I believe you cannot.
As for statistics, you will need to gather those yourself. I have statistics for m1.small but none for micro. Start by making nginx serve a static HTML file of only a few KB. Run siege in benchmark mode with 10 concurrent users for 10 minutes and measure. Make sure you are sieging from a stronger machine.
siege -b -c10 -t600s 'http://<private-ip>/test.html'
You will probably see the effects of CPU throttling just by doing that! What you want to keep an eye on are the transactions per second and how much throughput nginx can serve. Keep in mind that the m1.small maxes out around 35 MB/s, so a micro instance will get even less.
Then move to a JSON response. Try gzipping. See how many concurrent requests per second you can get.
And don't forget to come back here and report your numbers.
Best regards.
Micro instances are unique in that they use a burstable profile. While you may get up to 2 ECUs of performance for a short period of time, after the instance uses its burst allotment it will be limited to around 0.1 or 0.2 ECU. Eventually the allotment resets and you can get 2 ECUs again.
Much of this is going to come down to how CPU/Memory heavy your application is. It sounds like you have it pretty well optimized already.

Memory footprint for large systems in Vaadin

I'm working in the financial sector and we are about to select Vaadin 7 for the development of a large, heavily loaded system.
But I'm a bit worried about Vaadin's memory footprint for large systems, since Vaadin keeps all state in the session. That means that for every new user all application state will be stored in memory, won't it?
We cannot afford to build a monolithic system - the system must be scalable and agile instead. Since we have a huge client base, it must be easy to customize and ready to grow.
Could anyone please share their experience and possible workarounds for minimizing or eliminating these problems in Vaadin?
During the development of our products we faced the problem of a large memory footprint when using the default Vaadin architecture.
The Vaadin architecture is based on components driven by events. With components it is fairly easy to end up with a tightly coupled application, because components are structured into a hierarchy. It's like a pyramid: the larger the application you build, the larger the pyramid stored in the session for each user.
In order to significantly reduce memory allocation, we created a page-based approach for the application, with a comprehensive event model in the background using old-school state management based on Statechart notation in XML format.
As a result, the session keeps only the pages visited during the user workflow described by the Statechart configuration. When the user finishes the workflow, all the pages are released to be collected by the garbage collector.
To see the difference, we ran some tests comparing the memory allocated per user working with the application.
The applications developed:
with the tightly coupled approach consume from 5 to 15 MB of heap per user
with the loosely coupled approach - up to 2 MB
We are quite happy with the results, since they let us scale a large system with 4 GB of RAM up to 1000-1500 concurrent users per server.
Almost forgot: we used the Lexaden Web Flow library. It is available under the Apache license.
I think you should have a look here: https://vaadin.com/blog/-/blogs/vaadin-scalability-study-quicktickets
Plus, I have found the following info by people who run Vaadin in production.
Balázs Hódossy:
We have a back office system with more than 10,000 users. The daily user count is about 3000, but half of them use the system for 8 hours without logging out. We use the Liferay 6.0.5 Tomcat bundle with Vaadin as a portlet. Our two servers have 48 GB of RAM and we give Tomcat a 24 GB heap. The DB got 18 GB and the system the rest. Size the heap against the session size, concurrent users and activity. More memory causes rarer but longer full GCs. We plan to increase the number of Tomcat workers and reduce the heap. When you measure your server, try to add a little bit more memory. If cost is important, reduce the processor cost and buy more RAM. Most of the time it pays off with a little tuning.
Pierre-Emmanuel Gros:
For 1000 daily users, heavily used, a pure Vaadin application: a server with 3 GB and 2 cores, Jetty with ulimit set to 50000, PostgreSQL 9 with 50 concurrent users (a connection pool is used). On the software side, I also used Ehcache to cache DTO objects, and pure JDBC.

MVC3 memory management

What is the best way to check memory usage in an ASP.NET MVC3 application?
I have been told by my hosting provider to recycle the IIS application pool every so often to improve the speed of the site. Is this 'recommended practice'? Surely I shouldn't need to restart my application every so often? I'd much rather find out whether it is an issue with memory usage in my application and correct it. So any tips and best practices you use would be quite helpful too.
The application is based on ASP.NET MVC3, C# and EF Code First. Any guidance, links appreciated.
EDIT:
I found this page after I posted, which is quite useful. But I'd still like to hear any other views.
ASP.NET MVC and EF Code First Memory Usage
Thank you
I have a site that never recycles (until the machine is rebooted weekly)
Your application should generally keep performing fine. If it doesn't, there is a leak somewhere.
This can occur because:
Cache never expires
Session storage keeps growing and never times out
ObjectContexts are never disposed and are kept in the session, etc. (see the sketch after this list)
Objects that should be disposed aren't
Objects created via a dependency injection container aren't set up to be released after each request, and thus potentially have internal collections that keep growing.
There are more causes - but these are a few main ones.
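For the ObjectContext/DbContext case above, a hedged sketch of one per-request pattern, assuming EF Code First; MyDbContext and the item key are hypothetical names, not from the question:
using System;
using System.Data.Entity;
using System.Web;

public class MyDbContext : DbContext { }

public static class RequestDbContext
{
    private const string Key = "__requestDbContext";

    // Lazily creates one context per request and stores it in HttpContext.Items.
    public static MyDbContext Current
    {
        get
        {
            var ctx = HttpContext.Current.Items[Key] as MyDbContext;
            if (ctx == null)
            {
                ctx = new MyDbContext();
                HttpContext.Current.Items[Key] = ctx;
            }
            return ctx;
        }
    }

    // Call from Application_EndRequest in Global.asax so the context never outlives the request.
    public static void DisposeCurrent()
    {
        var ctx = HttpContext.Current.Items[Key] as IDisposable;
        if (ctx != null)
        {
            ctx.Dispose();
            HttpContext.Current.Items.Remove(Key);
        }
    }
}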
So the answer really is: there is no single best practice - it depends on your app.
If you are worried about current sessions during a restart, keep in mind that a restart can be quick, current requests are (sometimes) allowed to finish, and forms authentication tokens will survive the restart; sessions, however, will not unless you configure an out-of-process state server.
If your memory usage keeps growing, set up a restart schedule; otherwise do it once a week or never - or configure a recycle once memory reaches some threshold XYZ. ASP.NET will also restart automatically once a certain threshold is reached, based on what the host has configured for memoryLimit:
http://msdn.microsoft.com/en-us/library/7w2sway1.aspx
By default IIS recycles the application pool automatically at an interval (I think it is 29 hours or so), but that is surely set by the host, no matter how little or how much memory the process is using. The recycling trigger can be a time interval or the process hitting a certain memory usage limit. I'm sure any shared host has both of them set.
Regarding memory usage, you can use the GC.GetTotalMemory method, which will give you an approximate figure. Even with Perfmon the readings aren't very accurate, but they give you an idea.
//global.asax.cs
void Application_EndRequest(object o, EventArgs a)
{
    // Append the current managed-heap size to HTML responses only.
    var ctype = Context.Response.Headers["Content-Type"];
    if (ctype == null || !ctype.Contains("text/html")) return;
    Context.Response.Write(string.Format("<p>Memory usage: {0}</p>", GC.GetTotalMemory(false)));
}
Be aware that you'll see the usage increasing until the GC kicks in, after which it will drop to a more 'realistic' value.
If you have the money, I recommend a specialized tool such as a dedicated memory profiler.
Other things you can do to at least be ready if the application has memory or performance problems:
Proper layering of the application means you can refactor the more inefficient parts without affecting the others.
The Repository pattern will be very helpful, because you can start with EF, find out that EF uses too much memory (as in the link you found), and then switch the repository implementation to PetaPoco or Dapper.NET.
In general an ORM is a fairly heavy library; if the application doesn't need ORM features but just a quick way to work with a DB, use a micro-ORM like those mentioned above from the beginning.
Always dispose objects implementing IDisposable.
When dealing with large sets of DB records, use pagination (see the sketch after this list). It's good for both server resource usage and user experience.
Apply the YAGNI (You Ain't Gonna Need It) principle as much as possible; this somewhat implies a bit of TDD :)
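On the pagination point, a minimal EF Code First sketch; Post, BlogContext and the page size are illustrative assumptions only:
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Post
{
    public int Id { get; set; }
    public string Title { get; set; }
}

public class BlogContext : DbContext
{
    public DbSet<Post> Posts { get; set; }
}

public class PostRepository
{
    // Returns one page of read-only records; AsNoTracking avoids keeping
    // change-tracking state alive, and Skip/Take keeps the result set small.
    public List<Post> GetPage(int pageIndex, int pageSize = 20)
    {
        using (var db = new BlogContext())
        {
            return db.Posts
                .AsNoTracking()
                .OrderBy(p => p.Id)   // a stable ordering is required before Skip/Take
                .Skip(pageIndex * pageSize)
                .Take(pageSize)
                .ToList();
        }
    }
}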

ASP.NET MVC 2 app pool crashes - how to debug on server?

My application, which was built in ASP.NET MVC 2, crashes sometimes (maybe once a month), and the error is Service Unavailable 503. Both times, restarting the app pool made the application work again. Since this error only happens live on the server (shared hosting), I don't know how to debug it. I don't have access to the event logs, so I don't see a way to debug it.
Any suggestions?
UPDATE:
I contacted my hosting provider and they sent me this:
Imposed memory limit in Windows servers
Q: Is there a CPU/memory limit/restriction for Windows plans?
A: Yes. 100 MB for Reseller Class, 250 MB for Personal Class ASP and 500 MB for Business Class ASP.
Q: What will happen if I hit the memory limit?
A: If the worker process exceeds the private memory quota, IIS will recycle that pool, which limits the memory usage. Your active sessions on the website may expire. If your site uses authentication, you will be asked to log in again.
Q: How can I check the memory limit for my site?
A: Run your site on your local or test machine with the same limits and try to optimize the code. If the 250 MB limit is exceeded you should get Business Class, else you can go for Personal Class.
Q: What if my site exceeds the 500 MB limit in Business Class?
A: We can increase the worker processes to 2 if you are on a Business Class server. If the number of worker processes is increased, the load will be shared evenly across both worker processes.
If the memory usage is still high, you should consider getting a dedicated server where you can use unmetered memory for your website.
My account is Personal Class ASP (250 MB). Since my website is a photo gallery, could this be related to generating thumbnails?
Thanks,
Ilija
Are you generating thumbnails on the fly when a request for a thumbnail comes to the server? If so, you might want to consider generating thumbnails when the photo is uploaded and then just serving the smaller images.
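A hedged sketch of doing this once at upload time with System.Drawing, disposing everything promptly so large source bitmaps don't pile up under the 250 MB cap; the paths and the 200 px width are placeholders:
using System.Drawing;
using System.Drawing.Drawing2D;
using System.Drawing.Imaging;

public static class ThumbnailWriter
{
    public static void Save(string sourcePath, string thumbnailPath, int maxWidth = 200)
    {
        using (var source = Image.FromFile(sourcePath))
        {
            // Scale the height to preserve the aspect ratio.
            var height = (int)(source.Height * ((double)maxWidth / source.Width));

            using (var thumb = new Bitmap(maxWidth, height))
            using (var g = Graphics.FromImage(thumb))
            {
                g.InterpolationMode = InterpolationMode.HighQualityBicubic;
                g.DrawImage(source, 0, 0, maxWidth, height);
                thumb.Save(thumbnailPath, ImageFormat.Jpeg);
            }
        }
    }
}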
Very nice pictures, btw ;-)
PS: Try setting up the memory limit on your local machine and hammering the site with multiple requests - maybe you will be able to reproduce the problem.
