I am developing an application for Windows Azure using the latest SDK.
At the moment I am implementing the session provider using the cache, but the emulator's memory usage is completely out of proportion:
The cache is implemented in an Extra Small ("very small") worker role (max. 768 MB RAM).
Does anyone know if this is normal or if I have some misconfiguration?
While I can't say if this is the "right" behavior, I am also seeing the same thing. In my case, CacheServiceEmulator.exe starts at about 300 KB of memory and keeps consuming more and more. What I have been told is that this is expected behavior: the service is expected to consume up to 30% of your system's memory over an extended period of time.
One other note: I noticed that you're using an X-Small instance for the cache worker role. Please note that the minimum size supported for caching is Small. Thought I should mention that as well.
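For reference, a rough sketch of where that size is set in ServiceDefinition.csdef (the service and role names below are placeholders, not from your project):

<!-- Sketch only: the cache worker role's VM size is set via the vmsize attribute; caching needs at least Small -->
<ServiceDefinition name="MyCloudService" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
  <WorkerRole name="CacheWorkerRole" vmsize="Small">
    <!-- cache configuration, endpoints, etc. -->
  </WorkerRole>
</ServiceDefinition>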
We have a memory problem with our Azure Windows App Service Plan (service level is P1v3 with 1 instance – this means 8 GB memory).
We are running two small .NET 6 App Services on it (some web APIs) that use custom containers, without problems.
They’re not in production and receive a very low number of requests.
However, when looking at the service plan's memory usage in Diagnose and Solve Problems / Memory Analysis, we see an unexplained, stable memory usage of around 80%:
And the real problem occurs when we try to start a third App Service on the plan. We get this "out of memory" error in our log stream:
ERROR - Site: app-name-dev - Unable to start container.
Error message: Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem xxxx:
The paging file is too small for this operation to complete."}
So it looks like Docker doesn't have enough memory to start the container, maybe because of the 80% memory usage?
But our apps actually have very low memory needs. When running them locally on dev machines, we see about 50-150 MB of memory usage (when no requests occur).
In Azure, the private bytes graph in "Availability and Performance" shows very moderate consumption for the bigger of the two apps:
Unfortunately, the “Memory drill down” is unavailable:
(needless to say, waiting hours doesn’t change the message…)
Even stranger, after stopping all App Services on the App Service Plan, the plan still shows a Memory Percentage of 60%.
Obviously some memory is being retained by something...
So the questions are:
Is it normal to have a Memory Percentage of 60% in an App Service Plan with no App Services running?
If not, could this be due to a memory leak in our app? But App Services run in supposedly isolated containers, so I'm not sure this is possible. Any other explanation is of course welcome :-)
Why can't we access the memory drill down?
Any tips on the best way to fit "small" Docker containers with low memory usage into Azure App Service? (Or maybe into another Azure resource type...) It's a bit frustrating to only be able to use 3 GB out of an 8 GB machine...
Further details:
The first app is .NET 6 based, with its Docker image based on aspnet:6.0-nanoserver-ltsc2022 (see the Dockerfile sketch after these details).
The second app is also .NET 6 based, but has some Windows DLL dependencies and is therefore based on aspnet:6.0-windowsservercore-ltsc2022.
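As a rough sketch of the kind of Dockerfile the first app uses (the published output path and assembly name are placeholders, not our actual files):

# Sketch only: image based on the nanoserver ASP.NET 6.0 tag; paths and names are placeholders
FROM mcr.microsoft.com/dotnet/aspnet:6.0-nanoserver-ltsc2022
WORKDIR /app
COPY ./publish/ ./
ENTRYPOINT ["dotnet", "MyWebApi.dll"]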
Thanks in advance!
EDIT:
I added more details and changed the questions a bit since I was able to stop all app services tonight.
Can anyone tell me if the type of behavior outlined in the memory dump from Visual Studio is normal? For instance, does StackExchange.Redis.PhysicalConnection normally run that high on inclusive size (bytes), or is that really high?
Basically, we are experiencing slowness with our web head after converting our code from Session state to Azure Redis (we now serialize and deserialize as needed and store the results in the Redis cache), and overall performance is horrible.
The requests complete, but they can take a while. Is that due to the single-threaded nature of Redis? We are using the configuration outlined as best practice by the Azure Redis team here: https://stackoverflow.com/a/28821220
What else can we look at to help increase performance? The current performance is not acceptable as a viable replacement for the session-based implementation (ASP.NET WebForms / SQL Server / Azure IaaS) we currently have.
PS - Serialization and deserialization do cause a hit; we understand that IIS spoiled us with its own special memory pool for non-serialized DataSets and such, but there is no way it should cause a 300-500% increase in page load times like it does now.
Thoughts appreciated!
@Tim Wieman
How large are your cached objects?
They can range in size; there are some DataSets stored in Redis.
What type of objects are they?
Most objects are custom objects with a variable number of properties; some even contain collections.
What serializer are you using?
We are using Newtonsoft for anything that doesn't require RowState, and the required binary serializer for the DataSets that do need RowState.
All serialization, and subsequent deserialization, is done in code before calling the Redis database's StringGet or StringSet.
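Roughly, the pattern is along the lines of this sketch (the generic helper, key name, and expiry are illustrative, not our exact code):

// Sketch of the serialize-in-code-then-StringSet/StringGet pattern described above.
// The key and the 20-minute expiry are placeholders.
using System;
using Newtonsoft.Json;
using StackExchange.Redis;

public static class RedisObjectStore
{
    public static void Save<T>(IDatabase db, string key, T value)
    {
        // Serialize first, then hand the resulting string to Redis.
        string json = JsonConvert.SerializeObject(value);
        db.StringSet(key, json, expiry: TimeSpan.FromMinutes(20));
    }

    public static T Load<T>(IDatabase db, string key)
    {
        RedisValue json = db.StringGet(key);
        return json.HasValue ? JsonConvert.DeserializeObject<T>(json) : default(T);
    }
}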
It appears the memory was in fact extremely high; we were erroneously creating thousands of connections to Redis instead of a singleton instance of the Redis connection.
The multiple connections were not getting cleaned up by the GC before the CPU would reach 98% and the server would become unresponsive.
We adjusted our code to ensure a single connection instance to Azure Redis is used for all Redis calls, and we have tested thoroughly.
It appears to be resolved, as Azure Redis is no longer eating up memory or CPU resources.
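For anyone hitting the same issue, a minimal sketch of the shared-connection pattern we moved to (the connection string below is a placeholder; read it from configuration in real code):

// Sketch: one lazily created ConnectionMultiplexer shared by all Redis calls.
using System;
using StackExchange.Redis;

public static class RedisConnection
{
    private static readonly Lazy<ConnectionMultiplexer> LazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                "yourcache.redis.cache.windows.net:6380,password=<key>,ssl=True,abortConnect=False"));

    public static ConnectionMultiplexer Connection => LazyConnection.Value;

    // All StringGet/StringSet calls go through databases obtained from this one multiplexer.
    public static IDatabase GetDatabase() => Connection.GetDatabase();
}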
I have an Azure cloud service consisting of one web role, with Azure cache co-located on the same role. I am seeing cache accesses that are too slow: each Get call takes ~800 ms. What could be wrong? What should I look at for optimization?
More data is needed.
What's your object size? Are only the puts taking time, or are the gets equally bad?
Is the slowness only seen on the dev fabric, or on actual deployments too?
What's the role size?
Co-location means the cache competes for CPU with your current app; is the app hogging all the CPU? Co-location was not supported on XS machines. Apart from the serializer, the machine itself could be the cause of the slowness.
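To separate those cases, a rough timing sketch like the one below could help (this assumes the in-role cache client from Microsoft.ApplicationServer.Caching; the key name and test payload are placeholders):

// Sketch: time a Put and a Get separately against the co-located cache.
using System;
using System.Diagnostics;
using Microsoft.ApplicationServer.Caching;

public static class CacheTiming
{
    public static void Measure()
    {
        DataCache cache = new DataCacheFactory().GetDefaultCache();
        string payload = new string('x', 10 * 1024); // ~10 KB test object

        Stopwatch sw = Stopwatch.StartNew();
        cache.Put("timing-test", payload);
        Console.WriteLine("Put: " + sw.ElapsedMilliseconds + " ms");

        sw.Restart();
        object value = cache.Get("timing-test");
        Console.WriteLine("Get: " + sw.ElapsedMilliseconds + " ms");
    }
}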
I have a site hosted on Windows Azure shared websites. It just got suspended for going over the memory usage limit of 512 MB/hour.
I do use .net caching rather heavily (to prevent multiple calls to database/external APIs, etc...).
Is that caching a no-no in shared websites on Windows Azure?
Do you use System.Runtime.Caching? You should be able to limit the amount of memory that, for example, the MemoryCache object uses. See http://msdn.microsoft.com/en-us/library/dd941874.aspx for more information.
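For example (a sketch only; the 100 MB cap and polling interval are illustrative values), the default MemoryCache can be capped in web.config:

<!-- Sketch: cap MemoryCache.Default at roughly 100 MB -->
<system.runtime.caching>
  <memoryCache>
    <namedCaches>
      <add name="Default"
           cacheMemoryLimitMegabytes="100"
           pollingInterval="00:02:00" />
    </namedCaches>
  </memoryCache>
</system.runtime.caching>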
Even if you stop using the cache yourself, it can still be used by the framework/libraries. I also have the same problem (interestingly, in free mode the memory limit is 1024 MB, but in shared mode it is lowered to 512 MB).
From what I can see, the memory amount Azure shows on the portal is very close to the System.Diagnostics.Process.GetCurrentProcess().PrivateMemorySize value.
At the moment I'm experimenting with the ASP.NET caching settings to cap the maximum memory:
<system.web>
  <caching>
    <cache privateBytesLimit="250000000" privateBytesPollTime="00:00:15"/>
  </caching>
</system.web>
Several days ago I set 300 MB, but a few minutes ago I got suspended again :(, so I'm lowering it to 250 MB.
But anyway, this is a very unclear, strange, and "wrong" solution, IMHO.
UPDATE
Got suspended again this morning. I temporarily converted to standard mode with a Small instance (1.7 GB RAM).
My WorkingSet counter is now about 200 MB (with a PeakWorkingSet of 330 MB). BUT! The GC's CollectionCount has increased roughly 8x (Gen0 is at 1800 collections instead of 250, in less than a day).
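(In case it's useful, the kind of process and GC counters I'm talking about can be read in code roughly like this sketch; the output format is just illustrative:)

// Sketch: snapshot the process memory counters and GC collection counts.
using System;
using System.Diagnostics;

public static class MemoryStats
{
    public static string Snapshot()
    {
        Process p = Process.GetCurrentProcess();
        return string.Format(
            "PrivateBytes={0:N0} WorkingSet={1:N0} PeakWorkingSet={2:N0} Gen0={3} Gen1={4} Gen2={5}",
            p.PrivateMemorySize64, p.WorkingSet64, p.PeakWorkingSet64,
            GC.CollectionCount(0), GC.CollectionCount(1), GC.CollectionCount(2));
    }
}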
My current theory is that in "shared" mode, websites run inside a "big" VM with a lot of memory, so the garbage collector simply doesn't need to run often, leading to a longer "garbage life" and more memory consumption.
I have no access to my developer computer right now to verify this, but I'm planning to convert the site to a web role in a cloud service ASAP, with an Extra Small instance (the cost is comparable to a shared website)...
It might be worth profiling with perfmon on your local machine first, to see whether it hits the limits there under normal use; then look at configuring logging on Azure and digging through that as well.
Also, ensuring everything is precompiled, and that you're not loading modules you don't need, can really affect performance on Azure.
I think what you might want to try here is scaling out instead of up. If you add a second instance, that will double your resource limit.
I have a worker role that runs multiple threads (I used ThreadedWorkerRole). My worker role downloads some data and then the images related to that data. Everything works fine locally, but when I deploy the app to Azure, it starts by showing reasonable memory usage (48 MB) and then shoots up to 800 MB within 1 or 2 hours. My application takes care to dispose of objects, with lots of "using" statements, and it closes streams properly. But I still wonder what causes the memory to jump to such a high value. One more thing: I have used RETSLib (a P/Invoke library to hit a RETS server), which downloads the data and images. Could the issue be unmanaged code?
It can definitely be an issue of unmanaged code leaking memory. Is RETSLib a .NET wrapper around librets? There are some references to PHP implementations of librets leaking memory.
You mention "downloads some data and then images related to that data". Are you using Entity Framework to get this initial data or store it into SQL? If so, I am assuming that you dispose of the ObjectContext. There have been instances where EF 4.0 seems to have had some memory issues.
A link (old) that talks about this.
I could have added this as a comment, but Stack Overflow would not allow me to do so on account of my low rep points.