I'm running a Rails app on Heroku which rapidly consumes the available 512 MB. I'm using Puma (4.3.5).
I've followed the tutorials here and run the derailed benchmarks on my local machine. The perf:mem_over_time and other benchmarks never raise any issues locally. What is astounding is that, no matter what, memory on my local machine does not increase, whereas when the app is deployed on Heroku it steadily increases.
Any ideas on how to debug the problem on Heroku? Running the derailed benchmarks there is not possible, since it complains that it cannot connect to the Postgres server (User does not have CONNECT privilege).
Ok, the problem seemed to be an obvious one: the number of workers in production was set to 5. Each one takes about 80 MB to start with, so even a minor increase in memory triggered R14 (not enough memory). I've reduced it to 2 workers and it's fine now.
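For reference, here is a minimal config/puma.rb sketch of that fix. WEB_CONCURRENCY and RAILS_MAX_THREADS are the conventional Heroku environment variable names; the specific defaults below are assumptions, not the asker's exact config.

```ruby
# config/puma.rb -- minimal sketch, assuming the conventional Heroku env vars.
# Each Puma worker forks the full app (~80 MB here), so on a 512 MB dyno
# 2 workers leaves headroom where 5 workers did not.
workers ENV.fetch("WEB_CONCURRENCY", 2).to_i

threads_count = ENV.fetch("RAILS_MAX_THREADS", 5).to_i
threads threads_count, threads_count

preload_app!  # load the app before forking, so workers share memory copy-on-write
```

With this in place, the worker count can be tuned per environment via `heroku config:set WEB_CONCURRENCY=2` without a code change.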
We have a memory problem with our Azure Windows App Service Plan (service level is P1v3 with 1 instance – this means 8 GB memory).
We are running two small .NET 6 App Services on it (some web APIs), that use custom containers – without problems.
They’re not in production and receive a very low number of requests.
However, when looking at the service plan's memory usage in Diagnose and Solve Problems / Memory Analysis, we see an unexplained and stable memory usage of about 80%:
And the real problem occurs when we try to start a third app service on the plan. We get this "out of memory" error in our log stream:
ERROR - Site: app-name-dev - Unable to start container.
Error message: Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem xxxx:
The paging file is too small for this operation to complete."}
So it looks like Docker doesn't have enough memory to start the container. Maybe because of the 80% memory usage?
But our apps actually have very low memory needs. When running them locally on dev machines, we see about 50-150 MB memory usage (when no requests occur).
In Azure, the private bytes graph in “availability and performance” shows very moderate consumption for the biggest app of the two:
Unfortunately, the “Memory drill down” is unavailable:
(needless to say, waiting hours doesn’t change the message…)
Stranger still, even after stopping all App Services in the plan, the plan still shows a memory percentage of 60%.
Obviously some memory is being retained by something...
So the questions are:
Is it normal to have a 60% memory percentage in an App Service Plan with no App Services running?
If not, could this be due to a memory leak in our apps? But App Services run in supposedly isolated containers, so I'm not sure that's possible. Any other explanation is of course welcome :-)
Why can't we access the memory drill down?
Any tips on the best way to fit "small" Docker containers with low memory usage into Azure App Service (or maybe another Azure resource type)? It's a bit frustrating to only be able to use 3 GB out of an 8 GB machine...
Further details:
The first app is .NET 6 based, with its Docker image built on aspnet:6.0-nanoserver-ltsc2022.
The second app is also .NET 6 based, but has some Windows DLL dependencies and is therefore based on aspnet:6.0-windowsservercore-ltsc2022.
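One avenue worth checking: Windows containers on App Service can reserve far more memory than the app itself uses, and I believe the WEBSITE_MEMORY_LIMIT_MB app setting can cap a Windows container's memory reservation. This is a sketch only; the setting name and its applicability to your plan should be verified against the App Service documentation, and the resource group and app names below are placeholders.

```shell
# Hypothetical sketch: cap the Windows container's memory reservation via an
# app setting (verify WEBSITE_MEMORY_LIMIT_MB against current App Service docs).
az webapp config appsettings set \
    --resource-group my-rg \
    --name app-name-dev \
    --settings WEBSITE_MEMORY_LIMIT_MB=1024
```

If the windowsservercore-based container is reserving several gigabytes by default, capping it could explain and recover much of the "missing" plan memory.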
Thanks in advance!
EDIT:
I added more details and changed the questions a bit since I was able to stop all app services tonight.
I'm new to Heroku and wondering about their terminology.
I host a project that requires seeding to populate a database with tens of thousands of rows. To do this I use a web dyno to extract information from APIs across the web.
While my dyno is running I get notifications saying that it has exceeded its memory quota (the specific Heroku errors are R14 and R15).
I am not sure whether this merely means that my seeding process (web dyno) is running too fast and will be throttled, or whether my database itself is too large and must be reduced.
R14 and R15 errors are only thrown for runtime dynos; for reference, Heroku Postgres databases do not run on dynos. If you're hitting R14/R15 errors, the seed data you're pulling down is likely exhausting your dyno's memory quota, not the database. You'll need to either decrease the size of the data or batch it: write each batch to Postgres, then clean up before proceeding.
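The batching idea can be sketched in plain Ruby with each_slice. This is illustrative only: bulk_insert is a hypothetical stand-in for whatever actually writes a batch to Postgres in your seed script.

```ruby
BATCH_SIZE = 1_000

# Minimal sketch: instead of holding tens of thousands of rows in memory at
# once, process them in fixed-size slices and release each slice when done.
# `bulk_insert` is a hypothetical stand-in for the real Postgres write.
def seed_in_batches(rows, batch_size: BATCH_SIZE)
  inserted = 0
  rows.each_slice(batch_size) do |batch|
    # bulk_insert(batch)   # write this batch to Postgres here
    inserted += batch.size
    batch.clear            # drop references so the GC can reclaim the slice
  end
  inserted
end
```

The peak memory footprint is then roughly one batch of rows rather than the whole dataset, which is usually enough to stay under a 512 MB dyno quota.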
I have a small Ruby on Rails application deployed on an Amazon EC2 instance using Capistrano. The instance is a t2.small with nginx installed, and a local Postgres database runs on the same server. I have a development instance to which I deploy frequently. Recently, whenever I do a Capistrano deployment, CPU utilization spikes enormously: it is usually between 20-25%, but during deployment it goes up to 85%, which makes my instance unresponsive, and I have to do a hard restart on my server to get it working again.
I don't know why this is happening or what I should do to solve it. Load balancing and auto scaling make no sense in this scenario, since the issue occurs only during deployment.
I have attached a screenshot of my server's CPU utilization; the two high peaks are both from when I performed a cap deployment.
The only solution I can think of is increasing the instance type, but I want to know what other options I have. Any help is appreciated, thanks in advance.
If this is an interim spike (only during deployment) and you don't need high CPU during normal application usage, you may try the t2.unlimited approach.
If t2.unlimited can't support your need, I think increasing the instance type is the only option left for you.
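Enabling unlimited mode on a running t2 instance can be done from the AWS CLI. The instance ID below is a placeholder; verify the exact command and option names against your CLI version.

```shell
# Enable T2 Unlimited on a running instance (instance ID is a placeholder).
aws ec2 modify-instance-credit-specification \
    --instance-credit-specifications "InstanceId=i-0123456789abcdef0,CpuCredits=unlimited"

# Confirm the new credit specification:
aws ec2 describe-instance-credit-specifications \
    --instance-ids i-0123456789abcdef0
```

Note that unlimited mode can incur extra charges if the instance sustains high CPU beyond its credit balance, so it suits short deployment spikes better than constant load.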
I've deployed Prisma to Heroku: https://www.prisma.io/blog/heroku-integration-homihof6eifi/
The only traffic to my site is myself testing it. I started out with one hobby dyno, but the 'resting' memory usage was around 95% and I was getting some "Memory quota exceeded" errors.
To try to fix this I increased the dyno count to 2. I'm now paying $50 per month.
Despite this I'm still getting memory usage warnings. $50 a month for a service that's struggling with very low traffic seems crazy. Have I set something up wrong? Should I have increased the memory limit rather than the number of dynos?
A future release of Prisma will introduce significant improvements to memory handling. While operating Prisma on a 512 MB dyno is certainly possible, you will currently have a much better experience upgrading to a 1024 MB dyno. Running a single large dyno will provide a much bigger improvement than running two small dynos.
Hope this helps :-)
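For what it's worth, scaling up (bigger dyno) rather than out (more dynos) is a one-liner with the Heroku CLI. The standard-2x dyno type below is an assumption about which tier provides 1024 MB; check `heroku ps` and the current plan names for your account.

```shell
# Go back to a single dyno, then upgrade it to a larger (1024 MB) size.
# (standard-2x is an assumption -- verify against current Heroku dyno types.)
heroku ps:scale web=1
heroku ps:resize web=standard-2x
```

Since Prisma's memory pressure here is per-process rather than per-request, one larger dyno addresses the quota errors where two small dynos cannot.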
I have a web app on Heroku that is constantly using around 300% of the allowed RAM (512 MB). My logs are full of Error R14 (Memory quota exceeded) [an entry every second]. Although in bad shape, my app still works.
Apart from degraded performance, are there any other consequences I should be aware of (e.g., Heroku charging extra for anything related to this issue, scheduled jobs failing, etc.)?
To the best of my knowledge, Heroku will not take action even if you continue to exceed the memory quota. However, I don't think the availability of the full 1 GB of overage (out of the 1.5 GB that you are consuming) is guaranteed, or is guaranteed to be physical memory at all times. Also, if you are running close to 1.5 GB, you risk going over the hard 1.5 GB limit, at which point your dyno will be terminated (error R15).
I also get the following every time I run a specific task on my Heroku app and check heroku logs --tail:
Process running mem=626M(121.6%)
Error R14 (Memory quota exceeded)
My solution would be to check out Celery and Heroku's documentation on this.
Celery is an open source asynchronous task queue, or job queue, which makes it very easy to offload work out of the synchronous request lifecycle of a web app onto a pool of task workers to perform jobs asynchronously.
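Celery itself is Python-specific, but the pattern it implements — enqueue the heavy job from the request cycle and let a pool of workers process it asynchronously — can be sketched in plain Ruby with a thread-safe Queue. This is illustrative only, not a substitute for a real task queue like Celery or Sidekiq.

```ruby
# Illustrative sketch of the task-queue offload pattern: the "web" side
# enqueues jobs and returns immediately; worker threads drain the queue
# in the background, keeping heavy work out of the request lifecycle.
jobs    = Queue.new
results = Queue.new

workers = Array.new(2) do
  Thread.new do
    while (job = jobs.pop)          # nil acts as a stop signal
      results << job * 2            # stand-in for the real heavy work
    end
  end
end

5.times { |i| jobs << i }           # "enqueue" jobs from the request cycle
workers.size.times { jobs << nil }  # tell each worker to stop
workers.each(&:join)

total = 0
total += results.pop until results.empty?
puts total                          # 0+2+4+6+8 = 20
```

In a real deployment the in-process Queue is replaced by a broker (e.g. Redis) and the worker threads by separate worker dynos, so a memory-heavy task no longer counts against the web dyno's quota.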