I'm hosting my Rails 4.1.4 project with 2 Unicorn processes on a free dyno as my development server. After the app has been running for a while, it sometimes feels slow. I added New Relic and Logentries and enabled log-runtime-metrics, then looked at the output in New Relic and Logentries:
heroku web.1 - - source=web.1 dyno=heroku.21274089.82eb32b4-c547-4041-b452-d3fedae05ee9 sample#load_avg_1m=0.00 sample#load_avg_5m=0.00 sample#load_avg_15m=0.01
heroku web.1 - - source=web.1 dyno=heroku.21274089.82eb32b4-c547-4041-b452-d3fedae05ee9 sample#memory_total=393.41MB sample#memory_rss=368.38MB sample#memory_cache=4.47MB sample#memory_swap=20.56MB sample#memory_pgpgin=121244pages sample#memory_pgpgout=25796pages
What I don't understand is that my dyno's resident memory is only sample#memory_rss=368.38MB, so why is it already using swap (sample#memory_swap=20.56MB)? From the Heroku article https://devcenter.heroku.com/articles/dynos#memory-behavior, I had understood that it should only start swapping once it hits the dyno's memory limit, which is 512 MB for a free dyno.
I was seeing significant swap even when my app was using only 50% of the available RAM, so I asked Heroku about it. Here's a quote from their support team:
We leave Ubuntu's swappiness at its default of 60 on our runtimes, including PX dynos.
A value of 60 ensures that your app will begin swapping well before it reaches max memory. The Linux kernel parameter vm.swappiness ranges from 0 to 100, with 0 indicating no swap and 100 indicating always swap.
So when running on Heroku you should expect your application to swap even if the footprint of your app is far smaller than the advertised RAM of your dyno.
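Those footprint numbers come straight from log-runtime-metrics lines like the ones quoted above, so if you want to watch swap growth over time you can parse them out of a log drain. A minimal sketch (the sample#… keys are exactly what Heroku emits; the helper itself is hypothetical):

```ruby
# Pull sample#memory_rss and sample#memory_swap (both in MB) out of a
# log-runtime-metrics line like the ones quoted above.
def memory_sample(line)
  rss  = line[/sample#memory_rss=([\d.]+)MB/, 1]
  swap = line[/sample#memory_swap=([\d.]+)MB/, 1]
  { rss_mb: rss&.to_f, swap_mb: swap&.to_f }
end

line = "sample#memory_total=393.41MB sample#memory_rss=368.38MB " \
       "sample#memory_cache=4.47MB sample#memory_swap=20.56MB"
memory_sample(line)  # => {:rss_mb=>368.38, :swap_mb=>20.56}
```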
Related
I'm running an RoR app on Heroku which rapidly consumes the available 512 MB. I'm using Puma (4.3.5).
I've followed the tutorials here and run the derailed benchmarks on my local machine. The perf:mem_over_time task and the other benchmarks never raise any issues locally. What is astounding is that, no matter what, the memory on my local machine does not increase, whereas when the app is deployed on Heroku it steadily increases.
Any ideas on how to debug the problem on Heroku? Running the derailed benchmarks there is not possible, since it complains that it cannot connect to the Postgres server (User does not have CONNECT privilege.)
OK, the problem turned out to be an obvious one: the number of Puma workers in production was set to 5. Each worker takes around 80 MB just to start with, so even a minor increase in memory triggered the R14 (not enough memory) error. I've reduced it to 2 workers and it's fine now.
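If the worker count is driven by an environment variable, this kind of fix becomes a config change instead of a code change. A minimal config/puma.rb sketch following the standard Rails convention (WEB_CONCURRENCY and RAILS_MAX_THREADS are the conventional variable names; tune the defaults to your dyno size):

```ruby
# config/puma.rb -- keep the worker count tunable per environment.
# 2 workers at ~80 MB each leaves comfortable headroom inside 512 MB.
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))

threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

preload_app!  # load the app once, share pages across workers (copy-on-write)

port        ENV.fetch("PORT", 3000)
environment ENV.fetch("RACK_ENV", "development")
```

With that in place, `heroku config:set WEB_CONCURRENCY=2` adjusts the worker count without touching the code.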
I'm trying to understand the Dyno Load section of my app's metrics. My particular app has five worker dynos. Given that, if I see a Load Max or Load Avg of 2 or 2.5, should I be OK? With this setup, would my goal be to keep the load under five (1 for each dyno)? Is that the correct way to view this?
The load you see in Heroku Metrics is per dyno. Each dyno reports its own load; Load Max is the highest value reported by any single dyno.
So expecting 5 to be a good value because you have 5 dynos isn't right.
You need to evaluate that value based on the type of dynos you have, since larger dyno types get more CPU share and can handle more load.
Heroku recommends (here) keeping Free, Hobby and Standard dynos between 0.5 and 1.0 load.
Performance-M dynos can go to 3.0 or 4.0, and Performance-L dynos can go up to 16.0.
See also dyno sizes and their CPU share: https://devcenter.heroku.com/articles/dyno-types
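As a concrete reading of those numbers, here is a small hypothetical helper; the thresholds are the recommendations above, everything else (names, structure) is my own:

```ruby
# Recommended maximum per-dyno load, per the guidance above.
RECOMMENDED_MAX_LOAD = {
  "free"          => 1.0,
  "hobby"         => 1.0,
  "standard-1x"   => 1.0,
  "standard-2x"   => 1.0,
  "performance-m" => 4.0,
  "performance-l" => 16.0
}.freeze

# Each dyno is judged on its own: five dynos at 0.5 each are healthy,
# but a single dyno at 2.5 is overloaded regardless of fleet size.
def load_ok?(dyno_type, load_avg)
  load_avg <= RECOMMENDED_MAX_LOAD.fetch(dyno_type.downcase)
end

load_ok?("standard-1x", 0.5)  # => true
load_ok?("standard-1x", 2.5)  # => false
```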
Is it possible to stop a Heroku Daily Dyno Restart for a Hobby Dyno?
My goal is to stop the dyno from restarting.
In short, no (with the aside that the restart shouldn't be seen as a bad thing).
From the Heroku Dynos and Dyno Manager docs:
Dynos are also restarted (cycled) at least once per day to help maintain the health of applications running on Heroku. Any changes to the local filesystem will be deleted. The cycling happens once every 24 hours (plus up to 216 random minutes, to prevent every dyno for an application from restarting at the same time).
Cycling happens for all dynos, including one-off dynos, so dynos will run for a maximum of 24 hours + 216 minutes.
In addition, dynos are restarted as needed for the overall health of the system and your app. For example, the dyno manager occasionally detects a fault in the underlying hardware and needs to move your dyno to a new physical location.
Additionally, dynos restart if you:
create a new release by deploying new code
change your config vars
change your add-ons
run heroku restart
With Free dynos (Hobby dynos don't sleep), the real issue is that inactivity causes the dyno to sleep during the day. From my personal experience, waking up a sleeping dyno can cause a page to take ~30 s to load.
There are many solutions that 'ping' the dyno at regular intervals to keep it awake.
An example solution for a Node server is heroku-self-ping.
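heroku-self-ping is Node-specific, but the same idea is a few lines of Ruby. A sketch, assuming your app's public URL lives in an APP_URL config var (the variable name is my own) and pinging every 25 minutes to stay under the 30-minute inactivity threshold:

```ruby
# Hypothetical self-ping: request our own URL every 25 minutes so the
# free dyno never hits the inactivity threshold and gets put to sleep.
require "net/http"
require "uri"

if ENV["APP_URL"]  # e.g. https://myapp.herokuapp.com/ (assumed config var)
  Thread.new do
    uri = URI(ENV["APP_URL"])
    loop do
      begin
        Net::HTTP.get_response(uri)  # any HTTP hit counts as activity
      rescue StandardError => e
        warn "self-ping failed: #{e.message}"
      end
      sleep 25 * 60
    end
  end
end
```

Note that a dyno can keep itself awake but can't wake itself once it's already asleep, so an external pinger is the more robust variant.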
Does anyone know the answer to this? I spawned two 2X dynos on Heroku, and the performance with the free 1X dyno was much better than with the two 2X dynos. Both ran the same Rails app talking to the same database. Boggles my mind!
More dynos give you more concurrency. With a single-threaded web server such as Thin, if you have 4 x 1X dynos you can serve four requests at the same time.
With 2 x 2X dynos you can only serve two.
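The arithmetic behind that is just dynos × processes per dyno × threads per process; a tiny illustrative calculation, not anything Heroku-specific:

```ruby
# Back-of-the-envelope concurrency, assuming a single-threaded server
# (each Thin/Unicorn process handles one request at a time).
def total_concurrency(dynos:, processes_per_dyno: 1, threads_per_process: 1)
  dynos * processes_per_dyno * threads_per_process
end

total_concurrency(dynos: 4)  # 4 x 1X dynos => 4 concurrent requests
total_concurrency(dynos: 2)  # 2 x 2X dynos => 2 concurrent requests
```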
2X dynos have more memory (1024 MB) and more CPU available. This is useful if your application takes up a lot of memory; most apps should not need 2X dynos.
Heroku has recently added PX dynos as well, which have significantly more power available.
You can read about the different dynos Heroku offers on their website.
The actual numbers have changed since the accepted answer was posted.
The current difference is the memory (512 MB for 1X and 1024 MB for 2X), as well as the CPU share, which is doubled for 2X.
More details can be found on the following Heroku dev center page: https://devcenter.heroku.com/articles/dyno-types
I have a web app on Heroku which is constantly using around 300% of the allowed RAM (512 MB). My logs are full of Error R14 (Memory quota exceeded) [an entry every second]. Although in bad shape, my app still works.
Apart from degraded performance, are there any other consequences I should be aware of (like Heroku charging extra for anything related to this issue, scheduled jobs failing, etc.)?
To the best of my knowledge, Heroku will not take action even if you continue to exceed the memory quota. However, I don't think the availability of the full 1 GB of overage (out of the 1.5 GB that you are consuming) is guaranteed, or is guaranteed to be physical memory at all times. Also, if you are running close to 1.5 GB, you risk going over the hard 1.5 GB limit, at which point your dyno will be terminated.
I also get the following every time I run a specific task on my Heroku app and check heroku logs --tail:
Process running mem=626M(121.6%)
Error R14 (Memory quota exceeded)
My solution would be to check out Celery and Heroku's documentation on this.
Celery is an open source asynchronous task queue, or job queue, which makes it very easy to offload work out of the synchronous request lifecycle of a web app onto a pool of task workers to perform jobs asynchronously.
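Celery is Python; if the app in question were Rails, the equivalent pattern would be Active Job with a backend such as Sidekiq. A sketch of the same offloading idea (the job name and body are illustrative, not from the original post):

```ruby
# Hypothetical Active Job: run the memory-heavy task on a worker
# dyno instead of inside the web dyno's request cycle.
class HeavyTaskJob < ApplicationJob
  queue_as :default

  def perform(record_id)
    # ...the memory-hungry work that used to run inline...
  end
end

# In the controller: enqueue and return immediately.
HeavyTaskJob.perform_later(record.id)
```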