Running a Clojure app on Heroku, I've specified this in my Leiningen profiles.clj:
:jvm-opts ^:replace ["-Xms128m" "-Xmx350m" "-Xss512k" "-XX:MaxMetaspaceSize=150m"]
I'm running my worker with lein trampoline run.
But I'm currently getting these errors:
2015-06-20T14:38:14.652680+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2015-06-20T14:38:34.779145+00:00 heroku[worker.1]: Process running mem=552M(107.8%)
2015-06-20T14:38:34.779145+00:00 heroku[worker.1]: Error R14 (Memory quota exceeded)
2015-06-20T14:38:54.511927+00:00 heroku[worker.1]: Process running mem=552M(107.8%)
Since 350 + 150 = 500, I expected my memory usage to stay below Heroku's 512 MB limit. Is there another maximum memory setting I need to add?
Heroku is limiting your total process memory, which is larger than the heap and metaspace that your JVM flags control. See for example:
Why is my JVM process larger than max heap size?
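Heap plus metaspace is only part of the footprint: each thread adds an -Xss512k stack, and the JIT code cache, direct buffers, GC structures, and the JVM's own bookkeeping all count toward the 512 MB. One way to see the breakdown (a sketch; NativeMemoryTracking is a standard HotSpot flag, and heroku ps:exec is assumed to be available for your app) is to enable Native Memory Tracking and query the live process with jcmd:

:jvm-opts ^:replace ["-Xms128m" "-Xmx350m" "-Xss512k" "-XX:MaxMetaspaceSize=150m" "-XX:NativeMemoryTracking=summary"]

$ heroku ps:exec --dyno worker.1
$ jcmd        # with no arguments, lists running JVM pids
$ jcmd <pid> VM.native_memory summary

The summary breaks out heap, thread stacks, code cache, and GC overhead, which usually accounts for the gap between 500 MB of configured limits and 552 MB of resident memory.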
I'm running an RoR app on Heroku which rapidly consumes the available 512 MB. I'm using Puma (4.3.5).
I've followed the tutorials here and run the derailed benchmarks on my local machine. The perf:mem_over_time run and the other benchmarks never raise any issues locally. What is astounding is that no matter what, memory on my local machine does not increase, whereas when the app is deployed on Heroku it steadily increases.
Any ideas on how to debug the problem on Heroku? Running the derailed benchmarks is not possible on Heroku, since it complains that it cannot connect to the Postgres server (User does not have CONNECT privilege.).
OK, the problem turned out to be an obvious one: the number of workers in production was set to 5. Each one takes around 80 MB on average just to start, so even a minor increase in memory triggered R14 (not enough memory). I've reduced it to 2 workers and it's fine now.
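For reference, the worker count typically lives in config/puma.rb; a minimal sketch, assuming the WEB_CONCURRENCY convention from Heroku's Puma guide:

workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))
threads_count = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads threads_count, threads_count

Each Puma worker forks a full copy of the app, so the worker count is the first knob to turn when RSS creeps toward the dyno limit.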
Trying to get Heroku to run some Puppeteer jobs. Locally, it works; it's slow, but it works. Monitoring the memory in OS X's Activity Monitor, it doesn't get above 50 MB. But when I deploy this script to Heroku, I get Memory quota exceeded every time, and the memory footprint is much larger.
Looking at the logs, I'm getting the message:
Process running mem=561M(106.5%)
Error R14 (Memory quota exceeded)
Restarting
State changed from up to starting
Either Activity Monitor is not reporting the memory correctly, or something is going wrong only when the script runs on Heroku. I can't imagine why a scrape of 25 pages would use 561 MB.
Also, since the Puppeteer scripts must be wrapped in try/catch, the memory error is crashing the dyno and restarting it. By the time the dyno restarts, the browser has hung up, so the restart does little good. Is there a way to catch 'most' errors on Heroku but still throw when there is an R14 memory error?
I had a similar issue. What I discovered is that if you are not closing the browser, you will immediately get an R14 error. What I recommend:
Make sure you use a single browser instance and multiple contexts instead of multiple browsers.
Make sure you close each context after you call pdf()
If you are processing large pages you need to scale your Heroku instance; you don't have a choice. Unfortunately, 1 GB of memory on Heroku costs $50...
Here is some rough code, but it does show that the context is closed only after the pdf() call finishes:
const fs = require("fs");
const { Readable } = require("stream");

// Turn the PDF buffer into a readable stream so it can be piped to disk.
function bufferToStream(buffer) {
  return Readable.from(buffer);
}

// Render `html` to a PDF at `path`, reusing the shared `browser` instance.
function renderPdf(browser, html, options, path) {
  return new Promise((resolve, reject) => {
    browser.createIncognitoBrowserContext().then((context) => {
      context.newPage().then((page) => {
        page.setContent(html).then(() => {
          page.pdf(options).then((pdf) => {
            const inputStream = bufferToStream(pdf);
            const outputStream = fs.createWriteStream(path);
            inputStream.pipe(outputStream).on("finish", () => {
              // Close the context (and its page) so Chromium frees the memory.
              context.close().then(resolve).catch(reject);
            });
          }).catch(reject);
        }).catch(reject);
      }).catch(reject);
    }).catch(reject);
  });
}
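As for the question about failing fast instead of waiting for R14: the platform won't raise a catchable error inside your process, but one workaround (a sketch; the 450 MB threshold is an arbitrary safety margin, not a Heroku setting) is to watch your own resident set size between jobs and shut the browser down cleanly before the quota is hit:

// Check RSS between scrapes; bail out before the 512 MB quota is reached.
const LIMIT_BYTES = 450 * 1024 * 1024;

async function checkMemory(browser) {
  const { rss } = process.memoryUsage();
  if (rss > LIMIT_BYTES) {
    // Close the browser while we still can, then exit so the dyno
    // restarts from a clean state instead of hanging mid-job.
    await browser.close();
    process.exit(1);
  }
}

Calling this between page scrapes means the restart happens at a point you chose, rather than after the browser has already wedged.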
I'm new to Heroku and wondering about their terminology.
I host a project that requires seeding to populate a database with tens of thousands of rows. To do this I employ a web dyno to extract information from APIs across the web.
As my dyno runs, I get memory notifications saying that the dyno has exceeded its memory quota (the specific Heroku errors are R14 and R15).
I am not sure whether this merely means that my seeding process (web dyno) is running too fast and will be throttled, or whether my database itself is too large and must be reduced.
R14 and R15 errors are only thrown for runtime dynos, and Heroku Postgres databases do not run on dynos, so this is not about your database's size. If you're hitting R14/R15, the seed data you're pulling down is likely exhausting the dyno's memory quota. You'll need to either decrease the size of the data or batch it: fetch a chunk, write it to Postgres, and release it before proceeding.
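A rough sketch of the batching idea in Node (fetchPage, the items table, and the column names are all assumptions; pg is the standard node-postgres client):

const { Pool } = require("pg");
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function seed() {
  let page = 0;
  while (true) {
    const rows = await fetchPage(page); // hypothetical paged API call
    if (rows.length === 0) break;
    // Write each batch immediately instead of accumulating rows in memory.
    for (const row of rows) {
      await pool.query("INSERT INTO items (name, value) VALUES ($1, $2)",
                       [row.name, row.value]);
    }
    page += 1; // the previous batch is now garbage-collectable
  }
  await pool.end();
}

Only one batch is ever held in memory at a time, so the dyno's footprint stays flat no matter how many rows end up in Postgres.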
I'm hosting my Rails 4.1.4 project with 2 Unicorn processes on a free dyno as my development server. After the app has been running for a while, it sometimes feels slow. I added New Relic and Logentries and enabled log-runtime-metrics, then looked at both:
» heroku web.1 - - source=web.1 dyno=heroku.21274089.82eb32b4-c547-4041-b452-d3fedae05ee9 sample#load_avg_1m=0.00 sample#load_avg_5m=0.00 sample#load_avg_15m=0.01
» heroku web.1 - - source=web.1 dyno=heroku.21274089.82eb32b4-c547-4041-b452-d3fedae05ee9 sample#memory_total=393.41MB sample#memory_rss=368.38MB sample#memory_cache=4.47MB sample#memory_swap=20.56MB sample#memory_pgpgin=121244pages sample#memory_pgpgout=25796pages
What I don't understand is that my dyno's memory is only sample#memory_rss=368.38MB, so why is it already using swap (sample#memory_swap=20.56MB)? From what I understood from the Heroku article https://devcenter.heroku.com/articles/dynos#memory-behavior, it should only switch to swap when it reaches the dyno's memory limit, which is 512 MB for a free dyno.
I was seeing significant swap even when using only 50% of available RAM in my app, so I asked. Here's a quote from their support team:
We leave Ubuntu's swappiness at its default of 60 on our runtimes, including PX dynos.
A value of 60 ensures that your app will begin swapping well before it reaches max memory. The Linux kernel parameter vm.swappiness ranges from 0 to 100, with 0 indicating no swap and 100 indicating always swap.
So when running on Heroku you should expect your application to swap even if the footprint of your app is far smaller than the advertised RAM of your dyno.
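You can check the value yourself from a one-off dyno (assuming one-off dynos use the same base image as your running dynos):

$ heroku run cat /proc/sys/vm/swappiness
60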
I have a web app on Heroku that is constantly using around 300% of the allowed RAM (512 MB). My logs are full of Error R14 (Memory quota exceeded) [an entry every second]. Although in bad shape, my app still works.
Apart from degraded performance, are there any other consequences I should be aware of (like Heroku charging extra for anything related to this issue, scheduled jobs failing, etc.)?
To the best of my knowledge, Heroku will not take action even if you continue to exceed the memory quota. However, I don't think the availability of the full 1 GB of overage (out of the 1.5 GB you are consuming) is guaranteed, or guaranteed to be physical memory at all times. Also, if you are running close to 1.5 GB, you risk going over the hard 1.5 GB limit, at which point your dyno will be terminated (an R15 error).
I also get the following every time I run a specific task on my Heroku app and check heroku logs --tail:
Process running mem=626M(121.6%)
Error R14 (Memory quota exceeded)
My solution would be to check out Celery and Heroku's documentation on this.
Celery is an open source asynchronous task queue, or job queue, which makes it very easy to offload work out of the synchronous request lifecycle of a web app onto a pool of task workers to perform jobs asynchronously.
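On Heroku that usually means moving the heavy task into a separate worker dyno declared in your Procfile; a minimal sketch, where the module names app and tasks are assumptions:

web: gunicorn app:app
worker: celery -A tasks worker --loglevel=info

The web process then just enqueues the job and returns immediately, and the worker dyno's memory use is isolated from the web dyno's quota.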