I've deployed Prisma to Heroku: https://www.prisma.io/blog/heroku-integration-homihof6eifi/
The only traffic to my site is me testing it. I started out with one hobby dyno, but the 'resting' memory usage was around 95% and I was getting some "Memory quota exceeded" errors.
To try to fix this I increased the dyno count to 2, so I'm now paying $50 per month.
Despite this I'm still getting memory usage warnings. $50 a month for a service that's struggling with very low traffic seems crazy. Have I set something up wrong? Should I have increased the memory limit rather than the number of dynos?
A future release of Prisma will introduce significant improvements to memory handling. While operating Prisma on a 512 MB dyno is certainly possible, you will currently have a much better experience upgrading to a 1024 MB dyno. Running a single large dyno will provide a much bigger improvement than running two small dynos.
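If it helps, here is a minimal sketch of that switch with the Heroku CLI (assuming a Standard-tier app; the process name is illustrative):

    # Resize the web process to a 1024 MB Standard-2X dyno
    heroku ps:resize web=standard-2x
    # and drop back to a single dyno so you are not paying for two
    heroku ps:scale web=1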
Hope this helps :-)
Related
Out of the blue my Magento 2.3 installation crashed: the CPU suddenly spikes and eventually Magento crashes. There were no updates or anything like that, so that can't be the cause.
The main culprit seems to be Redis: after setting stop-writes-on-bgsave-error no in Redis, the system didn't crash anymore, but memory and CPU usage are still high.
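For reference, that change can be applied at runtime like this (the redis-cli form; the same setting can be made permanent in redis.conf):

    redis-cli config set stop-writes-on-bgsave-error no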
Found the cause: Google was doing something like a DDoS attack. 150k requests per day...
Adjusted the robots.txt so less gets crawled and indexed.
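Something along these lines; the paths are illustrative, not my exact rules (Magento's catalog search and session-ID URLs generate an enormous number of crawlable pages):

    User-agent: *
    Disallow: /catalogsearch/
    Disallow: /*?SID=
    Disallow: /checkout/
    Disallow: /customer/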
I'm thinking about launching a web app on Heroku, but I have no idea how to estimate the performance cost. According to their website, one dyno with professional support is at least $25/month and gives a machine with "512MB or 1GB RAM".
If my website serves a standard load of assets and 1,000 people access it every day, how many dynos would be recommended to give users reasonably good speed?
This will likely get shut down as it's not a 'programming' question. I'll try to answer quickly.
It all depends on how fast your page executes on the server; if it's slow, you run the risk of requests timing out before the app server can serve them. Adding more dynos does not make your application faster, though. Changing the dyno type (standard-1x, standard-2x, performance-m, performance-l) does, as it makes more resources available. However, slow code will always be slow code; throwing more resources at it is a band-aid that only buys you time.
In short, it's very application-dependent, both from a code point of view and a traffic point of view: do those 1,000 users arrive at the same time, or are they spread out?
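A back-of-envelope estimate shows why the raw visitor count tells you little; every figure below is an assumption, not a measurement:

    # Rough load estimate for 1,000 daily users (all figures assumed)
    daily_users = 1_000
    requests_per_user = 20   # pages plus asset requests per visit
    peak_factor = 10         # traffic rarely spreads evenly over 24 hours

    avg_rps = daily_users * requests_per_user / 86_400
    print(f"average: {avg_rps:.2f} req/s, assumed peak: {avg_rps * peak_factor:.1f} req/s")
    # => average: 0.23 req/s, assumed peak: 2.3 req/s

At rates like these a single dyno is normally plenty, provided each request completes quickly; it's sustained slow responses, not visitor counts, that force you to scale.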
I have a Compute Engine instance on Google Cloud with a 4-core Ivy Bridge CPU and 15 GB RAM, and on it I have deployed my Rails application.
Before this I hosted my Rails application on DigitalOcean, where I was getting good throughput and CPU and memory consumption were minimal.
Memory consumption never crossed 3 GB on DigitalOcean, and CPU consumption maxed out around 50-55%.
On DigitalOcean I had a single instance with a 4-core CPU and 8 GB RAM, and even though I was running MySQL, Redis, and Sidekiq on the same instance, it could still handle the load easily.
But as soon as I moved to Google Cloud, I started facing problems with the same code.
I was actually expecting more throughput from Google Cloud, since Google has data centers in Asia, but instead I started facing issues.
When I restart Apache everything comes back to normal, then after 2-3 hours it goes back to consuming memory and CPU until the instance finally stops responding to requests at all.
I checked the logs... and there is no real increase in traffic. I also checked the logs during the high-load periods to see whether someone was attacking the servers.
But all the requests I found came from valid browsers with valid user agents.
I don't understand why this is happening.
At first I suspected a DDoS/DoS attack, but I didn't find anything suspicious in the logs (Apache access logs and Rails logs).
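For what it's worth, this is roughly how I looked for abusive clients in the access log (the log path is an assumption for my setup):

    # count requests per client IP and show the ten busiest
    awk '{print $1}' /var/log/apache2/access.log | sort | uniq -c | sort -rn | head

Nothing stood out there either.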
Please help me.
Hoping for a good approach I can try in order to debug the issue.
Thanks :)
I have a web app on Heroku which is constantly using around 300% of the allowed RAM (512 MB). My logs are full of Error R14 (Memory quota exceeded), with an entry every second. Although in bad shape, my app still works.
Apart from degraded performance, are there any other consequences I should be aware of (like Heroku charging extra for anything related to this issue, scheduled jobs failing, etc.)?
To the best of my knowledge, Heroku will not take action even if you continue to exceed the memory quota. However, I don't think the availability of the full 1 GB of overage (out of the 1.5 GB you are consuming: 300% of 512 MB is 1536 MB) is guaranteed, or guaranteed to be physical memory at all times. Also, since you are running close to 1.5 GB, you risk going over the hard 1.5 GB limit, at which point your dyno will be terminated.
I also get the following every time I run a specific task on my Heroku app and check heroku logs --tail:
Process running mem=626M(121.6%)
Error R14 (Memory quota exceeded)
My solution would be to check out Celery and Heroku's documentation on this.
Celery is an open source asynchronous task queue, or job queue, which makes it very easy to offload work out of the synchronous request lifecycle of a web app onto a pool of task workers to perform jobs asynchronously.
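A minimal sketch of what that offloading looks like; the broker URL, module, and task names are placeholders, not your app's actual code:

    # tasks.py: move the memory-hungry work onto a Celery worker dyno
    import os
    from celery import Celery

    app = Celery("tasks", broker=os.environ.get("REDIS_URL", "redis://localhost:6379/0"))

    @app.task
    def heavy_task():
        # the expensive work that was tripping R14 on the web dyno
        ...

    # In the web code, enqueue instead of running inline:
    #     heavy_task.delay()
    # and run a worker dyno via the Procfile:
    #     worker: celery -A tasks worker --loglevel=info

The web dyno then answers the request immediately, and the worker's memory usage counts against its own quota, not the web dyno's.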
Can anyone give an estimate of how much data can be inserted into a 5 MB database?
Also, would one dyno handle a Slashdot, Hacker News, etc. front page?
Thanks.
Quite a surprising amount... certainly enough to get you started.
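How much exactly depends entirely on your rows; as a rough, assumption-laden estimate:

    # Ballpark row count for a 5 MB quota (all figures assumed)
    quota_bytes = 5 * 1024 * 1024
    avg_row_bytes = 120      # small text columns plus per-row overhead
    index_factor = 1.5       # indexes often add roughly 50% on top
    print(f"~{quota_bytes / (avg_row_bytes * index_factor):,.0f} rows")
    # => ~29,127 rows

So tens of thousands of small rows is a reasonable expectation.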
I use one dyno all the time for low-traffic apps (like my personal website and a few XML servers), but the great thing is that if you start getting loads of visitors and run into performance issues, all it takes is one little click to add extra dynos.
You should also consider worker hours; I'm not sure what you are doing, but a lot of apps need a background process.
This service could be interesting if you are using Ruby; it scales your dynos on Heroku automatically:
http://hirefireapp.com