For a pure Rack app running on a Heroku hobby dyno with a Heroku Postgres hobby dev add-on, how do you know how many workers & threads to configure Puma to have?
Based on this article, it seems you'd be safe running 2-4 processes within a single Heroku web dyno, depending on your memory usage. For threads, I'd stick with the default (5) and adjust later if your app's workload calls for it.
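For reference, here's a minimal config/puma.rb sketch matching those numbers; the WEB_CONCURRENCY and MAX_THREADS variable names are just a convention I'm assuming here, not anything the platform enforces:

```ruby
# config/puma.rb -- a sizing sketch; tune the env vars to your dyno's memory
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))    # 2-4 processes per dyno
max_threads = Integer(ENV.fetch("MAX_THREADS", 5))  # Puma's default of 5
threads max_threads, max_threads
preload_app!  # load the app before forking so workers share memory
port ENV.fetch("PORT", 3000)
```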
I'd recommend tuning your app to a particular config, then keeping an eye on the Heroku logs for a few days to see whether you get too many R14 (memory quota exceeded) errors. At that point you know you've exhausted the dyno's memory and should scale the worker count back.
It depends greatly on how memory-hungry your application is. Given that it's a pure Rack app, and most of the literature out there is for Rails apps, I'd imagine your optimal values are higher.
The Librato add-on is really helpful here in letting you see your memory usage in near real-time, so you can quickly tweak and monitor how close you are to the 512 MB limit. There's a free tier, and it doesn't need any additional instrumentation either. (I'm not affiliated with them in any way, but we do use their service!)
How many requests can Heroku's "Vegur" HTTP proxy handle for a simple "hello world" before hitting the limits (if any)?
Will setting up nginx on an EC2 micro instance, serving the same index.html, allow more throughput?
Does Heroku throttle requests per dyno?
Heroku Dynos are all small processes running on EC2 machines behind the scenes. Therefore, it will almost always be more performant to run identical code on an EC2 server directly as opposed to Heroku, because when you're using Heroku you're sharing a server with other developers.
With that said, Heroku isn't really about having the fastest server -- it's about simplifying your entire development and deployment stack as much as possible to:
Avoid downtime.
Force you to architect code properly.
Make it easier to scale your project as it grows.
etc.
Heroku imposes a 300 MB limit on slug size. Normally, this should be way more than enough for most web apps. However, our company uses libraries that are frequently 50 MB or more each, and there are a lot of them.
Is there any way to increase the slug size limit on Heroku? Has anyone had any success in overcoming this limit?
Reading a bit of the documentation, it seems that raising the limit is not something Heroku does. The limit reflects their expectations of storage consumed per application, and circumventing it may result in malfunctioning services. If they consider it a breach of contract, they may take down your service, so I would not advise trying too hard. The documentation also notes that smaller slugs deploy faster, which runs directly counter to what you're asking for.
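If the goal is simply to stay under the limit rather than raise it, the usual first step is a .slugignore file, which keeps files that are only needed at build or test time out of the slug. A minimal sketch; the patterns below are placeholders for whatever large assets your own repo carries:

```
*.psd
*.pdf
docs
test
spec
```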
I have no experience with graph DB applications, but I'm trying to write one, and I intend to host it on Heroku.
I can see there are two graph DB service providers with free plans, but I can't decide which one to use; they each market themselves using different attributes, so I can't compare them! For example:
GrapheneDB mentions only the node and relationship count limits and the query time limit, but nothing about a storage limit.
Graph Story mentions the RAM limit, storage limit, and data transfer limit.
Other properties are mentioned too, but they aren't comparable between the two providers.
Has anyone tried either of these services on Heroku and could share their experience, please?
EDIT: I found this page, which gives an idea of how much space Neo4j needs.
I'll take a spin at answering this question while staying as objective as possible, since I, and some other frequent answerers here, have good relationships with both providers.
Both have their own pros and cons, and I think looking only at the Heroku side may not be the best approach.
There is also one difference between the two that you need to know: Graph Story provides Neo4j Enterprise, while GrapheneDB provides Neo4j Community. Personally, though, I think that if you run Neo4j on Heroku you don't need Enterprise, because "enterprise" users of Neo4j run their own environments, with clustering, on servers with "real" RAM and SSDs; that kind of setup can in fact be managed by both providers with a licence and support.
You mention the storage limit. Storage depends on the number of nodes, relationships, and properties in the database, so if there is a limit of 1,000 nodes, I don't think you need to worry about the storage limit. (A quick way to check your counts is sketched below.)
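To see how close you are to a node or relationship cap, two Cypher counts are enough. Here's a rough sketch using the neography gem, assuming the add-on exposes its connection URL in a config var such as GrapheneDB's GRAPHENEDB_URL (Graph Story's variable name will differ):

```ruby
require 'neography'

# Connect with the URL the add-on injects (the config var name varies by provider)
neo = Neography::Rest.new(ENV['GRAPHENEDB_URL'])

# Count nodes and relationships to compare against your plan's limits
nodes = neo.execute_query("MATCH (n) RETURN count(n) AS nodes")
rels  = neo.execute_query("MATCH ()-[r]->() RETURN count(r) AS rels")
puts nodes["data"].first.first  # node count
puts rels["data"].first.first   # relationship count
```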
I tried both on Heroku, and apart from the node limit there is not much difference in performance when you deploy on free dynos.
If you are a startup, running Neo4j on Heroku is great, on a paid plan of course; both providers have good support, and both reward their long-term customers.
If you look only at the free plans, then you don't need to dwell on the limitations, because either way it will simply be LIMITED!
Outside of Heroku, here are some points I noted:
GrapheneDB runs on multiple platforms, including Azure, which is cool
Graph Story runs Enterprise, so you can benefit from the high-performance cache
GrapheneDB has an accessible API for creating and destroying Neo4j servers on the fly
Depending on your location, you may prefer support from Europe or from the US
Basic plans on both suffer from some latency or boot time when not used for a long time
Both have support for Neo4j Spatial
Both are active in the Neo4j community and do cool stuff; you can meet them in real life :)
Now you can test them both for free!
Yesterday I tried a CRUD application deployed in two Heroku apps: the first with Graph Story and the other with GrapheneDB.
I monitored both with New Relic and found that the Graph Story app had an average latency varying from 1 to 2 seconds, while the GrapheneDB service needed only 20 to 40 ms to perform the same operations.
(Screenshots: Graph Story latency; GrapheneDB latency.)
I wanted to try a paid plan in Graph Story for a few minutes, but to do that you need to contact support and wait for an unknown amount of time. GrapheneDB, by contrast, lets you change plans on your own without any issue.
I tried to export the database in Graph Story, but the operation is not real-time: you have to wait for a link sent via email. I initiated the operation twice, and after 10 hours the email still had not arrived.
In GrapheneDB, by contrast, the export is immediate, with no anxious waiting for emails.
Graph Story offers the following features that differentiate it from other offerings:
Graph Story offers the Enterprise version of Neo4j
There are no limits on nodes or relationships on the free plan
Max query time is 30 seconds
You wouldn't want to use the free plan in production, of course, but it's excellent for proofs-of-concept, learning Neo4j, small hobby projects, etc.
(Full disclosure: I'm the CTO at Graph Story.)
Is there a tool or a way to learn how many simultaneous connections my Heroku app can manage (with one dyno) before it gives slow response times or timeouts? I've read about Blitz and New Relic, but I'm unsure how to use them!
There's no quick and easy way to understand how your app scales. But the process usually goes along these lines:
Launch your target environment (a single dyno in your case)
Set up monitoring for all the metrics you care about. Usually this will include CPU load, memory usage, I/O operations, database connections, etc., as well as any relevant application metrics. For Heroku, I recommend using Librato for a complete monitoring set.
Run load tests that resemble typical usage of your application: this means not just simple reads of static pages, but also dynamic operations such as user registrations, complex API calls, and anything else you think is relevant. The tools used here really depend on what your app does and how it is built (a minimal example appears at the end of this answer).
See where you hit your limits. Assume nothing; you might be bound by any of the resources you are using.
Resolve bottlenecks, rinse, repeat.
This will give you a rough idea of where your application will require further resources in order to scale.
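For the load-generation step specifically, even a basic ApacheBench run against a single-dyno app will surface the first bottleneck; a minimal example, with a placeholder URL and illustrative numbers:

```
# 1,000 requests, 50 concurrent, against a representative endpoint
ab -n 1000 -c 50 https://your-app.herokuapp.com/
```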
I deployed an application on Heroku. I'm using the free service.
Quite frequently, I get the following error.
PG::Error: ERROR: out of memory
If I refresh the browser, it's ok. But then, it happens again randomly.
Why does this happen?
Thanks.
Sam Kong
If you experience these errors when running queries, your queries are too complicated or inefficient. The free tier has no cache, so you're already running close to the limit.
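One way to find the offending query, assuming you have the Heroku CLI installed, is to open a psql session against your database and inspect the plan; the query below is just a placeholder:

```
$ heroku pg:psql
=> EXPLAIN ANALYZE SELECT * FROM your_big_table ORDER BY created_at DESC;
```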
If you're getting these errors otherwise, open a support ticket at https://help.heroku.com
Simply running heroku restart helped me, though.
If you are not on a free tier, it may be because you are using too much memory across your PG connections.
Consider an app running on several dynos, each with several processes, each of those with many threads: you may be filling up the connection pool. (A rough sizing example follows.)
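As a hypothetical sizing check (the dyno/process/thread numbers are illustrative; 20 is the hobby-dev plan's connection limit):

```ruby
# Each thread of each process on each dyno can hold its own PG connection
dynos     = 2
processes = 3   # e.g. Puma workers per dyno
threads   = 5   # threads per process
total_connections = dynos * processes * threads  # => 30
# A hobby-dev database allows 20 connections, so this setup would exceed the cap
```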
Also, as noted in Heroku's Help Center, you may be caching too many prepared statements that won't be reused.