Confusion over Heroku pricing on Hobby and Professional (Standard 1X) dynos

My goal is to run 2 workers and 1 web process for under 14 USD.
Alternatively, can I run multiple processes under the same dyno, and is that a good production practice? (N.B.: I have a high-traffic site.) This seems like the only way to keep it under 14 USD.
I have a Procfile defined like below:
web: gunicorn -b 0.0.0.0:5004 server:engine --preload --log-file=- --log-level DEBUG -w 4
worker: python3 -u _worker.py
This is running on a Heroku Hobby dyno ($7), and since the two processes need two dynos, it costs me 7 + 7 = 14 USD.
Now if you look at the Professional (Standard 1X) pricing, each dyno there costs 25 USD.
But the Heroku pricing section clearly says "Unlimited background workers" from Standard 1X onwards (check the last point of the Standard dyno pricing). What does this even mean? Does it just mean I can scale but have to pay for each worker, or can I start multiple workers in a single dyno and keep it at 25 USD?

A separate dyno is required for each additional worker process. The wording on Heroku's pricing page is a little misleading.
I queried it with support and they clarified:
In this context, "unlimited background workers" means you can run as
many worker processes as you need to. On the lower tiers (free and
hobby), the number of worker processes that can be used is limited.
However, Heroku is built on a 1-process-per-dyno model, which means
every worker needs to run on its own dyno. So you will be charged for
1 dyno for each worker process you use. To say it a different way,
it's not "unlimited worker processes for $25/mon", but instead it's
"run unlimited worker processes at $25/mon each".
Basically, the Free tier can have up to 2 dynos, Hobby can have up to 10 and the Production tiers can have as many as you want ... but you'll pay for each dyno :)
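For completeness: people do sometimes work around this by starting more than one OS process inside a single dyno, using a wrapper script in the Procfile. A sketch (the `start.sh` name is made up; both processes then share one dyno's RAM and CPU, and the worker is no longer supervised or logged separately, so it's a questionable fit for a high-traffic site):

Procfile:

```
web: sh start.sh
```

start.sh:

```
#!/bin/sh
# Run the worker in the background, then gunicorn in the foreground
python3 -u _worker.py &
exec gunicorn -b 0.0.0.0:$PORT server:engine --preload --log-file=- -w 4
```

Note that on Heroku the web process must bind to the $PORT environment variable, not a fixed port like 5004. If the dyno restarts or the worker crashes, both processes go down together.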

Related

Is there a difference between a dyno being "on" and "running"?

https://www.heroku.com/pricing says that:
a free dyno "Sleeps after 30 mins of inactivity, otherwise always on depending on your remaining monthly free dyno hours."
a hobby dyno is "Always on"
in case of hobby dynos: price is $7/month, and "You pay for the time your dyno is running as a fraction of the month."
My app will get approximately 5 requests per day which it will serve in 3-4 milliseconds each.
I'm thinking about changing from free dynos to hobby dynos to avoid sleeping.
How much will I pay?
Am I right that it is only 5 × 4 × 30 milliseconds = 600 milliseconds of running time in a month, which is approximately $0? Or do I have to pay the whole $7/month?
I'm also wondering this myself. There's no clear answer on Heroku's website. The so-called price "calculator" doesn't allow you to customise the number or type of dynos, let alone enter an estimated number of running minutes.
Judging by some of the comments on forums, I'm guessing it's the full $7 per month, but it would be great if this could be clarified.
Answer: The price is $7 per month and there is no option for the dyno to sleep. Dynos can be turned off but this potentially disables functionality on the deployed application.
Also note: you can't always mix dyno types, so you might have to pay for a worker dyno in addition to a web dyno. This can be a real sting when you've been testing/developing with free web and worker dynos. So the jump is not necessarily from $0 to $7, but from $0 to $14 per month.

sidekiq running on more than one dyno

Currently I have a Sidekiq process running on a single dyno on Heroku that goes through every user in a table and syncs their mail with Gmail:
Something along the lines of:
User.all.each do |user|
  # sync the user's email
end
The process runs every 10 minutes but as you would expect, the more users we have, the more time it takes to sync and our users want to see their mail pretty quickly.
We want to scale out and increase the dynos.
Can anyone suggest a way that I can split the job over 2 dynos?
Should I have a separate query on each dyno that splits the users into 2 or is there a better way?
You could make a scheduler job that runs every 10 minutes, and enqueues all users:
User.find_each do |user|
  # put a job for this user in redis or something
end
then have your workers constantly fetching new jobs. Each job then fetches/syncs the email for a single user, not "all" of them. Use find_each so that you're not trying to load all users into memory (see http://guides.rubyonrails.org/active_record_querying.html#retrieving-multiple-objects-in-batches ).
Re-architecting like this makes processing the sync/fetching easier to scale, as you can just add new worker dynos to increase throughput.
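A runnable sketch of that fan-out shape. Sidekiq and the User model are replaced by stand-ins here (an in-process Queue and a Struct) so the structure is visible; in the real app the scheduler would call perform_async and each worker dyno would pull jobs from Redis:

```ruby
# Stand-ins for the Redis-backed Sidekiq queue and the User model.
QUEUE = Queue.new
User  = Struct.new(:id, :email)
USERS = (1..5).map { |i| User.new(i, "user#{i}@example.com") }

# Scheduler job: runs every 10 minutes and only *enqueues*, so it stays
# fast no matter how many users there are.
def enqueue_sync_jobs
  USERS.each { |u| QUEUE << u.id }  # SyncUserMailJob.perform_async(u.id) with real Sidekiq
end

# Per-user job: what each worker dyno executes, one user at a time.
# Adding worker dynos now adds throughput with no code changes.
def sync_user_mail(user_id)
  # ...fetch and sync this one user's Gmail here...
  user_id
end

enqueue_sync_jobs
synced = []
synced << sync_user_mail(QUEUE.pop) until QUEUE.empty?
```

The job names (SyncUserMailJob) are illustrative, not from the original app. Keeping each job down to one user also means a single bad mailbox only retries that one job, not the whole sweep.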

Is there any way of programmatically scaling the number of Heroku workers?

We need a method of determining the load and then scaling the number of dyno workers accordingly.
I am using the workless gem with DelayedJob and it works like a charm!
Basically you just need to install it and scale your worker dynos to 0. When a new job is added to the DJ queue it picks it up within a few seconds, adds a worker, and scales back down when the task is done. There are options for multiple workers, but I never had that many jobs, so I can't share any experience there.
Found this article that shows how to scale workers via a Ruby script.
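The same thing can be done by hand against the Heroku Platform API. A sketch: the "one dyno per 100 jobs" threshold is invented, and the commented-out part shows the rough shape of the call with the platform-api gem (the app name and env var are hypothetical):

```ruby
# Decide how many worker dynos we want for a given queue depth.
# The "one worker per 100 queued jobs" rule is an arbitrary example.
def desired_workers(queue_size)
  return 0 if queue_size.zero?   # scale to zero when idle
  (queue_size / 100.0).ceil      # one worker per 100 jobs, rounded up
end

# With the platform-api gem you would then apply it roughly like this:
#
#   require 'platform-api'
#   client = PlatformAPI.connect_oauth(ENV['HEROKU_OAUTH_TOKEN'])
#   client.formation.update('my-app', 'worker',
#                           'quantity' => desired_workers(queue_size))
```

Run the check from a scheduled job (or the workers themselves) and remember that each running worker dyno is billed, so scaling down when idle is where the savings come from.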

Occasional slow requests on Heroku

We are seeing inconsistent performance on Heroku that is unrelated to the recent unicorn/intelligent routing issue.
This is an example of a request which normally takes ~150ms (and 19 out of 20 times that is how long it takes). You can see that on this request it took about 4 seconds, or between 1 and 2 orders of magnitude longer.
Some things to note:
the database was not the bottleneck, and it spent only 25ms doing db queries
we have more than sufficient dynos, so I don't think this was the bottleneck: 20 double dynos running unicorn with 5 workers each, and we get only 1000 requests per minute at an average response time of 150ms. That means we should be able to serve (60 / 0.150) * 20 * 5 = 40,000 requests per minute; in other words, we had 40x the needed capacity when this measurement was taken.
So I'm wondering what could cause these occasional slow requests. As I mentioned, anecdotally it seems to happen in about 1 in 20 requests. The only thing I can think of is there is a noisy neighbor problem on the boxes, or the routing layer has inconsistent performance. If anyone has additional info or ideas I would be curious. Thank you.
I have been chasing a similar problem myself, with not much luck so far.
I suppose the first order of business would be to recommend New Relic. It may have some more info for you on these cases.
Second, I suggest you look at queue times: how long your request was queued. Look at New Relic for this, or do it yourself with the "start time" HTTP header (X-Request-Start) that Heroku adds to your incoming request (just print now() minus the start time as your queue time).
When those failed me in my case, I tried coming up with things that could go wrong, and here's an (unorthodox? weird?) list:
1) DNS -- are you making any DNS calls in your view? These can take a while. Even DNS requests for resolving DB host names, Redis host names, external service providers, etc.
2) Log performance -- Heroku collects all your stdout using their "Logplex", which it then drains to your own defined logdrains, services such as Papertrail, etc. There is no documentation on the performance of this, and writes to stdout from your process could block, theoretically, for periods while Heroku is flushing any buffers it might have there.
3) Getting a DB connection -- not sure which framework you are using, but maybe you have a connection pool that you are getting DB connections from, and that took time? It won't show up as query time, it'll be blocking time for your process.
4) Dyno performance -- Heroku has an add-on feature that will print, every few seconds, some server metrics (load avg, memory) to stdout. I used Graphite to graph those and look for correlation between the metrics and times where I saw increased instances of "sporadic slow requests". It didn't help me, but might help you :)
Do let us know what you come up with.
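For point 2 in the list above, a minimal sketch of the queue-time calculation. This assumes the X-Request-Start header value is in milliseconds since the epoch, which is what current Heroku routers send; verify against your own traffic, as older stacks used other units:

```ruby
# Queue time = when my dyno started processing, minus when the Heroku
# router received the request (the X-Request-Start header value).
def queue_time_ms(request_start_header, now = Time.now)
  (now.to_f * 1000).round - request_start_header.to_i
end

# e.g. in a Rack app:
#   queue_time_ms(env['HTTP_X_REQUEST_START'])
```

If the slow requests show large queue times, the problem is upstream of your app (routing or a saturated dyno); if queue time is near zero, look inside the request itself.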

Heroku: Am I paying when I do “heroku run console”?

I have run three commands individually (bash, console and node).
When I do heroku ps I get this:
$ heroku ps
Process State Command
------------ ------------------ ------------------------------
run.1 complete for 11m console
run.2 complete for 8m bash
run.3 complete for 3s node
Am I paying for those 3 processes? I can’t kill them.
The state of those processes is complete. They continue to be shown by heroku ps for a little while, but they should only count against your dyno hours for the time they were actually running. Here is an excerpt from the dyno pricing article on the Heroku Dev Center:
Dynos cost $0.05 per hour, prorated to the second. For example, an app
with four dynos is charged $0.20 per hour for each hour that the four
dynos are running.
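Applying that rate to the sessions in the question (a sketch; it assumes the quoted $0.05/hour figure applies to your dyno type):

```ruby
HOURLY_RATE = 0.05  # dollars per dyno-hour, from the excerpt above

# Prorated-to-the-second cost of a one-off dyno that ran for `seconds`.
def dyno_cost(seconds)
  (seconds / 3600.0) * HOURLY_RATE
end

dyno_cost(11 * 60)  # the 11-minute console session: under a cent
dyno_cost(3)        # the 3-second node run: effectively nothing
```

So the completed run.* processes in the listing cost fractions of a cent in total; once their state is complete, the meter has stopped.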
