I have run three commands individually (bash, console and node).
When I do heroku ps I get this:
$ heroku ps
Process       State               Command
------------  ------------------  ------------------------------
run.1         complete for 11m    console
run.2         complete for 8m     bash
run.3         complete for 3s     node
Am I paying for those 3 processes? I can’t kill them.
The state of those processes is complete. They continue to be shown by heroku ps for a little while, but they should only count against your dyno hours while they are actually running. Here is an excerpt from the dyno pricing article on the Heroku Dev Center:
Dynos cost $0.05 per hour, prorated to the second. For example, an app
with four dynos is charged $0.20 per hour for each hour that the four
dynos are running.
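Note that a completed one-off dyno is already finished, so there is nothing left to kill; it just lingers in the heroku ps listing for a short while and stops accruing dyno hours as soon as it exits. If a one-off dyno were still running and you wanted to stop it (and stop paying for it), you could stop it by the name shown in the listing, e.g.:
$ heroku ps:stop run.1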
My goal is to run 2 workers and 1 web process for under 14 USD.
Or: can I run multiple processes under the same dyno, and is that good production practice (n.b.: I have a high-traffic site)? This seems like the only way to keep it under 14 USD.
I have a Procfile defined like below:
web: gunicorn -b 0.0.0.0:5004 server:engine --preload --log-file=- --log-level DEBUG -w 4
worker: python3 -u _worker.py
This is running on a Heroku Hobby dyno ($7), and for two processes on two dynos it costs me 7 + 7 = 14 USD.
Now if you look at the Professional (Standard-1X) pricing, it costs 25 USD for each dyno.
But in the Heroku pricing section it clearly says "Unlimited background workers" from Standard-1X (check the last point of the Standard dyno pricing). What does this even mean? Does it just mean I can scale but I have to pay for each one, or can I start multiple workers in a single dyno and keep it at 25 USD?
A separate dyno is required for each additional worker process. The wording on Heroku's pricing page is a little misleading.
I queried it with support and they clarified:
In this context, "unlimited background workers" means you can run as
many worker processes as you need to. On the lower tiers (free and
hobby), the number of worker processes that can be used is limited.
However, Heroku is built on a 1-process-per-dyno model, which means
every worker needs to run on its own dyno. So you will be charged for
1 dyno for each worker process you use. To say it a different way,
it's not "unlimited worker processes for $25/mon", but instead it's
"run unlimited worker processes at $25/mon each".
Basically, the Free tier can have up to 2 dynos, Hobby can have up to 10 and the Production tiers can have as many as you want ... but you'll pay for each dyno :)
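As a concrete illustration of "you pay for each dyno" (the counts below are just an example), scaling is done per process type, and every resulting dyno is billed individually:
$ heroku ps:scale web=1 worker=2
That is 1 web dyno plus 2 worker dynos, so on Standard-1X you would be billed for 3 dynos, i.e. 3 x $25/month.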
https://www.heroku.com/pricing says that:
a free dyno "Sleeps after 30 mins of inactivity, otherwise always on depending on your remaining monthly free dyno hours."
a hobby dyno is "Always on"
in the case of hobby dynos: the price is $7/month, and "You pay for the time your dyno is running as a fraction of the month."
My app will get approximately 5 requests per day which it will serve in 3-4 milliseconds each.
I think about changing from free dynos to hobby dynos to avoid sleeping.
How much will I pay?
Am I right that it is only 5 × 4 × 30 = 600 milliseconds of running time in a month, which is approximately $0? Or should I pay the whole $7/month?
I'm also wondering this myself. There's no clear answer on Heroku's website. The so-called price "calculator" doesn't allow you to customise the number or type of dynos, let alone enter an estimated number of running minutes.
Judging by some of the comments on forums, I'm guessing it's the full $7 per month, but it would be great if this could be clarified.
Answer: The price is $7 per month and there is no option for the dyno to sleep. Dynos can be turned off, but this potentially disables functionality on the deployed application.
Also note: you can't always mix dyno types, so you might have to pay for a worker dyno in addition to a web dyno. This can be a real sting when you've been testing/developing with free web and worker dynos. So the jump is not necessarily from $0 to $7, but from $0 to $14 per month.
I'm building a Heroku app that relies on scheduled jobs. We were previously using Heroku Scheduler but clock processes seem more flexible and robust. So now we're using a clock process to enqueue background jobs at specific times/intervals.
Heroku's docs mention that clock dynos, as with all dynos, are restarted at least once per day--and this incurs the risk of a clock process skipping a scheduled job: "Since dynos are restarted at least once a day some logic will need to exist on startup of the clock process to ensure that a job interval wasn’t skipped during the dyno restart." (See https://devcenter.heroku.com/articles/scheduled-jobs-custom-clock-processes)
What are some recommended ways to ensure that scheduled jobs aren't skipped, and to re-enqueue any jobs that were missed?
One possible way is to create a database record whenever a job is run/enqueued, and to check for the presence of expected records at regular intervals within the clock job. The biggest downside to this is that if there's a systemic problem with the clock dyno that causes it to be down for a significant period of time, then I can't do the polling every X hours to ensure that scheduled jobs were successfully run, since that polling happens within the clock dyno.
How have you dealt with the issue of clock dyno resiliency?
Thanks!
You will need to store data about jobs somewhere. On Heroku, you have no guarantee that your code is running exactly once, or that it is running at all times (because of dyno cycling).
You may use a project like this one (though it is not widely used): https://github.com/amitree/delayed_job_recurring
Or, depending on your needs, you could create a scheduler or process that schedules jobs for the next 24 hours and runs every 4 hours, so that you can be sure your jobs will be scheduled, and hope that the Heroku scheduler runs at least once every 24 hours.
And have at least 2 workers processing the jobs.
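To make the "catch up on restart" idea concrete, here is a minimal sketch of a clock process using the clockwork gem; the JobRun model, the SyncJob worker and the hourly interval are my own assumptions, not anything Heroku prescribes:

# clock.rb -- run as the clock process type in the Procfile
require_relative 'config/boot'
require_relative 'config/environment'
require 'clockwork'

module Clockwork
  # Startup catch-up: if the dyno restarted past the point where the hourly
  # sync should have run, enqueue it immediately. JobRun is a hypothetical
  # model that records the last successful run per job name.
  last_run = JobRun.find_by(name: 'hourly_sync')&.ran_at
  SyncJob.perform_later if last_run.nil? || last_run < 1.hour.ago

  every(1.hour, 'hourly_sync') do
    SyncJob.perform_later
    job_run = JobRun.find_or_initialize_by(name: 'hourly_sync')
    job_run.update(ran_at: Time.current)
  end
end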
Though it requires human involvement, we have our scheduled jobs check in with Honeybadger via an after_perform hook in Rails:
# frozen_string_literal: true

class ScheduledJob < ApplicationJob
  after_perform do |job|
    check_in(job)
  end

  private

  def check_in(job)
    token = Rails.application.config_for(:check_ins)[job.class.name.underscore]
    Honeybadger.check_in(token) if token.present?
  end
end
This way, when we happen to have poorly timed restarts from deploys, we at least know that scheduled work which should have run didn't actually happen.
Would be interested to know if someone has a more fully-baked, simple solution!
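For reference, the config_for(:check_ins) lookup in the code above reads config/check_ins.yml, keyed by Rails environment; presumably it maps underscored job class names to Honeybadger check-in tokens, something like (the job names and tokens here are made up):

# config/check_ins.yml
production:
  nightly_report_job: "hb-checkin-token-1"
  weekly_cleanup_job: "hb-checkin-token-2"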
Currently I have a Sidekiq process running on a single dyno on Heroku that goes through every user in a table and syncs their mail with Gmail.
Something along the lines of:
User.all.each do |user|
  # sync this user's email
end
The process runs every 10 minutes but as you would expect, the more users we have, the more time it takes to sync and our users want to see their mail pretty quickly.
We want to scale out and increase the dynos.
Can anyone suggest a way that I can split the job over 2 dynos?
Should I have a separate query on each dyno that splits the users into 2 or is there a better way?
You could make a scheduler job that runs every 10 minutes, and enqueues all users:
User.find_each do |user|
  # put a job for this user in Redis (or some other queue)
end
then have your workers constantly fetching new jobs. Each job then fetches/syncs the email for a single user rather than all of them. Use find_each so that you're not trying to load all users into memory (http://guides.rubyonrails.org/active_record_querying.html#retrieving-multiple-objects-in-batches).
Re-architecting like this makes processing the sync/fetching easier to scale, as you can just add new worker dynos to increase throughput.
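A minimal sketch of that fan-out with Sidekiq in a Rails app (the class names and the 10-minute schedule are assumptions): a lightweight scheduler job enqueues one job per user, and however many worker dynos you run simply drain the queue in parallel:

# app/workers/enqueue_mail_sync_job.rb
# Kicked off every 10 minutes (e.g. by Heroku Scheduler or a clock process).
class EnqueueMailSyncJob
  include Sidekiq::Worker

  def perform
    # find_each loads users in batches instead of all at once.
    User.find_each do |user|
      SyncUserEmailJob.perform_async(user.id)
    end
  end
end

# app/workers/sync_user_email_job.rb
# One unit of work: sync a single user's mailbox.
class SyncUserEmailJob
  include Sidekiq::Worker

  def perform(user_id)
    user = User.find(user_id)
    # ... sync this user's email with Gmail ...
  end
end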
I am writing a Rails app in which I have a piece of code that is dependent on time zones.
How can I make my Sidekiq worker work only during the daytime of a given time zone, i.e. for a certain duration of time every day?
The worker should pause at a certain point in time (the end of the day), even when its queue is not empty, and resume the next day.
I suggest setting up a cron job, using the whenever gem, that stops the Sidekiq workers at a certain time and starts them again at a certain time.
You can stop Sidekiq workers via sidekiqctl:
sidekiqctl stop [pidfile] 60
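For what it's worth, a hedged sketch of the whenever schedule (config/schedule.rb) that could drive this; the paths, pid file and the 8 am/8 pm window are assumptions, and the -d daemonize flag only exists in Sidekiq versions before 6.0:

# config/schedule.rb (whenever gem)
set :output, 'log/cron.log'

# Stop the workers at the end of the day; the trailing 60 gives running
# jobs up to 60 seconds to finish before the process is killed.
every 1.day, at: '8:00 pm' do
  command 'cd /path/to/app && bundle exec sidekiqctl stop tmp/pids/sidekiq.pid 60'
end

# Start them again in the morning (daemonized; pre-6.0 Sidekiq only).
every 1.day, at: '8:00 am' do
  command 'cd /path/to/app && bundle exec sidekiq -e production -d -P tmp/pids/sidekiq.pid -L log/sidekiq.log'
end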