Heroku: stop all one-off dynos without knowing dyno names

I run the following command twice to start my two one-off dynos:
heroku run:detached python main.py
and then stop each of them using
heroku ps:stop <dyno-name>
However, I wish to stop all my dynos without knowing their names beforehand, since I am trying to automate starting and stopping my dynos with crontab.

I wrote a Python script to achieve this; it's not that optimised, but it does the job.
import os

path = 'cd <path-to-project>'

def stop():
    online_ps = os.popen(path + ' && heroku ps').read()
    for ps in online_ps.split():
        if ps.startswith('run'):  # one-off process names look like run.1234
            os.system(path + ' && heroku ps:stop ' + ps)

def start():
    os.system(path + ' && heroku run:detached python main.py <arg1>')  # dyno 1
    os.system(path + ' && heroku run:detached python main.py <arg2>')  # dyno 2

stop()
start()
Note:
mind the spaces in the commands
this code assumes each one-off process name starts with run, e.g. run.1234
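If you'd rather not shell out to the CLI at all, the same thing can be done against the Heroku Platform API. A minimal sketch, assuming HEROKU_API_KEY and HEROKU_APP environment variables and the requests library; the list-dynos and stop-dyno endpoints below are from the v3 Platform API, but verify the details against the current docs:

import os
import requests

APP = os.environ['HEROKU_APP']
HEADERS = {
    'Accept': 'application/vnd.heroku+json; version=3',
    'Authorization': 'Bearer ' + os.environ['HEROKU_API_KEY'],
}

def stop_one_off_dynos():
    # list all dynos for the app, then stop the ones of type "run"
    dynos = requests.get('https://api.heroku.com/apps/%s/dynos' % APP,
                         headers=HEADERS).json()
    for dyno in dynos:
        if dyno['type'] == 'run':  # one-off dynos have type "run"
            requests.post('https://api.heroku.com/apps/%s/dynos/%s/actions/stop'
                          % (APP, dyno['name']), headers=HEADERS)

This avoids depending on the CLI being logged in, which tends to be more robust under cron.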

Related

Monitor a rake task in Rails 5

I have a requirement wherein I have to constantly fetch messages from AWS SQS (Simple Queue Service) and update the related records of a model. The messages contain data that needs to be displayed to the related users as notifications as soon as they are fetched. This has been managed successfully using Action Cable. I have created a rake task that fetches the messages from the queue and does the required processing. This task is supposed to run in an infinite loop. I have two questions regarding it:
namespace :sqs_consumer do
  desc 'Get data from the AWS-SQS and process it.'
  task start: :environment do
    # initialize the sqs client
    loop do
      # read the queue for messages and process them in batches (if any)
    end
  end
end
1) Is it right to create a rake task for the above requirement? Is a task that runs in an infinite loop the right way? If not, what is the right approach? I can't run the task periodically, since I need the data in real time.
2) I want to monitor the task, and I am using Monit for that. Unfortunately, my Monit config doesn't seem to work. What am I doing wrong or missing?
check process aws_sqs_consumer with pidfile /var/www/myproject/shared/pids/sqs_consumer.pid
start program = "/bin/sh -c 'cd /var/www/myproject/current; nohup bundle exec rake sqs_consumer:start RAILS_ENV=staging -i 0 -P /var/www/myproject/shared/pids/sqs_consumer.pid >> log/sqs_consumer.log 2>&1 &'" as uid ubuntu and gid ubuntu
stop program = "/bin/sh -c 'kill $(cat /var/www/myproject/shared/pids/sqs_consumer.pid)'" as uid ubuntu and gid ubuntu
This Monit configuration worked for me:
check process aws_sqs_consumer with pidfile /var/www/myproject/shared/tmp/pids/sqs_consumer.pid
start program = "/bin/sh -c 'cd /var/www/myproject/current && BACKGROUND=y PIDFILE=/var/www/myproject/shared/tmp/pids/sqs_consumer.pid LOG_LEVEL=info bundle exec rake sqs_consumer:start RAILS_ENV=staging'"
stop program = "/bin/sh -c 'kill $(cat /var/www/myproject/shared/tmp/pids/sqs_consumer.pid)'" as uid ubuntu and gid ubuntu with timeout 90 seconds
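On question 1: a rake task that loops forever is workable, but it should exit cleanly when Monit sends TERM. A minimal sketch of the task body, assuming the aws-sdk-sqs gem and an SQS_QUEUE_URL environment variable; ProcessMessageJob is a hypothetical stand-in for your own processing:

namespace :sqs_consumer do
  desc 'Get data from AWS SQS and process it.'
  task start: :environment do
    poller  = Aws::SQS::QueuePoller.new(ENV['SQS_QUEUE_URL'])
    running = true
    trap('TERM') { running = false }  # Monit's stop program sends TERM

    # stop polling at the next iteration once we've been asked to shut down
    poller.before_request { |_stats| throw :stop_polling unless running }

    poller.poll(max_number_of_messages: 10) do |messages|
      messages.each { |msg| ProcessMessageJob.perform_now(msg.body) }
    end
  end
end

QueuePoller long-polls SQS for you, so the loop doesn't spin when the queue is empty, and throw :stop_polling lets the task fall out of the loop between requests instead of dying mid-message.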

Calling multiple, independent shells in Heroku

I am looking for the equivalent of this kind of shell scripting in Heroku dynos.
start cmd /k call "batch1.bat"
start cmd /k call "batch2.bat"
I tried
. batch1.sh &
. batch2.sh &
Even this one crashed:
./batch1.sh &
I would like to know whether Heroku supports this. If so, kindly help me with the correct set of commands.
You can spin up a Heroku one-off dyno as follows:
heroku run bash
This gives you an interactive bash shell, from which you can invoke any script in your git repo.
You can also, of course, run "batch" scripts directly, e.g.:
heroku run bash -c "ls -lt"
That will spin up a one-off dyno instance whose bash shell runs whatever command was passed to it, in this case ls -lt. After the command completes, the one-off dyno shuts down.
Note that as with all Heroku dynos, the filesystem is ephemeral, so any files created by your script will be gone after your one-off dyno exits.
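To get something close to the two parallel start cmd /k calls from the question, you could background both scripts inside a single one-off dyno and wait for them; a sketch, assuming batch1.sh and batch2.sh are executable files in your repo:

heroku run bash -c "./batch1.sh & ./batch2.sh & wait"

The wait keeps the dyno alive until both background scripts have finished; without it, the shell would exit immediately and take the dyno (and both scripts) down with it.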

Multiple Sidekiq Processes on a single dyno

My Rails app on Heroku has a Procfile that starts up a single Sidekiq process on my dyno
worker: bundle exec sidekiq -C config/sidekiq.yml
Is it possible to start up multiple Sidekiq processes on the same dyno?
My organization has a large dyno with a lot of memory that we're not using. Before downgrading the dyno, I was wondering if it was an option instead to make use of it by running multiple Sidekiq processes.
Sidekiq Enterprise's Multi-Process feature makes this trivial.
One option would be to run Sidekiq with a supervisor like https://github.com/ochinchina/supervisord
Add the binary to your repository (e.g. bin/supervisord) and add a config file and a Procfile entry.
For a dyno with 8 cores, your configuration could look like this:
[program:sidekiq]
command = bundle exec sidekiq -e ${RACK_ENV:-development} -C config/sidekiq_large.yml
process_name = %(program_name)s_%(process_num)s
numprocs = 8
numprocs_start = 1
exitcodes = 0
stopsignal = TERM
stopwaitsecs = 40
autorestart = unexpected
stdout_logfile = /dev/stdout
stderr_logfile = /dev/stderr
Then in your Procfile:
worker_large: env [...] bin/supervisord -c sidekiq_worker_large.conf
Make sure that you tune your Sidekiq concurrency settings.
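For instance, with numprocs = 8 the per-process concurrency multiplies across the dyno, so something like the following in the sidekiq_large.yml assumed above caps the total at 8 × 10 = 80 threads (the value itself is illustrative; size it to your dyno's memory):

:concurrency: 10
:queues:
  - default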
There is also an open-source third-party gem that is meant to do this without having to pay for Sidekiq Enterprise. I have not used it and don't know how well it works, but it looks good:
https://github.com/vinted/sidekiq-pool
It doesn't have any recent commits or many GitHub stars, but it may work just fine.
Here's another one, which warns that it's unfinished work in progress, but just for comparison:
https://github.com/kigster/sidekiq-cluster

How to run a command in the Heroku console from a rake task and know when it finishes

I have a rake task like the one below. However, after the first system line runs and User.find_each(&:save) finishes, the Heroku console stays open and the Ruby script does not proceed to the next system call, because the first one never "finishes". How do I exit the first system call (and proceed to the next) once the User records are done saving?
task :production do
  Bundler.with_clean_env do
    system("echo 'User.find_each(&:save)' | heroku run console --remote production")
    system("echo 'Post.find_each(&:save)' | heroku run console --remote production")
  end
end
You're running something (rails console) that doesn't terminate.
What happens when you run rails console locally? It won't terminate until you type exit.
So try this:
echo 'User.find...; exit' | heroku run console
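Applied to the task from the question, a sketch with the appended exit so each console session terminates before the next system call runs:

task :production do
  Bundler.with_clean_env do
    system("echo 'User.find_each(&:save); exit' | heroku run console --remote production")
    system("echo 'Post.find_each(&:save); exit' | heroku run console --remote production")
  end
end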

Starting or restarting Unicorn with Capistrano 3.x

I'm trying to start or restart Unicorn when I do cap production deploy with Capistrano 3.0.1. I have some examples that I got working with Capistrano 2.x using something like:
namespace :unicorn do
  desc "Start unicorn for this application"
  task :start do
    run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
  end
end
But when I try to use run in deploy.rb with Capistrano 3.x, I get an undefined method error.
Here are a couple of the things I tried:
# within the :deploy namespace I created a task that I called after :finished
namespace :deploy do
  ...
  task :unicorn do
    run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
  end
  after :finished, 'deploy:unicorn'
end
I have also tried putting the run within the :restart task:
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join('tmp/restart.txt')
      execute :run, "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/deployrails.conf.rb -D"
    end
  end
end
If I use just run "cd ... ", then I get a `wrong number of arguments (1 for 0)` error in the local shell.
I can start the unicorn process with unicorn -c /etc/unicorn/deployrails.conf.rb -D from my ssh'd VM shell.
I can kill the master Unicorn process from the VM shell using kill, but when I write the signal name without a leading dash (it should be kill -USR2 <pid>) I get an error, even though the process is killed. I can then start the process again using unicorn -c ...
$ kill USR2 58798
bash: kill: USR2: arguments must be process or job IDs
I'm very new to Ruby, Rails, and deployment in general. I have a VirtualBox setup with Ubuntu, Nginx, RVM, and Unicorn. I'm pretty excited so far, but this one is really messing with me; any advice or insight is appreciated.
I'm using the following code:
namespace :unicorn do
  desc 'Stop Unicorn'
  task :stop do
    on roles(:app) do
      if test("[ -f #{fetch(:unicorn_pid)} ]")
        execute :kill, capture(:cat, fetch(:unicorn_pid))
      end
    end
  end

  desc 'Start Unicorn'
  task :start do
    on roles(:app) do
      within current_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, "exec unicorn -c #{fetch(:unicorn_config)} -D"
        end
      end
    end
  end

  desc 'Reload Unicorn without killing master process'
  task :reload do
    on roles(:app) do
      if test("[ -f #{fetch(:unicorn_pid)} ]")
        execute :kill, '-s USR2', capture(:cat, fetch(:unicorn_pid))
      else
        error 'Unicorn process not running'
      end
    end
  end

  desc 'Restart Unicorn'
  task :restart
  before :restart, :stop
  before :restart, :start
end
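These tasks assume :unicorn_pid and :unicorn_config are set elsewhere in deploy.rb; a sketch with illustrative paths:

set :unicorn_pid,    -> { shared_path.join('tmp/pids/unicorn.pid') }
set :unicorn_config, -> { current_path.join('config/unicorn.rb') }

The lambdas make Capistrano evaluate the paths lazily, when the tasks actually run.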
I can't say anything specific about Capistrano 3 (I use 2), but I think this may help: How to run shell commands on server in Capistrano v3?
Also, I can share some Unicorn-related experience; hope this helps.
I assume you want a 24/7 graceful-restart approach.
Let's consult the Unicorn documentation on this matter. For a graceful restart (without downtime) you can use two strategies:
kill -HUP unicorn_master_pid This requires your app to have the 'preload_app' directive disabled, which increases the start-up time of every Unicorn worker. If you can live with that, go on; it's your call.
kill -USR2 unicorn_master_pid
kill -QUIT unicorn_master_pid
This is the more sophisticated approach, for when you're already dealing with performance concerns. Basically it re-executes the Unicorn master process, after which you should kill its predecessor. Theoretically you can get by with a usr2-sleep-quit sequence, as sketched below. Another way, and I'd say the right one, is to use Unicorn's before_fork hook, which is executed when the new master process spawns and is about to fork its own workers.
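A shell sketch of the usr2-sleep-quit sequence; the pidfile path is an assumption, adjust it to your setup:

old_pid=$(cat tmp/pids/unicorn.pid)
kill -USR2 "$old_pid"   # re-execute the master; the old pidfile becomes unicorn.pid.oldbin
sleep 5                 # give the new master time to boot
kill -QUIT "$old_pid"   # gracefully stop the old master once the new one is up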
You can put something like this in config/unicorn.rb:
# Where to drop a pidfile
pid project_home + '/tmp/pids/unicorn.pid'

before_fork do |server, worker|
  server.logger.info("worker=#{worker.nr} spawning in #{Dir.pwd}")

  # graceful shutdown
  old_pid_file = project_home + '/tmp/pids/unicorn.pid.oldbin'
  if File.exists?(old_pid_file) && server.pid != old_pid_file
    begin
      old_pid = File.read(old_pid_file).to_i
      server.logger.info("sending QUIT to #{old_pid}")
      # we're killing the old unicorn master right here
      Process.kill("QUIT", old_pid)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end
It's more or less safe to kill the old Unicorn when the new one is ready to fork workers. You won't get any downtime that way, and the old Unicorn will wait for its workers to finish.
And one more thing - you may want to put it under runit or init supervision. That way your Capistrano tasks will be as simple as sv reload unicorn, restart unicorn or /etc/init.d/unicorn restart. This is a good thing.
I'm just going to throw this into the ring: the capistrano 3 unicorn gem.
However, my issue with the gem (and any approach NOT using an init.d script) is that you may now have two methods of managing your Unicorn process: one with this cap task and one with the init.d scripts. Things like Monit/God will get confused, you may spend hours debugging why you have two Unicorn processes trying to start, and then you may start to hate life.
Currently I'm using the following with Capistrano 3 and Unicorn:
namespace :unicorn do
  desc 'Restart application'
  task :restart do
    on roles(:app) do
      puts "restarting unicorn..."
      execute "sudo /etc/init.d/unicorn_#{fetch(:application)} restart"
      sleep 5
      puts "whats running now, eh unicorn?"
      execute "ps aux | grep unicorn"
    end
  end
end
The above is combined with preload_app true and the before_fork and after_fork blocks mentioned by @dredozubov.
Note that I've named my init.d Unicorn script unicorn_application_name.
The new master that is started should kill off the old one. You can see with ps aux | grep unicorn that the old master hangs around for a few seconds before it disappears.
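For completeness, the after_fork counterpart usually just re-establishes per-worker database connections; a minimal sketch for config/unicorn.rb:

after_fork do |server, worker|
  # each forked worker needs its own ActiveRecord connection
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end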
To view all the cap tasks:
cap -T
and it shows:
cap unicorn:add_worker     # Add a worker (TTIN)
cap unicorn:duplicate      # Duplicate Unicorn; alias of unicorn:re...
cap unicorn:legacy_restart # Legacy Restart (USR2 + QUIT); use this...
cap unicorn:reload         # Reload Unicorn (HUP); use this when pr...
cap unicorn:remove_worker  # Remove a worker (TTOU)
cap unicorn:restart        # Restart Unicorn (USR2); use this when ...
cap unicorn:start          # Start Unicorn
cap unicorn:stop           # Stop Unicorn (QUIT)
So, to start unicorn in production:
cap production unicorn:start
and restart:
cap production unicorn:restart
PS: don't forget to set up the capistrano3-unicorn gem correctly:
https://github.com/tablexi/capistrano3-unicorn
You can try the native Capistrano way, as written here:
If preload_app true is set and you need Capistrano to clean up your oldbin pid, use:
after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    invoke 'unicorn:legacy_restart'
  end
end
