Concurrency with Resque on Heroku (running multiple workers per node)

Pardon my ignorance, but is there a way to increase the number of processes per dyno for Resque workers? And if so, how?
I'm currently using Unicorn to add concurrency to the web dynos, which has been working great so far. I would like to extend this to the Resque workers. I followed Heroku's guide to set up the concurrency.

Update: The solution below works, but is not recommended. For Resque concurrency on Heroku, use the resque-pool gem.
It is possible if you use the COUNT environment variable. Your Procfile will look something like:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
resque: env TERM_CHILD=1 COUNT=2 RESQUE_TERM_TIMEOUT=6 QUEUE=* bundle exec rake resque:workers
It is important to note that the rake task in the Procfile is resque:workers and not resque:work.
Update Explanation
There are major problems with the COUNT option and the rake resque:workers invocation in production on Heroku. Because of the way Resque starts up the multiple workers using threads, all of the SIGTERM, SIGKILL, etc. handling that allows workers to stop the current job, re-enqueue the job, and shut down properly (including de-registering) will never happen. This is because the signals are handled by the main process and not trapped by the threads. This can cause phantom workers to remain in the worker list long after they've been killed. This is probably why there is a comment in the Resque code warning that resque:workers should only be used in development mode.
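For reference, a minimal sketch of the recommended resque-pool setup (the queue spec and worker count below are placeholders, not values from the question). Add gem 'resque-pool' to the Gemfile and create a config/resque-pool.yml along the lines of:
production:
  "*": 2
Then point the Procfile at the pool instead of rake resque:workers:
worker: bundle exec resque-pool
resque-pool forks real child processes rather than threads, so each worker traps SIGTERM itself and can finish or re-enqueue its job and de-register cleanly.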

Related

Ruby spawned process listening on parent server port

I am running a Puma server Ruby application on Fedora 32. In my server I have certain calls which will spawn new long-running processes for various reasons. I came across an issue where my spawned processes were running and listening on the same port as my server. This led to issues with restarting my server on deploys, as the server could not start because of processes listening on the desired port. How could this be possible? From my understanding, when I spawn a process it should have completely different memory from the parent process and share no file descriptors. My spawn command is simply
my_pid = Process.spawn(my_cmd, %i[out err] => log_file)
Ruby version 2.7.0
Edit: something I had overlooked in my deploy process and in my original problem description: the server restart is not an actual teardown and start of a new process, but is done by signalling USR2 to the Puma server (as described here)
A quick workaround / solution would be to call fork, close Puma's socket within the forked process and then call exec, which replaces the running process... however, this workaround is limited to Unix systems. On Windows you can probably achieve something similar using a more complicated approach.
Sadly, I am not sure how to close Puma's listening socket. Perhaps this will help, but more likely than not there's some other trick to this.
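A rough sketch of that idea, assuming you have already located the file descriptor number of Puma's listening socket (for example by looking under /proc/<pid>/fd on Linux). The fd number 5, my_cmd and log_file below are placeholders, not known values:
pid = fork do
  listener = IO.for_fd(5)   # placeholder: the fd of Puma's listening socket
  listener.close            # drop the inherited listening socket in the child
  exec(my_cmd, %i[out err] => log_file)
end
Process.detach(pid)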
I believe I have found what is causing this. It seems to be an issue with Puma's restart process, which I was using. By restarting the server with a USR2 signal, it changes the flags on the open fd for the socket: the close-on-exec bit (O_CLOEXEC, the 02000000 part of the flags below) is cleared, so spawned children inherit the listening socket.
[me@home puma_testing]$ cat /proc/511620/fdinfo/5
pos: 0
flags: 02000002
mnt_id: 10
[me@home puma_testing]$ kill -s USR2 511620
[me@home puma_testing]$ cat /proc/511620/fdinfo/5
pos: 0
flags: 02
mnt_id: 10
This was tested on fedora 32 using a very simple puma and sinatra setup like so:
puma.rb
# frozen_string_literal: true
rackup File.join(File.dirname(File.realpath(__FILE__)), './server.ru')
# https://www.rubydoc.info/gems/puma/Puma/DSL#prune_bundler-instance_method
# This allows us to install new gems with just a phased-restart. Otherwise you
# need to take the master process down each time.
prune_bundler
port 11111
environment 'production'
pidfile File.join(File.dirname(File.realpath(__FILE__)), '../', 'server.pid')
tag 'test'
And server.ru like so
require 'sinatra'

class App < Sinatra::Base
  get "/" do
    "Hello World!"
  end

  get "/spawn" do
    spawn "sleep 500"
  end
end

run App
Run it with Bundler: bundle exec puma -C puma.rb. Note that you can use the /spawn GET request to test spawning a new process before and after a restart, and check whether it is listening on the socket with lsof -itcp:11111
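Given that diagnosis, one possible mitigation on the spawning side (an untested sketch, not the fix described above) is Ruby's close_others spawn option, which asks the child not to inherit non-standard, non-redirected file descriptors, so the spawned process never holds Puma's listening socket. my_cmd and log_file are the placeholders from the question:
my_pid = Process.spawn(my_cmd, %i[out err] => log_file, close_others: true)
Process.detach(my_pid)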

How to restart Sidekiq when running on Heroku?

I am running sidekiq in a worker on Heroku as follows:
bundle exec sidekiq -t 25 -e $RAILS_ENV -c 3
One of the operations uses more memory (>500 MB) than the worker allows. After the job has completed, the memory still hasn't been released, and I get these errors in the Heroku Rails log files:
2018-11-13T00:56:05.642142+00:00 heroku[sidekiq_worker.1]: Process running mem=646M(126.4%)
2018-11-13T00:56:05.642650+00:00 heroku[sidekiq_worker.1]: Error R14 (Memory quota exceeded)
Is there a way to automatically restart Sidekiq when the memory usage exceeds a certain amount?
Thanks!
Have you tried reducing memory fragmentation? Here is how you can do it on Heroku.
If that isn't good enough, you can use the Heroku Platform API gem (platform-api) and periodically restart the Sidekiq worker.
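A sketch of the second suggestion, assuming the platform-api gem, a Heroku OAuth token in HEROKU_OAUTH_TOKEN, and that the generated client exposes the dyno-restart endpoint as dyno.restart; the app name below is a placeholder and the dyno name is taken from the log output above:
require 'platform-api'

heroku = PlatformAPI.connect_oauth(ENV['HEROKU_OAUTH_TOKEN'])
# Restart just the Sidekiq dyno shown in the R14 log lines.
heroku.dyno.restart('your-app-name', 'sidekiq_worker.1')
You could run this from a scheduled job, or from a Sidekiq server middleware that checks the process's memory after each job.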

How to run a specific worker in ManageIQ?

Sometimes I need to run one specific MIQ worker in the foreground.
rake evm:start runs all the workers, but if I need just one, how can I do that?
In case you are unsure which workers to work with, you can do the following:
Run the EVM server normally (bundle exec rake evm:start) and see what worker types are running: bundle exec rake evm:status
Stop the EVM server: bundle exec rake evm:stop
Start a single worker in the foreground:
ruby lib/workers/bin/run_single_worker.rb MiqWorkerClassHere
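For example (MiqGenericWorker is just an illustrative class name; pick one from the evm:status output):
ruby lib/workers/bin/run_single_worker.rb MiqGenericWorker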

How can I tell how many worker dynos I'm using on Heroku?

I'm using HireFireApp to autoscale my web and worker dynos on Heroku. However, when I navigate to the Resque app on my application it says
"0 of 46 Workers Working"
Does this mean that I'm using 46 worker dynos???
Update:
Running heroku ps shows:
web.1 up for 21m bundle exec thin start -p $PORT
worker.1 starting for 1s bundle exec rake resque:work QUEUE..
From the command line in your Heroku app, have a look at the output of
heroku ps
That will show you how many worker dynos you are running.

Resque Workers working on the wrong queue

I have a few Resque jobs running, each started in a separate terminal window like so:
QUEUE=queue_1 rake environment resque:work
QUEUE=queue_2 rake environment resque:work
Queue 1 started first, then queue 2. The problem is, no matter what QUEUE options I send to new workers, they just keep working on queue 1, even if I shut both down. Might this be a configuration problem? I haven't seen this issue mentioned anywhere.
Are you explicitly setting the ENV['QUEUE'] environment variable in the "environment" or "resque:setup" tasks defined in the Rakefile?
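To illustrate: a hypothetical resque:setup task like the one below would cause exactly this, because an unconditional assignment to ENV['QUEUE'] inside the Rakefile overrides the QUEUE=queue_2 passed on the command line, so every worker falls back to queue_1. Using ||= (or removing the assignment) lets the command-line value win.
namespace :resque do
  task setup: :environment do
    ENV['QUEUE'] ||= 'queue_1'  # '||=' so a QUEUE passed on the command line wins
  end
end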
