Sidekiq multiple workers? (coming from Resque)

I have a question regarding Sidekiq. I come from the Resque paradigm, and in the current application I launch one worker per queue, so in the terminal I would do:
rake resque:work QUEUE='first'
rake resque:work QUEUE='second'
rake resque:work QUEUE='third'
Then, if I want more workers, for example for the third queue, I just start additional workers with:
rake resque:work QUEUE='third'
My question is...
With Sidekiq, how would you start multiple workers? I know you can do this:
sidekiq -q first -q second -q third
But that just starts one process that fetches from all of those queues. So, how would I go about starting three workers and telling each worker to focus on a particular queue? Also, how would I do that on Heroku?

You could use a config file in config/sidekiq.yml:
# Sample configuration file for Sidekiq.
# Options here can still be overridden by cmd line args.
# sidekiq -C config.yml
---
:verbose: true
:pidfile: ./tmp/pids/sidekiq.pid
:concurrency: 15
:timeout: 5
:queues:
  - [first, 20]
  - [second, 20]
  - [third, 1]

staging:
  :verbose: false
  :concurrency: 25

production:
  :verbose: false
  :concurrency: 50
  :timeout: 60
That way you can configure exactly what you want, and to answer your question precisely: the concurrency value is what you are looking for; it defines the number of threads each Sidekiq process uses to execute jobs concurrently.
More info here: https://github.com/mperham/sidekiq/wiki/Advanced-Options
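To get one process per queue, as in your Resque setup, you can also simply start several Sidekiq processes, each restricted to a single queue. A sketch, reusing the queue names from the question:
sidekiq -q first
sidekiq -q second
sidekiq -q third
On Heroku the equivalent is one Procfile process type per queue (the process names below are illustrative):
worker_first: bundle exec sidekiq -q first
worker_second: bundle exec sidekiq -q second
worker_third: bundle exec sidekiq -q third
Each process type runs on its own dyno(s), so you can scale them independently, e.g. heroku ps:scale worker_third=2.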

So, how would I go about starting three workers, and telling each worker to just focus on a particular queue?
You can define, at the worker level, which queue a worker's jobs are placed in via sidekiq_options.
For example, to place your worker in a queue called "first", just define it with:
class MyWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :first
  ...
end
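With that in place, jobs enqueued via MyWorker.perform_async(args) land on the "first" queue, and only a Sidekiq process listening on first (e.g. one started with sidekiq -q first) will execute them.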

Related

How do you monitor sidekiq processes?

I'm working on a production app that has multiple Rails servers behind an nginx load balancer. We monitor our sidekiq processes with monit, and it works just fine: when a sidekiq process dies, monit starts it right back up.
However, we recently encountered a situation where one of these processes was running and visible to monit, but for some reason not visible to Sidekiq. That resulted in many failed jobs, and it took us some time to notice that we were missing one process in the Sidekiq Web UI, since monit was telling us everything was fine and all processes were running. A simple restart fixed the problem.
And that brings me to my question: how do you monitor your sidekiq processes? I know I can use something like Rollbar to notify me when jobs fail, but I'd like to know if there is a way to monitor the process count, and preferably send mail when a process dies. Any suggestions?
Something that would ping sidekiq/stats and verify the response.
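For reference, the data behind sidekiq/stats is also available programmatically, so a health check can query it directly. A minimal sketch using Sidekiq's built-in API:
require 'sidekiq/api'

Sidekiq::ProcessSet.new.size # number of live Sidekiq processes (from Redis heartbeats)
Sidekiq::Stats.new.enqueued  # jobs currently waiting across all queues
Sidekiq::Stats.new.failed    # cumulative count of failed jobs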
My super simple solution to a similar problem looks like this:
# sidekiq_check.rb
namespace :sidekiq_check do
  task rerun: :environment do
    # Sidekiq::ProcessSet reads the heartbeats that live Sidekiq processes
    # write to Redis; size == 0 means none are alive, so start one.
    if Sidekiq::ProcessSet.new.size == 0
      exec 'bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e production'
    end
  end
end
and then using cron/whenever
# schedule.rb
every 5.minutes do
  rake 'sidekiq_check:rerun'
end
We ran into this problem when our sidekiq processes had stopped working off jobs overnight and we had no idea. It took us about 30 minutes to integrate http://deadmanssnitch.com by following their instructions.
It's not the prettiest or cheapest option but it gets the job done (integrates nicely with Pagerduty) and has saved our butt twice in the last few months.
One of our complaints with the service is that the shortest grace interval we can set is 15 minutes, which is too long for us. So we're evaluating similar services like Healthchecks, etc.
My approach is the following:
create a background job that does something
call the job regularly
check that the thing is being done!
So, using a cron script (or something like whenever), every 5 minutes I run:
CheckinJob.perform_later
It's now up to sidekiq (or delayed_job, or whatever Active Job backend you're using) to actually run the job.
The job just has to do something which you can check.
I used to have the job update a record in my Status table (essentially a list of key/value records). Then I'd have a /status page which returns a 500 status code if the record hasn't been updated in the last 6 minutes.
(Obviously your timing may vary.)
Then I use a monitoring service to monitor the status page! (something like StatusCake)
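A minimal sketch of that pattern, assuming a Rails app with an ActiveRecord Status model holding key/value records (the model, key, and route names here are illustrative, not from the original setup):
# app/jobs/checkin_job.rb
class CheckinJob < ApplicationJob
  queue_as :default

  def perform
    # touch bumps updated_at, recording that background processing is alive.
    Status.find_or_create_by(key: 'sidekiq_checkin').touch
  end
end

# app/controllers/status_controller.rb
class StatusController < ApplicationController
  def show
    checkin = Status.find_by(key: 'sidekiq_checkin')
    fresh = checkin && checkin.updated_at > 6.minutes.ago
    head(fresh ? :ok : :internal_server_error)
  end
end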
Nowadays I have a simpler approach; I just have the background job check in with a cron monitoring service like:
IsItWorking
Dead Man's Snitch
Healthchecks
The monitoring service expects your task to check in every X minutes; if your task doesn't check in, the monitoring service will let you know.
Integration is dead simple for all of these services. For IsItWorking it would be:
IsItWorkingInfo::Checkin.ping(key:"CHECKIN_IDENTIFIER")
Full disclosure: I wrote IsItWorking!
I use the god gem to monitor my sidekiq processes. god makes sure that your process is always running, and can also notify you of the process status on various channels.
ROOT = File.dirname(File.dirname(__FILE__))

God.pid_file_directory = File.join(ROOT, "tmp/pids")

God.watch do |w|
  w.env = {'RAILS_ENV' => ENV['RAILS_ENV'] || 'development'}
  w.name = 'sidekiq'
  w.start = "bundle exec sidekiq -d -L log/sidekiq.log -C config/sidekiq.yml -e #{ENV['RAILS_ENV']}"
  w.log = "#{ROOT}/log/sidekiq_god.log"
  w.behavior(:clean_pid_file)
  w.dir = ROOT
  w.keepalive

  w.restart_if do |restart|
    restart.condition(:memory_usage) do |c|
      c.interval = 120.seconds
      c.above = 100.megabytes
      c.times = [3, 5] # 3 out of 5 intervals
    end

    restart.condition(:cpu_usage) do |c|
      c.interval = 120.seconds
      c.above = 80.percent
      c.times = 5
    end
  end

  w.lifecycle do |on|
    on.condition(:flapping) do |c|
      c.to_state = [:start, :restart]
      c.times = 5
      c.within = 5.minutes
      c.transition = :unmonitored
      c.retry_in = 10.minutes
      c.retry_times = 5
      c.retry_within = 1.hour
    end
  end
end
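To have god pick this watch up, you'd load the file with something like (path illustrative):
god -c /path/to/sidekiq.god
god then daemonizes itself, starts sidekiq if it isn't running, and applies the restart and flapping rules above.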

Multiple sidekiq queues for a Sinatra application

We have a Ruby Sinatra application. We use sidekiq and redis for queue processing.
We have already implemented sidekiq to queue up jobs that do insertions into the database, and it has worked fine so far.
Now I want to add another job which will read bulk data from the database and export it to a CSV file.
I do not want both of these jobs in the same queue. Is it possible to create a different queue for these jobs in the same application?
Please suggest a solution.
You probably need advanced queue options. Read about them here: https://github.com/mperham/sidekiq/wiki/Advanced-Options
Create a csv queue from the command line (it can be done in the config file as well):
sidekiq -q csv -q default
Then in your worker:
class CSVWorker
  include Sidekiq::Worker
  sidekiq_options :queue => :csv

  # perform method
end
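You'd then enqueue the export like any other job, e.g. CSVWorker.perform_async(export_id) (the argument name is illustrative). The job lands on the csv queue, so the exports can be processed, monitored, and scaled separately from the database-insert jobs on the default queue.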
take a look at sidekiq wiki: https://github.com/mperham/sidekiq/wiki/Advanced-Options
By default everything goes into the 'default' queue, but you can specify a queue in your worker:
sidekiq_options :queue => :file_queue
and to tell sidekiq to process your queue, you have to either declare it in the configuration file:
:queues:
  - file_queue
  - default
or pass it as an argument to the sidekiq process: sidekiq -q file_queue
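Either way, the process you boot has to know about the queue; with the config file approach you'd start it as sidekiq -C config/sidekiq.yml, where -C points Sidekiq at the YAML file containing the :queues: list.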

How to print capistrano current thread hash?

An example output from capistrano:
INFO [94db8027] Running /usr/bin/env uptime on leehambley@example.com:22
DEBUG [94db8027] Command: /usr/bin/env uptime
DEBUG [94db8027] 17:11:17 up 50 days, 22:31, 1 user, load average: 0.02, 0.02, 0.05
INFO [94db8027] Finished in 0.435 seconds command successful.
As you can see, each line starts with "{type} {hash}". I assume the hash is some unique identifier for either the server or the running thread, as I've noticed that if I run capistrano over several servers, each one has its own distinct hash.
My question is, how do I get this value? I want to manually output some message during execution, and I want to be able to match my output, with the server that triggered it.
Something like: puts "DEBUG ["+????+"] Something happened!"
What do I put in the ???? there? Or is there another, built in way to output messages like this?
For reference, I am using Capistrano Version: 3.2.1 (Rake Version: 10.3.2)
This hash is a command uuid. It is tied not to the server but to the specific command currently being run.
If all you want is to distinguish between servers, you may try the following:
task :some_task do
  on roles(:app) do |host|
    debug "[#{host.hostname}:#{host.port}] something happened"
  end
end

Is it possible to have one Unicorn child process a queue, whilst the rest process web requests, on a single Heroku dyno?

Say you have Unicorn set up on a single dyno on Heroku, with 3 workers.
Is it possible to have 2 of the child workers processing web requests, and 1 Unicorn child doing background jobs, such as a resque queue or scheduled tasks?
Or is that just not appropriate?
Now got it working!
OK, so using the answer below I managed to get it to pick up the queue, but it took a bit of tinkering first. This is what worked for me.
Procfile
web: bundle exec unicorn_rails -p $PORT -c config/unicorn.rb
unicorn.rb
worker_processes 2
preload_app true
timeout 30

@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake environment resque:work QUEUE=*")
end

after_fork do |server, worker|
  ActiveRecord::Base.establish_connection
end
It certainly is possible - take a read of http://bugsplat.info/2011-11-27-concurrency-on-heroku-cedar.html. I've not tried it myself, but I will be soon. Essentially, you'll end up with a unicorn.rb that looks like:
worker_processes 3
timeout 30

@resque_pid = nil

before_fork do |server, worker|
  @resque_pid ||= spawn("bundle exec rake " + \
    "resque:work QUEUES=scrape,geocode,distance,mailer")
end
I'm not entirely sure of the 'appropriateness', since it means Heroku is essentially losing out on revenue, but they haven't taken any steps to block this behaviour (nor do I think they would).

Monitoring bundle exec unicorn_rails with bluepill

Due to unicorn_rails complaining about different gem versions, we moved to running bundle exec unicorn_rails... in our bluepill files. This change solved that particular problem, and things start and stop, but when we try sudo bluepill status we now get:
unicorn(pid: XXXXXX): unmonitored
It looks like bluepill is not monitoring the unicorn processes now. It will restart the child processes if I stop them, but won't restart the parent process.
I've searched around but can't find much about this issue, and was hoping someone could shed some light on it. The bluepill config file is:
app_dir = "/opt/local/share/httpd/apps/xyz"

Bluepill.application('xyz', :log_file => "#{app_dir}/current/log/bluepill.log") do |app|
  app.process('unicorn') do |process|
    process.pid_file = "#{app_dir}/shared/pids/unicorn.pid"
    process.working_dir = "#{app_dir}/current"
    process.stdout = process.stderr = "#{app_dir}/shared/log/unicorn.err.log"

    process.start_command = "bundle exec unicorn_rails -D -c #{app_dir}/current/config/environments/production/unicorn.rb -E production"
    process.stop_command = "kill -QUIT {{PID}}"
    process.restart_command = "kill -USR2 {{PID}}"

    process.start_grace_time = 8.seconds
    process.stop_grace_time = 5.seconds
    process.restart_grace_time = 13.seconds

    process.monitor_children do |child_process|
      child_process.stop_command = "kill -QUIT {{PID}}"
      child_process.checks :mem_usage, :every => 10.seconds, :below => 200.megabytes, :times => [3, 5]
      child_process.checks :cpu_usage, :every => 10.seconds, :below => 50, :times => [3, 5]
    end
  end
end
When you run bundle exec, it sets up an environment and forks the unicorn_rails process. Bluepill ends up monitoring the original bundle exec process instead of unicorn, which is why you see unmonitored.
I set up my bundler environment directly in bluepill and then execute unicorn_rails directly:
Bluepill.application('xyz') do |app|
  # Capture the bundler environment variables and hand them to bluepill.
  app.environment = `env -i BUNDLE_GEMFILE=#{app_dir}/Gemfile bundle exec env`.lines.inject({}) do |env_hash, l|
    kv = l.chomp.split('=', 2)
    env_hash[kv[0]] = kv[1]
    env_hash
  end

  app.process('unicorn') do |process|
    process.start_command = "unicorn_rails -D -c #{app_dir}/current/config/environments/production/unicorn.rb -E production"
  end
end
(Note: I omitted part of the above config file for clarity. Your config file looks good, just try adding the app.environment stuff above and removing bundle exec from your start command.)
This sets up the environment in bluepill by capturing the bundler environment variables using backticks, parsing the returned string into a hash, and assigning it to app.environment.
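With the pill file in place, you'd load it with something like (path illustrative):
sudo bluepill load /path/to/xyz.pill
and sudo bluepill status should then report the unicorn process as 'up' once its start grace time has passed.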
I know this question is old, but I'd been facing this problem for weeks. My suspicions were aroused when I realised that running 'bluepill quit', followed by reloading the pill while unicorn was running, allowed bluepill to consider the process 'up'.
@blt04's answer didn't help. Today I came to a realization: my grace start time of 8 seconds was not enough, because I had preload_app true in my unicorn config... and my app (Rails) takes 12 seconds to load, not 8.
Raising the start time to 30 seconds (wildly in excess of what was needed) solved the problem. Bluepill just says 'starting' for 30 seconds, then goes to 'up' correctly. Unicorn starts and runs as normal.
You'll want your restart time to be longer than it takes for Rails to start too.
