My Rails app on Heroku has a Procfile that starts up a single Sidekiq process on my dyno
worker: bundle exec sidekiq -C config/sidekiq.yml
Is it possible to start up multiple Sidekiq processes on the same dyno?
My organization has a large dyno with a lot of memory that we're not using. Before downgrading the dyno, I was wondering if it was an option instead to make use of it by running multiple Sidekiq processes.
Sidekiq Enterprise's Multi-Process feature makes this trivial.
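For reference, a sketch of what that Procfile entry might look like (the sidekiqswarm binary and the SIDEKIQ_COUNT variable are part of the Enterprise multi-process feature; check the Sidekiq Enterprise docs for your version):

```shell
# Procfile entry (Sidekiq Enterprise only): sidekiqswarm forks N Sidekiq
# processes from a single command, where N comes from SIDEKIQ_COUNT.
worker: env SIDEKIQ_COUNT=8 bundle exec sidekiqswarm -C config/sidekiq.yml
```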
One option would be to run Sidekiq with a supervisor like https://github.com/ochinchina/supervisord
Add the binary to your repository (e.g. bin/supervisord) and add a config file and a Procfile entry.
For a dyno with 8 cores, your configuration could look like this:
[program:sidekiq]
command = bundle exec sidekiq -e ${RACK_ENV:-development} -C config/sidekiq_large.yml
process_name = %(program_name)s_%(process_num)s
numprocs = 8
numprocs_start = 1
exitcodes = 0
stopsignal = TERM
stopwaitsecs = 40
autorestart = unexpected
stdout_logfile = /dev/stdout
stderr_logfile = /dev/stderr
Then in your Procfile:
worker_large: env [...] bin/supervisord -c sidekiq_worker_large.conf
Make sure that you tune your Sidekiq concurrency settings.
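As a rough guide: with 8 processes, the dyno runs 8 × concurrency worker threads in total, and each thread holds its own Redis connection, so a hypothetical config/sidekiq_large.yml might keep per-process concurrency modest:

```yaml
# Hypothetical config/sidekiq_large.yml -- 8 processes x 5 threads = 40
# worker threads total, each needing its own Redis connection.
:concurrency: 5
:queues:
  - default
```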
There is also an open-source third-party gem that is meant to do this, without having to pay for Sidekiq Enterprise. I have not used it and don't know how well it works, but it looks good.
https://github.com/vinted/sidekiq-pool
It doesn't have any recent commits or many GitHub stars, but it may still work fine.
Here's another one, which warns you it's unfinished work in progress, included for comparison:
https://github.com/kigster/sidekiq-cluster
Related
I'm fairly new to Bash, Redis, and Linux in general, and I'm having trouble creating a script. This is also my first question; I hope it is not a duplicate.
So here's the problem: I'm creating a simple application in Ruby for educational purposes, but the feature I'm trying to implement uses Redis and Sidekiq. What I want to do is create an executable script (I named it server) that starts the Redis server and then Sidekiq, but it should also shut Redis down after the user stops Sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When the script reaches the fourth line, it shows Sidekiq's little startup banner and I can't do anything until I stop it with Control + C. I assumed that after stopping it this way, the script would continue with the next command I wrote, which is redis-cli shutdown.
But it does not. When I Control + C Sidekiq, it simply goes back to the command line.
Is there anyone familiar with these concepts who could help me? I want a script that also shuts Redis down after I'm done with Sidekiq.
Thanks!
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
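For example, here is a sketch of a Procfile that foreman could manage, reusing the worker path from the question; note that redis-server has to run in the foreground here (no --daemonize) so foreman can supervise it:

```shell
# Procfile -- foreman starts every entry together; a single Control + C
# stops the whole group, so no separate redis-cli shutdown is needed.
redis: redis-server
worker: bundle exec sidekiq -r ./a/sample/path/worker.rb
```

Running foreman start in the app directory brings both up, and interrupting it shuts both down together.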
I want to start delayed_job with multiple pools (4): each pool handles a certain type of job, plus one last pool for everything else (in order to prioritize quick tasks and let longer ones run in separate pools).
This is the command being run:
/bin/bash -c 'cd /usr/src/app; /usr/bin/env RAILS_ENV=development /usr/src/app/bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start'
I would like to monitor this with god, but how can I monitor forked processes with god? I would like to keep the process generation inside delayed_job if possible, instead of running four god instances, each with one delayed_job pool.
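One approach (a sketch, not tested): delayed_job's pooled daemon writes one pidfile per pool process under tmp/pids (delayed_job.0.pid, delayed_job.1.pid, and so on; that naming is an assumption, so verify it in your tmp/pids directory). A single god config could then watch each pidfile while delayed_job keeps doing the forking:

```ruby
# Hypothetical god config: watch the pidfiles delayed_job's pools write,
# restarting the pooled daemon if a pool process dies.
# Pidfile naming (delayed_job.N.pid) is an assumption -- verify under tmp/pids.
APP_ROOT = '/usr/src/app'

4.times do |i|
  God.watch do |w|
    w.name     = "delayed_job_pool_#{i}"
    w.pid_file = File.join(APP_ROOT, 'tmp', 'pids', "delayed_job.#{i}.pid")
    w.start    = "/bin/bash -c 'cd #{APP_ROOT}; RAILS_ENV=development " \
                 "bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start'"
    w.behavior(:clean_pid_file)  # remove stale pidfiles before starting
    w.keepalive
  end
end
```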
I use Sidekiq for job processing. Using Foreman, I set up six processes in my Procfile:
redirects: bundle exec sidekiq -c 10 -q redirects
redirects2: bundle exec sidekiq -c 10 -q redirects
redirects3: bundle exec sidekiq -c 10 -q redirects
redirects4: bundle exec sidekiq -c 10 -q redirects
redirects5: bundle exec sidekiq -c 10 -q redirects
redirects6: bundle exec sidekiq -c 10 -q redirects
These processes performed at 1600+ jobs per second (a simple job that increments some hashes in Redis) with all 10 threads busy most of the time. I scaled my DigitalOcean droplet from 8 cores to 12, and performance fell to ~400 jobs per second. For each process, only 3-5 of the 10 threads are busy.
What I did to try to fix the issue:
Made the perform method empty
Used fewer/more processes
Used lower/higher concurrency
Split the queue into server-specific queues (three Express.js clients on other servers put jobs in the queues)
Tried different hz values in redis.conf
Set somaxconn to 1024 (and tcp-backlog in redis.conf)
Turned off RDB saving and used only AOF
Flushed all Redis DBs (there are two databases for all that logic: one for Sidekiq and another for the hashes in my workers)
Ran Sidekiq from the terminal without Foreman (to check whether it was a Foreman issue)
None of the above helped. What could have caused the performance loss?
I have these rake tasks that will occasionally fail. I want to use monit to monitor them and to restart them if necessary.
I have read the other ruby/monit threads on StackOverflow. My case is different in that these programs require my Rails environment in order to work. That's why I have them as rake tasks now.
Here is one of the tasks I need to monitor, in its entirety:
task(process_updates: :environment) do
  # Write this process's pid and parent pid where an external monitor can find them
  File.write("#{Rails.root}/log/process_alerts.pid", Process.pid)
  File.write("#{Rails.root}/log/process_alerts.ppid", Process.ppid)
  SynchronizationService::process_alerts
end
My question is, do I leave this as a rake task, since SynchronizationService::process_alerts requires the Rails environment to work? Or is there some other wrapper I should invoke and then just run some *.rb file?
Monit can check a running process by its pid. Since you're writing a pidfile when you run the task, you can create a monit config that should look something like this:
check process alerts with pidfile RAILSROOT/log/process_alerts.pid
start program = "cd PATH_TO_APP; rake YOURTASK" with timeout 120 seconds
alert your@mail.com on { nonexist, timeout }
Of course RAILSROOT, PATH_TO_APP, and YOURTASK should correspond to your paths and rake task.
Monit will then check for the running process using the pidfile value and will start it with the start program command if it can't find one running.
We actually use God in our development environment, as well as in production, simply because it makes managing unicorn/resque etc simpler.
I've just scaled down our default unicorn config to a single worker in dev, as most of the time this is enough. However, I've added shell scripts wrapping the commands:
# Add an extra unicorn worker
kill -TTIN `cat /path/to/unicorn.pid`
# Remove a unicorn worker
kill -TTOU `cat /path/to/unicorn.pid`
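The wrapper scripts mentioned above might look something like this sketch (the default pidfile path is an assumption; point it at your unicorn master's pid):

```shell
# Hypothetical wrapper: translate incr/decr into unicorn's TTIN/TTOU signals.
# Pidfile path is an assumption -- adjust for your deployment.
unicorn_workers() {
  action="$1"
  pidfile="${2:-/path/to/unicorn.pid}"
  case "$action" in
    incr) sig=TTIN ;;  # unicorn master forks one more worker on TTIN
    decr) sig=TTOU ;;  # unicorn master retires one worker on TTOU
    *) echo "usage: unicorn_workers incr|decr [pidfile]" >&2; return 1 ;;
  esac
  kill -"$sig" "$(cat "$pidfile")"
}
```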
Rather than starting/stopping/restarting unicorn with god but adding new workers through an ad-hoc shell script, is there a way to have God support a couple of custom commands, like:
god incr unicorn
god decr unicorn
I looked in the documentation and found nothing, but it feels like something it would probably 'unofficially' be able to do.