I use Sidekiq for job processing. Using Foreman, I set up six processes in my Procfile:
redirects: bundle exec sidekiq -c 10 -q redirects
redirects2: bundle exec sidekiq -c 10 -q redirects
redirects3: bundle exec sidekiq -c 10 -q redirects
redirects4: bundle exec sidekiq -c 10 -q redirects
redirects5: bundle exec sidekiq -c 10 -q redirects
redirects6: bundle exec sidekiq -c 10 -q redirects
These processes handled about 1,600+ jobs per second (a simple job that increments some hashes in Redis), with all 10 threads busy most of the time. After I scaled my Digital Ocean droplet from 8 to 12 cores, performance fell to ~400 jobs per second, and each process now has only 3-5 busy threads out of 10.
What I did to try to fix the issue:
Making the perform method empty
Using fewer/more processes
Using lower/higher concurrency
Splitting the queue into server-specific queues (three Express.js clients on other servers put jobs into the queues)
Trying different hz values in redis.conf
Setting somaxconn to 1024 (and tcp-backlog in redis.conf)
Turning off RDB saves and using only AOF
Flushing all Redis databases (there are two databases for all this logic: one for Sidekiq and another for the hashes in my workers)
Running Sidekiq from the terminal without Foreman (to rule out a Foreman issue)
None of the above helped. What could have caused the performance loss?
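A note on narrowing this down: it can help to confirm whether the slowdown shows up on the Redis side or in the workers by watching throughput and queue latency while the backlog drains. A rough sketch from a Rails console, using Sidekiq's public API (the 5-second interval is arbitrary; the queue name matches the setup above):

require "sidekiq/api"

last = Sidekiq::Stats.new.processed
loop do
  sleep 5
  stats = Sidekiq::Stats.new
  queue = Sidekiq::Queue.new("redirects")
  puts "jobs/sec: #{(stats.processed - last) / 5.0}, " \
       "enqueued: #{queue.size}, latency: #{queue.latency.round(2)}s"
  last = stats.processed
end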
I'm fairly new to Bash, Redis, and Linux in general, and I'm having trouble creating a script. This is also my first question; I hope it is not a duplicate.
Here's the problem: I'm building a simple Ruby application for educational purposes, but the feature I'm trying to implement uses Redis and Sidekiq. What I want is an executable script (I named it server) that starts the Redis server and then Sidekiq, and also shuts Redis down after the user stops Sidekiq.
This is what I came up with:
#!/usr/bin/env sh
set -e
redis-server --daemonize yes
bundle exec sidekiq -r ./a/sample/path/worker.rb
redis-cli shutdown # this is not working, I want to execute this after shutting sidekiq down...
When the script reaches the fourth line, it shows the little Sidekiq welcome banner and I can't do anything until I shut it down with Control + C. I assumed that after shutting Sidekiq down this way, the script would continue with the next command, which is redis-cli shutdown.
But it does not. When I Control + C Sidekiq, it simply drops back to the command line.
Is anyone familiar with these concepts who could help me? I want a script that also shuts down Redis after I'm done with Sidekiq.
Thanks!
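One portable pattern that makes the final shutdown run even when Sidekiq is stopped with Control + C is to trap the script's exit. A sketch using the same paths as above (it drops set -e so a non-zero exit from Sidekiq can't abort the script early; on some shells the handler may fire twice, which is harmless here because the second shutdown simply fails to connect):

#!/usr/bin/env sh

# Start Redis in the background.
redis-server --daemonize yes

# Shut Redis down when this script exits, however it exits.
trap 'redis-cli shutdown' EXIT INT TERM

# Runs in the foreground until Sidekiq is stopped (e.g. Control + C).
bundle exec sidekiq -r ./a/sample/path/worker.rb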
Have you considered using Foreman?
http://blog.daviddollar.org/2011/05/06/introducing-foreman.html
https://github.com/ddollar/foreman
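With Foreman, a Procfile for this setup could be as small as this (paths taken from the question):

redis: redis-server
worker: bundle exec sidekiq -r ./a/sample/path/worker.rb

foreman start brings both up, and when you stop it with Control + C (or when either process exits) Foreman shuts the other one down as well, so Redis doesn't need a separate shutdown step.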
I want to start delayed_job with multiple pools (4): each pool handles a certain type of job, and one last pool handles everything else (so that quick tasks are prioritized and longer ones run in separate pools).
This is the command being run:
/bin/bash -c 'cd /usr/src/app; /usr/bin/env RAILS_ENV=development /usr/src/app/bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start'
I would like to monitor this with god, but how can I monitor forked processes with god? If possible, I would like to keep the process spawning inside delayed_job, instead of running four god instances, each with one delayed_job pool.
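god watches daemons by pid file, so one option (a sketch, not verified against delayed_job's pooling behaviour) is a single god config with one watch per pid file that the pools leave in tmp/pids. The pid file names below, and the assumption that each pool writes its own delayed_job.N.pid, should be checked against what actually appears on the machine:

# god config (Ruby DSL); paths match the command above, everything else is an assumption.
app = "/usr/src/app"

4.times do |i|
  God.watch do |w|
    w.name     = "delayed_job_pool_#{i}"
    w.dir      = app
    w.env      = { "RAILS_ENV" => "development" }
    w.pid_file = "#{app}/tmp/pids/delayed_job.#{i}.pid"
    # Note: start/stop here still act on all pools at once; per-pool
    # restart would need separate start commands per watch.
    w.start    = "#{app}/bin/delayed_job --pool=LongTask,MediumTask:2 --pool=QuickTask --pool=*:1 start"
    w.stop     = "#{app}/bin/delayed_job stop"
    w.behavior(:clean_pid_file)
    w.keepalive
  end
end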
I have a requirement where I have to constantly fetch messages from AWS SQS (Simple Queue Service) and update the related records of a model. The messages contain data that needs to be shown to the related users as notifications as soon as it is fetched; that part is already handled with Action Cable. I have created a rake task that fetches messages from the queue and does the required processing, and this task is supposed to run in an infinite loop. I have two questions about it:
namespace :sqs_consumer do
  desc 'Get data from the AWS-SQS and process it.'
  task start: :environment do
    # initialize the sqs client
    loop do
      # read the queue for messages and process them in batches (if any)
    end
  end
end
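For concreteness, the body of that loop would look roughly like this (a sketch using the aws-sdk-sqs gem; the region, queue URL, and batch size are placeholders):

require "aws-sdk-sqs"

sqs       = Aws::SQS::Client.new(region: "us-east-1")
queue_url = ENV["SQS_QUEUE_URL"]

loop do
  resp = sqs.receive_message(
    queue_url: queue_url,
    max_number_of_messages: 10,  # process in batches of up to 10
    wait_time_seconds: 20        # long polling, avoids a tight busy loop
  )

  resp.messages.each do |message|
    # ... update the model, broadcast the notification via Action Cable ...
    sqs.delete_message(queue_url: queue_url, receipt_handle: message.receipt_handle)
  end
end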
1) Is it right to create a rake task for the above requirement? Is a rake task that runs forever the right approach? If not, what is? I can't run the task periodically, since I need the data in real time.
2) I want to monitor the task, and I'm using Monit for that. Unfortunately, my Monit configuration doesn't seem to work. What am I doing wrong or missing?
check process aws_sqs_consumer with pidfile /var/www/myproject/shared/pids/sqs_consumer.pid
start program = "/bin/sh -c 'cd /var/www/myproject/current; nohup bundle exec rake sqs_consumer:start RAILS_ENV=staging -i 0 -P /var/www/myproject/shared/pids/sqs_consumer.pid >> log/sqs_consumer.log 2>&1 &'" as uid ubuntu and gid ubuntu
stop program = "/bin/sh -c 'kill $(cat /var/www/myproject/shared/pids/sqs_consumer.pid)'" as uid ubuntu and gid ubuntu
This Monit configuration worked for me:
check process aws_sqs_consumer with pidfile /var/www/myproject/shared/tmp/pids/sqs_consumer.pid
start program = "/bin/sh -c 'cd /var/www/myproject/current && BACKGROUND=y PIDFILE=/var/www/myproject/shared/tmp/pids/sqs_consumer.pid LOG_LEVEL=info bundle exec rake sqs_consumer:start RAILS_ENV=staging'"
stop program = "/bin/sh -c 'kill $(cat /var/www/myproject/shared/tmp/pids/sqs_consumer.pid)'" as uid ubuntu and gid ubuntu with timeout 90 seconds
My Rails app on Heroku has a Procfile that starts a single Sidekiq process on my dyno:
worker: bundle exec sidekiq -C config/sidekiq.yml
Is it possible to start up multiple sidekiq processes on the same dyno?
My organization has a large dyno with a lot of memory that we're not using. Before downgrading the dyno, I was wondering whether we could instead make use of it by running multiple Sidekiq processes.
Sidekiq Enterprise's Multi-Process feature makes this trivial.
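With Enterprise, the Procfile entry uses the swarm binary instead of sidekiq, and the number of child processes is controlled through an environment variable; roughly like this (check the Enterprise docs for your version for the exact variable name):

worker: env SIDEKIQ_COUNT=4 bundle exec sidekiqswarm -C config/sidekiq.yml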
One option would be to run Sidekiq with a supervisor like https://github.com/ochinchina/supervisord
Add the binary to your repository (e.g. bin/supervisord) and add a config file and a Procfile entry.
For a dyno with 8 cores, your configuration could look like this:
[program:sidekiq]
command = bundle exec sidekiq -e ${RACK_ENV:-development} -C config/sidekiq_large.yml
process_name = %(program_name)s_%(process_num)s
numprocs = 8
numprocs_start = 1
exitcodes = 0
stopsignal = TERM
stopwaitsecs = 40
autorestart = unexpected
stdout_logfile = /dev/stdout
stderr_logfile = /dev/stderr
Then in your Procfile:
worker_large: env [...] bin/supervisord -c sidekiq_worker_large.conf
Make sure that you tune your Sidekiq concurrency settings.
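For example, with eight processes on one dyno the per-process concurrency in config/sidekiq_large.yml (referenced above) probably wants to be modest rather than the default; something like this, with the queue list as a placeholder:

:concurrency: 5
:queues:
  - default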
There is also an open-source third-party gem that is meant to do the same thing without paying for Sidekiq Enterprise. I haven't used it and don't know how well it works, but it looks good.
https://github.com/vinted/sidekiq-pool
It doesn't have any recent commits or many GitHub stars, but it may just work fine.
Here's another one, which warns that it's unfinished work in progress, just for comparison:
https://github.com/kigster/sidekiq-cluster
I'm starting up three thin processes with bundle exec thin start -C /etc/thin/staging.yml
I use rvm; the Ruby version is ree-1.8.7.
Contents of /etc/thin/staging.yml:
---
timeout: 30
pid: /home/myuser/apps/g/shared/pids/thin.pid
max_persistent_conns: 512
servers: 3
chdir: /home/myuser/apps/g/current
port: 3040
require: []
log: /home/myuser/apps/g/shared/log/thin.log
daemonize: true
address: 0.0.0.0
max_conns: 1024
wait: 30
environment: staging
lsof -i :3040-3042 shows three ruby processes listening on ports 3040-3042, but the pid files contain three different (slightly lower) pids. All six processes are called merb : merb : Master.
When I stop thin with bundle exec thin stop -C /etc/thin/staging.yml, thin first sends a QUIT signal to the processes listed in the pid files and then, after a timeout, a KILL signal.
The pid files are then gone and the thin logs show that the server has stopped, but there are still three ruby processes listening on ports 3040-3042, so a subsequent thin start will fail.
The only differences between the lsof -p output of the two processes are a /lib/libnss_files-2.12.so library and a postgres socket.
My questions are:
why do I get a timeout during thin stop?
why are there two processes per server instead of one?
how do I fix this elegantly (without kill -9)?
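As a quick check of how the six processes relate, comparing the listeners with the pids from the pid files and their parents can help, e.g.:

# listeners on the three ports, without name resolution
lsof -nP -i :3040-3042
# pid/parent-pid for every merb process (the [m] keeps grep out of its own results)
ps -eo pid,ppid,command | grep '[m]erb'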
Apparently the Merb bootloader does a fork. How braindead is that!
Set Merb::Config[:fork_for_class_load] = false in your config.ru.
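That is, near the top of config.ru, before the app is booted:

# config.ru
Merb::Config[:fork_for_class_load] = false

# ... the rest of the existing rackup code stays as-is ...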