I'm not familiar with the internals of Sidekiq and am wondering if it's okay to launch several Sidekiq instances with the same configuration (processing the same queues).
Is it possible that 2 or more Sidekiq instances will process the same message from a queue?
UPDATE:
I need to know if there is a possible conflict when running Sidekiq on more than one machine.
Yes, Sidekiq can absolutely run many processes against the same queue. Redis pops each message atomically, so only one of the processes will ever receive a given job.
Nope, I've run Sidekiq on different machines with no issues.
Each Sidekiq process reads from the same Redis server, and Redis is very robust in multi-threaded and distributed scenarios.
In addition, if you look at the web interface for Sidekiq, it will show all the workers across all machines, because all the workers are registered in the same Redis server.
So no, no issues.
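For example, you can start the exact same command on every machine, pointed at the shared Redis; the queue names and concurrency below are just placeholders:

    # run this on each machine; they all pull jobs from the same Redis instance
    REDIS_URL=redis://redis.internal:6379/0 bundle exec sidekiq -c 10 -q default -q critical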
I'm using Puma as a web server, and Sidekiq as my queue runner.
For multiple things (Database connections, Redis connections, other external services) I'm using the ConnectionPool gem to manage safe access to connections.
Now, depending on whether I'm running in the context of Sidekiq or of Puma, I need those pools to be different sizes (as large as the number of Sidekiq threads or Puma threads respectively, and those numbers differ).
What is the best way to know, in your initializers, how big to make your connection pools based on execution context?
Thanks!
You use Sidekiq.server? which returns nil when not running inside the Sidekiq process itself.
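For example, an initializer can branch on it when sizing the pool. This is only a rough sketch; the pool sizes, environment variables and the pooled Redis client are assumptions, not taken from your setup:

    # config/initializers/redis_pool.rb
    require "connection_pool"
    require "redis"
    require "sidekiq"

    pool_size = if Sidekiq.server?
                  # inside the Sidekiq process: match the worker thread count
                  Sidekiq.options[:concurrency] # the API differs in newer Sidekiq versions
                else
                  # inside Puma: match the threads-per-worker setting
                  Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
                end

    REDIS_POOL = ConnectionPool.new(size: pool_size, timeout: 5) do
      Redis.new(url: ENV["REDIS_URL"])
    end

Then REDIS_POOL.with { |redis| ... } checks out a connection safely from either process.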
I don't know about your specific case (puma/sidekiq), but in general you can find this information in the $PROGRAM_NAME variable. Also similar are $0 and __FILE__.
I'm building a monitoring service similar to Pingdom, but monitoring different aspects of a system, and I'm using Sidekiq to queue the tasks, which is working well. What I need to do is schedule sending out pings every minute. Rather than using a cron-based system, which would require spinning up a new Ruby instance every minute, I have gone down the route of using Sidetiq (notice the different spelling with a "t"), which uses Sidekiq's own queue to schedule future tasks. This feels like a neat solution, however I am concerned it may not be the most reliable way of scheduling tasks. If there are issues with the system (as there inevitably will be at some point), will this method of scheduling tasks be less reliable than a cron-based method, and why?
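For reference, the recurring job itself is just a normal Sidekiq worker with a Sidetiq recurrence; the class name and interval here are only an example:

    require "sidekiq"
    require "sidetiq"

    class PingWorker
      include Sidekiq::Worker
      include Sidetiq::Schedulable

      # Sidetiq enqueues this job every minute via Sidekiq's own queue
      recurrence { minutely }

      def perform
        # send the ping / run the check here
      end
    end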
Thanks
You give only a short description of your system's needs, but I'll try to guess how it could be:
In the first place, using Sidekiq means that you'll also need an instance of Redis, and it also means you'll need a way to monitor the Sidekiq process (and possibly the Redis server) and restart it in case of failure.
A method based on cron tasks has fewer requirements and therefore far fewer ways to fail.
cron has been around for a long time; it's battle-tested and very, very reliable, but it has its drawbacks too.
That said, you can build a system with separate instances of Redis in a master/slave configuration, use Redis Sentinel to implement failover in case the master fails, put a monitoring/alerting system on top of this setup (you can use something super simple like http://contribsys.com/inspeqtor/ from the Sidekiq author), and you can also start several instances of Sidekiq on different machines.
With all of that, you can have a quite reliable system for running Sidekiq with Sidetiq.
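As a rough sketch, the Sentinel side of that setup is just a few directives in sentinel.conf (the host, port and quorum below are placeholders, not a recommendation):

    # sentinel.conf (illustrative values)
    sentinel monitor mymaster 10.0.0.1 6379 2
    sentinel down-after-milliseconds mymaster 5000
    sentinel failover-timeout mymaster 60000
    sentinel parallel-syncs mymaster 1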
Hope it helps
I am working on a ruby application that creates todos and meetings.
There will be reminders that are sent out with respect to each meeting or todo as you would imagine.
We are already using sidekiq and it would be nice to use sidekiq to create the scheduled jobs in x number of days/hours etc.
My concern is that we will lose the jobs if redis restarts.
Am I right in assuming that if Redis restarts, we lose the jobs? And if so, is there anything that can be done about it?
If not sidekiq, what else could I use?
There are several ways of doing that; just go through http://redis.io/topics/persistence. Snapshotting is a technique that takes point-in-time snapshots of the dataset on disk.
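For example, persistence is enabled with a few lines in redis.conf; the thresholds below are only illustrative, and the linked page also describes the append-only file (AOF) alternative:

    # redis.conf (illustrative values)
    save 900 1            # RDB snapshot if at least 1 key changed in 15 minutes
    save 300 10           # ... or 10 keys changed in 5 minutes
    save 60 10000         # ... or 10000 keys changed in 1 minute
    appendonly yes        # also log every write to an append-only file
    appendfsync everysec  # fsync the AOF once per second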
I am trying out Sidekiq alongside my Resque system in production. Now I know this isn't quite an apples-to-apples comparison, but my Resque jobs running on a Heroku worker take around 4s to complete. I am running only 50 threads on an Amazon large instance with Sidekiq, and the same jobs take on average around 18s. The jobs are very heavy on use of third-party APIs, so I am assuming my bottleneck is just my network connection, but I just wanted to see if anyone has suggestions as to how I can better configure Sidekiq.
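For context, the thread count is just what I set in Sidekiq's YAML config; this is an illustrative sketch rather than my exact file:

    # config/sidekiq.yml
    :concurrency: 50
    :queues:
      - default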
Sidekiq workers will only run in parallel if you use JRuby or Rubinius, because Ruby MRI has a global interpreter lock.
Sidekiq workers will only be faster if you use JRuby or Rubinius with thread-safe libraries that don't block the resources they use. So the main reason to use Sidekiq instead of Resque is memory savings.
I am new to Heroku and Resque.
I have a queue in Resque, and I need to hire and release workers automatically according to the current number of jobs in my queue. I tried hirefireapp, but it just hires workers while the queue expands and doesn't release any worker unless there are no jobs waiting in the queue. So I did some research and found out that there is no way to tell a worker not to take a new job after finishing its current one and to shut itself down. Resque developers and users have also pointed out this issue (https://github.com/defunkt/resque/issues/319) and created a keepalive branch of Resque (https://github.com/hone/resque/tree/keepalive). It seems to be the solution for my issue. However, since I am new to using Resque, I couldn't find out how to fire a worker via Resque safely.
If anyone with more experience in Resque and Heroku can help me, I will be really glad.
Thanks.
You'll want to run a separate process to control the scaling of workers.
resque-heroku-scaler is one option.
A single additional scaler process helps you manage workers efficiently.
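I don't know resque-heroku-scaler's internals, but the idea is roughly a loop like the sketch below: watch the queue and scale the worker formation accordingly. It assumes the platform-api gem, a HEROKU_OAUTH_TOKEN environment variable, and an app named my-app, none of which come from your setup:

    require "platform-api"
    require "resque" # assumes Resque is already pointed at the queue's Redis

    heroku = PlatformAPI.connect_oauth(ENV.fetch("HEROKU_OAUTH_TOKEN"))

    loop do
      pending  = Resque.info[:pending]   # jobs currently waiting across all queues
      quantity = pending.zero? ? 0 : 1   # scale the worker dyno down when idle
      heroku.formation.update("my-app", "worker", "quantity" => quantity)
      sleep 60
    end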
This isn't really what Resque is designed for as it's meant to be sitting there working a queue, not deciding whether or not to start up/shut down.
Personally, unless the money required to run the worker 24/7 is that hard to come by, I would just leave it running.