Is there a way to manually enqueue a Sidekiq job without using Ruby, by pushing the appropriate message into Redis? There must be some sort of convention for the message format it expects.
This is already covered in the FAQ: https://github.com/mperham/sidekiq/wiki/FAQ#how-do-i-push-a-job-to-sidekiq-without-ruby
Not sure why you would do this, but from Sidekiq's documentation: "Sidekiq is compatible with Resque. It uses the exact same message format as Resque so it can integrate into an existing Resque processing farm." I know that Resque enqueues a hash of data as a JSON string:
"{\"class\":\"NoOpWorker\",\"args\":[]}"
You can manually verify this by enqueuing a job at a console with:
Resque.enqueue_to "foo", NoOpWorker
And then see what the data is with a redis-cli command:
redis-cli lrange resque:queue:foo 0 100
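To push a job to Sidekiq itself, the same format works against Sidekiq's keys. A sketch from redis-cli (the jid is just a random hex string here, and the exact set of required fields varies by Sidekiq version, so check the FAQ entry above):

# register the queue; Sidekiq's client does this on every push (the Web UI reads it)
redis-cli sadd queues foo
# push a Resque-style payload; prefix the keys (e.g. sidekiq:queue:foo) if you use a redis namespace
redis-cli lpush queue:foo '{"class":"NoOpWorker","args":[],"queue":"foo","retry":true,"jid":"b4a577edbccf1d80"}'

A Sidekiq process watching the foo queue will then pick the job up like any other.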
But before proceeding, why would you want to do this? Why not just run a script or a rake task that enqueues the job through Sidekiq's normal API instead of hacking around it?
EDIT: Are you trying to interop between technologies?
Redis doesn't know anything about Ruby or Sidekiq.
So yeah, it's possible. It might require some work, and you might have to take versioning of the non-public (well, it is open source after all, so anything is public) API into account.
You could write a separate client process in any programming language and analyze the Redis keyspace. Read up on the implementation of Sidekiq's serialization. A quick look (I don't use Sidekiq) reveals that it uses simple JSON serialization: sidekiq/api.rb.
Hope this helps, TW
I'm building a monitoring service similar to Pingdom, but monitoring different aspects of a system, and I'm using Sidekiq to queue the tasks, which is working well. What I need to do is schedule sending out pings every minute. Rather than use a cron-based system, which would require spinning up a new Ruby instance every minute, I have gone down the route of using Sidetiq (notice the different spelling, with a "t"), which uses Sidekiq's own queue to schedule future tasks. This feels like a neat solution; however, I am concerned it may not be the most reliable way of scheduling tasks. If there are issues with the system (as there inevitably will be at some point), will this method of scheduling tasks be less reliable than a cron-based method, and why?
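For context, a Sidetiq-scheduled worker looks roughly like this (a minimal sketch; the worker name and body are made up):

class PingWorker
  include Sidekiq::Worker
  include Sidetiq::Schedulable

  # Sidetiq's ice_cube-based recurrence DSL
  recurrence { minutely }

  def perform
    # send the pings...
  end
end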
Thanks
You've given only a brief description of your system's needs, but I'll try to make an educated guess:
First of all, using Sidekiq means you'll also need a Redis instance, as well as a way to monitor the Sidekiq process (and possibly the Redis server) and restart it in case of failure.
A method based on cron tasks has fewer requirements and therefore far fewer ways to fail.
cron has been around for a long time; it's battle-tested and very, very reliable, but it has its drawbacks too.
That said, you can build a system with separate Redis instances in a master/slave configuration, use Redis Sentinel to fail over automatically if the master dies, put a monitoring/alerting system on top of that setup (you can use something super simple like http://contribsys.com/inspeqtor/ from the Sidekiq author), and also start several Sidekiq instances on different machines.
With all of that, you can have a quite reliable system for running Sidekiq with Sidetiq.
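For example, pointing Sidekiq at a Sentinel-managed master is mostly a matter of passing the Sentinel options through to the underlying Redis client. A sketch (the host names and the "mymaster" group name are placeholders):

# config/initializers/sidekiq.rb (hypothetical)
sentinel_options = {
  url: "redis://mymaster",           # the master group name Sentinel monitors
  sentinels: [
    { host: "sentinel1.example.com", port: 26379 },
    { host: "sentinel2.example.com", port: 26379 }
  ]
}

Sidekiq.configure_server { |config| config.redis = sentinel_options }
Sidekiq.configure_client { |config| config.redis = sentinel_options }

On failover, the Redis client asks the Sentinels for the new master instead of being pinned to one host.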
Hope it helps
In my Ruby on Rails project, I have to pull data from SQL Server into my MySQL database.
When my project is running on port 3000 and I start the pull, the whole system becomes busy.
I'm looking for a way to detect how many ports are running Ruby applications, and how to close one that is not in use.
Thanks in advance.
It's hard to understand exactly what you're asking for, but I'm assuming that when you synchronize the databases, the system becomes busy and you can't serve any pages. This is a perfect example of a task for a background job, which lets you run work like this without affecting the Rails application. The two gems that come to mind are Delayed_job and Resque. Outstanding screencasts on this type of setup are listed below as well.
http://github.com/collectiveidea/delayed_job
https://github.com/defunkt/resque/
http://railscasts.com/episodes/171-delayed-job
http://railscasts.com/episodes/271-resque
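As a rough sketch of the delayed_job route (SyncJob and DatabaseSync.pull_from_sqlserver are made-up names standing in for your existing pull logic):

# Any object with a #perform method can be enqueued.
class SyncJob
  def perform
    DatabaseSync.pull_from_sqlserver   # your existing pull code
  end
end

# enqueue from a controller, model, or rake task:
Delayed::Job.enqueue SyncJob.new

# then run the work outside your web process with:
#   rake jobs:work

The pull then runs in the worker process, so your app on port 3000 keeps serving pages.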
Is there any way to know whether a worker has finished a particular job/process in Resque?
Scenario: I have 5 workers doing a specific process, and I want to know when the process is done so I can proceed with another part of the code.
I am using Ruby 1.8.7 and Rails 3.1.1, if that is of any help.
You could try Gearman if you need to know this.
Log information from within your job code.
Use redis-cli to check whether your job's key has a value.
resque-web and resque-status can also help you.
You probably want to use something like resque-status: https://github.com/quirkey/resque-status .
If that doesn't quite meet your needs, you can always check the wiki plugin page for more possibilities: https://github.com/defunkt/resque/wiki/plugins
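As a rough sketch of how resque-status reports completion, with the resque-status gem loaded (SyncJob is a made-up name; hash syntax kept 1.8.7-friendly):

class SyncJob
  include Resque::Plugins::Status

  def perform
    # ... do the work, optionally reporting progress:
    # at(50, 100, "halfway there")
  end
end

# enqueuing returns a UUID you can poll from the rest of your code
uuid = SyncJob.create(:some => 'options')

status = Resque::Plugins::Status::Hash.get(uuid)
status.completed?   # => true once a worker has finished the job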
It's also not hard to store the fact of job completion as an extra field in your database.
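For example, with a plain Resque job and an ActiveRecord model (the Import model and completed_at column are made up):

class ProcessJob
  @queue = :processing

  def self.perform(import_id)
    # ... do the actual work ...
    Import.find(import_id).update_attribute(:completed_at, Time.now)
  end
end

The other part of your code can then just check whether completed_at is set.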
We have a requirement to build a small Sinatra app which will capture events from an external API and add them to a queue for processing by a Rails application. We could be receiving hundreds of thousands of events per day.
Given that Resque rules itself out by not being able to guarantee that jobs won't get lost, what other options are out there? We've looked at delayed_job, but it doesn't play well with Sinatra. What other alternatives are there for something fast, reliable and scalable?
Have you looked at Beanstalk?
http://kr.github.com/beanstalkd/
http://www.igvita.com/2010/05/20/scalable-work-queues-with-beanstalk/
There's an example Sinatra/Beanstalk app on GitHub:
https://github.com/adamwiggins/clockwork-sinatra-beanstalk
Alternatively you might want to check out RabbitMQ with ruby-amqp, but I think I'd first try the Beanstalk approach (it handles the kind of workload you describe for us):
https://github.com/ruby-amqp/amqp
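For example, with the beanstalk-client gem (an assumption; there are several Ruby clients), the producer and consumer sides look roughly like this:

require 'sinatra'
require 'beanstalk-client'

# producer: the Sinatra app pushes raw event payloads onto the queue
beanstalk = Beanstalk::Pool.new(['127.0.0.1:11300'])

post '/events' do
  beanstalk.put(request.body.read)
  status 202
end

# consumer (a separate worker process): reserve blocks until a job arrives
loop do
  job = beanstalk.reserve
  handle_event(job.body)   # handle_event stands in for your Rails-side processing
  job.delete               # delete only after successful processing
end

Because a reserved job is returned to the queue if the worker dies before deleting it, jobs aren't silently lost the way you're worried they would be with Resque.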
This is intended to be a lightweight, generic solution, although the immediate problem is with an IIS CGI application that needs to log a timeline of events (at one-second resolution) to troubleshoot a situation where a later request ends up in the MySQL database BEFORE an earlier one!
So it boils down to logging debug statements to a single text file.
I could write a service that manages a queue as suggested in this thread:
Issue writing to single file in Web service in .NET
but deploying the service on each machine is a pain
or I could use a global mutex, but this would require each instance to open and close the file for each write
or I could use a database, which would handle this for me, but it doesn't make sense to use a database like MySQL to troubleshoot a timeline issue with itself. SQLite is another possibility, but this thread
http://www.perlmonks.org/?node_id=672403
suggests that it is not a good choice either.
I am really looking for a simple approach, something as blunt as writing to individual files for each process and consolidating them occasionally with a scheduled app. I do not want to over-engineer this, nor spend a week implementing it. It is only needed occasionally.
Suggestions?
Try the simplest solution first: each write to the log opens and closes the file. If you experience problems with this (which you probably won't), look for another solution.
You can use file locking: lock the file for writing, write the message, unlock.
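The same idea in Ruby, for illustration (the path is made up; flock blocks until the current writer releases the lock):

def log_line(message)
  File.open("C:/logs/debug.log", "a") do |f|
    f.flock(File::LOCK_EX)                          # wait for exclusive access
    f.puts("#{Time.now.strftime('%H:%M:%S')} #{message}")
  end                                               # closing releases the lock
end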
My suggestion, to preserve performance, is asynchronous logging: why not send your log data over UDP to a service listening on a port, which then writes it to the log file?
I would also suggest some kind of central logger that each process can call asynchronously. Whether the communication is UDP, RPC, or something else is an implementation detail.
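For illustration, the whole UDP scheme is only a few lines in Ruby (the port number and file name are arbitrary):

require 'socket'

# sender side: each CGI process fires and forgets
UDPSocket.new.send("#{Time.now} some debug message", 0, "127.0.0.1", 9999)

# listener side: one process serializes all writes to the file
sock = UDPSocket.new
sock.bind("127.0.0.1", 9999)
File.open("debug.log", "a") do |log|
  loop do
    msg, _addr = sock.recvfrom(65535)
    log.puts(msg)
    log.flush
  end
end

The trade-off is that UDP is fire-and-forget: under load you can drop the occasional message, which is usually acceptable for debug logging.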
Even though it's an old post, has anyone got an idea why not to use the following approach:
Creating/opening the file with a share mode of FILE_SHARE_WRITE.
Having a named global mutex, and opening it.
Whenever a file write is desired, lock the mutex first, then write to the file.
Any input?