I am using resque to process some background jobs (like cron jobs). So I fire these two rake tasks:
$> rake resque:work
$> rake resque:scheduler
My question is: how do I find out whether the worker and the scheduler are running? For the worker I can use
Resque::Worker.all, which lists all running workers, and that works fine. But I have not been able to find out how to check whether the scheduler is running (or how to list all scheduled jobs). I tried Resque::Scheduler.print_schedule, but it does not print the schedules and always returns an empty hash, even though I have run rake resque:scheduler.
The method you need is
Resque.get_schedules
It returns a hash of the currently scheduled jobs and their related metadata, e.g.:
{
  "job1" => {
    "every" => "30s",
    "class" => "JobClass",
    "args" => [1],
    "queue" => "default",
    "description" => "Testing..."
  }
}
If you aren't dynamically adding any jobs, i.e. you aren't using resque-scheduler's dynamic setting
Resque::Scheduler.dynamic = true
Then
Resque.schedule
will give you the same result.
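For a quick check from a console, something like the following minimal sketch combines the worker check and the schedule check (it assumes resque and resque-scheduler are loaded and Redis is reachable):
require 'resque'
require 'resque-scheduler' # older versions of the gem use require 'resque_scheduler'

# Workers registered by `rake resque:work`; an empty array means none are running
Resque::Worker.all.each { |worker| puts worker }

# Schedule entries loaded by `rake resque:scheduler`
schedules = Resque.get_schedules
puts schedules.empty? ? 'no schedules found' : schedules.keys.join(', ')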
I am also searching for a way to list all the scheduled jobs...
Resque.count_all_scheduled_jobs
This returns only the number of jobs.
I tried a couple of the methods in
Resque.metaclass.instance_methods - Class.methods
but no luck. I'll have to dig into the GitHub source.
I've spent several days (roughly 100 hours) on this but can't find the fix.
Here's my setup (using Rails 4.2.8):
class CustomJob < ActiveJob::Base
  def perform(*args)
    filename = args.first
    data = File.read(filename)
    # process that data
  end
end
When I run Delayed::Job.enqueue CustomJob.new('filename'), I get the error mentioned in the subject. The job is created and added to the db, but the error message is "Job failed..."
I have this line:
require 'custom_job'
in several places including script/delayed_job.rb, config/initializers/delayed_jobs.rb, config/initializers/custom_job.rb and the file in which I'm calling the job.
I also added this:
config.autoload_paths += Dir[Rails.root.join('app', 'jobs')]
config.active_job.queue_adapter = :delayed_job
to my config/application.rb file
And this:
config.threadsafe! unless defined?($rails_rake_task) && $rails_rake_task
I've also restarted my server after every change and verified that delayed_job was running using:
dir$ RAILS_ENV=development script/delayed_job status
delayed_job: running [pid 64503]
Sources:
delayed_job fails jobs when running as a daemon. Runs fine when using rake jobs:work
DelayedJob: "Job failed to load: uninitialized constant Syck::Syck"
Rails Custom Delayed Job - uninitialized constant
model classes not loading in Delayed Job when using threadsafe
Can you also try adding this line to your config/application.rb file:
config.eager_load_paths += Dir[Rails.root.join('app', 'jobs')]
I always feel like the answer is obvious...AFTER I figure it out.
The problem was that I was using a shared database and there were existing workers accessing this DB. Though I was restarting and refreshing my local instance of the server, the other instances were trying to run my jobs and the OTHER workers were causing the error, not my local instance.
Solution: check whether other instances of delayed_job are using the same table as the code you're testing/building/using. If so, use a separate DB if possible.
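If you suspect the same thing is happening to you, a hypothetical console check (locked_by is a standard delayed_job column) is to look at which worker processes currently hold locks in the shared table:
# Workers holding locks in the delayed_jobs table; entries naming other
# hosts or pids point to outside workers picking up your jobs.
Delayed::Job.where.not(locked_by: nil).pluck(:locked_by).uniq.each do |worker|
  puts worker
end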
I have a rake file with three tasks, which I need to execute in order.
require 'rake/testtask'

file 'some_binary_file.elf' do
  puts 'fetching file from server ...'
  # this task connects to a server and downloads some binaries
  # it takes a few seconds to run
end

task flash_application: 'some_binary_file.elf' do
  puts 'flashing the file to the hardware ...'
  # this task copies a binary file to the flash memory
  # of some external hardware, also takes a few seconds
end

Rake::TestTask.new(:hardware) do |t|
  puts 'running tests ...'
  t.test_files = FileList['test/**/*_test.rb']
end

task default: [:flash_application, :hardware]
When I run $ rake in a terminal, it produces the following output:
running tests ... < ---- (not actually running)
fetching file from server ...
flashing the file to the hardware ...
I would expect rake to run the tasks in the order I specified, but it seems to always execute the test task first. Remarkably, the tests themselves do not run, yet the output from the task creation is produced anyway.
When you want to run tasks in a specific order, you must make them depend on each other. In your case, :hardware should then depend on :flash_application.
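For example, a minimal sketch of that dependency chain, reusing the task names from the question:
require 'rake/testtask'

task flash_application: 'some_binary_file.elf'

Rake::TestTask.new(:hardware) do |t|
  t.test_files = FileList['test/**/*_test.rb']
end

# running `rake hardware` (or `rake`) now flashes first, then runs the tests
task hardware: :flash_application
task default: :hardware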
Found the bug - this problem was not Ruby/Rake specific. The flash_application task changes the working directory. Because of that, there is no Rakefile with a 'hardware' task in the current working directory. But researching this bug yielded some interesting insights.
Ruby arrays are ordered; if one wants to execute tasks in a particular order, it is sufficient to define them in execution order in an array, i.e.
task some_task: [:first, :second, :third]
Rake::TestTask.new defines a plain old rake task when called. That means that when rake loads the Rakefile, Ruby creates an instance of Rake::TestTask, and any code passed into the constructor block is executed/yielded during that phase. This explains the behavior described in the original question.
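A tiny sketch of that load-time behavior (the :greeting task name is just for illustration):
require 'rake/testtask'

puts 'this runs as soon as the Rakefile is loaded'

Rake::TestTask.new(:hardware) do |t|
  puts 'this also runs at load time, before any task executes'
  t.test_files = FileList['test/**/*_test.rb']
end

task :greeting do
  puts 'this only prints when the greeting task is actually invoked'
end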
I have this small ruby application, not Ruby on Rails - pure Ruby.
I've followed the instructions and I can queue stuff and see that everything is correctly queued using resque-web.
However, I have a problem starting a worker. The documentation says to run bin/resque work to launch a worker.
Doing so triggers the message -bash: bin/resque: No such file or directory
Everywhere on the Internet, people have the same problem, but with Rails apps, not pure Ruby. The solution seems to involve including something in the Rakefile, which I don't have.
How can I launch my worker? Thanks a lot!
The key to solving your problem is rake.
Resque includes three rake tasks. All you need to use them is a Rakefile that requires 'resque/tasks'. When you ask Rake for its list of tasks (rake -T), you'll see the three that Resque provides:
rake resque:failures:sort # Sort the 'failed' queue for the redis_multi_queue failure backend
rake resque:work # Start a Resque worker
rake resque:workers # Start multiple Resque workers
The one you're looking for (to start one worker) is resque:work. You tell it which queue to listen to using the QUEUE environment variable. So starting your worker would be something like:
QUEUE=your_queue_name rake resque:work.
Alternatively, you can listen to all queues using QUEUES=*.
EDIT:
Here's a more complete example. Create a file called rakefile:
require 'resque/tasks'
require 'resque'

class Worker
  @queue = :default

  def self.perform(my_arg)
    puts my_arg
  end
end

task :hello do
  Resque.enqueue(Worker, 'hello')
end
Then in one terminal type TERM_CHILD=1 QUEUE=default rake resque:work. This will start the worker, watching the queue called default. It will print out any argument a job passes to its perform class method.
In a second terminal type rake hello. This will enqueue a job for the Worker class, passing the string hello as an argument (which will be passed to the perform method in the Worker class). It knows to push to the default queue by looking at the @queue property on Worker.
You'll see hello printed in the terminal where you started the worker.
This example doesn't do anything useful, and you wouldn't put all that in your rakefile, but I think it's a good starting point for you to start modifying it and build your own.
I'm running a series of Rails/Sinatra apps behind nginx + unicorn, with zero-downtime deploys. I love this setup, but it takes a while for Unicorn to finish restarting, so I'd like to send some sort of notification when it finishes.
The only callbacks I can find in Unicorn docs are related to worker forking, but I don't think those will work for this.
Here's what I'm looking for from the bounty: the old unicorn master starts the new master, which then starts its workers, and then the old master stops its workers and lets the new master take over. I want to execute some ruby code when that handover completes.
Ideally I don't want to implement any complicated process monitoring in order to do this. If that's the only way, so be it. But I'm looking for easier options before going that route.
I've built this before, but it's not entirely simple.
The first step is to add an API that returns the git SHA of the current revision of code deployed. For example, you deploy AAAA. Now you deploy BBBB and that will be returned. For example, let's assume you added the api "/checks/version" that returns the SHA.
Here's a sample Rails controller to implement this API. It assumes the Capistrano REVISION file is present and reads the current release SHA into memory at app load time:
class ChecksController < ApplicationController
  VERSION = File.read(File.join(Rails.root, 'REVISION')) rescue 'UNKNOWN'

  def version
    render(:text => VERSION)
  end
end
You can then poll the local unicorn for the SHA via your API and wait for it to change to the new release.
Here's an example using Capistrano, that compares the running app version SHA to the newly deployed app version SHA:
namespace :deploy do
  desc "Compare running app version to deployed app version"
  task :check_release_version, :roles => :app, :except => { :no_release => true } do
    timeout_at = Time.now + 60
    while (Time.now < timeout_at) do
      expected_version = capture("cat /data/server/current/REVISION")
      running_version = capture("curl -f http://localhost:8080/checks/version; exit 0")

      if expected_version.strip == running_version.strip
        puts "deploy:check_release_version: OK"
        break
      else
        puts "=[WARNING]==========================================================="
        puts "= Stale Code Version"
        puts "=[Expected]=========================================================="
        puts expected_version
        puts "=[Running]==========================================================="
        puts running_version
        puts "====================================================================="
        Kernel.sleep(10)
      end
    end
  end
end
You will want to tune the timeouts/retries on the polling to match your average app startup time. This example assumes a capistrano structure, with app in /data/server/current and a local unicorn on port 8080.
If you have full access to the box, you could have the Unicorn startup script launch another script which loops, checking for /proc/<unicorn-pid>/exe, which will link to the running process.
See: Detect launching of programs on Linux platform
Update
Based on the changes to the question, I see two options - neither of which is great, but they're options nonetheless...
You could have a cron job run a Ruby script every minute that monitors the PID directory's mtime and then checks that the PID files exist (this tells you that a file in the directory has changed and that the process is running), executing additional code if both conditions are true. Again, this is ugly and it's a cron job that runs every minute, but it's a minimal setup; see the sketch below.
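A rough sketch of that script, with the pid paths as assumptions (adjust them to wherever your Unicorn writes its pid file):
# check_unicorn_restart.rb - run from cron every minute (paths are hypothetical)
PID_DIR  = '/data/server/shared/pids'
PID_FILE = File.join(PID_DIR, 'unicorn.pid')

if File.exist?(PID_FILE) && (Time.now - File.mtime(PID_DIR)) < 60
  pid = File.read(PID_FILE).to_i
  begin
    Process.kill(0, pid)  # signal 0 only checks that the process exists
    # the master restarted within the last minute and is alive:
    # run your post-restart notification code here
  rescue Errno::ESRCH
    # pid file exists but the process is gone - nothing to do
  end
end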
I know you want to avoid complicated monitoring, but this is how I'd try it... I would use monit to monitor those processes, and when they restart, kick off a Ruby script which sleeps (to ensure start-up), then checks the status of the processes (perhaps using monit itself again). If this all returns properly, execute additional Ruby code.
Option #1 isn't clean, but as I write the monit option, I like it even better.
Using RubyMine 3.0, I set up a Rake configuration to run a Unit Test. Then I set some breakpoints, then ran the Rake task. No breakpoints were hit, the test just executed like normal and then exited.
Does the RubyMine debugger not work through Rake?
Try this:
Go to Run -> Edit Configurations
Expand the Rake node and add a new Rake configuration for your rake task (if not already done)
Go to Run -> Debug...
Select your configured rake task.
The Edit/Debug Configurations tab can be a little confusing when setting up rake tasks. I will assume you followed this approach:
Run > Edit Configurations
Select Rake from the List and select the + button (Add New Configuration)
You are greeted with a Configuration tab:
Name
The name attribute just assigns a unique name for this task. You can call it whatever you want.
Task Name
This one is important for rake tasks. This specifies the name of the rake task to be executed. So let's say you wanted to run "rake db:migrate" in debug mode, then for the task name here, you would put "db:migrate" without the quotes.
Turn on invoke/execute tracing, enable full backtrace (--trace)
This option is useful for turning on the standard rake --trace option.
Ruby Arguments
The other useful option is to specify the arguments to be passed to the Ruby interpreter.
Those are the main options. Now you can use Run > Debug and it will stop at breakpoints in the rake task itself.
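If you just want to verify that breakpoints are hit, a hypothetical throwaway task (saved, say, as lib/tasks/example.rake in a Rails project) is enough:
# lib/tasks/example.rake - hypothetical task for trying out the debugger
namespace :example do
  desc 'Minimal task for testing the RubyMine debugger'
  task :hello do
    message = 'hello from rake'
    puts message  # set a breakpoint here, use "example:hello" as the task name, then Run > Debug
  end
end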
The above answer is correct. I just want to elaborate on it a little for the case of a mountable engine. In that case, I had to do the following:
Run > Edit Configuration > Rake
Enter the task name, e.g. scan_spreadsheet
Change the working directory to your main application or dummy application, not the engine root directory.
If you are using RVM with multiple gemsets, select the second option for the Ruby SDK and choose the correct gemset.