Ruby app and Resque: can't start workers - ruby

I have this small ruby application, not Ruby on Rails - pure Ruby.
I've followed the instructions and I can queue stuff and see that everything is correctly queued using resque-web.
However, I have a problem starting a worker. The documentation says to run bin/resque work to launch a worker.
Doing so triggers the message -bash: bin/resque: No such file or directory
Everywhere on the Internet, people have the same problem, but for Rails apps, not pure Ruby. The solution seems to involve adding something to the Rakefile, which I don't have.
How can I launch my worker? Thanks a lot!

The key to solving your problem is rake.
Resque includes three rake tasks. All you need is a Rakefile that requires 'resque/tasks' to use them. When you ask Rake for its list of tasks (rake -T) you'll see the three tasks that ship with Resque:
rake resque:failures:sort # Sort the 'failed' queue for the redis_multi_queue failure backend
rake resque:work # Start a Resque worker
rake resque:workers # Start multiple Resque workers
The one you're looking for (to start one worker) is resque:work. You tell it which queue to listen to using the QUEUE environment variable, so starting your worker would be something like:
QUEUE=your_queue_name rake resque:work
Alternatively, you can listen to all queues using QUEUES=*.
EDIT:
Here's a more complete example. Create a file called Rakefile:
require 'resque/tasks'
require 'resque'

class Worker
  @queue = :default

  def self.perform(my_arg)
    puts my_arg
  end
end

task :hello do
  Resque.enqueue(Worker, 'hello')
end
Then in one terminal type TERM_CHILD=1 QUEUE=default rake resque:work. This will start the worker, watching the queue called default. It will print out any argument a job passes to its perform class method.
In a second terminal type rake hello. This will enqueue a job for the Worker class, passing the string hello as an argument (which will be passed to the perform method in the Worker class). It knows to push to the default queue by looking at the @queue class instance variable on Worker.
You'll see hello printed in the terminal where you started the worker.
This example doesn't do anything useful, and you wouldn't put all of that in your Rakefile, but I think it's a good starting point for you to modify and build on.

Related

Rails 4 Delayed_job error - Job failed to load: undefined class/module CustomJob

I've spent several days on this and about 100 hours but can't get the fix.
Here's my setup (using Rails 4.2.8)
class CustomJob < ActiveJob::Base
  def perform(*args)
    filename = args.first
    data = File.read(filename)
    # process that data
  end
end
When I run Delayed::Job.enqueue CustomJob.new('filename'), I get the error mentioned in the subject. The job is created and added to the db, but the error message is "Job failed..."
I have this line:
require 'custom_job'
in several places including script/delayed_job.rb, config/initializers/delayed_jobs.rb, config/initializers/custom_job.rb and the file in which I'm calling the job.
I also added this:
config.autoload_paths += Dir[Rails.root.join('app', 'jobs')]
config.active_job.queue_adapter = :delayed_job
to my config/application.rb file
And this:
config.threadsafe! unless defined?($rails_rake_task) && $rails_rake_task
I've also restarted my server after every change. And verified that delayed_job was running using:
dir$ RAILS_ENV=development script/delayed_job status
delayed_job: running [pid 64503]
Sources:
delayed_job fails jobs when running as a daemon. Runs fine when using rake jobs:work
DelayedJob: "Job failed to load: uninitialized constant Syck::Syck"
Rails Custom Delayed Job - uninitialized constant
model classes not loading in Delayed Job when using threadsafe
Can you also try adding this line to your config/application.rb file:
config.eager_load_paths += Dir[Rails.root.join('app', 'jobs')]
I always feel like the answer is obvious...AFTER I figure it out.
The problem was that I was using a shared database and there were existing workers accessing this DB. Though I was restarting and refreshing my local instance of the server, the other instances were trying to run my jobs, and those OTHER workers were causing the error, not my local instance.
Solution: Check whether other instances of delayed_job are using the same table as the code you're testing/building. If so, use another DB if possible.
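One way to spot this situation (a hypothetical diagnostic, not part of delayed_job's API): delayed_job records which worker locked a job in the locked_by column, using its default worker name format "host:HOSTNAME pid:PID", so grouping those values by host shows whether machines other than yours are picking jobs off the shared table. A minimal sketch:

```ruby
# Hypothetical helper: group delayed_job locked_by strings by hostname to
# spot foreign workers sharing your database. The "host:HOSTNAME pid:PID"
# format is delayed_job's default worker name.
def lockers_by_host(locked_by_values)
  locked_by_values.compact.group_by { |s| s[/host:(\S+)/, 1] }
end

# In a Rails console you would feed it real data, e.g.:
#   lockers_by_host(Delayed::Job.pluck(:locked_by))
p lockers_by_host(['host:my-laptop pid:64503', 'host:staging-1 pid:9001'])
```

If any host you don't recognize shows up, another machine is working your queue.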

Order of Rake Test Task

I have a Rakefile with three tasks, which I need to execute in order.
require 'rake/testtask'
file 'some_binary_file.elf' do
  puts 'fetching file from server ...'
  # this task connects to a server and downloads some binaries
  # it takes a few seconds to run
end

task flash_application: 'some_binary_file.elf' do
  puts 'flashing the file to the hardware ...'
  # this task copies a binary file to the flash memory
  # of some external hardware, also takes a few seconds
end

Rake::TestTask.new(:hardware) do |t|
  puts 'running tests ...'
  t.test_files = FileList['test/**/*_test.rb']
end

task default: [:flash_application, :hardware]
when I run $ rake in a terminal, it produces the following output.
running tests ... < ---- (not actually running)
fetching file from server ...
flashing the file to the hardware ...
I would expect rake to run the tasks in the order I specified, but it seems to always execute the test task first. It is remarkable that the tests do not actually run, yet the output from the task creation is produced anyway.
When you want to run the tasks in a specific order, you must make them depend on each other. In your case, :hardware should depend on :flash_application.
Found the Bug - This problem was not ruby / rake specific. The flash_application task changes the working directory. Because of that, there is no Rakefile with a task 'hardware' in the current working directory. But researching for this bug yielded some interesting insights.
Ruby arrays are ordered; if one wants to execute tasks in order, it is sufficient to list them in execution order in the prerequisites array, i.e.
task some_task: [:first, :second, :third]
Rake::TestTask.new defines a plain old rake task when called. That means that when rake is invoked, ruby creates an instance of Rake::TestTask, and all code passed into the constructor block is executed/yielded during this load phase. This explains the behavior described in the original question.
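The ordering of array prerequisites can be demonstrated with plain rake tasks; a self-contained sketch (task names are made up, and include Rake::DSL stands in for being inside a Rakefile):

```ruby
require 'rake'
include Rake::DSL  # makes `task` available outside a Rakefile

order = []
task(:first)  { order << :first }
task(:second) { order << :second }
task(:third)  { order << :third }

# with serial execution, array prerequisites run in the listed order
task all: [:first, :second, :third]

Rake::Task[:all].invoke
p order  # => [:first, :second, :third]
```

Note that the blocks here run only when the task is invoked, unlike the code inside a Rake::TestTask constructor block, which runs at load time.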

Setting up resque-pool over a padrino Rakefile throwing errors

I have set up a Padrino bus application using super-cool Resque for handling background processes and ResqueBus for pub/sub of events.
The ResqueBus setup creates a resque queue and a worker to work on it. Everything up to here works fine. But since ResqueBus only creates a single worker for a single queue, and the processing in my bus app can go haywire when many events are published and subscribed, a single worker per application queue seems inefficient. So I thought of integrating the resque-pool gem to manage the worker processes.
I have followed all the steps the resque-pool gem specifies and edited my Rakefile:
# Add your own tasks in files placed in lib/tasks ending in .rake,
# for example lib/tasks/capistrano.rake, and they will automatically be available to Rake.
require File.expand_path('../config/application', __FILE__)
Ojus::Application.load_tasks
require 'resque/pool/tasks'
# this task will get called before resque:pool:setup
# and preload the rails environment in the pool manager
task "resque:setup" => :environment do
  # generic worker setup, e.g. Hoptoad for failed jobs
end

task "resque:pool:setup" do
  # close any sockets or files in pool manager
  ActiveRecord::Base.connection.disconnect!
  # and re-open them in the resque worker parent
  Resque::Pool.after_prefork do |job|
    ActiveRecord::Base.establish_connection
  end
end
Now I tried to run this resque-pool command.
resque-pool --daemon --environment production
This throws an error like this.
/home/ubuntu/.rvm/gems/ruby-2.0.0-p451@notification-engine/gems/activerecord-4.1.7/lib/active_record/connection_adapters/connection_specification.rb:257:in `resolve_symbol_connection': 'default_env' database is not configured. Available: [:development, :production, :test] (ActiveRecord::AdapterNotSpecified)
I tried to debug this and found out that it throws an error at line
ActiveRecord::Base.connection.disconnect!
For now I have removed this line and everything seems to work fine. But this may cause a problem, because if we restart the padrino application the old ActiveRecord connections will be left hanging around.
I just wanted to know if there is any workaround for this problem, so that I can run the resque-pool command while closing all the ActiveRecord connections.
It would have been helpful if you had included the database.rb file of your Padrino app.
Never mind, you can try
defined?(ActiveRecord::Base) && ActiveRecord::Base.connection.disconnect!
instead of ActiveRecord::Base.connection.disconnect!
and
ActiveRecord::Base.establish_connection(ActiveRecord::Base.configurations[Padrino.env])
instead of ActiveRecord::Base.establish_connection()
To establish a connection with ActiveRecord you have to pass the environment you want to connect to as a parameter; otherwise it looks for 'default_env', which is the ActiveRecord default.
Check out the source code.

Is it possible to send a notification when a Unicorn master finishes a restart?

I'm running a series of Rails/Sinatra apps behind nginx + unicorn, with zero-downtime deploys. I love this setup, but it takes a while for Unicorn to finish restarting, so I'd like to send some sort of notification when it finishes.
The only callbacks I can find in Unicorn docs are related to worker forking, but I don't think those will work for this.
Here's what I'm looking for from the bounty: the old unicorn master starts the new master, which then starts its workers, and then the old master stops its workers and lets the new master take over. I want to execute some ruby code when that handover completes.
Ideally I don't want to implement any complicated process monitoring in order to do this. If that's the only way, so be it. But I'm looking for easier options before going that route.
I've built this before, but it's not entirely simple.
The first step is to add an API that returns the git SHA of the current revision of code deployed. For example, you deploy AAAA. Now you deploy BBBB and that will be returned. For example, let's assume you added the api "/checks/version" that returns the SHA.
Here's a sample Rails controller to implement this API. It assumes capistrano REVISION file is present, and reads current release SHA into memory at app load time:
class ChecksController < ApplicationController
  VERSION = File.read(File.join(Rails.root, 'REVISION')) rescue 'UNKNOWN'

  def version
    render(:text => VERSION)
  end
end
You can then poll the local unicorn for the SHA via your API and wait for it to change to the new release.
Here's an example using Capistrano, that compares the running app version SHA to the newly deployed app version SHA:
namespace :deploy do
  desc "Compare running app version to deployed app version"
  task :check_release_version, :roles => :app, :except => { :no_release => true } do
    timeout_at = Time.now + 60
    while Time.now < timeout_at do
      expected_version = capture("cat /data/server/current/REVISION")
      running_version = capture("curl -f http://localhost:8080/checks/version; exit 0")
      if expected_version.strip == running_version.strip
        puts "deploy:check_release_version: OK"
        break
      else
        puts "=[WARNING]==========================================================="
        puts "= Stale Code Version"
        puts "=[Expected]=========================================================="
        puts expected_version
        puts "=[Running]==========================================================="
        puts running_version
        puts "====================================================================="
        Kernel.sleep(10)
      end
    end
  end
end
You will want to tune the timeouts/retries on the polling to match your average app startup time. This example assumes a capistrano structure, with app in /data/server/current and a local unicorn on port 8080.
If you have full access to the box, you could have the Unicorn startup script launch another script that loops, checking for /proc/<unicorn-pid>/exe, which links to the running executable.
See: Detect launching of programs on Linux platform
Update
Based on the changes to the question, I see two options - neither of which are great, but they're options nonetheless...
You could have a cron job that runs a Ruby script every minute, monitoring the PID directory's mtime and checking that the PID files exist (together these tell you that something in the directory changed and the processes are running), then executing additional code if both conditions are true. Again, this is ugly and is a cron that runs every minute, but it's minimal setup.
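A sketch of that cron-driven check (the paths and the notification hook are assumptions, not Unicorn conventions): remember the pid file's mtime between runs, and report a restart when the mtime has advanced and the process is alive.

```ruby
# Compare the pid file's mtime to the value remembered in a stamp file;
# a newer mtime plus a live process suggests the master was restarted.
def unicorn_restarted?(pid_file, stamp_file)
  return false unless File.exist?(pid_file)

  mtime = File.mtime(pid_file).to_i
  last  = File.exist?(stamp_file) ? File.read(stamp_file).to_i : 0
  pid   = File.read(pid_file).to_i

  alive = begin
    Process.kill(0, pid)  # signal 0 checks existence without signalling
    true
  rescue Errno::ESRCH, Errno::EPERM
    false
  end

  return false unless alive && mtime > last

  File.write(stamp_file, mtime.to_s)
  true  # the caller would fire its notification here
end

# e.g. from cron: unicorn_restarted?('/var/run/unicorn.pid', '/tmp/unicorn.stamp')
```

This only detects restarts at second granularity and one minute of latency, which is the inherent trade-off of the cron approach.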
I know you want to avoid complicated monitoring, but this is how I'd try it... I would use monit to monitor those processes, and when they restart, kick off a Ruby script which sleeps (to ensure start-up), then checks the status of the processes (perhaps using monit itself again). If this all returns properly, execute additional Ruby code.
Option #1 isn't clean, but as I write the monit option, I like it even better.

Finding schedules in resque - ruby

I am using resque to process some background jobs (like cron jobs). So I fire these two rake tasks:
$> rake resque:work
$> rake resque:scheduler
My question is: how do I find out whether the worker and the scheduler are running? For the worker I can use
Resque::Worker.all, which lists all running workers, and it works fine. But I have not been able to find out whether the scheduler is running (or to list all scheduled jobs). I tried Resque::Scheduler.print_schedule but that does not print the schedules and always returns an empty hash, even after running rake resque:scheduler.
The method you need is
Resque.get_schedules
It displays a hash of currently scheduled jobs and their related metadata.
{
  "job1" => {
    "every" => "30s",
    "class" => "JobClass",
    "args" => [1],
    "queue" => "default",
    "description" => "Testing..."
  }
}
If you aren't dynamically adding any jobs, i.e. you aren't using resque-scheduler's dynamic setting
Resque::Scheduler.dynamic = true
Then
Resque.schedule
will give you the same result.
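Once you have that hash (from Resque.get_schedules or Resque.schedule), summarising it is plain Ruby; a small sketch over the structure shown above, with no Redis needed once the hash is in hand:

```ruby
# Summarise each schedule's job class and frequency from the hash shape
# returned by Resque.get_schedules.
def summarize_schedules(schedules)
  schedules.map { |name, spec| "#{name}: #{spec['class']} every #{spec['every']}" }
end

schedules = {
  'job1' => {
    'every' => '30s', 'class' => 'JobClass', 'args' => [1],
    'queue' => 'default', 'description' => 'Testing...'
  }
}
p summarize_schedules(schedules)  # => ["job1: JobClass every 30s"]
```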
I am also searching for a way to list all the scheduled jobs...
Resque.count_all_scheduled_jobs
displays the number only.
I tried a couple of methods in
Resque.metaclass.instance_methods - Class.methods
but no luck. I'll have to dig into their GitHub source.