Resque Workers working on the wrong queue - ruby

I have a few Resque workers running, each started in a separate terminal window like so:
QUEUE=queue_1 rake environment resque:work
QUEUE=queue_2 rake environment resque:work
Queue 1 started first, then queue 2. The problem is, no matter what QUEUE options I pass to new workers, they just keep working on queue 1 -- even if I shut both down. Might this be a configuration problem? I haven't seen this issue mentioned anywhere.

Are you explicitly setting the ENV['QUEUE'] environment variable in the "environment" or "resque:setup" tasks defined in the Rakefile?
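For example, a Rakefile along these lines (a hypothetical sketch, not necessarily your actual file) would produce exactly this behaviour, because resque:work only reads ENV['QUEUE'] after the environment and resque:setup tasks have run:

require 'resque/tasks'

task :environment do
  require File.expand_path('config/environment', __dir__)
end

namespace :resque do
  # Hypothetical setup task: hard-coding ENV['QUEUE'] here silently
  # overrides the QUEUE=queue_2 you pass on the command line.
  task :setup => :environment do
    ENV['QUEUE'] = 'queue_1'
  end
end

If something like that is present, drop the hard-coded assignment (or only set it when ENV['QUEUE'] is empty) and the workers will pick up whichever queue you pass in.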

Related

How to run a specific worker in ManageIQ?

Sometimes I need to run a specific MIQ worker in the foreground.
rake evm:start runs all the workers, but if I need just one, how can I do that?
If you are unsure which workers you need, you can do the following:
run the evm server normally: bundle exec rake evm:start, and see which worker types are running: bundle exec rake evm:status
kill evm server: bundle exec rake evm:stop
start a single worker in the foreground:
ruby lib/workers/bin/run_single_worker.rb MiqWorkerClassHere
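For example, picking one of the classes that evm:status lists (MiqGenericWorker here is just an example class name):
ruby lib/workers/bin/run_single_worker.rb MiqGenericWorker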

After jenkins job complete, the service also down

I have run into a problem.
I used Jenkins to install haproxy and start the service, but after the job completes and the executor is freed, the haproxy daemon also disappears.
If I add a sleep 30s after starting the service, haproxy stays alive for those 30 seconds, but after that the daemon goes down again.
This behaviour is by design, as explained in ProcessTreeKiller. To avoid daemons spawned by the Jenkins build being terminated, add
export BUILD_ID=dontKillMe
to the beginning of your shell step.
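A build step would then look something like this (the install and start commands below are placeholders for whatever your job actually runs):

export BUILD_ID=dontKillMe
yum install -y haproxy    # placeholder install step
service haproxy start     # the daemon now survives the end of the build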

Concurrency with Resque on Heroku (running multiple workers per node)

Pardon my ignorance, but is there a way to increase the number of processes per dyno for Resque workers? And if so, how?
I'm currently using Unicorn to add concurrency to the web dynos, which has been working great so far. I would like to extend this to the Resque workers. I followed Heroku's guide to set up the concurrency.
Update: The solution below works, but is not recommended. For resque concurrency on heroku use the resque-pool gem.
It is possible if you use the COUNT option. Your Procfile will look something like:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
resque: env TERM_CHILD=1 COUNT=2 RESQUE_TERM_TIMEOUT=6 QUEUE=* bundle exec rake resque:workers
It is important to note that the rake task in the Procfile is resque:workers and not resque:work.
Update Explanation
There are major problems with the COUNT option and the rake resque:workers invocation in production on Heroku. Because of the way Resque starts up the multiple workers using threads, all of the SIGTERM, SIGKILL, etc. handling that allows workers to stop the current job, re-enqueue it, and shut down properly (including de-registering) will never happen. This is because the signals are handled by the main process and are not trapped by the threads. This can cause phantom workers to remain in the worker list long after they've been killed. This is probably why there is a comment in the Resque code warning that resque:workers should only be used in development mode.
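For completeness, a minimal resque-pool setup (queue names and worker counts below are only examples) replaces the COUNT trick with a config file and a single Procfile entry; the pool master forks real child processes, so signal handling works as expected:

config/resque-pool.yml:
queue_1: 1
queue_2: 1

Procfile:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
resque: bundle exec resque-pool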

Use God with multiple applications and start them automatically after a reboot

I'm currently trying to monitor various processes/daemons of three Rails/Rack applications in total using god. Monitoring works great; the problem is that I'm not able to configure god to autostart all processes after a reboot.
My setup: I'm running a Linux VPS with CentOS & Plesk.
I have a non-root Linux user "deployer" which is used to deploy & run the three Rails/Rack applications. Two applications are running with the Passenger Apache module; the third application uses a thin server (that's necessary because the application doesn't work with Apache). The two Rails applications that are using Passenger have additional rake tasks that run in the background; these and the thin server are monitored by god.
The god gem is specified in the Gemfile of all three applications.
In every deploy.rb file I have a task that looks like
namespace :misc do
  desc "restart workers using god; restart webserver"
  task :restart, roles: [:web, :resque] do
    run "touch #{current_path}/tmp/restart.txt"
    god.all.start
    god.all.reload
    god.all.terminate
    god.all.start
  end
end
After a reboot of the server, if I run cap misc:restart for all three applications manually, all processes are booted up and monitored correctly.
Every attempt to start god automatically on boot and bring up all necessary processes has failed so far.
I tried many different things, but nothing worked. My approach so far was to create a cron task with @reboot that runs the following script once per application (the crontab entries are sketched after the script):
#!/bin/bash -l
cd /path/to/app/ && bundle exec god -c /path/to/app/config/god/resque.god && bundle exec god load /path/to/app/config/god/resque.god && bundle exec god start resque
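The corresponding crontab entries look roughly like this (the paths are placeholders for wherever the three scripts live):

@reboot /home/deployer/boot_god_app1.sh
@reboot /home/deployer/boot_god_app2.sh
@reboot /home/deployer/boot_god_app3.sh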
This works great for the first application: god and all processes are started.
When the script is executed for the second application (of course with the correct paths), god is not able to start the tasks.
I enabled logging in god and the error message (in case of the Rack Application) was "thin: command not found".
When I start the Rack application first, thin is started correctly and the commands of the other tasks are not found.
I don't get what's wrong with my configuration. I added the bundle exec command in front of the god calls as you can see above (so the commands should be executed in the environment of their respective application) - nevertheless, it just doesn't work.
I would really appreciate if anyone could help me getting god to start automatically.
If you need further information please don't hesitate to ask!
Thanks in advance!
I'm working on something similar and took this approach:
Use upstart or something similar to launch the god daemon on system boot. For me this is done like so:
/etc/init/god.conf
description "god"
start on runlevel [2]
stop on runlevel [016]
console owner
exec /usr/local/rvm/bin/rvm_god -c /etc/god
respawn
That guy runs god, specifying one Ruby god configuration file with the -c option:
/etc/god
# Load the configs
God.load "/home/dangerousbeans/kitten_smusher/config/config.god"
God.load "/home/dangerousbeans/irc_nommer/config/config.god"
This Ruby file loads the individual application god configs; calling God.load causes them to boot up.
The individual files look something like this, as I'm using RVM:
/home/dangerousbeans/irc_nommer/config/config.god
God.watch do |w|
  w.dir = "/home/dangerousbeans/irc_nommer"
  w.name = "IRCnommer"

  # scary rvm magic begins
  gemsets_path = [
    "/home/dangerousbeans/.rvm/gems/ruby-1.9.3-p125#irc_nommer/bin",
    "/home/dangerousbeans/.rvm/rubies/ruby-1.9.3-p125/bin",
    "/home/dangerousbeans/.rvm/bin",
    ENV['PATH'] # inherit this
  ].join(':')

  w.env = {
    "PATH" => gemsets_path,
    "GEM_PATH" => "/home/dangerousbeans/.rvm/gems/ruby-1.9.3-p125#irc_nommer"
  }
  # scary rvm magic ends

  w.log = "/tmp/ircnommer.log"
  w.start = "ruby /home/dangerousbeans/irc_nommer/irc_nommer.rb"
  w.keepalive
end
The key point is that the environment is different between manual and automatic startup when god executes the start command.
So you can add the env command to the start command to capture it, like:
God.watch do |w|
  w.start = "cd #{your_app_directory}; env >> log/god.log; your-real-command >> log/god.log 2>&1"
end
There will be some differences compared to what you see when you run env in the same directory yourself.
Check the differences and add the required/correct entries to god's env.
Today I encountered an issue: I deployed 2 Rails apps on 1 server, both using god. App #2 couldn't start its command correctly. After doing the test above I found the cause: god held an environment variable, BUNDLE_GEMFILE, that pointed to App #1. So I added a simple line and the error went away:
God.watch do |w|
  w.env = {
    "BUNDLE_GEMFILE" => "#{$rails_root}/Gemfile"
  }
end

Invoke delayed_job capistrano tasks only on specific servers

I have a dedicated server for delayed_job tasks. I want to start, stop, and restart delayed_job workers on only this server. I am using the capistrano recipes provided by delayed_job.
When I only had 1 server, this was my config:
before "deploy:restart", "delayed_job:stop"
after "deploy:restart", "delayed_job:start"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"
Now I want those hooks to apply only to a separate delayed_job server (role :delayed_job <ip address>). Is this possible to do elegantly? Do I have to wrap each delayed_job task in a meta task? Or write my own tasks and not use the ones provided by delayed_job?
When you define a task in Capistrano you can restrict the execution of the task to specific roles. The way you do this is by passing the :roles option.
It seems the default delayed_job Capistrano recipe does this.
desc "Stop the delayed_job process"
task :stop, :roles => lambda { roles } do
run "cd #{current_path};#{rails_env} script/delayed_job stop"
end
According to the source code, the task fetches the list of roles from the :delayed_job_server_role configuration variable.
Back to your problem, to narrow the execution of the tasks to a specific group of servers, define a new role (for example worker) in your deploy.rb
role :worker, "192.168.1.1" # Assign the IP of your machine
Then set the :delayed_job_server_role to that name
set :delayed_job_server_role, :worker
That's all. Now the tasks will be executed, but only on the servers listed in the :worker role.
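Putting it together with the hooks from the question, the relevant part of deploy.rb would look something like this (the IP is a placeholder for your dedicated delayed_job server):

role :worker, "203.0.113.10"
set :delayed_job_server_role, :worker

before "deploy:restart", "delayed_job:stop"
after "deploy:restart", "delayed_job:start"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"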
