Invoke delayed_job capistrano tasks only on specific servers - ruby

I have a dedicated server for delayed_job tasks. I want to start, stop, and restart delayed_job workers on only this server. I am using the capistrano recipes provided by delayed_job.
When I only had 1 server, this was my config:
before "deploy:restart", "delayed_job:stop"
after "deploy:restart", "delayed_job:start"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"
Now I want those hooks to apply only to a separate delayed_job server (role :delayed_job <ip address>). Is there an elegant way to do this? Do I have to wrap each delayed_job task in a meta task, or write my own tasks instead of using the ones provided by delayed_job?

When you define a task in Capistrano, you can restrict its execution to specific roles by passing the :roles option.
It seems the default delayed_job Capistrano recipe does this.
desc "Stop the delayed_job process"
task :stop, :roles => lambda { roles } do
run "cd #{current_path};#{rails_env} script/delayed_job stop"
end
According to the source code, the task fetches the list of roles from the :delayed_job_server_role configuration variable.
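For reference, that lookup in the recipe boils down to something along these lines (a paraphrase rather than the gem's exact code; the :app default is my assumption):
def roles
  fetch(:delayed_job_server_role, :app)
end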
Back to your problem, to narrow the execution of the tasks to a specific group of servers, define a new role (for example worker) in your deploy.rb
role :worker, "192.168.1.1" # Assign the IP of your machine
Then set the :delayed_job_server_role to that name
set :delayed_job_server_role, :worker
That's all. The tasks will still be executed, but only on the servers listed in the :worker role.
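Putting it all together, a deploy.rb for this setup would look roughly like the sketch below (the IP address is a placeholder, and the require line assumes you load the recipe the way the delayed_job README suggests):
require 'delayed/recipes'

role :worker, "192.168.1.1"             # your dedicated delayed_job server
set :delayed_job_server_role, :worker   # recipe tasks run only on :worker

before "deploy:restart", "delayed_job:stop"
after "deploy:restart", "delayed_job:start"
after "deploy:stop", "delayed_job:stop"
after "deploy:start", "delayed_job:start"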

Related

Exec systemctl after deployment by capistrano

I want to restart Nginx-unit after deployment by Capistrano
namespace :deploy do
  desc 'Collect Static Files'
  task :collectImg do
    on roles(:app) do
      execute "sudo systemctl restart unit"
    end
  end
  after :publishing, :collectImg
end
After running the code above, I get the error log below.
Is there a good way to use systemctl in a deployment script?
DEBUG [08ce969a] Command: sudo systemctl restart unit
DEBUG [08ce969a] sudo: a terminal is required to read the password; either use the -S option to read from standard input or configure an askpass helper
First of all: great that you are using NGINX Unit with Capistrano!
This problem is actually a combination of two issues.
1. Enable tty mode
Add this to your Capistrano 3 deployment configuration (deploy.rb):
set :pty, true
More Information about PTY
2. Modify your sudoers file on the App Server
You have to allow the deployment user to execute sudo commands without entering a password. You can and should limit this to specific commands; I have added restart and status as examples.
deploy ALL=NOPASSWD:/bin/systemctl restart unit.service, /bin/systemctl status unit.service
As a reference see Capistrano Auth
We would be happy to chat about your Capistrano configuration on our Community NGINX channel.
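With both fixes in place, the deploy.rb from the question would look roughly like this (task and role names follow the question; the sudoers entry from step 2 is still required on the app server):
set :pty, true

namespace :deploy do
  desc 'Restart NGINX Unit'
  task :collectImg do
    on roles(:app) do
      execute :sudo, 'systemctl', 'restart', 'unit'
    end
  end
  after :publishing, :collectImg
end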

How can I run my Ruby service with a Chef recipe

I have a web service written in Ruby, and I want to run it via a Chef recipe.
I have used the execute resource like this:
execute 'start-ruby' do
  command 'ruby /opt/certificate.rb start'
  action :run
end
I can see my Ruby service running in the background on my Amazon instance, but somehow the instance setup is stuck running the recipe.
Is there any alternative way to run my Ruby service via a Chef recipe?
The execute resource runs its command synchronously, meaning it waits for it to finish before continuing with the recipe. I'm guessing your start command there starts a foreground-mode daemon (which is how it should work) so it never returns and Chef just waits forever.
Check out https://github.com/poise/application_examples/blob/master/recipes/todo_rails.rb or https://github.com/poise/poise-service for examples of Ruby application deployment and generic service management respectively.
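For example, with the poise-service cookbook linked above, a minimal sketch might look like the following (the resource name is made up, and it assumes your script stays in the foreground when started this way; you also need a poise-service dependency in your cookbook's metadata.rb):
poise_service 'certificate' do
  command 'ruby /opt/certificate.rb start'
end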

Concurrency with Resque on Heroku (running multiple workers per node)

Pardon my ignorance, but is there a way to increase the number of processes per dyno for Resque workers? And if so, how?
I'm currently using Unicorn to add concurrency to the web dynos, which has been working great so far. I would like to extend this to the Resque workers. I followed Heroku's guide to set up the concurrency.
Update: The solution below works, but is not recommended. For resque concurrency on heroku use the resque-pool gem.
It is possible if you use the COUNT environment variable. Your Procfile will look something like:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
resque: env TERM_CHILD=1 COUNT=2 RESQUE_TERM_TIMEOUT=6 QUEUE=* bundle exec rake resque:workers
It is important to note that the rake task in the Procfile is resque:workers and not resque:work.
Update Explanation
There are major problems with the COUNT option and the rake resque:workers invocation in production on Heroku. Because of the way Resque starts the multiple workers using threads, none of the SIGTERM, SIGKILL, etc. handling that allows workers to stop the current job, re-enqueue it, and shut down properly (including de-registering) will ever happen. This is because the signals are handled by the main process and are not trapped by the worker threads. This can cause phantom workers to remain in the worker list long after they have been killed. This is probably why there is a comment in the Resque code warning that resque:workers should only be used in development mode.
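For reference, a minimal resque-pool setup (the exact values here are assumptions, not from the answer above) replaces the COUNT approach with a pool manager: a config/resque-pool.yml mapping queues to worker counts, and a single Procfile entry.
config/resque-pool.yml:
"*": 2
Procfile:
web: bundle exec unicorn -p $PORT -c ./config/unicorn.rb
worker: bundle exec resque-pool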

instructions/manual on auto launch/shutdown on EC2

I need a pretty trivial task: I have a server whose crontab, every night, will run "something" that launches a new EC2 instance, deploys code to it (a Ruby script), runs it, and shuts the instance down once the script completes.
What is the best way to do this?
Thanks.
Here's an approach that can accomplish this without any external computer/cron job:
EC2 AutoScaling supports schedules for running instances. You could use this to start an instance at a particular time each night.
The instance could be of an AMI that has a startup script that does the setup and running of the job. Or, you could pass a user-data script to the instance that does this job for you.
The script could terminate the instance when it has completed running.
If you are running an EBS boot instance, then shutdown -h now in your script will terminate the instance, provided you specify an instance-initiated-shutdown-behavior of terminate.
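As a rough Ruby sketch of the launch side (AMI ID, instance type, region, and script path are placeholders, not values from the answer), the AWS SDK can start the instance with terminate-on-shutdown behavior and a user-data script that runs the job and then shuts down:
require 'aws-sdk-ec2'
require 'base64'

# User-data script: run the job, then shut the instance down
user_data = <<~SCRIPT
  #!/bin/bash
  ruby /opt/nightly_job.rb
  shutdown -h now
SCRIPT

ec2 = Aws::EC2::Client.new(region: 'us-east-1')
ec2.run_instances(
  image_id: 'ami-0123456789abcdef0',   # AMI with Ruby and your code baked in (placeholder)
  instance_type: 't3.micro',
  min_count: 1,
  max_count: 1,
  instance_initiated_shutdown_behavior: 'terminate',
  user_data: Base64.strict_encode64(user_data)
)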

Resque Workers working on the wrong queue

I have a few Resque jobs running, each started in a separate terminal window like so:
QUEUE=queue_1 rake environment resque:work
QUEUE=queue_2 rake environment resque:work
Queue 1 was started first, then queue 2. The problem is, no matter what QUEUE options I send to new workers, they just keep working on queue 1, even if I shut both down. Might this be a configuration problem? I haven't seen this issue mentioned anywhere.
Are you explicitly setting the ENV['QUEUE'] environment variable in the "environment" or "resque:setup" tasks defined in the Rakefile?
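For illustration (this is a hypothetical Rakefile, not the asker's), hard-coding the variable in one of those tasks overrides whatever is passed on the command line, while setting only a default does not:
# Problematic: overrides the command-line value
task 'resque:setup' => :environment do
  ENV['QUEUE'] = 'queue_1'
end

# Better: only set a default, so QUEUE=queue_2 from the shell wins
task 'resque:setup' => :environment do
  ENV['QUEUE'] ||= 'queue_1'
end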
