I need to start and stop a server during some rake tasks. I am using the following code:
task :start_server do
  job = fork do
    system `http-server ./_site -p 4000`
  end
  Process.detach(job)
  #pid = Process.pid
end

task :stop_server do
  puts "stopping server"
  system Process.kill('QUIT', #pid)
end
The start works fine, but I cannot get it to stop.
I am calling these tasks within a Capistrano deploy, e.g. after 'pdf:generate_pdf', 'pdf:stop_server'.
I don't get an error, but I can still see pages being served by the web server.
Is there a better way to end the process?
I have found that this command stops the http-server perfectly.
task :stop_server do
  system("pkill -f http-server")
end
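For reference, an alternative that avoids pattern-matching on the process name is to record the child's PID yourself and signal exactly that process. This is only a sketch; the pid file location is my own choice and not part of the original setup:

require 'fileutils'

PID_FILE = 'tmp/http-server.pid'

task :start_server do
  FileUtils.mkdir_p(File.dirname(PID_FILE))
  # Process.spawn returns the child's PID, so there is no need for fork + system
  pid = Process.spawn('http-server ./_site -p 4000')
  Process.detach(pid)
  File.write(PID_FILE, pid.to_s)
end

task :stop_server do
  puts "stopping server"
  begin
    pid = File.read(PID_FILE).to_i
    Process.kill('TERM', pid)  # TERM is assumed to be enough to stop http-server
    File.delete(PID_FILE)
  rescue Errno::ENOENT, Errno::ESRCH
    # pid file missing or process already gone
  end
end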
I have been using Resque for background processing. Now, my problem with the code is:
- when I start the rake task as "rake resque:work QUEUE=''" (as per Ryan Bates' episode no. 271) on the remote server, the code inside the worker class that does file manipulation works properly, without any file path issues or I/O errors.
- when I start the rake task as "rake resque:work QUEUE='' BACKGROUND=yes", the code inside the worker class gives a "failed:Errno::EIO: Input/output error # io_write - >" error.
My question is: I want to start the Resque worker with the above rake command only once. Also, why does the second case give an error? Is it an issue with file paths, and if so, why does it run smoothly in the first case?
You can use god to manage your background process, or nohup can be your solution too, as below:
$ nohup bundle exec rake resque:work QUEUE=queue_name PIDFILE=tmp/pids/resque_worker_QUEUE.pid >> log/resque_worker_QUEUE.log 2>&1 &
and even this command worked for me:
PIDFILE=./resque.pid BACKGROUND=yes QUEUE="*" rake resque:work >> worker1.log &
Hope that will help you too.
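If you go the god route, a minimal watch definition might look like the sketch below. The watch name, working directory, and log path are assumptions; note that god expects a foreground command, so BACKGROUND=yes should not be used here:

God.watch do |w|
  w.name  = "resque-worker"                       # assumed watch name
  w.dir   = "/path/to/app"                        # assumed app directory
  w.log   = "/path/to/app/log/resque_worker.log"  # assumed log path
  w.start = "bundle exec rake resque:work QUEUE=*"
  w.keepalive                                     # restart the worker if it dies
end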
I'm trying to start or restart Unicorn when I do cap production deploy with Capistrano 3.0.1. I have some examples that I got working with Capistrano 2.x using something like:
namespace :unicorn do
  desc "Start unicorn for this application"
  task :start do
    run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
  end
end
But when I try to use run in the deploy.rb for Capistrano 3.x, I get an undefined method error.
Here are a couple of the things I tried:
# within :deploy I created a task that I called after :finished
namespace :deploy do
  ...
  task :unicorn do
    run "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/myapp.conf.rb -D"
  end
  after :finished, 'deploy:unicorn'
end
I have also tried putting the run within the :restart task
namespace :deploy do
  desc 'Restart application'
  task :restart do
    on roles(:app), in: :sequence, wait: 5 do
      # Your restart mechanism here, for example:
      # execute :touch, release_path.join('tmp/restart.txt')
      execute :run, "cd #{current_path} && bundle exec unicorn -c /etc/unicorn/deployrails.conf.rb -D"
    end
  end
end
If I use just run "cd ... " then I'll get a wrong number of arguments (1 for 0) error in the local shell.
I can start the unicorn process with unicorn -c /etc/unicorn/deployrails.conf.rb -D from my ssh'd VM shell.
I can kill the master Unicorn process from the VM shell using kill USR2, but even though the process is killed I get an error. I can then start the process again using unicorn -c ...
$ kill USR2 58798
bash: kill: USR2: arguments must be process or job IDs
I'm very new to Ruby, Rails, and deployment in general. I have a VirtualBox setup with Ubuntu, Nginx, RVM, and Unicorn. I'm pretty excited so far, but this one is really messing with me; any advice or insight is appreciated.
I'm using the following code:
namespace :unicorn do
  desc 'Stop Unicorn'
  task :stop do
    on roles(:app) do
      if test("[ -f #{fetch(:unicorn_pid)} ]")
        execute :kill, capture(:cat, fetch(:unicorn_pid))
      end
    end
  end

  desc 'Start Unicorn'
  task :start do
    on roles(:app) do
      within current_path do
        with rails_env: fetch(:rails_env) do
          execute :bundle, "exec unicorn -c #{fetch(:unicorn_config)} -D"
        end
      end
    end
  end

  desc 'Reload Unicorn without killing master process'
  task :reload do
    on roles(:app) do
      if test("[ -f #{fetch(:unicorn_pid)} ]")
        execute :kill, '-s USR2', capture(:cat, fetch(:unicorn_pid))
      else
        error 'Unicorn process not running'
      end
    end
  end

  desc 'Restart Unicorn'
  task :restart

  before :restart, :stop
  before :restart, :start
end
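Note that these tasks assume :unicorn_pid and :unicorn_config are defined somewhere in your deploy configuration. A sketch of what that might look like; the exact paths are assumptions, adjust them to your layout:

# config/deploy.rb (sketch)
set :unicorn_pid,    -> { "#{shared_path}/tmp/pids/unicorn.pid" }
set :unicorn_config, -> { "#{current_path}/config/unicorn.rb" }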
I can't say anything specific about Capistrano 3 (I use 2), but I think this may help: How to run shell commands on server in Capistrano v3?
Also, I can share some Unicorn-related experience; hope this helps.
I assume you want a 24/7 graceful-restart approach.
Let's consult the Unicorn documentation on this matter. For a graceful restart (without downtime) you can use two strategies:
kill -HUP unicorn_master_pid: this requires your app to have the 'preload_app' directive disabled, which increases the startup time of every Unicorn worker. If you can live with that, go on; it's your call.
kill -USR2 unicorn_master_pid
kill -QUIT unicorn_master_pid
This is the more sophisticated approach, for when you're already dealing with performance concerns. Basically, it re-executes the Unicorn master process, after which you should kill its predecessor. Theoretically you could get by with a USR2-sleep-QUIT approach. Another (and, I would say, the right) way is to use Unicorn's before_fork hook: it is executed when the new master process has been spawned and is about to fork new children for itself.
You can put something like this in config/unicorn.rb:
# Where to drop a pidfile
pid project_home + '/tmp/pids/unicorn.pid'

before_fork do |server, worker|
  server.logger.info("worker=#{worker.nr} spawning in #{Dir.pwd}")

  # graceful shutdown.
  old_pid_file = project_home + '/tmp/pids/unicorn.pid.oldbin'
  if File.exists?(old_pid_file) && server.pid != old_pid_file
    begin
      old_pid = File.read(old_pid_file).to_i
      server.logger.info("sending QUIT to #{old_pid}")
      # we're killing the old unicorn master right there
      Process.kill("QUIT", old_pid)
    rescue Errno::ENOENT, Errno::ESRCH
      # someone else did our job for us
    end
  end
end
It's more or less safe to kill the old Unicorn when the new one is ready to fork workers. You won't get any downtime that way, and the old Unicorn will wait for its workers to finish.
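If you also enable preload_app, you will typically pair this with an after_fork hook so that each worker re-establishes its own database connection. A minimal sketch, assuming ActiveRecord (this part is not in the original config):

# config/unicorn.rb (continued)
after_fork do |server, worker|
  # each forked worker needs its own connection to the database
  ActiveRecord::Base.establish_connection if defined?(ActiveRecord::Base)
end

The usual counterpart is disconnecting ActiveRecord in before_fork as well, but that is orthogonal to the pid-file handling shown above.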
And one more thing: you may want to put it under runit or init supervision. That way your Capistrano tasks will be as simple as sv reload unicorn, restart unicorn, or /etc/init.d/unicorn restart. This is a good thing.
I'm just going to throw this in the ring: capistrano 3 unicorn gem
However, my issue with the gem (and with any approach NOT using an init.d script) is that you may now have two methods of managing your Unicorn process: one with this cap task and one with init.d scripts. Things like Monit / God will get confused, and you may spend hours debugging why you have two unicorn processes trying to start, and then you may start to hate life.
Currently I'm using the following with capistrano 3 and unicorn:
namespace :unicorn do
  desc 'Restart application'
  task :restart do
    on roles(:app) do
      puts "restarting unicorn..."
      execute "sudo /etc/init.d/unicorn_#{fetch(:application)} restart"
      sleep 5
      puts "whats running now, eh unicorn?"
      execute "ps aux | grep unicorn"
    end
  end
end
The above is combined with preload_app: true and the before_fork and after_fork statements mentioned by @dredozubov.
Note I've named my init.d/unicorn script unicorn_application_name.
The new worker that is started should kill off the old one. You can see with ps aux | grep unicorn that the old master hangs around for a few seconds before it disappears.
To view all cap tasks:
cap -T
and it shows:
***
cap unicorn:add_worker # Add a worker (TTIN)
cap unicorn:duplicate # Duplicate Unicorn; alias of unicorn:re...
cap unicorn:legacy_restart # Legacy Restart (USR2 + QUIT); use this...
cap unicorn:reload # Reload Unicorn (HUP); use this when pr...
cap unicorn:remove_worker # Remove a worker (TTOU)
cap unicorn:restart # Restart Unicorn (USR2); use this when ...
cap unicorn:start # Start Unicorn
cap unicorn:stop # Stop Unicorn (QUIT)
***
So, to start unicorn in production:
cap production unicorn:start
and restart:
cap production unicorn:restart
PS: do not forget to use the correct gem, capistrano3-unicorn:
https://github.com/tablexi/capistrano3-unicorn
You can try to use the native Capistrano way, as written here:
If preload_app: true and you need Capistrano to clean up your oldbin pid, use:
after 'deploy:publishing', 'deploy:restart'

namespace :deploy do
  task :restart do
    invoke 'unicorn:legacy_restart'
  end
end
I am using the rails.vim plugin, which is pretty awesome. However, I fail to see how I could run all the specs with one command. Right now I need to open a particular spec and do :Rake, and that only tests the currently open spec. How could I run all the specs? Which command?
Thanks
Have you tried pairing a rake task with a ViM leader mapping?
In your Rakefile, you could set up something like this:
desc 'Continuous integration task'
task :ci do
  ['rspec',
   'cucumber -f progress',
   'rake konacha:run'].each do |cmd|
    system("bundle exec #{cmd}")
    raise "#{cmd} failed!" unless $?.exitstatus == 0
  end
end
Then you can set up a leader command in ViM to execute your ci rake task:
nnoremap <leader>T :w\|:!bundle exec rake ci<CR>
Then when you execute <leader>T in normal mode, ViM will shell out and run bundle exec rake ci.
I use tmux, so I prefer the following leader mapping which runs the rake task in a bottom pane:
nnoremap <leader>T :w\|:silent !tmux send-keys -t bottom C-u 'bundle exec rake ci' C-m <CR>\|:redraw!<CR>
I use crontab to invoke a rake task periodically, for example every 3 hours.
I want to ensure that when cron is ready to execute the rake task, it first checks whether the task is already running; if it is, it should not execute it again.
How do I do this? Thanks.
I'll leave this here because I think it's useful:
task :my_task do
  pid_file = '/tmp/my_task.pid'
  raise 'pid file exists!' if File.exists? pid_file
  File.open(pid_file, 'w') { |f| f.puts Process.pid }
  begin
    # execute code here
  ensure
    File.delete pid_file
  end
end
You could use a lock file for this. When the task runs, try to grab the lock, and run the rake task if you get it. If you don't get the lock, then don't run rake; you might also want to log an error or warning somewhere, or you can end up with your rake task not doing anything for weeks or months before you know about it. When rake exits, unlock the lock file.
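For example, a minimal sketch of that lock-file approach using File#flock; the lock file path here is an arbitrary choice:

task :my_task do
  lock_file = File.open('/tmp/my_task.lock', File::RDWR | File::CREAT, 0o644)
  # LOCK_NB makes flock return false instead of blocking when another run holds the lock
  unless lock_file.flock(File::LOCK_EX | File::LOCK_NB)
    warn 'my_task is already running, skipping this run'
    next
  end
  begin
    # do the actual work here
  ensure
    lock_file.flock(File::LOCK_UN)
    lock_file.close
  end
end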
Something like RAA might help but I haven't used it so maybe not.
You could also use a PID file. You'd have a file somewhere that holds the rake process's ID. Before starting rake, you read the PID from that file and see whether that process is running; if it isn't, start up rake and write its PID to the PID file. When rake exits, delete the PID file. You'd want to combine this with locking on the PID file if you want to be really strict, but this depends on your particular situation.
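A sketch of that PID-file check, using signal 0 to test whether the recorded process is still alive; the file path and task name are arbitrary:

task :my_task do
  pid_file = '/tmp/my_task.pid'

  if File.exist?(pid_file)
    old_pid = File.read(pid_file).to_i
    begin
      Process.kill(0, old_pid)  # signal 0 only checks that the process exists
      puts 'previous run still active, skipping'
      next
    rescue Errno::ESRCH
      # stale pid file: the process is gone, so fall through and run
    end
  end

  File.write(pid_file, Process.pid)
  begin
    # do the actual work here
  ensure
    File.delete(pid_file)
  end
end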
All you need is a gem named pidfile.
Add this to your Gemfile:
gem 'pidfile', '>= 0.3.0'
And the task could be:
desc "my task"
task :my_task do |t|
PidFile.new(piddir: "/var/lock", pidfile: "#{t.name}.pid")
# do something
end