I have a rake task:
task :kill_process do
  system %q(ps -ef | awk '{if($8~"java" || $8~"glassfish" || $8~"ruby" || $8~"god" || $8~"couch"){printf("Killing : %s \n",$2);{system("kill -9 "$2)};}}')
end
This basically kills those processes. This task is in turn part of another rake task:
desc "stop the entire system"
task :stop => [...., :kill_process]
There's another task:
desc "start the entire system"
task :start => [....]
When I run rake stop && rake start,
the stop task executes successfully, but rake start never runs.
If I execute both tasks separately, everything works fine, but not as rake stop && rake start.
What would be better to use here: exec, system, or something else? Please advise.
My only requirement is to kill the processes mentioned above at the end of rake stop, but it should not affect anything else: rake stop && rake start should still work.
As mentioned in the comments, the exit code is 137, which the shell treats as failure, so the right-hand side of the && does not get executed. The reason for this is most likely the kill -9.
There are a few options now.
Return 0 from your rake task, something like exit(0)
Don't use kill -9
Create a restart command that runs stop and start without making them logically depend on each other via && (a minimal sketch follows below).
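A minimal sketch of that third option, assuming the :stop and :start tasks shown above and, importantly, that :stop no longer SIGKILLs the rake process running the restart itself:

# Hypothetical :restart task; it simply runs the existing :stop and :start
# prerequisites in order, with no shell-level && between them.
desc "stop and then start the entire system"
task :restart => [:stop, :start]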
Exit code 137 indicates that a process has received a SIGKILL signal and was thus killed from the outside.
This happens since a Rake task is also executed by Ruby. As such, your stop task is sending a SIGKILL to its own process too (along with all other Ruby processes on the system). Now, since you have specified that you only want to execute the rake start process if the previous process was successful (i.e. had an exit code of 0), your shell doesn't start the rake task.
To quickly fix this, you can instead run rake stop; rake start, i.e. run the two processes regardless of their individual exit codes.
However, a better idea would probably be to make your stop task more explicit and only kill the specific processes you need, rather than everything in sight that looks vaguely like a related process. This will likely result in a more stable system overall, too, since you won't be killing potentially unrelated processes all the time.
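As an illustration, here is a hedged sketch of a more selective :kill_process task. The service names and pidfile locations are assumptions made for the example; point them at wherever your services actually record their PIDs.

task :kill_process do
  # Hypothetical pidfile locations; adjust to where your services write them.
  %w[glassfish couchdb god].each do |name|
    pidfile = "/var/run/#{name}.pid"
    next unless File.exist?(pidfile)

    begin
      # Prefer TERM so processes can shut down cleanly; escalate only if needed.
      Process.kill("TERM", File.read(pidfile).to_i)
    rescue Errno::ESRCH
      # Process already gone; nothing to do.
    end
  end
end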
My aim is to have an alias that will run commands like this:
alias thing="task_1 & && task_2"
The point is that task_1 is a long-running task and should be started before task_2, but ultimately both should be running at the same time.
Any suggestions?
If both should be running at the same time, then && is probably not what you want to use. It waits for the exit of the first command and executes the second only if the first was successful. With the backgrounding of the first task, this doesn't really make sense.
I tend to do what you're after this way (using sleep as a placeholder for the real commands):
alias thing="(sleep 5 &); sleep 1;"
(The parentheses have a side effect that I like: you don't get the notifications about the process being forked or reaped.)
I pretty much need what my question title says. Currently I have a Capistrano task that looks like this:
desc "stops private pub"
task :stop_private_pub do
  run "kill -9 $(lsof -i:9292 -t)"
end
before 'deploy', 'servers:stop_private_pub'
...And it works well when the process on port 9292 is in fact running. The problem is that when the process isn't running, this task fails and halts the whole deployment process!
I'm not a UNIX shell expert, nor am I a Capistrano master, so I really need help improving this Capistrano task. Is there a way to kill -9 only if the process is running?
How can I do this?
Thanks in advance.
You could use Capistrano's capture command (in v3 at least; there is probably a v2 equivalent) to grab the output of your lsof command, and then run the kill command only if you actually got a PID back.
pid = capture('lsof', '-i:9292', '-t').strip
unless pid.empty? # ensure we actually got a PID back
  run "kill -9 #{pid.to_i}" # to_i makes darn sure only an integer gets embedded
end
You could also do:
run "kill -9 $(lsof -i:9292 -t); true"
or add :on_error => :continue to the task:
task :stop_web, :roles => :app, :on_error => :continue do
  run "dosomething.sh; true"
end
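If you would rather issue the kill only when something is actually listening on the port, a small variant (assuming lsof is available on the server) is:

run "if pid=$(lsof -i:9292 -t); then kill -9 $pid; fi"

Because the if simply falls through when lsof finds nothing, the command exits 0 either way and the deployment carries on.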
I have these rake tasks that will occasionally fail. I want to use monit to monitor them and to restart them if necessary.
I have read the other ruby/monit threads on StackOverflow. My case is different in that these programs require my Rails environment in order to work. That's why I have them as rake tasks now.
Here is one of the tasks I need to monitor, in its entirety:
task(process_updates: :environment) do
  `echo "#{Process.pid}" > #{Rails.root}/log/process_alerts.pid`
  `echo "#{Process.ppid}" > #{Rails.root}/log/process_alerts.ppid`
  SynchronizationService::process_alerts
end
My question is, do I leave this as a rake task, since SynchronizationService::process_alerts requires the Rails environment to work? Or is there some other wrapper I should invoke and then just run some *.rb file?
Monit can check for a running process by pidfile. Since you already write a pidfile when you run the task, you can create a monit config which should look something like this:
check process alerts with pidfile RAILSROOT/log/process_alerts.pid
  start program = "cd PATH_TO_APP; rake YOURTASK" with timeout 120 seconds
  alert your@mail.com on { nonexist, timeout }
Of course RAILSROOT, PATH_TO_APP, YOURTASK should correspond to your paths/rake task.
Monit will then check for a running process on the system using the pidfile value and will start it via the start program command if it can't find one.
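If you keep it as a rake task, a hedged variant of the task from the question that writes the pidfile from Ruby and cleans it up on exit (so monit's pidfile check stays accurate) might look like this:

task(process_updates: :environment) do
  pidfile = Rails.root.join("log", "process_alerts.pid")
  File.write(pidfile, Process.pid)                          # the pidfile monit watches
  at_exit { File.delete(pidfile) if File.exist?(pidfile) }  # remove it when the task ends

  SynchronizationService::process_alerts                    # from the question; needs the Rails env
end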
I have a Rakefile that I use to automate some tasks in my project.
Inside some tasks I call system, but even if the subprocess returns an error, the task continues without any issue.
How can I avoid that? I want rake to exit when a subprocess returns an error.
Thanks in advance
You can evaluate the return value of system:
system('inexistent command') or exit!(1)
puts "This line is not reached"
sh is the Rake way to call a command. It will fail with a neat message. Compared with system, sh prints out the command as well.
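A minimal sketch of the sh variant (the task name and command here are placeholders):

task :build do
  sh "make all" # prints the command and aborts the rake task on a non-zero exit
end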
How do I launch a thread within a rake task and then kill the thread when the task is complete?
Essentially I am writing a rake task to test a jekyll site. I would like to be able to launch the server, do some other tasks, and then destroy the thread when the task is complete. Here is what I have thus far:
task :test_site do
  `ejekyll --server`
  `git clean -Xdn` # dry run: list the ignored files that would be removed
  if agree("Clean all ignored files?")
    `git clean -Xdf`
  end
end
but unfortunately the only way I know of to stop jekyll --server is Ctrl-C. I would be happy to hear of a way to stop a jekyll --server in a manner that does not exit the rake task, but please just comment, as this question is specifically asking about threading and rake tasks.
You want Process.spawn, not a thread. It's a new process, not a thread of execution within an existing process. You get the PID back, so just send Process.kill(:QUIT, pid) or use whatever method you want to kill the spawned process.
pid = Process.spawn(
  "ejekyll", "--server",
  out: "/dev/null",
  err: "/dev/null"
)
# you may need to add a short sleep() here
# other stuff
Process.kill(:QUIT, pid) && Process.wait
If ejekyll has a command line option to run in the foreground, it would be better to use that. Otherwise, if it self-daemonizes, you need to know where it stores its PID file in order to identify and kill the daemon.
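For that self-daemonizing case, a hedged sketch (the pidfile path is purely an assumption for illustration; check where ejekyll actually writes it):

pidfile = "tmp/pids/ejekyll.pid"  # hypothetical location
if File.exist?(pidfile)
  daemon_pid = File.read(pidfile).to_i
  Process.kill(:QUIT, daemon_pid) # or :TERM, depending on what the server handles
end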