Resque worker foreman failing to start workers - ruby

I have a foreman script starting up some workers on a standalone Ruby app. Here's the Procfile:
worker: bundle exec rake resque:work BACKGROUND=true QUEUE=image VERBOSE=true
When I run the script this is the output I get.
$ foreman start
22:00:38 worker.1 | started with pid 882
22:00:38 worker.1 | exited with code 0
22:00:38 system | sending SIGTERM to all processes
SIGTERM received
The process seems to have exited, but when I run ps -eaf | grep resque it shows a resque worker running with pid 884. I've tested this and it's always a PID 2 higher than the original.
When I run the bundle exec command straight from the terminal without foreman, the command executes just fine. Is there anything I'm missing with the foreman script?

So apparently, when running with BACKGROUND=true the resque workers get daemonized, so the original process exits and a new one gets spawned as an orphan process for the worker.
Still, there is an issue when creating two background workers with foreman, because once the first worker daemonizes, foreman ends all processes and only one daemonized worker is created instead of two.
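For context, a simplified sketch of what Resque's resque:work task does when BACKGROUND is set (not the exact source, just the rough idea): Process.daemon forks and exits the original process, which is why foreman sees worker.1 exit immediately while a new PID keeps running.
require 'resque'

worker = Resque::Worker.new(ENV['QUEUE'])
if ENV['BACKGROUND']
  Process.daemon(true) # detach; the process foreman started exits here
end
worker.work(ENV['INTERVAL'] || 5) # the detached copy keeps working the queue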

You should not daemonize the workers with foreman - foreman needs to have all the processes running in the foreground. If you want multiple workers, simply use something like this:
image_worker: bundle exec rake resque:work QUEUE=image VERBOSE=true
other_worker: bundle exec rake resque:work QUEUE=other VERBOSE=true
To start multiple workers on the same queue:
foreman start -m image_worker=2

Related

Why are rake tasks not executing when chained with the && operator?

I have a rake task:
task :kill_process do
  system %q(ps -ef | awk '{if($8~"java" || $8~"glassfish" || $8~"ruby" || $8~"god" || $8~"couch"){printf("Killing : %s \n",$2);{system("kill -9 "$2)};}}')
end
This is basically killing processes. This task is part of another rake task:
desc "stop the entire system"
task :stop => [...., :kill_process]
There's another task:
desc "start the entire system"
task :start => [....]
When I run rake stop && rake start, the stop task executes successfully, but rake start does not execute.
If I execute both tasks separately, they work fine, but not with rake stop && rake start.
What would be better to use here, exec or system or something else? Please suggest.
My only requirement is to kill the mentioned processes at the end of rake stop, but it should not affect other things: rake stop && rake start should still work fine.
As mentioned in the comments, the exit code is 137, which evaluates to false, and therefore the second part of the && does not get executed. The reason for this is probably the kill -9.
There are a few options now.
Return 0 from your rake task, something like exit(0)
Don't use kill -9
Create a restart command which executes stop and start without making them logically depend on each other (&&).
Exit code 137 indicates that a process has received a SIGKILL signal and was thus killed from the outside.
This happens because a Rake task is itself executed by Ruby. As such, your stop task sends a SIGKILL to its own process too (along with all other Ruby processes on the system). Now, since you have specified that the rake start process should only run if the previous process was successful (i.e. had an exit code of 0), your shell doesn't start it.
To quickly fix this, you can instead run rake stop; rake start, i.e. run the two processes regardless of their individual exit codes.
However, a better idea would probably be to make your stop task more explicit and only kill the specific processes you need, rather than everything in sight that looks vaguely like a related process. This will likely result in a more stable system overall, since you won't be killing potentially unrelated processes all the time.
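A minimal sketch of what a more targeted stop could look like, assuming the processes you manage write pidfiles to tmp/pids (the path, glob, and signal choice here are assumptions, not part of the original setup):
task :kill_process do
  Dir.glob("tmp/pids/*.pid").each do |pidfile|
    pid = File.read(pidfile).to_i
    next if pid == Process.pid # never kill the rake process running this task
    begin
      Process.kill("TERM", pid) # TERM instead of KILL so processes can shut down cleanly
    rescue Errno::ESRCH
      # process already gone, nothing to do
    end
  end
end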

resque rake task giving error

I have been using Resque for background processing. Now my problem with the code is:
- when I start the rake task as "rake resque:work QUEUE=''" as per Ryan Bates' episode no. 271, on the remote server the file manipulation code inside the worker class works properly, without any file path issues or I/O errors.
- when I start the rake task as "rake resque:work QUEUE='' BACKGROUND=yes", the code inside the worker class gives a "failed: Errno::EIO: Input/output error # io_write" error.
My question is: I only want to start the Resque worker with the above rake command once, so why does the second case give an error? Is it an issue with file paths, and if so, why does it run smoothly in the first case?
You can use god to manage your background process. Or nohup can be your solution too, as below:
$ nohup bundle exec rake resque:work QUEUE=queue_name PIDFILE=tmp/pids/resque_worker_QUEUE.pid >> log/resque_worker_QUEUE.log 2>&1 &
and even this command worked for me:
PIDFILE=./resque.pid BACKGROUND=yes QUEUE="*" rake resque:work >> worker1.log &
Hope that will help you too.
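If you go the god route instead, a minimal watch file (god configs are plain Ruby) could look roughly like this; the app path, queue, and file names are placeholders, not taken from the question:
God.watch do |w|
  w.name  = "resque_worker"
  w.dir   = "/path/to/app"
  w.start = "bundle exec rake resque:work QUEUE=image VERBOSE=true"
  w.log   = "/path/to/app/log/resque_worker.log"
  w.keepalive # restart the worker whenever it is not running
end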

How can I create a monit process for a Ruby program?

I have these rake tasks that will occasionally fail. I want to use monit to monitor them and to restart them if necessary.
I have read the other ruby/monit threads on StackOverflow. My case is different in that these programs require my Rails environment in order to work. That's why I have them as rake tasks now.
Here is one of the tasks I need to monitor, in its entirety:
task(process_updates: :environment) do
  `echo "#{Process.pid}" > #{Rails.root}/log/process_alerts.pid`
  `echo "#{Process.ppid}" > #{Rails.root}/log/process_alerts.ppid`
  SynchronizationService::process_alerts
end
My question is, do I leave this as a rake task, since SynchronizationService::process_alerts requires the Rails environment to work? Or is there some other wrapper I should invoke and then just run some *.rb file?
Monit can check for a running process via its pidfile. Since you're writing a pidfile when you run the task, you can create a monit config which should look something like this:
check process alerts with pidfile RAILSROOT/log/process_alerts.pid
start program = "cd PATH_TO_APP; rake YOURTASK" with timeout 120 seconds
alert your@mail.com on { nonexist, timeout }
Of course RAILSROOT, PATH_TO_APP, YOURTASK should correspond to your paths/rake task.
Monit will then check for the running process using the pidfile value and will start the process with the start program command if it can't find one running.
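As an aside, the pidfile writing in the task itself could be done in plain Ruby instead of shelling out to echo; a small sketch of the same task:
task(process_updates: :environment) do
  File.write(Rails.root.join("log", "process_alerts.pid"), Process.pid.to_s)
  File.write(Rails.root.join("log", "process_alerts.ppid"), Process.ppid.to_s)
  SynchronizationService::process_alerts
end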

Ruby: bundler exec not behaving as expected when getting signals

I am trying to setup Kibana with Supervisord using Bundler. Installing the Kibana dependencies with Bundler was no problems at all. I tried running bundle exec ruby kibana.rb and it worked. I also tried killing it with Ctrl-C while watching the processes that spawn in htop, and it worked.
However, when killing bundler using supervisord (or with signals like SIGINT or SIGTERM, for that matter), the two children spawned by it survive. So, when restarting the kibana job in supervisord, the restart will fail because the ports the restarted job tries to allocate are already in use.
From what I can find, bundle exec shouldn't fork, and from what I can tell, it doesn't. It just doesn't behave as I expect when it gets signals.
What can I do? Switching from bundler could be a solution, but it is not desirable.
I solved it. I wrote the little run_kibana bash helper below (found in another Stack Overflow thread):
function kill_kibana() {
  echo "Trapped termination of Kibana subprocesses"
  pkill -TERM -P $1
}
bundle exec ruby kibana.rb &
pid=$!
trap "kill_kibana $pid" SIGINT SIGTERM SIGKILL SIGQUIT
wait $pid
And it works like a charm. Might be reasonable to extend it to send the actual signal that was trapped in the kill_kibana() function.

Rake task for running a server in an independent thread then killing the thread when the task is complete?

How do I launch a thread within a rake task, then kill the thread when the task is complete?
Essentially I am writing a rake task to test a jekyll site. I would like be able to launch the server, do some other tasks and then destroy the thread when the task is complete. Here is what I have thus far:
task :test_site do
  `ejekyll --server`
  `git clean -Xdn`
  if agree("Clean all ignored files?")
    `git clean -Xdf`
  end
end
but unfortunately the only way I know of to stop jekyll --server is to use Ctrl-C. I would be happy to hear of a way to stop a jekyll --server in a manner that does not exit the rake task, but please just comment, as this question is specifically about threading and rake tasks.
You want Process.spawn, not a thread. It's a new process, not a thread of execution within an existing process. You get the PID back, so just send Process.kill(:QUIT, pid) or use whatever method you prefer to kill the spawned process.
pid = Process.spawn(
  "ejekyll", "--server",
  out: "/dev/null",
  err: "/dev/null"
)
# you may need to add a short sleep() here
# other stuff
Process.kill(:QUIT, pid) && Process.wait
If ejekyll has a command-line option to run in the foreground, it would be better to use that; otherwise, if it self-daemonizes, you need to know where it stores its PID file in order to identify and kill the daemon.
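If it does self-daemonize and you know where its pidfile lands (the path below is only an assumption), something along these lines could do it:
# Hypothetical pidfile location; adjust to wherever the daemon actually writes it.
pid = File.read("tmp/pids/ejekyll.pid").to_i
Process.kill(:QUIT, pid) # the daemon is not our child, so Process.wait does not apply here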
