Ruby running two scripts with multithreading - ruby

So I'm trying to have my Ruby (no Rails) application run with a single call from the terminal, i.e. 'ruby run.rb'. However, I have two scripts that need to be run, app.rb and app2.rb. The issue is that neither script ever finishes - they keep running to keep the system alive - which means only the first script (app.rb) ever gets started and the second (app2.rb) never runs. These scripts need to run concurrently!
It does work when I open another command line and just run one script in each however.
I have tried:
def runApp
  system("ruby app.rb")
end

def runApp2
  system("ruby app2.rb")
end
t1 = Thread.new{runApp()}
t2 = Thread.new{runApp2()}
t1.join
t2.join
However, this only runs the first thread (the one running app.rb), because that script runs forever. Any ideas how to run the second thread concurrently as well?
EDIT: One of the scripts uses the Sinatra gem; the other also calls one of its functions every ten seconds.

So one possible solution I've found is
system("ruby app.rb & ruby app2.rb")
However, I think this only works when running on Linux, so I would still appreciate any further solutions.

According to the documentation you can do it like this:
threads = []
threads << Thread.new { runApp() }
threads << Thread.new { runApp2() }
threads.each { |thr| thr.join }
I guess this works because both threads are started before any join is called, so they run in parallel.
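If the goal is simply to keep both scripts alive from one run.rb, a portable alternative is to spawn each script as a child process and wait on both. This is a sketch, assuming app.rb and app2.rb (the filenames from the question) live next to run.rb:

```ruby
require "rbconfig"

# Start each script as a separate child process; spawn returns
# immediately with the child's pid instead of blocking like system.
pids = [
  spawn(RbConfig.ruby, "app.rb"),  # filename assumed from the question
  spawn(RbConfig.ruby, "app2.rb"), # filename assumed from the question
]

# Block until both children exit; they run concurrently in the meantime.
pids.each { |pid| Process.wait(pid) }
```

Unlike `ruby app.rb & ruby app2.rb`, this does not depend on a Unix shell, so it also works on Windows.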

Related

How to wait for a system command to execute using ruby script

I have used the system method to run a bat file. The bat file opens and runs successfully, but my script does not pause until the bat file has completed its execution.
I have tried several methods like system and exec, but nothing works as I expected. I am new to Ruby. I want my script to wait until the bat file has completed its execution.
Code:
system('path/to/file.bat')
From what I know, system uses a subshell and waits for the called command to be fully executed before continuing execution of the caller script.
I just executed the following code locally:
require 'time'
puts Time.now; system('sleep 5'); puts Time.now
# result:
# 2023-02-03 11:44:19 +0100
# 2023-02-03 11:44:24 +0100
And as you can see there are 5 seconds between the first Time.now call and the second one, so the calling process waited for system to finish executing before printing the time again.
Maybe the behavior is different on Windows. Would you mind trying it locally and sharing the result?
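The difference is easy to see by contrasting system with spawn, which does not wait. A sketch, using an inline sleep as the stand-in for the bat file:

```ruby
require "rbconfig"

# system blocks until the child process exits...
t0 = Time.now
system(RbConfig.ruby, "-e", "sleep 1")
puts "system returned after %.1fs" % (Time.now - t0)  # ~1.0s

# ...whereas spawn returns immediately, and you wait explicitly.
t0 = Time.now
pid = spawn(RbConfig.ruby, "-e", "sleep 1")
puts "spawn returned after %.2fs" % (Time.now - t0)   # ~0.0s
Process.wait(pid)  # now we block until the child exits
```

So if a script is not waiting, it is usually because it used spawn (or a shell `&`) somewhere, not system.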

ensure block of code runs when program exits

I want to make sure that a piece of code runs when the Ruby program ends. I have used the following approaches, but they do not work in some situations.
def a_method
  # do some work
ensure
  # code that must run when the method ends, and if the program exits while still in this method
end

def a_method
  at_exit {
    # run code that needs to be run when the process exits
  }
  # do some work
ensure
  # code that needs to run when the method ends
end
Those two approaches work very well when the process is killed with any signal other than kill -9 (although I haven't tried all the signals).
So is there a way to make sure the code runs even if the process is killed with this signal?
Signal 9 is non-catchable, non-ignorable kill, by design. Your at_exit will not run because the operating system will simply terminate any process that receives this signal, not giving it any chance to do any extra work.

how do i run an asynchronous loop in ruby?

I need to execute a process that runs several admin system commands. I want to keep the sudo timestamp current while it runs, in case the process runs too long.
I have the following code, but it does not seem to work.
sudo_keep_alive = Thread.start do
  def sudo
    sleep 5.minutes
    `sudo -v`
    sudo
  end
  sudo
end

at_exit do
  sudo_keep_alive.kill
end
Is there a convention for this?
UPDATE
The reason I cannot run the script as root is that there are other system commands the script runs that cannot run as root. Each command needs to be responsible for running its own admin commands. The script can potentially take a considerable amount of time to run, so I simply wanted to keep the sudo timestamp fresh in the event a command needs it.
To answer your other question, there is a better way to run an asynchronous loop.
By using head-tail recursion (def sudo; do_something; sudo; end) you risk running into a SystemStackError after around 10,000 calls (see How does your favorite language handle deep recursion?).
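You can see that failure mode directly with a deliberately deep recursion (a sketch; the hypothetical countdown stands in for the recursive sudo method):

```ruby
# Each recursive call adds a stack frame; MRI does not do tail-call
# optimization by default, so a deep enough recursion overflows the
# VM stack and raises SystemStackError.
def countdown(n)
  return if n.zero?
  countdown(n - 1)
end

begin
  countdown(1_000_000)  # far deeper than the default stack allows
rescue SystemStackError
  puts "SystemStackError: stack level too deep"
end
```

A loop, by contrast, uses constant stack space no matter how many iterations it runs.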
Instead, just use a regular old ruby loop.
Thread.new do
  loop do
    sleep 300 # 5.minutes is not base Ruby; it comes from ActiveSupport
    call_my_function
  end
end
As mentioned by David Unric, there is no need to kill the thread using at_exit, as your main process will automatically kill any active threads when it finishes.
Scrap all this and execute your ruby script as root instead.
$ sudo ruby myscript.rb

Rakefile - stop every tasks in a multitask

I have an application running with Flask, and use Compass as css preprocessor. Which means I need to start the python server and compass for development. I made what I thought was a clever Rakefile to start everything from one command and have everything run in only one terminal window.
Everything works, but the problem is that when I try to stop everything (with cmd + c), it only kills the compass task and the Flask server keeps running. How can I make sure every task is stopped? Or is there an alternative way to launch several tasks simultaneously without this issue?
Here is my rakefile, pretty simple:
# start compass
task :compass do
  system "compass watch"
end

# start the flask server
task :python do
  system "./server.py"
end

# open the browser once everything is ready
task :open do
  `open "http://127.0.0.1:5000"`
end

# the command I run: `$ rake server`
multitask :server => ['compass', 'python', 'open']
EDIT
For the record, I was using a Makefile and everything worked perfectly. But I changed part of my workflow and started using a Rakefile, so I Rakefile'd everything and got rid of the Makefile for simplicity.
That is because system creates new processes for your commands. To make sure they are killed alongside your ruby process, you will need to kill them yourself. For this you need to know their process ids, which system does not provide, but spawn does. Then you can wait for them to exit, or kill the sub-processes when you hit ^C.
An example:
pids = []

task :foo do
  pids << spawn("sleep 3; echo foo")
end

task :bar do
  pids << spawn("sleep 3; echo bar")
end

desc "run"
multitask :run => [:foo, :bar] do
  begin
    puts "run"
    pids.each { |pid| Process.waitpid(pid) }
  rescue Interrupt
    # a bare rescue only catches StandardError; ^C raises Interrupt,
    # which must be named explicitly to be caught here
    pids.each { |pid| Process.kill("TERM", pid) }
    exit
  end
end
If you do a rake run on that, the commands get executed, but when you abort, the sub-processes are sent the TERM signal. There may still be an exception that makes it to the top level, but I guess for a Rakefile that is not meant to be published that does not matter too much. Waiting for the processes is necessary, or the ruby process will finish before the others and the pids are lost (or have to be dug out of ps).
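An alternative sketch, assuming a POSIX system: register the cleanup with at_exit so the children are terminated however the rake process ends, whether normally, via an exception, or on an interrupt:

```ruby
pids = []

# Kill any still-running children when the ruby process exits.
at_exit do
  pids.each do |pid|
    begin
      Process.kill("TERM", pid)
    rescue Errno::ESRCH
      # child already exited on its own
    end
  end
end

pids << spawn("sleep 30")  # stands in for `compass watch` / `./server.py`
```

This keeps the kill logic in one place instead of inside each task's rescue clause.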

Troubles with parallel processes in IRB on Mac

I am working with a database via IRB, and I would like to make periodic changes to the database (e.g., every 10 sec), showing the log in STDOUT.
I would also like to have manual control, being able to change the database and to stop the first process.
So far I came up to the following
def start
  stop
  @running = Thread.new do
    loop do
      fork do
        puts 'change the database'
      end
      sleep 10
    end
  end
  nil
end

def stop
  @running.kill if @running
end
However, this does not run every 10 seconds unless I enter something in the main IRB thread.
How can I make it work?
Some versions of readline on OS X are blocking. If you experience the behavior you described, you can disable readline by putting
IRB.conf[:USE_READLINE] = false
in .irbrc
Works fine for me (tested in irb with ruby 1.9.2-p180 and 1.8.7-p334).