Using a Ruby on Rails app to control a console application

The console application I would like to control is Bluesoleil. It's Bluetooth software/a driver, but the details of the software aren't that important, I think. What I want to do, basically, is type console commands in a Windows or Linux terminal environment using a web browser running a Ruby on Rails app.
So the high-level design of the Ruby on Rails app would be something like this:
A web browser shows a page with the UI for Bluesoleil.
The Ruby on Rails app renders the page for the UI, takes in commands from the user, and displays results through the web browser, just like a regular Ruby on Rails app.
On the backend, the Ruby on Rails app types commands into the console that is running Bluesoleil, and the result shown in the console is grabbed as a string by the Ruby on Rails app.
Is something like this possible with Ruby on Rails?
Just to clear up possible confusion: when I say "console" and "console application" here, I don't mean the Rails console or a Ruby console. Console here just means a terminal environment running console applications and so on.
Thank you.

If you only need to run a "one-off" command, just use backticks (there is a one-line sketch right after the class below). If you need to maintain a long-running background process which accepts commands and returns responses, you can do something like this (some of the details have been edited out, since this code is from a proprietary application):
class Backend
  def initialize
    @running = false
    @server = nil
    # if we forget to call "stop", make sure to close down background process on exit
    ObjectSpace.define_finalizer(self, lambda { stop if @running })
  end

  def start
    stop if @running
    @server = IO.popen("backend", "w+")
    @running = true
  end

  def stop
    return if not @running
    @server << "exit\n"
    @server.flush
    @running = false
  end

  def query(*args)
    raise "backend not running" if not @running
    @server << "details edited out\n"
    @server.flush
    loop do
      response = parse_response
      # handle response
      # break loop when backend is finished sending data
    end
  end

  private

  def parse_response
    # details edited out, uses c = @server.getc to read data from backend
    # getc will block if there is nothing to read,
    # so there needs to be an unambiguous terminator telling you where
    # to stop reading
  end
end
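For the one-off case mentioned at the top, backticks run a command and return its standard output as a string. A minimal sketch (the bluesoleil-cli command name is made up for illustration; any console command works the same way):

output = `bluesoleil-cli status` # hypothetical command; backticks block until it exits
puts output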
You can adapt the Backend class above to your own needs. Beware of situations where the background process dies and leaves the main process hanging.
Although it doesn't apply to your situation, if you are designing the background program yourself: build the background process so that if anything makes it crash, it sends an unambiguous message like "PANIC" which tells the main process to either exit with an error message or try starting another background process. Also, make sure it is completely unambiguous where each "message" begins and ends, and test the coordination between the main and background processes well; if there are bugs on either end, it is very easy to get into a situation where both processes are stuck waiting for each other. One more thing: design the "protocol" the two processes speak to each other in a way that makes it easy to keep the two synchronized.
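To make the framing advice concrete, here is a minimal sketch assuming a newline-delimited JSON protocol (the protocol itself is my assumption for illustration, not the edited-out one from the code above):

require 'json'

# each message is a single line of JSON, so "\n" unambiguously ends a frame
def send_message(io, payload)
  io.puts(JSON.generate(payload))
  io.flush
end

def read_message(io)
  line = io.gets or raise "backend closed the pipe"
  msg = JSON.parse(line)
  # a crashed backend announces itself instead of leaving us blocked on gets
  raise "backend panicked: #{msg['error']}" if msg['type'] == 'panic'
  msg
end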

Related

RSpec testing of a multiprocess library

I'm trying to test a gem I'm creating with RSpec. The gem's purpose is to create queues (using 'bunny'). It will serve to communicate between processes on several servers.
But I cannot find documentation on how to safely create processes inside the RSpec running environment without spawning several testing processes (each displaying its own example failures and successes).
Here is what I wanted the tests to do:
Spawn child processes that wait on the queue
Push messages from the main RSpec process
Consume the queue in the child processes
Wait for the children to stop and collect the number of messages received by each child.
For now I have implemented a simple case where the child consumes only one message and then stops.
Here is my current code:
module Queues
  # Basic CR accepting only jobs of type cmd_line
  class CR
    attr_reader :nb_jobs

    def initialize
      # opening communication pipes
      @rout, @wout = IO.pipe
      @nb_jobs = nil # not yet available.
    end

    def main
      @todo = JobPipe.instance
      job = @todo.pop do |j|
        # accept only CMD_LINE type of jobs.
        j.type == Messages::Job::CMD_LINE
      end
      # run command
      %x{#{job.cmd}}
      @wout.puts "1" # saying that we did one job
    end

    def run
      @pid = Process.fork
      if @pid.nil? then
        # we are in the child
        self.main
        @rout.close
        @wout.close
        exit
      end
    end

    def wait
      @nb_jobs = @rout.gets(nil).to_i
      Process.wait(@pid)
      @rout.close
      @wout.close
      @nb_jobs
    end
  end

  @job = Messages::Job.new({:type => Messages::Job::CMD_LINE, :cmd => "sleep 1"})

  RSpec.describe JobPipe do
    context "one Orchestrator and one CR" do
      before(:each) do
        indalo_queue_pre_configure
      end

      it "can send a job with Orchestrator and be received by CR" do
        cr = CR.new
        cr.run # execute the C.R. process
        todo = JobPipe.instance
        todo.push(@job)
        nb_jobs = cr.wait
        expect(nb_jobs).to eql(1)
      end
    end

    context "one Orchestrator and several CRs" do
      it 'can send one job per CR and get all back' do
        crs = Array.new(rand(2..10)) { CR.new }
        crs.each do |cr|
          cr.run
        end
        todo = JobPipe.instance
        crs.each do |_|
          todo.push(@job)
        end
        nb_jobs = 0
        crs.each do |cr|
          nb_jobs += cr.wait
        end
        expect(nb_jobs).to eql(crs.length)
      end
    end
  end
end
Edit: the question is (sorry for not putting it right up front, that was a mistake):
Is there a way to use RSpec correctly in a multi-process environment?
I'm not looking for a code review; I just wanted to show a clear example of what I wanted to do. Here I used fork, but this duplicates the whole process (including the RSpec part), and I got numerous RSpec outputs, which is not what you would expect from a test suite.
I would expect only the main program to report the RSpec output/stats, with the subprocesses just interacting with it.
The only way I see to do that correctly is not to fork, but to call subprocesses through some other means. Maybe I'm answering my own question...
But not knowing RSpec well, I was wondering if someone knew how to do it within RSpec without writing external code. It seems to me that having separate code files tied to a single test example is not a good idea.
What I found about multi-process testing is this RSpec plugin. The only thing is that I don't know about the mock concept, but maybe I have to learn about it...
OK, I found an answer, which is to use the &block argument of the Process.fork method. In that case you don't really duplicate the whole process; the block of code is simply executed in another process, which then exits with status 0 (as described in the Ruby docs).
This prevents the children from inheriting the whole RSpec environment and printing the state of your tests many times over.
PS: be careful not to forget to redirect the child process's STDOUT/STDERR if you don't want them to pollute the STDOUT/STDERR of the test.
PS2: don't forget to close @wout on the parent side if you call @rout.gets(nil) there, because keeping it open in the parent prevents EOF from happening (a bug in the code I presented), even if you close it in the child.
PS3: use two pipes instead of one, so the child and parent don't talk and listen on the same one. A beginner's mistake, but I made it again.
PS4: use an exit statement at the end of the &block to keep the child from becoming a zombie and to make sure the parent doesn't wait too long for the rest of the child process to die.
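Putting those points together, here is a minimal, untested sketch of what the CR#run method above could look like with the block form of Process.fork:

def run
  @pid = Process.fork do
    # only this block runs in the child, which then exits with status 0,
    # so the parent's RSpec reporter is never duplicated
    STDOUT.reopen(File::NULL) # keep the test output clean (PS)
    STDERR.reopen(File::NULL)
    @rout.close # the child only writes
    main
    exit # be explicit, as per PS4
  end
  @wout.close # the parent only reads; leaving this open prevents EOF (PS2)
end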
Sorry for the long post, but it's good that this stays written down, for my own sake too ^^

Run when you can

In my Sinatra web application, I have a route:
get "/" do
temp = MyClass.new("hello",1)
redirect "/home"
end
Where MyClass is:
class MyClass
  @instancesArray = []

  def initialize(string, id)
    @string = string
    @id = id
    @instancesArray[id] = self
  end

  def run(id)
    puts @instancesArray[id].string
  end
end
At some point I would want to run MyClass.run(1), but I wouldn't want it to execute immediately, because that would slow down the server's response to some clients. How could I tell the server to wait until there is an empty/light load and then run MyClass.run(temp)? Can I do that?
Addendum
Here is some sample code for what I would want to do:
$var = 0

get "/" do
  $var = $var + 1 # each time a request is received, it increments
end
After that I would have a loop that counts requests per minute (after a minute it would reset $var to 0), and if $var was less than some number, it would run tasks until the load increased.
As Andrew mentioned (correctly; not sure why he was voted down), Sinatra stops processing a route when it sees a redirect, so any subsequent statements will never execute. As you stated, you don't want to put those statements before the redirect, because that would block the request until they complete. You could potentially send the redirect status and header to the client without using the redirect method and then call MyClass#run. This would have the desired effect (from the client's perspective), but the server process (or thread) would still block until it completes. This is undesirable because that process (or thread) will not be able to serve any new requests until it unblocks.
You could fork a new process (or spawn a new thread) to handle this background task asynchronously from the main process associated with the request. Unfortunately, this approach has the potential to get messy. You would have to code around different situations like the background task failing, or the fork/spawn failing, or the main request process not ending if it owns a running thread or other process. (Disclaimer: I don't really know enough about IPC in Ruby and Rack under different application servers to understand all of the different scenarios, but I'm confident that here there be dragons.)
The most common solution pattern for this type of problem is to push the task onto some kind of work queue to be serviced later by another process. Pushing a task onto the queue is ideally a very quick operation, and won't block the main process for more than a few milliseconds. This introduces a few new challenges (where does the queue live? how is the task described so that it can be performed later without any context? how do we maintain the worker processes?) but fortunately a lot of the legwork has already been done by other people. :-)
There is the delayed_job gem, which seems to provide a nice all-in-one solution. Unfortunately, it's mostly geared towards Rails and ActiveRecord, and the efforts people have made in the past to make it work with Sinatra look to be unmaintained. The contemporary, framework-agnostic solutions are Resque and Sidekiq. It might take some effort to get up and running with either option, but it would be well worth it if you have several "run when you can" type functions in your application.
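As a rough illustration of the work-queue pattern with Sidekiq (a sketch only: the job class name is made up, MyClass comes from the question, and Sidekiq needs a running Redis instance):

require 'sinatra'
require 'sidekiq'

class RunWhenYouCanJob
  include Sidekiq::Worker

  def perform(id)
    MyClass.run(id) # the slow work happens in a separate worker process
  end
end

get "/" do
  RunWhenYouCanJob.perform_async(1) # enqueue and return in milliseconds
  redirect "/home"
end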
MyClass.run(temp) is never actually executing. In your current request to the / path, you instantiate a new instance of MyClass and then immediately issue a redirect to /home. I'm not entirely sure what the question is, though. If you want something to execute after the redirect, that functionality needs to exist within the /home route.
get '/home' do
  # some code like MyClass.run(some_arg)
end

Background thread in Rails can't see instance variables

I need to gather up some data from a rails application, aggregate it, and send it off to a remote server periodically. I instantiate my aggregation class in a global variable (I know, I know) in application.rb.
Inside my aggregation class, I fire up a thread that sleeps for 10 seconds, then looks at the queue, processes the data, and sends it. The queue is a hash stored in an instance variable of the class.
From the Rails controller, I call a method in the aggregator class to queue the data in the hash. Of course this is on a different thread than the background task that reads the queue. The problem is that the background task never sees any data in the hash. In my log, I print out the object_id of the hash both when I write to it (from the controller's thread) and when I read from it (from the background thread). The hash's object_id matches in both threads, but the background thread never sees the data.
What's killing me is that this works fine outside of Rails. I've set up tests with many threads that really pound on it, and it works fine (there is some thread protection that I am not showing, for clarity). Does anyone know how the object_ids can match, but the contents not be consistent?
class Aggregator
  def initialize
    @q = {}
    @timer = nil
  end

  def start
    @timer = Thread.new do
      loop do
        sleep(10)
        flush_q
      end
    end
  end

  def flush_q
    logger.debug "flush: q.object_id = #{@q.object_id}" # matches what I get below
    logger.debug "flush: q.length = #{@q.length}" # always zero!
    @q.each_pair do |k, v|
      # pack it up and send it
    end
    @q.clear
  end

  def add(item)
    logger.debug "add: q.object_id = #{@q.object_id}" # matches what I get above
    @q[item.name] ||= item
    logger.debug "add: q.length = #{@q.length}" # increases with each add
    # not actually that simple, but not relevant
  end
end
I'm going to go out on a limb and assume that your code is deployed using a forking app server (e.g. Unicorn or Passenger).
This means that your app is loaded once and then new instances are forked from that master instance. Forking is cheap, so new instances of the app can be started up and shut down really quickly.
I believe that your aggregator instance is getting created/started in this master process. When it forks, the process's entire memory space is copied (so there is an instance of the aggregator in the new process, with the same object_id and so on).
However, when forking, only the current thread is copied, so the aggregator's flushing is only happening in the master process, while all the appending is happening in the child processes. You could confirm this by adding Process.pid to what you log; you should see that your logging is coming from two different processes.
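For example, tagging the question's debug lines with the pid (a sketch against the Aggregator code above) would make the split obvious:

logger.debug "flush: pid = #{Process.pid} q.object_id = #{@q.object_id}" # the master process
logger.debug "add: pid = #{Process.pid} q.object_id = #{@q.object_id}" # the child processes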
One way of fixing this would be to start/restart your thread after the child process has forked. How you do this depends on how the app is being served. With Unicorn you can do this in your Unicorn config via the after_fork hook (a sketch follows the Passenger example below). With Passenger you do
PhusionPassenger.on_event(:starting_worker_process) do |forked|
  if forked
    ...
  end
end
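The equivalent Unicorn hook would look roughly like this (a sketch assuming the aggregator lives in a global variable, as the question describes):

# config/unicorn.rb
after_fork do |server, worker|
  $aggregator.start # each worker restarts its own flushing thread after the fork
end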

How do I loop the restart of a daemon?

I am trying to use Ruby's daemons gem and loop the restart of a daemon that has its own loop. My code looks like this now:
require 'daemons'

while true
  listener = Daemons.call(:force => true) do
    users = accounts.get_updated_user_list
    TweetStream::Client.new.follow(users) do |status|
      puts "#{status.text}"
    end
  end
  sleep(60)
  listener.restart
end
Running this gives me the following error (after 60 seconds):
undefined method `restart' for #<Daemons::Application:0x007fc5b29f5658> (NoMethodError)
So obviously Daemons.call doesn't return a controllable daemon like I thought it did. What do I need to do to set this up correctly? Is a daemon even the right tool here?
I think this is what you're after, although I haven't tested it.
class RestartingUserTracker
  def initialize
    @client = TweetStream::Client.new
  end

  def handle_status(status)
    # do whatever it is you're going to do with the status
  end

  def fetch_users
    accounts.get_updated_user_list
  end

  def restart
    @client.stop_stream
    users = fetch_users
    @client.follow(users) do |status|
      handle_status(status)
    end
  end
end

EM.run do
  client = RestartingUserTracker.new
  client.restart
  EM::PeriodicTimer.new(60) do
    client.restart
  end
end
Here's how it works:
TweetStream uses EventMachine internally, as a way of polling the API forever and handling the responses. I can see why you might have felt stuck, because the normal TweetStream API blocks forever and doesn't give you a way to intervene at any point. However, TweetStream does allow you to set up other things in the same event loop. In your case, a timer. I found the documentation on how to do that here: https://github.com/intridea/tweetstream#removal-of-on_interval-callback
By starting up our own EventMachine reactor, we're able to inject our own code into the reactor as well as use TweetStream. In this case, we're using a simple timer that just restarts the client every 60 seconds.
EventMachine is an implementation of something called the Reactor Pattern. If you want to fully understand and maintain this code, it would serve you well to find some resources about it and gain a full understanding. The reactor pattern is very powerful, but can be difficult to grasp at first.
However, this code should get you started. Also, I'd consider renaming the RestartingUserTracker to something more appropriate.

Ruby Eventmachine queueing problem

I have an HTTP client written in Ruby that can make synchronous requests to URLs. However, to quickly execute multiple requests I decided to use EventMachine. The idea is to queue all the requests and execute them using EventMachine.
class EventMachineBackend
  ...
  ...
  def execute(request)
    $q ||= EM::Queue.new
    $q.push(request)
    $q.pop { |request| request.invoke }
    EM.run { EM.next_tick { EM.stop } }
  end
  ...
end
Forgive my use of a global queue variable; I will refactor it later. Is what I am doing in EventMachineBackend#execute the right way to use EventMachine queues?
One problem I see in my implementation is that it is essentially synchronous: I push a request, pop and execute it, and wait for it to complete.
Could anyone suggest a better implementation?
Your request logic has to be asynchronous for it to work with EventMachine. I suggest that you use em-http-request. You can find an example of how to use it here; it shows how to run requests in parallel. An even better interface for running multiple connections in parallel is the MultiRequest class from the same gem.
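For reference, a single asynchronous request with em-http-request looks roughly like this (a minimal sketch; the URL is a placeholder):

require 'em-http-request'

EM.run do
  http = EM::HttpRequest.new('http://example.com/').get
  http.callback do
    puts http.response_header.status # runs when the request succeeds
    EM.stop
  end
  http.errback do
    puts "request failed: #{http.error}" # runs on connection errors
    EM.stop
  end
end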
If you want to queue requests and only run a fixed number of them in parallel you can do something like this:
EM.run do
  urls = [...] # regular array with URLs
  active_requests = 0

  # pre-declare launch_next so the when_done proc below
  # sees it as a local variable rather than a method call
  launch_next = nil

  # this routine will be used as a callback and will
  # be run when each request finishes
  when_done = proc do
    active_requests -= 1
    if urls.empty? && active_requests == 0
      # if there are no more urls, and there are no active
      # requests it means we're done, so shut down the reactor
      EM.stop
    elsif !urls.empty?
      # if there are more urls launch a new request
      launch_next.call
    end
  end

  # this routine launches a request
  launch_next = proc do
    # get the next url to fetch
    url = urls.pop
    # launch the request, and register the callback
    request = EM::HttpRequest.new(url).get
    request.callback(&when_done)
    request.errback(&when_done)
    # increment the number of active requests, this
    # is important since it will tell us when all requests
    # are done
    active_requests += 1
  end

  # launch three requests in parallel, each will launch
  # a new request when done, so there will always be
  # three requests active at any one time, unless there
  # are no more urls to fetch
  3.times do
    launch_next.call
  end
end
Caveat emptor, there may very well be some detail I've missed in the code above.
If you think it's hard to follow the logic in my example, welcome to the world of evented programming. It's really tricky to write readable evented code. It all goes backwards. Sometimes it helps to start reading from the end.
I've assumed that you don't want to add more requests after you've started downloading; it doesn't look like it from the code in your question, but should you want to, you can rewrite my code to use an EM::Queue instead of a regular array, and remove the part that does EM.stop, since you will not be stopping. You can probably remove the code that keeps track of the number of active requests too, since that's no longer relevant. The important part would look something like this:
launch_next = proc do
  urls.pop do |url|
    request = EM::HttpRequest.new(url).get
    request.callback(&launch_next)
    request.errback(&launch_next)
  end
end
Also, bear in mind that my code doesn't actually do anything with the response. The response will be passed as an argument to the when_done routine (in the first example). I also do the same thing for success and error, which you may not want to do in a real application.
