I am trying to run a continuous loop that processes flight tracking data and turns it into JSON strings, alongside a Sinatra server that responds to GET requests with those JSON strings. I have tried using threads to handle this but have had no success. How can I run these two tasks side by side? Here are some more specifics:
I have a class Aircraft with an array of Aircraft objects called Aircraft::All, and a method that continually updates this array. I want that updater to run alongside a Sinatra server that responds to GET requests with the list of aircraft in JSON format.
Here is the code:
# starting the data stream from the external process
IO.popen("./dump1090") do |data|
  block = ""

  # Sinatra server thread
  t1 = Thread.new do
    set :port, 8080
    set :environment, :production

    get '/aircrafts' do
      return_message = {}
      if !Aircraft::All.first.nil?
        return_message[:status] = 'success'
        return_message[:aircrafts] = message_maker
      else
        return_message[:status] = 'sorry - something went wrong'
        return_message[:aircrafts] = []
      end
      return_message.to_json
    end
  end

  # parsing the data in the main thread -- the process
  # I want to run alongside the server (parse_block updates Aircraft::All)
  while line = data.gets
    if line.to_s.split('').first == '*'
      parse_block(block)
      puts block
      Aircraft::All.reject! { |aircraft| Time.now.to_f - aircraft.contact_time > 30 }
      block = ""
    end
    block += line.to_s
  end
end
Here the main thread runs the Sinatra app and an additional thread loads the data, which is the arrangement I am more used to.
class Aircraft
  @aircrafts = {}

  def self.all
    @aircrafts
  end
end

Thread.new do
  no = 1
  while true
    Aircraft.all[no] = 'Boing'
    no += 1
    sleep(3)
  end
end

get '/aircrafts' do
  Aircraft.all.to_json
end
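For completeness, here is a minimal, self-contained sketch of that pattern in classic Sinatra style; the port, the fake "Boeing" entries, and the three-second interval are only placeholder assumptions standing in for real dump1090 parsing:
require 'sinatra'
require 'json'

set :port, 8080

aircrafts = {}   # shared state: written by the updater thread, read by the route

updater = Thread.new do
  no = 1
  loop do
    aircrafts[no] = "Boeing #{no}"   # stand-in for parsing real dump1090 output
    no += 1
    sleep(3)
  end
end
updater.abort_on_exception = true    # surface updater errors instead of failing silently

get '/aircrafts' do
  content_type :json
  aircrafts.to_json
end
Run it with something like ruby app.rb: the updater thread keeps mutating the hash while Sinatra serves requests from the main thread.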
Related
I am encountering an interesting issue with Ruby TCPServer, where once a client connects, it continually uses more and more CPU processing power until it hits 100% and then the entire system starts to bog down and can't process incoming data.
The processing class that is having an issue is designed to be a TCP Client that receives data from an embedded system, processes it, then returns the processed data to be further used (either by other similar data processors, or output to a user).
In this particular case, there is an external piece of code that would like this processed data, but cannot access it from the main parent code (the thing the original processing class returns its data to). This external piece may or may not be connected at any point while it is running.
To solve this, I set up a Thread with a TCPServer, and the processing class continually adds to a queue, and the Thread pulls from the queue and sends it to the client.
It works great, except for the performance issues. I am curious if I have something funky going on in my code, or if it's just the nature of this methodology and it will never be performant enough to work.
Thanks in advance for any insight/suggestions with this problem!
Here is my code/setup, with some test helpers:
process_data.rb
require 'socket'

class ProcessData
  def initialize
    super
    @queue = Queue.new
    @client_active = false

    Thread.new do
      # Waiting for connection
      @server = TCPServer.open('localhost', 5000)
      loop do
        Thread.start(@server.accept) do |client|
          puts 'Client connected'
          # Connection established
          @client_active = true
          begin
            # Continually attempt to send data to client
            loop do
              unless @queue.empty?
                # If data exists, send it to client
                begin
                  until @queue.empty?
                    client.puts(@queue.pop)
                  end
                rescue Errno::EPIPE => error
                  # Client disconnected
                  client.close
                end
              end
              sleep(1)
            end
          rescue IOError => error
            # Client disconnected
            @client_active = false
          end
        end # Thread.start(@server.accept)
      end # loop do
    end # Thread.new do
  end

  def read(data)
    # Data comes in from embedded system on this method
    # Do some processing
    processed_data = data.to_i + 5678
    # Ready to send data to external client
    if @client_active
      @queue << processed_data
    end
    return processed_data
  end
end
test_embedded_system.rb (source of the original data)
require 'socket'

@data = '1234'*100000 # Simulate lots of data coming in

embedded_system = TCPServer.open('localhost', 5555)
client_connection = embedded_system.accept

loop do
  client_connection.puts(@data)
  sleep(0.1)
end
parent.rb (this is what will create/call the ProcessData class)
require_relative 'process_data'

processor = ProcessData.new

loop do
  begin
    s = TCPSocket.new('localhost', 5555)
    while data = s.gets
      processor.read(data)
    end
  rescue => e
    sleep(1)
  end
end
random_client.rb (wants data from ProcessData)
require 'socket'

loop do
  begin
    s = TCPSocket.new('localhost', 5000)
    while processed_data = s.gets
      puts processed_data
    end
  rescue => e
    sleep(1)
  end
end
To run the test on Linux, open 3 terminal windows:
Window 1: ./test_embedded_system.rb
Window 2: ./parent.rb (CPU usage is stable)
Window 3: ./random_client.rb (CPU usage continually grows)
I ended up figuring out what the issue was, and unfortunately I led folks astray with my example.
It turns out my example didn't quite have the issue I was having: the main difference was that the sleep(1) was not present in my real version of process_data.rb.
That sleep is actually incredibly important, because it sits inside a loop do; without it, the thread never yields the GVL and continually eats up CPU.
Essentially, the problem was unrelated to the TCP side and entirely down to threads and loops.
If you stumble on this question later, you can put a sleep(0) in your loops if you don't want them to wait but do want them to yield the GVL.
Check out these answers as well for more info:
Ruby infinite loop causes 100% cpu load
sleep 0 has special meaning?
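As an aside to the fix above, the polling in process_data.rb can be avoided entirely by blocking on the queue rather than checking it in a loop; here is a rough sketch of just the inner per-client loop (Queue#pop suspends the thread, releasing the GVL, while the queue is empty, so no sleep is needed):
loop do
  data = @queue.pop          # blocks until something is pushed
  begin
    client.puts(data)
  rescue Errno::EPIPE
    client.close             # client went away; stop serving this connection
    break
  end
end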
How can I kill running processes in EventMachine?
Below is an example: I'm starting 10 processes and then trying to erase them all (but it doesn't work). My goal is to not see the "Finished" output.
require "rubygems"
require "eventmachine"
class Event
def start
sleep(5)
puts Time.now.to_s + ": Finished!"
end
end
EventMachine.run do
events = []
10.times {
handle = Event.new
events << handle
EventMachine.defer(proc {
handle.start
})
}
# Terminate all events!
events.each do |handle|
handle = nil
ObjectSpace.garbage_collect
end
end
I'm aware that I could set a flag and check it before doing the output, but that doesn't feel like the "real" solution. Or is that really the only option?
Try EventMachine.stop_event_loop; it will "cause all open connections and accepting servers to be run down and closed".
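A minimal sketch of how that could be wired into the example above (the one-second delay is arbitrary, and whether deferred blocks that are already running get cut short depends on how EventMachine tears down its thread pool):
EventMachine.run do
  10.times do
    handle = Event.new
    EventMachine.defer(proc { handle.start })
  end

  # shut the reactor down shortly after scheduling the work
  EventMachine.add_timer(1) { EventMachine.stop_event_loop }
end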
I use an API that is written on top of EM. This means that to make a call, I need to write something like the following:
EventMachine.run do
  api.query do |result|
    # Do stuff with result
  end
  EventMachine.stop
end
Works fine.
But now I want to use this same API within a Sinatra controller. I tried this:
get "/foo" do
output = ""
EventMachine.run do
api.query do |result|
output = "Result: #{result}"
end
EventMachine.stop
end
output
end
But this doesn't work. The run block is bypassed, so an empty response is returned and once stop is called, Sinatra shuts down.
Not sure if it's relevant, but my Sinatra app runs on Thin.
What am I doing wrong?
I've found a workaround: busy-waiting until data becomes available. Possibly not the best solution, but at least it works:
helpers do
  def wait_for(&block)
    while (return_val = block.call).nil?
      sleep(0.1)
    end
    return_val
  end
end
get "/foo" do
output = nil
EventMachine.run do
api.query do |result|
output = "Result: #{result}"
end
end
wait_for { output }
end
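If the API can fail without ever yielding a result, the same helper can be given a timeout so a request doesn't spin forever; a possible variant (the 10-second limit and the 504 response are arbitrary choices):
helpers do
  # busy-wait as above, but give up after `timeout` seconds
  def wait_for(timeout = 10, &block)
    deadline = Time.now + timeout
    while (return_val = block.call).nil?
      halt 504, "upstream query timed out" if Time.now > deadline
      sleep(0.1)
    end
    return_val
  end
end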
I have a consumer which pulls messages off a queue via an evented subscription. It takes those messages and then connects to a rather slow HTTP interface. I have a worker pool of 8, and once all the workers are busy I need to stop pulling requests from the queue while the fibers that are working on the HTTP jobs keep working. Here is an example I've thrown together.
def send_request(callback)
  EM.synchrony do
    while $available <= 0
      sleep 2
      puts "sleeping"
    end
    url = 'http://example.com/api/Restaurant/11111/images/?image%5Bremote_url%5D=https%3A%2F%2Firs2.4sqi.net%2Fimg%2Fgeneral%2Foriginal%2F8NMM4yhwsLfxF-wgW0GA8IJRJO8pY4qbmCXuOPEsUTU.jpg&image%5Bsource_type_enum%5D=3'
    result = EM::Synchrony.sync EventMachine::HttpRequest.new(url, :inactivity_timeout => 0).send("apost", :head => {:Accept => 'services.v1'})
    callback.call(result.response)
  end
end

def display(value)
  $available += 1
  puts value.inspect
end

$available = 8

EM.run do
  EM.add_periodic_timer(0.001) do
    $available -= 1
    puts "Available: #{$available}"
    puts "Tick ..."
    puts send_request(method(:display))
  end
end
I have found that if I call sleep within a while loop in the synchrony block, the reactor loop gets stuck. If I call sleep within an if statement (sleeping just once), that usually gives the requests enough time to finish, but it is unreliable at best. If I use EM::Synchrony.sleep, then the main reactor loop keeps creating new requests.
Is there a way to pause the main loop but have the fibers finish their execution?
sleep 2
...
add_periodic_timer(0.001)
Are you serious?
Have you ever thought about how many send_requests are sleeping in that loop? And it's adding 1,000 more every second.
What about this:
require 'eventmachine'
require 'em-http'
require 'fiber'

class Worker
  URL = 'http://example.com/api/whatever'

  def initialize(callback)
    @callback = callback
  end

  def work
    f = Fiber.current
    loop do
      http = EventMachine::HttpRequest.new(URL).get :timeout => 20
      http.callback do
        @callback.call http.response
        f.resume
      end
      http.errback do
        f.resume
      end
      Fiber.yield
    end
  end
end

def display(value)
  puts "Done: #{value.size}"
end

EventMachine.run do
  8.times do
    Fiber.new do
      Worker.new(method(:display)).work
    end.resume
  end
end
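The nice property of this arrangement is that each fiber keeps exactly one request in flight: a new request is only issued when the previous callback or errback resumes the fiber, so the eight fibers themselves cap the concurrency and no periodic timer or shared counter is needed.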
[EDIT NOTE: Noticed I had put the mutex creation in the constructor. Moved it and noticed no change.]
[EDIT NOTE 2: I changed the call to app.exec in a trial run to
while TRUE do
  app.processEvents()
  puts '.'
end
I noticed that once the Soap4r service started running, processEvents never got called again.]
[EDIT NOTE 3: Created an associated question here: Thread lockup in ruby with Soap4r]
I'm attempting to write a Ruby program that receives SOAP commands to draw on a monitor (thus allowing remote monitor access). I've put together a simple test app to prototype the idea. The graphics toolkit is Qt. I'm having what I assume is a problem with locking. I've added calls to test the methods in the server in the code shown. The server side that I'm testing right now is:
require 'rubygems'
require 'Qt4'
require 'thread'
require 'soap/rpc/standaloneserver'

class Box < Qt::Widget
  def initialize(parent = nil)
    super
    setPalette(Qt::Palette.new(Qt::Color.new(250,0,0)))
    setAutoFillBackground(true)
    show
  end
end

class SOAPServer < SOAP::RPC::StandaloneServer
  @@mutex = Mutex.new

  def initialize(*args)
    super
    # Exposed methods
    add_method(self, 'createWindow', 'x', 'y', 'width', 'length')
  end

  def createWindow(x, y, width, length)
    puts 'received call'
    windowID = 0
    puts @boxList.length
    puts @parent
    @@mutex.synchronize do
      puts 'in lock'
      box = Box.new(@parent)
      box.setGeometry(x, y, width, length)
      windowID = @boxList.push(box).length
      print "This:", windowID, "--\n"
    end
    puts 'out lock'
    return windowID
  end

  def postInitialize(parent)
    @parent = parent
    @boxList = Array.new
  end
end

windowSizeX = 400
windowSizeY = 300

app = Qt::Application.new(ARGV)
mainwindow = Qt::MainWindow.new
mainwindow.resize(windowSizeX, windowSizeY)
mainwindow.show

puts 'Attempting server start'
myServer = SOAPServer.new('monitorservice', 'urn:ruby:MonitorService', 'localhost', 4004)
myServer.postInitialize(mainwindow)

Thread.new do
  puts 'Starting?'
  myServer.start
  puts 'Started?'
end

Thread.new do
  myServer.createWindow(10,0,10,10)
  myServer.createWindow(10,30,10,10)
  myServer.createWindow(10,60,10,10)
  myServer.createWindow(10,90,10,10)
end

myServer.createWindow(10,10,10,10)

Thread.new do
  app.exec
end

gets
Now when I run this I get the following output:
Attempting server start
Starting?
received call
0
#<Qt::MainWindow:0x60fea28>
in lock
received call
0
#<Qt::MainWindow:0x60fea28>
This:1--
in lock
This:2--
out lock
At that point I hang rather than receiving the total of five additions I expect. Qt does display the squares defined by "createWindow(10,0,10,10)" and "createWindow(10,10,10,10)". Given that "This:1--" and "This:2--" show up within a nested in/out lock pair, I'm assuming I'm using the mutex horribly wrong. This is my first time with threading in Ruby.