Ruby EventMachine: stop or kill deferred operation - ruby

I was wondering if I could stop execution of an operation that has been deferred.
require 'rubygems'
require 'em-websocket'
EM.run do
  EM::WebSocket.start(:host => '0.0.0.0', :port => 8080) do |ws|
    ws.onmessage do |msg|
      op = proc do
        sleep 5 # Thread safe IO here that is safely killed
        true
      end
      callback = proc do |result|
        puts "Done!"
      end
      EM.defer(op, callback)
    end
  end
end
This is an example WebSocket server. Sometimes when I get a message I want to do some IO; later, another message might come in that needs to read the same thing, and the newer message always takes precedence over the earlier one. So I want to cancel the first op and run the second instead.

Here is my solution. It uses an EM::Queue to track the worker threads, so a preferred message can kill the ones already running.
require 'rubygems'
require 'em-websocket'
require 'json'
EM.run do
  EM::WebSocket.start(:host => '0.0.0.0', :port => 3333) do |ws|
    mutex = Mutex.new # to make thread safe. See https://github.com/eventmachine/eventmachine/blob/master/lib/eventmachine.rb#L981
    queue = EM::Queue.new
    ws.onmessage do |msg|
      message_type = JSON.parse(msg)["type"]
      op = proc do
        mutex.synchronize do
          if message_type == "preferred"
            puts "killing non preferred\n"
            queue.size.times { queue.pop {|thread| thread.kill } }
          end
          queue << Thread.current
        end
        puts "doing the long running process"
        sleep 15 # Thread safe IO here that is safely killed
        true
      end
      callback = proc do |result|
        puts "Finished #{message_type} #{msg}"
      end
      EM.defer(op, callback)
    end
  end
end
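If every new message should cancel whatever is still running (not only the "preferred" ones), roughly the same idea can be written with a single reference to the previous worker thread instead of a queue - a sketch, untested:
require 'rubygems'
require 'em-websocket'

EM.run do
  EM::WebSocket.start(:host => '0.0.0.0', :port => 8080) do |ws|
    mutex   = Mutex.new
    current = nil # thread running the most recent op, if any

    ws.onmessage do |msg|
      op = proc do
        mutex.synchronize do
          current.kill if current && current.alive? # cancel the previous op
          current = Thread.current
        end
        sleep 5 # Thread safe IO here that is safely killed
        true
      end
      callback = proc { |result| puts "Done!" }
      EM.defer(op, callback)
    end
  end
end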

Related

Unable to make socket Accept Non Blocking ruby 2.2

I have been searching the whole day for a non-blocking socket accept. I found non-blocking recv, but that doesn't benefit me in any way. My script first creates a new socket, binds it to 127.0.0.1 on port 6112, and then starts a thread. The thread calls @sock.accept, which blocks. I then tried accept_nonblock, but that throws the following error:
IO::EWOULDBLOCKWaitReadable : A non-blocking socket operation could not be completed immediately. - accept(2) would block
I am using Ruby 2.2.
NOTE: I do not intend to use Rails to solve my problem, or give me a shortcut. I am sticking with pure Ruby 2.2.
Here is my script:
require 'socket'
include Socket::Constants
@sock = Socket.new(AF_INET, SOCK_STREAM, 0)
@sockaddr = Socket.sockaddr_in(6112, '127.0.0.1')
@sock.bind(@sockaddr)
@sock.listen(5)

Thread.new(@sock.accept_nonblock) do |connection|
  @client = Client.new(ip, connection, self)
  @clients.push(@client)
  begin
    while connection
      packet = connection.recv(55555)
      if packet == nil
        DeleteClient(connection)
      else
        @toput = "[RECV]: #{packet}"
        puts @toput
      end
    end
  rescue Exception => e
    if e.class != IOError
      line1 = e.backtrace[0].split(".rb").last
      line = line1.split(":")[1]
      @Log.Error(e, e.class, e.backtrace[0].split(".rb").first + ".rb", line)
      puts "#{ e } (#{ e.class })"
    end
  end
end

def DeleteClient(connection)
  @clients.delete(@client)
  connection.close
end
http://docs.ruby-lang.org/en/2.2.0/Socket.html#method-i-accept_nonblock
accept_nonblock raises an exception when it can't immediately accept a connection. You are expected to rescue this exception and then IO.select the socket.
begin # emulate blocking accept
  client_socket, client_addrinfo = socket.accept_nonblock
rescue IO::WaitReadable, Errno::EINTR
  IO.select([socket])
  retry
end
A patch has recently been accepted which will add an exception: false option to accept_nonblock, which will allow you to use it without using exceptions for flow control. I don't know that it's shipped yet, though.
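For reference, once that option is available (it landed in later Ruby versions as accept_nonblock(exception: false)), the emulated blocking accept above can be written without rescue-based flow control - a sketch, assuming your Ruby supports it:
# accept_nonblock(exception: false) returns :wait_readable instead of raising
result = socket.accept_nonblock(exception: false)
while result == :wait_readable
  IO.select([socket]) # wait until the listening socket has a pending connection
  result = socket.accept_nonblock(exception: false)
end
client_socket, client_addrinfo = result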
I'm going out on a limb here and posting a large chunk of code.
I hope it will answer both your question and any related questions others reading this answer might raise.
I'm sorry if I went overboard, I just thought it was almost all relevant.
Issues like looping through an event stack, using IO.select to push events in a non-blocking manner, and other performance concerns are all related (in my opinion) to the non-blocking approach to socket programming.
So I'm posting a Ruby module which acts as a server with a reactor, using a limited number of threads rather than thousands of threads, one per connection (12 threads will give you better performance than a hundred). The reactor uses IO.select with a timeout once all its active events are handled.
The module can set up multiple listening sockets which use #accept_nonblock, and they all currently act as an echo server.
It's basically the same code I used for the Plezi framework's core... with some functionality stripped out.
The following is a thread pool with 12 working threads plus the main thread (which will sleep and wait for the "TERM" signal)...
...And it's an example of accept_nonblock with exception handling and a thread pool.
It's a simple socket echo server; test it as a remote client using telnet:
> telnet localhost 3000
Hi!
# => Hi!
bye
#=> will disconnect
Here's the code - good luck!
require 'socket'

module SmallServer
  module_function

  ####
  # Replace this method with your actual server logic.
  #
  # This code will be called when a socket receives data.
  #
  # For now, we will just echo.
  def got_data io, io_params
    begin
      got = io.recv_nonblock( 1048576 ) # with maximum number of bytes to read at a time...
      puts "echoing: #{got}"
      if got.match /^(exit|bye|q)\R/
        puts 'closing connection.'
        io.puts "bye bye!"
        remove_connection io
      else
        io.puts "echoing: #{got}"
      end
    rescue => e
      # should also log error
      remove_connection io
    end
  end

  #########
  # main loop and activation code
  #
  # This will create a thread pool and set them running.
  def start
    # prepare threads
    exit_flag = false
    max_threads = 12
    threads = []
    thread_cycle = Proc.new do
      io_review rescue false
      true while fire_event
    end
    (max_threads).times { Thread.new { thread_cycle.call until exit_flag } }

    # set signal traps
    trap('INT'){ exit_flag = true; raise "close!" }
    trap('TERM'){ exit_flag = true; raise "close!" }
    puts "Services running. Press ^C to stop"

    # sleep until trap raises exception (cycling might cause the main thread to lose signals that might be caught inside rescue clauses)
    (sleep unless SERVICES.empty?) rescue true
    # start shutdown.
    exit_flag = true
    # set fallback traps
    trap('INT'){ puts 'Forced exit.'; Kernel.exit }
    trap('TERM'){ puts 'Forced exit.'; Kernel.exit }
    puts 'Started shutdown process. Press ^C to force quit.'
    # shut down listening sockets
    stop_services
    # disconnect active connections
    stop_connections
    # cycle down threads
    puts "Waiting for workers to cycle down"
    threads.each {|t| t.join if t.alive?}
    # rundown any active events
    thread_cycle.call
  end

  #######################
  ## Events (Callbacks) / Multi-tasking Platform

  EVENTS = []
  E_LOCKER = Mutex.new

  # returns true if there are any unhandled events
  def events?
    E_LOCKER.synchronize {!EVENTS.empty?}
  end

  # pushes an event to the event's stack
  # if a block is passed along, it will be used as a callback: the block will be called with the values returned by the handler's `call` method.
  def push_event handler, *args, &block
    if block
      E_LOCKER.synchronize {EVENTS << [(Proc.new {|a| push_event block, handler.call(*a)} ), args]}
    else
      E_LOCKER.synchronize {EVENTS << [handler, args]}
    end
  end

  # Runs the block asynchronously by pushing it as an event to the event's stack
  #
  def run_async *args, &block
    E_LOCKER.synchronize {EVENTS << [ block, args ]} if block
    !block.nil?
  end

  # creates an asynchronous call to a method, with an optional callback (shortcut)
  def callback object, method, *args, &block
    push_event object.method(method), *args, &block
  end

  # event handling FIFO
  def fire_event
    event = E_LOCKER.synchronize {EVENTS.shift}
    return false unless event
    begin
      event[0].call(*event[1])
    rescue OpenSSL::SSL::SSLError => e
      puts "SSL Bump - SSL Certificate refused?"
    rescue Exception => e
      raise if e.is_a?(SignalException) || e.is_a?(SystemExit)
      error e
    end
    true
  end

  #####
  # Reactor
  #
  # IO review code will review the connections and sockets
  # it will accept new connections and react to socket input

  IO_LOCKER = Mutex.new

  def io_review
    IO_LOCKER.synchronize do
      return false unless EVENTS.empty?
      united = SERVICES.keys + IO_CONNECTION_DIC.keys
      return false if united.empty?
      io_r = IO.select(united, nil, united, 0.1)
      if io_r
        io_r[0].each do |io|
          if SERVICES[io]
            begin
              callback self, :add_connection, io.accept_nonblock, SERVICES[io]
            rescue Errno::EWOULDBLOCK => e
            rescue => e
              # log
            end
          elsif IO_CONNECTION_DIC[io]
            callback(self, :got_data, io, IO_CONNECTION_DIC[io] )
          else
            puts "what?!"
            remove_connection(io)
            SERVICES.delete(io)
          end
        end
        io_r[2].each { |io| (remove_connection(io) || SERVICES.delete(io)).close rescue true }
      end
    end
    callback self, :clear_connections
    true
  end

  #######################
  # IO - listening sockets (services)

  SERVICES = {}
  S_LOCKER = Mutex.new

  def add_service port = 3000, parameters = {}
    parameters[:port] ||= port
    parameters.update port if port.is_a?(Hash)
    service = TCPServer.new(parameters[:port])
    S_LOCKER.synchronize {SERVICES[service] = parameters}
    callback Kernel, :puts, "Started listening on port #{port}."
    true
  end

  def stop_services
    puts 'Stopping services'
    S_LOCKER.synchronize {SERVICES.each {|s, p| (s.close rescue true); puts "Stopped listening on port #{p[:port]}"}; SERVICES.clear }
  end

  #####################
  # IO - Active connections handling

  IO_CONNECTION_DIC = {}
  C_LOCKER = Mutex.new

  def stop_connections
    C_LOCKER.synchronize {IO_CONNECTION_DIC.each {|io, params| io.close rescue true} ; IO_CONNECTION_DIC.clear}
  end

  def add_connection io, more_data
    C_LOCKER.synchronize {IO_CONNECTION_DIC[io] = more_data} if io
  end

  def remove_connection io
    C_LOCKER.synchronize { IO_CONNECTION_DIC.delete io; io.close rescue true }
  end

  # clears closed connections from the stack
  def clear_connections
    C_LOCKER.synchronize { IO_CONNECTION_DIC.delete_if {|c| c.closed? } }
  end
end
start the echo server in irb with:
SmallServer.add_service(3000) ; SmallServer.start
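If telnet isn't handy, a small Ruby client exercises the echo server the same way (a sketch):
require 'socket'

s = TCPSocket.new('localhost', 3000)
s.puts 'Hi!'
puts s.gets   # => "echoing: Hi!"
s.puts 'bye'
puts s.gets   # => "bye bye!"
s.close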

EM::WebSocket.run as well as INotify file-watching

This is the code I currently have:
#!/usr/bin/ruby
require 'em-websocket'
$cs = []
EM.run do
  EM::WebSocket.run(:host => "::", :port => 8085) do |ws|
    ws.onopen do |handshake|
      $cs << ws
    end
    ws.onclose do
      $cs.delete ws
    end
  end
end
I would like to watch a file with rb-inotify and send a message to all connected clients ($cs.each {|c| c.send "File changed"}) when a file changes. The problem is, I do not understand EventMachine, and I can't seem to find a good tutorial.
So if anyone could explain to me where to put the rb-inotify-related code, I would really appreciate it.
Of course! As soon as I post the question, I figure it out!
#!/usr/bin/ruby
require 'em-websocket'
$cs = []
module Handler
  def file_modified
    $cs.each {|c| c.send "File was modified!" }
  end
end

EM.run do
  EM.watch_file("/tmp/foo", Handler)
  EM::WebSocket.run(:host => "::", :port => 8085) do |ws|
    ws.onopen do |handshake|
      $cs << ws
    end
    ws.onclose do
      $cs.delete ws
    end
  end
end
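If you specifically want rb-inotify rather than EventMachine's built-in file watching, the notifier exposes an IO that can be handed to the reactor with EM.watch - a sketch along those lines, assuming the rb-inotify gem:
#!/usr/bin/ruby
require 'em-websocket'
require 'rb-inotify'

$cs = []

# Bridges rb-inotify's file descriptor into the EventMachine reactor.
class NotifierWatcher < EM::Connection
  def initialize(notifier)
    @notifier = notifier
  end

  def notify_readable
    @notifier.process # runs the watch callbacks registered below
  end
end

EM.run do
  notifier = INotify::Notifier.new
  notifier.watch("/tmp/foo", :modify) do
    $cs.each { |c| c.send "File was modified!" }
  end

  # hand rb-inotify's IO to the reactor instead of blocking on notifier.run
  EM.watch(notifier.to_io, NotifierWatcher, notifier) { |c| c.notify_readable = true }

  EM::WebSocket.run(:host => "::", :port => 8085) do |ws|
    ws.onopen  { |handshake| $cs << ws }
    ws.onclose { $cs.delete ws }
  end
end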

blocking queue implementation in ruby

In Java, there is a class called ArrayBlockingQueue as part of its concurrent package. It is a thread-safe class where you can add and remove items from the queue without worrying about thread safety. This class has a put method which allows you to put items in the queue, and a take method which removes items from the queue. Two great things about put and take are that there is no need for a synchronized keyword to guard against thread interleaving, and that take patiently waits until something is added to the queue rather than throwing an exception if nothing is in it.
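For reference, Ruby's standard library Queue already gives both guarantees: push (<<) is thread-safe without explicit synchronization, and pop blocks until an item is available, much like put and take. A minimal sketch:
require 'thread' # Queue (Thread::Queue on newer Rubies)

queue = Queue.new

consumer = Thread.new do
  item = queue.pop            # blocks, like ArrayBlockingQueue#take
  puts "took: #{item}"
end

sleep 1                       # consumer is now parked inside pop
queue << 'hello'              # thread-safe, like ArrayBlockingQueue#put
consumer.join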
I tried to implement something similar in Ruby, but the issue is that queue.pop seems to block even when items are added to the queue (at least for one of the queues), as shown below:
require 'redis'
require 'date'
def log_debug(str)
  debug_str = "#{DateTime.now} #{str}"
  puts debug_str
end

class EmailsmsResponder
  def initialize
    @queue = Queue.new
  end

  # add to queue
  def produce(channel, msg)
    @queue << {channel: channel, msg: msg}
    puts "queue size: #{@queue.size}"
  end

  # take from queue
  def consume
    loop do
      log_debug "Whats going on??"
      sleep(1)
      if !@queue.empty?
        item = @queue.pop
        log_debug "removing channel #{item[:channel]} and msg #{item[:msg]} from email-sms thread from queue"
      end
    end
  end
end

class SidekiqResponder
  def initialize
    @queue = Queue.new
  end

  def produce(channel, msg)
    @queue << {channel: channel, msg: msg}
    puts "queue size: #{@queue.size}"
  end

  def consume
    loop do
      log_debug "Whats going on??"
      sleep(1)
      if !@queue.empty?
        item = @queue.pop
        log_debug "removing channel #{item[:channel]} and msg #{item[:msg]} from sidekiq thread from queue"
      end
    end
  end
end

class RedisResponder
  def initialize(host, port)
    @host = host
    @port = port
    @email_sms = EmailsmsResponder.new
    @sidekiq = SidekiqResponder.new
    # timeout so we wait for messages forever
    @redis = Redis.new(:host => @host, :port => @port, :timeout => 0)
  end

  def start_producers
    thread = Thread.new do
      @redis.subscribe('juggernaut') do |on|
        # message block fired for new messages
        on.message do |channel, msg|
          log_debug "New message"
          @email_sms.produce(channel, msg)
          @sidekiq.produce(channel, msg)
        end
      end
    end
  end

  def start_consumers
    thread = Thread.new do
      @email_sms.consume
      @sidekiq.consume
    end
  end
end
responder = RedisResponder.new('127.0.0.1', 6379)
responder.start_producers.join(responder.start_consumers.join)
While one queue seems to be working properly, the other queue never retrieves anything:
$ ruby redis-client4.rb
2014-07-22T14:53:24-04:00 Whats going on??
2014-07-22T14:53:25-04:00 Whats going on??
2014-07-22T14:53:25-04:00 New message
queue size: 1
queue size: 1
2014-07-22T14:53:26-04:00 removing channel juggernaut and msg {"channels":["/reports/6561/new"],"data":"New reports for unit 6561"} from email-sms thread from queue
2014-07-22T14:53:26-04:00 Whats going on??
2014-07-22T14:53:27-04:00 Whats going on??
2014-07-22T14:53:28-04:00 Whats going on??
2014-07-22T14:53:28-04:00 New message
queue size: 1
queue size: 2
2014-07-22T14:53:29-04:00 removing channel juggernaut and msg {"channels":["/reports/6561/new"],"data":"New reports for unit 6561"} from email-sms thread from queue
2014-07-22T14:53:29-04:00 Whats going on??
2014-07-22T14:53:30-04:00 Whats going on??
2014-07-22T14:53:31-04:00 Whats going on??
2014-07-22T14:53:31-04:00 New message
queue size: 1
queue size: 3
2014-07-22T14:53:32-04:00 removing channel juggernaut and msg {"channels":["/reports/6561/new"],"data":"New reports for unit 6561"} from email-sms thread from queue
2014-07-22T14:53:32-04:00 Whats going on??
2014-07-22T14:53:33-04:00 Whats going on??
2014-07-22T14:53:34-04:00 Whats going on??
2014-07-22T14:53:34-04:00 New message
queue size: 1
queue size: 4
2014-07-22T14:53:35-04:00 removing channel juggernaut and msg {"channels":["/reports/6561/new"],"data":"New reports for unit 6561"} from email-sms thread from queue
2014-07-22T14:53:35-04:00 Whats going on??
2014-07-22T14:53:36-04:00 Whats going on??
2014-07-22T14:53:37-04:00 Whats going on??
2014-07-22T14:53:37-04:00 New message
queue size: 1
queue size: 5
What might I be doing wrong?
I got it working with the code below. (The original start_consumers ran both consume loops on a single thread, and since consume never returns, the second queue was never drained.) I didn't like the fact that I had to use 4 threads just to get it working, so if anyone has a nicer solution I would be glad to recommend it. But this seems to be working for now:
require 'redis'
require 'date'
def log_debug(str)
  debug_str = "#{DateTime.now} #{str}"
  puts debug_str
end

class EmailsmsResponder
  def initialize
    @queue = Queue.new
  end

  # add to queue
  def produce(channel, msg)
    @queue << {channel: channel, msg: msg}
    puts "queue size: #{@queue.size}"
  end

  # take from queue
  def consume
    loop do
      log_debug "Whats going on??"
      sleep(1)
      if !@queue.empty?
        item = @queue.pop
        log_debug "removing channel #{item[:channel]} and msg #{item[:msg]} from email-sms thread from queue"
      end
    end
  end
end

class SidekiqResponder
  def initialize
    @queue = Queue.new
  end

  def produce(channel, msg)
    @queue << {channel: channel, msg: msg}
    puts "queue size: #{@queue.size}"
  end

  def consume
    loop do
      log_debug "Whats going on??"
      sleep(1)
      if !@queue.empty?
        item = @queue.pop
        log_debug "removing channel #{item[:channel]} and msg #{item[:msg]} from sidekiq thread from queue"
      end
    end
  end
end

class RedisResponder
  def initialize(host, port)
    @host = host
    @port = port
    @email_sms = EmailsmsResponder.new
    @sidekiq = SidekiqResponder.new
    # timeout so we wait for messages forever
    @redis = Redis.new(:host => @host, :port => @port, :timeout => 0)
  end

  def start_producers
    thread = Thread.new do
      @redis.subscribe('juggernaut') do |on|
        # message block fired for new messages
        on.message do |channel, msg|
          log_debug "New message"
          @email_sms.produce(channel, msg)
          @sidekiq.produce(channel, msg)
        end
      end
    end
  end

  def start_consumers
    thread = Thread.new do
      t1 = Thread.new { @email_sms.consume }
      t2 = Thread.new { @sidekiq.consume }
      t1.join(t2.join)
    end
  end
end
responder = RedisResponder.new('127.0.0.1', 6379)
responder.start_producers.join(responder.start_consumers.join)
I tried to implement something similar in Ruby, but the issue is that queue.pop seems to block even when items are added to the queue
That's easy to disprove:
require 'thread'
q = Queue.new
q << 'hello'
x = q.pop
puts x
x = q.pop
--output:--
hello
deadlock detected (fatal)
What might I be doing wrong?
Start deleting code and simplifying things to pinpoint where the problem occurs. The fact that you have two classes that are exactly the same means you haven't even begun to simplify.
Then there is this:
def consume
  loop do
    log_debug "Whats going on??"
    sleep(1)
    value = queue.pop
    log_debug "removing channel: #{channel} msg: #{msg} of sidekiq thread from queue"
  end
end
***Error in `consume': undefined local variable or method `queue'
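Since pop already blocks, the sleep plus empty? polling in consume isn't needed at all; a simplified consumer could be just (a sketch using the question's @queue):
# take from queue -- pop blocks until produce pushes something
def consume
  loop do
    item = @queue.pop
    log_debug "removing channel #{item[:channel]} and msg #{item[:msg]} from queue"
  end
end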

Ctrl+C not killing Sinatra + EM::WebSocket servers

I'm building a Ruby app that runs both an EM::WebSocket server and a Sinatra server. Individually, I believe both of these are equipped to handle a SIGINT. However, when running both in the same app, the app keeps running when I press Ctrl+C. My assumption is that one of them is capturing the SIGINT and preventing the other from seeing it. I'm not sure how to go about fixing it, though.
Here's the code in a nutshell:
require 'thin'
require 'sinatra/base'
require 'em-websocket'
EventMachine.run do
  class Web::Server < Sinatra::Base
    get('/') { erb :index }
    run!(port: 3000)
  end

  EM::WebSocket.start(port: 3001) do |ws|
    # connect/disconnect handlers
  end
end
I had the same issue. The key for me seemed to be to start Thin in the reactor loop with signals: false:
Thin::Server.start(
  App, '0.0.0.0', 3000,
  signals: false
)
This is the complete code for a simple chat server:
require 'thin'
require 'sinatra/base'
require 'em-websocket'

class App < Sinatra::Base
  # threaded - False: Will take requests on the reactor thread
  #            True:  Will queue request for background thread
  configure do
    set :threaded, false
  end

  get '/' do
    erb :index
  end
end

EventMachine.run do
  # hit Control + C to stop
  Signal.trap("INT") {
    puts "Shutting down"
    EventMachine.stop
  }
  Signal.trap("TERM") {
    puts "Shutting down"
    EventMachine.stop
  }

  @clients = []

  EM::WebSocket.start(:host => '0.0.0.0', :port => '3001') do |ws|
    ws.onopen do |handshake|
      @clients << ws
      ws.send "Connected to #{handshake.path}."
    end

    ws.onclose do
      ws.send "Closed."
      @clients.delete ws
    end

    ws.onmessage do |msg|
      puts "Received message: #{msg}"
      @clients.each do |socket|
        socket.send msg
      end
    end
  end

  Thin::Server.start(
    App, '0.0.0.0', 3000,
    signals: false
  )
end
I downgraded thin to version 1.5.1 and it just works. Weird.

EventMachine with em-synchrony I need to correctly throttle my http requests

I have a consumer which pulls messages off a queue via an evented subscription. It takes those messages and then connects to a rather slow HTTP interface. I have a worker pool of 8, and once those are all filled up I need to stop pulling requests from the queue while the fibers that are working on the HTTP jobs keep working. Here is an example I've thrown together.
def send_request(callback)
  EM.synchrony do
    while $available <= 0
      sleep 2
      puts "sleeping"
    end
    url = 'http://example.com/api/Restaurant/11111/images/?image%5Bremote_url%5D=https%3A%2F%2Firs2.4sqi.net%2Fimg%2Fgeneral%2Foriginal%2F8NMM4yhwsLfxF-wgW0GA8IJRJO8pY4qbmCXuOPEsUTU.jpg&image%5Bsource_type_enum%5D=3'
    result = EM::Synchrony.sync EventMachine::HttpRequest.new(url, :inactivity_timeout => 0).send("apost", :head => {:Accept => 'services.v1'})
    callback.call(result.response)
  end
end

def display(value)
  $available += 1
  puts value.inspect
end

$available = 8

EM.run do
  EM.add_periodic_timer(0.001) do
    $available -= 1
    puts "Available: #{$available}"
    puts "Tick ..."
    puts send_request(method(:display))
  end
end
I have found that if I call sleep within a while loop in the synchrony block, the reactor loop gets stuck. If I call sleep within an if statement (sleeping just once), that usually gives the requests enough time to finish, but it is unreliable at best. If I use EM::Synchrony.sleep, then the main reactor loop will keep creating new requests.
Is there a way to pause the main loop but have the fibers finish their execution?
sleep 2
...
add_periodic_timer(0.001)
Are you serious?
Have you ever thought about how many send_requests are sleeping in the loop? And it's adding 1,000 more every second.
What about this:
require 'eventmachine'
require 'em-http'
require 'fiber'

class Worker
  URL = 'http://example.com/api/whatever'

  def initialize callback
    @callback = callback
  end

  def work
    f = Fiber.current
    loop do
      http = EventMachine::HttpRequest.new(URL).get :timeout => 20
      http.callback do
        @callback.call http.response
        f.resume
      end
      http.errback do
        f.resume
      end
      Fiber.yield
    end
  end
end

def display(value)
  puts "Done: #{value.size}"
end

EventMachine.run do
  8.times do
    Fiber.new do
      Worker.new(method(:display)).work
    end.resume
  end
end
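If the work can be expressed as a collection rather than a live queue subscription, em-synchrony's FiberIterator gives the same 8-at-a-time throttling with less plumbing - a sketch, where the URL array is only a stand-in for your message source:
require 'em-synchrony'
require 'em-synchrony/em-http'

# stand-in for whatever feeds your workers
urls = Array.new(100) { 'http://example.com/api/whatever' }

EM.synchrony do
  # at most 8 requests in flight; each block body runs in its own fiber
  EM::Synchrony::FiberIterator.new(urls, 8).each do |url|
    result = EventMachine::HttpRequest.new(url).get
    puts "Done: #{result.response.size}"
  end
  EventMachine.stop
end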
