How to disconnect redis client in websocket eventmachine - ruby

I'm trying to build a websocket server where each client establishes its own Redis connections, used for publish and subscribe.
When the Redis server is running I can see the two new connections being established as a client connects to the websocket server, and I can also publish data to the client. But when the client drops the connection to the websocket server I also want to disconnect from Redis. How can I do this?
Maybe I'm doing it wrong, but this is my code.
# require 'redis'
require 'em-websocket'
require 'em-hiredis'
require 'json'

CLIENTS = Hash.new

class PubSub
  def initialize(client)
    @socket = client.ws
    # These clients can only be used for pub/sub commands
    @publisher = EM::Hiredis.connect  # Later I would like to disconnect this
    @subscriber = EM::Hiredis.connect # Later I would like to disconnect this
    client.connections << @publisher << @subscriber
  end

  def subscribe(channel)
    @channel = channel
    @subscriber.subscribe(channel)
    @subscriber.on(:message) { |chan, message|
      @socket.send message
    }
  end

  def publish(channel, msg)
    @publisher.publish(channel, msg).errback { |e|
      puts [:publisherror, e]
    }
  end

  def unsubscribe()
    @subscriber.unsubscribe(@channel)
  end
end

class Client
  attr_accessor :connections, :ws

  def initialize(ws)
    @connections = []
    @ws = ws
  end
end

EventMachine.run do
  # Creates a websocket listener
  EventMachine::WebSocket.start(:host => '0.0.0.0', :port => 8081) do |ws|
    ws.onopen do
      # I instantiated above
      puts 'Client connected. Creating socket'
      @client = Client.new(ws)
      CLIENTS[ws] = @client
    end

    ws.onclose do
      # Upon the close of the connection I remove it from my list of running sockets
      puts 'Client disconnected. Closing socket'
      @client.connections.each do |con|
        # do something to disconnect from redis
      end
      CLIENTS.delete ws
    end

    ws.onmessage { |msg|
      puts "Received message: #{msg}"
      result = JSON.parse(msg)
      if result.has_key? 'channel'
        ps = PubSub.new(@client)
        ps.subscribe(result['channel'])
      elsif result.has_key? 'publish'
        ps = PubSub.new(ws)
        ps.publish(result['publish']['channel'], result['publish']['msg'])
      end
    }
  end
end

This version of em-hiredis supports closing connections: https://github.com/whatupdave/em-hiredis
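A minimal sketch of wiring that into the onclose handler from the question above, assuming the em-hiredis client in use exposes a close_connection method (the exact method name depends on the fork/version, so check it before relying on this):

ws.onclose do
  puts 'Client disconnected. Closing socket'
  client = CLIENTS[ws]
  if client
    client.connections.each do |con|
      # Assumption: the em-hiredis client responds to close_connection;
      # verify the method name against your em-hiredis fork/version.
      con.close_connection if con.respond_to?(:close_connection)
    end
  end
  CLIENTS.delete ws
end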

Here is how I would do this (and have done it many times):
Instead of always opening and closing connections for each client, you can keep one connection open per Thread/Fiber, depending on what you are basing your concurrency on. That way, if you are using a pool of Threads/Fibers, once each of them has its connections it will keep them and reuse them.
I have not worked much with websockets so far (I was waiting for a standard implementation), but I am sure you can apply the same thinking to them too.
You can also do what rails/activerecord does: keep a pool of Redis connections; each time you need a connection you request one, use it, and release it. It could look like this:
def handle_request(request)
  @redis_pool.get_connection do |c|
    # [...]
  end
end
Before the block is yielded, a connection is taken from the available ones, and after the block returns the connection is marked as free again.
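A minimal sketch of such a pool, assuming the plain redis gem and a hypothetical RedisPool class (the class and the get_connection name mirror the pseudocode above and are not part of any gem; a constant is used instead of @redis_pool so it works in a top-level script):

require 'redis'

# Hypothetical block-based pool; names are illustrative.
class RedisPool
  def initialize(size = 5, &connector)
    @connector = connector || proc { Redis.new }
    @available = Array.new(size) { @connector.call }
    @lock = Mutex.new
  end

  # Check a connection out, yield it, and always return it to the pool.
  def get_connection
    conn = @lock.synchronize { @available.pop } || @connector.call
    yield conn
  ensure
    @lock.synchronize { @available.push(conn) } if conn
  end
end

REDIS_POOL = RedisPool.new(5) { Redis.new(host: 'localhost', port: 6379) }

def handle_request(request)
  REDIS_POOL.get_connection do |c|
    c.publish('feed', request.to_s)
  end
end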

This was added to em-hiredis: https://github.com/mloughran/em-hiredis/pull/6

Related

Faye Websocket Ruby not working as expected

I am trying to use haproxy to load balance my websocket rack application.
I publish a message on the 'rates' channel using redis-cli, and this line succeeds: puts "sent" if ws.send(msg).
The client does receive the 'Welcome! from server' message, so I know the initial handshake is done.
But the client never receives the message published on the 'rates' channel.
web_socket.rb
require 'faye/websocket'

module WebSocket
  class App
    KEEPALIVE_TIME = 15 # in seconds

    def initialize(app)
      @app = app
      @mutex = Mutex.new
      @clients = []
      # @redis = Redis.new(host: 'rds', port: 6739)
      Thread.new do
        @redis_sub = Redis.new(host: 'rds', port: 6379)
        @redis_sub.subscribe('rates') do |on|
          on.message do |channel, msg|
            p [msg, @clients.length]
            @mutex.synchronize do
              @clients.each do |ws|
                # ws.ping 'Mic check, one, two' do
                p ws
                puts "sent" if ws.send(msg)
                # end
              end
            end
          end
        end
      end
    end

    def call(env)
      if Faye::WebSocket.websocket?(env)
        # WebSockets logic goes here
        ws = Faye::WebSocket.new(env, nil) # {ping: KEEPALIVE_TIME }
        ws.on :open do |event|
          p [:open, ENV['APPID'], ws.object_id]
          ws.ping 'Mic check, one, two' do
            # fires when pong is received
            puts "Welcome sent" if ws.send('Welcome! from server')
            @mutex.synchronize do
              @clients << ws
            end
            p [@clients.length, ' Client Connected']
          end
        end

        ws.on :close do |event|
          p [:close, ENV['APPID'], ws.object_id, event.code, event.reason]
          @mutex.synchronize do
            @clients.delete(ws)
          end
          p @clients.length
          ws = nil
        end

        ws.on :message do |event|
          p [:message, event.data]
          # @clients.each { |client| client.send(event.data) }
        end

        # Return async Rack response
        ws.rack_response
      else
        @app.call(env)
      end
    end
  end
end
My haproxy.cfg
frontend http
    bind *:8080
    mode http
    timeout client 1000s
    use_backend all

backend all
    mode http
    timeout server 1000s
    timeout tunnel 1000s
    timeout connect 1000s
    server s1 app1:8080
    server s2 app2:8080
    server s3 app3:8080
    server s4 app4:8080
(Screenshot of Chrome dev tools omitted.)
Please help me!!!
EDIT:
I have tried the fix from "Thread running in Middleware is using old version of parent's instance variable" but it does not work.
As mentioned earlier, the line below succeeds:
puts "sent" if ws.send(msg)
Okay, after a lot of searching and testing, I found that the issue was not setting a ping during websocket initialization in the server.
Change this
ws = Faye::WebSocket.new(env, nil) # {ping: KEEPALIVE_TIME }
to
ws = Faye::WebSocket.new(env, nil, {ping: KEEPALIVE_TIME })
My KEEPALIVE_TIME is 0.5 because I am building a stock application where rates change very quickly. You can set it to whatever suits your needs.

web server in ruby and connection keep-alive

Web server example:
require 'rubygems'
require 'socket'
require 'thread'

class WebServer
  LINE_TERMINATOR = "\r\n".freeze

  def initialize(host, port)
    @server = TCPServer.new(host, port)
  end

  def run
    response_body = 'Hello World!'.freeze
    response_headers = "HTTP/1.1 200 OK#{LINE_TERMINATOR}Connection: Keep-Alive#{LINE_TERMINATOR}Content-Length: #{response_body.bytesize}#{LINE_TERMINATOR}".freeze
    loop do
      Thread.new(@server.accept) do |socket|
        puts "request #{socket}"
        sleep 3
        socket.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)
        socket.write(response_headers)
        socket.write(LINE_TERMINATOR)
        socket.write(response_body)
        # socket.close # if this line is uncommented then it works.
      end
    end
  end
end

WebServer.new('localhost', 8888).run
If I refresh the browser without waiting for the end of the cycle, then the following requests are not processed.
How can I handle incoming requests on persistent sockets?
You need to:
1. Keep around the sockets you get from the @server.accept call. Store them in an array (socket_array).
2. Use the IO.select call on the array of sockets to get the set of sockets that can be read:
ready = IO.select(socket_array)
readable = ready[0]
readable.each do |socket|
  # Read from socket here
  # Do the rest of the processing here
end
3. Don't close the socket after you have sent the data.
If you need more details leave a comment - I can write more of the code.
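A minimal sketch of that approach, reusing the response from the example above (request parsing is omitted; readpartial is only used to detect incoming data or disconnects):

require 'socket'

LINE_TERMINATOR = "\r\n".freeze
response_body = 'Hello World!'
response_headers = "HTTP/1.1 200 OK#{LINE_TERMINATOR}Connection: Keep-Alive#{LINE_TERMINATOR}Content-Length: #{response_body.bytesize}#{LINE_TERMINATOR}"

server = TCPServer.new('localhost', 8888)
socket_array = []

loop do
  # Wait until the listener or one of the kept-alive sockets is readable.
  readable, = IO.select(socket_array + [server])
  readable.each do |io|
    if io == server
      socket_array << server.accept    # new client; keep its socket around
    else
      begin
        io.readpartial(4096)           # read the request (parsing omitted)
        io.write(response_headers)
        io.write(LINE_TERMINATOR)
        io.write(response_body)        # note: the socket is NOT closed
      rescue EOFError, Errno::ECONNRESET
        socket_array.delete(io)        # client disconnected
        io.close
      end
    end
  end
end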

is Ruby em-websocket blocking?

I'm writing a ruby program that has 2 threads. One listens on an incoming UDP connection and the other broadcasts on a websocket from which browsers on the client side read. I'm using the em-websocket gem. However, my UDP listener thread never gets called and it looks like the code stays within the websocket initialization code. I'm guessing it is because em-websocket is blocking, but I haven't been able to find any info online that suggests that. Is it an error on my side? I'm kinda new to ruby so I'm not able to figure out what I'm doing wrong.
require 'json'
require 'em-websocket'
require 'socket'

socket = nil
text = "default"
$x = 0

EventMachine.run do
  EventMachine::WebSocket.start(:host => "0.0.0.0", :port => 8080) do |ws|
    ws.onopen {
      ws.send "Hello Client!"
      socket = ws
      $x = 1
    }
    ws.onmessage { |msg| socket.send "Pong: #{msg}" }
    ws.onclose { puts "WebSocket closed" }
  end
end

def listen()
  puts "listening..."
  s = UDPSocket.new
  s.bind(nil, 3000)
  while 1 < 2 do
    text, sender = s.recvfrom(1024)
    puts text
    if $x == 1 then
      socket.send text
    end
  end
end

t2 = Thread.new { listen() }
t2.join
em-websocket is non-blocking; however, UDPSocket#recvfrom is. It might be better to just use EventMachine's open_datagram_socket instead.
Another thing to note: you should not expose socket as a "global" variable. Every time somebody connects, the reference to the previously connected client will be lost. Maybe make some sort of repository for socket connections, or use an observer pattern to broadcast messages when something comes in. What I would do is have a dummy object act as an observer, and whenever a socket is connected/disconnected you register/unregister it with the observer:
require 'observer'

class Dummy
  include Observable

  def receive_data(data)
    changed(true)
    notify_observers(data)
  end
end

# ... later on ...
$broadcaster = Dummy.new

class UDPHandler < EventMachine::Connection
  def receive_data(data)
    $broadcaster.receive_data(data)
  end
end

EventMachine.run do
  EM.open_datagram_socket "0.0.0.0", 3000, UDPHandler

  EM::WebSocket.start :host => "0.0.0.0", :port => 8080 do |ws|
    ws.onopen do
      # Observers are notified via :update by default; pass :send so the
      # websocket's send method is called with the incoming data instead.
      $broadcaster.add_observer ws, :send
    end
    ws.onclose do
      $broadcaster.delete_observer ws
    end
    # ...
  end
end
The whole point of EventMachine is to abstract away from the basic socket and threading structure, and handle all the asynchronous bits internally. It's best not to mix the classical libraries like UDPSocket or Thread with EventMachine stuff.

how do I close a redis connection using the sinatra streaming api?

I have the following sinatra app:
require 'sinatra'
require 'redis'
require 'json'

class FeedStream < Sinatra::Application
  helpers do
    include SessionsHelper

    def redis
      @redis ||= Redis.connect
    end
  end

  get '/feed', provides: 'text/event-stream' do
    stream do |out|
      redis.subscribe "feed" do |on|
        on.message do |channel, message|
          event_data = JSON.parse message
          logger.info "received event = #{event_data}"
          out << "event: #{event_data['event']}\n"
          out << "data: #{{:data => event_data['data'],
                           :by => current_user}.to_json}\n\n"
        end
      end
    end
  end
end
Basically, it receives events published by other users to a feed using Redis pub/sub, and then it sends those events with the Sinatra streaming API.
The problem is that, when the browser reconnects to the feed, the Redis client stays connected and keeps receiving events, so the Redis server fills up with useless connections.
How can I close all these connections once the browser closes its connection to the web server?
I know it's been a while.
Were you looking for quit?
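For reference, quit on the redis gem sends the QUIT command and closes the underlying socket; a minimal example outside the pub/sub context of the question:

require 'redis'

redis = Redis.new              # defaults to localhost:6379
redis.set('greeting', 'hello')
puts redis.get('greeting')
redis.quit                     # sends QUIT and closes the connection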
After much research and experimentation, here's the code I'm using with sinatra + sinatra sse gem (which should be easily adapted to Rails 4):
class EventServer < Sinatra::Base
  include Sinatra::SSE
  set :connections, []
  .
  .
  .
  get '/channel/:channel' do
    .
    .
    .
    sse_stream do |out|
      settings.connections << out
      out.callback {
        puts 'Client disconnected from sse'
        settings.connections.delete(out)
      }
      redis.subscribe(channel) do |on|
        on.subscribe do |channel, subscriptions|
          puts "Subscribed to redis ##{channel}\n"
        end
        on.message do |channel, message|
          puts "Message from redis ##{channel}: #{message}\n"
          message = JSON.parse(message)
          .
          .
          .
          if settings.connections.include?(out)
            out.push(message)
          else
            puts 'closing orphaned redis connection'
            redis.unsubscribe
          end
        end
      end
    end
  end
end
The redis connection blocks in on.message and only accepts (p)subscribe/(p)unsubscribe commands. Once you unsubscribe, the redis connection is no longer blocked and can be released by the web server object which was instantiated by the initial sse request. The cleanup happens automatically when a message arrives on redis and the sse connection to the browser no longer exists in the connections array.

Sharing DB connections across objects using class methods in ruby?

I am writing a ruby script to be used for Postfix SMTP access policy delegation. The script needs to access a Tokyo Tyrant database. I am using EventMachine to take care of network connections. EventMachine needs an EventMachine::Connection class that is instantiated by EventMachine's processing loop whenever a new connection is created, so an instance is created and destroyed for each connection.
I am creating a connection to Tokyo Tyrant from the post_init of the EventMachine::Connection (i.e. right after the connection is set up) and tearing it down after the connection is terminated.
My question is whether this is the proper way to connect to the DB, i.e. making a connection every time I need it and tearing it down when I am finished? Wouldn't it be better to connect to the DB once (when the program starts) and tear it down during program shutdown? If so, how should I code that?
My code is:
require 'rubygems'
require 'eventmachine'
require 'rufus/tokyo/tyrant'

class LineCounter < EM::Connection
  ActionAllow = "action=dunno\n\n"

  def post_init
    puts "Received a new connection"
    @tokyo = Rufus::Tokyo::Tyrant.new('server', 1978)
    @data_received = ""
  end

  def receive_data(data)
    @data_received << data
    @data_received.lines do |line|
      key = line.split('=')[0]
      value = line.split('=')[1]
      @reverse_client_name = value.strip() if key == 'reverse_client_name'
      @client_address = value.strip() if key == 'client_address'
      @tokyo[@client_address] = @reverse_client_name
    end
    puts @client_address, @reverse_client_name
    send_data ActionAllow
  end

  def unbind
    @tokyo.close
  end
end

EventMachine::run {
  host, port = "127.0.0.1", 9997
  EventMachine::start_server host, port, LineCounter
  puts "Now accepting connections on address #{host}, port #{port}..."
  EventMachine::add_periodic_timer(10) { $stderr.write "*" }
}
with regards,
raj
Surprising that there are no answers to this question.
What you probably need is a connection pool where you can fetch, use, and return connections as they are required.
class ConnectionPool
  def initialize(&block)
    @pool = [ ]
    @generator = block
  end

  def fetch
    @pool.shift or @generator and @generator.call
  end

  def release(handle)
    @pool.push(handle)
  end

  def use
    if (block_given?)
      handle = fetch
      yield(handle)
      release(handle)
    end
  end
end

# Declare a pool with an appropriate connection generator
tokyo_pool = ConnectionPool.new do
  Rufus::Tokyo::Tyrant.new('server', 1978)
end

# Fetch/Release cycle
tokyo = tokyo_pool.fetch
tokyo[@client_address] = @reverse_client_name
tokyo_pool.release(tokyo)

# Simple block-method for use
tokyo_pool.use do |tokyo|
  tokyo[@client_address] = @reverse_client_name
end
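To address the "connect once and reuse" part of the question, here is a sketch (untested; names mirror the examples above) of how the LineCounter handler could borrow a shared Tyrant connection from such a pool instead of opening one per client:

TOKYO_POOL = ConnectionPool.new do
  Rufus::Tokyo::Tyrant.new('server', 1978)
end

class LineCounter < EM::Connection
  ActionAllow = "action=dunno\n\n"

  def post_init
    @data_received = ""
  end

  def receive_data(data)
    @data_received << data
    @data_received.each_line do |line|
      key, value = line.split('=')
      @reverse_client_name = value.strip if key == 'reverse_client_name'
      @client_address = value.strip if key == 'client_address'
    end
    # Borrow a shared connection only for the duration of the write.
    TOKYO_POOL.use do |tokyo|
      tokyo[@client_address] = @reverse_client_name
    end
    send_data ActionAllow
  end
end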
