Ruby TCPServer client server

I have an application that I am coding to send its logging info over a TCP socket to a server, and a monitor client that connects to the server to view the logging data. So far I am able to get the info sent to the server, but I need some thoughts on how to go about the next stage. Using Ruby's TCPServer, what methodologies can I use to have the server resend the incoming data to a client? How can I have data stored across threads?
require "socket"
server_socket = TCPServer.new('localhost', 2200)
loop do
# Create a new thread for each connection.
Thread.start(server_socket.accept) do |session|
# check if received is viewer request
line = session.gets
if line =~ /viewer/
#filter = line[/\:(.*?)\:/]
session.puts "Listining for #{filter}"
loop do
if (log = ### need input here from logging app ###)
# Show if filter is set for all or cli matches to filter
if #filter == ':all:' || log =~ /\:(.*?)\:/
session.puts log
end
# Read trace_viewer input.
if session.gets =~ /quit/
# close the connections
session.puts "Closing connection. Bye!"
session.close
break
end
end
end
end
end
end

With clients connecting to the server it sounds like a typical client/server configuration, with some clients sending data, and others requesting it. Is there any reason you don't use a standard HTTP client/server, i.e., a web server, instead of reinventing the wheel? Use Sinatra or Padrino, or even Rails and you'll be mostly finished.
How can I have data stored across threads?
Ruby's thread library includes Queue (Thread::Queue), which is good for moving data between threads. The documentation page has an example which should help.
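A minimal sketch of the pattern, assuming the thread receiving data from the logging app produces lines that a viewer thread consumes:

require 'thread' # Queue (Thread::Queue); built in on modern Rubies

log_queue = Queue.new

# Producer: the thread receiving data from the logging app pushes lines in.
producer = Thread.new do
  5.times { |i| log_queue.push(":cli: log line #{i}") }
end

# Consumer: a viewer session pops lines; pop blocks until data is available.
consumer = Thread.new do
  5.times { puts log_queue.pop }
end

[producer, consumer].each(&:join)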
The same ideas for using a queue would apply to using database tables. I'd recommend using a database as your queue rather than doing it in memory: a power outage or app crash will lose all the data in an in-memory queue if the clients haven't retrieved everything, whereas writing to and reading from a database means the data survives such problems.
Keep the schema simple, provide a reasonable index on it, and it should be fast enough for most uses. You'll need some housekeeping code to keep the database clean, but that's easy using SQL, or an ORM like Sequel or ActiveRecord.
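For instance, a small sketch of a database-backed queue using Sequel with SQLite (table and column names are illustrative, not prescribed):

require 'sequel'

DB = Sequel.sqlite('log_queue.db') # survives a crash, unlike an in-memory queue

DB.create_table?(:log_entries) do
  primary_key :id
  String    :message
  Time      :created_at, index: true
  TrueClass :delivered, default: false, index: true
end

entries = DB[:log_entries]

# Writer: append each incoming log line.
entries.insert(message: 'something happened', created_at: Time.now)

# Reader: fetch undelivered rows in order and mark them as delivered.
entries.where(delivered: false).order(:id).each do |row|
  puts row[:message]
  entries.where(id: row[:id]).update(delivered: true)
end

# Housekeeping: purge delivered rows older than a day.
entries.where(delivered: true).where { created_at < Time.now - 86_400 }.delete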

Related

Ready to reuse WebSocket connection server with Redis/ZeroMQ backend wanted

I need a horizontally scalable WebSocket connection server for a chat-like system, where browser clients connected to different WebSocket servers could exchange messages within separate chat rooms.
Clients          HaProxy    WebSocket server1   WebSocket server2   Redis/ZeroMQ
   |                |               |                   |                 |
client A ----------=-------------->o<------------------|---------------->|
   |                |               |                   |                 |
client B ----------=----------------------------------->o<-------------->|
   |                |               |                   |                 |
Here client A and client B are connected through HAProxy to two different WebSocket servers, which exchange messages through the Redis/ZeroMQ backend, as described in other questions on this topic.
Thinking about building that architecture, I wonder whether an open-source analog already exists. Which such project would you suggest looking at?
Look into the Plezi Ruby framework. I'm the author, and it has automatic Redis scalability built in
(you just set ENV['PL_REDIS_URL'] to the Redis URL).
As for the architecture to achieve this, it's fairly simple... I think.
Each server instance "subscribes" to two channels: a global channel for "broadcasting" (messages sent to all users or a large "family" of users) and a unique channel for "unicasting" (messages intended for a specific user connected to the server).
Each server manages its internal broadcasting system, so that messages are routed to a specific user, to a family of connections, or to all users, as per their target audience.
You can find the source code here. The Redis integration is handled using this code together with the websocket object code.
WebSocket broadcasts are handled using the websocket object's on_broadcast callback. The Iodine server handles the inner broadcasting within each server instance using its websocket implementation.
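As a rough sketch of the two-channel idea using the redis gem (the channel names are illustrative, not Plezi's actual internals):

require 'redis'

# Every server process subscribes to one global channel plus its own
# unique channel; messages meant for a specific user are published to the
# channel of whichever server holds that user's connection.
SERVER_CHANNEL = "server-#{Process.pid}"

Thread.new do
  Redis.new(url: ENV['PL_REDIS_URL']).subscribe('global', SERVER_CHANNEL) do |on|
    on.message do |channel, message|
      # Route the message to the matching local websocket connection(s).
      puts "#{channel}: #{message}"
    end
  end
end

# Publishing uses a separate, non-subscribed connection:
# Redis.new(url: ENV['PL_REDIS_URL']).publish('global', 'broadcast to everyone')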
I already posted the inner process architecture details as an answer to this question
I think socket.io has cross server support as well.
Edit (some code)
Due to the comment, I thought I'd put in some code... if you edit your question and add more specifications about the features you're looking for, I can edit the code here.
I'm using the term "room" since this is what you referred to, although I didn't envision Plezi as just a "chat" framework; chat is simply a very simple use case that demonstrates its real-time abilities.
If you're using Ruby, you can run the following in the irb terminal (make sure to install Plezi first):
require 'plezi'

class MultiRoom
  def on_open
    return close unless params[:room] && params[:name]
    @name = params[:name]
    puts "connected to room #{params[:room]}"
    # # if you use JSON to get room data,
    # # you can use room arrays like so:
    # params[:room] = params[:room].split(',') unless params[:room].is_a?(Array)
  end

  def on_message data
    to_room = params[:room]
    # # if you use JSON you can try:
    # to_room = JSON.parse(data)['room'] rescue nil
    # # we can use class `broadcast`, to broadcast also to self
    MultiRoom.broadcast :got_msg, to_room, data, @name if to_room
  end

  protected

  def got_msg room, data, from
    write ::ERB::Util.html_escape("#{from}: #{data}") if params[:room] == room
    # # OR, on JSON, with room arrays, try something like:
    # write data if params[:room].include?(room)
  end
end

class EchoConnection
  def on_message data
    write data
    MultiRoom.broadcast :got_msg, "myroom", "Echo?", "system" if data =~ /^test/i
  end
end

route '/echo', EchoConnection
route '/:name/(:room)', MultiRoom

# # add Redis auto-scaling with:
# ENV['PL_REDIS_URL'] = "redis://:password@my.host:6389/0"

exit # if running in terminal, using irb
You can test it out by connecting to: ws://localhost:3000/nickname/myroom
To connect to multiple "rooms" (you would need to rewrite the code for JSON and multi-room), try: ws://localhost:3000/nickname/myroom,your_room
Test the echo by connecting to ws://localhost:3000/echo
Notice that the echo route acts differently. This allows you to have different websockets for different concerns, i.e., one connection for updates and messages using JSON and another for multi-file uploads using raw binary data over websockets.

How can I properly handle persistent TCP socket connections (to simulate an HTTP server)?

So, I'm trying to simulate some basic HTTP persistent connections using sockets and Ruby - for a college class.
The point is to build a server - able to handle multiple clients - that receives a file path and gives back the file content - just like an HTTP GET.
The current server implementation loops listening for clients, fires a new thread when there's an incoming connection, and reads the file paths from this socket. It's very dumb, but it works fine when working with non-persistent connections: one request per connection.
But they should be persistent.
Which means the client shouldn't have to worry about closing the connection. In the non-persistent version the server echoes the response and closes the connection: goodbye client, farewell.
But being persistent means the server thread should loop and wait for more incoming requests until... well, until there are no more requests. How does the server know that? It doesn't! Some sort of timeout is needed. I tried to do that with Ruby's Timeout, but it didn't work.
Googling for solutions (besides being thoroughly advised to avoid the Timeout module) I've seen a lot of posts about the IO.select method, which should handle this socket-waiting issue much better than threads and such (which really sounds cool, considering how Ruby threads (don't) work). I'm trying to understand how IO.select works, but I still wasn't able to make it work in the current scenario.
So I ask basically two things:
How can I efficiently handle this timeout issue on the server side, either with some thread-based solution, low-level socket options, or some IO.select magic?
How can the client side know that the server has closed its side of the connection?
Here's the current code for the server:
require 'date'
require 'json'
require 'socket'

module Sockettp
  class Server
    def initialize(dir, port = Sockettp::DEFAULT_PORT)
      @dir = dir
      @port = port
    end

    def start
      puts "Starting Sockettp server..."
      # String#yellow / String#green come from a colorizing helper used
      # elsewhere in this project.
      puts "Serving #{@dir.yellow} on port #{@port.to_s.green}"

      Socket.tcp_server_loop(@port) do |socket, client_addrinfo|
        handle socket, client_addrinfo
      end
    end

    private

    def handle(socket, addrinfo)
      Thread.new(socket) do |client|
        log "New client connected"
        begin
          loop do
            if client.eof?
              puts "#{'-' * 100} end connection"
              break
            end

            input = client.gets.chomp
            body = content_for(input)
            response = {}

            if body
              response.merge!({
                status: 200,
                body: body
              })
            else
              response.merge!({
                status: 404,
                body: Sockettp::STATUSES[404]
              })
            end

            log "#{addrinfo.ip_address} #{input} -- #{response[:status]} #{Sockettp::STATUSES[response[:status]]}".send(response[:status] == 200 ? :green : :red)
            client.puts(response.to_json)
          end
        ensure
          socket.close
        end
      end
    end

    def content_for(path)
      path = File.join(@dir, path)
      return File.read(path) if File.file?(path)
      return Dir["#{path}/*"] if File.directory?(path)
    end

    def log(msg)
      puts "#{Thread.current} -- #{DateTime.now.to_s} -- #{msg}"
    end
  end
end
Update
I was able to simulate the timeout behaviour using the IO.select method, but the implementation doesn't feel good when combined with a couple of threads for accepting new connections and another couple for handling requests. The concurrency makes the situation mad and unstable, and I'm probably not sticking with it unless I can figure out a better way of using this solution.
Update 2
Seems like Timeout is still the best way to handle this. I'm sticking with it till I find a better option.
I still don't know how to deal with zombie client connections.
Solution
I ended up using IO.select (I got inspired when looking at the WEBrick code). You can check the final version here (lib/http/server/client_handler.rb).
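The core of that approach looks roughly like this (a sketch, not the linked final version; the timeout value is illustrative). If IO.select returns nil within the timeout, the connection is treated as dead:

# Inside the per-client loop: wait up to 10 seconds for the next request.
loop do
  ready = IO.select([client], nil, nil, 10)
  unless ready
    log "Client idle too long, closing connection"
    break
  end
  break if client.eof?
  input = client.gets.chomp
  # ... handle the request as before ...
end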
You should implement something like heartbeat packets. The client side should send a special packet every few seconds/minutes to ensure the server doesn't time out the connection on the client end; the server should simply ignore those packets.
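Client-side, that could be as simple as a background thread (a hypothetical sketch; the interval is arbitrary):

# Send a no-op line every 30 seconds so the server's read timeout
# never fires while the client is still alive.
heartbeat = Thread.new do
  loop do
    socket.puts 'PING' # the server recognizes and ignores these
    sleep 30
  end
end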

How can I tell if my Ruby server script is being overloaded?

I have a daemonized ruby script running on my server that looks like this:
@server = TCPServer.open(61101)

loop do
  @thr = Thread.new(@server.accept) do |sock|
    Thread.current[:myArrayOfHashes] = [] # hashes containing attributes of myObject

    SystemTimer.timeout_after(5) do
      Thread.current[:string] = sock.gets
      sock.close

      # parse the string and load the data into myArrayOfHashes

      Myobject.transaction do # update the Myobject table
        Thread.current[:myArrayOfHashes].each do |h|
          Thread.current[:newMyObject] = Myobject.new
          # load up the new object with data
          Thread.current[:newMyObject].save
        end
      end
    end
  end
  @thr.join
end
This server receives and manages data for my Rails application, which all runs on Mac OS 10.6. The clients call the server every 15 minutes, on the quarter hour. I currently have only 16 or so clients, but I'm wondering about the following:
If two clients call at close enough to the same time, will one client's connection attempt fail?
How can I figure out how many client connections my server can accommodate at the same time?
How can I monitor how much memory my server is using?
Also, is there an article you can point me toward that discusses the best way to implement this kind of server? For instance, can I have multiple instances of the server listening on the same port? Would that even help?
I am using Bluepill to monitor my server daemons.
1 and 2
The answer is no: two clients connecting close together will not make the connection fail (though many clients connecting at once may; see below).
The reason is that the operating system maintains a so-called listen queue built into every server socket. Even if you are not calling accept fast enough in your program, the OS will keep buffering incoming connections for you, for as long as the listen queue does not fill up.
Now, what is the size of this queue?
In most cases the default size is 5. The size is set after you create the socket, when you call listen on it (see the man page for listen).
Ruby's TCPServer automatically calls listen for you, and if you look at the C source code for TCPSocket you will find that it indeed sets the size to 5:
https://github.com/ruby/ruby/blob/trunk/ext/socket/ipsocket.c#L108
SOMAXCONN is defined as 5 here:
https://github.com/ruby/ruby/blob/trunk/ext/socket/mkconstants.rb#L693
Now what happens if you don't call accept fast enough and the queue gets filled?
The answer is found in the man page of listen:
The backlog argument defines the maximum length to which the queue of pending connections for sockfd may grow. If a connection request arrives when the queue is full, the client may receive an error with an indication of ECONNREFUSED or, if the underlying protocol supports retransmission, the request may be ignored so that a later reattempt at connection succeeds.
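If you expect bursts of simultaneous connections, one mitigation (a quick sketch, not from the original answer) is to re-issue listen with a larger backlog after creating the server socket:

require 'socket'

server = TCPServer.new('localhost', 61101)
server.listen(128) # ask the OS for a larger pending-connection queue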
In your code, however, there is one problem which can make the queue fill up if more than 5 clients try to connect at roughly the same time: you're calling @thr.join at the end of the loop.
What effectively happens when you do this is that your server will not accept any new incoming connections until all the work inside your accept-thread has finished executing.
So if the database work and the other things you are doing inside the accept-thread take a long time, the listen queue may fill up in the meantime. It depends on how long your processing takes, and how many clients could potentially be connecting at the exact same time.
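The simplest fix (a sketch of the idea, not the poster's exact code) is to drop the join so the loop returns to accept immediately:

loop do
  Thread.new(@server.accept) do |sock|
    # ... the same per-connection processing as before ...
  end
  # No @thr.join here: the loop goes straight back to accept,
  # draining the listen queue as fast as connections arrive.
end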
3
You didn't say which platform you are running on, but on Linux/OS X the easiest way is to just run top in your console. For more advanced memory-monitoring options you might want to check these out:
ruby/ruby on rails memory leak detection
track application memory usage on heroku

Handling event-stream connections in a Sinatra app

There is a great example of a chat app using Server-Sent Events by Konstantin Haase. I am trying to run it and have a problem with callbacks (I use Sinatra 1.3.2 and browse with Chrome 16). They simply do not run (e.g. after a page reload), and therefore the number of connections keeps growing.
Also, the connection is closed after 30-60 seconds unless one sets a periodic timer to send empty data, as suggested by Konstantin elsewhere.
Can you replicate it? If yes, is it possible to fix these issues somehow? WebSockets work seamlessly in this respect...
# ruby
get '/stream', provides: 'text/event-stream' do
  stream :keep_open do |out|
    EventMachine::PeriodicTimer.new(20) { out << "data: \n\n" } # added
    settings.connections << out
    puts settings.connections.count # added
    out.callback { puts 'closed'; settings.connections.delete(out) } # modified
  end
end

# javascript
var es = new EventSource('/stream');
es.onmessage = function(e) { if (e.data != '') $('#chat').append(e.data + "\n") }; // modified
This was a bug in Sinatra https://github.com/sinatra/sinatra/issues/446
Neat bit of code. But you're right, WebSockets would address these problems. I think there are two problems here:
1) Your browser, the web server, or a proxy in between may shut down your connection after a period of time, idle or not. Your suggestion of a periodic timer sending empty data will help, but is no guarantee.
2) As far as I know, there's no built-in way to tell if/when one of these connections is still working. To keep your list of connections from growing, you're going to have to keep track of when each connection was last "used" (maybe the client should ping occasionally, and you would store this datetime), then add a periodic timer to check for and kill "stale" connections; see the sketch after this answer.
An easier, though perhaps uglier, option is to store the creation time of each connection and kill it off after n minutes. The client should be smart enough to reconnect.
I know that takes some of the simplicity out of the code. As neat as the example is, I think it's a better candidate for WebSockets.
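A rough sketch of that bookkeeping (the interval and threshold are arbitrary), assuming each connection's last ping time is tracked in a hash alongside the connections list:

# last_seen maps each stream object (out) to the Time it last pinged.
last_seen = {}

EventMachine::PeriodicTimer.new(30) do
  now = Time.now
  last_seen.delete_if do |out, seen|
    if now - seen > 120          # no ping for two minutes: assume stale
      out.close rescue nil       # Sinatra's stream object responds to #close
      settings.connections.delete(out)
      true
    end
  end
end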

What is most efficient way to read from TCPServer?

I'm creating a Ruby server which communicates with a TCP client. My server uses TCPServer, and I'm attempting to use recv, but it doesn't wait for data, so the code spins in a tight loop until data is received.
What is the most efficient way to process intermittent data? I'm unable to change the data being sent, since I'm attempting to emulate another server. Which read-like method on TCPServer/TCPSocket will wait for data to be sent?
require "socket"
dts = TCPServer.new('localhost', 20000)
s = dts.accept
print(s, " is accepted\n")
loopCount = 0;
loop do
Thread.start(s) do
loopCount = loopCount + 1
lineRcvd = s.recv(1024)
if ( !lineRcvd.empty? )
puts("#{loopCount} Received: #{lineRcvd}")
s.write(Time.now)
end
end
end
s.close
print(s, " is gone\n")
Thanks for your time.
Are you sure recv isn't returning "" -- meaning the socket is closed?
If not, then perhaps your sockets are somehow set to non-blocking?
EventMachine is indeed far faster than using threads for socket programming :)
GL.
-r
Based on the questions you've been asking, I think you should try a framework like EventMachine and write a server that implements what you want instead of trying to fuss around with writing a server wrapper.
That being said, the most efficient way to read from a socket is to use a proper select call and poll all the open connections. While this is a fairly academic exercise for anyone who's developed client-server applications before, it is a nuisance, because there are a lot of things you can easily get wrong. For example, handling multiple connections can lead to all kinds of troublesome situations if you're not especially careful to avoid blocking calls.
The EventMachine framework makes it easy to develop query/response-type servers, because you can always start with a template and work from there; the built-in EventMachine::Protocols::LineAndTextProtocol works as a great basis for most.
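For instance, a minimal line-based server that answers each line with the current time (a sketch of the idea behind the code in the question, not a drop-in replacement) could look like:

require 'eventmachine'

# Each connection gets its own handler instance; receive_line fires once
# per complete line, so there is no tight polling loop burning CPU.
class TimeResponder < EM::Protocols::LineAndTextProtocol
  def receive_line(line)
    puts "Received: #{line}"
    send_data("#{Time.now}\n")
  end
end

EM.run do
  EM.start_server('localhost', 20000, TimeResponder)
end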
