Mocking a network connection - ruby

Just how do I mock a network connection?
Suppose I am writing a client-server application; the client is already in place, but I want to test that an event is fired, or a function is called, on the server when it receives a network connection.
I'm currently using EventMachine, and I'd like to figure out how to unit test these interactions, so that mocked network connections/messages are handled correctly, without having to do integration testing and actually write a client that sends the appropriate messages to the test interface. I hope this makes sense.
Basically, I want a test for the server that checks it responds to mocked messages correctly, without ever opening a real network connection or writing a dedicated client just for testing. I would prefer to be able to say 'this is the message I want to receive, now pretend I've received it from a real client and handle it'.

I solved this simply by instantiating the connection class from EventMachine and then calling the method directly - not sure why I didn't think of that in the first place!
describe 'Network manager' do
  it 'should call the ChangeStatus handler when it receives the ChangeStatus packet' do
    # Arrange
    connection = TcpConnection.new
    # Set up the packet handler
    packet_handler = double 'TcpPacketHandler'
    # Inject dependency
    connection.packet_handler = packet_handler

    # Create the message
    data = [ 15, # message code + payload size
             0,  # message code
             4,  # protocol version
             12, # size of string
           ]
    data.concat 'hello, world'.bytes.to_a  # append the payload bytes (concat, not push, keeps the array flat)

    reader, writer = IO.pipe
    writer.write data.pack('C*')           # write the raw bytes rather than the array's string form
    writer.close

    # Assert
    expect(packet_handler).to receive(:handle_message).with(data[1], anything()).once

    message = reader.read

    # Act
    connection.receive_data message
  end
end

Related

Reconnect WebSocket client connection on no response

I have a lot of async connections to a WebSocket server. After a while, the server stops responding and the script just waits on response. I have to dockerize and deploy the script, so the only way to reset the connection is to run the docker image again.
The ideal solution would be something like a keyword argument to WebSockets.open. The code looks like:
map(container) do item
    @async begin
        WebSockets.open(url) do ws
            payload = generatepayload(item)
            if writeguarded(ws, JSON3.write(payload))
                while isopen(ws)
                    data, success = readguarded(ws)
                    ### process data
                end
            end
        end
    end
end
But any solution where I just have to write Julia code would work.
WebSockets.jl's readguarded() allows a timeout read error exception to be ignored. As long as your socket read has a reasonable timeout (it should), handling the read failure yourself may help, since this could give you a new WebSocket connection to the server. For example:
map(container) do item
    @async while true
        try
            WebSockets.open(url) do ws  # reopens the websocket on each pass of the while loop
                payload = generatepayload(item)
                if writeguarded(ws, JSON3.write(payload))
                    while isopen(ws)
                        data = read(ws)  # unguarded read: a failure raises and is handled below
                        ### process data
                    end
                end
            end
        catch y
            @debug y  # log something here
            # maybe check the server some other way here and break if it is offline
        end
    end
end

How to tell if a TCP socket has been closed by the client in Ruby?

I've read some things suggesting that because of the design of TCP this might not be possible (such as: Java socket API: How to tell if a connection has been closed?), but I'm trying to find explicit confirmation. I have a basic TCP server that accepts connections, and a client that initiates a connection, sends a message, and then closes the connection. Is there a way for the server to know that the client closed the connection?
I found some suggestions to look into checking the file descriptors for the sockets (source: How to check if a given file descriptor stored in a variable is still valid?), using the kernel select call (source: https://bytes.com/topic/c/answers/866296-detecting-if-file-descriptor-closed), as well as using recv to check whether it returns 0 (source: http://man7.org/linux/man-pages/man2/recv.2.html#RETURN_VALUE), but these do not seem to work, at least not when called from Ruby. To test this, I wrote a basic server and client:
test_server.rb
require 'socket'
require 'fcntl'

TIMEOUT = 5

server = TCPServer.new('localhost', 8080)
puts "Starting server"

loop do
  client = server.accept
  puts "New client: #{client}"

  puts "** before closed #{Time.now.to_i} closed=#{client.closed?}"
  result = IO.select([client], nil, nil, TIMEOUT)
  puts "select result=#{result}"
  fd = client.fcntl(Fcntl::F_GETFD, 0)
  puts "client fd=#{fd}"

  stuff = client.recv(30)
  puts "received '#{stuff}'"
  begin
    r = client.recv(1)
  rescue => e
  end
  puts "received #{r} nil?=#{r.nil?}"

  sleep 3

  puts "** after closed #{Time.now.to_i} closed=#{client.closed?}"
  result = IO.select([client], nil, nil, TIMEOUT)
  puts "select result=#{result}"
  fd = client.fcntl(Fcntl::F_GETFD, 0)
  puts "client fd=#{fd}"
  begin
    r = client.recv(1)
  rescue => e
  end
  puts "received #{r} nil?=#{r.nil?}"

  puts "done!"
end
test_client.rb
require 'socket'

class Client
  def initialize
    @socket = tcp_socket
  end

  def tcp_socket
    Thread.current[:socket] = TCPSocket.new("localhost", 8080)
  end

  def send(s, args={})
    puts "sending str '#{s}'"
    nbytes = @socket.send(s, 0)
    puts "sent #{nbytes} bytes"
    sleep 1
    @socket.close
    puts "done at #{Time.now.to_i}: #{@socket.closed?}"
  end
end
msg = 'hello world this is my message'
server = Client.new
server.send(msg)
The client sends a 30-byte message, waits 1s, then closes the connection.
The server accepts the connection, calls select and fcntl on it to check its status, receives the message, tries to read 1 more byte, sleeps for 3 seconds, then calls select and fcntl and again tries to read 1 byte. The intent here is to check if anything changes that the server can see before and after the client closed the connection (hence the 3-second sleep). The result I get from running the server and then the client code is:
Starting server
New client: #<TCPSocket:0x00007fa0930f0880>
** before closed 1578005539 closed=false
select result=[[#<TCPSocket:fd 10>], [], []]
client fd=1
received 'hello world this is my message'
received nil?=false
** after closed 1578005543 closed=false
select result=[[#<TCPSocket:fd 10>], [], []]
client fd=1
received nil?=false
done!
Before and after the client closed the connection, select still sees the socket as readable, the underlying file descriptor does not change, and recv returns an empty string (it's possible the kernel call is returning 0 as specified in the man page but Ruby is capturing that; if so, I don't know how to see it). Thus none of these seems to be a reliable indicator of whether the connection was closed from the other side. Is there something I'm missing?
I have seen some other suggestions to incorporate a regular heartbeat back to the client, but I'm wondering if there's a way to avoid that. The reason is that I'm trying to accommodate a case where the client may be sending a message in several pieces separated by a delay (e.g. 100 bytes, sent one byte per second). If the server sends a heartbeat message in the middle of that operation and listens for an OK, I presume the client has to be listening for the heartbeat as well and send its OK back, separate from the ongoing message send, and in my test case I can't change the client to do that.
I have seen some other suggestions to incorporate a regular heartbeat back to the client, but I'm wondering if there's a way to avoid that.
A heartbeat (ping) is the only viable solution.
There is no way to reliably know if the connection is live except by trying to send data over the wire.
Since TCP/IP doesn't require any traffic when data isn't being sent (or received), there's no way for the TCP stack (not even in the OS kernel) to know if the connection is "live" without attempting to exchange data over the wire.
Some connections will close gracefully, allowing the TCP stack to recognize that the connection was closed - but this isn't always true (you can read more about "half-open" or "half-closed" connections).
For this reason, all servers implement a timeout / ping mechanism to test for lost connectivity.
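To make the "try to send data" point concrete, here is a minimal Ruby sketch (an illustration, not part of the code above); the ping frame is whatever marker message your protocol defines:
# Hypothetical liveness check: the only way to learn that the peer is gone is
# to write to the socket and let the TCP stack report the failure. Note that a
# broken connection often only surfaces on a later write, because the first
# one may still be accepted into the local send buffer.
def connection_alive?(socket, ping_frame)
  socket.write(ping_frame)
  true
rescue Errno::EPIPE, Errno::ECONNRESET, IOError
  false
end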
I'm trying to accommodate a case where the client may be sending a message in several pieces separated by a delay (e.g. 100 bytes at 1 second each byte)...
Remember that TCP/IP is a stream based protocol, not a message based protocol.
This means that your 100 bytes might arrive fragmented or they might be combined with a previous message.
If you're sending messages (rather than streaming data), you need - by design - to mark message boundaries.
Since these message boundaries must be marked, it becomes relatively easy to add a message type marker (to mark ping/pong messages).
You can observe the WebSocket protocol message format to learn more about message-based protocol design using a TCP/IP (streamed) connection.
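To illustrate the framing idea, here is a sketch of one possible format (an assumption for illustration, not something the answer prescribes): a one-byte type marker distinguishes data frames from ping/pong, and a four-byte length prefix marks the message boundary.
# Hypothetical wire format: 1 type byte, 4-byte big-endian length, then the payload.
TYPE_DATA, TYPE_PING, TYPE_PONG = 0, 1, 2

def write_frame(io, type, payload = "")
  io.write([type, payload.bytesize].pack("CN") + payload)
end

def read_frame(io)
  header = io.read(5) or return nil  # nil means the peer closed the connection gracefully
  type, length = header.unpack("CN")
  [type, io.read(length)]
end
A ping is then just write_frame(io, TYPE_PING), and it can be sent between data frames at any time without corrupting an in-progress message.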

Is this example tcp socket programming sequence of events safe?

I plan on having two services.
HTTP REST service written in Ruby
JSON RPC service written in Go
The Ruby service will open a TCP socket connection to the Go JSON RPC service. It'll do this for each incoming HTTP request it receives. It will send some data over the socket to the Go service, and that service will subsequently send the corresponding data back down the socket.
Go code
The Go service would look something like this (simplified):
srv := new(service.App) // this would expose a Process method
rpc.Register(srv)

listener, err := net.Listen("tcp", ":8080")
if err != nil {
    // handle error
}

for {
    conn, err := listener.Accept()
    if err != nil {
        // handle error
    }
    go jsonrpc.ServeConn(conn)
}
Notice we serve the incoming connection using a goroutine, so we can handle requests concurrently.
Ruby code
Below is a simple snippet of Ruby code that demonstrates (in theory) the way I would send data to the Go service:
require "socket"
require "json"
socket = TCPSocket.new "localhost", "8080"
b = {
:method => "App.Process",
:params => [{ :Config => JSON.generate({ :foo => :bar }) }],
:id => "0"
}
socket.write(JSON.dump(b))
response = JSON.load socket.readline
My concern is: will this be a safe sequence of events?
I'm not asking if this will be 'thread safe', because I'm not worried about manipulating shared memory across the goroutines. I'm more concerned with whether my Ruby HTTP service will get back the data it's expecting.
If I have two parallel requests coming into my HTTP service (or maybe the Ruby app is hosted behind a load balancer and so different instances of the HTTP service are handling multiple requests), then I could have instance A send the message Foo to the Go service while instance B sends the message Bar.
The business logic inside the Go service will return different responses depending on its input so I want to be sure that Ruby instance A gets back the correct response for Foo, and B gets back the correct response for Bar.
I assume a socket connection is more like a queue, in that if instance A makes a request to the Go service first and then B does, but B's request is quicker to process for whatever reason, then the Go service will write the response for B to the socket first, and instance A of the Ruby app could end up reading the wrong socket data (this is obviously just one possible scenario, considering that I could get lucky and have instance B read the socket data before instance A does).
Solutions?
I'm not sure if there is a simple solution to this problem, unless I drop the TCP socket and RPC and instead rely on standard HTTP in the Go service. But I wanted the performance and lower overhead of TCP.
I'm worried the design could get more complicated by maybe having to implement an external queue as a way of synchronising the responses with the Ruby service.
It may be that, because my Ruby service is fundamentally synchronous (HTTP request/response), I have no option but to switch to HTTP for the Go service.
But wanted to double check with the community first just in case I'm missing something obvious.
Yes, this is safe if you create a new connection every time.
That said there are latent issues with your approach:
TCP connections are rather expensive to establish, so you probably want to re-use connections with a connection pool
If you make too many simultaneous requests you will exhaust ports/open file descriptors which will cause your program to crash
You don't have any timeouts in place, so it's possible to end up with orphaned TCP connections which never complete (either because of something bad on the Go side, or network problems)
I think you'd be better off using HTTP (despite the overhead) since libraries are already written to cope with these problems. HTTP is also much more debuggable since you can just curl an endpoint to test it.
Personally I'd probably go with gRPC.
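To illustrate the timeout point above, here is a hedged sketch of the Ruby side with a connect timeout and a read deadline (the 5-second values are arbitrary, and b is the request hash from the snippet earlier):
require "socket"
require "json"

READ_TIMEOUT = 5 # seconds, illustrative

# connect_timeout bounds the connect; IO.select puts a deadline on the read,
# so an orphaned connection fails fast instead of blocking forever.
socket = Socket.tcp("localhost", 8080, connect_timeout: 5)
socket.write(JSON.dump(b))

if IO.select([socket], nil, nil, READ_TIMEOUT)
  response = JSON.load(socket.readline)
else
  socket.close
  raise "RPC call timed out after #{READ_TIMEOUT}s"
end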

How can I properly handle persistent TCP socket connections (to simulate an HTTP server)?

So, I'm trying to simulate some basic HTTP persistent connections using sockets and Ruby - for a college class.
The point is to build a server - able to handle multiple clients - that receives a file path and gives back the file content - just like an HTTP GET.
The current server implementation loops listening for clients, fires a new thread when there's an incoming connection, and reads the file paths from that socket. It's very dumb, but it works fine when working with non-persistent connections - one request per connection.
But they should be persistent.
Which means the client shouldn't worry about closing the connection. In the non-persistent version the server echoes the response and closes the connection - goodbye client, farewell.
But being persistent means the server thread should loop and wait for more incoming requests until... well, until there are no more requests. How does the server know that? It doesn't! Some sort of timeout is needed. I tried to do that with Ruby's Timeout, but it didn't work.
Googling for some solutions - besides being thoroughly advised to avoid using the Timeout module - I've seen a lot of posts about the IO.select method, which should handle this socket-waiting issue way better than using threads and stuff (which really sounds cool, considering how Ruby threads (don't) work). I'm trying to understand how IO.select works, but I still wasn't able to make it work in the current scenario.
So I'm asking basically two things:
how can I efficiently work this timeout issue on the server-side, either using some thread based solution, low-level socket options or some IO.select magic?
how can the client side know that the server has closed its side of the connection?
Here's the current code for the server:
require 'date'

module Sockettp
  class Server
    def initialize(dir, port = Sockettp::DEFAULT_PORT)
      @dir = dir
      @port = port
    end

    def start
      puts "Starting Sockettp server..."
      puts "Serving #{@dir.yellow} on port #{@port.to_s.green}"

      Socket.tcp_server_loop(@port) do |socket, client_addrinfo|
        handle socket, client_addrinfo
      end
    end

    private

    def handle(socket, addrinfo)
      Thread.new(socket) do |client|
        log "New client connected"
        begin
          loop do
            if client.eof?
              puts "#{'-' * 100} end connection"
              break
            end

            input = client.gets.chomp

            body = content_for(input)

            response = {}
            if body
              response.merge!({
                status: 200,
                body: body
              })
            else
              response.merge!({
                status: 404,
                body: Sockettp::STATUSES[404]
              })
            end

            log "#{addrinfo.ip_address} #{input} -- #{response[:status]} #{Sockettp::STATUSES[response[:status]]}".send(response[:status] == 200 ? :green : :red)

            client.puts(response.to_json)
          end
        ensure
          socket.close
        end
      end
    end

    def content_for(path)
      path = File.join(@dir, path)

      return File.read(path) if File.file?(path)
      return Dir["#{path}/*"] if File.directory?(path)
    end

    def log(msg)
      puts "#{Thread.current} -- #{DateTime.now.to_s} -- #{msg}"
    end
  end
end
Update
I was able to simulate the timeout behaviour using the IO.select method, but the implementation doesn't feel good when combined with a couple of threads for accepting new connections and another couple for handling requests. The concurrency makes the situation mad and unstable, and I'm probably not sticking with it unless I can figure out a better way of using this solution.
Update 2
Seems like Timeout is still the best way to handle this. I'm sticking with it till I find a better option.
I still don't know how to deal with zombie client connections.
Solution
I ended up using IO.select (got inspired when looking at the WEBrick code). You can check the final version here (lib/http/server/client_handler.rb).
You should implement something like heartbeat packets. The client side should send special packets every few secs/mins to ensure that the server doesn't time out the connection on the client end. You just avoid doing anything in this call.
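For the server-side timeout itself, here is a minimal sketch of the IO.select idea, assuming newline-delimited requests; IDLE_TIMEOUT and process are placeholders, and this is an illustration rather than the poster's final code. IO.select returns nil when nothing arrives within the timeout, so the per-client thread can drop an idle or zombie connection without the Timeout module:
IDLE_TIMEOUT = 30 # seconds, tune to taste

def handle(client)
  loop do
    ready = IO.select([client], nil, nil, IDLE_TIMEOUT)
    break if ready.nil?               # nothing received in time: treat the client as gone
    line = client.gets
    break if line.nil?                # client closed its side of the connection
    client.puts process(line.chomp)   # `process` stands in for the request handling
  end
ensure
  client.close
end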

Test Ruby TCPSocket server

I have a small application serving connections (like a chat). It catches the connection, grabs a login from it, then listens for data and broadcasts it to each connection except the sender.
The problem is I'm not a very advanced tester and do not know how this can be tested.
# Handle each connection
def serve(io)
  io.puts("LOGIN\n")

  # Listen for identifier
  user = io.gets.chomp
  ...

  # Add connection to the list
  @mutex.synchronize { @chatters[user] = io }

  # Get and broadcast input until connection returns nil
  loop do
    incoming = io.gets
    broadcast(incoming, io)
  end
end

# Send message out to everyone but the sender
def broadcast(message="", sender)
  # Mutex for safety - GServer uses threads
  @mutex.synchronize do
    @chatters.each do |chatter|
      socket = chatter[1]

      # Do not send to sender
      if socket != sender
        socket.print(message)
      end
    end
  end
end
If you just want to do unit testing, you could use RSpec mocks (or some other mocking framework) to stub your methods and ensure that the logic works the way you expect. If you actually want to drive an integration test, that's a lot more work, and will require that you create a separate reader and writer for the socket so that you can actually test each piece of the conversation independently for expected behavior.
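For the unit-testing route, here is a hedged sketch of how broadcast could be driven with RSpec doubles, assuming the methods above live in a class (called ChatServer here purely for illustration):
describe '#broadcast' do
  let(:server)   { ChatServer.new }           # hypothetical class wrapping serve/broadcast
  let(:sender)   { double('sender socket') }
  let(:receiver) { double('receiver socket') }

  it 'sends the message to everyone except the sender' do
    server.instance_variable_set(:@mutex, Mutex.new)
    server.instance_variable_set(:@chatters, { 'alice' => sender, 'bob' => receiver })

    expect(receiver).to receive(:print).with("hello\n")
    expect(sender).not_to receive(:print)

    server.broadcast("hello\n", sender)
  end
end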
Someone else has apparently blogged about a similar issue to yours. Perhaps that example will help you.
If your question is more about the test cases you should write, instead of about how to test sockets, then you may want to rewrite your question so that answers will be more on-target.
