Ruby TCPServer sockets - ruby

Maybe I've gotten my sockets programming way mixed up, but shouldn't something like this work?
srv = TCPServer.open(3333)
client = srv.accept
data = ""
while (tmp = client.recv(10))
  data += tmp
end
I've tried pretty much every other method of "getting" data from the client TCPSocket, but all of them hang and never break out of the loop (getc, gets, read, etc). I feel like I'm forgetting something. What am I missing?

In order for the server to be well written, you will need to do one of the following:
Know in advance the amount of data that will be communicated: in this case you can use read(size) instead of recv(size). read blocks until the requested number of bytes has been received.
Have a termination sequence: in this case you keep looping on recv until you receive a sequence of bytes indicating that the communication is over.
Have the client close the socket once it has finished sending: in this case read returns whatever partial data is left (or nil), and recv returns zero-length data (data.empty? == true).
Define a communication timeout: you can use select to time out when nothing has been received for a certain period, in which case you close the socket and assume all the data has been communicated. A sketch of this option follows below.
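Here is a minimal sketch of that last option, a select-based receive timeout. It uses Python's socket and select modules purely to illustrate the pattern (the port number and timeout value are arbitrary); Ruby's IO.select can be used in exactly the same way.
import select
import socket

# Accept one client and read until the peer has been silent for `timeout`
# seconds, then treat whatever has been collected as the complete message.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("", 3333))
srv.listen(1)
client, _addr = srv.accept()

data = b""
timeout = 5.0  # seconds of silence before giving up (arbitrary value)
while True:
    ready, _, _ = select.select([client], [], [], timeout)
    if not ready:          # nothing arrived within the timeout
        break
    chunk = client.recv(4096)
    if not chunk:          # peer closed the connection
        break
    data += chunk
client.close()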
Hope this helps.

Hmm, I keep doing this on Stack Overflow [answering my own questions]. Maybe it will help somebody else. I found an easier way to do what I was trying to do:
srv = TCPServer.open(3333)
client = srv.accept
data = ""
recv_length = 56
while (tmp = client.recv(recv_length))
  data += tmp
  break if tmp.length < recv_length
end

There is nothing that can be written to the socket so that client.recv(10) returns nil or false.
Try:
srv = TCPServer.open(3333)
client = srv.accept
data = ""
while (tmp = client.recv(10) and tmp != 'QUIT')
  data += tmp
end

Related

vb6 - readfile hang on read serial port?

I use ReadFile to read data from the serial port through the API instead of WinComm. It works okay, but it hangs on data receive when the other side does not send data. Here is some of the code:
'----------on port connect section---------------
With cTimeOuts
    .ReadIntervalTimeout = 20
    .ReadTotalTimeoutMultiplier = 20
    .ReadTotalTimeoutConstant = 20
    .WriteTotalTimeoutMultiplier = 20
    .WriteTotalTimeoutConstant = 20
End With
rtLng = SetCommTimeouts(hPort, cTimeOuts)
'----------on data receive section---------------
With dOverlapped
    .Internal = 0
    .InternalHigh = 0
    .offset = 0
    .OffsetHigh = 0
    .hEvent = 0
End With
Call ReadFile(hPort, BytesBuffer(0), UBound(BytesBuffer) + 1, dwBytesRead, dOverlapped)
I tried:
a. Setting ReadIntervalTimeout to MAXDWORD and both ReadTotalTimeoutMultiplier and ReadTotalTimeoutConstant to 0. This avoids the hang on receive, but the received data is randomly incomplete, so this is not an option for me.
b. Some example code on the internet uses
Call ReadFile(hPort, BytesBuffer(0), UBound(BytesBuffer) + 1, dwBytesRead, 0)
passing 0 instead of the Overlapped structure, but the program crashed.
So I don't know what to use to fix this: another thread? Overlapped I/O? Some actual VB6 code would be better because I'm not familiar with this area. Thank you!

Pushing data to websocket browser client in Lua

I want to use a NodeMCU device (Lua based top level) to act as a websocket server to 1 or more browser clients.
Luckily, there is code to do this here: NodeMCU Websocket Server
(courtesy of #creationix and/or #moononournation)
This works as described and I am able to send a message from the client to the NodeMCU server, which then responds based on the received message. Great.
My questions are:
1. How can I send messages to the client without them being sent as a response to a client request (standalone sending of data)? When I try to call socket.send(), socket is not found as a variable, which I understand, but I cannot work out how to do it! :(
2. Why does the decode() function output the extra variable? What is it for? I'm assuming it is for packet overflow, but I can never seem to make it return anything, regardless of my message length.
3. In the listen method, why has the author added a queuing system? Is this essential, or is it for applications that may receive multiple simultaneous messages? Ideally, I'd like to remove it.
I have simplified the code as below:
(excluding the decode() and encode() functions - please see the link above for the full script)
net.createServer(net.TCP):listen(80, function(conn)
  local buffer = false
  local socket = {}
  local queue = {}
  local waiting = false

  local function onSend()
    if queue[1] then
      local data = table.remove(queue, 1)
      return conn:send(data, onSend)
    end
    waiting = false
  end

  function socket.send(...)
    local data = encode(...)
    if not waiting then
      waiting = true
      conn:send(data, onSend)
    else
      queue[#queue + 1] = data
    end
  end

  conn:on("receive", function(_, chunk)
    if buffer then
      buffer = buffer .. chunk
      while true do
        local extra, payload, opcode = decode(buffer)
        if opcode == 8 then
          print("Websocket client disconnected")
        end
        --print(type(extra), payload, opcode)
        if not extra then return end
        buffer = extra
        socket.onmessage(payload, opcode)
      end
    end
    local _, e, method = string.find(chunk, "([A-Z]+) /[^\r]* HTTP/%d%.%d\r\n")
    local key, name, value
    for name, value in string.gmatch(chunk, "([^ ]+): *([^\r]+)\r\n") do
      if string.lower(name) == "sec-websocket-key" then
        key = value
        break
      end
    end
    if method == "GET" and key then
      acceptkey = crypto.toBase64(crypto.hash("sha1", key.."258EAFA5-E914-47DA-95CA-C5AB0DC85B11"))
      conn:send(
        "HTTP/1.1 101 Switching Protocols\r\n"..
        "Upgrade: websocket\r\nConnection: Upgrade\r\n"..
        "Sec-WebSocket-Accept: "..acceptkey.."\r\n\r\n",
        function ()
          print("New websocket client connected")
          function socket.onmessage(payload, opcode)
            socket.send("GOT YOUR DATA", 1)
            print("PAYLOAD = "..payload)
            --print("OPCODE = "..opcode)
          end
        end)
      buffer = ""
    else
      conn:send(
        "HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: 12\r\n\r\nHello World!",
        conn.close)
    end
  end)
end)
I can only answer one question; the others may be better suited for the library author. Besides, SO is a format where you normally ask one question.
How can I send messages to the client without it having to be sent as a response to a client request (standalone sending of data)?
You can't. Without the client contacting the server first and establishing a socket connection the server wouldn't know where to send the messages to. Even with SSE (server-sent events) it's the client that first initiates a connection to the server.

pyzmq recv_json can't decode message sent by send_json

Here is my code with the extraneous stuff stripped out:
coordinator.py
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port = socket.bind_to_random_port(ZMQ_ADDRESS)
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
while True:
    event = poller.poll(1)
    if not event:
        continue
    process_id, val = socket.recv_json()
worker.py
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.connect('%s:%s' % (ZMQ_ADDRESS, kwargs['zmq_port']))
socket.send_json(
    (os.getpid(), True)
)
what happens when I run it:
    process_id, val = socket.recv_json()
  File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/sugar/socket.py", line 380, in recv_json
    return jsonapi.loads(msg)
  File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/utils/jsonapi.py", line 71, in loads
    return jsonmod.loads(s, **kwargs)
  File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/__init__.py", line 451, in loads
    return _default_decoder.decode(s)
  File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/decoder.py", line 406, in decode
    obj, end = self.raw_decode(s)
  File "/Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/simplejson/decoder.py", line 426, in raw_decode
    raise JSONDecodeError("No JSON object could be decoded", s, idx)
JSONDecodeError: No JSON object could be decoded: line 1 column 0 (char 0)
and if I dig in with ipdb:
> /Users/anentropic/.virtualenvs/myproj/lib/python2.7/site-packages/zmq/sugar/socket.py(380)recv_json()
379 msg = self.recv(flags)
--> 380 return jsonapi.loads(msg)
381
ipdb> p msg
'\x00\x9f\xd9\x06\xa2'
hmm, that doesn't look like JSON... is this a bug in pyzmq? am I using it wrong?
Hmm, ok, found the answer.
There is an annoying asymmetry in the ØMQ interface, so you have to be aware of the type of socket you are using.
In this case my use of the ROUTER/DEALER architecture means that the JSON message sent from the DEALER socket, when I do send_json, gets wrapped in a multipart message envelope. The first part is a client id (I guess this is the '\x00\x9f\xd9\x06\xa2' I got above) and the second part is the JSON string we are interested in.
So in the last line of my coordinator.py I need to do this instead:
id_, msg = socket.recv_multipart()
process_id, val = json.loads(msg)
IMHO this is bad design on the part of ØMQ/pyzmq; the library should abstract this away and have just send and recv methods that just work.
I got the clue from this question: How can I use send_json with pyzmq PUB SUB. So it looks like the PUB/SUB architecture has the same issue, and no doubt others do too.
This is described in the docs, but it's not very clear:
http://zguide.zeromq.org/page:all#The-Asynchronous-Client-Server-Pattern
Update
In fact, I found in my case I could simplify the code further, by making use of the 'client id' part of the message envelope directly. So the worker just does:
context = zmq.Context()
socket = context.socket(zmq.DEALER)
socket.identity = str(os.getpid()) # or I could omit this and use ØMQ client id
socket.connect('%s:%s' % (ZMQ_ADDRESS, kwargs['zmq_port']))
socket.send_json(True)
It's also worth noting that when you want to send a message the other direction, from the ROUTER, you have to send it as multipart, specifying which client it is destined for, eg:
coordinator.py
context = zmq.Context()
socket = context.socket(zmq.ROUTER)
port = socket.bind_to_random_port(ZMQ_ADDRESS)
poller = zmq.Poller()
poller.register(socket, zmq.POLLIN)
pids = set()
while True:
    event = poller.poll(1)
    if not event:
        continue
    # a ROUTER always receives [client id, payload], so use recv_multipart here too
    process_id, msg = socket.recv_multipart()  # json.loads(msg) if you need the payload
    pids.add(process_id)
    # need some code in here to decide when to stop listening
    # and break the loop
for pid in pids:
    socket.send_multipart([pid, 'a string message'])
    # ^ do your own json encoding if required
I guess there is probably some ØMQ way of doing a broadcast message rather than sending to each client in a loop as I do above. I wish the docs just had a clear description of each available socket type and how to use them.
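For completeness: the broadcast the update alludes to is what ØMQ's PUB/SUB socket pair is for. A rough, self-contained sketch (the endpoint and topic name are arbitrary, and this is not wired into the ROUTER/DEALER code above):
import time
import zmq

context = zmq.Context()

# Publisher side: every connected, subscribed client receives each message.
pub = context.socket(zmq.PUB)
pub.bind('tcp://127.0.0.1:5556')        # arbitrary endpoint for illustration

# Subscriber side (this would normally live in each worker process).
sub = context.socket(zmq.SUB)
sub.connect('tcp://127.0.0.1:5556')
sub.setsockopt(zmq.SUBSCRIBE, b'work')  # filter on a topic prefix

time.sleep(0.5)  # give the subscription time to propagate
pub.send_multipart([b'work', b'a broadcast message'])  # topic frame + payload

topic, msg = sub.recv_multipart()
print(topic, msg)
One caveat of PUB/SUB is the "slow joiner" problem: messages published before a subscriber has connected and subscribed are silently dropped, hence the sleep in this toy example.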

How to timeout named pipes in ruby?

I saw an article which suggests the following code for a writer:
output = open("my_pipe", "w+") # the w+ means we don't block
output.puts "hello world"
output.flush # do this when we're done writing data
and a reader:
input = open("my_pipe", "r+") # the r+ means we don't block
puts input.gets # will block if there's nothing in the pipe
But could it happen that open, puts, or gets blocks the program? Is there some kind of timeout in place? Can one change it? Also, how come w+ means a non-blocking call? Which open system call flags does it get converted to?
Okay, let me share with you my picture of the world. As rogerdpack said, there are two options: 1) using select in blocking mode, 2) using non-blocking mode (the O_NONBLOCK flag and the read_nonblock, write_nonblock, select methods). I haven't tried them, so these are just speculations.
As to why open, puts and gets may block the thread: open blocks until there is at least one reader and at least one writer, and that must be why we need to specify r+ or w+ in the open call (judging from strace output, both are converted to the O_RDWR flag). Then there must be some buffer where not-yet-received data is stored, and that must be why write methods may block. Read methods may block because they expect more data to be available than there really is.
Update
If a process attempts to read from an empty pipe, then read(2) will block until data is available. If a process attempts to write to a full pipe (see below), then write(2) blocks until sufficient data has been read from the pipe to allow the write to complete.
-- http://linux.die.net/man/7/pipe
The FIFO must be opened on both ends (reading and writing) before data can be passed. Normally, opening the FIFO blocks until the other end is opened also.
Under Linux, opening a FIFO for read and write will succeed both in blocking and nonblocking mode. POSIX leaves this behavior undefined. This can be used to open a FIFO for writing while there are no readers available.
-- http://linux.die.net/man/7/fifo
And here's the implementation I came up with:
#!/home/yuri/.rbenv/shims/ruby
require 'timeout'

data = ((0..15).to_a.map { |v|
  (v < 10 ? '0'.ord + v : 'a'.ord + v - 10).chr
} * 4096 * 2).reduce('', :+)

timeout = 10
start = Time.now
open('1.fifo', File::WRONLY | File::NONBLOCK) { |out|
  out.flock(File::LOCK_EX)
  nwritten = 0
  data_len = data.length
  begin
    delta = out.write_nonblock data
    data = data[delta..-1]
    nwritten += delta
  rescue IO::WaitWritable, Errno::EINTR
    timeout_left = timeout - (Time.now - start)
    if timeout_left < 0
      puts Time.now - start
      raise Timeout::Error
    end
    IO.select nil, [out], nil, timeout_left
    retry
  end while nwritten < data_len
}
puts Time.now - start
But for the problem at hand I decided to ignore this timeout thing. It will probably suffice to handle just the situation where there is no reader on the other end of the pipe (Errno::ENXIO):
open('1.fifo', File::WRONLY | File::NONBLOCK) { |out|
  out.flock(File::LOCK_EX)
  nwritten = 0
  data_len = data.length
  begin
    delta = out.write_nonblock data
    data = data[delta..-1]
    nwritten += delta
  rescue IO::WaitWritable, Errno::EINTR
    IO.select nil, [out]
    retry
  end while nwritten < data_len
}
P.S. Your feedback is appreciated.
This page should answer all your questions... http://www.ruby-doc.org/core-2.0.0/IO.html
In general, puts can always block the current thread, since it may have to wait for the IO to complete before it returns. gets can also block the current thread, because it will read and read until it hits the first newline, and then return everything it read. HTH.

How to synchronize data between multiple workers

I have the following problem, which is begging for a zmq solution. I have time-series data:
A,B,C,D,E,...
I need to perform an operation, Func, on each point.
It makes good sense to parallelize the task using multiple workers via zmq. However, what is tripping me up is how to synchronize the results, i.e., the results should be time-ordered exactly the way the input data came in. So the end result should look like:
Func(A), Func(B), Func(C), Func(D),...
I should also point out that the time to complete, say, Func(A) will be slightly different from Func(B). This may require me to block for a while.
Any suggestions would be greatly appreciated.
You will always need to block for a while in order to synchronize things. You can send requests to a pool of workers and, when a response is received, buffer it if it is not the next one in sequence. One simple workflow could be described in a pseudo-language as follows:
socket receiver;  # zmq.PULL
socket workers;   # zmq.DEALER, the worker thread socket is started as zmq.DEALER too.
poller = poller(receiver, workers);

next_id_req = incr()
out_queue = queue;
out_queue.last_id = next_id_req
buffer = sorted_queue;

sock = poller.poll()
if sock is receiver:
    packet_N = receiver.recv()
    # send N for processing
    worker.send(packet_N, ++next_id_req)
else if sock is workers:
    # get a processed response Func(N)
    func_N_response, id = workers.recv()
    if out_queue.last_id != id - 1:
        # not the subsequent id, buffer it
        buffer.push(id, func_N_response)
    else:
        # in order, push to out queue
        out_queue.push(id, func_N_response)
        # also consume all buffered subsequent items
        while (out_queue.last_id == buffer.min_id() - 1):
            id, buffered_N_resp = buffer.pop()
            out_queue.push(id, buffered_N_resp)
But here comes the problem: what happens if a packet is lost in the processing thread (the worker pool)? You can skip it after a certain timeout (flush the buffer into the out queue) and continue filling the out queue, reordering if the packet ever arrives later.
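To make the reordering step concrete, here is a small Python sketch of just the buffering logic described above, with no sockets involved; the Reorderer class, the sequence ids and the emit callback are illustrative stand-ins for the out queue in the pseudocode:
class Reorderer:
    """Buffer out-of-order (seq_id, result) pairs and emit them in order."""

    def __init__(self, emit, first_id=1):
        self.emit = emit          # callback that receives results in order
        self.next_id = first_id   # the sequence id we are currently waiting for
        self.pending = {}         # seq_id -> result, for results that arrive early

    def push(self, seq_id, result):
        self.pending[seq_id] = result
        # Drain everything that is now contiguous with what was already emitted.
        while self.next_id in self.pending:
            self.emit(self.pending.pop(self.next_id))
            self.next_id += 1

ordered = []
r = Reorderer(ordered.append)
r.push(2, "Func(B)")   # arrives early, gets buffered
r.push(1, "Func(A)")   # fills the gap, so both are emitted
print(ordered)         # ['Func(A)', 'Func(B)']
A timeout-based flush for lost packets, as suggested above, would simply advance next_id past the missing id once the deadline expires.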
