Golang io.Reader usage with net.Pipe

The problem I'm trying to solve is using io.Reader and io.Writer in a networking application without the bufio and strings wrappers used in the examples I've been able to find online. For efficiency I'm trying to avoid the memory copies those imply.
I've created a test application using net.Pipe on the play area (https://play.golang.org/p/-7YDs1uEc5). There is a data source and sink which talk through a net.Pipe pair of connections (to model a network connection) and a loopback on the far end to reflect the data right back at us.
The program gets as far as the loopback agent reading the sent data, but as far as I can see the write back to the connection blocks; it certainly never completes. Additionally, the receiver in the sink never receives any data whatsoever.
I can't figure out why the write cannot proceed, as it's wholly symmetrical with the path that does work. I've written other test systems that use bidirectional network connections, but as soon as I stop using bufio and ReadString I hit this problem. I've looked at the code of those and can't see what I've missed.
Thanks in advance for any help.

The issue is on line 68:
data_received := make([]byte, 0, count)
This line creates a slice with length 0 and capacity count. Read copies at most len(p) bytes into the slice it is given, so with a zero-length slice it reads nothing. The call to Write then blocks forever because its data is never consumed.
Fix the issue by changing the line to:
data_received := make([]byte, count)
playground example
Note that "Finished Writing" may not be printed because the program can exit before dataSrc finishes executing.

Related

Why sends of "large" array/slice using net/rpc/jsonrpc codec over unix socket connection hang?

I'm trying to send an array of data as an rpc reply using golang's built-in net/rpc server and client and the net/rpc/jsonrpc codec. But I'm running into some trouble.
The data I'm sending is around 48 bytes, and the client will just hang in client.Call.
I've made a playground that replicates the problem:
https://go.dev/play/p/_IQ9SF7TSdc
If you change the constant "N" in the above program to 5, things work as expected!
Another playground shows how the issue seems to crop up only when the slice/array in question exceeds 49 bytes:
https://go.dev/play/p/R8CQa0mv7vB
Does anyone know what might be the issue? Golang's tests for the array and slice data types are not exactly designed with "large" arrays in mind. Thanks in advance.
On the line where the listener is set up:
listener, err := net.ListenUnix("unixpacket", &net.UnixAddr{RPCPath, "unixpacket"})
Don't use unixpacket. It corresponds to the underlying SOCK_SEQPACKET, which is not a stream protocol. Likely the large messages were split into packets in a way the receiver was not able to reassemble. Use unix instead, which corresponds to SOCK_STREAM.
See this SO post for more.

Difficulty in using io.Pipe

Hi friends, I want to write data to a writer and hand it to a library through a reader so that the library can read it.
The problem I have now is that png.Encode never continues; it gets stuck here:
r, w := io.Pipe()
err := png.Encode(w, img)
Tell me the solution if possible. I don't care how this particular problem gets resolved; if you know another way to have data written to a writer and read from a reader, please suggest it. There were other approaches, but I'm using two libraries where one wants only a writer and the other only a reader.
w is blocked waiting for a reader to consume the data written to the pipe, which in turn blocks Encode. Reading from r will unblock the writes. As the io.Pipe documentation puts it:
each Write to the PipeWriter blocks until it has satisfied one or more
Reads from the PipeReader that fully consume the written data

Protobuffers and Golang --- writing out marshal'ed structs and reading back in

Is there a generally accepted "correct" way for writing out and reading back in marshaled protocol buffer messages from a file?
I've been working on a smaller project that simulates a full network locally with gRPC and am trying to add writing to/reading from files so that I can save state and start from there when it's launched again. It seems I was naive in assuming these would remain on a single line:
Sees chain of length 3
from debugging messages I've written; but,
$ wc test.dat
7 8 2483 test.dat
So I suppose there are an extra 4 newlines... Is there a method of delimiting these that I can use, or do I need to come up with one on my own? I realize this is straightforward, but in my mind I can only probabilistically guarantee that <<<<DELIMIT>>>> or whatever will never show up in the data and put me back at square one.
Use proto.Marshal/Unmarshal:
That way you come closest to simulating receipt of the message while avoiding side effects from other marshal methods.
Alternative: dump it as []byte and reread it.

Why doesn't IO#seek work for TCPSocket?

I wrote some simple code to learn the structure of a TCPSocket. I thought it was like an IO stream, so I tried to use seek to move the reading position back a few bytes:
socket.gets #=> hello world
socket.seek(-5, IO::SEEK_CUR)
socket.gets #=> hello world # this should return world
but, it gives me an error:
server.rb:11:in `seek': Illegal seek (Errno::ESPIPE)
Does anyone have an idea why this doesn't work?
If seeking were supported, the socket would need to keep all received data around in case someone decided to seek backwards (and how would a forward seek work: block until more data arrives?). You could fairly easily write a wrapper class around a socket that keeps track of a position and buffers all data, or blocks, as needed.
But maybe you could try to use IO#bytes or IO#chars in combination with Enumerator#peek?
TCP/IP is more like having a series of files on disk that you can only read forward through, one file at a time. The files have to be read sequentially, and you can't jump ahead or back. It's not capable of random I/O like a disk; it's more like a serial connection you can only read from as things appear.
In order to do what you want you have to build a buffer, where you append each block (i.e., file), reconstructing the entire message. If you want to look backwards at any point, you have to look in your buffer. If you want to look forward you have to wait for that block to be received and read and appended.
That's a simplified explanation. TCP can request that blocks be resent, but at the level we normally work at, we're only reading forward.

What is a File IO stream buffer?

I've checked out a few of the forum posts here and can't find quite what I'm looking for. Suppose you are reading in a text document via Ruby. I understand the stream is essentially the characters coming in byte by byte. What is the purpose/best practice of buffering in this case? My book shows plenty of examples of the buffer being utilized, but no real description of what the buffer is or why it even exists. What should I be considering when setting the buffer? For example, the book describes the following method as:
read(n, buffer=nil) reads in n bytes, until the bytes are ready
I don't understand what the statement "until the bytes are ready" means. Does the buffer play a role in this? Please feel free to point me to another place where this is explained, I couldn't for the life of me find it on my own.
An IO can be not only a file but also a network socket, and with networks you regularly have situations where you are ready to process more data but the remote side has paused its sending.
(You usually see a progress bar or a spinner in your browser in these cases.)
So, if you are using regular files, the bytes are always 'ready'.
The Pickaxe book's entry for IO#read says:
Reads at most int bytes from the I/O stream or to the end of file if int is omitted. Returns nil if called at end of file. If buffer (a String) is provided, it is resized accordingly, and input is read directly into it.