Blocking read from a continuously streamed input file - go

I have a file open that I am constantly writing to in Go and reading live in Julia. For some reason my code fails with an EOF error at the first blocking read. What am I doing wrong?
ERROR: LoadError: EOFError: read end of file
R.buffer = open("data/live.bin", "r")
if !isopen(R.buffer)
    return nothing
end
# Error is here: why would it say EOF if the file is still being written to and open?
# Read the length from the first 4 bytes
msglen = read(R.buffer, UInt32)
# Then read that many bytes
bytes = read(R.buffer, msglen)
Note that the file is still held open by the Go process while Julia is reading.
The error traces to line 399 here: https://github.com/JuliaLang/julia/blob/5584620db87603fb8be313092712a9f052b8876f/base/iostream.jl#L399
I have a feeling I need to ccall some methods to get this working.
And here is the Go code:
enc, err := os.OpenFile("data/live.bin", os.O_WRONLY|os.O_APPEND|os.O_CREATE, 0660)
if err != nil {
    log.Error("Error in File Create!", err.Error())
    panic(err)
}
msgLen := make([]byte, 4)
for {
    msg := []byte("mymessage")
    binary.LittleEndian.PutUint32(msgLen, uint32(len(msg)))
    // Write length prefix
    _, err := enc.Write(msgLen)
    if err != nil {
        log.Fatal(err)
    }
    // Write message
    _, err = enc.Write(msg)
    if err != nil {
        log.Fatal(err)
    }
    time.Sleep(5 * time.Second)
}

Related

Reading from a named pipe won't give any output and blocks the code indefinitely

I wrote a piece of code for IPC. The expected behaviour is that the code reads the content from the named pipe and prints the string (with Send("log", buff.String())). First I open the named-pipe reader inside the goroutine; while the reader is open I send a signal that the data can be written to the named pipe (with Send("datarequest", "")). Here is the code:
var wg sync.WaitGroup
wg.Add(1)
go func() {
    // reader part
    file, err := os.OpenFile("tmp/"+os.Args[1], os.O_RDONLY, os.ModeNamedPipe)
    if err != nil {
        Send("error", err.Error())
    }
    var buff bytes.Buffer
    _, err = io.Copy(&buff, file)
    Send("log", buff.String())
    if err != nil {
        Send("error", err.Error())
    }
    wg.Done()
}()
Send("datarequest", "")
wg.Wait()
And here is the code which executes when the signal is sent:
// writer part
file, err := os.OpenFile("tmp/"+execID, os.O_WRONLY, 0777)
if err != nil {
    c <- "[error] error opening file: " + err.Error()
}
bytedata, _ := json.Marshal(moduleParameters)
file.Write(bytedata)
So the behaviour I get is that the code blocks indefinitely on the copy. I really don't know why this happens. When I test it with cat in the terminal I do get the intended result, so my question is: how do I get the same result in code?
Edit
The execID is the same as os.Args[1]
The writer should close the file after it's done sending, using file.Close(); until it does, the reader's io.Copy never sees EOF and blocks. Note that file.Close() may return an error.

Transfer contents of directory over net's TCP connection

I am currently learning Go and I am trying to send the contents of a directory to another machine over a plain tcp connection using Go's net package.
It works fine with individual files and small folders, but I run into issues if the folder contains many subfolders and larger files. I am using the filepath.Walk function to traverse all files in the given directory. For each file or directory I send, I also send a header that gives the receiver the file name, file size, and isDir properties, so I know how long to read for when reading the content. The issue I am having is that after a while, when reading a header, I am reading actual file content of the previous file, even though I have already read that file from the connection.
Here is the writer side. I simply traverse over the directory.
func transferDir(session *Session, dir string) error {
    return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        header := Header{Name: info.Name(), Size: info.Size(), Path: path}
        if info.IsDir() {
            header.SetDirBit()
            session.WriteHeader(header)
            return nil // nothing more to write
        }
        // content is a file. write the file now byte by byte
        file, err := os.Open(path)
        if err != nil {
            return err
        }
        defer file.Close()
        inf, err := file.Stat()
        if err != nil {
            return err
        }
        header.Size = inf.Size() // get the true size of the file
        session.WriteHeader(header)
        buf := make([]byte, BUF_SIZE)
        for {
            n, err := file.Read(buf)
            if err != nil {
                if err == io.EOF {
                    session.Write(buf[:n])
                    session.Flush()
                    break
                }
                log.Println(err)
                return err
            }
            session.Write(buf[:n])
            session.Flush()
        }
        return nil
    })
}
And here is the reader part
func (c *Clone) readFile(h Header) error {
    file, err := os.Create(h.Path)
    if err != nil {
        return err
    }
    defer file.Close()
    var receivedByts int64
    fmt.Printf("Reading File: %s Size: %d\n", h.Name, h.Size)
    for {
        if (h.Size - receivedByts) < BUF_SIZE {
            n, err := io.CopyN(file, c.sesh, (h.Size - receivedByts))
            fmt.Printf("Written: %d err: %v\n", n, err)
            break
        }
        n, err := io.CopyN(file, c.sesh, BUF_SIZE)
        fmt.Printf("Written: %d err: %v\n", n, err)
        receivedByts += BUF_SIZE
        fmt.Println("Bytes Read: ", receivedByts)
    }
    return nil
}
Now the weird part is that when I am looking at the print statements I see something like:
Reading File: test.txt Size: 14024
Written 1024 nil
Bytes Read 1024
... This continues all the way to the break statement
And the total of the bytes read equals the actual file size. Yet the subsequent read for the header returns content from the test.txt file. Almost as if there is still stuff in the buffer, but I think I have already read it...

using io.Pipes() for sending and receiving message

I am using os.Pipe() in my program, but for some reason it gives a bad file descriptor error each time I try to write to or read from it.
Is there something I am doing wrong?
Below is the code:
package main

import (
    "fmt"
    "os"
)

func main() {
    writer, reader, err := os.Pipe()
    if err != nil {
        fmt.Println(err)
    }
    _, err = writer.Write([]byte("hello"))
    if err != nil {
        fmt.Println(err)
    }
    var data []byte
    _, err = reader.Read(data)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(string(data))
}
output :
write |0: Invalid argument
read |1: Invalid argument
You are using an os.Pipe, which returns a pair of FIFO connected files from the os. This is different than an io.Pipe which is implemented in Go.
The invalid argument errors are because you are reading and writing to the wrong files. The signature of os.Pipe is
func Pipe() (r *File, w *File, err error)
which shows that the return values are in the order "reader, writer, error".
and io.Pipe:
func Pipe() (*PipeReader, *PipeWriter)
Also returning in the order "reader, writer"
When you check the error from the os.Pipe function, you are only printing the value. If there was an error, the files are invalid. You need to return or exit on that error.
Pipes are also blocking (though an os.Pipe has a small, hard-coded buffer), so you need to read and write asynchronously. If you swapped this for an io.Pipe it would deadlock immediately. Dispatch the Read call inside a goroutine and wait for it to complete.
Finally, you are reading into a nil slice, which will read nothing. You need to allocate space to read into, and you need to record the number of bytes read to know how much of the buffer is used.
A more correct version of your example would look like:
reader, writer, err := os.Pipe()
if err != nil {
    log.Fatal(err)
}
var wg sync.WaitGroup
wg.Add(1)
go func() {
    defer wg.Done()
    data := make([]byte, 1024)
    n, err := reader.Read(data)
    if n > 0 {
        fmt.Println(string(data[:n]))
    }
    if err != nil && err != io.EOF {
        fmt.Println(err)
    }
}()
_, err = writer.Write([]byte("hello"))
if err != nil {
    fmt.Println(err)
}
wg.Wait()

How can i get the error message from a chain of exec commands which emit errors to stdout

In my searches I've found many examples of how to retrieve the error output of a single exec command, but I'm struggling with a whole chain of them.
In my actual code I have 5 exec processes and a file write of the result, all joined by io pipes. This is a simplified version of what I have going on; in the real code the stdout from each pipe is used as the stdin of the next process, until we copy across for the final file write:
fileOut, err := os.Create(filePath)
if err != nil {
    return fmt.Errorf("file create error: %s : %s", filePath, err)
}
writer := bufio.NewWriter(fileOut)
defer writer.Flush()
var sortStdErr bytes.Buffer
sort := exec.Command("sort", sortFlag)
sort.Stderr = &sortStdErr
sortStdOut, err := sort.StdoutPipe()
if err != nil {
    return fmt.Errorf("sort pipe error: %s : %s", err, sortStdErr.String())
}
if err := sort.Start(); err != nil {
    return fmt.Errorf("sort start error : %s : %s", err, sortStdErr.String())
}
io.Copy(writer, sortStdOut)
if err = sort.Wait(); err != nil {
    // catching error here
    return fmt.Errorf("sort wait error : %s : %s", err, sortStdErr.String())
}
I have simplified the above to illustrate the point (so the actual process throwing the error isn't there), but I know that I'm getting an error related to one of the exec processes connected by io pipes in a chain like the above. The process in question writes its error to stdout; recreating from the terminal I see
comm: file 1 is not in sorted order
from a comm process which sits in there somewhere, but what I see from the final error catch is simply:
exit status 1
the examples I saw suggested reading from stdOut something like this:
var sortStdErr, sortStdOut bytes.Buffer
sort := exec.Command("sort", sortFlag)
sort.Stdout = &sortStdOut
sort.Stderr = &sortStdErr
if err := sort.Run(); err != nil {
    fmt.Printf("error: %s %s\n", err, sortStdOut.String())
}
And this does indeed work, but I don't know how to marry this up with piping the results to the next process. Is there a way to read the error off the pipe from within the cmd.wait error handling or is there a better approach?
I'm on Go 1.7, though I doubt that matters.
If someone could point me in the right direction, ideally with an example, that would be much appreciated.
Try this:
var firstStdErr, firstStdOut bytes.Buffer
firstCommand := exec.Command(
    "sort",
    sortFlag,
)
firstCommand.Stdout = &firstStdOut
firstCommand.Stderr = &firstStdErr
if err := firstCommand.Run(); err != nil {
    fmt.Printf("error: %s %s %s\n", err, firstStdErr.String(), firstStdOut.String())
} else {
    waitStatus := firstCommand.ProcessState.Sys().(syscall.WaitStatus)
    if waitStatus.ExitStatus() != 0 {
        fmt.Println("Non-zero exit code: " + strconv.Itoa(waitStatus.ExitStatus()))
    }
}
var secondStdErr, secondStdOut bytes.Buffer
secondCommand := exec.Command(
    "command 2",
)
secondCommand.Stdin = &firstStdOut
secondCommand.Stdout = &secondStdOut
secondCommand.Stderr = &secondStdErr
if err := secondCommand.Run(); err != nil {
    fmt.Printf("error: %s %s %s\n", err, secondStdErr.String(), secondStdOut.String())
}
fileOut, err := os.Create(filePath)
if err != nil {
    return fmt.Errorf("file create error: %s : %s", filePath, err)
}
defer fileOut.Close()
// sample writing to a file
fileOut.Write(firstStdErr.Bytes())
fileOut.Write(firstStdOut.Bytes())
fileOut.Write(secondStdOut.Bytes())

Error on trying to read gzip files in golang

I am trying to read gzip files using compress/gzip. I am using http.DetectContentType because I do not know whether I will get a plain text file or a gzipped one. My code is straightforward, as below:
f, err := os.Open(fullpath)
if err != nil {
    log.Panicf("Can not open file %s: %v", fullpath, err)
    return ""
}
defer f.Close()
buff := make([]byte, 512)
_, err = f.Read(buff)
if err != nil && err != io.EOF {
    log.Panicf("Cannot read buffer %v", err)
    return ""
}
switch filetype := http.DetectContentType(buff); filetype {
case "application/x-gzip":
    log.Println("File Type is", filetype)
    reader, err := gzip.NewReader(f)
    if err != nil && err != io.EOF {
        log.Panicf("Cannot read gzip archive %v", err)
        return ""
    }
    defer reader.Close()
    target := "/xx/yy/abcd.txt"
    writer, err := os.Create(target)
    if err != nil {
        log.Panicf("Cannot write unarchived file %v", err)
        return ""
    }
    defer writer.Close()
    _, err = io.Copy(writer, reader)
    return target
}
The problem is that the gzip reader always errors out with "gzip: invalid header". I have tried the zlib library too, but in vain. I gzipped the source file on a Mac using the command-line gzip tool.
Please show me where I am going wrong.
You're reading the first 512 bytes of the file, so the gzip.Reader won't ever see them. Since these are regular files, you can seek back to the start after a successful Read:
f.Seek(0, io.SeekStart)