Hi friends, I want to write data to a writer and pass it to a library through a reader so that it can read it.
The problem I have now is that png.Encode never returns and gets stuck at that call:
r, w := io.Pipe()
err := png.Encode(w, img)
Please tell me the solution if possible. I don't strictly need this particular approach to work: if you know another way to have data written to a writer and read back from a reader, please suggest it. There were other workarounds, but I'm using two libraries, one of which only accepts a writer and the other only a reader.
w is blocked waiting for a reader to read the data written to the pipe, which in turn blocks Encode.
Reading from r will unblock the writes to the pipe and let Encode proceed.
each Write to the PipeWriter blocks until it has satisfied one or more
Reads from the PipeReader that fully consume the written data
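For example, here is a minimal sketch of the usual pattern, assuming the consuming library exposes something like a consume(io.Reader) function (a placeholder name here): run png.Encode in its own goroutine and hand the PipeReader to the consumer.

package main

import (
    "image"
    "image/png"
    "io"
    "log"
)

// consume stands in for the library that only accepts an io.Reader.
func consume(r io.Reader) error {
    _, err := io.Copy(io.Discard, r) // placeholder: just drain the reader
    return err
}

func main() {
    img := image.NewRGBA(image.Rect(0, 0, 10, 10)) // placeholder image

    r, w := io.Pipe()

    // Encode in a separate goroutine so the pipe's writes can be
    // consumed concurrently on the reader side.
    go func() {
        // CloseWithError(nil) behaves like Close; a non-nil encode
        // error is propagated to the reader.
        w.CloseWithError(png.Encode(w, img))
    }()

    if err := consume(r); err != nil {
        log.Fatal(err)
    }
}

The key point is that Encode and the reader must run concurrently; with both on the same goroutine, the first pipe Write deadlocks exactly as described above.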
Related
io.ReadWriteCloser's Read() blocks until there is data available to read.
What if I want to test whether it has data available to read, without actually calling Read()? I need to do some other processing between:
It has data available to read
and
io.Copy(thisReadWriteCloser, anotherReadWriteCloser)
Use the bufio.Reader Peek() function:
bi := bufio.NewReader(i)
bi.Peek(1)
But I have a follow-up issue: I'm not able to re-use the original io.ReadWriteCloser after executing bi.Peek(1), since there's no direct way to convert from `bufio.Reader` back to `io.ReadWriteCloser`.
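One way around that, as a sketch: wrap the original io.ReadWriteCloser together with the bufio.Reader, so reads go through the buffer (keeping the peeked bytes) while Write and Close still go to the original. The type and helper names below are made up for illustration.

package main

import (
    "bufio"
    "fmt"
    "io"
    "net"
)

// bufferedRWC reads via the bufio.Reader (so bytes consumed by Peek are
// not lost) and forwards Write/Close to the original io.ReadWriteCloser.
type bufferedRWC struct {
    *bufio.Reader
    rwc io.ReadWriteCloser
}

func (b bufferedRWC) Write(p []byte) (int, error) { return b.rwc.Write(p) }
func (b bufferedRWC) Close() error                { return b.rwc.Close() }

// wrap returns the wrapper plus the bufio.Reader used for Peek.
func wrap(i io.ReadWriteCloser) (io.ReadWriteCloser, *bufio.Reader) {
    bi := bufio.NewReader(i)
    return bufferedRWC{Reader: bi, rwc: i}, bi
}

func main() {
    c1, c2 := net.Pipe() // in-memory ReadWriteCloser pair for the demo
    go c2.Write([]byte("hi"))

    rwc, bi := wrap(c1)
    b, _ := bi.Peek(1) // look at the first byte without consuming it
    fmt.Printf("peeked %q\n", b)

    buf := make([]byte, 2)
    rwc.Read(buf) // the peeked byte is still delivered here
    fmt.Printf("read %q\n", buf)
    rwc.Close()
}

After bi.Peek(1), pass the wrapper (not the original connection) to io.Copy; the peeked bytes are still delivered by the buffered Read.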
I have implemented a custom Write interface for my cloud program.
My problem so far is that after I am done copying files to my writer and have closed the Writer, the writer still has a few Writes to do (usually maybe 4 writes of about 4096 bytes each). The last Write is usually less than 4096 bytes.
This has not happened yet, but I know there is roughly a 1/4096 chance that the last Write is exactly 4096 bytes, and then my program won't terminate.
I am using this for a zipping program, and io.EOF is not effective since every written chunk carries one; checking whether the writer is closed also comes too early, while there are still some writes to do.
What is the best way to handle this situation?
***EDIT***
I ended up implementing more robust Write(), Flush() and Close() methods. Now everything is good if I use defer Close(), but I still get the same problem if I manually call Close() at the end.
Since you have full control over the writer, you could use a WaitGroup to wait in your main for all goroutines to finish.
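A minimal sketch of that suggestion, with hypothetical chunk data standing in for the writer's pending uploads:

package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup

    // Hypothetical stand-in for the chunks the writer still has to flush.
    chunks := [][]byte{make([]byte, 4096), make([]byte, 4096), make([]byte, 1024)}

    for i, chunk := range chunks {
        wg.Add(1)
        go func(i int, chunk []byte) {
            defer wg.Done()
            // The real upload of the chunk would happen here.
            fmt.Printf("wrote chunk %d (%d bytes)\n", i, len(chunk))
        }(i, chunk)
    }

    // Block in main until every in-flight write has finished, so the
    // program cannot exit while writes are still pending.
    wg.Wait()
}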
The problem was solved by implementing a more robust Close() function. I also used defer Close() to make sure that Go handled all the goroutines internally.
The problem I'm trying to solve is using io.Reader and io.Writer in a net application without using bufio and strings the way the examples I've been able to find online do. For efficiency I'm trying to avoid the memory copies those imply.
I've created a test application using net.Pipe on the play area (https://play.golang.org/p/-7YDs1uEc5). There is a data source and sink which talk through a net.Pipe pair of connections (to model a network connection) and a loopback on the far end to reflect the data right back at us.
The program gets as far as the loopback agent reading the sent data, but as far as I can see the write back to the connection blocks; it certainly never completes. Additionally, the receiver in the Sink never receives any data whatsoever.
I can't figure out why the write cannot proceed as it's wholly symmetrical with the path that does work. I've written other test systems that use bi-directional network connections but as soon as I stop using bufio and ReadString I encounter this problem. I've looked at the code of those and can't see what I've missed.
Thanks in advance for any help.
The issue is on line 68:
data_received := make([]byte, 0, count)
This line creates a slice with length 0 and capacity count. The call to Read does not read data because the length is 0. The call to Write blocks because the data is never read.
Fix the issue by changing the line to:
data_received := make([]byte, count)
playground example
Note that "Finished Writing" may not be printed because the program can exit before dataSrc finishes executing.
I'm trying to download a file from S3 and upload that file to another bucket in S3. Copy API won't work here because I've been told not to use it.
Getting an object from S3 returns a response.Body that's an io.ReadCloser, while to upload that file the upload payload takes a Body that's an io.ReadSeeker.
The only way I can figure this out is by saving the response.Body to a file and then passing that file as an io.ReadSeeker. This would require writing the entire file to disk first and then reading the entire file back from disk, which sounds pretty wrong.
What I would like to do is:
resp, _ := conn.GetObject(&s3.GetObjectInput{Key: "bla"})
conn.PutObject(&s3.PutObjectInput{Body: resp.Body}) // resp.Body is an io.ReadCloser and the field type expects an io.ReadSeeker
Question is, how do I go from an io.ReadCloser to an io.ReadSeeker in the most efficient way possible?
io.ReadSeeker is the interface that groups the basic Read() and Seek() methods. The definition of the Seek() method:
Seek(offset int64, whence int) (int64, error)
An implementation of the Seek() method requires being able to seek anywhere in the source, which requires all of the source to be available or reproducible. A file is a perfect example: the file is saved permanently on your disk, and any part of it can be read at any time.
response.Body is implemented to read from the underlying TCP connection. Reading from the underlying TCP connection gives you the data that the other side sends you. The data is not cached, and the other side won't send you the data again upon request. That's why response.Body does not implement io.Seeker (and thus not io.ReadSeeker either).
So in order to obtain an io.ReadSeeker from an io.Reader or io.ReadCloser, you need something that caches all the data, so that upon request it can seek anywhere in it.
This caching mechanism may be writing it to a file as you mentioned, or you can read everything into memory, into a []byte, using ioutil.ReadAll(), and then use bytes.NewReader() to obtain an io.ReadSeeker from that []byte. Of course this has its limitations: all the content must fit into memory, and you might not want to reserve that much memory for this file-copy operation.
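A minimal sketch of the in-memory route, assuming the body fits comfortably in memory (the helper name is made up):

package main

import (
    "bytes"
    "fmt"
    "io"
    "io/ioutil"
    "strings"
)

// toReadSeeker buffers the whole body in memory and returns an
// io.ReadSeeker over it. Only suitable when the object fits in memory.
func toReadSeeker(body io.ReadCloser) (io.ReadSeeker, error) {
    defer body.Close()
    data, err := ioutil.ReadAll(body) // io.ReadAll in newer Go versions
    if err != nil {
        return nil, err
    }
    return bytes.NewReader(data), nil
}

func main() {
    // ioutil.NopCloser turns any io.Reader into an io.ReadCloser for the demo.
    body := ioutil.NopCloser(strings.NewReader("object data"))
    rs, err := toReadSeeker(body)
    if err != nil {
        panic(err)
    }
    data, _ := ioutil.ReadAll(rs)
    rs.Seek(0, io.SeekStart) // rewinding is now possible
    fmt.Println(string(data))
}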
All in all, an implementation of io.Seeker or io.ReadSeeker requires all the source data to be available, so your best bet is writing it to a file or, for small files, reading everything into a []byte and streaming the content of that byte slice.
As an alternative, use github.com/aws/aws-sdk-go/service/s3/s3manager.Uploader, which takes an io.Reader as input.
I imagine the reason that PutObject takes an io.ReadSeeker instead of an io.Reader is that requests to S3 need to be signed (and have a content length), but you can't generate a signature until you have all the data. The stream-y way to do this would be to buffer the input into chunks as they come in and use the multipart upload API to upload each chunk separately. This is (I think) what s3manager.Uploader does behind the scenes.
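A sketch of that streaming route with the aws-sdk-go v1 API, piping GetObject's Body straight into s3manager.Uploader (bucket and key names are placeholders):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
    sess := session.Must(session.NewSession())
    svc := s3.New(sess)

    // Download: the Body is an io.ReadCloser streaming from S3.
    resp, err := svc.GetObject(&s3.GetObjectInput{
        Bucket: aws.String("source-bucket"), // placeholder
        Key:    aws.String("bla"),           // placeholder
    })
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    // Upload: s3manager.Uploader accepts a plain io.Reader and performs
    // chunked multipart uploads under the hood, so no Seek is needed.
    uploader := s3manager.NewUploader(sess)
    _, err = uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String("dest-bucket"), // placeholder
        Key:    aws.String("bla"),         // placeholder
        Body:   resp.Body,
    })
    if err != nil {
        log.Fatal(err)
    }
}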
I want to read a text file with goroutines. The order of the text that gets read from the file does not matter. How do I read a file concurrently?
scanner := bufio.NewScanner(file)
for scanner.Scan() {
lines = append(lines, scanner.Text())
}
For example, if the text file contains I like Go, I want to read this file without caring about the order. It could be []string{"Go", "like", "I"}.
First of all, if you're reading from an io.Reader, consider it as reading from a stream. It's a single input source, which you can't 'read in parallel' because of its nature: under the hood, you're getting a byte, waiting for another one, getting one more, and so on. Tokenizing it into words comes later, in the buffer.
Second, I hope you're not trying to use goroutines as a 'silver bullet' in a 'let's add goroutines and everything will just speed up' manner. Just because Go gives you such an easy way to use concurrency doesn't mean you should use it everywhere.
And finally, if you really need to split a huge file into words in parallel and you think the splitting part will be the bottleneck (I don't know your case, but I really doubt it), then you have to invent your own algorithm and use the 'os' package to Seek()/Read() parts of the file, each processed by its own goroutine, and somehow track which parts have already been processed; a rough sketch of that approach is below.
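A rough sketch of that chunked approach, assuming a placeholder input.txt and deliberately ignoring words that straddle chunk boundaries (a real version would have to stitch those edges together):

package main

import (
    "fmt"
    "os"
    "strings"
    "sync"
)

func main() {
    const chunkSize = 1 << 20 // 1 MiB per goroutine (arbitrary)

    f, err := os.Open("input.txt") // placeholder file name
    if err != nil {
        panic(err)
    }
    defer f.Close()

    info, err := f.Stat()
    if err != nil {
        panic(err)
    }
    size := info.Size()

    words := make(chan string)
    var wg sync.WaitGroup

    // One goroutine per chunk; ReadAt can be called in parallel.
    for off := int64(0); off < size; off += chunkSize {
        wg.Add(1)
        go func(off int64) {
            defer wg.Done()
            buf := make([]byte, chunkSize)
            n, _ := f.ReadAt(buf, off)
            // NOTE: words straddling a chunk boundary get split here;
            // a real implementation must handle the edges.
            for _, w := range strings.Fields(string(buf[:n])) {
                words <- w
            }
        }(off)
    }

    go func() {
        wg.Wait()
        close(words)
    }()

    var lines []string
    for w := range words {
        lines = append(lines, w) // order is not preserved
    }
    fmt.Println(lines)
}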