I have a bunch of *io.PipeWriter values created and would like to make a MultiWriter based on all those pipe writers in a function. So I call a function something like
func copyToWriters(reader *bufio.Reader, errs chan error, writers []*io.PipeWriter) {
    for _, writer := range writers {
        defer writer.Close()
    }
    mw := io.MultiWriter(writers)
    _, err := io.Copy(mw, reader)
    if err != nil {
        errs <- err
    }
}
I call the method with arguments copyToWriters(reader, errs, []*io.PipeWriter{w1, w2})
But it says
cannot use writers (type []*io.PipeWriter) as type []io.Writer in argument to io.MultiWriter. But
if I change io.MultiWriter(writers) to io.MultiWriter(writers[0], writers[1]) it works. How can I make the existing function work without having to pass the writers separately?
Unfortunately, Go's type system does not allow converting a []*io.PipeWriter to a []io.Writer, even though *io.PipeWriter implements io.Writer, because such a conversion would require an O(n) operation (reference).
The best you can do is create another []io.Writer and copy the pipe writers into it:
ws := make([]io.Writer, len(writers))
for i, w := range writers {
    ws[i] = w
}
mw := io.MultiWriter(ws...)
As for why you need the ..., see the documentation on variadic parameters.
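Applied to your function, a minimal sketch (only the conversion added, everything else unchanged) would look like this:
func copyToWriters(reader *bufio.Reader, errs chan error, writers []*io.PipeWriter) {
    for _, writer := range writers {
        defer writer.Close()
    }
    // Build a []io.Writer from the []*io.PipeWriter before calling io.MultiWriter.
    ws := make([]io.Writer, len(writers))
    for i, w := range writers {
        ws[i] = w
    }
    mw := io.MultiWriter(ws...)
    if _, err := io.Copy(mw, reader); err != nil {
        errs <- err
    }
}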
I am copying a network stream to a file using io.Copy. I would like to extract the current speed, preferably in bytes per second, that the transfer is operating at.
res, err := http.Get(url)
if err != nil {
    panic(err)
}
// Open output file
out, err := os.OpenFile("output", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    panic(err)
}
// Close output file as well as body
defer out.Close()
defer func(Body io.ReadCloser) {
    err := Body.Close()
    if err != nil {
        panic(err)
    }
}(res.Body)
_, err = io.Copy(out, res.Body)
As noted in the comments - the entire transfer rate is easily computed after the fact - especially when using io.Copy. If you want to track "live" transfer rates - and poll the results over a long file transfer - then a little more work is involved.
Below I've outlined a simple io.Reader wrapper to track the overall transfer rate. For brevity it is not goroutine safe, but it would be trivial to make it so; one could then poll the progress from another goroutine while the main goroutine does the reading.
You can create an io.Reader wrapper and use it to record the moment of the first read and the running count of bytes read. The final result may look like this:
r := NewRater(resp.Body) // io.Reader wrapper
n, err := io.Copy(out, r)
log.Print(r) // stringer method shows human readable "b/s" output
To implement this, one approach:
type rate struct {
    r          io.Reader
    count      int64 // may have large (2GB+) files - so don't use int
    start, end time.Time
}

func NewRater(r io.Reader) *rate { return &rate{r: r} }
Then we need the wrapper's Read to track the underlying io.Reader's progress:
func (r *rate) Read(b []byte) (n int, err error) {
    if r.start.IsZero() {
        r.start = time.Now()
    }
    n, err = r.r.Read(b) // underlying io.Reader read
    r.count += int64(n)
    if err == io.EOF {
        r.end = time.Now()
    }
    return
}
the rate at any time can be polled like so - even before EOF:
func (r *rate) Rate() (n int64, d time.Duration) {
    end := r.end
    if end.IsZero() {
        end = time.Now()
    }
    return r.count, end.Sub(r.start)
}
and a simple Stringer method to show b/s:
func (r *rate) String() string {
    n, d := r.Rate()
    return fmt.Sprintf("%.0f b/s", float64(n)/(d.Seconds()))
}
Note: the above io.Reader wrapper has no locking in place, so operations must be from the same goroutine. Since the question relates to io.Copy - then this is a safe assumption to make.
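If you do want to poll from a separate goroutine, a minimal sketch of a locked variant (the safeRate and NewSafeRater names are mine, not part of the code above; it assumes the usual io, sync, and time imports) could look like this:
type safeRate struct {
    r io.Reader

    mu         sync.Mutex // guards the fields below
    count      int64
    start, end time.Time
}

func NewSafeRater(r io.Reader) *safeRate { return &safeRate{r: r} }

func (r *safeRate) Read(b []byte) (int, error) {
    r.mu.Lock()
    if r.start.IsZero() {
        r.start = time.Now()
    }
    r.mu.Unlock()

    n, err := r.r.Read(b) // the blocking read happens outside the lock

    r.mu.Lock()
    r.count += int64(n)
    if err == io.EOF {
        r.end = time.Now()
    }
    r.mu.Unlock()
    return n, err
}

func (r *safeRate) Rate() (int64, time.Duration) {
    r.mu.Lock()
    defer r.mu.Unlock()
    end := r.end
    if end.IsZero() {
        end = time.Now()
    }
    return r.count, end.Sub(r.start)
}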
I have many net.Conn values and I want to implement a multi-writer so I can send data to every available net.Conn.
My current approach is using io.MultiWriter.
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst[0], dst[1])
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
But the problem is that I must specify every net.Conn index in io.MultiWriter, and that's a problem because the slice size is dynamic.
When I try another approach and pass the []net.Conn to io.MultiWriter, like the code below,
func route(conn net.Conn, dst []net.Conn) {
    targets := io.MultiWriter(dst...)
    _, err := io.Copy(targets, conn)
    if err != nil {
        log.Println(err)
    }
}
there is an error: "cannot use dst (variable of type []net.Conn) as []io.Writer value in argument to io.MultiWriter"
Is there a proper way to handle this case, so I can pass the net.Conn slice to io.MultiWriter?
Thank you.
io.MultiWriter() has a parameter of type ...io.Writer, so you may only expand a slice of type []io.Writer into it.
So first create a slice of the proper type, copy the net.Conn values to it, then pass it like this:
ws := make([]io.Writer, len(dst))
for i, c := range dst {
    ws[i] = c
}
targets := io.MultiWriter(ws...)
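Putting it together, a sketch of route() with the conversion inlined (assuming conn is a single source net.Conn being read from) could look like:
func route(conn net.Conn, dst []net.Conn) {
    // Copy each net.Conn into a []io.Writer so the slice can be expanded
    // into io.MultiWriter's variadic ...io.Writer parameter.
    ws := make([]io.Writer, len(dst))
    for i, c := range dst {
        ws[i] = c
    }
    targets := io.MultiWriter(ws...)
    if _, err := io.Copy(targets, conn); err != nil {
        log.Println(err)
    }
}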
When I try to copy from a Reader to a Writer manually, I notice that this works:
func fromAToB(a, b net.Conn) {
    buf := make([]byte, 1024*32)
    for {
        n, err := a.Read(buf)
        if n > 0 {
            if err != nil {
                log.Fatal(err)
            }
            b.Write(buf[0:n])
        }
    }
}
But this doesn't
func fromAToB(a, b net.Conn) {
    buf := make([]byte, 1024*32)
    for {
        _, err := a.Read(buf)
        if err != nil {
            log.Fatal(err)
        }
        b.Write(buf)
    }
}
So the questions are:
Why is the check if n>0 necessary?
Is this only necessary for net.Conn or for any type that implements the Reader and Writer interfaces?
EDIT: The second snippet runs without any runtime error; it's just that the behavior is not correct. I want to know what the effect of that n > 0 check is and what happens under the surface when I remove it.
There's already a function io.Copy to do exactly this. You can see how it's implemented for a good example. It works with all io.Reader/io.Writer types.
I figured it out: without n, it will write the whole buffer (32*1024 bytes) to the Writer instead of just n bytes, and that's the source of the weird behavior.
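For reference, a sketch of a manual copy loop that follows the io.Reader contract (handle the n bytes that were read before looking at the error, and stop on io.EOF), which is essentially the work io.Copy already does for you:
func fromAToB(a, b net.Conn) error {
    buf := make([]byte, 32*1024)
    for {
        n, err := a.Read(buf)
        if n > 0 {
            // Only the first n bytes of buf hold data from this read.
            if _, werr := b.Write(buf[:n]); werr != nil {
                return werr
            }
        }
        if err == io.EOF {
            return nil // normal end of stream
        }
        if err != nil {
            return err
        }
    }
}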
I want to pass []map[string]interface{} by reference to a function. Here is my attempt.
package main

import "fmt"

func main() {
    b := map[string]int{"foo": 1, "bar": 2}
    a := [...]map[string]int{b}
    fmt.Println(a)
    editit(a)
    fmt.Println(a)
}

func editit(a interface{}) {
    fmt.Println(a)
    b := map[string]int{"foo": 3, "bar": 4}
    a = [...]map[string]int{b}
    fmt.Println(a)
}
https://play.golang.org/p/9Bt15mvud1
Here is another attempt, and what I ultimately want to do. This does not compile.
func (self BucketStats) GetSamples() {
    buckets := []make(map[string]interface{})
    self.GetAuthRequest(self.url, &buckets)
    //ProcessBuckets()
}

func (self BucketStats) GetAuthRequest(rurl string, **data interface{}) (err error) {
    client := &http.Client{}
    req, err := http.NewRequest("GET", rurl, nil)
    req.SetBasicAuth(self.un, self.pw)
    resp, err := client.Do(req)
    if err != nil {
        return
    }
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return
    }
    // it's all for this!!!
    err = json.Unmarshal(body, data)
    return
}
Several things wrong here.
First, [...]map[string]int{b} is not in fact a slice, but a fixed-length array. The [...] syntax means "make an array, and set the length at compile time based on what's being put into it". The slice syntax is simply []map[string]int{b}. As a result, your call to editit(a) is in fact passing a copy of the array, not a reference (slices are innately references, arrays are not). When a is reassigned in editit(), you're reassigning the copy, not the original, so nothing changes.
Second, it's almost never useful to use pointers to interfaces. In fact, the Go runtime was changed a few versions back to not automatically dereference pointers to interfaces (like it does for pointers to almost everything else) to discourage this habit. Interfaces are innately references already, so there's little reason to make a pointer to one.
Third, you don't actually need to pass a reference to an interface here. You're trying to unmarshal into the fundamental data structure contained within that interface. You don't actually care about the interface itself. GetAuthRequest(rurl string, data interface{}) works just fine here.
func (self BucketStats) GetSamples() {
    var buckets []map[string]interface{}
    self.GetAuthRequest(self.url, &buckets)
    //ProcessBuckets()
}

func (self BucketStats) GetAuthRequest(rurl string, data interface{}) (err error) {
    client := &http.Client{}
    req, err := http.NewRequest("GET", rurl, nil)
    req.SetBasicAuth(self.un, self.pw)
    resp, err := client.Do(req)
    if err != nil {
        return
    }
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return
    }
    // it's all for this!!!
    err = json.Unmarshal(body, data)
    return
}
Let me walk you through what exactly takes place, in order:
var buckets []map[string]interface{}
We don't need a make here, because json.Unmarshal() will fill it for us.
self.GetAuthRequest(self.url, &buckets)
This passes a reference into an interface field. Within GetAuthRequest, data is an interface with underlying type *[]map[string]interface{} and an underlying value equal to the address of the original buckets variable in GetSamples().
err = json.Unmarshal(body, data)
This passes the interface data, by value, to the interface argument of json.Unmarshal(). Inside json.Unmarshal(), it has a new interface value v with underlying type *[]map[string]interface{} and an underlying value equal to the address of the original buckets variable in GetSamples(). This interface is a different variable, with a different address in memory, from the interface that held the same data in GetAuthRequest, but the data was copied over, and the data contains a reference to your slice, so you're still good.
json.Unmarshal() will, by reflection, fill the slice pointed to by the interface you handed it with the data in your request. It has a reference to the slice header buckets that you handed it, even though it passed through two interfaces to get there, so any changes it makes will affect the original buckets variable.
When you get all the way back up to ProcessBuckets(), the buckets variable will contain all of your data.
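To see that flow in isolation, here is a small self-contained sketch (the JSON body and the unmarshalInto helper are made up for illustration): a *[]map[string]interface{} travels through an interface{} parameter and json.Unmarshal still fills the caller's slice.
package main

import (
    "encoding/json"
    "fmt"
)

// unmarshalInto mimics the relevant part of GetAuthRequest: data is an
// interface{} whose underlying value is a *[]map[string]interface{}.
func unmarshalInto(body []byte, data interface{}) error {
    return json.Unmarshal(body, data)
}

func main() {
    var buckets []map[string]interface{}
    body := []byte(`[{"name":"a","size":1},{"name":"b","size":2}]`)
    if err := unmarshalInto(body, &buckets); err != nil {
        panic(err)
    }
    fmt.Println(buckets) // the caller's slice was filled through the interface
}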
As an ancillary suggestion, don't use named returns if your function is more than a few lines long; it's better to return your variables explicitly. Named returns interact badly with variable shadowing: if a short declaration such as body, err := ... appears inside a nested block, it creates a new err that shadows the named return value, and a bare return then silently hands back a nil error. Returning your values explicitly makes that class of bug much harder to write. A much better style would be:
func (self BucketStats) GetAuthRequest(rurl string, data interface{}) error {
    client := &http.Client{}
    req, err := http.NewRequest("GET", rurl, nil)
    req.SetBasicAuth(self.un, self.pw)
    resp, err := client.Do(req)
    if err != nil {
        return err
    }
    body, err := ioutil.ReadAll(resp.Body)
    if err != nil {
        return err
    }
    // it's all for this!!!
    return json.Unmarshal(body, data)
}
The function is passing a **interface{} to Unmarshal. To pass the *[]map[string]interface{} through to Unmarshal, change the function signature to:
func (self BucketStats) GetAuthRequest(rurl string, data interface{}) (err error) {
There is a Go tour exercise at https://tour.golang.org/methods/23. I've solved it like this:
func (old_reader rot13Reader) Read(b []byte) (int, error) {
    const LEN int = 1024
    tmp_bytes := make([]byte, LEN)
    old_len, err := old_reader.r.Read(tmp_bytes)
    if err == nil {
        tmp_bytes = tmp_bytes[:old_len]
        rot13(tmp_bytes)
        return len(tmp_bytes), nil
    } else {
        return 0, err
    }
}

func main() {
    s := strings.NewReader("Lbh penpxrq gur pbqr!")
    r := rot13Reader{s}
    io.Copy(os.Stdout, &r)
}
rot13 is correct, and debug output right before the return shows the correct string. But why is there no output to the console?
The Read method for an io.Reader needs to operate on the byte slice provided to it. You're reading into a new slice, and never modifying the original.
Just use b throughout the Read method:
func (old_reader rot13Reader) Read(b []byte) (int, error) {
    n, err := old_reader.r.Read(b)
    rot13(b[:n])
    return n, err
}
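For completeness, a runnable sketch of that fix with a stubbed rot13 (the question doesn't show its rot13, so this implementation is mine):
package main

import (
    "io"
    "os"
    "strings"
)

type rot13Reader struct {
    r io.Reader
}

// rot13 rotates ASCII letters by 13 places, in place.
func rot13(b []byte) {
    for i, c := range b {
        switch {
        case c >= 'A' && c <= 'Z':
            b[i] = 'A' + (c-'A'+13)%26
        case c >= 'a' && c <= 'z':
            b[i] = 'a' + (c-'a'+13)%26
        }
    }
}

func (old_reader rot13Reader) Read(b []byte) (int, error) {
    n, err := old_reader.r.Read(b)
    rot13(b[:n])
    return n, err
}

func main() {
    s := strings.NewReader("Lbh penpxrq gur pbqr!")
    r := rot13Reader{s}
    io.Copy(os.Stdout, &r) // prints "You cracked the code!"
}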
You're never modifying b in your reader. The semantics of io.Reader's Read are that you put the data into b's underlying array directly.
Assuming the rot13() function also modifies in place, this will work (edit: I've tried to keep this code close to your version so you can more easily see what's changed; JimB's answer is the more idiomatic solution to this problem):
func (old_reader rot13Reader) Read(b []byte) (int, error) {
    tmp_bytes := make([]byte, len(b))
    old_len, err := old_reader.r.Read(tmp_bytes)
    tmp_bytes = tmp_bytes[:old_len]
    rot13(tmp_bytes)
    for i := range tmp_bytes {
        b[i] = tmp_bytes[i]
    }
    return old_len, err
}
Example (with stubbed rot13()): https://play.golang.org/p/vlbra-46zk
On a side note, from an idiomatic perspective, old_reader isn't a proper receiver name (nor is old_len a proper variable name). Go prefers short receiver names (like r or rdr in this case) and camel case over underscores (underscores will actually trigger a golint warning).
Edit2: A more idiomatic version of your code. Kept the same mechanism of action, just cleaned it up a bit.
func (rdr rot13Reader) Read(b []byte) (int, error) {
    tmp := make([]byte, len(b))
    n, err := rdr.r.Read(tmp)
    tmp = tmp[:n]
    rot13(tmp)
    for i := range tmp {
        b[i] = tmp[i]
    }
    return n, err
}
From this, removing the tmp byte slice and using the destination b directly results in JimB's idiomatic solution to the problem.
Edit3: Updated to fix the issue Paul pointed out in comments.