Could someone please explain (and/or share examples) when and why readers should be closed explicitly, i.e. implement io.ReadCloser, not just io.Reader?
For example, when you are working with files, or with any resource that must be closed to release what it has allocated (memory, file descriptors, or resources owned by C code called from Go).
You may use it when you have both Read and Close methods. Here is an example showing that one common function can work with different types through io.ReadCloser:
package main

import (
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    f, err := os.Open("./main.go")
    if err != nil {
        log.Fatal(err)
    }
    doIt(f)
    doIt(os.Stdin)
}

func doIt(rc io.ReadCloser) {
    defer rc.Close()
    buf := make([]byte, 4)
    n, err := rc.Read(buf)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s\n", buf[:n])
}
Run it and enter 12345 as input. Output:
pack
12345
1234
See also:
Does Go automatically close resources if not explicitly closed?
It is about stating explicitly that you need both the Reader and the Closer interface. Say you write some functionality that reads data, but you also want to close the resource when you are done (again, so as not to leak descriptors):
func ...(r io.ReadCloser) {
    defer r.Close()
    ... // some reading
}
Anything you pass to it must implement both interfaces, whether it is an *os.File or any custom struct. In this case you are forcing the client of your API to implement both Read and Close, not just io.Reader.
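To illustrate the "any custom struct" part, here is a minimal sketch of a user-defined type that satisfies io.ReadCloser (the type and its fields are made up for the example; io.ReadAll requires Go 1.16+):

package main

import (
    "fmt"
    "io"
    "strings"
)

// trackedReader wraps an io.Reader and pretends to own a resource
// that must be released in Close.
type trackedReader struct {
    r      io.Reader
    closed bool
}

func (t *trackedReader) Read(p []byte) (int, error) {
    return t.r.Read(p)
}

func (t *trackedReader) Close() error {
    t.closed = true // release the real resource here
    return nil
}

func main() {
    var rc io.ReadCloser = &trackedReader{r: strings.NewReader("hello")}
    defer rc.Close()

    b, err := io.ReadAll(rc)
    if err != nil {
        fmt.Println(err)
        return
    }
    fmt.Println(string(b)) // hello
}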
Related
I want to wrap the Read function of a net.Conn (net.Conn.Read()). The purpose is to read the TLS/SSL handshake messages. https://pkg.go.dev/net#TCPConn.Read
nc, err := net.Dial("tcp", "google.com:443")
if err != nil {
    fmt.Println(err)
}
tls.Client(nc, &tls.Config{})
Is there any way to do this?
Thanks in advance
Use the following code to intercept Read on a net.Conn:
type wrap struct {
    // Conn is the wrapped net.Conn.
    // Because it's an embedded field, the
    // net.Conn methods are automatically promoted
    // to wrap.
    net.Conn
}
// Read calls through to the wrapped read and
// prints the bytes that flow through. Replace
// the print statement with whatever is appropriate
// for your application.
func (w wrap) Read(p []byte) (int, error) {
    n, err := w.Conn.Read(p)
    fmt.Printf("%x\n", p[:n]) // example
    return n, err
}
Wrap like this:
tnc := tls.Client(wrap{nc}, &tls.Config{})
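To actually see the handshake bytes, you still need to drive the handshake on the returned *tls.Conn. A minimal sketch, assuming the wrap type from above (the host name is just a placeholder):

package main

import (
    "crypto/tls"
    "log"
    "net"
)

func main() {
    nc, err := net.Dial("tcp", "google.com:443")
    if err != nil {
        log.Fatal(err)
    }

    // wrap is the embedding type defined in the answer above.
    tnc := tls.Client(wrap{nc}, &tls.Config{ServerName: "google.com"})

    // Handshake forces the TLS handshake to run now, so the wrapped
    // Read prints the handshake records as they arrive.
    if err := tnc.Handshake(); err != nil {
        log.Fatal(err)
    }
    tnc.Close()
}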
The previous answer does get the job done. However, I would recommend Liz Rice's talk: GopherCon 2018: Liz Rice - The Go Programmer's Guide to Secure Connections.
Going through her code in Github, you might find a more elegant way to achieve what you want.
Start with the client code on line 26.
Just getting started with Go and I'm wondering about the following situation:
I have a pretty simple codebase where I simply want to open/close a database connection and execute a simple query. I can do this as follows (just showing the important bits here):
import (
    "database/sql"
    _ "github.com/lib/pq"
)

func (db *Database) ExecQueryA() {
    dbConn, err := sql.Open("postgres", db.psqlconn)
    if err != nil {
        panic(err)
    }
    defer dbConn.Close()

    _, err = dbConn.Exec(...
    if err != nil {
        panic(err)
    }
}
The above works fine, but if I want to write x more functions like this, I do not want to duplicate this part:
dbConn, err := sql.Open("postgres", db.psqlconn)
if err != nil {
    panic(err)
}
defer dbConn.Close()
at the start of each function (i.e. I want to avoid code duplication). In Python I would write a context manager for this, i.e. I would use a with ... statement that opens and closes the database connection for me. When using Go, what is the best way to avoid code duplication in this use case?
As Brits points out in a comment on your question, the *sql.DB does not need to be opened and closed every time you intend to use it. Instead, a single shared instance of *sql.DB, opened once at the launch of your app, is common and recommended practice.
... the Open function should be called just once. It is rarely necessary
to close a DB.
Note that *sql.DB is not a connection, instead, it is a pool that manages multiple connections, opens as many as necessary (and possible), keeps idle ones around if necessary, frees them if unnecessary, etc. And most of all, it is safe for concurrent use.
DB is a database handle representing a pool of zero or more underlying
connections. It's safe for concurrent use by multiple goroutines.
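As a rough sketch of that practice (the constructor name and the conn field are assumptions; the question's Database struct would simply hold the shared pool):

type Database struct {
    psqlconn string
    conn     *sql.DB
}

// NewDatabase opens the pool once, at application startup.
func NewDatabase(psqlconn string) (*Database, error) {
    c, err := sql.Open("postgres", psqlconn)
    if err != nil {
        return nil, err
    }
    return &Database{psqlconn: psqlconn, conn: c}, nil
}

// ExecQueryA reuses the shared pool instead of opening a new one.
func (db *Database) ExecQueryA() error {
    _, err := db.conn.Exec("SELECT 1") // placeholder query
    return err
}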
To answer your actual question, one pattern to reduce the repetition of obtaining-and-releasing resources is to pass a function literal to a wrapper function:
func (db *Database) run(f func(c *sql.DB)) {
    c, err := sql.Open("postgres", db.psqlconn)
    if err != nil {
        panic(err)
    }
    defer c.Close()
    f(c)
}
func (db *Database) ExecQueryA() {
    db.run(func(c *sql.DB) {
        _, err := c.Exec(...
        if err != nil {
            panic(err)
        }
    })
}
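If you would rather return errors than panic, the same wrapper can take a callback that returns an error (a small variation, not from the original answer):

func (db *Database) runErr(f func(c *sql.DB) error) error {
    c, err := sql.Open("postgres", db.psqlconn)
    if err != nil {
        return err
    }
    defer c.Close()
    return f(c)
}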
I'm trying to read data from some devices via telnet protocol and below is my simple code.
I just want to print some meaningful results.
package main

import (
    "fmt"

    "github.com/reiver/go-telnet"
)

func main() {
    conn, _ := telnet.DialTo("10.253.102.41:23")
    fmt.Println(conn)
}
but this is what I got by this way:
&{0xc000006028 0xc000004720 0xc000040640}
It is not surprising that you get &{0xc000006028 0xc000004720 0xc000040640}: you are printing the connection value itself, so what you see are the pointer values of its fields. If you want to print the data, you have to read it from the connection using its Read method. Something like this:
b := make([]byte, 100)
n, err := conn.Read(b)
if err != nil {
    // handle error
}
fmt.Println(string(b[:n]))
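Note that a single Read only returns whatever happens to be available at that moment. If the device keeps sending data, one possible sketch (continuing from the snippet above) is to read in a loop until the connection is closed:

buf := make([]byte, 1024)
for {
    n, err := conn.Read(buf)
    if n > 0 {
        fmt.Print(string(buf[:n]))
    }
    if err != nil {
        break // typically io.EOF once the remote side closes the connection
    }
}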
I would love to be able to use Go-IPFS within my Go program; however, it is totally undocumented. This is where my research led me:
package main

import (
    "context"
    "fmt"
    "io/ioutil"
    "log"
    "os"
    "path/filepath"

    "gx/ipfs/QmSP88ryZkHSRn1fnngAaV2Vcn63WUJzAavnRM9CVdU1Ky/go-ipfs-cmdkit/files"

    "github.com/ipfs/go-ipfs/core"
    "github.com/ipfs/go-ipfs/core/coreunix"
)

func main() {
    ctx := context.Background()

    node, err := core.NewNode(ctx, &core.BuildCfg{})
    if err != nil {
        log.Fatalf("Failed to start IPFS node: %v", err)
    }

    reader, err := coreunix.Cat(ctx, node, "QmPZ9gcCEpqKTo6aq61g2nXGUhM4iCL3ewB6LDXZCtioEB")
    if err != nil {
        log.Fatalf("Failed to look up IPFS welcome page: %v", err)
    }

    blob, err := ioutil.ReadAll(reader)
    if err != nil {
        log.Fatalf("Failed to retrieve IPFS welcome page: %v", err)
    }

    fmt.Println(string(blob))
}
However, I am not sure about the difference between context.Background(), context.TODO(), and context.WithCancel(context.Background()).
And most importantly, how can I choose where IPFS will put the IPFS repo, and make sure it also gets initialized?
How can I enable and use Pubsub along with its commands subscribe and publish?
How can I add and pin a file to IPFS with the possibility to also input a stream for big files?
Is coreunix.Cat suitable to read files with a stream as well?
How can I keep the node "listening", like when you run the ipfs daemon from the CLI, and have everything running on all the ports (webui, swarm, etc.)?
How about the following to add files? Does it use streams, or does it read the entire file into memory? How can it be improved?
func addFile(ctx *context.Context, node *core.IpfsNode, path *string) error {
    file, err := os.Open(*path)
    if err != nil {
        return err
    }

    adder, err := coreunix.NewAdder(*ctx, node.Pinning, node.Blockstore, node.DAG)
    if err != nil {
        return err
    }

    filename := filepath.Base(*path)
    fileReader := files.NewReaderFile(filename, filename, file, nil)

    adder.AddFile(fileReader)
    adder.PinRoot()

    return nil
}
You may want to break your question down into smaller pieces. I have been playing with the go-ipfs source code for a while, and here is the general guidance I can give you:
Most of the data structures, like context, DAG, IPFSNode and so on, are defined as Go structs, and you should be able to find them in the gx/.../... directory, where you can also see detailed information about each variable used (a simple string search through the directory should get you to the source file you need).
All the methods are defined under the github.com/../.. directory.
Get comfortable with pointers, as the code passes parameters to functions as pointers most of the time.
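As a trivial, non-IPFS illustration of that last point (all names here are made up):

type config struct {
    repoPath string
}

// setDefaults receives a pointer, so the change it makes
// to the struct is visible to the caller.
func setDefaults(cfg *config) {
    if cfg.repoPath == "" {
        cfg.repoPath = "/tmp/repo"
    }
}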
As a beginner in Go, I have problems understanding io.Writer.
My target: take a struct and write it into a json file.
Approach:
- use encoding/json.Marshal to convert my struct into bytes
- feed those bytes to an os.File Writer
This is how I got it working:
package main

import (
    "encoding/json"
    "os"
)

type Person struct {
    Name       string
    Age        uint
    Occupation []string
}

func MakeBytes(p Person) []byte {
    b, _ := json.Marshal(p)
    return b
}

func main() {
    gandalf := Person{
        "Gandalf",
        56,
        []string{"sourcerer", "foo fighter"},
    }

    myFile, err := os.Create("output1.json")
    if err != nil {
        panic(err)
    }

    myBytes := MakeBytes(gandalf)
    myFile.Write(myBytes)
}
After reading this article, I changed my program to this:
package main

import (
    "encoding/json"
    "io"
    "os"
)

type Person struct {
    Name       string
    Age        uint
    Occupation []string
}

// Correct name for this function would be simply Write,
// but I use WriteToFile for my understanding.
func (p *Person) WriteToFile(w io.Writer) {
    b, _ := json.Marshal(*p)
    w.Write(b)
}

func main() {
    gandalf := Person{
        "Gandalf",
        56,
        []string{"sourcerer", "foo fighter"},
    }

    myFile, err := os.Create("output2.json")
    if err != nil {
        panic(err)
    }

    gandalf.WriteToFile(myFile)
}
In my opinion, the first example is more straightforward and easier for a beginner to understand... but I have the feeling that the second example is the idiomatic Go way of achieving the target.
Questions:
1. Is the above assumption correct (that the 2nd option is Go idiomatic)?
2. Is there a difference between the above options? Which option is better?
3. Are there other ways to achieve the same target?
Thank you,
WM
The benefit of using the second method is that if you accept an io.Writer, you can pass in anything that implements Write: not only a file but also, for example, an http.ResponseWriter or os.Stdout, without changing the struct's methods.
Have a look at this handy blog post walking through the io package. The author makes the case that taking readers and writers as parameters makes your code more flexible, in part because so many functions in the standard library use the Reader and Writer interfaces.
As you come to use Go more, you'll notice how much the standard library leans on Reader and Writer interfaces, and probably come to appreciate it :)
So this function (renamed):
// WriteJson writes the json representation of Person to w.
func (p *Person) WriteJson(w io.Writer) error {
    b, err := json.Marshal(*p)
    if err != nil {
        return err
    }

    _, err = w.Write(b)
    if err != nil {
        return err
    }

    return nil
}
It would write to a file, an HTTP response, a user's stdout, or even a simple bytes.Buffer, which makes testing a bit simpler.
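For example, assuming the Person type and WriteJson method above, a test could capture the output in a bytes.Buffer instead of a file (a minimal sketch; imports of bytes and fmt omitted):

var buf bytes.Buffer

p := Person{Name: "Gandalf", Age: 56, Occupation: []string{"foo fighter"}}
if err := p.WriteJson(&buf); err != nil {
    // handle the error
}

fmt.Println(buf.String()) // the json ends up in the in-memory buffer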
I renamed it because of what it does; that is, this function takes a Person struct and:
Marshals the struct into a json representation
Writes the json to a Writer
Returns any errors arising from marshalling/writing
One more thing: you might be confused as to what a Writer is, because it is not a concrete data type but an interface, that is, a behavior that a data type provides by implementing a predefined method. Anything that implements the Write method, then, is considered a writer.
This can be a bit difficult for beginners to grasp at first, but there are lots of resources online to help understand interfaces (and Readers and Writers are some of the more common interfaces to encounter, along with Error(), i.e. all errors).
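To make that concrete, here is a minimal, self-contained sketch of a custom type satisfying io.Writer by doing nothing more than counting bytes (the type is made up for illustration):

package main

import (
    "fmt"
    "io"
)

// countingWriter satisfies io.Writer by discarding the data
// and remembering how many bytes passed through.
type countingWriter struct {
    n int
}

func (c *countingWriter) Write(p []byte) (int, error) {
    c.n += len(p)
    return len(p), nil
}

func main() {
    var w io.Writer = &countingWriter{}
    fmt.Fprint(w, "hello, writer")
    fmt.Println(w.(*countingWriter).n) // 13
}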