How to wrap net.Conn.Read() in Golang
I want to wrap the Read function of net.Conn (https://pkg.go.dev/net#TCPConn.Read). The purpose of this is to read the SSL handshake messages.
nc, err := net.Dial("tcp", "google.com:443")
if err != nil {
fmt.Println(err)
}
tls.Client(nc, &tls.Config{})
Is there any way to do this?
Thanks in advance
Use the following code to intercept Read on a net.Conn:
type wrap struct {
// Conn is the wrapped net.Conn.
// Because it's an embedded field, the
// net.Conn methods are automatically promoted
// to wrap.
net.Conn
}
// Read calls through to the wrapped read and
// prints the bytes that flow through. Replace
// the print statement with whatever is appropriate
// for your application.
func (w wrap) Read(p []byte) (int, error) {
n, err := w.Conn.Read(p)
fmt.Printf("%x\n", p[:n]) // example
return n, err
}
Wrap like this:
tnc := tls.Client(wrap{nc}, &tls.Config{})
The previous answer indeed gets the job done.
However, I would recommend Liz Rice's talk: GopherCon 2018: Liz Rice - The Go Programmer's Guide to Secure Connections
Going through her code on GitHub, you might find a more elegant way to achieve what you want.
Start with the client code on line 26.
Related
convert from `bufio.Reader` to `io.ReadWriteCloser`
I have an io.ReadWriteCloser into which I want to peek without advancing the reader, so I am using

bi := bufio.NewReader(i)
bi.Peek(1)

So far so good, but later, when I want to reuse the original io.ReadWriteCloser (i), it only yields EOF. So my question is: how do I convert back from a bufio.Reader to an io.ReadWriteCloser?
Because the bufio.Reader buffers data from the underlying reader, the application must read from the bufio.Reader after the call to Peek. To get an io.ReadWriteCloser that does this, wrap the bufio.Reader and the original io.ReadWriteCloser:

// BufferedReadWriteCloser has all of the methods
// from *bufio.Reader and io.ReadWriteCloser.
type BufferedReadWriteCloser struct {
    *bufio.Reader
    io.ReadWriteCloser
}

func (rw *BufferedReadWriteCloser) Read(p []byte) (int, error) {
    return rw.Reader.Read(p)
}

Here's how to use it:

rw := &BufferedReadWriteCloser{bufio.NewReader(i), i}
p, err := rw.Peek(1)

The value of rw satisfies the io.ReadWriteCloser interface. There is no requirement or assumption that the io.ReadWriteCloser has a Seek method.
As mentioned in my comment above, you need access to the original reader's Seek method. This means that passing the reader around as an io.ReadWriteCloser is insufficient. Having said that, the following helper function may be a workaround:

func peek(r io.Reader, n int) ([]byte, error) {
    bi := bufio.NewReader(r)
    peeked, err := bi.Peek(n)
    if err != nil {
        return nil, err
    }
    // Use a type assertion to check if r implements the
    // io.Seeker interface. If it does, then use it to
    // reset the offset.
    if seeker, ok := r.(io.Seeker); ok {
        seeker.Seek(0, io.SeekStart)
    }
    return peeked, nil
}

Now you can pass the io.ReadWriteCloser to this peek function. The peek function checks if the reader happens to implement the Seek method. If the Seek method is implemented, then peek will call it.
Golang ending (Binance) web service stream using go routine
I'm integrating the Binance API into an existing system, and while most parts are straightforward, the data-streaming API hits my limited understanding of goroutines. I don't believe there is anything special in the golang SDK for Binance; essentially I only need two functions: one that starts the data stream and processes events with the event handler given as a parameter, and a second one that ends the data stream without actually shutting down the client, as that would close all other connections. On a previous project there were two message types for this, but the Binance SDK uses an implementation that returns two Go channels: one for errors and another one, I guess from the name, for stopping the data stream. The code I wrote for starting the data stream looks like this:

func startDataStream(symbol, interval string, wsKlineHandler futures.WsKlineHandler, errHandler futures.ErrHandler) (err error) {
    doneC, stopC, err := futures.WsKlineServe(symbol, interval, wsKlineHandler, errHandler)
    if err != nil {
        fmt.Println(err)
        return err
    }
    _, _ = doneC, stopC // not used yet
    return nil
}

This works as expected and streams data. A simple test verifies it:

func runWSDataTest() {
    symbol := "BTCUSDT"
    interval := "15m"
    errHandler := func(err error) { fmt.Println(err) }
    wsKlineHandler := func(event *futures.WsKlineEvent) { fmt.Println(event) }
    _ = startDataStream(symbol, interval, wsKlineHandler, errHandler)
}

What is not so clear to me, mainly due to my incomplete understanding, is how to stop the stream. I think the returned stopC channel can be used to issue an end signal, similar to, say, a SIGTERM on the system level, after which the stream should end. Say I have a stopDataStream function that takes a symbol as an argument:

func stopDataStream(symbol string) {
}

Suppose I start five data streams for five symbols and now want to stop just one of them. That begs the question: how do I track all those stopC channels?
Can I use a collection keyed by the symbol, pull out the stopC channel, and then just issue a signal to end only that data stream? And how do I actually write into the stopC channel from the stop function? Again, I don't think this is particularly hard; I just could not figure it out from the docs, so any help would be appreciated. Thank you
(Answer originally written by #Marvin.Hansen) It turned out that just saving and closing the channel solved it all. I was really surprised how easy this is, but here is the code of the updated functions:

func startDataStream(symbol, interval string, wsKlineHandler futures.WsKlineHandler, errHandler futures.ErrHandler) (err error) {
    _, stopC, err := futures.WsKlineServe(symbol, interval, wsKlineHandler, errHandler)
    if err != nil {
        fmt.Println(err)
        return err
    }
    // just save the stop channel
    chanMap[symbol] = stopC
    return nil
}

And then the stop function becomes embarrassingly trivial:

func stopDataStream(symbol string) {
    stopC := chanMap[symbol] // load the stop channel for the symbol
    close(stopC)             // just close it
}

Finally, testing it all out:

var (
    chanMap map[string]chan struct{}
)

func runWSDataTest() {
    chanMap = make(map[string]chan struct{})
    symbol := "BTCUSDT"
    interval := "15m"
    errHandler := func(err error) { fmt.Println(err) }
    wsKlineHandler := getKLineHandler()
    println("Start stream")
    _ = startDataStream(symbol, interval, wsKlineHandler, errHandler)
    time.Sleep(3 * time.Second)
    println("Stop stream")
    stopDataStream(symbol)
    time.Sleep(1 * time.Second)
}

This is it.
Golang reading from serial
I'm trying to read from a serial port (a GPS device on a Raspberry Pi). Following the instructions from http://www.modmypi.com/blog/raspberry-pi-gps-hat-and-python I can read from the shell using

stty -F /dev/ttyAMA0 raw 9600 cs8 clocal -cstopb
cat /dev/ttyAMA0

and I get well-formatted output:

$GNGLL,5133.35213,N,00108.27278,W,160345.00,A,A*65
$GNRMC,160346.00,A,5153.35209,N,00108.27286,W,0.237,,290418,,,A*75
$GNVTG,,T,,M,0.237,N,0.439,K,A*35
$GNGGA,160346.00,5153.35209,N,00108.27286,W,1,12,0.67,81.5,M,46.9,M,,*6C
$GNGSA,A,3,29,25,31,20,26,23,21,16,05,27,,,1.11,0.67,0.89*10
$GNGSA,A,3,68,73,83,74,84,75,85,67,,,,,1.11,0.67,0.89*1D
$GPGSV,4,1,15,04,,,34,05,14,040,21,09,07,330,,16,45,298,34*40
$GPGSV,4,2,15,20,14,127,18,21,59,154,30,23,07,295,26,25,13,123,22*74
$GPGSV,4,3,15,26,76,281,40,27,15,255,20,29,40,068,19,31,34,199,33*7C
$GPGSV,4,4,15,33,29,198,,36,23,141,,49,30,172,*4C
$GLGSV,3,1,11,66,00,325,,67,13,011,20,68,09,062,16,73,12,156,21*60
$GLGSV,3,2,11,74,62,177,20,75,53,312,36,76,08,328,,83,17,046,25*69
$GLGSV,3,3,11,84,75,032,22,85,44,233,32,,,,35*62
$GNGLL,5153.35209,N,00108.27286,W,160346.00,A,A*6C
$GNRMC,160347.00,A,5153.35205,N,00108.27292,W,0.216,,290418,,,A*7E
$GNVTG,,T,,M,0.216,N,0.401,K,A*3D
$GNGGA,160347.00,5153.35205,N,00108.27292,W,1,12,0.67,81.7,M,46.9,M,,*66
$GNGSA,A,3,29,25,31,20,26,23,21,16,05,27,,,1.11,0.67,0.89*10
$GNGSA,A,3,68,73,83,74,84,75,85,67,,,,,1.11,0.67,0.89*1D
$GPGSV,4,1,15,04,,,34,05,14,040,21,09,07,330,,16,45,298,34*40

(I've put some random data in.) I'm trying to read this in Go. Currently I have

package main

import (
    "fmt"
    "log"

    "github.com/tarm/serial"
)

func main() {
    config := &serial.Config{
        Name:        "/dev/ttyAMA0",
        Baud:        9600,
        ReadTimeout: 1,
        Size:        8,
    }

    stream, err := serial.OpenPort(config)
    if err != nil {
        log.Fatal(err)
    }

    buf := make([]byte, 1024)
    for {
        n, err := stream.Read(buf)
        if err != nil {
            log.Fatal(err)
        }
        s := string(buf[:n])
        fmt.Println(s)
    }
}

But this prints malformed data.
I suspect that this is due to the buffer size or the value of Size in the config struct being wrong, but I'm not sure how to get those values from the stty settings. Looking back, I think the issue is that I'm getting a stream and I want to iterate over lines of the stream rather than over chunks. This is how the stream is output:

$GLGSV,3
,1,09,69
,10,017,
,70,43,0
69,,71,3
2,135,27
,76,23,2
32,22*6F
$GLGSV
,3,2,09,
77,35,30
0,21,78,
11,347,,
85,31,08
1,30,86,
72,355,3
6*6C
$G
LGSV,3,3
,09,87,2
4,285,30
*59
$GN
GLL,5153
.34919,N
,00108.2
7603,W,1
92901.00
,A,A*6A
The struct you get back from serial.OpenPort() contains a pointer to an open os.File corresponding to the opened serial port connection. When you Read() from this, the library calls Read() on the underlying os.File. The documentation for this function call is:

Read reads up to len(b) bytes from the File. It returns the number of bytes read and any error encountered. At end of file, Read returns 0, io.EOF.

This means you have to keep track of how much data was read. You also have to keep track of whether there were newlines, if this is important to you. Unfortunately, the underlying *os.File is not exported, so you'll find it difficult to use tricks like bufio.ReadLine(). It may be worth modifying the library and sending a pull request.

As Matthew Rankin noted in a comment, Port implements io.ReadWriter, so you can simply use bufio to read by lines:

stream, err := serial.OpenPort(config)
if err != nil {
    log.Fatal(err)
}

scanner := bufio.NewScanner(stream)
for scanner.Scan() {
    fmt.Println(scanner.Text()) // Println will add back the final '\n'
}
if err := scanner.Err(); err != nil {
    log.Fatal(err)
}
Change fmt.Println(s) to fmt.Print(s) and you will probably get what you want. Or did I misunderstand the question?
Two additions to Michael Hampton's answer which can be useful:

line endings

You might receive data that is not newline-separated text. bufio.Scanner uses ScanLines by default to split the received data into lines, but you can also write your own line splitter based on the default function's signature and set it for the scanner:

scanner := bufio.NewScanner(stream)
scanner.Split(ownLineSplitter) // set custom line splitter function

reader shutdown

You might not receive a constant stream, but only some packets of bytes from time to time. If no bytes arrive at the port, the scanner will block and you can't just kill it. You'll have to close the stream to do so, effectively raising an error. To avoid blocking any outer loops and to handle errors appropriately, you can wrap the scanner in a goroutine that takes a context. If the context was cancelled, ignore the error; otherwise forward it. In principle, this can look like

var errChan = make(chan error)
var dataChan = make(chan []byte)

ctx, cancelPortScanner := context.WithCancel(context.Background())
go func(ctx context.Context) {
    scanner := bufio.NewScanner(stream)
    for scanner.Scan() { // will terminate if the connection is closed
        dataChan <- scanner.Bytes()
    }
    // if execution reaches this point, something went wrong or the stream was closed
    select {
    case <-ctx.Done():
        return // ctx was cancelled, just return without error
    default:
        errChan <- scanner.Err() // ctx wasn't cancelled, forward the error
    }
}(ctx)
// handle data from dataChan, errors from errChan

To stop the scanner, you would cancel the context and close the connection:

cancelPortScanner()
stream.Close()
How to set timeout in *os.File/io.Read in golang
I know there is a function called SetReadDeadline that can set a timeout on socket (net.Conn) reads, while io.Reader has no such option. One approach is to start another goroutine as a timer, but it brings another problem: the reader goroutine (the one calling Read) still blocks:

func (self *TimeoutReader) Read(buf []byte) (n int, err error) {
    ch := make(chan bool)
    n = 0
    err = nil
    go func() { // this goroutine still exists even after the timeout
        n, err = self.reader.Read(buf)
        ch <- true
    }()
    select {
    case <-ch:
        return
    case <-time.After(self.timeout):
        return 0, errors.New("Timeout")
    }
}

This question is similar to this post, but the answer there is unclear. Do you have any good ideas for solving this problem?
Instead of setting a timeout directly on the read, you can close the os.File after a timeout. As written in https://golang.org/pkg/os/#File.Close Close closes the File, rendering it unusable for I/O. On files that support SetDeadline, any pending I/O operations will be canceled and return immediately with an error. This should cause your read to fail immediately.
Your mistake here is something different: when you read from the reader, you read only once, and that is wrong:

go func() {
    n, err = self.reader.Read(buf) // this Read needs to be in a loop
    ch <- true
}()

Here is a simple example (https://play.golang.org/p/2AnhrbrhLrv):

buf := bytes.NewBufferString("0123456789")
r := make([]byte, 3)
n, err := buf.Read(r)
fmt.Println(string(r), n, err)
// Output: 012 3 <nil>

The size of the given slice is used when using the io.Reader. If you logged the n variable in your code, you would see that not the whole file is read. The select statement outside of your goroutine is in the wrong place:

go func() {
    a := make([]byte, 1024)
    for {
        select {
        case <-quit:
            result <- []byte{}
            return
        default:
            _, err = self.reader.Read(buf)
            if err == io.EOF {
                result <- a
                return
            }
        }
    }
}()

But there is something more! You want to implement the io.Reader interface, whose Read() method is called repeatedly until the file ends. You should not start a goroutine in here, because each call only reads one chunk of the file. Also, a timeout inside the Read() method doesn't help, because it would apply to each call and not to the whole file.
In addition to #apxp's point about looping over Read, you could use a buffer size of 1 byte so that you never block as long as there is data to read.

When interacting with external resources, anything can happen. It is possible for any given io.Reader implementation to simply block forever. Here, I'll write one for you...

type BlockingReader struct{}

func (BlockingReader) Read(b []byte) (int, error) {
    <-make(chan struct{})
    return 0, nil
}

Remember, anyone can implement an interface, so you can't assume it will behave like *os.File or any other standard-library io.Reader. In addition to asinine coding like mine above, an io.Reader could legitimately connect to a resource that can block forever.

You cannot kill goroutines, so if an io.Reader truly blocks forever, the blocked goroutine will continue to consume resources until your application terminates. However, this shouldn't be a problem: a blocked goroutine does not consume much in the way of resources, and it should be fine as long as you don't blindly retry blocked Reads by spawning more goroutines.
Goroutines broke the program
The problem is this: there is a web server. I figured that it would be beneficial to use goroutines in page loading, so I called the loadPage function as a goroutine. However, when doing this, the server simply stops working without errors. It prints a blank, white page. The problem has to be in the function itself; something there conflicts with the goroutine somehow. These are the relevant functions:

func loadPage(w http.ResponseWriter, path string) {
    s := GetFileContent(path)
    w.Header().Add("Content-Type", getHeader(path))
    w.Header().Add("Content-Length", GetContentLength(path))
    fmt.Fprint(w, s)
}

func GetFileContent(path string) string {
    cont, err := ioutil.ReadFile(path)
    e(err)
    aob := len(cont)
    s := string(cont[:aob])
    return s
}

func getHeader(path string) string {
    images := []string{".jpg", ".jpeg", ".gif", ".png"}
    readable := []string{".htm", ".html", ".php", ".asp", ".js", ".css"}
    if ArrayContainsSuffix(images, path) {
        return "image/jpeg"
    }
    if ArrayContainsSuffix(readable, path) {
        return "text/html"
    }
    return "file/downloadable"
}

func ArrayContainsSuffix(arr []string, c string) bool {
    length := len(arr)
    for i := 0; i < length; i++ {
        s := arr[i]
        if strings.HasSuffix(c, s) {
            return true
        }
    }
    return false
}
The reason this happens is that your HandlerFunc, which calls loadPage, is called synchronously with the request. When you call it in a goroutine, the Handler actually returns immediately, causing the response to be sent immediately. That's why you get a blank page. You can see this in server.go (line 1096):

serverHandler{c.server}.ServeHTTP(w, w.req)
if c.hijacked() {
    return
}
w.finishRequest()

The ServeHTTP function calls your handler, and as soon as it returns it calls finishRequest. So your Handler function must block as long as it wants to fulfill the request. Using a goroutine will actually not make your page any faster. Synchronizing a single goroutine with a channel, as Philip suggests, will also not help you in this case, as that would be the same as not having the goroutine at all.

The root of your problem is actually ioutil.ReadFile, which buffers the entire file into memory before sending it. If you want to stream the file, you need to use os.Open. You can use io.Copy to stream the contents of the file to the browser, which will use chunked encoding. That would look something like this:

f, err := os.Open(path)
if err != nil {
    http.Error(w, "Not Found", http.StatusNotFound)
    return
}
n, err := io.Copy(w, f)
if n == 0 && err != nil {
    http.Error(w, "Error", http.StatusInternalServerError)
    return
}

If for some reason you need to do work in multiple goroutines, take a look at sync.WaitGroup. Channels can also work. If you are just trying to serve a file, there are other options that are optimized for this, such as FileServer or ServeFile.
In the typical web framework implementations in Go, the route handlers are invoked as goroutines; i.e. at some point the web framework will say go loadPage(...). So if you call a goroutine from inside loadPage, you have two levels of goroutines. The Go scheduler is really lazy and will not execute the second level if it's not forced to, so you need to force it through synchronization events, e.g. by using channels or the sync package. Example:

func loadPage(w http.ResponseWriter, path string) {
    s := make(chan string)
    go func() { s <- GetFileContent(path) }()
    fmt.Fprint(w, <-s)
}

The Go documentation says this:

If the effects of a goroutine must be observed by another goroutine, use a synchronization mechanism such as a lock or channel communication to establish a relative ordering.

Why is this actually a smart thing to do? In larger projects you may deal with a large number of goroutines that need to be coordinated efficiently. So why call a goroutine if its output is used nowhere?

A fun fact: I/O operations like fmt.Printf trigger synchronization events too.