How to pipe an HTTP response to a file in Go?

How do I convert the below code to use streams/pipes so that I don't need to read the full content into memory?
Something like:
http.Get("http://example.com/").Pipe("./data.txt")
package main

import (
    "io/ioutil"
    "net/http"
)

func main() {
    resp, err := http.Get("http://example.com/")
    check(err)
    defer resp.Body.Close()
    body, err := ioutil.ReadAll(resp.Body)
    check(err)
    err = ioutil.WriteFile("./data.txt", body, 0666)
    check(err)
}

func check(e error) {
    if e != nil {
        panic(e)
    }
}

How about io.Copy()? Its documentation can be found at: http://golang.org/pkg/io/#Copy
It's pretty simple, though. Give it an io.Reader and an io.Writer and it copies the data over, one small chunk at a time (i.e. not all in memory at once).
So you might try writing something like:
// (add "io" and "os" to the imports above)
func main() {
    resp, err := http.Get("...")
    check(err)
    defer resp.Body.Close()

    out, err := os.Create("filename.ext")
    check(err)
    defer out.Close()

    _, err = io.Copy(out, resp.Body)
    check(err)
}
I haven't tested the above; I just hacked it together quickly from your above example, but it should be close if not on the money.

Another option is File.ReadFrom:
package main

import (
    "net/http"
    "os"
)

func main() {
    r, e := http.Get("http://speedtest.lax.hivelocity.net")
    if e != nil {
        panic(e)
    }
    defer r.Body.Close()

    f, e := os.Create("index.html")
    if e != nil {
        panic(e)
    }
    defer f.Close()

    if _, e = f.ReadFrom(r.Body); e != nil {
        panic(e)
    }
}
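As an aside, *os.File implements io.ReaderFrom, and io.Copy delegates to the destination's ReadFrom when that interface is available, so this and the io.Copy version above end up doing the same work; calling ReadFrom directly just makes that explicit.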

Related

Why does net.Conn.Close() seem to be closing at the wrong time?

I'm trying to read and write some commands from a TCP client. I want to close the connection after the last function has executed, but the server seems to disconnect in the middle of that function, even though the Close call is explicitly placed afterward.
package main

import (
    "bufio"
    "fmt"
    "io"
    "log"
    "net"
    "strconv"
    "strings"
    "time"
)

func main() {
    listener, err := net.Listen("tcp", "localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := listener.Accept()
        if err != nil {
            log.Print(err)
        }
        go handleConn(conn)
        conn.Close()
    }
}

func handleConn(someconnection net.Conn) {
    func1(someconnection)
    func2(someconnection) // connection drops in the middle of executing this part
}

func func2(someconnection net.Conn) {
    // send message (a string)
    _, err := io.WriteString(someconnection, dosomething)
    if err != nil {
        log.Fatal(err)
    }
    // await reply

    // send another message
    _, err = io.WriteString(someconnection, dosomething)
    if err != nil {
        log.Fatal(err)
    }
    // await reply

    // send another message, connection tends to close somewhere here
    _, err = io.WriteString(someconnection, dosomething)
    if err != nil {
        log.Fatal(err)
    }
    // await, send
    _, err = io.WriteString(someconnection, dosomething)
    if err != nil {
        log.Fatal(err)
    }
    // await, read and print message
    c := bufio.NewReader(someconnection)
    buff1 := make([]byte, maxclientmessagelength)
    buff1, err = c.ReadBytes(delimiter)
    fmt.Printf("\n%s\n", buff1)
    _, err = io.WriteString(someconnection, dosomething)
    if err != nil {
        log.Fatal(err)
    }
}
That means the client simply isn't able to communicate back, even though the program runs to the end.
Update 1:
Made some progress by deferring the close right where the connection is first acquired.
func main() {
    listener, err := net.Listen("tcp", "localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := listener.Accept()
        if err != nil {
            log.Print(err)
        }
        defer conn.Close()
        go handleConn(conn)
    }
}
Now it doesn't necessarily close exactly when I want it to, but at least it now runs all the way through.
Goroutines run asynchronously, so after starting handleConn here:
    go handleConn(conn)
    conn.Close()
the main function continues and immediately closes the connection.
Try just calling the handleConn function regularly (without the go).
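For illustration, a minimal sketch of that synchronous variant (same listener as above; it handles one connection at a time):
for {
    conn, err := listener.Accept()
    if err != nil {
        log.Print(err)
        continue
    }
    handleConn(conn) // blocks until both funcs finish
    conn.Close()
}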
The conn.Close needs to be done AFTER handleConn has done its work. You could signal completion back to the main goroutine using channels, but that would be needlessly complex (and would also block the accept loop). This is how it should be done:
func main() {
    listener, err := net.Listen("tcp", "localhost:8000")
    if err != nil {
        log.Fatal(err)
    }
    for {
        conn, err := listener.Accept()
        if err != nil {
            log.Print(err)
        }
        go handleConn(conn)
        // REMOVE BELOW LINE
        // conn.Close()
    }
}
Add conn.Close inside handleConn:
func handleConn(someconnection net.Conn) {
    // ADD BELOW LINE
    defer someconnection.Close()
    func1(someconnection)
    func2(someconnection)
}
This makes sure conn.Close is called AFTER func1 and func2 are done executing
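One caveat about the code in Update 1: a defer conn.Close() inside main's accept loop does not run at the end of each iteration; deferred calls only run when the surrounding function returns, and main never does here, so every accepted connection stays open. Closing inside the handler, as above, avoids that leak.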

The process cannot access the file because it is being used by another process in Golang

The process cannot access the file ... because it is being used by another process
I can't remove the zip file with this code. Is it possible to extract and delete the file in one program?
Code
package main

import (
    "archive/zip"
    "fmt"
    "io"
    "log"
    "net/http"
    "os"
    "path/filepath"
    "strings"
)

func main() {
    url := "https://230c07c8-77b2-4c0d-9b82-8c6501a5bc45.filesusr.com/archives/b7572a_9ec985e0031042ef912cb40cafbe6376.zip?dn=7.zip"
    out, _ := os.Create("E:\\experi\\1234567890.zip")
    defer out.Close()
    resp, _ := http.Get(url)
    defer resp.Body.Close()
    _, _ = io.Copy(out, resp.Body)
    files, err := Unzip("E:\\experi\\1234567890.zip", "E:\\experi\\1234567890")
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Unzipped the following files:\n" + strings.Join(files, "\n"))
}

func Unzip(src string, destination string) ([]string, error) {
    var filenames []string
    r, err := zip.OpenReader(src)
    if err != nil {
        return filenames, err
    }
    defer r.Close()
    for _, f := range r.File {
        fpath := filepath.Join(destination, f.Name)
        if !strings.HasPrefix(fpath, filepath.Clean(destination)+string(os.PathSeparator)) {
            return filenames, fmt.Errorf("%s is an illegal filepath", fpath)
        }
        filenames = append(filenames, fpath)
        if f.FileInfo().IsDir() {
            os.MkdirAll(fpath, os.ModePerm)
            continue
        }
        if err = os.MkdirAll(filepath.Dir(fpath), os.ModePerm); err != nil {
            return filenames, err
        }
        outFile, err := os.OpenFile(fpath, os.O_WRONLY|os.O_CREATE|os.O_TRUNC|os.O_RDWR, f.Mode())
        if err != nil {
            return filenames, err
        }
        rc, err := f.Open()
        if err != nil {
            return filenames, err
        }
        _, err = io.Copy(outFile, rc)
        outFile.Close()
        rc.Close()
        if err != nil {
            return filenames, err
        }
    }
    removeFile()
    return filenames, nil
}

func removeFile() {
    error := os.Remove("E:\\experi\\1234567890.zip")
    if error != nil {
        log.Fatal(error)
    }
}
Output
2020/10/28 13:09:04 remove E:\experi\1234567890.zip: The process cannot access the file because it is being used by another process.
Process finished with exit code 1
Is there another way to do the same thing? Did I go wrong anywhere?
Help would be much appreciated. Thanks in advance. :)
out, _ := os.Create("E:\\experi\\1234567890.zip") creates or truncates the file and returns you a *File (so the file is open).
defer out.Close() closes the file "the moment the surrounding function returns" (spec).
So at the time you call Unzip you have the file open. To fix this call out.Close() before the call to Unzip (and please don't assume that calls complete without error).
If you close with defer, the file is only closed when the function returns, after its last line has executed. You must explicitly close the file before removing it.
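A minimal sketch of the fixed flow in main, assuming the same paths and Unzip function as above:
out, err := os.Create("E:\\experi\\1234567890.zip")
if err != nil {
    log.Fatal(err)
}
resp, err := http.Get(url)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()
if _, err := io.Copy(out, resp.Body); err != nil {
    log.Fatal(err)
}
// Close the file before Unzip (and removeFile) try to open it again.
if err := out.Close(); err != nil {
    log.Fatal(err)
}
files, err := Unzip("E:\\experi\\1234567890.zip", "E:\\experi\\1234567890")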

Bug using golang io.Pipe to tar files

I have been testing code that uses io.Pipe to tar and gzip files into a tarball, which I then unpack with the tar utility. The following code runs, however untarring the result keeps producing this
error:
tar: Truncated input file (needed 1050624 bytes, only 0 available)
tar: Error exit delayed from previous errors.
This issue is really driving me crazy. It has been two weeks. I really need help debugging.
Thanks.
Development environment: go version go1.9 darwin/amd64
package main

import (
    "archive/tar"
    "compress/gzip"
    "fmt"
    "io"
    "log"
    "os"
    "path/filepath"
    "testing"
)

func testTarGzipPipe2(t *testing.T) {
    src := "/path/to/file/folder"

    pr, pw := io.Pipe()

    gzipWriter := gzip.NewWriter(pw)
    defer gzipWriter.Close()

    tarWriter := tar.NewWriter(gzipWriter)
    defer tarWriter.Close()

    status := make(chan bool)

    go func() {
        defer pr.Close()
        // tar to local disk
        tarFile, err := os.OpenFile("/path/to/tar/ball/test.tar.gz", os.O_RDWR|os.O_CREATE, 0755)
        if err != nil {
            log.Fatal(err)
        }
        defer tarFile.Close()
        if _, err := io.Copy(tarFile, pr); err != nil {
            log.Fatal(err)
        }
        status <- true
    }()

    err := filepath.Walk(src, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        header, err := tar.FileInfoHeader(info, info.Name())
        if err != nil {
            return err
        }
        // header.Name = strings.TrimPrefix(strings.Replace(path, src, "", -1), string(filepath.Separator))
        if err := tarWriter.WriteHeader(header); err != nil {
            return err
        }
        if info.Mode().IsDir() {
            return nil
        }
        fmt.Println(path)
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        if _, err := io.Copy(tarWriter, f); err != nil {
            return err
        }
        return nil
    })
    if err != nil {
        log.Fatal(err)
    }

    pw.Close()
    <-status
}
You are closing the pipe before the deferred Close calls on the gzipWriter and tarWriter. There's no error, because you're not checking the error on either of those close calls. You need to close the tarWriter, then the gzipWriter, then the PipeWriter, in that order.
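For illustration, a sketch of that shutdown order if you keep the pipe (replacing the deferred Closes and the final two lines of the test):
// Close in dependency order so each layer flushes its footer,
// then close the pipe writer so the copying goroutine sees EOF.
if err := tarWriter.Close(); err != nil {
    log.Fatal(err)
}
if err := gzipWriter.Close(); err != nil {
    log.Fatal(err)
}
pw.Close()
<-status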
However, there's no reason for the pipe at all in this code, and you can remove the goroutine and the associated coordination altogether if you write directly to the file.
tarFile, err := os.OpenFile("/tmp/test.tar.gz", os.O_RDWR|os.O_CREATE, 0644)
if err != nil {
    log.Fatal(err)
}
defer tarFile.Close()

gzipWriter := gzip.NewWriter(tarFile)
defer gzipWriter.Close()

tarWriter := tar.NewWriter(gzipWriter)
defer tarWriter.Close()
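Note that the deferred Close calls run in reverse (LIFO) order when the function returns: tarWriter first, then gzipWriter, then the file, which is exactly the dependency order the tar and gzip footers require.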

Read whole data with Golang net.Conn.Read

So I'm building a network app in Go, and I've seen that Conn.Read reads into a fixed-size byte slice, which I created with make([]byte, 2048). The problem is that I don't know the exact length of the content in advance, so it could be too much or not enough.
My question is: how can I read exactly the right amount of data? I think I have to use bufio, but I'm not sure.
It depends heavily on what you're trying to do and what kind of data you're expecting. For example, if you just want to read until EOF, you could use something like this:
package main

import (
    "fmt"
    "io"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "google.com:80")
    if err != nil {
        fmt.Println("dial error:", err)
        return
    }
    defer conn.Close()
    fmt.Fprintf(conn, "GET / HTTP/1.0\r\n\r\n")

    buf := make([]byte, 0, 4096) // big buffer
    tmp := make([]byte, 256)     // using a small tmp buffer for demonstration
    for {
        n, err := conn.Read(tmp)
        if err != nil {
            if err != io.EOF {
                fmt.Println("read error:", err)
            }
            break
        }
        // fmt.Println("got", n, "bytes.")
        buf = append(buf, tmp[:n]...)
    }
    fmt.Println("total size:", len(buf))
    // fmt.Println(string(buf))
}
Edit: for completeness' sake, and for @fabrizioM's great suggestion, which completely skipped my mind:
package main

import (
    "bytes"
    "fmt"
    "io"
    "net"
)

func main() {
    conn, err := net.Dial("tcp", "google.com:80")
    if err != nil {
        fmt.Println("dial error:", err)
        return
    }
    defer conn.Close()
    fmt.Fprintf(conn, "GET / HTTP/1.0\r\n\r\n")

    var buf bytes.Buffer
    io.Copy(&buf, conn)
    fmt.Println("total size:", buf.Len())
}
You can use the ioutil.ReadAll function:
import (
    "fmt"
    "io/ioutil"
    "net"
)

func whois(domain, server string) ([]byte, error) {
    conn, err := net.Dial("tcp", server+":43")
    if err != nil {
        return nil, err
    }
    defer conn.Close()
    fmt.Fprintf(conn, "%s\r\n", domain)
    return ioutil.ReadAll(conn)
}
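This works because WHOIS servers close the connection after sending their reply, so ioutil.ReadAll simply reads until EOF. A call might look like whois("example.com", "whois.verisign-grs.com") (the server name here is just an example).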
You can read the data line by line like this:
import (
    "bufio"
    "net/textproto"
)
....
reader := bufio.NewReader(conn)
tp := textproto.NewReader(reader)
defer conn.Close()
for {
    // read one line (ended with \n or \r\n)
    line, err := tp.ReadLine()
    if err != nil {
        break // EOF or read error
    }
    // do something with the line here: concat, handle, etc.
    _ = line
}
....

How can I efficiently download a large file using Go?

Is there a way to download a large file using Go that will store the content directly into a file instead of storing it all in memory before writing it to a file? Because the file is so big, storing it all in memory before writing it to a file is going to use up all the memory.
I'll assume you mean download via http (error checks omitted for brevity):
import (
    "io"
    "net/http"
    "os"
)
...
out, err := os.Create("output.txt")
defer out.Close()
...
resp, err := http.Get("http://example.com/")
defer resp.Body.Close()
...
n, err := io.Copy(out, resp.Body)
The http.Response's Body is a Reader, so you can use any function that takes a Reader and, e.g., read a chunk at a time rather than all at once. In this specific case, io.Copy() does the grunt work for you.
A more descriptive version of Steve M's answer.
import (
    "fmt"
    "io"
    "net/http"
    "os"
)

func downloadFile(filepath string, url string) (err error) {
    // Create the file
    out, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer out.Close()

    // Get the data
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    // Check server response
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("bad status: %s", resp.Status)
    }

    // Write the body to file
    _, err = io.Copy(out, resp.Body)
    if err != nil {
        return err
    }

    return nil
}
The answer selected above using io.Copy is exactly what you need, but if you are interested in additional features like resuming broken downloads, auto-naming files, checksum validation, or monitoring the progress of multiple downloads, check out the grab package.
Here is a sample: https://github.com/thbar/golang-playground/blob/master/download-files.go
Also, here is some code that might help you:
func HTTPDownload(uri string) ([]byte, error) {
    fmt.Printf("HTTPDownload From: %s.\n", uri)
    res, err := http.Get(uri)
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()
    d, err := ioutil.ReadAll(res.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("ReadFile: Size of download: %d\n", len(d))
    return d, err
}

func WriteFile(dst string, d []byte) error {
    fmt.Printf("WriteFile: Size of download: %d\n", len(d))
    err := ioutil.WriteFile(dst, d, 0444)
    if err != nil {
        log.Fatal(err)
    }
    return err
}

func DownloadToFile(uri string, dst string) {
    fmt.Printf("DownloadToFile From: %s.\n", uri)
    if d, err := HTTPDownload(uri); err == nil {
        fmt.Printf("downloaded %s.\n", uri)
        if WriteFile(dst, d) == nil {
            fmt.Printf("saved %s as %s\n", uri, dst)
        }
    }
}
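Note that this last snippet buffers the entire download in memory with ioutil.ReadAll before writing it out, which is exactly what the question is trying to avoid; for large files, prefer the io.Copy approaches above. (On Go 1.16+, io.ReadAll and os.WriteFile replace the deprecated ioutil equivalents.)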
