Go: Reading and writing compressed gob to file

This does not appear to work correctly and I'm not sure what I'm doing wrong. I'm attempting to convert a map into a gob, gzip the binary and save it to a file, then later read it back.
type Object struct {
    mystruct map[string][]scorer
}

type scorer struct {
    category int
    score    float64
}

func (t *Object) Load(filename string) error {
    fi, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer fi.Close()
    fz, err := gzip.NewReader(fi)
    if err != nil {
        return err
    }
    defer fz.Close()
    decoder := gob.NewDecoder(fz)
    err = decoder.Decode(&t.mystruct)
    if err != nil {
        return err
    }
    return nil
}

func (t *Object) Save(filename string) error {
    fi, err := os.Create(filename)
    if err != nil {
        return err
    }
    defer fi.Close()
    fz := gzip.NewWriter(fi)
    defer fz.Close()
    encoder := gob.NewEncoder(fz)
    err = encoder.Encode(t.mystruct)
    if err != nil {
        return err
    }
    return nil
}
Something is saved to a file and the gzip appears to be valid, but it is either saving nothing or not loading it back again.
I'm also not sure whether I'm approaching this correctly: I'm new to Go and, coming from PHP, I'm finding it difficult to get my head around readers and writers.
Any ideas?

Your problem has nothing to do with Readers and Writers: you simply cannot encode or decode unexported fields, and all of your fields are unexported (lowercase). You'll have to use Mystruct, Category, and Score, or write your own BinaryMarshal/BinaryUnmarshal as explained in http://golang.org/pkg/encoding/gob/#example__encodeDecode
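For illustration, a minimal sketch of what the exported version of the question's types could look like (the names are just capitalized forms of the originals):

type Object struct {
    Mystruct map[string][]Scorer // exported so gob can encode it
}

type Scorer struct { // fields must be exported for gob as well
    Category int
    Score    float64
}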

Related

Zip a slice of byte into another slice in Golang

I want to achieve exactly the opposite of the solution given here, zipping a slice of bytes into another slice of bytes -
Convert zipped []byte to unzip []byte golang code
Something like -
func ZipBytes(unzippedBytes []byte) ([]byte, error) {
    // ...
}
[I am going to upload that zipped file as multipart form data for a POST request]
You can compress directly into memory using a bytes.Buffer.
The following example uses compress/zlib since it is the opposite of the example given in the question. Depending on your use case you could easily change it to compress/gzip as well (very similar APIs).
package data_test

import (
    "bytes"
    "compress/zlib"
    "io"
    "testing"
)

func compress(buf []byte) ([]byte, error) {
    var out bytes.Buffer
    w := zlib.NewWriter(&out)
    if _, err := w.Write(buf); err != nil {
        return nil, err
    }
    if err := w.Close(); err != nil {
        return nil, err
    }
    return out.Bytes(), nil
}

func decompress(buf []byte) (_ []byte, e error) {
    r, err := zlib.NewReader(bytes.NewReader(buf))
    if err != nil {
        return nil, err
    }
    defer func() {
        // Surface the Close error if no other error was already set.
        if err := r.Close(); e == nil {
            e = err
        }
    }()
    return io.ReadAll(r)
}

func TestRoundtrip(t *testing.T) {
    want := []byte("test data")
    zdata, err := compress(want)
    if err != nil {
        t.Fatalf("compress: %v", err)
    }
    got, err := decompress(zdata)
    if err != nil {
        t.Fatalf("decompress: %v", err)
    }
    if !bytes.Equal(want, got) {
        t.Errorf("roundtrip: got = %q; want = %q", got, want)
    }
}
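Since the APIs really are that close, here is a sketch of the same compress helper transliterated to compress/gzip; this is an untested variant for illustration, not part of the original test:

func compressGzip(buf []byte) ([]byte, error) {
    var out bytes.Buffer
    w := gzip.NewWriter(&out) // compress/gzip mirrors the zlib writer API
    if _, err := w.Write(buf); err != nil {
        return nil, err
    }
    if err := w.Close(); err != nil { // flush and write the gzip footer
        return nil, err
    }
    return out.Bytes(), nil
}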

Zip a Directory and not Have the Result Saved in File System

I am able to zip a file using logic similar to the zip writer seen here.
This results in a slice of bytes ([]byte) being created within the bytes.Buffer object that is returned. I would just like to know if there is any way I can upload this 'zipped' slice of bytes to an API endpoint that expects a 'multipart/form-data' request body (without having to save it locally).
Supplementary information:
I have code that utilizes this when compressing a folder. I am able to successfully execute an HTTP POST request with the zip file to the endpoint with this logic.
However, this unfortunately saves zipped files in a user's local file system. I would like to try to avoid this :)
You can create a multipart writer and write the zipped []byte data into a form field, using whatever field name and file name you like, as below.
func addZipFileToReq(zipped []byte) (*http.Request, error) {
    body := bytes.NewBuffer(nil)
    writer := multipart.NewWriter(body)
    part, err := writer.CreateFormFile(`fileField`, `filename`)
    if err != nil {
        return nil, err
    }
    _, err = part.Write(zipped)
    if err != nil {
        return nil, err
    }
    err = writer.Close()
    if err != nil {
        return nil, err
    }
    r, err := http.NewRequest(http.MethodPost, "https://example.com", body)
    if err != nil {
        return nil, err
    }
    r.Header.Set("Content-Type", writer.FormDataContentType())
    return r, nil
}
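A possible way to wire this up, assuming zipped holds the compressed bytes from a helper like the one in the previous answer (the client choice and error handling are placeholders):

req, err := addZipFileToReq(zipped)
if err != nil {
    return err
}
resp, err := http.DefaultClient.Do(req)
if err != nil {
    return err
}
defer resp.Body.Close()
// TODO check resp.StatusCode and read the response body as needed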
If you want to stream-upload the zip, you should be able to do so with io.Pipe. The following is an incomplete and untested example to demonstrate the general idea. To make it work you'll need to modify it and potentially fix whatever bugs you encounter.
func UploadReader(r io.Reader) error {
    req, err := http.NewRequest("POST", "<UPLOAD_URL>", r)
    if err != nil {
        return err
    }
    // TODO set necessary headers (content type, auth, etc)
    res, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer res.Body.Close()
    if res.StatusCode != 200 {
        return errors.New("not ok")
    }
    return nil
}

func ZipDir(dir string, w io.Writer) error {
    zw := zip.NewWriter(w)
    defer zw.Close()
    return filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        if !fi.Mode().IsRegular() {
            return nil
        }
        header, err := zip.FileInfoHeader(fi)
        if err != nil {
            return err
        }
        header.Name = path
        header.Method = zip.Deflate
        w, err := zw.CreateHeader(header)
        if err != nil {
            return err
        }
        f, err := os.Open(path)
        if err != nil {
            return err
        }
        defer f.Close()
        if _, err := io.Copy(w, f); err != nil {
            return err
        }
        return nil
    })
}

func UploadDir(dir string) error {
    r, w := io.Pipe()
    // Buffered so that a goroutine erroring after the first error has been
    // received does not block forever on the send.
    ch := make(chan error, 2)
    wg := sync.WaitGroup{}
    wg.Add(1)
    go func() {
        defer wg.Done()
        defer w.Close()
        if err := ZipDir(dir, w); err != nil {
            ch <- err
        }
    }()
    wg.Add(1)
    go func() {
        defer wg.Done()
        defer r.Close()
        if err := UploadReader(r); err != nil {
            ch <- err
        }
    }()
    go func() {
        wg.Wait()
        close(ch)
    }()
    return <-ch
}

Golang function generalization for different packages

Imagine these functions need to be used; how can I make these calls generic so that I don't repeat almost the same code?
With "encoding/csv":
func getDataFromCSVFiles(files []string) (error, Data) {
    data := Data{}
    for _, file := range files {
        f, err := os.Open(file)
        if err != nil {
            panic(err)
            return err, data
        }
        defer f.Close()
        r := charmap.ISO8859_1.NewDecoder().Reader(f)
        reader := csv.NewReader(r)
        for i := 1; ; i++ {
            rec, err := reader.Read()
            if i == 1 {
                // Skipping header
                continue
            }
            if err != nil {
                if err == io.EOF {
                    break
                }
                // TODO log error line and csv filename
                log.Fatal(err)
            }
            addWorkbook(rec, &data)
        }
    }
    return nil, data
}
and with import fw "github.com/hduplooy/gofixedwidth", which is almost the same except it calls fw.NewReader:
func getDataFromPRNFiles(files []string) (error, Data) {
    data := Data{}
    for _, file := range files {
        f, err := os.Open(file)
        if err != nil {
            panic(err)
            return err, data
        }
        defer f.Close()
        r := charmap.ISO8859_1.NewDecoder().Reader(f)
        reader := fw.NewReader(r)
        for i := 1; ; i++ {
            rec, err := reader.Read()
            if i == 1 {
                // Skipping header
                continue
            }
            if err != nil {
                if err == io.EOF {
                    break
                }
                // TODO log error line and csv filename
                log.Fatal(err)
            }
            addWorkbook(rec, &data)
        }
    }
    return nil, data
}
The only apparent difference is:
reader := csv.NewReader(r)
versus:
reader := fw.NewReader(r)
I'm not sure what fw is but presumably both readers implement a common interface:
type StringSliceReader interface {
    Read() ([]string, error)
}
So you could pass the openers (csv.NewReader and fw.NewReader) as function arguments:
func getDataFromFiles(files []string, newReader func(r io.Reader) StringSliceReader) (error, Data) {
    //...
}
but you'd need to wrap them in little functions to get around the return types:
func newCSVReader(r io.Reader) StringSliceReader {
    return csv.NewReader(r)
}

func newFWReader(r io.Reader) StringSliceReader {
    return fw.NewReader(r)
}
Also, defer queues up things to execute when the function exits, not on the next iteration of a loop. So if you do this:
for _, file := range files {
    f, err := os.Open(file)
    if err != nil {
        panic(err)
        return err, data
    }
    defer f.Close()
    //...
}
and files has a hundred entries then you'll have a hundred open files before any of them are closed. You probably want to move that loop body to a separate function so that you only have one file open at a time.
Furthermore, error is usually the last return value from a function so you should return data, err to be more idiomatic.
The result could look something like this:
type StringSliceReader interface {
    Read() ([]string, error)
}

type NewReader func(r io.Reader) StringSliceReader

func newCSVReader(r io.Reader) StringSliceReader {
    return csv.NewReader(r)
}

func newFWReader(r io.Reader) StringSliceReader {
    return fw.NewReader(r)
}

func getDataFrom(file string, data *Data, newReader NewReader) error {
    f, err := os.Open(file)
    if err != nil {
        return err
    }
    defer f.Close()
    r := charmap.ISO8859_1.NewDecoder().Reader(f)
    reader := newReader(r)
    for i := 1; ; i++ {
        rec, err := reader.Read()
        if i == 1 {
            continue // skip header
        }
        if err != nil {
            if err == io.EOF {
                break
            }
            log.Fatal(err)
        }
        addWorkbook(rec, data)
    }
    return nil
}
func getDataFromFiles(files []string, newReader NewReader) (Data, error) {
    data := Data{}
    for _, file := range files {
        err := getDataFrom(file, &data, newReader)
        if err != nil {
            return data, err
        }
    }
    return data, nil
}
and you could say getDataFromFiles(files, newCSVReader) to read CSVs or getDataFromFiles(files, newFWReader) to read FW files. If you want to read from something else, you'd just need a NewReader function and something that implements the StringSliceReader interface.
You might want to bury/hide the charmap.ISO8859_1.NewDecoder().Reader(f) stuff inside the NewReader functions to make it easier to read non-Latin-1 encoded files. You could also replace newReader NewReader with a map[string]NewReader in getDataFromFiles and choose the NewReader to use based on the file's extension or other format identifier.
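To make that last suggestion concrete, here is a hedged sketch of dispatching on file extension; the extension keys and the error message are assumptions:

var readersByExt = map[string]NewReader{
    ".csv": newCSVReader,
    ".prn": newFWReader, // assumed extension for the fixed-width files
}

func getDataFromFilesByExt(files []string) (Data, error) {
    data := Data{}
    for _, file := range files {
        newReader, ok := readersByExt[strings.ToLower(filepath.Ext(file))]
        if !ok {
            return data, fmt.Errorf("no reader registered for %q", file)
        }
        if err := getDataFrom(file, &data, newReader); err != nil {
            return data, err
        }
    }
    return data, nil
}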

Golang most efficient way to invoke methods together

I'm looking for the most efficient way to invoke a couple of methods together.
Basically, what I'm trying to do is invoke those methods together and, if something went wrong, return an error, else return the struct type.
This code is working, but I can't get the struct type or the error back, and I'm not sure if it's the correct way.
go func() (struct, err) {
    struct, err = sm.MethodA() // return struct type or error
    err = sm.MethodB()         // return error or nil
    return struct, err
}()
In Go, it's idiomatic to return the two values and check the error against nil.
For example:
func myFunc(sm SomeStruct) (MyStruct, error) {
    s, err := sm.MethodA()
    if err != nil {
        return nil, err
    }
    if err := sm.MethodB(); err != nil {
        return nil, err
    }
    return s, nil
}
One thing to note is that you're running your function in a goroutine; any return value inside that goroutine won't be returned to your main goroutine.
In order to get the return values from that goroutine, you must use channels that wait for the values.
In your case
errChan := make(chan error)
retChan := make(chan SomeStructType)

go func() {
    myVal, err := sm.MethodA()
    if err != nil {
        errChan <- err
        return
    }
    if err := sm.MethodB(); err != nil {
        errChan <- err
        return
    }
    retChan <- myVal
}()

select {
case err := <-errChan:
    fmt.Println(err)
case val := <-retChan:
    fmt.Printf("My value: %v\n", val)
}
You can mess around with it here to make more sense out of it:
http://play.golang.org/p/TtfFIZerhk

How can I efficiently download a large file using Go?

Is there a way to download a large file using Go that will store the content directly into a file instead of holding it all in memory before writing it out? The file is so big that keeping it all in memory before writing it to a file would use up all the memory.
I'll assume you mean download via http (error checks omitted for brevity):
import ("net/http"; "io"; "os")
...
out, err := os.Create("output.txt")
defer out.Close()
...
resp, err := http.Get("http://example.com/")
defer resp.Body.Close()
...
n, err := io.Copy(out, resp.Body)
The http.Response's Body is a Reader, so you can use any function that takes a Reader to, for example, read a chunk at a time rather than all at once. In this specific case, io.Copy() does the grunt work for you.
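For illustration, here is roughly the chunk-at-a-time loop that io.Copy performs for you, assuming the out file and resp response from the snippet above and a surrounding function that returns an error (the buffer size is an arbitrary choice):

buf := make([]byte, 32*1024) // reuse one 32 KiB buffer for every chunk
for {
    n, err := resp.Body.Read(buf)
    if n > 0 {
        // Write out what was read before inspecting the error.
        if _, werr := out.Write(buf[:n]); werr != nil {
            return werr
        }
    }
    if err == io.EOF {
        break // download complete
    }
    if err != nil {
        return err
    }
}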
Here is a more descriptive version of Steve M's answer:
import (
    "fmt"
    "io"
    "net/http"
    "os"
)

func downloadFile(filepath string, url string) (err error) {
    // Create the file
    out, err := os.Create(filepath)
    if err != nil {
        return err
    }
    defer out.Close()

    // Get the data
    resp, err := http.Get(url)
    if err != nil {
        return err
    }
    defer resp.Body.Close()

    // Check server response
    if resp.StatusCode != http.StatusOK {
        return fmt.Errorf("bad status: %s", resp.Status)
    }

    // Write the body to the file
    _, err = io.Copy(out, resp.Body)
    if err != nil {
        return err
    }
    return nil
}
The answer selected above using io.Copy is exactly what you need, but if you are interested in additional features like resuming broken downloads, auto-naming files, checksum validation, or monitoring the progress of multiple downloads, check out the grab package.
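For completeness, a minimal sketch of grab's convenience API; the import path and URL here are assumptions, so check the package documentation for the current module path:

import "github.com/cavaliergopher/grab/v3" // assumed current module path

// Download into the current directory; grab streams to disk rather than
// buffering the whole file in memory.
resp, err := grab.Get(".", "http://example.com/bigfile.zip") // hypothetical URL
if err != nil {
    log.Fatal(err)
}
fmt.Println("saved to", resp.Filename)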
Here is a sample: https://github.com/thbar/golang-playground/blob/master/download-files.go
I'll also give you some code that might help. Note that it reads the entire download into memory before writing it out, so it does not address the large-file concern:
func HTTPDownload(uri string) ([]byte, error) {
    fmt.Printf("HTTPDownload From: %s.\n", uri)
    res, err := http.Get(uri)
    if err != nil {
        log.Fatal(err)
    }
    defer res.Body.Close()
    d, err := io.ReadAll(res.Body)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("ReadFile: Size of download: %d\n", len(d))
    return d, err
}

func WriteFile(dst string, d []byte) error {
    fmt.Printf("WriteFile: Size of download: %d\n", len(d))
    err := os.WriteFile(dst, d, 0444)
    if err != nil {
        log.Fatal(err)
    }
    return err
}

func DownloadToFile(uri string, dst string) {
    fmt.Printf("DownloadToFile From: %s.\n", uri)
    if d, err := HTTPDownload(uri); err == nil {
        fmt.Printf("downloaded %s.\n", uri)
        if WriteFile(dst, d) == nil {
            fmt.Printf("saved %s as %s\n", uri, dst)
        }
    }
}
