I am building a simple testing API in Go for uploading and downloading image files (PNG, JPEG, JPG):
/pic [POST] uploads an image and saves it to a folder; /pic [GET] downloads an image to the client.
I have successfully built /pic [POST]: the image is uploaded to the server's storage folder, and I can open the file there (on both a Windows localhost server and an Ubuntu server).
However, with /pic [GET] I can download the file to the client (my computer), but the downloaded file is somehow corrupted: when I try to open it with image viewers such as the gallery app or Photoshop, they say "It looks like we don't support this file format". So the download does not seem to be successful.
Any ideas why this happens and how I should fix it?
The Go code for downloading the picture is as follows (error handling omitted):
func PicDownload(w http.ResponseWriter, r *http.Request){
request := make(map[string]string)
reqBody, _ := ioutil.ReadAll(r.Body)
err := json.Unmarshal(reqBody, &request)
// Error handling
file, err := os.OpenFile("./resources/pic/"+request["filename"], os.O_RDONLY, 0666)
// Error handling
defer file.Close()
buffer := make([]byte, 512)
_, err = file.Read(buffer)
// Error handling
contentType := http.DetectContentType(buffer)
fileStat, _ := file.Stat()
// Set header
w.Header().Set("Content-Disposition", "attachment; filename=" + request["filename"])
w.Header().Set("Content-Type", contentType)
w.Header().Set("Content-Length", strconv.FormatInt(fileStat.Size(), 10))
// Copying the file content to response body
io.Copy(w, file)
return
}
When you read the first 512 bytes from the file to determine the content type, the underlying file's read offset moves forward by 512 bytes. When you later call io.Copy, reading continues from that position.
There are two ways to correct this.
The first is to call file.Seek(0, io.SeekStart) before the call to io.Copy(). This puts the offset back at the start of the file. It requires the least code, but it means reading the same 512 bytes from the file twice, which adds a small overhead.
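In the handler above that looks like this (a minimal sketch reusing the same variables, error handling still omitted):
buffer := make([]byte, 512)
_, err = file.Read(buffer)
contentType := http.DetectContentType(buffer)
// Rewind so io.Copy streams the whole file,
// not just everything after byte 512.
_, err = file.Seek(0, io.SeekStart)
// ... set the headers as before ...
io.Copy(w, file)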
The second solution is to create a buffer that holds the entire file, using buffer := make([]byte, fileStat.Size()), and to use that buffer both for the http.DetectContentType() call and for writing the output (with w.Write(buffer) instead of io.Copy()). The possible downside is loading the whole file into memory at once, which isn't ideal for very large files (io.Copy streams in 32 KB chunks instead).
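Sketched out (error handling again omitted), the second approach looks roughly like this:
fileStat, _ := file.Stat()
// Read the entire file into memory in one go.
buffer := make([]byte, fileStat.Size())
_, err = io.ReadFull(file, buffer)
// DetectContentType only considers the first 512 bytes,
// so passing the full buffer is fine.
contentType := http.DetectContentType(buffer)
// ... set the headers as before, then write the buffer directly ...
w.Write(buffer)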
Note: as Peter mentioned in a comment, you must also make sure users cannot traverse your filesystem by posting ../../ or similar as a filename.
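One simple guard (a sketch, not the only possible one; you may still want to reject names like "..") is to strip any directory components from the submitted name before building the path:
// filepath.Base drops every directory component, so a request for
// "../../etc/passwd" ends up opening "./resources/pic/passwd".
name := filepath.Base(request["filename"])
file, err := os.OpenFile("./resources/pic/"+name, os.O_RDONLY, 0666)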
Related
After a lot of tests, we cannot seem to match the speed of gsutil when using the GCS Go client libraries.
Even a skeleton program doing the simplest io.Copy() takes a lot longer than the simplest gsutil invocation.
ctx := context.Background()
client, err := storage.NewClient(ctx, option.WithCredentialsFile(*flags.credsFile))
bucket := client.Bucket("my_bucket")
file, _ := os.Open("path_to_file")
wc := bucket.Object("remoteFile").NewWriter(ctx)
_, _ = io.Copy(wc, file)
err = wc.Close()
We also tried io.CopyBuffer() with a 128×1024-byte buffer; better, but still slow.
Is there any way to speed up the upload while staying in Go? We don't want to call any external utilities...
It sounds like the io.Copy implementation is not GCS-aware and is doing actual byte copies (reading from the source and writing to the destination). In contrast, gsutil calls the GCS Rewrite API, which, when the source and destination are in the same location and storage class, does metadata-only copies (avoiding byte copying entirely). That is far faster, which matches what you're observing performance-wise.
Can you use a GCS-aware Go implementation, i.e. one that calls Rewrite rather than reading and writing the underlying object bytes?
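If both the source and destination already live in GCS, the Go client does expose this: ObjectHandle.CopierFrom drives the Rewrite API under the hood, so no object bytes pass through your machine. A minimal sketch (bucket and object names are placeholders); note this does not help the initial upload from local disk, which has to move the bytes somehow:
src := client.Bucket("my_bucket").Object("remoteFile")
dst := client.Bucket("my_bucket").Object("remoteFileCopy")
// Copier.Run calls the GCS Rewrite API; for same-location,
// same-storage-class copies this is a metadata-only operation.
if _, err := dst.CopierFrom(src).Run(ctx); err != nil {
    // handle error
}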
Using the Go SDK for Google Cloud Storage, I cannot find how to download files in chunks.
The Google Cloud documentation says that to download an object from Cloud Storage you should use the following:
rc, err := client.Bucket(bucket).Object(object).NewReader(ctx)
if err != nil {
return nil, err
}
defer rc.Close()
data, err := ioutil.ReadAll(rc)
if err != nil {
return nil, err
}
return data, nil
Source: https://cloud.google.com/storage/docs/downloading-objects#storage-download-object-code_sample
Given that their SDK returns an io.Reader, you don't need to worry about the underlying transfer mechanism in order to consume the download in chunks (and, quickly looking through their source, it builds on http.NewRequest, which does what you want, using the same logic).
The reason it doesn't appear to be chunked in their example is the use of ioutil.ReadAll which, although great for simple use cases, pulls all of the Reader's data into memory (and therefore also has to wait for all of the data to become available).
For a better understanding of how to deal with a Reader in steps, I recommend taking a look at https://tour.golang.org/methods/21 for a tour of io.Reader and how you can use it more efficiently.
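For example, to consume the download in fixed-size chunks instead of buffering it all at once (a sketch; the chunk size is arbitrary and the process helper is hypothetical):
rc, err := client.Bucket(bucket).Object(object).NewReader(ctx)
if err != nil {
    return err
}
defer rc.Close()

buf := make([]byte, 128*1024) // 128 KiB per read; tune to taste
for {
    n, err := rc.Read(buf)
    if n > 0 {
        process(buf[:n]) // hypothetical per-chunk handler
    }
    if err == io.EOF {
        break
    }
    if err != nil {
        return err
    }
}
return nil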
As part of a large-file upload feature, we're using the following to write a 'chunk' of bytes to a file at 'path'. This works fine on local filesystems: each chunk is correctly written at 'offset'.
f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, os.ModeAppend)
n, err := f.WriteAt(bytes, offset)
On NFS-attached storage, however, the bytes are written at the beginning of the file and not at the requested 'offset'. It also doesn't appear that the process can obtain a lock on the file over NFS. Is there a technique or workaround we could use to write to a file at 'offset'?
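One thing worth checking before blaming NFS: O_APPEND and WriteAt don't mix. O_APPEND directs every write to the current end of the file regardless of offset, and the os package documents that File.WriteAt returns an error when the file was opened with O_APPEND. Since WriteAt positions each write explicitly, a sketch without the append flag (assuming O_CREATE is acceptable for your use case):
// WriteAt supplies its own offset, so O_APPEND must not be set.
f, err := os.OpenFile(path, os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    return err
}
defer f.Close()
n, err := f.WriteAt(bytes, offset)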
I have a TCP packet connection (net.Conn) set up by listening on a port.
conn, err := ln.Accept()
I need to read the first UVarInt of the Conn.Read([]byte) buffer, which starts at index 0.
Previously, I would only need the first byte, which is easy to do using
packetSize := make([]byte, 1)
conn.Read(packetSize)
// Do stuff with packetSize[0]
However, as previously mentioned, I now need to read the first UVarInt reachable through the net.Conn.Read() method. Keep in mind that a UVarInt can be almost any length, and I cannot know its size in advance (the client doesn't send the size of the UVarInt). I do know, however, that the UVarInt starts at the very beginning of the buffer.
Wrap the connection with a bufio.Reader:
br := bufio.NewReader(conn)
Use the binary package to read an unsigned varint through the bufio.Reader:
n, err := binary.ReadUvarint(br)
Because the bufio.Reader can buffer more than the varint, you should use the bufio.Reader for all subsequent reads on the connection.
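Putting it together (a minimal sketch with abbreviated error handling; reading a packet body afterwards assumes the varint is a length prefix, which matches how the question uses packetSize):
conn, err := ln.Accept()
if err != nil {
    log.Fatal(err)
}
// Wrap once and keep using br for this connection; it may
// already have buffered bytes beyond the varint.
br := bufio.NewReader(conn)

// Read the length prefix as an unsigned varint.
packetSize, err := binary.ReadUvarint(br)
if err != nil {
    log.Fatal(err)
}

// Read the packet body through the same reader.
packet := make([]byte, packetSize)
if _, err := io.ReadFull(br, packet); err != nil {
    log.Fatal(err)
}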
I'm attempting to compose an image in memory and send it out through http.ResponseWriter without ever touching the file system.
I use the following to create a new file:
file := os.NewFile(0, "temp_destination.png")
However, I don't seem to be able to do anything at all with this file. Here is the function I'm using (called from within an http.HandleFunc handler that just sends the file's bytes to the browser); it is intended to draw a blue rectangle on a temporary file and encode it as a PNG:
func ComposeImage() []byte {
img := image.NewRGBA(image.Rect(0, 0, 640, 480))
blue := color.RGBA{0, 0, 255, 255}
draw.Draw(img, img.Bounds(), &image.Uniform{blue}, image.ZP, draw.Src)
// in memory destination file, instead of going to the file sys
file := os.NewFile(0, "temp_destination.png")
// write the image to the destination io.Writer
png.Encode(file, img)
bytes, err := ioutil.ReadAll(file)
if err != nil {
log.Fatal("Couldn't read temporary file as bytes.")
}
return bytes
}
If I remove the png.Encode call and just return the file bytes, the server hangs and does nothing forever.
Leaving the png.Encode call in results in the encoded file bytes (including some of the PNG chunks I'd expect to see) being vomited out to stderr/stdout (I can't tell which) and the server hanging indefinitely.
I assume I'm just not using os.NewFile correctly. Can anyone point me in the right direction? Alternative suggestions on how to properly perform in-memory file manipulations are welcome.
os.NewFile is a low level function that most people will never use directly. It takes an already existing file descriptor (system representation of a file) and converts it to an *os.File (Go's representation).
If you never want the picture to touch your filesystem, stay out of the os package entirely. Just treat your ResponseWriter as an io.Writer and pass it to png.Encode.
png.Encode(yourResponseWriter, img)
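Wired into a handler, that might look like this (a sketch; the handler name is made up):
func servePNG(w http.ResponseWriter, r *http.Request) {
    img := image.NewRGBA(image.Rect(0, 0, 640, 480))
    blue := color.RGBA{0, 0, 255, 255}
    draw.Draw(img, img.Bounds(), &image.Uniform{blue}, image.ZP, draw.Src)

    // Encode straight into the response: no temp file, no extra buffer.
    w.Header().Set("Content-Type", "image/png")
    if err := png.Encode(w, img); err != nil {
        log.Println("png encode:", err)
    }
}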
If you insist on writing to an "in memory file", I suggest using bytes.Buffer:
buf := new(bytes.Buffer)
png.Encode(buf, img)
return buf.Bytes()
Please have a detailed read of the NewFile documentation. NewFile does not create a new file, not at all! It sets up a Go os.File that wraps an existing file with the given file descriptor (0 in your case, which is stdin).
Serving images without files is much easier: just Encode your image to your ResponseWriter. That's what interfaces are there for. No need to write to some magic "in memory file", no need to read it back with ReadAll; plain and simple: write to your response.