I'm using io.Copy() to copy a file of about 700 MB, but it causes an out-of-memory error:
bodyBuf := &bytes.Buffer{}
bodyWriter := multipart.NewWriter(bodyBuf)
// key step
fileWriter, err := bodyWriter.CreateFormFile(paramName, fileName)
if err != nil {
    return nil, err
}
file, err := os.Open(fileName) // the file size is about 700 MB
if err != nil {
    return nil, err
}
defer file.Close()
// io.Copy
copyLen, err := io.Copy(fileWriter, file) // this causes the out-of-memory error
if err != nil {
    fmt.Println("io.Copy(): ", err)
    return nil, err
}
The error message is as follows:
runtime: memory allocated by OS (0x752cf000) not in usable range [0x18700000,0x98700000)
runtime: out of memory: cannot allocate 1080229888-byte block (1081212928 in use)
fatal error: out of memory
Even when I allocate enough memory for the buffer up front, it runs out of memory in bodyWriter.CreateFormFile():
buf := make([]byte, 766509056)
bodyBuf := bytes.NewBuffer(buf)
bodyWriter := multipart.NewWriter(bodyBuf)
fileWriter, err := bodyWriter.CreateFormFile(paramName, fileName) // out of memory
if err != nil {
    return nil, err
}
That's because you are copying to bodyBuf, which is an in-memory buffer, forcing Go to try to allocate a block of memory as big as the entire file.
Based on your use of multipart, it looks like you are trying to stream the file over HTTP. In that case, don't pass a bytes.Buffer to multipart.NewWriter; pass your HTTP connection directly instead, for example through an io.Pipe, as sketched below.
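Here is a minimal sketch of that idea using io.Pipe, assuming you are posting with net/http; uploadFile and url are illustrative names, not from the original code:

package main

import (
    "io"
    "mime/multipart"
    "net/http"
    "os"
)

// uploadFile streams fileName as a multipart form field without ever
// buffering the whole file in memory: the goroutine writes into the
// pipe while http.Post reads from the other end.
func uploadFile(url, paramName, fileName string) (*http.Response, error) {
    pr, pw := io.Pipe()
    bodyWriter := multipart.NewWriter(pw)

    go func() {
        file, err := os.Open(fileName)
        if err != nil {
            pw.CloseWithError(err) // propagate the error to the reading side
            return
        }
        defer file.Close()

        fileWriter, err := bodyWriter.CreateFormFile(paramName, fileName)
        if err != nil {
            pw.CloseWithError(err)
            return
        }
        if _, err := io.Copy(fileWriter, file); err != nil {
            pw.CloseWithError(err)
            return
        }
        // Finish the multipart body, then signal EOF to the reader.
        pw.CloseWithError(bodyWriter.Close())
    }()

    return http.Post(url, bodyWriter.FormDataContentType(), pr)
}

With this shape, memory usage stays around io.Copy's internal buffer size (32 KB by default) regardless of how large the file is.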
I'm getting the following golangci-lint message:
testdrive/utils.go:92:16: G110: Potential DoS vulnerability via decompression bomb (gosec)
if _, err := io.Copy(targetFile, fileReader); err != nil {
^
I read the corresponding CWE, but I'm not clear on how this is expected to be corrected.
Please offer pointers.
func unzip(archive, target string) error {
    reader, err := zip.OpenReader(archive)
    if err != nil {
        return err
    }
    for _, file := range reader.File {
        path := filepath.Join(target, file.Name) // nolint: gosec
        if file.FileInfo().IsDir() {
            if err := os.MkdirAll(path, file.Mode()); err != nil {
                return err
            }
            continue
        }
        fileReader, err := file.Open()
        if err != nil {
            return err
        }
        defer fileReader.Close() // nolint: errcheck
        targetFile, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, file.Mode())
        if err != nil {
            return err
        }
        defer targetFile.Close() // nolint: errcheck
        if _, err := io.Copy(targetFile, fileReader); err != nil {
            return err
        }
    }
    return nil
}
The warning you get comes from a rule provided in gosec.
The rule specifically detects usage of io.Copy on file decompression.
This is a potential issue because io.Copy:
copies from src to dst until either EOF is reached on src or an error occurs.
So a malicious payload might cause your program to decompress an unexpectedly large amount of data and run out of memory, causing the denial of service mentioned in the warning message.
In particular, gosec checks the AST of your program (see the rule's source) and warns you about usage of io.Copy or io.CopyBuffer together with any one of the following:
"compress/gzip".NewReader
"compress/zlib".NewReader or NewReaderDict
"compress/bzip2".NewReader
"compress/flate".NewReader or NewReaderDict
"compress/lzw".NewReader
"archive/tar".NewReader
"archive/zip".NewReader
"*archive/zip".File.Open
Using io.CopyN removes the warning because, quoting its documentation, it "copies n bytes (or until an error) from src to dst", which gives you, the program writer, control over how many bytes are copied. You could set n based on the available resources of your application, or copy in chunks.
Based on the various pointers provided, I replaced
if _, err := io.Copy(targetFile, fileReader); err != nil {
    return err
}
with
for {
    _, err := io.CopyN(targetFile, fileReader, 1024)
    if err != nil {
        if err == io.EOF {
            break
        }
        return err
    }
}
PS: while this helps the memory footprint, it wouldn't stop a DoS attack based on a very long or infinite stream, since the loop still copies until EOF; a variant with a hard size cap is sketched below.
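One way to enforce a hard cap is to ask io.CopyN for one byte more than the largest entry you are willing to extract and check what came back. This sketch would replace the copy inside the unzip loop; maxFileSize is a hypothetical limit chosen for your application, not something from the original code:

// maxFileSize is an assumed per-entry limit (100 MiB here, purely illustrative).
const maxFileSize = 100 << 20

written, err := io.CopyN(targetFile, fileReader, maxFileSize+1)
if err != nil && err != io.EOF {
    return err
}
if written > maxFileSize {
    return fmt.Errorf("zip entry too large: more than %d bytes", maxFileSize)
}

io.CopyN returns io.EOF when the source ends before n bytes, so an entry at or under the limit finishes cleanly, while anything larger is rejected after at most maxFileSize+1 bytes.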
Assuming that you're working on compressed data, you need to use io.CopyN.
You can try a workaround with the --nocompress flag, but this will cause the data to be included uncompressed.
See the following PR and the related issue: https://github.com/go-bindata/go-bindata/pull/50
I'm trying to download and decrypt HLS streams by using io.ReadFull to process the data in chunks to conserve memory:
Irrelevant parts of the code have been left out for simplicity.
func main() {
    f, _ := os.Create("out.ts")
    for _, v := range mediaPlaylist {
        resp, _ := http.Get(v.URI)
        for {
            r, err := decryptHLS(key, iv, resp.Body)
            if err == io.EOF {
                break
            } else if err != nil && err != io.ErrUnexpectedEOF {
                panic(err)
            }
            io.Copy(f, r)
        }
    }
}
func decryptHLS(key []byte, iv []byte, r io.Reader) (io.Reader, error) {
    block, _ := aes.NewCipher(key)
    buf := make([]byte, 8192)
    mode := cipher.NewCBCDecrypter(block, iv)
    n, err := io.ReadFull(r, buf)
    if err != nil && err != io.ErrUnexpectedEOF {
        return nil, err
    }
    mode.CryptBlocks(buf, buf)
    return bytes.NewReader(buf[:n]), err
}
At first this seems to work: the file size is correct and there are no errors during download,
but the video is corrupted. Not completely, as the file is still recognized as a video, but image and sound are distorted.
If I change the code to use ioutil.ReadAll instead, the final video files will no longer be corrupted:
func main() {
    f, _ := os.Create("out.ts")
    for _, v := range mediaPlaylist {
        resp, _ := http.Get(v.URI)
        segment, _ := ioutil.ReadAll(resp.Body)
        r := decryptHLS(key, iv, &segment)
        io.Copy(f, r)
    }
}
func decryptHLS(key []byte, iv []byte, s *[]byte) io.Reader {
    block, _ := aes.NewCipher(key)
    mode := cipher.NewCBCDecrypter(block, iv)
    mode.CryptBlocks(*s, *s)
    return bytes.NewReader(*s)
}
Any ideas why it works correctly when reading the entire segment into memory, and not when using io.ReadFull and processing it in chunks?
Internally, NewCBCDecrypter makes a copy of your iv, so each decrypter you create starts again from the initial IV rather than from the chaining state left behind by previous decryptions. Since you construct a new decrypter for every chunk, every chunk after the first is decrypted against the wrong IV.
Create the decrypter once, and you should be able to keep reusing it to decrypt chunk by chunk (assuming each chunk size is a multiple of the cipher's block size), as sketched below.
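A minimal sketch of that fix under the same assumptions and imports as the question (crypto/aes, crypto/cipher, io); decryptSegment is an illustrative name, and 8192 is a multiple of aes.BlockSize so CBC chunk boundaries line up:

// decryptSegment creates the CBC decrypter once per segment and reuses
// it for every chunk, so the chaining state carries over between chunks.
func decryptSegment(key, iv []byte, r io.Reader, w io.Writer) error {
    block, err := aes.NewCipher(key)
    if err != nil {
        return err
    }
    mode := cipher.NewCBCDecrypter(block, iv)
    buf := make([]byte, 8192) // multiple of aes.BlockSize
    for {
        n, err := io.ReadFull(r, buf)
        if n > 0 {
            mode.CryptBlocks(buf[:n], buf[:n])
            if _, werr := w.Write(buf[:n]); werr != nil {
                return werr
            }
        }
        if err == io.EOF || err == io.ErrUnexpectedEOF {
            return nil // segment finished
        }
        if err != nil {
            return err
        }
    }
}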
I have a Go file server that can receive requests for files up to 10 GB in size. To keep memory usage low I read the multipart form data into a tmp file. I know that behind the scenes FormFile does the same, but I still need to transfer it to a regular file for some post-upload processing.
f, header, err := r.FormFile("file")
if err != nil {
    return nil, fmt.Errorf("could not get file from request %w", err)
}
tmpFile, err := ioutil.TempFile("", "oriio-")
if err != nil {
    return nil, err
}
if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
After this I need to grab the first 261 bytes of the file to determine its MIME type.
head := make([]byte, 261)
if _, err := tmpFile.Read(head); err != nil {
    return nil, err
}
The issue I'm running into is that if I try to read directly from tmpFile, the byte slice contains 261 zero bytes when I print it with fmt.Printf("%x", head), i.e. invalid data. To verify that the data was valid I saved it to a regular file and opened it on my system; the file (in this case an image) was perfectly intact, so it is not a corrupt-file issue. To get around the problem I now close the tmp file and then reopen it, and that seems to fix everything.
tmpFile, err := ioutil.TempFile("", "oriio-")
if err != nil {
    return nil, err
}
if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
tmpFile.Close()
tmpFile, err = os.Open(tmpFile.Name())
if err != nil {
    panic(err)
}
head := make([]byte, 261)
if _, err := tmpFile.Read(head); err != nil {
    return nil, err
}
Now when I print out the head byte array the proper content is printed. Why is this? Is there some sort of Sync or Flush I have to do with the original tmp file to make it work?
Reading or writing a file advances the current offset in the file. After the copy, tmpFile is positioned at the end, so reading from it will read 0 bytes. You have to seek back first if you want to read from the beginning of the file:
io.Copy(tmpFile, f)
tmpFile.Seek(0, io.SeekStart)
tmpFile.Read(head)
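With error handling, the same idea looks like this; it is a sketch assuming the tmpFile and f from the question, using io.ReadFull so that a short read of the 261-byte header is also reported:

if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
// Rewind to the start before reading the header back.
if _, err := tmpFile.Seek(0, io.SeekStart); err != nil {
    return nil, err
}
head := make([]byte, 261)
if _, err := io.ReadFull(tmpFile, head); err != nil {
    return nil, err
}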
I am currently learning Go and I am trying to send the contents of a directory to another machine over a plain TCP connection using Go's net package.
It works fine with individual files and small folders, but I run into issues when the folder contains many subfolders and larger files. I am using the filepath.Walk function to traverse all files in the given directory. For each file or directory I send a header that gives the receiver the file name, file size, and an isDir flag, so I know how long I need to keep reading when reading the content. The issue I am having is that after a while, when reading a header, I am actually reading file content of the previous file, even though I have already read that file from the connection.
Here is the writer side. I simply traverse over the directory.
func transferDir(session *Session, dir string) error {
    return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        header := Header{Name: info.Name(), Size: info.Size(), Path: path}
        if info.IsDir() {
            header.SetDirBit()
            session.WriteHeader(header)
            return nil // nothing more to write
        }
        // content is a file: write the file now, chunk by chunk
        file, err := os.Open(path)
        if err != nil {
            return err
        }
        defer file.Close()
        inf, err := file.Stat()
        if err != nil {
            return err
        }
        header.Size = inf.Size() // get the true size of the file
        session.WriteHeader(header)
        buf := make([]byte, BUF_SIZE)
        for {
            n, err := file.Read(buf)
            if err != nil {
                if err == io.EOF {
                    session.Write(buf[:n])
                    session.Flush()
                    break
                }
                log.Println(err)
                return err
            }
            session.Write(buf[:n])
            session.Flush()
        }
        return nil
    })
}
And here is the reader part
func (c *Clone) readFile(h Header) error {
    file, err := os.Create(h.Path)
    if err != nil {
        return err
    }
    defer file.Close()
    var receivedByts int64
    fmt.Printf("Reading File: %s Size: %d\n", h.Name, h.Size)
    for {
        if (h.Size - receivedByts) < BUF_SIZE {
            n, err := io.CopyN(file, c.sesh, h.Size-receivedByts)
            fmt.Printf("Written: %d err: %v\n", n, err)
            break
        }
        n, err := io.CopyN(file, c.sesh, BUF_SIZE)
        fmt.Printf("Written: %d err: %v\n", n, err)
        receivedByts += BUF_SIZE
        fmt.Println("Bytes Read: ", receivedByts)
    }
    return nil
}
Now the weird part is that when I am looking at the print statements I see something like:
Reading File: test.txt Size: 14024
Written 1024 nil
Bytes Read 1024
... This continues all the way to the break statement
And the total of the bytes read equals the actual file size. Yet the subsequent read for the next header returns content from the test.txt file, almost as if there is still data in a buffer somewhere, even though I think I have already read it all....
I am trying to stream out the bytes of a zip file using the io.Pipe() function in Go. I use the pipe reader to read the bytes of each file in the zip and then stream those out, and I use the pipe writer to write the bytes into the response object.
func main() {
    r, w := io.Pipe()
    // goroutine to make the write/read non-blocking
    go func() {
        defer w.Close()
        data, err := ReadBytesforEachFileFromTheZip()
        if err != nil {
            handleErr(err)
            return
        }
        err = json.NewEncoder(w).Encode(data)
        handleErr(err)
    }()
    // ... read from r here ...
}
This is not a working implementation but a structure of what I am trying to achieve. I don't want to use ioutil.ReadAll since the file is going to be very large, and Pipe() will help me avoid bringing all the data into memory. Can someone help with a working implementation using io.Pipe()?
I made it work using Go's io.Pipe(). The pipe writer writes bytes to the pipe in chunks, and the pipe reader reads from the other end. The reason for using a goroutine is to have a non-blocking write operation while simultaneous reads happen from the pipe.
Note: it's important to close the pipe writer (w.Close()) so that EOF is sent on the stream; otherwise the reader will never see the end of the stream.
func DownloadZip() ([]byte, error) {
    r, w := io.Pipe()
    defer r.Close()
    zip, err := os.Stat("temp.zip")
    if err != nil {
        return nil, err
    }
    go func() {
        f, err := os.Open(zip.Name())
        if err != nil {
            w.CloseWithError(err) // unblock the reader and hand it the error
            return
        }
        defer f.Close()
        buf := make([]byte, 1024)
        for {
            chunk, err := f.Read(buf)
            if chunk > 0 {
                if _, werr := w.Write(buf[:chunk]); werr != nil {
                    return
                }
            }
            if err == io.EOF {
                break
            }
            if err != nil {
                w.CloseWithError(err)
                return
            }
        }
        w.Close() // send EOF to the reader
    }()
    body, err := ioutil.ReadAll(r)
    if err != nil {
        return nil, err
    }
    return body, nil
}
Please let me know if someone has another way of doing it.
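One caveat with the version above: reading everything back with ioutil.ReadAll reintroduces the full-file allocation the pipe was meant to avoid. If the goal is to stream a large zip to an HTTP client, a sketch like the following keeps memory bounded end to end; the serveZip handler shape is illustrative, not from the original post:

func serveZip(w http.ResponseWriter, req *http.Request) {
    pr, pw := io.Pipe()
    go func() {
        f, err := os.Open("temp.zip")
        if err != nil {
            pw.CloseWithError(err)
            return
        }
        defer f.Close()
        // io.Copy moves the data in small chunks; no whole-file buffer.
        _, err = io.Copy(pw, f)
        pw.CloseWithError(err) // a nil err behaves like a normal Close (EOF)
    }()
    w.Header().Set("Content-Type", "application/zip")
    io.Copy(w, pr) // stream to the client as bytes arrive
}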