G110: Potential DoS vulnerability via decompression bomb (gosec) - go

I'm getting the following golangci-lint message:
testdrive/utils.go:92:16: G110: Potential DoS vulnerability via decompression bomb (gosec)
if _, err := io.Copy(targetFile, fileReader); err != nil {
             ^
I read the corresponding CWE, but I'm still not clear on how this is expected to be corrected.
Please offer pointers.
func unzip(archive, target string) error {
    reader, err := zip.OpenReader(archive)
    if err != nil {
        return err
    }
    defer reader.Close() // nolint: errcheck
    for _, file := range reader.File {
        path := filepath.Join(target, file.Name) // nolint: gosec
        if file.FileInfo().IsDir() {
            if err := os.MkdirAll(path, file.Mode()); err != nil {
                return err
            }
            continue
        }
        fileReader, err := file.Open()
        if err != nil {
            return err
        }
        defer fileReader.Close() // nolint: errcheck
        targetFile, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, file.Mode())
        if err != nil {
            return err
        }
        defer targetFile.Close() // nolint: errcheck
        if _, err := io.Copy(targetFile, fileReader); err != nil {
            return err
        }
    }
    return nil
}

The warning you get comes from a rule provided in gosec.
The rule specifically detects usage of io.Copy on file decompression.
This is a potential issue because io.Copy:
copies from src to dst until either EOF is reached on src or an error occurs.
So, a malicious payload might cause your program to decompress an unexpectedly big amount of data and go out of memory, causing denial of service as mentioned in the warning message.
In particular, gosec checks the AST of your program and warns you about usage of io.Copy or io.CopyBuffer together with any one of the following:
"compress/gzip".NewReader
"compress/zlib".NewReader or NewReaderDict
"compress/bzip2".NewReader
"compress/flate".NewReader or NewReaderDict
"compress/lzw".NewReader
"archive/tar".NewReader
"archive/zip".NewReader
"*archive/zip".File.Open
Using io.CopyN removes the warning because (to quote its documentation) it "copies n bytes (or until an error) from src to dst", giving you, the program writer, control over how many bytes are copied. So you can pick an upper bound n based on the resources available to your application, or copy in fixed-size chunks.

Based on the pointers provided, I replaced
if _, err := io.Copy(targetFile, fileReader); err != nil {
    return err
}
with
for {
    _, err := io.CopyN(targetFile, fileReader, 1024)
    if err != nil {
        if err == io.EOF {
            break
        }
        return err
    }
}
PS: while this helps the memory footprint, it doesn't defend against a DoS attack feeding a very long or infinite stream, since the loop still copies everything ...
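To actually bound the damage, you can cap the total number of bytes you are willing to extract per file. A minimal sketch, assuming a limit of your own choosing (maxFileSize is an arbitrary example value; io and fmt must be imported):
const maxFileSize = 100 << 20 // 100 MiB; pick a limit suited to your application

// copyCapped copies at most maxFileSize bytes from src to dst and
// reports an error if src would yield more than that.
func copyCapped(dst io.Writer, src io.Reader) error {
    // Ask for one byte more than the limit: if we get it, the input is too big.
    n, err := io.CopyN(dst, src, maxFileSize+1)
    if err != nil && err != io.EOF {
        return err
    }
    if n > maxFileSize {
        return fmt.Errorf("decompressed data exceeds %d bytes", int64(maxFileSize))
    }
    return nil
}
Wrapping the source with io.LimitReader(fileReader, maxFileSize+1) achieves the same effect with a plain io.Copy.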

If you're working with compressed data, you need to use io.CopyN.
You can try a workaround with the --nocompress flag, but this will cause the data to be included uncompressed.
See the following PR and related issue: https://github.com/go-bindata/go-bindata/pull/50

Related

Recently copied file returns all 0s as byte array

I have a Go file server that can receive requests with files up to 10GB in size. To keep memory usage low, I read the multipart form data into a tmp file. I know that behind the scenes FormFile does the same, but I still need to transfer it to a regular file for some post-upload processing.
f, header, err := r.FormFile("file")
if err != nil {
    return nil, fmt.Errorf("could not get file from request %w", err)
}
tmpFile, err := ioutil.TempFile("", "oriio-")
if err != nil {
    return nil, err
}
if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
After this I need to grab the first 261 bytes of the file to determine its MIME type.
head := make([]byte, 261)
if _, err := tmpFile.Read(head); err != nil {
    return nil, err
}
The issue I'm running into is that if I try to read directly from tmpFile, the byte slice contains 261 zero bytes (invalid data) when I print it with fmt.Printf("%x", head). To verify the data is valid, I saved it to a regular file and opened it on my system; the file (in this case an image) was perfectly intact, so it is not a corrupt-file issue. To get around the problem I now close the tmp file and then reopen it again, and that seems to fix everything.
tmpFile, err := ioutil.TempFile("", "oriio-")
if err != nil {
    return nil, err
}
if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
tmpFile.Close()
tmpFile, err = os.Open(tmpFile.Name())
if err != nil {
    panic(err)
}
head := make([]byte, 261)
if _, err := tmpFile.Read(head); err != nil {
    return nil, err
}
Now when I print out the head byte array the proper content is printed. Why is this? Is there some sort of Sync or Flush I have to do with the original tmp file to make it work?
Reading or writing a file advances the current position in it. After the copy, tmpFile is positioned at its end, so reading from it reads 0 bytes. You have to seek back first if you want to read from the beginning of the file:
io.Copy(tmpFile, f)
tmpFile.Seek(0, 0)
tmpFile.Read(head)
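Fleshed out with error handling, a sketch of the same fix (io.SeekStart is the named constant for a whence of 0, and io.ReadFull guards against short reads, which a bare Read allows):
if _, err := io.Copy(tmpFile, f); err != nil {
    return nil, fmt.Errorf("could not copy request body to file %w", err)
}
// io.Copy left the offset at the end of the file; rewind before reading.
if _, err := tmpFile.Seek(0, io.SeekStart); err != nil {
    return nil, err
}
head := make([]byte, 261)
// io.ReadFull returns an error unless it fills all 261 bytes.
if _, err := io.ReadFull(tmpFile, head); err != nil {
    return nil, err
}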

Transfer contents of directory over net's TCP connection

I am currently learning Go and I am trying to send the contents of a directory to another machine over a plain tcp connection using Go's net package.
It works fine with individual files and small folders, but I run into issues when the folder contains many subfolders and larger files. I am using the filepath.Walk function to traverse all files in the given directory. For each file or directory I send a header that provides the receiver with the file name, file size, and isDir property, so I know how long to read for when reading the content. The issue I am having is that, after a while, when reading a header I get actual file content of the previous file, even though I have already read that file from the connection.
Here is the writer side. I simply traverse over the directory.
func transferDir(session *Session, dir string) error {
    return filepath.Walk(dir, func(path string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        header := Header{Name: info.Name(), Size: info.Size(), Path: path}
        if info.IsDir() {
            header.SetDirBit()
            session.WriteHeader(header)
            return nil // nothing more to write
        }
        // content is a file. write the file now byte by byte
        file, err := os.Open(path)
        if err != nil {
            return err
        }
        defer file.Close()
        inf, err := file.Stat()
        if err != nil {
            return err
        }
        header.Size = inf.Size() // get the true size of the file
        session.WriteHeader(header)
        buf := make([]byte, BUF_SIZE)
        for {
            n, err := file.Read(buf)
            if err != nil {
                if err == io.EOF {
                    session.Write(buf[:n])
                    session.Flush()
                    break
                }
                log.Println(err)
                return err
            }
            session.Write(buf[:n])
            session.Flush()
        }
        return nil
    })
}
And here is the reader part
func (c *Clone) readFile(h Header) error {
    file, err := os.Create(h.Path)
    if err != nil {
        return err
    }
    defer file.Close()
    var receivedByts int64
    fmt.Printf("Reading File: %s Size: %d\n", h.Name, h.Size)
    for {
        if (h.Size - receivedByts) < BUF_SIZE {
            n, err := io.CopyN(file, c.sesh, (h.Size - receivedByts))
            fmt.Printf("Written: %d err: %v\n", n, err)
            break
        }
        n, err := io.CopyN(file, c.sesh, BUF_SIZE)
        fmt.Printf("Written: %d err: %v\n", n, err)
        receivedByts += BUF_SIZE
        fmt.Println("Bytes Read: ", receivedByts)
    }
    return nil
}
Now the weird part is that when I am looking at the print statements I see something like:
Reading File: test.txt Size: 14024
Written 1024 nil
Bytes Read 1024
... This continues all the way to the break statement
And the total of the bytes read equals the actual file size. Yet the subsequent read for the next header returns content from the test.txt file. It's almost like there is still stuff in a buffer, but I think I have already read it all...
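No answer is recorded here, but one common cause of exactly this symptom is mixing a buffered reader with raw connection reads: if Session parses headers through a bufio.Reader while io.CopyN pulls from the underlying net.Conn (or vice versa), bytes already sitting in one path's buffer are invisible to the other. A hypothetical sketch of the safe shape, where every read goes through the same reader:
// Hypothetical: Session exposes exactly one buffered reader over the conn.
type Session struct {
    conn net.Conn
    r    *bufio.Reader // headers AND file bytes must both come from r
}

func (s *Session) Read(p []byte) (int, error) { return s.r.Read(p) }

func (c *Clone) readFile(h Header) error {
    file, err := os.Create(h.Path)
    if err != nil {
        return err
    }
    defer file.Close()
    // A single CopyN for the exact size also replaces the chunked loop.
    _, err = io.CopyN(file, c.sesh, h.Size)
    return err
}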

How to merge or combine 2 files into single file

I need to take tmp1.zip and append its tmp1.signed file to the end of it, creating a new tmp1.zip.signed file, using Go.
It's essentially the same as cat | sc
I could call the command line from Go, but that seems super inefficient (and cheesy).
So far
Googling the words "go combine files" et al. yields minimal help.
But I have come across a couple of options that I have tried, such as:
f, err := os.OpenFile("tmp1.txt", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0644)
if err != nil {
    log.Fatal(err)
}
if _, err := f.Write([]byte("appended some data\n")); err != nil {
    log.Fatal(err)
}
if err := f.Close(); err != nil {
    log.Fatal(err)
}
But that just appends a string to the end of the file; it doesn't really merge the two files or append the signature to the original file.
Question
Assuming I am asking the right questions to get one file appended to another, is there a better sample of how exactly to merge two files into one using Go?
Based on your question, you want to create a new file with the content of both files.
You can use io.Copy to achieve that.
Here is a simple command-line tool implementing it.
package main

import (
    "io"
    "log"
    "os"
)

func main() {
    if len(os.Args) != 4 {
        log.Fatalf("Usage: %s <zip> <signed> <output>\n", os.Args[0])
    }
    zipName, signedName, output := os.Args[1], os.Args[2], os.Args[3]
    zipIn, err := os.Open(zipName)
    if err != nil {
        log.Fatalln("failed to open zip for reading:", err)
    }
    defer zipIn.Close()
    signedIn, err := os.Open(signedName)
    if err != nil {
        log.Fatalln("failed to open signed for reading:", err)
    }
    defer signedIn.Close()
    out, err := os.OpenFile(output, os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatalln("failed to open output file:", err)
    }
    defer out.Close()
    n, err := io.Copy(out, zipIn)
    if err != nil {
        log.Fatalln("failed to append zip file to output:", err)
    }
    log.Printf("wrote %d bytes of %s to %s\n", n, zipName, output)
    n, err = io.Copy(out, signedIn)
    if err != nil {
        log.Fatalln("failed to append signed file to output:", err)
    }
    log.Printf("wrote %d bytes of %s to %s\n", n, signedName, output)
}
Basically, it opens both files you want to merge, creates a new one, and copies the content of each file into the new file in turn.
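Saved as, say, merge.go (the file name is just an example), the invocation for the files from the question would be:
go run merge.go tmp1.zip tmp1.signed tmp1.zip.signed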

Golang - why is string slice element not included in exec cat unless I sort it

I have a slightly funky issue in golang. Essentially I have a slice of strings which represent file paths. I then run a cat against those filepaths to combine the files before sorting, deduping, etc.
here is the section of code (where 'applicableReductions' is the string slice):
applicableReductions := []string{}
for _, fqFromListName := range fqFromListNames {
    filePath := GetFilePath()
    // BROKE CODE GOES HERE
    applicableReductions = append(applicableReductions, filePath)
}
fileOut, err := os.Create(toListWriteTmpFilePath)
if err != nil {
    return err
}
cat := exec.Command("cat", applicableReductions...)
catStdOut, err := cat.StdoutPipe()
if err != nil {
    return err
}
go func(cat *exec.Cmd) error {
    if err := cat.Start(); err != nil {
        return fmt.Errorf("File reduction error (cat) : %s", err)
    }
    return nil
}(cat)
// Init Writer & write file
writer := bufio.NewWriter(fileOut)
defer writer.Flush()
_, err = io.Copy(writer, catStdOut)
if err != nil {
    return err
}
if err = cat.Wait(); err != nil {
    return err
}
fDiff.StandardiseData(fileOut, toListUpdateFolderPath, list.Name)
The above works fine. The problem comes when I try to append a new element to the slice. I have a separate function which creates a new file from DB content; its path is then added to the applicableReductions slice.
func RetrieveDomainsFromDB(collection *Collection, listName, outputPath string) error {
    domains, err := domainReviews.GetDomainsForList(listName)
    if err != nil {
        return err
    }
    if len(domains) < 1 {
        return ErrNoDomainReviewsForList
    }
    fh, err := os.OpenFile(outputPath, os.O_RDWR, 0774)
    if err != nil {
        fh, err = os.Create(outputPath)
        if err != nil {
            return err
        }
    }
    defer fh.Close()
    _, err = fh.WriteString(strings.Join(domains, "\n"))
    if err != nil {
        return err
    }
    return nil
}
If I call the above function and append the filePath to the applicableReductions slice, it is in there but doesn't get read by cat.
To clarify, when I put the following where it says BROKE CODE GOES HERE:
if dbSource {
    err = r.RetrieveDomainsFromDB(collection, ToListName, filePath)
    if err != nil {
        return err
        continue
    }
}
The filepath can be seen when doing fmt.Println(applicableReductions), but the contents of the file do not appear in the cat output file.
I thought perhaps there was a delay in the file being written, so I tried adding a time.Sleep; this didn't help. However, the solution I found was to sort the slice, e.g. this code above the call to exec cat solves the problem, but I don't know why:
sort.Strings(applicableReductions)
I have confirmed all files are present on both successful and unsuccessful runs; the only difference is that without the sort, the content of the final appended file is missing.
An explanation from a Go pro out there would be very much appreciated. Let me know if you need more info or debug output; happy to oblige to understand.
UPDATE
It has been suggested that this is the same issue as here: Golang append an item to a slice. I think I understand the issue there, and I'm not saying this isn't the same, but I cannot see the same thing happening: the slice in question is not touched from outside the main function (e.g. no editing of the slice in the RetrieveDomainsFromDB function). I create the slice before a loop, append to it within the loop, and then use it after the loop. I've added an example at the top to show how the slice is built. Could someone please clarify where this slice is being copied, if that is the case?
UPDATE AND CLOSE
Please close this question: the issue was unrelated to the use of a string slice. It turns out that I was reading from the final output file before the bufio.Writer had been flushed (at the end of the function, before the deferred Flush kicked in on function return).
I think the sorting was just rearranging the problem so I didn't notice it persisted, or possibly giving the buffer some time to flush. Either way, it's sorted now with a manual call to Flush.
Thanks for all the help provided.
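For reference, the fix described above amounts to flushing the buffered writer before anything reads the output file. A sketch against the code from the question:
writer := bufio.NewWriter(fileOut)
if _, err := io.Copy(writer, catStdOut); err != nil {
    return err
}
if err := cat.Wait(); err != nil {
    return err
}
// Flush explicitly before anything reads fileOut; the deferred Flush
// only runs when the function returns, which is too late here.
if err := writer.Flush(); err != nil {
    return err
}
fDiff.StandardiseData(fileOut, toListUpdateFolderPath, list.Name)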

io.Copy cause out of memory in golang

I use io.Copy() to copy a file of about 700MB, but it causes an out-of-memory error:
bodyBuf := &bytes.Buffer{}
bodyWriter := multipart.NewWriter(bodyBuf)
// key step
fileWriter, err := bodyWriter.CreateFormFile(paramName, fileName)
if err != nil {
    return nil, err
}
file, err := os.Open(fileName) // the file size is about 700MB
if err != nil {
    return nil, err
}
defer file.Close()
// iocopy
copyLen, err := io.Copy(fileWriter, file) // this causes out of memory
if err != nil {
    fmt.Println("io.copy(): ", err)
    return nil, err
}
The error message as follow:
runtime: memory allocated by OS (0x752cf000) not in usable range [0x18700000,0x98700000)
runtime: out of memory: cannot allocate 1080229888-byte block (1081212928 in use)
fatal error: out of memory
I allocated enough memory for buf, but it still goes out of memory, now in bodyWriter.CreateFormFile():
buf := make([]byte, 766509056)
bodyBuf := bytes.NewBuffer(buf)
bodyWriter := multipart.NewWriter(bodyBuf)
fileWriter, err := bodyWriter.CreateFormFile(paramName, fileName) // out of memory
if err != nil {
    return nil, err
}
That's because you are copying to bodyBuf, which is an in-memory buffer, forcing Go to try to allocate a block of memory as big as the entire file.
Based on your use of multipart, it looks like you are trying to stream the file over HTTP. In that case, don't pass a bytes.Buffer to multipart.NewWriter; write directly to your HTTP connection instead.
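One way to do that without buffering the body in memory is io.Pipe: a goroutine produces the multipart body and the HTTP client consumes it as it is written. A minimal sketch (url, paramName, fileName are placeholders):
package main

import (
    "io"
    "mime/multipart"
    "net/http"
    "os"
)

// upload streams fileName as a multipart form field without ever
// holding the whole body in memory.
func upload(url, paramName, fileName string) (*http.Response, error) {
    pr, pw := io.Pipe()
    mw := multipart.NewWriter(pw)
    go func() {
        // Errors are handed to the reading side via CloseWithError.
        file, err := os.Open(fileName)
        if err != nil {
            pw.CloseWithError(err)
            return
        }
        defer file.Close()
        part, err := mw.CreateFormFile(paramName, fileName)
        if err != nil {
            pw.CloseWithError(err)
            return
        }
        if _, err := io.Copy(part, file); err != nil {
            pw.CloseWithError(err)
            return
        }
        // Closing the multipart writer emits the trailing boundary;
        // closing the pipe then signals EOF to the reader.
        pw.CloseWithError(mw.Close())
    }()
    return http.Post(url, mw.FormDataContentType(), pr)
}
Because the pipe has no backing buffer, memory use stays constant no matter how large the file is.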
