Placement of defer after error check - go

In Go, one often sees the following idiom:
func CopyFile(dstName, srcName string) (written int64, err error) {
    src, err := os.Open(srcName)
    if err != nil {
        return
    }
    defer src.Close()

    dst, err := os.Create(dstName)
    if err != nil {
        return
    }
    defer dst.Close()

    return io.Copy(dst, src)
}
Is there any reason why the defer statement comes after the error check? My guess is that this is done in order to avoid dereferencing nil values in case err was not nil.

If Open or Create fails, you don't have a valid *File to close. A nil *File is not the real problem, since Close() checks for a nil receiver and simply returns an error in that case; the problem would be a *File value that is non-nil but invalid. Since the documentation for os.Open() doesn't explicitly state that a failed call returns a nil *File, you can't rely on every underlying implementation returning, now and always, a nil value on failure.
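As a minimal sketch of that behavior (relying on the current standard-library implementation, which, as noted above, the docs don't promise): Close on a nil *os.File returns os.ErrInvalid rather than panicking, so the idiom is really about not registering cleanup for a resource that was never acquired.
package main

import (
    "errors"
    "fmt"
    "os"
)

func main() {
    f, err := os.Open("does-not-exist.txt")
    fmt.Println(err)      // open does-not-exist.txt: no such file or directory
    fmt.Println(f == nil) // true with the current implementation, but not documented

    closeErr := f.Close() // nil receiver: no panic
    fmt.Println(errors.Is(closeErr, os.ErrInvalid)) // true
}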

Related

G110: Potential DoS vulnerability via decompression bomb (gosec)

I'm getting the following golangci-lint message:
testdrive/utils.go:92:16: G110: Potential DoS vulnerability via decompression bomb (gosec)
if _, err := io.Copy(targetFile, fileReader); err != nil {
^
I read the corresponding CWE, but I'm not clear on how this is expected to be corrected. Please offer pointers.
func unzip(archive, target string) error {
    reader, err := zip.OpenReader(archive)
    if err != nil {
        return err
    }
    for _, file := range reader.File {
        path := filepath.Join(target, file.Name) // nolint: gosec
        if file.FileInfo().IsDir() {
            if err := os.MkdirAll(path, file.Mode()); err != nil {
                return err
            }
            continue
        }
        fileReader, err := file.Open()
        if err != nil {
            return err
        }
        defer fileReader.Close() // nolint: errcheck
        targetFile, err := os.OpenFile(path, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, file.Mode())
        if err != nil {
            return err
        }
        defer targetFile.Close() // nolint: errcheck
        if _, err := io.Copy(targetFile, fileReader); err != nil {
            return err
        }
    }
    return nil
}
The warning you get comes from a rule provided in gosec.
The rule specifically detects usage of io.Copy on file decompression.
This is a potential issue because io.Copy:
copies from src to dst until either EOF is reached on src or an error occurs.
So, a malicious payload might cause your program to decompress an unexpectedly big amount of data and go out of memory, causing denial of service as mentioned in the warning message.
In particular, gosec checks the AST of your program and warns about usage of io.Copy or io.CopyBuffer together with any one of the following:
"compress/gzip".NewReader
"compress/zlib".NewReader or NewReaderDict
"compress/bzip2".NewReader
"compress/flate".NewReader or NewReaderDict
"compress/lzw".NewReader
"archive/tar".NewReader
"archive/zip".NewReader
"*archive/zip".File.Open
Using io.CopyN removes the warning because, quoting its docs, it "copies n bytes (or until an error) from src to dst", thus giving you, the program writer, control over how many bytes are copied. So you could pass a large n that you set based on the available resources of your application, or copy in chunks.
Based on the various pointers provided, I replaced
if _, err := io.Copy(targetFile, fileReader); err != nil {
    return err
}
with
for {
    _, err := io.CopyN(targetFile, fileReader, 1024)
    if err != nil {
        if err == io.EOF {
            break
        }
        return err
    }
}
PS: while this reduces the peak memory footprint, it doesn't actually cap the total output; the loop still copies the entire stream, so a very long or infinite malicious stream can still cause a denial of service.
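A sketch of an actual cap, assuming your application can pick a maximum decompressed size (maxBytes is an illustrative name and limit, not something from the original code):
const maxBytes = 10 << 20 // assumed 10 MiB per-file application limit

written, err := io.CopyN(targetFile, fileReader, maxBytes+1)
if err != nil && err != io.EOF {
    return err
}
if written > maxBytes {
    return fmt.Errorf("decompressed file exceeds %d bytes", maxBytes)
}
Copying maxBytes+1 and then checking written distinguishes a file that fits exactly within the limit (io.CopyN returns io.EOF) from one that would exceed it.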
If you're working with compressed data, you need to use io.CopyN. With go-bindata, you can try the --nocompress flag as a workaround, but this causes the data to be included uncompressed. See the following PR and related issue: https://github.com/go-bindata/go-bindata/pull/50

"too many open files" with os.Create

I have around 220'000 image files (.png) to create. I run into this error message when trying to create the 1'081st file:
panic: open /media/Snaps/pics/image1081_0.png: too many open files
I've added the defer w.Close() line but it did not change the error.
i := 1
for i <= 223129 {
    // (some other code to prepare the data and create the chart)
    img := vgimg.New(450, 600)
    dc := draw.New(img)
    canvases := table.Align(plots, dc)
    plots[0][0].Draw(canvases[0][0])
    plots[1][0].Draw(canvases[1][0])
    plots[2][0].Draw(canvases[2][0])
    testFile := "/media/Snaps/pics/image" + strconv.Itoa(i+60) + "_" + gain_loss + ".png"
    w, err := os.Create(testFile)
    if err != nil {
        panic(err)
    }
    defer w.Close()
    png := vgimg.PngCanvas{Canvas: img}
    if _, err := png.WriteTo(w); err != nil {
        panic(err)
    }
    // move to next image
    i = i + 1
}
Surely this limit can be worked around? Maybe I'm not closing the files properly?
The Go Programming Language Specification
Defer statements
A "defer" statement invokes a function whose execution is deferred to the moment the surrounding function returns, either because the surrounding function executed a return statement, reached the end of its function body, or because the corresponding goroutine is panicking.
DeferStmt = "defer" Expression .
The expression must be a function or method call; it cannot be parenthesized. Calls of built-in functions are restricted as for expression statements.
Each time a "defer" statement executes, the function value and parameters to the call are evaluated as usual and saved anew but the actual function is not invoked. Instead, deferred functions are invoked immediately before the surrounding function returns, in the reverse order they were deferred. If a deferred function value evaluates to nil, execution panics when the function is invoked, not when the "defer" statement is executed.
In other words, if you are processing files in a loop, put the processing for a single file in a separate function to pair the Open with the defer Close(). This avoids the "too many open files" error.
For example, use a file processing structure like this to guarantee each file is closed immediately after use.
package main

import (
    "fmt"
    "io/ioutil"
    "os"
)

// processFile opens, stats, and closes a single file. Pairing the
// Open with a deferred Close inside this function guarantees the
// file is closed as soon as the function returns.
func processFile(name string) error {
    f, err := os.Open(name)
    if err != nil {
        return err
    }
    defer f.Close()
    fi, err := f.Stat()
    if err != nil {
        return err
    }
    fmt.Println(fi.Name(), fi.Size())
    return nil
}

func main() {
    wd, err := os.Getwd()
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    fis, err := ioutil.ReadDir(wd)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        return
    }
    // process all files
    for _, fi := range fis {
        if err := processFile(fi.Name()); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
}
Playground: https://play.golang.org/p/FrBWqlMOzaS
Output:
dev 1644
etc 1644
tmp 548
usr 822
Deferred statements are not executed until the surrounding function returns; that is why your files stay open until after the for-loop.
To fix this you can simply insert an anonymous function call inside the loop:
for ... {
    func() {
        w, err := os.Create(testFile)
        if err != nil {
            panic(err)
        }
        defer w.Close()
        ...
    }()
}
That way, after each iteration of the loop, the current file is closed.
OK, I got it: I changed defer w.Close() to a plain w.Close() and moved it after
png := vgimg.PngCanvas{Canvas: img}
if _, err := png.WriteTo(w); err != nil {
    panic(err)
}
I'm now above 10'000 images and running...

Go pointer to interface is not null [duplicate]

This question already has answers here: Hiding nil values, understanding why Go fails here (3 answers). Closed 5 years ago.
Can someone provide some explanation about this code behaviour:
https://play.golang.org/p/_TjQhthHl3
package main

import (
    "fmt"
)

type MyError struct{}

func (e *MyError) Error() string {
    return "some error"
}

func main() {
    var err error
    if err == nil {
        fmt.Println("[OK] err is nil ...")
    } else {
        fmt.Println("[KO] err is NOT nil...")
    }
    isNil(err)

    var err2 *MyError
    if err2 == nil {
        fmt.Println("[OK] err2 is nil ...")
    } else {
        fmt.Println("[KO] err2 is NOT nil...")
    }
    isNil(err2)
}

func isNil(err error) {
    if err == nil {
        fmt.Println("[OK] ... still nil")
    } else {
        fmt.Println("[KO] .. why not nil?")
    }
}
Output is:
[OK] err is nil ...
[OK] ... still nil
[OK] err2 is nil ...
[KO] .. why not nil?
I found this post Check for nil and nil interface in Go but I still don't get it...
error is a built-in interface type, and *MyError implements it. Even though the value of err2 is nil, passing it to isNil gives the function a non-nil interface value: the interface carries the dynamic type (*MyError) alongside the value, which here is a nil pointer.
If you print err inside isNil, you'll see that in the second case you get "some error" even though err2 is nil. This shows why err is not nil there: it has to carry the type information.
Note that var err2 *MyError declares a nil pointer of type *MyError; no MyError value is ever allocated.
In Go, an interface value is represented by a structure with two fields: one holds the dynamic type and the other points to the value.
An interface value compares equal to nil only when both fields are nil, so storing a typed nil pointer in an interface produces an interface value that is not equal to nil.
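A hedged sketch of the usual workarounds (the helpers mayFail and isNilValue are illustrative names, not from the original post): return a plain nil instead of a typed nil pointer, or use reflection to detect a nil pointer stored inside a non-nil interface.
package main

import (
    "fmt"
    "reflect"
)

type MyError struct{}

func (e *MyError) Error() string { return "some error" }

// mayFail returns the interface type error. Returning a typed nil
// *MyError instead of the plain nil below would make err != nil
// true for the caller.
func mayFail(fail bool) error {
    if fail {
        return &MyError{}
    }
    return nil // plain nil, not (*MyError)(nil)
}

// isNilValue reports whether err is nil or holds a nil pointer.
func isNilValue(err error) bool {
    if err == nil {
        return true
    }
    v := reflect.ValueOf(err)
    return v.Kind() == reflect.Ptr && v.IsNil()
}

func main() {
    fmt.Println(isNilValue(mayFail(false))) // true: plain nil

    var typed *MyError
    fmt.Println(isNilValue(typed)) // true: nil pointer inside a non-nil interface
}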
Relevant and good talk: GopherCon 2016: Francesc Campoy - Understanding nil

using io.Pipes() for sending and receiving message

I am using os.Pipe() in my program, but for some reason it gives a bad file descriptor error each time I try to write or read data from it.
Is there some thing I am doing wrong?
Below is the code
package main

import (
    "fmt"
    "os"
)

func main() {
    writer, reader, err := os.Pipe()
    if err != nil {
        fmt.Println(err)
    }
    _, err = writer.Write([]byte("hello"))
    if err != nil {
        fmt.Println(err)
    }
    var data []byte
    _, err = reader.Read(data)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Println(string(data))
}
Output:
write |0: Invalid argument
read |1: Invalid argument
You are using an os.Pipe, which returns a pair of FIFO connected files from the os. This is different than an io.Pipe which is implemented in Go.
The invalid argument errors are because you are reading and writing to the wrong files. The signature of os.Pipe is
func Pipe() (r *File, w *File, err error)
which shows that the returns values are in the order "reader, writer, error".
and io.Pipe:
func Pipe() (*PipeReader, *PipeWriter)
Also returning in the order "reader, writer"
When you check the error from the os.Pipe function, you are only printing the value. If there was an error, the files are invalid. You need to return or exit on that error.
Pipes are also blocking (though an os.Pipe has a small, hard-coded buffer), so you need to read and write asynchronously; if you swapped this for an io.Pipe it would deadlock immediately. Dispatch the Read inside a goroutine and wait for it to complete.
Finally, you are reading into a nil slice, which will read nothing. You need to allocate space to read into, and you need to record the number of bytes read to know how much of the buffer is used.
A more correct version of your example would look like:
reader, writer, err := os.Pipe()
if err != nil {
log.Fatal(err)
}
var wg sync.WaitGroup
wg.Add(1)
go func() {
defer wg.Done()
data := make([]byte, 1024)
n, err = reader.Read(data)
if n > 0 {
fmt.Println(string(data[:n]))
}
if err != nil && err != io.EOF {
fmt.Println(err)
}
}()
_, err = writer.Write([]byte("hello"))
if err != nil {
fmt.Println(err)
}
wg.Wait()
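For comparison, a minimal sketch of the same exchange using io.Pipe, which is entirely unbuffered: the write must run in its own goroutine or the program deadlocks.
package main

import (
    "fmt"
    "io"
    "log"
)

func main() {
    reader, writer := io.Pipe()
    go func() {
        // Close the writer so the reader sees io.EOF.
        defer writer.Close()
        if _, err := writer.Write([]byte("hello")); err != nil {
            fmt.Println(err)
        }
    }()
    data, err := io.ReadAll(reader)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(string(data))
}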

Golang - why is string slice element not included in exec cat unless I sort it

I have a slightly funky issue in Go. Essentially I have a slice of strings representing file paths, and I run cat against those paths to combine the files before sorting, deduping, etc.
Here is the relevant section of code (where applicableReductions is the string slice):
applicableReductions := []string{}
for _, fqFromListName := range fqFromListNames {
    filePath := GetFilePath()
    // BROKE CODE GOES HERE
    applicableReductions = append(applicableReductions, filePath)
}

fileOut, err := os.Create(toListWriteTmpFilePath)
if err != nil {
    return err
}
cat := exec.Command("cat", applicableReductions...)
catStdOut, err := cat.StdoutPipe()
if err != nil {
    return err
}
go func(cat *exec.Cmd) error {
    if err := cat.Start(); err != nil {
        return fmt.Errorf("File reduction error (cat) : %s", err)
    }
    return nil
}(cat)

// Init Writer & write file
writer := bufio.NewWriter(fileOut)
defer writer.Flush()
_, err = io.Copy(writer, catStdOut)
if err != nil {
    return err
}
if err = cat.Wait(); err != nil {
    return err
}
fDiff.StandardiseData(fileOut, toListUpdateFolderPath, list.Name)
The above works fine. The problem comes when I try to append a new element to the slice. I have a separate function which creates a new file from DB content, whose path is then added to the applicableReductions slice.
func RetrieveDomainsFromDB(collection *Collection, listName, outputPath string) error {
    domains, err := domainReviews.GetDomainsForList(listName)
    if err != nil {
        return err
    }
    if len(domains) < 1 {
        return ErrNoDomainReviewsForList
    }
    fh, err := os.OpenFile(outputPath, os.O_RDWR, 0774)
    if err != nil {
        fh, err = os.Create(outputPath)
        if err != nil {
            return err
        }
    }
    defer fh.Close()
    _, err = fh.WriteString(strings.Join(domains, "\n"))
    if err != nil {
        return err
    }
    return nil
}
If I call the above function and append the filePath to the applicableReductions slice, it is in there but doesn't get picked up by cat.
To clarify, this is what I put where it says BROKE CODE GOES HERE:
if dbSource {
    err = r.RetrieveDomainsFromDB(collection, ToListName, filePath)
    if err != nil {
        return err
        continue
    }
}
The file path can be seen when doing fmt.Println(applicableReductions), but the contents of that file are not seen in the cat output file.
I thought perhaps there was a delay in the file being written, so I tried adding a time.Sleep; this didn't help. However, the solution I found was to sort the slice: this line above the call to exec cat solves the problem, but I don't know why:
sort.Strings(applicableReductions)
I have confirmed all files are present on both successful and unsuccessful runs; the only difference is that without the sort, the content of the final appended file is missing.
An explanation from a Go pro out there would be very much appreciated. Let me know if you need more info or debug output - happy to oblige.
UPDATE
It has been suggested that this is the same issue as here: Golang append an item to a slice. I think I understand the issue there, and I'm not saying this isn't the same, but I cannot see the same thing happening: the slice in question is not touched from outside the main function (e.g. there is no editing of the slice in RetrieveDomainsFromDB), I create the slice before a loop, append to it within the loop, and then use it after the loop. I've added an example at the top to show how the slice is built. Please could someone clarify where this slice is being copied, if that is the case.
UPDATE AND CLOSE
Please close this question - the issue was unrelated to the use of a string slice. It turns out that I was reading from the final output file before the bufio.Writer had been flushed (at the end of the function, before the deferred Flush kicked in on return).
I think the sorting was just rearranging the problem so I didn't notice it persisted, or possibly giving the buffer time to flush. Either way, it is sorted now with a manual call to Flush.
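A minimal sketch of that bug in isolation (with a hypothetical temp file): data written through a bufio.Writer is not on disk until Flush (or a full buffer), so reading the file too early sees it empty or truncated.
package main

import (
    "bufio"
    "fmt"
    "os"
)

func main() {
    f, err := os.CreateTemp("", "flushdemo")
    if err != nil {
        panic(err)
    }
    defer os.Remove(f.Name())
    defer f.Close()

    w := bufio.NewWriter(f)
    fmt.Fprintln(w, "some combined output")

    before, _ := os.ReadFile(f.Name())
    fmt.Printf("before Flush: %d bytes\n", len(before)) // 0: still in the buffer

    w.Flush() // force the buffered bytes out to the file

    after, _ := os.ReadFile(f.Name())
    fmt.Printf("after Flush: %d bytes\n", len(after))
}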
Thanks for all help provided
