Why does copyBuffer use a loop? (Go)

I am trying to understand how copyBuffer works under the hood, but what is not clear to me is the use of the (while-style) for loop:
for {
    nr, er := src.Read(buf)
    //...
}
Full code below:
// copyBuffer is the actual implementation of Copy and CopyBuffer.
// if buf is nil, one is allocated.
func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error) {
    // If the reader has a WriteTo method, use it to do the copy.
    // Avoids an allocation and a copy.
    if wt, ok := src.(WriterTo); ok {
        return wt.WriteTo(dst)
    }
    // Similarly, if the writer has a ReadFrom method, use it to do the copy.
    if rt, ok := dst.(ReaderFrom); ok {
        return rt.ReadFrom(src)
    }
    size := 32 * 1024
    if l, ok := src.(*LimitedReader); ok && int64(size) > l.N {
        if l.N < 1 {
            size = 1
        } else {
            size = int(l.N)
        }
    }
    if buf == nil {
        buf = make([]byte, size)
    }
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            nw, ew := dst.Write(buf[0:nr])
            if nw > 0 {
                written += int64(nw)
            }
            if ew != nil {
                err = ew
                break
            }
            if nr != nw {
                err = ErrShortWrite
                break
            }
        }
        if er != nil {
            if er != EOF {
                err = er
            }
            break
        }
    }
    return written, err
}
It writes with nw, ew := dst.Write(buf[0:nr]), where nr is the number of bytes read, so why is the loop necessary?

Let's assume that src does not implement WriterTo and dst does not implement ReaderFrom, since otherwise we would not get down to the for loop at all.
Let's further assume, for simplicity, that src is not a *LimitedReader, so that size is 32 * 1024: 32 kbytes. (There is no real loss of generality here, as a LimitedReader just allows the source to pick an even smaller number, at least in this case.)
Finally, let's assume buf is nil. (Or, if it's not nil, let's assume it has a capacity of 32768 bytes. If it has a large capacity, we can just change the rest of the assumptions below, so that src has more bytes than there are in the buffer.)
So: we enter the loop with size holding the size of the temporary buffer buf, which is 32k. Now suppose the source is a file that holds 64k. It will take at least two src.Read() calls to read it! Clearly we need an outer loop. That's the overall for here.
Now suppose that src.Read() really does read the full 32k, so that nr is also 32 * 1024. The code will now call dst.Write(), passing the full 32k of data. Unlike src.Read()—which is allowed to only read, say, 1k instead of the full 32k—the next chunk of code requires that dst.Write() write all 32k. If it doesn't, the loop will break with err set to ErrShortWrite.
(An alternative would have been to keep calling dst.Write() with the remaining bytes, so that dst.Write() could write only 1k of the 32k, requiring 32 calls to get it all written.)
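For illustration, here is a hedged sketch of that alternative: an inner loop that retries dst.Write() with the remaining bytes (reusing the nr, buf, dst, written and err names from the code above; a non-nil err would still have to break the outer loop). This is not what the standard library chose to do.
for off := 0; off < nr; {
    nw, ew := dst.Write(buf[off:nr])
    written += int64(nw)
    off += nw
    if ew != nil {
        err = ew
        break
    }
    if nw == 0 { // defend against a writer that makes no progress
        err = ErrShortWrite
        break
    }
}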
Note that src.Read() can choose to read only, say, 1k instead of 32k. If the actual file is 64k, it will then take 64 trips, rather than 2, through the outer loop. (An alternative choice would have been to force such a reader to be wrapped in a LimitedReader. That's not as flexible, though, and is not what LimitedReader is intended for.)

func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error)
When the total data size to copy is larger than len(buf), nr, er := src.Read(buf) will read at most len(buf) bytes each time.
That's how copyBuffer works, in pseudocode:
for {
    copy at most len(buf) bytes from src to dst
    if EOF {
        // done
        break
    }
    if any other error {
        return the error
    }
}

In the normal case, you would just call Copy rather than CopyBuffer.
func Copy(dst Writer, src Reader) (written int64, err error) {
return copyBuffer(dst, src, nil)
}
The option to have a user-supplied buffer is, I think, just for extreme optimization scenarios. The use of the word "Buffer" in the name is possibly a source of confusion since the function is not copying the buffer -- just using it internally.
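A hedged sketch of such an optimization scenario: reusing one buffer across many copies so each call avoids a fresh 32 KB allocation (the sources and dst names are illustrative, not from the question).
buf := make([]byte, 32*1024)
for _, src := range sources { // sources is a hypothetical []io.Reader
    if _, err := io.CopyBuffer(dst, src, buf); err != nil {
        return err
    }
}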
There are two reasons for the looping:
1. The buffer might not be large enough to copy all of the data (the size of which is not necessarily known in advance) in one pass.
2. A Reader (though not a Writer) may return partial results when it makes sense to do so.
Regarding the second item, consider that the Reader does not necessarily represent a fixed file or data buffer. It could, instead, be a live stream from some other thread or process. As such, there are many valid scenarios for stream data to be read and processed on an as-available basis. Although CopyBuffer doesn't do this, it still has to work with such behaviors from any Reader.
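To make that concrete, here is a small hedged example using testing/iotest.OneByteReader, which forces every Read to return a single byte; the copy still completes because of the loop. The plain countingWriter is there only so that neither of the WriterTo/ReaderFrom fast paths kicks in.
package main

import (
    "fmt"
    "io"
    "strings"
    "testing/iotest"
)

// countingWriter counts bytes and deliberately implements nothing but Write.
type countingWriter struct{ n int64 }

func (w *countingWriter) Write(p []byte) (int, error) {
    w.n += int64(len(p))
    return len(p), nil
}

func main() {
    src := iotest.OneByteReader(strings.NewReader("hello, world"))
    dst := &countingWriter{}
    n, err := io.Copy(dst, src) // one loop iteration per byte read
    fmt.Println(n, err)         // 12 <nil>
}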

Related

Size control on logging an unknown length of parameters

The Problem:
Right now, I'm logging my SQL query and the args that relate to that query, but what will happen if my args weigh a lot? Say 100MB?
The Solution:
I want to iterate over the args, and once they exceed 0.5MB, take only the args up to that point and log them (of course I'll use the entire args set in the actual SQL query).
Where I am stuck:
I find it hard to determine the size of an interface{} value.
How can I print it? (Is there a nicer way to do it than %v?)
The concern is mainly the first point: how can I find the size? I would need to know the type, whether it's an array, whether it lives on the stack or the heap, etc.
If code helps, here is my code structure (everything sits in dal pkg in util file):
package dal

import (
    "fmt"
)

const limitedLogArgsSizeB = 100000 // ~ 0.1MB

func parsedArgs(args ...interface{}) string {
    currentSize := 0
    var res string
    for i := 0; i < len(args); i++ {
        currentEleSize := getSizeOfElement(args[i])
        if currentSize+currentEleSize > limitedLogArgsSizeB {
            break
        }
        currentSize += currentEleSize
        res = fmt.Sprintf("%s, %v", res, args[i])
    }
    return "[" + res + "]"
}

func getSizeOfElement(element interface{}) (sizeInBytes int) {
    // TODO: this is the part I don't know how to implement
    return
}
So as you can see I expect to get back from parsedArgs() a string that looks like:
"[4378233, 33, true]"
for completeness, the query that goes with it:
INSERT INTO Person (id,age,is_healthy) VALUES ($0,$1,$2)
So, to demonstrate the point of all of this:
let's say the first two args are exactly equal to the size limit threshold that I want to log; then I will only get back from parsedArgs() the first two args, as a string like this:
"[4378233, 33]"
I can provide further details upon request. Thanks :)
Getting the memory size of arbitrary values (arbitrary data structures) is not impossible but "hard" in Go. For details, see How to get memory size of variable in Go?
The easiest solution would be to produce the data to be logged in memory, and simply truncate it before logging (e.g. if it's a string or a byte slice, just slice it). This is, however, not the most efficient solution (it is slower and requires more memory).
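A minimal sketch of that simple-but-wasteful approach (the 512-byte limit is just illustrative):
s := fmt.Sprintf("%v", args)
if len(s) > 512 {
    s = s[:512] // note: this may cut a multi-byte rune in half
}
log.Println(s)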
Instead I would achieve what you want differently. I would try to assemble the data to be logged, but I would use a special io.Writer as the target (which may be targeted at your disk or at an in-memory buffer) which keeps track of the bytes written to it, and once a limit is reached, it could discard further data (or report an error, whatever suits you).
You can see a counting io.Writer implementation here: Size in bits of object encoded to JSON?
type CounterWr struct {
    io.Writer
    Count int
}

func (cw *CounterWr) Write(p []byte) (n int, err error) {
    n, err = cw.Writer.Write(p)
    cw.Count += n
    return
}
We can easily change it to become a functional limited-writer:
type LimitWriter struct {
    io.Writer
    Remaining int
}

func (lw *LimitWriter) Write(p []byte) (n int, err error) {
    if lw.Remaining == 0 {
        return 0, io.EOF
    }
    if lw.Remaining < len(p) {
        p = p[:lw.Remaining]
    }
    n, err = lw.Writer.Write(p)
    lw.Remaining -= n
    return
}
And you can use the fmt.FprintXXX() functions to write into a value of this LimitWriter.
An example writing to an in-memory buffer:
buf := &bytes.Buffer{}
lw := &LimitWriter{
    Writer:    buf,
    Remaining: 20,
}

args := []interface{}{1, 2, "Looooooooooooong"}
fmt.Fprint(lw, args)
fmt.Printf("%d %q", buf.Len(), buf)
This will output (try it on the Go Playground):
20 "[1 2 Looooooooooooon"
As you can see, our LimitWriter only allowed 20 bytes (LimitWriter.Remaining) to be written, and the rest were discarded.
Note that in this example I assembled the data in an in-memory buffer, but in your logging system you can write directly to your logging stream, just wrap it in LimitWriter (so you can completely omit the in-memory buffer).
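For example, a hedged sketch of that direct wrapping, using the standard logger's output as the target (the 512-byte budget is illustrative):
lw := &LimitWriter{Writer: log.Writer(), Remaining: 512}
fmt.Fprintln(lw, args) // anything beyond 512 bytes is silently dropped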
Optimization tip: if you have the arguments as a slice, you may optimize the truncated rendering by using a loop, and stop printing arguments once the limit is reached.
An example doing this:
buf := &bytes.Buffer{}
lw := &LimitWriter{
    Writer:    buf,
    Remaining: 20,
}

args := []interface{}{1, 2, "Loooooooooooooooong", 3, 4, 5}

io.WriteString(lw, "[")
for i, v := range args {
    if _, err := fmt.Fprint(lw, v, " "); err != nil {
        fmt.Printf("Breaking at argument %d, err: %v\n", i, err)
        break
    }
}
io.WriteString(lw, "]")
fmt.Printf("%d %q", buf.Len(), buf)
Output (try it on the Go Playground):
Breaking at argument 3, err: EOF
20 "[1 2 Loooooooooooooo"
The good thing about this is that once we reach the limit, we don't have to produce the string representation of the remaining arguments that would be discarded anyway, saving some CPU (and memory) resources.

How can I retrieve an image data buffer from clipboard memory (uintptr)?

I'm trying to use syscall with user32.dll to get the contents of the clipboard. I expect it to be image data from a Print Screen.
Right now I've got this:
if opened := openClipboard(0); !opened {
    fmt.Println("Failed to open Clipboard")
}
handle := getClipboardData(CF_BITMAP)

// get buffer
img, _, err := Decode(buffer)
I need to get the data into a readable buffer using the handle.
I've had some inspiration from AllenDang/w32 and atotto/clipboard on github. The following would work for text, based on atotto's implementation:
text := syscall.UTF16ToString((*[1 << 20]uint16)(unsafe.Pointer(handle))[:])
But how can I get a buffer containing image data I can decode?
[Update]
Going by the solution #kostix provided, I hacked together a half working example:
image.RegisterFormat("bmp", "bmp", bmp.Decode, bmp.DecodeConfig)
if opened := w32.OpenClipboard(0); opened == false {
fmt.Println("Error: Failed to open Clipboard")
}
//fmt.Printf("Format: %d\n", w32.EnumClipboardFormats(w32.CF_BITMAP))
handle := w32.GetClipboardData(w32.CF_DIB)
size := globalSize(w32.HGLOBAL(handle))
if handle != 0 {
pData := w32.GlobalLock(w32.HGLOBAL(handle))
if pData != nil {
data := (*[1 << 25]byte)(pData)[:size]
// The data is either in DIB format and missing the BITMAPFILEHEADER
// or there are other issues since it can't be decoded at this point
buffer := bytes.NewBuffer(data)
img, _, err := image.Decode(buffer)
if err != nil {
fmt.Printf("Failed decoding: %s", err)
os.Exit(1)
}
fmt.Println(img.At(0, 0).RGBA())
}
w32.GlobalUnlock(w32.HGLOBAL(pData))
}
w32.CloseClipboard()
AllenDang/w32 contains most of what you'd need, but sometimes you need to implement something yourself, like globalSize():
var (
    modkernel32    = syscall.NewLazyDLL("kernel32.dll")
    procGlobalSize = modkernel32.NewProc("GlobalSize")
)

func globalSize(hMem w32.HGLOBAL) uint {
    ret, _, _ := procGlobalSize.Call(uintptr(hMem))
    if ret == 0 {
        panic("GlobalSize failed")
    }
    return uint(ret)
}
Maybe someone will come up with a solution to get the BMP data. In the meantime I'll be taking a different route.
@JimB is correct: user32!GetClipboardData() returns an HGLOBAL, and a comment example over there suggests using kernel32!GlobalLock() to a) globally lock that handle, and b) yield a proper pointer to the memory referred to by it.
You will need to kernel32!GlobalUnlock() the handle after you're done with it.
As to converting pointers obtained from Win32 API functions to something readable by Go, the usual trick is casting the pointer to a pointer to a very large array and then slicing it. To cite the "Turning C arrays into Go slices" section of the Go wiki article on cgo:
To create a Go slice backed by a C array (without copying the original data), one needs to acquire this length at runtime and use a type conversion to a pointer to a very big array and then slice it to the length that you want (also remember to set the cap if you're using Go 1.2 or later), for example (see http://play.golang.org/p/XuC0xqtAIC for a runnable example):
import "C"
import "unsafe"
...
var theCArray *C.YourType = C.getTheArray()
length := C.getTheArrayLength()
slice := (*[1 << 30]C.YourType)(unsafe.Pointer(theCArray))[:length:length]
It is important to keep in mind that the Go garbage collector will not
interact with this data, and that if it is freed from the C side of
things, the behavior of any Go code using the slice is nondeterministic.
In your case it will be simpler:
h := GlobalLock()
defer GlobalUnlock(h)
length := somehowGetLengthOfImageInTheClipboard()
slice := (*[1 << 30]byte)(unsafe.Pointer(uintptr(h)))[:length:length]
Then you need to actually read the bitmap.
This depends on the format of the Device-Independent Bitmap (DIB) available for export from the clipboard.
See this and this for a start.
As usual, definitions of BITMAPINFOHEADER etc. are easily available on the MSDN site.
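For the missing-BITMAPFILEHEADER problem from the update above, one possible approach (a hedged sketch using encoding/binary, assuming a packed CF_DIB with no color table, i.e. bit depth greater than 8 and biClrUsed == 0) is to prepend the 14-byte file header yourself so that image/bmp can decode the data:
// dibToBMP prepends a BITMAPFILEHEADER to raw CF_DIB clipboard bytes.
// Assumes no color table follows the BITMAPINFOHEADER.
func dibToBMP(dib []byte) []byte {
    const fileHeaderLen = 14
    infoHeaderLen := binary.LittleEndian.Uint32(dib[0:4]) // biSize field

    bmp := make([]byte, fileHeaderLen+len(dib))
    bmp[0], bmp[1] = 'B', 'M'                                              // bfType
    binary.LittleEndian.PutUint32(bmp[2:6], uint32(len(bmp)))              // bfSize
    binary.LittleEndian.PutUint32(bmp[10:14], fileHeaderLen+infoHeaderLen) // bfOffBits
    copy(bmp[fileHeaderLen:], dib)
    return bmp
}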

Efficiently listing files in a directory having very many entries

I need to recursively read a directory structure, but I also need to perform an additional step once I have read through all entries for each directory. Therefore, I need to write my own recursion logic (and can't use the simplistic filepath.Walk routine). However, the ioutil.ReadDir and filepath.Glob routines only return slices. What if I'm pushing the limits of ext4 or xfs and have a directory with files numbering into the billions? I would expect golang to have a function that returns an unsorted series of os.FileInfo (or, even better, raw strings) over a channel rather than a sorted slice. How do we efficiently read file entries in this case?
All of the functions cited above seem to rely on readdirnames in os/dir_unix.go, and, for some reason, it just builds a slice when it seems like it would have been easy to spawn a goroutine and push the values into a channel. There might have been sound logic for doing this, but it's not clear what it is. I'm new to Go, so I could also have easily missed some principle that's obvious to everyone else.
This is the source code, for convenience:
func (f *File) readdirnames(n int) (names []string, err error) {
    // If this file has no dirinfo, create one.
    if f.dirinfo == nil {
        f.dirinfo = new(dirInfo)
        // The buffer must be at least a block long.
        f.dirinfo.buf = make([]byte, blockSize)
    }
    d := f.dirinfo
    size := n
    if size <= 0 {
        size = 100
        n = -1
    }
    names = make([]string, 0, size) // Empty with room to grow.
    for n != 0 {
        // Refill the buffer if necessary
        if d.bufp >= d.nbuf {
            d.bufp = 0
            var errno error
            d.nbuf, errno = fixCount(syscall.ReadDirent(f.fd, d.buf))
            if errno != nil {
                return names, NewSyscallError("readdirent", errno)
            }
            if d.nbuf <= 0 {
                break // EOF
            }
        }
        // Drain the buffer
        var nb, nc int
        nb, nc, names = syscall.ParseDirent(d.buf[d.bufp:d.nbuf], n, names)
        d.bufp += nb
        n -= nc
    }
    if n >= 0 && len(names) == 0 {
        return names, io.EOF
    }
    return names, nil
}
ioutil.ReadDir and filepath.Glob are just convenience functions around reading directory entries.
You can read directory entries in batches by directly using the Readdir or Readdirnames methods, if you supply an n argument > 0.
For something as basic as reading directory entries, there's no need to add the overhead of a goroutine and channel, and also provide an alternate way to return the error. You can always wrap the batched calls with your own goroutine and channel pattern if you wish.
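A hedged sketch of that wrapping (imports os and io assumed; the batch size of 100 is arbitrary; os.File.Readdirnames(n) with n > 0 returns at most n names per call and io.EOF once the directory is exhausted):
func streamNames(dir string) (<-chan string, <-chan error) {
    out := make(chan string)
    errc := make(chan error, 1)
    go func() {
        defer close(out)
        defer close(errc)
        f, err := os.Open(dir)
        if err != nil {
            errc <- err
            return
        }
        defer f.Close()
        for {
            names, err := f.Readdirnames(100) // read the next batch of entries
            for _, name := range names {
                out <- name
            }
            if err == io.EOF {
                return // done
            }
            if err != nil {
                errc <- err
                return
            }
        }
    }()
    return out, errc
}
Names can then be consumed with for name := range out, followed by a final err := <-errc check.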

Most efficient way to read Zlib compressed file in Golang?

I'm reading in and at the same time parsing (decoding) a file in a custom format, which is compressed with zlib. My question is how can I efficiently uncompress and then parse the uncompressed content without growing the slice? I would like to parse it whilst reading it into a reusable buffer.
This is for a speed-sensitive application and so I'd like to read it in as efficiently as possible. Normally I would just ioutil.ReadAll and then loop again through the data to parse it. This time I'd like to parse it as it's read, without having to grow the buffer into which it is read, for maximum efficiency.
Basically I'm thinking that if I can find a buffer of the perfect size then I can read into it, parse it, write over the buffer again, parse that, and so on. The issue here is that the zlib reader appears to read an arbitrary number of bytes each time Read(b) is called; it does not fill the slice. Because of this I don't know what the perfect buffer size would be. I'm concerned that it might break up some of the data that I wrote into two chunks, making it difficult to parse, because a uint64, say, could be split across two reads and therefore not occur in the same buffer read - or perhaps that can never happen and it's always read out in chunks of the same size as were originally written?
What is the optimal buffer size, or is there a way to calculate this?
If I have written data into the zlib writer with f.Write(b []byte) is it possible that this same data could be split into two reads when reading back the compressed data (meaning I will have to have a history during parsing), or will it always come back in the same read?
You can wrap your zlib reader in a bufio reader, then implement a specialized reader on top that will rebuild your chunks of data by reading from the bufio reader until a full chunk is read. Be aware that bufio.Read calls Read at most once on the underlying Reader, so you need to call ReadByte in a loop. bufio will however take care of the unpredictable size of data returned by the zlib reader for you.
If you do not want to implement a specialized reader, you can just go with a bufio reader and read as many bytes as needed with ReadByte() to fill a given data type. The optimal buffer size is at least the size of your largest data structure, up to whatever you can shove into memory.
If you read directly from the zlib reader, there is no guarantee that your data won't be split between two reads.
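A hedged sketch of the bufio approach, assuming (purely for illustration) that the stream is a sequence of little-endian uint64 values; bufio absorbs the zlib reader's unpredictable chunk sizes, and binary.Read pulls exactly 8 bytes per value even if they straddle two underlying reads. compressedFile stands in for whatever your source is, e.g. an *os.File.
zr, err := zlib.NewReader(compressedFile)
if err != nil {
    return err
}
defer zr.Close()

br := bufio.NewReader(zr)
for {
    var v uint64
    if err := binary.Read(br, binary.LittleEndian, &v); err == io.EOF {
        break // clean end of stream
    } else if err != nil {
        return err // includes io.ErrUnexpectedEOF for a truncated value
    }
    // parse/use v here
}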
Another, maybe cleaner, solution is to implement a writer for your data, then use io.Copy(your_writer, zlib_reader).
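And a hedged sketch of that writer-based route, again assuming fixed 8-byte records just for illustration: io.Copy pushes decompressed bytes into Write, which keeps any leftover bytes so a record split across two calls is reassembled.
type recordWriter struct {
    pending []byte // bytes carried over between Write calls
}

func (w *recordWriter) Write(p []byte) (int, error) {
    w.pending = append(w.pending, p...)
    for len(w.pending) >= 8 {
        v := binary.LittleEndian.Uint64(w.pending[:8])
        _ = v // parse/use the record here
        w.pending = w.pending[8:]
    }
    return len(p), nil
}

// then: _, err := io.Copy(&recordWriter{}, zlibReader)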
OK, so I figured this out in the end using my own implementation of a reader.
Basically the struct looks like this:
type reader struct {
    at  int
    n   int
    f   io.ReadCloser
    buf []byte
}
This can be attached to the zlib reader:
// Open file for reading
fi, err := os.Open(filename)
if err != nil {
    return nil, err
}
defer fi.Close()

// Attach zlib reader
r := new(reader)
r.buf = make([]byte, 2048)
r.f, err = zlib.NewReader(fi)
if err != nil {
    return nil, err
}
defer r.f.Close()
Then x number of bytes can be read straight out of the zlib reader using a function like this:
mydata := r.readx(10)
func (r *reader) readx(x int) []byte {
    for r.n < x {
        copy(r.buf, r.buf[r.at:r.at+r.n])
        r.at = 0
        m, err := r.f.Read(r.buf[r.n:])
        if err != nil {
            panic(err)
        }
        r.n += m
    }
    tmp := make([]byte, x)
    copy(tmp, r.buf[r.at:r.at+x]) // must be copied to avoid memory leak
    r.at += x
    r.n -= x
    return tmp
}
Note that I have no need to check for EOF because my parser should stop itself at the right place.

Under what circumstances would the two return values of crypto/rand read() ever be useful?

The typical usage of crypto/rand goes something like this:
salt := make([]byte, saltLength)
n, err := rand.Read(salt)
Which fills the byte slice I have labeled "salt" here with a sequence of random bytes.
Under what circumstances might the random number generator fail? Would it be insecure to fall back to a math/rand equivalent in the event that err is not nil?
Since the length of the byte slice is already known, n also seems useless to me; is there any reason I wouldn't just use _, err in its place?
To be safe your code should look more like this:
package main
import (
"crypto/rand"
"fmt"
)
func main() {
saltLength := 16
salt := make([]byte, saltLength)
n, err := rand.Read(salt[:cap(salt)])
if err != nil {
// handle error
}
salt = salt[:n]
if len(salt) != saltLength {
// handle error
}
fmt.Println(len(salt), salt)
}
Output:
16 [191 235 81 37 175 238 93 202 230 158 41 199 202 85 67 209]
n may be less than len(salt) if insufficient entropy is available. You should always check for errors (see also the io.ReadFull sketch after the references below).
For example, one of the many ways to obtain a sequence of random numbers is the getrandom system call on Linux or the CryptGenRandom API call on Windows.
References:
random: introduce getrandom(2) system call
CryptGenRandom function
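As a hedged aside on the n-versus-error point: an essentially equivalent formulation uses io.ReadFull, which by contract returns a non-nil error whenever fewer than len(salt) bytes were read, so the explicit length check becomes unnecessary; and on error you should fail, not fall back to math/rand.
salt := make([]byte, 16)
if _, err := io.ReadFull(rand.Reader, salt); err != nil {
    return err // do not fall back to math/rand for secrets
}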
ADDENDUM:
The crypto/rand package is a cryptographically secure pseudorandom number generator. Package math/rand is not cryptographically secure.
There are too many paths in even a simple program to test them all. Therefore, the only way to write programs with zero defects and zero bugs is to write readable, maintainable code that is provably correct. Systematic Programming by Niklaus Wirth is a good primer. It's worthwhile to spend time on constructing a robust general form, which can easily be adapted to each special case and that is easily maintainable as requirements change.
For example, for the io.Reader interface, typical usage is a looping pattern.
func Reader(rdr io.Reader) error {
    bufLen := 256
    buf := make([]byte, bufLen)
    for {
        n, err := rdr.Read(buf[:cap(buf)])
        if n == 0 {
            if err == nil {
                continue
            }
            if err == io.EOF {
                break
            }
            return err
        }
        buf = buf[:n]
        // process read buffer
        if err != nil && err != io.EOF {
            return err
        }
    }
    return nil
}
type Reader
type Reader interface {
    Read(p []byte) (n int, err error)
}
Reader is the interface that wraps the basic Read method.
Read reads up to len(p) bytes into p. It returns the number of bytes
read (0 <= n <= len(p)) and any error encountered. Even if Read
returns n < len(p), it may use all of p as scratch space during the
call. If some data is available but not len(p) bytes, Read
conventionally returns what is available instead of waiting for more.
When Read encounters an error or end-of-file condition after
successfully reading n > 0 bytes, it returns the number of bytes read.
It may return the (non-nil) error from the same call or return the
error (and n == 0) from a subsequent call. An instance of this general
case is that a Reader returning a non-zero number of bytes at the end
of the input stream may return either err == EOF or err == nil. The
next Read should return 0, EOF regardless.
Callers should always process the n > 0 bytes returned before
considering the error err. Doing so correctly handles I/O errors that
happen after reading some bytes and also both of the allowed EOF
behaviors.
Implementations of Read are discouraged from returning a zero byte
count with a nil error, and callers should treat that situation as a
no-op.
We only want to allocate the buffer once, before we start the Read loop. We want the compiler and runtime to detect if we stray outside the valid buffer length n in the Read loop, so we write buf = buf[:n]. When we loop around to the next Read, however, we explicitly want the full buffer: buf[:cap(buf)].
It's never wrong to write Read(buf[:cap(buf)]). Even though you may not have a Read loop now, you may add one later, and you may forget to reset the buffer length. There may be a special case for a particular Read implementation, like an underlying ReadFull. Then you have to read and monitor the underlying code to prove that your code is correct. Documentation is not always reliable. And you can't safely switch to another io.Reader Read implementation.
When you access the salt slice, salt[:len(salt)], you are using len(salt) not n. If they differ, you have a bug.
"implementations should follow a general principle of robustness: be
conservative in what you do, be liberal in what you accept from
others." Jon
Postel
