Is it a good idea to use a list of *bufio.Scanner for files to be read later in golang?

I have a list of delimited files to read after obtaining their paths. Instead of saving each path as a string, I'm wondering whether I can simply store a list of *bufio.Scanner so the files are easier to read later (and the code is cleaner too)? Here is a quick example:
func main() {
    scannerList := read(filenameList)
    dowork(scannerList)
}

func read(filenameList []string) (scannerList []*bufio.Scanner) {
    for _, filename := range filenameList {
        op, _ := os.Open(filename)
        defer op.Close()
        scanner := bufio.NewScanner(op)
        scannerList = append(scannerList, scanner)
    }
    return
}
func dowork(scannerList []*bufio.Scanner) {
    for _, scanner := range scannerList {
        for scanner.Scan() {
            // read stuff
        }
        // do stuff
    }
}
My code, similar to the example above, compiles, but I don't know whether this approach is recommended (or even works). Any comments? Thanks!

A Scanner is a complicated structure, and one that embeds a buffer. The buffer can grow dynamically (depending on what the scan function requests) up to 64kB (MaxScanTokenSize).
So in general it is not a good idea to keep redundant Scanners around, as the buffers cannot be released until the Scanners are discarded. But perhaps a few extra kilobytes of memory don't matter much in your case.
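If those buffers do matter, a possible alternative (a sketch, not from the original answer; the file names and per-line work are placeholders) is to keep only the file names and build each Scanner just before use, so at most one Scanner and its buffer are alive at a time:

package main

import (
    "bufio"
    "fmt"
    "os"
)

// processFiles opens, scans, and closes one file at a time, so only one
// Scanner (and at most one of its buffers) exists at any moment.
func processFiles(filenameList []string) error {
    for _, filename := range filenameList {
        op, err := os.Open(filename)
        if err != nil {
            return err
        }
        scanner := bufio.NewScanner(op)
        for scanner.Scan() {
            fmt.Println(scanner.Text()) // placeholder for the real per-line work
        }
        op.Close() // close right away instead of deferring until the end
        if err := scanner.Err(); err != nil {
            return err
        }
    }
    return nil
}

func main() {
    if err := processFiles([]string{"a.txt", "b.txt"}); err != nil { // placeholder file names
        fmt.Println(err)
    }
}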

Related

Is it bad practice to redeclare slices in every loop iteration?

As an example:
for {
    myData := <-myChan
    buf := new(bytes.Buffer)
    encoder := gob.NewEncoder(buf)
    err := encoder.Encode(myData)
    ...
I could put buf := new(... above the for loop to save processor time and maybe some memory, but will that cause any problems? The examples I see have the new inside the loop.
Edit: for the case above, the encoder could go above the for loop too, so why doesn't it (in the examples I've seen)?
I would expect to reuse the buffer:
buf := new(bytes.Buffer)
for {
    buf.Reset()
    //...
}
The idiomatic Go style would be to write the clearest code possible as long as it solves the problem at hand within its time and space constraints.
In other words, I wouldn't worry about efficiency here because:
The compiler can do clever optimizations that would remove any differences.
Almost certainly, if there are differences, they are negligible.
If you find that the allocation takes a considerable portion of each loop iteration, then #peterSO's answer below shows a good pattern for reusing a bytes.Buffer by calling its Reset method.
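As a concrete illustration, here is a minimal sketch (the channel, payload type, and loop shape are invented for the example): the buffer is allocated once and Reset each iteration, while the gob.Encoder is still created per message, because an encoder writes type descriptors only once per stream and therefore normally belongs to one continuous stream:

package main

import (
    "bytes"
    "encoding/gob"
    "fmt"
)

type payload struct{ N int } // hypothetical message type

func main() {
    myChan := make(chan payload, 2)
    myChan <- payload{1}
    myChan <- payload{2}
    close(myChan)

    buf := new(bytes.Buffer) // allocated once, reused across iterations
    for myData := range myChan {
        buf.Reset()                    // drop previous contents, keep the allocated capacity
        encoder := gob.NewEncoder(buf) // fresh encoder so each buffer decodes on its own
        if err := encoder.Encode(myData); err != nil {
            fmt.Println("encode error:", err)
            continue
        }
        fmt.Println("encoded bytes:", buf.Len())
    }
}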

How can I retrieve an image data buffer from clipboard memory (uintptr)?

I'm trying to use syscall with user32.dll to get the contents of the clipboard. I expect it to be image data from a Print Screen.
Right now I've got this:
if opened := openClipboard(0); !opened {
    fmt.Println("Failed to open Clipboard")
}
handle := getClipboardData(CF_BITMAP)
// get buffer
img, _, err := Decode(buffer)
I need to get the data into a readable buffer using the handle.
I've had some inspiration from AllenDang/w32 and atotto/clipboard on github. The following would work for text, based on atotto's implementation:
text := syscall.UTF16ToString((*[1 << 20]uint16)(unsafe.Pointer(handle))[:])
But how can I get a buffer containing image data I can decode?
[Update]
Going by the solution #kostix provided, I hacked together a half-working example:
image.RegisterFormat("bmp", "bmp", bmp.Decode, bmp.DecodeConfig)
if opened := w32.OpenClipboard(0); opened == false {
    fmt.Println("Error: Failed to open Clipboard")
}
//fmt.Printf("Format: %d\n", w32.EnumClipboardFormats(w32.CF_BITMAP))
handle := w32.GetClipboardData(w32.CF_DIB)
size := globalSize(w32.HGLOBAL(handle))
if handle != 0 {
    pData := w32.GlobalLock(w32.HGLOBAL(handle))
    if pData != nil {
        data := (*[1 << 25]byte)(pData)[:size]
        // The data is either in DIB format and missing the BITMAPFILEHEADER
        // or there are other issues since it can't be decoded at this point
        buffer := bytes.NewBuffer(data)
        img, _, err := image.Decode(buffer)
        if err != nil {
            fmt.Printf("Failed decoding: %s", err)
            os.Exit(1)
        }
        fmt.Println(img.At(0, 0).RGBA())
    }
    w32.GlobalUnlock(w32.HGLOBAL(pData))
}
w32.CloseClipboard()
AllenDang/w32 contains most of what you'd need, but sometimes you need to implement something yourself, like globalSize():
var (
    modkernel32    = syscall.NewLazyDLL("kernel32.dll")
    procGlobalSize = modkernel32.NewProc("GlobalSize")
)

func globalSize(hMem w32.HGLOBAL) uint {
    ret, _, _ := procGlobalSize.Call(uintptr(hMem))
    if ret == 0 {
        panic("GlobalSize failed")
    }
    return uint(ret)
}
Maybe someone will come up with a solution to get the BMP data. In the meantime I'll be taking a different route.
#JimB is correct: user32!GetClipboardData() returns a HGLOBAL, and a comment example over there suggests using kernel32!GlobalLock() to a) globally lock that handle, and b) yield a proper pointer to the memory referred to by it.
You will need to kernel32!GlobalUnlock() the handle after you're done with it.
As to converting pointers obtained from Win32 API functions into something readable by Go, the usual trick is casting the pointer to a pointer to an insanely large array and then slicing it. To cite the "Turning C arrays into Go slices" section of the Go wiki article on cgo:
To create a Go slice backed by a C array (without copying the original data), one needs to acquire this length at runtime and use a type conversion to a pointer to a very big array and then slice it to the length that you want (also remember to set the cap if you're using Go 1.2 or later), for example (see http://play.golang.org/p/XuC0xqtAIC for a runnable example):
import "C"
import "unsafe"
...
var theCArray *C.YourType = C.getTheArray()
length := C.getTheArrayLength()
slice := (*[1 << 30]C.YourType)(unsafe.Pointer(theCArray))[:length:length]
It is important to keep in mind that the Go garbage collector will not
interact with this data, and that if it is freed from the C side of
things, the behavior of any Go code using the slice is nondeterministic.
In your case it will be simpler:
h := GlobalLock()
defer GlobalUnlock(h)
length := somehowGetLengthOfImageInTheClipboard()
slice := (*[1 << 30]byte)(unsafe.Pointer(uintptr(h)))[:length:length]
Then you need to actually read the bitmap.
This depends on the format of the Device-Independent Bitmap (DIB) available for export from the clipboard.
See this and this for a start.
As usual, definitions of BITMAPINFOHEADER etc. are easily available online on the MSDN site.
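The missing BITMAPFILEHEADER noted in the update above can be synthesized in front of the raw CF_DIB bytes before decoding. Below is a minimal sketch (not part of the original answer) that assumes an uncompressed BI_RGB DIB with no color table, e.g. a 24- or 32-bit screenshot; palettized or BI_BITFIELDS DIBs would also need the color table or channel masks counted into the pixel-data offset:

import "encoding/binary"

// prependBitmapFileHeader puts a minimal 14-byte BITMAPFILEHEADER in front of
// raw CF_DIB data so the result can be fed to a BMP decoder such as
// golang.org/x/image/bmp. Assumes no color table follows the info header.
func prependBitmapFileHeader(dib []byte) []byte {
    const fileHeaderLen = 14
    infoHeaderLen := binary.LittleEndian.Uint32(dib[:4]) // biSize, typically 40

    header := make([]byte, fileHeaderLen)
    header[0], header[1] = 'B', 'M'
    binary.LittleEndian.PutUint32(header[2:6], uint32(fileHeaderLen+len(dib))) // total file size
    // bytes 6-9 are reserved and stay zero
    binary.LittleEndian.PutUint32(header[10:14], fileHeaderLen+infoHeaderLen) // offset to pixel data
    return append(header, dib...)
}

With that in place, the data slice from the example above could be decoded with image.Decode(bytes.NewReader(prependBitmapFileHeader(data))).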

Golang high cpu usage on simple webserver unable to understand why?

So I have a simple net/http webserver. All it does is deliver 100 MB of random bytes, which I intend to use for network speed testing. My handler for the 100mb endpoint is really simple (pasted below). The code works fine and I get my random byte file, but when someone downloads these 100 megabytes, the CPU usage of this program shoots up to 150% and stays there until the handler finishes. Am I doing something very wrong here? What could I do to improve this handler's performance?
func downloadHandler(w http.ResponseWriter, r *http.Request) {
    str := RandStringBytes(8192) // generates 8192 bytes of randomness
    sz := 1000 * 1000 * 100      // 100 megabytes
    iter := sz/len(str) + 1
    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(sz))
    for i := 0; i < iter; i++ {
        fmt.Fprintf(w, str)
    }
}
The problem is that fmt.Fprintf() expects a format string:
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error)
And you pass it a big, 8 KB format string. The fmt package has to analyze the format string; it is not something that gets written to the output as is. Most definitely this is what is eating your CPU.
If the random string contains the special % sign, that makes your case even worse, as fmt.Fprintf() will then expect further arguments which you don't "deliver", so the fmt package also has to (and will) include error messages in the output, such as:
fmt.Fprintf(os.Stdout, "aaa%bbb%d")
Output:
aaa%!b(MISSING)bb%!d(MISSING)
Use fmt.Fprint() instead which does not expect a format string:
fmt.Fprint(w, str)
Or even better, convert your random string to a byte slice once, and just keep writing that:
data := []byte(str)
for i := 0; i < iter; i++ {
    if _, err := w.Write(data); err != nil {
        // Handle error, e.g. return
    }
}
When delivering a large amount of data, you won't find a faster solution than writing a prepared byte slice in a loop (perhaps marginally faster if you vary the size of the slice). If your solution is still "slow", that might be due to your RandStringBytes() function, which we don't know anything about, or your output might be compressed (gzipped) by other handlers or some framework, which does use relatively high CPU. Also, if the client that receives the response is on the same computer (e.g. a browser), it, or a firewall / antivirus software, may check and analyze the response for malicious code, which may also be resource intensive.
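For comparison, here is a minimal sketch of the whole handler rewritten around a prepared byte slice (the random source, endpoint path, and server wiring are assumptions; RandStringBytes from the question is replaced by crypto/rand). It also trims the final write so the body exactly matches the declared Content-Length, which the original loop slightly overshoots:

package main

import (
    "crypto/rand"
    "net/http"
    "strconv"
)

func downloadHandler(w http.ResponseWriter, r *http.Request) {
    const size = 1000 * 1000 * 100 // 100 MB, as in the question
    chunk := make([]byte, 8192)
    if _, err := rand.Read(chunk); err != nil {
        http.Error(w, "failed to generate data", http.StatusInternalServerError)
        return
    }

    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(size))

    for sent := 0; sent < size; sent += len(chunk) {
        if size-sent < len(chunk) {
            chunk = chunk[:size-sent] // trim the final write to match Content-Length
        }
        if _, err := w.Write(chunk); err != nil {
            return // client went away; nothing useful left to do
        }
    }
}

func main() {
    http.HandleFunc("/100mb", downloadHandler) // endpoint path is an assumption
    http.ListenAndServe(":8080", nil)
}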

Most efficient way to read Zlib compressed file in Golang?

I'm reading in and at the same time parsing (decoding) a file in a custom format, which is compressed with zlib. My question is how can I efficiently uncompress and then parse the uncompressed content without growing the slice? I would like to parse it whilst reading it into a reusable buffer.
This is for a speed-sensitive application and so I'd like to read it in as efficiently as possible. Normally I would just ioutil.ReadAll and then loop again through the data to parse it. This time I'd like to parse it as it's read, without having to grow the buffer into which it is read, for maximum efficiency.
Basically I'm thinking that if I can find a buffer of the perfect size then I can read into this, parse it, then write over the buffer again, parse that, and so on. The issue here is that the zlib reader appears to read an arbitrary number of bytes each time Read(b) is called; it does not fill the slice. Because of this I don't know what the perfect buffer size would be. I'm concerned that it might break up some of the data that I wrote into two chunks, making it difficult to parse, because one uint64, say, could be split across two reads and therefore not appear in the same buffer read. Or perhaps that can never happen, and it is always read out in chunks of the same size as were originally written?
What is the optimal buffer size, or is there a way to calculate this?
If I have written data into the zlib writer with f.Write(b []byte) is it possible that this same data could be split into two reads when reading back the compressed data (meaning I will have to have a history during parsing), or will it always come back in the same read?
You can wrap your zlib reader in a bufio reader, then implement a specialized reader on top that will rebuild your chunks of data by reading from the bufio reader until a full chunk is read. Be aware that bufio.Read calls Read at most once on the underlying Reader, so you need to call ReadByte in a loop. bufio will however take care of the unpredictable size of data returned by the zlib reader for you.
If you do not want to implement a specialized reader, you can just go with a bufio reader and read as many bytes as needed with ReadByte() to fill a given data type. The optimal buffer size is at least the size of your largest data structure, up to whatever you can shove into memory.
If you read directly from the zlib reader, there is no guarantee that your data won't be split between two reads.
Another, maybe cleaner, solution is to implement a writer for your data, then use io.Copy(your_writer, zlib_reader).
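To illustrate the bufio route from the answer above, here is a minimal sketch (the file name and the record layout, assumed to be a plain stream of little-endian uint64s, are invented for the example). binary.Read fills fixed-size values via io.ReadFull, so a uint64 can never be split across two reads of the underlying zlib stream:

package main

import (
    "bufio"
    "compress/zlib"
    "encoding/binary"
    "fmt"
    "io"
    "os"
)

func main() {
    f, err := os.Open("data.zlib") // hypothetical input file
    if err != nil {
        panic(err)
    }
    defer f.Close()

    zr, err := zlib.NewReader(f)
    if err != nil {
        panic(err)
    }
    defer zr.Close()

    // The buffer size only needs to be at least as large as the biggest
    // single value you read; bufio absorbs the zlib reader's short reads.
    br := bufio.NewReaderSize(zr, 64*1024)

    for {
        var v uint64
        err := binary.Read(br, binary.LittleEndian, &v)
        if err == io.EOF {
            break
        }
        if err != nil {
            panic(err)
        }
        fmt.Println(v)
    }
}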
OK, so I figured this out in the end using my own implementation of a reader.
Basically the struct looks like this:
type reader struct {
    at  int
    n   int
    f   io.ReadCloser
    buf []byte
}
This can be attached to the zlib reader:
// Open file for reading
fi, err := os.Open(filename)
if err != nil {
    return nil, err
}
defer fi.Close()

// Attach zlib reader
r := new(reader)
r.buf = make([]byte, 2048)
r.f, err = zlib.NewReader(fi)
if err != nil {
    return nil, err
}
defer r.f.Close()
Then x number of bytes can be read straight out of the zlib reader using a function like this:
mydata := r.readx(10)
func (r *reader) readx(x int) []byte {
    for r.n < x {
        copy(r.buf, r.buf[r.at:r.at+r.n])
        r.at = 0
        m, err := r.f.Read(r.buf[r.n:])
        if err != nil {
            panic(err)
        }
        r.n += m
    }
    tmp := make([]byte, x)
    copy(tmp, r.buf[r.at:r.at+x]) // must be copied to avoid memory leak
    r.at += x
    r.n -= x
    return tmp
}
Note that I have no need to check for EOF, because my parser should stop itself at the right place.

Go - What is really a multipart.File?

In the docs it is said that
If stored on disk, the File's underlying concrete type will be an
*os.File.
In this case everything is clear. Great. But, what happens if not, if the file is stored in memory?
My actual problem is that I'm trying to get the size of the different files stored in memory that I received through an HTML form, but I cannot use os.Stat to do fileInfo.Size() because I don't have the location of the file, just its name.
fhs := req.MultipartForm.File["files"]
for _, fileHeader := range fhs {
    file, _ := fileHeader.Open()
    log.Println(len(file))                        // Gives an error because file is of type multipart.File
    fileInfo, err := os.Stat(fileHeader.Filename) // Gives an error because it's just the name, not the complete path
    // Here I would do things with the file
}
You can exploit the fact that multipart.File implements io.Seeker to find its size.
cur, err := file.Seek(0, 1)
size, err := file.Seek(0, 2)
_, err = file.Seek(cur, 0)
The first line finds the file's current offset. The second seeks to the end of the file and returns where it is in relation to the beginning of the file. This is the size of the file. The third seeks to the offset we were at before trying to find the size.
You can read more about the seek method here.
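Wrapped up as a helper (a small sketch based on the Seek trick above; multipart.File includes io.Seeker, so this works whether the upload is backed by memory or a temporary file):

import (
    "io"
    "mime/multipart"
)

// fileSize reports the size of an uploaded file by seeking to its end and
// back, leaving the original read offset untouched.
func fileSize(f multipart.File) (int64, error) {
    cur, err := f.Seek(0, io.SeekCurrent) // remember the current offset
    if err != nil {
        return 0, err
    }
    size, err := f.Seek(0, io.SeekEnd) // offset of the end == size
    if err != nil {
        return 0, err
    }
    _, err = f.Seek(cur, io.SeekStart) // restore the original offset
    return size, err
}

It could be called as size, err := fileSize(file) right after fileHeader.Open() in the loop from the question.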
If you call req.ParseMultipartForm(0), this will write the entire file to disk instead of storing anything in memory. You can then get the file with f, _, _ := req.FormFile("file") and stat it with fi, _ := f.(*os.File).Stat().
Depending on what you want to do with the data, the best thing may be to read the file into a byte slice with ioutil.ReadAll. (You might want the data as a byte slice eventually anyway.) Once you've done that, you can find the length with len.
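A minimal sketch of that approach, reusing the loop from the question (note that ioutil.ReadAll buffers the whole upload in memory, so it is best suited to small files):

for _, fileHeader := range fhs {
    file, err := fileHeader.Open()
    if err != nil {
        log.Println(err)
        continue
    }
    data, err := ioutil.ReadAll(file) // reads the whole upload into memory
    file.Close()
    if err != nil {
        log.Println(err)
        continue
    }
    log.Println(fileHeader.Filename, "size in bytes:", len(data))
}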
