Golang high cpu usage on simple webserver unable to understand why? - performance

So I have a simple net/http webserver. All it does is deliver 100 MB of random bytes, which I intend to use for network speed testing. My handler for the 100 MB endpoint is really simple (pasted below). The code works fine and I get my random-byte file; the problem is that whenever someone downloads these 100 megabytes, the CPU usage of this program shoots up to 150% and stays there until the handler finishes. Am I doing something very wrong here? What could I do to improve this handler's performance?
func downloadHandler(w http.ResponseWriter, r *http.Request) {
    str := RandStringBytes(8192) // generates 8192 bytes of randomness
    sz := 1000 * 1000 * 100      // 100 megabytes
    iter := sz/len(str) + 1
    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(sz))
    for i := 0; i < iter; i++ {
        fmt.Fprintf(w, str)
    }
}

The problem is that fmt.Fprintf() expects a format string:
func Fprintf(w io.Writer, format string, a ...interface{}) (n int, err error)
And you pass it a big, 8 KB format string. The fmt package has to parse the format string; it is not something that gets to the output as is. That is almost certainly what is eating your CPU.
If the random string contains the special % sign, that makes your case even worse: fmt.Fprintf() then expects further arguments which you don't "deliver", so the fmt package will also include error messages in the output, such as:
fmt.Fprintf(os.Stdout, "aaa%bbb%d")
Output:
aaa%!b(MISSING)bb%!d(MISSING)
Use fmt.Fprint() instead which does not expect a format string:
fmt.Fprint(w, str)
Or even better, convert your random string to a byte slice once, and just keep writing that:
data := []byte(str)
for i := 0; i < iter; i++ {
    if _, err := w.Write(data); err != nil {
        // Handle error, e.g. return:
        return
    }
}
For delivering a large amount of data, you won't get a much faster solution than writing a prepared byte slice in a loop (you might gain a little by varying the size of the slice). If your solution is still "slow", that might be due to your RandStringBytes() function, which we don't know anything about, or the output might be getting compressed (gzipped) by other handlers or some framework you use, which costs relatively much CPU. Also, if the client that receives the response runs on the same computer (e.g. a browser), it, or a firewall / antivirus program, may check or analyze the response for malicious code, which may also be resource intensive.
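Putting it all together, here is a minimal sketch of the corrected handler (assuming the usual net/http and strconv imports; randBytes is a hypothetical helper standing in for the question's RandStringBytes, returning a []byte directly):

func downloadHandler(w http.ResponseWriter, r *http.Request) {
    data := randBytes(8192)      // hypothetical helper: 8192 random bytes as a []byte
    const sz = 100 * 1000 * 1000 // 100 megabytes
    w.Header().Set("Content-Type", "application/octet-stream")
    w.Header().Set("Content-Length", strconv.Itoa(sz))
    for sent := 0; sent < sz; {
        chunk := data
        if remaining := sz - sent; remaining < len(chunk) {
            chunk = chunk[:remaining] // trim the last chunk so exactly sz bytes are written
        }
        if _, err := w.Write(chunk); err != nil {
            return // the client went away or the write failed
        }
        sent += len(chunk)
    }
}

Trimming the final chunk keeps the number of bytes written equal to the declared Content-Length.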

Related

Why does copyBuffer implement a while loop?

I am trying to understand how copyBuffer works under the hood, but what is not clear to me is the use of the while loop:
for {
    nr, er := src.Read(buf)
    //...
}
Full code below:
// copyBuffer is the actual implementation of Copy and CopyBuffer.
// if buf is nil, one is allocated.
func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error) {
    // If the reader has a WriteTo method, use it to do the copy.
    // Avoids an allocation and a copy.
    if wt, ok := src.(WriterTo); ok {
        return wt.WriteTo(dst)
    }
    // Similarly, if the writer has a ReadFrom method, use it to do the copy.
    if rt, ok := dst.(ReaderFrom); ok {
        return rt.ReadFrom(src)
    }
    size := 32 * 1024
    if l, ok := src.(*LimitedReader); ok && int64(size) > l.N {
        if l.N < 1 {
            size = 1
        } else {
            size = int(l.N)
        }
    }
    if buf == nil {
        buf = make([]byte, size)
    }
    for {
        nr, er := src.Read(buf)
        if nr > 0 {
            nw, ew := dst.Write(buf[0:nr])
            if nw > 0 {
                written += int64(nw)
            }
            if ew != nil {
                err = ew
                break
            }
            if nr != nw {
                err = ErrShortWrite
                break
            }
        }
        if er != nil {
            if er != EOF {
                err = er
            }
            break
        }
    }
    return written, err
}
It writes with nw, ew := dst.Write(buf[0:nr]), where nr is the number of bytes read, so why is the while loop necessary?
Let's assume that src does not implement WriterTo and dst does not implement ReaderFrom, since otherwise we would not get down to the for loop at all.
Let's further assume, for simplicity, that src is not a *LimitedReader, so that size is 32 * 1024: 32 kBytes. (There is no real loss of generality here, as a LimitedReader just allows the source to pick an even smaller number, at least in this case.)
Finally, let's assume buf is nil. (Or, if it's not nil, let's assume it has a capacity of 32768 bytes. If it has a large capacity, we can just change the rest of the assumptions below, so that src has more bytes than there are in the buffer.)
So: we enter the loop with size holding the size of the temporary buffer buf, which is 32k. Now suppose the source is a file that holds 64k. It will take at least two src.Read() calls to read it! Clearly we need an outer loop. That's the overall for here.
Now suppose that src.Read() really does read the full 32k, so that nr is also 32 * 1024. The code will now call dst.Write(), passing the full 32k of data. Unlike src.Read()—which is allowed to only read, say, 1k instead of the full 32k—the next chunk of code requires that dst.Write() write all 32k. If it doesn't, the loop will break with err set to ErrShortWrite.
(An alternative would have been to keep calling dst.Write() with the remaining bytes, so that dst.Write() could write only 1k of the 32k, requiring 32 calls to get it all written.)
Note that src.Read() can choose to read only, say, 1k instead of 32k. If the actual file is 64k, it will then take 64 trips, rather than 2, through the outer loop. (An alternative choice would have been to force such a reader to use the *LimitedReader type. That's not as flexible, though, and is not what LimitedReader is intended for.)
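To see the partial-read case in action, here is a small self-contained sketch (mine, not from the original answer). It wraps a 64-byte source in testing/iotest.OneByteReader, which forces every Read to return a single byte, and hides the destination's ReadFrom method so io.Copy has to fall back to its read/write loop:

package main

import (
    "bytes"
    "fmt"
    "io"
    "strings"
    "testing/iotest"
)

func main() {
    // OneByteReader makes every Read return at most one byte,
    // simulating a reader that returns partial results.
    src := iotest.OneByteReader(strings.NewReader(strings.Repeat("x", 64)))

    var dst bytes.Buffer
    // Wrapping dst in an anonymous struct hides bytes.Buffer's ReadFrom
    // method, so io.Copy cannot take the ReaderFrom fast path.
    n, err := io.Copy(struct{ io.Writer }{&dst}, src)
    fmt.Println(n, err) // 64 <nil>: the data was copied in 64 one-byte trips through the loop
}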
func copyBuffer(dst Writer, src Reader, buf []byte) (written int64, err error)
When the total data size to copy is larger than len(buf), nr, er := src.Read(buf) will read at most len(buf) bytes each time.
That's how copyBuffer works:
for {
    copy up to `len(buf)` bytes from `src` to `dst`
    if EOF {
        // done
        break
    }
    if other errors {
        return error
    }
}
In the normal case, you would just call Copy rather than CopyBuffer.
func Copy(dst Writer, src Reader) (written int64, err error) {
    return copyBuffer(dst, src, nil)
}
The option to have a user-supplied buffer is, I think, just for extreme optimization scenarios. The use of the word "Buffer" in the name is possibly a source of confusion, since the function is not copying the buffer; it merely uses it internally.
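For illustration, here is a small sketch (mine, not from the original answer) contrasting the two calls. The plainReader wrapper hides strings.Reader's WriterTo method so the buffered copy loop is really exercised; the reusable buffer only pays off when you copy many streams and want to avoid repeated 32 KB allocations:

package main

import (
    "fmt"
    "io"
    "strings"
)

// plainReader exposes only Read, hiding strings.Reader's WriterTo fast path,
// so io.Copy / io.CopyBuffer actually use their internal buffered loop.
type plainReader struct{ r io.Reader }

func (p plainReader) Read(b []byte) (int, error) { return p.r.Read(b) }

func main() {
    var sb strings.Builder

    // Normal case: io.Copy allocates its own 32 KB buffer internally.
    io.Copy(&sb, plainReader{strings.NewReader("hello ")})

    // Optimization case: reuse one buffer across many copies, so the
    // copy loop does not allocate a fresh buffer on every call.
    buf := make([]byte, 32*1024)
    for _, s := range []string{"one ", "two ", "three"} {
        io.CopyBuffer(&sb, plainReader{strings.NewReader(s)}, buf)
    }
    fmt.Println(sb.String()) // hello one two three
}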
There are two reasons for the looping:
1. The buffer might not be large enough to copy all of the data (the size of which is not necessarily known in advance) in one pass.
2. A Reader, though not a Writer, may return partial results when it makes sense to do so.
Regarding the second item, consider that the Reader does not necessarily represent a fixed file or data buffer. It could, instead, be a live stream from some other thread or process. As such, there are many valid scenarios for stream data to be read and processed on an as-available basis. Although CopyBuffer doesn't do this, it still has to work with such behaviors from any Reader.
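As a small illustration of that live-stream case (my sketch, not from the original answer), the code below feeds io.Copy from one end of an io.Pipe while another goroutine writes chunks as they become available; each Read returns only whatever has been written so far, and the copy keeps reading until the pipe reports EOF:

package main

import (
    "bytes"
    "fmt"
    "io"
    "time"
)

func main() {
    pr, pw := io.Pipe()

    // Producer goroutine: a "live stream" that emits data piecemeal.
    go func() {
        for i := 0; i < 3; i++ {
            fmt.Fprintf(pw, "chunk %d\n", i)
            time.Sleep(10 * time.Millisecond) // data arrives over time
        }
        pw.Close() // signals EOF to the reading side
    }()

    var out bytes.Buffer
    n, err := io.Copy(&out, pr)
    fmt.Println(n, err)
    fmt.Print(out.String())
}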

Is it a good idea to use a list of *bufio.Scanner for files to be read later in golang?

I have a list of delimited files to be read after I obtain their paths. Instead of saving each path as a string, I'm wondering whether I can simply store a list of *bufio.Scanner, so the files will be much easier to read later (and the code will be cleaner too). Here is a quick example:
func main() {
    scannerList := read(filenameList)
    dowork(scannerList)
}

func read(filenameList []string) (scannerList []*bufio.Scanner) {
    for _, filename := range filenameList {
        op, _ := os.Open(filename)
        defer op.Close()
        scanner := bufio.NewScanner(op)
        scannerList = append(scannerList, scanner)
    }
    return
}

func dowork(scannerList []*bufio.Scanner) {
    for _, scanner := range scannerList {
        for scanner.Scan() {
            //read stuff
        }
        //do stuff
    }
}
Code similar to the above example compiles, but I don't know whether this is recommended (or even works). Any comments? Thanks!
A Scanner is a complicated structure, and one that embeds a buffer. The buffer can grow dynamically (depending on what the scan function requests) up to 64kB (MaxScanTokenSize).
So in general it is not a good idea to keep redundant Scanners around, as the buffers cannot be released until the Scanners are discarded. But perhaps a few extra kilobytes of memory don't matter much in your case.
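As an alternative (not from the original answer), a common pattern is to keep the list of paths and open, scan, and close each file only when it is actually processed, so that neither the file handles nor the Scanner buffers outlive their use. A minimal sketch:

package main

import (
    "bufio"
    "fmt"
    "log"
    "os"
)

func main() {
    filenameList := []string{"a.txt", "b.txt"} // hypothetical input files

    for _, filename := range filenameList {
        if err := processFile(filename); err != nil {
            log.Println(err)
        }
    }
}

// processFile opens, scans and closes a single file; the Scanner and its
// buffer become garbage as soon as the function returns.
func processFile(filename string) error {
    op, err := os.Open(filename)
    if err != nil {
        return err
    }
    defer op.Close()

    scanner := bufio.NewScanner(op)
    for scanner.Scan() {
        fmt.Println(scanner.Text()) // read stuff / do stuff per line
    }
    return scanner.Err()
}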

Most efficient way to read Zlib compressed file in Golang?

I'm reading in and at the same time parsing (decoding) a file in a custom format, which is compressed with zlib. My question is: how can I efficiently uncompress and then parse the uncompressed content without growing the slice? I would like to parse it whilst reading it into a reusable buffer.
This is for a speed-sensitive application and so I'd like to read it in as efficiently as possible. Normally I would just use ioutil.ReadAll and then loop through the data again to parse it. This time I'd like to parse it as it's read, without having to grow the buffer into which it is read, for maximum efficiency.
Basically I'm thinking that if I can find a buffer of the perfect size, then I can read into this, parse it, write over the buffer again, parse that, and so on. The issue here is that the zlib reader appears to read an arbitrary number of bytes each time Read(b) is called; it does not fill the slice. Because of this I don't know what the perfect buffer size would be. I'm concerned that it might break up some of the data I wrote into two chunks, making it difficult to parse, because, say, a uint64 could be split across two reads and therefore not occur in the same buffer read. Or perhaps that can never happen, and the data is always read back in chunks of the same size as were originally written?
What is the optimal buffer size, or is there a way to calculate this?
If I have written data into the zlib writer with f.Write(b []byte), is it possible that this same data could be split across two reads when reading the compressed data back (meaning I would have to keep history while parsing), or will it always come back in the same read?
You can wrap your zlib reader in a bufio reader, then implement a specialized reader on top that will rebuild your chunks of data by reading from the bufio reader until a full chunk is read. Be aware that bufio.Read calls Read at most once on the underlying Reader, so you need to call ReadByte in a loop. bufio will however take care of the unpredictable size of data returned by the zlib reader for you.
If you do not want to implement a specialized reader, you can just go with a bufio reader and read as many bytes as needed with ReadByte() to fill a given data type. The optimal buffer size is at least the size of your largest data structure, up to whatever you can shove into memory.
If you read directly from the zlib reader, there is no guarantee that your data won't be split between two reads.
Another, maybe cleaner, solution is to implement a writer for your data, then use io.Copy(your_writer, zlib_reader).
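As a rough sketch of the bufio approach mentioned above (mine, not from the original answer; the file name and record layout are made up for illustration), io.ReadFull and encoding/binary can pull exact-sized values out of the buffered zlib stream, so a uint64 can never be split across reads from the parser's point of view:

package main

import (
    "bufio"
    "compress/zlib"
    "encoding/binary"
    "fmt"
    "io"
    "log"
    "os"
)

func main() {
    fi, err := os.Open("data.zlib") // hypothetical input file
    if err != nil {
        log.Fatal(err)
    }
    defer fi.Close()

    zr, err := zlib.NewReader(fi)
    if err != nil {
        log.Fatal(err)
    }
    defer zr.Close()

    // bufio absorbs the unpredictable chunk sizes returned by the zlib reader.
    br := bufio.NewReader(zr)

    // Hypothetical record format: a uint64 length followed by that many bytes.
    var length uint64
    if err := binary.Read(br, binary.LittleEndian, &length); err != nil {
        log.Fatal(err)
    }
    payload := make([]byte, length)
    if _, err := io.ReadFull(br, payload); err != nil {
        log.Fatal(err)
    }
    fmt.Println(length, len(payload))
}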
OK, so I figured this out in the end using my own implementation of a reader.
Basically the struct looks like this:
type reader struct {
    at  int
    n   int
    f   io.ReadCloser
    buf []byte
}
This can be attached to the zlib reader:
// Open file for reading
fi, err := os.Open(filename)
if err != nil {
    return nil, err
}
defer fi.Close()

// Attach zlib reader
r := new(reader)
r.buf = make([]byte, 2048)
r.f, err = zlib.NewReader(fi)
if err != nil {
    return nil, err
}
defer r.f.Close()
Then x number of bytes can be read straight out of the zlib reader using a function like this:
mydata := r.readx(10)
func (r *reader) readx(x int) []byte {
    for r.n < x {
        copy(r.buf, r.buf[r.at:r.at+r.n])
        r.at = 0
        m, err := r.f.Read(r.buf[r.n:])
        if err != nil {
            panic(err)
        }
        r.n += m
    }
    tmp := make([]byte, x)
    copy(tmp, r.buf[r.at:r.at+x]) // must be copied to avoid memory leak
    r.at += x
    r.n -= x
    return tmp
}
Note that I have no need to check for EOF, because my parser should stop itself at the right place.
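For instance (my usage sketch, assuming encoding/binary and fmt are imported), a fixed-size value can then be decoded straight from the returned slice:

// Read an 8-byte little-endian uint64 from the stream.
val := binary.LittleEndian.Uint64(r.readx(8))
fmt.Println(val)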

Under what circumstances would the two return values of crypto/rand Read() ever be useful?

The typical usage of crypto/rand goes something like this:
salt := make([]byte, saltLength)
n, err := rand.Read(salt)
Which fills the byte slice I have labeled "salt" here with a sequence of random bytes.
Under what circumstances might the random number generator fail? Would it be insecure to fall back to a math/rand equivalent in the event that err is not nil?
Since the length of the byte slice is already known, n also seems useless to me, is there any reason I wouldn't just use _,err in its place?
To be safe your code should look more like this:
package main

import (
    "crypto/rand"
    "fmt"
)

func main() {
    saltLength := 16
    salt := make([]byte, saltLength)
    n, err := rand.Read(salt[:cap(salt)])
    if err != nil {
        // handle error
    }
    salt = salt[:n]
    if len(salt) != saltLength {
        // handle error
    }
    fmt.Println(len(salt), salt)
}
Output:
16 [191 235 81 37 175 238 93 202 230 158 41 199 202 85 67 209]
n may be less than len(salt) if an error occurs (for example, if insufficient entropy is available); the documentation guarantees n == len(salt) if and only if err == nil. You should always check for errors.
For example, one of the many ways to obtain a sequence of random numbers is the getrandom system call on Linux or the CryptGenRandom API call on Windows.
References:
random: introduce getrandom(2) system call
CryptGenRandom function
ADDENDUM:
The crypto/rand package is a cryptographically secure pseudorandom number generator. Package math/rand is not cryptographically secure.
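For completeness, the shorter form that many programs use looks like this (my sketch, not part of the original answer; assumes crypto/rand and log are imported). It leans on the documented guarantee that n == len(b) if and only if err == nil, and treats a failure as fatal rather than falling back to the insecure math/rand. The more defensive general pattern described next avoids relying on such per-package guarantees:

salt := make([]byte, 16)
if _, err := rand.Read(salt); err != nil {
    log.Fatal(err) // no secure randomness available; do not fall back to math/rand
}
// Because n == len(salt) whenever err == nil, n itself can be discarded here.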
There are too many paths in even a simple program to test them all. Therefore, the only way to write programs with zero defects and zero bugs is to write readable, maintainable code that is provably correct. Systematic Programming by Niklaus Wirth is a good primer. It's worthwhile to spend time on constructing a robust general form, which can easily be adapted to each special case and that is easily maintainable as requirements change.
For example, for the io.Reader interface, typical usage is a looping pattern.
func Reader(rdr io.Reader) error {
    bufLen := 256
    buf := make([]byte, bufLen)
    for {
        n, err := rdr.Read(buf[:cap(buf)])
        if n == 0 {
            if err == nil {
                continue
            }
            if err == io.EOF {
                break
            }
            return err
        }
        buf = buf[:n]
        // process read buffer
        if err != nil && err != io.EOF {
            return err
        }
    }
    return nil
}
From the io package documentation for the Reader interface:

type Reader interface {
    Read(p []byte) (n int, err error)
}

Reader is the interface that wraps the basic Read method.

Read reads up to len(p) bytes into p. It returns the number of bytes read (0 <= n <= len(p)) and any error encountered. Even if Read returns n < len(p), it may use all of p as scratch space during the call. If some data is available but not len(p) bytes, Read conventionally returns what is available instead of waiting for more.

When Read encounters an error or end-of-file condition after successfully reading n > 0 bytes, it returns the number of bytes read. It may return the (non-nil) error from the same call or return the error (and n == 0) from a subsequent call. An instance of this general case is that a Reader returning a non-zero number of bytes at the end of the input stream may return either err == EOF or err == nil. The next Read should return 0, EOF regardless.

Callers should always process the n > 0 bytes returned before considering the error err. Doing so correctly handles I/O errors that happen after reading some bytes and also both of the allowed EOF behaviors.

Implementations of Read are discouraged from returning a zero byte count with a nil error, and callers should treat that situation as a no-op.
We only want to allocate the buffer once, before we start the Read loop. But we also want the compiler and runtime to detect it if we stray outside the valid buffer length n inside the Read loop, so we write buf = buf[:n]. When we loop around to the next Read, we explicitly ask for the full buffer again: buf[:cap(buf)].
It's never wrong to write Read(buf[:cap(buf)]). Even though you may not have a Read loop now, you may add one later, and you may forget to reset the buffer length. There may be a special case for a particular Read implementation, like an underlying ReadFull. Then you have to read and monitor the underlying code to prove that your code is correct; documentation is not always reliable, and you can't safely switch to another io.Reader Read implementation.
When you access the salt slice as salt[:len(salt)], you are using len(salt), not n. If they differ, you have a bug.
"implementations should follow a general principle of robustness: be
conservative in what you do, be liberal in what you accept from
others." Jon
Postel

What is the Go http.Request Body in terms of computer science?

In the http.Request type, Body is closed when the request is sent by the client. Why does it need to be closed? Why can it not be a string that you can read over and over?
This is called a stream. It's useful because it lets you handle data without having the whole set of data available in memory. It also lets you deliver the results of whatever operations you perform sooner: you don't have to wait for the whole set to be computed.
As soon as you want to handle big data or care about performance, you need streams.
It's also a convenient abstraction that lets you handle data piece by piece, even when the whole set is available, without having to maintain an offset to iterate over it.
You can store the request stream as a string using the bytes and the io package:
func handler(w http.ResponseWriter, r *http.Request) {
    var bodyAsString string
    b := new(bytes.Buffer)
    // Copy from r.Body (the stream), not from r itself; io.Copy returns a
    // nil error on success, it never returns io.EOF.
    if _, err := io.Copy(b, r.Body); err == nil {
        bodyAsString = b.String()
    }
    w.Write([]byte(bodyAsString)) // use the string, e.g. echo it back
}
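If you need to read the body more than once, the usual trick (my sketch, not part of the original answer; assumes the bytes, io and net/http imports) is to buffer it once and put a re-readable reader back into r.Body:

func handler(w http.ResponseWriter, r *http.Request) {
    // Drain the stream once into memory.
    body, err := io.ReadAll(r.Body)
    if err != nil {
        http.Error(w, err.Error(), http.StatusBadRequest)
        return
    }
    r.Body.Close()

    // Replace the consumed stream with one backed by the in-memory copy,
    // so later code can read r.Body again.
    r.Body = io.NopCloser(bytes.NewReader(body))

    w.Write(body) // body is now a plain []byte that can be reused freely
}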
