Is this code OK to avoid a big HTTP request? (Go)

I am currently learning to use Go as a server-side language. I'm learning how to handle forms, and I wanted to see how I could prevent a malicious client from sending a very large file (in the case of a form with multipart/form-data) and causing the server to run out of memory. For now this is my code, which I found in a question here on Stack Overflow:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
In my code, r is a *http.Request. I thought that code would work, but when I send a file, regardless of its size (according to my code the maximum should be 8 MB), my code still receives the entire file, so I doubt it actually works. So my questions are: Is my code really wrong? Is there a concept I am missing that explains why it seems to malfunction? How can I limit the size of an HTTP request correctly?
Update
I tried to run the code that was shown in the answers, I mean, this code:
part, _ := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
r.Body = ioutil.NopCloser(bytes.NewReader(part))
But when I run that code and send a file larger than 8 MB, I get this message from my web browser:
The connection was reset
The connection to the server was reset while the page was loading.
How can I solve that? How can I read only 8 MB maximum without getting that error?

I would ask the question: "How is your service intended/expected to behave if it receives a request greater than the maximum size?"
Perhaps you could simply check the ContentLength of the request and immediately return a 400 Bad Request if it exceeds your maximum?
func MyHandler(rw http.ResponseWriter, rq *http.Request) {
    if rq.ContentLength > 8388608 {
        rw.WriteHeader(http.StatusBadRequest)
        rw.Write([]byte("request content limit exceeded"))
        return
    }
    // ... normal processing
}
This has the advantage of not reading anything and deciding not to proceed at the earliest possible opportunity (short of some throttling on the ingress itself), minimising CPU and memory load on your process.
It also simplifies your normal processing, which then does not have to cater for circumstances where a partial request might be involved, or abort and possibly clean up if the request content limit is reached before all content has been processed.

Your code reads:
r.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), r.Body))
This means that you are assigning a new io.MultiReader to your body, one that:
reads at most 8388608 bytes from a byte slice in memory
and then reads the rest of the body after those 8388608 bytes
To ensure that you only read 8388608 bytes at most, replace that line with:
r.Body = ioutil.NopCloser(bytes.NewReader(part))
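For example, a minimal sketch of the corrected handler (the handler name is a placeholder; it needs the bytes, io, io/ioutil and net/http imports, and note that it still buffers up to 8 MB in memory per request):
func limitedHandler(w http.ResponseWriter, r *http.Request) {
    // Read at most 8388608 bytes (8 MB) of the body into memory.
    part, err := ioutil.ReadAll(io.LimitReader(r.Body, 8388608))
    if err != nil {
        http.Error(w, "error reading request body", http.StatusInternalServerError)
        return
    }
    // Replace the body so that later parsing only ever sees those bytes.
    r.Body = ioutil.NopCloser(bytes.NewReader(part))
    // ... r.ParseMultipartForm(...) / normal processing ...
}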

Related

HTTP request without last byte?

I'm looking to load test my app written in Go. I haven't found this functionality in existing tools; I've tried all of them. Here is what I'm trying to do:
Create 100 exactly the same HTTP requests (as goroutines)
From each goroutine, connect to the HTTP server and send the request body (which can be up to a few MB), except the last byte
Synchronize between all goroutines - pretty much wait until all threads are at the point where there is only 1 byte left to send
Based on input from Terminal (for example, when I hit Enter), send the remaining byte, so I can test how the server handles this type of load - 100 large requests at the same time
I looked at the docs of the standard HTTP library, and I don't think it's possible with the standard tools. I'm looking to rewrite some parts of the HTTP library to support this, or maybe even use plain old OS sockets to get this type of functionality. It will require a lot of time just to implement that.
I'm wondering if I'm missing something here: some kind of HTTP library feature that allows doing this easily? I'd appreciate any suggestion that might work without a full rewrite.
To my understanding there is no way to send part of an HTTP request and then the rest at the end, but I believe I can help with the concurrency part.
There are two variables here: threads (mind the Python terminology) = the number of simultaneous goroutines, and number = the total number of times to run the request.
package main

import (
    "fmt"

    "github.com/remeh/sizedwaitgroup"
)

func main() {
    fmt.Println("Input # of times to run")
    var number int
    fmt.Scan(&number)

    fmt.Println("Input # of threads")
    var threads int
    fmt.Scan(&threads)

    swg := sizedwaitgroup.New(threads)
    for i := 0; i < number; i++ {
        swg.Add()
        go func(i int) {
            defer swg.Done() // Ensure your request comes after this line
            // Do request
        }(i)
    }
    swg.Wait()
}
This code uses the github.com/remeh/sizedwaitgroup library
Bear in mind, if one of the first requests is completed, it will start another without waiting for others to finish.
Here's it in practice:
https://codeshare.io/3A3dj4
https://pastebin.com/DP1sn1m4
Edit:
If you go further and manage to send all but the last byte of the HTTP request, you'll want to use channels to communicate when to send the last byte. I'm not too good at them, but this guide is great:
https://go.dev/blog/pipelines
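Here is a rough, self-contained sketch of that coordination (the "send everything except the last byte" step is only represented by comments, since doing it for real means dropping below net/http):
package main

import (
    "bufio"
    "fmt"
    "os"
    "sync"
)

func main() {
    const workers = 100

    release := make(chan struct{}) // closed once to release every goroutine at the same time
    var wg sync.WaitGroup

    for i := 0; i < workers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            // ... open the connection and send the request body except the last byte ...
            <-release // blocks until close(release); a closed channel unblocks all receivers
            // ... send the final byte and read the response ...
            fmt.Println("goroutine", id, "sent the final byte")
        }(i)
    }

    fmt.Println("Press Enter to send the last byte from all goroutines")
    bufio.NewReader(os.Stdin).ReadString('\n')
    close(release)

    wg.Wait()
}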

Ruby progressbar with down gem

I am implementing a file downloader by using the down gem.
I need to add a progress bar to my program for fancy output. I found a gem called ruby-progressbar. However, I couldn't integrate it into my code base even though I followed the instructions documented on the official site. Here's what I have done so far:
First, I thought of using progress_proc. It was a bad idea, because progress_proc returns the chunked partials of the data.
Second, I streamed the data and came up with the idea of counting the chunked data myself. It worked well actually, but it smells bad to me.
Plus, here is a small part of my code base. I hope it helps you understand the concept.
progressbar = ProgressBar.create(title: 'File 1')
Down.download(url, progress_proc: ->(progress) { progressbar.progress = progress }) # It doesn't work
progressbar = ProgressBar.create(title: 'File 1')
file = Down.open(url, progress_proc: ->(progress) { progressbar.progress = progress })
chunked = 0
loop do
break if file.eof?
file.read(1024)
chunked += 1024
progressbar.progress = [(chunked.to_f / file.size * 100).floor, 100].min
end
# This worked well as I remember. It can be faulty because I wrote it down without testing.
In the HTTP protocol, there are two different ways a client can determine the full length of a response:
In the most common case, the entire response is sent by the server in one go. Here, the length of the response body in bytes is set in the Content-Length header of the response. Thus, if the response is not chunked, you can get the value of this header and read the response in one go as it is sent by the server.
The second option is for the server to send a chunked response. Here, the server sends chunks of the entire response, one after another. Each chunk is prefixed with the length of the chunk. However, the client has no way to know how many chunks there are in total, nor how large the total response may be. Often, this is even unknown to the server as the first chunks are already sent before the entire response is available to the server.
The down gem follows these two approaches by offering two interfaces:
In the first case (i.e. if the content length of the entire response is known), the gem will call the given content_length_proc once.
In the second case, as the entire length of the response is unknown until it has been received in total, the down gem calls the progress_proc once for each chunk received. In this case, it is up to you to show something useful. In general, you can NOT show a progress bar as a percentage of completion here.

Request body too large causing connection reset in Go

I have a simple multipart form which uploads to a Go app. I wanted to set a restriction on the upload size, so I did the following:
func myHandler(rw http.ResponseWriter, request *http.Request) {
    request.Body = http.MaxBytesReader(rw, request.Body, 1024)
    err := request.ParseMultipartForm(1024)
    if err != nil {
        // Some response.
    }
}
Whenever an upload exceeds the maximum size, I get a connection reset in the browser, and yet the code continues executing. I can't seem to provide any feedback to the user. Instead of severing the connection, I'd prefer to say "You've exceeded the size limit". Is this possible?
This code works as intended. From the description of http.MaxBytesReader:

MaxBytesReader is similar to io.LimitReader but is intended for limiting the size of incoming request bodies. In contrast to io.LimitReader, MaxBytesReader's result is a ReadCloser, returns a non-EOF error for a Read beyond the limit, and closes the underlying reader when its Close method is called.

MaxBytesReader prevents clients from accidentally or maliciously sending a large request and wasting server resources.
You could use io.LimitReader to read just N bytes and then do the handling of the HTTP request on your own.
The only way to force a client to stop sending data is to forcefully close the connection, which is what you're doing with http.MaxBytesReader.
You could use an io.LimitReader wrapped in an ioutil.NopCloser, and notify the client of the error state. You could then check for more data, and try to drain the connection up to another limit to keep it open. However, clients that aren't responding correctly to MaxBytesReader may not work in this case either.
The graceful way to handle something like this is using Expect: 100-continue, but that only really applies to clients other than web browsers.
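A rough sketch of that approach (the handler name and the 1024-byte limit are placeholders; it needs the bytes, io, io/ioutil and net/http imports, and as noted above, a client that keeps sending may still see the connection dropped):
func limitedHandler(w http.ResponseWriter, r *http.Request) {
    const limit = 1024
    // Read at most limit+1 bytes so we can tell whether the body was truncated.
    body, err := ioutil.ReadAll(io.LimitReader(r.Body, limit+1))
    if err != nil {
        http.Error(w, "error reading request body", http.StatusInternalServerError)
        return
    }
    if len(body) > limit {
        // Notify the client instead of silently cutting the connection.
        http.Error(w, "You've exceeded the size limit", http.StatusRequestEntityTooLarge)
        return
    }
    // Hand the buffered body back to the request for normal parsing.
    r.Body = ioutil.NopCloser(bytes.NewReader(body))
    // ... request.ParseMultipartForm(...) and the rest of the handler ...
}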

Stop processing an HTTP POST request if it is larger than x size

If I have a basic http handler for POST requests, how can I stop processing if the payload is larger than 100 KB?
From what I understand, in my POST handler, behind the scenes the server is streaming the POSTed data. But if I try to access it, it will block, correct?
I want to stop processing if it is over 100 KB in size.
Use http.MaxBytesReader to limit the amount of data read from the client. Execute this line of code
r.Body = http.MaxBytesReader(w, r.Body, 100000)
before calling r.ParseForm, r.FormValue or any other request method that reads the body.
Wrapping the request body with io.LimitedReader limits the amount of data read by the application, but does not necessarily limit the amount of data read by the server on behalf of the application.
Checking the request content length is unreliable because the field is not set to the actual request body size when chunked encoding is used.
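For example, a minimal POST handler along these lines (the handler name and error message are illustrative, not from the question):
func postHandler(w http.ResponseWriter, r *http.Request) {
    // Refuse to read more than ~100 KB of the request body.
    r.Body = http.MaxBytesReader(w, r.Body, 100000)

    if err := r.ParseForm(); err != nil {
        // A body over the limit shows up here as an error.
        http.Error(w, "request body too large", http.StatusRequestEntityTooLarge)
        return
    }
    // ... use r.FormValue(...) as usual ...
}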
I believe you can simply check the http.Request.ContentLength field to learn the size of the posted request before deciding whether to go ahead, or return an error if it is larger than expected.

Limiting file size in FormFile

I'm letting users upload a file using FormFile. At what point should I check if the file size is too large? When I do
file, header, fileErr := r.FormFile("file")
A file object is already created. So have I incurred the cost of reading in the entire file already?
https://golang.org/pkg/net/http#Request.FormFile
Use http.MaxBytesReader to limit the number of bytes read from the request. Before calling ParseMultipartForm or FormFile, execute this line:
r.Body = http.MaxBytesReader(w, r.Body, max)
where r is the *http.Request and w is the http.ResponseWriter.
MaxBytesReader limits the bytes read for the entire request body and not an individual file. A limit on the request body size can be a good approximation of a limit on the file size when there's only one file upload. If you need to enforce a specific limit for one or more files, then set the MaxBytesReader limit large enough for all expected request data and check FileHeader.Size for each file.
When the http.MaxBytesReader limit is breached, the server stops reading from the request and closes the connection after the handler returns.
If you want to limit the amount of memory used instead of the request body size, then call r.ParseMultipartForm(maxMemory) before calling r.FormFile(). This will use up to maxMemory bytes for file parts, with the remainder stored in temporary files on disk. This call does not limit the total number of bytes read from the client or the size of an uploaded file.
Checking the request Content-Length header does not work for two reasons:
The content length is not set for chunked request bodies.
The server may read the entire request body to support connection keep-alive. Breaching the MaxBytesReader limit is the only way to ensure that the server stops reading the request body.
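Putting that together, a hedged sketch of an upload handler (the 10 MB request limit, 1 MB per-file limit and "file" field name are arbitrary choices for illustration; it needs the net/http import):
func uploadHandler(w http.ResponseWriter, r *http.Request) {
    // Limit the entire request body first.
    r.Body = http.MaxBytesReader(w, r.Body, 10<<20) // 10 MB for the whole request

    // Bound the memory used for parsing; larger parts spill to temporary files.
    if err := r.ParseMultipartForm(1 << 20); err != nil {
        // A request over the MaxBytesReader limit surfaces here as an error.
        http.Error(w, "could not parse form; the request may be too large", http.StatusRequestEntityTooLarge)
        return
    }

    file, header, err := r.FormFile("file")
    if err != nil {
        http.Error(w, "missing or invalid file", http.StatusBadRequest)
        return
    }
    defer file.Close()

    // Per-file check using FileHeader.Size, as described above.
    if header.Size > 1<<20 {
        http.Error(w, "file too large", http.StatusRequestEntityTooLarge)
        return
    }
    // ... copy the file somewhere ...
}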
Some people are suggesting relying on the Content-Length header, and I have to warn you not to use it at all. This header can be any number, because it can be set by the client regardless of the actual file size.
Use MaxBytesReader because:
MaxBytesReader prevents clients from accidentally or maliciously sending a large request and wasting server resources.
Here is an example:
r.Body = http.MaxBytesReader(w, r.Body, 2*1024*1024) // 2 MB
clientFile, handler, err := r.FormFile(formDataKey)
if err != nil {
    log.Println(err)
    return
}
If your request body is bigger than 2 MB, you will see something like this: multipart: NextPart: http: request body too large
Calling FormFile calls ParseMultipartForm, which will parse the entire request body, using up to 32 MB of memory by default before storing the contents in temporary files. You can call ParseMultipartForm yourself before calling FormFile to determine how much memory to consume, but the body will still be parsed.
The client may provide a Content-Length header in the multipart.FileHeader, which you could use, but that is dependent on the client.
If you want to limit the incoming request size, wrap the request.Body with MaxBytesReader in your handler before parsing any of the Body.
The request struct has an r.ContentLength int64 field, and there is also the r.Header.Get("Content-Length") method, which returns a string. Maybe that can help.
