Basic web tweaks that all applications should have - go

Currently my web app is just a router and handlers.
What are some important things I am missing to make this production worthy?
I believe I have to set the number of procs to ensure this uses the maximum number of goroutines?
Should I be using output buffering?
Anything else you see missing that is best practice?
var (
    templates = template.Must(template.ParseFiles("templates/home.html"))
)

func main() {
    r := mux.NewRouter()
    r.HandleFunc("/", WelcomeHandler)
    http.ListenAndServe(":9000", r)
}

func WelcomeHandler(w http.ResponseWriter, r *http.Request) {
    homePage, err := api.LoadHomePage()
    if err != nil {
    }
    tmpl := "home"
    renderTemplate(w, tmpl, homePage)
}

func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
    err := templates.ExecuteTemplate(w, tmpl+".html", hp)
    if err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}

You don't need to set or change runtime.GOMAXPROCS(); since Go 1.5 it defaults to the number of available CPU cores.
Buffering output? From a performance point of view, you don't need to. But there may be other considerations for which you may want to.
For example, your renderTemplate() function may potentially panic. When executing the template starts writing to the output, the HTTP response code and other headers are set before the data is written. If a template execution error occurs after that, ExecuteTemplate() returns an error, and your code attempts to send back an error response. At that point the HTTP headers have already been written, and http.Error() will try to set headers again => panic.
One way to avoid this is to first render the template into a buffer (e.g. bytes.Buffer), and if no error is returned by the template execution, write the content of the buffer to the response writer. If an error occurs, you don't write the content of the buffer, but send back an error response just like you did.
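A minimal sketch of that buffered approach, reusing the names from the question's code (templates, HomePage):

func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
    var buf bytes.Buffer
    // Render into an in-memory buffer first, so nothing is written to the
    // ResponseWriter (and no headers are sent) until rendering succeeds.
    if err := templates.ExecuteTemplate(&buf, tmpl+".html", hp); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    // Only on success, copy the rendered page to the response writer.
    buf.WriteTo(w)
}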
To sum it up, your code is production ready performance-wise (excluding the way you handle template execution errors).

WelcomeHandler should return when err != nil is true.
Log the error when one is hit to help investigation.
Place templates = template.Must(template.ParseFiles("templates/home.html")) in init(). Split it into separate lines. If template.ParseFiles returns an error, then make a Fatal log. And if you have multiple templates to initialize, initialize them in goroutines with a common WaitGroup to speed up startup, as in the sketch below.
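A rough sketch of what such an init could look like; the second template file and its variable are hypothetical, only there to illustrate the WaitGroup pattern (assumes the html/template, log and sync packages are imported):

var (
    homeTmpl  *template.Template
    aboutTmpl *template.Template // hypothetical second template
)

func init() {
    var wg sync.WaitGroup
    parse := func(dst **template.Template, file string) {
        defer wg.Done()
        t, err := template.ParseFiles(file)
        if err != nil {
            log.Fatalf("parsing %s: %v", file, err) // fail fast at startup
        }
        *dst = t
    }
    wg.Add(2)
    go parse(&homeTmpl, "templates/home.html")
    go parse(&aboutTmpl, "templates/about.html")
    wg.Wait()
}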
Since you are using mux, it might also be good to know that the HTTP server is quite aggressive about cleaning up URLs, which can affect how requests are routed.
You might also want to reconsider the decision of letting users know why they got the http.StatusInternalServerError response.
Setting GOMAXPROCS > 1 if you have more than one core would definitely be a good idea, but I would keep it less than the number of cores available.

Related

Should there be a new datastore.Client per HTTP request?

The official Go documentation on the datastore package (the client library for the GCP Datastore service) has the following code snippet for demonstration:
package main

import (
    "context"
    "fmt"

    "cloud.google.com/go/datastore"
)

type Entity struct {
    Value string
}

func main() {
    ctx := context.Background()

    // Create a datastore client. In a typical application, you would create
    // a single client which is reused for every datastore operation.
    dsClient, err := datastore.NewClient(ctx, "my-project")
    if err != nil {
        // Handle error.
    }

    k := datastore.NameKey("Entity", "stringID", nil)
    e := new(Entity)
    if err := dsClient.Get(ctx, k, e); err != nil {
        // Handle error.
    }

    old := e.Value
    e.Value = "Hello World!"

    if _, err := dsClient.Put(ctx, k, e); err != nil {
        // Handle error.
    }

    fmt.Printf("Updated value from %q to %q\n", old, e.Value)
}
As one can see, it states that the datastore.Client should ideally be instantiated only once in an application. Now, given that the datastore.NewClient function requires a context.Context object, does that mean it should get instantiated once per HTTP request, or can it safely be instantiated once globally with a context.Background() object?
Each operation requires a context.Context object again (e.g. dsClient.Get(ctx, k, e)) so is that the point where the HTTP request's context should be used?
I'm new to Go and can't really find any online resources which explain something like this very well with real world examples and actual best practice patterns.
You may use any context.Context for the datastore client creation; it may be context.Background(), that's completely fine. Client creation may be lengthy: it may require connecting to a remote server, authenticating, fetching configuration etc. If your use case has limited time, you may pass a context with a timeout to abort the operation. Also, if creation takes longer than the time you have, you may use a context with cancel and abort the mission at will. These are just options which you may or may not use, but the "tools" are given via context.Context.
Later, when you use the datastore.Client while serving (HTTP) client requests, using the request's context is reasonable: if a request gets cancelled, then so is its context, and so is the datastore operation you issue, rightfully, because if the client cannot see the result there's no point completing the query. By terminating the query early you may avoid using certain resources (e.g. datastore reads) and lower the server's load (by aborting jobs whose results will not be sent back to the client).
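For illustration, a minimal sketch combining both points; the project ID comes from the question's snippet, and the /entity route is just a placeholder:

var dsClient *datastore.Client // created once, shared by all handlers

func main() {
    // Client creation is not tied to any request, so context.Background()
    // (here with an optional timeout) is fine.
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    var err error
    dsClient, err = datastore.NewClient(ctx, "my-project")
    if err != nil {
        log.Fatal(err)
    }

    http.HandleFunc("/entity", func(w http.ResponseWriter, r *http.Request) {
        // Per-operation calls use the request's context, so the datastore
        // query is cancelled if the HTTP request is cancelled.
        k := datastore.NameKey("Entity", "stringID", nil)
        var e Entity
        if err := dsClient.Get(r.Context(), k, &e); err != nil {
            http.Error(w, "lookup failed", http.StatusInternalServerError)
            return
        }
        fmt.Fprintf(w, "value: %q", e.Value)
    })

    log.Fatal(http.ListenAndServe(":8080", nil))
}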

Error Handling for ResponseWriter / Write

I am using osin, a Go OAuth server library, to try to build an OAuth server.
I have used, or am trying to use, the complete example given, to get a good place to start playing with the code and see what I can do.
However, I have a lot of errors with the file. Most seem to be about error checking, and I seem to have fixed those (I am using Visual Studio Code, which has very good Go support). However, no matter what I try, I can't seem to fix the error handling for w.Write:
http.HandleFunc("/appauth/code", func(w http.ResponseWriter, r *http.Request) {
    err := r.ParseForm() // *1
    if err != nil {
        log.Panic(err)
    }
    code := r.Form.Get("code")
    w.Write([]byte("<html><body>")) // *2
*1 - This was also missing error handling, and from my understanding this is how it is meant to be dealt with; please tell me if I am wrong.
*2 - With this I am getting an "errors unhandled" warning within Visual Studio Code, and no matter how I try to deal with it, it will not work.
This is what I have tried,
err := w.Write([]byte("<html><body>"))
w.Write([]byte("<html><body>") (int, error) ) <- the Write seems to return an int and error?
So I am not sure how to deal with this type of error?
Discard the int if you don't need it:
_, err := w.Write([]byte("<html><body>"))
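If you do want to act on the error (for example when the client has already disconnected), a short sketch of one way to handle it, assuming the log package is imported:

if _, err := w.Write([]byte("<html><body>")); err != nil {
    // Writing failed, most likely because the client went away;
    // logging the error is usually all that is left to do.
    log.Printf("write failed: %v", err)
    return
}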

Extending Golang's http.Resp.Body to handle large files

I have a client application which reads the full body of an HTTP response into a buffer and performs some processing on it:
body, _ = ioutil.ReadAll(containerObject.Resp.Body)
The problem is that this application runs on an embedded device, so responses that are too large fill up the device's RAM, causing Ubuntu to kill the process.
To avoid this, I check the Content-Length header and bypass processing if the document is too large. However, some servers (I'm looking at you, Microsoft) send very large HTML responses without setting Content-Length, and crash the device.
The only way I can see of getting around this is to read the response body up to a certain length. If it reaches this limit, a new reader could be created which first streams the in-memory buffer, then continues reading from the original Resp.Body. Ideally, I would assign this new reader to containerObject.Resp.Body so that callers would not notice the difference.
I'm new to Go and am not sure how to go about coding this. Any suggestions or alternative solutions would be greatly appreciated.
Edit 1: The caller expects a Resp.Body object, so the solution needs to be compatible with that interface.
Edit 2: I cannot parse small chunks of the document. Either the entire document is processed or it is passed unchanged to the caller, without loading it into memory.
If you need to read part of the response body, then reconstruct it in place for other callers, you can use a combination of an io.MultiReader and ioutil.NopCloser
resp, err := http.Get("http://google.com")
if err != nil {
    return err
}
defer resp.Body.Close()

part, err := ioutil.ReadAll(io.LimitReader(resp.Body, maxReadSize))
if err != nil {
    return err
}

// do something with part

// recombine the buffered part of the body with the rest of the stream
resp.Body = ioutil.NopCloser(io.MultiReader(bytes.NewReader(part), resp.Body))

// do something with the full Response.Body as an io.Reader
If you can't defer resp.Body.Close() because you intend to return the response before it's read in its entirety, you will need to augment the replacement body so that the Close() method applies to the original body. Rather than using the ioutil.NopCloser as the io.ReadCloser, create your own that refers to the correct method calls.
type readCloser struct {
    io.Closer
    io.Reader
}

resp.Body = readCloser{
    Closer: resp.Body,
    Reader: io.MultiReader(bytes.NewReader(part), resp.Body),
}
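To satisfy Edit 2 from the question (process the document only when it fits entirely in memory), one possible check after the limited read; processDocument is a hypothetical stand-in for the existing processing code:

if int64(len(part)) < maxReadSize {
    // The limited read did not hit the cap, so part holds the entire body
    // and it is safe to process it in memory.
    processDocument(part)
} else {
    // The body may be larger than the cap: skip processing and hand the
    // recombined resp.Body to the caller unchanged.
}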

net/http server: too many open files error

I'm trying to develop a simple job queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something wrong, but after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce this error:
// server.go
package main

import (
    "net/http"
)

func main() {
    http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
        http.Error(rw, "Try again", http.StatusInternalServerError)
    })
    http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
    "net/http"
    "time"
)

func main() {
    for {
        res, _ := http.Get("http://localhost:4200/get")
        defer res.Body.Close()
        if res.StatusCode == http.StatusInternalServerError {
            time.Sleep(100 * time.Millisecond)
            continue
        }
        return
    }
}
I have already done some searching about this error and found some interesting responses, but none of them fixed my issue.
The first response I saw was to correctly close the Body of the http.Get response; as you can see, I did that.
The second response was to change the file descriptor ulimit of my system, but as I won't control where my app will run, I prefer not to use this solution (for information, it's set at 1024 on my system).
Can someone explain to me why this problem happens and how I can fix it in my code?
Thanks a lot for your time
EDIT 2: In a comment Martin says that I'm not closing the Body. I tried to close it (without defer, then) and it fixed the issue. Thanks Martin! I was thinking that continue would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts at the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as the best practices to avoid it in the future.
Problem
When a response is received, regardless of whether the response body is required or not, the connection is kept alive until the response body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to use or read its content:
func Ping(url string) bool {
    // simple GET request on given URL
    res, err := http.Get(url)
    if err != nil {
        // if unable to GET given URL, then ping must fail
        return false
    }

    // always close the response-body, even if content is not required
    defer res.Body.Close()

    // is the page status okay?
    return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, as it uses http.DefaultClient under the hood.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), meaning the garbage collector will not try to clean it up, which may leave idling streams/sockets alive.
Instead, create your own instance of http.Client and remember to always specify a sane Timeout:
func Ping(url string) bool {
    // create a new instance of http client struct, with a timeout of 2sec
    client := http.Client{Timeout: time.Second * 2}

    // simple GET request on given URL
    res, err := client.Get(url)
    if err != nil {
        // if unable to GET given URL, then ping must fail
        return false
    }

    // always close the response-body, even if content is not required
    defer res.Body.Close()

    // is the page status okay?
    return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team who does not know the pitfalls of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a singleton, we can easily change its Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this in the package main file, in the init function:
package main

import (
    "net/http"
    "time"
)

func init() {
    /*
        Safety net for 'too many open files' issue on legacy code.
        Set a sane timeout duration for the http.DefaultClient, to ensure idle connections are terminated.
        Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
    */
    http.DefaultClient.Timeout = time.Minute * 10
}
As Martin says in a comment, I wasn't really closing the Body after the Get request. I used defer res.Body.Close(), but it is not executed since I'm staying in the for loop, so continue doesn't trigger the defer.
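A sketch of the worker loop with the body closed explicitly on every iteration instead of deferred:

for {
    res, err := http.Get("http://localhost:4200/get")
    if err != nil {
        time.Sleep(100 * time.Millisecond)
        continue
    }
    if res.StatusCode == http.StatusInternalServerError {
        res.Body.Close() // close before looping; defer would only run at return
        time.Sleep(100 * time.Millisecond)
        continue
    }
    res.Body.Close()
    return
}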
Please note that in some cases the following setting in /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 1
could cause this error because TCP connections remain open.
A temporary solution is to just increase the number of open files:
ulimit -Sn 10000

Is it advisable to (further) limit the size of forms when using golang?

I searched around and as far as I can tell, POST form requests are already limited to 10MB (http://golang.org/src/net/http/request.go#L721).
If I were to go about reducing this in my ServeHTTP method, I'm not sure how to properly do it. I would try something like this:
r.Body = http.MaxBytesReader(w, r.Body, MaxFileSize)
err := r.ParseForm()
if err != nil {
    // redirect to some error page
    return
}
But would returning upon error close the connection as well? How would I prevent having to read everything? I found this: https://stackoverflow.com/a/26393261/2202497, but what if the Content-Length is not set and in the middle of reading I realize that the file is too big?
I'm using this as a security measure to prevent someone from hogging my server's resources.
The correct way to limit the size of the request body is to do as you suggested:
r.Body = http.MaxBytesReader(w, r.Body, MaxFileSize)
err := r.ParseForm()
if err != nil {
    // redirect or set error status code.
    return
}
MaxBytesReader sets a flag on the response when the limit is reached. When this flag is set, the server does not read the remainder of the request body and the server closes the connection on return from the handler.
If you are concerned about malicious clients, then you should also set Server.ReadTimeout, Server.WriteTimeout and possibly Server.MaxHeaderBytes.
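For example, a server configured with those limits (the values here are illustrative, not recommendations, and mux stands for your root handler):

s := &http.Server{
    Addr:           ":8080",
    Handler:        mux,
    ReadTimeout:    10 * time.Second,
    WriteTimeout:   10 * time.Second,
    MaxHeaderBytes: 1 << 16,
}
log.Fatal(s.ListenAndServe())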
If you want to set the request body limit for all of your handlers, then wrap the root handler with a handler that sets the limit before delegating to it:
type maxBytesHandler struct {
    h http.Handler
    n int64
}

func (h *maxBytesHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    r.Body = http.MaxBytesReader(w, r.Body, h.n)
    h.h.ServeHTTP(w, r)
}
Wrap the root handler when calling ListenAndServe:
log.Fatal(http.ListenAndServe(":8080", &maxBytesHandler{h: mux, n: 4096}))
or when configuring a server:
s := http.Server{
    Addr:    ":8080",
    Handler: &maxBytesHandler{h: mux, n: 4096},
}
log.Fatal(s.ListenAndServe())
There's no need for a patch as suggested in another answer. MaxBytesReader is the official way to limit the size of the request body.
Edit: As others cited, MaxBytesReader is the supported way. It is interesting that by default the body is instead wrapped in a LimitReader, after type-asserting for a max bytes reader.
Submit a patch to the Go source code and make it configurable! You are working with an open source project, after all. Adding a setter to http.Request and some unit tests for it is probably only 20 minutes' worth of work. Having a hardcoded value here is a bit clunky; give back and fix it :).
You can of course implement your own ParseForm(r *http.Request) method if you really need to override this. Go is essentially BSD-licensed, so you can copy-paste the library's ParseForm and change the limit, but that's a bit ugly, no?
