Golang - Testing HTTP Request in Method

I'm a little confused when it comes to testing in Go. I've read that abstracting to interfaces is the ideal way to go in some cases; in other cases I see table-driven tests. I'm not too sure when to apply either one. For instance, how would one go about testing the function below?
type User struct {
    Name      string   `json:"name"`
    IsMarried bool     `json:"isMarried"`
    Nicknames []string `json:"nicknames"`
}

func (u *User) Create() (*http.Response, error) {
    data, err := json.Marshal(u)
    if err != nil {
        return nil, err
    }
    urll := EndpointBase + "/users"
    req, err := http.NewRequest(http.MethodPost, urll, bytes.NewReader(data))
    if err != nil {
        return nil, err
    }
    resp, err := auth.Session.Client.Do(req)
    if err != nil {
        return nil, err
    }
    return resp, nil
}

Abstracting to interfaces and table-driven tests are unrelated concepts that are often used together.
You would abstract to interfaces for your dependencies so that you can mock/stub them as needed (in this case, your dependencies are whatever you're calling with HTTP, whatever auth is, and whatever the global EndpointBase is).
Table-driven tests allow you to write multiple test cases more efficiently with less repeated code in your test.
I'd say that unit testing this function won't have much value though, because it's such a thin wrapper around an HTTP call. An integration test would be more useful, in which case abstracting to interfaces wouldn't help with testing (though it could be a good design decision anyway).
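To make that concrete, here is a minimal sketch of what abstracting the HTTP dependency might look like; the interface name doer, the UserService type, and its fields are made up for illustration, since the original code relies on the globals auth and EndpointBase:
type doer interface {
    Do(req *http.Request) (*http.Response, error)
}

type UserService struct {
    client   doer   // *http.Client satisfies this in production
    endpoint string // replaces the global EndpointBase
}

func (s *UserService) Create(u *User) (*http.Response, error) {
    data, err := json.Marshal(u)
    if err != nil {
        return nil, err
    }
    req, err := http.NewRequest(http.MethodPost, s.endpoint+"/users", bytes.NewReader(data))
    if err != nil {
        return nil, err
    }
    return s.client.Do(req)
}
In a unit test you can then pass a stub that implements doer and returns a canned *http.Response; an integration test can instead inject a real *http.Client pointed at an httptest.Server.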

Related

Best way to pass context

I've done a lot of research regarding context, but I can't seem to find a generally accepted answer, plus I'm new to Go.
In my current code I have var ctx = context.Background(), which is used in various places.
My concern is: isn't all my code modifying the same context, since it's a global variable?
Yes, I know context is request scoped.
This is part of my code for context.
var ctx = context.Background()
var db *firestore.Client
var auth *aut.Client

func init() {
    app, err := firebase.NewApp(ctx, nil)
    if err != nil {
        log.Fatal(err)
    }
    db, err = app.Firestore(ctx)
    if err != nil {
        log.Fatal(err)
    }
    auth, err = app.Auth(ctx)
    if err != nil {
        log.Fatal(err)
    }
}

func SetRate(r int) (err error) {
    // TODO: create last updated field
    _, err = db.Collection("Rate").Doc("rate").Set(ctx, map[string]int{"USDT": r})
    if err != nil {
        log.Println(err)
        return err
    }
    return nil
}
Please try not to use overly complicated words to describe a term.
It's an accepted practice in Go to pass context from function to function. Normally, the first parameter of such a function is of type context.Context. Whenever a context is passed down and has some use case within the method scope, a new context is derived from the parent context.
It is best practice to create a context inside a function and pass it between functions as needed, rather than having one context shared across the package. For something like an HTTP server, you will typically see a unique context for each incoming API call.
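As a rough sketch of that, assuming the Firestore code above, SetRate could accept the context from its caller instead of using the package-level ctx (the rateHandler below and its hard-coded rate value are invented for illustration):
func SetRate(ctx context.Context, r int) error {
    _, err := db.Collection("Rate").Doc("rate").Set(ctx, map[string]int{"USDT": r})
    if err != nil {
        log.Println(err)
        return err
    }
    return nil
}

// A hypothetical HTTP handler showing where the per-request context comes from.
func rateHandler(w http.ResponseWriter, req *http.Request) {
    if err := SetRate(req.Context(), 100); err != nil {
        http.Error(w, "could not set rate", http.StatusInternalServerError)
    }
}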

Add headers for each HTTP request using client

I know that I can add headers to each HTTP request manually using
cli := &http.Client{}
req, err := http.NewRequest("GET", "https://myhost", nil)
if err != nil {
    panic(err)
}
req.Header.Add("X-Test", "true")
rsp, err := cli.Do(req)
but I want to add this header automatically for each HTTP request in my app.
What is the best way to do it?
I'm aware of three possible solutions to this. In (my) order of preference:
Wrap http.NewRequest with custom code that adds desired headers:
func MyRequest(method, path string, body io.Reader) (*http.Request, error) {
    req, err := http.NewRequest(method, path, body)
    if err != nil {
        return nil, err
    }
    req.Header.Add("X-Test", "true")
    return req, nil
}
This approach has the advantage of being straightforward, non-magical, and portable. It will work with any third-party software that adds its own headers or sets custom transports.
The only case where this won't work is if you depend on a third-party library to create your HTTP requests. I expect this is rare (I don't recall ever running into this in my own experience). And even in such a case, perhaps you can wrap that call instead.
Wrap calls to client.Do to add headers, and possibly any other shared logic.
func MyDo(client *http.Client, req *http.Request) (*http.Response, error) {
    req.Header.Add("X-Test", "true")
    // Any other common handling of the request
    res, err := client.Do(req)
    if err != nil {
        return nil, err
    }
    // Any common handling of the response
    return res, nil
}
This approach is also straightforward, and has the added advantage (over #1) of making it easy to reduce other boilerplate. This general method can also work very well in conjunction with #1 (a combined sketch follows below). One possible drawback is that you must always call your MyDo method directly, meaning you cannot rely on third-party software that calls the client's Do method itself.
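For example, the two wrappers might be combined like this (a sketch only; the URL is just the question's example host):
req, err := MyRequest(http.MethodGet, "https://myhost", nil)
if err != nil {
    return err
}
res, err := MyDo(http.DefaultClient, req)
if err != nil {
    return err
}
defer res.Body.Close()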
Use a custom http.Transport
type myTransport struct{}

func (t *myTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    req.Header.Add("X-Test", "true")
    return http.DefaultTransport.RoundTrip(req)
}
Then use it like this:
client := &http.Client{Transport: &myTransport{}}
req, err := http.NewRequest("GET", "/foo", nil)
if err != nil {
    panic(err)
}
res, err := client.Do(req)
This approach has the advantage of working "behind the scenes" with just about any other software, so if you rely on a third-party library to create your http.Request objects and to call Do on the client itself, this may be your only option.
However, this has the potential disadvantage of being non-obvious, and possibly breaking if you're using any third-party software which also sets a custom transport (without bothering to honor an existing custom transport).
Ultimately, which method you use will depend on what type of portability you need with third-party software. But if that's not a concern, I suggest using the most obvious solution, which, by my estimation, is the order provided above.
It's possible to configure http.Client with a custom transport, which can then handle every request the client makes (I found this approach in the golang.org/x/oauth2 library). This example appends headers to each HTTP request:
type transport struct {
    headers map[string]string
    base    http.RoundTripper
}

func (t *transport) RoundTrip(req *http.Request) (*http.Response, error) {
    for k, v := range t.headers {
        req.Header.Add(k, v)
    }
    base := t.base
    if base == nil {
        base = http.DefaultTransport
    }
    return base.RoundTrip(req)
}

func main() {
    cli := &http.Client{
        Transport: &transport{
            headers: map[string]string{
                "X-Test": "true",
            },
        },
    }
    rsp, err := cli.Get("http://localhost:8080")
    if err != nil {
        panic(err)
    }
    defer rsp.Body.Close()
}

Properly handling errors

Typically in Go you find the following convention:
res, err := thingThatCanError(arg)
if err != nil {
    // handle it
}
However, it's obvious this gets VERY unruly very quickly for a large number of these calls:
res, err := thingThatCanError(arg)
if err != nil {
    // handle it
}
res, err2 := thingThatCanError(arg)
if err2 != nil {
    // handle it
}
res, err3 := thingThatCanError(arg)
if err3 != nil {
    // handle it
}
There are more lines of boilerplate error handling than actual code! This website says to avoid this but does not give an example of how to clean up this smell. A useful example comes straight from the Go blog, which shows how to clean up a homogeneous HTTP app with an error handler that makes sense.
But imagine each of these calls isn't homogeneous, as in not sharing the same "central idea", so a single "error handler struct" wouldn't make a lot of sense.
Is there a way to clean up this type of code smell with functions that don't "mesh together" nicely in terms of errors?
Unfortunately, there's sometimes no way around these patterns. You could use panic/recover as a makeshift try/catch system, but the community looks down upon it.
If statements in Go can be combined with assignments so
err := thing.Do()
if err != nil {
    return err
}
can become
if err := thing.Do(); err != nil {
    return err
}
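For reference, the Go blog pattern mentioned in the question is roughly the following: handlers return an error, and a single adapter type turns them into http.Handlers (a simplified sketch, not a verbatim copy):
type appHandler func(http.ResponseWriter, *http.Request) error

func (fn appHandler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    if err := fn(w, r); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
    }
}

// Usage: http.Handle("/view", appHandler(viewRecord)), where viewRecord is any
// handler with the error-returning signature above.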

Split function into 2 functions for test coverage

How can I test the error for ioutil.ReadAll(resp.Body)? Do I need to split my function in two: one which will make the request, and another one which will read the body and return the bytes and error?
func fetchUrl(URL string) ([]byte, error) {
    resp, err := http.Get(URL)
    if err != nil {
        return nil, err
    }
    body, err := ioutil.ReadAll(resp.Body)
    resp.Body.Close()
    if err != nil {
        return nil, err
    }
    return body, nil
}
Do I need to split my function in two, one which will make the request, and another one which will read the body and return the bytes and error?
The first one is called http.Get and the other one ioutil.ReadAll, so I don't think there's anything to split. You just created a function that uses two other functions together, which you should assume are working correctly. You could even simplify your function to make that more obvious:
func fetchURL(URL string) ([]byte, error) {
    resp, err := http.Get(URL)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    return ioutil.ReadAll(resp.Body)
}
If there is anything you want to test, it's your fetchURL function using http.Get and ioutil.ReadAll together. I wouldn't personally bother to test it directly, but if you insist on it, you can overwrite http.DefaultTransport for a single test and provide your own, which returns an http.Response with a body implementing some error scenario (e.g. an error during body read).
Here is the sketch idea:
type BrokenTransport struct{}

func (*BrokenTransport) RoundTrip(*http.Request) (*http.Response, error) {
    // Return a Response whose Body implements the specific error behaviour.
}

http.DefaultTransport = &BrokenTransport{}
// http.Get will now use your RoundTripper.
// You should probably restore http.DefaultTransport after the test.
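Fleshing that sketch out a little: the RoundTripper below returns a response whose body always fails on Read, which forces the ioutil.ReadAll error path in the fetchURL version above (errReader and the test name are made up; imports are omitted as in the other snippets):
type errReader struct{}

func (errReader) Read(p []byte) (int, error) { return 0, errors.New("simulated read failure") }
func (errReader) Close() error               { return nil }

type BrokenTransport struct{}

func (*BrokenTransport) RoundTrip(*http.Request) (*http.Response, error) {
    return &http.Response{
        StatusCode: http.StatusOK,
        Header:     make(http.Header),
        Body:       errReader{},
    }, nil
}

func TestFetchURLReadError(t *testing.T) {
    orig := http.DefaultTransport
    http.DefaultTransport = &BrokenTransport{}
    defer func() { http.DefaultTransport = orig }()

    if _, err := fetchURL("http://example.com"); err == nil {
        t.Fatal("expected a read error, got nil")
    }
}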
Basically yes, unless you're using net/http/httptest or a similar way to mock your HTTP server when testing.
But the question is: what would you really be testing? That ioutil.ReadAll() detects errors? I'm sure that is already covered by the test suite of Go's standard library.
Hence I'd say that in this particular case you're about to test for testing's sake. IMO for such trivial cases it's better to concentrate on how the fetched result is further processed.

Avoid checking if error is nil repetition?

I'm currently learning go and some of my code looks like this:
a, err := doA()
if err != nil {
    return nil, err
}
b, err := doB(a)
if err != nil {
    return nil, err
}
c, err := doC(b)
if err != nil {
    return nil, err
}
... and so on ...
This looks kinda wrong to me because the error checking takes most of the lines. Is there a better way to do error handling? Can I maybe avoid this with some refactoring?
UPDATE: Thank you for all the answers. Please note that in my example doB depends on a, doC depends on b and so on. Therefore most suggested refactorings don't work in this case. Any other suggestion?
This is a common complaint, and there are several answers to it.
Here are a few common ones:
1 - It's not so bad
This is a very common reaction to these complaints. The fact that you have a few extra lines in your code is not in fact so bad. It's just a bit of cheap typing, and very easy to handle on the reading side.
2 - It's actually a good thing
This is based on the fact that typing and reading these extra lines is a very good reminder that in fact your logic might escape at that point, and you have to undo any resource management that you've put in place in the lines preceding it. This is usually brought up in comparison with exceptions, which can break the flow of logic in an implicit way, forcing the developer to always have the hidden error path in mind instead. Some time ago I wrote a more in-depth rant about this here.
3 - Use panic/recover
In some specific circumstances, you may avoid some of that work by using panic with a known type, and then using recover right before your package code goes out into the world, transforming it into a proper error and returning that instead. This technique is seen most commonly to unroll recursive logic such as (un)marshalers.
I personally try hard to not abuse this too much, because I correlate more closely with points 1 and 2.
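A minimal sketch of that panic/recover technique, with made-up names (parseError, must, decodeBody, Thing): internal code panics with a known type, and the exported entry point recovers it back into an ordinary error:
type parseError struct{ err error }

func must(err error) {
    if err != nil {
        panic(parseError{err})
    }
}

// Decode is the exported boundary: panics of type parseError become errors here.
func Decode(data []byte) (result *Thing, err error) {
    defer func() {
        if r := recover(); r != nil {
            pe, ok := r.(parseError)
            if !ok {
                panic(r) // not one of ours, re-panic
            }
            err = pe.err
        }
    }()
    result = decodeBody(data) // deeply recursive, calls must() on every step
    return result, nil
}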
4 - Reorganize the code a bit
In some circumstances, you can reorganize the logic slightly to avoid the repetition.
As a trivial example, this:
err := doA()
if err != nil {
    return err
}
err = doB()
if err != nil {
    return err
}
return nil
can also be organized as:
err := doA()
if err != nil {
    return err
}
return doB()
5 - Use named results
Some people use named results to strip out the err variable from the return statement. I'd recommend against doing that, though, because it saves very little, reduces the clarity of the code, and makes the logic prone to subtle issues when one or more results get defined before the bail-out return statement.
6 - Use the statement before the if condition
As Tom Wilde rightly reminded me in the comment below, if statements in Go accept a simple statement before the condition. So you can do this:
if err := doA(); err != nil {
    return err
}
This is a fine Go idiom, and used often.
In some specific cases, I prefer to avoid embedding the statement in this fashion just to make it stand on its own for clarity purposes, but this is a subtle and personal thing.
You could use named return parameters to shorten things a bit
Playground link
func doStuff() (result string, err error) {
    a, err := doA()
    if err != nil {
        return
    }
    b, err := doB(a)
    if err != nil {
        return
    }
    result, err = doC(b)
    if err != nil {
        return
    }
    return
}
After you've been programming in Go for a while, you'll appreciate that having to check the error from every function makes you think about what it actually means if that function goes wrong and how you should deal with it.
If you have many such recurring situations with several of these error checks, you may define a utility function like the following:
func validError(errs ...error) error {
    for _, err := range errs {
        if err != nil {
            return err
        }
    }
    return nil
}
This returns the first of the given errors that is non-nil, if there is one.
Example usage (full version on play):
x, err1 := doSomething(2)
y, err2 := doSomething(3)
if e := validError(err1, err2); e != nil {
    return e
}
Of course, this can only be applied if the functions do not depend on each other, but that is a general precondition of summarizing error handling.
You could create a context type that carries the result values and the error.
type Type1 struct {
    a   int
    b   int
    c   int
    err error
}

func (t *Type1) doA() {
    if t.err != nil {
        return
    }
    // do something
    if err := do(); err != nil {
        t.err = err
    }
}

func (t *Type1) doB() {
    if t.err != nil {
        return
    }
    // do something
    b, err := t.doWithA(t.a)
    if err != nil {
        t.err = err
        return
    }
    t.b = b
}

func (t *Type1) doC() {
    if t.err != nil {
        return
    }
    // do something
    c, err := do()
    if err != nil {
        t.err = err
        return
    }
    t.c = c
}

func main() {
    t := Type1{}
    t.doA()
    t.doB()
    t.doC()
    if t.err != nil {
        // handle error in t
    }
}
It looks wrong to you perhaps because you are used to not handling errors at the call site. This is quite idiomatic for Go, but it looks like a lot of boilerplate if you aren't used to it.
It does come with some advantages though.
You have to think about the proper way to handle an error at the site where the error was generated.
It's easy, reading the code, to see every point at which the code will abort and return early.
If it really bugs you, you can get creative with for loops and anonymous functions (a sketch follows below), but that often gets complicated and hard to read.
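For illustration only (since, as said, it quickly hurts readability), a sketch of the anonymous-function approach; the types A, B, C and the enclosing doAll function are hypothetical:
func doAll() (C, error) {
    var (
        a A
        b B
        c C
    )
    steps := []func() error{
        func() (err error) { a, err = doA(); return },
        func() (err error) { b, err = doB(a); return },
        func() (err error) { c, err = doC(b); return },
    }
    for _, step := range steps {
        if err := step(); err != nil {
            return c, err
        }
    }
    return c, nil
}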
You can pass an error as a function argument
func doA() (A, error) {
    ...
}

func doB(a A, err error) (B, error) {
    ...
}

b, err := doB(doA())
I've noticed that some functions in the "html/template" package do this, e.g.
func Must(t *Template, err error) *Template {
    if err != nil {
        panic(err)
    }
    return t
}
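A typical use of that helper: because Must panics on error, the check is folded into package initialization, so a broken template fails fast at start-up (the file name is just an example):
var tmpl = template.Must(template.ParseFiles("home.html"))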

Resources