Testing a handler that connects to a db in Go

I have a handler which connects to a db and retrieves the records. I wrote a test case for that and it goes this way:
main_test.go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"os"
	"testing"
)

var a App

func TestMain(m *testing.M) {
	a = App{}
	a.InitializeDB(fmt.Sprintf("postgres://****:****@localhost/db?sslmode=disable"))
	code := m.Run()
	os.Exit(code)
}

func TestRulesetGet(t *testing.T) {
	req, err := http.NewRequest("GET", "/1/sig/", nil)
	if err != nil {
		t.Fatal(err)
	}

	// We create a ResponseRecorder (which satisfies http.ResponseWriter) to record the response.
	rr := httptest.NewRecorder()
	handler := http.HandlerFunc(a.Get)
	handler.ServeHTTP(rr, req)

	// Check the response body is what we expect.
	if len(rr.Body.String()) != 0 {
		fmt.Println("Status OK : ", http.StatusOK)
		fmt.Println("handler returned body: got ", rr.Body.String())
	}
}
I feel like this is a very basic test case where I'm just checking the response length (this is because I expect a slice). I'm not really sure whether this is the right way to write a test case. Please point out some loopholes so that I can write solid test cases for my remaining handlers. Also, I'm not using the actual Error and Fatal methods for checking the errors.

If it works then it's not wrong, but it doesn't validate much - only that the response body is non-empty, not even that the request was successful. I'm also not sure why it prints out http.StatusOK, which is a constant and doesn't tell you what the status code in the response actually was.
Personally when I do this level of test I check, at the very least, that the response code is as expected, the response body unmarshals correctly (if it's JSON or XML), the response data is basically sane, etc. For more complex payloads I might use a golden file test. For critical code I might use a fuzz (aka monte carlo) test. For performance-critical code I'll likely add benchmarks and load tests. There are practically infinite ways to test code. You'll have to figure out what your needs are and how to meet them.
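A minimal sketch of what that might look like for this handler, assuming the endpoint returns a JSON array; the Rule type and its ID field are placeholders for whatever your records actually contain, and you'd need "encoding/json" in the imports:

func TestRulesetGetChecksMore(t *testing.T) {
	req, err := http.NewRequest("GET", "/1/sig/", nil)
	if err != nil {
		t.Fatal(err)
	}

	rr := httptest.NewRecorder()
	http.HandlerFunc(a.Get).ServeHTTP(rr, req)

	// 1. Check the actual response status, not a constant.
	if rr.Code != http.StatusOK {
		t.Fatalf("expected status %d, got %d", http.StatusOK, rr.Code)
	}

	// 2. Check the body unmarshals into the expected shape.
	//    `Rule` is hypothetical; use your real record type.
	var rules []Rule
	if err := json.Unmarshal(rr.Body.Bytes(), &rules); err != nil {
		t.Fatalf("response body is not valid JSON: %v", err)
	}

	// 3. Check the data is basically sane.
	for i, r := range rules {
		if r.ID == 0 {
			t.Errorf("rule %d has zero ID", i)
		}
	}
}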

Related

unserialize PHP in golang

I have a file with a serialized PHP array in it.
The content of the file looks like this:
a:2:{i:250;s:7:"my_catz";s:7:"abcd.jp";a:2:{s:11:"category_id";i:250;s:13:"category_name";s:7:"my_catz";}}
The unserialized array is this:
(
    [250] => my_catz
    [abcd.jp] => Array
        (
            [category_id] => 250
            [category_name] => my_catz
        )
)
Now, I want to get the content of the file in Go, unserialize it, and convert it to an array.
In Go I can get the content of the file using:
dat, err := os.ReadFile("/etc/squid3/compiled-categories.db")
if err != nil {
	if e.Debug {
		log.Printf("error reading /etc/squid3/compiled-categories.db: %v", err)
	}
}
And unserialize it using the github.com/techoner/gophp library:
package categorization

import (
	"errors"
	"fmt"
	"log"
	"os"

	"github.com/techoner/gophp"
)

type Data struct {
	Website string
	Debug   bool
}

func (e Data) CheckPersonalCategories() (int, string) {
	if e.Debug {
		log.Printf("Checking Personal Categories")
	}
	if _, err := os.Stat("/etc/squid3/compiled-categories.db"); errors.Is(err, os.ErrNotExist) {
		if e.Debug {
			log.Printf("/etc/squid3/compiled-categories.db does not exist: %v", err)
		}
		return 0, ""
	}
	dat, err := os.ReadFile("/etc/squid3/compiled-categories.db")
	if err != nil {
		if e.Debug {
			log.Printf("error reading /etc/squid3/compiled-categories.db: %v", err)
		}
	}
	out, _ := gophp.Unserialize(dat)
	fmt.Println(out["abcd.jp"])
	return 0, ""
}
But I can't access the array; for example, when I try to access an array key using out["abcd.jp"] I get this error message:
invalid operation: out["abcd.jp"] (type interface {} does not support indexing)
The result of out is
map[250:my_catz abcd.jp:map[category_id:250 category_name:my_catz]]
So it seems that it is unserializing correctly.
Don't make assumptions about what is and isn't succeeding in your code. Error responses are the only reliable way to know whether a function succeeded. In this case the assumption may hold, but ignoring errors is always a mistake. Invest time in catching errors and at least panic them - don't instead waste your time ignoring errors and then trying to debug unreliable code.
invalid operation: out["abcd.jp"] (type interface {} does not support indexing)
The package you're using unfortunately doesn't provide any documentation, so you have to read the source to understand that gophp.Unserialize returns (interface{}, error). This makes sense: PHP can serialize any value, so Unserialize must be able to return any value.
out is therefore an interface{} whose underlying value depends on the data. To turn an interface{} into a particular value requires a type assertion. In this case, we think the underlying data should be map[string]interface{}. So we need to do a type assertion:
mout, ok := out.(map[string]interface{})
Before we get to the working code, one more point I'd like you to think about. Look at the code below: I started it from your code, but the resemblance is very slight. I took out almost all the code because it was completely irrelevant to your question. I added the input data to the code to make a minimal reproduction of your code (as I asked you to do and you declined to do). This is a very good use of your time for 2 reasons: first, it makes it a lot easier to get answers (both because it shows sufficient effort on your part and because it simplifies the description of the problem), and second, because it's excellent practice for debugging. I make minimal reproductions of code flows all the time to better understand how to do things.
You'll notice you can run this code now without any additional effort. That's the right way to provide a minimal reproducible example - not with a chunk of mostly irrelevant code which still can't be executed by anybody.
The Go Playground is a great way to demonstrate Go-specific code that others can execute and investigate. You can also see the code below at https://go.dev/play/p/QfCl08Gx53e
package main

import (
	"fmt"

	"github.com/techoner/gophp"
)

type Data struct {
	Website string
	Debug   bool
}

func main() {
	var dat = []byte(`a:2:{i:250;s:7:"my_catz";s:7:"abcd.jp";a:2:{s:11:"category_id";i:250;s:13:"category_name";s:7:"my_catz";}}`)
	out, err := gophp.Unserialize(dat)
	if err != nil {
		panic(err)
	}
	if mout, ok := out.(map[string]interface{}); ok {
		fmt.Println(mout["abcd.jp"])
	}
}
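If you need to reach into the nested array as well, note that mout["abcd.jp"] is itself an interface{}, so it needs its own assertion. A small sketch that could go inside that if block, assuming gophp also decodes the inner PHP array into a map[string]interface{}:

	// The inner value is another interface{}; assert it before indexing.
	if inner, ok := mout["abcd.jp"].(map[string]interface{}); ok {
		fmt.Println(inner["category_id"], inner["category_name"])
	}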

Testing Golang function containing call to sql.Open connection without a DB

So I'm just getting to grips with Golang. I'm writing an application for funsies just to understand stuff and get my head around it.
I have a whole bunch of functions that will interact with a DB, where I pass in a *sql.DB for the function to use. I can test those easily enough using a mocked interface from sqlmock.
No problem there.
I'm now writing the initialisation function for the application, which will open the DB connection; the connection will be attached to a struct and from there passed into utility functions.
However, I am struggling to find a way to easily test that connection without having the hassle of setting up an actual database.
So I'm guessing that I have probably either badly structured my app, or I've missed something, potentially something pretty obvious.
So here is some example code to illustrate my predicament.
util.go
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func DoDBStuff(db *sql.DB) {
	rows, err := db.Query("SELECT column1, column2 FROM example")
	if err != nil {
		log.Print(err)
	}
	// do stuff with rows
	_ = rows
}
util_test.go
package main

import (
	"testing"

	"github.com/DATA-DOG/go-sqlmock"
)

func TestDoDBStuff(t *testing.T) {
	db, mock, err := sqlmock.New()
	if err != nil {
		t.Fatalf("An error '%s' was not expected when opening a stub database connection", err)
	}
	defer db.Close()

	rows := sqlmock.NewRows([]string{"col1", "col2"})
	rows.AddRow("val1", "val2")
	rows.AddRow("val3", "val4")

	mock.ExpectQuery("^SELECT column1, column2 FROM example$").WillReturnRows(rows)

	DoDBStuff(db)

	if err := mock.ExpectationsWereMet(); err != nil {
		t.Errorf("there were unfulfilled expectations: %s", err)
	}
}
That all works fine; I can test my DB queries.
However, now I want to test initialising the app.
package main

import (
	"database/sql"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

type App struct {
	DB *sql.DB
	// some other data structures
}

func (a *App) InitApp(connectionString string) {
	var err error
	a.DB, err = sql.Open("mysql", connectionString)
	if err != nil {
		log.Fatal(err)
	}
	// other init stuff
}
But as I can't pass in the *sql.DB, I don't think it can be mocked, certainly not easily. So I'm struggling a bit with how to move forward.
I am intending for this to sit behind a REST API, so on startup the app will need to initialize before being able to process any requests.
Ideally, I'd like to be able to test the REST interface without having to set up a database, and delay testing with real data until I can feed the code into a dev environment.
So really I want to know:
Is what I'm intending possible?
Is there a better approach?
If not, what am I missing? Poor test design or poor code setup?
Edit:
Following Peter's comment, I just want to clarify.
I want to test the functionality of the InitDB() function, but because of the sql.Open call I would need to have a database for it to connect to. If I don't, the call would fail and I could not effectively test the function.
There is Dockertest, which creates a Docker container running whatever you want (e.g. MySQL), and you can test against that. It's designed for proper integration testing, so it should be able to do what you want (and you won't need sqlmock anymore either).
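A rough sketch of what that can look like in a TestMain, following the pattern shown in the Dockertest README; the image tag, credentials, and connection string below are placeholders, not something your app dictates:

package main

import (
	"database/sql"
	"fmt"
	"log"
	"os"
	"testing"

	_ "github.com/go-sql-driver/mysql"
	"github.com/ory/dockertest/v3"
)

var testDB *sql.DB

func TestMain(m *testing.M) {
	pool, err := dockertest.NewPool("") // talks to the local Docker daemon
	if err != nil {
		log.Fatalf("could not connect to docker: %s", err)
	}

	// Spin up a throwaway MySQL container for the test run.
	resource, err := pool.Run("mysql", "8.0", []string{"MYSQL_ROOT_PASSWORD=secret"})
	if err != nil {
		log.Fatalf("could not start resource: %s", err)
	}

	// The container takes a moment to accept connections, so retry until it does.
	if err := pool.Retry(func() error {
		var err error
		testDB, err = sql.Open("mysql", fmt.Sprintf("root:secret@(localhost:%s)/mysql", resource.GetPort("3306/tcp")))
		if err != nil {
			return err
		}
		return testDB.Ping()
	}); err != nil {
		log.Fatalf("could not connect to database: %s", err)
	}

	code := m.Run()

	// Clean up the container when the tests finish.
	if err := pool.Purge(resource); err != nil {
		log.Fatalf("could not purge resource: %s", err)
	}
	os.Exit(code)
}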

How to get logrus to print stack of pkg/errors

I'm using github.com/sirupsen/logrus and github.com/pkg/errors. When I log an error wrapped or created by pkg/errors, all I see in the log output is the error message. I want to see the stack trace.
From this issue, https://github.com/sirupsen/logrus/issues/506, I infer that logrus has some native method for working with pkg/errors.
How can I do this?
The comment on your Logrus issue is incorrect (and incidentally, appears to come from someone with no affiliation with Logrus, and who has made no contributions to Logrus, so not actually from "the Logrus team").
It is easy to extract the stack trace in a pkg/errors error, as documented:
type stackTracer interface {
	StackTrace() errors.StackTrace
}
This means that the easiest way to log the stack trace with logrus would be simply:
if stackErr, ok := err.(stackTracer); ok {
	log.WithField("stacktrace", fmt.Sprintf("%+v", stackErr.StackTrace()))
}
As of today, with a pull request of mine having been merged into pkg/errors, this is now even easier if you're using JSON logging:
if stackErr, ok := err.(stackTracer); ok {
	log.WithField("stacktrace", stackErr.StackTrace())
}
This will produce a log format similar to "%+v", but without newlines or tabs, and with one string per frame, for easy marshaling into a JSON array.
Of course, both of these options force you to use the format defined by pkg/errors, which isn't always ideal. So instead, you can iterate through the stack trace, and produce your own formatting, possibly producing a format easily marshalable to JSON.
if err, ok := err.(stackTracer); ok {
	for _, f := range err.StackTrace() {
		fmt.Printf("%+s:%d\n", f, f) // Or your own formatting
	}
}
Rather than printing each frame, you can coerce it into any format you like.
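For example, a small sketch (the field name and the choice of format verbs here are arbitrary) that turns the frames into a slice of strings suitable for a logrus field:

if stackErr, ok := err.(stackTracer); ok {
	frames := make([]string, 0, len(stackErr.StackTrace()))
	for _, f := range stackErr.StackTrace() {
		// %n is the bare function name, %s the source file, %d the line number.
		frames = append(frames, fmt.Sprintf("%n %s:%d", f, f, f))
	}
	log.WithField("stacktrace", frames).Error(err)
}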
The inference is wrong. Logrus does not actually know how to handle the error.
Update: the Logrus team officially said that this is NOT a supported feature, https://github.com/sirupsen/logrus/issues/895#issuecomment-457656556.
A Java-ish Response
In order to universally work with error handlers in this way, I composed a new version of Entry from Logrus. As the example shows, create a new Entry with whatever common fields you want (below, the example is a logger set in a handler that keeps track of the caller id). Pass the PkgErrorEntry through your layers as you work with the Entry. When you need to log specific errors, like call variables experiencing the error, start with PkgErrorEntry.WithError(...) and then add your details.
This is a starting point. If you want to use this generally, implement all of Entry's methods on PkgErrorEntry. Continue to delegate to the internal entry, but return a new PkgErrorEntry. Such a change would make the value a true drop-in replacement for Entry.
package main

import (
	unwrappedErrors "errors"
	"fmt"
	"strings"

	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

// PkgErrorEntry enables stack frame extraction directly into the log fields.
type PkgErrorEntry struct {
	*logrus.Entry

	// Depth defines how much of the stacktrace you want.
	Depth int
}

// This is dirty pkg/errors.
type stackTracer interface {
	StackTrace() errors.StackTrace
}

func (e *PkgErrorEntry) WithError(err error) *logrus.Entry {
	out := e.Entry

	common := func(pError stackTracer) {
		st := pError.StackTrace()
		depth := 3
		if e.Depth != 0 {
			depth = e.Depth
		}
		valued := fmt.Sprintf("%+v", st[0:depth])
		valued = strings.Replace(valued, "\t", "", -1)
		stack := strings.Split(valued, "\n")
		out = out.WithField("stack", stack[2:])
	}

	if err2, ok := err.(stackTracer); ok {
		common(err2)
	}

	if err2, ok := errors.Cause(err).(stackTracer); ok {
		common(err2)
	}

	return out.WithError(err)
}

func someWhereElse() error {
	return unwrappedErrors.New("Ouch")
}

func level1() error {
	return level2()
}

func level2() error {
	return errors.WithStack(unwrappedErrors.New("All wrapped up"))
}

func main() {
	baseLog := logrus.New()
	baseLog.SetFormatter(&logrus.JSONFormatter{})

	errorHandling := PkgErrorEntry{Entry: baseLog.WithField("callerid", "1000")}
	errorHandling.Info("Hello")

	err := errors.New("Hi")
	errorHandling.WithError(err).Error("That should have a stack.")

	err = someWhereElse()
	errorHandling.WithError(err).Info("Less painful error")

	err = level1()
	errorHandling.WithError(err).Warn("Should have multiple layers of stack")
}
A Gopher-ish way
See https://www.reddit.com/r/golang/comments/ajby88/how_to_get_stack_traces_in_logrus/ for more detail.
Ben Johnson wrote about making errors part of your domain. An abbreviated version is that you should put tracer attributes onto a custom error. When code directly under your control errors or when an error from a 3rd party library occurs, the code immediately dealing with the error should put a unique value into the custom error. This value will print as part of the custom error's Error() string implementation.
When developers get the log file, they will be able to grep the code base for that unique value. Ben says "Finally, we need to be able to provide all this information plus a logical stack trace to our operator so they can debug issues. Go already provides a simple method, error.Error(), to print error information so we can utilize that."
Here's Ben's example
// attachRole inserts a role record for a user in the database
func (s *UserService) attachRole(ctx context.Context, id int, role string) error {
	const op = "attachRole"
	if _, err := s.db.Exec(`INSERT roles...`); err != nil {
		return &myapp.Error{Op: op, Err: err}
	}
	return nil
}
An issue I have with the grep-able code is that it's easy for the value to diverge from the original context. For example, say the name of the function was changed from attachRole to something else and the function was longer. It's possible that the op value can diverge from the function name. Regardless, this appears to satisfy the general need of tracing to a problem, while treating errors as first-class citizens.
Go 2 might throw a curve at this, pushing it more toward the Java-ish response. Stay tuned.
https://go.googlesource.com/proposal/+/refs/changes/97/159497/3/design/XXXXX-error-values.md
Use custom hook to extract stacktrace
package main

import (
	"fmt"

	"github.com/pkg/errors"
	"github.com/sirupsen/logrus"
)

type StacktraceHook struct {
}

func (h *StacktraceHook) Levels() []logrus.Level {
	return logrus.AllLevels
}

func (h *StacktraceHook) Fire(e *logrus.Entry) error {
	if v, found := e.Data[logrus.ErrorKey]; found {
		if err, iserr := v.(error); iserr {
			type stackTracer interface {
				StackTrace() errors.StackTrace
			}
			if st, isst := err.(stackTracer); isst {
				stack := fmt.Sprintf("%+v", st.StackTrace())
				e.Data["stacktrace"] = stack
			}
		}
	}
	return nil
}

func main() {
	logrus.SetFormatter(&logrus.TextFormatter{DisableQuote: true})
	logrus.AddHook(&StacktraceHook{})
	logrus.WithError(errors.New("Foo")).Error("Wrong")
}
Output
time=2009-11-10T23:00:00Z level=error msg=Wrong error=Foo stacktrace=
main.main
/tmp/sandbox1710078453/prog.go:36
runtime.main
/usr/local/go-faketime/src/runtime/proc.go:250
runtime.goexit
/usr/local/go-faketime/src/runtime/asm_amd64.s:1594

net/http server: too many open files error

I'm trying to develop a simple job queue server with some workers that query it, but I encountered a problem with my net/http server. I'm surely doing something bad, but after ~3 minutes my server starts displaying:
http: Accept error: accept tcp [::]:4200: accept4: too many open files; retrying in 1s
For information, it receives 10 requests per second in my test case.
Here are two files to reproduce this error:
// server.go
package main

import (
	"net/http"
)

func main() {
	http.HandleFunc("/get", func(rw http.ResponseWriter, r *http.Request) {
		http.Error(rw, "Try again", http.StatusInternalServerError)
	})
	http.ListenAndServe(":4200", nil)
}
// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, _ := http.Get("http://localhost:4200/get")
		defer res.Body.Close()
		if res.StatusCode == http.StatusInternalServerError {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		return
	}
}
I've already done some searching about this error and found some interesting responses, but none of them fixed my issue.
The first response I saw was to correctly close the Body of the http.Get response; as you can see, I did it.
The second response was to change the file descriptor ulimit of my system, but as I will not control where my app will run I prefer not to use this solution (for information, it's set to 1024 on my system).
Can someone explain to me why this problem happens and how I can fix it in my code?
Thanks a lot for your time.
EDIT 2: In a comment Martin says that I'm not closing the Body. I tried closing it (without defer) and it fixed the issue. Thanks Martin! I was thinking that continue would execute my defer; I was wrong.
I found a post explaining the root problem in a lot more detail.
Nathan Smith even explains how to control timeouts on the TCP level, if needed.
Below is a summary of everything I could find on this particular problem, as well as the best practices to avoid this problem in future.
Problem
When a response is received, regardless of whether the response body is needed or not, the connection is kept alive until the response body stream is closed. So, as mentioned in this thread, always close the response body, even if you do not need to use/read the body content:
func Ping(url string) bool {
	// simple GET request on given URL
	res, err := http.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}

	// always close the response-body, even if content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Best Practice
As mentioned by Nathan Smith, never use http.DefaultClient in production systems; this includes calls like http.Get, which uses http.DefaultClient under the hood.
Another reason to avoid http.DefaultClient is that it is a singleton (a package-level variable), meaning the garbage collector will not try to clean it up, which can leave idling streams/sockets alive.
Instead create your own instance of http.Client and remember to always specify a sane Timeout:
func Ping(url string) bool {
	// create a new instance of http client struct, with a timeout of 2sec
	client := http.Client{Timeout: time.Second * 2}

	// simple GET request on given URL
	res, err := client.Get(url)
	if err != nil {
		// if unable to GET given URL, then ping must fail
		return false
	}

	// always close the response-body, even if content is not required
	defer res.Body.Close()

	// is the page status okay?
	return res.StatusCode == http.StatusOK
}
Safety Net
The safety net is for that newbie on the team who does not know the shortfalls of http.DefaultClient usage, or for that very useful, but not so active, open-source library that is still riddled with http.DefaultClient calls.
Since http.DefaultClient is a Singleton we can easily change the Timeout setting, just to ensure that legacy code does not cause idle connections to remain open.
I find it best to set this on the package main file in the init function:
package main

import (
	"net/http"
	"time"
)

func init() {
	/*
		Safety net for 'too many open files' issue on legacy code.
		Set a sane timeout duration for the http.DefaultClient, to ensure idle connections are terminated.
		Reference: https://stackoverflow.com/questions/37454236/net-http-server-too-many-open-files-error
	*/
	http.DefaultClient.Timeout = time.Minute * 10
}
As Martin says in a comment, I didn't really close the Body after the Get request. I used defer res.Body.Close(), but it's not executed while I'm staying in the for loop, since continue doesn't trigger the deferred call - defers only run when the function returns.
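A sketch of the fixed worker loop, closing the body explicitly on the retry path instead of deferring it; it also checks the error from http.Get, which the original worker.go ignored:

// worker.go
package main

import (
	"net/http"
	"time"
)

func main() {
	for {
		res, err := http.Get("http://localhost:4200/get")
		if err != nil {
			time.Sleep(100 * time.Millisecond)
			continue
		}
		if res.StatusCode == http.StatusInternalServerError {
			// Close the body before retrying; a defer would only run when main returns.
			res.Body.Close()
			time.Sleep(100 * time.Millisecond)
			continue
		}
		res.Body.Close()
		return
	}
}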
Please note that in some cases the following setting in /etc/sysctl.conf
net.ipv4.tcp_tw_recycle = 1
could cause this error, because TCP connections remain open.
A temporary solution: just increase the number of open files:
ulimit -Sn 10000

Basic web tweaks that all applications should have

Currently my web app is just a router and handlers.
What are some important things I am missing to make this production worthy?
I believe I have to set the # of procs to ensure this uses maximum goroutines?
Should I be using output buffering?
Anything else you see missing that is best practice?
var (
	templates = template.Must(template.ParseFiles("templates/home.html"))
)

func main() {
	r := mux.NewRouter()
	r.HandleFunc("/", WelcomeHandler)
	http.ListenAndServe(":9000", r)
}

func WelcomeHandler(w http.ResponseWriter, r *http.Request) {
	homePage, err := api.LoadHomePage()
	if err != nil {
	}
	tmpl := "home"
	renderTemplate(w, tmpl, homePage)
}

func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
	err := templates.ExecuteTemplate(w, tmpl+".html", hp)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
	}
}
You don't need to set/change runtime.GOMAXPROCS() as since Go 1.5 it defaults to the number of available CPU cores.
Buffering output? From the performance point of view, you don't need to. But there may be other considerations for which you may want to.
For example, your renderTemplate() function may potentially panic. If executing the template starts writing to the output, it involves setting the HTTP response code and other headers prior to writing data. And if a template execution error occurs after that, it will return an error, and so your code attempts to send back an error response. At this point HTTP headers are already written, and this http.Error() function will try to set headers again => panic.
One way to avoid this is to first render the template into a buffer (e.g. bytes.Buffer), and if no error is returned by the template execution, then you can write the content of the buffer to the response writer. If error occurs, then of course you won't write the content of the buffer, but send back an error response just like you did.
To sum it up, your code is production ready performance-wise (excluding the way you handle template execution errors).
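A small sketch of the buffered variant described above, using the same signature as the renderTemplate in the question (it additionally needs "bytes" in the imports):

func renderTemplate(w http.ResponseWriter, tmpl string, hp *HomePage) {
	// Render into a buffer first, so nothing is written to the client on error.
	var buf bytes.Buffer
	if err := templates.ExecuteTemplate(&buf, tmpl+".html", hp); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	// Only now touch the ResponseWriter; headers and body are written once, on success.
	buf.WriteTo(w)
}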
WelcomeHandler should return when err != nil is true.
Log the error when one is hit to help investigation.
Place templates = template.Must(template.ParseFiles("templates/home.html")) in init. Split it into separate lines. If template.ParseFiles returns an error, then make a Fatal log. And if you have multiple templates to initialize, then initialize them in goroutines with a common WaitGroup to speed up startup (see the sketch after these points).
Since you are using mux, the question "HTTP Server is too clean with its URLs" might also be good to know about.
You might also want to reconsider the decision of letting the users know why they got the http.StatusInternalServerError response.
Setting GOMAXPROCS > 1 if you have more than one core would definitely be a good idea, but I would keep it less than the number of cores available.
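A sketch of that init-time template setup, assuming a hypothetical second template file "templates/help.html" just to show the concurrent parsing; imports needed are html/template, log, and sync. The WaitGroup part only pays off when you have many templates to parse:

var (
	templates     *template.Template
	helpTemplates *template.Template // hypothetical second template set
)

func init() {
	var wg sync.WaitGroup
	wg.Add(2)

	go func() {
		defer wg.Done()
		t, err := template.ParseFiles("templates/home.html")
		if err != nil {
			log.Fatalf("parsing home template: %v", err)
		}
		templates = t
	}()

	go func() {
		defer wg.Done()
		t, err := template.ParseFiles("templates/help.html")
		if err != nil {
			log.Fatalf("parsing help template: %v", err)
		}
		helpTemplates = t
	}()

	wg.Wait()
}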
