Should I avoid package singletons in golang?

at the moment I have a package store with following content:
package store

var (
    db *Database
)

func Open(url string) error {
    // open db connection
}

func FindAll(model interface{}) error {
    // return all entries
}

func Close() {
    // close db connection
}
This allows me to use store.FindAll from other packages after I have called store.Open in main.go.
However, from what I've seen so far, most packages prefer to provide a struct you need to initialize yourself. There are only a few cases where this global approach is used.
What are the downsides of this approach, and should I avoid it?

You can't instantiate connections to 2 storages at once.
You can't easily mock out storage in unit tests of dependent code using convenient tools like gomock.

The standard http package has a ServeMux for generic use cases, but it also provides one default instance of ServeMux called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables) for convenience, so that when you call http.HandleFunc it registers the handler on the default mux. You can find the same approach used in log and many other packages. This is essentially your "singleton" approach.
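For illustration, the two forms look roughly like this (pingHandler is just a placeholder handler, not something from the question):

// a trivial handler used in both variants
func pingHandler(w http.ResponseWriter, r *http.Request) {
    fmt.Fprintln(w, "pong")
}

// package-level convenience: registers on http.DefaultServeMux
http.HandleFunc("/ping", pingHandler)
http.ListenAndServe(":8080", nil) // nil means "use DefaultServeMux"

// explicit equivalent with your own mux
mux := http.NewServeMux()
mux.HandleFunc("/ping", pingHandler)
http.ListenAndServe(":8080", mux)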
However, I don't think it's a good idea to follow that pattern in your use case, since users need to call Open regardless of whether there is a default database. Because of that, a default instance would not really help; it would actually make things less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
There are also semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
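For contrast, here is a minimal sketch of the struct-based design (the names and the use of database/sql are assumptions, not from the question). It avoids the double-Open ambiguity and also addresses the two downsides listed above: you can open two stores at once, and dependent code can accept a small interface and be handed a mock in tests.

package store

import "database/sql"

type Store struct {
    db *sql.DB // assumed underlying handle
}

// Open returns a dedicated instance instead of mutating package state.
func Open(url string) (*Store, error) {
    db, err := sql.Open("postgres", url) // driver name is a placeholder
    if err != nil {
        return nil, err
    }
    return &Store{db: db}, nil
}

func (s *Store) FindAll(model interface{}) error {
    // query via s.db and fill model
    return nil
}

func (s *Store) Close() error {
    return s.db.Close()
}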

Related

How to design event-driven API with separate packages for server and event handles?

Background
I have a servercore package which includes the server struct and all the core logic for sending/receiving messages from clients.
The server will operate with different flavours - e.g. EU, USA, AUS. Each flavour has its own set of distinct methods which can be invoked by the clients.
I would like to create separate packages which include (only) those methods. E.g. euhandles package.
The problem
These methods, in some cases, have to rely on the original server methods implemented in the servercore package.
How can this be designed in golang in an elegant fashion?
Potential Solutions
(1) simply move the methods into a separate package - doesn't work
The euhandles package cannot create methods on the servercore.Server struct; this is prohibited in Go (you cannot define methods on types declared in another package).
(2) define functions in separate packages and then simply "register" them - doesn't work
Server.RegisterHandle("someEventName", euhandles.methodFromEu)
Problem - methodFromEu function will be unable to access any server methods.
(3) use embedding:
type serverEU struct { server *servercore.Server }

func (s *serverEU) HandleThat() {}

s := new(serverEU)
s.server.registerHandle("someEventName", s.HandleThat)
Problem - it becomes a bit cumbersome (an extra layer added just to implement a few handles/methods); it doesn't seem "clean".
(4) Dependency Injection
I just thought of this shortly after posting the question; adding it for the sake of completeness:
// in euhandles:
func HandleThat(s *server) { /* ... */ }

// elsewhere:
s.registerHandle("someEventName", euhandles.HandleThat)

// in servercore:
func (s *server) registerHandle(name string, handleFunc func(*server)) {
    s.handles[name] = func(s *server) { handleFunc(s) }
}
Not sure how good/appropriate this is considered to be among Go-programmers.
Is there any idiomatic, clean way to separate events/handles from the core server?
The first thing I'd do is to use embedding, though without the additional indirection:
type ServerEU struct {
    *servercore.Server
}

s := ServerEU{Server: &baseServer}
s.registerHandle("someEventName", s.HandleThat)
Another thing you can try is function pointers in the server:
type Server struct {
    // stuff
    LocaleSpecificFunc func(args)
}
And in the package:
func NewEUServer() *Server {
    s := Server{ /* initializers */ }
    s.LocaleSpecificFunc = func(args) {
        // Here, the LocaleSpecificFunc implementation can use s
    }
    return &s
}
If you have to pass HandleThat to registerHandle() at some point, HandleThat is not an integral part of a server. So your DI option (4) makes more sense than embedding actually.
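For what it's worth, here is a minimal sketch of option (4) with an exported RegisterHandle; the Server fields and constructor are my assumptions about servercore, not taken from your code:

// in package servercore:
type Server struct {
    handles map[string]func(*Server) // assumed storage for registered handles
}

func New() *Server {
    return &Server{handles: make(map[string]func(*Server))}
}

// RegisterHandle must be exported so that handler packages can call it.
func (s *Server) RegisterHandle(name string, h func(*Server)) {
    s.handles[name] = h
}

// in package euhandles:
func HandleThat(s *servercore.Server) {
    // free to call any exported Server method here
}

// wiring, e.g. in main:
srv := servercore.New()
srv.RegisterHandle("someEventName", euhandles.HandleThat)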

Golang object destructor for C/C++ bindings

We are building cryptographic libraries in C/C++, and are now also adding Golang support for them.
The cgo binding works fine except for one thing: we need to call a function to free the C pointers manually.
Currently we are doing it like this, by making a Go wrapper with a method for cleaning up the memory.
func SomeFunc() {
    cObj := NewObjectFromCPP()
    defer cObj.Free()
}
We also tried to use runtime.SetFinalizer to clean up memory when the Go GC collects the wrapped object, BUT it turns out that the runtime.SetFinalizer callback does not run every time, or may not run at all, because the documentation only says it will run eventually.
Our current solution feels hacky from my point of view, and I wanted to get some input from people who have already done something like this.
What is the right way of cleaning C/C++ memory from Go besides directly calling manual methods?
The convention in Go for disposing of things going out of scope is to use defer, as you are doing.
There is also a convention of using Close() as the disposal method, and in fact many parts of the standard library assume this convention (such as io.Closer).
func doThings() {
    thing, err := openThingHoldingResources()
    if err != nil {
        // TODO: Handle error
    }
    defer thing.Close()
    // TODO: Do stuff with thing
}
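If you want your wrapper type itself to follow that convention, one shape that works is sketched below; freeCObject stands in for whatever cgo call your library actually exposes, and sync.Once guards against a double free:

// cObject wraps memory owned by the C library.
type cObject struct {
    ptr  unsafe.Pointer
    once sync.Once
}

// Close frees the underlying C memory exactly once, so it is safe both to
// defer and to call explicitly; it also satisfies io.Closer.
func (o *cObject) Close() error {
    o.once.Do(func() {
        freeCObject(o.ptr) // hypothetical: would call the library's C free function
    })
    return nil
}

// freeCObject is a placeholder for the real cgo call.
func freeCObject(p unsafe.Pointer) {}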
One of the primary design goals of Go is to keep magic to a minimum, and functions that are invoked automatically on object lifecycle events (constructors and destructors) are one of the kinds of magic it eschews.

How to share single database connection between multiple packages

I have two packages named client and worker. I want to share the same ssdb, mysql and redis connections between both packages.
A similar problem I am facing is how to share auth between these two packages.
app
-> client pkg
-> worker pkg
main.go (contains auth as global variable)
Can anyone please suggest the best way to implement these two things?
There are lots of ways to do this, and each approach has pros and cons; it really depends on what you are doing. One easy way is to have a third package that holds the DB connection and import it from the other two packages and from the main package.
app
-> client pkg // import "app/global"
-> worker pkg // import "app/global"
-> global pkg // Contains ssdb and auth as global variables
main.go
Another approach, which might be better depending on what you are doing, is to have the functions in the client and worker packages accept a db connection as a parameter, and from main.go initialize the db and pass it in when you call a function that needs it.
It depends on what you are doing but for big projects it's easier to maintain if you just have one package doing all your db operations and you access it from all the places you need to do something. That way you don't have to worry about this issue because only one package has to worry about the db connection even if several packages use it.
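A rough sketch of the parameter-passing variant (package layout and names here are illustrative, not from the question):

// in package client (worker would mirror this):
type Client struct {
    db   *sql.DB
    auth string // stand-in for whatever your auth value really is
}

func New(db *sql.DB, auth string) *Client {
    return &Client{db: db, auth: auth}
}

// in main.go:
db, err := sql.Open("mysql", dsn) // dsn comes from your config
if err != nil {
    log.Fatal(err)
}
c := client.New(db, auth)
w := worker.New(db, auth)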
Edit:
The problem with global variables is that they can be modified at the same time from anywhere in your project, which can introduce race conditions, but there is nothing wrong with using them when this is not an issue.
In this case, you are just setting the value once, when you connect to the DB and then just use the object.
You mentioned you want to have another package for the auth, I recommend just having one package and having in it everything you need to access from more than one package, in this case ssdb and auth.
Here's one approach that is not always obvious to new Go developers; it takes a little elbow grease to implement but is not terribly complex, and usually works fine in beginner apps:
app
  client   // imports storage
  worker   // imports storage
  config   // all environment-related config goes here
  storage  // storage-engine-generic interface to the packages below it
  ssdb     // all ssdb-specific code here
  mysql    // all mysql-specific code here
  redis    // ditto
It uses package variables. If you're paranoid about an accidental write to an exported package variable, you can avoid the problem by using unexported package variables. Take advantage of the
limited definition of Exported Identifiers in Go (see language
specification).
In main, call
config.Init(configfile)
storage.Init()
Define your config.Init function to read the config file and set package variables to the connection information for your databases. If you're using unexported package variables, then allow public read-only access through exported functions. Otherwise you may be able to skip the functions, depending on what other features you want.
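A rough sketch of what that config package could look like (the parsing and values are placeholders; the MysqlConnectionString accessor matches the call used in the mysql example further down):

package config

import (
    "log"
    "os"
)

// Log is shared so every package logs through the same logger.
var Log = log.New(os.Stderr, "", log.LstdFlags)

var mysqlConnectionString string // unexported: only this package can write it

func Init(configfile string) {
    // parse configfile however you like (JSON, TOML, env vars, ...) and
    // populate the package variables; hard-coded here as a placeholder
    mysqlConnectionString = "user:pass@tcp(localhost:3306)/app"
}

// MysqlConnectionString gives other packages read-only access.
func MysqlConnectionString() string {
    return mysqlConnectionString
}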
In storage, your Init function calls
ssdb.Init()
mysql.Init()
redis.Init()
Then also in storage you'll have public functions that client and worker use that aren't specific to a storage engine, such as
func GetImage(id string) []byte {
    return mysql.GetImage(id)
}
or whatever is appropriate for your application. The storage level of abstraction may or may not be worth it for you depending on how you change your app in the future. You decide whether it's worth investing in it.
In mysql package, you import config, and you have something like
var db *sql.DB

func Init() {
    getDb()
}

func getDb() *sql.DB {
    if db == nil { // or something
        config.Log.Println("Opening db connection to mysql")
        var err error
        db, err = sql.Open("mysql", config.MysqlConnectionString())
        if err != nil {
            // do something with err, possibly have a retry loop
        }
    }
    return db
}
func GetImage(id string) []byte {
    db := getDb()
    // ...
}
The functions in the mysql package can use the db unexported package variable, but other packages cannot.
Using an unexported package variable with exported-function read-only access is not a terrible practice or particularly complex. That said, it's usually unnecessary. If db were the exported package variable Db, would you suddenly type
mysql.Db, _ = sql.Open("mysql", "LEEERRROOYYYYY!!!!")
in your client code (and also decide to import mysql and sql to do it) and then deploy to production? Why would you be more likely to do that than to intentionally break any other part of your code?
Note that if you just typed
mysql.Db = "LEEERRROOYYYYYY!!!!"
your application would fail to compile because of a type mismatch.

Does golang encourage having no file structure?

I've been looking into golang in order to build a web app. I like the language and everything, but I'm having trouble wrapping my head around the concept of structure in golang. It seems it pretty much forces me to have no file structure: no folders, no division, no separation of concerns. Is there any way to organize the .go files that I'm not seeing? So far, file structuring has been a headache, and it's the only bad experience I've had with the language. Thank you!
You are partially right. Go does not enforce anything regarding file and package structure, except that it forbids circular dependencies. IMHO, this is a good thing, since you have the freedom to choose what best suits you.
However, it puts the burden on you to decide what is best. I have tried a few approaches, and depending on what I am doing (e.g. library, command line tool, service) I believe different approaches work best.
If you are creating only a command line tool, let the root package (the root of your repository) be main. If it is a small tool, that is all you need. It might happen that your command line tool grows, so you might want to separate some parts into their own packages, which can, but do not have to, live in the same repository.
If you are creating a library, do the same, except that the package name will be the name of your library, not main.
If you need a combination (something that is useful both as a library and as a command line tool), I would put the library code (everything public for the library) in the VCS root, with potential sub-packages, and cmd/toolname for your binary, as sketched below.
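For example (mylib and toolname are placeholder names):

- mylib          // VCS root: package mylib, the public library code
-- mylib.go
-- cmd
--- toolname
---- main.go     // package main, imports mylib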
When it comes to web services, I found it most practical to follow these guidelines. It is best to read the entire blog post, but in short: define your domain in the VCS root, create cmd/app (or several) as command line entry points, and create one package per dependency (e.g. memcache, database, http, etc.). Your sub-packages never depend on each other explicitly; they only share domain definitions from the root. It takes some getting used to, and I am still adapting it to my use case, but so far it looks promising.
As @del-boy said, it depends on what you want to do. I went over this problem multiple times, but what suited me best when developing a golang web app was to divide the packages by dependency:
- myproject
-- cmd
--- main.go
-- http
--- http.go
-- postgres
--- postgres.go
-- mongodb
--- mongodb.go
-- myproject.go
myproject.go will contain the interfaces and structs that hold the main domain or business models.
For example you can have inside myproject.go
type User struct {
    MongoID    bson.ObjectId `bson:"_id,omitempty"`
    PostgresID string
    Username   string
}
and an Interface like this
type UserService interface {
    GetUser(username string) (*User, error)
}
Now in your http package you will handle exposing your api endpoints
// Handler represents an HTTP API interface for our app.
type Handler struct {
    Router      *chi.Mux // you can use whatever router you like
    UserService myproject.UserService
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // this is just a wrapper for the Router's ServeHTTP
    h.Router.ServeHTTP(w, r)
}

func (h *Handler) someHandler(w http.ResponseWriter, r *http.Request) {
    // get the username from the request
    // ...
    user, err := h.UserService.GetUser(username)
    // handle err and use user
}
in your postgres.go you can have a struct that implements UserService
type PostgresUserService struct {
    DB *sql.DB
}
and then you implement the service
func (s *PostgresUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method
}
and the same thing can be done with mongodb
type MongoUserService struct {
    Session *mgo.Session
}

func (s *MongoUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method
}
Now in your cmd/main.go you can have something like this
func main() {
    postgresDB, err := postgres.Connect()
    if err != nil {
        log.Fatal(err)
    }
    mongoSession, err := mongodb.Connect()
    if err != nil {
        log.Fatal(err)
    }

    postgresService := &postgres.PostgresUserService{DB: postgresDB}
    mongoService := &mongodb.MongoUserService{Session: mongoSession}
    _ = mongoService // assign this to the handler instead to use MongoDB

    // then pass your service to your http handler: the api will act
    // based on the underlying service you passed
    myHandler := http.Handler{}
    myHandler.UserService = postgresService
}
If you change your underlying store, you only have to change it here, and nothing else in your code needs to change.
This design is heavily inspired by this blog; I hope you find it helpful.

What is the preferred way to implement testing mocks in Go?

I am building a simple CLI tool in Go that acts as a wrapper for various password stores (Chef Vault, Ansible Vault, Hashicorp Vault, etc.). This is partially an exercise to get familiar with Go.
In working on this, I came across a situation where I was writing tests and found I needed to create interfaces for many things, just to be able to mock dependencies. As a result, a fairly simple implementation ends up with a bunch of abstraction, purely for the sake of the tests.
However, I was recently reading The Go Programming Language and found an example where they mocked their dependencies in the following way.
func Parse() map[string]string {
    s := openStore()
    // Do something with s to parse into a map…
    return s.contents
}

var storeFunc = func openStore() *Store {
    // concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
    openStore := func() {
        // set contents of mock…
    }
    parse()
    // etc...
}
So for the sake of testing, we store this concrete implementation in a variable, and then we can essentially re-declare the variable in the tests and have it return what we need.
Otherwise, I would have created an interface for this (despite currently only having one implementation) and injected it into the Parse method. This way, we could mock it for the test.
So my question is: What are the advantages and disadvantages of each approach? When is it more appropriate to create an interface for the purposes of a mock, versus storing the concrete function in a variable for re-declaration in the test?
For testing purposes, I tend to use the mocking approach you described instead of creating new interfaces. One of the reasons is that, AFAIK, there is no direct way to identify which structs implement an interface, which matters to me if I want to know whether the mocks are doing the right thing.
The main drawback of this approach is that the variable is essentially a package-level global variable (even though it's unexported). So all the drawbacks with declaring global variables apply.
In your tests, you will definitely want to use defer to re-assign storeFunc back to its original concrete implementation once the test completes.
var storeFunc = func() *Store {
    // concrete implementation for opening store
    return &Store{ /* ... */ }
}

// and in the testing file…
func TestParse(t *testing.T) {
    storeFuncOriginal := storeFunc
    defer func() {
        storeFunc = storeFuncOriginal
    }()
    storeFunc = func() *Store {
        return &Store{ /* contents of mock… */ }
    }
    Parse()
    // etc...
}
By the way, var storeFunc = func openStore() *Store won't compile.
There is no "right way" of answering this.
Having said this, I find the interface approach more general and clearer than defining a function variable and setting it for the test.
Here are some comments on why:
The function variable approach does not scale well if there are several functions you need to mock (in your example it is just one function).
The interface makes it clearer what behaviour is being injected into the function/module, as opposed to the function variable, which ends up hidden in the implementation.
The interface allows you to inject a type with a state (a struct) which might be useful for configuring the behaviour of the mock.
You can of course rely on the "function variable" approach for simple cases and use the "interface" for more complex functionality, but if you want to be consistent and use just one approach I'd go with the "interface".
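To make the comparison concrete, here is a minimal sketch of the interface variant; StoreOpener and mockOpener are illustrative names, and Store is assumed to have the contents field from the question:

type StoreOpener interface {
    OpenStore() (*Store, error)
}

func Parse(o StoreOpener) (map[string]string, error) {
    s, err := o.OpenStore()
    if err != nil {
        return nil, err
    }
    // do something with s to parse into a map…
    return s.contents, nil
}

// in the testing file:
type mockOpener struct {
    store *Store // the mock carries state, so each test can configure it
}

func (m mockOpener) OpenStore() (*Store, error) {
    return m.store, nil
}

func TestParse(t *testing.T) {
    mock := mockOpener{store: &Store{contents: map[string]string{"k": "v"}}}
    got, err := Parse(mock)
    if err != nil || got["k"] != "v" {
        t.Fatalf("unexpected result: %v, %v", got, err)
    }
}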
I tackle the problem differently. Given
func Parse(s Store) map[string]string {
    // Do stuff on the interface Store
}
you have several advantages:
You can use a mock or a stub Store as you see fit.
Imho, the code becomes more transparent. The signature alone makes it clear that a Store implementation is required, and the code does not need to be polluted with error handling for opening the Store.
The code documentation can be kept more concise.
However, this makes something pretty obvious: Parse is a function which could be attached to the store as a method, which most likely makes more sense than passing the store around.
