I'm trying to understand the way interfaces can be used.
Below is a contrived example to demonstrate my question.
I have a main package, which instantiates a test database and passes it to a test server, where the server is then initialised.
Then there is a call to the server, which performs a dummy database insert (using the dummy database dependency passed in at server initialisation).
main.go
package main
import (
"interfaces/database"
"interfaces/server"
)
func main() {
db := database.Start()
s := server.Start(db)
s.HandleInsert()
}
database.go
package database
import "fmt"
type Database struct {
pool string
}
func Start() *Database {
database := &Database{}
database.pool = "examplepool"
return database
}
func (db *Database) Select() {
fmt.Println("Running a Select")
}
func (db *Database) Insert() {
fmt.Println("Running an Insert")
}
func (db *Database) Delete() {
fmt.Println("Running a Delete")
}
server.go
package server
import "fmt"
type Database interface {
Select()
Insert()
Delete()
}
type Server struct {
server string
db Database
}
func Start(db Database) *Server {
fmt.Println("Created Server")
s := &Server{"exampleserver", db}
return s
}
func (s *Server) HandleInsert() {
s.db.Insert()
}
The thing is, in the server package, to make use of the database package, I've had to write out all the methods that the database object has. I've only got three methods, but my Database object could easily have more. This goes against Go's philosophy of having small interfaces. What am I missing here? I don't want to import the Database package into the Server package, as I want to encapsulate each package as much as possible.
Another question is, say I have other packages that want to make use of this Database package. Should they also contain a similar Database interface? Should I maybe have a package called "interfaces" which contains the Database interface, that can then be imported?
The idea for the layout of this code came from this video: https://youtu.be/rWBSMsLG8po
At first glance, I'd say you're doing things in the idiomatic golang way:
Return concrete types (database.Start returns *Database, as it should)
The package defines interfaces for its dependencies (as you're doing)
The size of the interface isn't determined by what a given type exports (i.e. what methods your database.Database type implements), but rather what functionality you need. If your server package only ever needs to use Select, Insert, and Delete, then that's what the interface should be. The type you're passing to the server package could implement SelectNASASecretLizardFiles, but if you're not using it, the server package doesn't have to know the method exists. The server.Database interface remains as simple as it is now.
That's essentially what a small interface means. Golang interfaces are satisfied implicitly (sometimes called duck-typed interfaces). Any type that implements the 3 methods you defined in server.Database can be used as a dependency. This makes your packages really easy to unit-test (mocking is trivial).
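For instance, a hand-written stub is all it takes to unit-test the server package. This is a sketch reusing the import path from the question:

package server_test

import (
    "interfaces/server"
    "testing"
)

// stubDB is a minimal fake that satisfies server.Database.
type stubDB struct {
    inserts int // records how many times Insert was called
}

func (s *stubDB) Select() {}
func (s *stubDB) Insert() { s.inserts++ }
func (s *stubDB) Delete() {}

func TestHandleInsert(t *testing.T) {
    db := &stubDB{}
    s := server.Start(db)
    s.HandleInsert()
    if db.inserts != 1 {
        t.Fatalf("expected 1 insert, got %d", db.inserts)
    }
}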
The "downside" can be that, if you have several packages depending on the Database type, you can end up with duplicate definitions of the same interface. However, if one of the packages requires access to an additional function (or doesn't need to use the Insert method), changing the interface for that package doesn't affect any of the other packages. This fits with the whole concept of golang packages being self-contained.
In your particular case, though, I think there's room for a judgement call. If you're interfacing with a DB of sorts, I think it's a fair assumption to make that most, if not all, packages will all need to be able to select data. It's common to see a small, base interface defined in a common package:
|
|-- server
| |
| |--> dependencies.go (defines Database interface for server pkg)
|
|-- foo
| |
| |--> dependencies.go (defines Database interface for this package)
|
|-- common (bad package name, but self-explanatory)
| |
| |--> database.go (defines common subset of database interfaces)
Where the interfaces look like this:
package common
type DB interface {
// don't return a slice of maps, this is just an example
Select(query string, args ...interface{}) (rows []map[string]interface{}, err error)
Close() error
}
package server
import "your.project/common"
type Database interface {
common.DB // embedded common interface
Insert(query string, vals ...interface{}) error
Delete(query, id string) error
}
This is a common way to structure your code, while ensuring easy mocking and testing.
Speaking of mocking/testing, just a tip, but have a look at a tool called mockgen. You can have mocks for unit tests generated for your interfaces per-package by adding a single comment like this:
package server
import "your.project/common"
//go:generate go run github.com/golang/mock/mockgen -destination mocks/db_mock.go -package mocks your.project/server Database
type Database interface {
common.DB // embedded common interface
Insert(query string, vals ...interface{}) error
Delete(query, id string) error
}
Running go generate will spit out the mocks you can then import in your unit tests.
Other comments
Something I couldn't help noticing is that your database package declares a type called Database. Why is the type exported, and why does it have the same name as the package? A type called database.Database is a code smell: stuttering names should be avoided. Perhaps calling the handle Handle or Conn makes more sense: db.Handle or db.Conn is much more descriptive of what you're actually dealing with, and it's shorter to type.
The function to get the DB connection is also weirdly named (Start). It's a constructor function, so I think it'd make more sense to call it New, resulting in the code:
db := database.New()
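Putting both suggestions together, the package could look like this (a sketch; the names are suggestions, not requirements):

package database

// Conn represents a handle to the underlying connection pool.
type Conn struct {
    pool string
}

// New constructs a ready-to-use connection handle.
func New() *Conn {
    return &Conn{pool: "examplepool"}
}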
Related
I know this has been asked in various forms many times before but I just can't seem to implement what I'm learning in the way that I need. Any help is appreciated.
I have a series of exchanges which all implement roughly the same APIs. For example, each of them has a GetBalance endpoint. However, some have one or two unique things which need to be accessed within the functions. For example, exchange1 needs to use a client when calling its balance API, while exchange2 requires both the client variable as well as a clientFutures variable. This is an important note for later.
My background is normal OOP. Obviously Go is different in many ways, hence I'm getting tripped up here.
My current implementation and thinking is as follows:
In the exchanges package
type Balance struct {
asset string
available float64
unavailable float64
total float64
}
type Api interface {
GetBalances() []Balance
}
In the Binance package
type BinanceApi struct {
key string
secret string
client *binance.Client
clientFutures *futures.Client
Api exchanges.Api
}
func (v *BinanceApi) GetBalances() []exchanges.Balance {
// Requires v.client and v.clientFutures
return []exchanges.Balance{}
}
In the Kraken package
type KrakenApi struct {
key string
secret string
client *binance.Client
Api exchanges.Api
}
func (v *KrakenApi) GetBalances() []exchanges.Balance {
// Requires v.client
return []exchanges.Balance{}
}
In main.go
var exchange *Api
Now my thought was that I should be able to call something like exchange.GetBalances() and it would use the method from above. I would also need some kind of casting? I'm quite lost here. The exchange could either be Binance or Kraken; that gets decided at runtime. Some other code basically calls a GetExchange function which returns an instance of the required API object (already cast to either BinanceApi/KrakenApi).
I'm aware inheritance and polymorphism don't work like they do in other languages, hence my utter confusion. I'm struggling to know what needs to go where here. Go seems to require loads of boilerplate for what other languages do on the fly 😓
Using *exchanges.Api is quite weird. You want something that implements a given interface; what the underlying type is (whether its methods use pointer or value receivers) is not important, so use exchanges.Api instead.
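For example, a short sketch reusing the question's types (struct fields elided):

var exchange exchanges.Api // an interface value, not a pointer to an interface

exchange = &BinanceApi{ /* ... */ } // any type with the right methods can be assigned
balances := exchange.GetBalances()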
There is another issue, though. In golang, interfaces are implicit (sometimes referred to as duck-type interfaces). Generally speaking, this means that the interface is not declared in the package that implements it, but rather in the package that depends on a given interface. Some say that you should be liberal in terms of what values you return, but restrictive in terms of what arguments you accept. What this boils down to in your case, is that you'd have something like an api package, that looks somewhat like this:
package api
func NewKraken(args ...any) *KrakenExchange {
// ...
}
func NewBinance(args ...any) *BinanceExchange {
    // ...
}
then in your other packages, you'd have something like this:
package kraken // or maybe this could be an exchange package
type API interface {
GetBalances() []types.Balance
}
func NewClient(api API, otherArgs ...T) *KrakenClient {
    // ...
}
So when someone looks at the code for this Kraken package, they can instantly tell what dependencies are required, and what types it works with. The added benefit is that, should binance or kraken need additional API calls that aren't shared, you can go in and change the specific dependencies/interfaces, without ending up with one massive, centralised interface that is being used all over the place, but each time you only end up using a subset of the interface.
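Wiring this up at runtime might then look like the sketch below (cfg, key, and secret are hypothetical, and it assumes both exchange types implement GetBalances):

// somewhere in main, choosing the implementation at runtime
var ex kraken.API
switch cfg.Exchange {
case "kraken":
    ex = api.NewKraken(key, secret)
default:
    ex = api.NewBinance(key, secret)
}
client := kraken.NewClient(ex) // from here on, the concrete type no longer matters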
Yet another benefit of this approach is when writing tests. There are tools like gomock and mockgen, which allow you to quickly generate mocks for unit tests simply by doing this:
package foo
//go:generate go run github.com/golang/mock/mockgen -destination mocks/dep_mock.go -package mocks your/module/path/to/foo Dependency
type Dependency interface {
// methods here
}
Then run go generate and it'll create a mock object in your/module/path/to/foo/mocks that implements the desired interface. In your unit tests, import the mocks package, and you can do things like:
ctrl := gomock.NewController(t)
defer ctrl.Finish()
dep := mocks.NewMockDependency(ctrl) // mockgen names generated constructors NewMock<Interface>

dep.EXPECT().GetBalances().Times(1).Return(data)

k := kraken.NewClient(dep)
bal := k.Balances()
require.EqualValues(t, bal, data)
TL;DR
The gist of it is:
Interfaces are interfaces, don't use pointers to interfaces
Declare interfaces in the package that depends on them (i.e. the consumer side), not the implementation (provider) side.
Only declare methods in an interface if you are genuinely using them in a given package. Using a central, overarching interface makes this harder to do.
Having the dependency interface declared alongside the user makes for self-documenting code
Unit testing and mocking/stubbing is a lot easier to do, and to automate this way
Background
I have a servercore package which includes server struct and all core logic for sending/receiving messages from clients.
The server will operate with different flavours - e.g. EU, USA, AUS. Each flavour has its own set of distinct methods which can be invoked by the clients.
I would like to create separate packages which include (only) those methods. E.g. euhandles package.
The problem
These methods, in some cases, have to rely on the original server methods implemented in the servercore package.
How can this be designed in golang in an elegant fashion?
Potential Solutions
(1) simply move the methods into a separate package - doesn't work
The euhandles package cannot declare methods on the servercore.Server struct: Go forbids defining methods on types declared in another package.
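For reference, this is what the compiler rejects (import path illustrative):

package euhandles

import "myapp/servercore"

// compile error: cannot define new methods on non-local type servercore.Server
func (s *servercore.Server) HandleThat() {}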
(2) define functions in separate packages and then simply "register" them - doesn't work
Server.RegisterHandle("someEventName", euhandles.methodFromEu)
Problem - methodFromEu function will be unable to access any server methods.
(3) use embedding:
type serverEU struct { server *servercore.Server }
func (s *serverEU) HandleThat() {}
s := new(serverEU)
s.server.registerHandle("someEventName", s.HandleThat)
Problem - it becomes a bit cumbersome (extra layer added just to implement a few handles/methods), doesn't seem "clean".
(4) Dependency Injection
I just thought of this shortly after posting the question, adding for the sake of comprehensiveness:
// in euhandles:
func HandleThat(s *servercore.Server) { /* ... */ }

// elsewhere:
s.RegisterHandle("someEventName", euhandles.HandleThat)

// in servercore:
func (s *Server) RegisterHandle(name string, handleFunc func(*Server)) {
    s.handles[name] = handleFunc
}
Not sure how good/appropriate this is considered to be among Go-programmers.
Is there any idiomatic, clean way to separate events/handles from the core server?
The first thing I'd do is to use embedding, though without the additional indirection:
type ServerEU struct {
*servercore.Server
}
s := ServerEU{Server: &baseServer}
s.registerHandle("someEventName", s.HandleThat)
Another thing you can try is function pointers in the server:
type Server struct {
// stuff
LocaleSpecificFunc func(args)
}
And in the package:
func NewEUServer() *Server {
    s := Server{ /* initializers */ }
    s.LocaleSpecificFunc = func(args) {
        // Here, the LocaleSpecificFunc implementation can close over s
    }
    return &s
}
If you have to pass HandleThat to registerHandle() at some point, HandleThat is not an integral part of a server. So your DI option (4) makes more sense than embedding actually.
I have two packages named client and worker. I want to share the same ssdb, mysql and redis connections between both packages.
A similar problem I'm facing is how to share auth between these two packages.
app
-> client pkg
-> worker pkg
main.go (contains auth as global variable)
Can anyone please suggest me the best way to implement these two things ?
There's lots of ways to do this and each approach has pros and cons and it really depends on what you are doing. One easy way to do this is to have a third package with the DB connection and import it from the other two packages and from the main package.
app
-> client pkg // import "app/global"
-> worker pkg // import "app/global"
-> global pkg // Contains ssdb and auth as global variables
main.go
Another approach that might be better depending on what you are doing is to have the functions on the client and worker packages accept a db connection as a parameter and from main.go initialize the db and pass it as a parameter when you call a function that needs it.
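A minimal sketch of that parameter-passing variant (using database/sql's *sql.DB as a stand-in for whichever connection type you share):

// worker/worker.go
package worker

import "database/sql"

// Process receives the shared connection explicitly instead of
// reaching for a global variable.
func Process(db *sql.DB, id string) error {
    // ... use db here
    return nil
}

main.go then opens the connection once and passes it to every call that needs it, e.g. worker.Process(db, "42").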
It depends on what you are doing but for big projects it's easier to maintain if you just have one package doing all your db operations and you access it from all the places you need to do something. That way you don't have to worry about this issue because only one package has to worry about the db connection even if several packages use it.
Edit:
The problem with global variables is that they can be modified at the same time from everywhere in your project and it can introduce race conditions, but there is nothing wrong in using them when this is not an issue.
In this case, you are just setting the value once, when you connect to the DB and then just use the object.
You mentioned you want to have another package for the auth; I recommend having just one package that holds everything you need to access from more than one place, in this case ssdb and auth.
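A sketch of what that shared package might contain (the concrete types are placeholders):

// global/global.go
package global

import "database/sql"

// AuthInfo is a hypothetical stand-in for whatever auth state you share.
type AuthInfo struct {
    Token string
}

// Set once from main at startup, then only read by client and worker.
var (
    DB   *sql.DB
    Auth *AuthInfo
)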
Here's one approach that is not always obvious to new Go developers, is a little elbow grease to implement but not terribly
complex, and usually works fine in beginner apps:
app
client // imports storage
worker // imports storage
config // all environment-related config goes here
storage // storage-engine-generic interface to the packages below it
ssdb // all ssdb-specific code here
mysql // all mysql-specific code here
redis // ditto
It uses package variables. If you're paranoid about an accidental write to an exported package variable, you can avoid the problem by using unexported package variables. Take advantage of the
limited definition of Exported Identifiers in Go (see language
specification).
In main, call
config.Init(configfile)
storage.Init()
Define your config.Init function to read the config file and set package variables to the connection information for your
databases. If you're using unexported package variables, then allow public read-only access through exported functions. Otherwise you may be able to skip the functions, depending on what other features you want.
In storage, your Init function calls
ssdb.Init()
mysql.Init()
redis.Init()
Then also in storage you'll have public functions that client and server use that aren't specific to a storage engine, such as
func GetImage(id string) []byte {
return mysql.GetImage(id)
}
or whatever is appropriate for your application. The storage level of abstraction may or may not be worth it for you depending on how you change your app in the future. You decide whether it's worth investing in it.
In mysql package, you import config, and you have something like
var db *sql.DB

func Init() {
    getDb()
}

func getDb() *sql.DB {
    if db == nil { // or something
        config.Log.Println("Opening db connection to mysql")
        var err error
        db, err = sql.Open("mysql", config.MysqlConnectionString()) // assign to the package variable; := here would shadow it
        // do something with err, possibly have a retry loop
    }
    return db
}

func GetImage(id string) []byte {
    db := getDb()
    // ...
}
The functions in the mysql package can use the db unexported package variable, but other packages cannot.
Using an unexported package variable with exported-function read-only access is not a terrible practice or particularly complex. That said, it's usually unnecessary. If db were the exported package variable Db, would you suddenly type
mysql.Db, _ = sql.Open("mysql", "LEEERRROOYYYYY!!!!")
in your client code (and also decide to import mysql and sql to do it) and then deploy to production? Why would you be more likely to do that than to intentionally break any other part of your code?
Note that if you just typed
mysql.Db = "LEEERRROOYYYYYY!!!!"
your application would fail to compile because of a type mismatch.
I've been looking into golang in order to build a web app, I like the language and everything, but I'm having trouble wrapping my head around the concept of structure in golang. It seems it pretty much forces me to have no file structure, no folders, no division, no separation of concerns. Is there any way to organize the .go files in a way that I'm not seeing? So far file structuring has been a headache and it's the only bad experience I've had with the language. Thank you!
You are partially right. Go does not enforce anything regarding file and package structure, except that it forbids circular dependencies. IMHO, this is a good thing, since you have the freedom to choose what best suits you.
However, it puts the burden on you to decide what is best. I have tried a few approaches, and depending on what I am doing (e.g. library, command line tool, service) I believe different approaches work best.
If you are creating only a command line tool, let the root package (the root of your repository) be main. If it is a small tool, that is all you need. Your command line tool might grow, though, so you may want to separate some code into packages of its own, which can, but do not have to, live in the same repository.
If you are creating library, do the same, except that package name will be name of your library, not main.
If you need combination (something that is useful both as the library and command line tool), I would go with putting library code (everything public for the library) in VCS root, with potential sub-packages and cmd/toolname for your binary.
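That combined layout looks roughly like this (package names illustrative):

|
|   (VCS root: the library package, with everything public)
|
|-- subpkg (optional library sub-packages)
|
|-- cmd
|   |
|   |--> toolname (package main, builds the binary)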
When it comes to web services, I found it is most practical to follow these guidelines. It is best to read the entire blog post, but in short: define your domain in the VCS root, create cmd/app (or multiple) as command line entry points, and create one package per dependency (e.g. memcache, database, http, etc.). Your sub-packages never depend on each other explicitly; they only share domain definitions from the root. It takes some getting used to and I am still adapting it to my use case, but so far it looks promising.
As @del-boy said, it depends on what you want to do. I went over this problem multiple times, but what suited me best when developing a golang web app was to divide my packages by dependency:
- myproject
-- cmd
--- main.go
-- http
--- http.go
-- postgres
--- postgres.go
-- mongodb
--- mongodb.go
-- myproject.go
myproject.go will contain the Interfaces and Structs that contain the main domain or business models
For example you can have inside myproject.go
type User struct {
MongoID bson.ObjectId `bson:"_id,omitempty"`
PostgresID string
Username string
}
and an Interface like this
type UserService interface {
GetUser(username string) (*User, error)
}
Now in your http package you will handle exposing your api endpoints
//Handler represents an HTTP API interface for our app.
type Handler struct {
Router *chi.Mux // you can use whatever router you like
UserService myproject.UserService
}
func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // this is just a wrapper for the Router's ServeHTTP
    h.Router.ServeHTTP(w, r)
}

func (h *Handler) someHandler(w http.ResponseWriter, r *http.Request) {
    // get the username from the request
    // ...
    user, err := h.UserService.GetUser(username)
}
in your postgres.go you can have a struct that implements UserService
type PostgresUserService struct {
DB *sql.DB
}
and then you implement the service
func (s *PostgresUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method
}
and the same thing can be done with mongodb
type MongoUserService struct {
Session *mgo.Session
}
func (s *MongoUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method
}
Now in your cmd/main.go you can have something like this
func main() {
    postgresDB, err := postgres.Connect()
    // handle err
    mongoSession, err := mongodb.Connect()
    // handle err

    postgresService := &postgres.PostgresUserService{DB: postgresDB}
    mongoService := &mongodb.MongoUserService{Session: mongoSession}

    // then pass one of your services to your http handler;
    // the api will act based on the underlying service you passed
    myHandler := http.Handler{}
    myHandler.UserService = postgresService
    // or: myHandler.UserService = mongoService
}
If you change the underlying store, you only have to change the wiring here; nothing else in the app has to change.
This design is heavily inspired by this blog; I hope you find it helpful.
At the moment I have a package store with the following content:
package store
var (
db *Database
)
func Open(url string) error {
// open db connection
}
func FindAll(model interface{}) error {
// return all entries
}
func Close() {
// close db connection
}
This allows me to use store.FindAll from other packages after I have done store.Open in main.go.
However as I saw so far most packages prefer to provide a struct you need to initialize yourself. There are only few cases where this global approach is used.
What are downsides of this approach and should I avoid it?
You can't instantiate connections to 2 storages at once.
You can't easily mock out storage in unit tests of dependent code using convenient tools like gomock.
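For comparison, a struct-based version of the same package might look like this (a sketch keeping the question's method names):

package store

type Store struct {
    db *Database
}

// Open returns a handle instead of mutating package-level state.
func Open(url string) (*Store, error) {
    // open db connection
    return &Store{ /* ... */ }, nil
}

func (s *Store) FindAll(model interface{}) error {
    // return all entries
    return nil
}

func (s *Store) Close() error {
    // close db connection
    return nil
}

With this shape you can open two stores side by side, and dependent code can accept a small interface that a mock can satisfy.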
The standard http package has a ServeMux for generic use cases, but also has one default instance of ServeMux called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables) for convenience, so that when you call http.HandleFunc it registers the handler on the default mux. You can find the same approach in log and many other packages. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case: users need to call Open regardless of the default database, so a default instance would not really help; it would actually make things less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
And, also there are semantic problems: what should happen if someone calls Open twice:
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?