Can I subclass and redefine a method in Golang?

I'm using a GitHub client library that lets me call GitHub API methods more easily.
The library allows me to provide my own *http.Client when I initialize it:
httpClient := &http.Client{}
githubClient := github.NewClient(httpClient)
It works fine, but now I need something else: I want to customize the client so that every request (i.e. every call to the Do method) gets a custom header added.
I've read a bit about embedding and this is what I've tried so far:
package trackerapi

import (
    "net/http"
)

type MyClient struct {
    *http.Client
}

func (my *MyClient) Do(req *http.Request) (*http.Response, error) {
    req.Header.Set("cache-control", "max-stale=3600")
    return my.Client.Do(req)
}
But the compiler does not let me use my custom MyClient in place of the default one:
httpClient := &trackerapi.MyClient{}
// ERROR: Cannot use httpClient (type *MyClient) as type
//*http.Client.
githubClient := github.NewClient(httpClient)
I'm a bit of a golang newbie, so my question is: Is this the right way to do what I want, and if not, what's the recommended approach?

Can I subclass ... in Golang?
Short answer: No. Go is not object-oriented in the classical sense: it has no classes, so there is no subclassing.
Longer answer:
You're on the right track with embedding, but you won't be able to substitute your custom client for anything that expects a concrete *http.Client. Substitution is what Go interfaces are for, but the standard library doesn't use an interface here (it does for some things, where it makes sense).
Another approach, which may work depending on your exact needs, is to use a custom transport rather than a custom client. This does use an interface: http.Client has a Transport field of interface type http.RoundTripper, so you can write a custom RoundTripper that adds the necessary headers and assign it to a plain *http.Client.
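For illustration, here is a minimal sketch of that transport-based approach; headerTransport is a made-up name for this example, and it simply wraps http.DefaultTransport, setting the header on a clone of each request before delegating:

package trackerapi

import "net/http"

// headerTransport wraps another RoundTripper and adds a header to every request.
type headerTransport struct {
    base http.RoundTripper
}

func (t *headerTransport) RoundTrip(req *http.Request) (*http.Response, error) {
    // Clone before modifying: a RoundTripper should not mutate the caller's request.
    clone := req.Clone(req.Context())
    clone.Header.Set("cache-control", "max-stale=3600")
    return t.base.RoundTrip(clone)
}

Because the result is still a plain *http.Client, it can be passed to github.NewClient unchanged:

httpClient := &http.Client{Transport: &headerTransport{base: http.DefaultTransport}}
githubClient := github.NewClient(httpClient)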

Related

Interface management in Go

I know this has been asked in various forms many times before but I just can't seem to implement what I'm learning in the way that I need. Any help is appreciated.
I have a series of exchanges which all implement roughly the same APIs. For example, each of them has a GetBalance endpoint. However, some have one or two unique things which need to be accessed within the functions. For example, exchange1 needs to use a client when calling its balance API, while exchange2 requires both the client variable and a clientFutures variable. This is an important note for later.
My background is normal OOP. Obviously Go is different in many ways, hence I'm getting tripped up here.
My current implementation and thinking is as follows:
In exchanges module
type Balance struct {
    asset       string
    available   float64
    unavailable float64
    total       float64
}

type Api interface {
    GetBalances() []Balance
}
In Binance module
type BinanceApi struct {
    key           string
    secret        string
    client        *binance.Client
    clientFutures *futures.Client
    Api           exchanges.Api
}

func (v *BinanceApi) GetBalance() []exchanges.Balance {
    // Requires v.client and v.clientFutures
    return []exchanges.Balance{}
}
In Kraken module
type KrakenApi struct {
    key    string
    secret string
    client *binance.Client
    Api    exchanges.Api
}

func (v *KrakenApi) GetBalance() []exchanges.Balance {
    // Requires v.client
    return []exchanges.Balance{}
}
In main.go
var exchange *Api
Now my thought was that I should be able to call something like exchange.GetBalance() and it would use the GetBalance function from above. Would I also need some kind of casting? I'm quite lost here. The exchange could be either Binance or Kraken; that gets decided at runtime. Some other code basically calls a GetExchange function which returns an instance of the required API object (already cast to either BinanceApi or KrakenApi).
I'm aware inheritance and polymorphism don't work like they do in other languages, hence my utter confusion. I'm struggling to know what needs to go where here. Go seems to require loads of annoying code for what other languages do on the fly 😓
Using *exchanges.Api (a pointer to an interface) is quite unusual. What you want is something that implements a given interface; whether the underlying type is a pointer or a value is not important, so use exchanges.Api instead.
There is another issue, though. In Go, interfaces are satisfied implicitly (sometimes referred to as duck typing). Generally speaking, this means that the interface is not declared in the package that implements it, but rather in the package that depends on the interface. Some say that you should be liberal in what values you return, but restrictive in what arguments you accept. What this boils down to in your case is that you'd have something like an api package that looks somewhat like this:
package api

func NewKraken(args ...any) *KrakenExchange {
    // ...
}

func NewBinance(args ...any) *BinanceExchange {
    // ...
}
then in your other packages, you'd have something like this:
package kraken // or maybe this could be an exchange package

type API interface {
    GetBalances() []types.Balance
}

func NewClient(api API, otherArgs ...T) *KrakenClient {
    // ...
}
So when someone looks at the code for this Kraken package, they can instantly tell what dependencies are required, and what types it works with. The added benefit is that, should binance or kraken need additional API calls that aren't shared, you can go in and change the specific dependencies/interfaces, without ending up with one massive, centralised interface that is being used all over the place, but each time you only end up using a subset of the interface.
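To connect this back to the question's runtime selection: no casting is needed, because interface satisfaction is implicit. A rough sketch (the import paths and the EXCHANGE environment variable are made up, and it assumes both constructors return types whose GetBalances methods share the same signature):

package main

import (
    "fmt"
    "os"

    "example.com/project/api"
    "example.com/project/types"
)

// Balances lists only what main actually needs from an exchange.
type Balances interface {
    GetBalances() []types.Balance
}

func main() {
    // Pick the concrete exchange at runtime; both types satisfy Balances implicitly.
    var ex Balances
    if os.Getenv("EXCHANGE") == "binance" {
        ex = api.NewBinance()
    } else {
        ex = api.NewKraken()
    }
    fmt.Println(ex.GetBalances())
}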
Yet another benefit of this approach is when writing tests. There are tools like gomock and mockgen, which allow you to quickly generate mocks for unit tests simply by doing this:
package foo
//go:generate go run github.com/golang/mock/mockgen -destination mocks/dep_mock.go -package mocks your/module/path/to/foo Dependency
type Dependency interface {
    // methods here
}
Then run go generate and it'll create a mock object in your/module/path/to/foo/mocks that implements the desired interface. In your unit tests, import the mocks package, and you can do things like:
ctrl := gomock.NewController(t)
dep := mocks.NewMockDependency(ctrl)
defer ctrl.Finish()

dep.EXPECT().GetBalances().Times(1).Return(data)

k := kraken.NewClient(dep)
bal := k.Balances()
require.EqualValues(t, bal, data)
TL;DR
The gist of it is:
Interfaces are interfaces; don't use pointers to interfaces.
Declare interfaces in the package that depends on them (i.e. the user), not on the implementation (provider) side.
Only declare methods in an interface if you are genuinely using them in a given package. Using a central, overarching interface makes this harder to do.
Having the dependency interface declared alongside the user makes for self-documenting code.
Unit testing and mocking/stubbing is a lot easier to do, and to automate, this way.

How to design event-driven API with separate packages for server and event handles?

Background
I have a servercore package which includes the server struct and all the core logic for sending/receiving messages from clients.
The server will operate with different flavours - e.g. EU, USA, AUS. Each flavour has its own set of distinct methods which can be invoked by the clients.
I would like to create separate packages which include (only) those methods. E.g. euhandles package.
The problem
These methods, in some cases, have to rely on the original server methods implemented in servercore package.
How can this be designed in golang in an elegant fashion?
Potential Solutions
(1) simply move the methods into a separate package - doesn't work
The euhandles package cannot define methods on the servercore.Server struct. This is prohibited in Go (you can't define methods on types declared in another package).
(2) define functions in separate packages and then simply "register" them - doesn't work
Server.RegisterHandle("someEventName",euhandles.methodFromEu)
Problem - methodFromEu function will be unable to access any server methods.
(3) use embedding:
type serverEU struct {
    server *servercore.Server
}

func (s *serverEU) HandleThat() {}

s := new(serverEU)
s.server.registerHandle("someEventName", s.HandleThat)
Problem - it becomes a bit cumbersome (extra layer added just to implement a few handles/methods), doesn't seem "clean".
(4) Dependency Injection
I just thought of this shortly after posting the question, adding for the sake of comprehensiveness:
// in euhandles:
func HandleThat(s *servercore.Server) { /* ... */ }

// elsewhere:
s.registerHandle("someEventName", euhandles.HandleThat)

// in servercore:
func (s *Server) registerHandle(name string, handleFunc func(*Server)) {
    s.handles[name] = handleFunc
}
Not sure how good/appropriate this is considered to be among Go-programmers.
Is there any idiomatic, clean way to separate events/handles from the core server?
The first thing I'd do is to use embedding, though without the additional indirection:
type ServerEU struct {
    *servercore.Server
}

s := ServerEU{Server: &baseServer}
s.registerHandle("someEventName", s.HandleThat)
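With this embedding, handlers defined on ServerEU can call the base server's exported methods directly via method promotion. A quick sketch (Broadcast is a hypothetical servercore method, used only for illustration):

func (s *ServerEU) HandleThat() {
    // exported methods of the embedded *servercore.Server are promoted onto ServerEU
    s.Broadcast("event handled by the EU flavour")
}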
Another thing you can try is function values stored as fields in the server:
type Server struct {
    // stuff
    LocaleSpecificFunc func(args)
}
And in the package:
func NewEUServer() *Server {
    s := Server{ /* initializers */ }
    s.LocaleSpecificFunc = func(args) {
        // Here, the LocaleSpecificFunc implementation can use s
    }
    return &s
}
If you have to pass HandleThat to registerHandle() at some point, then HandleThat is not an integral part of a server, so your DI option (4) actually makes more sense than embedding.
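For completeness, here is a minimal sketch of what that dependency-injection variant could look like as compiling Go; the exported RegisterHandle, the handles map, and the import path are assumptions for illustration, not the actual servercore API:

// in servercore:
type Server struct {
    handles map[string]func(*Server)
}

func (s *Server) RegisterHandle(name string, h func(*Server)) {
    if s.handles == nil {
        s.handles = make(map[string]func(*Server))
    }
    s.handles[name] = h
}

// in euhandles:
import "example.com/project/servercore"

func HandleThat(s *servercore.Server) {
    // free to call any exported servercore.Server method here
}

// wiring, e.g. in main:
srv := &servercore.Server{}
srv.RegisterHandle("someEventName", euhandles.HandleThat)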

Does golang foment no file structure?

I've been looking into golang in order to build a web app, I like the language and everything, but I'm having trouble wrapping my head around the concept of structure in golang. It seems it pretty much forces me to have no file structure, no folders, no division, no separation of concerns. Is there any way to organize the .go files in a way that I'm not seeing? So far file structuring has been a headache and it's the only bad experience I've had with the language. Thank you!
You are partially right. Go does not enforce anything regarding file and package structure, except that it forbids circular dependencies. IMHO, this is a good thing, since you have the freedom to choose what best suits you.
However, it puts the burden on you to decide what is best. I have tried a few approaches, and depending on what I am doing (e.g. library, command-line tool, service) I believe different approaches work best.
If you are creating only a command-line tool, let the root package (the root of your repository) be main. If it is a small tool, that is all you need. Your command-line tool might grow, in which case you may want to separate some parts into their own packages, which can, but do not have to, live in the same repository.
If you are creating a library, do the same, except that the package name will be the name of your library, not main.
If you need a combination (something useful both as a library and as a command-line tool), I would put the library code (everything public to the library) in the VCS root, with sub-packages as needed, and cmd/toolname for your binary.
When it comes to web services, I found it most practical to follow these guidelines. It is best to read the entire blog post, but in short: define your domain in the VCS root, create cmd/app (or several) as command-line entry points, and create one package per dependency (e.g. memcache, database, http, etc.). Your sub-packages never depend on each other explicitly; they only share domain definitions from the root. It takes some getting used to and I am still adapting it to my use case, but so far it looks promising.
As #del-boy said, it depends on what you want to do. I went over this problem multiple times, but what suited me best when developing a Go web app was to divide the packages by dependency:
- myproject
-- cmd
--- main.go
-- http
--- http.go
-- postgres
--- postgres.go
-- mongodb
--- mongodb.go
-- myproject.go
myproject.go will contain the interfaces and structs for the main domain or business models.
For example, inside myproject.go you can have:
type User struct {
    MongoID    bson.ObjectId `bson:"_id,omitempty"`
    PostgresID string
    Username   string
}
and an Interface like this
type UserService interface {
    GetUser(username string) (*User, error)
}
Now, in your http package, you handle exposing your API endpoints:
// Handler represents an HTTP API interface for our app.
type Handler struct {
    Router      *chi.Mux // you can use whatever router you like
    UserService myproject.UserService
}

func (h *Handler) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    // this is just a wrapper around the Router's ServeHTTP
    h.Router.ServeHTTP(w, r)
}

func (h *Handler) someHandler(w http.ResponseWriter, r *http.Request) {
    // get the username from the request
    // ...
    user, err := h.UserService.GetUser(username)
    // handle err and use user to write the response
}
in your postgres.go you can have a struct that implements UserService
type PostgresUserService struct {
    DB *sql.DB
}
and then you implement the service
func (s *PostgresUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method using s.DB
    return nil, nil
}
and the same thing can be done with mongodb
type MongoUserService struct {
    Session *mgo.Session
}
func (s *MongoUserService) GetUser(username string) (*myproject.User, error) {
    // implement the method using s.Session
    return nil, nil
}
Now in your cmd/main.go you can have something like this
func main() {
    postgresDB, err := postgres.Connect()
    // handle err
    mongoSession, err := mongo.Connect()
    // handle err

    // GetUser has pointer receivers, so take the address to satisfy myproject.UserService
    postgresService := &postgres.PostgresUserService{DB: postgresDB}
    mongoService := &mongo.MongoUserService{Session: mongoSession}

    // then pass one of the services to your http handler;
    // your API will act based on the underlying service you passed
    myHandler := http.Handler{}
    myHandler.UserService = postgresService
    _ = mongoService // or wire this one in instead
}
If you change the underlying store, you only have to change the wiring here; nothing else in the app needs to change.
This design is heavily inspired by this blog post; I hope you find it helpful.

How come I can use a package's method when it says it belongs to a different type?

Sorry for the horrible horrible title, first off if anybody can offer an edit for a better title after reading my question please, submit it, I'm pretty bad with my terminology at the moment.
So, simple question:
Reading through the net/http package on how to make http.Get requests and it says all I have to do is
resp, err := http.Get(blah)
Ok fair enough so scrolling down the list to see what parameters this Get function took, I couldn't find it directly under the functions of the http package
So scrolling down I find a Get method under type Client
So how come I don't have to first create an http.Client and then make my Get request on that? Just a little confused. Thanks for any help.
Those are two different functions. In one case, http.Get, it's defined at the package level; this works much like a static method in C# or Java. In the other, it has a receiver of type *http.Client, so it's more like an instance method on that type in C# or Java. The type http.Client is, as you'd expect, defined in the same package.
package level get:
http://golang.org/pkg/net/http/#Get
func Get(url string) (resp *Response, err error)
//^ absence of receiver = package scoped
//^ uppercase method name so it is 'exported' which is about like public
client receiver get:
http://golang.org/pkg/net/http/#Client.Get
func (c *Client) Get(url string) (resp *Response, err error)
//^ this is the receiver
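In practice, the package-level function is a convenience wrapper around a default client (http.Get is documented as a wrapper around DefaultClient.Get), so the two calls below do the same thing; the URL and timeout here are only illustrative:

// package-level helper, uses http.DefaultClient internally
resp, err := http.Get("https://example.com")

// the same request through an explicit client you configure yourself
client := &http.Client{Timeout: 10 * time.Second}
resp, err = client.Get("https://example.com")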

Should I avoid package singletons in golang?

At the moment I have a package store with the following content:
package store
var (
    db *Database
)

func Open(url string) error {
    // open db connection
}

func FindAll(model interface{}) error {
    // return all entries
}

func Close() {
    // close db connection
}
This allows me to use store.FindAll from other packages after I have done store.Open in main.go.
However, from what I've seen so far, most packages prefer to provide a struct that you need to initialize yourself. There are only a few cases where this global approach is used.
What are the downsides of this approach, and should I avoid it?
You can't open connections to two different storages at once.
You can't easily mock out the storage in unit tests of dependent code using convenient tools like gomock.
The standard http package has a ServeMux for generic use cases, but it also has one default instance of ServeMux, called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables), for convenience, so that when you call http.HandleFunc it registers the handler on the default mux. You can find the same approach used in log and many other packages. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case, since users need to call Open anyway, even with a default database. Because of that, a default instance would not really help; it would actually make things less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
There are also semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
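For comparison, a minimal sketch of the struct-based alternative being suggested; the DB wrapper shape and the use of database/sql here are illustrative assumptions, not the original package:

package store

import "database/sql"

// DB wraps one connection; callers create as many as they need,
// and dependent code can accept it behind a small interface for mocking.
type DB struct {
    conn *sql.DB
}

func Open(url string) (*DB, error) {
    // assumes a postgres driver is registered via a blank import elsewhere
    conn, err := sql.Open("postgres", url)
    if err != nil {
        return nil, err
    }
    return &DB{conn: conn}, nil
}

func (d *DB) FindAll(model interface{}) error {
    // query all entries into model
    return nil
}

func (d *DB) Close() error {
    return d.conn.Close()
}

With this shape, the earlier d := store.Open(...) example works (with an added error return), and two different databases can be opened side by side.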
