Is there a way to create method hooks for structs in Go?

I want to create beforeSave and afterSave method hooks for my Go structs. How can I achieve this?
type Person struct {
    FirstName string
    LastName  string
}

func (p *Person) Save() {
    // call beforeSave()
    // Save person data
    // call afterSave()
}

func (p *Person) Update() {
    // call beforeUpdate()
    // Update person data
    // call afterUpdate()
}

type Order struct {
    Number bson.ObjectId
    Items  []Item
}

func (o *Order) Save() {
    // call beforeSave()
    // Save order data
    // call afterSave()
}

func (o *Order) Update() {
    // call beforeUpdate()
    // Update order data
    // call afterUpdate()
}
For any struct I create as a model, I want beforeSave() and afterSave() hooks to be called automatically, with the ability to override them if necessary.

Packages like gorm use callback hooks heavily. But if you are writing your own engine (for some specific logic), using interfaces can help greatly (sample).
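As a minimal sketch of that interface approach (the BeforeSaver/AfterSaver names and the saveModel helper are illustrative, not from gorm or any other package): the persistence code accepts any model and uses type assertions to call the hooks only when the model provides them.

// Hypothetical hook interfaces; a model implements only the ones it needs.
type BeforeSaver interface {
    BeforeSave() error
}

type AfterSaver interface {
    AfterSave() error
}

// saveModel sketches a generic save routine that fires the hooks
// automatically whenever the model implements them.
func saveModel(m interface{}) error {
    if h, ok := m.(BeforeSaver); ok {
        if err := h.BeforeSave(); err != nil {
            return err
        }
    }
    // ... persist m here ...
    if h, ok := m.(AfterSaver); ok {
        return h.AfterSave()
    }
    return nil
}

// Person opts into the before-save hook only; Order could do the same.
func (p *Person) BeforeSave() error {
    // validation, timestamps, etc.
    return nil
}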

https://medium.com/@matryer/the-http-handler-wrapper-technique-in-golang-updated-bc7fbcffa702
I would only add to that article, which I think answers your question well, that you can go further with pipelines: methods that return the receiver they are bound to. As in the article, this lets you interpose anything between two processing steps; you can use it to build tees and journalling/logging systems, and it goes a long way towards compensating for Go's rather primitive error handling.
I designed a number of related byte-slice processing libraries that could be chained easily without breaking the line, although more than about 3-4 calls in a line can get confusing, and you can find yourself wanting to pass things around in a DAG-style pattern, at which point I think channels make more sense. A sketch of the chaining idea follows.
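To make the pipeline idea concrete, here is a small hedged sketch (the Pipe type is illustrative, not one of the libraries mentioned above): each step returns the receiver so calls chain on one line, and the first error is carried along and checked once at the end (this needs the io package).

// Pipe wraps a byte slice and the first error encountered along the chain.
type Pipe struct {
    data []byte
    err  error
}

func NewPipe(data []byte) *Pipe { return &Pipe{data: data} }

// Map applies f unless an earlier step has already failed.
func (p *Pipe) Map(f func([]byte) ([]byte, error)) *Pipe {
    if p.err == nil {
        p.data, p.err = f(p.data)
    }
    return p
}

// Tee copies the current bytes to w (journalling/logging) and keeps going.
func (p *Pipe) Tee(w io.Writer) *Pipe {
    if p.err == nil {
        _, p.err = w.Write(p.data)
    }
    return p
}

func (p *Pipe) Result() ([]byte, error) { return p.data, p.err }

// Usage (decompress, parse and logFile are made-up names):
// result, err := NewPipe(input).Map(decompress).Tee(logFile).Map(parse).Result()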

Related

A repository/store for database as interface or per table interface?

What I designed first was to have a Store interface as follows:
// store.go
type Store interface {
    CreateUser(user model.User) (string, error)
    GetProfile(userId string) (model.User, error)
    CreateHouse(user model.House) (string, error)
}
And in another file, mongo_store.go, its implementation code:
type mongoStore struct {
    store *mongo.Client
}

func (mc *mongoStore) CreateUser(user model.User) (string, error) {
}
// And so on...
In mongo_store.go I also have a function that returns an instance of the store:
func NewMongoDBStore() Store {
    // Some code to connect to MongoDB and finally:
    s := &mongoStore{
        store: client,
    }
    return s
}
I've gone this way to abstract away the DB layer, so in code we pass the store around and call, say, CreateUser on it.
My team members objected and suggested creating a Store interface per table instead, so we would have a UserStore interface with its methods, a HouseStore with its own methods, and so on.
My first question: is it best practice to change the code this way? I could not come up with a good argument to reject their change request. The argument for it is that this way we mock less code in tests, and the interface is not polluted by having every DB method in one place.
My second question: if we go with the second approach, how should NewMongoDBStore return the different store types? Instead of Store as the return type, we would have to have different store types like UserStore, HouseStore, etc.
I always try to stick to one rule when designing new interfaces in Go: keep interfaces as small as possible. You can see that the stdlib also tries to follow that rule; see for example fmt.Stringer, http.Handler or json.Marshaler. Look how the json library even separates json.Marshaler and json.Unmarshaler (same for io.Reader and io.Writer), even though they seem very closely connected.
Coming back to your example, I think your team makes a good point - I would go for separating the storage interfaces. The only situation in which I wouldn't do that is if you are sure that the interface will never expand and will always stick to this very limited number of methods. But I think that is very unlikely for storage-like interfaces. For example, in the near future you might want to add finer-grained filtering methods, or e.g. a method to insert storage objects in a batch.
In my opinion you can only benefit from separating the interfaces and here is why:
It's true that it is easier to mock an interface with 1-2 methods than an interface with, let's say, 10 methods.
It's always better to separate functionality into smaller pieces, as you may not need all of it at once in every place. To give you a better picture, you can have one service which uses your UserStore and HouseStore implementations, but also a second service that doesn't need a HouseStore and only uses a UserStore implementation. Thanks to that, it is much easier to mock the second service (as it uses only a UserStore), and if you later add methods to the HouseStore there is no way it could affect the second service, since it knows nothing about that interface (see the sketch below).
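A minimal sketch of that point, assuming UserStore keeps the GetProfile method from the original Store interface; NotificationService is a made-up second service, not from the question:

// NotificationService depends only on UserStore; tests only need a tiny
// fake for UserStore, and changes to HouseStore cannot affect it.
type NotificationService struct {
    users UserStore
}

func (s *NotificationService) NotifyUser(userId string) error {
    user, err := s.users.GetProfile(userId)
    if err != nil {
        return err
    }
    // ... send the notification to user ...
    _ = user
    return nil
}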
I think the above answers your first question. Coming to the second question, I think you can solve it in two ways:
The first way is something I usually do. You can simply create separate implementations for separate interfaces. So, following your example, if you have a file store.go containing the interfaces:
type UserStore interface {
    CreateUser(user model.User) (string, error)
    // Rest of the methods ...
}

type HouseStore interface {
    CreateHouse(house model.House) (string, error)
    // Rest of the methods ...
}
I would make a user_mongo_store.go with the MongoDB implementation for the UserStore ...
type userMongoStore struct {
    store *mongo.Client
}

func (s *userMongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func NewUserMongoStore() UserStore {
    // Some code to connect to MongoDB and finally:
    s := &userMongoStore{
        store: client,
    }
    return s
}

// Rest of the UserStore methods implementations ...
... and I would also make a house_mongo_store.go file with the MongoDB implementation for the HouseStore:
type houseMongoStore struct {
    store *mongo.Client
}

func (s *houseMongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

func NewHouseMongoStore() HouseStore {
    // Some code to connect to MongoDB and finally:
    s := &houseMongoStore{
        store: client,
    }
    return s
}

// Rest of the HouseStore methods implementations ...
You could ask whether it will not feel inconvenient to keep the two MongoDB storage implementations separated, as they could contain the same MongoDB-related operations. The answer to that question is no: you can always create e.g. a mongo_store.go file to hold the common functions shared by all the MongoDB storage implementations.
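For illustration, such a shared file could hold the connection helper used by both constructors. The connect name, URI parameter and timeout below are assumptions; the calls are from the official go.mongodb.org/mongo-driver package, which matches the *mongo.Client field used above.

// mongo_store.go: helpers shared by every MongoDB-backed store.

// connect dials MongoDB once; NewUserMongoStore and NewHouseMongoStore can
// both call it instead of repeating the connection logic.
func connect(uri string) (*mongo.Client, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    return mongo.Connect(ctx, options.Client().ApplyURI(uri))
}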
The only disadvantage I can see here is a bit more code overall, but in the end it gives you much cleaner, better-separated and more modular code.
The second way, which I would recommend less, is to use what is (in my opinion) a very powerful Go feature: you don't declare that a struct implements an interface (unlike in e.g. Java); you just implement all of the interface's methods and the struct can then be used as an implementation of each of those interfaces. In your case you could stick to the single mongoStore struct and make it implement the methods of both the UserStore and the HouseStore interfaces. You would end up with something like this:
type mongoStore struct {
    store *mongo.Client
}

func (s *mongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func (s *mongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

// Rest of the UserStore and HouseStore
// interfaces methods implementations ...
But this solution leaves us with a problem: how to write a constructor for the UserStore and HouseStore implementations. In this situation you could either export the mongoStore struct and use it directly as both the UserStore and the HouseStore implementation, or, which looks a little more exotic but is still valid code, write a function that returns this single struct as both implementations, e.g.:
func NewMongoStores() (UserStore, HouseStore) {
    s := &mongoStore{
        store: client,
    }
    return s, s
}
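A caller (inside some setup function) then wires both implementations up from that single call, for example:

userStore, houseStore := NewMongoStores()
userId, err := userStore.CreateUser(u) // u is a model.User value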
I think I gave you some options, but to sum up, I would encourage you to keep your interfaces and their implementations separated.

How to "force" Golang func to take a new instance of a struct

So a struct holds data that could get mutated. Is there some trick or technique in Golang that can tell a func that it must accept a new instance of a struct? In other words, I want to avoid, as far as possible, reusing data that may have been mutated beforehand or may get mutated during the func's lifetime. (I could avoid mutating stuff, but other devs on my team might not get the memo.)
To illustrate:
type CMRequest struct {
    Endpoint string
    Method   string
}

func (cmreq CMRequest) Run() (res *goreq.Response) {
    // this could mutate cmreq
}
Obviously Run() could mutate the data in cmreq, so I am wondering if there is a good pattern to force the creation of fresh data every time. The only thing I can think of is to keep the struct private and do something like this:
type cmrequest struct {
    Endpoint string
    Method   string
}

func (cmreq cmrequest) Run() (res *goreq.Response) {
    // this could mutate cmreq
}
and then expose a helper func:
func MakeRequestAndUnmarshalBody(d CMRequestNoReuse) *goreq.Response {
    // check that d has a unique memory location?
    cmreq := NewCPRequest(d)
    res := cmreq.Run()
    return res
}
So the helper func would be public and would create a new instance of the struct every time. Is there any other way to go about it? I still can't force the user to pass in new data every time, although I could check whether the memory location of d CMRequestNoReuse is unique?
Actually, no: in your example, this does not mutate the data in the CMRequest instance:
func (cmreq CMRequest) Run() (res *goreq.Response) {
    // this could mutate cmreq
}
When you pass an object by value, the method actually gets a copy of it, not a reference to it.
To mutate it, you need to use a pointer receiver, i.e. the Run method would look like:
func (cmreq *CMRequest) Run() (res *goreq.Response) {
    // this actually can mutate cmreq
}
Then you actually have access to the original object via the pointer and can mutate it.
Here is an example in the Go playground. But note: if one of the struct's fields is a pointer, you can still mutate what it points to.
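To make that last caveat concrete, here is a small sketch (the Payload type and field names are made up): the value receiver protects the top-level fields, but a pointer field in the copy still points at the same underlying value.

type Payload struct {
    Name  string
    Count *int // the pointer itself is copied, but it points at the same int
}

func (p Payload) Run() {
    p.Name = "changed" // affects only the copy
    *p.Count = 42      // visible to the caller
}

func main() {
    n := 1
    pl := Payload{Name: "original", Count: &n}
    pl.Run()
    fmt.Println(pl.Name, *pl.Count) // prints: original 42
}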

How to initialize map without code duplication?

I have a struct type named as game as follows:
type game struct {
    commands map[string]*command
    // ...
}
And I want to initialize the map in a struct of this type in the init function. I do it like this:
func (game *game) init() {
    game.commands = make(map[string]*command)
    // ...
}
As I see it, there is some code duplication. It would be neat if I could write the type (map[string]*command) only once. Is there a way to do that? I tried to use reflect, but it doesn't seem to work because the make builtin takes a type and not a value.
If you are worried that repeating map[string]*command twice is duplication, you can define a named type for it:
type commandsMap map[string]*command

type game struct {
    commands commandsMap
    // ...
}

func (game *game) init() {
    game.commands = make(commandsMap)
    // ...
}
There is no code duplication here. Code duplication is when you have multiple places in your code that do the same thing.
That being said, you can leave your code as it is, or you can use a constructor, which has the benefit of keeping this initialization next to the type it belongs to and is also a cleaner approach imho.
type game struct {
    commands map[string]*command
}

// The constructor cannot also be named game, since that name is already
// taken by the type, so call it e.g. newGame.
func newGame() *game {
    return &game{commands: make(map[string]*command)}
}
That way, when you need a game object, you can just do
gameObject := newGame()
and then use the map as you normally would (or you can also add a utility method just for that).
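If you go the utility-method route, a hypothetical addCommand helper (not from the question) could encapsulate the map access:

// addCommand registers a command, lazily creating the map so that even a
// zero-value game is safe to use.
func (g *game) addCommand(name string, c *command) {
    if g.commands == nil {
        g.commands = make(map[string]*command)
    }
    g.commands[name] = c
}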

Sync access for the same variable

I want to make sure that my update function is executed only by one thread at a time for a given value.
func update1(id int) {
    // ...
    makeUpdate(id)
    // ...
}

func update2(id int) {
    // ...
    makeUpdate(id)
    // ...
}
So, how should I write my makeUpdate() function so that the update block is executed by only one goroutine at a time for a given id value? That means if update1 is updating the record with id 15 and update2 the record with id 20, their access should not be synchronized against each other.
As the comments suggest, you need to protect data access, not function access.
The easiest way to achieve this is to make a struct type with a lock and attach the critical update as a method, e.g.:
type MyData struct {
    l sync.Mutex
    // add any other task-related attributes here too
}

// makeUpdate *MUST* use a pointer to our struct (i.e. 'm *MyData'),
// as Mutex logic breaks if copied (so no 'm MyData').
func (m *MyData) makeUpdate(id int) {
    m.l.Lock()
    defer m.l.Unlock()
    fmt.Printf("better makeUpdate(%d)\n", id)
    // do critical stuff here
    // don't dilly-dally - the lock is still held - so return quickly
}
Try this out in the playground.
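If you really need the per-id behaviour described in the question (id 15 and id 20 updating in parallel), one hedged way to extend this is a small lock registry keyed by id; the KeyedLock type below is illustrative, not from any library, and needs the sync package.

// KeyedLock hands out one mutex per id, so updates for different ids run in
// parallel while updates for the same id are serialized.
type KeyedLock struct {
    mu    sync.Mutex
    locks map[int]*sync.Mutex
}

func NewKeyedLock() *KeyedLock {
    return &KeyedLock{locks: make(map[int]*sync.Mutex)}
}

// forID returns the mutex dedicated to id, creating it on first use.
func (k *KeyedLock) forID(id int) *sync.Mutex {
    k.mu.Lock()
    defer k.mu.Unlock()
    if _, ok := k.locks[id]; !ok {
        k.locks[id] = &sync.Mutex{}
    }
    return k.locks[id]
}

func (k *KeyedLock) makeUpdate(id int) {
    l := k.forID(id)
    l.Lock()
    defer l.Unlock()
    // critical update for this id only; other ids are not blocked
}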

Function as argument, access inner parameter

The package valyala/fasthttp implements the following function type:
type RequestHandler func(ctx *RequestCtx)
It is used in buaazp/fasthttprouter like this:
func (r *Router) Handle(method, path string, handle fasthttp.RequestHandler) {
    //...
}
I am trying to wrap these like this (open for suggestions on implementation):
//myapp/router
type Request struct {
    fasthttp.RequestCtx
}

type RequestHandler func(*Request)

func Handle(method string, path string, handler RequestHandler) {
    // I need to access the fasthttp.RequestCtx stuff in here...
}
How can I achieve this? Or, if this is not the way to go at all, how can I achieve my goal as mentioned below for a router package?
BACKGROUND
Goal: My wish is to wrap tooling packages (sessions, database, routing, etc.) in order to make my app agnostic to the implementation of these packages. I wish to do this primarily for the purpose of being able to extend these with domain-specific functionality, and being able to switch one 3rd party lib for another, if I ever would need to do so. It also makes debugging and logging easier.
Method: I create native types and functions, which map to the functionality of the imported packages.
Problem: I am stuck on how to wrap a foreign (i.e. imported) function type properly.
Overall your idea looks very good. Some things you could change:
//myapp/router

// Using composition is idiomatic Go code,
// so this should work. It can't get better.
type Request struct {
    fasthttp.RequestCtx
}

// I would make the RequestHandler a real handler. In Go that would be
// an interface:
type RequestHandler interface {
    Request(*Request)
}

// If you have a function which needs to access parameters from `Request`,
// you should take the request itself as an input.
func Handle(method string, path string, req *Request) {
    // Access the embedded fasthttp.RequestCtx via req.RequestCtx ...
}
Because if you pass a function or an interface into your Handle function that itself needs a Request as input, the caller has to create that Request before calling Handle anyway. Why not change the function to take just the input it really needs?
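As a small usage sketch of that suggestion: because fasthttp.RequestCtx is embedded in Request, its methods are promoted, so inside a Handle written this way the fasthttp data is reachable directly (the log line is just illustrative and needs the log package).

func Handle(method string, path string, req *Request) {
    // Path() and Method() are promoted from the embedded fasthttp.RequestCtx.
    log.Printf("%s %s (actual request path: %s)", method, path, req.Path())
}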

Resources