Insert data with Gorm with reflect - go

I'm creating a basic REST service. My intent is to write the logic for the resources as abstractly as possible. What I mean is if I have already created a CRUD logic for endpoint /devices for example, then when I need a new resource endpoint like /cars, I should not be repeating myself over the CRUD procedures.
In another language like Python, classes and methods are first-class objects that can be stored in a list or dictionary (map) and instantiated as needed. In Go it doesn't seem as easy, so I tried the reflect package.
First I create a TypeRegistry according to this.
var TypeRegistry = make(map[string]reflect.Type)
TypeRegistry["devices"] = reflect.TypeOf(models.Device{}) // models.Device{} is the Gorm SQL table model
Then I have handler creator which is intended to handle the creation of all types of resources like this (error handling redacted):
func CreateOneHandler(typeString string) func(http.ResponseWriter, *http.Request) {
    return func(w http.ResponseWriter, r *http.Request) {
        defer r.Body.Close()
        jsn, _ := ioutil.ReadAll(r.Body)
        jsonBytes, _ := datamapper.CreateOne(typeString, jsn)
        w.Write(jsonBytes)
    }
}
I'm using Chi, so I bind the handlers like this:
func addRoute(r chi.Router, endpoint string) {
    r.Route("/"+endpoint, func(r chi.Router) {
        typeString := endpoint
        r.Post("/", CreateOneHandler(typeString))
    })
}
The idea is to, after defining the Gorm models, simply add routes by calling it repeatedly, addRoute(r, "devices"); addRoute(r, "cars") for a consistent REST interface across multiple models.
Now within CreateOne() I want to insert something into the table:
func CreateOne(typeString string, json []byte) ([]byte, error) {
    modelType := typeregistry.TypeRegistry[typeString]
    value := reflect.New(modelType)
    db.Create(value.Elem()) // ==> Now this doesn't work
    // ...
}
How do I make this work? Gorm fails with "create failed no such table: value", because a reflect.Value or reflect.Type isn't the same thing as an object instantiated the regular way.
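(For reference, one way to get past this, sketched under the assumption that the registry holds struct types such as models.Device: call Interface() on the new value so Gorm receives an interface{} holding a concrete *models.Device rather than a reflect.Value. The body parameter is renamed jsn so it doesn't shadow the encoding/json package:)

func CreateOne(typeString string, jsn []byte) ([]byte, error) {
    modelType := typeregistry.TypeRegistry[typeString]
    modelPtr := reflect.New(modelType).Interface() // interface{} holding e.g. *models.Device

    // Fill the new model from the request body, then hand the pointer to Gorm.
    if err := json.Unmarshal(jsn, modelPtr); err != nil {
        return nil, err
    }
    if err := db.Create(modelPtr).Error; err != nil {
        return nil, err
    }
    return json.Marshal(modelPtr)
}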
(A side note: given the static nature of type switches and type assertions, I am already compromising parts of my design that would probably be possible in a language like Python. It seems unavoidable to litter the code with type switches that explicitly check whether something is a device, a car, or any number of new models. In a regular object-oriented language this would perhaps be a simple polymorphic method call. Any pointer to a better design would be appreciated as well.)

Related

A repository/store for database as interface or per table interface?

What I designed first was to have a Store interface as follows:
// store.go
type Store interface {
    CreateUser(user model.User) (string, error)
    GetProfile(userId string) (model.User, error)
    CreateHouse(user model.House) (string, error)
}
And in another file, mongo_store.go, its implementation:
type mongoStore struct {
    store *mongo.Client
}

func (mc *mongoStore) CreateUser(user model.User) (string, error) {
    // implementation ...
}

// And so on...
In mongo_store.go I also have a function that returns a Store instance:
func NewMongoDBStore() Store {
    // Some code to connect to MongoDB and finally:
    s := &mongoStore{
        store: client,
    }
    return s
}
I went this way to abstract away the DB layer, so in code we pass the store around and call, say, CreateUser on it.
My team members objected and proposed creating a Store interface per table, so we would have a UserStore interface with its methods, a HouseStore with its own, and so on.
My first question: is it best practice to change the code this way? I could not come up with a good argument to reject their change request. It's been said that this way we can mock less code in tests, and the code stays unpolluted, with all methods that work with the DB in one place per concern.
My second question: if we go with the second approach, how should NewMongoDBStore return the different store types? Instead of Store as the return type we would need different types like UserStore, HouseStore, etc.
I always try to stick to one rule when designing new interfaces in Go: keep interfaces as small as possible. You can see that the stdlib also tries to follow that rule; see for example fmt.Stringer, http.Handler or json.Marshaler. Look how the json library even separates json.Marshaler and json.Unmarshaler (same for io.Reader and io.Writer), which you could argue are very closely connected.
Coming back to your example, I think your team makes a good point - I would go for the separation of the storage interfaces. The only situation in which I wouldn't do that is if you are sure that the interface will never expand and will always stick to this very limited number of methods. But I think that is very unlikely for storage-like interfaces. For example, in the near future you may want to add finer-grained filtering methods, or e.g. a method to insert storage objects in a batch.
In my opinion you can only benefit from separating the interfaces and here is why:
It's true that it is easier to mock an interface with 1-2 methods than an interface with, let's say, 10 methods.
It's always better to separate functionality into smaller pieces, as you may not need all of it at once in every place. To give you a better picture: you can have one service that uses both your UserStore and your HouseStore implementations, but you can also have a second service that doesn't need a HouseStore and only uses a UserStore implementation. Thanks to that, the second service is much easier to mock (as it uses only a UserStore), and if you later add methods to the HouseStore there is no possible way it could affect the second service, as it knows nothing about that interface.
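For illustration, a minimal sketch of that situation (both service types here are hypothetical):

// BookingService uses both stores.
type BookingService struct {
    users  UserStore
    houses HouseStore
}

// ProfileService depends only on UserStore: tests for it need a single mock,
// and no future change to HouseStore can affect it.
type ProfileService struct {
    users UserStore
}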
I think the above answers your first question. Coming to the second question you can solve it in two ways I think:
The first way is something I usually do. You can simply create separate implementations for separate interfaces. So if you have, following your example, a file store.go containing the interfaces:
type UserStore interface {
    CreateUser(user model.User) (string, error)
    // Rest of the methods ...
}

type HouseStore interface {
    CreateHouse(house model.House) (string, error)
    // Rest of the methods ...
}
I would make a user_mongo_store.go with MongoDB implementation for the UserStore ...
type userMongoStore struct {
    store *mongo.Client
}

func (s *userMongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func NewUserMongoStore() UserStore {
    // Some code to connect to MongoDB and finally:
    s := &userMongoStore{
        store: client,
    }
    return s
}

// Rest of the UserStore methods implementations ...
... and I would also make a house_mongo_store.go file with MongoDB implementation for the HouseStore:
type houseMongoStore struct {
    store *mongo.Client
}

func (s *houseMongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

func NewHouseMongoStore() HouseStore {
    // Some code to connect to MongoDB and finally:
    s := &houseMongoStore{
        store: client,
    }
    return s
}

// Rest of the HouseStore methods implementations ...
You might ask whether it will feel inconvenient to keep the two MongoDB storage implementations separated, as they could contain the same MongoDB-related operations. The answer is no: you can always create e.g. a mongo_store.go to keep all the common functions shared by all the MongoDB storage implementations.
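For example, a small sketch of what such a shared file could contain (the helper name, package name and timeout below are made up):

// mongo_store.go - helpers shared by all MongoDB store implementations.
package store

import (
    "context"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

// newMongoClient is a hypothetical shared constructor that both
// NewUserMongoStore and NewHouseMongoStore could call.
func newMongoClient(uri string) (*mongo.Client, error) {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    return mongo.Connect(ctx, options.Client().ApplyURI(uri))
}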
The only disadvantage I can see here is a little bit more code in general, but in the end it gives you much cleaner, better separated and more modular code.
The second way, which I would recommend less, uses a (in my opinion) very powerful Go feature: the fact that you don't declare that a type implements an interface (unlike in e.g. Java). You just implement all of an interface's methods on your struct, and the struct can then be used as an implementation of every such interface. In your case you could stick to the single mongoStore struct and make it implement the methods of both the UserStore and the HouseStore interfaces. That way you would end up with something like this:
type mongoStore struct {
    store *mongo.Client
}

func (s *mongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func (s *mongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

// Rest of the UserStore and HouseStore
// interfaces methods implementations ...
But this solution leaves us with a problem: how do we write a function that creates the UserStore and HouseStore implementations? In this situation you could either export the mongoStore struct and use it directly as both the UserStore and the HouseStore implementation or, which looks a little more exotic but is still a valid piece of code, write a function that returns the single struct as both implementations, e.g.:
func NewMongoStores() (UserStore, HouseStore) {
    s := &mongoStore{
        store: client,
    }
    return s, s
}
I think I gave you some options, but to sum up, I would encourage you to keep your interfaces and their implementations separated.

How can I separate generated code package and user code but have them accessible from one place in code

I'm fairly new to Go, so I bought some courses on Udemy to help break into the language. One of them I found very helpful for getting a general understanding as I took on a project in the language.
In the class that I took, all of the SQL-related functions were in the sqlc folder, with the structure less broken out:

sqlc
    generatedcode
    store
One of those files is a querier that is generated by sqlc that contains an interface with all of the methods that were generated. Here is the general idea of what it currently looks like: https://github.com/techschool/simplebank/tree/master/db/sqlc
package db

import (
    "context"

    "github.com/google/uuid"
)

type Querier interface {
    AddAccountBalance(ctx context.Context, arg AddAccountBalanceParams) (Account, error)
    CreateAccount(ctx context.Context, arg CreateAccountParams) (Account, error)
    ...
}

var _ Querier = (*Queries)(nil)
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier? I'm also trying to have it so that the sqlc generated code is in its own folder. The structure I am aiming for is:
sql
    sqlc
        generatedcode
    store - (wraps it all together)
    dynamicsqlfiles
This should clear up what a I mean by store: https://github.com/techschool/simplebank/blob/master/db/sqlc/store.go
package db

import (
    "context"
    "database/sql"
    "fmt"
)

// Store defines all functions to execute db queries and transactions
type Store interface {
    Querier
    TransferTx(ctx context.Context, arg TransferTxParams) (TransferTxResult, error)
}

// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
    db *sql.DB
    *Queries
}

// NewStore creates a new store
func NewStore(db *sql.DB) Store {
    return &SQLStore{
        db:      db,
        Queries: New(db),
    }
}
I'm trying to run everything through that store (both generated and my own functions), so I can make a call similar to the CreateUser function in this file (server.store.CreateUser): https://github.com/techschool/simplebank/blob/master/api/user.go
arg := db.CreateUserParams{
    Username:       req.Username,
    HashedPassword: hashedPassword,
    FullName:       req.FullName,
    Email:          req.Email,
}

user, err := server.store.CreateUser(ctx, arg)
if err != nil {
    if pqErr, ok := err.(*pq.Error); ok {
        switch pqErr.Code.Name() {
        case "unique_violation":
            ctx.JSON(http.StatusForbidden, errorResponse(err))
            return
        }
    }
    ctx.JSON(http.StatusInternalServerError, errorResponse(err))
    return
}
I've tried creating something that houses another Querier interface that embeds the generated one, then creating my own db.go that uses the generated DBTX interface but has its own Queries struct and New function. It always gives me an error saying the Queries struct I created doesn't implement the functions I defined, despite my having implemented them in one of the custom methods I made.
I deleted that branch, and have been clicking through the simplebank project linked above to see if I can find another way this could be done, or if I missed something. If it can't be done, that's okay. I'm just using this as a good opportunity to learn a little more about the language, and keep some code separated if possible.
UPDATE:
There were only a few pieces I had to change, but I modified the store.go to look more like:
// sdb is imported, but points to the generated Querier
// Store provides all functions to execute db queries and transactions
type Store interface {
sdb.Querier
DynamicQuerier
}
// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
db *sql.DB
*sdb.Queries
*dynamicQueries
}
// NewStore creates a new Store
func NewStore(db *sql.DB) Store {
return &SQLStore{
db: db,
Queries: sdb.New(db),
dynamicQueries: New(db),
}
}
Then I just created a new Querier interface and struct for the methods I would be creating, gave them their own New function, and tied it all together in the above. Before, I was trying to figure out a way to reuse as much of the generated code as possible, which I think was the issue.
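For reference, a minimal sketch of what that dynamic half could look like (DynamicQuerier, dynamicQueries and the GetSomething method are hypothetical names following the pattern above):

// dynamic_queries.go - same package as store.go
type DynamicQuerier interface {
    GetSomething(ctx context.Context) error
}

type dynamicQueries struct {
    db *sql.DB
}

// New mirrors the generated sdb.New constructor.
func New(db *sql.DB) *dynamicQueries {
    return &dynamicQueries{db: db}
}

func (q *dynamicQueries) GetSomething(ctx context.Context) error {
    // hand-written query goes here
    return nil
}

// Compile-time check, mirroring the generated `var _ Querier = (*Queries)(nil)`.
var _ DynamicQuerier = (*dynamicQueries)(nil)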
Why I wanted the Interface:
I wanted a structure that separated the files I would be working in from the files that I would never touch (generated).
I like how the generated code put everything in the Querier interface, then checked that anything implementing it satisfied all of the function requirements. So I wanted to replicate that for the dynamic portion which I would be creating on my own.
It might be complicating it a bit more than it would 'NEED' to be, but it also provides an additional set of error checking that is nice to have. And in this case, even while maybe not necessary, it ended up being doable.
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier?
If I'm understanding your question correctly I think that you are looking for something like the below (playground):
package main

import (
    "context"
    "database/sql"
)

// Sample sqlc-generated code
type DBTX interface {
    ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
    PrepareContext(context.Context, string) (*sql.Stmt, error)
    QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
    QueryRowContext(context.Context, string, ...interface{}) *sql.Row
}

type Queries struct {
    db DBTX
}

func (q *Queries) DeleteAccount(ctx context.Context, id int64) error {
    // _, err := q.db.ExecContext(ctx, deleteAccount, id)
    // return err
    return nil // Pretend that this always works
}

type Querier interface {
    DeleteAccount(ctx context.Context, id int64) error
}

//
// Your custom "dynamic" queries
//
type myDynamicQueries struct {
    db DBTX
}

func (m *myDynamicQueries) GetDynamicResult(ctx context.Context) error {
    // _, err := q.db.ExecContext(ctx, deleteAccount, id)
    // return err
    return nil // Pretend that this always works
}

type myDynamicQuerier interface {
    GetDynamicResult(ctx context.Context) error
}

// Combine things
type allDatabase struct {
    *Queries // Note: You could embed this directly into myDynamicQueries instead of having a separate struct if that is your preference
    *myDynamicQueries
}

type DatabaseFunctions interface {
    Querier
    myDynamicQuerier
}

func main() {
    // Basic example
    var db DatabaseFunctions
    db = getDatabase()
    db.DeleteAccount(context.Background(), 0)
    db.GetDynamicResult(context.Background())
}

// getDatabase - Perform whatever is needed to connect to database...
func getDatabase() allDatabase {
    sqlc := &Queries{db: nil}           // In reality you would use New() to do this!
    myDyn := &myDynamicQueries{db: nil} // Again it's often cleaner to use a function
    return allDatabase{Queries: sqlc, myDynamicQueries: myDyn}
}
The above is all in one file for simplicity but could easily pull from multiple packages e.g.
type allDatabase struct {
    *generatedcode.Queries
    *store.myDynamicQueries // note: the dynamic type would need to be exported to embed it from another package
}
If this does not answer your question then please show one of your failed attempts (so we can see where you are going wrong).
One general comment - do you really need the interface? A common recommendation is "Accept interfaces, return structs". While this may not always apply I suspect you may be introducing interfaces where they are not really necessary and this may add unnecessary complexity.
I thought that the Store, which was housing both Queriers, was tying it all together. Can you explain a little with the example above (in the question post) why it's not necessary? How does SQLStore get access to all of the Querier interface functions?
The struct SQLStore is what is "tying it all together". As per the Go spec:
Given a struct type S and a named type T, promoted methods are included in the method set of the struct as follows:
If S contains an embedded field T, the method sets of S and *S both include promoted methods with receiver T. The method set of *S also includes promoted methods with receiver *T.
If S contains an embedded field *T, the method sets of S and *S both include promoted methods with receiver T or *T.
So an object of type SQLStore:
type SQLStore struct {
    db *sql.DB
    *sdb.Queries
    *dynamicQueries
}

var foo SQLStore // Assume that we are actually providing values for all fields
Will implement all of the methods of sdb.Queries and, also, those in dynamicQueries (you can also access the sql.DB members via foo.db.XXX). This means that you can call foo.AddAccountBalance() and foo.MyGenericQuery() (assuming that is in dynamicQueries!) etc.
The spec says "In its most basic form an interface specifies a (possibly empty) list of methods". So you can think of an interface as a list of functions that must be implemented by whatever implementation (e.g. struct) you assign to the interface (the interface itself does not implement anything directly).
This example might help you understand.
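For instance, a tiny self-contained sketch of that idea (hypothetical names):

type Greeter interface {
    Greet() string // any type with this method satisfies Greeter
}

type englishGreeter struct{}

func (englishGreeter) Greet() string { return "hello" }

// The interface value just holds an implementation; it adds no behaviour of its own.
var g Greeter = englishGreeter{}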
Hopefully that helps a little (as I'm not sure which aspect you don't understand I'm not really sure what to focus on).

How to structure my Go app for transactions via pgx

I have the following models
type UsersModel struct {
    db *pgx.Conn
}

func (u *UsersModel) SignupUser(ctx context.Context, payload SignupRequest) (SignupQueryResult, error) {
    _, err := u.db.Exec("...")
    return SignupQueryResult{}, err
}

type SessionsModel struct {
    db *pgx.Conn
}

func (s *SessionsModel) CreateSession(ctx context.Context, payload CreateSessionRequest) error {
    _, err := s.db.Exec("...")
    return err
}
and my service calls UsersModel.SignupUser as follows
type SignupService struct {
    userModel signupServiceUserModel
}

func (ss *SignupService) Signup(ctx context.Context, request SignupRequest) (SignupQueryResult, error) {
    return ss.userModel.SignupUser(ctx, request)
}
Now I need to tie SignupUser and CreateSession together in a transaction instead of running them as isolated operations. I'm not sure what the best way to structure this is, or how to pass a transaction around while keeping DB-specific details abstracted away from the services. Or should I just run the sessions-table insert (which currently lives in *SessionsModel.CreateSession) directly inside *UsersModel.SignupUser?
For reference, transactions in pgx happen by calling *pgx.Conn.Begin(), which returns a concrete pgx.Tx on which you execute the same functions as you would on *pgx.Conn, followed by *pgx.Tx.Commit() or *pgx.Tx.Rollback().
Questions I have are:
Where to start transaction - model or service?
If in service, how do I do that while abstracting that there's an underlying DB from service?
How do I pass transaction between models?
There is no right or wrong answer here, since there are multiple ways to do it. However, here is how I'd do it and why.
Keep the service layer clean of any concrete DB implementation, so that if you switch to a completely different DB you don't need to change other pieces.
As for the solution, I would create a completely new method called SignupUserAndCreateSession that encloses all the logic you need. I wouldn't worry about folding the two original methods into one: as I understand it, in this scenario they are tightly coupled by design, so this is not an anti-pattern.
I would avoid passing *pgx.Tx around between methods, since you would then depend on yet another layer to make sure the transaction is committed or rolled back, and that invites errors in future implementations (a sketch follows below).
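A minimal sketch of that combined method, assuming pgx v4 (where Begin, Exec, Commit and Rollback all take a context) and hypothetical SQL:

func (u *UsersModel) SignupUserAndCreateSession(ctx context.Context, payload SignupRequest) (SignupQueryResult, error) {
    tx, err := u.db.Begin(ctx)
    if err != nil {
        return SignupQueryResult{}, err
    }
    // Roll back unless Commit succeeded; Rollback after a successful Commit
    // just returns pgx.ErrTxClosed, which is safe to ignore here.
    defer tx.Rollback(ctx)

    if _, err := tx.Exec(ctx, "INSERT INTO users ..."); err != nil {
        return SignupQueryResult{}, err
    }
    if _, err := tx.Exec(ctx, "INSERT INTO sessions ..."); err != nil {
        return SignupQueryResult{}, err
    }
    if err := tx.Commit(ctx); err != nil {
        return SignupQueryResult{}, err
    }
    return SignupQueryResult{}, nil
}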

Golang service/daos implementation

Coming from a Java background, I have some questions on how things are typically done in Golang. I am specifically talking about services and dao's/repositories.
In java, I would use dependency injection (probably as singleton/application-scoped), and have a Service injected into my rest endpoint / resource.
To give a bit more context. Imagine the following Golang code:
func main() {
    http.ListenAndServe("localhost:8080", nil)
}

func init() {
    r := httptreemux.New()
    api := r.NewGroup("/api/v1")
    api.GET("/blogs", GetAllBlogs)
    http.Handle("/", r)
}
This is copied directly from my code; main and init are split because of Google App Engine.
So for now I have one handler. In that handler, I expect to interact with a BlogService.
The question is where, and in what scope, I should instantiate a BlogService struct and a DAO-like data structure. Should I do it every time the handler is triggered, or make it constant/global?
For completeness, here is the handler and blogService:
// GetAllBlogs retrieves all blogs from the GCloud datastore.
func GetAllBlogs(w http.ResponseWriter, req *http.Request, params map[string]string) {
    c := appengine.NewContext(req)
    // need a reference to BlogService at this point, where to instantiate?
}

type blogService struct{}

// Blog contains the content and meta data for a blog post.
type Blog struct{ ... }

// newBlogService constructs a new service to operate on Blogs.
func newBlogService() *blogService {
    return &blogService{}
}

func (s *blogService) ListBlogs(ctx context.Context) ([]*Blog, error) {
    // Do some dao-ey / repository things, where to instantiate BlogDao?
}
You can use context.Context (available since Go 1.7) to pass request-scoped values into your handlers. Build all your required dependencies during the request/response cycle (which you should anyway to avoid race conditions, except for dependencies that manage concurrency on their own, like sql.DB), put them all into a single container, and then query the context for that value:
container := r.Context().Value("container").(*Container)
blogs, err := container.GetBlogService().ListBlogs(r.Context())
read the following material :
https://golang.org/pkg/context/
https://golang.org/pkg/net/http/#Request.Context
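A sketch of how that container could be injected via middleware, reusing the hypothetical Container type and "container" key from the snippet above (in real code a private typed key avoids collisions, but the string key compiles fine):

type Container struct {
    blogService *blogService
}

func (c *Container) GetBlogService() *blogService { return c.blogService }

// withContainer makes the container available to every wrapped handler.
func withContainer(c *Container, next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), "container", c)
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}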

Defining an interface method with interface return type

TLDR Here is a playground that demonstrates the issue if you try to run it: https://play.golang.org/p/myQtUVg1iq
I am making a REST API and have many types of resources that can be retrieved via a GET request
GET http://localhost/api/users
GET http://localhost/api/groups
I have a models package which abstracts how the different resources are implemented:
func (m *UserManager) Get() []Users {
    // Internal logic, assume returns correct results
}

func (m *GroupManager) Get() []Groups {
    // Internal logic, assume returns correct results
}
A routes file sets up all the routes and handlers:
var users = models.UserManager{}
var groups = models.GroupManager{}

func GetUsersHandler(w http.ResponseWriter, r *http.Request) {
    users := users.Get()
    // Implementation details, writing to w as JSON
}

func GetGroupsHandler(w http.ResponseWriter, r *http.Request) {
    groups := groups.Get()
    // Implementation details, writing to w as JSON
}

func registerRoutes(r *mux.Router) {
    r.HandleFunc("/api/users", GetUsersHandler).Methods("GET")
    r.HandleFunc("/api/groups", GetGroupsHandler).Methods("GET")
}
I am trying to make this more generic by creating an interface and then only needing a single GetHandler. Something like this:
type Getter interface {
    Get() []interface{}
}

func GetHandler(g Getter) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        results := g.Get()
        // Implementation details, writing to w as JSON
    }
}

func registerRoutes(r *mux.Router) {
    r.HandleFunc("/api/users", GetHandler(&users)).Methods("GET")
    r.HandleFunc("/api/groups", GetHandler(&groups)).Methods("GET")
}
This is really close to working, the only problem is the return type from the models is a specific object type, but the interface just uses the interface return type. Is there any way to solve this without making the models return []interface{}?
Try not to approach the problem like you would in other OOP languages. You can't have covariant containers in Go, so you either have to use an empty interface{}, or you have to structure your program differently.
If your Get methods are different and you want to group types in an interface, use another method (sometimes we even have noop methods just for interfaces), or just pass in users or groups as an interface{}. You'll need to do a type switch or assertion at some point in the call chain anyway, and once you know what type it is you can handle it accordingly.
It's hard to tell without more code, but in this case, the easiest path may just be to have each type be an http.Handler itself, and it can dispatch accordingly.
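For instance, a sketch of that last suggestion, reusing the question's UserManager (the JSON encoding details are assumed):

// UserManager satisfies http.Handler itself, so no generic Getter is needed.
func (m *UserManager) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(m.Get())
}

// Registration then becomes:
// r.Handle("/api/users", &users).Methods("GET")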
I ended up avoiding this problem entirely: instead of trying to reduce the amount of code, I used the new go generate feature in Go 1.4 to create the code needed for each resource.
