I'm trying to implement the repository pattern in a Go app (a simple web service) and am looking for a better way to avoid code duplication.
Here is the code.
The interfaces are:
type IRoleRepository interface {
    GetAll() ([]Role, error)
}

type ISaleChannelRepository interface {
    GetAll() ([]SaleChannel, error)
}
And the implementations:
func (r *RoleRepository) GetAll() ([]Role, error) {
    var result []Role

    connection := r.provider.GetConnection()
    defer connection.Close()

    rows, err := connection.Query("SELECT Id,Name FROM Position")
    if err != nil {
        return result, err
    }
    defer rows.Close()

    for rows.Next() {
        entity := new(Role)
        if err := sqlstruct.Scan(entity, rows); err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    return result, rows.Err()
}
func (r *SaleChannelRepository) GetAll() ([]SaleChannel, error) {
    var result []SaleChannel

    connection := r.provider.GetConnection()
    defer connection.Close()

    rows, err := connection.Query("SELECT DISTINCT SaleChannel 'Name' FROM Employee")
    if err != nil {
        return result, err
    }
    defer rows.Close()

    for rows.Next() {
        entity := new(SaleChannel)
        if err := sqlstruct.Scan(entity, rows); err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    return result, rows.Err()
}
As you can see, the differences come down to a few words. I looked for something like generics in C#, but didn't find it.
Can anyone help me?
No, Go does not have generics and won't have them in the foreseeable future¹.
You have three options:

First, refactor your code so that you have a single function which accepts an SQL statement plus another function, and which: queries the DB with the provided statement, iterates over the result's rows, and for each row calls the provided function, whose task is to scan that row. In this case you'll have a single generic "querying" function, and the differences will be solely in the "scanning" functions. Several variations on this are possible, but I suspect you get the idea.
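For illustration, a minimal sketch of that approach, reusing the types from the question and assuming GetConnection returns a *sql.DB:

// A single "querying" function; the scan callback is responsible for
// scanning the current row and collecting the result.
func queryAll(conn *sql.DB, query string, scan func(*sql.Rows) error) error {
    rows, err := conn.Query(query)
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        if err := scan(rows); err != nil {
            return err
        }
    }
    return rows.Err()
}

func (r *RoleRepository) GetAll() ([]Role, error) {
    connection := r.provider.GetConnection()
    defer connection.Close()

    var result []Role
    err := queryAll(connection, "SELECT Id,Name FROM Position", func(rows *sql.Rows) error {
        var entity Role
        if err := sqlstruct.Scan(&entity, rows); err != nil {
            return err
        }
        result = append(result, entity)
        return nil
    })
    return result, err
}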
Second, use the sqlx package, which basically is to SQL-driven databases what encoding/json is to JSON data streams: it uses reflection on your types to create and execute SQL and to populate them. This way you'll get reusability on another level: you simply won't write boilerplate code.
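For example (a sketch; it assumes the repository holds a *sqlx.DB in a db field and that the selected columns map onto Role's fields):

func (r *RoleRepository) GetAll() ([]Role, error) {
    var result []Role
    // sqlx reflects on Role and scans every row for us; no hand-written loop.
    err := r.db.Select(&result, "SELECT Id,Name FROM Position")
    return result, err
}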
Third, use code generation, which is the Go-native way of having "code templates" (that's what generics are about). This way, you (usually) write a Go program which takes some input (in whatever format you wish), reads it, and writes out one or more files of Go code, which are then compiled. In your (very simple) case, you can start with a template of your Go function and some sort of table which maps SQL statements to the types created from the selected data.
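A minimal sketch of such a generator; the template text and the table mapping statements to types are illustrative assumptions:

// gen.go: run `go run gen.go` to produce repositories_gen.go.
package main

import (
    "os"
    "text/template"
)

var tmpl = template.Must(template.New("repo").Parse(`
func (r *{{.Type}}Repository) GetAll() ([]{{.Type}}, error) {
    // ... the shared querying boilerplate, running: {{.Query}}
}
`))

func main() {
    repos := []struct{ Type, Query string }{
        {"Role", "SELECT Id,Name FROM Position"},
        {"SaleChannel", "SELECT DISTINCT SaleChannel 'Name' FROM Employee"},
    }
    out, err := os.Create("repositories_gen.go")
    if err != nil {
        panic(err)
    }
    defer out.Close()
    for _, r := range repos {
        if err := tmpl.Execute(out, r); err != nil {
            panic(err)
        }
    }
}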
I'd note that your code indeed looks woefully unidiomatic.
No one in their right mind implements "repository patterns" in Go, but that's sort of okay so long as it keeps you happy (we're all indoctrinated to a certain degree by the languages and environments we're accustomed to). But your connection := r.provider.GetConnection() looks alarming: Go's database/sql is drastically different from "popular" environments and frameworks, so I'd highly recommend starting with this and this.
¹ (Update as of 2021-05-31) Go will have generics as the proposal to implement them has been accepted and the work implementing them is in progress.
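With that in place (Go 1.18+), the two GetAll methods from the question can collapse into a single generic helper; a sketch, reusing sqlstruct from the question and assuming a *sql.DB connection:

func getAll[T any](conn *sql.DB, query string) ([]T, error) {
    rows, err := conn.Query(query)
    if err != nil {
        return nil, err
    }
    defer rows.Close()

    var result []T
    for rows.Next() {
        var entity T
        if err := sqlstruct.Scan(&entity, rows); err != nil {
            return nil, err
        }
        result = append(result, entity)
    }
    return result, rows.Err()
}

Usage would then be something like roles, err := getAll[Role](conn, "SELECT Id,Name FROM Position").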
Forgive me if I'm misunderstanding, but a better pattern might be something like the following:
type RepositoryItem interface {
    Name() string // for example
}

type Repository interface {
    GetAll() ([]RepositoryItem, error)
}
At the moment, you essentially have a separate interface for each type of repository, so unless you're going to implement multiple types of RoleRepository, you might as well not have the interface at all.
Having generic Repository and RepositoryItem interfaces might make your code more extensible (not to mention easier to test) in the long run.
A contrived example might be (if we assume a Repository vaguely correlates to a backend) implementations such as MySQLRepository and MongoDBRepository. By abstracting the functionality of the repository, you're protecting against future mutations.
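For illustration, a sketch of what that could look like (both backend types here are hypothetical):

type MySQLRepository struct {
    db *sql.DB
}

func (r *MySQLRepository) GetAll() ([]RepositoryItem, error) {
    // query MySQL and build the items...
    return nil, nil
}

type MongoDBRepository struct {
    // a Mongo client/collection would live here
}

func (r *MongoDBRepository) GetAll() ([]RepositoryItem, error) {
    // query MongoDB and build the items...
    return nil, nil
}

// Both backends satisfy Repository, so callers never need to know
// which one they're talking to.
var _ Repository = (*MySQLRepository)(nil)
var _ Repository = (*MongoDBRepository)(nil)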
I would very much advise seeing #kostix's answer also, though.
interface{} is the "generic type" in Go. I can imagine doing something like this:
package main

import "fmt"

type IRole struct {
    RoleId uint
}

type ISaleChannel struct {
    Profitable bool
}

type GenericRepo interface {
    GetAll([]interface{})
}

// conceptual repo to store all Roles and SaleChannels
type Repo struct {
    IRoles        []IRole
    ISaleChannels []ISaleChannel
}

func (r *Repo) GetAll(ifs []interface{}) {
    // database implementation here before type switch
    for _, v := range ifs {
        switch v := v.(type) {
        default:
            fmt.Printf("unexpected type %T\n", v)
        case IRole:
            fmt.Printf("Role %v\n", v)
            r.IRoles = append(r.IRoles, v)
        case ISaleChannel:
            fmt.Printf("SaleChannel %v\n", v)
            r.ISaleChannels = append(r.ISaleChannels, v)
        }
    }
}

func main() {
    getter := new(Repo)
    // mock slice
    data := []interface{}{
        IRole{1},
        IRole{2},
        IRole{3},
        ISaleChannel{true},
        ISaleChannel{false},
        IRole{4},
    }
    getter.GetAll(data)
    fmt.Println("IRoles: ", getter.IRoles)
    fmt.Println("ISaleChannels: ", getter.ISaleChannels)
}
This way you don't end up with two structs and/or interfaces for IRole and ISaleChannel.
I need to keep structs of multiple types in a slice and seed them. I take them as a variadic parameter of an interface type and loop over them. If I call a method through the interface it works, but when I try to reach the underlying struct I can't. How can I solve that?
Note: the Seed() method returns the file name of the data.
The Interface:
type Seeder interface {
    Seed() string
}
Method:
func (AirportCodes) Seed() string {
    return "airport_codes.json"
}
SeederSlice:
seederModelList = []globals.Seeder{
    m.AirportCodes{},
    m.Term{},
}
And the last one, the SeedSchema function:
func (db *Database) SeedSchema(models ...globals.Seeder) error {
    var (
        subjects []globals.Seeder
        fileByte []byte
        err      error
        // tempMember map[string]interface{}
    )
    if len(models) == 0 {
        subjects = seederModelList
    } else {
        subjects = models
    }
    for _, model := range subjects {
        fileName := model.Seed()
        fmt.Printf("%+v\n", model)
        if fileByte, err = os.ReadFile("db/seeds/" + fileName); err != nil {
            fmt.Println("asd", err)
            // return err
        }
        if err = json.Unmarshal(fileByte, &model); err != nil {
            fmt.Println("dsa", err)
            // return err
        }
        modelType := reflect.TypeOf(model).Elem()
        modelPtr2 := reflect.New(modelType)
        fmt.Printf("%s\n", modelPtr2)
    }
    return nil
}
I can reach the exact model, but I can't create a member and seed it.
After some back and forth in the comments, I'll just post this minimal answer here. It's by no means a definitive "this is what you do" type answer, but I hope this can at least provide you with enough information to get you started. To get to this point, I've made a couple of assumptions based on the snippets of code you've provided, and I'm assuming you want to seed the DB through a command of sorts (e.g. your_bin seed). That means the following assumptions have been made:
The Schemas and corresponding models/types are present (like AirportCodes and the like)
Each type has its own source file (name comes from Seed() method, returning a .json file name)
Seed data is, therefore, assumed to be in a format like [{"seed": "data"}, {"more": "data"}].
The seed files can be appended, and should the schema change, the data in the seed files could be changed all together. This is of less importance ATM, but still, it's an assumption that should be noted.
OK, so let's start by moving all of the JSON files into a predictable location. In a sizeable, real world application you'd use something like the XDG base path, but for the sake of brevity, let's assume you're running this in a scratch container from / and that all relevant assets have been copied into said container.
It'd make sense to have all seed files in the base path under a seed_data directory. Each file contains the seed data for a specific table, and therefore all the data within a file maps neatly onto a single model. Let's ignore relational data for the time being. We'll just assume that, for now, the data in these files is at least internally consistent, and any X-to-X relational data will have the right ID fields allowing for JOINs and the like.
Let's start
So we have our models, and the data in JSON files. Now we can just create a slice of said models, making sure that data which needs to be present before other data is inserted sits at a higher entry (lower index) than the data that depends on it. Kind of like this:
seederModelList = []globals.Seeder{
    m.AirportCodes{}, // seeds before Term
    m.Term{},         // seeds after AirportCodes
}
But instead of returning the file name from this Seed method, why not pass in the connection and have the model handle its own data, like this:
func (_ AirportCodes) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/airport_codes.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as AirportCodes instances
    codes := []*AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT:
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&codes).Error
}
Do the same for other models, like Terms:
func (_ Terms) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/terms.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as Terms instances
    terms := []*Terms{}
    if err := json.Unmarshal(data, &terms); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT:
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&terms).Error
}
Of course, this does result in a bit of a mess considering we have DB access in a model, which should really be just a DTO if you ask me. This also leaves a lot to be desired in terms of error handling, but the basic gist of it would be this:
func main() {
    db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{}) // omitted error handling for brevity
    seeds := []interface {
        Seed(*gorm.DB) error
    }{
        model.AirportCodes{},
        model.Terms{},
        // etc...
    }
    for _, m := range seeds {
        if err := m.Seed(db); err != nil {
            panic(err)
        }
    }
    // gorm v2 has no Close on *gorm.DB; close the underlying sql.DB instead
    sqlDB, _ := db.DB()
    sqlDB.Close()
}
OK, so this should get us started, but let's just move this all into something a bit nicer by:
Moving the whole DB interaction out of the DTO/model
Wrapping things in a transaction, so we can roll back on error
Updating the initial slice a bit to make things cleaner
So as mentioned earlier, I'm assuming you have something like repositories to handle DB interactions in a separate package. Rather than calling Seed on the model, and passing the DB connection into those, we should instead rely on our repositories:
db, _ := gorm.Open() // same as before
acs := repo.NewAirportCodes(db) // pass in connection
tms := repo.NewTerms(db) // again...
Now our model can still return the JSON file name, or we can have that as a const in the repos. At this point, it doesn't really matter. The main thing is, we can have the actual inserting of data done in the repositories.
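A hypothetical repo.AirportCodes might look something like this (the package layout and import path are assumptions on my part):

package repo

import (
    "encoding/json"
    "os"

    "gorm.io/gorm"
    "gorm.io/gorm/clause"

    m "yourmodule/model" // hypothetical import path for the model types
)

const airportCodesSeedFile = "seed_data/airport_codes.json"

type AirportCodes struct {
    db *gorm.DB
}

func NewAirportCodes(db *gorm.DB) *AirportCodes {
    return &AirportCodes{db: db}
}

// Seed reads the JSON seed file and upserts its contents.
func (r *AirportCodes) Seed() error {
    data, err := os.ReadFile(airportCodesSeedFile)
    if err != nil {
        return err
    }
    codes := []*m.AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    return r.db.Clauses(clause.OnConflict{UpdateAll: true}).Create(&codes).Error
}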
You can, if you want, change your seed slice thing to something like this:
calls := []func() error{
    acs.Seed, // assuming your repo has a Seed function that does what it's supposed to do
    tms.Seed,
}
Then perform all the seeding in a loop:
for _, c := range calls {
    if err := c(); err != nil {
        panic(err)
    }
}
Now, this just leaves us with the issue of the transaction stuff. Thankfully, gorm makes this really rather simple:
db, _ := gorm.Open()
db.Transaction(func(tx *gorm.DB) error {
    acs := repo.NewAirportCodes(tx) // create repos, but use tx for the connection
    if err := acs.Seed(); err != nil {
        return err // returning an error will automatically roll back the transaction
    }
    tms := repo.NewTerms(tx)
    if err := tms.Seed(); err != nil {
        return err
    }
    return nil // commit transaction
})
There's a lot more you can fiddle with here, like creating batches of related data that can be committed separately, adding more precise error handling and more informative logging, and handling conflicts better (distinguishing between CREATE and UPDATE, etc.). Above all else, though, something worth keeping in mind:
Gorm has a migration system
I have to confess that I've not dealt with gorm in quite some time, but IIRC, you can have the tables be auto-migrated if the model changes, and run custom Go code and/or SQL files on startup, which can be used rather easily to seed the data. Might be worth looking at the feasibility of that...
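For reference, the auto-migration bit in gorm v2 is just a call along these lines (a sketch; the seeding itself would still be your own code):

// Create or update the tables to match the model structs.
if err := db.AutoMigrate(&m.AirportCodes{}, &m.Term{}); err != nil {
    panic(err)
}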
In an effort to learn Go a bit better, I am trying to refactor a series of functions which accept a DB connection as the first argument into struct methods and something a bit more idiomatically "Go".
Right now my "data store" methods are something like this:
func CreateA(db orm.DB, a *A) error {
    _, err := db.Exec("INSERT...")
    return err
}

func CreateB(db orm.DB, b *B) error {
    _, err := db.Exec("INSERT...")
    return err
}
These functions work perfectly fine. orm.DB is the DB interface of go-pg.
Since the two functions accept a db connection I can either pass an actual connection or a transaction (which implements the same interface). I can be sure that both functions issuing SQL INSERTs run in the same transaction, avoiding having inconsistent state in the DB in case either one of them fails.
The trouble started when I decided to read more about how to structure the code a little better and to make it "mockable" in case I need to.
So I googled a bit, read the article Practical Persistence in Go: Organising Database Access and tried to refactor the code to use proper interfaces.
The result is something like this:
type Store interface {
    CreateA(a *A) error
    CreateB(b *B) error
}
type DB struct {
    orm.DB
}

func NewDBConnection(p *ConnParams) (*DB, error) {
    // ... create db connection ...
    return &DB{db}, nil
}
func (db *DB) CreateA(a *A) error {
    // ...
}

func (db *DB) CreateB(b *B) error {
    // ...
}
which allows me to write code like:
db, _ := NewDBConnection()
db.CreateA(a)
db.CreateB(b)
instead of:
db, _ := NewDBConnection()
CreateA(db, a)
CreateB(db, b)
The actual issue is that I lost the ability to run the two functions in the same transaction. Before I could do:
pgDB := db.DB.(*pg.DB) // convert the interface to an actual connection
pgDB.RunInTransaction(func(tx *pg.Tx) error {
    if err := CreateA(tx, a); err != nil {
        return err
    }
    return CreateB(tx, b)
})
or something like:
tx := db.DB.Begin()
err = CreateA(tx, a)
err = CreateB(tx, b)
if err != nil {
    tx.Rollback()
} else {
    tx.Commit()
}
which is more or less the same thing.
Since the functions were accepting the common interface between a connection and a transaction, I could abstract the transaction logic away from my model layer, sending down either a full connection or a transaction. This allowed me to decide in the "HTTP handler" when to create a transaction and when I didn't need to.
Keep in mind that the connection is a global object representing a pool of connections handled automatically by Go, so the hack I tried:
pgDB := db.DB.(*pg.DB) // convert the interface to an actual connection
err = pgDB.RunInTransaction(func(tx *pg.Tx) error {
    db.DB = tx // replace the connection with a transaction
    db.CreateA(a)
    return db.CreateB(b)
})
is clearly a bad idea: although it works, it works only once, because we replace the global connection with a transaction. The following request breaks the server.
Any ideas? I can't find information about this around, probably because I don't know the right keywords being a noob.
I've done something like this in the past (using the standard sql package, you may need to adapt it to your needs):
var ErrNestedTransaction = errors.New("nested transactions are not supported")

// abstraction over sql.Tx and sql.DB
// a similar interface seems to be already defined in go-pg, so you may not need this.
type executor interface {
    Exec(query string, args ...interface{}) (sql.Result, error)
    Query(query string, args ...interface{}) (*sql.Rows, error)
    QueryRow(query string, args ...interface{}) *sql.Row
}

type Store struct {
    // this is the actual connection (pool) to the db, which has the Begin() method
    db       *sql.DB
    executor executor
}

func NewStore(dsn string) (*Store, error) {
    db, err := sql.Open("sqlite3", dsn)
    if err != nil {
        return nil, err
    }
    // the initial store contains just the connection (pool)
    return &Store{db, db}, nil
}

func (s *Store) RunInTransaction(f func(store *Store) error) error {
    if _, ok := s.executor.(*sql.Tx); ok {
        // nested transactions are not supported!
        return ErrNestedTransaction
    }
    tx, err := s.db.Begin()
    if err != nil {
        return err
    }
    transactedStore := &Store{
        s.db,
        tx,
    }
    err = f(transactedStore)
    if err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit()
}

func (s *Store) CreateA(thing A) error {
    // your implementation
    _, err := s.executor.Exec("INSERT INTO ...", ...)
    return err
}
And then you use it like:
// store is a global object
store.RunInTransaction(func(store *Store) error {
    // this instance of Store uses a transaction to execute the methods
    err := store.CreateA(a)
    if err != nil {
        return err
    }
    return store.CreateB(b)
})
The trick is to use the executor instead of the *sql.DB in your CreateX methods, which allows you to dynamically change the underlying implementation (tx vs. db). However, since there is very little information out there on how to deal with this issue, I can't assure you that this is the "best" solution. Other suggestions are welcome!
Sorry if this question is a bit basic.
I am trying to use Golang interfaces to make the implementation of CRUD more dynamic.
I have implemented an interface as follows
type Datastore interface {
    AllQuery() ([]interface{}, error)
    ReadQuery() ([]interface{}, error)
    UpdateQuery() ([]interface{}, error)
    CreateQuery() ([]interface{}, error)
    DestroyQuery() ([]interface{}, error) // I'm not sure if the return value implementation is correct
}
It can be used with a multitude of models (category Category, tag Tag, etc.); the structs which represent the models in the app implement its methods.
Here is the simplified handler/controller
func UpdateHandler(c handler.context) error {
    p := new(models.Post)
    return Update(p, c)
}
This is the function that accepts the interface:
func Update(data Datastore, c handler.context) error {
    if err := c.Bind(data); err != nil {
        log.Error(err)
    }
    d, err := data.UpdateQuery()
    // stuff (err checking, etc.)
    return c.JSON(fasthttp.StatusOK, d) // the returned value is used here
}
This is the method I am using to query the database
func (post Post) UpdateQuery() ([]interface{}, error) {
    // run the query using the ...
    return // I don't know how to structure the return statement
}
How do I structure the interface above and the methods that implement it, so that I can return the result of the query back to the calling function?
Please let me know if I need to add anything to the question or improve it; I will try to do so promptly.
Thanks!
I think you should store the return value in a variable. Also make sure that this return value (result) is a slice of interface values.
If it's not, then convert it:
v := reflect.ValueOf(s)
intf := make([]interface{}, v.Len())
for i := range intf {
    intf[i] = v.Index(i).Interface() // copy each element over as an interface{}
}
In your case, your UpdateQuery function might look like this:
func (post Post) UpdateQuery() ([]interface{}, error) {
    result := []interface{}{}
    // run the query and append each row to result here
    return result, nil
}
Demo:
https://play.golang.org/p/HOU56KibUd
Is there a more built-in wrapper to make a function that returns (X, error) successfully execute or abort, like regexp.MustCompile?
I'm talking about something like this, but more "built-in".
There is not. The best you'll get is something like this:
func Must(fn func() (interface{}, error)) interface{} {
    v, err := fn()
    if err != nil {
        log.Fatalln(err)
    }
    return v
}
Then to use it:
Must(func() (interface{}, error) {
    return template.ParseGlob(pattern)
}).(*template.Template)
Assuming that template.ParseGlob(pattern) is the call you wanted to wrap.
Go does not have parametric polymorphism, so this kind of code will end up requiring type assertions to restore the original type and so (in my opinion) is more effort than it's worth. The tidiest, idiomatic error handling you'll get for long chains of potential failure is simply to check for an error, and return it. Defer your cleanup handlers:
func MyFunc() (err error) {
    a, err := blah1()
    if err != nil {
        return
    }
    defer a.Close()

    b, err := blah2(a)
    if err != nil {
        return
    }
    defer b.Close()

    // ad nauseam
    return
}
Long and tedious, but at least it's explicit and easy to follow. Here are two modules I wrote that are crying out for parametric polymorphism that might give you some ideas for dealing without it:
bitbucket.org/anacrolix/dms/futures
bitbucket.org/anacrolix/dms/cache
Since Go 1.18 we can define a typed Must instead of using interface{}:
func Must[T any](obj T, err error) T {
    if err != nil {
        panic(err)
    }
    return obj
}
How to use: https://go.dev/play/p/ajQAjfro0HG
func success() (int, error) {
    return 0, nil
}

func fail() (int, error) {
    return -1, fmt.Errorf("Failed")
}

func main() {
    n1 := Must(success())
    fmt.Println(n1)

    var n2 int = Must(fail())
    fmt.Println(n2)
}
Must panics inside main when fail() returns a non-nil error.
You can even define Mustn for more than 1 return parameter, e.g.
func Must2[T1 any, T2 any](obj1 T1, obj2 T2, err error) (T1, T2) {
    if err != nil {
        panic(err)
    }
    return obj1, obj2
}
I don't think a built-in mechanism would make sense, since you could very well handle a non-nil error in various ways, as do the examples in the template package itself: see "text/template/examplefiles_test.go", illustrating two different usages of err:
// Here starts the example proper.
// T0.tmpl is the first name matched, so it becomes the starting template,
// the value returned by ParseGlob.
tmpl := template.Must(template.ParseGlob(pattern))
err := tmpl.Execute(os.Stdout, nil)
if err != nil {
    log.Fatalf("template execution: %s", err)
}
// Output:
// T0 invokes T1: (T1 invokes T2: (This is T2))
In the particular case of the helper function template.Must(), transforming an error into an exception (panic) isn't always the right course for all Go programs (as debated in this thread), and covering all the possible ways to handle an error would mean creating a lot of "built-in" mechanisms.
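For instance, a caller that prefers to degrade gracefully rather than panic might handle the error in place (a sketch, not taken from the package):

tmpl, err := template.ParseGlob(pattern)
if err != nil {
    // handle the error locally instead of panicking:
    // wrap it and hand it back to the caller
    return fmt.Errorf("parsing templates: %w", err)
}
// ... use tmpl ...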
I have encountered the same problem myself and decided to develop the following solution: https://github.com/boramalper/must
Example:
database := must.MV(sql.Open("sqlite3", "...")).(*sql.DB)
// wrap Close in a closure so it runs at defer time, not immediately
defer func() { must.M(database.Close()) }()

must.M(database.Ping())

// Use MustValVoid (or MVV for short) if you don't care about
// the return value.
must.MVV(database.Exec(`
PRAGMA foreign_key_check;
...
`))
I am not sure why all the answers here are using the log package, when the
source itself uses panic:
func MustCompile(str string) *Regexp {
    regexp, err := Compile(str)
    if err != nil {
        panic(`regexp: Compile(` + quote(str) + `): ` + err.Error())
    }
    return regexp
}
My recommendation would be, instead of a generic Must wrapper, to just implement Must variants as needed in your code, as sketched below.
https://github.com/golang/go/blob/go1.16.5/src/regexp/regexp.go#L305-L314
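For example, a purpose-built variant in the spirit of regexp.MustCompile could look like this (MustParseGlob is a made-up name for illustration):

func MustParseGlob(pattern string) *template.Template {
    t, err := template.ParseGlob(pattern)
    if err != nil {
        panic(`template: ParseGlob(` + strconv.Quote(pattern) + `): ` + err.Error())
    }
    return t
}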
In order to determine whether a given type implements an interface using the reflect package, you need to pass a reflect.Type to reflect.Type.Implements(). How do you get one of those types?
As an example, trying to get the type of an uninitialized error (interface) variable does not work (it panics when you call Kind() on it):
var err error
// reflect.TypeOf(err) returns nil here (err holds no concrete value), so calling Kind() panics
fmt.Printf("%#v\n", reflect.TypeOf(err).Kind())
Do it like this:
var err error
t := reflect.TypeOf(&err).Elem()
Or in one line:
t := reflect.TypeOf((*error)(nil)).Elem()
Evan Shaw's response is correct, but brief. Here are some more details from the reflect.TypeOf documentation:
// As interface types are only used for static typing, a common idiom to find
// the reflection Type for an interface type Foo is to use a *Foo value.
writerType := reflect.TypeOf((*io.Writer)(nil)).Elem()
fileType := reflect.TypeOf((*os.File)(nil)).Elem()
fmt.Println(fileType.Implements(writerType))
For googlers out there: I just ran into the dreaded "scannable dest type interface {} with >1 columns (XX) in result" error.
Evan Shaw's answer did not work for me. Here is how I solved it. I am also using the lann/squirrel library, but you could easily take that out.
The solution really isn't that complicated, just knowing the magic combination of reflect calls to make.
The me.GetSqlx() function just returns an instance of *sqlx.DB.
func (me *CommonRepo) Get(query sq.SelectBuilder, dest interface{}) error {
    sqlst, args, err := query.ToSql()
    if err != nil {
        return err
    }
    // Do some reflection magic so that Sqlx doesn't hork on interface{}
    v := reflect.ValueOf(dest)
    return me.GetSqlx().Get(v.Interface(), sqlst, args...)
}

func (me *CommonRepo) Select(query sq.SelectBuilder, dest interface{}) error {
    sqlst, args, err := query.ToSql()
    if err != nil {
        return err
    }
    // Do some reflection magic so that Sqlx doesn't hork on interface{}
    v := reflect.ValueOf(dest)
    return me.GetSqlx().Select(v.Interface(), sqlst, args...)
}
Then to invoke it you can do:
func (me *myCustomerRepo) Get(query sq.SelectBuilder) (rec Customer, err error) {
    err = me.CommonRepo.Get(query, &rec)
    return
}

func (me *myCustomerRepo) Select(query sq.SelectBuilder) (recs []Customer, err error) {
    err = me.CommonRepo.Select(query, &recs)
    return
}
This allows you to have strong types all over but have all the common logic in one place (CommonRepo in this example).