New Gopher here, coming from Java land.
Let's say I have some generic storage interface:
package repositories
type Repository interface {
    Get(key string) string
    Save(key string) string
}
I support multiple different backends (Redis, Boltdb, etc) by implementing this interface in separate packages. However, each implementation has unique configuration values that need to be passed in. So I define a constructor in each package, something like:
package redis
type Config struct {
    ...
}

func New(config *Config) *RedisRepository {
    ...
}
and
package bolt
type Config struct {
    ...
}

func New(config *Config) *BoltRepository {
    ...
}
main.go reads a json configuration file that looks something like:
type AppConfig struct {
    DatabaseName string
    BoltConfig   *bolt.Config
    RedisConfig  *redis.Config
}
Based on the value of DatabaseName, the app will instantiate the desired repository. What is the best way to do this? Where do I do it? Right now I'm doing some kind of horrible factoryfactory method which seems very much like a Go anti-pattern.
In my main.go, I have a function that reads the configuration values above via reflection, selecting the proper configuration (either BoltConfig or RedisConfig) based on the value of DatabaseName:
func newRepo(c reflect.Value, repoName string) (repositories.Repository, error) {
    t := strings.Title(repoName)
    repoConfig := c.FieldByName(t).Interface()
    repoFactory, err := repositories.RepoFactory(t)
    if err != nil {
        return nil, err
    }
    return repoFactory(repoConfig)
}
and in my repositories package, I have a factory that looks for the repository type and returns a factory function that produces an instantiated repository:
func RepoFactory(provider string) (RepoProviderFunc, error) {
    r, ok := repositoryProviders[provider]
    if !ok {
        return nil, fmt.Errorf("repository does not exist for provider: %s", provider)
    }
    return r, nil
}
type RepoProviderFunc func(config interface{}) (Repository, error)
var ErrBadConfigType = errors.New("wrong configuration type")
var repositoryProviders = map[string]RepoProviderFunc{
    "redis": func(config interface{}) (Repository, error) {
        c, ok := config.(*redis.Config)
        if !ok {
            return nil, ErrBadConfigType
        }
        return redis.New(c), nil
    },
    "bolt": func(config interface{}) (Repository, error) {
        c, ok := config.(*bolt.Config)
        if !ok {
            return nil, ErrBadConfigType
        }
        return bolt.New(c), nil
    },
}
bringing it all together, my main.go looks like:
cfg := &AppConfig{}
err = json.Unmarshal(data, cfg)
if err != nil {
    log.Fatalln(err)
}
c := reflect.ValueOf(*cfg)
repo, err := newRepo(c, cfg.DatabaseName)
if err != nil {
    log.Fatalln(err)
}
And yes, the second I was done typing this code I recoiled at the horror I had brought into this world. Can someone please help me escape this factory hell? What's a better way to do this type of thing, i.e. selecting an interface implementation at runtime?
Do you need dynamic registration? It seems like the list of backends is already baked into your server because of the AppConfig type, so you may be better off just writing the simplest possible factory code:
func getRepo(cfg *AppConfig) (Repository, error) {
    switch cfg.DatabaseName {
    case "bolt":
        return bolt.New(cfg.BoltConfig), nil
    case "redis":
        return redis.New(cfg.RedisConfig), nil
    }
    return nil, fmt.Errorf("unknown database: %q", cfg.DatabaseName)
}
func main() {
    ...
    var cfg AppConfig
    if err := json.Unmarshal(data, &cfg); err != nil {
        log.Fatalf("failed to parse config: %s", err)
    }
    repo, err := getRepo(&cfg)
    if err != nil {
        log.Fatalf("repo construction failed: %s", err)
    }
    ...
}
Sure, you can replace this with generic reflection-based code. But while that saves a few lines of duplicated code and removes the need to update getRepo when you add a new backend, it introduces a whole mess of confusing abstraction. You're going to have to edit code anyway when you introduce a new backend (for example, to extend your AppConfig type), so saving a couple of lines in getRepo is hardly a saving.
It might make sense to move getRepo and AppConfig into a repos package if this code is used by more than one program.
Related
I have a configuration struct like this (short version):
type Config struct {
    Environment string
    Service1    Service
    Service2    Service
}

type Service struct {
    CRC   string
    Cards Cards
}

type Cards struct {
    GBP CardCfg
    USD CardCfg
}

type CardCfg struct {
    CRC string
}

func Cfg() *Config {
    return &Config{
        Environment: os.Getenv("ENVIRONMENT"),
        Service1: Service{
            CRC: os.Getenv("Service1_CRC"),
            Cards: Cards{
                GBP: CardCfg{
                    CRC: os.Getenv("Service1_CARD_GBP_CRC"),
                },
                USD: CardCfg{
                    CRC: os.Getenv("Service1_CARD_USD_CRC"),
                },
            },
        },
        Service2: Service{
            CRC: os.Getenv("Service2_CRC"),
            Cards: Cards{
                GBP: CardCfg{
                    CRC: os.Getenv("Service2_CARD_GBP_CRC"),
                },
                USD: CardCfg{
                    CRC: os.Getenv("Service2_CARD_USD_CRC"),
                },
            },
        },
    }
}
I'm trying to access a service CRC or a service card CRC via a variable, like this:
variable := "Service1"
currency := "EUR"
cfg := config.Cfg()
crc := cfg[variable].cards[currency] // DOESN'T WORK
I also tried with a map, like this:
package main

import "fmt"

type Config map[string]interface{}

func main() {
    config := Config{
        "field": "value",
        "service1": Config{
            "crc": "secret1",
            "cards": Config{
                "crc": "secret2",
            },
        },
    }
    fmt.Println(config["service1"].(Config)["cards"].(Config)["crc"]) // WORKS
}
but it looks weird to me. Do you know a better way to write the config? Is it possible to use a struct? I come from the Ruby planet; Golang is new to me.
edit:
I receive messages from a RabbitMQ queue and, based on them, I create a payment. Unfortunately, various payment methods require their own authorization (crc and merchantId). The call looks like this:
trn, err := p24Client.RegisterTrn(context.Background(), &p24.RegisterTrnReq{
    CRC:         cfg[payinRequested.Service].cards[payinRequested.Currency].CRC,
    MerchantId:  cfg[payinRequested.Service].cards[payinRequested.Currency].MerchantId,
    PosId:       cfg[payinRequested.Service].cards[payinRequested.Currency].MerchantId,
    SessionId:   payinRequested.PaymentId,
    Amount:      payinRequested.Amount,
    Currency:    payinRequested.Currency,
    Description: payinRequested.Desc,
    Email:       payinRequested.Email,
    Method:      payinRequested.BankId,
    UrlReturn:   payinRequested.ReturnUrl,
    UrlStatus:   cfg.StatusUri,
    UrlCardPaymentNotification: cfg.CardStatusUri,
})
Any ideas on how to do it right?
Ignoring the reflect package, the simple answer is: you can't. You cannot access struct fields dynamically (using string variables). You can, however, use variables as map keys, because accessing data in a map is a hashtable lookup; a struct field access isn't.
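To illustrate the difference, here's a minimal sketch (not from the original answer; the Service and CardCfg types loosely mirror the question, and the map layout is my assumption): keeping services and cards in maps gives you dynamic lookup by string, while the values themselves stay plain structs.
package main

import "fmt"

type CardCfg struct {
    CRC string
}

type Service struct {
    CRC   string
    Cards map[string]CardCfg // keyed by currency code, e.g. "GBP"
}

func main() {
    services := map[string]Service{
        "service1": {
            CRC: "secret1",
            Cards: map[string]CardCfg{
                "GBP": {CRC: "secret2"},
            },
        },
    }

    // Map keys can be chosen at runtime from variables;
    // struct field names cannot (short of reflection).
    svc := "service1"
    cur := "GBP"
    fmt.Println(services[svc].Cards[cur].CRC) // prints "secret2"
}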
I will reiterate the main point of my comments, though: what you're seemingly trying to do is use environment variables to set values on a config struct. This is very much a solved problem; we've been doing it for years at this point. A quick Google search turned up a repo that does exactly what you seem to want (and more), called configure.
With this package, you can declare your config struct like this:
package config
type Config struct {
    Environment string     `env:"ENVIRONMENT" cli:"env" yaml:"environment"`
    Services    []*Service `env:"SERVICE" cli:"service" yaml:"service"`

    serviceByName map[string]*Service
}
Then, to load from environment variables:
func LoadEnv() (*Config, error) {
    c := Config{
        serviceByName: map[string]*Service{},
    } // set default values if needed
    if err := configure.ParseEnv(&c); err != nil {
        return nil, err
    }
    // initialise convenience fields like serviceByName:
    for _, svc := range c.Services {
        c.serviceByName[svc.Name] = svc
    }
    return &c, nil
}
// ServiceByName returns a COPY of the config for a given service
func (c Config) ServiceByName(n string) (Service, error) {
    s, ok := c.serviceByName[n]
    if !ok {
        return Service{}, errors.New("service with given name does not exist")
    }
    return *s, nil
}
You can also define a single Load function that prioritises one type of config over another. With these tags we're supporting environment variables, a YAML file, and command-line arguments. Generally, command-line arguments override any of the other formats. As for YAML vs. environment variables, you could argue both ways: an environment variable like ENVIRONMENT isn't very specific and could easily be picked up by multiple processes by mistake. Then again, if you deploy things properly that shouldn't be an issue, so for that reason I'd prioritise environment variables over the YAML file:
func Load(args []string) (*Config, error) {
    c := &Config{
        Environment:   "devel", // default
        serviceByName: map[string]*Service{},
    }
    if err := configure.ParseYaml(c); err != nil {
        return nil, err
    }
    if err := configure.ParseEnv(c); err != nil {
        return nil, err
    }
    if len(args) > 0 {
        if err := configure.ParseCommandLine(c, args); err != nil {
            return nil, err
        }
    }
    // initialise convenience fields like serviceByName:
    for _, svc := range c.Services {
        c.serviceByName[svc.Name] = svc
    }
    return &c, nil
}
Then in your main package:
func main() {
    cfg, err := config.Load(os.Args[1:])
    if err != nil {
        fmt.Printf("Failed to load config: %v\n", err)
        os.Exit(1)
    }
    wtCfg, err := cfg.ServiceByName("WT")
    if err != nil {
        fmt.Printf("WT service not found: %v\n", err)
        return
    }
    fmt.Printf("%#v\n", wtCfg)
}
I am working with a lot of config files. I need to read each individual config file into its own struct and then build one giant Config struct which holds all the individual config structs.
Let's suppose I am working with 3 config files.
ClientConfig deals with one config file.
DataMapConfig deals with second config file.
ProcessDataConfig deals with third config file.
I created a separate file for each of these config files, each with its own Readxxxxx method that reads its config and returns the corresponding struct. Below is my config.go file, whose Init method is called from the main function with a path and a logger.
config.go
package config
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    "github.com/david/internal/utilities"
)

type Config struct {
    ClientMapConfigs   ClientConfig
    DataMapConfigs     DataMapConfig
    ProcessDataConfigs ProcessDataConfig
}
func Init(path string, logger log.Logger) (*Config, error) {
    clientConfig, err := ReadClientMapConfig(path, logger)
    if err != nil {
        return nil, err
    }
    dataMapConfig, err := ReadDataMapConfig(path, logger)
    if err != nil {
        return nil, err
    }
    processDataConfig, err := ReadProcessDataConfig(path, logger)
    if err != nil {
        return nil, err
    }
    return &Config{
        ClientMapConfigs:   *clientConfig,
        DataMapConfigs:     *dataMapConfig,
        ProcessDataConfigs: *processDataConfig,
    }, nil
}
clientconfig.go
package config
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    "github.com/david/internal/utilities"
)

type ClientConfig struct {
    .....
    .....
}

const (
    ClientConfigFile = "clientConfigMap.json"
)

func ReadClientMapConfig(path string, logger log.Logger) (*ClientConfig, error) {
    files, err := utilities.FindFiles(path, ClientConfigFile)
    // read all the files
    // do some validation on all those files
    // deserialize them into ClientConfig struct
    // return clientconfig object back
}
datamapconfig.go
I have a similar style for datamapconfig too. It is an exact replica of the clientconfig.go file, but it operates on a different config file name and returns a DataMapConfig struct.
processdataConfig.go
Same as the clientconfig.go file; the only difference is that it operates on a different config file and returns a ProcessDataConfig struct.
Problem Statement
I am looking for ideas on how the above design can be improved. Is there a better way to do this in Go? Can we use an interface or anything else to improve this design?
If I have, let's say, 10 different files instead of 3, do I need to keep doing the same thing for the remaining 7 files? If so, the code will look ugly. Any suggestions or ideas would help me greatly.
Update
Everything looks good, but I have a few questions, as I am confused about how to achieve a couple of things with your current suggestion. For the majority of my configs your suggestion is perfect, but there are two cases, on two different configs, where I am confused about how to do it.
Case 1: After deserializing the JSON into the original struct that matches the JSON format, I massage that data into another, different struct, and I return that struct instead.
Case 2: Most of my configs have a single file, but a few have multiple files and the number isn't fixed. So I pass a file-name pattern, find all the files matching it, and loop over those files one by one. After deserializing each JSON file I keep populating another object, and once all files have been deserialized I make a new struct from those objects and return it.
Example of above scenarios:
Sample case 1
package config
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    "github.com/david/internal/utilities"
)

type CustomerManifest struct {
    CustomerManifest map[int64]Site
}

type CustomerConfigs struct {
    CustomerConfigurations []Site `json:"customerConfigurations"`
}

type Site struct {
    ....
    ....
}

const (
    CustomerConfigFile = "abc.json"
)

func ReadCustomerConfig(path string, logger log.Logger) (*CustomerManifest, error) {
    // I try to find all the files with my utility method below.
    // It works with a single file name and also with a regex name.
    files, err := utilities.FindFiles(path, CustomerConfigFile)
    if err != nil {
        return nil, err
    }
    var customerConfig CustomerConfigs
    // there is only one file for this config, so the loop will run once
    for _, file := range files {
        body, err := ioutil.ReadFile(file)
        if err != nil {
            return nil, err
        }
        err = json.Unmarshal(body, &customerConfig)
        if err != nil {
            return nil, err
        }
    }
    customerConfigIndex := BuildIndex(customerConfig, logger)
    return &CustomerManifest{CustomerManifest: customerConfigIndex}, nil
}

func BuildIndex(customerConfig CustomerConfigs, logger log.Logger) map[int64]Site {
    ...
    ...
}
As you can see in sample case 1, I build a CustomerManifest struct from the CustomerConfigs struct and return that instead of returning CustomerConfigs directly.
Sample case 2
package config
import (
    "encoding/json"
    "fmt"
    "io/ioutil"

    "github.com/david/internal/utilities"
)

type StateManifest struct {
    NotionTemplates NotionTemplates
    NotionIndex     map[int64]NotionTemplates
}

type NotionMapConfigs struct {
    NotionTemplates []NotionTemplates `json:"notionTemplates"`
    ...
}

const (
    // there are many files starting with "state-"; the number isn't fixed
    StateConfigFile = "state-*.json"
)

func ReadStateConfig(path string, logger log.Logger) (*StateManifest, error) {
    // I try to find all the files with my utility method below.
    // It works with a single file name and also with a regex name.
    files, err := utilities.FindFiles(path, StateConfigFile)
    if err != nil {
        return nil, err
    }
    var defaultTemp NotionTemplates
    var idx = map[int64]NotionTemplates{}
    // there are a lot of config files for this config, so the loop will run multiple times
    for _, file := range files {
        var notionMapConfig NotionMapConfigs
        body, err := ioutil.ReadFile(file)
        if err != nil {
            return nil, err
        }
        err = json.Unmarshal(body, &notionMapConfig)
        if err != nil {
            return nil, err
        }
        for _, tt := range notionMapConfig.NotionTemplates {
            if tt.IsProcess {
                defaultTemp = tt
            } else if tt.ClientId > 0 {
                idx[tt.ClientId] = tt
            }
        }
    }
    stateManifest := StateManifest{
        NotionTemplates: defaultTemp,
        NotionIndex:     idx,
    }
    return &stateManifest, nil
}
As you can see, in both cases I build another, different struct after deserializing and return that struct instead. With your current suggestion I don't think I can do this generically, because each config needs a different kind of massaging before returning its struct. Is there a way to achieve this with your suggestion? Basically, for each config I want to be able to do some massaging and return a new, modified struct, but for configs that don't need massaging I want to return the directly deserialized struct. Can this be done generically?
Since some configs consist of multiple files, I was using my utilities.FindFiles method to give me all the files based on a file name or a regex, and then I loop over those files to return either the original struct or a new struct built by massaging the original data.
You can use a common function to load all the configuration files.
Assume you have config structures:
type Config1 struct {...}
type Config2 struct {...}
type Config3 struct {...}
You define configuration validators for those who need it:
func (c Config1) Validate() error {...}
func (c Config2) Validate() error {...}
Note that these implement a Validatable interface:
type Validatable interface {
    Validate() error
}
There is one config type that includes all these configurations:
type Config struct {
    C1 Config1
    C2 Config2
    C3 Config3
    ...
}
Then, you can define a simple configuration loader function:
func LoadConfig(fname string, out interface{}) error {
    data, err := ioutil.ReadFile(fname)
    if err != nil {
        return err
    }
    if err := json.Unmarshal(data, out); err != nil {
        return err
    }
    // Validate the config if necessary
    if validator, ok := out.(Validatable); ok {
        if err := validator.Validate(); err != nil {
            return err
        }
    }
    return nil
}
Then, you can load the files:
var c Config
if err := LoadConfig("file1", &c.C1); err != nil {
    return err
}
if err := LoadConfig("file2", &c.C2); err != nil {
    return err
}
...
If there are multiple files loading different parts of the same struct, you can do:
LoadConfig("file1",&c.C3)
LoadConfig("file2",&c.C3)
...
You can simplify this further by defining a slice:
type cfgInfo struct {
    fileName string
    getCfg   func(*Config) interface{}
}

var configs = []cfgInfo{
    {
        fileName: "file1",
        getCfg:   func(c *Config) interface{} { return &c.C1 },
    },
    {
        fileName: "file2",
        getCfg:   func(c *Config) interface{} { return &c.C2 },
    },
    {
        fileName: "file3",
        getCfg:   func(c *Config) interface{} { return &c.C3 },
    },
    ...
}
func loadConfigs(cfg *Config) error {
    for _, f := range configs {
        if err := LoadConfig(f.fileName, f.getCfg(cfg)); err != nil {
            return err
        }
    }
    return nil
}
Then, loadConfigs would load all the configuration files into cfg.
func main() {
    var cfg Config
    if err := loadConfigs(&cfg); err != nil {
        panic(err)
    }
    ...
}
Any configuration that doesn't match this pattern can be dealt with using LoadConfig:
var customConfig1 CustomConfigStruct1
if err := LoadConfig("customConfigFile1", &customConfig1); err != nil {
    panic(err)
}
cfg.CustomConfig1 = processCustomConfig1(customConfig1)

var customConfig2 CustomConfigStruct2
if err := LoadConfig("customConfigFile2", &customConfig2); err != nil {
    panic(err)
}
cfg.CustomConfig2 = processCustomConfig2(customConfig2)
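To handle the "massaging" cases from the update generically, one option (just a sketch building on the Config, configs, and LoadConfig shown above; the post field name is mine) is to give each cfgInfo entry an optional post-processing hook that runs after its file has been loaded. Entries that need no massaging simply leave it nil:
type cfgInfo struct {
    fileName string
    getCfg   func(*Config) interface{}
    // post, if non-nil, runs after this entry has been unmarshalled,
    // so it can massage the raw data into its final shape
    // (e.g. build the CustomerManifest index from CustomerConfigs).
    post func(*Config) error
}

func loadConfigs(cfg *Config) error {
    for _, f := range configs {
        if err := LoadConfig(f.fileName, f.getCfg(cfg)); err != nil {
            return err
        }
        if f.post == nil {
            continue
        }
        if err := f.post(cfg); err != nil {
            return err
        }
    }
    return nil
}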
I have a set of functions that use a pool of objects. The pool has been mocked, and that works fine in most cases. But in some functions I call methods on objects obtained from the pool, so I need to mock those objects too.
Let's say:
// ObjectGeter is an interface that is mocked
type ObjectGeter interface {
    GetObject(id int) (ObjectType, error)
}
// this function is under test
func SomeFunc(og ObjectGeter, id int, otherArgument SomeType) (ResultType, error) {
    // some actions with otherArgument
    // which may return an error
    var zero ResultType
    obj, err := og.GetObject(id)
    if err != nil {
        return zero, errors.New("GetObject error")
    }
    result, err := obj.SomeMethod()
    if err != nil {
        return zero, errors.New("One of internal errors")
    }
    return result, nil
}
Is there a way to test this whole function? I can create an interface SomeMethoder which wraps SomeMethod(), but I can't find a way to assign it to obj inside SomeFunc without changing the signature of GetObject to GetObject(id int) (SomeMethoder, error).
Currently I see only one approach: testing it in parts.
The only solution I've found without changing the paradigm is a wrapper. It is pretty trivial, but maybe someone will need it one day.
Originally I have some type:
type PoolType struct {...}

func (p *PoolType) GetObject(id int) (ObjectType, error) {...}

and an interface that wraps PoolType.GetObject, which I've mocked.
Now I have the interface:
type SomeMethoder interface {
    SomeMethod() (ResultType, error)
}
to wrap the object returned by PoolType.GetObject().
To produce it I have the interface:
type ObjectGeter interface {
    GetObject(id int) (SomeMethoder, error)
}
and the type
type MyObjectGeter struct {
    pool *PoolType
}

func New(pool *PoolType) *MyObjectGeter {
    return &MyObjectGeter{pool: pool}
}

func (p *MyObjectGeter) GetObject(id int) (SomeMethoder, error) {
    return p.pool.GetObject(id)
}
that implements it.
So:
// this function is under test
func SomeFunc(og ObjectGeter, id int, otherArgument SomeType) (ResultType, error) {
    // some actions with otherArgument
    // which may return an error
    var zero ResultType
    iface, err := og.GetObject(id)
    if err != nil {
        return zero, errors.New("GetObject error")
    }
    result, err := iface.SomeMethod()
    if err != nil {
        return zero, errors.New("One of internal errors")
    }
    return result, nil
}
is called by
og := New(pool)
SomeFunc(og, id, otherArgument)
in real code.
Finally, to test the whole SomeFunc I have:
func TestSomeFuncSuccess(t *testing.T) {
    controller := gomock.NewController(t)
    defer controller.Finish()
    objectGeter := mocks.NewMockObjectGeter(controller)
    someMethoder := mocks.NewMockSomeMethoder(controller)
    gomock.InOrder(
        objectGeter.EXPECT().
            GetObject(correctIdConst).
            Return(someMethoder, nil),
        someMethoder.EXPECT().
            SomeMethod().
            Return(NewResultType(...), nil).
            Times(1),
    )
    result, err := SomeFunc(objectGeter, correctIdConst, otherArgumentConst)
    // some checks
}
So the only untested part is MyObjectGeter.GetObject, which is good enough for me.
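As a side note (not part of the original answer), the mocks.NewMockObjectGeter and mocks.NewMockSomeMethoder used above can be generated with gomock's mockgen tool; the file and package names here are assumptions:
//go:generate mockgen -source=objectgeter.go -destination=mocks/objectgeter_mock.go -package=mocks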
In an effort to learn Go a bit better, I am trying to refactor a series of functions, which accept a DB connection as the first argument, into struct methods and something a bit more "idiomatically" Go.
Right now my "data store" methods are something like this:
func CreateA(db orm.DB, a *A) error {
    _, err := db.Exec("INSERT...")
    return err
}

func CreateB(db orm.DB, b *B) error {
    _, err := db.Exec("INSERT...")
    return err
}
These functions work perfectly fine. orm.DB is the DB interface of go-pg.
Since the two functions accept a db connection I can either pass an actual connection or a transaction (which implements the same interface). I can be sure that both functions issuing SQL INSERTs run in the same transaction, avoiding having inconsistent state in the DB in case either one of them fails.
The trouble started when I decided to read more about how to structure the code a little better and to make it "mockable" in case I need to.
So I googled a bit, read the article Practical Persistence in Go: Organising Database Access and tried to refactor the code to use proper interfaces.
The result is something like this:
type Store interface {
    CreateA(a *A) error
    CreateB(b *B) error
}
type DB struct {
    orm.DB
}

func NewDBConnection(p *ConnParams) (*DB, error) {
    .... create db connection ...
    return &DB{db}, nil
}

func (db *DB) CreateA(a *A) error {
    ...
}

func (db *DB) CreateB(b *B) error {
    ...
}
which allows me to write code like:
db := NewDBConnection()
db.CreateA(a)
db.CreateB(b)
instead of:
db := NewDBConnection()
CreateA(db, a)
CreateB(db, b)
The actual issue is that I lost the ability to run the two functions in the same transaction. Before I could do:
pgDB := db.DB.(*pg.DB) // convert the interface to an actual connection
pgDB.RunInTransaction(func(tx *pg.Tx) error {
    if err := CreateA(tx, a); err != nil {
        return err
    }
    return CreateB(tx, b)
})
or something like:
tx := db.DB.Begin()
err = CreateA(tx, a)
err = CreateB(tx, b)
if err != nil {
    tx.Rollback()
} else {
    tx.Commit()
}
which is more or less the same thing.
Since the functions accepted the common interface between a connection and a transaction, I could keep the transaction logic out of my model layer and send down either a full connection or a transaction. This allowed me to decide in the "HTTP handler" when to create a transaction and when I didn't need one.
Keep in mind that the connection is a global object representing a pool of connections handled automatically by go, so the hack I tried:
pgDB := db.DB.(*pg.DB) // convert the interface to an actual connection
err = pgDB.RunInTransaction(func(tx *pg.Tx) error {
    db.DB = tx // replace the connection with a transaction
    db.CreateA(a)
    db.CreateB(b)
    return nil
})
it's clearly a bad idea, because although it works, it works only once because we replace the global connection with a transaction. The following request breaks the server.
Any ideas? I can't find information about this around, probably because I don't know the right keywords being a noob.
I've done something like this in the past (using the standard sql package, you may need to adapt it to your needs):
var ErrNestedTransaction = errors.New("nested transactions are not supported")
// abstraction over sql.TX and sql.DB
// a similar interface seems to be already defined in go-pg. So you may not need this.
type executor interface {
    Exec(query string, args ...interface{}) (sql.Result, error)
    Query(query string, args ...interface{}) (*sql.Rows, error)
    QueryRow(query string, args ...interface{}) *sql.Row
}

type Store struct {
    // this is the actual connection(pool) to the db which has the Begin() method
    db       *sql.DB
    executor executor
}

func NewStore(dsn string) (*Store, error) {
    db, err := sql.Open("sqlite3", dsn)
    if err != nil {
        return nil, err
    }
    // the initial store contains just the connection(pool)
    return &Store{db, db}, nil
}

func (s *Store) RunInTransaction(f func(store *Store) error) error {
    if _, ok := s.executor.(*sql.Tx); ok {
        // nested transactions are not supported!
        return ErrNestedTransaction
    }
    tx, err := s.db.Begin()
    if err != nil {
        return err
    }
    transactedStore := &Store{
        s.db,
        tx,
    }
    err = f(transactedStore)
    if err != nil {
        tx.Rollback()
        return err
    }
    return tx.Commit()
}

func (s *Store) CreateA(thing A) error {
    // your implementation
    _, err := s.executor.Exec("INSERT INTO ...", ...)
    return err
}
And then you use it like
// store is a global object
store.RunInTransaction(func(store *Store) error {
    // this instance of Store uses a transaction to execute the methods
    err := store.CreateA(a)
    if err != nil {
        return err
    }
    return store.CreateB(b)
})
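For completeness, the same methods also work outside RunInTransaction, because the initial Store's executor is the *sql.DB itself (a small usage sketch, assuming store and a are in scope):
// Outside a transaction the executor is the plain *sql.DB,
// so this runs as a single auto-committed statement.
if err := store.CreateA(a); err != nil {
    log.Fatal(err)
}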
The trick is to use the executor instead of the *sql.DB in your CreateX methods, which allows you to dynamically change the underlying implementation (tx vs. db). However, since there is very little information out there on how to deal with this issue, I can't assure you that this is the "best" solution. Other suggestions are welcome!
I'm trying to implement the repository pattern in a Go app (a simple web service) and to find a better way to avoid code duplication.
Here is the code.
Interfaces are:
type IRoleRepository interface {
    GetAll() ([]Role, error)
}

type ISaleChannelRepository interface {
    GetAll() ([]SaleChannel, error)
}
And implementation:
func (r *RoleRepository) GetAll() ([]Role, error) {
    var result []Role
    var err error
    var rows *sql.Rows
    if err != nil {
        return result, err
    }
    connection := r.provider.GetConnection()
    defer connection.Close()
    rows, err = connection.Query("SELECT Id,Name FROM Position")
    defer rows.Close()
    if err != nil {
        return result, err
    }
    for rows.Next() {
        entity := new(Role)
        err = sqlstruct.Scan(entity, rows)
        if err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    err = rows.Err()
    if err != nil {
        return result, err
    }
    return result, err
}
func (r *SaleChannelRepository) GetAll() ([]SaleChannel, error) {
    var result []SaleChannel
    var err error
    var rows *sql.Rows
    if err != nil {
        return result, err
    }
    connection := r.provider.GetConnection()
    defer connection.Close()
    rows, err = connection.Query("SELECT DISTINCT SaleChannel 'Name' FROM Employee")
    defer rows.Close()
    if err != nil {
        return result, err
    }
    for rows.Next() {
        entity := new(SaleChannel)
        err = sqlstruct.Scan(entity, rows)
        if err != nil {
            return result, err
        }
        result = append(result, *entity)
    }
    err = rows.Err()
    if err != nil {
        return result, err
    }
    return result, err
}
As you can see, the differences are just a few words. I'm trying to find something like generics from C#, but I didn't find it.
Can anyone help me?
No, Go does not have generics and won't have them in the foreseeable future¹.
You have three options:

1. Refactor your code so that you have a single function which accepts an SQL statement and a scanning function, and which: queries the DB with the provided statement, iterates over the result's rows, and for each row calls the provided function whose task is to scan the row. In this case you'll have a single generic "querying" function, and the differences will be solely in the "scanning" functions. Several variations on this are possible, but I suspect you get the idea (see the sketch after this list).

2. Use the sqlx package, which basically is to SQL-driven databases what encoding/json is to JSON data streams: it uses reflection on your types to create and execute SQL and to populate them. This way you'll get reusability on another level: you simply won't write boilerplate code.

3. Use code generation, which is the Go-native way of having "code templates" (that's what generics are about). This way, you (usually) write a Go program which takes some input (in whatever format you wish), reads it, and writes out one or more files containing Go code, which is then compiled. In your very simple case, you can start with a template of your Go function and some sort of table which maps an SQL statement to the type to create from the selected data.
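A minimal sketch of option 1 using the standard database/sql package (the queryAll name and signatures are mine, and I'm assuming the repository holds a *sql.DB and that Role has Id and Name fields):
// queryAll runs the query and invokes scan once per returned row.
func queryAll(db *sql.DB, query string, scan func(rows *sql.Rows) error) error {
    rows, err := db.Query(query)
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        if err := scan(rows); err != nil {
            return err
        }
    }
    return rows.Err()
}

// GetAll then shrinks to a query string plus a scanning closure.
func (r *RoleRepository) GetAll() ([]Role, error) {
    var result []Role
    err := queryAll(r.db, "SELECT Id,Name FROM Position", func(rows *sql.Rows) error {
        var role Role
        if err := rows.Scan(&role.Id, &role.Name); err != nil {
            return err
        }
        result = append(result, role)
        return nil
    })
    return result, err
}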
I'd note that your code indeed looks woefully unidiomatic.
No one in their right mind implements "repository patterns" in Go, but that's sort of okay so long as it keeps you happy (we are all indoctrinated to a certain degree by the languages and environments we're accustomed to), but your connection := r.provider.GetConnection() looks alarming: Go's database/sql is drastically different from "popular" environments and frameworks, so I'd highly recommend starting with this and this.
¹ (Update as of 2021-05-31) Go will have generics as the proposal to implement them has been accepted and the work implementing them is in progress.
Forgive me if I'm misunderstanding, but a better pattern might be something like the following:
type RepositoryItem interface {
    Name() string // for example
}

type Repository interface {
    GetAll() ([]RepositoryItem, error)
}
At the moment you essentially have a separate interface for each type of repository, so unless you're going to implement multiple types of RoleRepository, you might as well not have the interface.
Having generic Repository and RepositoryItem interfaces might make your code more extensible (not to mention easier to test) in the long run.
A contrived example might be (if we assume a Repository vaguely correlates to a backend) implementations such as MySQLRepository and MongoDBRepository. By abstracting the functionality of the repository, you're protecting against future mutations.
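For illustration, a rough sketch of a backend implementing those two interfaces (MySQLRepository, the roleItem adapter, and the query are all made up for this example):
// roleItem adapts a role row to the RepositoryItem interface.
type roleItem struct {
    name string
}

func (r roleItem) Name() string { return r.name }

// MySQLRepository is one possible backend satisfying Repository.
type MySQLRepository struct {
    db *sql.DB
}

func (m *MySQLRepository) GetAll() ([]RepositoryItem, error) {
    rows, err := m.db.Query("SELECT Name FROM Position")
    if err != nil {
        return nil, err
    }
    defer rows.Close()
    var items []RepositoryItem
    for rows.Next() {
        var it roleItem
        if err := rows.Scan(&it.name); err != nil {
            return nil, err
        }
        items = append(items, it)
    }
    return items, rows.Err()
}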
I would very much advise seeing #kostix's answer also, though.
interface{} is the "generic type" in Go. I can imagine doing something like this:
package main
import "fmt"
type IRole struct {
    RoleId uint
}

type ISaleChannel struct {
    Profitable bool
}

type GenericRepo interface {
    GetAll([]interface{})
}

// conceptual repo to store all Roles and SaleChannels
type Repo struct {
    IRoles        []IRole
    ISaleChannels []ISaleChannel
}

func (r *Repo) GetAll(ifs []interface{}) {
    // database implementation here before type switch
    for _, v := range ifs {
        switch v := v.(type) {
        default:
            fmt.Printf("unexpected type %T\n", v)
        case IRole:
            fmt.Printf("Role %v\n", v)
            r.IRoles = append(r.IRoles, v)
        case ISaleChannel:
            fmt.Printf("SaleChannel %v\n", v)
            r.ISaleChannels = append(r.ISaleChannels, v)
        }
    }
}
func main() {
    getter := new(Repo)
    // mock slice
    data := []interface{}{
        IRole{1},
        IRole{2},
        IRole{3},
        ISaleChannel{true},
        ISaleChannel{false},
        IRole{4},
    }
    getter.GetAll(data)
    fmt.Println("IRoles: ", getter.IRoles)
    fmt.Println("ISaleChannels: ", getter.ISaleChannels)
}
This way you don't have to end up with separate structs and/or interfaces for IRole and ISaleChannel.