Using inheritance of builders in Go

I need to create a base builder and specific builders for each build type.
e.g.
builder for html project
builder for node.js project
builder for python project
builder for java project
….
The main functionality will be like the following:
File: Builder.go (the interface)
type Builder interface {
    Build(string) error
}
File: nodebuilder.go
// This is the struct ???? not sure what to put here...
type Node struct {
}

func (n Node) Build(path string) error {
    // e.g. run npm install, which builds node.js projects
    command := exec.Command("npm", "install")
    command.Dir = "../path2dir/"
    Combined, err := command.CombinedOutput()
    if err != nil {
        log.Println(err)
    }
    log.Printf("%s", Combined)
    ...
    //return new(error)
}
Main assumptions/process:
To start build on each module I need to get the path to it
I need to copy the module to a temp folder
I need to run the build on it (implement the build interface like mvn build npm install etc)
After build was finished zip the module with dep
Copy it to new target folder
Note: apart from the build and the path (which must be handled specifically), all other functionality, like zip and copy, is identical.
Where should I put the zip and copy (in the structure), how should I implement them, and how do I pass them to the builder?
Should I structure the project differently according to these assumptions?

The first principle of SOLID says that a piece of code should have only one responsibility.
Taking the context into account, it really makes no sense for any builder to care about the copy and zip parts of the build process. That's beyond the builder's responsibility. Even using composition (embedding) is not neat enough.
Narrowing it down, the Builder's core responsibility is to build the code, as the name suggests. More specifically, the Builder's responsibility is to build the code at a path. What path? The most idiomatic answer is the current path, the working dir. This adds two side methods to the interface: Path() string, which returns the current path, and ChangePath(newPath string) error, which changes the current path. The implementation would be simple; keeping a single string field as the current path would mostly do the job. And it can easily be extended to some remote process.
If we look at it carefully, there are actually two build concepts. One is the whole building process, from making the temp dir to copying the result back, all five steps; the other is the build command, which is the third step of the process.
That is very inspiring. A process is idiomatically presented as a function, as classic procedural programming would do. So we write a Build function. It serializes all five steps, plain and simple.
Code:
package main

import (
    "io/ioutil"
    "path/filepath"
)

// A Builder is what is used to build the language. It should be able to change its working dir.
type Builder interface {
    Build() error                    // Build builds the code at the current dir. It returns an error if it failed.
    Path() string                    // Path returns the current working dir.
    ChangePath(newPath string) error // ChangePath changes the working dir to newPath.
}

// TempDirFunc is what generates a new temp dir. Golang would require it to be inside the GOPATH, so make it changeable.
type TempDirFunc func() string

var DefaultTempDirFunc = func() string {
    name, _ := ioutil.TempDir("", "BUILD")
    return name
}

// Build builds a language. It copies the code to a temp dir generated by mkTempDir
// and calls Builder.ChangePath to change the working dir to the temp dir. After
// the copy, it uses the Builder to build the code, then zips it in the temp dir,
// copying the zip file to `toPath`.
func Build(b Builder, toPath string, mkTempDir TempDirFunc) error {
    if mkTempDir == nil {
        mkTempDir = DefaultTempDirFunc
    }
    path, newPath := b.Path(), mkTempDir()
    defer removeDir(newPath) // clean-up
    if err := copyDir(path, newPath); err != nil {
        return err
    }
    if err := b.ChangePath(newPath); err != nil {
        return err
    }
    if err := b.Build(); err != nil {
        return err
    }
    zipName, err := zipDir(newPath) // I don't understand what `dep` is.
    if err != nil {
        return err
    }
    zipPath := filepath.Join(newPath, zipName)
    if err := copyFile(zipPath, toPath); err != nil {
        return err
    }
    return nil
}

// zipDir zips the `path` dir and returns the name of the zip. If an error occurred, it returns an empty string and an error.
func zipDir(path string) (string, error) {}

// All other funcs are very trivial.
Most things are covered in the comments, and I am really feeling too lazy to write out all those copyDir/removeDir things. One thing that is not mentioned in the design part is the mkTempDir func. Golang would be unhappy if the code is in /tmp/xxx/, as it is outside the GOPATH, and it would make more trouble to change the GOPATH as it would break import path searching, so Golang requires a unique function to generate a temp dir inside the GOPATH.
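For completeness, a rough sketch of those helpers (fleshing out the zipDir stub above) might look like the following. This is my own illustration using archive/zip and filepath.Walk, not part of the original answer, so treat it as a starting point rather than battle-tested code:
// Additional imports assumed: "archive/zip", "io", "os", "path/filepath".

func removeDir(path string) error {
    return os.RemoveAll(path)
}

func copyFile(src, dst string) error {
    in, err := os.Open(src)
    if err != nil {
        return err
    }
    defer in.Close()
    out, err := os.Create(dst)
    if err != nil {
        return err
    }
    defer out.Close()
    _, err = io.Copy(out, in)
    return err
}

func copyDir(src, dst string) error {
    return filepath.Walk(src, func(p string, info os.FileInfo, err error) error {
        if err != nil {
            return err
        }
        rel, err := filepath.Rel(src, p)
        if err != nil {
            return err
        }
        target := filepath.Join(dst, rel)
        if info.IsDir() {
            return os.MkdirAll(target, info.Mode())
        }
        return copyFile(p, target)
    })
}

// zipDir zips the dir at path into a build.zip inside it and returns that name.
func zipDir(path string) (string, error) {
    const name = "build.zip"
    zf, err := os.Create(filepath.Join(path, name))
    if err != nil {
        return "", err
    }
    defer zf.Close()
    w := zip.NewWriter(zf)
    defer w.Close()
    err = filepath.Walk(path, func(p string, info os.FileInfo, err error) error {
        if err != nil || info.IsDir() || filepath.Base(p) == name {
            return err // skip dirs and the zip file itself
        }
        rel, _ := filepath.Rel(path, p)
        f, err := w.Create(rel)
        if err != nil {
            return err
        }
        src, err := os.Open(p)
        if err != nil {
            return err
        }
        defer src.Close()
        _, err = io.Copy(f, src)
        return err
    })
    return name, err
}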
Edit:
Oh, one more thing I forgot to say. It is terribly ugly and irresponsible to handle errors like this. But the idea is there, and more decent error handling mostly depends on the usage context. So do change it yourself: log it, panic, or do whatever you want.
Edit 2:
You can re-use your npm example as follows (note that Build no longer takes a path, to match the interface above).
type Node struct {
    path string
}

func (n Node) Build() error {
    // e.g. run npm install, which builds node.js projects
    command := exec.Command("npm", "install")
    command.Dir = n.path
    Combined, err := command.CombinedOutput()
    if err != nil {
        log.Println(err)
    }
    log.Printf("%s", Combined)
    return nil
}

func (n *Node) ChangePath(newPath string) error {
    n.path = newPath
    return nil
}

func (n Node) Path() string {
    return n.path
}
And to combine it with the other languages all together:
func main() {
    path := GetPathFromInput()
    switch GetLanguageName(path) {
    case "Java":
        Build(&Java{path}, targetDirForJava(), nil)
    case "Go":
        Build(&Golang{path, cgoOptions}, targetDirForGo(), GoPathTempDir()) // You can disable the cgo compile or something like that.
    case "Node":
        Build(&Node{path}, targetDirForNode(), nil)
    }
}
One trick is getting the language name. GetLanguageName should return the name of the language the code in path is written in. This can be done by using ioutil.ReadDir to detect file names.
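A rough sketch of such a function might look like this (the marker file names are my own guesses, not part of the original answer):
// GetLanguageName guesses the language from well-known file names in path.
func GetLanguageName(path string) string {
    entries, err := ioutil.ReadDir(path)
    if err != nil {
        return ""
    }
    for _, e := range entries {
        switch e.Name() {
        case "package.json":
            return "Node"
        case "pom.xml", "build.gradle":
            return "Java"
        case "requirements.txt", "setup.py":
            return "Python"
        }
    }
    return "" // unknown; pick whatever default suits your project
}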
Also note that although I made the Node struct very simple, storing only a path field, you can extend it easily. As in the Golang part above, you might add build options there.
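For instance, a Golang builder with one extra build option might look roughly like this (the field names and the go build invocation are assumptions on my part):
type Golang struct {
    path       string
    cgoEnabled bool
}

func (g Golang) Build() error {
    cmd := exec.Command("go", "build", "./...")
    cmd.Dir = g.path
    if !g.cgoEnabled {
        cmd.Env = append(os.Environ(), "CGO_ENABLED=0") // disable cgo for this build
    }
    out, err := cmd.CombinedOutput()
    log.Printf("%s", out)
    return err
}

func (g *Golang) ChangePath(newPath string) error {
    g.path = newPath
    return nil
}

func (g Golang) Path() string {
    return g.path
}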
Edit 3:
About package structure:
First of all, I think literally everything (the Build function, language builders and other utils/helpers) should be put into a single package. They all work for a single task: building a language. There is no need, and hardly any expectation, to isolate any piece of code as another (sub)package.
So that means one dir. The rest is really a matter of personal style, but I will share mine:
I would put the Build function and the Builder interface into a file called main.go. If the front-end code is minimal and very readable, I would put it into main.go as well, but if it is long and has some UI logic, I would put it into a front-end.go or cli.go or ui.go, depending on the actual code.
Next, for each language, I would create a .go file with that language's code. It makes clear where I can check them. Alternatively, if the code is really tiny, it is not a bad idea to put it all together into a builders.go. After all, modern editors are more than capable of jumping to the definitions of structs and types.
Finally, all the copyDir, zipDir functions go into a util.go. That is simple: they are utilities, and most of the time we just don't want to be bothered with them.

Go is not an object-oriented language. This means that by design, you don't necessarily have all the behaviour of a type encapsulated in the type itself. And this is handy when you think that we don't have inheritance.
When you want to build a type upon another type, you use composition instead: a struct can embed other types, and expose their methods.
Let's say that you have a MyZipper type that exposes a Zip(string) method, and a MyCopier that exposes a Copy(string) method:
type Builder struct {
    MyZipper
    MyCopier
}

func (b Builder) Build(path string) error {
    // some code
    err := b.Zip(path)
    if err != nil {
        return err
    }
    err = b.Copy(path)
    if err != nil {
        return err
    }
    return nil
}
And this is composition in Go. Going further, you can even embed non-exported types (e.g. myZipper and myCopier) if you only want them to be called from within the builder package. But then, why embed them in the first place?
You can choose between several different valid designs for your Go project.
solution 1: Single package exposing multiple Builder types
In this case, you want a builder package, that will expose multiple builders.
zip and copy are two functions defined somewhere in the package; they don't need to be methods attached to a type.
package builder
func zip(zip, args string) error {
// zip implementation
}
func cp(copy, arguments string) error {
// copy implementation
}
type NodeBuilder struct{}
func (n NodeBuilder) Build(path string) error {
// node-specific code here
if err := zip(the, args); err != nil {
return err
}
if err := cp(the, args); err != nil {
return err
}
return nil
}
type PythonBuilder struct{}
func (n PythonBuilder) Build(path string) error {
// python-specific code here
if err := zip(the, args); err != nil {
return err
}
if err := cp(the, args); err != nil {
return err
}
return nil
}
solution 2: single package, single type embedding specific behaviour
Depending on the complexity of the specific behaviour, you may not want to change the whole behaviour of the Build function, but just to inject a specific behaviour:
package builder

type Builder struct {
    specificBehaviour func(string) error
}

func (b Builder) Build(path string) error {
    if err := b.specificBehaviour(path); err != nil {
        return err
    }
    if err := zip(the, args); err != nil {
        return err
    }
    if err := cp(the, args); err != nil {
        return err
    }
    return nil
}

func nodeSpecificBehaviour(path string) error {
    // node-specific code here
}

func pythonSpecificBehaviour(path string) error {
    // python-specific code here
}

func NewNode() Builder {
    return Builder{nodeSpecificBehaviour}
}

func NewPython() Builder {
    return Builder{pythonSpecificBehaviour}
}
solution 3: one package per specificBuilder
On the other end of the scale, depending on the package granularity you want to use in your project, you may want to have a distinct package for every builder. With this premise, you want to generalise the shared functionality enough to give it citizenship as a package too. Example:
package node
import (
"github.com/me/myproj/copier"
"github.com/me/myproj/zipper"
)
type Builder struct {
}
func (b Builder) Build(path string) error {
// node-specific code here
if err := zipper.Zip(the, args); err != nil {
return err
}
if err := copier.Copy(the, args); err != nil {
return err
}
return nil
}
solution 4: functions!
If you know your builders will be purely functional, meaning they don't need any internal state, then you may want your Builders to be function types instead of interfaces. You will still be able to manipulate them as a single type from the consumer side, if this is what you want:
package builder
type Builder func(string) error
func NewNode() Builder {
return func(string) error {
// node-specific behaviour
if err := zip(the, args); err != nil {
return err
}
if err := copy(the, args); err != nil {
return err
}
return nil
}
}
func NewPython() Builder {
return func(string) error {
// python-specific behaviour
if err := zip(the, args); err != nil {
return err
}
if err := copy(the, args); err != nil {
return err
}
return nil
}
}
I wouldn't go with functions for your particular case, because you will need to solve very different problems with every Builder, and you will definitely need some state at some point.
... I'll leave you the pleasure to combine together some of these techniques, if you are having a boring afternoon.
Bonus!
Don't be afraid of creating multiple packages, as this will help you design clear boundaries between the types, and take full advantage of encapsulation.
error is an interface type, not a concrete type! You can return nil if you have no errors.
Ideally, you don't define the Builder interface in the builder package: you don't need it. The Builder interface will sit in the consumer package.
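A minimal sketch of that consumer-side interface (the names here are illustrative, assuming the builders expose a Build(string) error method):
package consumer

// Builder is declared where it is consumed; any builder type with this method satisfies it implicitly.
type Builder interface {
    Build(path string) error
}

// BuildAll runs every builder against the same path, stopping at the first error.
func BuildAll(path string, builders ...Builder) error {
    for _, b := range builders {
        if err := b.Build(path); err != nil {
            return err
        }
    }
    return nil
}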

Let's go through each question one by one:
1. Where should I put the zip and copy (in the structure), how should I implement them, and how do I pass them to the builder?
An interface does not carry any data (assuming you wanted to implement one from your code). It is just a blueprint an object can implement in order to pass as a more generic type. In this case, if you are not passing the Builder type around anywhere, the interface is redundant.
2. Should I structure the project differently according to the assumptions?
This is my take on the project. I'll explain each part separately after the code:
package buildeasy
import (
    "errors"
    "log"
    "os/exec"
)
// Builder represents an instance which carries information
// for building a project using command line interface.
type Builder struct {
// Manager is a name of the package manager ("npm", "pip")
Manager string
Cmd string
Args []string
Prefn func(string) error
Postfn func(string) error
}
func zipAndCopyTo(path string) error {
// implement zipping and copy to the provided path
return nil
}
var (
// Each manager specific configurations
// are stored as a Builder instance.
// More fields and values can be added.
// This technique goes hand-in-hand with
// `wrapBuilder` function below, which is
// a technique called "functional options"
// which is considered a cleanest approach in
// building API methods.
// https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis
NodeConfig = &Builder{
Manager: "npm",
Postfn: zipAndCopyTo,
}
PythonConfig = &Builder{
Manager: "pip",
Postfn: zipAndCopyTo,
}
)
// This enum is used by factory function Create to select the
// right config Builder from the array below.
type Manager int
const (
Npm Manager = iota
Pip
// Yarn
// ...
)
var configs = [...]*Builder{
NodeConfig,
PythonConfig,
// YarnConfig,
}
// wrapBuilder accepts an original Builder and a function that can
// accept a Builder and then assign relevant value to the first.
func wrapBuilder(original *Builder, wrapperfn func(*Builder)) error {
if original != nil {
wrapperfn(original)
return nil
}
return errors.New("Original Builder is nil")
}
func New(manager Manager) *Builder {
    builder := new(Builder)
    config := configs[int(manager)]
    // inject / modify properties of builder with the relevant
    // values for the manager we want.
    wrapBuilder(builder, func(b *Builder) {
        *b = *config
    })
    return builder
}
// Now you can have more specific methods like to install.
// notice that it doesn't matter what this Builder is for.
// All information is contained in it already.
func (b *Builder) Install(pkg string) ([]byte, error) {
b.Cmd = "install"
// if package is provided, push its name to the args list
if pkg != "" {
b.Args = append([]string{pkg}, b.Args...)
}
// This emits "npm install [pkg] [args...]"
cmd := exec.Command(b.Manager, (append([]string{b.Cmd}, b.Args...))...)
// default to executing in the current directory
cmd.Dir = "./"
combined, err := cmd.CombinedOutput()
if err != nil {
return nil, err
}
return combined, nil
}
func (b *Builder) Build(path string) error {
    // default the path to a temp folder
    if path == "" {
        path = "path/to/my/temp"
    }
    // maybe prep the source directory? (Prefn is optional)
    if b.Prefn != nil {
        if err := b.Prefn(path); err != nil {
            return err
        }
    }
    // Now you can use Install here
    output, err := b.Install("")
    if err != nil {
        return err
    }
    log.Printf("%s", output)
    // Now zip and copy to where you want (Postfn is optional)
    if b.Postfn != nil {
        if err := b.Postfn(path); err != nil {
            return err
        }
    }
    return nil
}
Now this Builder is generic enough to handle most build commands. Notice the Prefn and Postfn fields. These are hook functions you can run before and after the command runs within Build. Prefn can check if, say, the package manager is installed, and install it if it's not (or just return an error). Postfn can run your zip and copy operations, or any clean-up routine. Here's a use case, provided buildeasy is our fictional package name and the user is using it from outside:
import "github.com/yourname/buildeasy"
func main() {
myNodeBuilder := buildeasy.New(buildeasy.Npm)
myPythonBuilder := buildeasy.New(buildeasy.Pip)
// if you wanna install only
myNodeBuilder.Install("gulp")
// or build the whole thing including pre and post hooks
myPythonBuilder.Build("my/temp/build")
// or get creative with more convenient methods
myNodeBuilder.GlobalInstall("gulp")
}
You can predefine a few Prefns and Postfns and make them available as option for the user of your program, assuming it's a command line program, or if it's a library, have the user write them herself.
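For example, a predefined Prefn might simply verify that the package manager binary exists before Build runs. This is a sketch; the helper name checkManagerInstalled is my own and it assumes fmt and os/exec are imported:
// checkManagerInstalled returns a Prefn that fails fast if the manager binary is missing.
func checkManagerInstalled(manager string) func(string) error {
    return func(path string) error {
        if _, err := exec.LookPath(manager); err != nil {
            return fmt.Errorf("%s is not installed: %v", manager, err)
        }
        return nil
    }
}

// usage (hypothetical): NodeConfig.Prefn = checkManagerInstalled("npm")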
wrapBuilder function
There are a few techniques used in constructing an instance in Go. First, parameters can be passed into a constructor function (this code is for explanation only and not to be used):
func TempNewBuilder(cmd string) *Builder {
builder := new(Builder)
builder.Cmd = cmd
return builder
}
But this approach is very ad-hoc because it is impossible to pass arbitrary values to configure the returned *Builder. A more robust approach is to pass a config instance of *Builder:
func TempNewBuilder(configBuilder *Builder) *Builder {
builder := new(Builder)
builder.Manager = configBuilder.Manager
builder.Cmd = configBuilder.Cmd
// ...
return builder
}
By using wrapBuilder function, you can write a function to handle (re)assigning of values of an instance:
func TempNewBuilder(builder *Builder, configBuilderFn func(*Builder)) *Builder {
    configBuilderFn(builder)
    return builder
}
Now you can pass in any function for configBuilderFn to configure your *Builder instance.
To read more, see https://dave.cheney.net/2014/10/17/functional-options-for-friendly-apis.
configs array
The configs array goes hand-in-hand with the enum of Manager constants. Take a look at the New factory function. The enum constant manager passed in as the parameter is of type Manager, which is just an int underneath. This means all we have to do is look up the right config with configs[int(manager)] and hand it to wrapBuilder:
config := configs[int(manager)]
wrapBuilder(builder, func(b *Builder) { *b = *config })
For instance, if manager == Npm, configs[int(manager)] will return NodeConfig from the configs array.
Structuring package(s)
At this point, it is fine to have the zip and copy functions live in the same package as Build, the way I did. There's little use in prematurely optimizing anything or worrying about that until you have to; that would only introduce more complexity than you want. Optimization comes gradually as you develop the code.
If you feel like structuring the project early is important, you can do it based on the semantics of your API. For instance, to create a new *Builder, it is quite intuitive for the user to call a factory function New or Create from a subpackage such as buildeasy/node:
// This is a user using your `buildeasy` package
import (
"github.com/yourname/buildeasy"
"github.com/yourname/buildeasy/node"
"github.com/yourname/buildeasy/python"
)
var targetDir = "path/to/temp"
func main() {
myNodeBuilder := node.New()
myNodeBuilder.Build(targetDir)
myPythonBuilder := python.New()
myPythonBuilder.Install("tensorflow")
}
Another more verbose approach is to include the semantic as part of the function's name, which is also used in Go's standard packages:
myNodeBuilder := buildeasy.NewNodeBuilder()
myPythonBuilder := buildeasy.NewPipBuilder()
// or
mySecondNodeBuilder := buildeasy.New(buildeasy.Yarn)
In Go's standard packages, verbose function and method names are common. That's because the standard library normally structures sub-packages (subdirectories) for more specific utilities, such as path/filepath, which contains utility functions related to file path manipulation while keeping path's API basic and clean.
Coming back to your project, I would keep most common, more generic functions at the top level directory/package. This is how I would tackle the structure:
buildeasy
├── buildeasy.go
├── python
│   └── python.go
└── node
    └── node.go
While the package buildeasy contains functions like NewNodeBuilder, NewPipBuilder, or just New accepting additional options (like the above code), a subpackage such as buildeasy/node can look like this:
package node
import "github.com/yourname/buildeasy"
func New() *buildeasy.Builder {
return buildeasy.New(buildeasy.Npm)
}
func NewWithYarn() *buildeasy.Builder {
return buildeasy.New(buildeasy.Yarn)
}
// ...
or buildeasy/python:
package python
import "github.com/yourname/buildeasy"
func New() *buildeasy.Builder {
return buildeasy.New(buildeasy.Pip)
}
func NewWithEasyInstall() *buildeasy.Builder {
return buildeasy.New(buildeasy.EasyInstall)
}
// ...
Note that in the subpackages you never have to call buildeasy.zipAndCopyTo, because it is a private function at a lower level than the node and python subpackages should care about. These subpackages act like another layer of API, calling buildeasy's functions and passing in specific values and configurations that make life easier for the user of the API.
Hope this makes sense.

Related

How can I reach struct member in interface type

I have to keep structs of multiple types in a slice and seed them. I take them as a variadic parameter of an interface type and loop over them. If I call a method of the interface it works, but when I try to reach the underlying struct I can't. How can I solve that?
Note: the Seed() method returns the file name of the data.
The Interface:
type Seeder interface {
Seed() string
}
Method:
func (AirportCodes) Seed() string {
return "airport_codes.json"
}
SeederSlice:
seederModelList = []globals.Seeder{
m.AirportCodes{},
m.Term{},
}
And the last one, SeedSchema function:
func (db *Database) SeedSchema(models ...globals.Seeder) error {
var (
subjects []globals.Seeder
fileByte []byte
err error
// tempMember map[string]interface{}
)
if len(models) == 0 {
subjects = seederModelList
} else {
subjects = models
}
for _, model := range subjects {
fileName := model.Seed()
fmt.Printf("%+v\n", model)
if fileByte, err = os.ReadFile("db/seeds/" + fileName); err != nil {
fmt.Println("asd", err)
// return err
}
if err = json.Unmarshal(fileByte, &model); err != nil {
fmt.Println("dsa", err)
// return err
}
modelType := reflect.TypeOf(model).Elem()
modelPtr2 := reflect.New(modelType)
fmt.Printf("%s\n", modelPtr2)
}
return nil
}
I can reach the exact model, but I can't create a member and seed it.
After some back and forth in the comments, I'll just post this minimal answer here. It's by no means a definitive "this is what you do" type answer, but I hope this can at least provide you with enough information to get you started. To get to this point, I've made a couple of assumptions based on the snippets of code you've provided, and I'm assuming you want to seed the DB through a command of sorts (e.g. your_bin seed). That means the following assumptions have been made:
The Schemas and corresponding models/types are present (like AirportCodes and the like)
Each type has its own source file (name comes from Seed() method, returning a .json file name)
Seed data is, therefore, assumed to be in a format like [{"seed": "data"}, {"more": "data"}].
The seed files can be appended, and should the schema change, the data in the seed files could be changed all together. This is of less importance ATM, but still, it's an assumption that should be noted.
OK, so let's start by moving all of the JSON files in a predictable location. In a sizeable, real world application you'd use something like XDG base path, but for the sake of brevity, let's assume you're running this in a scratch container from / and all relevant assets have been copied in to said container.
It'd make sense to have all seed files in the base path under a seed_data directory. Each file contains the seed data for a specific table, and therefore all the data within a file maps neatly onto a single model. Let's ignore relational data for the time being. We'll just assume that, for now, the data in these files is at least internally consistent, and any X-to-X relational data will have the right ID fields allowing for JOINs and the like.
Let's start
So we have our models, and the data in JSON files. Now we can just create a slice of said models, making sure that data that you want/need to be present before other data is inserted is represented as a higher entry (lower index) than the other. Kind of like this:
seederModelList = []globals.Seeder{
m.AirportCodes{}, // seeds before Term
m.Term{}, // seeds after AirportCodes
}
But instead or returning the file name from this Seed method, why not pass in the connection and have the model handle its own data like this:
func (_ AirportCodes) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/airport_codes.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as AirportCode instances
    codes := []*AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT:
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&codes).Error
}
Do the same for other models, like Terms:
func (_ Terms) Seed(db *gorm.DB) error {
    // we know what file this model uses
    data, err := os.ReadFile("seed_data/terms.json")
    if err != nil {
        return err
    }
    // we have the data, we can unmarshal it as Terms instances
    terms := []*Terms{}
    if err := json.Unmarshal(data, &terms); err != nil {
        return err
    }
    // now INSERT, UPDATE, or UPSERT:
    return db.Clauses(clause.OnConflict{
        UpdateAll: true,
    }).Create(&terms).Error
}
Of course, this does result in a bit of a mess considering we have DB access in a model, which should really be just a DTO if you ask me. This also leaves a lot to be desired in terms of error handling, but the basic gist of it would be this:
func main() {
    db, _ := gorm.Open(mysql.Open(dsn), &gorm.Config{}) // omitted error handling for brevity
    seeds := []interface {
        Seed(*gorm.DB) error
    }{
        model.AirportCodes{},
        model.Terms{},
        // etc...
    }
    for _, m := range seeds {
        if err := m.Seed(db); err != nil {
            panic(err)
        }
    }
    // gorm v2 has no Close on *gorm.DB; close the underlying *sql.DB instead
    sqlDB, _ := db.DB()
    sqlDB.Close()
}
OK, so this should get us started, but let's just move this all into something a bit nicer by:
Moving the whole DB interaction out of the DTO/model
Wrap things into a transaction, so we can roll back on error
Update the initial slice a bit to make things cleaner
So as mentioned earlier, I'm assuming you have something like repositories to handle DB interactions in a separate package. Rather than calling Seed on the model, and passing the DB connection into those, we should instead rely on our repositories:
db, _ := gorm.Open() // same as before
acs := repo.NewAirportCodes(db) // pass in connection
tms := repo.NewTerms(db) // again...
Now our model can still return the JSON file name, or we can have that as a const in the repos. At this point, it doesn't really matter. The main thing is, we can have the actual inserting of data done in the repositories.
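A repository along those lines might look roughly like this (a sketch; the NewAirportCodes constructor matches the snippet below, but the struct name and the model import are my own assumptions):
// AirportCodesRepo wraps a gorm connection and knows how to seed its own table.
type AirportCodesRepo struct {
    db *gorm.DB
}

func NewAirportCodes(db *gorm.DB) *AirportCodesRepo {
    return &AirportCodesRepo{db: db}
}

func (r *AirportCodesRepo) Seed() error {
    data, err := os.ReadFile("seed_data/airport_codes.json")
    if err != nil {
        return err
    }
    codes := []*model.AirportCodes{}
    if err := json.Unmarshal(data, &codes); err != nil {
        return err
    }
    // upsert the seed data, same as before, but now the DB access lives in the repo
    return r.db.Clauses(clause.OnConflict{UpdateAll: true}).Create(&codes).Error
}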
You can, if you want, change your seed slice thing to something like this:
calls := []func() error{
acs.Seed, // assuming your repo has a Seed function that does what it's supposed to do
tms.Seed,
}
Then perform all the seeding in a loop:
for _, c := range calls {
if err := c(); err != nil {
panic(err)
}
}
Now, this just leaves us with the issue of the transaction stuff. Thankfully, gorm makes this really rather simple:
db, _ := gorm.Open()
db.Transaction(func(tx *gorm.DB) error {
acs := repo.NewAirportCodes(tx) // create repo's, but use TX for connection
if err := acs.Seed(); err != nil {
return err // returning an error will automatically rollback the transaction
}
tms := repo.NewTerms(tx)
if err := tms.Seed(); err != nil {
return err
}
return nil // commit transaction
})
There's a lot more you can fiddle with here like creating batches of related data that can be committed separately, you can add more precise error handling and more informative logging, handle conflicts better (distinguish between CREATE and UPDATE etc...). Above all else, though, something worth keeping in mind:
Gorm has a migration system
I have to confess that I've not dealt with gorm in quite some time, but IIRC, you can have the tables be auto-migrated if the model changes, and run custom Go code and/or SQL files on startup, which can be used, rather easily, to seed the data. Might be worth looking at the feasibility of that...

How can I duplicate entry keys and show it in the same log with Uber Zap?

I would like to duplicate entry keys, for example caller under another key named method, and show both in the log:
{"level":"error", "caller: "testing/testing.go:1193", "method": "testing/testing.go:1193", "message": "foo"}
Any ideas?
You can't change the fields of a zapcore.Entry. You may change how it is marshalled, but honestly, adding ghost fields to a struct is a bad hack. What you can do is use a custom encoder and append a new string item with a copy of the caller to the []zapcore.Field slice. In particular, the default output of the JSON encoder is obtained from Caller.TrimmedPath():
type duplicateCallerEncoder struct {
zapcore.Encoder
}
func (e *duplicateCallerEncoder) Clone() zapcore.Encoder {
return &duplicateCallerEncoder{Encoder: e.Encoder.Clone()}
}
func (e *duplicateCallerEncoder) EncodeEntry(entry zapcore.Entry, fields []zapcore.Field) (*buffer.Buffer, error) {
// appending to the fields list
fields = append(fields, zap.String("method", entry.Caller.TrimmedPath()))
return e.Encoder.EncodeEntry(entry, fields)
}
Note that the above implements Encoder.Clone(). See this for details: Why custom encoding is lost after calling logger.With in Uber Zap?
And then you can use it by either constructing a new Zap core, or by registering the custom encoder. The registered constructor embeds a JSONEncoder into your custom encoder, which is the default encoder for the production logger:
func init() {
// name is whatever you like
err := zap.RegisterEncoder("duplicate-caller", func(config zapcore.EncoderConfig) (zapcore.Encoder, error) {
return &duplicateCallerEncoder{Encoder: zapcore.NewJSONEncoder(config)}, nil
})
// it's reasonable to panic here, since the program can't initialize
if err != nil {
panic(err)
}
}
func main() {
cfg := zap.NewProductionConfig()
cfg.Encoding = "duplicate-caller"
logger, _ := cfg.Build()
logger.Info("this is info")
}
The above replicates the initialization of a production logger with your custom config.
For such a simple config, I prefer the init() approach with zap.RegisterEncoder. It makes it faster to refactor the code if needed, and/or if you place this in some other package to begin with. You can of course do the registration in main(); or, if you need additional customization, you may use zap.New(zapcore.NewCore(myCustomEncoder, /* other args */)).
You can see the full program in this playground: https://go.dev/play/p/YLDXbdZ-qZP
It outputs:
{"level":"info","ts":1257894000,"caller":"sandbox3965111040/prog.go:24","msg":"this is info","method":"sandbox3965111040/prog.go:24"}

Go use init method inside multiple function

I have the following Go code, which works. I'm creating a VTS property which is used in some files under the same package.
File A creates VTS, which should be used in all of the functions below (in different files under the same package).
File A
package foo
var VTS = initSettings()
func initSettings() *cli.EnvSettings {
conf := cli.New()
conf.RepositoryCache = "/tmp"
return conf
}
In file B I'm using it like:
package foo
func Get(url string, conf *action.Configuration) (*chart.Chart, error) {
cmd := action.NewInstall(conf)
    // Here see the last parameters 
chartLocation, err := cmd.ChartPathOptions.LocateChart(url, VTS)
return loader.Load(chartLocation)
}
File C 
package foo
func Upgrade(ns, name, url string, vals map[string]interface{}, conf *action.Configuration) (*release.Release, error) {
… 
if url == "" {
ch = rel.Chart
} else {
cp, err := client.ChartPathOptions.LocateChart(url, VTS)
if err != nil {
return nil, err
}
ch, err = loader.Load(cp)
}
And so on in additional files under the same package.
Is there a cleaner way to initialize VTS and use it in different files, instead of a package variable?
I've tried something like
func Settings() *cli.EnvSettings {
cfg := cli.New()
cfg.RepositoryCache = "/tmp"
return cfg
}
and pass it as a param, but I got an error:
func GetChart(url string, Settings func(), cfg *action.Configuration) (*chart.Chart, error) {
Just add a *cli.EnvSettings as an additional parameter to Get() and Upgrade(), and then have the caller pass VTS as an argument.
File A
package foo
// InitSettings is exported so callers outside package foo can construct the settings.
func InitSettings() *cli.EnvSettings {
    conf := cli.New()
    conf.RepositoryCache = "/tmp"
    return conf
}
File B
package foo
func Get(url string, conf *action.Configuration, vts *cli.EnvSettings) (*chart.Chart, error) {
cmd := action.NewInstall(conf)
// Here see the last parameters
chartLocation, err := cmd.ChartPathOptions.LocateChart(url, vts)
return loader.Load(chartLocation)
}
File C
package foo
func Upgrade(ns, name, url string, vals map[string]interface{}, conf *action.Configuration, vts *cli.EnvSettings) (*release.Release, error) {
…
if url == "" {
ch = rel.Chart
} else {
cp, err := client.ChartPathOptions.LocateChart(url, vts)
if err != nil {
return nil, err
}
ch, err = loader.Load(cp)
}
File D: Some other file of a higher level package
...
vts := foo.InitSettings()
foo.Get(myUrl, myConf, vts)
Of course, if you want to call foo.Get() or foo.Upgrade() from several other files and packages throughout your project, and if you want all of those calls to use the same *cli.EnvSettings object, you'll likely have to construct VTS at a higher level and pass it around through more functions (i.e. continue the pattern).
In general, this is a form of dependency injection, where foo.Get() and foo.Upgrade() are the "clients" and VTS is the "service". One big advantage of function parameters over package variables is testability. It is difficult to test how foo.Get() behaves with different *cli.EnvSettings objects if said *cli.EnvSettings object is a global / package variable. It's far easier for the tests to decide what *cli.EnvSettings objects to use, and then pass them to foo.Get().
One disadvantage of this pattern is that you can end up with functions with many parameters if they require many injected services, and they can become a bit unwieldy. However, if a function or object truly depends on many services that are truly independent, then there's really no work-around to this anyways. It's usually better to have a function with many parameters than a function that is very difficult to test.
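If the parameter lists do start to grow, one option (my own suggestion, not part of the answer above) is to bundle the injected services into a small struct and hang Get/Upgrade off it:
// Helm bundles the injected settings so they don't have to be threaded through every call.
type Helm struct {
    Settings *cli.EnvSettings
}

func (h *Helm) Get(url string, conf *action.Configuration) (*chart.Chart, error) {
    cmd := action.NewInstall(conf)
    chartLocation, err := cmd.ChartPathOptions.LocateChart(url, h.Settings)
    if err != nil {
        return nil, err
    }
    return loader.Load(chartLocation)
}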

Golang repository pattern

I'm trying to implement the repository pattern in a Go app (a simple web service) and trying to find a better way to avoid code duplication.
Here is a code
Interfaces are:
type IRoleRepository interface {
GetAll() ([]Role, error)
}
type ISaleChannelRepository interface {
GetAll() ([]SaleChannel, error)
}
And implementation:
func (r *RoleRepository) GetAll() ([]Role, error) {
var result []Role
var err error
var rows *sql.Rows
if err != nil {
return result, err
}
connection := r.provider.GetConnection()
defer connection.Close()
rows, err = connection.Query("SELECT Id,Name FROM Position")
defer rows.Close()
if err != nil {
return result, err
}
for rows.Next() {
entity := new(Role)
err = sqlstruct.Scan(entity, rows)
if err != nil {
return result, err
}
result = append(result, *entity)
}
err = rows.Err()
if err != nil {
return result, err
}
return result, err
}
func (r *SaleChannelRepository) GetAll() ([]SaleChannel, error) {
var result []SaleChannel
var err error
var rows *sql.Rows
if err != nil {
return result, err
}
connection := r.provider.GetConnection()
defer connection.Close()
rows, err = connection.Query("SELECT DISTINCT SaleChannel 'Name' FROM Employee")
defer rows.Close()
if err != nil {
return result, err
}
for rows.Next() {
entity := new(SaleChannel)
err = sqlstruct.Scan(entity, rows)
if err != nil {
return result, err
}
result = append(result, *entity)
}
err = rows.Err()
if err != nil {
return result, err
}
return result, err
}
As you can see, the differences are just a few words. I tried to find something like generics from C#, but didn't find it.
Can anyone help me?
No, Go does not have generics and won't have them in the foreseeable future¹.
You have three options:
Refactor your code so that you have a single function which accepts an SQL statement and another function, and:
Queries the DB with the provided statement.
Iterates over the result's rows.
For each row, calls the provided function whose task is to
scan the row.
In this case, you'll have a single generic "querying" function,
and the differences will be solely in "scanning" functions.
Several variations on this are possible, but I suspect you get the idea; see the sketch after this list.
Use the sqlx package which basically is to SQL-driven databases what encoding/json is to JSON data streams: it uses reflection on your types to create and execute SQL to populate them.
This way you'll get reusability on another level: you simply won't write boilerplate code.
Use code generation which is the Go-native way of having "code templates" (that's what generics are about).
This way, you (usually) write a Go program which takes some input (in whatever format you wish), reads it and writes out one or more files which contain Go code, which is then compiled.
In your, very simple, case, you can start with a template of your Go function and some sort of a table which maps SQL statement to the types to create from the data selected.
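Here is a minimal sketch of the first option, assuming r.provider.GetConnection() returns a *sql.DB as in your code (the helper name queryAll is mine):
// queryAll runs the query and calls scan once per row; the per-type scanning stays in the caller.
func queryAll(connection *sql.DB, query string, scan func(*sql.Rows) error) error {
    rows, err := connection.Query(query)
    if err != nil {
        return err
    }
    defer rows.Close()
    for rows.Next() {
        if err := scan(rows); err != nil {
            return err
        }
    }
    return rows.Err()
}

func (r *RoleRepository) GetAll() ([]Role, error) {
    connection := r.provider.GetConnection()
    defer connection.Close()
    var result []Role
    err := queryAll(connection, "SELECT Id,Name FROM Position", func(rows *sql.Rows) error {
        entity := new(Role)
        if err := sqlstruct.Scan(entity, rows); err != nil {
            return err
        }
        result = append(result, *entity)
        return nil
    })
    return result, err
}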
I'd note that your code indeed looks woefully unidiomatic.
No one in their right mind implements "repository patterns" in Go, but that's sort of okay as long as it keeps you happy; we are all indoctrinated to a certain degree by the languages/environments we're accustomed to. But your connection := r.provider.GetConnection() looks alarming: Go's database/sql is drastically different from "popular" environments and frameworks, so I'd highly recommend starting with this and this.
¹ (Update as of 2021-05-31) Go will have generics as the proposal to implement them has been accepted and the work implementing them is in progress.
Forgive me if I'm misunderstanding, but a better pattern might be something like the following:
type RepositoryItem interface {
Name() string // for example
}
type Repository interface {
GetAll() ([]RepositoryItem, error)
}
At the moment, you essentially have one interface per type of repository, so unless you're going to implement multiple types of RoleRepository, you might as well not have the interface.
Having generic Repository and RepositoryItem interfaces might make your code more extensible (not to mention easier to test) in the long run.
A contrived example might be (if we assume a Repository vaguely correlates to a backend) implementations such as MySQLRepository and MongoDBRepository. By abstracting the functionality of the repository, you're protecting against future mutations.
I would very much advise seeing #kostix's answer also, though.
interface{} is the "generic type" in Go. I can imagine doing something like this:
package main
import "fmt"
type IRole struct {
RoleId uint
}
type ISaleChannel struct {
Profitable bool
}
type GenericRepo interface{
GetAll([]interface{})
}
// conceptual repo to store all Roles and SaleChannels
type Repo struct {
IRoles []IRole
ISaleChannels []ISaleChannel
}
func (r *Repo) GetAll(ifs []interface{}) {
// database implementation here before type switch
for _, v := range ifs {
switch v := v.(type) {
default:
fmt.Printf("unexpected type %T\n", v)
case IRole:
fmt.Printf("Role %t\n", v)
r.IRoles = append(r.IRoles, v)
case ISaleChannel:
fmt.Printf("SaleChannel %d\n", v)
r.ISaleChannels = append(r.ISaleChannels, v)
}
}
}
func main() {
getter := new(Repo)
// mock slice
data := []interface{}{
IRole{1},
IRole{2},
IRole{3},
ISaleChannel{true},
ISaleChannel{false},
IRole{4},
}
getter.GetAll(data)
fmt.Println("IRoles: ", getter.IRoles)
fmt.Println("ISaleChannels: ", getter.ISales)
}
This way you don't have to end up with two structs and/or interfaces for IRole and ISaleChannel.

Multiple values in single-value context

Due to error handling in Go, I often end up with multiple values functions. So far, the way I have managed this has been very messy and I am looking for best practices to write cleaner code.
Let's say I have the following function:
type Item struct {
Value int
Name string
}
func Get(value int) (Item, error) {
// some code
return item, nil
}
How can I elegantly assign item.Value to a new variable? Before introducing the error handling, my function just returned item and I could simply do this:
val := Get(1).Value
Now I do this:
item, _ := Get(1)
val := item.Value
Isn't there a way to access directly the first returned variable?
In case of a multi-value return function you can't refer to fields or methods of a specific value of the result when calling the function.
And if one of them is an error, it's there for a reason (which is the function might fail) and you should not bypass it because if you do, your subsequent code might also fail miserably (e.g. resulting in runtime panic).
However there might be situations where you know the code will not fail in any circumstances. In these cases you can provide a helper function (or method) which will discard the error (or raise a runtime panic if it still occurs).
This can be the case if you provide the input values for a function from code, and you know they work.
Great examples of this are the template and regexp packages: if you provide a valid template or regexp at compile time, you can be sure they can always be parsed without errors at runtime. For this reason the template package provides the Must(t *Template, err error) *Template function and the regexp package provides the MustCompile(str string) *Regexp function: they don't return errors because their intended use is where the input is guaranteed to be valid.
Examples:
// "text" is a valid template, parsing it will not fail
var t = template.Must(template.New("name").Parse("text"))
// `^[a-z]+\[[0-9]+\]$` is a valid regexp, always compiles
var validID = regexp.MustCompile(`^[a-z]+\[[0-9]+\]$`)
Back to your case
IF you can be certain Get() will not produce error for certain input values, you can create a helper Must() function which would not return the error but raise a runtime panic if it still occurs:
func Must(i Item, err error) Item {
if err != nil {
panic(err)
}
return i
}
But you should not use this in all cases, just when you're sure it succeeds. Usage:
val := Must(Get(1)).Value
Go 1.18 generics update: Go 1.18 adds generics support, it is now possible to write a generic Must() function:
func Must[T any](v T, err error) T {
if err != nil {
panic(err)
}
return v
}
This is available in github.com/icza/gog, as gog.Must() (disclosure: I'm the author).
Alternative / Simplification
You can even simplify it further if you incorporate the Get() call into your helper function, let's call it MustGet:
func MustGet(value int) Item {
i, err := Get(value)
if err != nil {
panic(err)
}
return i
}
Usage:
val := MustGet(1).Value
See some interesting / related questions:
How to pass multiple return values to a variadic function?
Return map like 'ok' in Golang on normal functions
Yes, there is.
Surprising, huh? You can get a specific value from a multiple return using a simple mute function:
package main
import "fmt"
import "strings"
func µ(a ...interface{}) []interface{} {
return a
}
type A struct {
B string
C func()(string)
}
func main() {
a := A {
B:strings.TrimSpace(µ(E())[1].(string)),
C:µ(G())[0].(func()(string)),
}
fmt.Printf ("%s says %s\n", a.B, a.C())
}
func E() (bool, string) {
return false, "F"
}
func G() (func()(string), bool) {
return func() string { return "Hello" }, true
}
https://play.golang.org/p/IwqmoKwVm-
Notice how you select the value number just like you would from a slice/array and then the type to get the actual value.
You can read more about the science behind that from this article. Credits to the author.
No, but that is a good thing since you should always handle your errors.
There are techniques that you can employ to defer error handling, see Errors are values by Rob Pike.
ew := &errWriter{w: fd}
ew.write(p0[a:b])
ew.write(p1[c:d])
ew.write(p2[e:f])
// and so on
if ew.err != nil {
return ew.err
}
In this example from the blog post he illustrates how you could create an errWriter type that defers error handling till you are done calling write.
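For reference, the errWriter type from that post looks roughly like this (reproduced from memory, so treat it as a sketch):
type errWriter struct {
    w   io.Writer
    err error
}

// write records the first error and turns every later call into a no-op.
func (ew *errWriter) write(buf []byte) {
    if ew.err != nil {
        return
    }
    _, ew.err = ew.w.Write(buf)
}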
No, you cannot directly access the first value.
I suppose a hack for this would be to return an array of values instead of "item" and "err", and then just do
item := Get(1)[0]
but I would not recommend this.
How about this way?
package main
import (
"fmt"
"errors"
)
type Item struct {
Value int
Name string
}
var items []Item = []Item{{Value:0, Name:"zero"},
{Value:1, Name:"one"},
{Value:2, Name:"two"}}
func main() {
var err error
v := Get(3, &err).Value
if err != nil {
fmt.Println(err)
return
}
fmt.Println(v)
}
func Get(value int, err *error) Item {
if value > (len(items) - 1) {
*err = errors.New("error")
return Item{}
} else {
return items[value]
}
}
Here's a generic helper function with assumption checking:
func assumeNoError(value interface{}, err error) interface{} {
if err != nil {
panic("error encountered when none assumed:" + err.Error())
}
return value
}
Since this returns as an interface{}, you'll generally need to cast it back to your function's return type.
For example, the OP's example called Get(1), which returns (Item, error).
item := assumeNoError(Get(1)).(Item)
The trick that makes this possible: Multi-values returned from one function call can be passed in as multi-variable arguments to another function.
As a special case, if the return values of a function or method g are equal in number and individually assignable to the parameters of another function or method f, then the call f(g(parameters_of_g)) will invoke f after binding the return values of g to the parameters of f in order.
This answer borrows heavily from existing answers, but none had provided a simple, generic solution of this form.

Resources