Mock interface return type for dep injection - go

EDIT
As pointed out in the accepted answer, the issue here was applying Go's duck typing in the wrong direction. I'd like to attach the following GitHub issue, since it provided useful information in addition to @matt's answer below:
https://github.com/golang/mock/issues/316#issuecomment-512043925
ORIGINAL POST
I'm new to dependency injection, and wanted to try it on a module that uses the Couchbase Go SDK. For this purpose I need interfaces reproducing both the Cluster and Bucket structures.
On the Cluster interface I need the Bucket() method, which has the following signature:
func (c *gocb.Cluster) Bucket(bucketName string) *gocb.Bucket
I also need the two following methods from the Bucket interface:
func (b *gocb.Bucket) Collection(collectionName string) *gocb.Collection
func (b *gocb.Bucket) DefaultCollection() *gocb.Collection
The tricky part is that both Cluster and Bucket methods have pointer receivers. This isn't hard in itself, since I know how to mock such methods alone (you just need to use a pointer to the type that implements the interface).
The issue is that one of the Cluster methods needs to return a pointer that implements the Bucket interface, since it also has pointer-receiver methods. I tried many combinations, but each time I use a non-mocked *gocb.Cluster value as an argument to one of my functions, it fails because the Bucket method on the cluster instance isn't implemented correctly by the instance.
Below is my latest attempt:
package deps

import (
    "github.com/couchbase/gocb/v2"
)

// Database mocks the gocb.Cluster interface.
type Database interface {
    Bucket(bucketName string) *Bucket
}

// Bucket mocks the gocb.Bucket interface.
type Bucket interface {
    Collection(collectionName string) *gocb.Collection
    DefaultCollection() *gocb.Collection
}
The linter then reports that *gocb.Cluster does not satisfy the Database interface (the Bucket signatures don't match) whenever I try to use an actual gocb.Cluster value.
I also tried to replace the Bucket method signature in my Database interface with:
// Database mocks the gocb.Cluster interface.
type Database interface {
    Bucket(bucketName string) Bucket
}
This again produces a lint error about the mismatched Bucket method signature.
How can I implement an interface to mock both methods?

I think the key concept that you're missing is that the mock object has to match the interface requirements of what you're mocking. That includes the parameters and return values of the methods.
type Database interface {
    // Bucket(bucketName string) *Bucket // Wrong
    Bucket(bucketName string) *gocb.Bucket // Correct
}
You can still use the return value of Database.Bucket as a deps.Bucket, given that you've also mocked that interface properly.
Unless I'm missing something about your testing process, this should do what you need.
package main

import (
    "github.com/couchbase/gocb/v2"
)

// Database mocks the gocb.Cluster interface.
type Database interface {
    Bucket(bucketName string) *gocb.Bucket
}

// Bucket mocks the gocb.Bucket interface.
type Bucket interface {
    Collection(collectionName string) *gocb.Collection
    DefaultCollection() *gocb.Collection
}

func someFunc(db Database) *gocb.Bucket {
    return db.Bucket("")
}

func anotherFunc(bucket Bucket) {
    bucket.Collection("")
}

func main() {
    var cluster *gocb.Cluster
    bucket := someFunc(cluster)
    anotherFunc(bucket)
}
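For the test side, a minimal hand-written stub of the Database interface might look like the sketch below (the stub name is hypothetical; it returns whatever *gocb.Bucket you hand it, which can stay nil if the test never dereferences it):

// stubDatabase is a hypothetical test double satisfying Database.
type stubDatabase struct {
    bucket *gocb.Bucket
}

func (s *stubDatabase) Bucket(bucketName string) *gocb.Bucket {
    return s.bucket
}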

Related

A repository/store for database as interface or per table interface?

What I designed first was a Store interface, as follows:

// store.go
type Store interface {
    CreateUser(user model.User) (string, error)
    GetProfile(userId string) (model.User, error)
    CreateHouse(house model.House) (string, error)
}
And in another file, mongo_store.go, its implementation codes:
type mongoStore struct {
    store *mongo.Client
}

func (mc *mongoStore) CreateUser(user model.User) (string, error) {
    // ...
}

// And so on...
In mongo_store.go I also have a function that returns an instance of mongoStore:
func NewMongoDBStore() Store {
    // Some code to connect to MongoDB and finally:
    s := &mongoStore{
        store: client,
    }
    return s
}
I went this way to abstract away the DB layer, so in code we pass the store around and call, say, CreateUser on it.
My team members objected to this design and proposed creating a Store interface per table. So we would have a UserStore interface with its own methods, a HouseStore with its own methods, and so on.
My first question: is it best practice to change the code this way? I could not come up with a good argument to reject their change request. Their argument was that this way we mock less code in tests, and that the interfaces stay unpolluted instead of gathering every DB method in one place.
My second question: if we go with the second approach, how should NewMongoDBStore return the different store types? Instead of Store as the return type, we would need different types like UserStore, HouseStore, etc.
I always try to stick to one rule when designing new interfaces in Go: keep interfaces as small as possible. You can see that stdlib also tries to follow that rule, see for example fmt.Stringer, http.Handler or json.Marshaler. Look how in the json library they even separated json.Marshaler and json.Unmarshaler (same for the io.Reader and io.Writer), which, you can say, seem to be very connected together.
Coming back to your example, I think your team makes a good point - I would go for the separation of the storage interfaces. The only situation in which I wouldn't is if you are sure that the interface will never expand and will always stick to this very limited number of methods. But that is very unlikely for storage-like interfaces. For example, in the near future you might want to add finer-grained filtering methods, or e.g. a method to insert storage objects in a batch.
In my opinion you can only benefit from separating the interfaces and here is why:
It's true that it is easier to mock an interface with 1-2 methods than an interface with, let's say, 10 methods.
It's always better to separate functionality into smaller pieces, as you may not need all of it at once in every place. To give you a better picture, you can have one service which uses your UserStore and HouseStore implementations, but also a second service that doesn't need a HouseStore and only uses a UserStore. Thanks to that, it is much easier to mock the second service (as it uses only a UserStore), and if you later add methods to the HouseStore there is no way it can affect the second service, as it knows nothing about that interface (see the sketch below).
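To make that point concrete, here is a hedged sketch (the service names are hypothetical; the interfaces are defined just below):

// AuthService depends only on the narrow UserStore interface.
type AuthService struct {
    users UserStore
}

// ListingService needs both stores, so it declares both.
type ListingService struct {
    users  UserStore
    houses HouseStore
}

A test for AuthService now only has to fake UserStore; changes to HouseStore cannot touch it.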
I think the above answers your first question. Coming to the second question, I think you can solve it in two ways:
The first way is something I usually do: simply create separate implementations for separate interfaces. So if you have, following your example, a file store.go containing the interfaces:
type UserStore interface {
    CreateUser(user model.User) (string, error)
    // Rest of the methods ...
}

type HouseStore interface {
    CreateHouse(house model.House) (string, error)
    // Rest of the methods ...
}
I would make a user_mongo_store.go with MongoDB implementation for the UserStore ...
type userMongoStore struct {
    store *mongo.Client
}

func (s *userMongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func NewUserMongoStore() UserStore {
    // Some code to connect to MongoDB and finally:
    s := &userMongoStore{
        store: client,
    }
    return s
}

// Rest of the UserStore methods implementations ...
... and I would also make a house_mongo_store.go file with MongoDB implementation for the HouseStore:
type houseMongoStore struct {
    store *mongo.Client
}

func (s *houseMongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

func NewHouseMongoStore() HouseStore {
    // Some code to connect to MongoDB and finally:
    s := &houseMongoStore{
        store: client,
    }
    return s
}

// Rest of the HouseStore methods implementations ...
You might ask whether it will feel inconvenient to keep two separate MongoDB storage implementations, as they could contain the same MongoDB-related operations. The answer is no: you can always create e.g. a mongo_store.go to hold the common functions shared by all the MongoDB storage implementations.
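For instance, a minimal sketch of such a shared helper (the function name is hypothetical; it assumes the official mongo-driver client):

// mongo_store.go - helpers shared by all MongoDB store implementations.
func ensureConnected(ctx context.Context, client *mongo.Client) error {
    // Ping with a nil read preference uses the client's default.
    return client.Ping(ctx, nil)
}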
The only disadvantage I can see here is a little bit more code in general, but in the end it gives you much cleaner, better separated and more modular code.
The second way, which I would recommend less, uses what is (in my opinion) a very powerful Go feature: you never declare that a type implements an interface (unlike in e.g. Java); you just implement all of the interface's methods, and the struct can then be used as an implementation of all of those interfaces. In your case you could stick to the single mongoStore struct and have it implement the methods of both the UserStore and the HouseStore interfaces. You would end up with something like this:
type mongoStore struct {
    store *mongo.Client
}

func (s *mongoStore) CreateUser(user model.User) (string, error) {
    // CreateUser method implementation ...
}

func (s *mongoStore) CreateHouse(house model.House) (string, error) {
    // CreateHouse method implementation ...
}

// Rest of the UserStore and HouseStore
// interfaces methods implementations ...
But this solution leaves us with a problem: how do we write a function that creates the UserStore and HouseStore implementations? You could either export the mongoStore struct and use it directly as both the UserStore and the HouseStore implementation, or - which looks a little more exotic but is still a valid piece of code - write a function that returns the single struct as both implementations, e.g.:
func NewMongoStores() (UserStore, HouseStore) {
    // Some code to connect to MongoDB and finally:
    s := &mongoStore{
        store: client,
    }
    return s, s
}
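Callers then unpack both implementations from the single constructor:

userStore, houseStore := NewMongoStores()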
I think I gave you some options, but to sum up, I would encourage you to keep your interfaces and their implementations separated.

How can I separate generated code package and user code but have them accessible from one place in code

I'm fairly new to Go, so I bought some courses from Udemy to help break into the language. One of them I found very helpful for a general understanding as I took on a project in the language.
In the class I took, all of the SQL-related functions live in the sqlc folder, with the structure less broken out:
sqlc
    generatedcode
    store
One of those files is a querier that is generated by sqlc that contains an interface with all of the methods that were generated. Here is the general idea of what it currently looks like: https://github.com/techschool/simplebank/tree/master/db/sqlc
package db

import (
    "context"

    "github.com/google/uuid"
)

type Querier interface {
    AddAccountBalance(ctx context.Context, arg AddAccountBalanceParams) (Account, error)
    CreateAccount(ctx context.Context, arg CreateAccountParams) (Account, error)
    // ...
}

var _ Querier = (*Queries)(nil)
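That last line is a compile-time assertion: it assigns a nil *Queries to a discarded package-level Querier variable, so the build fails the moment *Queries stops satisfying the interface. The same pattern works for any interface/implementation pair, e.g.:

// Compile-time check that *MyImpl still satisfies MyInterface (hypothetical names).
var _ MyInterface = (*MyImpl)(nil)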
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier? I'm also trying to have it so that the sqlc generated code is in its own folder. The structure I am aiming for is:
sql
    sqlc
        generatedcode
    store - (wraps it all together)
    dynamicsqlfiles
This should clear up what I mean by store: https://github.com/techschool/simplebank/blob/master/db/sqlc/store.go
package db

import (
    "context"
    "database/sql"
    "fmt"
)

// Store defines all functions to execute db queries and transactions
type Store interface {
    Querier
    TransferTx(ctx context.Context, arg TransferTxParams) (TransferTxResult, error)
}

// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
    db *sql.DB
    *Queries
}

// NewStore creates a new store
func NewStore(db *sql.DB) Store {
    return &SQLStore{
        db:      db,
        Queries: New(db),
    }
}
I'm trying to run everything through that store (both generated and my own functions), so I can make a call similar to the server.store.CreateUser call in this file: https://github.com/techschool/simplebank/blob/master/api/user.go
arg := db.CreateUserParams{
    Username:       req.Username,
    HashedPassword: hashedPassword,
    FullName:       req.FullName,
    Email:          req.Email,
}

user, err := server.store.CreateUser(ctx, arg)
if err != nil {
    if pqErr, ok := err.(*pq.Error); ok {
        switch pqErr.Code.Name() {
        case "unique_violation":
            ctx.JSON(http.StatusForbidden, errorResponse(err))
            return
        }
    }
    ctx.JSON(http.StatusInternalServerError, errorResponse(err))
    return
}
I've tried creating something that houses another Querier interface that embeds the generated one, then creating my own db.go that uses the generated DBTX interface but has its own Queries struct and New function. It always gives me an error that the Queries struct I created doesn't implement the functions I wrote, despite implementing them in one of my custom methods.
I deleted that branch, and have been clicking through the simplebank project linked above to see if I can find another way this could be done, or if I missed something. If it can't be done, that's okay. I'm just using this as a good opportunity to learn a little more about the language, and keep some code separated if possible.
UPDATE:
There were only a few pieces I had to change, but I modified the store.go to look more like:
// sdb is imported, but points to the generated Querier

// Store provides all functions to execute db queries and transactions
type Store interface {
    sdb.Querier
    DynamicQuerier
}

// SQLStore provides all functions to execute SQL queries and transactions
type SQLStore struct {
    db *sql.DB
    *sdb.Queries
    *dynamicQueries
}

// NewStore creates a new Store
func NewStore(db *sql.DB) Store {
    return &SQLStore{
        db:             db,
        Queries:        sdb.New(db),
        dynamicQueries: New(db),
    }
}
Then just created a new Querier and struct for the methods I would be creating. Gave them their own New function, and tied it together in the above. Before, I was trying to figure out a way to reuse as much of the generated code as possible, which I think was the issue.
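A hedged sketch of what that dynamic side might look like (the interface name follows the update above; the query method and its body are hypothetical):

// DynamicQuerier lists the hand-written queries, mirroring the generated Querier.
type DynamicQuerier interface {
    ActiveUsernames(ctx context.Context) ([]string, error) // hypothetical query
}

type dynamicQueries struct {
    db *sql.DB
}

// New ties the hand-written queries to the same *sql.DB.
func New(db *sql.DB) *dynamicQueries {
    return &dynamicQueries{db: db}
}

func (q *dynamicQueries) ActiveUsernames(ctx context.Context) ([]string, error) {
    // Hand-written SQL would go here.
    return nil, nil
}

// Compile-time check, the same trick the generated code uses.
var _ DynamicQuerier = (*dynamicQueries)(nil)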
Why I wanted the Interface:
I wanted a structure that separated the files I would be working in from the (generated) files that I would never touch.
I like how the generated code put everything in the Querier interface, then checked that anything implementing it satisfied all of the function requirements. So I wanted to replicate that for the dynamic portion which I would be creating on my own.
It might be complicating it a bit more than it would 'NEED' to be, but it also provides an additional set of error checking that is nice to have. And in this case, even while maybe not necessary, it ended up being doable.
Would it be possible to wrap both what sqlc generates AND any queries that a developer creates (dynamic queries) into a single querier?
If I'm understanding your question correctly I think that you are looking for something like the below (playground):
package main

import (
    "context"
    "database/sql"
)

// Sample sqlc-generated code
type DBTX interface {
    ExecContext(context.Context, string, ...interface{}) (sql.Result, error)
    PrepareContext(context.Context, string) (*sql.Stmt, error)
    QueryContext(context.Context, string, ...interface{}) (*sql.Rows, error)
    QueryRowContext(context.Context, string, ...interface{}) *sql.Row
}

type Queries struct {
    db DBTX
}

func (q *Queries) DeleteAccount(ctx context.Context, id int64) error {
    // _, err := q.db.ExecContext(ctx, deleteAccount, id)
    // return err
    return nil // Pretend that this always works
}

type Querier interface {
    DeleteAccount(ctx context.Context, id int64) error
}

//
// Your custom "dynamic" queries
//

type myDynamicQueries struct {
    db DBTX
}

func (m *myDynamicQueries) GetDynamicResult(ctx context.Context) error {
    // _, err := m.db.ExecContext(ctx, getDynamicResult)
    // return err
    return nil // Pretend that this always works
}

type myDynamicQuerier interface {
    GetDynamicResult(ctx context.Context) error
}

// Combine things
type allDatabase struct {
    *Queries // Note: You could embed this directly into myDynamicQueries instead of having a separate struct if that is your preference
    *myDynamicQueries
}

type DatabaseFunctions interface {
    Querier
    myDynamicQuerier
}

func main() {
    // Basic example
    var db DatabaseFunctions
    db = getDatabase()
    db.DeleteAccount(context.Background(), 0)
    db.GetDynamicResult(context.Background())
}

// getDatabase - Perform whatever is needed to connect to database...
func getDatabase() allDatabase {
    sqlc := &Queries{db: nil}           // In reality you would use New() to do this!
    myDyn := &myDynamicQueries{db: nil} // Again it's often cleaner to use a function
    return allDatabase{Queries: sqlc, myDynamicQueries: myDyn}
}
The above is all in one file for simplicity but could easily pull from multiple packages e.g.
type allDatabase struct {
    *generatedcode.Queries
    *store.MyDynamicQueries // note: the type must be exported to be embedded from another package
}
If this does not answer your question then please show one of your failed attempts (so we can see where you are going wrong).
One general comment - do you really need the interface? A common recommendation is "Accept interfaces, return structs". While this may not always apply I suspect you may be introducing interfaces where they are not really necessary and this may add unnecessary complexity.
I thought that the Store, which was housing both Queriers, was tying it all together. Can you explain a little with the example above (in the question post) why it's not necessary? How does SQLStore get access to all of the Querier interface functions?
The struct SQLStore is what is "tying it all together". As per the Go spec:
Given a struct type S and a named type T, promoted methods are included in the method set of the struct as follows:
If S contains an embedded field T, the method sets of S and *S both include promoted methods with receiver T. The method set of *S also includes promoted methods with receiver *T.
If S contains an embedded field *T, the method sets of S and *S both include promoted methods with receiver T or *T.
So an object of type SQLStore:
type SQLStore struct {
    db *sql.DB
    *sdb.Queries
    *dynamicQueries
}

var foo SQLStore // Assume that we are actually providing values for all fields
Will implement all of the methods of sdb.Queries and, also, those in dynamicQueries (you can also access the sql.DB members via foo.db.XXX). This means that you can call foo.AddAccountBalance() and foo.MyGenericQuery() (assuming that is in dynamicQueries!) etc.
The spec says "In its most basic form an interface specifies a (possibly empty) list of methods". So you can think of an interface as a list of functions that must be implemented by whatever implementation (e.g. struct) you assign to the interface (the interface itself does not implement anything directly).
The example below might help you understand.
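A minimal, self-contained sketch of promotion (names are hypothetical):

package main

import "fmt"

type Inner struct{}

// Hello has receiver Inner, so it is promoted into structs that embed Inner.
func (Inner) Hello() string { return "hello" }

type Outer struct {
    Inner // embedded field: Outer's method set now includes Hello
}

func main() {
    var o Outer
    fmt.Println(o.Hello()) // calls the promoted method via Outer
}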
Hopefully that helps a little (as I'm not sure which aspect you don't understand I'm not really sure what to focus on).

Creating Per-Provider Loggers in Wire Dependency Injection

I'm using github.com/google/wire for dependency injection in an open source example project that I'm working on.
I have the following interfaces in a package named interfaces:
type LoginService interface {
    Login(email, password string) (*LoginResult, error)
}

type JWTService interface {
    Generate(user *models.User) (*JWTGenerateResult, error)
    Validate(tokenString string) (*JWTValidateResult, error)
}

type UserDao interface {
    ByEmail(email string) (*models.User, error)
}
I have implementations that look like this:
type LoginServiceImpl struct {
    jwt    interfaces.JWTService
    dao    interfaces.UserDao
    logger *zap.Logger
}

func NewLoginService(jwt interfaces.JWTService, dao interfaces.UserDao,
    logger *zap.Logger) *LoginServiceImpl {
    return &LoginServiceImpl{jwt: jwt, dao: dao, logger: logger}
}

type JWTServiceImpl struct {
    key    [32]byte
    logger *zap.Logger
}

func NewJWTService(key [32]byte, logger *zap.Logger) (*JWTServiceImpl, error) {
    r := JWTServiceImpl{
        key:    key,
        logger: logger,
    }
    if !r.safe() {
        return nil, fmt.Errorf("unable to create JWT service: unsafe key")
    }
    return &r, nil
}

type UserDaoImpl struct {
    db     *gorm.DB
    logger *zap.Logger
}

func NewUserDao(db *gorm.DB, logger *zap.Logger) *UserDaoImpl {
    return &UserDaoImpl{db: db, logger: logger}
}
I'll exclude other factory functions and implementations here because they all look very similar. They may return an error or be infallible.
I have one other interesting factory for creating the database connection; I'll just show its signature and not the implementation:
func Connect(config interfaces.MySQLConfig) (*gorm.DB, error) { /* ... */ }
Now, onto the problem. In my command-line entry-point, I'm creating a logger:
logger, err := zap.NewDevelopment()
For each of the factory methods above, I need to provide a logger, and not the same logger instance for each; rather, as if these methods were called as follows:
logger, err := zap.NewDevelopment()
// check err
db, err := database.Connect(config)
// check err
userDao := dao.NewUserDao(db, logger.Named("dao.user"))
jwtService, err := service.NewJWTService(jwtKey, logger.Named("service.jwt"))
// check err
loginService := service.NewLoginService(jwtService, userDao, logger.Named("service.login"))
My wire.ProviderSet construction looks like this:
wire.NewSet(
    wire.Bind(new(interfaces.LoginService), new(*service.LoginServiceImpl)),
    wire.Bind(new(interfaces.JWTService), new(*service.JWTServiceImpl)),
    wire.Bind(new(interfaces.UserDao), new(*dao.UserDaoImpl)),
    service.NewLoginService,
    service.NewJWTService,
    dao.NewUserDao,
    database.Connect,
)
I've read through the user guide, the tutorial, and best practices, and I can't seem to find a way to route a unique zap.Logger to each of these factory methods, or to route a random [32]byte to the JWT service.
Since my root logger is not created at compile time, and since each of these factory methods needs its own unique logger, how do I tell wire to bind these instances to the corresponding factory methods? I'm having a tough time seeing how to route custom instances of the same type to disparate factory methods.
In summary:
Wire seems to favor doing everything at compile-time, storing the dependency injection configuration in a static package-level variable. For most of my use-case, this is okay.
For the rest of my use-case, I need to create some instances manually before running the dependency injection and the ability to route various *zap.Logger instances to each service that needs it.
Essentially, I need wire to do services.NewUserDao(Connect(mysqlConfig), logger.Named("dao.user")), but I don't know how to express this in wire and merge runtime variables with wire's compile-time approach.
How do I do this in wire?
I had to change what I was doing somewhat, as is recommended in the documentation:
If you need to inject a common type like string, create a new string type to avoid conflicts with other providers. For example:
type MySQLConnectionString string
Adding Custom Types
The documentation is admittedly very terse, but what I ended up doing is creating a bunch of types:
type JWTKey [32]byte
type JWTServiceLogger *zap.Logger
type LoginServiceLogger *zap.Logger
type UserDaoLogger *zap.Logger
Updating Producer Functions
I updated my producer methods to accept these types, but did not have to update my structs:
// LoginServiceImpl implements interfaces.LoginService
var _ interfaces.LoginService = (*LoginServiceImpl)(nil)

type LoginServiceImpl struct {
    dao    interfaces.UserDao
    jwt    interfaces.JWTService
    logger *zap.Logger
}

func NewLoginService(dao interfaces.UserDao, jwt interfaces.JWTService,
    logger LoginServiceLogger) *LoginServiceImpl {
    return &LoginServiceImpl{
        dao:    dao,
        jwt:    jwt,
        logger: logger,
    }
}
This above part made sense; giving distinct types meant that wire had less to figure out.
Creating an Injector
Next, I had to create the dummy injector and then use wire to generate the corresponding wire_gen.go. This was not easy and very unintuitive. When following the documentation, things kept breaking and giving me very unhelpful error messages.
I have a cmd/ package and my CLI entrypoint lives in cmd/serve/root.go, which is run as ./api serve from the command-line. I created my injector function in cmd/serve/injectors.go, note that // +build wireinject and the following newline are required to inform Go that this file is used for code generation and not code itself.
I ultimately arrived at the following code after much trial and error:
// +build wireinject

package serve

import /* ... */

func initializeLoginService(
    config interfaces.MySQLConfig,
    jwtKey service.JWTKey,
    loginServiceLogger service.LoginServiceLogger,
    jwtServiceLogger service.JWTServiceLogger,
    userDaoLogger service.UserDaoLogger,
    databaseLogger database.DatabaseLogger,
) (interfaces.LoginService, error) {
    wire.Build(
        // bind interfaces to implementations
        wire.Bind(new(interfaces.LoginService), new(*service.LoginServiceImpl)),
        wire.Bind(new(interfaces.JWTService), new(*service.JWTServiceImpl)),
        wire.Bind(new(interfaces.UserDao), new(*dao.UserDaoImpl)),
        // services
        service.NewLoginService,
        service.NewJWTService,
        // daos
        dao.NewUserDao,
        // database
        database.Connect,
    )
    return nil, nil
}
The wire.Bind calls inform wire which implementation to use for a given interface, so it knows that service.NewLoginService, which returns a *LoginServiceImpl, should be used as the interfaces.LoginService.
The rest of the entities in the call to wire.Build are just factory functions.
Passing Values to an Injector
One of the issues I ran into was that I was trying to pass values into wire.Build as the documentation describes:
Occasionally, it is useful to bind a basic value (usually nil) to a type. Instead of having injectors depend on a throwaway provider function, you can add a value expression to a provider set.
type Foo struct {
    X int
}

func injectFoo() Foo {
    wire.Build(wire.Value(Foo{X: 42}))
    return Foo{}
}
...
It's important to note that the expression will be copied to the injector's package; references to variables will be evaluated during the injector package's initialization. Wire will emit an error if the expression calls any functions or receives from any channels.
This is what confused me; it sounded like you could only really use constant values when trying to run an injector, but there are two lines in the docs in the "injectors" section:
Like providers, injectors can be parameterized on inputs (which then get sent to providers) and can return errors. Arguments to wire.Build are the same as wire.NewSet: they form a provider set. This is the provider set that gets used during code generation for that injector.
These lines are accompanied by this code:
func initializeBaz(ctx context.Context) (foobarbaz.Baz, error) {
    wire.Build(foobarbaz.MegaSet)
    return foobarbaz.Baz{}, nil
}
This is what I missed and what caused me to lose a lot of time on this. context.Context doesn't seem to be passed anywhere in this code, and it's a common type so I just kind of shrugged it off and didn't learn from it.
I defined my injector function to take arguments for the JWT key, the MySQL config, and the logger types:
func initializeLoginService(
    config interfaces.MySQLConfig,
    jwtKey service.JWTKey,
    loginServiceLogger service.LoginServiceLogger,
    jwtServiceLogger service.JWTServiceLogger,
    userDaoLogger service.UserDaoLogger,
    databaseLogger database.DatabaseLogger,
) (interfaces.LoginService, error) {
    // ...
    return nil, nil
}
Then, I attempted to inject them into wire.Build:
wire.Build(
    // ...
    wire.Value(config),
    wire.Value(jwtKey),
    wire.Value(loginServiceLogger),
    // ...
)
When I attempted to run wire, it complained that these types were defined twice. I was very confused by this behavior, but ultimately learned that wire automatically sends all function parameters into wire.Build.
Once again: wire automatically sends all injector function parameters into wire.Build.
This was not intuitive to me, but I learned the hard way that it's the way wire works.
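Putting it together, the call site in the CLI entry-point might look like this (a sketch following the names above; the conversions to the wrapper types are what let wire route each logger, and key/config come from wherever you load them):

logger, err := zap.NewDevelopment()
// check err
loginService, err := initializeLoginService(
    config,
    service.JWTKey(key),
    service.LoginServiceLogger(logger.Named("service.login")),
    service.JWTServiceLogger(logger.Named("service.jwt")),
    service.UserDaoLogger(logger.Named("dao.user")),
    database.DatabaseLogger(logger.Named("database")),
)
// check err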
Summary
wire does not provide a way for it to distinguish values of the same type within its dependency injection system. Thus, you need to wrap these simple types with type definitions to let wire know how to route them, so instead of [32]byte, type JWTKey [32]byte.
To inject live values into your wire.Build call, simply change your injector function signature to include those values in the function parameters and wire will automatically inject them into wire.Build.
Run cd pkg/my/package && wire to create wire_gen.go in that directory for your defined injectors. Once this is done, future calls to go generate will automatically update wire_gen.go as changes occur.
I have wire_gen.go files checked into my version control system (Git), which feels weird since they are generated build artifacts, but this seems to be the way it is typically done. It might be more advantageous to exclude wire_gen.go, but if you do, you'll need to find every package containing a file with a // +build wireinject header, run wire in that directory, and then run go generate just to be sure.
Hopefully this clears up the way that wire works with actual values: make them type safe with type wrappers, and simply pass them to your injector function, and wire does the rest.

How do I improve the testability of go library methods

I'm writing some code that uses a library called Vault. In this library we have a Client. My code makes use of this Client but I want to be able to easily test the code that uses it. I use only a couple methods from the library so I ended up creating an interface:
type VaultClient interface {
    Logical() *api.Logical
    SetToken(v string)
    NewLifetimeWatcher(i *api.LifetimeWatcherInput) (*api.LifetimeWatcher, error)
}
Now if my code is pointed at this interface, everything is easily testable - except let's look at the Logical() method. It returns a struct. My issue is that this Logical struct also has methods on it that allow you to Read, Write, e.g.:
func (c *Logical) Read(path string) (*Secret, error) {
    return c.ReadWithData(path, nil)
}
and these are being used in my project as well to do something like:
{{ VaultClient defined above }}.Logical().Write("something", something)
Here is the issue. The Logical returned from the call to .Logical() has a .Write and .Read method that I can't reach to mock. I don't want all the logic within those methods to run in my tests.
Ideally I'd like to do something similar to what I did above and create an interface for Logical as well. I'm relatively new to Go, and I'm struggling with the best approach here. From what I can tell it's not possible: embedding doesn't work like inheritance, so it seems I have to return a Logical. That leaves my code unable to be tested as simply as I would like, because the logic within Logical's methods can't be mocked.
I'm sort of at a loss here. I have scoured Google for an answer to this but nobody ever talks about this scenario. They only go as far as I went with the initial interface for the client.
Is this a common scenario? Other libraries I've used don't return structs like Logical. Instead they typically just return a bland struct that holds data and has no methods.
package usecasevaultclient

// usecase.go

type VaultClient interface {
    Logical() *api.Logical
    SetToken(v string)
    NewLifetimeWatcher(i *api.LifetimeWatcherInput) (*api.LifetimeWatcher, error)
}

type vaultClient struct {
    repo RepoVaultClient
}

// create a new injection
func NewVaultClient(repo RepoVaultClient) VaultClient {
    return &vaultClient{repo}
}

func (u *vaultClient) Logical() *api.Logical {
    // do your logic and call the repo
    u.repo.ReadData()
    u.repo.WriteData()
    return nil // return whatever your logic requires here
}

func (u *vaultClient) SetToken(v string) {}

func (u *vaultClient) NewLifetimeWatcher(i *api.LifetimeWatcherInput) (*api.LifetimeWatcher, error) {
    return nil, nil
}

// interfaces.go

type RepoVaultClient interface {
    ReadData() error
    WriteData() error
}
// repo_vaultclient_mock.go

import "github.com/stretchr/testify/mock"

type MockRepoVaultClient struct {
    mock.Mock
}

func (m *MockRepoVaultClient) ReadData() error {
    args := m.Called()
    return args.Error(0)
}

func (m *MockRepoVaultClient) WriteData() error {
    args := m.Called()
    return args.Error(0)
}
// vaultClient_test.go

func TestLogicalShouldBeSuccess(t *testing.T) {
    mockRepoVaultClient := &MockRepoVaultClient{}
    useCase := NewVaultClient(mockRepoVaultClient)

    mockRepoVaultClient.On("ReadData").Return(nil)
    mockRepoVaultClient.On("WriteData").Return(nil)

    // your logic produces the actual response from what you implemented
    response := useCase.Logical()
    assert.Equal(t, expected, response) // expected: whatever Logical should return
}
If you want to test the interface of Logical, you need to mock ReadData and WriteData with testify/mock, so you can define what those methods return and compare the result after calling your injected implementation.

Golang monkey patching

I understand that if Go code is structured such that it's programmed to interfaces, it's trivial to mock; however, I'm working with a code base that I cannot change (it is not mine), and this is not the case.
This code base is heavily interconnected and nothing is programmed to an interface, only structs, so no dependency injection.
The structs themselves only contain other structs, so I can't mock out that way either. I don't believe I can do anything about methods, and the few functions that exist are not variables, so there's no way I know of to swap them out. Inheritance isn't a thing in Go, so that's a no-go as well.
In scripting languages like python, we can modify the objects at runtime, aka monkey patch. Is there something comparable that I can do in golang? Trying to figure out some way to test/benchmark without touching the underlying code.
When I have run into this situation, my approach is to use my own interface as a wrapper, which allows mocking in tests. For example:
type MyInterface interface {
    DoSomething(i int) error
    DoSomethingElse() ([]int, error)
}

type Concrete struct {
    client *somepackage.Client
}

func (c *Concrete) DoSomething(i int) error {
    return c.client.DoSomething(i)
}

func (c *Concrete) DoSomethingElse() ([]int, error) {
    return c.client.DoSomethingElse()
}
Now you can mock the Concrete in the same way you would mock somepackage.Client if it too were an interface.
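A hedged sketch of such a hand-rolled mock (the type name is hypothetical; its method set simply mirrors MyInterface above):

// mockClient is a hypothetical test double for MyInterface.
type mockClient struct {
    doSomethingErr error
}

func (m *mockClient) DoSomething(i int) error { return m.doSomethingErr }

func (m *mockClient) DoSomethingElse() ([]int, error) { return []int{1, 2, 3}, nil }

Code under test that accepts a MyInterface can now be handed either a *Concrete or a *mockClient.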
As pointed out in the comments below by @elithrar, you can embed the type you want to mock so you are only forced to add the methods which need mocking. For example:
type Concrete struct {
    *somepackage.Client
}
When done like that, additional methods like DoSomethingNotNeedingMocking could be called directly on Concrete without having to add it to the interface / mock it out.
There is an available monkey patching library for Go. It only works for Intel/AMD systems (targeting OSX and Ubuntu in particular).
Depending on the situation, you can apply the "Dependency Inversion Principle" and leverage Go's implicit interfaces.
To do this, you define an interface of your requirements in the package with the usage (as opposed to defining what you provide in the package that implements it; like you might in Java).
Then you can test your code in isolation from the dependency.
Typical object with a struct dependency:
// Object that relies on a struct
type ObjectUnderTestBefore struct {
    db *sql.DB
}

func (o *ObjectUnderTestBefore) Delete() error {
    _, err := o.db.Exec("DELETE FROM sometable")
    return err
}
Apply Dependency Inversion Principle (with implicit interface)
// subset of sql.DB which defines our "requirements"
type dbExec interface {
    Exec(query string, args ...interface{}) (sql.Result, error)
}

// Same object with its requirement defined as a local interface
type ObjectUnderTestWithDIP struct {
    // *sql.DB will implicitly implement this interface
    db dbExec
}

func (o *ObjectUnderTestWithDIP) Delete() error {
    _, err := o.db.Exec("DELETE FROM sometable")
    return err
}
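With the requirement narrowed to dbExec, a test can substitute a trivial fake; a sketch assuming the types above (the fake's name is hypothetical):

// fakeExec records the query instead of hitting a real database.
type fakeExec struct {
    gotQuery string
}

func (f *fakeExec) Exec(query string, args ...interface{}) (sql.Result, error) {
    f.gotQuery = query
    return nil, nil
}

func TestDelete(t *testing.T) {
    f := &fakeExec{}
    o := &ObjectUnderTestWithDIP{db: f}
    if err := o.Delete(); err != nil {
        t.Fatal(err)
    }
    if f.gotQuery != "DELETE FROM sometable" {
        t.Fatalf("unexpected query: %q", f.gotQuery)
    }
}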
