# I want to mock this function
func testCheckPluginFile(fName string) {
    plugin, _ := plugin.Open(path.Join("/I/expect/folder/", "/plugin-lib-test/"+fName))
    plugin.Lookup("symbol")
}
# So I put this func like this
func testCheckPluginFile(fName string, pluginOpen func(path string) (*plugin.Plugin, error)) {
    plugin, _ := pluginOpen(path.Join("/I/expect/folder/", "/plugin-lib-test/"+fName))
    plugin.Lookup("symbol")
}
But I can't do it because of plugin.Plugin.Lookup.
Do you have another way to solve it?
How to mock [...] plugin funcs in Go [...]?
You cannot.
If you want to test that you have to use a real plugin.
Mocking in Go is a bit tricky and usually requires generating some code. You need to write interfaces for the plugin package as well as for plugin.Plugin (if a package does not offer these itself, which alas is often the case). Then use a mock generator (the usual suspect being gomock) to create mocks with all the usual mock functionality (expect, conditional return, etc.).

In the production code, provide implementations which you write yourself; they consist only of simple pass-through methods which call the real thing (Open() and Lookup() in this case). Remember, you want to test whether testCheckPluginFile works correctly, not whether the plugin package works correctly. By doing so, you should be able to follow a proper TDD workflow.
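A minimal sketch of what those pass-through interfaces could look like; the names Plugin, PluginOpener, and realOpener are illustrative, not part of the standard library:

package pluginwrap

import (
    "path"
    "plugin"
)

// Plugin abstracts *plugin.Plugin so tests can substitute a mock.
type Plugin interface {
    Lookup(symName string) (plugin.Symbol, error)
}

// PluginOpener abstracts plugin.Open.
type PluginOpener interface {
    Open(path string) (Plugin, error)
}

// realOpener is the production implementation: a plain pass-through.
type realOpener struct{}

func (realOpener) Open(p string) (Plugin, error) {
    pl, err := plugin.Open(p)
    if err != nil {
        return nil, err
    }
    return pl, nil // *plugin.Plugin already satisfies Plugin
}

// testCheckPluginFile now depends only on the interface and can be
// handed a gomock-generated PluginOpener in tests.
func testCheckPluginFile(fName string, opener PluginOpener) error {
    pl, err := opener.Open(path.Join("/I/expect/folder/", "/plugin-lib-test/"+fName))
    if err != nil {
        return err
    }
    _, err = pl.Lookup("symbol")
    return err
}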
I know this has been asked in various forms many times before, but I just can't seem to implement what I'm learning in the way that I need. Any help is appreciated.
I have a series of exchanges which all implement roughly the same APIs. For example, each of them has a GetBalance endpoint. However, some have one or two unique things which need to be accessed within the functions. For example, exchange1 needs to use a client when calling its balance API, while exchange2 requires both the client variable and a clientFutures variable. This is an important note for later.
My background is normal OOP. Obviously Go is different in many ways, hence I'm getting tripped up here.
My current implementation and thinking is as follows:
In exchanges module
type Balance struct {
    asset       string
    available   float64
    unavailable float64
    total       float64
}

type Api interface {
    GetBalances() []Balance
}
In Binance module
type BinanceApi struct {
    key           string
    secret        string
    client        *binance.Client
    clientFutures *futures.Client
    Api           exchanges.Api
}

func (v *BinanceApi) GetBalances() []exchanges.Balance {
    // Requires v.client and v.clientFutures
    return []exchanges.Balance{}
}
In Kraken module
type KrakenApi struct {
    key    string
    secret string
    client *binance.Client
    Api    exchanges.Api
}

func (v *KrakenApi) GetBalances() []exchanges.Balance {
    // Requires v.client
    return []exchanges.Balance{}
}
In main.go
var exchange *Api
Now my thought was I should be able to call something like exchange.GetBalances() and it would use the function from above. I would also need some kind of casting? I'm quite lost here. The exchange could be either Binance or Kraken; that gets decided at runtime. Some other code basically calls a GetExchange function which returns an instance of the required API object (already cast to either BinanceApi/KrakenApi).
I'm aware inheritance and polymorphism don't work like in other languages, hence my utter confusion. I'm struggling to know what needs to go where here. Go seems to require loads of annoying code for what other languages do on the fly.
Using *exchanges.Api is quite weird. You want something that implements a given interface. What the underlying type is (whether it uses pointer or value receivers) is not important, so use exchanges.Api instead.
There is another issue, though. In golang, interfaces are implicit (sometimes referred to as duck-type interfaces). Generally speaking, this means that the interface is not declared in the package that implements it, but rather in the package that depends on a given interface. Some say that you should be liberal in terms of what values you return, but restrictive in terms of what arguments you accept. What this boils down to in your case, is that you'd have something like an api package, that looks somewhat like this:
package api
func NewKraken(args ...any) *KrakenExchange {
    // ...
}

func NewBinance(args ...any) *BinanceExchange {
    // ...
}
then in your other packages, you'd have something like this:
package kraken // or maybe this could be an exchange package
type API interface {
    GetBalances() []types.Balance
}

func NewClient(api API, otherArgs ...T) *KrakenClient {
    // ...
}
So when someone looks at the code for this Kraken package, they can instantly tell what dependencies are required, and what types it works with. The added benefit is that, should binance or kraken need additional API calls that aren't shared, you can go in and change the specific dependencies/interfaces, without ending up with one massive, centralised interface that is being used all over the place, but each time you only end up using a subset of the interface.
Yet another benefit of this approach is when writing tests. There are tools like gomock and mockgen, which allow you to quickly generate mocks for unit tests simply by doing this:
package foo
//go:generate go run github.com/golang/mock/mockgen -destination mocks/dep_mock.go -package mocks your/module/path/to/foo Dependency
type Dependency interface {
// methods here
}
Then run go generate and it'll create a mock object in your/module/path/to/foo/mocks that implements the desired interface. In your unit tests, import the mocks package, and you can do things like:
ctrl := gomock.NewController(t)
defer ctrl.Finish()
dep := mocks.NewMockDependency(ctrl)
dep.EXPECT().GetBalances().Times(1).Return(data)
k := kraken.NewClient(dep)
bal := k.Balances()
require.EqualValues(t, bal, data)
TL;DR
The gist of it is:
Interfaces are interfaces, don't use pointers to interfaces
Declare interfaces in the package that depends on them (i.e. the user side), not the implementation (provider) side.
Only declare methods in an interface if you are genuinely using them in a given package. Using a central, overarching interface makes this harder to do.
Having the dependency interface declared alongside the user makes for self-documenting code
Unit testing and mocking/stubbing is a lot easier to do, and to automate this way
I'm on my first Golang project, which consists of a small router for an MVC structure. Basically, what I hope it does is take the request URL, separate it into chunks, and forward the execution flow to the package and function specified in those chunks, providing some kind of fallback when there is no match for these values in the application.
An alternative would be to map all packages and functions into variables, then look for a match in the contents of those variables, but this is not a dynamic solution.
The alternative I have used in other languages (PHP and JS) is to reference those names syntactically, in which the language somehow considers the value of the variable instead of its literal. Something like: {packageName}.{functionName}(). The problem is that I still haven't found any syntactical way to do this in Golang.
Any suggestions?
func ParseUrl(request *http.Request) {
    // out of this URL: http://www.example.com/controller/method?key=2&anotherKey=3
    var requestedFullURI = request.URL.RequestURI() // returns '/controller/method?key=2&anotherKey=3'
    controlFlowString, _ := url.Parse(requestedFullURI)
    substrings := strings.Split(controlFlowString.Path, "/") // returns ["", "controller", "method"]
    if len(substrings[1]) > 0 {
        // Here we'll check if substrings[1] matches an existing package (controller) name
        // and provide some error return in case it does not
        if len(substrings[2]) > 0 {
            // check if substrings[2] matches an existing function (method) name inside
            // the requested package and run it, passing on the control flow
        } else {
            // there's no requested method, we'll just run some fallback
        }
    } else {
        err := errors.New("You have not determined a valid controller.")
        fmt.Println(err)
    }
}
You can still solve this in a half-dynamic manner. Define your handlers as methods of an empty struct and register just that struct. This will greatly reduce the amount of registration you have to do, and your code will be more explicit and readable. For example:
handler.register(MyStruct{}) // the implementation is for another question
The following code shows all that's needed to make all methods of MyStruct accessible by name. Now, with some effort and the help of the reflect package, you can support routing like MyStruct/SomeMethod. You can even define a struct with some fields which can serve as branches, so even MyStruct/NestedStruct/SomeMethod is possible.
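A minimal sketch, assuming a package-level registry keyed by struct name (registry, register, and call are illustrative names):

package main

import (
    "fmt"
    "reflect"
)

type MyStruct struct{}

func (MyStruct) SomeMethod() { fmt.Println("SomeMethod called") }

// registry maps a struct's type name to its reflect.Value.
var registry = map[string]reflect.Value{}

func register(v interface{}) {
    rv := reflect.ValueOf(v)
    registry[rv.Type().Name()] = rv
}

// call resolves structName and methodName at runtime and invokes the
// method, mirroring a route like MyStruct/SomeMethod.
func call(structName, methodName string) error {
    rv, ok := registry[structName]
    if !ok {
        return fmt.Errorf("no struct registered as %q", structName)
    }
    m := rv.MethodByName(methodName)
    if !m.IsValid() {
        return fmt.Errorf("%s has no method %q", structName, methodName)
    }
    m.Call(nil) // handlers taking parameters would need argument building here
    return nil
}

func main() {
    register(MyStruct{})
    _ = call("MyStruct", "SomeMethod") // prints "SomeMethod called"
}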
Don't do this, please
Your idea may sound like a good one, but believe me, it's not. It's a lot better to use a framework like go-chi, which is more flexible and readable, than to do some reflect madness that no one will understand. Not to mention that traversing type trees in Go was never a fast task. Your routes should not be defined by the names of structures in your backend. If you commit to this, you will end up with strangely named routes that use PascalCase instead of something-like-this.
What you're describing is very typical of PHP and JavaScript, and completely inappropriate to Go. PHP and JavaScript are dynamic, interpreted languages. Go is a static, compiled language. Rather than trying to apply idioms which do not fit, I'd recommend looking for ways to achieve the same goals using implementations more suitable to the language at hand.
In this case, I think the closest you get to what you're describing while still maintaining reasonable code would be to use a handler registry as you described, but register to it automatically in package init() functions. Each init function will be called once, at startup, giving the package an opportunity to initialize variables and register things like handlers and drivers. When you see things like database driver packages that need to be imported even though they're not referenced, init functions are why: importing the package gives it the chance to register the driver. The expvar package even does this to register an HTTP handler.
You can do the same thing with your handlers, giving each package an init function that registers the handler(s) for that package along with their routes. While this isn't "dynamic", being dynamic has zero value here - the code can't change after it's compiled, which means that all you get from being dynamic is slower execution. If the "dynamic" routes change, you'd have to recompile and restart anyway.
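As a rough sketch of that pattern (the package, route, and handler names are made up for illustration):

package usercontroller

import "net/http"

// init runs once at startup, when this package is first imported,
// and registers this package's handlers with their routes.
func init() {
    http.HandleFunc("/user/show", showHandler)
}

func showHandler(w http.ResponseWriter, r *http.Request) {
    w.Write([]byte("user/show"))
}

Importing the package, even with a blank import like _ "yourapp/usercontroller", is enough to trigger the registration.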
I am building a simple CLI tool in Go that acts as a wrapper for various password stores (Chef Vault, Ansible Vault, HashiCorp Vault, etc.). This is partially an exercise to get familiar with Go.
In working on this, I came across a situation where I was writing tests and found I needed to create interfaces for many things, just to have the ability to mock dependencies. As such, a fairly simple implementation seems to have a bunch of abstraction, for the sake of the tests.
However, I was recently reading The Go Programming Language and found an example where they mocked their dependencies in the following way.
func Parse() map[string]string {
    s := openStore()
    // Do something with s to parse into a map…
    return s.contents
}

var storeFunc = func openStore() *Store {
    // concrete implementation for opening store
}
// and in the testing file…
func TestParse(t *testing.T) {
    openStore := func() {
        // set contents of mock…
    }
    parse()
    // etc...
}
So for the sake of testing, we store this concrete implementation in a variable, and then we can essentially re-declare the variable in the tests and have it return what we need.
Otherwise, I would have created an interface for this (despite currently only having one implementation) and injected that into the Parse method. This way, we could mock it for the test.
So my question is: What are the advantages and disadvantages of each approach? When is it more appropriate to create an interface for the purposes of a mock, versus storing the concrete function in a variable for re-declaration in the test?
For testing purposes, I tend to use the mocking approach you described instead of creating new interfaces. One of the reasons is that, AFAIK, there is no direct way to identify which structs implement an interface, which is important to me if I want to know whether the mocks are doing the right thing.
The main drawback of this approach is that the variable is essentially a package-level global variable (even though it's unexported). So all the drawbacks with declaring global variables apply.
In your tests, you will definitely want to use defer to re-assign storeFunc back to its original concrete implementation once the test completes.
var storeFunc = func() *Store {
    // concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
    storeFuncOriginal := storeFunc
    defer func() {
        storeFunc = storeFuncOriginal
    }()
    storeFunc = func() *Store {
        // set contents of mock…
    }
    Parse()
    // etc...
}
By the way, var storeFunc = func openStore() *Store won't compile.
There is no "right way" of answering this.
Having said this, I find the interface approach more general and clearer than defining a function variable and setting it for the test.
Here are some comments on why:
The function-variable approach does not scale well if there are several functions you need to mock (in your example it is just one function).
The interface makes it clearer which behaviour is being injected into the function/module, as opposed to the function variable, which ends up hidden in the implementation.
The interface allows you to inject a type with state (a struct), which might be useful for configuring the behaviour of the mock.
You can of course rely on the "function variable" approach for simple cases and use the "interface" for more complex functionality, but if you want to be consistent and use just one approach, I'd go with the "interface"; a small sketch follows.
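A minimal sketch of the interface variant (Store, Parse, and mockStore are hypothetical names, not from the book's example):

// Store is declared by the consumer; adding a second method here scales
// naturally, where the function-variable approach would need a second variable.
type Store interface {
    Open() error
    Contents() map[string]string
}

func Parse(s Store) (map[string]string, error) {
    if err := s.Open(); err != nil {
        return nil, err
    }
    return s.Contents(), nil
}

// mockStore carries state (a struct), so each test can configure its behaviour.
type mockStore struct {
    contents map[string]string
    openErr  error
}

func (m *mockStore) Open() error                 { return m.openErr }
func (m *mockStore) Contents() map[string]string { return m.contents }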
I tackle the problem differently. Given
func Parse(s Store) map[string]string {
    // Do stuff on the interface Store
}
you have several advantages:
You can use a mock or a stub Store as you see fit.
Imho, the code becomes more transparent. The signature alone makes it clear that a Store implementation is required. And the code does not need to be polluted with error handling for opening the Store.
The code documentation can be kept more concise.
However, this makes something pretty obvious: Parse is a function which can be attached to a store, which most likely makes more sense than passing the store around.
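Sketched as a method (MyStore and its contents field are hypothetical):

type MyStore struct {
    contents map[string]string
}

// Parse attached to the store type itself, so callers operate on the
// store directly instead of passing it around.
func (s *MyStore) Parse() map[string]string {
    return s.contents
}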
At the moment I have a package store with the following content:
package store
var (
db *Database
)
func Open(url string) error {
// open db connection
}
func FindAll(model interface{}) error {
// return all entries
}
func Close() {
// close db connection
}
This allows me to use store.FindAll from other packages after I have done store.Open in main.go.
However, as far as I've seen, most packages prefer to provide a struct you need to initialize yourself. There are only a few cases where this global approach is used.
What are the downsides of this approach, and should I avoid it?
You can't instantiate connections to 2 storages at once.
You can't easily mock out storage in unit tests of dependent code using convenient tools like gomock.
The standard http package has a ServeMux for generic use cases, but also has one default instance of ServeMux called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables) for convenience, so that when you call http.HandleFunc it creates the handler on the default mux. You can find the same approach used in log and many other packages. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case, since the user needs to call Open regardless of the default database. Because of that, using a default instance would not really help; it would actually make things less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
And there are also semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
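To make the instance-based alternative concrete, a minimal sketch (the DB name and method set mirror the question's package, but are otherwise made up):

package store

type DB struct {
    // connection state
}

// Open returns a handle instead of mutating package-level state, so two
// storages can be open at once and dependent code can accept an interface
// (or a mock) instead of this concrete type.
func Open(url string) (*DB, error) {
    // open db connection
    return &DB{}, nil
}

func (d *DB) FindAll(model interface{}) error {
    // return all entries
    return nil
}

func (d *DB) Close() {
    // close db connection
}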
When I'm writing an interface, it's often convenient to define my tests in the same package as the interface, and then define multiple packages that implement the interface, e.g.:
package/
package/impl/x <-- Implementation X
package/impl/y <-- Implementation Y
Is there an easy way to run the same test suite (in this case, located in package/*_test.go) in the sub packages?
The best solution I've come up with so far is to add a test package:
package/tests/
Which implements the test suite, and a test in each of the implementations to run it, but this has two downsides:
1) The tests in package/tests are not in _test.go files, so they end up being part of the actual library, documented by godoc, etc.
2) The tests in package/tests are run by a custom test runner, which has to basically duplicate all the functionality of go test to scan for tests and run them.
Seems like a pretty tacky solution.
Is there is a better way of doing this?
I don't really dislike the idea of using a separate testing library. If you have an interface and you have generic tests for each interface, other people who implement that interface might like to use these tests as well.
You could create a package "package/test" that contains something like this:
// functions needed for each implementation to test it
type Tester struct {
    New  func() package.Interface
    Done func(package.Interface)
    // whatever you need. Leave nil if a function does not apply
}
func TestInterface(t *testing.T, tester Tester)
Notice that the signature of TestInterface does not match what go test expects. Now, for each package package/impl/x, you add one file generic_test.go:
package x

import (
    "testing"

    "package/test"
)

// run generic tests on this particular implementation
func TestInterface(t *testing.T) {
    test.TestInterface(t, test.Tester{New: New})
}
Where New() is the constructor function of your implementation. The advantages of this scheme are:
Your tests are reusable by whoever implements your interface, even from other packages
It is immediately obvious that you run the generic test suite
The test cases live where the implementation is, not in some other, obscure place
The code can be adapted easily if one implementation needs special initialization or similar stuff
It's go test compatible (big plus!)
Of course, in some cases you need a more complicated TestInterface function, but this is the basic idea.
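For concreteness, a hedged sketch of what a minimal TestInterface body could look like, given the Tester struct above (the specific checks are illustrative):

func TestInterface(t *testing.T, tester Tester) {
    impl := tester.New()
    // ...exercise the methods of the interface here and report
    // contract violations via t.Errorf...
    if tester.Done != nil {
        tester.Done(impl)
    }
}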
If you share a piece of code for reuse by different packages, then yes, it is a library by definition, even when used only for testing from *_test.go files. It's no different from importing "testing" or "fmt" in a _test.go file. And having the API documented by godoc is a plus, not a minus IMHO.
Maybe something gets mixed up here a bit: if package a only defines an interface, then there is no code to test, as interfaces in Go are implementation-free. So I assume the methods of your interface in package a have constraints. E.g. in
type Walker interface {
    Walk(step int)
    Tired() bool
}
your contract assumes that Tired returns true if more than 500 steps have been Walk'ed (and false otherwise), and your test code checks these dependencies (or assumptions, contracts, invariants, whatever you name them).
If this is the case, I would provide (in package a) an exported function:

func TestWalkerContract(w Walker) error {
    w.Walk(100)
    if w.Tired() {
        return errors.New("tired after 100 steps")
    }
    w.Walk(450)
    if !w.Tired() {
        return errors.New("not tired after 100+450 steps")
    }
    return nil
}
Which documents the contract properly and can be used by packages b and c, with types implementing Walker, to test their implementations in b_test.go and c_test.go. IMHO it is perfectly okay that functions like TestWalkerContract are displayed by godoc.
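For example, in b_test.go (assuming package a lives at import path "a" and MyWalker is b's implementation; both names are placeholders):

package b

import (
    "testing"

    "a"
)

func TestMyWalkerContract(t *testing.T) {
    if err := a.TestWalkerContract(&MyWalker{}); err != nil {
        t.Error(err)
    }
}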
P.S. More common than Walk and Tired might be an error state which is kept and reported until cleared/reset.