Golang object destructor for C/C++ bindings

We are building cryptographic libraries in C/C++, and we are now adding Go support for them.
The cgo binding works fine except for one thing: we need to call a function manually to free C pointers from memory.
Currently we do it like this, wrapping the C object in a Go type that exposes a cleanup method:
func SomeFunc() {
    cObj := NewObjectFromCPP()
    defer cObj.Free()
}
We also tried runtime.SetFinalizer to free the memory when the Go GC collects the wrapped object, BUT it turns out that the runtime.SetFinalizer callback does not run every time, or may not run at all, because the documentation only promises that it runs "eventually".
Our current solution feels hacky to me, so I wanted input from people who have already done something like this.
What is the right way to free C/C++ memory from Go, besides calling cleanup methods manually?

The convention in Go for disposing of things going out of scope is to use defer, as you are doing.
There is also a convention of naming your disposal method Close(), and many parts of the standard library assume this convention (see io.Closer).
func doThings() {
    // Note: declaring thing outside the if statement keeps it in scope
    // for the defer below.
    thing, err := openThingHoldingResources()
    if err != nil {
        // TODO: Handle error
        return
    }
    defer thing.Close()
    // TODO: Do stuff with thing
}
One of the primary design goals of Go is to keep magic to a minimum, and functions that run automatically on object lifecycle events (constructors and destructors) are one of the kinds of magic it eschews.
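That said, if you want a safety net for objects that escape an explicit Free, a finalizer can serve as a backstop on top of the defer convention, never as the primary cleanup path. A minimal sketch, with a hypothetical C API (new_object and free_object are stand-ins for your actual binding):

/*
void* new_object();        // hypothetical C constructor
void  free_object(void*);  // hypothetical C destructor
*/
import "C"

import (
    "runtime"
    "unsafe"
)

type CObject struct {
    ptr unsafe.Pointer
}

func NewObjectFromCPP() *CObject {
    obj := &CObject{ptr: C.new_object()}
    // Backstop only: the finalizer may run late or never,
    // so callers should still defer obj.Free() themselves.
    runtime.SetFinalizer(obj, (*CObject).Free)
    return obj
}

func (o *CObject) Free() {
    if o.ptr != nil {
        C.free_object(o.ptr)
        o.ptr = nil
        runtime.SetFinalizer(o, nil) // already freed; drop the backstop
    }
}

Making Free idempotent means it is safe whether the caller's defer or the finalizer gets there first.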

Related

Interface management in Go

I know this has been asked in various forms many times before, but I just can't seem to implement what I'm learning in the way that I need. Any help is appreciated.
I have a series of exchanges which all implement roughly the same APIs. For example, each of them has a GetBalance endpoint. However, some have one or two unique things which need to be accessed within the functions. For example, exchange1 needs to use a client when calling its balance API, while exchange2 requires both the client variable as well as a clientFutures variable. This is an important note for later.
My background is in conventional OOP. Obviously Go is different in many ways, hence I'm getting tripped up here.
My current implementation and thinking is as follows:
In exchanges module
type Balance struct {
    asset       string
    available   float64
    unavailable float64
    total       float64
}

type Api interface {
    GetBalances() []Balance
}
In Binance module
type BinanceApi struct {
    key           string
    secret        string
    client        *binance.Client
    clientFutures *futures.Client
    Api           exchanges.Api
}

func (v *BinanceApi) GetBalances() []exchanges.Balance {
    // Requires v.client and v.clientFutures
    return []exchanges.Balance{}
}
In Kraken module
type KrakenApi struct {
    key    string
    secret string
    client *binance.Client
    Api    exchanges.Api
}

func (v *KrakenApi) GetBalances() []exchanges.Balance {
    // Requires v.client
    return []exchanges.Balance{}
}
In main.go
var exchange *Api
Now my thought was I should be able to call something like exchange.GetBalances() and it would use the function from above. I would also need some kind of casting? I'm quite lost here. The exchange could be either Binance or Kraken; that gets decided at runtime. Some other code basically calls a GetExchange function which returns an instance of the required API object (already cast to either BinanceApi or KrakenApi).
I'm aware inheritance and polymorphism don't work like they do in other languages, hence my utter confusion. I'm struggling to know what needs to go where here. Go seems to require loads of annoying code for what other languages do on the fly 😓
Using *exchanges.Api is quite weird. You want something that implements a given interface; what the underlying type is (whether its methods have pointer or value receivers) is not important, so use exchanges.Api instead.
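Concretely, something like this (a sketch reusing your names; exchangeName is illustrative):

var exchange exchanges.Api // an interface value, not a pointer to an interface

exchange = GetExchange(exchangeName) // returns a *BinanceApi or *KrakenApi
balances := exchange.GetBalances()   // dispatches to the right implementation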
There is another issue, though. In Go, interfaces are implicit (sometimes referred to as duck-typed interfaces). Generally speaking, this means that the interface is not declared in the package that implements it, but rather in the package that depends on a given interface. Some say that you should be liberal in terms of what values you return, but restrictive in terms of what arguments you accept. What this boils down to in your case is that you'd have something like an api package that looks somewhat like this:
package api

func NewKraken(args ...any) *KrakenExchange {
    // ...
}

func NewBinance(args ...any) *BinanceExchange {
    // ...
}
then in your other packages, you'd have something like this:
package kraken // or maybe this could be an exchange package

type API interface {
    GetBalances() []types.Balance
}

func NewClient(api API, otherArgs ...T) *KrakenClient {
    // ...
}
So when someone looks at the code for this kraken package, they can instantly tell what dependencies are required and what types it works with. The added benefit is that, should binance or kraken need additional API calls that aren't shared, you can go in and change the specific dependencies/interfaces, without ending up with one massive, centralised interface that is used all over the place but of which each user only ever needs a subset.
Yet another benefit of this approach is when writing tests. There are tools like gomock and mockgen, which allow you to quickly generate mocks for unit tests simply by doing this:
package foo

//go:generate go run github.com/golang/mock/mockgen -destination mocks/dep_mock.go -package mocks your/module/path/to/foo Dependency
type Dependency interface {
    // methods here
}
Then run go generate and it'll create a mock object in your/module/path/to/foo/mocks that implements the desired interface. In your unit tests, import the mocks package, and you can do things like:
ctrl := gomock.NewController(t)
dep := mocks.NewMockDependency(ctrl)
defer ctrl.Finish()
dep.EXPECT().GetBalances().Times(1).Return(data)
k := kraken.NewClient(dep)
bal := k.Balances()
require.EqualValues(t, bal, data)
TL;DR
The gist of it is:
Interfaces are interfaces; don't use pointers to interfaces.
Declare interfaces in the package that depends on them (i.e. the consumer), not on the implementation (provider) side.
Only declare methods in an interface if you genuinely use them in a given package. A central, overarching interface makes this harder to do.
Having the dependency interface declared alongside its user makes for self-documenting code.
Unit testing and mocking/stubbing are a lot easier to do, and to automate, this way.

Handling package and function names dynamically

I'm on my first Go project, which consists of a small router for an MVC structure. Basically, what I hope it does is take the request URL, separate it into chunks, and forward the execution flow to the package and function specified in those chunks, providing some kind of fallback when there is no match for those values in the application.
An alternative would be to map all packages and functions into variables and then look for a match in the contents of those variables, but this is not a dynamic solution.
The alternative I have used in other languages (PHP and JS) is to reference those names syntactically, where the language somehow resolves the value of a variable instead of taking it literally. Something like: {packageName}.{functionName}(). The problem is that I still haven't found any syntactic way to do this in Go.
Any suggestions?
func ParseUrl(request *http.Request) {
    // out of this URL: http://www.example.com/controller/method?key=2&anotherKey=3
    var requestedFullURI = request.URL.RequestURI() // returns '/controller/method?key=2&anotherKey=3'
    controlFlowString, _ := url.Parse(requestedFullURI)
    substrings := strings.Split(controlFlowString.Path, "/") // returns ["", "controller", "method"]
    if len(substrings[1]) > 0 {
        // Here we'll check if substrings[1] matches an existing package (controller) name
        // and provide some error return in case it does not
        if len(substrings) > 2 && len(substrings[2]) > 0 {
            // check if substrings[2] matches an existing function (method) name inside
            // the requested package and run it, passing on the control flow
        } else {
            // there's no requested method, we'll just run some fallback
        }
    } else {
        err := errors.New("you have not determined a valid controller")
        fmt.Println(err)
    }
}
You can still solve this in a half-dynamic manner. Define your handlers as methods on an empty struct and register just that struct. This greatly reduces the amount of registration you have to do, and your code will be more explicit and readable. For example:
handler.register(MyStruct{}) // the implementation is for another question
With some effort and the help of the reflect package, you can make all methods of MyStruct accessible by name and support routing like MyStruct/SomeMethod. You can even define a struct whose fields serve as branches, so even MyStruct/NestedStruct/SomeMethod is possible.
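A minimal sketch of that kind of name-based dispatch, assuming methods that take no arguments (callByName is an illustrative helper, not part of any framework):

import (
    "fmt"
    "reflect"
)

type MyStruct struct{}

func (MyStruct) SomeMethod() { fmt.Println("SomeMethod called") }

// callByName looks up an exported method on v at runtime and invokes it.
func callByName(v interface{}, name string) bool {
    m := reflect.ValueOf(v).MethodByName(name)
    if !m.IsValid() {
        return false // no such method; run your fallback here
    }
    m.Call(nil) // this sketch only handles methods without parameters
    return true
}

// callByName(MyStruct{}, "SomeMethod") prints "SomeMethod called"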
Don't do this, please.
Your idea may sound like a good one, but believe me, it's not. It's a lot better to use a framework like go-chi, which is more flexible and readable, than to do some reflect madness that no one will understand. Not to mention that traversing type trees in Go was never a fast task. Your routes should not be defined by the names of structures in your backend. If you commit to this, you will end up with strangely named routes that use PascalCase instead of something-like-this.
What you're describing is very typical of PHP and JavaScript, and completely inappropriate to Go. PHP and JavaScript are dynamic, interpreted languages. Go is a static, compiled language. Rather than trying to apply idioms which do not fit, I'd recommend looking for ways to achieve the same goals using implementations more suitable to the language at hand.
In this case, I think the closest you get to what you're describing while still maintaining reasonable code would be to use a handler registry as you described, but register to it automatically in package init() functions. Each init function will be called once, at startup, giving the package an opportunity to initialize variables and register things like handlers and drivers. When you see things like database driver packages that need to be imported even though they're not referenced, init functions are why: importing the package gives it the chance to register the driver. The expvar package even does this to register an HTTP handler.
You can do the same thing with your handlers, giving each package an init function that registers the handler(s) for that package along with their routes. While this isn't "dynamic", being dynamic has zero value here - the code can't change after it's compiled, which means that all you get from being dynamic is slower execution. If the "dynamic" routes change, you'd have to recompile and restart anyway.
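A minimal sketch of that registration shape (the registry package and its API are illustrative, not a specific library):

// registry/registry.go
package registry

import "net/http"

var mux = http.NewServeMux()

// Register is called by controller packages from their init functions.
func Register(pattern string, h http.HandlerFunc) { mux.Handle(pattern, h) }

// Mux is what main hands to http.ListenAndServe.
func Mux() *http.ServeMux { return mux }

// users/controller.go
package users

import (
    "net/http"

    "example.com/app/registry" // illustrative module path
)

func init() {
    registry.Register("/users", func(w http.ResponseWriter, r *http.Request) {
        // handle the request
    })
}

In main.go, importing the users package for its side effects (import _ "example.com/app/users") runs its init and wires up the route, with no reflection involved.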

Can context.Context variables be copied and still function normally in all ways in golang?

I'm seeing the following situation:
func foo(ctx context.Context) {
    localCtx := ctx
    // ... do stuff
}
Can both of these context.Context variables be used interchangeably in all ways?
Looking at the source, I see that the context.Context values returned by WithCancel, WithDeadline, WithTimeout, and WithValue are implemented internally as pointers to structs, which makes me think that yes, they could be used interchangeably if the parent context came from one of these functions. However, the emptyCtx returned by context.Background() is an int internally, so I'm thinking perhaps they could not be used interchangeably if the parent context was the background context.
But context.Context is actually an interface and I'm not sure if/how that changes things.
Yes, you can use them interchangeably.
As you already noticed, the context values returned by WithCancel, etc., are actually pointers, so both localCtx and ctx will point to the same thing.
As for emptyCtx being an int: it does not change things, because it is also used through a pointer.
The background variable you mention is actually a pointer to an emptyCtx, as it is initialized with the new builtin:
background = new(emptyCtx)
context.Context is an interface, and an interface value is the size of two pointers; copying up to 16 bytes is not scary at all.
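To see this concretely: after copying, both variables share the same underlying context, so a cancellation observed through one is observed through the other.

ctx, cancel := context.WithCancel(context.Background())
localCtx := ctx // copies only the interface value (type + data pointers)

cancel()

<-ctx.Done()      // Done() returns the same channel for both variables,
<-localCtx.Done() // so both observe the cancellation immediately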

What is the preferred way to implement testing mocks in Go?

I am building a simple CLI tool in Go that acts as a wrapper for various password stores (Chef Vault, Ansible Vault, HashiCorp Vault, etc.). This is partially an exercise to get familiar with Go.
In working on this, I came across a situation where, writing tests, I found I needed to create interfaces for many things just to have the ability to mock dependencies. As a result, a fairly simple implementation ends up with a bunch of abstraction purely for the sake of the tests.
However, I was recently reading The Go Programming Language and found an example where they mocked their dependencies in the following way.
func Parse() map[string]string {
    s := openStore()
    // Do something with s to parse into a map…
    return s.contents
}

var storeFunc = func openStore() *Store {
    // concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
    openStore := func() {
        // set contents of mock…
    }
    parse()
    // etc...
}
So for the sake of testing, we store this concrete implementation in a variable, and then we can essentially re-declare the variable in the tests and have it return what we need.
Otherwise, I would have created an interface for this (despite currently only having one implementation) and inject that into the Parse method. This way, we could mock it for the test.
So my question is: What are the advantages and disadvantages of each approach? When is it more appropriate to create an interface for the purposes of a mock, versus storing the concrete function in a variable for re-declaration in the test?
For testing purposes, I tend to use the mocking approach you described instead of creating new interfaces. One of the reasons being, AFAIK, there are no direct ways to identify which structs implement an interface, which is important to me if I wanted to know whether the mocks are doing the right thing.
The main drawback of this approach is that the variable is essentially a package-level global variable (even though it's unexported). So all the drawbacks with declaring global variables apply.
In your tests, you will definitely want to use defer to re-assign storeFunc back to its original concrete implementation once the test completes.
var storeFunc = func() *Store {
    // concrete implementation for opening store
}

// and in the testing file…
func TestParse(t *testing.T) {
    storeFuncOriginal := storeFunc
    defer func() {
        storeFunc = storeFuncOriginal
    }()
    storeFunc = func() *Store { // assign, don't shadow with :=
        // set contents of mock…
    }
    Parse()
    // etc...
}
By the way, var storeFunc = func openStore() *Store won't compile.
There is no "right way" of answering this.
Having said this, I find the interface approach more general and clearer than defining a function variable and overriding it for the test.
Here are some comments on why:
The function variable approach does not scale well if there are several functions you need to mock (in your example it is just one function).
The interface makes it clearer what behaviour is being injected into the function/module, as opposed to the function variable, which ends up hidden in the implementation.
The interface allows you to inject a type with state (a struct), which might be useful for configuring the behaviour of the mock.
You can of course rely on the "function variable" approach for simple cases and use the "interface" for more complex functionality, but if you want to be consistent and use just one approach I'd go with the "interface".
I tackle the problem differently. Given
func Parse(s Store) map[string]string {
    // Do stuff on the interface Store
}
you have several advantages:
You can use a mock or a stub Store as you see fit.
Imho, the code becomes more transparent. The signature alone makes it clear that a Store implementation is required, and the code does not need to be polluted with error handling for opening the Store.
The code documentation can be kept more concise.
However, this makes something pretty obvious: Parse is a function which could be attached to a Store, which most likely makes more sense than passing the store around.
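For illustration, a test against that signature could look like this (a sketch: the single-method Store interface and stubStore are assumptions for the example, not the book's API):

type Store interface {
    Contents() map[string]string
}

func Parse(s Store) map[string]string {
    // Do stuff on the interface Store
    return s.Contents()
}

// in the testing file
type stubStore struct{ data map[string]string }

func (s stubStore) Contents() map[string]string { return s.data }

func TestParse(t *testing.T) {
    want := map[string]string{"key": "value"}
    got := Parse(stubStore{data: want})
    if !reflect.DeepEqual(got, want) {
        t.Errorf("Parse() = %v, want %v", got, want)
    }
}

No package-level variable is touched, so tests can run in parallel without stepping on each other.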

Should I avoid package singletons in golang?

At the moment I have a package store with the following content:
package store
var (
db *Database
)
func Open(url string) error {
// open db connection
}
func FindAll(model interface{}) error {
// return all entries
}
func Close() {
// close db connection
}
This allows me to use store.FindAll from other packages after I have called store.Open in main.go.
However, from what I have seen so far, most packages prefer to provide a struct you need to initialize yourself; there are only a few cases where this global approach is used.
What are downsides of this approach and should I avoid it?
You can't instantiate connections to two storage backends at once.
You can't easily mock out the storage in unit tests of dependent code using convenient tools like gomock.
The standard http package has ServeMux for generic use cases, but it also has one default instance of ServeMux called DefaultServeMux (http://golang.org/pkg/net/http/#pkg-variables) for convenience, so that when you call http.HandleFunc it registers the handler on the default mux. You can find the same approach in the log package and many others. This is essentially your "singleton" approach.
However, I don't think it's a good idea to follow that pattern in your use case, since users need to call Open regardless of the default database. Because of that, a default instance would not really help; it would actually make things less convenient:
d := store.Open(...)
defer d.Close()
d.FindAll(...)
is much easier to both write and read than:
store.Open(...)
defer store.Close()
store.FindAll(...)
Also, there are semantic problems: what should happen if someone calls Open twice?
store.Open(...)
defer store.Close()
...
store.Open(...)
store.FindAll(...) // Which db is this referring to?
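For comparison, the struct-based version of the same package is only slightly longer (a sketch; method bodies elided as in your original):

package store

type Store struct {
    db *Database
}

func Open(url string) (*Store, error) {
    // open db connection, wrap it in a *Store
}

func (s *Store) FindAll(model interface{}) error {
    // return all entries
}

func (s *Store) Close() error {
    // close db connection
}

Each Open call now returns its own handle, so the two-databases and double-Open questions answer themselves.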
