Encapsulation of third party configuration structs - go

I am working on a Go project where I am using some rather big third-party client libraries to communicate with third-party REST APIs. My intent is to decouple my internal code API from these specific dependencies.
Decoupling specific methods from these libraries in my code is straightforward as I only need a subset of the functionality and I am able to abstract the use cases. Therefore I am introducing a new type in my code which implements my specific use cases; the underlying implementation then relies on the third-party dependencies.
Where I struggle to find a good decoupling is with configuration structs. Usually, the client libraries I am using provide functions of this form:
CreateResourceA(options *ResourceAOptions) (*ResourceA, error)
CreateResourceB(options *ResourceBOptions) (*ResourceB, error)
where *ResourceA and *ResourceB are the server-side configurations of the corresponding resources after their creation.
The different options are rather big configuration structs for the resources, with lots of fields, nested structs, and so on. In general, these configurations hold more options than my application needs, but the overlap is, in the end, rather big.
As I want to avoid having my internal code import the specific dependencies just to access the configuration structs, I want to encapsulate them.
My current approach for encapsulation is to define my own configuration structs which I then use to configure the third party dependencies. To give a simple example:
import a "github.com/client-a"
// MyClient implements my use case functions
type MyClient struct{}
// MyConfiguration wraps more or less the configuration options
// provided by the client-a dependency
type MyConfiguration struct{
Strategy StrategyType
StrategyAOptions *StrategyAOptions
StrategyBOptions *StrategyBOptions
}
type StrategyType int
const (
StrategyA StrategyType = iota
StrategyB
)
type StrategyAOptions struct{}
type StrategyBOptions struct{}
func (c *MyClient) UseCaseA(options *MyConfiguration) error {
cfg := &a.Config{}
if (options.Strategy = StrategyA) {
cfg.TypeStrategy = a.TypeStrategyXY
}
...
a.CreateResourceA(cfg)
}
As the example shows, with this method I can encapsulate the third-party configuration structs, but I think this solution does not scale very well. I have already encountered cases where I am basically reimplementing types from the dependency in my code just to abstract the dependency away.
Here I am looking for more sophisticated solutions and/or some insight into whether my approach is generally wrong.
Further research from me:
I looked into struct embedding to see if that can help me. But, as the configurations hold non-trivial members, I end up importing the dependency in my calling code anyway just to fill those fields.
As the usual guideline seems to be "accept interfaces, return structs", I tried to find a good solution with this approach. But here I can end up with rather big interfaces as well, and in the Go standard library configuration structs do not seem to be used via interfaces. I was not able to find an explicit statement on whether hiding configurations behind interfaces is good practice in Go.
To sum it up:
I would like to know how to abstract configuration structs from third-party libraries without ending up redefining the same data types in my code.

What about a very simple thing - redefining the struct types you need in your wrapper package?
I am very new to Go, so this might not be the best way to proceed.
package myConfig
import a "github.com/client-a"
// AConfig must be exported to be usable outside this package;
// a type alias keeps it identical to a.Config
type AConfig = a.Config
then you only need to import your myConfig package
import "myConfig"
// myConfig.AConfig is actually a.Config
var cfg myConfig.AConfig
Not really sure if this helps a lot since this is not real decoupling, but at least you will not need to import "github.com/client-a" in every place
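One detail worth spelling out: a plain type definition (type AConfig a.Config) creates a distinct type, while a type alias (=) makes the two names refer to the same type, which is what "is actually a.Config" implies. A minimal, compilable sketch of the difference, using net/http's http.Client as a stand-in for the hypothetical a.Config:
package myConfig

import "net/http"

// Alias: the two names denote the same type, so values flow back and
// forth with no conversion (this is the "not real decoupling" case).
type ClientAlias = http.Client

// Defined type: same underlying fields, but a distinct type; handing it
// to the third-party API requires an explicit conversion.
type ClientDefined http.Client

func toThirdParty(c ClientDefined) *http.Client {
    converted := http.Client(c) // explicit conversion required
    return &converted
}
With the alias there is zero decoupling but also zero friction; with the defined type you at least get a seam where the conversions happen.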

Related

Interface management in Go

I know this has been asked in various forms many times before but I just can't seem to implement what I'm learning in the way that I need. Any help is appreciated.
I have a series of exchanges which all implement roughly the same APIs. For example, each of them has a GetBalance endpoint. However, some have one or two unique things which need to be accessed within the functions. For example, exchange1 needs to use a client when calling its balance API, while exchange2 requires both the client variable as well as a clientFutures variable. This is an important note for later.
My background is normal OOP. Obviously Go is different in many ways, hence I'm getting tripped up here.
My current implementation and thinking is as follows:
In exchanges module
type Balance struct {
    asset       string
    available   float64
    unavailable float64
    total       float64
}

type Api interface {
    GetBalances() []Balance
}
In Binance module
type BinanceApi struct {
    key           string
    secret        string
    client        *binance.Client
    clientFutures *futures.Client
    Api           exchanges.Api
}

func (v *BinanceApi) GetBalance() []exchanges.Balance {
    // Requires v.client and v.clientFutures
    return []exchanges.Balance{}
}
In Kraken module
type KrakenApi struct {
    key    string
    secret string
    client *binance.Client
    Api    exchanges.Api
}

func (v *KrakenApi) GetBalance() []exchanges.Balance {
    // Requires v.client
    return []exchanges.Balance{}
}
In main.go
var exchange *Api
Now my thought was I should be able to call something like exchange.GetBalance() and it would use the GetBalance function from above. I would also need some kind of casting? I'm quite lost here. The exchange could either be Binance or Kraken--that gets decided at runtime. Some other code basically calls a GetExchange function which returns an instance of the required API object (already cast to either BinanceApi or KrakenApi).
I'm aware inheritance and polymorphism don't work like other languages, hence my utter confusion. I'm struggling to know what needs to go where here. Go seems to require loads of annoying code necessary for what other languages do on the fly 😓
Using *exchanges.Api is quite weird. You're wanting something that implements a given interface; what the underlying type is (whether it uses pointer or value receivers) is not important, so use exchanges.Api instead.
There is another issue, though. In golang, interfaces are implicit (sometimes referred to as duck-type interfaces). Generally speaking, this means that the interface is not declared in the package that implements it, but rather in the package that depends on a given interface. Some say that you should be liberal in terms of what values you return, but restrictive in terms of what arguments you accept. What this boils down to in your case, is that you'd have something like an api package, that looks somewhat like this:
package api

func NewKraken(args ...any) *KrakenExchange {
    // ...
}

func NewBinance(args ...any) *BinanceExchange {
}
then in your other packages, you'd have something like this:
package kraken // or maybe this could be an exchange package

type API interface {
    GetBalances() []types.Balance
}

func NewClient(api API, otherArgs ...T) *KrakenClient {
}
So when someone looks at the code for this Kraken package, they can instantly tell what dependencies are required, and what types it works with. The added benefit is that, should binance or kraken need additional API calls that aren't shared, you can go in and change the specific dependencies/interfaces, without ending up with one massive, centralised interface that is being used all over the place, but each time you only end up using a subset of the interface.
Yet another benefit of this approach is when writing tests. There are tools like gomock and mockgen, which allow you to quickly generate mocks for unit tests simply by doing this:
package foo

//go:generate go run github.com/golang/mock/mockgen -destination mocks/dep_mock.go -package mocks your/module/path/to/foo Dependency
type Dependency interface {
    // methods here
}
Then run go generate and it'll create a mock object in your/module/path/to/foo/mocks that implements the desired interface. In your unit tests, import the mocks package, and you can do things like:
ctrl := gomock.NewController(t)
dep := mocks.NewMockDependency(ctrl)
defer ctrl.Finish()
dep.EXPECT().GetBalances().Times(1).Return(data)
k := kraken.NewClient(dep)
bal := k.Balances()
require.EqualValues(t, bal, data)
TL;DR
The gist of it is:
Interfaces are interfaces; don't use pointers to interfaces.
Declare interfaces in the package that depends on them (i.e. the user), not on the implementation (provider) side.
Only declare methods in an interface if you are genuinely using them in a given package. Using a central, overarching interface makes this harder to do.
Having the dependency interface declared alongside the user makes for self-documenting code.
Unit testing and mocking/stubbing are a lot easier to do, and to automate, this way.

Should I embed interface from standard library or write my own?

There are some common interfaces in Go's standard library, e.g. io.Closer:
type Closer interface {
    Close() error
}
If I wanted to define an interface in my code that has a Close method, would I embed io.Closer like this:
type example interface {
    io.Closer
    // ... some other functions or embedded types
}
or would I just define the function itself like:
type example interface {
    Close() error
    // ... some other functions or embedded types
}
Is there any best practice for this?
For such common and simple interfaces I would definitely embed the one from the standard lib (such as io.Closer, io.Reader and io.ByteReader).
But not just any interface type. In general, interfaces should be defined where they are needed. Embedding an interface defined in another package (including the standard library) carries the danger of your types failing to implicitly implement it if it is changed or extended.
The "owner" (definer) of the interface may change it (e.g. extend it with a new method) and properly update all of its own types that implement it, so that package continues to work from the outside; but obviously the owner will not update your implementations.
For example, the reflect.Type interface type had no Type.ConvertibleTo() method back in Go 1.0; it was added in Go 1.1. The same thing may happen again: interfaces in the standard lib may get altered or extended in future Go versions, resulting in your existing code failing to implement them.
What's the difference between small, common interfaces and the "rest"? The bigger the interface, the weaker the abstraction – so goes the Go proverb. Small interfaces like io.Closer and io.Reader capture a tiny yet important piece of functionality. They are so common that "every" library tries to implement them and every utility function builds upon them. I don't ever expect them to change. If there is ever a reason to change or extend them, new interfaces will more likely be added instead. That is not the case with bigger interfaces, where the abstraction is harder to capture accurately; those have a better chance of changing or evolving over time.
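To make the trade-off concrete, here is a small self-contained sketch (the interface names are mine). Today the two forms have identical method sets; the difference only shows up if the embedded interface ever changes:
package main

import "io"

// Embedded form: tracks whatever io.Closer requires, now and in the future.
type closerViaEmbed interface {
    io.Closer
}

// Explicit form: pinned to exactly this method set, owned by this package.
type closerExplicit interface {
    Close() error
}

func main() {
    // io.NopCloser gives us a value with a Close() error method.
    var a closerViaEmbed = io.NopCloser(nil)
    var b closerExplicit = a // interchangeable as long as io.Closer stays as it is
    _ = b
}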

Using multiple interfaces in Golang

I'm learning Golang and as an exercise in using interfaces I'm building a toy program. I'm having some problems trying to use a type that "should implement" two interfaces - one way to solve that in C++ and Java would be to use inheritance (there are other techniques, but I think that is the most common). As I lack that mechanism in Go, I'm not sure how to proceed. Below is the code:
var (
    faces = []string{"Ace", "Two", "Three", "Four", "Five", "Six", "Seven", "Eight", "Nine", "Ten", "Jack", "Queen", "King"}
    suits = []string{"Hearts", "Diamonds", "Spades", "Clubs"}
)

type Card interface {
    GetFace() string
    GetSuit() string
}

type card struct {
    cardNum int
    face    string
    suit    string
}

func NewCard(num int) Card {
    newCard := card{
        cardNum: num,
        face:    faces[num%len(faces)],
        suit:    suits[num/len(faces)],
    }
    return &newCard
}

func (c *card) GetFace() string {
    return c.face
}

func (c *card) GetSuit() string {
    return c.suit
}

func (c *card) String() string {
    return fmt.Sprintf("%s%s ", c.GetFace(), c.GetSuit())
}
What I'm trying to achieve:
I would like to hide my struct type and only export the interface so that the clients of the code use only the "Card" interface
I would like to have a string representation of the struct, hence the implementation of the interface with the "String()" method in order to be able to call "fmt.Println()" on instantiations of my struct
The problem comes when I'm trying to use a new card through the "Card" interface and also trying to get the string representation. I cannot pass the interface as the parameter of the implementation of the "String()" method, as there is a compiler error which is related to the addressability of an interface at the core language level (still digging through that documentation). The very simple test below exposes the issue:
func TestString(t *testing.T) {
    card := NewCard(0)
    assert.EqualValues(t, "AceHearts ", card.String(), " newly created card's string repr should be 'AceHearts '")
}
The compiler tells me, for good reason, that "card.String undefined (type card has no field or method string)". I could just add the "String()" method to my "Card" interface, but I do not find that to be clean: I have other entities implemented with the same model and I would have to add that redundancy everywhere; there is already an interface with that method.
What would be a good solution for the above issue that I'm having?
Edit:(to address some of the very good comments)
I do not expect to have another implementation of the Card interface; I'm not sure I grasp why I would want to do that, that is, change the interface
I would like to have the Card interface to hide away implementation details and for the clients to program against the interface and not against the concrete type
I would like to always have access to the String() interface for all clients of the "card struct" instantiations(including the ones instantiated via the Card interface). I'm not interested in having clients only with the String interface. In some other languages this can be achieved by implementing both interfaces - multiple inheritance. I'm not saying that is good or wrong, or trying to start a debate about that, I'm just stating a fact!
My intent is to find out if the language has any mechanism to fulfill those requirements simultaneously. If that is not possible or maybe from the point of view of the design the problem should be tackled in a different manner, then I'm ready to be educated
Type assertions are very verbose and explicit and would expose implementation details - they have their places but I do not think they are appropriate in the situation I have
I should go over some prefacing points first:
Interfaces in Go are not the same as interfaces in other languages. You shouldn't assume that every idea from other languages should transfer over automatically. A lot of them don't.
Go has neither classes nor objects.
Go is not Java and Go is not C++. Its type system is significantly and meaningfully different from those languages.
From your question:
I would like to have the Card interface to hide away implementation details and for the clients to program against the interface and not against the concrete type
This is the root of your other problems.
As mentioned in the comments, I see this in multiple other packages and regard it as a particularly pesky anti-pattern. First, I will explain the reasons why this pattern is "anti" in nature.
Firstly and most pertinently, this point is proven by your very example. You employed this pattern, and it has resulted in bad effects. As pointed out by mkopriva, it has created a contradiction which you must resolve.
Secondly, this usage of interfaces is contrary to their intended use, and you are not achieving any benefit by doing this.
Interfaces are Go's mechanism of polymorphism. The usage of interfaces in parameters makes your code more versatile. Think of the ubiquitous io.Reader and io.Writer. They are fantastic examples of interfaces. They are the reason why you can patch two seemingly unrelated libraries together, and have them just work. For example, you can log to stderr, or log to a disk file, or log to an http response. Each of these work exactly the same way, because log.New takes an io.Writer parameter, and a disk file, stderr, and http response writer all implement io.Writer. To use interfaces simply to "hide implementation details" (I explain later why this point fails), does not add any flexibility to your code. If anything, it is an abuse of interfaces by leveraging them for a task they weren't meant to fulfill.
Point / Counterpoint
"Hiding my implementation provides better encapsulation and safety by making sure all the details are hidden."
You are not achieving any greater encapsulation or safety. By making the struct fields unexported (lowercase), you have already prevented any clients of the package from messing with the internals of your struct. Clients of the package can only access the fields or methods that you have exported. There's nothing wrong with exporting a struct and hiding every field.
"Struct values are dirty and raw and I don't feel good about passing them around."
Then don't pass structs, pass pointers to struct. That's what you're already doing here. There's nothing inherently wrong with passing structs. If your type behaves like a mutable object, then pointer to struct is probably appropriate. If your type behaves more like an immutable data point, then struct is probably appropriate.
"Isn't it confusing if my package exports package.Struct, but clients have to always use *package.Struct? What if they make a mistake? It's not safe to copy my struct value; things will break!"
All you realistically have to do to prevent problems is make sure that your package only returns *package.Struct values. That's what you're already doing here. A vast majority of the time, people will be using the short assignment :=, so they don't have to worry about getting the type correct. If they do set the type manually, and they choose package.Struct by accident, then they will get a compilation error when trying to assign a *package.Struct to it.
"It helps to decouple the client code from the package code"
Maybe. But unless you have a realistic expectation of multiple existing implementations of this type, this is a form of premature optimization (and yes, it does have consequences). Even if you do have multiple implementations of your interface, that's still not a good reason to actually return values of that interface. A majority of the time it is still more appropriate to just return the concrete type. To see what I mean, take a look at the image package from the standard library.
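For illustration, here is a tiny runnable example of that standard library convention: the constructor returns the concrete *image.RGBA, and image.Image stays a consumer-side abstraction:
package main

import (
    "fmt"
    "image"
    "image/color"
)

func main() {
    // image.NewRGBA returns the concrete *image.RGBA, not the image.Image interface.
    rgba := image.NewRGBA(image.Rect(0, 0, 2, 2))
    rgba.Set(0, 0, color.RGBA{R: 255, A: 255})

    // Callers that only need the abstraction can still hold it as image.Image.
    var img image.Image = rgba
    fmt.Println(img.Bounds())
}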
When is it actually useful?
The main realistic case where making a premature interface AND returning it might help clients, is this:
Your package introduces a second implementation of the interface
AND clients have statically and explicitly (not :=) used this data type in their functions or types
AND clients want to reuse those types or functions for the new implementation also.
Note that this wouldn't be a breaking API change even if you weren't returning the premature interface, as you're only adding a new type and constructor.
If you decided to only declare this premature interface, and still return concrete types (as done in the image package), then all the client would likely need to do to remedy this is spend a couple minutes using their IDE's refactor tool to replace *package.Struct with package.Interface.
It significantly hinders the usability of package documentation
Go has been blessed with a useful tool called Godoc. Godoc automatically generates documentation for a package from source. When you export a type in your package, Godoc shows you some useful things:
The type, all exported methods of that type, and all functions that return that type are organized together in the doc index.
The type and each of its methods has a dedicated section in the page where the signature is shown, along with a comment explaining its usage.
Once you bubble-wrap your struct into an interface, your Godoc representation is hurt. The methods of your type are no longer shown in the package index, so the package index is no longer an accurate overview of the package, as it is missing a lot of key information. Also, each of the methods no longer has its own dedicated space on the page, making its documentation harder to both find and read. Finally, it also means that you no longer have the ability to click the method name on the doc page to view the source code. It's also no coincidence that in many packages that employ this pattern, these de-emphasized methods are most often left without a doc comment, even when the rest of the package is well documented.
In the wild
https://pkg.go.dev/github.com/zserge/lorca
https://pkg.go.dev/github.com/googollee/go-socket.io
In both cases we see a misleading package overview, along with a majority of interface methods being undocumented.
(Please note I have nothing against any of these developers; obviously every package has its faults and these examples are cherry-picked. I'm also not saying that they had no justification to use this pattern, just that their package doc is hindered by it.)
Examples from the standard library
If you are curious about how interfaces are "intended to be used", I would suggest looking through the docs for the standard library and taking note of where interfaces are declared, taken as parameters, and returned.
https://golang.org/pkg/net/http/
https://golang.org/pkg/io/
https://golang.org/pkg/crypto/
https://golang.org/pkg/image/
Here is the only standard library example I know of that is comparable to the "interface hiding" pattern. In this case, reflect is a very complex package and there are several implementations of reflect.Type internally. Also note that in this case, even though it is necessary, no one should be happy about it because the only real effect for clients is messier documentation.
https://golang.org/pkg/reflect/#Type
tl;dr
This pattern will hurt your documentation, while accomplishing nothing in the process, except you might make it slightly quicker in very specific cases for clients to use parallel implementations of this type that you may or may not introduce in the future.
These interface design principles are meant for the benefit of the client, right? Put yourself in the shoes of the client and ask: what have I really gained?
Not entirely sure if this is what you are looking for, but you could try embedding the other interface in the Card interface as shown below.
type Printer interface {
    String() string
}

type Card interface {
    Printer // embed the Printer interface
    GetFace() string
    GetSuit() string
}
The interface Card doesn't have a String method; it doesn't matter that the underlying type card has it, because that method is hidden from you (unless you access it via reflection).
Adding a String() string method to Card will solve the problem:
type Card interface {
    GetFace() string
    GetSuit() string
    String() string
}
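With String() added to the interface, the test from the question compiles; a quick sketch (reusing NewCard and the assert helper from the question):
func TestString(t *testing.T) {
    c := NewCard(0)
    assert.EqualValues(t, "AceHearts ", c.String())
    fmt.Println(c) // fmt finds String() via the dynamic value (fmt.Stringer) either way
}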
The Go language does not have subtype polymorphism. Therefore, the pattern you want to achieve is not encouraged by the very foundations of the language. You may achieve this undesirable pattern by composing structs and interfaces, though.

Cyclic imports and lack of generics causing headache

Say I have these two files in golang:
// main/a/a.go
package a

import "main/b"

type Model struct {
    ID  int `json:"id"`
    Me  int `json:"me"`
    You int `json:"you"`
}

func zoom(v b.Injection) {
}

func Start() Model {
    // ...
    return Model{}
}
and then the second file looks like:
// main/b/b.go
package b

import "main/a"

type Injection struct {
    ModelA a.Model
}

func GetInjection() Injection {
    return Injection{
        ModelA: a.Start(),
    }
}
so as you can see, these are circular imports: each file imports the other.
So I need to use a 3rd file and have these two files import the 3rd file.
But I am really struggling with how to get this functionality while avoiding cyclic imports.
My first step, is to move the Injection type into a 3rd file:
// main/c/c.go
package c

type Injection struct {
    ModelA interface{} // formerly a.Model
}
so now this is what it looks like:
a imports c
b imports a, c
so there are no more cycles. However, the problem is that I don't know how to create an interface for a.Model in c.go. An empty interface{} like I used above doesn't work, for the normal reasons.
How do I solve this cyclic import problem with these 2 original files?
If you want to put them into separate packages, you can't have Model and zoom() in the same package, as zoom() refers to Injection and Injection refers to Model.
So a possible solution is to put Model into package a, zoom() into package b, and Injection into package c. c.Injection can refer to a.Model, b.zoom() can refer to c.Injection. There's no circle in this:
b.zoom() --------> c.Injection ---------> a.Model
I assume there are other references in your real code which are not in the question which may prevent this from working, but you can move "stuff" around between packages, or you can break it down into more.
Also, if things are coupled this "tight", you should really consider putting them into the same package, and then there is no problem to solve.
Another way to solve the circular import issue is to introduce interfaces. E.g. if your zoom() function did not refer to Injection directly, the package containing Model and zoom() would not need to refer to Injection's package.
Inspect what zoom() needs to do with Injection. If that is method calls, that's already good; if not, add methods to Injection. Then you may define an interface in zoom()'s package containing the methods zoom() needs to call, and change its parameter type to this interface. Implementing interfaces in Go is implicit; there is no declaration of intent. So you can remove the reference in the parameter type and still be able to pass Injection values to zoom(), as sketched below.
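A minimal sketch of that approach, using the packages from the question (the InjectionSource name and the ModelA() method are mine, added for illustration):
// main/a/a.go
package a

type Model struct {
    ID  int `json:"id"`
    Me  int `json:"me"`
    You int `json:"you"`
}

// InjectionSource is declared where it is consumed (zoom's package);
// anything with a ModelA() method satisfies it implicitly.
type InjectionSource interface {
    ModelA() Model
}

func zoom(v InjectionSource) {
    _ = v.ModelA()
}

// main/b/b.go
package b

import "main/a"

type Injection struct {
    modelA a.Model
}

// ModelA makes Injection satisfy a.InjectionSource without package a importing package b.
func (i Injection) ModelA() a.Model { return i.modelA }
Now b imports a, but a no longer needs to import b, so the cycle is gone.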
Also related, check Dave Cheney's thoughts about organizing code:
I believe code should be organised into packages named for what the package provides, not what it contains. This can sometimes be subtle, but usually not.
For example, http, provides http clients and servers.
As a counter example, package utils is a poor name, yes it provides utilities, but you have no idea what from the name, in truth this package is named for what it contains.
If your project is a library, it should contain one package (excluding examples and possibly utility commands), if it contains more packages, that is a sign that the library is trying to do too many things.
Prefer to avoid multiple packages by default, only split code by package if there is a clear separation of concerns. In my experience many frustrations with complex and possibly circular package structures are the result of too many packages in a project.

Cyclic dependencies and interfaces

I am a long time python developer. I was trying out Go, converting an existing python app to Go. It is modular and works really well for me.
Upon creating the same structure in Go, I seem to land in cyclic import errors, a lot more than I want to. Never had any import problems in python. I never even had to use import aliases. So I may have had some cyclic imports which were not evident in python. I actually find that strange.
Anyways, I am lost, trying to fix these in Go. I have read that interfaces can be used to avoid cyclic dependencies. But I don't understand how. I didn't find any examples on this either. Can somebody help me on this?
The current python application structure is as follows:
/main.py
/settings/routes.py contains main routes depends on app1/routes.py, app2/routes.py etc
/settings/database.py function like connect() which opens db session
/settings/constants.py general constants
/apps/app1/views.py url handler functions
/apps/app1/models.py app specific database functions depends on settings/database.py
/apps/app1/routes.py app specific routes
/apps/app2/views.py url handler functions
/apps/app2/models.py app specific database functions depends on settings/database.py
/apps/app2/routes.py app specific routes
settings/database.py has generic functions like connect() which opens a db session. So an app in the apps package calls database.connect() and a db session is opened.
The same is the case with settings/routes.py it has functions that allow apps to add their sub-routes to the main route object.
The settings package is more about functions than data/constants. This contains code that is used by apps in the apps package, that would otherwise have to be duplicated in all the apps. So if I need to change the router class, for instance, I just have to change settings/router.py and the apps will continue to work with no modifications.
There are two high-level pieces to this: figuring out which code goes in which package, and tweaking your APIs to reduce the need for packages to take on so many dependencies.
On designing APIs that avoid the need for some imports:
Write config functions for hooking packages up to each other at run time rather than compile time. Instead of routes importing all the packages that define routes, it can export routes.Register, which main (or code in each app) can call. In general, configuration info probably flows through main or a dedicated package; scattering it around too much can make it hard to manage.
Pass around basic types and interface values. If you're depending on a package for just a type name, maybe you can avoid that. Maybe some code handling a []Page could instead use a []string of filenames, a []int of IDs, or some more general interface (sql.Rows).
Consider having 'schema' packages with just pure data types and interfaces, so User is separate from code that might load users from the database. It doesn't have to depend on much (maybe on anything), so you can include it from anywhere. Ben Johnson gave a lightning talk at GopherCon 2016 suggesting that and organizing packages by dependencies.
On organizing code into packages:
As a rule, split a package up when each piece could be useful on its own. If two pieces of functionality are really intimately related, you don't have to split them into packages at all; you can organize with multiple files or types instead. Big packages can be OK; Go's net/http is one, for instance.
Break up grab-bag packages (utils, tools) by topic or dependency. Otherwise you can end up importing a huge utils package (and taking on all its dependencies) for one or two pieces of functionality (that wouldn't have so many dependencies if separated out).
Consider pushing reusable code 'down' into lower-level packages untangled from your particular use case. If you have a package page containing both logic for your content management system and all-purpose HTML-manipulation code, consider moving the HTML stuff "down" to a package html so you can use it without importing unrelated content management stuff.
Here, I'd rearrange things so the router doesn't need to include the routes: instead, each app package calls a router.Register() method, roughly as sketched below. This is what the Gorilla web toolkit's mux package does. Your routes, database, and constants packages sound like low-level pieces that should be imported by your app code and not import it.
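A rough sketch of that registration idea (the package layout and module path are made up for illustration; the real Gorilla mux API differs):
// router/router.go
package router

import "net/http"

var mux = http.NewServeMux()

// Register lets app packages add their routes at run time,
// so this package never has to import them.
func Register(pattern string, h http.Handler) {
    mux.Handle(pattern, h)
}

func ListenAndServe(addr string) error {
    return http.ListenAndServe(addr, mux)
}

// apps/app1/routes.go
package app1

import (
    "net/http"

    "example.com/project/router" // hypothetical module path
)

func init() {
    router.Register("/app1/", http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        w.Write([]byte("app1"))
    }))
}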
Generally, try to build your app in layers. Your higher-layer, use-case-specific app code should import lower-layer, more fundamental tools, and never the other way around. Here are some more thoughts:
Packages are good for separating independently usable bits of functionality from the caller's perspective. For your internal code organization, you can easily shuffle code between source files in the package. The initial namespace for symbols you define in x/foo.go or x/bar.go is just package x, and it's not that hard to split/join files as needed, especially with the help of a utility like goimports.
The standard library's net/http is about 7k lines (counting comments/blanks but not tests). Internally, it's split into many smaller files and types. But it's one package, I think 'cause there was no reason users would want, say, just cookie handling on its own. On the other hand, net and net/url are separate because they have uses outside HTTP.
It's great if you can push "down" utilities into libraries that are independent and feel like their own polished products, or cleanly layer your application itself (e.g., UI sits atop an API sits atop some core libraries and data models). Likewise "horizontal" separation may help you hold the app in your head (e.g., the UI layer breaks up into user account management, the application core, and administrative tools, or something finer-grained than that). But, the core point is, you're free to split or not as works for you.
Set up APIs to configure behavior at run-time so you don't have to import it at compile time. So, for example, your URL router can expose a Register method instead of importing appA, appB, etc. and reading a var Routes from each. You could make a myapp/routes package that imports router and all your views and calls router.Register. The fundamental idea is that the router is all-purpose code that needn't import your application's views.
Some ways to put together config APIs:
Pass app behavior via interfaces or funcs: http can be passed custom implementations of Handler (of course) but also CookieJar or File. text/template and html/template can accept functions to be made accessible from templates (in a FuncMap); see the small example after this list.
Export shortcut functions from your package if appropriate: In http, callers can either make and separately configure some http.Server objects, or call http.ListenAndServe(...) that uses a global Server. That gives you a nice design--everything's in an object and callers can create multiple Servers in a process and such--but it also offers a lazy way to configure in the simple single-server case.
If you have to, just duct-tape it: You don't have to limit yourself to super-elegant config systems if you can't fit one to your app: maybe for some stuff a package "myapp/conf" with a global var Conf map[string]interface{} is useful.
But be aware of the downsides of global conf. If you want to write reusable libraries, they can't import myapp/conf; they need to accept all the info they need in constructors, etc. Globals also risk hard-wiring in the assumption that something will always have a single value app-wide when eventually it won't; maybe today you have a single database config or HTTP server config or such, but someday you don't.
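As a tiny runnable illustration of the first point in that list (passing behavior as a func via text/template's FuncMap):
package main

import (
    "os"
    "strings"
    "text/template"
)

func main() {
    // The template package is all-purpose code; the "upper" behavior is
    // injected at run time instead of being imported at compile time.
    t := template.Must(template.New("t").Funcs(template.FuncMap{
        "upper": strings.ToUpper,
    }).Parse("{{ upper . }}\n"))
    t.Execute(os.Stdout, "hello")
}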
Some more specific ways to move code or change definitions to reduce dependency issues:
Separate fundamental tasks from app-dependent ones. One app I work on in another language has a "utils" module mixing general tasks (e.g., formatting datetimes or working with HTML) with app-specific stuff (that depends on the user schema, etc.). But the users package imports the utils, creating a cycle. If I were porting to Go, I'd move the user-dependent utils "up" out of the utils module, maybe to live with the user code or even above it.
Consider breaking up grab-bag packages. Slightly enlarging on the last point: if two pieces of functionality are independent (that is, things still work if you move some code to another package) and unrelated from the user's perspective, they're candidates to be separated into two packages. Sometimes the bundling is harmless, but other times it leads to extra dependencies, or a less generic package name would just make clearer code. So my utils above might be broken up by topic or dependency (e.g., strutil, dbutil, etc.). If you wind up with lots of packages this way, we've got goimports to help manage them.
Replace import-requiring object types in APIs with basic types and interfaces. Say two entities in your app have a many-to-many relationship like Users and Groups. If they live in different packages (a big 'if'), you can't have both u.Groups() returning a []group.Group and g.Users() returning []user.User because that requires the packages to import each other.
However, you could change one or both of those to return, say, a []uint of IDs, or a sql.Rows, or some other interface you can get to without importing a specific object type. Depending on your use case, types like User and Group might be so intimately related that it's better just to put them in one package, but if you decide they should be distinct, this is a way.
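A small sketch of that substitution (the package layout and field names here are only illustrative):
// user/user.go
package user

type User struct {
    ID       uint
    Name     string
    GroupIDs []uint // plain IDs instead of []group.Group, so this package need not import group
}

// group/group.go
package group

type Group struct {
    ID      uint
    Name    string
    UserIDs []uint // plain IDs instead of []user.User, so this package need not import user
}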
Thanks for the detailed question and followup.
Possible partial, but ugly answer:
Have struggled with the import cyclic dependency problem for a year. For a while, was able to decouple enough so that there wasn't an import cycle. My application uses plugins heavily. At the same time, it uses encode/decode libraries (json and gob). For these, I have custom marshal and unmarshal methods, and the equivalent for json.
For these to work, the full type name, including the package name, must be identical on the data structures that are passed to the codecs. The creation of the codecs must be in a package. This package is called from other packages as well as from plugins.
Everything works as long as the codec package doesn't need to call out to any package calling it, or use the methods or interfaces to the methods. In order to be able to use the types from the package in the plugins, the plugins have to be compiled with the package. Since I don't want to have to include the main program in the builds for the plugins, which would defeat the point of the plugins, only the codec package is included in both the plugins and the main program. Everything works up until I need to call from the codec package into the main program, after the main program has called into the codec package. This causes an import cycle.
To get rid of this, I can put the codec in the main program instead of its own package. But, because the specific datatypes being used in the marshalling/unmarshalling methods must be the same in the main program and the plugins, I would need to compile with the main program package for each of the plugins. Further, because I need the main program to call out to the plugins, I need the interface types for the plugins in the main program. Having never found a way to get this to work, I did think of a possible solution:
First, separate the codec into a plugin, instead of just a package.
Then, load it as the first plugin from the main program.
Create a registration function to exchange interfaces with underlying methods.
All encoders and decoders are created by calls into this plugin.
The plugin calls back to the main program through the registered interface.
The main program and all the plugins use the same interface type package for this.
However, the datatypes for the actual encoded data are referenced in the main program with a different name, but the same underlying type, than in the plugins; otherwise the same import cycle exists. Doing this part requires an unsafe cast. I wrote a little function that does a forced cast so that the syntax is clean:
(*CastPointerType)(Cast(v)), where v is a pointer to the structure, or an interface holding such a pointer.
The only other issue for the codecs is to make sure that when the data is sent to the encoder, it is cast so that the marshal/unmarshal methods recognize the datatype names. To make that easier, I can import the main program types from one package and the plugin types from another package, since they don't reference each other.
Very complex workaround, but don't see how else to make this work.
Have not tried this yet. May still end up with an import cycle when everything is done.
More on this:
To avoid the import cycle problem, I use an unsafe type approach using pointers. First, here is a package with a little function Cast() to do the unsafe typecasting, to make the code easier to read:
package ForcedCast

import (
    "reflect"
    "unsafe"
)

// Cast hides the ugly syntax of the unsafe conversion.
// Used as: <var> = (*CastType)(Cast(inputVar))
func Cast(i interface{}) unsafe.Pointer {
    return unsafe.Pointer(reflect.ValueOf(i).Pointer())
}
Next I use the "interface{}" as the equivalent of a void pointer:
package firstpackage

import (
    "path"
    "plugin"
)

type realstruct struct {
    // ...
}

var Data realstruct

// function pointer used to call into a loaded plugin
var calledfuncptr func(interface{})

func callingfunc() {
    pluginpath := path.Join(<pathname>, "calledfuncplugin")
    plug, err := plugin.Open(pluginpath)
    // check err ...
    rFunc, err := plug.Lookup("Calledfunc") // plugin symbols must be exported
    // check err ...
    calledfuncptr = rFunc.(func(interface{}))
    calledfuncptr(&Data)
}
// in a plugin
// plugins use package main for their main code and are built with -buildmode=plugin
package main

// identical definition of the structure
type realstruct struct {
    // ...
}

var localdataptr *realstruct

// Calledfunc is exported so plugin.Lookup can find it;
// Cast is the helper from the ForcedCast package above.
func Calledfunc(needcast interface{}) {
    localdataptr = (*realstruct)(Cast(needcast))
}
For cross type dependencies to any other packages, use the "interface{}" as a void pointer and cast appropriately as needed.
This only works if the underlying type that is pointed to by the interface{} is identical wherever it is cast. To make this easier, I put the types in a separate file. In the calling package, they start with the package name. I then make a copy of the type file, change the package to "package main", and put it in the plugin directory so that the types are built, but not the package name.
There is probably a way to do this for the actual data values, not just pointers, but I haven't gotten that to work right.
One of the things I have done is to cast to an interface instead of a datatype pointer. This allows you to send interfaces to packages using the plugin approach, where there is an import cycle. The interface has a pointer to the datatype, and then you can use it for calling the methods on the datatype from the caller from the package that called in to the plugin.
The reason why this works is that the datatypes are not visible outside of the plugin. That is, if I load to plugins, which are both package main, and the types are defined in the package main for both, but are different types with the same names, the types do not conflict.
However, if I put a common package in to both plugins, that package must be identical and have the exact full pathname for where it was compiled from. To accommodate this, I use a docker container to do my builds so that I can force the pathnames to always be correct for any common containers across my plugins.
I did say this was ugly, but it does work. If there is an import cycle because a type in one package uses a type in another package that then tries to use a type from the first package, the approach is to do a plugin that erases both types with interface{}. You can then make method and function calls back and forth doing the casting on the receiving side as needed.
In summary:
Use interface{} to make void pointers (that is, untyped).
Use Cast() to force them to a pointer type that matches the underlying pointer.
Use the plugin type localization so that types in package main in separate plugins, and in the main program, do not conflict.
If you use a common package between plugins, the path must be identical for all built plugins and the main program.
Use the plugin package to load the plugins, and exchange function pointers.
For one of my issues I'm actually calling from a package in the main program out to a plugin, just to be able to call back to another package in the main program, avoiding the import cycle between the two packages. I ran into this problem using the json and gob packages with custom marshaller methods. I use the types that are custom-marshalled both in my main program and in other plugins, while at the same time I want the plugins to be built independently of the main program. I accomplish this by using a package for the json and gob encode/decode custom methods that is included both in the main program and the plugins. However, I needed to be able to call back to the main program from the encoder methods, which gave me the import cycle type conflict. The above solution, with another plugin specifically to solve the import cycle, works. It does create an extra function call, but I have yet to see any other solution to this.
Hope this helps with this issue.
A shorter answer to your question (using an interface), which does not take away from the correctness and completeness of the other answers, is this example:
UserService is causing the cyclic import, where it should not really be called from AuthorizationService. It's just there to be able to extract the user details, so we can declare only the desired functionality in a separate, receiver-side interface UserProvider:
https://github.com/tzvatot/cyclic-import-solving-exaple/commit/bc60d7cfcbd4c3b6540bdb4117ab95c3f2987389
Basically: extract an interface that contains only the required functionality, declare it on the receiver side, and use it instead of declaring a dependency on something external.
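A minimal sketch of that shape (the UserProvider method set and types here are illustrative, not taken from the linked commit):
package authorization

// UserProvider is declared on the receiver side and lists only what
// AuthorizationService actually needs; the concrete user service in
// another package satisfies it implicitly, so no import is required here.
type UserProvider interface {
    UserByID(id string) (User, error)
}

// User is a local view of the user details this package cares about.
type User struct {
    ID   string
    Name string
}

type AuthorizationService struct {
    users UserProvider
}

func (s *AuthorizationService) Authorize(id string) error {
    _, err := s.users.UserByID(id)
    return err
}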
